This should be the final post on data security. There are only two items left from earlier in this series that I want to address. The first is the mirroring of the two NVMe SSDs I mentioned way back in the parts selection. The second is the backup of the ZFS primary pool I created at the tail end of the Storage VM series. To accomplish both of these I ended up setting up replication tasks, which isn't what I originally had in mind, but I believe it works better than the alternatives.
First, let's define a few things. I have noticed quite a bit of confusion around the terms snapshot, replica, and backup. They serve somewhat similar purposes, at least in some cases, but it would be a mistake to treat them as identical.

A snapshot creates a point-in-time save of a particular disk. This does not always involve an actual copy of the data itself [1]. In fact, it is probably best to assume that it does not. It merely marks a certain point in the file system and stores subsequent changes elsewhere. Snapshots are primarily meant to be a quick fallback option: take a snapshot, do something risky, roll back if it goes wrong. This is the basic flowchart. Take "doing something risky" very liberally; pretty much any change could break a system. In my experience, most pushes, whether a small change to one file or a big operating system update, can crash a system.
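As a concrete sketch of that flowchart in ZFS terms (using the Stacks/Data dataset that shows up later in this post; the snapshot name is made up), it looks something like this:

# Near-instant marker; no full copy of the data is made
zfs snapshot Stacks/Data@before-risky-change
# ... do the risky thing ...
# If it went badly, fall back to the marker (discards changes made after it)
zfs rollback Stacks/Data@before-risky-change
# If it went fine, drop the marker
zfs destroy Stacks/Data@before-risky-change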
A backup takes real copies of all the files and folders on a drive. There are even block-level backups where understanding the file system is not actually necessary, just raw block access. Admittedly, that isn't as efficient because it doesn't support as many compression options, but it can easily be done. With a backup, a complete copy of the file system should be possible, and a new clone of the system should be deployable.
A backup is what lets you recover files from a disk failure. If a system goes down, or the corruption is bad enough, a good recovery option is to just deploy a backup. The primary benefit here is that backups represent insurance against single failures. Once a full backup is taken, incremental backups can be done as changes are made to save space. From there, individual files can be restored (restoring, say, this file as it was 2 weeks ago at 5 PM may be possible even if the file has since changed). In addition, deduplication and compression algorithms can be used to reduce the space required.
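None of the tools later in this post work exactly this way, but as a rough, hypothetical illustration of a full backup followed by incrementals with per-file restore (the paths, dates, and file names are invented), rsync's link-dest trick shows the idea:

# First run: a full copy
rsync -a /data/ /backups/2020-12-12/
# Later runs: only changed files are copied; unchanged files are hard-linked to the
# previous run, so every dated folder looks like a complete backup without the space cost
rsync -a --link-dest=/backups/2020-12-12/ /data/ /backups/2020-12-26/
# "This file, two weeks ago at 5 PM" is then just a copy back out of the dated folder
cp /backups/2020-12-12/budget.ods /data/budget.ods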
The last option is a replica. A replica is an actual real-time or near-real-time copy of the data that is waiting to take over in case of disaster. Where a backup is stored elsewhere so it survives a disaster, a replica means having two systems with the same data. Usually I see read replicas rather than attempts to keep changes on both sides in sync; in fact, I refuse to do the latter, as it always ends in disaster in my experience. Replicas exist primarily to prevent downtime. If a business loses $1000 every minute it is down, then it is almost certainly worth keeping a replica available at all times in case of disaster. The cost of replicas is actual resources: an entirely separate infrastructure must be maintained to keep them up and running. This is a good overview [2].
With that covered, I do have some systems I would consider home critical. If the Home Assistant VM went down, I would lose a great deal of control over the house; the same goes for the streaming server, since I don't have cable anymore. None of these are disasters in the business sense, I just would really like to keep them running.
Keeping these critical systems up was the original goal of mirroring the two NVMe drives. However, I discovered that ESXi does not support any kind of software mirroring natively [3] [4]. I would need to buy a PCI RAID card and configure a mirror, and I don't have any available PCI slots for such a card. Even then, I'm not really a fan of that for my case. I considered just rsyncing the two datastores. However, most mounts don't yet support writing to VMFS (the file system VMWare's datastores use) [5]. Also, there is no native rsync in ESXi, though there is a compiled binary that can be grabbed [6]. I don't like being that far away from how VMWare intended ESXi to be used; I am worried they could break it at any time and I would have no way of detecting that it broke.
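For the record, the rejected rsync route would have looked roughly like this from a separate Linux box using the vmfs-tools package described in [5] (the device name and paths are placeholders, and the vmfs-fuse mount is read-only, which is exactly the limitation mentioned above):

# Mount the VMFS partition read-only on a Linux machine
vmfs-fuse /dev/sdb1 /mnt/primary-datastore
# Pull the VM folders off; writing them back onto a second VMFS volume is the unsolved half
rsync -av /mnt/primary-datastore/HomeAssistant/ /some/other/storage/HomeAssistant/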
Next I considered VMWare Content Libraries [7]. I think this would actually work. The issue I encountered was that I don't seem to be able to just move a VM disk into the content library; I would need to copy it in, and I am not sure how well that would go. I encountered another potential solution as well: replication. With a replica, all VM properties and disks are copied. It is like having a hot spare drive, just ready to go in case of failure. I would simply need to log in to the ESXi host and turn on the replicas, then flip a few DHCP entries, and the whole system is up and running again.
Veeam Replication
Since I already had Veeam Community Edition installed for backup, I decided to use the other part of the Veeam suite, replication. I fired up a replication job, ran it and...

It failed. I will cover how I created the job in a minute, but I wanted to point out a few things to start with.
First, I ran into the same snapshotting issue that affected backups of these VMs: I cannot create replicas of running VMs with PCI passthrough devices, presumably because those devices could be writing to the disk (via DMA) at the same time as the read, and the asynchronous access could create an issue. Second, I had exceeded my licenses. There are 10 VM licenses in the Community Edition. In my experience there are actually 9, with 1 license used to manage the standalone clients. I can work with that.
Alright, with that covered, my actual goal here is not to keep a real-time exact replica of the VMs up and ready to go on the Secondary datastore (the Primary datastore is where I created the VMs in the first place; Secondary is where I want to mirror to). I want to reduce my Recovery Time Objective (RTO) to something I can handle in a few hours.
A replica, even a stale replica from a few months ago, should be sufficient. I am not going to be constantly updating all of this core infrastructure. I haven't touched the Home Assistant VM, Virtual Reality VM, or Streaming VM since I completed them, at least from a configuration perspective. Even if I lost a few months of changes, it would not really affect their primary usage; most of that is stored elsewhere. Even then, turning on the replica and doing a file recovery in Veeam from the most recent backup should more than suffice for my purposes.
With that in mind, I decided I can just shut down these troublesome VMs, take their replicas, then boot them back up and keep the replicas shut down. That will result in a reasonably fast RTO for myself.
What I consider my Core Infrastructure is the following: DNS VM, Certificate Authority VM, Storage DNS and DHCP VM, Veeam Backup management VM, VMWare vCenter vServer VM, UrBackup VM, Streaming VM, Virtual Reality VM, Home Assistant VM, and Storage VM. If these are all up and working, my home life is pretty much where I want it to be. Some of these are more necessary to support the others, but that is my list. Most of these are fairly straightforward. DNS, Certificate Authority, Storage DNS and DHCP, Veeam Backup, and UrBackup VM all replicate without any issues. I can even keep those up to date.
The VMWare vCenter vServer is actually on protected disks already and not located on the Primary Datastore at all. The Streaming VM, Virtual Reality VM, and Home Assistant VM I cannot keep up to date, but it is as simple as turning them off and running the task to bring the replicas up to the present. The Storage VM is the real issue. It is a prerequisite service for a lot of the previous VMs, and it serves storage back to ESXi specifically to run the vCenter vServer VM, which Veeam uses to perform replication tasks. I had to do something specific to get it to work.
First I created a replica of the VMWare vCenter vServer on the Secondary Datastore. Then I booted that instance up, followed by shutting down the instance running off of the storage server’s disks. I changed the DHCP to give the replica the original’s IP address. From there I could shut down the Storage VM and attempt to take its replica.
I started at the Home page. I clicked Replication Job and selected Virtual Machine.

This brought up the Job Creation wizard.

I named the job after the VM it was replicating, then clicked Next.

This brought up the Virtual Machines page. I clicked Add...

From there I navigated down the path to my ESXi box and selected Alexandria. Then I clicked Add.

That added Alexandria to the active list for the Job. Then I clicked Next.

Normally, I would want a replication job to target a completely different server. That would give minimum downtime in case one failed and the other needed to take over. For this case, though, I wanted to replicate on the same machine to a different datastore. That is possible by selecting the correct options. I clicked Choose...

Then I selected the same esxi1.internal box and clicked OK.

It appears to auto-select the datastore with the most open space. If it hadn't selected the correct one, I could have clicked Choose… next to the datastore and then selected Secondary. I had to do that once in this replication setup, but not for this VM. I clicked Next here.

Here I realized a small mistake I had made. Replication tasks use a Repository.

Based on what I can see poking around in them, it's more of a scratch space than a backup or anything, but Veeam wants to keep information about the replication tasks in a place that can be recovered if necessary, and a backup repository is a natural place for that. In my case, the Alexandria VM is the one being copied, so it was unavailable. I ended up creating a local repository on the Veeam VM for this in C:/Backup/. It doesn't need much space to perform this task, and even if it needed a complete backup of the main Alexandria system, I know from UrBackup that it's only about 4GB, which should fit.
After making the new repository (see the previous article on Veeam for directions), I selected it here. The suffix shows up in the new VM name. I clicked Next.

Here it decided the best way to copy the data, though I am not sure it did that very well. More on that later. I clicked Next.

Since I didn't want to mess with application-aware processing, I just clicked Next.

This is a task that requires manual intervention to run correctly, so I didn't want to schedule it. I just clicked Apply.

Then I reviewed the final job and clicked Finish.

I was brought back to the Job screen. I right-clicked on the Alexandria Replication Job and clicked Start.

Which it did.

And it quickly failed for a licensing reason.

So from my poking around in Veeam, I have 10 VM licenses to work with; these cover 10 VMs active at any given point in time. For the VMs that need to be shut down and manually run, I don't need an active license on them. I just need a one-time replica that I can leave dormant until it is necessary, then use backups to bring it back up to date. For that I just need one license free that I can assign and then remove after I complete a replication.
From the Home screen I clicked the menu (hamburger) button in the top left.

Then I selected the License option.

I then saw all my usage, and my 10 licenses were used up. I clicked on the Instances tab.

There I saw the 9 VMs and 1 for the workstations. Considering the workstations are Veeam Agent controlled, I’m not sure why they are using a VM license, but whatever. I clicked Manage…

I selected all the VMs I don't want to be holding an active license and clicked Remove. For my case, those are the Streaming, VR, and Home Assistant VMs. None of these will work without manual intervention.

After that I had reduced my license usage. I clicked OK.

At that point I had available licenses. I clicked Close.

Then I attempted to run the replication again.

I was somewhat concerned that it might attempt to copy the two Raw Device Mapped drives to the Secondary Datastore, but I was pleased to see that Veeam detected this case and didn't copy them.

It eventually finished. However, it took a great deal of time to fail over from hot-add transport, which did not work at all, to network transport, which worked fine. In any event, it took around 45 minutes of waiting, 30 of which were just waiting for it to decide to fail over to the network.
The last item I needed to complete my ready replica was to create a second set of Raw Device Mappings for the disks in the backup pool. I SSH'd into the ESXi machine and ran vmkfstools again (covered previously), only changing the targets to the new replica.
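For reference, the command has the same shape as in the Storage VM post; this is only a placeholder sketch (the disk identifier, compatibility flag, and replica folder name here are not my real values and should match whatever was used originally):

# Create an RDM pointer for one backup-pool disk inside the replica's folder
# (-z is physical compatibility mode; -r would be virtual mode)
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_SERIAL /vmfs/volumes/Secondary/Alexandria_replica/backup-disk1-rdm.vmdk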

Then I went ahead and added the new Disks to the VM, just like before.

With that, my replica of the Storage VM was just waiting to be turned on.
I finished creating replicas of all my VMs that used PCI Passthrough.

This isn't as good as an active mirroring service. I still think that would be genuinely useful, and I wonder if VMWare will ever build such a thing; it shouldn't be complicated, since several versions of this already exist. Until then, though, this is an acceptable solution. I should be able to fix any serious disk failure on the Primary Datastore with little fanfare. The only actual work is changing the IP addresses to match the corresponding entries in the DNS VM. Ubiquiti manages my DHCP server on the primary network, so it isn't part of this configuration.
The only other thought I had is that I could reduce the number of VMs I need to replicate by moving some onto the Alexandria iSCSI device I made. The issue there is that the iSCSI device is much slower than the Primary Datastore, so I would only want services that don't need that kind of speed. The DNS, Certificate Authority, UrBackup, and Home Assistant VMs are candidates. The Storage DNS and DHCP VM is necessary for bare initialization of the storage VM's iSCSI device, so it cannot be moved. Ultimately I am going to leave that for another time. This is plenty good enough.
ZFS Replication
The last remaining task was to configure the backup pool on the storage VM. This was a relatively easy task. A little searching and I found that ZFS already supports replication tasks [8]. Again, I had considered an rsync task, but that appears to require a remote server [9]. It can work locally, but it will still use the network connection, thus slowing it down. I could probably run it as a cron job, but that would still be less efficient than a replication task. I was still a little worried because it appears to use snapshot transfers, and I don't have enough space for multiple full snapshots, but I trust they have worked this out.
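Under the hood, a replication task is roughly a snapshot plus a local zfs send into the other pool. A minimal sketch of the equivalent manual commands (the snapshot names are invented) also shows why the space worry turned out fine: after the initial seed, only the blocks changed between snapshots get sent.

# First run: seed the target from a full snapshot
zfs snapshot -r Stacks/Backup@repl-1
zfs send -R Stacks/Backup@repl-1 | zfs recv -F Archives/Backup
# Later runs: send only the delta between the last two snapshots
zfs snapshot -r Stacks/Backup@repl-2
zfs send -R -i @repl-1 Stacks/Backup@repl-2 | zfs recv -F Archives/Backup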
I started at the main page. I clicked on Tasks, then Replication Tasks.

Then I clicked Add.

Next I selected On this system for both Source Location and Destination Location. Then I navigated to the two Backup folders. I also checked recursive. I finished by clicking Next.

I then clicked on the Schedule drop down.

And selected Custom.

I put in 5 for hours to make sure this runs at 5 AM. There is a little bit of sequencing here: I am hoping the backups will be completed by then. In a more production environment I would probably fire an event when each backup completes and wait for all of them before starting this replication; that may be a good future project to connect all of these systems. For now, 5 AM is good enough. I clicked Done.
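The custom schedule is essentially a cron expression under the hood; assuming the other fields are left at their defaults, putting 5 in the hours field works out to something like:

# minute  hour  day-of-month  month  day-of-week
0 5 * * *     # run once a day at 05:00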

Back on this page, I actually clicked Back because I wanted to look at the advanced options I hadn't clicked before.

I clicked advanced replication creation.

And I saw this. I mostly just reviewed the options. The only interesting thing I noted here was that the read-only policy was set, meaning this would be a read replica. That is what I wanted. I clicked Save.

Then the replication task was created.

I stepped through that again for Stacks/Data -> Archives/Data.

I tried to run them but got the following errors:
zettarepl.log:[2020/12/26 04:30:00] ERROR [replication_task__task_2] [zettarepl.replication.run] For task 'task_2' non-recoverable replication error ReplicationError("Target dataset 'Archives/Data' does not have snapshots but has data (e.g. 'Data' and replication from scratch is not allowed. Refusing to overwrite existing data.")
zettarepl.log:[2020/12/26 05:00:00] ERROR [replication_task__task_1] [zettarepl.replication.run] For task 'task_1' non-recoverable replication error ReplicationError("Target dataset 'Archives/Backup' does not have snapshots but has data(e.g. 'Backups' and replication from scratch is not allowed. Refusing to overwrite existing data.")
zettarepl.log:[2020/12/26 11:20:14] INFO [MainThread] [zettarepl.zettarepl]Scheduled tasks: [<Replication Task 'task_1'>]
Well, remember when I rsync'd this at the end of that earlier post? It turns out that was a waste of time. At least deletion is a lot quicker than copying. I ended up deleting the Backup dataset files first, letting that complete, and then deleting the Data dataset files. This was mostly because I thought the replication might do block-based operations, and seeking on hard disk drives is particularly bad; that is what defragmentation is all about, making sure files are arranged linearly on the platter because that is easier to read. It's a minor point, but when copying 9 TB total, little things add up. Two days later I had them both working.
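In hindsight, a quicker route might have been to work at the dataset level instead of deleting files: confirm the targets really have no snapshots, destroy them, and let the next replication run recreate them on receive. A rough sketch follows (and obviously zfs destroy is irreversible, so double-check the pool and dataset names):

# Confirm the targets have data but no snapshots, which is what zettarepl is complaining about
zfs list -t snapshot -r Archives/Backup Archives/Data
# Remove the offending datasets so replication from scratch can proceed
zfs destroy -r Archives/Backup
zfs destroy -r Archives/Data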

So far there are no issues with hard disk space. I am not sure if anything will develop, but for now it looks to be working great.
Data Security Conclusion
So I just want to review the entire plan for data security in this lab. There are two NVMe SSDs in the main ESXi box. The Primary datastore hosts core infrastructure VMs. These are backed up first by Veeam, then by UrBackup. As a rule, Veeam cannot do hypervisor-based backups of VMs with PCI passthrough, so I have configured individual Veeam agents on the machines that have this issue. Veeam doesn't support FreeBSD, so UrBackup does the frequent backups of the Storage VM, with Veeam backing up the UrBackup system. I technically have a Bacula system configured, but I have decided to shut that down for now; it is more effort than it is worth.
I have used a ZFS replication task to copy the Backup and Data datasets from the Stacks pool to the Archives pool. This results in 4 copies of all critical data, because the Stacks pool is a stripe of mirrored Solid State Drives and the Archives pool is a single mirror of two 16 TB Hard Disk Drives. Splitting the types of technology also helps, since it is unlikely the same event would knock both technologies out. In addition, the SSDs in the Stacks pool have power-loss protection, which should complete in-flight writes in the event of a power failure. ZFS also includes basic corruption detection and correction.
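The detection and correction piece is checksums plus periodic scrubs; checking on it is as simple as the following (pool name from this post):

# Walk every block in the pool and repair anything with a bad checksum from the mirror copy
zpool scrub Archives
# Review the result; CKSUM error counts and the "scan: scrub repaired ..." line show up here
zpool status Archives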
I have roughly configured these for my recovery point objective (RPO) of 2 days. Therefore backups take place about every two days, usually incremental, with a full backup every month, stored for 6 months. I will re-examine this in a month or two to check the drive space being used.
For my recovery time objective (RTO), I wanted to be able to recover from a total failure of the core infrastructure in about an afternoon. To accomplish this, I created replication tasks for each VM and left a replica of each core infrastructure VM on the Secondary Datastore (on a different SSD). In the event of a failure of the Primary Datastore, it is as simple as flipping some DHCP config and booting everything up to get the core infrastructure working again. I can then proceed with backup restores if needed, but that shouldn't be necessary for core operations. I should refresh the replicas whenever I do major maintenance on these core infrastructure VMs.
I'm not going to say this is enterprise grade. It is probably missing a few important points, especially multiple physical servers to remove the single point of failure that is my one hypervisor. Also, most backup and replication systems now involve a cloud component, which enables easy replication around the globe to prevent data loss. But this uses practically everything that is necessary to actually create an enterprise-grade data security system. Huzzah!
[1] https://blog.sepusa.com/snapshots-vs-backups
[3] https://www.reddit.com/r/vmware/comments/djse54/software_raid_for_esxi_installation/
[4] https://serverfault.com/questions/519688/software-raid-underneath-esxi-datastore
[5] http://woshub.com/how-to-access-vmfs-datastore-from-linux-windows/
[6] https://serverfault.com/questions/549858/rsync-over-ssh-to-esxi/549859
[8] https://www.tyler-wright.com/replicating-a-dataset-with-freenas-11-2/
[9] https://thesolving.com/storage/how-to-sync-two-freenas-storage-using-rsync/