A few years ago, when I constructed my entire smart home network, I purchased a few IP cameras, thinking I would test them out to see how well they worked. My interest came more from a bunch of discussions with a co-worker than from any hard and fast desire to watch everything. He was much more excited about it than I was; he wanted to install cameras all over his house so he could keep watch on things at all times. I always liked the idea of being able to watch my dogs no matter where I am. One of them can be rather rambunctious. Look at what she progressively did to my door.

Ultimately, this culminated in a window escape. Needless to say I kinda developed a complex about this. This dog needed to be watched, lest my house be destroyed by her antics.
From my discussions it seemed Foscam and Amcrest were kinda looked down upon by this hobby’s community. I am less convinced, but my research suggested that there were generally better options. I bought one each of the Foscam R4 [1] and the Amcrest Ultra HD 2K [2], and added a Reolink RLC-422w [3] and a Vivotek FD8182-T [4] as well. At the time I was interested in WDR (Wide Dynamic Range), a feature that helps a camera handle scenes containing both very bright and very dark areas, such as a dark porch lit by headlights at night [5]. This is something that pushed me towards the high end of the cameras available.
I also found some reviews at the time suggesting that what I wanted was more megapixels (MP). 1080P cameras are only 2.1 MP, and I was reading that the hobby considered 720P practically useless for identification in court. I found an old DHS handbook that covers this [6]. The truth is that identification is much more about the number of pixels a face covers, so 1080P stays useful at a larger distance. The cameras I bought were at minimum 1080P because of this; that DHS handbook doesn’t even treat 720P as much of a real option. The Reolink and Vivotek are both 5MP cameras, which uses a lot of bandwidth, but that is a good number of pixels. For reference, 4K UHD is 8.3 MP. Combined with WDR, these should give good video at night and during the day, with more megapixels for identification at a distance.
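As a rough sketch of that arithmetic (my own example numbers, not anything from the handbook): if a camera’s horizontal field of view covers a scene 20 feet wide, the horizontal pixel density works out to:
720P:  1280 px / 20 ft =  64 pixels per foot
1080P: 1920 px / 20 ft =  96 pixels per foot
5MP:   2560 px / 20 ft = 128 pixels per foot
The more pixels per foot of scene, the farther away a face can be and still cover enough pixels to be identifiable.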
At the same time I bought a DoorBird D101S doorbell [7] because it supported local capture, whereas my Ring Pro does not. In addition, the Ring Pro wants to charge me to store motion captures. I took offense at that. First, that was not the case at the time I purchased it: I got unlimited storage as a trial, but they let me review the previous three captures whenever I wanted. While I recognize that disk space is not free, I refuse to pay for storing motion captures when the device itself won’t let me record and store them locally. This, in my opinion, amounts to an anti-competitive practice: video storage is not the same market as smart doorbells. This is the same kind of thing Microsoft got in trouble for when it used its Windows product to push its Internet Explorer product.
Now the DoorBird is only 720P, but at the time the only 1080P video doorbell was the Ring Pro, which I tried first. The only thing worse than a 1080P camera for identification is no actual video. Since Ring decided to charge for actual video, that made the differences in resolution basically moot. I may upgrade this doorbell to a newer version if I really like how this whole setup works, but for now a 720P recording is better than a 1080P nothing.
When it comes to video monitoring software, ZoneMinder [8] is the biggest open source project by a mile. That isn’t to say there aren’t other options; Blue Iris is actually a pretty compelling one [9]. There are several comparisons between the two, and I hear about iSpy or Milestone from time to time, but as near as I can tell the choice tends to come down to ZoneMinder or Blue Iris. Blue Iris costs $69.95 for the 64-camera option, or $34.95 for a single-camera version.
I am mostly looking for two things here: integration with Home Assistant, which both ZoneMinder and Blue Iris support, and a web interface, which again both have. I do believe Blue Iris is probably easier to install and configure, but Docker has removed a lot of the difficulty in ZoneMinder configuration. ZoneMinder also has some of the most advanced options, like facial recognition and motion capture, though I don’t consider those necessary. Combine that with the fact that Blue Iris needs a Windows install, and since I am running without a Datacenter Windows license I don’t get unlimited VM installs, it is actually darn near $200 to get the software running. I really don’t want to spend that, so I will go with ZoneMinder.
This is the same conclusion I reached the last time I looked into this, but since I did not actually proceed then, I have no reference points yet. Perhaps the next time I review this I will know enough to build a better system overall. I am testing with good cameras, if a few years old, and a free software system. With better knowledge I may make a different choice in the future. For now though? This is good enough.
Video Storage Configuration
Alright, with the software re-decided, I had one more hardware issue to work through. All of these cameras will generate quite a bit of data. Most of the time when one buys a surveillance system, they get a small server, which is little more than a chip running the management software and some hard drives to store the video files.
As I had no experience with ZoneMinder in the first place, I didn’t know exactly what to expect, except that it would take a lot of space. This is the rare case of data that is written constantly but read almost never. Most of the video will never be viewed; the expectation is that it is there when it is needed, but otherwise the data is ignored. When the storage is full, older video is simply deleted and new video replaces it. Given this access pattern, my storage VM is not actually a good home for this.
The storage VM is designed for something like mixed-use or read-mostly drives, and having SSDs adds nothing here. If a file needs to be stored long term, it can be moved to the SSDs, but these files basically never need to be read again; it would be a terrible waste of a high performance array to use it for surveillance storage. I had originally considered using the 16TB HDDs for surveillance, but at the end of the storage VM build I decided to use them for backup.
Ultimately, I decided to pull four 8 TB drives from the original storage VM (before building this lab). I have long since moved all the data over, and no longer even need it as a backup, since I replaced the backup option as well. I still had six drive bays available after the 16TB drives were installed. There is a slight issue here: these drives are optimized for NAS use, which is a slightly different use case. Surveillance drives are designed for constant writes, whereas NAS drives are tuned more for constant reads [10]. There are enough similarities that I am not that concerned; both are designed for 24/7 operation and significant write time. Since I intend to mirror these drives for data security, that should be more than enough to recover from any issues even in this kind of operation. I also pulled four drives when I only intend to use two directly, which leaves me two spares for whenever I actually run into issues.
As a side note, I also decided to pull four drives from the old storage system for a different reason: I wanted at least three gone to destroy the old array. I think some friends are interested in the spare hardware I have. By pulling enough drives to break the RAID-Z2 they ran in, the entire pool is destroyed and there is not enough data left to recover it. I still intend to overwrite the drives, but this makes it even more unlikely that anything can be recovered from them.
Alright, I started by creating a Raw Device Mapping for two of these drives. I took some notes to make sure I selected the two I wanted; below, I have cropped out some of the output to remove confusion while I picked them out.
ls -l /vmfs/devices/disks
total 157157734457
...
-rw------- 1 root root 8001563222016 Dec 30 01:59 t10.ATA_____WDC_WD80EFZX2D68UW8N0____________________VK0TW0TY____________
-rw------- 1 root root 2147483648 Dec 30 01:59 t10.ATA_____WDC_WD80EFZX2D68UW8N0____________________VK0TW0TY____________:1
-rw------- 1 root root 7999415652352 Dec 30 01:59 t10.ATA_____WDC_WD80EFZX2D68UW8N0____________________VK0TW0TY____________:2
-rw------- 1 root root 8001563222016 Dec 30 01:59 t10.ATA_____WDC_WD80EFZX2D68UW8N0____________________VK0W8JTY____________
-rw------- 1 root root 2147483648 Dec 30 01:59 t10.ATA_____WDC_WD80EFZX2D68UW8N0____________________VK0W8JTY____________:1
-rw------- 1 root root 7999415652352 Dec 30 01:59 t10.ATA_____WDC_WD80EFZX2D68UW8N0____________________VK0W8JTY____________:2
-rw------- 1 root root 8001563222016 Dec 30 01:59 t10.ATA_____WDC_WD80EFZX2D68UW8N0____________________VK0W9R1Y____________
-rw------- 1 root root 2147483648 Dec 30 01:59 t10.ATA_____WDC_WD80EFZX2D68UW8N0____________________VK0W9R1Y____________:1
-rw------- 1 root root 7999415652352 Dec 30 01:59 t10.ATA_____WDC_WD80EFZX2D68UW8N0____________________VK0W9R1Y____________:2
-rw------- 1 root root 8001563222016 Dec 30 01:59 t10.ATA_____WDC_WD80EFZX2D68UW8N0____________________VK0WJ1RY____________
-rw------- 1 root root 2147483648 Dec 30 01:59 t10.ATA_____WDC_WD80EFZX2D68UW8N0____________________VK0WJ1RY____________:1
-rw------- 1 root root 7999415652352 Dec 30 01:59 t10.ATA_____WDC_WD80EFZX2D68UW8N0____________________VK0WJ1RY____________:2
...
vml.0100000000564b305457305459202020202020202020202020574443205744 -> t10.ATA_____WDC_WD80EFZX2D68UW8N0____________________VK0TW0TY____________
lrwxrwxrwx 1 root root 75 Dec 30 01:59 vml.0100000000564b305457305459202020202020202020202020574443205744:1 -> t10.ATA_____WDC_WD80EFZX2D68UW8N0____________________VK0TW0TY____________:1
lrwxrwxrwx 1 root root 75 Dec 30 01:59 vml.0100000000564b305457305459202020202020202020202020574443205744:2 -> t10.ATA_____WDC_WD80EFZX2D68UW8N0____________________VK0TW0TY____________:2
lrwxrwxrwx 1 root root 73 Dec 30 01:59 vml.0100000000564b3057384a5459202020202020202020202020574443205744 -> t10.ATA_____WDC_WD80EFZX2D68UW8N0____________________VK0W8JTY____________
lrwxrwxrwx 1 root root 75 Dec 30 01:59 vml.0100000000564b3057384a5459202020202020202020202020574443205744:1 -> t10.ATA_____WDC_WD80EFZX2D68UW8N0____________________VK0W8JTY____________:1
lrwxrwxrwx 1 root root 75 Dec 30 01:59 vml.0100000000564b3057384a5459202020202020202020202020574443205744:2 -> t10.ATA_____WDC_WD80EFZX2D68UW8N0____________________VK0W8JTY____________:2
lrwxrwxrwx 1 root root 73 Dec 30 01:59 vml.0100000000564b305739523159202020202020202020202020574443205744 -> t10.ATA_____WDC_WD80EFZX2D68UW8N0____________________VK0W9R1Y____________
lrwxrwxrwx 1 root root 75 Dec 30 01:59 vml.0100000000564b305739523159202020202020202020202020574443205744:1 -> t10.ATA_____WDC_WD80EFZX2D68UW8N0____________________VK0W9R1Y____________:1
lrwxrwxrwx 1 root root 75 Dec 30 01:59 vml.0100000000564b305739523159202020202020202020202020574443205744:2 -> t10.ATA_____WDC_WD80EFZX2D68UW8N0____________________VK0W9R1Y____________:2
lrwxrwxrwx 1 root root 73 Dec 30 01:59 vml.0100000000564b30574a315259202020202020202020202020574443205744 -> t10.ATA_____WDC_WD80EFZX2D68UW8N0____________________VK0WJ1RY____________
lrwxrwxrwx 1 root root 75 Dec 30 01:59 vml.0100000000564b30574a315259202020202020202020202020574443205744:1 -> t10.ATA_____WDC_WD80EFZX2D68UW8N0____________________VK0WJ1RY____________:1
lrwxrwxrwx 1 root root 75 Dec 30 01:59 vml.0100000000564b30574a315259202020202020202020202020574443205744:2 -> t10.ATA_____WDC_WD80EFZX2D68UW8N0____________________VK0WJ1RY____________:2
The first thing I noticed is that the drives had multiple volumes! This is leftover from the ZFS array they were pulled from. It was important to map the base drive and not any of the ZFS partitions.
[root@Alexandria:~] vmkfstools -z /vmfs/devices/disks/t10.ATA_____WDC_WD80EFZX2D68UW8N0____________________VK0WJ1RY____________ /vmfs/volumes/Primary/ZoneMinder/capture_rdm1.vmdk
[root@Alexandria:~] vmkfstools -z /vmfs/devices/disks/t10.ATA_____WDC_WD80EFZX2D68UW8N0____________________VK0TW0TY____________ /vmfs/volumes/Primary/ZoneMinder/capture_rdm2.vmdk
Next I created the VM for these drives. I created it with 32 vCPUs, 64 GB of RAM, a 16GB install drive, and the 2 RDM drives from above.

That is actually quite a few resources for this VM. I was a little worried about transcoding: I intend to store these files in their original format, and most of these cameras do native H.264 encoding, so the ZoneMinder system shouldn’t need to do any encoding itself. However, when I connect this to Home Assistant, I think it may need to do some transcoding. I also wanted a lot of RAM for the same reason the storage VM has a lot of RAM: ZFS.
As described above, I intended to mirror two 8TB NAS drives from my old storage server. My first configuration of these involved using the old MDADM setup with an ext4 filesystem built on top of it.

It appears that LVM has added basic RAID support, but a bit of research suggests that it is not a great option [11]. MDADM is an old and mature software base, while LVM’s implementation is newer. The LVM approach is based on the theory that fewer layers between the CPU and the drives means better performance. That is true as far as it goes, but MDADM has a couple of decades of performance work on its RAID implementation that LVM’s does not. I suspect MDADM is more performant simply because of the development hours put into it versus the newer RAID support in LVM, so I chose to stick with MDADM.
So, as I was researching exactly what to do with this mirror, but before I had decided to go back to ZFS, I encountered a web-based management utility for Linux called Cockpit [12]. This is an amazing utility for basic Linux management. I am used to having to dig all of this information out of the command line; here I get a nice web utility that uses my local authentication to show all of it. In addition, it supports basic RAID configurations under the Storage section, and it is here that I actually created the partition and filesystem on the MDADM RAID 1 mirror.
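For reference, creating a two-disk mirror from the command line looks roughly like the following; I did the equivalent through the Cockpit UI, and the device names below are placeholders, not my actual disks.
# Create a two-disk RAID 1 (mirror) array; /dev/sdX and /dev/sdY are placeholders
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdX /dev/sdY
# Watch the initial resync progress
cat /proc/mdstat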
I started by navigating to https://zoneminder.internal:9090 (the default port for Cockpit). I should also note that I quickly configured a DNS record for this machine and gave it a DHCP-based static IP.

After that, I logged in with my username and password, which brought me to the status screen.

I navigated to the Storage screen, then selected /dev/md0. Unfortunately, I am missing the in-between step here. I clicked Create partition table.

I selected the GPT format and chose not to overwrite existing data, which will effectively happen anyway once video starts recording. Then I clicked Format.

Now I could create a filesystem. I selected Create Partition.

Here I did a little research on XFS vs ext4. It appears that XFS is theoretically a little faster on writes [13], which is the overwhelming factor for this use case, so I went with XFS.
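The rough CLI equivalent of these Cockpit steps, assuming the partition ends up named /dev/md0p1, would be something like:
# Create a GPT label, a single partition spanning the array, and an XFS filesystem
parted /dev/md0 mklabel gpt
parted -a optimal /dev/md0 mkpart primary xfs 0% 100%
mkfs.xfs /dev/md0p1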

At this point I had a thought about how much data security would actually matter here. I encountered a very long article, definitely worth a read, about corruption and recovery with Btrfs, MDADM + dm-integrity, and ZFS [14]. I had intended to use dm-integrity to add data integrity to the standard MDADM configuration, but I could not find the base Ubuntu package for it. In addition, I was worried that I was signing myself up for two completely different recovery ecosystems.
That is not, inherently, a bad thing, but my time is not infinite. I do not like the idea of having to maintain two completely separate data security and integrity systems, at least not without a darn good reason. The article above makes a couple of really good points about this in its conclusion.
However, not only do you need to carefully study the documentation of each piece of technology you put together with mdadm and make sure you understand how you put these things together and how you best deal with potential problems, but you’re still limited by the “regular” filesystem you put on top of all that, and you don’t get any of really well designed and superior protection or management that ZFS or Btrfs provides.
https://www.unixsheikh.com/articles/battle-testing-zfs-btrfs-and-mdadm-dm.html#zfs-raid-z
That is not good. ZFS takes a holistic approach to data integrity, which means its tools present a unified interface for every action you take. With the data integrity piece being a separate program entirely, not just an MDADM extension, that unity is lost; I would need familiarity with completely separate application designs, and I don’t want that. Another good point made is:
ZFS is a copy-on-write filesystem that is extremely well designed and it is light years ahead of Btrfs. ZFS is also very easy to use. Yes, you are allowed to shoot yourself in the foot with ZFS, this is *NIX after all, and if you don’t plan ahead you can also end up with a big mess, but then it is mostly your own fault. ZFS is very well documented, but with ZFS you almost know by intuition how a command needs to be constructed.
https://www.unixsheikh.com/articles/battle-testing-zfs-btrfs-and-mdadm-dm.html#zfs-raid-z
I wasn’t really considering Btrfs, though I am a little familiar with it. I always heard it was a sort of ZFS redesign for Linux, and it does use less memory to implement similar features. The author seems to agree, but overall I don’t see a reason to go with it when ZFS is also available for free. This kind of confirms what I was thinking: ZFS is simply more mature. Why reinvent the wheel unless you have to? I wish the Btrfs people the best of luck; having options is always good, but I probably won’t consider Btrfs unless ZFS goes completely closed source.
I decided at this point to just go back to ZFS. There is an OpenZFS implementation for Linux, and I also found an OpenZFS extension for Cockpit [15] that is actually pretty well featured. I should be able to easily deploy a simple ZFS mirror here. This is also why I ended up with so much memory: ZFS needs it. I believe that will make recording work better, as the memory serves as a great cache. I am not certain MDADM caches nearly as well; it probably needs yet another component to do that.
In any event, I started by installing openzfs on linux.
apt install zfsutils-linux
This installs quite a few utilities, but it includes ZFS. I then went to /root to clone the git repo for the ZFS Cockpit manager.
/root# git clone https://github.com/optimans/cockpit-zfs-manager.git
Cloning into 'cockpit-zfs-manager'...
remote: Enumerating objects: 99, done.
remote: Total 99 (delta 0), reused 0 (delta 0), pack-reused 99
Unpacking objects: 100% (99/99), 873.28 KiB | 2.49 MiB/s, done.
cp -r cockpit-zfs-manager/zfs /usr/share/cockpit
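Cockpit then needs a restart to pick up the new module; on this Ubuntu VM that is presumably just:
systemctl restart cockpit.socket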
After restarting Cockpit, a new ZFS option appeared in the menu. I did need to delete the old MDADM array first though, so I navigated back to the Storage section.

Then I clicked the Delete option on the /dev/md0 RAID 1 array. (I missed the screenshot for the confirmation box.)

Back at the ZFS section, I clicked Create Storage Pool.

Here I filled out the options for a new mirror. I am calling this pool VideoData. I selected the two 8TB disks, which auto-selected the Mirror virtual device, and chose a 1MB record size since these are going to be linear writes of large video files. I don’t need Deduplication or LZ4 Compression: the cameras already encode in H.264, so further compression would be a waste of processing time, and since these are pure recordings, deduplication would never find any repeat files. I did enable automatic expansion, since I have a pair of 8TB drives on standby if I decide I need them. The SELinux contexts for Samba option just makes it an SMB share; in the end, I don’t think I used that, and I believe I later removed it. I finalized by clicking Create.

Which succeeded.
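For reference, the rough CLI equivalent of the pool I built in the Cockpit form would be the following, with the two disk paths as placeholders for the 8TB RDM drives:
# Mirrored pool, 1M record size, no compression or dedup, auto-expand enabled
zpool create -o autoexpand=on \
  -O recordsize=1M -O compression=off -O dedup=off \
  VideoData mirror /dev/sdX /dev/sdY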

That created the bare pool; I now needed to create a dataset (TrueNAS/FreeNAS terminology). It appears that OpenZFS calls this a File System, at least in this extension. I selected the newly created pool and clicked Create File System.

On this screen, I named the dataset Capture. I mostly left the settings alone because the base pool already carried all the configuration I needed. Next, I clicked Create.

At this point I decided to check out the Docker containers for ZoneMinder. I found two of them. The first is from zoneminderhq [16], which sounds official but doesn’t appear to be; I didn’t quite figure that out until later. The second is from someone whose account is dlandon [17], which I assume is a name. I don’t know why it is just some guy’s account and not an official project account, but it is referenced from the ZoneMinder project website [18].
Given that I knew what the zoneminderhq container wanted, at least as it related to volumes, I decided to create four file systems.
docker run -d -t -p 1080:80 \
-e TZ='Europe/London' \
-v ~/zoneminder/events:/var/cache/zoneminder/events \
-v ~/zoneminder/images:/var/cache/zoneminder/images \
-v ~/zoneminder/mysql:/var/lib/mysql \
-v ~/zoneminder/logs:/var/log/zm \
--shm-size="512m" \
--name zoneminder \
zoneminderhq/zoneminder:latest-ubuntu18.04
So I created those. I also decided to rename the pool from camel case to snake case, so VideoData became video_data.
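From the command line, the rename and the file systems I ended up with would look roughly like this (a pool rename is just an export followed by an import under the new name; I did all of this through the Cockpit ZFS manager):
zpool export VideoData
zpool import VideoData video_data
# File systems (datasets) for the container volumes
zfs create video_data/video
zfs create video_data/zoneminder_events
zfs create video_data/zoneminder_images
zfs create video_data/zoneminder_logs
zfs create video_data/zoneminder_mysql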

Next, I wanted to change some of the settings on the zoneminder_mysql dataset, since optimizing for a SQL database is slightly different from optimizing for video recording. Namely, I reduced the record size to 32KB.
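The same change from the command line would presumably be just:
zfs set recordsize=32K video_data/zoneminder_mysql
zfs get recordsize video_data/zoneminder_mysql   # verify the new value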

The weird thing about this is that the zoneminderhq files are actually in the ZoneMinder GitHub repository [19]. This needs to be cleared up: which is the officially supported container, Dan Landon’s or zoneminderhq’s? In any event, this was the end of my ZFS file system creation.
I had one last thing I wanted to do for ZFS. The ZFS ARC will eventually take all of the available memory for its cache, and I wanted to cap it at 32GB, or half of the VM’s memory. To do that I needed to change zfs.conf [20]. This file didn’t exist yet, so I had to create it at /etc/modprobe.d/zfs.conf
/etc/modprobe.d/zfs.conf
# Min 4 GB / Max 32 GB limit
options zfs zfs_arc_min=4294967295
options zfs zfs_arc_max=34359738368
I then rebooted the VM.
reboot
And checked that the ZFS system registered the new settings.
cat /proc/spl/kstat/zfs/arcstats |grep c_
c_min 4 4294967295
c_max 4 34359738368
arc_no_grow 4 0
arc_tempreserve 4 0
arc_loaned_bytes 4 0
arc_prune 4 554014
arc_meta_used 4 6509875824
arc_meta_limit 4 25769803776
arc_dnode_limit 4 2576980377
arc_meta_max 4 16149150264
arc_meta_min 4 16777216
async_upgrade_sync 4 800
arc_need_free 4 0
arc_sys_free 4 1055385664
arc_raw_size 4 0
With that done, the video recording ZFS subsystem was configured. It took me a minute to find where everything lives: ZFS mounts the pool under the root directory, then the pool name, then the filesystems. So the file systems are located on the system at:
/video_data/video
/video_data/zoneminder_events
/video_data/zoneminder_images
/video_data/zoneminder_logs
/video_data/zoneminder_mysql
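In hindsight, the mount points can be listed directly; this command would have saved me the minute of hunting:
zfs list -o name,mountpoint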
ZoneMinder Install and Configuration
As previously mentioned, there are actually two somewhat official ZoneMinder Docker containers. I first tried to get the zoneminderhq container working. Here is the docker-compose.yaml I used.
version: '3'
services:
  zoneminder:
    container_name: zoneminder
    restart: always
    image: zoneminderhq/zoneminder:latest-ubuntu18.04
    ports:
      - "1080:80"
    shm_size: "1024m"
    volumes:
      - /video_data/zoneminder_events:/var/cache/zoneminder/events
      - /video_data/zoneminder_images:/var/cache/zoneminder/images
      - /video_data/zoneminder_mysql:/var/lib/mysql
      - /video_data/zoneminder_logs:/var/log/zm
      - /video_data/video:/var/cache/zoneminder/video_storage
    environment:
      - TZ=America/Chicago
Once I got the container running, I navigated to zoneminder.internal:1080. All I saw was this:

The default Apache page. It took me some time to deduce what was happening: it turns out that for ZoneMinder the default page is not the root page. Unfortunately the zoneminderhq container doesn’t tell me how to access the web interface; the dlandon one does. So I configured the dlandon container in my docker-compose.yaml:
version: '3'
services:
#  zoneminder:
#    container_name: zoneminder
#    restart: always
#    image: zoneminderhq/zoneminder:latest-ubuntu18.04
#    ports:
#      - "1080:80"
#    shm_size: "1024m"
#    volumes:
#      - /video_data/zoneminder_events:/var/cache/zoneminder/events
#      - /video_data/zoneminder_images:/var/cache/zoneminder/images
#      - /video_data/zoneminder_mysql:/var/lib/mysql
#      - /video_data/zoneminder_logs:/var/log/zm
#      - /video_data/video:/var/cache/zoneminder/video_storage
#    environment:
#      - TZ=America/Chicago
  zoneminder:
    container_name: zoneminder
    restart: always
    network_mode: "bridge"
    image: dlandon/zoneminder:latest
    ports:
      - "8443:443/tcp"
      - "9000:9000/tcp"
      - "8080:80/tcp"
    environment:
      - TZ=America/Chicago
      - SHMEM="50%"
      - PUID="99"
      - PGID="100"
      - INSTALL_HOOK="0"
      - INSTALL_FACE="0"
      - INSTALL_TINY_YOLOV3="0"
      - INSTALL_YOLOV3="0"
      - INSTALL_TINY_YOLOV4="0"
      - INSTALL_YOLOV4="0"
      - MULTI_PORT_START="0"
      - MULTI_PORT_END="0"
    volumes:
      - /video_data/video:/var/cache/zoneminder:rw
      - /video_data/zoneminder_events:/var/cache/zoneminder/events
      - /home/<myaccount>/zoneminder/config:/config:rw
I was able to get that one running and access it at zoneminder.internal:8080/zm/, though I did have to first agree to the terms of use. Unfortunately, I did not screen capture that page, and I am not sure how to get it back. This is the blank console I saw.

After a brief bit of exploring, which also included switching the interface to dark mode, I figured out how to configure camera streams to monitor. Additionally, I ended up giving every camera its own DNS name.
I am going to step through adding the Vivotek camera. There are a lot of configuration notes in the ZoneMinder wiki about how to get these streams recording and working; the baseline I used for my Vivotek cam was the FD8154 entry on that page [21]. To add a camera or stream to record, I started at the base page, then clicked Add at the top center.

This opened a new window on the General tab. The most important items here are the Source Type and the Function. I always picked Ffmpeg, which uses the Linux video processing utility of the same name, and Mocord, which records continuously with motion highlighted in the recording. There are several other options; I read through them and liked these settings [22]. I have 8TB, so I didn’t need to go to extremes to save space.

Alright, I needed to configure the stream for recording. I logged into the Vivotek camera system and saw the streaming protocols.

This appears to closely match the section on the FD 8154. I decided to construct the stream like it said:
http://<user>:<password>@<ip address>/video.mjpg
So I tested it with VLC. I clicked Media -> Open Network Stream

Then I entered my stream:
rtsp://<user>:<password>@<Vivotek DNS address>:554/live.sdp

And the stream opened.
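As an alternative to VLC, the stream can also be sanity checked from the ZoneMinder host with ffprobe (part of ffmpeg), using the same placeholder credentials:
# Force TCP transport, which tends to be more reliable than UDP for a quick check
ffprobe -rtsp_transport tcp "rtsp://<user>:<password>@<Vivotek DNS address>:554/live.sdp"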

So I added it back in the ZoneMinder Source tab. The resolution is 2560 x 1920, which I was able to find in the Vivotek configuration interface.

Then I clicked save. The new stream is now recording.

I configured Zoneminder accounts for all of the cameras.
Now, I will briefly cover the cameras and the streams I used. I should note that for usernames and passwords, URL encoding [23] is required to get these to work correctly.
| Camera | Stream path | Resolution |
| --- | --- | --- |
| DoorBird D101S [24] | rtsp://<username>:<password>@<dnsname>:554/mpeg/media.amp | 1280 x 720 |
| Foscam R4 [25] | rtsp://<username>:<password>@<dnsname>.internal:88/videoMain | 1920 x 1080 |
| ReoLink RLC-422w [26] (I used the RLC-423 section) | rtmp://<dnsname>:1935/bcs/channel0_main.bcs?channel=0&stream=0&user=<username>&password=<password> | 3072 x 1728 (discovered via warnings in the ZoneMinder logs) |
| Amcrest Ultra HD 2K [27] | rtsp://<username>:<password>@<dnsname>:554/cam/realmonitor?channel=1&subtype=0 | 2304 x 1296 (also discovered via warnings in the ZoneMinder logs) |
Next I configured the drive purging. This is located under the Filters section. I selected the filter PurgeWhenFull.

And I set it to 95%

With that, hopefully the disk space will be freed up when it gets full.
There was only one more thing I wanted to do on the ZoneMinder side of things. It appears that the Foscam R4 and the Amcrest support PTZ (Pan, Tilt, Zoom), which allows the movement of the cameras to be controlled from here. For Foscam there were actually several control options, and I tried most of them. It wasn’t until I hit this page [28] that I found out that the Foscam FI9831W control type should work with most of them. I had to add the following config options to the Foscam edit page:

Control Device: user=<username>&pwd=<password>
Control Address: <dnsname>:88
Which makes the PTZ controls appear on the monitor screen.

The annoying part of this one is that it has to be timed: each arrow starts the movement, and I needed to hit the center button to stop it. The controls also show up no matter which protocol is selected, so they don’t really provide great feedback on whether a given choice is working. The configuration above did work.
I also got the Amcrest PTZ controls working with:
Control Type: Amcrest HTTP API
Control Address: http://<username>:<password>@<dnsname>
With only one option for Amcrest, this just worked.
The last configuration item I noted was that the top of the screen displays the shared memory usage percentage.

It made it apparent that the container needed more shared memory, which is controlled by a Docker option [29]. I updated the docker-compose.yaml.
version: '3'
services:
#  zoneminder:
#    container_name: zoneminder
#    restart: always
#    image: zoneminderhq/zoneminder:latest-ubuntu18.04
#    ports:
#      - "1080:80"
#    shm_size: "1024m"
#    volumes:
#      - /video_data/zoneminder_events:/var/cache/zoneminder/events
#      - /video_data/zoneminder_images:/var/cache/zoneminder/images
#      - /video_data/zoneminder_mysql:/var/lib/mysql
#      - /video_data/zoneminder_logs:/var/log/zm
#      - /video_data/video:/var/cache/zoneminder/video_storage
#    environment:
#      - TZ=America/Chicago
  zoneminder:
    container_name: zoneminder
    restart: always
    network_mode: "bridge"
    image: dlandon/zoneminder:latest
    ports:
      - "8443:443/tcp"
      - "9000:9000/tcp"
      - "8080:80/tcp"
    shm_size: "16384m"
    environment:
      - TZ=America/Chicago
      - SHMEM="50%"
      - PUID="99"
      - PGID="100"
      - INSTALL_HOOK="0"
      - INSTALL_FACE="0"
      - INSTALL_TINY_YOLOV3="0"
      - INSTALL_YOLOV3="0"
      - INSTALL_TINY_YOLOV4="0"
      - INSTALL_YOLOV4="0"
      - MULTI_PORT_START="0"
      - MULTI_PORT_END="0"
    volumes:
      - /video_data/video:/var/cache/zoneminder:rw
      - /video_data/zoneminder_events:/var/cache/zoneminder/events
      - /home/nrweaver/zoneminder/config:/config:rw
That lets it use 16GB of memory for shared memory space. That fixed the issue.

At this point, I had all of the recordings working. I could review them whenever I wanted, and the cameras were all working, so I considered ZoneMinder configured. The last item was to connect it to Home Assistant.
Home Assistant Integration
With ZoneMinder configured and running, I wanted to make sure I could see the streams from a single point. I don’t really like the idea of needing to log into the ZoneMinder web interface or of paying for the zmNinja mobile application [30]. There is a page describing how this works for Home Assistant [31]; it is just a matter of adding some options to Home Assistant’s configuration.yaml.
zoneminder:
  - host: <dnsname>:8443
    ssl: true
    verify_ssl: false
    username: <username>
    password: <password>
camera:
  - platform: zoneminder
I have not yet configured the SSL certificates to use my custom certificate authority; I was just trying to get the basic camera options working. They did not work at first, because I made a minor mistake when I created the Home Assistant user.

I thought it needed API access and that was it; it actually needed at least view rights for all of the streams, events, and monitors. While trying to get the configuration working, I also found an example configuration.yaml file [32].
zoneminder:
  - host: !secret zm_host
    ssl: true
    verify_ssl: false
    username: !secret zm_username
    password: !secret zm_password
switch:
  - platform: zoneminder
    command_on: Modect
    command_off: Monitor
sensor:
  - platform: zoneminder
    include_archived: true
    monitored_conditions: hour
camera:
  - platform: zoneminder
When I did that, I ran into the issue that I already had a sensor section. I moved the sensor entries into my existing section and then had the following.
zoneminder:
  - host: <dnsname>:8443
    ssl: true
    verify_ssl: false
    username: <username>
    password: <password>
camera:
  - platform: zoneminder
switch:
  - platform: zoneminder
    command_on: Mocord
    command_off: Monitor
With that and the user permissions update, I was able to get a stream working.

The next thing I did was try to get this traffic off of the main network. The ZoneMinder VM connects to the rest of the network through the passed-through 10G x550 ethernet port, so when it streams to the Home Assistant VM, those streams traverse the physical network. I would prefer that this traffic not leave the local VM network. To fix this, I decided to create a dedicated Video Network, in the hopes that these streams could be managed better when isolated. I would need to perform the same trick as when I created the storage network, but without the need for a physical port. In ESXi I created the new network and vSwitch.
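I did this through the ESXi web UI; the CLI equivalent would be roughly the following, with the vSwitch and port group names being whatever I chose rather than anything required:
esxcli network vswitch standard add --vswitch-name=vSwitch-Video
esxcli network vswitch standard portgroup add --portgroup-name="Video Network" --vswitch-name=vSwitch-Video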

I also made a replica of the storage DNS and DHCP VM, then configured a new zone. At first I called it vid, but that ran into an issue with nslookup failing; it turned out to be a problem with the three letter label. I lengthened it to video and nslookup started succeeding. Here are the DNS configuration files. I chose to use the 10.0.4.0/24 network for this.
$ORIGIN .
$TTL 604800 ; 1 week
video IN SOA video. root.video. (
2 ; serial
604800 ; refresh (1 week)
86400 ; retry (1 day)
2419200 ; expire (4 weeks)
604800 ; minimum (1 week)
)
NS video-ns1.video.
NS video-ns2.video.
$ORIGIN video.
ha1 A 10.0.4.50
video-ns1 A 10.0.4.4
video-ns2 A 10.0.4.5
zoneminder A 10.0.4.27
;
; BIND reverse data file for local loopback interface
;
$TTL 604800
@ IN SOA video. root.video. (
6 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
;
; name servers - NS records
IN NS video-ns1.video.
IN NS video-ns2.video.
; PTR Records
4 IN PTR video-ns1.video. ; 10.0.4.4
5 IN PTR video-ns2.video. ; 10.0.4.5
27 IN PTR zoneminder.video. ; 10.0.4.27
50 IN PTR ha1.video. ; 10.0.4.50
// This is the primary configuration file for the BIND DNS server named.
//
// Please read /usr/share/doc/bind9/README.Debian.gz for information on the
// structure of BIND configuration files in Debian, *BEFORE* you customize
// this configuration file.
//
// If you are just adding zones, please do that in /etc/bind/named.conf.local
include "/etc/bind/named.conf.options";
include "/etc/bind/named.conf.video";
include "/etc/bind/named.conf.log";
options {
directory "/var/cache/bind";
// If there is a firewall between you and nameservers you want
// to talk to, you may need to fix the firewall to allow multiple
// ports to talk. See http://www.kb.cert.org/vuls/id/800113
// If your ISP provided one or more IP addresses for stable
// nameservers, you probably want to use them as forwarders.
// Uncomment the following block, and insert the addresses replacing
// the all-0's placeholder.
recursion yes;
allow-recursion { 10.0.4.0/24; };
listen-on { 10.0.4.4; };
allow-transfer { none; };
forwarders {
8.8.8.8;
4.4.4.4;
};
//========================================================================
// If BIND logs error messages about the root key being expired,
// you will need to update your keys. See https://www.isc.org/bind-keys
//========================================================================
dnssec-validation auto;
listen-on-v6 { any; };
};
//
// Do any vid configuration here
//
// Consider adding the 1918 zones here, if they are not used in your
// organization
//include "/var/lib/bind/zones.rfc1918";
zone "video" {
type master;
file "/var/lib/bind/db.video";
allow-transfer { 10.0.4.4; 127.0.0.1; };
check-names warn;
};
zone "4.0.10.in-addr.arpa" {
type master;
file "/var/lib/bind/db.4.0.10"; #10.0.4.0/24 subnet
allow-transfer { 10.0.4.4; 127.0.0.1; };
};
And the following is the DHCP configuration file
subnet 10.0.4.0 netmask 255.255.255.0 {
  range 10.0.4.5 10.0.4.100;
  option domain-name-servers 10.0.4.4;
  option domain-name "video";
  option subnet-mask 255.255.255.0;
  default-lease-time 600;
  max-lease-time 7200;
  host ha1 {
    hardware ethernet 00:50:56:a4:6d:56;
    fixed-address 10.0.4.50;
  }
  host zoneminder {
    hardware ethernet 00:50:56:a4:84:18;
    fixed-address 10.0.4.27;
  }
}
Then I added the static IP via netplan.
network:
  version: 2
  renderer: networkd
  ethernets:
    ens192:
      addresses:
        - 10.0.4.4/24
      gateway4: 10.0.4.1
      nameservers:
        search: [vid]
        addresses: [10.0.4.4]
And on the ZoneMinder VM I added the following to make sure it brought up the VM NIC for the video network.
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens160:
      dhcp4: true
    ens192:
      dhcp4: true
  version: 2
The same was done on the Home Assistant VM. With all of that configuration done, the video network was ready to go. I updated the configuration.yaml for Home Assistant and ended up with the following.
zoneminder:
  - host: zoneminder.video:8443
    ssl: true
    verify_ssl: false
    username: <username>
    password: <password>
camera:
  - platform: zoneminder
switch:
  - platform: zoneminder
    command_on: Mocord
    command_off: Monitor
And the streaming is now contained within the hypervisor and host system. I have noticed that the streams appear less reliable on this private .video network, and I am not sure why. The Wi-Fi network did get saturated, so I may need to do some further configuration to limit the camera bandwidth and make sure the Wi-Fi network stays reliable for the other, non-camera users. I am thinking of something like a second Wi-Fi network whose bandwidth I can limit, combined with a recording setting in ZoneMinder that doesn’t capture the full frame rate but is limited to something like 10-15 frames per second.
In any event, everything is configured here. I am now able to easily watch all of my home surveillance cameras.
[2] https://amcrest.com/amcrest-ultrahd-2k-wifi-video-security-ip-camera-pt-black.html
[3] https://reolink.com/product/rlc-422w/
[4] https://www.vivotek.com/FD8182-T
[5] https://securitynewsdesk.com/how-can-true-wide-dynamic-range-benefit-security-applications/
[7] https://www.doorbird.com/downloads/datasheet_d101s_en.pdf
[9] https://blueirissoftware.com/
[10] https://www.howtogeek.com/662440/what-exactly-is-a-surveillance-or-nas-hard-drive/
[11] https://serverfault.com/questions/133673/lvm2-vs-mdadm-performance
[12] https://cockpit-project.org/
[15] https://github.com/optimans/cockpit-zfs-manager
[16] https://hub.docker.com/r/zoneminderhq/zoneminder
[17] https://hub.docker.com/r/dlandon/zoneminder
[18] https://zoneminder.readthedocs.io/en/1.34.8/installationguide/easydocker.html
[19] https://github.com/ZoneMinder/zmdockerfiles
[21] https://wiki.zoneminder.com/Vivotek
[22] https://zoneminder.readthedocs.io/en/stable/userguide/definemonitor.html
[23] https://meyerweb.com/eric/tools/dencoder/
[24] https://forums.zoneminder.com/viewtopic.php?t=25438
[25] https://forums.zoneminder.com/viewtopic.php?t=26810
[26] https://wiki.zoneminder.com/Reolink
[27] https://wiki.zoneminder.com/Amcrest
[30] https://pliablepixels.github.io/