I remember back in high school I was typing a paper for my AP European History course, specifically on the Defenestration of Prague (one of many, apparently) and its lead-up to the Thirty Years’ War. The Protestants threw three Catholics out a window, and the Catholics survived. The Protestants declared victory because they threw the Catholics out; the Catholics proclaimed it a miracle that they survived. Both sides geared up for war. History is stranger than fiction, apparently. Also, English is a weird language. Seriously, who decided we needed a specific word meaning “to throw something out the window”? Is that really a common enough occurrence? Apparently, because there were four defenestrations in Prague.
Anyways, in the middle of writing this paper the power went out. This was before automatic local saves were a thing in Microsoft Word; in fact, I’m not even sure I was using Word. It might have been WordPerfect. I also hated the fact that my childhood home would lose power whenever a stiff breeze hit the transformer out back. I lost the paper and had to start over, and teachers were not very forgiving of that kind of thing in my experience. So let’s talk about the service you didn’t know you needed until it was already too late: backup!
Backup Service Decision
For this whole project I have deployed a total of eight VMs: the Streaming VM, the Virtual Reality VM, the Home Assistant VM, the Storage VM, the Domain Name System VM, the Certificate Authority VM, the Storage Network DNS and DHCP VM, and the vCenter VM. That represents many hours of work. Although I have redeployed and reconfigured some of these while getting them working, I really don’t want to have to do that again. So I decided to try my hand at installing and configuring an enterprise-grade backup solution.
I have limited experience with these enterprise backup solutions. Windows has a built-in transfer tool from the Windows 7 era called Easy Transfer [1], which I used when I was building a new computer to move files between my old one and the new install. I was also aware of the personal backup software sold along with those Western Digital backup drives [2], though I have never actually used it. I tended to do backups by just copying files around. There is still a large collection of files labeled “backup” that are really just file copies floating around my storage pool. This has proven useful many times in the past, especially when I was looking for old keys, game saves, and more.
When I started researching how these enterprise systems actually work, I began to define what I actually wanted in an enterprise backup software solution. First, I wanted something free. I found many backup solutions that were intriguing and well designed; however, they all cost something like $700 a year. I found a particularly good one in Acronis, which would sell me a perpetual license for around that [3]. This free requirement was complicated by the fact that I had a fair number of machines to back up: the eight VMs above, my workstation, my Surface laptop, my partner’s MacBooks, and I’d kinda like to back up my phones if possible, though that one is a lot more optional.
Things get weird here. I found quite a few lists of available backup software solutions [4] [5] [6]. The biggest thing I realized is that these solutions seem to limit their free versions based on the number of VMs they back up. The largest numbers I found were from Veeam and Nakivo, whose free versions both offer 10 VMs. I can’t confirm this for Nakivo; sources conflict, with some saying its free version handles only 2 VMs and others saying 10. I am not sure which is true. What I do know is that I wanted hypervisor-level backups. My thinking was that this should get around OS-specific issues (I was wrong, but not for the reasons I thought).
These limitations pushed me toward open source solutions [7] [8]. From my days at the web hosting company I knew of Bacula as a potential open source option. Checking Wikipedia [9], I could get an idea of which projects are still being worked on. Bacula and duplicity still seemed like good options, mostly because they have been around for a fair number of years (especially Bacula), which is a good sign for an open source project. These also had no limitations on the number of machines that could be backed up.
I also briefly looked at market share for backup. There are quite a few services here. I ignored the solutions that come bundled with a storage deployment (think an EMC system), as I’m not sure they work for my case. Even then, Veritas comes up quite a bit. I also read about Commvault. In fourth is Veeam [10]. I also found a blog post by Veeam bragging about its growth, now stating it is third [11]. There are undoubtedly many ways of measuring this. Veeam is a big player here, and since I can’t find a free version of Veritas, Dell EMC, or Commvault, it is a strong contender for my usage.
Learning Veeam seemed potentially helpful, and it is one of the few options at the top of the market. I couldn’t find any numbers for the open source solutions. Perhaps they don’t report because they’re not part of the commercial market? I’m not sure. I found many references to them on sysadmin websites, but I couldn’t get hard numbers.
The other large consideration I had is that I wanted centralized management. This is actually fairly big for me. With all of these VMs, workstations, and other hardware, I just wanted one central point to see and do everything. It was a bonus that many of the newer services also support direct hypervisor control, so I wouldn’t need to install an agent on every system.
I chatted with a few friends of mine about it. One recommended Acronis; I still want to try them, but I don’t want to spend that much. Maybe someone can convince them to offer a community or non-commercial licensed version? Until then, though, they were out (Acronis was also out for another reason: no FreeBSD agent, but I will get to that later). The other friend actually did some time at Veeam, and recommended them. I decided to go with Veeam.
I decided to start with Veeam Community Edition. It has strong market share, appears to be growing, and has a solid free offering. I also believed that it had Windows agents that work with the free version and may not count towards the 10 VM limit. Also, I knew it works with vCenter Server directly, and thus might be able to back up the FreeBSD-based TrueNAS system without needing a direct FreeBSD agent.
Veeam Community Edition
Let’s cover the installation of Veeam Community Edition. Veeam is a Windows-based utility; I don’t know why I didn’t quite notice this at first. It has Linux agents, but no Linux management software. Luckily I can still deploy Windows VMs. I deployed a Windows VM with 16 CPUs, 16 GB of memory, and 80 GB of disk space.

I decided to give this VM more resources because I was expecting it to move quite a bit of data, and there were also compression and database loads to consider. When installing the Veeam software, it installed SQL Server, which made me think this was a good decision. I also gave this machine two network ports: one for the regular network, and one for the storage network. The storage network was designed for this. Since that network is completely contained within the host, it should not be capped by the physical bandwidth of the port; or at least, not when backing up the VMs. That also turned out to be a good decision.
To start the installation I had to download the installation ISO [12]. From there I clicked on the setup executable on the ISO.

It needed to elevate privilege, I clicked Yes.

That brought up the installation screen. I clicked on the Veeam Backup & Replication install. I wasn’t sure what all was free here, and just wanted to focus on the basic install.

It took a minute to load.

Then I accepted the terms and conditions of the EULA.

It asked for a license, but said to just click Next for Community Edition. I did so.

I left it as the default install and clicked Next.
Next it needed to install a few SQL libraries and SQL Server instances. I clicked on Install to mark them for installation.

It then needed to enable a few features, but mostly I think it’s marking packages for install.

Next, it completed the install configuration checks. I clicked Next.
Followed up by the summary screen. I simply clicked Install.

It completed the install, but it did take about 10 minutes.

It ended with Installation succeeded.

Followed by taking me back to the original installation selection screen.

Alright, slight clarification: I stepped through the first part from a clean VM because I did not screenshot that the first time. I didn’t really select much and just let it do its thing, but I decided later I wanted to include it. I did screenshot the initial setup on my first VM, which I will switch to now.
To start I clicked on the Veeam Backup & Replication Console.

It needed to make changes, so it asked for permission.

Next it prompted for authentication. I entered the password and clicked Connect.
It started to load. One small note here: if this is after a restart, it takes about five minutes for all the background services to start. For the initial install, the last step of the installation is to start them.
This brought up the home screen. I clicked on Backup Infrastructure in the bottom left.

Then I clicked on Backup Repositories.

Next I clicked on Add Repository in the top left.

Next I clicked on Network Attached Storage.

Here I had a small decision. I could have gone and set up an NFS share, and that was somewhat recommended. However, I was not looking for the best performance; I just wanted to use the basic components I had already configured.

I named it Alexandria-Backup.
Next I filled out the share folder with \\alexandria.storage\Stacks\Backups. I made a mistake on NFS vs CIFS here; that is how I would specify an NFS mount, and I figured that out quickly. Then I clicked This share requires access credentials, then Add…
I had to add a Veeam account on TrueNAS in order to give it access to the Backups share. After completing that, I added the credentials here.

Next I fixed the CIFS path and clicked Next.
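For anyone else who mixes these up, the two notations differ like this (the NFS export path is illustrative; only the share name is certain here):
alexandria.storage:/Stacks/Backups (NFS-style export path)
\\alexandria.storage\Stacks\Backups (UNC path for a CIFS/SMB share, which is what this screen wants)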

It validated the credentials and then brought me to some more configuration. I clicked Next; four connections should be plenty.

Here I also just kept the defaults by clicking Next.

The next screen validates that the correct components are installed and potentially searches for existing backups. I clicked Apply.

Once completed, it displayed everything it installed. I clicked Next to get to the Summary.

Here I finished up by clicking Finish.

And the repository showed up as available.

This is actually a new machine I created for demonstration purposes; I already had a Veeam instance running elsewhere, which is why it has some usage already noted. I also looked into the advanced section during the repository designation once.

I didn’t select any of these, as they didn’t seem to apply to me.
The next thing I needed to configure was the backup of the ESXi box. I navigated to the Inventory screen to add the VMware hypervisor. I clicked on Add Server.

Then I clicked to add VMware vSphere.

From here I selected that I wanted to use vSphere. It does say it can use the hypervisor directly, but it recommended vCenter, which is why I went through the vCenter installation process in the last post.
This brought me to the server name screen. I inserted vsphere.internal, which is wrong. I forgot I had named it vcenter.internal.
Next it asked me to add credentials for the vCenter vSphere install.
At this point, I realized I needed to add an account for the Veeam system in vSphere. I jumped over to vSphere and navigated to the Administration -> Users and Groups section.

I clicked on Add to add a new user. Then I filled out everything for the Veeam account. This involved giving it the name veeam-adm, a password, fake first and last names, plus telling it to email me about it. I finished up by clicking Add.

I popped back over to Veeam and clicked Add.
Then I entered the username and password I had just created. I did make a small mistake here: I used veeam-adm as the username. That is not how vSphere usernames work; the correct form is veeam-adm@vsphere.internal.
Then I realized I had used the wrong server address, so I clicked Previous to fix it.
I fixed it to vcenter.internal, then clicked Next.
Back on this screen I clicked Apply.
It notified me that there was a certificate it could not verify there.
It was after this that I realized my earlier mistake about the username needing to be veeam-adm@vsphere.internal. I clicked OK.
Next, I clicked Add.
At this point, I got a little frustrated. I thought I had created the account correctly. So, I clicked OK.
I decided to try my administrator’s account. I clicked Add.
Then Apply, and Continue on the Certificate Alert. It worked.
I realized I must not have set the permissions correctly, so I went back to vCenter and looked into the permissions. Veeam states the account needs local administrator permissions [13]. I took a guess that the new veeam-adm account was not in the administrators group. I selected the Groups option near the top.

I saw the Administrators group near the top and selected it by clicking on the group name (not the option bubble).
This brought up the group members page. I clicked Add Members.

Which brought up an edit group page.

I searched for veeam-adm and clicked on the user.

It added the user to the group. I finalized the edit by clicking Save.

This solved the issue with the veeam-adm@vsphere.internal account.
The next thing I needed to configure was a backup job. I went back to Home and selected Backup Job -> Virtual Machine in the top left.
This opened the job definition screen. I named the new job Home DC Backup, because I called the datacenter Home during the vCenter setup. I am trying to keep these consistent.
Here I needed to add the virtual machines. I clicked Add, navigated to them, and selected them.

I proceeded by clicking Next.
Next, I needed to select the correct Backup repository. I later deleted the default one, so it wouldn’t get in the way.
Then, I wanted to set up archival backups. These VMs are more infrastructure than daily use, and I wanted to maintain a working version for a while, so I clicked the checkbox.
Then I clicked Configure.
I selected one full backup for each month of the last year, then yearly backups for the last five years, and finished by clicking OK.

I selected Next.
Alright, here I have a brief aside. I wanted to be able to browse and restore individual guest files. I tried many times to configure and set up the credentials to make this work.
Because of this, I thought I needed Enable guest file system indexing. Therefore, I tried to get Veeam Linux accounts working. The biggest issue I encountered here was that Veeam accounts actually have quite a few requirements [14]. I created a veeam-adm account on all the Linux VMs; however, Veeam reported that it couldn’t connect. This is because Veeam doesn’t support the current algorithms for SSH [15] [16]. I found an article about this in the Veeam knowledge base [17], and a general one about it for Ubuntu [18]. I also found references to these algorithms being outdated, termed legacy issues [19]. That is concerning. I did edit the /etc/ssh/ssh_config files to include the needed algorithms.
# This is the ssh client system-wide configuration file. See
# ssh_config(5) for more information. This file provides defaults for
# users, and the values can be changed in per-user configuration files
# or on the command line.
# Configuration data is parsed as follows:
# 1. command line options
# 2. user-specific file
# 3. system-wide file
# Any configuration value is only changed the first time it is set.
# Thus, host-specific definitions should be at the beginning of the
# configuration file, and defaults at the end.
# Site-wide defaults for some commonly used options. For a comprehensive
# list of available options, their meanings and defaults, please see the
# ssh_config(5) man page.
Include /etc/ssh/ssh_config.d/*.conf
Host *
# ForwardAgent no
# ForwardX11 no
# ForwardX11Trusted yes
# PasswordAuthentication yes
# HostbasedAuthentication no
# GSSAPIAuthentication no
# GSSAPIDelegateCredentials no
# GSSAPIKeyExchange no
# GSSAPITrustDNS no
# BatchMode no
# CheckHostIP yes
# AddressFamily any
# ConnectTimeout 0
# StrictHostKeyChecking ask
# IdentityFile ~/.ssh/id_rsa
# IdentityFile ~/.ssh/id_dsa
# IdentityFile ~/.ssh/id_ecdsa
# IdentityFile ~/.ssh/id_ed25519
# Port 22
Ciphers aes128-ctr,aes192-ctr,aes256-ctr,aes128-cbc,3des-cbc
MACs hmac-md5,hmac-sha1,umac-64@openssh.com
KexAlgorithms diffie-hellman-group1-sha1,curve25519-sha256@libssh.org,ecdh-sha2-nistp256,ecdh-sha2-nistp384,ecdh-sha2-nistp521,diffie-hellman-group-exchange-sha256,diffie-hellman-group14-sha1
# EscapeChar ~
# Tunnel no
# TunnelDevice any:any
# PermitLocalCommand no
# VisualHostKey no
# ProxyCommand ssh -q -W %h:%p gateway.example.com
# RekeyLimit 1G 1h
SendEnv LANG LC_*
HashKnownHosts yes
GSSAPIAuthentication yes
Followed by the keygen and restart.
ssh-keygen -A
service ssh restart
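A useful sanity check at this point (my addition, not a step from the Veeam KB) is to dump the effective sshd configuration and confirm the legacy algorithms are actually being offered:
sudo sshd -T | grep -i -E 'kexalgorithms|ciphers|macs'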
That did not work. It got past the first issue, but there is apparently still more configuration to be done, because these accounts need passwordless sudo as well as no-tty execution [20] [21]. There is a bare mention of the sudo config and disabling the requiretty option [22].
#
# This file MUST be edited with the 'visudo' command as root.
#
# Please consider adding local content in /etc/sudoers.d/ instead of
# directly modifying this file.
#
# See the man page for details on how to write a sudoers file.
#
Defaults env_reset
Defaults mail_badpass
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin"
Defaults:veeam !requiretty
# Host alias specification
# User alias specification
# Cmnd alias specification
# User privilege specification
root ALL=(ALL:ALL) ALL
veeam ALL=(ALL) NOPASSWD:ALL
# Members of the admin group may gain root privileges
%admin ALL=(ALL) ALL
# Allow members of group sudo to execute any command
%sudo ALL=(ALL:ALL) ALL
# See sudoers(5) for more information on "#include" directives:
#includedir /etc/sudoers.d
And it still didn’t work. At this point I read a good quote on this in the forums.
I called tech support today, and was told I’m wasting my time doing indexing anyway. He said indexing is only useful if you’re using Enterprise Manager. I don’t believe we have Enterprise Manager. We only have one backup server.
It would be nice to know why I couldn’t get it to work with the user account I created, but I’m going to abandon it and just untick the option.
The option says:
“Creates catalog of guest files to enable browsing, searching and 1-click restores of individual files.
Indexing is optional, and is not required to perform instant file level recoveries.”
Somewhere along the line I read that and made the assumption that I needed it. Probably because I was new to VBR, and didn’t know what an “instant file level recovery” or a “1-click restore” was. I assumed I would need to browse and click to restore individual files, so therefore I would need a catalog. I ticked it, set the credentials for a Windows machine, did a test backup, then successfully tested restoring one file, so assumed I’d done the right thing. Then later I added Linux machines to the backup, and started seeing errors about indexing. When I tried to test FLR on a Linux backup, it asked awkward questions about nodes, etc, so I assumed it was falling back to some other method of accessing the file system.
Can I suggest that this be reworded to make this all clearer? Perhaps mention Enterprise Manager in the option description.
[23]
That settles it. This isn’t necessary to get file-level restore like I had thought. I later tested this by trying one; it just spawns a helper VM for reading the files, and it still works. Additionally, I undid the ssh_config changes. I don’t want older algorithms running. I really think Veeam should update this; there isn’t a good excuse, especially when open source implementations of the newer algorithms exist.
After learning this, across several sessions of attempting to get it to work, I just chose to click Next and move on.
This brings up the run time of the job. I edited it to run at 3:00 AM and then selected On these days. That enables the Days… button, which I clicked.
Here I deselected days until only Monday, Wednesday, and Friday remained. I clicked OK and Apply.
This brings up the summary screen. I clicked Finish.
The job was showing up.
I ran the job for the first time, somewhat excited to see it work. It did not.
And after I scrolled down a bit.
I clicked to see the report here.
So I noticed a couple of things. First, I needed to deselect index guest file system; that was neither necessary nor useful, and it was creating warnings I wanted to clear. But the outright failures? Those needed to be fixed.
The outright failures actually have something in common: these are the VMs that have PCI passthrough devices. Veeam attempts to use ESXi snapshots during its backup, and that does not work when there are PCI passthrough devices. This is a minor point I had noticed earlier, but didn’t think worth mentioning. After all, I intended to use enterprise backup software; whether a hypervisor snapshot worked or not wasn’t really critical. Or so I had thought. Now it was.
Since I was unable to use Veeam for backup on these VMs, I had to figure something else out for them. There is a standalone Veeam agent for Windows [24], which does basic backups and should work with the Veeam Community Edition management console. However, there is no Veeam FreeBSD agent [25], and no plans to create one. That is a bigger issue. TrueNAS is FreeBSD-based, as covered before. I needed a solution for that.
A quick review of the backup options I had researched earlier revealed that there are actually limited options for FreeBSD support. I double-checked the Wikipedia page on backup comparisons [26]. Bacula does have a FreeBSD client, and appears to be among the most feature-rich solutions. I figured if I had Veeam backing up the core infrastructure, including the Bacula server, then having Bacula back up the FreeBSD-based TrueNAS would be an acceptable end-around. Though configuring Bacula was a beast I hadn’t really prepared for.
Bacula Configuration
Let me start by saying this: Bacula is not for the faint of heart. This service is difficult to get working. I almost decided not to write this part up at all. I am worried that this description may push some people towards it where it really isn’t necessary. There are better options most of the time. For me though? In my home lab? It is free and supports what I need.
Bacula was clearly designed for a different era in backups, when the primary backup medium was the tape drive. Most modern backups are simply hard-disk backed. Drives became cheap a decade ago, hence my decision to just buy more hard drive space when needed rather than tape drives. Tapes are more for entities and companies that cannot lose data and need to keep it forever. Think a government department.
I started by deploying another VM with 16 CPUs, 16 GB of RAM, and 30 GB of drive space. Then I installed the base Ubuntu like before. After that I installed the base Bacula package with apt.
apt install bacula
This somewhat follows the guide here [27], but it involved more than the guide let on. I continued.

It then asked for a system mail name. I left it at the default and hit Enter.

I selected the Internet Site mail configuration, even though I didn’t actually expect this to work because I hadn’t set up an SMTP server. Maybe I should do that at some point.

Next it asked for a SQL database setup. I prefer PostgreSQL.

Then it wanted me to configure the PostgreSQL hostname for the director. I left it as localhost.

Now it asked me to type in a password. I realized later that I actually needed to type something in and remember it. But I recovered from that.

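With the install finished, one thing worth knowing is that the single bacula metapackage pulls in the director, storage daemon, file daemon, and console as separate packages, and each of those gets its own config later. To see exactly what landed:
dpkg -l | grep bacula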
Here is where things got a bit hairy. Bacula has a couple of web GUIs. The first I found was Bacula-Web [28], but it appears to only monitor Bacula, not control it. The Wikipedia page says there is a full interface; I eventually discovered it is called Baculum [29]. That is an unfortunate name, because it is also the name of the penis bone in mammals [30]. I’m pretty sure that’s why it was hard to find in Google, for obvious reasons.
Anyways, once I found it, there is a guide for installing it [31]. I followed the Ubuntu instructions.
wget -qO - http://bacula.org/downloads/baculum/baculum.pub | apt-key add -
I then added the apt repository file to /etc/apt/sources.list.d/baculum.list
deb [ arch=amd64 ] http://bacula.org/downloads/baculum/stable/ubuntu focal main
deb-src http://bacula.org/downloads/baculum/stable/ubuntu focal main
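The guide glosses over it, but apt needs its package lists refreshed before it can see the new repository:
apt-get update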
Following through with the Apache install
To install Baculum API access via Apache Web server by using apt packages manager use the command:
apt-get install baculum-common baculum-api baculum-api-apache2
Next you must enable mod_rewrite module for Apache, with the following command:
a2enmod rewrite
and include Baculum VirtualHost definition in the Apache configuration with:
a2ensite baculum-api
Then restart your Apache server with:
service apache2 restart
https://www.bacula.org/9.4.x-manuals/en/console/Baculum_API_Web_GUI_Tools.html
Next I added sudoers rights for the www-data user. I added the basics from the guide:
Defaults:www-data !requiretty
www-data ALL=NOPASSWD: /usr/sbin/bconsole
www-data ALL=NOPASSWD: /usr/sbin/bdirjson
www-data ALL=NOPASSWD: /usr/sbin/bsdjson
www-data ALL=NOPASSWD: /usr/sbin/bfdjson
www-data ALL=NOPASSWD: /usr/sbin/bbconsjson
I also found some guides that suggest there is more to it [32] [33]. So I actually edited it to include a few extra rights.
Defaults:www-data !requiretty
www-data ALL=NOPASSWD: /usr/sbin/bconsole
www-data ALL=NOPASSWD: /usr/sbin/bdirjson
www-data ALL=NOPASSWD: /usr/sbin/bsdjson
www-data ALL=NOPASSWD: /usr/sbin/bfdjson
www-data ALL=NOPASSWD: /usr/sbin/bbconsjson
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl start bacula-dir
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl stop bacula-dir
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl restart bacula-dir
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl start bacula-sd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl stop bacula-sd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl restart bacula-sd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl start bacula-fd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl stop bacula-fd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl restart bacula-fd
Then I needed to go ahead with the Baculum web install.
To install Baculum Web access via Apache Web server by using apt packages manager use the command:
apt-get install baculum-common baculum-web baculum-web-apache2
Next you must enable mod_rewrite module for Apache, with the following command:
a2enmod rewrite
and include Baculum VirtualHost definition in the Apache configuration with:
a2ensite baculum-web
Then restart your Apache server with:
service apache2 restart
The first thing to note is that there are two configurations to step through here: the API configuration at http://localhost:9096, and the web configuration at http://localhost:9095. The guide includes a walkthrough, but it wasn’t clear to me, and it didn’t work as described. I’ll start with the API configuration.
Navigating to 192.168.2.230:9096 (the DHCP-assigned IP for the box), I typed in the default username of admin and password of admin.

Then I selected the English install.

Next I set up the Catalog database for this instance.

This next page is where I realized my mistake on the previous Postgres install. I also wanted to use my DNS name trick like before, so I added a bacula-postgres.internal record for it. However, since I did not actually remember my database password (having never really selected one), I had to go and change it to something I would remember.
Here is a nice guide for recovering from that [34].
- Find the file pg_hba.conf; it may be located, for example, in /etc/postgresql-9.1/pg_hba.conf.
cd /etc/postgresql-9.1/
- Back it up:
cp pg_hba.conf pg_hba.conf-backup
- Place the following line (as either the first uncommented line, or as the only one). For all occurrences below (local and host), except the replication section if you don’t have any, it has to be changed as follows; no MD5 or peer authentication should be present:
local all all trust
- Restart your PostgreSQL server (e.g., on Linux):
sudo /etc/init.d/postgresql restart
If the service (daemon) doesn’t start, reporting in the log file:
local connections are not supported by this build
you should change
local all all trust
to
host all all 127.0.0.1/32 trust
- You can now connect as any user. Connect as the superuser postgres (note, the superuser name may be different in your installation; in some systems it is called pgsql, for example):
psql -U postgres
or
psql -h 127.0.0.1 -U postgres
(note that with the first command you will not always be connected with local host)
- Reset the password (replace my_user_name with postgres since you are resetting the postgres user):
ALTER USER my_user_name WITH PASSWORD 'my_secure_password';
- Restore the old pg_hba.conf, as it is very dangerous to keep around:
cp pg_hba.conf-backup pg_hba.conf
- Restart the server, in order to run with the safe pg_hba.conf:
sudo /etc/init.d/postgresql restart
https://stackoverflow.com/questions/10845998/i-forgot-the-password-i-entered-during-postgres-installation
I was able to find the pg_hba.conf for PostgreSQL 12 in /etc/postgresql/12/main/, then followed the guide to get into the database. I executed the command
\du
to list all the users, followed by the commands
GRANT ALL PRIVILEGES ON DATABASE bacula TO bacula;
ALTER USER bacula WITH PASSWORD '<THIS IS A PASSWORD>';
This set up the bacula user correctly, with a password I actually knew this time. Back to the API install: I ran the test and it worked. I should also note that I altered the Postgres listen address to follow the new DNS name I gave it.
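For completeness, that change was a line in /etc/postgresql/12/main/postgresql.conf, roughly like this (the exact value is from memory, so treat it as approximate):
listen_addresses = 'localhost,bacula-postgres.internal'
followed by a restart with sudo systemctl restart postgresql.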
On the next page I needed to set up the bconsole application for use in the API. I tested it.

It worked without me needing to do anything.

The next things to configure are the JSON binaries. These are the apps that generate the config files the Bacula services use. I will get to these in more detail when I encounter the next major problem (yay!).

I changed the directories to point at the correct executables as installed. These tests all failed.
I then changed the /etc/sudoers.d/baculum file to reflect that www-data should have access to this location.
Defaults:www-data !requiretty
www-data ALL=NOPASSWD: /usr/lib/bacula/bconsole
www-data ALL=NOPASSWD: /usr/lib/bacula/bdirjson
www-data ALL=NOPASSWD: /usr/lib/bacula/bsdjson
www-data ALL=NOPASSWD: /usr/lib/bacula/bfdjson
www-data ALL=NOPASSWD: /usr/lib/bacula/bbconsjson
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl start bacula-dir
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl stop bacula-dir
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl restart bacula-dir
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl start bacula-sd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl stop bacula-sd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl restart bacula-sd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl start bacula-fd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl stop bacula-fd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl restart bacula-fd
And restarted apache.
systemctl restart apache2.service
I made a small mistake here: I changed the location of bconsole, which was working before. It took me a while to figure out this mistake.
And some of the tests passed.
Then I gave blanket access to the /opt/bacula/working directory:
Defaults:www-data !requiretty
www-data ALL=NOPASSWD: /usr/lib/bacula/bconsole
www-data ALL=NOPASSWD: /usr/lib/bacula/bdirjson
www-data ALL=NOPASSWD: /usr/lib/bacula/bsdjson
www-data ALL=NOPASSWD: /usr/lib/bacula/bfdjson
www-data ALL=NOPASSWD: /usr/lib/bacula/bbconsjson
www-data ALL=NOPASSWD: /opt/bacula/working/*
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl start bacula-dir
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl stop bacula-dir
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl restart bacula-dir
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl start bacula-sd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl stop bacula-sd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl restart bacula-sd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl start bacula-fd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl stop bacula-fd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl restart bacula-fd
With that, the final test passed. I clicked Next to proceed.
Then I reviewed the methods for restarting all the various parts of Bacula. There wasn’t a test there; I just clicked Next.
Next was the authentication configuration. I set up a bacula-adm account and password.

That wasn’t allowed, because it has a dash (-) in it. I changed it to baculaadm and clicked Next.

I reviewed everything and finished.
Now it was time to configure the web GUI. I went to bacula.internal:9095. I had configured that DNS name after fixing the Postgres setup; it is the same device that 192.168.2.230 was before.
Then I configured the API and tested it. It said Console wasn’t supported (because of the mistake previously outlined), but it didn’t give me any warnings that the app wouldn’t work, so I clicked Next.
Next I configured the credentials for the web GUI component.

I reviewed everything and clicked Save.

And the first thing I saw was the error about bconsole not working.
The API page let me step through the configuration again, so I fixed the /etc/sudoers.d/baculum file.
Defaults:www-data !requiretty
www-data ALL=NOPASSWD: /usr/sbin/bconsole
www-data ALL=NOPASSWD: /usr/lib/bacula/bdirjson
www-data ALL=NOPASSWD: /usr/lib/bacula/bsdjson
www-data ALL=NOPASSWD: /usr/lib/bacula/bfdjson
www-data ALL=NOPASSWD: /usr/lib/bacula/bbconsjson
www-data ALL=NOPASSWD: /opt/bacula/working/*
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl start bacula-dir
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl stop bacula-dir
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl restart bacula-dir
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl start bacula-sd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl stop bacula-sd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl restart bacula-sd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl start bacula-fd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl stop bacula-fd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl restart bacula-fd
And tested this again.

Then I went and reran the config in the web GUI. This time the console support worked.
And I got in just fine.
Alright. Here is where I ran into a new set of issues. Every time I tried to configure any Client, Job, Storage, Pool, Volume, or anything really I got an error:
baculum Error Error 94: Config validation error.Array
I did learn that I could configure everything manually and just run jobs through the web GUI. For a couple of days, I kinda fought myself on this. I really didn’t want to get that far into the weeds, but I also wanted a working solution for FreeBSD. So I set out to learn how all of this works at the text level.
Here are the basics of how Bacula works. Bacula is divided into three components (five really, but only three need to be configured).
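Roughly, the pieces relate like this (my own mental model, not an official diagram):

Console / Baculum GUI
        |
        v
Director (bacula-dir) ----> Catalog (PostgreSQL)
        |           \
        v            v
File Daemon        Storage Daemon (bacula-sd)
(bacula-fd, on     writes volumes to the
each client)       archive device (disk, tape, ...)

During a job, the Director tells the File Daemon where the Storage Daemon is, and the backup data then flows directly between those two rather than through the Director.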
The first component is the Bacula Director. It has a config file that defines everything that exists. It is located in /etc/bacula/bacula-dir.conf
# Default Bacula Director Configuration file
#
# The only thing that MUST be changed is to add one or more
# file or directory names in the Include directive of the
# FileSet resource.
#
# For Bacula release 9.4.2 (04 February 2019) -- ubuntu 20.04
#
# You might also want to change the default email address
# from root to your address. See the "mail" and "operator"
# directives in the Messages resource.
#
# Copyright (C) 2000-2017 Kern Sibbald
# License: BSD 2-Clause; see file LICENSE-FOSS
#
Director { # define myself
Name = bacula-dir
DIRport = 9101 # where we listen for UA connections
QueryFile = "/etc/bacula/scripts/query.sql"
WorkingDirectory = "/var/lib/bacula"
PidDirectory = "/run/bacula"
Maximum Concurrent Jobs = 20
Password = "<CONSOLEPASSWORD>" # Console password
Messages = Daemon
DirAddress = 127.0.0.1
}
JobDefs {
Name = "DefaultJob"
Type = Backup
Level = Incremental
Client = bacula-fd
FileSet = "Full Set"
Schedule = "WeeklyCycle"
Storage = TrueNAS
Messages = Standard
Pool = TrueNAS
SpoolAttributes = yes
Priority = 10
Write Bootstrap = "/var/lib/bacula/%c.bsr"
}
JobDefs {
Name = "MonthlyBackup"
Type = Backup
Level = Incremental
Client = Alexandria-TrueNAS-fd
FileSet = "Full Set"
Schedule = "MonthlyFirstDailyIncremental"
Storage = TrueNAS
Messages = Standard
Pool = TrueNAS
SpoolAttributes = yes
Priority = 10
Write Bootstrap = "/var/lib/bacula/%c.bsr"
}
#
# Define the main nightly save backup job
# By default, this job will back up to disk in /nonexistant/path/to/file/archive/dir
Job {
Name = "BaculaServerBackup"
JobDefs = "DefaultJob"
}
Job {
Name = "TrueNAS Job"
Client = Alexandria-TrueNAS-fd
JobDefs = "MonthlyBackup"
}
#Job {
# Name = "BackupClient2"
# Client = bacula2-fd
# JobDefs = "DefaultJob"
#}
#Job {
# Name = "BackupClient1-to-Tape"
# JobDefs = "DefaultJob"
# Storage = LTO-4
# Spool Data = yes # Avoid shoe-shine
# Pool = Default
#}
#}
# Backup the catalog database (after the nightly save)
Job {
Name = "BackupCatalog"
JobDefs = "DefaultJob"
Level = Full
FileSet="Catalog"
Schedule = "WeeklyCycleAfterBackup"
# This creates an ASCII copy of the catalog
# Arguments to make_catalog_backup.pl are:
# make_catalog_backup.pl <catalog-name>
RunBeforeJob = "/etc/bacula/scripts/make_catalog_backup.pl MyCatalog"
# This deletes the copy of the catalog
RunAfterJob = "/etc/bacula/scripts/delete_catalog_backup"
Write Bootstrap = "/var/lib/bacula/%n.bsr"
Priority = 11 # run after main backup
}
#
# Standard Restore template, to be changed by Console program
# Only one such job is needed for all Jobs/Clients/Storage ...
#
Job {
Name = "RestoreBaculaFiles"
Type = Restore
Client=bacula-fd
Storage = TrueNAS
# The FileSet and Pool directives are not used by Restore Jobs
# but must not be removed
FileSet="Full Set"
Pool = TrueNAS
Messages = Standard
Where = /nonexistant/path/to/file/archive/dir/bacula-restores
}
# List of files to be backed up
FileSet {
Name = "Full Set"
Include {
Options {
signature = MD5
}
#
# Put your list of files here, preceded by 'File =', one per line
# or include an external list with:
#
# File = <file-name
#
# Note: / backs up everything on the root partition.
# if you have other partitions such as /usr or /home
# you will probably want to add them too.
#
# By default this is defined to point to the Bacula binary
# directory to give a reasonable FileSet to backup to
# disk storage during initial testing.
#
File = /
}
#
# If you backup the root directory, the following two excluded
# files can be useful
#
Exclude {
File = /var/lib/bacula
File = /proc
File = /tmp
File = /sys
File = /.journal
File = /.fsck
File = /mnt
}
}
#
# When to do the backups, full backup on first sunday of the month,
# differential (i.e. incremental since full) every other sunday,
# and incremental backups other days
Schedule {
Name = "WeeklyCycle"
Run = Full 1st sun at 23:05
Run = Differential 2nd-5th sun at 23:05
Run = Incremental mon-sat at 23:05
}
# This schedule does the catalog. It starts after the WeeklyCycle
Schedule {
Name = "WeeklyCycleAfterBackup"
Run = Full sun-sat at 23:10
}
Schedule {
Name = "MonthlyFirstDailyIncremental"
Run = Level=Full on 1 at 2:05
Run = Level=Incremental on 2-31 at 2:05
}
Schedule {
Name = "MonthlyFirst"
Run = Level=Full on 1 at 2:05
}
# This is the backup of the catalog
FileSet {
Name = "Catalog"
Include {
Options {
signature = MD5
}
File = "/var/lib/bacula/bacula.sql"
}
}
# Client (File Services) to backup
Client {
Name = bacula-fd
Address = "bacula.internal"
FDPort = 9102
Catalog = MyCatalog
Password = "<FDPASS>" # password for FileDaemon
File Retention = 60 days # 60 days
Job Retention = 6 months # six months
AutoPrune = yes # Prune expired Jobs/Files
}
#
# Second Client (File Services) to backup
# You should change Name, Address, and Password before using
#
Client {
Name = Alexandria-TrueNAS-fd
Address = "alexandria.internal"
FDPort = 9102
Catalog = MyCatalog
Password = "<PASSWORD>" # password for FileDaemon 2
File Retention = 60 days # 60 days
Job Retention = 6 months # six months
AutoPrune = yes # Prune expired Jobs/Files
}
Storage {
Name = TrueNAS
Device = Alexandria-Backup
Media Type = TrueNAS
Address = "bacula.internal"
Password = "<SDPASS>"
}
# Definition of file Virtual Autochanger device
#Autochanger {
# Name = File1
## Do not use "localhost" here
# Address = localhost # N.B. Use a fully qualified name here
# SDPort = 9103
# Password = "<SDPASS>"
# Device = FileChgr1
# Media Type = File1
# Maximum Concurrent Jobs = 10 # run up to 10 jobs a the same time
# Autochanger = File1 # point to ourself
#}
#
## Definition of a second file Virtual Autochanger device
## Possibly pointing to a different disk drive
#Autochanger {
# Name = File2
## Do not use "localhost" here
# Address = localhost # N.B. Use a fully qualified name here
# SDPort = 9103
# Password = "<SDPASS>"
# Device = FileChgr2
# Media Type = File2
# Autochanger = File2 # point to ourself
# Maximum Concurrent Jobs = 10 # run up to 10 jobs a the same time
#}
# Definition of LTO-4 tape Autochanger device
#Autochanger {
# Name = LTO-4
# Do not use "localhost" here
# Address = localhost # N.B. Use a fully qualified name here
# SDPort = 9103
# Password = "<SDPASS>" # password for Storage daemon
# Device = LTO-4 # must be same as Device in Storage daemon
# Media Type = LTO-4 # must be same as MediaType in Storage daemon
# Autochanger = LTO-4 # enable for autochanger device
# Maximum Concurrent Jobs = 10
#}
# Generic catalog service
Catalog {
Name = MyCatalog
dbname = "bacula"; DB Address = "bacula-postgres.internal"; dbuser = "bacula"; dbpassword = "<DBPASS>"
}
# Reasonable message delivery -- send most everything to email address
# and to the console
Messages {
Name = Standard
#
# NOTE! If you send to two email or more email addresses, you will need
# to replace the %r in the from field (-f part) with a single valid
# email address in both the mailcommand and the operatorcommand.
# What this does is, it sets the email address that emails would display
# in the FROM field, which is by default the same email as they're being
# sent to. However, if you send email to more than one address, then
# you'll have to set the FROM address manually, to a single address.
# for example, a 'no-reply@mydomain.com', is better since that tends to
# tell (most) people that its coming from an automated source.
#
mailcommand = "/usr/sbin/bsmtp -h \"bacula.internal\" -f \"\(Bacula\) \<%r\>\" -s \"Bacula: %t %e of %c %l\" %r"
operatorcommand = "/usr/sbin/bsmtp -h \"bacula.internal\" -f \"\(Bacula\) \<%r\>\" -s \"Bacula: Intervention needed for %j\" %r"
mail = root = all, !skipped
operator = root = mount
console = all, !skipped, !saved
#
# WARNING! the following will create a file that you must cycle from
# time to time as it will grow indefinitely. However, it will
# also keep all your messages if they scroll off the console.
#
append = "/var/log/bacula/bacula.log" = all, !skipped
catalog = all
}
#
# Message delivery for daemon messages (no job).
Messages {
Name = Daemon
mailcommand = "/usr/sbin/bsmtp -h \"bacula.internal\" -f \"\(Bacula\) \<%r\>\" -s \"Bacula daemon message\" %r"
mail = root = all, !skipped
console = all, !skipped, !saved
append = "/var/log/bacula/bacula.log" = all, !skipped
}
# Default pool definition
#Pool {
# Name = Default
# Pool Type = Backup
# Recycle = yes # Bacula can automatically recycle Volumes
# AutoPrune = yes # Prune expired volumes
# Volume Retention = 365 days # one year
# Maximum Volume Bytes = 50G # Limit Volume size to something reasonable
# Maximum Volumes = 100 # Limit number of Volumes in Pool
#}
# File Pool definition
Pool {
Name = TrueNAS
Pool Type = Backup
Recycle = yes # Bacula can automatically recycle Volumes
AutoPrune = yes # Prune expired volumes
Volume Retention = 365 days # one year
Maximum Volume Bytes = 5T # Limit Volume size to something reasonable
Maximum Volumes = 100 # Limit number of Volumes in Pool
Label Format = "Vol-" # Auto label
}
# Scratch pool definition
#Pool {
# Name = Scratch
# Pool Type = Backup
#}
#
# Restricted console used by tray-monitor to get the status of the director
#
Console {
Name = bacula-mon
Password = "<CONSOLEPASSWORD>"
CommandACL = status, .status
}
The daemon runs like this
/opt/bacula/bin/bacula-dir -c /opt/bacula/etc/bacula-dir.conf
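In practice, though, it is managed through systemd, using the same unit names the sudoers entries from earlier reference:
systemctl status bacula-dir
systemctl status bacula-sd
systemctl status bacula-fd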
Poking at those units is how I first found the configuration files. Let’s cover what the major components are.
First is the Director section, defining where the director listens for connections, plus basic information. The most important thing here is the Name. This needs to be propagated to all the clients and the storage daemon, as it is checked as part of authentication, like a login.
Next it lists all the JobDefs, which are just templates for jobs. Individual Job instances can override the definitions in the JobDefs. Jobs are individual backup run definitions. They can reference JobDefs for a baseline, which is why they tend to be small. Schedules are basically a different way of writing out a cron job; think in those terms. Jobs run on Schedules, and JobDefs can define a default Schedule.
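For instance, the MonthlyFirstDailyIncremental schedule defined above does roughly what this crontab sketch would, if cron could express backup levels (the commands here are hypothetical, just to show the shape):
5 2 1 * *    run-full-backup           # full on the 1st at 2:05
5 2 2-31 * * run-incremental-backup    # incremental every other day of the month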
Storage is the definition that connects the Director to the Storage Daemon. In fact, this will be communicated to the clients for direct communication with the Storage Daemon. The Device named here must match a Device in the Storage Daemon config, and the Media Type must match as well. Pool defines a storage pool that backups are stored to, including the retention period. I am not really sure why this is important; I think it is more for larger deployments.
Catalog is the method for defining database access; make sure this is defined with the correct address and credentials. FileSet defines which directories should be included in or excluded from a backup. Messages covers communication within the whole system for logging. Console defines how to monitor this daemon (I think for the case of multiple directors) [35].
The most important thing to note about the Director is that there are a lot of passwords that need to match. The Director connects to the Clients and the Storage Daemon, and it must know the passwords from their individual configs. I also configured it to know that bacula.internal is where all of the local services should be looked for. This gets around the potential confusion of having multiple ethernet NICs.
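As far as I can piece together, the password pairings work like this. Note that the default files use different placeholder names on each side, but the actual values must be identical:

bacula-dir.conf: Client { Name = bacula-fd ... Password = X }
  must match  bacula-fd.conf: Director { Name = bacula-dir; Password = X }
bacula-dir.conf: Storage { Name = TrueNAS ... Password = Y }
  must match  bacula-sd.conf: Director { Name = bacula-dir; Password = Y }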
Another part of this whole thing that is quite annoying is that the online documentation is completely disjoint. For instance, the 9.6 manual [36] references a configuration but doesn’t actually tell me anything. It says to look for the “Director configurationDirectorChapter chapter,” which, as far as I can tell, doesn’t exist. I did find older 5.x versions which still contain useful information [35], which is what I used to piece this all together.
I have no idea if this is right, only that it does appear to work in the end. I had to constantly shift between the different manuals, and did not know which ones were current or which ones should be used. Google also constantly surfaces old manuals. I started just replacing the URL with “9.6.x” whenever I found something, to see if there was a more current version. By doing that I found the 9.0.x manual for the same configuration [37]. Does this work with 9.4.2? I don’t really know. All I know is: I think so?
The Storage Daemon manages access to all the storage systems. It was clearly designed for physical media, like tapes or DVDs, and it is quite fully featured. I only wanted a simple hard drive for backup: no long-term media handling, no pauses for loading the next writable tape, nothing else. Just a hard disk. That made the default-configured Autochanger sections difficult to work through, because none of it was necessary. Here is my /etc/bacula/bacula-sd.conf:
#
# Default Bacula Storage Daemon Configuration file
#
# For Bacula release 9.4.2 (04 February 2019) -- ubuntu 20.04
#
# You may need to change the name of your tape drive
# on the "Archive Device" directive in the Device
# resource. If you change the Name and/or the
# "Media Type" in the Device resource, please ensure
# that dird.conf has corresponding changes.
#
#
# Copyright (C) 2000-2017 Kern Sibbald
# License: BSD 2-Clause; see file LICENSE-FOSS
#
Storage { # definition of myself
Name = bacula-sd
SDPort = 9103 # Director's port
WorkingDirectory = "/var/lib/bacula"
Pid Directory = "/run/bacula"
Plugin Directory = "/usr/lib/bacula"
Maximum Concurrent Jobs = 20
SDAddress = bacula.internal
}
#
# List Directors who are permitted to contact Storage daemon
#
Director {
Name = bacula-dir
Password = "<DIRPASS>"
}
#
# Restricted Director, used by tray-monitor to get the
# status of the storage daemon
#
Director {
Name = bacula-mon
Password = "<DIR2PASS>"
Monitor = yes
}
#
# Note, for a list of additional Device templates please
# see the directory <bacula-source>/examples/devices
# Or follow the following link:
# http://www.bacula.org/git/cgit.cgi/bacula/tree/bacula/examples/devices?h=Branch-7.4
#
#
# Devices supported by this Storage daemon
# To connect, the Director's bacula-dir.conf must have the
# same Name and MediaType.
#
Device {
Name = Alexandria-Backup
Media Type = TrueNAS
Archive Device = /mnt/backup/bacula
LabelMedia = yes; # lets Bacula label unlabeled media
Random Access = Yes;
AutomaticMount = yes; # when device opened, read it
RemovableMedia = no;
AlwaysOpen = no;
Maximum Concurrent Jobs = 5
}
#
# Define a Virtual autochanger
#
#Autochanger {
# Name = FileChgr1
# Device = FileChgr1-Dev1, FileChgr1-Dev2
# Changer Command = ""
# Changer Device = /dev/null
#}
#
#Device {
# Name = FileChgr1-Dev1
# Media Type = File1
# Archive Device = /nonexistant/path/to/file/archive/dir
# LabelMedia = yes; # lets Bacula label unlabeled media
# Random Access = Yes;
# AutomaticMount = yes; # when device opened, read it
# RemovableMedia = no;
# AlwaysOpen = no;
# Maximum Concurrent Jobs = 5
#}
#
#Device {
# Name = FileChgr1-Dev2
# Media Type = File1
# Archive Device = /nonexistant/path/to/file/archive/dir
# LabelMedia = yes; # lets Bacula label unlabeled media
# Random Access = Yes;
# AutomaticMount = yes; # when device opened, read it
# RemovableMedia = no;
# AlwaysOpen = no;
# Maximum Concurrent Jobs = 5
#}
#
##
## Define a second Virtual autochanger
##
#Autochanger {
# Name = FileChgr2
# Device = FileChgr2-Dev1, FileChgr2-Dev2
# Changer Command = ""
# Changer Device = /dev/null
#}
#
#Device {
# Name = FileChgr2-Dev1
# Media Type = File2
# Archive Device = /nonexistant/path/to/file/archive/dir
# LabelMedia = yes; # lets Bacula label unlabeled media
# Random Access = Yes;
# AutomaticMount = yes; # when device opened, read it
# RemovableMedia = no;
# AlwaysOpen = no;
# Maximum Concurrent Jobs = 5
#}
#
#Device {
# Name = FileChgr2-Dev2
# Media Type = File2
# Archive Device = /nonexistant/path/to/file/archive/dir
# LabelMedia = yes; # lets Bacula label unlabeled media
# Random Access = Yes;
# AutomaticMount = yes; # when device opened, read it
# RemovableMedia = no;
# AlwaysOpen = no;
# Maximum Concurrent Jobs = 5
#}
#
#
# An autochanger device with two drives
#
#Autochanger {
# Name = Autochanger
# Device = Drive-1
# Device = Drive-2
# Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
# Changer Device = /dev/sg0
#}
#Device {
# Name = Drive-1 #
# Drive Index = 0
# Media Type = DLT-8000
# Archive Device = /dev/nst0
# AutomaticMount = yes; # when device opened, read it
# AlwaysOpen = yes;
# RemovableMedia = yes;
# RandomAccess = no;
# AutoChanger = yes
# #
# # New alert command in Bacula 9.0.0
# # Note: you must have the sg3_utils (rpms) or the
# # sg3-utils (deb) installed on your system.
# # and you must set the correct control device that
# # corresponds to the Archive Device
# Control Device = /dev/sg?? # must be SCSI ctl for /dev/nst0
# Alert Command = "/etc/bacula/scripts/tapealert %l"
#
# #
# # Enable the Alert command only if you have the mtx package loaded
# # Note, apparently on some systems, tapeinfo resets the SCSI controller
# # thus if you turn this on, make sure it does not reset your SCSI
# # controller. I have never had any problems, and smartctl does
# # not seem to cause such problems.
# #
# Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
# If you have smartctl, enable this, it has more info than tapeinfo
# Alert Command = "sh -c 'smartctl -H -l error %c'"
#}
#Device {
# Name = Drive-2 #
# Drive Index = 1
# Media Type = DLT-8000
# Archive Device = /dev/nst1
# AutomaticMount = yes; # when device opened, read it
# AlwaysOpen = yes;
# RemovableMedia = yes;
# RandomAccess = no;
# AutoChanger = yes
# # Enable the Alert command only if you have the mtx package loaded
# Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
# If you have smartctl, enable this, it has more info than tapeinfo
# Alert Command = "sh -c 'smartctl -H -l error %c'"
#}
#
# A Linux or Solaris LTO-2 tape drive
#
#Device {
# Name = LTO-2
# Media Type = LTO-2
# Archive Device = /dev/nst0
# AutomaticMount = yes; # when device opened, read it
# AlwaysOpen = yes;
# RemovableMedia = yes;
# RandomAccess = no;
# Maximum File Size = 3GB
## Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
## Changer Device = /dev/sg0
## AutoChanger = yes
# # Enable the Alert command only if you have the mtx package loaded
## Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
## If you have smartctl, enable this, it has more info than tapeinfo
## Alert Command = "sh -c 'smartctl -H -l error %c'"
#}
#
# A Linux or Solaris LTO-3 tape drive
#
#Device {
# Name = LTO-3
# Media Type = LTO-3
# Archive Device = /dev/nst0
# AutomaticMount = yes; # when device opened, read it
# AlwaysOpen = yes;
# RemovableMedia = yes;
# RandomAccess = no;
# Maximum File Size = 4GB
# Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
# Changer Device = /dev/sg0
# AutoChanger = yes
# #
# # New alert command in Bacula 9.0.0
# # Note: you must have the sg3_utils (rpms) or the
# # sg3-utils (deb) installed on your system.
# # and you must set the correct control device that
# # corresponds to the Archive Device
# Control Device = /dev/sg?? # must be SCSI ctl for /dev/nst0
# Alert Command = "/etc/bacula/scripts/tapealert %l"
#
# # Enable the Alert command only if you have the mtx package loaded
## Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
## If you have smartctl, enable this, it has more info than tapeinfo
## Alert Command = "sh -c 'smartctl -H -l error %c'"
#}
#
# A Linux or Solaris LTO-4 tape drive
#
#Device {
# Name = LTO-4
# Media Type = LTO-4
# Archive Device = /dev/nst0
# AutomaticMount = yes; # when device opened, read it
# AlwaysOpen = yes;
# RemovableMedia = yes;
# RandomAccess = no;
# Maximum File Size = 5GB
# Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
# Changer Device = /dev/sg0
# AutoChanger = yes
# #
# # New alert command in Bacula 9.0.0
# # Note: you must have the sg3_utils (rpms) or the
# # sg3-utils (deb) installed on your system.
# # and you must set the correct control device that
# # corresponds to the Archive Device
# Control Device = /dev/sg?? # must be SCSI ctl for /dev/nst0
# Alert Command = "/etc/bacula/scripts/tapealert %l"
#
# # Enable the Alert command only if you have the mtx package loaded
## Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
## If you have smartctl, enable this, it has more info than tapeinfo
## Alert Command = "sh -c 'smartctl -H -l error %c'"
#}
#
# An HP-UX tape drive
#
#Device {
# Name = Drive-1 #
# Drive Index = 0
# Media Type = DLT-8000
# Archive Device = /dev/rmt/1mnb
# AutomaticMount = yes; # when device opened, read it
# AlwaysOpen = yes;
# RemovableMedia = yes;
# RandomAccess = no;
# AutoChanger = no
# Two EOF = yes
# Hardware End of Medium = no
# Fast Forward Space File = no
# #
# # New alert command in Bacula 9.0.0
# # Note: you must have the sg3_utils (rpms) or the
# # sg3-utils (deb) installed on your system.
# # and you must set the correct control device that
# # corresponds to the Archive Device
# Control Device = /dev/sg?? # must be SCSI ctl for /dev/rmt/1mnb
# Alert Command = "/etc/bacula/scripts/tapealert %l"
#
# #
# # Enable the Alert command only if you have the mtx package loaded
# Alert Command = "sh -c 'tapeinfo -f %c |grep TapeAlert|cat'"
# If you have smartctl, enable this, it has more info than tapeinfo
# Alert Command = "sh -c 'smartctl -H -l error %c'"
#}
#
# A FreeBSD tape drive
#
#Device {
# Name = DDS-4
# Description = "DDS-4 for FreeBSD"
# Media Type = DDS-4
# Archive Device = /dev/nsa1
# AutomaticMount = yes; # when device opened, read it
# AlwaysOpen = yes
# Offline On Unmount = no
# Hardware End of Medium = no
# BSF at EOM = yes
# Backward Space Record = no
# Fast Forward Space File = no
# TWO EOF = yes
# #
# # New alert command in Bacula 9.0.0
# # Note: you must have the sg3_utils (rpms) or the
# # sg3-utils (deb) installed on your system.
# # and you must set the correct control device that
# # corresponds to the Archive Device
# Control Device = /dev/sg?? # must be SCSI ctl for /dev/nsa1
# Alert Command = "/etc/bacula/scripts/tapealert %l"
#
# If you have smartctl, enable this, it has more info than tapeinfo
# Alert Command = "sh -c 'smartctl -H -l error %c'"
#}
#
# Send all messages to the Director,
# mount messages also are sent to the email address
#
Messages {
  Name = Standard
  director = bacula-dir = all
}
All of these default configuration options did not actually help me. They listed a huge number of tape drives, DVDs, and other older long-term storage media, but none of them covered a simple hard drive. I had to look it up in the manuals and riddle out how to do it.
For my case, the storage system is simply a file on the TrueNAS Backups CIFS store (done through a mount). The Storage section defines how to access this Storage Daemon. It also includes directories for working information. The Director section defines which director Name is allowed access, and Password is the password that director must provide to gain access. The optional parameter Monitor restricts the director to monitoring only; it cannot send commands. The Device section defines the actual backup location, which for me is /mnt/backup/bacula. Media Type and Name must match the Storage section in bacula-dir.conf. Most of the other parameters aren't really important; they were designed for physical tape or DVD backup, not a plain backup hard drive. Messages just defines where to send messages for logging, and which to include. All of this I deduced from the manual [38].
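Since the manual's examples never quite cover the plain-disk case, here is roughly what a file-backed Device resource boils down to. This is a minimal sketch: the Media Type label and the ancillary flags are my assumptions, and only the device Name (Alexandria-Backup) appears verbatim in my later job logs.
Device {
  Name = Alexandria-Backup
  Media Type = File                  # arbitrary label; must match the director's Storage resource
  Archive Device = /mnt/backup/bacula
  LabelMedia = yes                   # let Bacula label new volume files itself
  Random Access = yes                # a disk file, not a tape
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}
The volumes (like Vol-0002 later) then just show up as ordinary files inside the Archive Device directory.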
Quick aside: I set up a mount for the Storage Daemon in /etc/fstab:
//alexandria.internal/Backups /mnt/backup cifs credentials=/etc/samba/credentials,uid=nobody,iocharset=utf8,noperm,file_mode=0777,dir_mode=0777 0 0
Followed by creating the mount point.
mkdir /mnt/backup
This made it so the Storage Daemon can access TrueNAS on boot.
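A quick sanity check that the entry works, without waiting for a reboot (the obvious test, not something from my original notes):
mount -a
df -h /mnt/backup
If df reports the TrueNAS share rather than the local disk, the Storage Daemon will see it on boot as well.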
The last Bacula service to configure is the File Daemon client. This is the local backup client. I dislike that everyone uses a different name for it: sometimes it's Client, sometimes Agent, sometimes File Daemon. There is a local client installed to back up the Bacula instance itself. Here is my /etc/bacula/bacula-fd.conf file.
# Default Bacula File Daemon Configuration file
#
# For Bacula release 9.4.2 (04 February 2019) -- ubuntu 20.04
#
# There is not much to change here except perhaps the
# File daemon Name to
#
#
# Copyright (C) 2000-2015 Kern Sibbald
# License: BSD 2-Clause; see file LICENSE-FOSS
#
#
# List Directors who are permitted to contact this File daemon
#
Director {
  Name = bacula-dir
  Password = "<DIRPASS>"
}
#
# Restricted Director, used by tray-monitor to get the
# status of the file daemon
#
Director {
  Name = bacula-mon
  Password = "<DIR2PASS>"
  Monitor = yes
}
#
# "Global" File daemon configuration specifications
#
FileDaemon {                          # this is me
  Name = bacula-fd
  FDport = 9102                       # where we listen for the director
  WorkingDirectory = /var/lib/bacula
  Pid Directory = /run/bacula
  Maximum Concurrent Jobs = 20
  Plugin Directory = /usr/lib/bacula
  FDAddress = bacula.internal
}
# Send all messages except skipped files back to Director
Messages {
  Name = Standard
  director = bacula-dir = all, !skipped, !restored
}
The Director section defines which director Name is allowed access, and Password is the password that director must provide to gain access. The optional Monitor parameter restricts the director to monitoring only. FileDaemon defines information about this backup agent: the port and address it listens on for the director, and where its working and pid files go. The Name must match the Client->Name in bacula-dir.conf. Messages again defines where to send messages for logging, and which to include.
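To make that matching concrete, the corresponding Client resource on the director side looks roughly like the following. A sketch: I am assuming the Address is the same bacula.internal name the FileDaemon binds to above, and that the catalog is the default MyCatalog seen later in the restore transcript.
Client {
  Name = bacula-fd                 # must match the FileDaemon Name in bacula-fd.conf
  Address = bacula.internal
  FDPort = 9102
  Catalog = MyCatalog
  Password = "<DIRPASS>"           # same string as the Director Password in bacula-fd.conf
}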
With these I was able to actually create a backup! Victory! Sort of.
I now had to embark on configuring the FreeBSD client.
I am not as familiar with package management on FreeBSD. It uses the pkg application, which I think is an offshoot of dpkg on Ubuntu and Debian (or the other way around). A simple pkg search showed me that bacula already exists for the FreeBSD base operating system TrueNAS runs on. However, it doesn't actually work at first on a TrueNAS install. I had to fix a couple of files to make it work: specifically, I needed to switch the enabled status in /usr/local/etc/pkg/repos/local.conf from yes to no, and in /usr/local/etc/pkg/repos/FreeBSD.conf from no to yes [39].
Then I could use pkg correctly.
pkg search bacula
This is the already installed version, as I am not going to deploy a blank TrueNAS just to check this. One important thing to know is where exactly the config files for the bacula client live on FreeBSD: /usr/local/etc/bacula/bacula-fd.conf.
#
# Default Bacula File Daemon Configuration file
#
# For Bacula release 9.6.6 (20 September 2020) -- freebsd 12.1-RELEASE-p12
#
# There is not much to change here except perhaps the
# File daemon Name to
#
#
# Copyright (C) 2000-2020 Kern Sibbald
# License: BSD 2-Clause; see file LICENSE-FOSS
#
#
# List Directors who are permitted to contact this File daemon
#
Director {
  Name = bacula-dir
  Password = "<PASSWORD>"
}
#
# Restricted Director, used by tray-monitor to get the
# status of the file daemon
#
Director {
  Name = bacula-mon
  Password = "<PASSWORD>"
  Monitor = yes
}
#
# "Global" File daemon configuration specifications
#
FileDaemon {                          # this is me
  Name = Alexandria-TrueNAS-fd
  FDport = 9102                       # where we listen for the director
  WorkingDirectory = /var/db/bacula
  Pid Directory = /var/run
  Maximum Concurrent Jobs = 20
  Plugin Directory = /usr/local/lib
}
# Send all messages except skipped files back to Director
Messages {
  Name = Standard
  director = bacula-dir = all, !skipped, !restored
}
Alright, with that located, I made a minor mistake at first by not copying the correct Name from the FileDaemon definition into the bacula-dir.conf file, but that was quickly realized and fixed. I also mixed up which direction the passwords needed to be copied: the bacula-dir.conf's client definition needs the password from the Director definition in bacula-fd.conf.
It also didn't help that the guides out there overwhelmingly cover installing the full server version of Bacula (the director and storage daemon) [40]. Even when they did include the client, they wouldn't list where the configuration files were. I swear, the best thing these guides could do is state exactly where the installed files end up. I spent a lot of time tracking them down.
Also, I had to figure out how to restart a service in FreeBSD. It looks like the old rc.d style of Linux, from the pre-systemctl days.
service bacula-fd restart
With that figured out, I copied a new job for its backup and ran it. The last item that was difficult to figure out was that the File Daemons communicate directly with the Storage Daemon, which isn't that surprising. But when they fail because of authentication or misconfiguration, the job just stalls for about 5 minutes. I basically figured that out by accident one time when I was frustrated that the jobs weren't doing anything and took a break; when I came back, there were actual error messages in the logs. I fixed the Media Type mismatch I had between bacula-dir.conf and bacula-sd.conf.
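For reference, the director-side Storage resource is where all of these names have to line up. A rough sketch, with the password as a placeholder; the Name, Device, and Media Type values are assumptions drawn from my later job logs:
Storage {
  Name = TrueNAS                 # referenced by the Job and Pool definitions
  Address = bacula.internal      # where the Storage Daemon listens
  SDPort = 9103
  Password = "<SDPASS>"          # must match the Director resource in bacula-sd.conf
  Device = Alexandria-Backup     # must match the Device Name in bacula-sd.conf
  Media Type = File              # must match the Device Media Type in bacula-sd.conf
}
A mismatch on either of the last two lines is exactly the kind of thing that produced my silent five minute stalls.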
Now, all of these backups appeared to be working. However, I prefer to at least practice a restore to make sure I have a general idea of how this works. For the local bacula server restore I saw all the files I expected to:
But when I loaded the FreeBSD backup for a restore test, it did not display anything:
I was quite perplexed. The jobs stated that they succeeded. And not a single file? The logs clearly state it. The data is clearly in the file: all of these backups land in one file at Alexandria.internal/Stacks/Backups/bacula/vol-0014, and I even saw it increase in size. It had to be there. All of the internal logs and statuses agree it was written, but the restore utility does not see anything.
I kinda took a break at this point. It was discouraging to think I had it all configured correctly, and somehow the backup restore just didn’t work.
Ultimately I decided to try the one thing I had noticed was a bit odd: Ubuntu's packages installed Bacula 9.4.2, while the most recent version on FreeBSD was 9.6.6. I figured maybe it was a backwards incompatibility. My first thought was to just install the old client on TrueNAS, and I found a little information on how to do that [41]. However, I couldn't easily track down the pkg version of this client; those guides mostly cover reverting after a new release. There is apparently a FreeBSD package cache that can be fallen back on when an update breaks something, but since I had never downloaded the 9.4.2 version, it wasn't there.
I gave up on that and decided to try installing the 9.6.6 version of bacula-dir and bacula-sd. That isn't as simple as it sounds, because the most recent versions of Bacula haven't reached the standard Ubuntu repositories yet. I did find an installation guide for it [42]. I needed to officially sign up for the community edition [43], which sent me a key I could use. I then followed the manual instructions, since I was not doing a clean install; I was hoping to upgrade my already configured version.
#!/bin/bash
bacula_version="9.2.1"
bacula_key="XXXXXXXXXXXXX"
export DEBIAN_FRONTEND=noninteractive
# Requirements to install Bacula by packages
apt-get install -y zip wget bzip2
# Download the repository keys
wget -c https://www.bacula.org/downloads/Bacula-4096-Distribution-Verification-key.asc -O /tmp/Bacula-4096-Distribution-Verification-key.asc
# Add key in the local repository
apt-key add /tmp/Bacula-4096-Distribution-Verification-key.asc
# Create the Bacula Community repository
echo "# Bacula Community
deb http://www.bacula.org/packages/${bacula_key}/debs/${bacula_version}/stretch/amd64" > /etc/apt/sources.list.d/bacula-community.list
It took me a minute to correctly configure that last option, since I didn't know which Ubuntu release was available in the repo. I browsed the repo files [44] and eventually defined /etc/apt/sources.list.d/bacula-community.list correctly as:
# Bacula Community
deb https://bacula.org/packages/5fdba49f8988f/debs/9.6.6/bionic/amd64 bionic main
After an apt update I saw the new packages.
bacula-client/stable,now 9.6.6-1 amd64
bacula-common/stable,now 9.6.6-1 amd64
bacula-console/stable,now 9.6.6-1 amd64
bacula-sqlite3/stable,now 9.6.6-1 amd64
bacula-postgresql/stable,now 9.6.6-1 amd64
bacula-mysql/stable,now 9.6.6-1 amd64
However, apt intermixed them with all of the original Ubuntu packages, which act as a sort of super-package including the lower-level ones. I had to uninstall the old versions first, since apt didn't detect how to upgrade them.
While trying to configure the new bacula-postgresql I got caught in an install loop. Unfortunately I don't have any screencaps of the issue, but it was a permissions problem on the postgresql database when removing the bacula database. I tried to do it manually, but failed. Eventually I just manually uninstalled postgresql and let the installer install it again. I received the following error over and over again:
No bacula-director SQL package installed
After searching and reinstalling again, I couldn't get it to work. I decided to abandon the old install, on the theory that residual configuration of either postgres or Bacula 9.4.2 was interfering and a fresh install would be best.
So I started over on a fresh VM and went through the same steps to acquire the 9.6.6 packages. I attempted to install bacula-postgresql, but that did not work even on a fresh install, due to an inability to set up the postgres database during configuration: it couldn't access the postgres user to actually create the bacula database per its definition. I decided to give up on that, and I didn't really want to use bacula-mysql, as MySQL is actually proprietary technology now (the current open source fork is known as MariaDB). I am not sure whether it would have worked fine or not, but there was a third option, sqlite3. I have worked with that before and decided to just use it.
apt install bacula-client bacula-common bacula-console bacula-sqlite3
This installed correctly. Then I discovered that the daemons weren’t started. So I had to unmask their systemctl services to get them to start.
systemctl unmask bacula-fd.service
systemctl unmask bacula-sd.service
systemctl unmask bacula-dir.service
The config files are located in /opt/bacula/etc now, so I copied my working configuration files over.
scp nrweaver@192.168.2.25:/etc/bacula/*.conf /opt/bacula/etc/
After that I had my configuration migrated over, which hopefully still works.
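One caution I will add as an assumption on my part, not something I recorded doing: the daemons run as the bacula user, so configuration files copied in as root may need their ownership fixed before the services can read them.
chown -R bacula:bacula /opt/bacula/etc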
This time, I did a little more looking into Apache vs lighttpd as a web server and decided to work with lighttpd [45]. Mostly because I think Apache is getting a little old; I haven't heard about new LAMP stacks being deployed for a while. I have some familiarity with Apache already, and would like to add familiarity with more tools. Also, Apache is a heavy application. I decided to proceed with lighttpd.
I started with the API install
apt-get install baculum-common baculum-api baculum-api-lighttpd
systemctl start baculum-api-lighttpd
Then the web GUI install
apt install baculum-common baculum-web baculum-web-lighttpd
systemctl start baculum-web-lighttpd
I had to add the sudo rights again to /etc/sudoers.d/baculum, but this time I had a few changes to make:
Defaults:www-data !requiretty
www-data ALL=NOPASSWD: /usr/sbin/bconsole
www-data ALL=NOPASSWD: /usr/sbin/bdirjson
www-data ALL=NOPASSWD: /usr/sbin/bsdjson
www-data ALL=NOPASSWD: /usr/sbin/bfdjson
www-data ALL=NOPASSWD: /usr/sbin/bbconsjson
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl start bacula-dir
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl stop bacula-dir
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl restart bacula-dir
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl start bacula-sd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl stop bacula-sd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl restart bacula-sd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl start bacula-fd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl stop bacula-fd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl restart bacula-fd
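A tip worth knowing for these drop-in files: visudo can syntax-check them before a typo locks you out of sudo entirely.
visudo -cf /etc/sudoers.d/baculum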
Then I stepped through the same configuration as before. There was only one small issue: I didn't know where the bacula sqlite3 database was actually located. It isn't listed in any of the manuals. I eventually found it at /opt/bacula/working/bacula.db. I had to add it to the /etc/sudoers.d/baculum definition as well to pass the test.
Defaults:www-data !requiretty
www-data ALL=NOPASSWD: /usr/sbin/bconsole
www-data ALL=NOPASSWD: /usr/sbin/bdirjson
www-data ALL=NOPASSWD: /usr/sbin/bsdjson
www-data ALL=NOPASSWD: /usr/sbin/bfdjson
www-data ALL=NOPASSWD: /usr/sbin/bbconsjson
www-data ALL=NOPASSWD: /opt/bacula/working/bacula.db
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl start bacula-dir
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl stop bacula-dir
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl restart bacula-dir
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl start bacula-sd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl stop bacula-sd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl restart bacula-sd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl start bacula-fd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl stop bacula-fd
www-data ALL=(root) NOPASSWD: /usr/bin/systemctl restart bacula-fd
The last issue I encountered was that the json executables needed to be in the sbin directory, otherwise they would require a console to work. I copied them from their install locations into /usr/sbin.
cp /opt/bacula/bin/bconsole /usr/sbin/
cp /opt/bacula/bin/bdirjson /usr/sbin/
cp /opt/bacula/bin/bsdjson /usr/sbin/
cp /opt/bacula/bin/bfdjson /usr/sbin/
cp /opt/bacula/bin/bbconsjson /usr/sbin/
At first the Baculum web GUI didn't have all the PHP modules it needed.
apt install php-bcmath php-cgi php-mysql php-pgsql php-json php-xml php-curl
At the end I got everything working and tested correctly.
After that the web interface worked correctly. It could even edit jobs, create clients, and do everything without the error from before. That is at least progress. Though it was likely predictable that the 9.6.6 GUI wouldn't necessarily work with version 9.4.2; I do think a basic version check could have surfaced the issue I had.
Anyways, I created a new full backup. It succeeded.
bacula-dir JobId 26: End auto prune.
bacula-dir JobId 26: No Files found to prune.
bacula-dir JobId 26: Begin pruning Files.
bacula-dir JobId 26: No Jobs found to prune.
bacula-dir JobId 26: Begin pruning Jobs older than 6 months .
bacula-dir JobId 26: Bacula bacula-dir 9.6.6 (20Sep20):
Build OS: x86_64-pc-linux-gnu ubuntu 18.04
JobId: 26
Job: TrueNAS_Job.2020-12-18_22.34.12_32
Backup Level: Full (upgraded from Incremental)
Client: "Alexandria-TrueNAS-fd" 9.6.6 (20Sep20) amd64-portbld-freebsd12.1,freebsd,12.1-RELEASE-p12
FileSet: "Full Set" 2020-12-18 22:34:12
Pool: "TrueNAS" (From Command input)
Catalog: "MyCatalog" (From Client resource)
Storage: "TrueNAS" (From Command input)
Scheduled time: 18-Dec-2020 22:34:12
Start time: 18-Dec-2020 22:34:14
End time: 18-Dec-2020 22:34:44
Elapsed time: 30 secs
Priority: 10
FD Files Written: 89,203
SD Files Written: 89,203
FD Bytes Written: 2,140,836,347 (2.140 GB)
SD Bytes Written: 2,154,971,976 (2.154 GB)
Rate: 71361.2 KB/s
Software Compression: None
Comm Line Compression: 49.0% 2.0:1
Snapshot/VSS: no
Encryption: no
Accurate: no
Volume name(s): Vol-0002
Volume Session Id: 21
Volume Session Time: 1608251352
Last Volume Bytes: 11,420,434,537 (11.42 GB)
Non-fatal FD errors: 0
SD Errors: 0
FD termination status: OK
SD termination status: OK
Termination: Backup OK
bacula-sd JobId 26: Sending spooled attrs to the Director. Despooling 23,207,604 bytes ...
bacula-sd JobId 26: Elapsed time=00:00:14, Transfer rate=153.9 M Bytes/second
Alexandria-TrueNAS-fd JobId 26: /dev is a different filesystem. Will not descend from / into it.
Alexandria-TrueNAS-fd JobId 26: /var is a different filesystem. Will not descend from / into it.
bacula-sd JobId 26: Volume "Vol-0002" previously written, moving to end of data.
bacula-dir JobId 26: Using Device "Alexandria-Backup" to write.
bacula-dir JobId 26: Start Backup JobId 26, Job=TrueNAS_Job.2020-12-18_22.34.12_32
bacula-dir JobId 26: No prior or suitable Full backup found in catalog. Doing FULL backup.
bacula-dir JobId 26: No prior Full backup Job record found.
I went ahead and tried to initiate a file recovery.
It was still not reading correctly, despite all the previous successes. I was very frustrated at this point. I took a break and set up a different backup software, which I will cover next. Eventually, I came back to this and tried a manual recovery.
root@bacula:/opt/bacula/working# bconsole
Connecting to Director bacula:9101
1000 OK: 103 bacula-dir Version: 9.6.6 (20 September 2020)
Enter a period to cancel a command.
*restore
Automatically selected Catalog: MyCatalog
Using Catalog "MyCatalog"
First you select one or more JobIds that contain files
to be restored. You will be presented several methods
of specifying the JobIds. Then you will be allowed to
select which files from those JobIds are to be restored.
To select the JobIds, you have the following choices:
1: List last 20 Jobs run
2: List Jobs where a given File is saved
3: Enter list of comma separated JobIds to select
4: Enter SQL list command
5: Select the most recent backup for a client
6: Select backup for a client before a specified time
7: Enter a list of files to restore
8: Enter a list of files to restore before a specified time
9: Find the JobIds of the most recent backup for a client
10: Find the JobIds for a backup for a client before a specified time
11: Enter a list of directories to restore for found JobIds
12: Select full restore to a specified Job date
13: Cancel
Select item: (1-13): 5
Defined Clients:
1: Alexandria-TrueNAS-fd
2: bacula-fd
Select the Client (1-2): 1
Automatically selected FileSet: Full Set
+-------+-------+----------+------------+---------------------+------------+
| JobId | Level | JobFiles | JobBytes | StartTime | VolumeName |
+-------+-------+----------+------------+---------------------+------------+
| 26 | F | 89203 | 2140836347 | 2020-12-18 22:34:14 | Vol-0002 |
+-------+-------+----------+------------+---------------------+------------+
You have selected the following JobId: 26
Building directory tree for JobId(s) 26 ... ++++++++++++++++++++++++++++++++++++++++++++
79,562 files inserted into the tree.
You are now entering file selection mode where you add (mark) and
remove (unmark) files to be restored. No files are initially added, unless
you used the "all" keyword on the command line.
Enter "done" to leave this mode.
cwd is: /
$ cd usr/local/etc/
Invalid path given.
cwd is: /
$ ls
.cshrc
.profile
.rnd
COPYRIGHT
bin/
boot/
compat/
conf/
data/
dev
entropy
etc/
lib/
libexec/
media
net
nonexistant/
rescue/
root/
sbin/
storcli.log
usr/
var
$ add COPYRIGHT
1 file marked.
$ done
Bootstrap records written to /opt/bacula/working/bacula-dir.restore.1.bsr
The Job will require the following (*=>InChanger):
Volume(s) Storage(s) SD Device(s)
===========================================================================
Vol-0002 TrueNAS Alexandria-Backup
Volumes marked with "*" are in the Autochanger.
1 file selected to be restored.
Using Catalog "MyCatalog"
Run Restore job
JobName: RestoreBaculaFiles
Bootstrap: /opt/bacula/working/bacula-dir.restore.1.bsr
Where: /mnt/backup/bacula/
Replace: Always
FileSet: Full Set
Backup Client: Alexandria-TrueNAS-fd
Restore Client: Alexandria-TrueNAS-fd
Storage: TrueNAS
When: 2020-12-18 22:35:55
Catalog: MyCatalog
Priority: 10
Plugin Options: *None*
OK to run? (yes/mod/no): yes
Job queued. JobId=27
You have messages.
It worked just fine. There is a small thing to note: I did have to fix the base bacula-dir.conf file at this point, because the restore job referenced a non-existent location.
Original in /opt/bacula/etc/bacula-dir.conf:
Job {
  Name = "RestoreBaculaFiles"
  Type = Restore
  Client = bacula-fd
  Storage = TrueNAS
  # The FileSet and Pool directives are not used by Restore Jobs
  # but must not be removed
  FileSet = "Full Set"
  Pool = TrueNAS
  Messages = Standard
  Where = /nonexistant/path/to/file/archive/dir/bacula-restores
}
Working version in /opt/bacula/etc/bacula-dir.conf:
Job {
  Name = "RestoreBaculaFiles"
  Type = "Restore"
  Messages = "Standard"
  Storage = "TrueNAS"
  Pool = "TrueNAS"
  Client = "bacula-fd"
  Fileset = "Full Set"
  Where = "/mnt/backup/bacula/"
}
With that, files are restored on the client machine in /mnt/backup/bacula/. That wasn't quite what I was expecting, but it works, and I now know exactly how to define the job, plus where to expect restored files to show up.
So in the end, it was likely working even before the upgrade to 9.6.6. It is just a bug that, for whatever reason, the web GUI can read the file structure for Linux but not for FreeBSD. Sigh. That is a lot of effort for something that was already working as well as it was ever going to. It does work: it backs up the FreeBSD files and can restore them. Those are the basic requirements for a working backup solution, so I count this as a minor success.
However, at the end of the day, I just can't wholeheartedly recommend Bacula. I am absolutely certain that Bacula can do everything that I want. It is powerful. It is dang fast: almost twice as fast at backing up the same systems as Veeam, and way faster than the other options. But it is also extremely difficult to debug.
It is likely it was designed in a different era. If I was not as familiar with LAMP stack, C/C++ programming, older command line tools, and basic Linux use I doubt I would have got it working. The GUI is practically useless for actually setting things up correctly. Its architecture is very old in style, even though I believe it to actually be a solid design.
What it needs is a UX designer, and badly. Someone who can make it so it has friendly errors, friendly configuration for basic use cases. It took days to work through these issues, and I still don’t really know how to make sure it keeps files for only a few incremental backups or holds a full backup for a year. I am sure it can do it, but it will take another afternoon to work through how exactly to define everything. This needs a massive UX overhaul, not just a better web GUI, but an entire UX architecture. The backend is fine, but the entire configuration apparatus is cumbersome.
Bacula needs to be updated to account for how modern backups tend to work. It is far more likely to just use an old hard disk drive and label it than to use DVD or Tape drives anymore. It still has all of these options built for these older mediums. They appear to be the expected use case? They need to be deprioritized. Basic backup to basic hard drives should be the expected and obvious case here.
It is possible, and maybe even likely that my use case is not what it wants to do. However, given what it does do, this is where it should be going. This was too much configuration and debugging. Like I stated earlier, I got frustrated and moved on to a different backup solution.
I briefly deployed Duplicati [46], but it is a local backup solution, and I wanted a single backup console for managing everything. So I searched for open source client/server backup architectures and came across an option, UrBackup [47]. It has a FreeBSD client as well, so I decided to deploy it.
UrBackup Install and Configuration
Okay, with the complication of Bacula and the bug where I cannot access recovery options through the web GUI for FreeBSD, I was looking for a second open source option that would work with FreeBSD. For that I found UrBackup, which is a relatively new project, starting in what looks like 2013 based on the developer updates [48], though I admit I am not certain of that. It has had updates all the way up to now (Dec 2020), so it is a fairly active project. There are only two components, a server and a client, and it has a very basic install page [49], including a docker container! That was my first inclination.
I deployed a VM with 16 CPUs, 16 GB of RAM, and 30 GB of storage, similar to all of the other backup controller VMs. After a base Ubuntu install, I needed to configure the backups share from the storage VM. I added a second network interface for connecting to the storage network, thereby granting access to the .storage domain. Next I created the mount point.
mkdir /mnt/backup
Then I added the definition to /etc/fstab, very much like for Bacula.
//alexandria.internal/Backups /mnt/backup cifs credentials=/etc/samba/credentials,uid=nobody,iocharset=utf8,noperm,file_mode=0777,dir_mode=0777 0 0
After that I tried to mount the new share.
root@urbackup:~# mount -a
mount: /mnt/backup: bad option; for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program.
I forgot about the cifs-utils package. I installed that.
apt install cifs-utils
Then I tried to mount it again.
root@urbackup:~# mount -a
error 2 (No such file or directory) opening credential file /etc/samba/credentials
I forgot that when copying the fstab mount from Bacula I had created a credentials file. So I added it.
mkdir /etc/samba
/etc/samba/credentials:
user=bacula
password=<SECRET>
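Since this file holds a plaintext password, it is worth locking down so only root can read it (a general precaution, not something from my original notes):
chmod 600 /etc/samba/credentials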
Now the mount just worked.
root@urbackup:/# mount -a
Then I installed docker-compose.
apt install docker-compose
Followed by configuring the docker-compose file based on the data in the docker page [50].
version: '3'
services:
  urbackup:
    container_name: urbackup
    image: uroni/urbackup-server:latest
    restart: always
    ports:
      - 55413:55413
      - 55414:55414
      - 55415:55415
      - 35623:35623
    volumes:
      - /home/nrweaver/urbackup/database:/var/urbackup
      - /mnt/backup/urbackup:/backups
At that point I was able to run the container.
docker-compose up
And navigated to the control panel at 192.168.2.153:55414.
I did see the warning, but I decided to try it anyway and installed the client on the storage VM.
Now, a small thing about UrBackup: it automatically picks up new clients. A hint can be issued from the main screen, but it isn't strictly necessary. I will cover this in a little more detail later, as it did not work at all here.
Apparently CIFS is not going to work here, as the developer notes [51]. Luckily, given my private storage network configuration, I feel comfortable using NFS here: nobody should be able to snoop, since no data ever leaves this host. iSCSI is also definitely an option, but that would limit the space to only what is presented over iSCSI. I preferred the general availability of the NFS mount. However, a bare NFS mount didn't work.
I started by creating a new NFS share for backups on TrueNAS. I navigated to Sharing -> Unix Shares (NFS) and clicked Add. There I navigated to the /mnt/Stacks/Backups folder.

Then I went ahead and limited mounting to only the storage network, so nothing could reach it from my normal network. I added 10.0.2.0/24 to the Authorized Networks, and urbackup.storage (I moved it to a DNS name with a static DHCP address) specifically to the Authorized Hosts. Lastly I needed to add a maproot user and a maproot group, or everything comes in as nobody:nogroup.

Then I edited /etc/fstab, removing the old CIFS definition and mounting the new one, based on some guides on what the syntax looks like [52].
alexandria.storage:/mnt/Stacks/Backups/ /mnt/backup nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0
Then I tried to mount it again.
mount -a
mount: /mnt/backup: bad option; for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program.
I needed the nfs-common package.
apt install nfs-common
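nfs-common also brings along showmount, which lists what a server actually exports; a handy check before fighting with fstab, assuming the server permits the query:
showmount -e alexandria.storage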
Then I tried to mount again.
mount -a
mount.nfs: Failed to resolve server alexandria.storage: Name or service not known
That was strange; it should easily be able to resolve this domain name. I installed net-tools to get my basic network tool set.
apt install net-tools
Then I checked the networking.
ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:f0:e8:37:c0 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.2.153 netmask 255.255.255.0 broadcast 192.168.2.255
inet6 fe80::250:56ff:fea4:fe74 prefixlen 64 scopeid 0x20<link>
ether 00:50:56:a4:fe:74 txqueuelen 1000 (Ethernet)
RX packets 18933 bytes 2789105 (2.7 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 884 bytes 116622 (116.6 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 100 bytes 7808 (7.8 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 100 bytes 7808 (7.8 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
There was no device on the storage network that was up. They all showed the other network interface.
ifconfig -a
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
ether 02:42:f0:e8:37:c0 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.2.153 netmask 255.255.255.0 broadcast 192.168.2.255
inet6 fe80::250:56ff:fea4:fe74 prefixlen 64 scopeid 0x20<link>
ether 00:50:56:a4:fe:74 txqueuelen 1000 (Ethernet)
RX packets 19805 bytes 2886258 (2.8 MB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 901 bytes 119712 (119.7 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens192: flags=4098<BROADCAST,MULTICAST> mtu 1500
ether 00:0c:29:5d:25:ab txqueuelen 1000 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 100 bytes 7808 (7.8 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 100 bytes 7808 (7.8 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Alright, I can manually configure this with the following.
ifconfig ens192 up
dhclient -1 ens192
ifconfig ens192 up turns on the interface, and dhclient -1 ens192 forces it to acquire a new IP address on the newly up interface. However, this doesn't actually survive a reboot. To fix that I had to learn a little bit about Netplan. It keeps basic interface information in /etc/netplan/<filename>; for me that was /etc/netplan/00-installer-config.yaml:
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens160:
      dhcp4: true
  version: 2
I added the incorrectly initializing interface.
# This is the network config written by 'subiquity'
network:
  ethernets:
    ens192:
      dhcp4: true
    ens160:
      dhcp4: true
  version: 2
This brings the network interface up after a reboot and acquires an IP address. With that out of the way, I could now complete the mount.
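As an aside, netplan can also apply a changed configuration immediately, without waiting for a reboot:
netplan apply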
mount -a
Then I executed a docker-compose up and…
docker-compose up
urbackup is up-to-date
Attaching to urbackup
urbackup | usermod: no changes
urbackup | chown: changing ownership of '/backups': Invalid argument
So it appeared that UrBackup expects to use its own user for creating and editing files on the backup location. That is… unfortunate. I had never attempted to mount with specific user rights before. First, I looked for anything specific that could be tried. I found that I needed to create an exports file in /etc/exports [53], which also needed a user ID and group ID mapping, recommended to live in a separate file [54]. So I had an /etc/nfs.map file like:
#remote local
gid 118 118
uid 112 112
gid 0 0
uid 0 0
And an /etc/exports like:
/mnt/backup alexandria.storage(rw,insecure,map_static=/etc/nfs.map,no_subtree_check,no_root_squash)
To figure out the correct user IDs for the relevant users, I created a urbackup user and group in TrueNAS. I also needed to look up the user ID and group ID to match with on the urbackup machine. I used [55]:
root@urbackup:/mnt# id -u urbackup
112
root@urbackup:/mnt# id -g urbackup
118
Then I tried remounting the share.
umount /mnt/backup
mount -a
And looked at the result.
root@urbackup:/mnt/backup# cd ..
root@urbackup:/mnt# ls -al
total 41
drwxr-xr-x 3 root root 4096 Dec 24 02:52 .
drwxr-xr-x 20 root root 4096 Dec 24 02:42 ..
d--------- 94 nobody 4294967294 94 Dec 23 21:05 backup
root@urbackup:/mnt# ls -al backup/urbackup
total 43
drwx------ 5 nobody 4294967294 5 Dec 24 03:00 .
d--------- 94 nobody 4294967294 94 Dec 23 21:05 ..
drwx------ 8 nobody 4294967294 8 Dec 23 17:26 urbackup
Well, something had changed, but I was still missing something. There is actually a very small piece that is easy to miss: the domain mapping between TrueNAS and Linux NFS must match [56]. The TrueNAS domain is under Network -> Global Configuration; I set that to internal. On the actual urbackup server this is located in /etc/idmapd.conf:
[General]
Verbosity = 0
Pipefs-Directory = /run/rpc_pipefs
# set your own domain here, if it differs from FQDN minus hostname
Domain = internal
[Mapping]
Nobody-User = nobody
Nobody-Group = nogroup
I tried to restart the service, but apparently I didn't have the right unit name.
systemctl restart nfs-idmapd.service
Failed to restart nfs-idmapd.service: Unit nfs-server.service not found.
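For what it is worth, I have since read that the client-side idmap cache can be flushed with nfsidmap instead of rebooting; I did not try it at the time, so treat this as an assumption:
nfsidmap -c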
So I just rebooted the machine. After the reboot:
# ls -al
total 41
drwxr-xr-x 3 root root 4096 Dec 24 02:52 .
drwxr-xr-x 20 root root 4096 Dec 24 02:42 ..
d--------- 94 root 4294967294 94 Dec 23 21:05 backup
ls -al backup/
total 115
drwxr-xr-x 3 root root 4096 Dec 24 02:52 ..
d--------- 3 nobody 4294967294 5 Dec 24 04:37 UrBackup
(I am cropping these to my relevant folders).
That was better: the root user was being correctly identified. It occurred to me that I needed to go set the owner and group to urbackup:urbackup on the TrueNAS server itself for the mapping to work.

And then I saw:
# ls -al backup/
total 115
drwx------ 5 urbackup urbackup 5 Dec 24 04:37 UrBackup
(I am cropping this to my relevant folders).
There we go! I have successfully mounted this with GID and UID mapping. Now I was hoping the docker container would work.
docker-compose up
urbackup is up-to-date
Attaching to urbackup
urbackup | usermod: no changes
urbackup | chown: changing ownership of '/backups': Invalid argument
Sigh. That was a lot of effort for a failed attempt. I did try to delete the container, the volumes, the images, everything, and reinstall; it seemed like this should work. I eventually deduced that the error comes from a specific file in the container.
cat entrypoint.sh
#!/bin/bash
set -e
# Copy www-folder back, if folder is bind-mounted
cp -R /web-backup/* /usr/share/urbackup
# Giving the user and group "urbackup" the provided UID/GID
if [[ $PUID != "" ]]
then
    usermod -u $PUID -o urbackup
else
    usermod -u 112 -o urbackup
fi
if [[ $PGID != "" ]]
then
    groupmod -g $PGID -o urbackup
else
    groupmod -g 118 -o urbackup
fi
# Specifying backup-folder location
echo "/backups" > /var/urbackup/backupfolder
chown urbackup:urbackup /backups
chown urbackup:urbackup /var/urbackup
exec urbackupsrv "$@"
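The failing call is visible near the bottom: chown urbackup:urbackup /backups. A hypothetical way to reproduce it outside the container (I did not record running this) is to issue the same chown from the host against the NFS mount:
chown urbackup:urbackup /mnt/backup/urbackup
If the UID/GID mapping is off, the server refuses the ownership change and a similar error should appear.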
It looks like this file has its beginnings here [57]; the community was trying to build a multi-arch docker container. Maybe this is simply a workaround shoved in that doesn't work for my NFS mount? I am not sure. I did notice that there was a more complete example docker-compose yaml file on the docker page, so I tried editing it to work for me:
version: '2'
services:
  urbackup:
    image: uroni/urbackup-server:latest
    container_name: urbackup
    restart: unless-stopped
    environment:
      - PUID=112 # Enter the UID of the user who should own the files here
      - PGID=118 # Enter the GID of the user who should own the files here
      - TZ=America/Chicago # Enter your timezone
    volumes:
      - /home/nrweaver/urbackup/database:/var/urbackup
      - /mnt/backup/urbackup:/backups
      # Uncomment the next line if you want to bind-mount the www-folder
      #- /path/to/wwwfolder:/usr/share/urbackup
    network_mode: "host"
    # Activate the following two lines for BTRFS support
    #cap_add:
    #  - SYS_ADMIN
Then I tried starting the container.
docker-compose up
Recreating urbackup ... done
Attaching to urbackup
urbackup | Raising nice-ceiling to 35 failed. (errno=1)
urbackup | 2020-12-23 23:05:18: Starting HTTP-Server on port 55414
urbackup | 2020-12-23 23:05:18: HTTP: Server started up successfully!
urbackup | 2020-12-23 23:05:18: SQLite: recovered 4 frames from WAL file /var/urbackup/backup_server.db-wal code: 283
urbackup | 2020-12-23 23:05:18: SQLite: recovered 2 frames from WAL file /var/urbackup/backup_server_link_journal.db-wal code: 283
urbackup | 2020-12-23 23:05:18: SQLite: recovered 3 frames from WAL file /var/urbackup/backup_server_settings.db-wal code: 283
urbackup | 2020-12-23 23:05:18: SQLite: recovered 4 frames from WAL file /var/urbackup/backup_server.db-wal code: 283
urbackup | 2020-12-23 23:05:18: SQLite: recovered 3 frames from WAL file /var/urbackup/backup_server_settings.db-wal code: 283
urbackup | 2020-12-23 23:05:18: SQLite: recovered 2 frames from WAL file /var/urbackup/backup_server_link_journal.db-wal code: 283
urbackup | 2020-12-23 23:05:18: Started UrBackup...
urbackup | 2020-12-23 23:05:18: Removing temporary files...
urbackup | 2020-12-23 23:05:18: Recreating temporary folder...
urbackup | TEST FAILED: guestmount is missing (libguestfs-tools)
urbackup | 2020-12-23 23:05:18: Image mounting disabled: TEST FAILED: guestmount is missing (libguestfs-tools)
urbackup | Testing for btrfs...
urbackup | ERROR: not a btrfs filesystem: /backups/testA54hj5luZtlorr494
urbackup | TEST FAILED: Creating test btrfs subvolume failed
urbackup | Testing for zfs...
urbackup | TEST FAILED: Dataset is not set via /etc/urbackup/dataset
urbackup | 2020-12-23 23:05:18: Backup destination cannot handle subvolumes and snapshots. Snapshots disabled.
urbackup | 2020-12-23 23:05:18: Reflink ioctl failed. errno=95
urbackup | 2020-12-23 23:05:18: Broadcasting on ipv4 interface ens160 addr 192.168.2.153
urbackup | 2020-12-23 23:05:18: Broadcasting on ipv4 interface ens192 addr 10.0.2.31
urbackup | 2020-12-23 23:05:18: Broadcasting on ipv4 interface docker0 addr 172.17.0.1
urbackup | 2020-12-23 23:05:18: Broadcasting on ipv4 interface br-aac2308fc248 addr 172.18.0.1
urbackup | 2020-12-23 23:05:18: Broadcasting on ipv6 interface ens160 addr fe80::250:56ff:fea4:fe74
urbackup | 2020-12-23 23:05:18: Broadcasting on ipv6 interface ens192 addr fe80::20c:29ff:fe5d:25ab
urbackup | 2020-12-23 23:05:18: Broadcasting on ipv6 interface br-aac2308fc248 addr fe80::42:b1ff:fe34:e05c
urbackup | 2020-12-23 23:05:18: UrBackup Server start up complete.
Which did work! Or at least the container started. I am not at all sure why this one works and my old one didn't. But I did see an error.
I kinda guessed that the issue might be that it can't change the base access rights on the base /mnt/Stacks/Backups, as opposed to a share owned completely by the urbackup:urbackup user and group. So I went ahead and created a new dataset in TrueNAS for this specific folder underneath Backups.
Then I can see it underneath Backups.
Then I set up the new NFS share for this dataset.
After that I needed to edit the fstab for the new dataset.
alexandria.storage:/mnt/Stacks/Backups/UrBackup /mnt/backup nfs auto,nofail,noatime,nolock,intr,tcp,actimeo=1800 0 0
After a reboot (the device was busy), I restarted the docker container.
No warnings! I wanted to set up a local client to make sure this was working correctly. There is a client docker container [58]. However, I was concerned, since docker's primary purpose is to encapsulate environments, whereas for backup I want to back up everything on the entire machine. I also didn't see a reference to it under the client section [59]. So although I think it is an official release, I am unsure how much support I could expect; it hadn't been touched in a year. I decided to go with the standard install in case I had missed something.
TF=$(mktemp) && wget "https://hndl.urbackup.org/Client/2.4.11/UrBackup%20Client%20Linux%202.4.11.sh" -O $TF && sudo sh $TF; rm -f $TF
It prompted me for a snapshot type.
+Detected LVM volumes
Please select the snapshot mechanism to be used for backups:
1) dattobd volume snapshot kernel module from https://github.com/datto/dattobd
2) LVM - Logical Volume Manager snapshots
4) Use no snapshot mechanism
I selected dattobd. This was a mistake. I tried to get dattobd working with the basic install.
sudo apt-key adv --fetch-keys https://cpkg.datto.com/DATTO-PKGS-GPG-KEY
echo "deb [arch=amd64] https://cpkg.datto.com/datto-deb/public/$(lsb_release -sc) $(lsb_release -sc) main" | sudo tee /etc/apt/sources.list.d/datto-linux-agent.list
sudo apt-get update
sudo apt-get install dattobd-dkms dattobd-utils
But the module wouldn't load. I ended up having to figure out how to change this selection. The installer references /usr/local/etc/urbackup/snapshot.cfg, which looked like:
#This is a key=value config file for determining the scripts/programs to create snapshots
create_filesystem_snapshot=/usr/local/share/urbackup/dattobd_create_filesystem_snapshot
remove_filesystem_snapshot=/usr/local/share/urbackup/dattobd_remove_filesystem_snapshot
Looking in /usr/local/share/urbackup/ I see:
:/usr/local/share/urbackup# ls
btrfs_create_filesystem_snapshot dattobd_remove_filesystem_snapshot lvm_remove_filesystem_snapshot
btrfs_remove_filesystem_snapshot filesystem_snapshot_common scripts
dattobd_create_filesystem_snapshot lvm_create_filesystem_snapshot urbackup_ecdsa409k1.pub
So I edited it to change the snapshot system. It didn't really help; snapshots still don't seem to work.
#This is a key=value config file for determining the scripts/programs to create snapshots
create_filesystem_snapshot=/usr/local/share/urbackup/lvm_create_filesystem_snapshot
remove_filesystem_snapshot=/usr/local/share/urbackup/lvm_remove_filesystem_snapshot
After that I checked the urbackup server and it discovered it and started taking a backup!
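As a side note, the Linux install also provides urbackupclientctl, whose status subcommand (present in the 2.4.x client, as far as I know) can confirm the backend is running and whether a server has picked it up:
urbackupclientctl status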
It appeared to be working, but there was an error taking this backup.
I needed to go and configure the backup settings, so I navigated to the Settings -> File Backups page. I did configure some of the times, but to get around this error I added the Excluded files line:
/mnt/*;/proc/*;/tmp/*;/sys/*;/.journal/*;/.fsk/*;/swap.img;
And set the Default directories to backup line to:
/
Which did make the backup work.
When browsing the actual backup, though, I could view everything but couldn't restore anything.
It looks like the restore option is supposed to appear in the list [60]. I needed to change a setting in the client. The client's configuration file is /etc/default/urbackupclient:
# Defaults for urbackup_client initscript
# sourced by /etc/init.d/urbackupclientbackend
# installed at /etc/default/urbackupclient by the maintainer scripts
#
# This is parsed as a key=value file
#
#logfile name
LOGFILE="/var/log/urbackupclient.log"
#Either debug,warn,info or error
LOGLEVEL=warn
#Max size of the log file before rotation
#Disable if you are using logrotate for
#more advanced configurations (e.g. with compression)
LOG_ROTATE_FILESIZE=20971520
#Max number of log files during rotation
LOG_ROTATE_NUM=10
#Tmp file directory
DAEMON_TMPDIR="/tmp"
# Valid settings:
#
# "client-confirms": If you have the GUI component the currently active user
# will need to confirm restores from the web interface.
# If you have no GUI component this will cause restores
# from the server web interface to not work
# "server-confirms": The server will ask the user starting the restore on
# the web interface for confirmation
# "disabled": Restores via web interface are disabled.
# Restores via urbackupclientctl still work
#
RESTORE=disabled
#If true client will not bind to any external network ports (either true or false)
INTERNET_ONLY=false
I changed RESTORE=disabled to RESTORE=server-confirms. Then I restarted the client to load the change.
systemctl restart urbackupclientbackend.service
And restarted the server to make sure both sides picked up the change.
docker-compose down
docker-compose up -d
And checked the backup again.
Now the restore option works. I tested this with a test file just to make sure. That worked fine. I am thinking this is working. I needed to make sure docker would auto-restart the urbackup server after a reboot.
systemctl enable docker.service
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /lib/systemd/system/docker.service.
And I changed the docker-compose.yaml file to restart: always.
version: '2'
services:
  urbackup:
    image: uroni/urbackup-server:latest
    container_name: urbackup
    restart: always
    environment:
      - PUID=112 # Enter the UID of the user who should own the files here
      - PGID=118 # Enter the GID of the user who should own the files here
      - TZ=America/Chicago # Enter your timezone
    volumes:
      - /home/nrweaver/urbackup/database:/var/urbackup
      - /mnt/backup/urbackup:/backups
      # Uncomment the next line if you want to bind-mount the www-folder
      #- /path/to/wwwfolder:/usr/share/urbackup
    network_mode: "host"
    # Activate the following two lines for BTRFS support
    #cap_add:
    #  - SYS_ADMIN
Followed by registering the changes:
docker-compose down
docker-compose up -d
Now a reboot starts the server container. Everything was working.
Alright, there is one more small issue that needs to be addressed. It has to do with the way urbackup clients work: a urbackup client will connect to the first urbackup server it finds. Once it has done so, it registers a key for this connection [61]. In the following, look at the top message.
This is it telling me the alexandria.internal client has already registered with a different server. That likely happened because I deleted the image and containers while I was trying to get the docker server working; it generated a new auth key, and now I needed to fix things so the client already running on the FreeBSD TrueNAS could connect to this new server.
The biggest issue here is that nobody lists where exactly this file is located. This is a recent-ish port [62], there aren't a lot of helpful forum posts about it, and the posts that exist seem to be about the FreeBSD server, not the client. I found it on FreeBSD at /var/urbackup.
I cleared that directory of all state:
rm /var/urbackup/*
Then I restarted the client:
service urbackup_client restart
Then I needed to restart the server. I tried restarting the docker container, but it was rejected, so I just rebooted the server.

There it is! Now urbackup is configured and working.
Just for completeness: on a Linux system, the server_idents.txt file is located in /usr/local/var/urbackup, which I did need to find for a future client on the bacula server. Followed by:
systemctl restart urbackupclientbackend.service
to register the change.
The last thing I wanted to do was set a password for the web interface. That is under Settings -> Users. I created an admin user.
The really disturbing thing is that the main page has no username field, and simply typing in the password doesn't actually work. Luckily LastPass appears to find the username box and fills it in, so I can get in.
Alright, with that completed, it is time to configure the Veeam Agents.
Veeam Agents
My backup solution was coming together at this point. I had two services, Bacula and UrBackup, that can back up my FreeBSD based TrueNAS storage VM. Now I needed to finish up by setting up backup for the Windows based Streaming VM and Virtual Reality VM, plus the Linux based Home Assistant VM. These three use PCI passthrough, which breaks vSphere snapshotting.
My thinking here was that I wanted to use Veeam as my core infrastructure backup solution. It is the most fully featured, including hypervisor level backups, backup-based VM deployment, replication, and more. UrBackup can do file based backup and restore, which is the minimum, but it would take quite a bit of effort to turn that into an actual redeployment. It is enough to recover from minor issues, not major ones. Though I would say a major issue would probably knock out the entire server, so I'm not sure how much I should really concern myself with that, as the proper solution is to buy another server. I am not going to do that.
Still, this is a fairly featured backup solution overall. I am pleased with how this is turning out and the options I have worked through. I am also going to include backup of my non-VM hardware: my workstation, my laptops, and my partner's laptops as well. Veeam can cover these cases with the free editions. Let's start with the Veeam Windows agent.
Veeam Windows Agent
The Windows Veeam agent can be downloaded here [63]. After logging in and downloading it I started the install. I began by agreeing to the terms and clicking Install.
Next I skipped the local backup target; I intend to use the Veeam Community Edition management system.
I deselected the Run Veeam Recovery Media creation wizard. I think this is about building a recovery device to restore Windows files when necessary, which doesn't seem necessary here. Hopefully the Veeam management console can do this as well, as I don't have a ton of USB drives to keep lying around and labeled. I clicked Finish.
Then I needed to configure the first backup job. I clicked on the Veeam Agent icon in the bottom right and selected the Control Panel.
The Community Edition will provide one if needed (or not, for the free version), I hope, as I do not have a license. I clicked No.
Next I clicked on the hamburger menu icon in the top left to get to the main menu.
Then I selected + Add New Job…, which started the configuration of the new job.
I changed the name to Job Demo and clicked Next.
I stuck with the entire computer backup by clicking Next.
On the next page I selected the Veeam backup repository. That changed the next step to Backup Server and added a step for Backup Repository. I proceeded by clicking Next.
Here I needed to configure the backup server.
I had previously issued the Veeam Backup VM a DNS name of veeam.internal, so I entered that.

And it failed for some reason.

There were a fair number of posts and forum articles about this [64] [65] [66]. Ultimately there were really two things I needed to do to get around the issue. First, I needed to change the Access Permissions for the Alexandria backup repository: I selected the repository, right clicked, and chose Access permissions...

Then clicked Allow to everyone.

As someone who has not been a Windows LDAP admin, the actual issue wasn’t obvious, even though it was definitely in the Veeam article.
4. In the Access Permissions window, specify to whom you want to grant access permissions on this backup repository:
- Allow to everyone — select this option if you want all users to be able to store backups on this backup repository. Setting access permissions to Everyone is equal to granting access rights to the Everyone Microsoft Windows group (Anonymous users are excluded). Note, however, this scenario is recommended for demo environments only.
- Allow to the following accounts or groups only — select this option if you want only specific users to be able to store backups on this backup repository. Click Add to add the necessary users and groups to the list.
I actually needed an account with permissions. I am not sure why it is even an option to not give credentials, given that anonymous users are excluded; that would seem to suggest the default setting can never actually work. Based on the sheer number of forum posts that miss this point, I can only assume everyone else assumed it too. A minor note that access permissions are necessary would have been nice. Anyways, I created a Veeam-Agent user on the Veeam-Backup VM, mostly so I can change that user's access rights separately in the future. Then I was able to proceed. It was nice to be reminded to use the Windows domain login format <domain>\<user>, which probably would have taken me some time otherwise; I'm simply more used to Unix style access rights. I clicked Next.

And now I could see the Alexandria-Backup repository. I didn't change anything here and clicked Next.

I didn’t enable a backup cache. There really isn’t a need for quick access or a lot of hard drive space to these VMs for something like this. This may be more useful for the actual workstations.
Next I saw the schedule. I edited it to run at 3:30am, then changed it to On these days.
Then I clicked Days… and deselected days down to Monday, Wednesday, and Friday.
Followed up with Apply.

And came to the total summary. I clicked Finish.

Then I was back at the agent screen with the Job Demo job existing.
I popped over to the Veeam Community Edition server, and the new job showed up.
It was definitely possible to manage the backup job from there. So I have what I wanted: the ability to manage all of the backups from a central location.
Veeam Linux Agent
At first I thought I could just install the Linux agent by right clicking on the box and clicking Install Linux Agent. That does not work on Community Edition, and it would have all the same issues with needing a root account that I hit while attempting to get guest file indexing working. I did not get that working, and I am still not sure what I was missing. But there is a manual guide for installing the Veeam agent on Linux [67]. There was one thing I had done before this: when trying to get guest file system indexing working, I installed mlocate on the system. I think that is still necessary, but it was already installed on the Home Assistant VM for me.
apt install mlocate
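mlocate builds its file database with updatedb, and my assumption is that Veeam’s guest indexing reads that database, so on a fresh install it may be worth running it once by hand:
updatedb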
The first thing that needs to be done is to download the Veeam pkg [68]. I copied it to one of my SMB shares and mounted that on the Home Assistant VM. Then I copied it over and manually installed it with dpkg.
/mnt/data# cp veeam-release-deb_1.0.7_amd64.deb ~/
~# dpkg --install veeam-release-deb_1.0.7_amd64.deb
Now I needed to update the repos so I could search for Veeam.
/# apt update
Hit:1 http://us.archive.ubuntu.com/ubuntu focal InRelease
Get:2 http://us.archive.ubuntu.com/ubuntu focal-updates InRelease [114 kB]
Get:3 http://repository.veeam.com/backup/linux/agent/dpkg/debian/public stable InRelease [7541 B]
Get:4 http://us.archive.ubuntu.com/ubuntu focal-backports InRelease [101 kB]
Get:5 http://repository.veeam.com/backup/linux/agent/dpkg/debian/public stable/veeam amd64 Packages [4754 B]
Get:6 http://us.archive.ubuntu.com/ubuntu focal-security InRelease [109 kB]
Fetched 336 kB in 0s (772 kB/s)
Reading package lists... Done
Building dependency tree
Reading state information... Done
78 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@home-assistant:/# apt search veeam
Sorting... Done
Full Text Search... Done
veeam/stable 4.0.1.2365 amd64
  Veeam Agent for Linux

veeam-release-deb/stable,now 1.0.7 amd64 [installed]
  Veeam Backup for GNU/Linux repository

veeamsnap/stable 4.0.1.2365 all
  Veeam Agent for Linux (kernel module)
At one point when doing this I uninstalled the veeam-release-deb package and forgot that this is what had added the repos. Then I lost track of how I got it working. Embarrassing. I did eventually re-discover what I had done the first time, once I was paying closer attention. Hopefully this post will serve as a reminder for my future needs.
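Speaking of reminders: the repo definition that package drops should live under apt’s sources directory. The exact filename is my assumption, but the URL matches the apt update output above:
cat /etc/apt/sources.list.d/veeam.list
deb http://repository.veeam.com/backup/linux/agent/dpkg/debian/public stable veeam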
Anyways, I installed Veeam.
apt install veeam
This installs the Veeam backup agent, along with the veeamsnap kernel module it depends on. It also created the service veeamservice.service. The standard systemctl commands work for it.
systemctl status veeamservice.service
● veeamservice.service - Veeam Agent for Linux service daemon
     Loaded: loaded (/lib/systemd/system/veeamservice.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2020-12-23 20:47:29 UTC; 1min 13s ago
   Main PID: 117494 (veeamservice)
      Tasks: 5 (limit: 4619)
     Memory: 5.6M
     CGroup: /system.slice/veeamservice.service
             └─117494 /usr/sbin/veeamservice --daemonize --pidfile=/var/run/veeamservice.pid
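If the service ever misbehaves, its logs should be reachable the standard systemd way:
journalctl -u veeamservice.service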
Next I needed to configure a backup job. At first I had hoped that just installing the agent would let Veeam Community Edition connect to it and configure it. I even remember that working at one point, and it may still work, but I had to do a Veeam Community Edition reinstall after not quite understanding that it takes 5 minutes for Veeam to start. The SQL Server database got messed up while I was trying to track down the issue with Windows not accessing the repository. The bottom of this forum page suggests that the issue was previous data in the database interfering with the ability of Veeam Community Edition and the Veeam Windows agent to work together [69]. I think I borked the database, so I executed a reinstall, which wasn’t too bad, but it lost the ability to communicate with the agent.
From here I needed to do a command line job setup. My hope was that the job would become manageable exactly like the Windows agents’ jobs once it was configured (which turned out to be correct). I didn’t find a complete guide, but I was able to piece it together. First I started with a basic config [70].
veeamconfig ui
That took me through the initial setup, which I selected defaults for. I didn’t screencap that though. Next I discovered I needed to add the veeam.internal server before I could select the backup repository.
veeamconfig vbrServer add --name Veeam-Adm --address veeam.internal --port 10006 --login Veeam-Agent --domain DESKTOP-3MOQ30K --password beY+rush2
Backup server has been added successfully.
veeamconfig repository list
Name                           ID                                       Location   Type           Accessible  Backup server
[Veeam-Adm] Alexandria-Backup  {ec946fbd-79bd-4882-84a5-746ea60fee0a}   Veeam-Adm  backup server  true        Veeam-Adm
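The server entry itself can be double checked the same way; as far as I can tell the CLI has a matching list command:
veeamconfig vbrServer list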
Alright, I had to re-enter the Veeam command line configuration.
veeamconfig ui
I hit C to configure the new job.

Then I named the job Home Assistant Job and hit enter.

I left the backup type as the entire machine by hitting enter.

Here I needed to select the Veeam Backup & Replication server. Luckily I had already discovered I needed to add it.

I selected that by pressing spacebar and navigated to next.
I entered all the various credentials, exactly mirroring what was needed on the Windows Agent. Then I hit enter.
Followed by another enter with the Alexandria-Backup repository selected.
Again, I left the defaults here. If I wanted to modify them, I would really want a better interface to interact with. I hit enter for next.

Next I needed to configure the schedule.
I was able to deselect down to just Monday, Wednesday, and Friday with the arrow keys and spacebar. But I could not figure out how to change the time. I tried the number keys, the + and - keys, numlock, and more. I still have no idea how to edit the time box. Eventually I just selected next and moved on, figuring I could change it in the management console.
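For what it’s worth, the Veeam Agent for Linux user guide describes a schedule command that looks like it can set the time from the shell instead of the TUI. I have not verified these flags myself, so treat this as a sketch; the job id would come from veeamconfig job list:
veeamconfig schedule set --jobId <job_id> --at 03:30 --weekdays Mon,Wed,Fri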
I finished and let it start a new backup.
Which it did.
And it eventually finished and I could see and manage it in the Veeam VM.
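The job and its runs can also be checked from the Home Assistant VM itself:
veeamconfig job list
veeamconfig session list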

Conclusion
Now I have all of my core infrastructure here being backed up by Veeam Community Edition. UrBackup and Bacula are configured to provide extra backups, since there are limits on what Veeam’s free editions will back up. I am satisfied with this backup solution.
I do want to briefly talk about RPO and RTO: Recovery Point Objective and Recovery Time Objective. The Recovery Point Objective is essentially the maximum time between backups that the system can tolerate, i.e. how much data, measured in time, you can afford to lose. For my case I am targeting something like 48 hours. Technically I would have to report this as 72 hours, since the gap between Friday’s and Monday’s backups is 72 hours. That is how I configured the Veeam backups. In a more professional environment this would probably not be good enough; Veeam and other modern services can offer a near real-time RPO.
The Recovery Time Objective is how long it takes to restore back to whatever point is necessary. As this is related directly to how quickly human operators make that decision (usually), it is a little more difficult to measure. This is where I think Bacula gives me the most trouble. It is very complicated to do practically anything in it. I would need to spend an afternoon working through anything but a basic file restore, and even then I am not certain I would get it all figured out. This is more about training than capability, and it is why I think I am dropping Bacula: it requires a great deal more training to keep it working, and I don’t want to spend time every 3-6 months making sure I am still up to date on how it actually works.
UrBackup and Veeam are much more intuitive to work with, if a little dated in UX design. I can restore specific files within minutes, which I know because I tested this on each of them. To be fair, that basic level is pretty easy on Bacula as well. However, a full system restore was really easy on Veeam and not so easy on UrBackup: I had the files, but I would need to redeploy the system, connect it, and then do a full file restore, which I’m not sure how to do exactly. I would need to look into that as well. Bacula I don’t even want to start on; given how difficult even basic things are there, I decided it wasn’t worth my time to work through.
The only other point worth considering is the kind of backup. There are three big ones [71]: the full backup, where all files are saved; the incremental backup, where only changes since the last backup of any kind are stored; and the differential backup, where all changes since the last full backup are stored. The latter two are about saving disk space.
In my case, I do not have near unlimited disk space; I really don’t want to use more than 6-7 TB on all of the backups. Luckily lz4 compression in ZFS helps, plus Veeam does deduplication, where identical data is identified and only one copy is stored. Think of something like multiple installs of Linux across my systems (all of them are Ubuntu 20.04): only one copy of the common files is stored, since they are duplicates. This reduces space a lot, usually on the order of 66%. I’m not sure exactly how that is commonly measured, but I mean that a 30GB backup takes 10GB of actual space.
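The ZFS side of those savings can be checked directly; the dataset name here is hypothetical, so swap in the real one:
zfs get compressratio,logicalused,used tank/backups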
With that I decided I really only want 1 full backup of my 6TB workstation. It compresses and deduplicates down to 4TB, which isn’t bad. I will keep incremental backups around for a bit: 2 months before I drop the original full backup and take a new one. For UrBackup I have it taking an incremental backup every 24 hours and keeping 7 of them, so incrementals go back a week. Then I have a full backup every month, kept for 6 months. This isn’t a lot of space, maybe 40GB a month.
The rest of the Veeam jobs do a full backup every month, kept for 6 months, and an incremental backup every Monday, Wednesday, and Friday. Veeam only keeps the last 7 incrementals, or about 2 weeks’ worth.
Here is where Bacula is the most difficult. Its entire recycle and prune apparatus is another system to learn, and I think it needs a UX update to make it easy to understand. I tried to delete half the jobs on vol-0002. Then I marked the volume as full, marked it for pruning, and... nothing happened. It still takes up the full space. I’m sure someone knows what I missed here, but I’m tired of everything taking days to work through.
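From what I have read since, the likely catch is that prune respects retention periods, so it only removes catalog records that have already expired, and even then the disk space is not reclaimed until the volume is recycled or truncated; purge, by contrast, ignores retention. If I ever revisit this, something along these lines in bconsole should force the issue; the pool and storage names are placeholders for whatever the real ones are:
purge volume=vol-0002
truncate volstatus=Purged pool=<pool> storage=<storage>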
As can be seen, keeping all of these around takes up a fair bit of disk space. I am not a business; I will not be constantly churning through these backups. Because of that, I intend to come back and examine space usage in a few months and make changes. I probably overestimated what I need here, but I also want to make sure I have an idea of how all of these interact. This will be streamlined to take less space.
In the end, I would like to try other options, but this is pretty full-featured for a basic home lab. When I run up against the limits of Veeam Community Edition, I can use basic UrBackup clients. I am pleased here.
[2] https://support.wdc.com/downloads.aspx?p=171
[3] https://www.acronis.com/en-us/products/backup/
[4] http://www.vmwarearena.com/top-5-best-free-backup-software-for-vmware-and-hyper-v-infrastructure/
[5] https://www.vladan.fr/top-3-free-backup-software-for-vmware-and-microsoft-and-their-limitations/
[6] https://www.itsmdaily.com/top-backup-solutions-vmware-hyperv/
[8] https://opensource.com/article/19/3/backup-solutions
[9] https://en.wikipedia.org/wiki/List_of_backup_software
[10] https://www.datanyze.com/market-share/backup-and-recovery--211
[12] https://www.veeam.com/virtual-machine-backup-solution-free.html
[13] https://helpcenter.veeam.com/docs/backup/vsphere/required_permissions.html?ver=100
[14] https://helpcenter.veeam.com/docs/backup/vsphere/credentials_manager_linux.html?ver=100
[15] https://helpcenter.veeam.com/archive/backup/95u4/vsphere/encryption_standards.html
[17] https://www.veeam.com/kb2061
[18] https://medium.com/risan/upgrade-your-ssh-key-to-ed25519-c6e8d60d3c54
[20] https://forums.veeam.com/vmware-vsphere-f24/warnings-when-backing-up-linux-vms-t41862.html
[21] https://phpraxis.wordpress.com/2016/09/27/enable-sudo-without-password-in-ubuntudebian/
[22] https://helpcenter.veeam.com/archive/backup/95/vsphere/credentials_manager_linux.html
[23] https://forums.veeam.com/vmware-vsphere-f24/warnings-when-backing-up-linux-vms-t41862.html
[24] https://www.veeam.com/windows-endpoint-server-backup-free.html
[25] https://forums.veeam.com/veeam-agents-for-linux-unix-f41/agent-for-freebsd-t61367.html
[26] https://en.wikipedia.org/wiki/Comparison_of_backup_software
[27] https://www.opensourceforu.com/2020/01/setting-up-bacula-the-modular-backup-solution/
[28] https://www.bacula-web.org/
[29] https://www.bacula.lat/baculum/?lang=en
[30] https://en.wikipedia.org/wiki/Baculum
[31] https://www.bacula.org/9.4.x-manuals/en/console/Baculum_API_Web_GUI_Tools.html
[33] https://www.bacula.lat/baculum/?lang=en
[35] https://www.bacula.org/5.1.x-manuals/de/main/main/Client_File_daemon_Configur.html
[37] https://www.bacula.org/9.0.x-manuals/en/main/Configuring_Director.html#SECTION001820000000000000000
[38] https://www.bacula.org/9.0.x-manuals/en/main/Storage_Daemon_Configuratio.html
[39] https://www.justinsilver.com/random/fix-pkg-on-freenas-11-2/
[40] https://dan.langille.org/2015/01/10/bacula-on-freebsd-with-zfs/
[41] https://forums.freebsd.org/threads/install-older-versions-of-a-port-or-package.49934/
[42] https://www.bacula.lat/script-installation-bacula-community-9-x-official-packages/?lang=en
[43] https://www.bacula.org/bacula-binary-package-download/
[44] https://bacula.org/downloads/
[45] https://www.linkedin.com/pulse/node-vs-apache-lighttpd-nginx-jeff-poyzner/
[46] https://www.duplicati.com/
[47] https://www.urbackup.org/
[48] https://urbackup.atlassian.net/wiki/spaces/US/overview
[49] https://www.urbackup.org/download.html#server_ubuntu
[50] https://hub.docker.com/r/uroni/urbackup-server
[51] https://forums.urbackup.org/t/getting-urbackup-to-backup-to-synology-nas/6701
[52] https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nfs-mount-on-ubuntu-20-04
[54] https://www.kernelcrash.com/blog/nfs-uidgid-mapping/2007/09/10/
[56] https://www.truenas.com/community/threads/sharing-nfs-smb-is-going-to-kill-me.85586/#post-592539
[57] https://forums.urbackup.org/t/questions-problems-with-urbackup-server-in-docker/7382/2
[58] https://hub.docker.com/r/uroni/urbackup-client
[59] https://www.urbackup.org/download.html
[60] https://forums.urbackup.org/t/restore-option-isnt-available-via-the-webinterface/3118
[61] https://www.urbackup.org/faq.html
[62] https://www.freshports.org/archivers/urbackup-client/
[63] https://www.veeam.com/windows-endpoint-server-backup-free.html
[65] https://helpcenter.veeam.com/docs/agentforlinux/userguide/integrate_permissions.html?ver=40
[67] https://helpcenter.veeam.com/docs/agentforlinux/userguide/installation_process.html?ver=40
[68] https://www.veeam.com/linux-backup-free-download.html
[69] https://forums.veeam.com/veeam-agent-for-windows-f33/cannot-configure-windows-agent-t48131.html
[70] https://helpcenter.veeam.com/docs/agentforlinux/userguide/backup_job_launch.html?ver=40