I didn’t start out planning to write this article. As I mentioned in passing last post, I created a dataset and NFS share for running virtual machines. At the time, I didn’t seriously consider setting it up for a while because I expected it to be a bit of a bear to work through. I was not wrong. However, when I was researching backup technologies to use with this home lab, it became apparent that vCenter Server is actually useful for a lot of them. I am thinking that this is perhaps how the storage APIs are accessed, or perhaps it is about some scheduling technology.
In any event, I came to the conclusion that I needed to get vCenter Server installed now. I know from a previous attempt to install it that even the tiny installation will take up ~500GB of drive space, and I am down to only ~800GB available on my NVMe datastore. So, I decided that it was time to get my Storage VM to serve some drive space back to the ESXi hypervisor for some of the less critical VMs. Since I had started by creating NFS shares for virtual machines, I decided to attempt that first.
Guest NFS Host ESXi
Alright, so I know from past experience that the old versions of NFS have no security whatsoever. This knowledge was reinforced when I migrated data from my old ZFS NAS to the new storage VM ZFS NAS. That was not acceptable to me. I have given out passwords and Wi-Fi access to friends and family that have visited. Not that I think they are untrustworthy or anything, but more to just nail home the point that not every device that has access to my network should be inherently trusted. If someone gets access, I prefer for it to be because I made a mistake, not someone else.
Alright, so I embarked on figuring out how to add security to NFS. I attempted to just add the NFS share to VMWare to verify that the current share could work. I noted that when I selected NFS 4, it had a username and password setting.
So I, mistakenly as I realized later, believed that NFS had added a basic authentication scheme. Thinking I had a bead on how to fix this, I started by creating an account in TrueNAS called esxi. Then, I gave it a random password so I would have credentials to provide. Following that up, I gave it access to the VirtualMachines share.
I went to the Storage -> Pools page. I clicked on the snackbar next to the VirtualMachines dataset and selected Edit Permissions.
Then I clicked on Apply Permissions recursively to make sure that these will apply to all files and folders.
I confirmed this selection. This is an empty dataset, so changing permissions shouldn’t affect anything.
I clicked apply permissions to child datasets. I don’t think that strictly does anything right now, but if I were to decide to create a child dataset, I want these to apply.
The first time I did this, it complained that I needed at least one inheritable ACL. That is the bottom option on each of the ACL sections.
Alright, with that done I went ahead and attempted to mount the NFS share in VMWare. I started by navigating to the storage section. Then I selected New Datastore.
Then I selected Mount NFS datastore.
On the next screen I filled out the basic information for the NFS share.
After clicking on the NFS 4 option, I filled out the esxi user credentials and clicked Next.
Followed by the final review page and Finish.
Then I got the following error.
I should note that I recreated this error with a VMachines dataset, not the original VirtualMachines dataset I first encountered it with. This was a permissions error.
I went and checked the NFS configuration options in TrueNAS. Starting at Services, I clicked the edit option under Actions.
Then I checked the box for Enable NFSv4, which I thought was a boneheaded move on my part. I should have checked that before.
And the error persisted.
It was at this point that I actually sat back and did some research on it. I had made a mistake and gotten way ahead of myself on this whole NFS mounting. Because I had worked with NFS in the past, I mistakenly believed I understood what I was dealing with. I read through the TrueNAS manual on NFS shares [1]. I went ahead and attempted to mount the NFS share from the DNS machine. A basic Linux mount should be easier to work through errors than learning the ESXi method at the same time.
Again, using VMachines to demonstrate, I created a new mount point and attempted to mount it.
root@nameserver:/mnt# mkdir /mnt/tmp
root@nameserver:/mnt# chmod 777 /mnt/tmp
root@nameserver:/mnt# mount -t nfs -o username=esxi,password=<SECRET> alexandria.internal:/mnt/Stacks/VMachines /mnt/tmp
mount.nfs: an incorrect mount option was specified
So I tried without a password.
root@nameserver:/mnt# mount -t nfs alexandria.internal:/mnt/Stacks/VMachines /mnt/tmp
mount.nfs: access denied by server while mounting alexandria.internal:/mnt/Stacks/VMachines
I found a couple of articles discussing mounting with a username and password. The first described a need to match UIDs (user IDs) on both machines [2], and another couple described how NFS simply doesn’t work this way [3] [4]. One even described the issue as primarily a matter of folder permission options [5]. I worked through a few more pages that didn’t really add any more information than I had found here.
I did come across this guide for mounting an NFS share on an older version of FreeNAS [6], which mentioned adding a Maproot User and Maproot Group. So I did this in the NFS options section; it is under Advanced.
Which worked!
I got it mapped in. However, I was a little suspicious. It turns out that it wasn’t so much authenticating the username and password as simply mapping the root user of any client that mounts the share to the esxi user, no password required. That is not what I was going for.
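For reference, this is roughly what the working mount looked like from the Linux side once the Maproot settings were in place (same paths as the earlier attempts; exact output omitted):
# no credentials needed any more; the server just maps the client's root to the esxi user
mount -t nfs -o vers=4 alexandria.internal:/mnt/Stacks/VMachines /mnt/tmp
touch /mnt/tmp/testfile
ls -l /mnt/tmp/testfile   # on the TrueNAS side the file shows up owned by esxi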
After some more generic searching about just adding authentication, I found a few mentions of NFS 4 with Kerberos [7] [8]. Kerberos is a time sensitive, ticket based authentication system. I’ve seen it working with LDAP a fair bit. I tend to think of it as an SSO utility, though I don’t think that’s its primary purpose; it’s more a side effect of how it works.
So, here’s the thing. After reading through some posts on this [9] [10], it is clear that setting up Kerberos is not a trivial task. I found a guide to step through [11]. This looks like quite a task in and of itself.
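For context, this is roughly the shape a Kerberos-secured NFSv4 mount would take once a KDC, principals, and keytabs are all in place; the realm and principal names here are placeholders, not anything I actually set up:
# obtain a ticket from the KDC first (realm/principal are made up for illustration)
kinit esxi@STORAGE.INTERNAL
# then request Kerberos security on the mount itself
# (sec=krb5i adds integrity checking, sec=krb5p adds encryption on the wire)
mount -t nfs4 -o sec=krb5 alexandria.internal:/mnt/Stacks/VMachines /mnt/tmp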
I took stock of what I had learned up to this point:
- NFS 3 doesn’t support any kind of authentication. NFS 4 supports limited options.
- I can get NFS mounts working without authentication.
- Authentication for NFS 4 mounts works through two methods: SYS, which I am having trouble getting information on how it works, and which I suspect is related to the UID matching method above, and Kerberos, a non-trivial enterprise grade authentication and access management tool.
- Kerberos is not trivial to set up and get working. I also think it is not trivial to maintain and keep working.
- NFS is completely insecure, and anyone on my network can get access to all the data being transferred. It could be secured using something like an SSH tunnel.
I kinda came to the conclusion I didn’t like where this was going. This whole method is a testament to too little planning on my part. That being said, I did learn a great deal about NFS and how it works. I now know exactly when and how I would use it in the future and even the high level steps necessary to secure it. I can even get it working with ESXi, which is a win of sorts. At the time, I was not really willing to accept the lack of security that would involve.
It is definitely acceptable to run an NFS share in a home lab like this. The reality is that I am not really in danger of live hackers working in my home environment (or am I?). The criticality of the data isn’t quite what it would be for a corporation. I could easily just say, this is good enough, and go with the NFS 4 mounts I have managed to get working. However, I am aware that there is another method that ESXi and TrueNAS both support for exposing more hard disk space: iSCSI.
NFS vs iSCSI
iSCSI basically presents some of the available pool space to the home network as just a block device (that’s a fancy way of saying hard drive; there’s more to it, but I don’t really think that’s important to understanding what is going on). It has set parameters upon initialization, like block size, total space available, etc. It is possible to edit the space after the fact, though I will need to be careful about shrinking the space rather than increasing it. iSCSI also supports a more basic authentication method called CHAP (Challenge-Handshake Authentication Protocol). This appears to involve just a user and a shared secret (it works just like a user and password in practice). CHAP also enables periodic re-checking of the connection to make sure that it hasn’t been hijacked after initiation. That is pretty appealing at a base level.
I was hoping to use NFS for a basic reason: it would enable me to browse the files without needing to decode the ESXi datastore format. I think, at least given the way it appears in ESXi, that it didn’t reserve a large space and just used what it saw.
There are more reasons to worry about NFS though. An old article here covers some basic performance considerations between the two [12]. NFS 3 doesn’t support multipathing, although NFS 4.1 does. A verbose version of some of my mount commands shows that 4.1 succeeds against TrueNAS; whether that is because of backwards compatibility or not isn’t clear to me. I suspect it is, but cannot confirm it. NFS 4.0 was a very brief version, and it would be odd to have that version supported. In any event, multipathing shouldn’t help me. That is about finding multiple physical paths to the storage device. Think of a large vSAN cluster, where multiple physical links between the host and the end storage device can exist. This home lab really only has one.
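For anyone wanting to double check what actually got negotiated rather than reading verbose mount output, the kernel reports the version it settled on for each NFS mount; something like this on a Linux client is enough:
# mount without pinning a version and let the client negotiate the highest one
mount -t nfs alexandria.internal:/mnt/Stacks/VMachines /mnt/tmp
# the vers= field shows what was negotiated (nfsstat -m from nfs-common works too)
grep nfs /proc/mounts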
I also found a forum post where someone had run a fair number of tests comparing NFSv3, NFSv4.1 and iSCSI [13]. They were using just a single server, which should be very close to my actual case here. I learned a great deal about performance comparisons here. Most posters expected iSCSI to perform better than NFS. Though from all of my research, they should be comparable. I also took note of what was discussed on a VMWare forum comparing the two. The statement was that iSCSI is less reliable because VMDKs are not particularly reliable in a power outage [14]. Given my setup with enterprise SSDs and their power loss protection, I’m not as convinced that this is an issue for me. I do, however, admit that I don’t have a huge amount of experience with this case.
Ultimately, I kinda feel this is all academic. My main concern is the lack of security for NFS. My choices still remain the same, insecure NFS 4.1 or secure iSCSI. Performance comparisons between the two seem to lead me to believe iSCSI should be a little ahead, but not dominant. Reliability concerns are more about the actual storage pool rather than the specific protocol.
I also don’t intend to use these for particularly intensive VMs; I just want to run some of the less used ones off of it. Recoverability is probably more important to me than outright reliability. By that, I mean how difficult it is to recover from an issue rather than preventing all possible issues. I know, I know, an ounce of prevention is worth a pound of cure. But take the Certificate Authority as an example: I use it once every 3 months. One snapshot every month makes any issue a trivial fix.
I am going to attempt iSCSI. I think that getting it working is probably going to be easier in the short term than getting Kerberos set up and working for ESXi and TrueNAS.
Guest iSCSI Host ESXi
Okay, there is a pretty good guide for this by iXsystems on the TrueNAS side [15]. I’ll step through how I did it, but it closely matches theirs on this side. On the ESXi side I had to play around with a lot of things.
I started by going to the Storage -> Pools page.
Then I filled out VirtualMachines for the name. The screenshot shows only VirtualMachine; I figured that out later and fixed it. I gave it 2 TB of space to start with and changed the block size to 32KB, to match the performance improvements I learned about in my Storage VM series.
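I did this through the TrueNAS UI, but for the curious, the rough command line equivalent of creating that zvol would be something like the following; the sparse flag is my assumption, not necessarily what the wizard does:
# create a 2 TB sparse zvol with a 32K volblocksize under the Stacks pool
zfs create -s -V 2T -o volblocksize=32K Stacks/VirtualMachines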
Next I navigated to Sharing -> Block Shares (iSCSI). I clicked on Wizard in the top right.
This is the second time I stepped through it. I named it Virtual Machines the first time, which is not allowed, though it doesn’t say immediately that I made that mistake; it made me step through the whole process and then brought me back to the beginning to fix it. That was annoying.
I named it virtual-machines and gave it a Device type, though I think some of the recommendations on the forums say to select File type. Next, I assigned it the Stacks/VirtualMachines zVol for the actual Device. I left the default options selected for Sharing Platform. I finalized by clicking Next.
Here I configured the Authentication Method. First I selected Create New, which opened up new options. For the Discovery Authentication Method I selected CHAP. I could have done Mutual CHAP, which has that double checking happen from both sides (client and server both authenticating that the other side hasn’t been hijacked), but that seems like overkill for this. That forced me to create a new Authentication Group. I gave it the group ID of 1002, same as the VirtualMachines group ID from when I created that. The user is esxi. I assigned a random shared secret.
The IP address selection, I believe, references which IP the iSCSI service will execute the Bind and Listen system calls on. Those are low level, old POSIX (Portable Operating System Interface) calls for networking. Whenever a server wants to accept new connections, it can specify an IP address and port to Bind to, then the Listen call lets it start accepting new connections. I left 3260 as the default port. Then I clicked Next.
I left the default initiators. However, I did add Authorized Networks as my local network space. This is a minor amount of security. NFS technically supports this, and local names as well. These are not difficult to fake. Static IPs, even static MACs, are not difficult to spoof. Even I know how to do that. I clicked Next.
This is the final review page where I can see all my selections. I clicked Submit.
This brings me back to the Portals page.
I then navigated to the Services page and toggled iSCSI on. It is strange to me that at the end of configuring an iSCSI share it doesn’t actually enable the service or ask if it should.
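A quick sanity check that the portal is actually up once the service is toggled on is to poke the default iSCSI port from another machine on the network, for example from the name server VM:
# 3260 is the default iSCSI port configured in the portal above
nc -zv alexandria.internal 3260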
Alright, on the ESXi side it took a bit of doing to figure out the exact method, but in the end I was able to deduce what to do without much fanfare. First I navigated to the Storage -> Adapters section.
Then I clicked on Software iSCSI, which opened up the configuration box. I selected Use CHAP under CHAP authentication. Then I clicked the arrow to expand that box and input the Name esxi and the Secret.
I then added the vmk0 Management Network port. This is the ESXi host connection. And the Dynamic target of alexandria.internal.
It lets me know I successfully configured it.
I then navigated to the devices section and…
I don’t see the new iSCSI target… At first I tried copying in the static target with the name from TrueNAS, the IP address, and the port. That did not work correctly. It appears to work far better if I plug in the static IP address of the Storage VM into the dynamic target section.
And there we go.
I’m not quite sure why its status is Normal, Degraded. I checked on TrueNAS, which says everything is running fine. Perhaps I have a slight misconfiguration somewhere? In any event, it worked just fine when creating a new datastore out of this disk space.
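For what it’s worth, the same configuration can roughly be done from the ESXi shell with esxcli. The adapter name vmhba65 below is a placeholder for whatever the software initiator shows up as, and the target address is the Storage VM’s IP:
# enable the software iSCSI initiator (the UI's "Software iSCSI" toggle)
esxcli iscsi software set --enabled=true
# find the software adapter name, e.g. vmhba65
esxcli iscsi adapter list
# set one-way CHAP credentials on the adapter
esxcli iscsi adapter auth chap set --adapter=vmhba65 --level=required --direction=uni --authname=esxi --secret=<SECRET>
# add the dynamic (send targets) discovery address; the static IP worked better for me than the hostname
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=<storage-vm-ip>:3260
# rescan so the new device shows up
esxcli storage core adapter rescan --adapter=vmhba65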
Storage Network
Despite the success of getting the iSCSI connection working between the Storage VM and the ESXi host, I do have an issue here. The Storage VM has a dedicated 10G port, but not a VM network port. That means that every iSCSI call actually traverses the physical network instead of staying within the host.
This is a minor point I don’t think I mentioned during my testing on the Storage VM part 3. When I had both a passthrough 10G port and a VM Network adapter, TrueNAS was performing some asymmetric routing. I would mount the shares on the 10G port’s IP address, but when I would transfer or try to use the share, it would send the data out on the VM Network’s adapter. That was not ideal, as the VM Network’s adapter is 1G, plus it is used by many other VMs. I wanted a dedicated port for a reason. At the time, I simply removed the VM Networking adapter. Problem solved, or so I had thought.
In this case, when presenting space back to the ESXi host, I would end up adding quite a bit of unnecessary latency by going back out to the network switch, then right back to the same device on a different port. Not to mention it would be bandwidth limited by the 1G host port. VMWare almost certainly optimizes networking traffic between VMs so they don’t have the physical limitations of the ports they are connected to. This was the primary reason I set out to create a second VM network. It was also recommended in the VMWare iSCSI guides that I create a second network for VMkernel NICs (Network Interface Controllers) [16] [17] [18]. I made quite a few minor mistakes while trying to figure out how VMWare’s networking actually works. I’ll briefly cover some of the minor ones while describing the creation of a new network.
First I navigated to the Networking section on ESXi. I see the VM Network, Management Network, the VM Storage Network, and Storage Management Network that I have previously created. I will be creating a Demo Storage Network here.
When I first did this I tried creating a new port group on vSwitch0, which is the default switch created by ESXi during initialization. I don’t think I quite understood yet how these virtual switches actually relate. A port group is essentially a VM connection group; it is something that is selectable from the VM Network ports on the VM hardware screen. I will cover that later, but the first thing I notice is I have only the two switches.
I need to create another switch if I want to isolate this network from the other networks. So I navigated to Virtual switches, then clicked Add standard virtual switch.
That brings up the basic switch screen.
I set the switch name to Demo Storage Switch. Then I changed the MTU to 9000. MTU stands for Maximum Transmission Unit, the largest packet of data that will be sent at one time. This relates to something I actually learned about a few years ago called Jumbo Frames [19]. I learned about them for Napp-it and for High Frequency Trading (HFT). Jumbo Frames allow for larger than default sends of data. This is strongly preferred for storage, which is likely to want to send and/or receive more than 1500 bytes at once, 1500 being the default configuration of most NICs; most files are larger than that. For HFT, I needed to understand these for the quote stream. It is often, and likely, that multiple different order updates will be sent out at the same time, and a Jumbo Frame allowed us to receive the updates faster. I finished by clicking Add.
The Demo Storage Switch is now present.
It was now possible to add the correct port group. I navigated to the port groups page again and clicked on Add port group.
Now I could type in the name Demo Storage Network and select the right virtual switch, the Demo Storage Switch.
I now see the Demo Storage Network.
The next thing I need to do is add a NIC for ESXi. Remember from earlier, I need a vmk<x> NIC. These are known as VMkernel NICs. So I navigated to that section. Then I clicked Add VMkernel NIC.
I started by naming the port group for the VMkernel NIC Demo Storage Management Network.
So, the first time I tried creating this I selected the Demo Storage Network. This doesn’t work right. It appears to just override all of the other NICs attached. Whenever I had a VMkernel NIC on the same port group as VM NICs, it seemed to just fail to work: it wouldn’t boot the VM NICs, it complained about everything, and the network no longer shows as an option for VM network ports. I think that VMWare restricts port groups to prevent VMs from being on the same port group as VMkernel NICs. Maybe this is some sort of protection mechanism; I am not sure. Anyway, I needed to make a second port group for the new VMkernel NIC.
I then selected the Demo Storage Switch. This will let the VMs have access to this VMkernel NIC through the virtual switch.
I also set the MTU to 9000 for Jumbo Frames. These need to be enabled along the entire path or they won’t be used; the effective MTU of a path is the minimum MTU of every link in it.
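Once everything on this network has an address (that comes later), there is a simple way to confirm from the ESXi shell that jumbo frames actually work end to end; the interface and target here are placeholders for the new VMkernel NIC and the storage VM:
# -d sets don't-fragment, and 8972 is a 9000 byte MTU minus 28 bytes of IP/ICMP headers,
# so this only succeeds if every hop in the path supports the larger frames
vmkping -I vmk1 -d -s 8972 <storage-vm-ip>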
I also briefly looked into the TCP/IP stack. There are three options: Default, vMotion, and Provisioning [20]. These are just optimizations for particular VM operations. vMotion is VMWare’s live migration feature, for moving live VMs to other hosts with no downtime. Provisioning is for snapshotting, cloning, and other provisioning based operations. Default doesn’t try to optimize any particular operation. I briefly considered using Provisioning here since that is my basic operation, but I didn’t want to add extra complications, so I went with Default.
I followed by clicking Create.
This created the VMkernel NIC I would need for iSCSI. It also creates a second port group for the switch.
I can see a nice graphical representation of all of the connections to a switch by going to the switch page.
Now I want to add an actual VM to this network. So I navigated to the storage VM and clicked Shut down to be able to edit its hardware.
Next, there are some warning icons. I ignored those for now; they don’t mean anything in this case. I clicked Add network adapter.
This time it auto-selected the new Demo Storage Network I just created.
I clicked this to show what networks are available. Note that none of the management networks actually show up here, only port groups without a VMkernel NIC.
I clicked Save and now this VM has access to the Demo Storage Network through a new network NIC.
So oddly enough, this probably makes NFS safe now. A completely private network with no access outside the host should be secure. Especially since one of the VMs or the ESXi box would need to be compromised already to see any of its traffic. I realized this, but I am pretty far along the iSCSI route now, so I am going to complete it from here.
The next thing I had to worry about was that none of these new VM connections or ESXi connections actually had an IP address. VMWare has a knowledge base article on creating a private VM network, that is very similar to what I just went through, save the VMkernel NIC. They recommend a static IP configuration and assignment [21]. That is not ideal. I learned a long time ago that static assignments basically make it much more difficult to make changes later, as all the configuration is distributed instead of centralized. I wanted to centralize IP assignments.
DHCP and DNS for the Storage Network
Alright, for the storage network I wanted to create a DHCP server. Unfortunately, VMWare switches don’t have this ability [22]. At the same time I also wanted to add DNS for this private network, for the same flexibility as I have on the main network, since the storage system itself might eventually need to be replaced. Despite the fact that I needed to plug the IP address into iSCSI to get it actually working, I do not believe this will be the only VM that ends up on this network.
Luckily, I have a completely configured DNS server already up and running! I just needed to clone it. Now, I know that cloning is something that vCenter actually does and is in charge of, but this whole process is so I can actually deploy vCenter server. I need a method that doesn’t involve vCenter server.
There is a big blog post about this exact case [23]. I followed the first option: I shut down the DNS VM and copied its vmdk and vmx files to a new directory. Then I registered it as an existing VM. Afterwards it asked if it was cloned or migrated; I selected cloned. Then I renamed it to StorageNameServer_DHCP. The copy worked!
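For anyone who would rather do the copy from the ESXi shell than the datastore browser, the rough shape of it looks like this; the datastore and directory names are illustrative, not my exact paths:
# copy the VM's files to a new home; vmkfstools does a proper copy of the virtual disk
mkdir /vmfs/volumes/datastore1/StorageNameServer_DHCP
cp /vmfs/volumes/datastore1/nameserver/nameserver.vmx /vmfs/volumes/datastore1/StorageNameServer_DHCP/
vmkfstools -i /vmfs/volumes/datastore1/nameserver/nameserver.vmdk /vmfs/volumes/datastore1/StorageNameServer_DHCP/nameserver.vmdk
# register the copy; on first boot ESXi asks whether it was moved or copied - answer copied
vim-cmd solo/registervm /vmfs/volumes/datastore1/StorageNameServer_DHCP/nameserver.vmx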
Alright, with that set up I modified the name server component first. I decided to create the storage network on 10.0.2.0/24. I also decided to call this the .storage network so all names will have that postfix (think alexandria.storage, esxi.storage, etc.). This is mostly so it is easy to differentiate between this network’s IPs and the main network’s IPs. Since I had gained a fair amount of familiarity with these details, I changed the config files in /etc/bind/ as follows:
// This is the primary configuration file for the BIND DNS server named.
//
// Please read /usr/share/doc/bind9/README.Debian.gz for information on the
// structure of BIND configuration files in Debian, *BEFORE* you customize
// this configuration file.
//
// If you are just adding zones, please do that in /etc/bind/named.conf.local
include "/etc/bind/named.conf.options";
include "/etc/bind/named.conf.storage";
include "/etc/bind/named.conf.log";
options {
    directory "/var/cache/bind";
    // If there is a firewall between you and nameservers you want
    // to talk to, you may need to fix the firewall to allow multiple
    // ports to talk. See http://www.kb.cert.org/vuls/id/800113
    // If your ISP provided one or more IP addresses for stable
    // nameservers, you probably want to use them as forwarders.
    // Uncomment the following block, and insert the addresses replacing
    // the all-0's placeholder.
    recursion yes;
    allow-recursion { 10.0.2.0/24; };
    listen-on { 10.0.2.4; };
    allow-transfer { none; };
    forwarders {
        // 192.168.2.1;
        8.8.8.8;
        4.4.4.4;
    };
    //========================================================================
    // If BIND logs error messages about the root key being expired,
    // you will need to update your keys. See https://www.isc.org/bind-keys
    //========================================================================
    dnssec-validation auto;
    listen-on-v6 { any; };
};
//
// Do any storage configuration here
//
// Consider adding the 1918 zones here, if they are not used in your
// organization
//include "/var/lib/bind/zones.rfc1918";
zone "storage" {
type master;
file "/var/lib/bind/db.storage";
allow-transfer { 10.0.2.4; 127.0.0.1; };
check-names warn;
};
zone "2.0.10.in-addr.arpa" {
type master;
file "/var/lib/bind/db.2.0.10"; #10.0.2.0/24 subnet
allow-transfer { 10.0.2.4; 127.0.0.1; };
};
Next I configured the name databases in /var/lib/bind/.
;
; BIND reverse data file for local loopback interface
;
$TTL 604800
@ IN SOA storage. root.storage. (
2 ; Serial
604800 ; Refresh
86400 ; Retry
2419200 ; Expire
604800 ) ; Negative Cache TTL
;
; name servers - NS records
    IN NS stor-ns1.storage.
    IN NS stor-ns2.storage.
; PTR Records
XX IN PTR stor-ns1.storage.
XX IN PTR stor-ns2.storage.
XX IN PTR alexandria.storage.
XX IN PTR esxi1.storage.
$ORIGIN .
$TTL 604800 ; 1 week
storage IN SOA storage. root.storage. (
1 ; serial
604800 ; refresh (1 week)
86400 ; retry (1 day)
2419200 ; expire (4 weeks)
604800 ; minimum (1 week)
)
    NS stor-ns1.storage.
    NS stor-ns2.storage.
$ORIGIN storage.
alexandria A 10.0.2.XX
esxi1 A 10.0.2.XX
stor-ns1 A 10.0.2.XX
stor-ns2 A 10.0.2.XX
To make this easier to remember, all of these end with the same number on both networks.
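Once BIND has been reloaded and the interface actually has its address (that comes a little later in this post), a couple of dig queries against the new listen address are a quick way to confirm that both the forward and reverse zones answer; the host address here is whichever one alexandria actually receives:
# forward lookup against the storage-network name server
dig @10.0.2.4 alexandria.storage +short
# reverse lookup for the same address
dig @10.0.2.4 -x 10.0.2.XX +short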
With DNS complete I needed to install and configure the DHCP server. It looks like Ubuntu recommends the isc-dhcp-server package. I went ahead and installed it with:
sudo apt install isc-dhcp-server
Then I configured the file in /etc/dhcp/dhcpd.conf.
subnet 10.0.2.0 netmask 255.255.255.0 {
    range 10.0.2.2 10.0.2.250;
    option domain-name-servers 10.0.2.4;
    option domain-name "storage";
    option subnet-mask 255.255.255.0;
    default-lease-time 600;
    max-lease-time 7200;
}
Next I needed to know the name of the interface facing the storage network. Running ifconfig didn’t actually show it.
# ifconfig
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.2.139 netmask 255.255.255.0 broadcast 192.168.2.255
inet6 fe80::20c:29ff:fe3c:1675 prefixlen 64 scopeid 0x20<link>
ether 00:0c:29:3c:16:75 txqueuelen 1000 (Ethernet)
RX packets 44956 bytes 5296914 (5.2 MB)
RX errors 0 dropped 1882 overruns 0 frame 0
TX packets 1094 bytes 185873 (185.8 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
inet6 ::1 prefixlen 128 scopeid 0x10<host>
loop txqueuelen 1000 (Local Loopback)
RX packets 1119 bytes 114835 (114.8 KB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1119 bytes 114835 (114.8 KB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
That was… odd. Turns out it was not actually up because it, too, couldn’t get a DHCP based IP address. I was able to find it with:
# ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state DOWN mode DEFAULT group default qlen 1000
link/ether 00:0c:29:3c:16:75 brd ff:ff:ff:ff:ff:ff
3: ens192: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
link/ether 00:0c:29:3c:16:7f brd ff:ff:ff:ff:ff:ff
With that I was able to identify it as the ens192 interface. Then I edited the /etc/default/isc-dhcp-server file to add the interface.
# Defaults for isc-dhcp-server (sourced by /etc/init.d/isc-dhcp-server)
# Path to dhcpd's config file (default: /etc/dhcp/dhcpd.conf).
#DHCPDv4_CONF=/etc/dhcp/dhcpd.conf
#DHCPDv6_CONF=/etc/dhcp/dhcpd6.conf
# Path to dhcpd's PID file (default: /var/run/dhcpd.pid).
#DHCPDv4_PID=/var/run/dhcpd.pid
#DHCPDv6_PID=/var/run/dhcpd6.pid
# Additional options to start dhcpd with.
# Don't use options -cf or -pf here; use DHCPD_CONF/ DHCPD_PID instead
#OPTIONS=""
# On what interfaces should the DHCP server (dhcpd) serve DHCP requests?
# Separate multiple interfaces with spaces, e.g. "eth0 eth1".
INTERFACESv4="ens192"
INTERFACESv6=""
And then I tried to restart the dhcp service.
sudo systemctl restart isc-dhcp-server.service
I had a few configuration issues that dhcpd -t found real quick. After fixing all of those, it still didn’t work. I checked the syslog and saw the issue.
Dec 13 11:45:53 nameserver dhcpd[804]: No subnet6 declaration for ens192 (fe80::20c:29ff:fe3c:167f).
Dec 13 11:45:53 nameserver dhcpd[804]: ** Ignoring requests on ens192. If this is not what
Dec 13 11:45:53 nameserver dhcpd[804]: you want, please write a subnet6 declaration
Dec 13 11:45:53 nameserver dhcpd[804]: in your dhcpd.conf file for the network segment
Dec 13 11:45:53 nameserver dhcpd[804]: to which interface ens192 is attached. **
Dec 13 11:45:53 nameserver dhcpd[804]:
Dec 13 11:45:53 nameserver dhcpd[804]:
Dec 13 11:45:53 nameserver dhcpd[804]: No subnet6 declaration for ens160 (fe80::20c:29ff:fe3c:1675).
Dec 13 11:45:53 nameserver dhcpd[804]: ** Ignoring requests on ens160. If this is not what
Dec 13 11:45:53 nameserver dhcpd[804]: you want, please write a subnet6 declaration
Dec 13 11:45:53 nameserver dhcpd[804]: in your dhcpd.conf file for the network segment
Dec 13 11:45:53 nameserver dhcpd[804]: to which interface ens160 is attached. **
Dec 13 11:45:53 nameserver dhcpd[804]:
Dec 13 11:45:53 nameserver dhcpd[804]:
Dec 13 11:45:53 nameserver dhcpd[804]: Not configured to listen on any interfaces!
I was confused. It looked like the ens192 interface was not actually working. A lot of articles suggested I had done the incorrect thing in /etc/default/isc-dhcp-server. I really don’t think I did; it appears that I identified it correctly. Seeing no other options, I eventually gave in to this answer [24] and assigned ens192 a static IP.
I haven’t manually configured a static IP since about Ubuntu 14.04. I really try to avoid that for a lot of reasons; mostly I hate having to remember all the config. The Ubuntu docs describe the current way this is done [25]. I added a file at /etc/netplan/99_config.yaml.
network:
  version: 2
  renderer: networkd
  ethernets:
    ens192:
      addresses:
        - 10.0.2.2/24
      gateway4: 10.0.2.1
      nameservers:
        search: [storage]
        addresses: [10.0.2.4]
If that works anything like rc.local does, I think the 99 simply specifies the order that these files should be loaded in. I went ahead and applied this with
netplan apply
It took me a second to realize I had assigned the DHCP server a different address than the name server. In this case they are both the same VM, so I fixed that real quick.
network:
  version: 2
  renderer: networkd
  ethernets:
    ens192:
      addresses:
        - 10.0.2.4/24
      gateway4: 10.0.2.1
      nameservers:
        search: [storage]
        addresses: [10.0.2.4]
After that, isc-dhcp-server was working and assigning IP addresses. From here I needed to figure out how to assign static IPs to all of the devices on this network. To do that I needed the MAC addresses of all the devices. For those that don’t know, the MAC address is a physical identifier for a specific network port; an IP is assigned to each MAC.
This wasn’t too hard. I could check the syslog and find the MACs of all the VMs as they joined the network. Some of them could be found in ESXi’s interface, specifically the vswitch view. The only one I had to do the logs for was the VMkernel NIC, which doesn’t display its MAC address there.
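For the VMkernel NIC (and anything else that hides its MAC), the DHCP server’s own log lines are enough, since dhcpd logs every request to syslog. Something like this on the name server VM does the trick:
# each client shows up as "DHCPDISCOVER from <mac> via ens192" as it asks for a lease
grep -E 'DHCPDISCOVER|DHCPREQUEST' /var/log/syslog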
subnet 10.0.2.0 netmask 255.255.255.0 {
    range 10.0.2.2 10.0.2.250;
    option domain-name-servers 10.0.2.4;
    option domain-name "storage";
    option subnet-mask 255.255.255.0;
    default-lease-time 600;
    max-lease-time 7200;

    host alexandria {
        hardware ethernet 00:0c:29:85:49:83;
        fixed-address 10.0.2.18;
    }

    host esxi {
        hardware ethernet 00:50:56:62:25:eb;
        fixed-address 10.0.2.20;
    }
}
With that I had assigned static IPs to the two members of the iSCSI storage network. I was now ready to try and make the connection work between the storage VM and ESXi, or so I had thought.
Final Storage Network iSCSI considerations
There is a problem I hadn’t realized about iSCSI and private networks. It turns out that iSCSI requires the VMkernel NIC to Bind on a specific interface as well. I still don’t know why this issue developed, but it appears that the storage network needs an uplink to make this work.
I spent quite a bit of time trying to figure out a solution to this problem. If I join this storage network to the main network, then I will have two DHCP servers running, and it is unknown which one of them will actually issue addresses to the VMs on both networks. Conflicts will happen. Things will go wrong. I have had this happen before, when I couldn’t get Google Fiber’s DHCP server to shut down. It is also very difficult to work in this environment; I usually have to unplug all the network cables and log in to switches individually to fix these issues.
My first inclination was to just add the link, bind the VMkernel NIC real quick, then remove it. This kinda works. First I navigated to the switch.
Then I clicked on Actions -> Add uplink.
I configured my available ethernet port with jumbo frames.
Now there is an uplink port.
Then I went ahead and repeated the iSCSI connection process from before. After that was complete, I went back to the switch and clicked Actions. This time I selected Edit settings.
I clicked the x next to Uplink 1.
Next I clicked on Save.
That removed the uplink again.
I navigated back to the storage section and the iSCSI datastore is still there.
So from a high level this works. The issue is that it doesn’t survive a reboot. I would have to do this again every single time the server shut down. I also noted that on a reboot, the VMkernel NIC actually was assigned an IP on the main network. This is exactly what I was hoping to avoid.
Alright, I got a little stumped this time. I apparently needed to have a physical ethernet port assigned to this virtual switch permanently, but somehow have it not actually provide connectivity to the larger network. I knew I could always disable the port on the Ubiquiti side, but that is an unsatisfying solution: it would require me to remember that I have disabled this particular port. I wanted an ESXi based solution so that it would be completely contained within the host.
So, here is where it helps to have a history of breaking networks. I have had numerous times where I broke a network unintentionally. There are a lot of different ways to accomplish this. Networking is rather picky. There is a reason that network engineers exist. But here? I knew of one way to break a network with the tools I had available to me in a virtual switch. VLANs.
So, my main network doesn’t use VLANs for anything. If packets hit it that don’t apply to its VLAN, it should just drop them. I assigned all of this network to a new VLAN of 1. Luckily this even appears to be an invalid VLAN to VMWare [26]. This should basically break the network and make it unusable.
With that, after a reboot all of the ports on this network do not get IP addresses assigned from the main network’s DHCP server. Even the VMkernel NIC appears to not get any IP address for a couple minutes until the StorageNameServer_DHCP VM starts and issues it one. This appears to be working.
There was one last issue to work through.
Surviving a Reboot
In order to survive a reboot, I needed ESXi to automatically reconnect to the iSCSI target after the storage VM starts. That is not quite as simple as it first appears. The most likely case ESXi expects is that the iSCSI target is served by a completely separate server, which means ESXi assumes the target already exists as it starts up. It will attempt to connect, but if the target is not there, it will simply give up and wait for someone to come and manually fix it.
I don’t particularly like this behavior. I would much prefer that I be allowed to specify that it should simply attempt to reconnect periodically. VMWare itself appears to recommend a manual rescan at a later point in time [27]. That is not a particularly appealing option.
Here is where I got very lucky. There is a guy who has done the exact same thing I am trying to do. He wrote a script that forces ESXi to simply rescan on startup and hold off the autostart of VMs until the script is complete [28]. It appears that VMWare has some awareness that people may want to automatically run some things at startup. They have a file at /etc/rc.local.d/local.sh that gives the following warning.
#!/bin/sh ++group=host/vim/vmvisor/boot

# local configuration options

# Note: modify at your own risk! If you do/use anything in this
# script that is not part of a stable API (relying on files to be in
# specific places, specific tools, specific output, etc) there is a
# possibility you will end up with a broken system after patching or
# upgrading. Changes are not supported unless under direction of
# VMware support.

# Note: This script will not be run when UEFI secure boot is enabled.
I’m not sure that last part is true; I have verified a couple of times that UEFI is definitely enabled for me, but this script definitely runs. I edited the file to match what JC-LAN suggests.
#!/bin/sh ++group=host/vim/vmvisor/boot
# local configuration options
# Note: modify at your own risk! If you do/use anything in this
# script that is not part of a stable API (relying on files to be in
# specific places, specific tools, specific output, etc) there is a
# possibility you will end up with a broken system after patching or
# upgrading. Changes are not supported unless under direction of
# VMware support.
# Note: This script will not be run when UEFI secure boot is enabled.
#Establish our timer
count=0

#Power on the guest VM with the specified Vmid
#use the command vim-cmd vmsvc/getallvms to find which Vmid to use
vim-cmd vmsvc/power.on 15
sleep 5
vim-cmd vmsvc/power.on 12

#Now continuously rescan for the iSCSI device until it is found
#or the maximum time of 10 minutes is reached.
#This command will search all Logical Devices for one that has "Vendor_Name" in the Display Name (e.g., FreeNAS)
#while ! esxcfg-scsidevs -c | grep -q 'TrueNAS'
#Alternatively if you have multiple iSCSI targets that share the same Display Name
#(iSCSI Vendor) then you may want to instead search by Volume Name.
#This method allows you to single out a specific server since Volume Name is user configurable.
#The command below will search for the volume name 'Alexandria' and that the Mounted status is true.
while ! esxcli storage filesystem list | grep -q "Alexandria.*true"
do
    #print some debugging info to the syslog
    logger "local.sh: Forcing rescan since iSCSI target is not yet available..."
    #Rescan SCSI HBAs to search for new Devices, remove DEAD paths and update path state.
    #This operation will also run a claim operation equivalent to the claimrule run command and a filesystem rescan.
    esxcli storage core adapter rescan --all
    #Now wait (in seconds) before checking again
    sleep 30
    #Increase the timer
    count=`expr $count + 30`
    #Check if maximum time has been reached (in seconds)
    if [ $count -ge 600 ]
    then
        logger "local.sh: Aborting, maximum time reached while searching for iSCSI target."
        break
    fi
done

logger "local.sh: Search time for iSCSI target was" $count "seconds."

exit 0
Just a couple quick explanations. The first thing that needs to happen is VMs necessary for iSCSI to work need to be turned on. In my case, the StorageNameServer_DHCP server and the storage VM.
#Power on the guest VM with the specified Vmid
#use the command vim-cmd vmsvc/getallvms to find which Vmid to use
vim-cmd vmsvc/power.on 15
sleep 5
vim-cmd vmsvc/power.on 12
The second thing relates to the following code:
while ! esxcli storage filesystem list | grep -q "Alexandria.*true"
The esxcli storage filesystem list command lists all of the available datastores. This is the exact same list as can be viewed in ESXi under Storage -> Datastores.
The text version looks like (I hid the UUIDs for clarity).
esxcli storage filesystem list
Mount Point Volume Name UUID Mounted Type Size Free
-------------------- ------------------------------------------ --------- ------- ------ ---------- -------------
/vmfs/volumes/<UUID> Primary <UUID> true VMFS-6 2000112582656 759546511360
/vmfs/volumes/<UUID> Secondary <UUID> true VMFS-6 2000112582656 1998527135744
/vmfs/volumes/<UUID> datastore1 <UUID> true VMFS-6 118380036096 116870086656
/vmfs/volumes/<UUID> Alexandria <UUID> true VMFS-6 2198754820096 1717591605248
/vmfs/volumes/<UUID> OSDATA-<UUID> <UUID> true VFFS 128580583424 120323047424
/vmfs/volumes/<UUID> BOOTBANK2 <UUID> true vfat 4293591040 4117233664
/vmfs/volumes/<UUID> BOOTBANK1 <UUID> true vfat 4293591040 4122411008
Next, the grep -q "Alexandria.*true" quietly succeeds or fails depending on whether the search pattern appears in that text. In this case it is looking for the Alexandria datastore and the Mounted value of true on the same line. With that, the script will loop until the datastore is mounted, executing
esxcli storage core adapter rescan --all
This rescans all of the storage adapters, including the iSCSI adapter. It does exactly what VMWare recommends doing, but on a 30 second loop, re-attempting to mount the iSCSI datastore for up to 10 minutes until it either succeeds or gives up.
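Since the script reports through logger, it is also easy to confirm after a reboot how long the rescan loop actually ran; on ESXi those messages land in the main syslog file:
# show the rescan/retry messages local.sh wrote during the last boot
grep local.sh /var/log/syslog.log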
This strikes me as a little hacky, but it is effective, and won’t actually break the system even in the case it fails. It will work nicely. With that, the entire scheme works and the reboot is survived.
Installing vCenter Server
At this point the only thing left to do is actually install vCenter Server. This isn’t that complicated of a task, as VMWare has done most of the heavy lifting for me by creating an installation ISO. There is a really long installation guide here if I wanted to try and do more of this manually [29]. The easiest way is to go to the ESXi host page, and click on Get vCenter Server in the top left.
That will open the VMWare downloads page to be able to download the vCenter Server ISO [30].
After downloading the ISO and reading the readme, it became apparent that I was expected to run the installer from the ISO choosing the UI for the specific workstation type I am using. In this case it is Windows, so the installer is located at <ISOMount>:\vcsa-ui-installer\win32
I must apologize for the screenshots here. I know what this issue is: it is an issue with HDR, which my monitor is using here. Without some pixel tweaking, screenshots come out looking high gloss. I’ll try to describe things, but if the screenshots look off, it’s not your monitor, it’s the screenshot. There are two stages. I will start with stage 1.
In the stage 1 introduction, I click Next.
Next I agree to the license terms.
I then enter the ESXi box’s domain name esxi1.internal, and the username and password for administrator access.
It states that it doesn’t recognize the SSL certificate. Probably because I changed it a few weeks ago.
Next I supply the name of the vCenter Server VM to install and the password for the root user. Since I have already installed the server once before, I am using VMware VCenter Server2, but the name is not important.
I select this as a tiny deployment size because I have only 1 host to control.
Next I select the Alexandria datastore, since that was the whole purpose of this entire project. Then I clicked Next.
Next I need to configure the vCenter Server’s network specifics. I tell it to use a DHCP assigned IP address. Then I gave it the domain name of vcenter.internal. I clicked Next.
I reviewed the details and clicked Finish.
It started the install and created the VM.
Here is the only place I ran into any trouble. It paused halfway through everything and gave me a warning about needing to have the domain name resolving correctly before it could proceed. I got a bit lost in the chicken and egg problem of needing the VM to be running before I could figure out its MAC address and assign it a DHCP based static IP. In the end, I tried to proceed immediately, which failed. I then found the IP address of the VM in Ubiquiti, assigned it a DHCP static IP, and added vcenter.internal to my DNS server. After this, I could restart the install attempt from its local web server, or start over. I chose to start over completely and this time it was able to proceed. Here is how stage 2 went.
Starting with the stage 2 introduction, I click Next.
Next it asks about Time synchronization. I selected Synchronize with the ESXi Host and enabled SSH access. This is SSH access to the vCenter Server VM, not ESXi, which I still use. Then I clicked Next.
Next it asks about creating an SSO domain. This is related to Kerberos and LDAP, which I have opted not to use for now. I create a new SSO domain, assigning a new account for the administrator and a password for it. I needed to remember these; they are different from the ESXi login, and it’s a little tricky to figure out exactly what to use once vCenter Server is installed. At least, I forgot it the first time and had to start over. Once complete I clicked Next.
Next it asks about joining the Customer Experience Program. I join because why not; I’m doing weird stuff, maybe this will help VMWare in a minor way. I click Next.
Last is the review page. I click Finish.
From here it completed the install.
Then I was able to navigate to https://vcenter.internal and get to the login page.
Now here is where I messed up the first time. The login is actually administrator@<SSO domain>, not just administrator. It took me a minute to figure that out. With that figured out though, I got right in.
From there the last thing I did was enter my license, which wasn’t as straightforward as I was expecting. But there is a quick guide for it [31]. Please avoid VMWare’s documentation; it is way more confusing than basically any other guide.
Conclusion
From here I had success. I have a guest VM sharing an iSCSI based volume back to the ESXi host it runs on. It successfully remounts this drive after every reboot without any intervention on my part. I have it using a pseudo-private storage network that doesn’t correctly connect to the main network. It is pseudo because it technically does connect; I just intentionally broke it. vCenter Server is installed on this volume and it automatically loads after a reboot. I was even able to get some verification that the storage network lets ESXi keep the traffic internal, improving performance between the VMs and ESXi itself.
There are spikes in Disk I/O on the VMware vCenter Server VM above 1G of bandwidth, so I know for a fact the traffic isn’t passing through either of the physical ethernet adapters. Victory!
This was way more complicated than I had thought at the beginning, but I’m quite pleased here.
References
[1] https://www.truenas.com/docs/hub/sharing/nfs/nfs-share/
[2] https://unix.stackexchange.com/questions/252812/user-permissions-in-nfs-mounted-directory
[3] https://unix.stackexchange.com/questions/341854/failed-to-pass-credentials-to-nfs-mount
[4] https://serverfault.com/questions/750497/fstab-entry-to-mount-nfs-with-password
[5] https://dwaves.de/2017/02/08/mount-nfs-errors-mount-nfs-access-denied-by-server-while-mounting-null/
[6] https://sysally.com/blog/integrate-centralised-freenas-nfs-storage-vmware-esxi-host/
[8] https://forum.proxmox.com/threads/trying-to-mount-an-nfs-share-with-username-password.44068/
[9] https://stackoverflow.com/questions/53604706/mount-network-share-with-nfs-with-username-password
[10] https://help.ubuntu.com/community/NFSv4Howto
[11] https://ubuntu.com/server/docs/service-kerberos
[12] https://www.infoworld.com/article/2616802/your-fateful-decision–nfs-or-iscsi-.html
[13] https://forums.servethehome.com/index.php?threads/nfsv3-vs-nfsv4-vs-iscsi-for-esxi-datastores.23203/
[14] https://communities.vmware.com/t5/ESXi-Discussions/Datastore-NFS-or-iSCSI/td-p/460336
[15] https://www.ixsystems.com/blog/iscsi-shares-on-truenas-freenas/
[16] https://geek-university.com/vmware-esxi/configure-iscsi-software-initiator/
[17] https://community.spiceworks.com/how_to/158570-setup-exsi-6-7-iscsi-datastore-to-nas
[18] https://masteringvmware.com/how-to-add-iscsi-datastore/
[19] https://en.wikipedia.org/wiki/Jumbo_frame
[20] https://www.vstellar.com/2017/09/17/configuring-and-managing-vmkernel-tcpip-stacks/
[21] https://kb.vmware.com/s/article/2043160
[22] https://communities.vmware.com/t5/ESXi-Discussions/vSwitch-and-DHCP/m-p/308952
[23] https://www.vmwareblog.org/clone-vms-vmware-vcenter-unavailable/
[24] https://askubuntu.com/questions/57155/dhcpd-fails-to-start-on-eth1
[25] https://ubuntu.com/server/docs/network-configuration
[29] https://docs.vmware.com/en/VMware-vSphere/7.0/vsphere-vcenter-server-70-installation-guide.pdf
[30] https://my.vmware.com/group/vmware/downloads/#my_products
[31] http://www.virtubytes.com/2020/04/14/how-to-assign-vsphere-7-licenses/