Hardware Resources and Requirements

Alright, I’ve decided to build a new server to consolidate all of my disparate servers.  I will need to set expectations and requirements for this new project.  Like most projects, this will probably involve some feature creep, as well as some reductions once we get into the meat of it.  Let’s start by examining the servers, workstations, and PCs I am hoping to replace:

The Streaming media PC  

This computer is in a server chassis in the rack.  Its primary purpose is to stream to all the TVs. 

I have a more advanced system that lets me switch between various sources and destinations in a 4K-over-IP setup by Just-Add-Power [1].  Therefore I needed, at minimum, a 4K-capable video card.  Now I want to go on a digression here: 4K streaming / movie playing is not as simple as being able to actually pump out 3840 x 2160 pixels at 30 frames per second. 

One of my previous projects was to do what the Just-Add-Power system does, but by capturing raw 1080p (which is 10.2 Gbps, which is why I needed the Dell X4012 from the previous post) as well as all of the audio.  The audio was the more difficult task, believe it or not.  To accomplish it, I had to capture the 8 individual channels of 7.1 audio and remix them in real time with the video: 9 separate capture streams that all had to be realigned. 
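
To put numbers on that raw 1080p figure, here is a back-of-the-envelope sketch; the 8-bit RGB and TMDS-overhead assumptions are mine, not from my capture notes.  The 10.2 Gbps number is the full HDMI 1.4 link ceiling (including blanking intervals), which is what a capture of the entire link has to budget for.

```python
# Back-of-the-envelope bandwidth for uncompressed video capture.
# Assumptions: 8-bit RGB (24 bpp) and HDMI's 8b/10b TMDS encoding overhead.

def raw_video_gbps(width, height, fps, bpp=24, tmds_overhead=10 / 8):
    """Approximate on-the-wire bandwidth of the active pixels alone."""
    return width * height * fps * bpp * tmds_overhead / 1e9

print(f"1080p60 active pixels: {raw_video_gbps(1920, 1080, 60):.2f} Gbps")
print(f"4K30 active pixels:    {raw_video_gbps(3840, 2160, 30):.2f} Gbps")
# 1080p60 comes out to ~3.7 Gbps of active pixels; the rest of the way to
# 10.2 Gbps is blanking intervals and headroom in the HDMI 1.4 spec.
```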

In order to accomplish that I had to learn a lot about how high-definition playback works.  HDCP stands for High-bandwidth Digital Content Protection.  It is required for both 4K and 1080p (my project predated 4K).  Additionally, 4K streaming requires a complete chain of protection from the source all the way to the TV.  HDCP is the guarding protocol for that entire chain; my understanding (and I reserve the right to be wrong) is that each step has to validate the next. 

There are also requirements for decoding 4K / 1080p content on the computer.  This uses a lesser-known (practically unknown in my experience, though I don’t want to assume nobody knows it) set of instructions called SGX (Software Guard Extensions) on Intel, and memory encryption on AMD.  To decode a 4K Blu-ray, they load the keys into a protected, encrypted memory space controlled by these instructions, do the decoding inside that encrypted space, and then send the decrypted stream out of it. 

Conveniently, this is also a common source of some of the recent processor security issues, because this is an immature technology.  I believe AMD implemented an ARM coprocessor that acts as a dedicated security processor [2] for managing this space.  Intel has an equivalent, the Intel Management Engine, but to my knowledge it has not been confirmed exactly what it is.  These co-processors actually add extra attack vectors to these processors because they are, at least to a large degree, implicitly trusted by the main x86_64 cores. 

So to actually decode a 4K Blu-ray the system needs a protected memory space (SGX / encrypted memory support), an encrypted stream transport (HDCP), a trusted video transport (the video card must be compliant), and a trusted TV at the end.  This is why early-adopter TVs don’t always work: they were built against specifications that were tentative rather than final.  The most common case I saw was with 1080p, but I wouldn’t be surprised to find the same issue in 4K. 

HDCP manages this encrypted chain.  If any part of the chain fails identification, or simply doesn’t respond quickly enough (HDCP 2.0+ adds timing requirements), the entire chain falls back, usually all the way to 480p.  Therefore, if one wants to stream 4K from Netflix (or anywhere, for that matter), the entire chain has to validate. 
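
As an illustration of that fallback behavior, here is a toy model, emphatically not the real HDCP handshake, just the "one bad or slow link degrades the whole chain" logic described above:

```python
# Toy model of HDCP chain negotiation -- illustrative only.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    hdcp: float              # highest HDCP version this device validates at
    responsive: bool = True  # HDCP 2.x also enforces response deadlines

def negotiate(chain, wanted=2.2):
    """Source validates the next hop, which validates the next, and so on."""
    for dev in chain:
        if dev.hdcp < wanted or not dev.responsive:
            return f"fallback to 480p (failed at {dev.name})"
    return "4K permitted"

chain = [Device("player", 2.2), Device("AV receiver", 1.4), Device("TV", 2.2)]
print(negotiate(chain))  # -> fallback to 480p (failed at AV receiver)
```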

In addition to this there is an older and more easily understood protocol called EDID, through which the source and display declare which video and audio formats they support.  This can easily get corrupted during the HDCP initiation.  There is a decent overview here [3]. 
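
EDID itself is refreshingly simple compared to HDCP: a 128-byte base block with fixed offsets.  A minimal parsing sketch, assuming a Linux host where DRM exposes the raw block under sysfs (the exact connector path below is an assumption; check your system):

```python
# Minimal EDID base-block parser. The sysfs path is an example; connector
# names vary by GPU and port (ls /sys/class/drm/ to find yours).

EDID_MAGIC = bytes([0x00, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0xFF, 0x00])

def parse_edid(blob: bytes) -> dict:
    if blob[:8] != EDID_MAGIC:
        raise ValueError("not a valid EDID block -- possibly corrupted")
    # Manufacturer ID: three 5-bit letters packed big-endian in bytes 8-9.
    mfg = int.from_bytes(blob[8:10], "big")
    letters = "".join(chr(((mfg >> s) & 0x1F) + ord("A") - 1) for s in (10, 5, 0))
    return {
        "manufacturer": letters,
        "product_code": int.from_bytes(blob[10:12], "little"),
        "edid_version": f"{blob[18]}.{blob[19]}",
    }

with open("/sys/class/drm/card0-HDMI-A-1/edid", "rb") as f:
    print(parse_edid(f.read()))
```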

As a digression on this digression: this whole chain is insane.  I own literally everything along the way, and everyone who should be getting money through this, is.  However, I have failed so many times to just get the basic thing working right that it has left me permanently angry about it.  Can we please just stop trying to force this insane level of encryption, where even people who are attempting to follow the rules constantly fail?  Just let people play the media they own! 

Anyways, this whole chain requires physical access to all of the pieces along the way: direct physical control of processors, direct physical control of video cards, and direct physical connections from those to the correct endpoints.  In addition, this needs to be at least HDCP 2.0 for 4K (2.2 preferred; the earliest 2.0-compliant video card, I think, was the nVidia GTX 950 and not any of the rest of that family, not even the GTX 980, believe it or not).  That is a challenge for virtual machines, for obvious reasons.  Fortunately, PCI passthrough exists, and if it is configured properly it should allow this to be achieved.  
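
The first thing I will check on any candidate platform is how the IOMMU groups shake out, because a device can only be passed through cleanly if everything in its group goes to the same VM.  A quick sketch for auditing that on a Linux host (assumes the IOMMU is enabled via intel_iommu=on or amd_iommu=on on the kernel command line; paths are standard sysfs):

```python
#!/usr/bin/env python3
# List IOMMU groups and their member devices from sysfs.
from pathlib import Path

root = Path("/sys/kernel/iommu_groups")
if not root.exists():
    raise SystemExit("no IOMMU groups -- is the IOMMU enabled in BIOS/kernel?")

for group in sorted(root.iterdir(), key=lambda p: int(p.name)):
    devices = [d.name for d in (group / "devices").iterdir()]
    # A GPU usually drags its HDMI audio function along in the same group;
    # both must be handed to the same guest.
    print(f"group {group.name}: {', '.join(devices)}")
```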

The VR machine  

I, like many of my contemporaries, am very excited about the state of VR.  It has just gotten cheap enough that most people can afford it.  Therefore, I have a VR machine.  My headset of choice is the Valve Index [4], but I also own a Vive [5].  What can I say, I like to play around with technology *waves hands around the entire blog*.  To make VR work, I need a VR-capable video card and the USB 3.0 connections to go with it. 

I haven’t actually done enough testing to know if this could be a shared virtual graphics card or needs to be a complete PCI passthrough with total control.  I am assuming it will need the full passthrough. 

In addition, I don’t know if the USB controller can be shared.  I doubt it.  In my experience VR uses the full bandwidth of USB 3.0; if the controller is shared with something else, VR may not get the bandwidth it needs.  When I first played with the Vive, I had a long USB cable that was recognized by the system but did not provide full bandwidth over its length, and VR did not work.  I switched to an active USB cable (and eventually to a fiber USB cable) and the issue went away.  Same for the DVI cables: VR needs the full bandwidth.  VR is also very latency sensitive, so for a complete experience I am going to assume this needs a dedicated USB controller (either passed through, or with nothing else plugged in).
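
Before committing a controller, it is worth auditing what shares it and what link speed each device actually negotiated.  A minimal sketch for Linux (the sysfs layout is standard, but verify on your distro; speed is reported in Mbps, so 5000 means USB 3.0):

```python
# Show each USB device's negotiated speed. The bus-port name (e.g. "3-1")
# tells you which root hub / controller a device hangs off of.
from pathlib import Path

for dev in sorted(Path("/sys/bus/usb/devices").iterdir()):
    product, speed = dev / "product", dev / "speed"
    if product.exists() and speed.exists():
        print(f"{dev.name:8} {speed.read_text().strip():>6} Mbps  "
              f"{product.read_text().strip()}")
```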

Hass.io Raspberry Pi  

Hass.io, for those who don’t know, is Home Assistant [6] as a “native” install, which is a bit of a misnomer (under the hood it runs Home Assistant in containers managed by a supervisor).  It is smart home control software, and the only one that integrates basically everything.  I used to run Hass.io natively on a Raspberry Pi 3B; I moved it into a Docker container on my ESXi server about a year ago. 

However, the reason I could do this is Wink.  I bought a Wink Hub 2 [7] about 2 years ago and was very happy with it until about 3 months ago.  These things cost over $100, and they provide a nice app interface, including API access.  That API access allowed Home Assistant to use the Wink as my Z-Wave controller. 

Wink also supposedly has local API access, so I could use it even when the internet was out but power was not.  In my experience this wasn’t true, but it didn’t bother me enough to switch.  About 3 months ago, Wink either just realized that “servers cost money” or decided its business no longer involved eating that cost upfront, and started charging monthly fees. 

This is a common trend in software that I am generally not in favor of, but that is a story for a different day.  The reason I was willing to pay over $100 for a Wink hub instead of just using an Aeotec Z-Wave USB stick [8] ($45 as of right now) was the API access.  Paying more than double to skip the hassle and get a nice integrated app was worth it to me.  Paying $5/month in perpetuity for it is not. 

Local API access should be possible: the Wink hub sits on the same network as the Home Assistant Docker container, and I do have local control from the app (though I admit I don’t know how they implemented that, geolocation or direct local calls).  There is no reason a container on the same network can’t send commands directly to the Wink hub without using any cloud resources. 

I want to use the Z-Wave USB stick now, since I lost all the functionality I actually wanted when I lost API access.  I thought we had an agreement; we did not.  So I want to axe Wink out of my network.  To accomplish this, I will need to give a VM direct control over the USB stick.  I also have a Zigbee USB stick I kind of want to play with as well.  Hopefully this can be achieved without PCI passthrough, but I admit it might be necessary; I have read about compatibility issues.
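
Once the stick is handed to a VM, the sanity check is simply whether the guest can see and open it.  A minimal sketch, assuming the Aeotec stick enumerates as a USB CDC-ACM serial device (it typically shows up as /dev/ttyACM0, but the exact node is an assumption; check dmesg):

```python
# Verify the Z-Wave stick is visible inside the guest. Requires pyserial
# (pip install pyserial). The Z-Wave Serial API runs at 115200 8N1.
import glob
import serial

candidates = glob.glob("/dev/ttyACM*") + glob.glob("/dev/ttyUSB*")
if not candidates:
    raise SystemExit("no serial device found -- passthrough not working?")

with serial.Serial(candidates[0], 115200, timeout=1) as port:
    print(f"opened {port.name}; the stick is visible to this VM")
```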

Storage  

Everybody likes storage!  Basically all computer technology needs storage to maintain the long-term state of whatever its applications are doing, often with competing storage requirements.  For me, this is a server I built in 2014. 

It started as a Napp-It [9] install on Ubuntu in 2014 because of driver compatibility requirements for my Solarflare 10G cards.  (High-frequency traders will recognize that name; Solarflare is very popular there for its TCP offloading, or at least it was as of 5 years ago.  I have since read about that world moving on to FPGAs built into Arista network switches.  Wowza.)  Being the global storage server, it has a dedicated 10G network card. 

In 2016 I converted it to FreeNAS [10] and got my first experience of how ZFS handles migration.  It worked perfectly, no data loss, even though I was prepared for some.  My storage server used ZFS [11], which is a great system for smaller installs.  It provides a great deal of redundancy as well as the ability to expand (which I actually did once over the years). 

It started as 8 x 1.5TB Western Digital Greens, 8 x 2.0TB Western Digital Reds, and 8 x 4.0TB Western Digital Reds.  The Greens came from that old hot-swap mdadm array I mentioned in the previous post.  I upgraded the 8 x 1.5TB array to 8 x 8TB Western Digital Reds in 2018.  All of these arrays are raidz2.  I will go into further detail in a future post, when I discuss the New and Improved™ storage system, about what these are and why I chose the setup I did.  Suffice it to say it was 68 TB of storage space (over 100 TB raw). 
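
For the curious, the arithmetic behind those numbers roughly checks out.  A quick sketch (raidz2 spends two disks per vdev on parity; the ~10% ZFS overhead figure is my assumption, and the rest of the gap is the TB-vs-TiB conversion):

```python
TB, TiB = 1e12, 2**40

# (disks, disk size) per vdev, after the 2018 upgrade.
vdevs = [(8, 8 * TB), (8, 2 * TB), (8, 4 * TB)]

raw = sum(n * size for n, size in vdevs)
data = sum((n - 2) * size for n, size in vdevs)  # raidz2: n-2 data disks

print(f"raw:  {raw / TB:.0f} TB ({raw / TiB:.0f} TiB)")    # 112 TB, ~102 TiB
print(f"data: {data / TB:.0f} TB ({data / TiB:.0f} TiB)")  # 84 TB, ~76 TiB
print(f"usable after ~10% overhead: ~{0.9 * data / TiB:.0f} TiB")
```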

I also noticed over the years that access performance wasn’t that great, and I haven’t done a hard upgrade in a long time (the original backbone hasn’t changed since 2014, only hard drive upgrades).  This is an opportune time to upgrade this as well.  To do this I think I need further research, since I’m not totally sure what the current state of the art is. 

Back in the day I needed a SAS expansion card to hit all 24 drives.  Really, I crossflashed an LSI RAID card into a SAS expansion card to save ~$300, because the RAID firmware could only support 8 drives and the expansion firmware got up to 24.  If this is still roughly what the landscape looks like, I will need one of these, but I am hopeful that it has gotten better over the years.  I also kind of want to see if I can use GlusterFS, but that is more of a bonus.

Replacement Requirements

With that overview, I think I can see the requirements forming for what I will need to get.  Hopefully I will not need to rebuy everything, but that is already a strong possibility because I really haven’t upgraded these systems in a while.  From the bare systems, I can at least make the following determinations about the new, larger system.

| Server | Dedicated Requirements | Shared Requirements |
| --- | --- | --- |
| The Streaming media PC | Dedicated Graphics Card (probably the nVidia GTX 1080 it already has) | 16 Processors, 32 GB RAM, Networking, Hard Drive Space |
| The VR machine | Dedicated Graphics Card (probably the nVidia GTX 1080 it already has), Dedicated USB Controller | 16/32 Processors, 32/64 GB RAM, Networking, Hard Drive Space |
| Home Assistant | Dedicated USB Controller? | 4 Processors, 8 GB RAM, Networking, Hard Drive Space |
| Storage | Dedicated 10G Networking, Hard Drive Controller Cards, Hard Drives | 16 Processors, 64 GB RAM |

New Systems

Now I want to consider where I am going in the future.  As I mentioned, I want to be able to play around with new technologies.  To accomplish this I will need various levels of other VM / container resources.  I am thinking something like:

Workstation  

My partner does not have a dedicated workstation with a lot of resources; she is using an older Mac.  I am hopeful I can give her an old video card and a dedicated USB port for mic and video and let her roll with this!

Zoneminder  

I have long been considering building my own Video camera recording system.  Zoneminder [12] is an open source video recording utility.  It can monitor a large number of IP cameras and can be integrated with Home Assistant. 

In the same way that I dislike Wink charging me a monthly fee to use something I supposedly own, I dislike what Ring did a few years ago, cutting off even the 2-minute recordings, let alone dedicated recording, from my doorbell.  After some research, there is a doorbell that does allow local recording: the Doorbird [13].  It is big and clunky, but it allows local recording, something I acutely value.

I also did a lot of camera research back when I did my large smart home project, and I have a Reolink 1080p [14] camera for outdoor wireless recording, a Vivotek camera [15] for indoor recording (PoE), an Amcrest [16], and a Foscam [17] (which I actually use to watch my dogs).  I bought these to test them out and see which I liked.  Vivotek is a high-end brand I knew would be good, but how much better?  I still don’t know.  I got the Reolink, the Amcrest, and the Foscam set up, but never got to the Vivotek.  Amcrest and Foscam are much more affordable. 

One more thing to note: in the video recording community it is well known that 720p recordings cannot be used for identification in a US court of law.  This imposes a 1080p recording requirement, which, at the time I first considered this, was more expensive than it is now.  It is still not super cheap.  That being said, if I catch someone breaking into my house, I DEFINITELY want to be able to use the footage against them.  I also like watching the dogs while on vacation, being able to move the camera and make sure they’re doing alright.  I will likely need some hard drives dedicated only to recording.  Luckily the major brands produce drives built for sequential writes; the most obvious one off the top of my head is Western Digital’s surveillance line (Purple).
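
To get a feel for how many of those drives recording actually needs, a rough budget; the camera count and bitrate below are my assumptions for illustration, not measurements (1080p H.264 commonly lands somewhere in the 2-6 Mbps range depending on scene and settings):

```python
cameras = 4
mbps_per_camera = 4      # assumed average bitrate per 1080p H.264 stream
retention_days = 30

total_bytes = cameras * mbps_per_camera * 1e6 / 8 * 86400 * retention_days
print(f"~{total_bytes / 1e12:.1f} TB for {retention_days} days of footage")
# -> ~5.2 TB: a single surveillance-class drive covers a month comfortably.
```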

Storage upgrade  

My old storage server uses spinning disks, and I have noticed stuttering when playing back my Blu-rays and DVDs.  My first iteration of this was a decade ago, when I used Handbrake to re-encode all of my 200+ DVDs.  That took a solid 2 months and felt like a drag. 

When I built my first home lab in 2014 I decided hard drive space is cheap and my time is valuable, so I built a very large array mostly for storing these.  I hated (and still do) getting up to go browse my Blu-rays and DVDs, so I backed them up to this server as ISOs. 

I also have lots of backups of my various servers.  Additionally, I use this pool to store my personal project code.  Lastly, I converted all of my old video game discs to ISOs, including the ones with that weird position-mapping digital content protection they used to do.  There are a lot of valuable things here. 

That being said, if I want to run applications, I probably need more than just a 120 GB SSD cache (more on that later).  I think I am going to upgrade to an all-SSD array and let my old disks be inherited by my friends.  A couple of those old disks may be good for an application like the aforementioned Zoneminder, where they seem ideal. 

Additionally, I briefly considered building a SAN [18].  I really hate that they chose those two acronyms, NAS and SAN; it’s not clever, it’s confusing.  Nomenclature should be designed to prevent confusion, and in this case it created it.  Since this project is about consolidation, and a SAN tends to isolate the storage subsystem away from all other resources, it didn’t quite fit the requirements.

Other VMs  

I am hoping to have a decent set of resources left after this to do things I’ve wanted to play with.  Examples include a dedicated name server, so I don’t have to memorize the IP addresses of everything.  For Kubernetes I am thinking at least 2 different machines so I can build a cluster and play with that aspect of dockerization, which is one of the parts I want hands-on experience with, since most of my experience is with basic individual containers or EKS controls.  

Requirements Overview

So in total I think something like this is what I’m looking at:

| Server | Dedicated Requirements | Shared Requirements |
| --- | --- | --- |
| The Streaming media PC | Dedicated Graphics Card (probably the nVidia GTX 1080 it already has) | 16 Processors, 32 GB RAM, Networking, Hard Drive Space |
| The VR machine | Dedicated Graphics Card (probably the nVidia GTX 1080 it already has), Dedicated USB Controller | 16/32 Processors, 32/64 GB RAM, Networking, Hard Drive Space |
| Home Assistant | Dedicated USB Controller? | 4 Processors, 8 GB RAM, Networking, Hard Drive Space |
| Storage | Dedicated 10G Networking, Hard Drive Controller Cards, SSD Drives | 16 Processors, 64 GB RAM |
| Workstation | Dedicated Graphics Card (? I have no more hand-me-downs), Dedicated USB Controller? | 16/32 Processors, 32/64 GB RAM, Networking, Hard Drive Space |
| Zoneminder | Dedicated 10G Networking, HDD Drives | 16 Processors, 32/64 GB RAM |
| Everything else | (none) | 12/44 Processors, 128 GB RAM, Networking, Hard Drive Space |

Two general thoughts.  First, although there are a large number of processors, they are not high frequency.  The EPYC 7702P’s base frequency is 2 GHz with a turbo of 3.35 GHz.  Compared to desktop processors that easily hit over 4 GHz, these are roughly half as strong per core.  That makes me want to err on the side of assigning more processors. 

Second, applications have been treating memory like it is cheap and plentiful for a while now.  For those who dare, I challenge you to figure out how much RAM Chrome eats up; it is not a lean program.  Therefore I am allocating far more RAM than I think is necessary. 

That streaming media PC has a fair number of processors and a fair amount of RAM, likely more than necessary, but the truth is I will sometimes play video games on it, and this will more than meet those requirements.  Plus, with this being a virtual machine, unused CPU and RAM are still available for other use.  It is possible that this is overkill and I will be able to reduce the requirement over time. 

The storage server also has more processors and RAM than some might think necessary at first glance.  However, with LZ4 compression becoming the standard, it actually does need decent processing power, and anyone who has used ZFS for any period of time quickly learns that the more memory one can give it, the better. 
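
On Linux hosts, OpenZFS makes it easy to see just how much RAM the ARC is actually holding.  A minimal sketch (the /proc path below is the Linux location; FreeNAS/FreeBSD exposes the same counters via sysctl kstat.zfs.misc.arcstats instead):

```python
# Read the ARC size counters from the OpenZFS kstat file.
stats = {}
with open("/proc/spl/kstat/zfs/arcstats") as f:
    for line in f.readlines()[2:]:       # skip the two kstat header lines
        name, _kind, value = line.split()
        stats[name] = int(value)

print(f"ARC size:   {stats['size'] / 2**30:.1f} GiB")
print(f"ARC target: {stats['c'] / 2**30:.1f} GiB "
      f"(cap {stats['c_max'] / 2**30:.1f} GiB)")
```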

I am also generally unsure of the Zoneminder requirements, so I specced it like a workstation in case it has larger requirements than I am expecting.

With this overall scaffold in mind, I think it’s time to research the parts I want to include.  The 12/44 processors are what could be left over after all of the other VMs take their pieces.  Luckily, with virtualization, leftover capacity can be shared, and I can keep growing. 

Also, I don’t know where I’m going to get all of the graphics cards, or how much total RAM I need, since I have upwards of 424 GB of RAM already outlined.  I will need at least 2 dedicated 10G networking ports and a lot of other networking ports.  I’m looking at 4 PCI slots.  I am personally much more comfortable making this all work with an E-ATX chassis; specialized motherboards tend to require specific chassis, which complicates things pretty quickly, especially with the sheer amount of hardware I am going to be shoving into this server.  I also dislike buying second-hand complete systems, such as Dell, HPE, or Oracle, due to vendor lock-in issues.
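
As a sanity check on those totals, here is the arithmetic, assuming "processors" in the tables means threads on the 128-thread 7702P:

```python
# (min threads, max threads, worst-case RAM in GB) per guest, from the table.
guests = {
    "streaming":   (16, 16, 32),
    "vr":          (16, 32, 64),
    "hass":        (4, 4, 8),
    "storage":     (16, 16, 64),
    "workstation": (16, 32, 64),
    "zoneminder":  (16, 16, 64),
}

threads = 128  # EPYC 7702P: 64 cores / 128 threads
leftover_lo = threads - sum(mx for _, mx, _ in guests.values())
leftover_hi = threads - sum(mn for mn, _, _ in guests.values())
ram = sum(r for _, _, r in guests.values()) + 128  # + "everything else"

print(f"leftover threads: {leftover_lo}/{leftover_hi}")  # -> 12/44
print(f"worst-case RAM:   {ram} GB")                     # -> 424 GB
```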

References

[1] http://justaddpower.com/

[2] https://en.wikichip.org/wiki/amd/secure_processor

[3] https://connectedmag.com.au/hdcp-and-edid-demystified-part-one/

[4] https://store.steampowered.com/valveindex/

[5] https://www.vive.com/eu/product/vive-pro/

[6] https://www.home-assistant.io/

[7] https://www.amazon.com/Wink-WNKHUB-2US-Smart-home-White/dp/B01KW8WGZQ

[8] https://www.amazon.com/Aeotec-Z-Stick-Z-Wave-create-gateway/dp/B00X0AWA6E

[9] https://napp-it.org/

[10] https://www.freenas.org/

[11] https://en.wikipedia.org/wiki/ZFS

[12] https://zoneminder.com/

[13] https://www.amazon.com/DoorBird-Video-Door-Station-Steel/dp/B01G3U2DOQ

[14] https://www.amazon.com/gp/product/B076HLT53N

[15] https://www.amazon.com/gp/product/B019ZFH9QS

[16] https://www.amazon.com/gp/product/B01M15WH9C

[17] https://www.amazon.com/gp/product/B079GV151L

[18] https://www.backblaze.com/blog/whats-the-diff-nas-vs-san/
