It is time to build the actual server. I have covered the parts I purchased and some of the basic mistakes I made, but that did not all happen in a single parts-deciding phase. I covered it all at once because the structure of this series benefits from not bouncing back and forth between part-purchase logic and build difficulties. That isn’t to say I’m done with parts, as there is some debugging and misconfiguration yet to come. With that said, I want to walk through the build process separately from the logic of the part selection.
In the first cluster of parts I ordered, I had the AMD EPYC CPU, the memory, the Tyan motherboard, the 846E1-R1200B-based chassis (I say “based” because the listing didn’t give an exact model number, but this appears to be the closest configuration I can find on Supermicro’s website), and the 748TQ-R1400B. I immediately ran into a problem: the Covid lockdowns hit at the same time I ordered this first cluster, in February 2020.
I got the CPU, the memory, and the two chassis without any issues. These were eBay purchases, and the stock was already on hand, which gives more of a guarantee than ordering from websites typically does. The Tyan, though, was not in stock anywhere. I didn’t just hope for the best; I contacted the sales departments of three different companies listing it for pre-order in February. They all told me to expect the motherboard in early March. With that in mind, I ordered it. Then the lockdowns hit, the companies didn’t get their stock, and I was left without a motherboard.
This was a big problem. Good eBay sellers give about a 30-60 day return policy, and I needed to test these parts within that window to know they worked. Otherwise I would be on the hook if something was wrong. These are the most expensive parts of the project; I did not want to risk them not working and missing my return window.
I purchased an ASUS KRPA-U16 that was confirmed in stock. I got this motherboard in time and tested the memory and the processor, verifying that both worked just as described. Unfortunately, I still have this motherboard and it remains unused.
I don’t really consider this a mistake I made. This kind of situation reminds me of a favorite episode of mine from Star Trek: The Next Generation, “Peak Performance”. In it, Data, an android, is defeated in a strategy game by an incredibly smart alien. After Data spends some time sulking and worrying over what he did wrong, Picard tells him, “It is possible to commit no mistakes and still lose.” That is how I feel about this particular outcome. I, like a great many others, did not predict the Covid lockdowns, the port closures, or the pandemic in general. The vendors told me the Tyan motherboard would arrive on time. I covered my bases. I still lost, and had to purchase another motherboard to work with in the short term.
There was another issue I ran into when trying to test the CPU and the memory to confirm the system would boot. You see, my monitor is an LG 34BK95U-W. I had been looking into updating some of the monitors for my workstation, and I prefer having two ultrawide monitors for productivity. Some may say that this is too much; I think they are wrong. I really like screen real estate, and it helps when jumping between three or more websites while programming or working through software issues.
Anyways, this was the monitor I was using for testing. It is a very new monitor and does not have an old VGA or DVI port, while the motherboard, as visible in the photo above, only has an old VGA port. This is not uncommon for server/workstation motherboards. They often are not built to plug directly into a modern monitor, or anything using newer display technology, since most servers in a datacenter do not need this. Hence the reasoning behind a crash cart.
A datacenter crash cart is essentially a shelf on wheels with an old VGA monitor, a keyboard, and probably a mouse (though not always). Whenever a server needs to be checked in a datacenter, one grabs a crash cart (usually because the server crashed). Then the technician plugs in this old VGA monitor and keyboard to see what is going on. Almost all server / workstation motherboards operate as if this is the expected method of access. If not, one simply installs a new graphics card and then it can be safely ignored.
In my case I was trying to test the CPU and memory: I wanted to see the system POST, poke around in the BIOS, and maybe boot into Linux from a USB stick. So, I bought a VGA-to-HDMI converter. I plugged it in, and… nothing. The fans turned on and it appeared to be running, but nothing appeared on screen.
It took me quite a bit to figure this one out. I tried an older HP LP2475w I used about 8 years ago; it has a DVI port, and I had an old VGA-to-DVI adapter. Also nothing. Eventually I found an old Dell monitor with a native VGA port that I got from my father’s estate. This one worked perfectly with a native VGA cable. I have no idea why all of these converters failed while the native monitor had no issues; they just did. Because of this, I am glad I did not get rid of these old monitors. I will keep this one around and form my own little crash cart!
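For anyone assembling a similar little crash cart, the USB-stick part of that test plan is simple to prepare. This is a minimal sketch, not my exact commands at the time; the ISO path and `/dev/sdX` device are placeholders, so check `lsblk` for the real device node first, since `dd` overwrites the target completely.

```shell
# write_usb: copy a Linux installer ISO onto a block device so a server
# can boot from it. Both arguments below are placeholders for illustration.
write_usb() {
    iso="$1"       # path to the downloaded ISO
    target="$2"    # the whole USB device (e.g. /dev/sdX), NOT a partition
    # bs=4M keeps writes reasonably fast; conv=fsync flushes before dd exits
    dd if="$iso" of="$target" bs=4M conv=fsync status=progress
    sync
}

# Example (do NOT run as-is -- replace /dev/sdX with the real device):
# write_usb linux.iso /dev/sdX
```

Nothing fancy, but it is exactly the kind of thing a crash cart gets used for.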
The last issue around this first step was testing the chassis I had. I did not want to install the EEB motherboard into either of them, partly because it was so large, and mostly because putting it in there would make installing and removing the CPU and memory more difficult, especially since this was not the motherboard I wanted to use.
So what I did was stick the motherboard on its box and park it next to the chassis. From there I could plug the PSUs in and see if they worked. That is how I determined that the 748TQ-R1400B PSUs (after realizing they only sent one, then requesting and receiving the second) did not work. This is also when I realized my mistake about how loud the darn things are. The server fans were very obnoxious to me, which is why I went ahead and purchased a 745BTQ-R1K28B-SQ.
I also ordered as many Noctua 92mm and 80mm fans as needed to replace all the fans in this chassis and the storage expansion chassis. I even briefly considered purchasing 40mm fans to replace the PSU fans on the 846E1-R1200B, but ultimately decided against it; I don’t know how the PSU controller was programmed and didn’t want to mess with that. With the purchase of the 745BTQ-R1K28B-SQ, the whole point was rendered moot anyway, since those power supplies are designed to be relatively quiet.
Some of the more astute among you may be wondering: what about the CPU heatsink and fan? I haven’t really talked about that at all. This may be another mistake I made, although it’s hard to argue about the level of mistake here. You see, when ordering a server or workstation chassis, and working with parts the manufacturers never planned on being placed in those chassis, it is really quite difficult to determine the exact size of heatsink that will fit.
Only recently have I seen heatsink clearance show up in chassis specifications at all. Even then it is difficult, as motherboard thickness isn’t standardized, and the board might sit on risers within the chassis’s motherboard bay, further complicating the calculation.
I also had three different chassis in this build, so finding one heatsink that fit literally all of them, without settling for a noisy low-profile solution, was going to be difficult. My personal requirement was the largest fan and heatsink that would fit, since a larger fan can typically push the same amount of air more quietly.
I started with the Noctua NH-U14S, with its 140mm fan. This actually worked for the 846E1-R1200B, but it did not fit the 745BTQ-R1K28B-SQ. I ended up returning it and using a Noctua NH-U12S, which fit just fine and was still pretty quiet. Some of you may be wondering about overall airflow in the system. These chassis were designed to push a lot of air through and keep the system cool. Replacing all the fans with quieter but lower-flow ones could create a heating issue, and it kind of did, but I’ll discuss that more later.
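The clearance math above can be sketched out. The only published figures here are the Noctua cooler heights (165 mm for the NH-U14S, 158 mm for the NH-U12S); every other number is an assumption for illustration, not from any chassis spec sheet, so check the actual manual before buying.

```python
# Rough cooler-fit check: how much vertical space is left above the
# motherboard surface for a tower heatsink? All values in millimetres.

def max_cooler_height(interior_height: float, standoff: float,
                      board_thickness: float) -> float:
    """Clearance from the top of the motherboard to the chassis lid."""
    return interior_height - standoff - board_thickness

# Assumed numbers for a roomy 4U-style chassis (illustrative only):
interior = 170.0   # usable interior height above the motherboard tray
standoff = 6.5     # typical motherboard standoff
board = 1.6        # typical PCB thickness

limit = max_cooler_height(interior, standoff, board)
print(f"cooler height limit: {limit:.1f} mm")   # 161.9 mm

# Noctua publishes the NH-U14S at 165 mm tall and the NH-U12S at 158 mm.
print("NH-U14S fits:", 165 <= limit)   # False under these assumptions
print("NH-U12S fits:", 158 <= limit)   # True
```

The point is less the specific numbers than the shape of the problem: two of the three terms vary by chassis and mounting, which is why the guesswork is hard to avoid.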
Let’s jump ahead to when I had all the parts for the main system and was ready to put it together. There were a couple of issues I ran into during assembly. The first was that since this is a Supermicro chassis and a Tyan motherboard, the onboard front panel connectors were incompatible.
Supermicro uses a single header plug for their chassis. Luckily, they sell an add-on that breaks this out into the individual pins needed to make it work with other boards. This is something I always seem to forget about until I need to put the thing together; I didn’t even need it during the earlier testing because the ASUS KRPA-U16 has an on-motherboard power switch. I ended up re-researching this until I found one. Then, after I had ordered one, I found an extra I had saved from a previous build. Minor mistake.
The second was connecting the SATA hard drive cages to the motherboard. I needed to make sure I ordered the right mini-SAS connectors. It may be somewhat difficult to see, but between the fans and the front disk caddies are a series of 8 SATA connectors.
The method I chose to connect these to the motherboard was a pair of mini-SAS breakout cables. These were far longer than I really needed, but at peak lockdown I kind of had to take what was in stock; things weren’t shipping in.
3.3 ft cables were the shortest available, so I bought 2. I looped them around and did the best I could with cable management. This is not the cleanest install I have done or seen. The reality is, they are out of the way and I am not concerned about not being able to reach anything or stray contact. So… stay back vile beasts! Seriously though, I do not consider this cable management bad. I could do better, but it would require purchasing smaller cables, which I don’t see as necessary at this point.
I should also note that I replaced the fans in the green fan caddies with the Noctuas, which is why the Noctua fan power extensions are visible. Unfortunately, the Noctua power connectors were not wholly compatible with what the old fans used. Fortunately, I could just remove the old wiring system entirely and run the fans underneath to the motherboard directly. I did the same for the exhaust fans by the CPU heatsink, though pulling those apart was a bit more difficult; again, I just wired them straight through to the motherboard. Everything worked out.
For anyone who is curious, this is more or less the final state of the system before I moved on to working on the expansion chassis in full force. The first two slots contain the LSI 3008-8e HBAs, then the GeForce GTX 1080, an open slot for the next GeForce GTX 1080 (since I am keeping the streaming media PC working until this project is ready to take over for it, while the VR machine doesn’t have that requirement), and lastly the Solarflare SFP 5122f.