
Homelab Act 2: Servers and 10GbE

For the last 171 days (according to uptime on one of my routers), I've been setting up a small homelab. Homelabs are kind of cool, and the setup has been interesting. I'll be writing a few posts explaining the steps I took.

Since I could access my system remotely, around this time I had started sticking interesting network cards into my desktop and working on projects remotely. Unfortunately, I didn't have enough PCIe slots (or PCIe lanes) to keep all of these cards plugged in at the same time, and plugging and unplugging them constantly was a bit of a pain.

Since part of this was running in the cloud, I figured I'd also start setting up the cool stuff. I was hoping to set up some small cloud instances to run the services I cared about.

Cloud Cost

A single low-RAM, shared-CPU, tiny-storage cloud instance can cost something like $3.50 a month. If you want a real CPU, or some real storage, the prices go up quite a bit. I wanted to run, at a minimum, influx, grafana, weechat, and an NFS (or owncloud) server with 20-30 gigs of space. Influx (or a SQL server) needs a real-ish server with a real-ish CPU. Neither should need much storage for my workload.

A single-core (real core, not shared core) server costs something like $10 a month (ish, kind of), so if I wanted two of them, I'd be paying $240 a year for 2 cores. This is sort of okay, but it won't come close to addressing my photo problem. I'd also still be swapping PCIe cards back and forth for my other projects (annoying!).

I knew I'd seen used rackmount servers on eBay for about this price, so I thought it might be worthwhile to put a server in my apartment.

eBay time

The first order of business was finding a "quiet, low power, expandable, powerful server." I eventually settled on a Dell R720 with 2.5 inch drive bays. Specs:

  • 2x Xeon E5-2650 v2 (8 physical cores per socket)
  • 32 gigs of ECC RAM
  • 2x 10k SAS drives (300 gigs each)
  • H710p controller (this was the upgraded choice; seemed wise in case I wanted to do RAID. This was very wrong, see NAS, ZFS, and NFS)
  • 4 ports of gigabit ethernet
  • Dell's iDRAC remote management system

This was a $379 server. Passmark gives the dual Xeons a score of 18813. A new 2016 i7 (the one I used until RYZEN happened) sells in 2020 for $300 and scores 11108. The RYZEN chip I have now blows them both away at 24503, but this isn't an AMD fanboy post. The Xeon is a 2013 CPU, so the performance/power isn't going to be as good as newer CPUs, but the performance/dollar pretty much blows away the cloud deal, if you only consider CPU cores.

This decision wasn't easy, but after reading almost every post on r/homelab I decided I'd give this a try.

Barebones server setup

Set up a RAID 0 array across the two drives

For testing, I just created a RAID 0 array across the two drives. This can be done from the remote management web UI, or by booting the machine and tweaking settings from the remote management virtual display.
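
Once the controller finishes building the virtual disk, the OS just sees one big drive. From a live Linux environment, a quick sanity check looks something like this (device names will vary; the SMART commands assume smartmontools' megaraid passthrough, which LSI-based controllers like the H710p generally support):

# the H710p presents the striped pair as a single block device
# (two 300 gig drives in RAID 0 -> roughly 600 gigs of raw space)
$ lsblk -o NAME,SIZE,TYPE,MODEL

# SMART data for the individual drives has to go through the controller
$ smartctl -a -d megaraid,0 /dev/sda
$ smartctl -a -d megaraid,1 /dev/sda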

Install an operating system

Again, super straightforward. I mounted an Arch ISO using the remote management tools, booted the box, and installed Arch the standard way.
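
For reference, "the standard way" here is just the normal Arch install guide. It looks roughly like the sketch below (the disk name and UEFI boot are assumptions, adjust to taste):

# rough sketch of a stock Arch install, run from the booted ISO
# (assumes the RAID 0 virtual disk shows up as /dev/sda and the box boots UEFI)
$ fdisk /dev/sda                   # create an EFI partition and a root partition
$ mkfs.fat -F32 /dev/sda1
$ mkfs.ext4 /dev/sda2
$ mount /dev/sda2 /mnt
$ mkdir -p /mnt/boot && mount /dev/sda1 /mnt/boot
$ pacstrap /mnt base linux linux-firmware openssh
$ genfstab -U /mnt >> /mnt/etc/fstab
$ arch-chroot /mnt
# then, inside the chroot: set the hostname, enable sshd, set a root password,
# and install a bootloader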

While the server is booting, the fans spin at something like 75% of their max RPM. This is loud enough to be heard through a closed door.

Once an OS is installed and booted, the fans spin down to a reasonably low volume. I can still hear the machine when the room is silent, but if I'm typing, playing music, or doing pretty much anything else, I can't really tell it's there anymore. If I put a large amount of load on the machine, the fans spin up, but that's expected and doesn't really bother me.

Adding PCIe cards

To install PCIe cards, all you have to do is lift the lid, pop out a little tool-less bay, and plop the card in. If you are installing multiple cards, it's useful to read the server documentation to make sure each card is attached to the appropriate socket.
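
Linux will also tell you which NUMA node (i.e. which socket) a card actually ended up attached to, so you can double check the placement after the fact. The PCI address below is made up; use whatever lspci reports for your card:

# find the card's PCI address
$ lspci | grep -i ethernet
# then ask sysfs which NUMA node it hangs off of (-1 means the kernel doesn't know)
$ cat /sys/bus/pci/devices/0000:41:00.0/numa_node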

After installing the new cards, the machine booted and the fans spun at max (as expected), but they never spun down. Apparently Dell doesn't like it when you install "non-certified" cards in the server, since it doesn't know the thermal requirements of the card.

The internet gave two pieces of advice:

  1. Update to the latest iDRAC.
  2. Issue some magical incantations that tell the server to chill out about certification.

I tried (1), but it didn't make any difference. For (2), I found the solution here, reposted for longevity:

# check if the fans will get loud (do this first to make sure these instructions actually work)
$ ipmitool raw 0x30 0xce 0x01 0x16 0x05 0x00 0x00 0x00

# response like below means Disabled (fans will not get loud)
16 05 00 00 00 05 00 01 00 00

# response like below means Enabled (fans will get loud)
16 05 00 00 00 05 00 00 00 00

# if that worked, you can Disable the "Default Cooling Response Logic" with
$ ipmitool raw 0x30 0xce 0x00 0x16 0x05 0x00 0x00 0x00 0x05 0x00 0x01 0x00 0x00

# to turn it back on
$ ipmitool raw 0x30 0xce 0x00 0x16 0x05 0x00 0x00 0x00 0x05 0x00 0x00 0x00 0x00

To connect to my server, I needed to run ipmitool like this (using the iDRAC user and password):

# at this point, server hostname was `worf` and idrac hostname was `idrac-worf`
$ ipmitool -I lanplus -H idrac-worf -U root raw 0x30 0xce 0x01 0x16 0x05 0x00 0x00 0x00
Password:
 16 05 00 00 00 05 00 01 00 00
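
If you end up flipping this back and forth while swapping cards, something like the little wrapper below saves retyping the raw bytes (just a sketch; the hostname and user are the ones from my setup):

#!/bin/sh
# fans.sh -- check or toggle Dell's "Default Cooling Response Logic" via iDRAC
# usage: ./fans.sh {check|quiet|loud}
HOST=idrac-worf
USER=root
IPMI="ipmitool -I lanplus -H $HOST -U $USER"

case "$1" in
  # same raw bytes as the commands above
  check) $IPMI raw 0x30 0xce 0x01 0x16 0x05 0x00 0x00 0x00 ;;
  quiet) $IPMI raw 0x30 0xce 0x00 0x16 0x05 0x00 0x00 0x00 0x05 0x00 0x01 0x00 0x00 ;;
  loud)  $IPMI raw 0x30 0xce 0x00 0x16 0x05 0x00 0x00 0x00 0x05 0x00 0x00 0x00 0x00 ;;
  *)     echo "usage: $0 {check|quiet|loud}" >&2; exit 1 ;;
esac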

Networking

As discussed in my previous post, I already had a small TP-Link switch sitting behind the VPN/router box. Four ports was getting tight (I was unplugging my smart light hub to play with a network card).

I had some project ideas that might benefit from faster-than-gigabit ethernet, and I really wanted statistics from my switches. To keep a long story short, I've ended up with a combination of a few different network cards and two Mikrotik switches:

  • CSS326-24G-2S+RM: 24x 1GbE ports, 2x SFP+ 10GbE ports. My primary network lives on this switch.
  • CRS309-1G-8S+IN: 8x SFP+ 10GbE ports, 1x 1GbE port for management. A secondary, experimental network lives here.
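
After plugging a card into one of the SFP+ ports, it's worth confirming the link actually negotiated 10Gb/s. Something like this does the trick (the interface name is just an example):

# list interfaces and their link state
$ ip -br link
# check the negotiated speed on the 10GbE port (the name will differ on your system)
$ ethtool enp65s0f0 | grep -i speed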

Buying this networking gear (especially the 10GbE switch) pushed me a little bit over the "saving money over cloud" limit, if I'm only planning on running a small number of services. However, as I'll discuss in my next post, I'm also pushing a lot of bandwidth over this network, and I'm not sure what that would cost on AWS.

Rack

I wired a bunch of stuff up and threw it under my desk:

[image: server_desktop_wire_mess.jpg]

That wasn't going to work, so I threw all of this into a small rack, moved my desktop to a (crappy) rack mount case, and here we are:

[image: battlestation.jpg]

Do something with it

At first I used this server mostly to poke at the network and write kernel bypass drivers for an Intel I350-T4 quad-gigabit-port network card. Side note: the ixy project is pretty neat and 100% worth poking at if you are interested in networking. By reading Intel's documentation, ixy's other drivers, and SPDK, I was able to get a driver working for the previously mentioned Intel card in 500 lines of code.
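
If you want to play with this kind of user space driver yourself, the first step is usually detaching the NIC from its in-kernel driver so nothing else touches the hardware while you poke at it. A rough sketch (the PCI address is made up, and whether you then go through vfio-pci or raw sysfs mappings depends on the driver):

# detach one port of the I350 from the in-kernel igb driver
# (check lspci for the real PCI address)
$ echo 0000:01:00.0 > /sys/bus/pci/drivers/igb/unbind

# optionally hand the port to vfio-pci if you want the IOMMU in the picture
$ modprobe vfio-pci
$ echo vfio-pci > /sys/bus/pci/devices/0000:01:00.0/driver_override
$ echo 0000:01:00.0 > /sys/bus/pci/drivers/vfio-pci/bind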

I've also used it as a bunch of CPUs for some brute forcing I tried on a few Advent of Code problems (I also solved them the right way), and for a few other projects where I wanted a quiet system to benchmark on.

Having a large, remotely manageable server available has been pretty convenient (even though the hardware is a little old). Also, it looks really cool.

Currently, this machine is a NAS and runs a handful of services. See my next post for the continuation of this series.
