Things are always changing in the IT field, and the lab here at home has to adapt. My primary server, the NAT gateway into my internal network and the host of a few websites, is getting a bit "long in the tooth." It runs RedHat Advanced Server 2.1, which came out in 2002. I've updated the hardware as necessary (new motherboard, hard drives, etc.), the Apache version is current, and I've updated libraries when potential threats came to light, e.g. Heartbleed, but the core operating system is ancient. Missing are such conveniences as systemd and, due to the age of the libraries, it isn't always able to run the "latest and greatest" software.
Although it's powered by a great UPS (using the Tripp Lite APS750 inverter/charger backed by a sealed lead-acid battery; UPDATE: now an AGM deep-cycle Group 27 marine battery), I recently had to upgrade the power supply. Power supplies don't last forever (see below), so I sprang for an EVGA 650-watt model. It has enough power to meet my requirements and also comes with a 10-year warranty. Since the server hosts a number of websites, as well as providing a gateway to the servers on the internal network, I don't want to find myself "in a pickle" if/when it fails. I needed a Disaster Recovery (DR) plan.
I'm constantly evaluating software and had reason recently to install WordPress (don't ask!). But I also want to do some "playing around" with mod_security. I've also been updating my servers with SSD drives, and I had a spare Samsung 970 EVO NVMe M.2 card sitting around. It's only the 250 GB model, since it was intended only for use as a root volume, but it boasts incredible speeds as well as a 5-year warranty. Although you can install these drives in SATA hard drive enclosures or on a PCIe riser, nothing beats the speed of being installed on the motherboard. Since it had to serve as backup for my primary server, which has twin PCI gigabit Ethernet cards for talking to the outside and inside worlds, I needed the following:
Not a particularly onerous list of requirements. It's not a gaming machine, so I don't require tons of PCIe slots or blazingly fast memory. But then it came down to the processor. I've almost always used AMD processors since they offered much better "bang for the buck." I have a hex-core system downstairs with an AMD Phenom II X6 1090T processor, built primarily for Xen work but now spending most of its time mapping cancer markers for the World Community Grid project. But an interesting thing happened this time around. Research on the 'net suggests that the latest CPUs from AMD run really hot, to the point where a water-cooling solution might be required. The old "same performance, half the price" model didn't seem to apply anymore.

I ended up opting for an Intel Core i5-7500 processor, and that more-or-less dictated the choice of motherboard, an ASUS PRIME B250-PLUS in my case. I say more-or-less because I have some personal preferences when it comes to motherboard manufacturers; not all of them make superior products. It even comes with a USB-C connector so that I can charge my iPad Pro! I've always had good experiences with Kingston and Corsair memory, so I chose 8 GB (2x4GB) of 2666 MHz Corsair Vengeance DDR4 memory. I already had a case (NVidia) along with a SATA drive and power supply, so the parts came in at around $500 total, my target price when assembling a new system.
Putting the pieces together isn't particularly difficult, but you have to take your time and heed all the warnings. Make sure that you're grounded, don't force anything, and if it doesn't "look" right then it's probably not. My system booted the first time, single beep, and I moved on to configuration of the BIOS. I tried to set the memory speed to the one advertised for the Corsair modules but it wouldn't boot; I had to knock it back to 2400 MHz to get back to operational. Don't take that as criticism. I've been in this industry long enough to know that not all motherboards "play well" with all memory. It's fast enough for me, so that's all that counts. The other problem arose with the repurposed power supply. Since the motherboard has these nice red LEDs which are illuminated in standby mode, there's a little current draw from the power supply even when the system is powered off. The old power supply made a hideous screeching noise in that mode, so it required another trip to the Amazon website to order a new EVGA power supply.

There were a couple of other tweaks to the BIOS which were needed. I like to be able to power on my servers via a "magic packet" sent over the LAN. I've got a setuid utility on my primary server which can do the job with a simple "user land" command. But, of course, it only works when Wake-on-LAN is enabled in the BIOS of the target system. The other issue didn't reveal itself until I started to install software. It turns out that virtualization isn't enabled by default on the new motherboard. The BIOS menus aren't particularly well-structured, in my humble opinion, so it took a bit of digging to find the right spot to make the change. Then again, this is a one-time exercise. I'm not about to waste countless hours adjusting voltages and timings, attempting to wring every last ounce of performance out of the components. I merely want a powerful, reliable system.
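For the curious, the "magic packet" format itself is dead simple: 6 bytes of 0xFF followed by the target NIC's MAC address repeated 16 times, for 102 bytes total. A minimal sketch of its construction (the MAC address below is a placeholder, not my server's real address):

```shell
# Build the hex representation of a Wake-on-LAN "magic packet":
# 6 bytes of 0xFF, then the target MAC repeated 16 times (102 bytes).
MAC="00:11:22:33:44:55"                 # placeholder MAC address
MACHEX=$(echo "$MAC" | tr -d ':')       # strip the colons: 001122334455
HEX="ffffffffffff"                      # 6 bytes of 0xFF
i=0
while [ $i -lt 16 ]; do
    HEX="$HEX$MACHEX"                   # append the MAC, 16 times over
    i=$((i + 1))
done
echo "$HEX"                             # 204 hex digits = 102 bytes
# Tools such as ether-wake or wakeonlan build and broadcast this frame
# for you; the NIC and the BIOS must both have Wake-on-LAN enabled.
```

In practice you'd hand the MAC to an existing tool rather than crafting the packet yourself, but knowing the format explains why only the MAC address is needed: the packet is broadcast, and the sleeping NIC simply watches for its own address repeated in the payload.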
Speaking of software, RedHat was one of the original packagers of Linux software "distributions" and was recently acquired by IBM. While there are now a plethora of distributions available, including SUSE, Ubuntu, Debian, etc., I still like the RedHat packaging. I'm not about to shell out for their Enterprise Linux since I don't require that level of support; I'm very happy with the freely available Fedora releases. RedHat powers all my servers and even my laptop, although one system is a dual-boot with Windows 7 Professional and the Mac Mini runs OSX. Rather than adopting the latest release, I went with Fedora 25 Server. Installation is straightforward, and I have chosen to put files which are written regularly onto the SATA hard drive, which means configuring rsyslogd appropriately. While I can't make the SSD completely read-only, limiting the write cycles will enhance its lifetime.
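Pointing rsyslogd at the hard drive is just a matter of changing the file targets in its configuration. A sketch of the idea, with the stock Fedora selectors redirected to an illustrative /data mount point (your /etc/rsyslog.conf and mount points will differ):

```
# /etc/rsyslog.conf (fragment) -- illustrative only
# Write the busy log files to the SATA drive, not the SSD root volume.
*.info;mail.none;authpriv.none;cron.none    /data/log/messages
authpriv.*                                  /data/log/secure
mail.*                                      -/data/log/maillog
```

The leading "-" on the maillog line tells rsyslog the file needn't be synced after every write, which further reduces disk traffic.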
The server version of Fedora 25 doesn't include a GUI. While this generally works for me, there are times when it's useful to have a GUI, such as when you need to configure BOINC. There might well be a command-line tool for configuring the client, but I also want to be able to pull up Firefox when I need to download software. There are a number of packages which need to be installed before you can run startx successfully. The following list worked for me, but note that some of these packages were required for other purposes; the list shouldn't be considered exhaustive, and it applies only to Intel graphics chips:
# dnf install firefox
# dnf install xorg-x11-drv-evdev
# dnf install mesa-dri-drivers
# dnf install gtk+-devel gtk2-devel
# dnf install libcurl-devel
# dnf install libxml2-devel
# dnf groupinstall "Development Tools"
# dnf install libxml-devel
# dnf install gcc-c++ libtool autogen
# dnf install xorg-x11-xdm
# dnf install xorg-x11-drv-intel
# dnf groupinstall gnome
# dnf groupinstall perl
So now the new system runs Fedora 25, Apache 2.4 and PHP 7. It also runs MySQL 5.7 since WordPress requires an RDBMS. The performance is exceptional and I have lots of "headroom," so I can install Eclipse and Glassfish 5. Now to configure mod_security2...
UPDATE: For reasons unknown, the GUI stopped working. Running startx resulted in some garbled graphics in the top-left of the screen but nothing more. I tried getting back to the place I was before, without success. And this is where an important lesson, learned long ago, kicked in: don't follow the rabbit down the hole. I could have spent days trying to get the X11 server working properly. But I'd hit the cut-off point, and so it was time to switch to plan B. I downloaded and installed Fedora 30 Workstation. But lest you think that I lost everything I had previously installed, worry no more. One of the other lessons I've learned is that you install almost nothing to the root volume on a *NIX system, precisely so that you can weather these kinds of events. My /home, /opt and /data filesystems are on the 1 TB SATA drive. I backed up some of the important files on the root filesystem, things like the /etc/rc.d directory tree along with /etc/hosts and /etc/fstab, and proceeded with the installation onto the NVMe root SSD. Even though it wasn't absolutely necessary, I unplugged the SATA drive while performing the OS installation. Once everything was up and running, I powered off, reconnected the SATA drive and proceeded to import and enable the volume group and logical volumes. It still took a couple of hours but was preferable to the alternative.

So take from this what you will. I enjoy the luxury of multiple physical drives on my servers. I also learned my lessons the hard way; on PWB UNIX systems you had to use fsdb (the file system debugger) to manually repair filesystems. Now we have journaled filesystems and logical volume managers. But some of the best practices endure. It would have been considerably more painful to upgrade my OS if my important data wasn't on separate filesystems. I saved myself from having to rebuild and reinstall MySQL and Glassfish. All my code, including the subversion projects, was almost immediately available after the re-install.
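For reference, re-attaching the pre-existing volume group after a fresh install boils down to a handful of LVM commands, roughly like the session below. The volume group and logical volume names are placeholders for my own; if the group was actually exported beforehand, a vgimport precedes the activation.

```
# vgscan
# vgchange -ay vg_data
# lvs
# mount /dev/vg_data/home /home
```

vgscan finds the volume groups on the newly reconnected disk, vgchange -ay activates the group's logical volumes, lvs confirms they're visible, and then each one can be mounted (or, better, given back its entry in /etc/fstab from the backup).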
I lost nothing and am now running the "latest and greatest" version of Fedora. I wish you similar success should you need to travel this road.
Copyright © 2019 by Phil Selby
All rights reserved internationally.