2019.10.09
Nathan Thompson
"Server you say? Do I snap my fingers and make it so?"
Alas, no, but installation and configuration are no herculean effort either. I would argue this whole process, when presented as a succession of text walls herein, looks harder than it is in actuality. As such, time to dive into the project.
If I may be so bold as to beg a momentary indulgence to elaborate a bit about my own personal home server setup prior to beginning our tutorial, I would be infinitely grateful. While you, the reader, may well wonder whether this meandering will in fact eventually take us to that very destination, a walkthrough detailing step-by-step instructions for configuring your very own local cloud home server…I do promise such an outcome to be forthcoming. So sit back and enjoy the ride; impatient sorts can simply click the navigation link to skip this detour, and I would never deign to hold even one iota of ill will toward those excitedly onrushing to that section.
For my own setup, after years of using Mac minis with OS X, then later an Asus Eee Box hacked to run OS X, and even an Eee PC 901 hacked to run OS X, I decided the proper route was an honest to goodness Linux based server. I have since used an older Intel NUC, currently running headless, as a home server.
Putting everything together was simple enough; the Intel NUC series are very small, micro systems. My model is one of the oldest, a second-generation system that sold at a deep discount compared to the two other Ivy Bridge model NUCs, likely on account of the absence of USB 3.0 (the i5 model had one USB 3.0 port) and a gigabit Ethernet port (the other i3 model and the aforementioned i5 model each had an Ethernet port). I added 8 GB of RAM, a 120 GB mSATA SSD, and an Intel AC WiFi/Bluetooth card, as the system used to be connected to an HDTV and was not near the router. Once I went headless, I was able to move the NUC next to the router, which meant adding an Ethernet port made sense, as wired networking connections are generally faster and more reliable.
Since I needed all three USB 2.0 ports for drives, the Apple Thunderbolt to Ethernet adapter was the cheapest option, not to mention one of the only such adapters available on the market.{1} Linux sees the adapter fine and it survives reboots without problem. It seems to be hot swappable as well, but I do not test that function often. Lastly, we have the fit-Headless, which is a dummy HDMI plug. Essentially, it tricks the system into thinking an HDMI monitor is attached so there are no problems with resolution and acceleration for remote graphical connections (VNC, RDP, etc.).
As far as drives go, I went with mobile drives because they take up less physical space and do not require a dedicated power cable. One 2 TB drive is for data backups for two systems; it is partitioned into a 1 TB slice for each system. The remaining 1 TB drive has shared folders for my daughter's Surface Pro 3 and folders for each mobile device{2}.
{1} When the NUC was connected to the HDTV, I used a Thunderbolt dock for USB 3.0, Ethernet, and more, but that solution is expensive overkill if one solely requires an Ethernet port. There are two Kanex adapters as well, one USB 3.0 and Ethernet combo and one USB 3.0 and eSATA combo. The USB 3.0 and Ethernet adapter would be fantastic, but the $100 price point makes the $29 Apple adapter a much more attractive value proposition.
{2} Currently none, as I was too lazy to reconfigure the Android backups on newer devices; yes, yes, bad Nathan.
For the server operating system, I prefer Linux, but I have used Mac OS in the past as well. A BSD or Windows box could work too, given any particular user's personal preference. Clearly, per my own preference, my NUC is running a Linux distribution. Once upon a time I used Elementary OS because the installation is simple, it does not come with very many default applications installed, it is based on Ubuntu LTS, and it still uses LightDM as a greeter{3}. However, Elementary OS as a project is far from my favorite; the community, well, the developers and project leaders, tend to rub me the wrong way with how they handle things, and a show-stopping bug (the system will go to sleep if you are sitting at a login screen) left me with no choice but to move on.
As a former Antergos user, now an EndeavourOS user{4} on my client systems, I figured anything based off Arch was too bleeding edge to use as a robust "server" OS. By server, I do not mean command line only, as I like my system to be able to pull desktop duty as an emergency backup system. Yet I do need the system to be stable and largely set and forget. Arch, as a rolling system, is constantly updating packages, which is a double-edged sword: packages are always current and fixes arrive quickly, but any given update can break something at an inconvenient moment.
Still, bottom line: since my Arch based client systems are more or less stable (I have four configured), I went all in, and now the NUC is running EndeavourOS, stock kernel and all.
{3} GDM is a mess right now for VNC connections, not worth the hassle.
{4} SAT prep time: Antergos/EndeavourOS is to Arch as Ubuntu is to Debian.
There will be a plethora of commands requiring terminal access. If the command line seems scary, no worries, we have you covered. Also remember, sudo means "I want to be an administrator and make important decisions now." Nearly every time you use sudo, a password prompt will appear for your administrator user's password. As such, you need to be logged in as the administrator user to run any such commands.
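If you want to see sudo in action before doing anything consequential, here is a harmless, purely illustrative example:
sudo whoami
It prompts for your password and then prints root, confirming the command ran with administrator rights.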
First, bring the system fully up to date:
sudo pacman -Syyu
Next, create a dedicated non-admin user for server duties{5} and give it a password:
sudo useradd -m serveruser1
sudo passwd serveruser1
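A quick, optional sanity check that the new account exists, and a preview of the passwd entry we are about to touch up:
getent passwd serveruser1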
To give the new user a proper display name, we can edit the passwd file directly{6}.
sudo nano /etc/passwd
will open the passwd file in the nano text editor as a sudo user. My admin user's entry appears as
serveradminuser:x:UID:GID:Server Admin User:/home/serveradminuser:/bin/bash
and the regular user appears as
serveruser1:x:UID:GID::/home/serveruser1:/bin/bash
Notice the
::
on serveruser1's line? That empty field is where the display name goes. Change that line to appear as follows:
serveruser1:x:UID:GID:Server User 1:/home/serveruser1:/bin/bash
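If hand-editing passwd feels risky (footnote {6} has more on that), the same comment field can also be set in one shot with usermod; a minimal alternative, assuming the same user name as above:
sudo usermod -c "Server User 1" serveruser1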
To manage the server remotely over the network, enable and start the SSH daemon:
sudo systemctl enable sshd.service
sudo systemctl start sshd.service
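At this point you can check that the daemon is running and try logging in from another machine on the network; a quick sketch, where the IP address is just a stand-in for whatever your server actually uses:
systemctl status sshd.service
ssh serveruser1@192.168.1.50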
Next up is getting the data drives to mount automatically at boot{7}. Run
df -Th
to see what is attached and how it is presented{9}. df is short for disk filesystem, -T is the option to display the file system type, and -h is the option to show disk space in human readable values. Once you know which device you are after, run
sudo blkid /dev/sdb1
(yes, your actual drive could differ) to find the UUID{8}. With the UUID in hand, open fstab for editing:
sudo nano /etc/fstab
and add an entry for each drive along these lines:
# Mounting Instructions for ClientUser1_DATA
UUID=00000000-0000-0000-0000-000000000000 /media/serveruser1/ClientUser1_DATA ext4 defaults,nofail,x-systemd.device-timeout=10s 0 0
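Before getting into what those mount options actually do, it may be worth testing the entry without a reboot; a quick sketch, assuming you have substituted your drive's real UUID and kept the mount point from the example above:
sudo mkdir -p /media/serveruser1/ClientUser1_DATA
sudo mount -a
df -Th
If df -Th now shows the drive on the new mount point, the entry is doing its job.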
As for the options in the fstab entry: nofail tells the system to keep booting even if it cannot find the listed drive, rather than treating the missing drive as a fatal error; on its own, though, the system will still wait up to the default 90 seconds for the drive to appear. Since we likely desire not to add 90 seconds to our boot time, x-systemd.device-timeout=10s tells the system to give up on the drive after 10 seconds instead of 90. Without nofail telling the system to continue booting, the server will boot loop if the drive is missing. I have seen suggestions for 1ms (yes, millisecond!), but my system would fail to mount the drive if the wait time was too short. To be on the safe side, I put 10 seconds. Feel free to play with the value to see if you can get the time down even further on your system. Do not put 0 here! Zero actually means infinite timeout, and we are right back to boot looping. Hopefully we are suitably chastised for even pondering such a thing.
{5} I am a weirdo: since I schedule the backups with anacron as a systemd system service, I do not create a local account per client user on the server. Instead, I create a single non-admin user on the server and point everything to it from the clients.
{6} The Arch wiki has a different suggestion, and you can certainly use that method in lieu of manually editing the passwd file. Manually editing config files can result in accidental damage to other accounts, so I generally make a passwd.bak file before editing anything. The .bak tip applies to every config file I edit: I make a backup of each one before touching the original.
{7} Here is the thing: if you boot your server without a user logged in, your drives will not mount. You can set the server to autologin your non-sudo user, thus mounting the drives at login, and honestly I did that for years under Mac OS X and even Linux, but the real option is to automount.
{8} GParted can tell you this information too:
{9} See the image below for how the data is presented in the terminal after running df -Th:
"Wait, your promise did come true, an honest to goodness tutorial on configuring the home server!"
'Tis true. No worries, as part two for Linux client configuration and part three for Windows client configuration have likewise arrived! Ciao!