Local Cloud

Backup: Client to Server

Part 1: Server


Nathan Thompson

"Server you say? Do I snap my fingers and make it so?"

Alas, no, but installation and configuration are no herculean effort either. I would argue this whole process, when presented as a succession of text walls herein, looks harder than it is in actuality. As such, time to dive into the project.

My Local Cloud Server

If I may be so bold as to beg a momentary indulgence to elaborate a bit on my own personal home server setup prior to beginning our tutorial, I would be infinitely grateful. While I would counsel you, the reader, to consider whether this meandering will in fact eventually take us to that very destination, a walkthrough detailing, step by step, how to configure your very own local cloud home server…I do in fact promise such an outcome to be forthcoming. No worries, now sit back and enjoy the ride; impatient sorts can simply click the navigation link skipping this detour, and I would never deign to hold even one iota of ill will toward those excitedly onrushing to that section.


For my own setup, after years of using Mac minis with OS X, then later an Asus Eee Box hacked to run OS X, and even an Eee PC 901 hacked to run OS X, I decided the proper route was an honest to goodness Linux based server. I have since used an older Intel NUC, currently running headless, as a home server.

  • Intel NUC DC3217BY (Ivy Bridge i3 processor, 8GB RAM, 120GB mSATA SSD, three USB 2.0 ports, Thunderbolt port, AC wireless, and HDMI)
    • Apple Thunderbolt to Ethernet adapter (this NUC lacks an Ethernet port!)
    • FitPC fit-Headless adapter for proper headless operation over VNC
    • Three USB 3 external drives.
      1. Seagate 2TB XBox drive
      2. Seagate 2TB
      3. 1TB 2.5" system pull from OWC with a 2.5" USB 3 OWC enclosure.

Putting everything together was simple enough; the Intel NUC series are very small, micro systems. My model is one of the oldest, a second generation system that sold at a deep discount compared to the two other Ivy Bridge model NUCs, likely on account of the absence of USB 3.0 (the i5 model had one USB 3.0 port) and a gigabit Ethernet port (the other i3 model and the aforementioned i5 model had one). I added 8GB of RAM, a 120GB mSATA SSD, and an Intel AC WiFi/Bluetooth card, as the system used to be connected to an HDTV and not near the router. Once I went headless, I was able to move the NUC next to the router, which meant adding an Ethernet port made sense, as wired networking connections are generally faster and more reliable.

Since I needed all three USB 2.0 ports for drives, the Apple Thunderbolt to Ethernet adapter was the cheapest, not to mention one of the only adapters available on the market.{1} Linux sees the adapter fine and it survives reboots without problem. It seems to be hot swappable as well, but I do not test that function often. Lastly, we have the fit-Headless, which is a dummy HDMI connection. Essentially, it tricks the system into thinking an HDMI monitor is attached so there are no problems with resolution and acceleration of remote graphical connections (VNC, RDP, etc.).

As for drives, I went with mobile drives because they take up less physical space and do not require a dedicated power cable. One 2TB drive is for data backups for two systems; it is partitioned into 1TB slices, one for each system. The remaining 1TB drive has shared folders for my daughter's Surface Pro 3 and folders for each mobile device{2}.

{1} When the NUC was connected to the HDTV, I used a Thunderbolt dock for USB 3.0, Ethernet, and more, but that solution is expensive overkill if one solely requires an Ethernet port. There are two Kanex adapters as well, one USB 3.0 and Ethernet combo and one USB 3.0 and eSATA combo. The USB 3.0 and Ethernet adapter would be fantastic, but the $100 price point makes the $29 Apple adapter a much more attractive value proposition.

{2} Currently none, as I was too lazy to reconfigure the Android backups on newer devices, yes, yes, bad Nathan.

System Software

For a server operating system, I prefer Linux, but have used Mac OS in the past as well. A BSD or Windows box could work too, given any particular user's personal preference. For my own part, clearly my NUC is running a Linux distribution. Once upon a time I used Elementary OS because the installation is simple, it does not come with very many default applications installed, it is based on Ubuntu LTS, and it still uses LightDM as a greeter{3}. However, Elementary OS as a project is far from my favorite; the community, or rather the developers and project leaders, tend to rub me the wrong way with how they handle things, and a show stopping bug (the system will go to sleep if you are at a login screen) left me with no choice but to move on.

As a former Antergos user, now EndeavourOS user{4} on my client systems, I figured anything based on Arch was too bleeding edge to use as a robust "server" OS. By server, I do not mean command line only, as I like my system to be able to pull desktop duty as an emergency backup system. Yet, I do need the system to be stable and largely set and forget. Arch, as a rolling system, is constantly updating packages, which is a double-edged sword:

    • Yay, updates! We are always on the newest packages and there are no big "distro upgrade" scenarios lurking down the line


    • Constantly updating can lead to breakage as packages sometimes have bugs or incompatibilities. Even setting an LTS kernel does not make an Arch system immune to this potential problem.

Bottom line, since my Arch based client systems are more or less stable (I have four configured), I went all in and now the NUC is running EndeavourOS, stock kernel and all.

{3} GDM is a mess right now for VNC connections, not worth the hassle.

{4} SAT prep time: Antergos/EndeavourOS is to Arch as Ubuntu is to Debian.


There will be a plethora of commands requiring terminal access. If the command line seems scary, no worries, we have you covered. Also remember, sudo means "I want to be an administrator and make important decisions now." Nearly every time you use sudo, a password prompt will appear for your administrator user password. As such, you need to be logged in as a user with administrator (sudo) privileges to run any such commands.

Installation of Server OS

  • I went with a pretty basic setup for EndeavourOS installation:
    1. I went with EFI, root, and hibernation-enabled swap for partitions; the installer's partitioning step is also the place to go if you desire encrypted root and/or swap.
    2. I created one admin account in the installer and checked the option to "Use the same password for the administrator account."
    3. After rebooting into the actual system:
      • Command line update of the system: sudo pacman -Syyu
        • pacman is the installer and updater tool for Arch Linux and its kin.
        • -S will synchronize packages, or install them in normal people speak
        • -y will refresh a copy of the master package database. Using two -y flags will force refresh your package lists even if everything thinks it is up to date. You probably only need to add one most of the time.
        • -u is the update option and will update all installed packages.
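Beyond the full system update above, a few other everyday pacman invocations are worth having in your back pocket. A quick hedged cheat sheet (the package names here are just examples, not things you necessarily need to install):

```shell
# Search the repositories for a package by name or description
pacman -Ss rsync

# Show details about an installed package
pacman -Qi openssh

# Install a specific package
sudo pacman -S rsync

# List installed packages with pending updates, without installing anything
pacman -Qu

# Remove a package along with its no-longer-needed dependencies
sudo pacman -Rns some-package
```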

Add a Secondary User

  • Add a regular, non-sudo-enabled account.{5}
    1. sudo useradd -m serveruser1
    2. sudo passwd serveruser1
      • Then enter password
    3. To make user show up with a proper name instead of all lower case shortname{6}:
      • sudo nano /etc/passwd will open the passwd file in the nano text editor as a sudo user.
      • Look at the admin account created within the installer and then edit the new user in the same way
        1. The admin user appears as serveradminuser:x:UID:GID:Server Admin User:/home/serveradminuser:/bin/bash and the regular user appears as serveruser1:x:UID:GID::/home/serveruser1:/bin/bash
        2. Notice there is nothing between the :: on serveruser1's line? Change that line to appear as follows serveruser1:x:UID:GID:Server User 1:/home/serveruser1:/bin/bash
    4. OpenSSH is preinstalled on EndeavourOS, yay! Let's get it running on the NUC server. Secure shell (SSH) is used to connect remotely (over the LAN in our case) and securely over a terminal connection.
      • sudo systemctl enable sshd.service
      • sudo systemctl start sshd.service
      • Feel free to edit the relevant SSH config files to properly secure your system as required; for instance, I use public keys for authentication instead of passwords.
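As a sketch of what that key-based hardening might look like (the hostname and username below are examples, and the exact sshd_config directives depend on your setup):

```shell
# On the client: generate a key pair and copy the public key to the server
ssh-keygen -t ed25519
ssh-copy-id serveruser1@server.local   # example user@host, substitute your own

# On the server: harden /etc/ssh/sshd_config with lines such as:
#   PasswordAuthentication no
#   PermitRootLogin no
#   PubkeyAuthentication yes

# Then restart the SSH daemon to apply the changes
sudo systemctl restart sshd.service
```

Make sure your key-based login actually works before disabling password authentication, or you may lock yourself out of the remote session.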

Automount Drives

  • In order for the server to see the drives at boot, we should automount drives in the serveruser1 account.{7}
    1. First we need to decide how we will identify the drive in the file. There are a few ways to specify a drive identifier in our fstab, but I decided UUID was the way to go. Make sure the backup drive is connected to the server. To find the UUID for the drive{8}:
      • df -Th
        • df is short for disk filesystem.
        • -T is the option to display the file system type.
        • -h is the option to show disk space in human readable values.
        • We should have a nice list of devices.{9} Look under the last column, "Mounted on", and you will see your backup drive's name. Let us assume the drive is named ClientUser1_DATA. If you glance under "Filesystem" on the same line, you will see something akin to /dev/sdb1.
      • sudo blkid /dev/sdb1 (yes, your actual drive could differ) to find the UUID.
        • /dev/sdb1: LABEL="DriveName" UUID="00000000-0000-0000-0000-000000000000" TYPE="ext4" PARTUUID="00000000-0000-0000-0000-000000000000"
        • We need the UUID. The actual UUID will be a unique string of hexadecimal characters, not the placeholder zeros shown here.
    2. Now to edit the fstab. Whatever is listed in the fstab will attempt to automatically mount at boot.
      • Add the following lines at the bottom of the fstab file (sudo nano /etc/fstab):
# Mounting Instructions for ClientUser1_DATA
UUID=00000000-0000-0000-0000-000000000000 /media/serveruser1/ClientUser1_DATA ext4  defaults,nofail,x-systemd.device-timeout=10s  0  0
        • Notice I put a comment on that first line; this is optional. If used, the text can be anything helpful as a note.
        • UUID goes first.
        • Next the mount point of the drive. Do not use /run/media or /run/mnt; those are for the system itself to use. /mnt is okay, but is usually not displayed by file managers, whereas /media will be shown in your typical file manager, which is Thunar on my system. Notice I put the mount point under the regular, non-sudo user on the server. That is my preference; feel free to configure as need be.
        • Followed by file system type. EXT4 is my case.
        • This part is the key addition. nofail tells the system to continue booting even if it cannot find the listed drive; without it, the server will boot loop if the drive is missing. Even with nofail, systemd will by default wait up to 90 seconds for the device to appear. Since we likely desire not to add 90 seconds to our boot time, x-systemd.device-timeout=10s tells the system to give up on the drive after 10 seconds instead. I have seen suggestions for 1ms (yes, millisecond!), but my system would fail to mount the drive if the wait time was too short. To be on the safe side, I put 10 seconds. Feel free to play with the value to see if you can get the time down even further on your system. Do not put 0 here!!!! Zero actually means infinite timeout!!!! Back to boot looping!!!! Hopefully we are suitably chastised for even pondering such a thing.
  • Reboot the system and make sure your drive mounts.
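Before that reboot, it is worth sanity checking the new fstab entry, since a bad line here is exactly what causes the boot trouble described above. Assuming a reasonably recent util-linux (the mount point below matches our running example):

```shell
# Check /etc/fstab for syntax and mountability problems
sudo findmnt --verify

# Attempt to mount everything listed in fstab that is not already mounted
sudo mount -a

# Confirm the drive landed where we expected
findmnt /media/serveruser1/ClientUser1_DATA
```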

{5} I am a weirdo, as I schedule the backups in anacron as a system service within systemd, I do not create a local account per client user on the server. Instead, I create a single non admin user on the server and then point everything to it from the clients.

{6} The Arch wiki has a different suggestion and you can certainly use that method in lieu of manually editing the passwd file. Manually editing config files can result in accidental damage to other accounts; as such, I generally make a passwd.bak file before editing anything. Actually, the .bak tip is applicable to every config file I edit; I make a backup of each one before editing the original.
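For the curious, the field being edited is the fifth one, the GECOS (comment) field. Here is a harmless demonstration of the edit on a sample line (made-up UID/GID, and it does not touch the real /etc/passwd), along with the usermod shortcut that accomplishes the same thing on a real system:

```shell
# A sample passwd line with an empty GECOS (5th) field -- illustrative data only
line='serveruser1:x:1001:1001::/home/serveruser1:/bin/bash'

# Fill the 5th field with a display name, exactly what the manual edit does
newline=$(printf '%s\n' "$line" | awk -F: 'BEGIN{OFS=":"} {$5="Server User 1"; print}')
echo "$newline"
# -> serveruser1:x:1001:1001:Server User 1:/home/serveruser1:/bin/bash

# The safer, non-manual equivalent on a real system:
#   sudo usermod -c "Server User 1" serveruser1
```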

{7} Here is the thing: if you boot your server without a user logged in, then your drives will not mount. While you can set your server to autologin your non-sudo user, thus mounting the drives at login (and honestly, I did that for years under Mac OS X and even Linux), the real option is to automount.

{8} Gparted can tell you this information too:

      • Navigate to the desired drive.
      • Right click on the desired partition.
      • Click on Information
      • Where it says UUID, that is the information you seek.
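From the command line, a couple of other ways to surface the same information (the device name here is an example; substitute your own):

```shell
# Print just the UUID for a given partition
sudo blkid -s UUID -o value /dev/sdb1

# Or list every block device with its filesystem type, label, and UUID
lsblk -f
```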

{9} See the image below to see how the data is presented in the terminal after running df -Th:

Server Down, Clients to Come

"Wait, your promise did come true, an honest to goodness tutorial on configuring the home server!"

'Tis true. No worries, as part two for Linux client configuration and part three for Windows client configuration have likewise arrived! Ciao!