An important part of relying less and less on large corporations to store your data is having your own, self-hosted "cloud" service. This can be far cheaper than any cloud service you can rent, and it keeps your data under your own control.
For example, Google offers only 15 GB per user for free. You might have an old computer lying around with a 100 GB or even 1 TB drive. You can definitely repurpose old hardware for this task.
Even if you don't have an old computer or an old hard drive, you can buy them used very cheaply. Any system with more than 2 GB of RAM and a post-2010 CPU should suffice. For storage, you can pick whatever and however much you want.
These instructions work for any Linux PC running on a local network. I will be referring to the server computer as "Pi" from now on. The reasons I prefer a barebones Raspberry Pi type of solution are:
Low power consumption
Relatively cheap (although there are cheaper alternatives now)
In case of a power outage, the Pi reboots (if it can...) when the power comes back. This is important for when you are not home. For a more sophisticated solution, you can use a UPS that instructs the Pi to shut down when there is an outage and powers it back on once the outage is over.
Simpler hardware and software setup
More compact and completely silent
This guide was written after months of experimentation and is based on my experience with this project; I have tried to make the setup as reliable and practically useful as possible. I cover what seems important to me and only what I have tried myself, so this is definitely not the end of the story for home NAS applications.
This is the easy part. Connect your Pi to an outlet and connect your hard drive (or SSD, or whatever you have) to it.
I use a 2 TB internal SATA HDD with a SATA-to-USB adapter for storage. You can even get by with a large SD card (128 GB) and no external storage, if that is sufficient for your needs. Of course, you can use as much or as little storage as you like, in whatever shape or form suits you.
I will be referring to the storage part as "the HDD" from now on.
If you are using a regular (old or new) system for this job, you should skip this part and just install Linux on it. I recommend any Debian-based distro.
In order to run the Pi headless, without a monitor, we need to set our credentials and configure the network before the first boot, while flashing the image.
We can use the Raspberry Pi Imager for that. Install the program and open it. Insert the SD card from which the Pi will boot into your system (in any way that works for you, e.g. an SD-to-USB adapter).
Pick your device from the menu (we are using a Pi 4) and choose the OS of your liking. Any Linux-based OS will work, but we are going to use Raspberry Pi OS (arm64). Select the SD card as your target device.
Don't forget to click on "Advanced options" before imaging and set:
Your hostname: pi.local (or anything that you like)
Enable SSH
Set a UNIX user {unix-username} (usually it's pi)
Configure WiFi
More info on setting up a headless Pi
After the imaging is finished, remove the SD card and insert it into your Pi. Connect your Pi to an outlet.
After a few minutes, you should be able to see it in your router settings (192.168.1.1 usually) or ping it (ping pi.local).
You should also be able to SSH into it (ssh {unix-username}@pi.local). We will be working on our server (Pi) through an SSH terminal. On a Windows PC you can install PuTTY in order to use SSH.
If this doesn't work for you, there may be connectivity or boot issues. Try connecting an Ethernet cable and check again. If that fails too, you may unfortunately be forced to connect a monitor.
You should also configure your router's DHCP to "pin" (reserve) the LAN IP of your server. You can do this through your router's admin page. If you don't have access to that, you can instead set a static IP on the server itself (on Raspberry Pi OS releases that use dhcpcd, this is done in /etc/dhcpcd.conf).
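A minimal sketch of such a static-IP entry, assuming a dhcpcd-based release, a wired connection on eth0 and a 192.168.1.0/24 network (all of these are assumptions, adjust them to your setup):

# appended to /etc/dhcpcd.conf (only on systems that use dhcpcd)
# 192.168.1.50 is a hypothetical fixed address for the Pi, 192.168.1.1 the router
interface eth0
static ip_address=192.168.1.50/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1

Newer Raspberry Pi OS releases use NetworkManager instead, where the same thing can be done with nmtui or nmcli; reserving the address in the router remains the simpler option.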
Now we are going to set up the HDD (or whatever storage you are using). If you don't use extra USB storage (just the storage on the SD card), you can skip this section.
If you have the ability to format your drive, you should do it. You can do it using parted from inside your Pi/server. You can pick any filesystem you want (Samba acts as a "translator", so clients don't need to understand the server's on-disk filesystem). If, for example, you have x users and want to give each of them (disk_size / x) of space, you can create a separate partition for each; if you want to give each one y GB instead, you can do that too.
Each partition will be mounted as a separate drive, at a different directory in the filesystem. Most people are fine with one partition that takes up the whole disk.
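As a concrete sketch, wiping a disk and creating a single ext4 partition that spans it might look like this (I am assuming the disk shows up as /dev/sda; double-check with lsblk first, because these commands destroy whatever is on that disk):

lsblk                                                  # identify the right disk first
sudo parted /dev/sda -- mklabel gpt                    # new GPT partition table (erases the disk)
sudo parted /dev/sda -- mkpart primary ext4 0% 100%    # one partition spanning the whole disk
sudo mkfs.ext4 /dev/sda1                               # create the filesystem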
You should mount your HDD (or HDD partitions) to a permanent mount point that doesn't change. This can be done as follows:
sudo blkid
lists your partitions and their UUIDs; the output will look something like:
/dev/sda1: UUID="1234-ABCD" TYPE="filesystem-type"
You need to keep the UUID.
sudo mkdir -p /mnt/{mount-point-name}
creates the permanent mount point in your filesystem.
sudo nano /etc/fstab
and add this line at the bottom of the file (replace vfat with the actual filesystem type of your partition, e.g. ext4):
UUID=1234-ABCD /mnt/{mount-point-name} vfat defaults,nofail 0 0
sudo mount -a
mounts everything listed in fstab, so the new entry takes effect without a reboot.
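To confirm that the partition ended up where you expect, you can check with:

df -h /mnt/{mount-point-name}    # shows the mounted filesystem, its size and free space
lsblk -f                         # shows every block device, its filesystem and mount point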
From here, you should already be able to browse the whole filesystem from Linux-based file managers, or from WinSCP on Windows, just by using SFTP.
Connect to sftp://{unix-username}@pi.local. You should be able to access the whole filesystem from there (including your HDD).
You can set shortcuts to any particular point of the filesystem in order to access it quickly from your other devices (for example, to /mnt/{mount-point-name}/movies).
This option may be sufficient for some, so we could end the guide here. I want to have native Windows support and better control over my shares, so we keep going with Samba.
The following commands are the same for any APT-based system using systemd (like the Raspberry Pi). If you run another kind of Linux install, you can easily find the equivalent commands for it.
Since we just installed our OS, we need to update and upgrade it: sudo apt update && sudo apt -y upgrade. This may take a while.
Then, we are going to install Samba, with:
sudo apt install -y samba
Check that everything went well with samba -V; the output should be something like Version 4.17.12-Debian.
Lastly, ensure that Samba will run on boot with sudo systemctl enable smbd. You can restart Samba at any time with sudo systemctl restart smbd, a command that will come in handy whenever you change the configuration.
We are going to set a password for a Samba user, in order to have better control over who is allowed to access which part of our server.
A good start is to set a Samba password for your default user (e.g. pi):
sudo smbpasswd -a {unix-username}
Since I am the only one using this server, I haven't created more users (apart from the default one above). Of course, you can set up many Samba users and give them different privileges. For example, to create a new user:
sudo useradd -m sophia creates a new UNIX user (the -m flag also creates a home directory)
sudo passwd sophia sets the UNIX password of the user
sudo smbpasswd -a sophia sets the Samba password (with which the user will be able to connect to the shares that are given to them)
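If a user like sophia only ever needs to reach the Samba shares and should not be able to log in over SSH, an alternative I tend to use (my own habit, not something this setup requires) is to create the account without a usable login shell:

sudo useradd -M -s /usr/sbin/nologin sophia   # no home directory, no interactive shell
sudo smbpasswd -a sophia                      # she still gets a Samba password and can use her shares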
This part is where you need to think and make some plans. You can add practically as many samba shares as you want.
Instead of exposing the whole filesystem and restricting access to directories based on UNIX read/write privileges (like NFS or SFTP do), Samba uses shares.
Each Samba share has its own rules and is restricted to a directory and its contents (including subdirectories). These shares are declared in /etc/samba/smb.conf. We will get to that later.
Samba by default comes with three shares:
homes, which shares the home directory of each UNIX user connecting to the Samba server
printers, which shares connected printers
print$, which shares printer drivers
In my configuration, this is not ideal. I can browse the /home/{unix-username} folder when I need to (not often) using SFTP anyway. I also don't have any printers connected.
I have deleted these default shares and added:
movies, which points to /mnt/{mount-point-name}/movies
hdd, which points to /mnt/{mount-point-name}
You can play around with cd (change directory), ls (list directory contents) and mkdir (make directory) to create the layout that suits you. Don't forget that if you want to access your HDD, you need to create directories and add files under its mount point (/mnt/{mount-point-name}).
Also note that if you did create several partitions on your disk (or have more than one disk), you should definitely set up a different share for each one, as they all "point" to a different directory in the filesystem.
For now just think about the architecture of your shares and note down the absolute paths of each shared directory and the properties/rules that each of them must have.
Also, make sure that your user has read/write privileges on the directories that you want to share by changing the ownership and permissions recursively:
sudo chown -R sophia:sophia /this/directory/will/be/shared/with/sophia
sudo chmod -R u+rw /this/directory/will/be/shared/with/sophia
More info on UNIX file system navigation here
Once you have decided where your shares should live and what purpose they should have (you might only have one share, you don't need more than that), it is time to edit the Samba configuration.
The first step is to back up your smb.conf file, in case you need it later: cp /etc/samba/smb.conf ~/ creates a copy in your home folder.
Then, you should edit the file. Since we are working over a headless SSH connection, we only have the terminal (no graphical desktop, which would only take up resources), so we have to use a CLI editor for this job.
A good choice that comes pre-installed is nano, which has somewhat unusual keyboard shortcuts. If you are used to the Windows shortcuts, you may find micro easier (sudo apt install micro).
When you open the file with sudo nano /etc/samba/smb.conf, you should see something like this (I removed the comments for clarity):
[global]
workgroup = WORKGROUP
log file = /var/log/samba/log.%m
max log size = 1000
logging = file
panic action = /usr/share/samba/panic-action %d
server role = standalone server
obey pam restrictions = yes
unix password sync = yes
passwd program = /usr/bin/passwd %u
passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
pam password change = yes
map to guest = bad user
usershare allow guests = yes
[homes]
comment = Home Directories
browseable = no
[printers]
comment = All Printers
browseable = no
path = /var/tmp
printable = yes
guest ok = no
read only = yes
create mask = 0700
[print$]
comment = Printer Drivers
path = /var/lib/samba/printers
browseable = yes
read only = yes
guest ok = no
As you can see, each [share] header declares one of the shares we talked about above, and the lines below it set that share's rules.
I removed the [print$], [printers] and [homes] declarations and settings.
I also added the shares that I mentioned above, so my file now looks like this:
[global]
workgroup = WORKGROUP
server role = standalone server
security = user
map to guest = Bad User
log file = /var/log/samba/log.%m
max log size = 1000
logging = file
panic action = /usr/share/samba/panic-action %d
# Minimum protocol versions to avoid older, less secure versions
min protocol = SMB2
client min protocol = SMB2
server min protocol = SMB2
[movies]
comment = Guest Access: View-only of movies directory
path = /mnt/hdd1/movies
browsable = yes
guest ok = yes
read only = yes
create mask = 0755
directory mask = 0755
[hdd]
comment = Full access to entire HDD (requires authentication)
path = /mnt/hdd1
browsable = yes
guest ok = no
valid users = alex
writable = yes
create mask = 0775
directory mask = 0775
This is a simple configuration with two shares.
The [movies] share allows guests (guest ok = yes), so I don't need to log in with an account every time I want to watch a movie from my smart TV.
The [hdd] share doesn't allow guests (guest ok = no, valid users = alex), but provides full access to the hard drive.
Let's say that we want another user to have their own designated space on our drive. We would define the share like this:
[sophia-space]
comment = Sophia's designated space on the disk
path = /mnt/hdd1/sophia-dir
browsable = yes
guest ok = no
valid users = sophia
writable = yes
create mask = 0775
directory mask = 0775
You can play around with different configurations and see what suits you.
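One pattern worth knowing about (an assumption of mine, not something covered above) is a common area writable by several accounts at once, using a UNIX group instead of listing users one by one. The group name, directory and masks below are all hypothetical:

# on the server
sudo groupadd family
sudo usermod -aG family alex
sudo usermod -aG family sophia
sudo mkdir -p /mnt/hdd1/shared
sudo chown -R root:family /mnt/hdd1/shared
sudo chmod -R 2775 /mnt/hdd1/shared    # setgid bit keeps new files owned by the group

# added to /etc/samba/smb.conf
[shared]
comment = Common area for everyone in the family group
path = /mnt/hdd1/shared
browsable = yes
guest ok = no
valid users = @family
writable = yes
create mask = 0664
directory mask = 0775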
You can check your configuration for syntax errors with testparm.
Don't forget to restart Samba after you're finished: sudo systemctl restart smbd.
More info on defining Samba shares here
The whole point of having a Samba server is that we can connect to it natively through the Windows file manager. The same applies to Linux, of course.
On Windows: Open a File Explorer window and type \\pi.local in the address bar (or whatever your hostname is). You can also reach your server through its local IP (check your router settings). After a few seconds, depending on your setup, you will see all the active shares that you declared above. Depending on the privileges of the account you connect with, you should be able to open the shares that account was given access to. You should definitely play around and check that everything works correctly.
On Linux: Open your file manager of choice and go to the Network tab (or find a menu entry called "Connect to Server"). If you are asked, choose Samba (or smb). Either way, you can use the address smb://sophia@pi.local (change the username and hostname accordingly).
All modern file managers support some kind of "pinning" of a shared directory to a shortcut, so you can get to it quickly, and they usually offer to store your credentials so you don't have to retype them again and again.
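If you prefer a share to appear at a fixed path on a Linux client instead of going through the file manager, you can also mount it with CIFS. A minimal sketch, assuming the [hdd] share and the user alex from above, and a mount point of your choosing on the client:

sudo apt install cifs-utils    # provides mount.cifs
sudo mkdir -p /mnt/nas         # hypothetical mount point on the client
sudo mount -t cifs //pi.local/hdd /mnt/nas -o username=alex,uid=$(id -u),gid=$(id -g)
# you will be prompted for the Samba password; sudo umount /mnt/nas detaches it again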
Security-wise, until now it's been all fun and games. The fact that this server runs on our local network means that it is protected, like every other device on it, by our (hopefully strong) WPA password.
If we want to be able to access our server from an outside network, we need to start tinkering with port forwarding and dynamic DNS. This can be very dangerous if you don't know what you are doing: you need to keep up with security updates, make sure that all of your SMB passwords are strong, review all of your UNIX and Samba privileges for each user, and so on.
But will they target me? Of course they will. There are bots crawling the internet looking for open ports on home networks (just like yours).
The other solution is to set up a VPN running on your server and access it with your client devices. This is much safer than port forwarding, but requires proper setup and maintenance.
What I propose to networking amateurs (and what I also use myself) is a service like Tailscale. It acts like a VPN and connects your devices, no matter where they are in the world, into a LAN-like network. You can SSH and use Samba through Tailscale by installing it and adding your devices to your network for free.
I am generally happy with it for now, although this solution is far from optimal and I use it only when I really have to connect to my NAS while away from home.
To install on your PC, follow these instructions. I have disabled it from running on boot and I generally open it and connect only when I need to use it.
To install it on your Pi: curl -fsSL https://tailscale.com/install.sh | sh, then open the link that appears at the end of the installation to log in to your account. You can then enable/disable it with sudo tailscale up and sudo tailscale down.
You can check that this works by SSHing into the Pi from your computer, this time using the IP/hostname that Tailscale provides instead of the LAN one.
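A quick way to see what addresses Tailscale handed out, and to run that check, looks something like this (the address is a made-up example; use whatever tailscale status reports for your Pi):

tailscale status                       # on the client: lists your devices and their Tailscale IPs
ssh {unix-username}@100.101.102.103    # replace with the Tailscale IP shown for the Pi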
As a disclaimer, Tailscale is not open source and I cannot fully endorse it, because in practice I do not know what it does. It is simply a practical solution for me.
If you are planning to torrent directly from your server, you should also install a VPN that actually hides your IP and redirects your traffic from the ISP to a trusted VPN server.
You should be able to follow these instructions with any VPN provider that supports OpenVPN.
sudo apt install openvpn on the Pi.
cd /etc/openvpn and download the OVPN files from your VPN provider into that directory.
sudo nano vpn_credentials.txt and add these two lines to the file:
vpn-username
vpn-password
sudo sed -i '/auth-user-pass/c\auth-user-pass /etc/openvpn/vpn_credentials.txt' *.ovpn modifies all of the downloaded .ovpn files so that the auth-user-pass line in each one points to the file where our credentials are stored. Use the absolute path, because OpenVPN resolves relative paths from the directory it was started in, not from the config file's location.
sudo chmod 600 /etc/openvpn/vpn_credentials.txt restricts access to the file containing your password.
Finally, you can start the VPN with sudo openvpn --config /etc/openvpn/some_country.ovpn, or run it as a daemon (in the background) with sudo openvpn --config /etc/openvpn/some_country.ovpn --daemon.
In the first case you stop the VPN with Ctrl-C, and in the second case you find and kill the daemon with sudo pkill openvpn.
You can test whether the VPN is working by running curl ifconfig.me before and after you start/stop it and checking whether the IP changes. For more in-depth testing, you can use traceroute.
We have now reached a point in our configuration where we can actually start thinking about automating some of the workload that maintaining a NAS involves.
For example, we may want to sync the whole share (or a subdirectory of it) with one or more clients.
The way I did this, which requires no further server configuration, is with a program called FreeFileSync. It has a simple UI, and I can press a button that synchronizes everything automatically whenever I want to.
This way, I use the Pi sort of as a buffer too. I have my desktop PC and my laptop and I work on project files from both. Before I start working on anything, I update my local files from the server and after I finish working I update the server from my local files. This way everything stays up to date and is easy to maintain without much tinkering and manual labor.
A simpler, more barebones way of doing this would be through rsync, which uses a different client/server model and has "delta copy" routines (see the sketch below).
A much more sophisticated way that utilizes a web interface would be Syncthing.
If for some reason FFS doesn't cut it for you, I would suggest one of the other two solutions.
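If you go the rsync route, a one-way push over SSH can be as simple as the following (the local directory and the remote path are assumptions based on the mount point used earlier in this guide; adjust them to your own layout):

rsync -avz --delete ~/projects/ {unix-username}@pi.local:/mnt/{mount-point-name}/projects/
# -a keeps permissions and timestamps, -v is verbose, -z compresses in transit,
# and --delete removes files on the server that no longer exist locally, so use it with care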
We are connecting to the server using the terminal. This means that every long-running program needs to run in the background (as a daemon), so that we can close the terminal session without disrupting it (that is why we run openvpn with the --daemon parameter).
When we want to continuously download/upload files, that leaves us with two choices:
using a daemon CLI torrent client
using a torrent client with a web interface
A good choice that does both at the same time is transmission. We can install it with sudo apt install transmission-daemon and then stop it with sudo systemctl stop transmission-daemon, because the daemon must not be running while we edit its configuration (it overwrites settings.json with its in-memory settings when it exits).
In order for it to work, we have to change the configuration file with sudo nano /etc/transmission-daemon/settings.json.
You should definitely check/change the following fields:
"rpc-authentication-required": false
, it is relatively safe to not use authentication since we are only going to be connecting locally, but you can never be too sure, so in case you want,
"rpc-username": "admin"
, this is the username that you will use when connecting through the browser. Change it to something you will remember.
"rpc-password": "yourpassword"
, same as the above
"rpc-whitelist-enabled": true
, this needs to stay true
, in order to block traffic from outside your local network
"download-dir": "/mnt/hdd1/movies"
, change this to suit your needs, for example make it point to your hard drive.
"rpc-whitelist": "127.0.0.1,192.168.*.*"
, very important to add the second address in order to be able to connect from your local network. If you use a different subnet mask (other than 192.168...), add that instead after the comma.
We start it again with sudo systemctl start transmission-daemon.
We make it run on boot with sudo systemctl enable transmission-daemon.
The program can also be controlled using the following commands (if you enabled rpc-authentication, add -n '{rpc-username}:{rpc-password}' to each command, conventionally right after transmission-remote):
transmission-remote -l, to list active torrents
transmission-remote -a "{magnet_link}", to add a new torrent (magnet or file)
transmission-remote -t {torrent_id} -r, to remove a torrent
transmission-remote -t {torrent_id} -s, to start a torrent
transmission-remote -t {torrent_id} -S, to stop a torrent
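For example, assuming authentication is enabled with the credentials from settings.json, a full add-and-check cycle might look like this (the magnet link and the torrent ID are placeholders):

transmission-remote -n 'admin:yourpassword' -a "{magnet_link}"   # add the torrent
transmission-remote -n 'admin:yourpassword' -l                   # list torrents and note the ID
transmission-remote -n 'admin:yourpassword' -t 1 -S              # stop the torrent with ID 1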
There are more things that could be added to this setup (but I don't need them, so I won't bother implementing them for now):
Installing Cockpit and adding an OpenVPN plugin to it, in order to control (almost) the whole server from the browser (less SSH needed)
Installing and configuring Nextcloud, in order to have an experience as close as it gets to Google Drive or MS OneDrive
It is important to have a good grasp of basic system management and network setup in order to fully understand and configure these more sophisticated programs. Otherwise, you will just end up with a setup that is too feature-rich to be useful, which is usually not sustainable in the long run.
If you are sure you understand the basics and need something more complicated and advanced (e.g. real-time collaboration on documents), you should surely give them a shot.