2024-02-13T03:25:35+00:00
Recently, I set up a self-hosted webspace on a simple set-top-box pc. It has been running great so far. The only problem I encountered is that sometimes the wi-fi dongle fails to come up after a reboot and the usb wi-fi adapter has to be replugged. With this situation, the self-hosted webspace cannot have a near 100 percent uptime guarantee.
So, I decided to rent a small virtual machine and move the stuff I host to the cloud. The cloud has many advantages for hosting my stuff compared to the self-hosted single board pc I used before.
After completing the order of my cloud vps and getting shell access, the first thing I did was enable public key authentication and disable root login. I generated a new ed25519 ssh key for use with the cloud virtual machine and copied the public key to the ~/.ssh/authorized_keys file.
ssh-keygen -t ed25519 -f ~/.ssh/cloudvps-sg.key
ssh-copy-id -i ~/.ssh/cloudvps-sg.key root@cloudvps_sg
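On the client side, a ~/.ssh/config entry saves passing -i on every connection; the HostName below is a placeholder for the actual address of the vps:
Host cloudvps_sg
    HostName vvv.www.xxx.yyy # replace with the public ip of the vps
    IdentityFile ~/.ssh/cloudvps-sg.key
    IdentitiesOnly yes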
Afterwards, I logged in to the server and created a user with sudo rights to perform my daily activities on the cloud virtual machine.
USERNAME=writer # Replace writer with the desired username
useradd -m -s /bin/bash -G users,sudo,adm,systemd-journal $USERNAME
passwd $USERNAME
tar -cf - .ssh | sudo -u "$USERNAME" tar -C /home/$USERNAME -xf -
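# Optional sanity check before leaving the root session, reusing the
# $USERNAME variable from above: confirm the groups and the copied ssh key.
id "$USERNAME"
ls -l "/home/$USERNAME/.ssh/authorized_keys"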
exit
With the steps above, I can log in over ssh using my newly created user and use sudo for administrative tasks such as updating the system or rebooting the virtual machine.
For a more paranoid setup, I cleared the root password and locked the root account so it is not possible to log in with it.
passwd -d -l root
With this setup, the root account password is cleared and the root account is not available for login. If at any time I need a root shell, I can obtain one using sudo.
sudo -i
So no more root login on my cloud vps now.
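On top of locking the account, sshd itself can be told to refuse root logins and password authentication. A minimal sketch of the relevant directives, assuming the stock /etc/ssh/sshd_config (newer Debian releases may also read drop-ins from /etc/ssh/sshd_config.d/):
PermitRootLogin no
PasswordAuthentication no
PubkeyAuthentication yes
After editing, the configuration can be checked and applied without dropping existing sessions:
sudo sshd -t
sudo systemctl reload ssh.service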
After leaving the cloud virtual machine up for several minutes, I viewed the system log to find out whether there was any suspicious activity.
journalctl
Of course, the internet is hostile: many ip addresses were scanning for open ports and attempting to log in using different credentials. As of now, the failed ssh login attempts have occupied ±3.8 MB of disk space.
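For reference, the journal's total disk usage and the failed-attempt noise can be inspected with journalctl itself; the pattern below is just one example of what to grep for:
journalctl --disk-usage
journalctl -u ssh.service -g 'Failed password|Invalid user' --no-pager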
So I researched options to limit failed ssh access attempts and secure the system from malicious attackers.
First, I installed ufw and set up a simple rule to limit ssh.
sudo apt install ufw
sudo ufw limit ssh
sudo ufw enable
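The resulting rule set can be reviewed with ufw itself; it should show LIMIT entries for the ssh port:
sudo ufw status verbose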
With this setup, the firewall is working and access to the ssh port will be rate limited by the linux packet filtering framework. But it doesn’t stop the ssh port from being attacked.
Finally, I found that the simplest thing I can do is to set up a virtual point-to-point network interface and limit ssh access to that virtual network interface only.
I ended up with two tunnels set up inside my virtual private server.
The first is a tailscale tunnel. If you haven’t heard about tailscale, it is a service for building a private network out of several devices. With tailscale, these devices can appear as if they are on the same local network, no matter what upstream internet link they use. One can add a single board pc at home connected to wi-fi, a laptop, an Android phone, and a cloud virtual machine to a single tailscale network, and those devices will be available as if they were on the same local network.
The second is a pure wireguard tunnel. The wireguard tunnel needs more crafting to make it behave like the tailscale tunnel, but in fact what tailscale does can also be achieved with a pure wireguard setup.
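For the wireguard side, here is a minimal sketch of what the server half of such a point-to-point interface could look like, saved for example as /etc/wireguard/wg0.conf for use with wg-quick. It assumes the 172.16.99.0/24 and fd42:42:42::/64 ranges that appear later in the slice configuration are the wireguard addresses; the keys are placeholders and the client side configuration is omitted:
[Interface]
Address = 172.16.99.1/24, fd42:42:42::1/64
ListenPort = 51820
PrivateKey = <server private key>

[Peer]
PublicKey = <client public key>
AllowedIPs = 172.16.99.2/32, fd42:42:42::2/128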
With this setup, I can activate the vpn and then log in to the cloud server using the private ip address on the vpn link.
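In practice that means something like the following from the client, assuming a matching wg0 configuration on the client side (or tailscale) and the user created earlier:
sudo wg-quick up wg0 # or: sudo tailscale up
ssh -i ~/.ssh/cloudvps-sg.key writer@172.16.99.1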
After confirming that the vpn links, both the tailscale and wireguard vpns, are reliable, I decided to close the ssh port from the internet. There are many ways to achieve this, but I settled on the systemd way of blocking access to the ssh port before the packet hits the sshd service.
I edited the openssh service unit. On Debian, the service name is ssh.service. If you follow this guide, replace ssh.service with the openssh service name of your linux distribution.
sudo systemctl edit ssh.service
I added the following content in the systemd-provided text editor.
[Service]
Slice=openssh.slice
IPAddressAllow=localhost 172.16.99.0/24 fd42:42:42::/64 100.0.0.0/8 fd64:64:64ef::/64
IPAddressDeny=any
If you are following this guide, be sure to write the ip address ranges according to your own ip address configuration. Also, keep the IPAddressAllow line together with IPAddressDeny; if the allow list is missing or does not cover your vpn addresses, you will lock yourself out of your own system and have to resort to physical or console access to recover it.
The edited file will be saved as /etc/systemd/system/ssh.service.d/override.conf on a Debian system, since the openssh server’s unit name is ssh.service.
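The effective unit, including the new drop-in, can be reviewed with systemctl cat before anything is reloaded:
systemctl cat ssh.service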
Make sure that the vpn link is reliable before reloading systemd and ssh.service.
Since I had confirmed that my vpn setup was quite reliable, I reloaded systemd and ssh.service.
sudo systemctl daemon-reload
sudo systemctl reload ssh.service
With this, ssh.service is reloaded but still using the old slice. To move ssh.service to the newly defined slice, a service restart is needed.
sudo systemctl restart ssh.service
The slice can be inspected via systemctl.
systemctl --type slice
systemctl status openssh.slice
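The control group tree itself can also be browsed; after the restart, sshd should show up under openssh.slice instead of the default system.slice:
systemd-cgls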
With the steps above, the ssh port is hidden from the internet. You can confirm this state by trying to ssh into the internet-facing ip address of the system or by scanning for the ssh host keys via ssh-keyscan.
MY_IP=vvv.www.xxx.yyy #Replace with the public ip or ipv6 of the system
ssh-keyscan "$MY_IP"
If the ssh scan fails with a timeout, then the newly created rule is working.
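Conversely, scanning the address on the vpn link should still return the host keys, confirming that ssh remains reachable where it is supposed to be; the address below is the placeholder wireguard address from the sketch earlier:
ssh-keyscan 172.16.99.1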
Despite the controversy around systemd, I found that systemd is actually very useful, especially with cgroups, since each service can be monitored or even restricted on a per-feature basis. There are many possible uses of systemd cgroups, and the process described here is just one of those possibilities.
Feel free to explore more possibilities of systemd cgroups.
Thanks for reading this page.