If you are new to Linux, I hope the following tips will help you out. This guide assumes you are running Debian (or a Debian derivative).

  1. Backup
  2. Automatic software updates
  3. Cache your debian package downloads
  4. Avoid having to use sudo on every command
  5. Syntax highlighting text files
  6. Add colors to log files
  7. Run command when file contents changes
  8. Put your computer to sleep and auto wakeup on timer
  9. Monitoring thermals
  10. Set your AMD GPU to low power mode
  11. Monitor ping latency and packet loss
  12. Find and remove duplicate files
  13. Protect your computer from random untrusted USB devices
  14. Show data transfer progress
  15. Get notifications on your phone
  16. Extract files from any binary
  17. Detect file corruption
  18. Download more RAM
  19. Benchmarking applications
  20. Find which package provides a file
  21. Faster compression with zstd
  22. Faster checksum with b3sum and xxh
  23. Managing your Linux system on a browser
  24. Run virtual machines
  25. Run containers with systemd
  26. Run debian everywhere
  27. Optimize bootup speed
  28. Prevent system from going to sleep
  29. Run a process with low priority
  30. Authenticate once, open multiple SSH sessions
  31. Restrict SSH key to only run a single command
  32. Mount remote server locally over SSH
  33. Monitor resource contention
  34. Control CPU governor
  35. Bind mounting
  36. Reduce reserved blocks in ext4 filesystem
  37. Access cloud storage from the terminal
  38. Prevent critical files from accidental deletion
  39. Monitor files accessed by the system
  40. Avoid frequent password authentication
  41. Use 127.1 instead of 127.0.0.1
  42. Recovering from server running out of disk space
  43. Reduce memory usage when running multiple VMs

Backup

If you are new to Linux, backups are the first thing you need to set up. Speaking from experience, it is only a matter of time before you run a wrong command and completely wipe your data. Make sure your backups are stored on another machine, since a common mistake most people (beginners and experts alike) make is to accidentally run a command on the wrong hard disk. You should also validate your backups from time to time; unvalidated backups are the same as no backups.

As of now, BorgBackup is simply the best backup tool for Linux. It can compress and deduplicate your data, which lets you back up every day without your backup server running out of disk space. Borg can encrypt your data on the client side, so your cloud provider will never be able to read your files.
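As a sketch, a typical repository setup and daily backup might look like this (the server address, repository path and source directory are placeholders):

```shell
# initialize an encrypted repository on a remote server over SSH
borg init --encryption=repokey-blake2 user@backupserver:/srv/borg/repo

# create a deduplicated, compressed archive of the home directory
borg create --stats --compression zstd \
    user@backupserver:/srv/borg/repo::'{hostname}-{now}' ~/
```

The `{hostname}-{now}` placeholders make borg name each archive after the machine and timestamp, so repeated runs never collide.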

Do not back up to a CIFS mount; there is a known bug that will lead to data corruption. Always use SSH for backup.

Make use of the BORG_PASSCOMMAND environment variable to avoid having to type in your password every time you need to access the borg repository.
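For example, borg can read the passphrase from a file instead of prompting (the file path here is just an example):

```shell
# store the passphrase in a file readable only by your user
echo 'my-secret-passphrase' > ~/.borg-passphrase
chmod 600 ~/.borg-passphrase

# borg runs this command to obtain the passphrase instead of prompting
export BORG_PASSCOMMAND='cat ~/.borg-passphrase'
```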

If you want to avoid temporary files from being backed up, you have two options:

  1. Use the --exclude-from FILE option and list every directory to be excluded in this file
  2. Use the --exclude-caches option. This will exclude every directory with a CACHEDIR.TAG file. You can place this inside the directory you want to be excluded. Some tools like ccache and cargo will place a CACHEDIR.TAG file inside their cache directory.
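A valid CACHEDIR.TAG file must begin with a fixed signature line, so you can create one by hand to mark any directory (the directory name below is just an example):

```shell
# mark a scratch directory so borg's --exclude-caches skips it
mkdir -p ~/scratch
printf 'Signature: 8a477f597d28d172789f06886806bc55\n' > ~/scratch/CACHEDIR.TAG
```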

Care must be taken while backing up files from software like virtual machines, containers or databases. All of these should be shut down before starting the backup process. If your backup takes a long time and you cannot afford to have the server down for that long, you should probably look into btrfs snapshots.

Borg is best combined with borgmatic. With borgmatic you can avoid having to write a script to back up, check, prune and compact the repository. With the help of the apprise tool, borgmatic can be set up to send you a notification if any backup operation fails.

If you have disk encryption enabled, you also need to back up your LUKS header. Refer to the Fedora docs or the Arch Wiki.

Automatic software updates

Backups and security have a lot in common: by the time you realize you needed them, it's too late.

The unattended-upgrades package will automatically install security updates on Debian (and Debian derivatives like Ubuntu). While it is rare for a Debian security update to break the system, it can still happen. On a critical production server you should not have auto updates enabled; instead, subscribe to the Debian security advisory list, test updated packages, and then manually push the updates to the production server. If you do not have such a process in place, you are better off enabling auto updates.

After installing the unattended-upgrades package, activate auto updates by running

sudo dpkg-reconfigure unattended-upgrades

If you are running a server, you will need to modify the /etc/apt/apt.conf.d/50unattended-upgrades file to enable auto reboot after a security update. Modify the following three lines as per your requirements:

Unattended-Upgrade::Automatic-Reboot "true";
Unattended-Upgrade::Automatic-Reboot-WithUsers "true";
Unattended-Upgrade::Automatic-Reboot-Time "now";

Cache your debian package downloads

If you have many Debian servers or containers, it might make sense to cache your apt downloads with apt-cacher-ng. With this, your apt downloads will work at the speed of your LAN/server rather than your internet/Debian mirror speed.

Setup is easy, install the apt-cacher-ng package on your server and point all your clients to use it by creating a config file in /etc/apt/apt.conf.d/ by running:

echo 'Acquire::http { Proxy "http://SERVER_IP:3142"; }' > /etc/apt/apt.conf.d/proxy

Note: If you encounter a The following signatures were invalid: BADSIG 648ACFD622F3D138 Debian Archive Automatic Signing Key error on the client, you will need to apply the workaround in this comment

Avoid having to use sudo on every command

You can get a root shell with sudo -i and avoid having to prepend sudo to every command

Syntax highlighting text files

Install the bat package and use batcat instead of cat to syntax-highlight text files. Another option is the python3-pygments package and running pygmentize -g FILE

Add colors to log files

Use ccze to add colors to your log files

ccze -A < /var/log/dpkg.log

Run command when file contents changes

The entr program can be used to run a command whenever a file changes

echo test.py | entr python3 test.py

Now when you save the Python file in your text editor, entr will rerun the script

Put your computer to sleep and auto wakeup on timer

rtcwake can be used to put your computer to sleep and wake it up at a set time. This requires your computer to have an RTC (which even 15-year-old computers have, but not SBCs like the Raspberry Pi). This is particularly interesting for home servers, where this command can be used to shut down the server to save power at night or when you are not at home.

rtcwake -m mem -s 60

The above command will put your computer to sleep and wakeup after 60 seconds.

Monitoring thermals

Install the lm-sensors package and run sensors-detect to detect your hardware. You can then monitor your system with the sensors command. Run watch -d sensors to rerun the command every 2s; the -d parameter will highlight any changes.

Note that while reading the CPU and GPU information is safe, some BIOSes are buggy, and repeatedly reading the temperature alone might trigger bugs in the BIOS. On my Asus motherboard, the sensors command sometimes makes the fans stop responding to temperature changes, or even inverts the fan speed control (spinning at full speed at low temperatures and slower as the temperature increases)

Set your AMD GPU to low power mode

Unless you are gaming or need your GPU for computation, putting your GPU into low power mode will prevent it from heating up under load and spinning up the GPU fans. You will need to create a systemd service that runs the following command:

echo low > /sys/class/drm/card0/device/power_dpm_force_performance_level

Monitor ping latency and packet loss

Use mtr to monitor ping latency and packet loss

mtr -t 8.8.8.8

The -t parameter forces it to use the terminal instead of popping open a GUI window.

Find and remove duplicate files

fdupes -dN mydir

This command will delete all duplicate files in the mydir directory without prompting, keeping the first copy of each

Protect your computer from random untrusted USB devices

With USBGuard you can restrict what USB devices can be connected to your computer.
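A minimal workflow might look like this (the device ID shown is hypothetical and will differ on your machine):

```shell
# install usbguard and generate a policy allowing everything currently plugged in
sudo apt install usbguard
sudo sh -c 'usbguard generate-policy > /etc/usbguard/rules.conf'
sudo systemctl restart usbguard

# later: list attached devices and explicitly allow a newly connected one by ID
usbguard list-devices
usbguard allow-device 6
```

New devices not covered by the policy are blocked until you allow them.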

Show data transfer progress

Add pv to your pipe to see progress and data transfer rate.

pv src.tar  | zstdmt > src.tar.zst

You can also have multiple instances of pv by using the cursor (-c) and name (-N) options

tar c mydir | pv -cN 'pre zstd' | zstdmt | pv -cN 'post zstd' > test.tar.zst

pv can also rate limit data transfer with the -L option
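For example, to cap a copy at roughly 1 MiB/s (the file names are placeholders):

```shell
# rate limit the transfer to 1 MiB per second
pv -L 1m src.iso > /mnt/usb/src.iso
```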

Get notifications on your phone

apprise can be used to send messages to a large list of providers. For instance, add this to the on_error hook of borgmatic to get notified when a backup fails.

apprise  -t 'server1' -b 'Backup failed' -c /etc/apprise.config

Extract files from any binary

The binwalk tool can extract files from any binary file. This is particularly useful on firmware binary files.

Detect file corruption

Use the hashdeep tool to calculate a checksum file and then use it to ensure files have not been modified.

Create the checksum file by running

hashdeep -r Downloads/ > checksum

and now verify integrity by running:

hashdeep -ak checksum -r Downloads/

Download more RAM

Install the zram-tools package and it will set up swap on a compressed zram disk. When needed, the kernel will compress memory contents and swap them out to zram. This approach is effective because in most cases memory contents compress easily, and the time required to compress and decompress memory is lower than reaching out to the hard drive.

Once installed you can see how much of the zram is being used by running the zramctl command.

NAME       ALGORITHM DISKSIZE  DATA COMPR TOTAL STREAMS MOUNTPOINT
/dev/zram0 lz4          15.6G  4.6G  1.1G  1.3G      12 [SWAP]

Here 15.6 GiB of RAM was allotted to zram. Presently 4.6 GiB of data has been compressed down to 1.1 GiB before being stored in zram.

Benchmarking applications

To accurately benchmark an application that reads files from the hard disk, the kernel's page cache needs to be cleared before the program is executed. Run echo 3 > /proc/sys/vm/drop_caches as root to drop the kernel caches.
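A typical cold-cache benchmarking run might look like this (myapp stands in for whatever program you are measuring; dropping caches requires root):

```shell
# flush dirty pages to disk first, then drop page cache, dentries and inodes
sync
echo 3 | sudo tee /proc/sys/vm/drop_caches

# now time the cold-cache run
time ./myapp
```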

Find which package provides a file

With the apt-file package you can find out which package provides a file. After installation, run apt-file update once and it is ready for use.

~$ apt-file search kvm-ok
Searching through filenames ...           
cpu-checker: /usr/sbin/kvm-ok  
cpu-checker: /usr/share/man/man1/kvm-ok.1.gz

Faster compression with zstd

If you need to share files with Windows or macOS users, compress them with zip. For everything else use zstd. You will get compression close to gzip but at a fraction of the time. The zstd package provides the multi-threaded zstdmt command, which will use all your CPU cores for compression.
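For example, to create and later extract a compressed tarball (directory name is a placeholder):

```shell
# compress using all CPU cores
tar c mydir | zstdmt > mydir.tar.zst

# decompress to stdout and extract
zstd -dc mydir.tar.zst | tar x
```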

Faster checksum with b3sum and xxh

b3sum is a much faster alternative to SHA-256. On a Raspberry Pi 4, b3sum can be more than 8 times faster than sha256sum. Similarly, xxHash can be used instead of CRC32.
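Usage mirrors the familiar sha256sum workflow (the xxhsum command comes from the xxhash package; file names are placeholders):

```shell
# create checksum files
b3sum *.iso > checksums.b3
xxhsum *.iso > checksums.xxh

# verify later
b3sum -c checksums.b3
xxhsum -c checksums.xxh
```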

Managing your Linux system on a browser

Manage your Linux machine via a web browser with Cockpit. Install the cockpit package and head over to http://localhost:9090/.

Run virtual machines

You can run virtual machines with libvirt. First make sure hardware virtualization is enabled by installing the cpu-checker package and running the kvm-ok command. It should show the following output; if not, check your BIOS manual on how to enable it.

$ /usr/sbin/kvm-ok
INFO: /dev/kvm exists
KVM acceleration can be used

Install libvirt-daemon-system and the GUI virt-manager package. Launch the “Virtual Machine Manager” application to run your VMs. This application can also connect over SSH to other machines and manage the virtual machines running on them. You can run VMs this way even on a Raspberry Pi, provided you have enough memory.

The guestfs-tools package provides a lot of useful tools like virt-customize, virt-sparsify and virt-sysprep. Make sure your VM is shut down before running these tools.
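For instance (the disk image and VM names are placeholders):

```shell
# shrink free space out of a disk image
virt-sparsify disk.img disk-small.img

# reset a cloned VM: remove SSH host keys, machine-id, logs, etc.
virt-sysprep -d cloned-vm

# set a root password inside an image without booting it
virt-customize -a disk.img --root-password password:changeme
```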

Consult the ArchWiki for more information.

Run containers with systemd

Install the systemd-container package for the tools necessary to manage containers, and debootstrap for generating container images.

Generate a container image with:

debootstrap --force-check-gpg --include=dbus,systemd stable debian

The systemd and dbus packages are mandatory if you need to manage the container with systemd tools.

Now move the container over to /var/lib/machines

mv debian /var/lib/machines/

You can now start the container with machinectl start debian and open a shell with machinectl shell debian. You can access the logs from the container by running journalctl on the host with journalctl -M CONTAINER_NAME -f

By default the container uses private networking. If you want internet access in the container, you need to allow it to use host networking by creating a file /etc/systemd/nspawn/CONTAINER_NAME.nspawn with the following contents

[Network]
VirtualEthernet=no

The mkosi tool can be used to generate container images for other Linux distributions.

Run debian everywhere

The debootstrap tool mentioned above can also be used for running Debian on any existing Linux system, including rooted Android phones. The process is documented here

Optimize bootup speed

systemd-analyze can be used to debug slow bootup. It can even generate a bootchart image.

systemd-analyze blame
systemd-analyze critical-chain
systemd-analyze plot > boot.svg

Prevent system from going to sleep

If you need to run something that might take a while to complete without your system going to sleep due to being idle, prepend the command with systemd-inhibit

systemd-inhibit backup.sh

Run a process with low priority

If you need to run a heavy process in the background without it affecting anything else on the system, run it with nice and ionice:

nice ionice -c 3 backup.sh

If you have systemd, run:

systemd-run --user -t --quiet --property='CPUWeight=1' --property='IOWeight=1' backup.sh

Authenticate once, open multiple SSH sessions

The SSH control master feature allows you to establish a single connection first and then reuse this for subsequent sessions. This is especially useful if your server requires TOTP for login. You won’t need to keep on entering TOTP for every new connection.

Add the following lines to your ssh config file located in ~/.ssh/config

Host *
    ControlMaster auto
    ControlPath ~/.ssh/%r_%h_%p

Restrict SSH key to only run a single command

If you need to automate backup over SSH, the ssh key can be configured to only allow execution of a single command.
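This is done with a command= option in the server's ~/.ssh/authorized_keys; for borg backups the forced command is typically borg serve. A sketch (the repository path and key are placeholders):

```
command="borg serve --restrict-to-path /srv/borg/repo",restrict ssh-ed25519 AAAAC3... backup@client
```

Whatever command the client tries to run, the server executes only the forced one, and the restrict option disables port and agent forwarding for that key.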

Mount remote server locally over SSH

Using sshfs, you can mount a remote server filesystem on a local directory over SSH. This can be useful if you want GUI applications to access the files on the server. No additional software is required on the server and communication is encrypted via SSH.

sshfs user@server:/home/user/dir ~/server/

Monitor resource contention

The kernel’s pressure stall information (PSI) lets you monitor how resource contention is affecting your application's performance. The best way to monitor this is htop: to add PSI meters to the interface, press F2 to enter the setup screen and select the PSI meters from the Meters category.

PSI some CPU:    13.75%  4.35%  0.99% 
PSI full IO:      0.00%  0.00%  0.00% 
PSI full memory:  0.00%  0.00%  0.00%

Control CPU governor

You can control the CPU governor by running:

echo powersave | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

The list of available governors can be read from /sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors.

Setting the powersave governor may be of use on embedded systems like the Raspberry Pi, when your power adapter cannot provide enough power to the system at full CPU load.

Bind mounting

Bind mounting lets you mount a directory from one location to another. Some use cases for this are:

  1. Run the system on an SSD and bind mount HDD partitions to directories used for storing media or other content that does not require SSD performance.
  2. Make a directory read-only for use within a container.
  3. On a Raspberry Pi, mount a USB SSD on directories that get lots of writes.

You can create a read-only bind mount by running

mount -o bind,ro /src/ /home/user/ro

Make this permanent by adding it to /etc/fstab

/ssd/src /home/user/src none defaults,bind 0 0

Now run systemctl daemon-reload and then mount -a

Reduce reserved blocks in ext4 filesystem

By default 5% of an ext4 filesystem is allotted to reserved blocks; on a large disk this can waste a lot of space that may not be needed, depending on how the disk is used. You can reduce it with tune2fs to free up some space.
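For example, to lower the reserve to 1% on a data partition (the device name is a placeholder; keep the default on the root filesystem, where reserved blocks help root recover from a full disk):

```shell
# reduce reserved blocks from the default 5% to 1% of the filesystem
sudo tune2fs -m 1 /dev/sdb1
```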

Access cloud storage from the terminal

rclone can be used to access your files stored on cloud providers from a terminal. Its crypto remote feature can provide client side encryption.
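After configuring a remote interactively, you can copy files to it or mount it like a local directory (the remote name "mycloud" and paths are placeholders; mounting requires FUSE):

```shell
# interactively set up a cloud provider remote
rclone config

# copy a local directory to the remote
rclone copy ~/Documents mycloud:Documents

# or mount the remote as a local directory
rclone mount mycloud:Documents ~/cloud &
```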

Prevent critical files from accidental deletion

The immutable file attribute can be set on a file to prevent even the root user from deleting it

chattr +i important.txt

The immutable flag needs to be removed with chattr -i important.txt before the file can be deleted.

Monitor files accessed by the system

The fatrace tool can list all the files being accessed on the system.
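fatrace needs root; a short capture might look like this:

```shell
# show file accesses on the current mount point only, for 10 seconds
sudo fatrace -c -s 10
```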

Avoid frequent password authentication

You can avoid having to frequently enter your user password when running certain software by adding your user account to the associated group, as given below.

usermod -a -G GROUPNAME USERNAME

Group name       Use for
dialout          Access to serial ports
systemd-journal  Access to systemd system logs
libvirt          Access to libvirt virtual machines
render           AMD ROCm

After making the change, a reboot (or logging out and back in) is necessary.

Use 127.1 instead of 127.0.0.1

Due to how inet_aton works, you can replace 127.0.0.1 with 127.1. Similarly, the Cloudflare DNS address 1.0.0.1 can be replaced with 1.1

Recovering from server running out of disk space

On your server create a large empty file by running:

fallocate -l 1G EMPTY_FILE

If your server ever runs out of disk space, logging in, deleting this file and restarting your applications will quickly restore functionality, giving you plenty of time to figure out what is using up all the space.

Reduce memory usage when running multiple VMs

With Kernel Samepage Merging (KSM), pages containing identical data across multiple virtual machines can be merged into one to save memory. To activate it, run:

echo 1 > /sys/kernel/mm/ksm/run

According to your needs, modify the /sys/kernel/mm/ksm/sleep_millisecs and /sys/kernel/mm/ksm/pages_to_scan files. You can then read how many pages are being shared from /sys/kernel/mm/ksm/pages_shared
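The counters are in pages, so a rough estimate of the memory deduplicated can be computed from pages_sharing (assuming the usual 4 KiB page size):

```shell
# rough KSM savings estimate in MiB (assumes 4 KiB pages)
pages=$(cat /sys/kernel/mm/ksm/pages_sharing)
echo "$(( pages * 4 / 1024 )) MiB shared"
```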