A Complete Guide For Transition From Single to Multiple Partition Ubuntu 20.04 Server On Already Installed System Using LVM
The title is my request. I have been reading for hours and have found tutorials for moving /home to its own partition and /tmp to its own partition, but none for converting all of the recommended directories into partitions on Ubuntu Server. One tutorial got close, but then skipped the obviously necessary step of copying data from the existing directories to the new partitions.
Anyway, after many hours, I still have not found a tutorial that covers my rather common situation.
Situation:
I installed Ubuntu Server on Prod & Test servers (VPS & VirtualBox, respectively) and did some work. While reading server hardening tutorials, I realized that it is best practice to have /boot, /home, swap, /tmp, /usr and /var (and maybe /opt?) all on separate partitions, for resource management and security reasons.
The correct way to create the above-mentioned partitions is by using LVM.
However, none of the tutorials discuss or recommend partition sizes (proportionally, as obviously different systems have different disk space available; mine is 80 GB on /).
Additionally, some tutorials mention the necessary changes to /etc/fstab and some don't.
My goal is to convert my 80 GB single-partition setup into multiple partitions and then secure the partitions against common attacks and exploits using a combination of nodev, noexec and nosuid in /etc/fstab.
I can spend days testing and failing different configurations in virtualbox... Or some handsome and/or beautiful "Ubuntu Server Partitioning Guru" can publish an easy to follow (intermediate user target audience) definitive guide as I have described above and become Internet Famous as many users hosting their own projects on vps will forever love & adore you. :-)
UPDATE
I originally did not publish the primary server hardening guide because I didn't want this post to become a "debate" over this guy's very thorough piece of work. However, in hindsight I think it would be helpful for folks looking to answer this post to see the actual security benefits that I am trying to achieve.
1 Answer
I think separating things into different file systems usually doesn't improve security by itself. If someone breaks into the system while it's running, everything is mounted anyway, and there is no logical difference from having everything in one file system. What reasons were given in the material you studied?
That said, it can help performance (different file systems or different hardware beneath these mount points) and shorten how long a disaster recovery takes (e.g. if only the SSD that held /var went up in smoke, you only have to restore that backup and the rest stays running).
You were asking about recommended partition sizes:
- For `/` you're good with 15-20 GB. I've never needed more for a server (running a web server + mail server).
- `swap` I usually put on its own volume/partition which has the same size as the RAM - simply so that suspend-to-disk can work. People used to recommend 2x RAM size, but with today's RAM sizes, when you run into a situation where you need a serious amount of swap, you're in trouble anyway. You'll notice it from a massive slowdown, and you should then quickly increase the available RAM.
- `/boot` just needs 500 MB, which gives it space for 8+ kernel+initrd versions. Make sure to run `apt autoremove` frequently, to keep it trimmed after kernel upgrades.
- I would keep `/usr` and `/opt` as directories on `/`; I just don't see a benefit in moving them to their own file systems.
- As described in this hardening guide, having a separate volume for `/tmp` does make sense, as it allows you to make that world-writable directory more restricted - both in how much space it can use of the totally available space, and in what can be done with the files that are stored there. The guide recommends the `nodev`, `nosuid` and `noexec` options when mounting the file system. The guide only gives the mount commands for "one-time use"; translating this into a line in /etc/fstab means that you place `nodev,nosuid,noexec` into the 4th (options) column of the line that mounts the dedicated /tmp volume onto `/tmp`.
- That leaves us with `/home` and `/var` - that's usually the "important" stuff. On my servers `/home` is pretty much empty, but `/var` holds public_html, the logs, the databases, etc. So I keep `/home` as a directory on `/`, but `/var` definitely gets its own volume, and is backed up most frequently. Give it all the remaining space after the above is done.
Then you asked how to make the transition:
- In the running system, where everything is on `/`, attach the new disk (VDI file, ...), prepare it with `pvcreate`, `vgcreate` and `lvcreate`, and then create the file systems of your choice (`mkfs.ext4`, for example).
- Then make temporary mount points under /mnt, e.g. /mnt/newroot, /mnt/newvar, ... and mount the file systems there.
- Then use `rsync -xaP <source>/ <destination>/` for each of your file systems. The `-x` option will prevent rsync from crossing file system boundaries, i.e. if you do `rsync -xaP / /mnt/newroot/` it won't also copy /var, /home or even all the new file systems mounted under /mnt. `-a` makes sure permissions etc. are taken over without modification, and `-P` shows progress. For details, please refer to `man rsync`.
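As a concrete sketch, assuming the volume group and volume names from the example session further down (`vg1` with `root`, `boot`, `tmp` and `var`): if you mount the new volumes in their final layout under /mnt/newroot, a single rsync of `/` can fill all of them, since the old system is one file system. Adjust device names to your setup.

```shell
# Temporary mount point for the new root volume
mkdir -p /mnt/newroot
mount /dev/mapper/vg1-root /mnt/newroot

# Mount the other new volumes at their future locations
mkdir -p /mnt/newroot/boot /mnt/newroot/tmp /mnt/newroot/var
mount /dev/mapper/vg1-boot /mnt/newroot/boot
mount /dev/mapper/vg1-tmp  /mnt/newroot/tmp
mount /dev/mapper/vg1-var  /mnt/newroot/var

# One copy fills all of them: the old /boot, /tmp and /var are plain
# directories on the old /, so -x copies them; it only keeps rsync out
# of real mount boundaries such as /proc, /sys, /dev, /run and
# /mnt/newroot itself (those are recreated as empty directories).
rsync -xaP / /mnt/newroot/
```
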
After that's done, edit the /mnt/newroot/etc/fstab and make sure you list all file systems at the appropriate mount points. If you've gotten that far, this shouldn't be too hard (as you chose all the /dev/mapper/... names, file systems, etc.).
You will also have to run grub-install, and maybe update-grub, to make the new disk bootable, but I'm not so sure about the exact procedure there. With VMs you can easily try it out, and if it doesn't boot, attach the old disk again and fix it.
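A common (but not guaranteed) approach is to chroot into the new system, with its file systems mounted under /mnt/newroot, and run the GRUB tools from there. Device names below are examples; note that BIOS booting from a GPT disk also needs a small bios_grub partition, and UEFI needs an EFI system partition - neither is shown in the example session below.

```shell
# Bind-mount the pseudo file systems the boot loader tools need
for d in /dev /dev/pts /proc /sys /run; do
  mount --bind "$d" "/mnt/newroot$d"
done

# Enter the new system and install the boot loader on the new disk
chroot /mnt/newroot /bin/bash
grub-install /dev/sda
update-grub
exit

# Clean up the bind mounts (reverse order)
for d in /run /sys /proc /dev/pts /dev; do
  umount "/mnt/newroot$d"
done
```
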
For reference, here is a shell session that gives you some specific commands on the partitioning + LVM + formatting + referencing in fstab topics. Please note that you most likely will have to modify them, for example if your device isn't /dev/sda, if you want different file systems, etc. - it's merely an example.
# After using fdisk to create one partition that covers the whole device,
# it looks like this:
root@ubuntu:~# fdisk -l /dev/sda
Disk /dev/sda: 100 GiB, 107374182400 bytes, 209715200 sectors
Disk model: VBOX HARDDISK
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 18ACB4C9-3F33-7041-8BEB-D819F138A809
Device Start End Sectors Size Type
/dev/sda1 2048 209715166 209713119 100G Linux LVM
# Create a physical volume for LVM
root@ubuntu:~# pvcreate /dev/sda1
  Physical volume "/dev/sda1" successfully created.
# Create a volume group with the name "vg1" for LVM that will
# hold all our logical volumes
root@ubuntu:~# vgcreate vg1 /dev/sda1
  Volume group "vg1" successfully created
# Create the logical volumes as described above
root@ubuntu:~# lvcreate --name root --size 20G vg1
  Logical volume "root" created.
root@ubuntu:~# lvcreate --name swap --size 8G vg1
  Logical volume "swap" created.
root@ubuntu:~# lvcreate --name boot --size 500M vg1
  Logical volume "boot" created.
root@ubuntu:~# lvcreate --name tmp --size 5G vg1
  Logical volume "tmp" created.
# Have a look at the logical volumes
root@ubuntu:~# lvs
  LV   VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  boot vg1 -wi-a----- 500.00m
  root vg1 -wi-a-----  20.00g
  swap vg1 -wi-a-----   8.00g
  tmp  vg1 -wi-a-----   5.00g
# Have a look at the volume group and see how much space is left
root@ubuntu:~# vgs
  VG  #PV #LV #SN Attr   VSize    VFree
  vg1   1   4   0 wz--n- <100.00g <66.51g
# Use the remaining space for the last logical volume, var
root@ubuntu:~# lvcreate --name var --size 66.5G vg1
  Logical volume "var" created.
# Have another look at the volumes
root@ubuntu:~# lvs
  LV   VG  Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  boot vg1 -wi-a----- 500.00m
  root vg1 -wi-a-----  20.00g
  swap vg1 -wi-a-----   8.00g
  tmp  vg1 -wi-a-----   5.00g
  var  vg1 -wi-a-----  66.50g
# Format all volumes except swap with the ext4 file system
for i in /dev/mapper/vg1-{root,boot,tmp,var}; do mkfs.ext4 $i; done
# Turn vg1-swap into swap space
mkswap /dev/mapper/vg1-swap
# Create fstab entries that look like this
/dev/mapper/vg1-root / ext4 defaults 0 1
/dev/mapper/vg1-boot /boot ext4 defaults 0 2
/dev/mapper/vg1-var /var ext4 defaults 0 2
/dev/mapper/vg1-tmp /tmp ext4 nosuid,nodev,noexec 0 0
/dev/mapper/vg1-swap none swap sw 0 0
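Once those fstab entries are in place in the new system, you can check them without rebooting - a sketch, using the device names from the example above:

```shell
# Activate the swap volume and verify it shows up
swapon /dev/mapper/vg1-swap
swapon --show

# mount -a mounts everything listed in /etc/fstab that is not
# mounted yet, so errors in the file surface immediately
mount -a

# Confirm /tmp really got its restrictive options
findmnt -no OPTIONS /tmp    # should include nosuid,nodev,noexec
```
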