Create a level 1 RAID from empty disks
Your customer got a new Ubuntu 18 Server with two additional disks. The customer needs a new directory at
/data to store important files.
All data inside this directory should be stored on a RAID 1 utilizing the additional disks. Your colleagues are familiar with RAID but want to try Btrfs as the filesystem. However, in case they don't like Btrfs, they want to be able to convert just the filesystem later without changing the RAID.
- identify the two additional disks
- create a software RAID 1
- format and mount the RAID according to your instructions
- make sure the mount persists across reboots (reboot via `vagrant reload` inside the training directory)
- Please reboot your VM via `vagrant reload` inside the training directory - not `reboot` - to mount required directories
- In case your VM doesn't boot anymore or the HDDs are broken, just run `vagrant destroy` followed by `vagrant up` to delete the VM and start over
- After finishing the training, please remove all files in
Disks are organized in partitions. Partitions then have a filesystem that is used to store the files.
While it is possible to create a RAID of whole disks, there is no single right answer whether to use whole disks or partitions. For this training, we'll go with partitions. By using partitions, we're able to precisely specify the size as 1000MB each.
While fdisk can be used for partitioning as well, cfdisk provides a simple graphical interface.
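cfdisk is interactive, so there's nothing to copy-paste, but the same layout can be scripted with sfdisk. As a sketch, assuming the two new disks show up as /dev/sdb and /dev/sdc (an assumption - verify with lsblk first, the new disks are the ones without partitions):

```shell
# List block devices; the two empty disks without partitions are the new ones
# (the device names below are assumptions - check against your own output).
lsblk

# Create one 1000MiB partition on each disk, scripted via sfdisk.
# 'type=fd' marks the partition as "Linux raid autodetect".
for disk in /dev/sdb /dev/sdc; do
  echo 'size=1000MiB, type=fd' | sudo sfdisk "$disk"
done
```

The loop is just a convenience; running cfdisk once per disk and creating a 1000MB partition by hand achieves the same result.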
Btrfs is a modern filesystem for Linux that implements advanced features like:
- Integrated multi-device spanning (RAID-like features)
While Btrfs provides these advanced features, its development status is heavily discussed in the community. Because some of its features are still not ready for production, it's still not the default filesystem on most Linux distributions. openSUSE is one of the few that comes with Btrfs by default. Red Hat publicly announced the removal of Btrfs support in 2017.
Despite the development status of some features, Btrfs is still used by many. As long as the sysadmin is aware of the known issues, the functional features outweigh the disadvantages of older filesystems like ext4.
Using Btrfs with RAID is an example of these known issues and shows why it's important to check the Btrfs Status page. Even though Btrfs is capable of setting up a RAID itself, the implementation is not fully developed. By checking the Status page, you'll find RAID0, RAID1 and RAID10 marked as stable and RAID56 as unstable. Further reading reveals the side note "reading from mirrors in parallel can be optimized further".
One strategy to use Btrfs while avoiding any issues with its RAID feature is to skip the built-in RAID capability and rely on stable implementations like mdadm. This strategy is also used in this training.
Creating a btrfs filesystem is pretty easy: mkfs.btrfs
mkfs was originally implemented 40 years ago and is still used for many filesystems in Linux. A
`mkfs.<fs-type>` variant exists for most filesystems.
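As a sketch, assuming the mdadm array ends up at /dev/md0 (the name depends on how mdadm was invoked - check cat /proc/mdstat), creating and mounting the filesystem could look like:

```shell
# Create the Btrfs filesystem on the RAID device
# (/dev/md0 is an assumption - verify with: cat /proc/mdstat)
sudo mkfs.btrfs /dev/md0

# Create the mountpoint and mount the filesystem
sudo mkdir -p /data
sudo mount /dev/md0 /data

# Confirm the mount and filesystem type
findmnt /data
```

Note that mkfs.btrfs runs against the RAID device, not the underlying partitions - the filesystem never sees the individual disks.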
To quote fstab(5):
The file fstab contains descriptive information about the filesystems the system can mount.
[...] it is the duty of the system administrator to properly create and maintain this file. [...]
Adding mounts is pretty straightforward. Remember to use the UUID to identify partitions, and use blkid to get all the information needed.
It is worth checking out all available options in mount(8). It is always good to look up options when copying from tutorials or other posts.
Usually, adding only
`defaults` is fine (note that the defaults are always applied even if you don't add the keyword; it's only needed because the options field can't be empty).
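As a sketch, an fstab entry for the RAID could look like the following. The UUID below is a made-up placeholder; the real one comes from blkid:

```shell
#!/bin/sh
# Hypothetical UUID - get the real one via: sudo blkid /dev/md0
UUID="d1b0a0f3-4c2e-4b8a-9f6d-0123456789ab"

# fstab fields: device  mountpoint  fstype  options  dump  fsck-order
LINE="UUID=$UUID /data btrfs defaults 0 0"
echo "$LINE"

# Append it and verify (requires root):
#   echo "$LINE" | sudo tee -a /etc/fstab
#   sudo mount -a   # errors here mean a broken fstab - fix before rebooting!
```

Running `mount -a` before rebooting is a cheap way to catch fstab typos while the system is still up.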
fstab.d is worth talking about. It's usually good practice to put additional configuration files into separate x.d directories where they exist. But even though an /etc/fstab.d/ directory can exist, a web search shows that it's not 100% safe to use. The main reasons are:
- /etc/fstab has a long history and other tools may only check this file for mounts
- typically the file only contains a handful of entries
- systemd provides a modern way to systematically configure many mounts
Check out this interesting discussion by developers of systemd, mount and libmount: /etc/fstab.d yes or not
And we don't want to support that in systemd. [...] The gain of features from fstab.d/ vs. the amount of breakage it causes is not worth the trouble.
So, since the content of /etc/fstab is vital for your system, you probably should just use the good old /etc/fstab file.
Creating a software RAID with mdadm is pretty straightforward. The command needed for this training is even included in the mdadm man page.
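A minimal sketch, assuming the partitions created earlier are /dev/sdb1 and /dev/sdc1 (verify the names with lsblk before running anything destructive):

```shell
# Create a RAID 1 array from the two partitions
# (device names are assumptions - check lsblk and cat /proc/mdstat)
sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

# Watch the initial sync and inspect the array
cat /proc/mdstat
sudo mdadm --detail /dev/md0

# Persist the array configuration so it assembles on boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
```

The last two commands matter for the "persists across reboots" requirement: without them, the array may come up under a different device name (e.g. /dev/md127) after a reboot.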
Be aware that a RAID can prevent data loss - but fixing a broken RAID can be complex. This training only requires you to create a RAID, but you should explore the tools to analyse and monitor your RAID yourself.