Building (or Overbuilding) a Home Media Center – Partitioning and LVM2

Posts in this series
  1. Building (or overbuilding) a Home Media Center - Initial Setup
  2. Building (or Overbuilding) a Home Media Center - Partitioning and LVM2


Disks!

A NAS device (Network Attached Storage) is only as good as its disks.  To that point, I managed to wedge a trio of WD 4TB NAS (are they Blue?) hard drives into this chassis.  I have 3 requirements to satisfy, which led me to my partitioning decisions.

  1. Large storage space for serving media files over the LAN.  The larger and more efficient, the better.
  2. A decent-sized OS disk for use as a holding area and jumping-off point for other services.
  3. Redundancy in case my RAID (Redundant Array of Inexpensive Disks) suffers a hardware failure.

Phase 1, Partitioning:


Disk Array Partitions

Here is what I did:

It doesn’t make sense to cover partitioning here since you can do it a few different ways, but this howto explains my method well.

Partition 1 – a simple Linux filesystem (ext4) volume on each device to hold the kernel and boot files for startup.  This doesn’t need to be fancy.  I am only using the partition on the first drive (/dev/sda), but I copy its contents to the boot partitions on the other 2 devices weekly to ensure that I can still boot into my system in case of a hard disk failure.

These partitions only need a filesystem added once they are divided up:

mkfs -t ext4 /dev/sda1
mkfs -t ext4 /dev/sdb1
mkfs -t ext4 /dev/sdc1

That’s it!
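
The weekly boot partition copy mentioned above can be as simple as mounting the spare partitions and rsyncing /boot into them.  A rough sketch (assuming /boot is mounted from /dev/sda1; the mount points are my own picks, and something like this can live in /etc/cron.weekly):

# one-time setup of the mount points for the spare boot partitions
mkdir -p /mnt/boot-sdb /mnt/boot-sdc

# mount the spares, mirror the live /boot into them, then unmount
mount /dev/sdb1 /mnt/boot-sdb
mount /dev/sdc1 /mnt/boot-sdc
rsync -a --delete /boot/ /mnt/boot-sdb/
rsync -a --delete /boot/ /mnt/boot-sdc/
umount /mnt/boot-sdb /mnt/boot-sdc

For the spare drives to actually boot on their own, you would likely also want the bootloader installed on each of them (e.g. grub-install /dev/sdb and /dev/sdc).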

Partition 2 – a second Linux filesystem (ext4 again) for my media file storage.  I am running ZFS over these partitions, so I will not be making them part of an LVM pool.  ZFS likes things au naturel so it can manage the drives and detect issues correctly.  ZFS originated at Sun Microsystems for Solaris and comes with a few perks despite its ungainly setup and management requirements.  I set this up as a RAIDZ stripe that I can grow, so we lose the capacity of 1 disk to redundancy.  That is a big loss for such a small array, so we intend to grow it out from this 12TB setup to about 28TB soon.  More on that and the ZFS details later.

mkfs -t ext4 /dev/sda2
mkfs -t ext4 /dev/sdb2
mkfs -t ext4 /dev/sdc2

You are starting to see a pattern here…
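
The ZFS side is covered in detail later, but as a preview, the RAIDZ stripe over these second partitions would be created with something roughly like the following (the pool name “media” is just a placeholder, and you may need -f since the partitions already carry an ext4 signature):

# single-parity RAIDZ across the three media partitions
zpool create media raidz /dev/sda2 /dev/sdb2 /dev/sdc2
zpool status media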

Partition 3 – our root (system) filesystem.  These partitions are handed to LVM (Logical Volume Manager) so the kernel can intelligently manage them as a logical, redundant pool.  We also stripe these with parity for redundancy, although the data here is mostly sacrificial compared to the large volume, and we can back up to the big dog as needed to prevent losing anything too important.

LVM does not require you to create a filesystem on the disks themselves. That happens later.

Partition 4 – swap space.  We need a decent-sized chunk of disk for the system to page to if it runs out of memory.  Disks are very slow compared to RAM (although we have a fix for that to a degree; later in the post I set up LVM fast cache and the ZFS ARC), so we want to avoid swapping if at all possible.  Usually a system busy enough to be using swap heavily is circling the drain and needs a reboot, but we would like to keep the whole system from locking up in the event that we oversubscribe our memory just a little bit.  As for the old 2:1 ratio of swap to RAM, it is garbage, but we at least provide enough room to double the memory space and then some.
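
Before settling on a size, it is worth a quick look at what the system actually has to work with:

free -h          # installed RAM and current swap usage
swapon --show    # any swap devices already active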

Phase 2, Creating Arrays:

We have 3 disk arrays we need to create: Root, Swap, and Media/storage.  Root and Swap will be managed by the native Linux LVM volume manager so that we get fault tolerance and easy disk swaps if needed without taxing our system resources too much.  ZFS will handle our massive disk array because that is what it was made for, and I said so.  I am also going to be using this machine to host virtual servers in LXD (“Lex-Dee”) containers as though they were real servers on my network.  I chose this method for a variety of reasons, but it should suffice to say that we can effectively build out our own mini data center on the cheap this way without having to make any sacrifices.  Again, this will be covered later, and in my parallel post on “Building a Miniature Cloud Platform.”

LVM Root

There is a nice guide from Red Hat here that covers this in explicit detail.  My version will be the down-and-dirty one, but definitely use that as a reference if you have any burning questions.  To create the logical disks that we will install the OS to, we perform the following:

1. Initialize physical disks/partitions

pvcreate /dev/sda3 /dev/sdb3 /dev/sdc3
pvcreate /dev/sda4 /dev/sdb4 /dev/sdc4

This prepares the partitions for LVM.
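
You can sanity-check that the physical volumes were registered with:

pvs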

2. Create Volume Groups

vgcreate ubuntu-vg-root /dev/sda3 /dev/sdb3 /dev/sdc3
vgcreate ubuntu-vg-swap /dev/sda4 /dev/sdb4 /dev/sdc4

This step groups the physical volumes into their respective volume groups for assignment.
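
Likewise, a quick vgs at this point should show both groups, each backed by 3 physical volumes:

vgs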

3. Create Logical Volumes

lvcreate --type raid5 -i 2 -l 100%FREE -n root ubuntu-vg-root /dev/sda3 /dev/sdb3 /dev/sdc3

Our root (system) volume will span 3 disks and keep parity data so it can recover from the loss of a drive.  This works on the same principle as simple algebra: in a + b = c, if we know the values of any 2 of those variables, we can recover the third.

lvcreate --type raid0 -i 3 -l 100%FREE -n swap ubuntu-vg-swap /dev/sda4 /dev/sdb4 /dev/sdc4

The swap area will also span all 3 disks, but we do not care about fault tolerance since we are not storing anything here that we need to keep in the event of a drive failure.
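
As noted back in the partitioning phase, the filesystem and swap signatures go on the logical volumes themselves rather than on the raw partitions.  If you are doing this by hand instead of letting the installer handle it, it looks roughly like this:

# format the root LV and activate the swap LV (paths match the lvdisplay output below)
mkfs -t ext4 /dev/ubuntu-vg-root/root
mkswap /dev/ubuntu-vg-swap/swap
swapon /dev/ubuntu-vg-swap/swap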

We are done with LVM now. Here is what our arrays look like:

root@ayana-angel:/home/spyderdyne/Documents# vgdisplay
  --- Volume group ---
  VG Name               ubuntu-vg-root
  System ID             
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               322.25 GiB
  PE Size               4.00 MiB
  Total PE              82497
  Alloc PE / Size       82497 / 322.25 GiB
  Free  PE / Size       0 / 0   
  VG UUID               Ixrwjf-8ZZQ-wIR6-XJOk-4KcT-RBlf-1vK7UZ
   
  --- Volume group ---
  VG Name               ubuntu-vg-swap
  System ID             
  Format                lvm2
  Metadata Areas        3
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               1
  Max PV                0
  Cur PV                3
  Act PV                3
  VG Size               15.36 GiB
  PE Size               4.00 MiB
  Total PE              3933
  Alloc PE / Size       3933 / 15.36 GiB
  Free  PE / Size       0 / 0   
  VG UUID               JG6nJx-I2aQ-bQ2N-nEwa-4pgU-A9da-wnsnXO
   
root@ayana-angel:/home/spyderdyne/Documents# lvdisplay
  --- Logical volume ---
  LV Path                /dev/ubuntu-vg-root/root
  LV Name                root
  VG Name                ubuntu-vg-root
  LV UUID                t6FQ6P-esTO-DuEH-E05G-nz1b-mU8o-UGW1de
  LV Write Access        read/write
  LV Creation host, time ubuntu, 2017-01-03 17:02:29 -0500
  LV Status              available
  # open                 1
  LV Size                214.83 GiB
  Current LE             54996
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     768
  Block device           252:6
   
  --- Logical volume ---
  LV Path                /dev/ubuntu-vg-swap/swap
  LV Name                swap
  VG Name                ubuntu-vg-swap
  LV UUID                ihCKiC-D9NT-vaiK-B6mH-TJvn-ABfb-PQAPpF
  LV Write Access        read/write
  LV Creation host, time ubuntu, 2017-01-03 17:11:25 -0500
  LV Status              available
  # open                 2
  LV Size                15.36 GiB
  Current LE             3933
  Segments               3
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           252:7

Next stop? ZFS

 
