I have a home server running Ubuntu, which fulfils a multitude of roles for me – SSH server, NAS, sometime web server, data archive and general workhorse for network tasks. It’s currently running mdadm configured in RAID5 with 3×2TB disks. Back in 2020, when building the server, I decided on an 8×2.5″ drive enclosure, mainly because the server sits in a cupboard with limited space and very limited ventilation. The cupboard is in a living room, so there are also noise considerations. If I’d gone with 3.5″ drives the enclosure would have been far too deep, and server-oriented NAS drives can be very loud.
Recently, I decided to update my Samba configuration to support Apple Time Machine, so I thought it was time to increase the storage.
In 2025 I can get 2TB SSDs for about £110 each. I could go up to 8TB drives at about £550 each – a bit cheaper per TB – but I’d have to spend £1500 to get 16TB usable in RAID5 (3 disks), or £1100 for an 8TB mirrored configuration. If I bought two more 2TB disks I’d get to 8TB usable in my existing array for about £220 spent. I think 8TB will be fine for the time being. My next upgrade will be to larger disks – hopefully 16TB SSDs – so I’ll create a new array, simply copy my old array over and retire the disks.
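For reference, RAID5 usable capacity works out as (number of disks − 1) × disk size, since one disk’s worth of space goes to parity. A quick sanity check of the 8TB figure:

```shell
# RAID5 usable capacity = (disks - 1) * disk size; one disk's worth holds parity.
# Current array: 3 disks of 2TB -> 4TB usable.
# After adding two more disks: 5 disks of 2TB -> 8TB usable.
echo "$(( (3 - 1) * 2 ))TB -> $(( (5 - 1) * 2 ))TB"
```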
My existing array is made of Crucial MX500 2TB drives. I decided to add Samsung 870 EVO 2TB disks as they were at a good price, and a different brand would make them a bit easier to identify in the system. Once they arrived, after a quick visual inspection, I checked they were authentic Samsung drives by plugging them into a Windows machine with Samsung Magician installed. Now, I’m not sure how good it is at detecting fake drives, but the fact that the Samsung software thought they were authentic provided some comfort.
I installed the disks in the hot-swap caddies and slid one in, not really sure what to expect. When I slid the caddy into the server, the light on the front of the device blinked on, and I could see the drive on the server by typing fdisk -l:
Disk /dev/sde: 1.82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: Samsung SSD 870
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Part of the reason I chose Samsung drives was that there were no other Samsung drives on the system, so once I saw it I knew it had to be the new drive.
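If you do have several identical drives, lsblk gives a quick way to match device names to models. This is a read-only check; the exact column names may vary slightly between util-linux versions:

```shell
# List whole disks only (-d, no partitions) with model and size,
# to spot the newly inserted drive by its model string.
lsblk -d -o NAME,MODEL,SIZE
```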
Adding to an Array
It was five years since I built the array, so I had completely forgotten how I did it. I’d never added a disk to an array before, but I knew in theory it would be possible – what I wasn’t sure about was how difficult it would be.
The System Before the Update
Before I messed with any of the settings, I took a look at the existing system so I’d have a ‘before’ snapshot to compare against. Looking at the available disk space –
john@newpiggy:/home/john$ df -h
Filesystem Size Used Avail Use% Mounted on
tmpfs 6.3G 8.8M 6.3G 1% /run
/dev/sdc2 458G 26G 409G 6% /
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/md0 3.6T 1.3T 2.2T 37% /home
tmpfs 6.3G 16K 6.3G 1% /run/user/1000
and looking at /dev/md0 –
john@newpiggy:/home/john$ sudo mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Thu Feb 20 20:04:25 2020
Raid Level : raid5
Array Size : 3906764800 (3.64 TiB 4.00 TB)
Used Dev Size : 1953382400 (1862.89 GiB 2000.26 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Fri Mar 14 17:04:12 2025
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : bitmap
Name : newpiggy:0 (local to host newpiggy)
UUID : f8cbdee3:22923683:7de834f8:0aea4d18
Events : 47128
Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb
3 8 48 2 active sync /dev/sdd
Adding a Drive
Earlier I had confirmed that the new drive was available at /dev/sde. I didn’t partition the drive in any way; it was just a ‘raw disk’ sitting in the server.
Telling mdadm to add the device to the array was simple enough –
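Before adding a disk it can be worth confirming it carries no stale filesystem or RAID signatures from a previous life. wipefs with -n (no-act) only reports what it finds; the device name /dev/sde is, of course, specific to my system:

```shell
# Report (but do not erase) any existing signatures on the new disk.
# -n / --no-act is read-only; a truly blank drive produces no output.
sudo wipefs -n /dev/sde
```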
john@newpiggy:/home/john$ sudo mdadm --grow /dev/md0 --raid-devices=4 --add /dev/sde
mdadm: added /dev/sde
john@newpiggy:/home/john$ sudo mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Thu Feb 20 20:04:25 2020
Raid Level : raid5
Array Size : 3906764800 (3.64 TiB 4.00 TB)
Used Dev Size : 1953382400 (1862.89 GiB 2000.26 GB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Intent Bitmap : Internal
Update Time : Fri Mar 14 17:12:58 2025
State : active, reshaping
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Consistency Policy : bitmap
Reshape Status : 0% complete
Delta Devices : 1, (3->4)
Name : newpiggy:0 (local to host newpiggy)
UUID : f8cbdee3:22923683:7de834f8:0aea4d18
Events : 47154
Number Major Minor RaidDevice State
0 8 0 0 active sync /dev/sda
1 8 16 1 active sync /dev/sdb
3 8 48 2 active sync /dev/sdd
4 8 64 3 active sync /dev/sde
So the drive was now added, and I could see from the line “Reshape Status : 0% complete” that it was reshaping the array.
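A reshape over disks this size runs for hours. While waiting, progress can be followed in /proc/mdstat, and if it crawls, the kernel’s rebuild speed floor can be raised – the 50000 KB/s value below is just an illustrative choice, not something from my setup:

```shell
# Follow the reshape progress bar; refreshes every 2 seconds by default.
watch cat /proc/mdstat

# Optionally raise the per-disk minimum rebuild speed (in KB/s) so the
# reshape isn't throttled down to the kernel's default floor.
sudo sysctl -w dev.raid.speed_limit_min=50000
```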
At the end of the process, I checked the disk space again – and there was no change. I wasn’t hugely surprised by this, but I had been hoping the array would show the extra disk and some extra available space.
john@newpiggy:/home/john$ df -h
Filesystem Size Used Avail Use% Mounted on
tmpfs 6.3G 9.9M 6.3G 1% /run
/dev/sdc2 458G 26G 409G 6% /
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/md0 3.6T 1.3T 2.2T 37% /home
tmpfs 6.3G 16K 6.3G 1% /run/user/1000
Extending the Filesystem
After some searching, I found the excellent SUSE Storage Administration Guide, which took me through the final step of the process. There is a command, resize2fs, which will resize the ext4 filesystem on the array.
john@newpiggy:/home/john$ sudo resize2fs /dev/md0
resize2fs 1.47.0 (5-Feb-2023)
Filesystem at /dev/md0 is mounted on /home; on-line resizing required
old_desc_blocks = 466, new_desc_blocks = 699
The filesystem on /dev/md0 is now 1465037952 (4k) blocks long.
The resize2fs command expands the ext4 filesystem on /dev/md0 so it utilizes the new disk I added to the array. Best of all, I didn’t have to unmount the drive – it extended while the system was fully operational.
Checking now –
john@newpiggy:/home/john$ df -h
Filesystem Size Used Avail Use% Mounted on
tmpfs 6.3G 9.9M 6.3G 1% /run
/dev/sdc2 458G 26G 409G 6% /
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
/dev/md0 5.4T 1.3T 3.9T 25% /home
tmpfs 6.3G 16K 6.3G 1% /run/user/1000
Reference
My server is built with an ICY BOX backplane for 8× 2.5″ SATA/SAS drives – it cost about £110 from https://www.novatech.co.uk back in 2020. They don’t sell them any more, but others do.