I've decided to grow my raid6 array:

- from: 7x 2TB drives (giving usable space of ~9TB in ext4/raid6)
- to:   7x 4TB drives (giving usable space of ~18TB in ext4/raid6)
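
For anyone wondering where those figures come from: RAID6 reserves two drives' worth of space for parity, so usable capacity is roughly (n - 2) x drive size, and the drop from "TB" on the box to TiB in the OS is just the decimal-vs-binary unit difference. A rough sanity check:

// RAID6 usable space = (number of drives - 2) x drive size
// 7x 2TB: (7 - 2) x 2 TB = 10 TB, roughly 9.1 TiB as reported by the OS
// 7x 4TB: (7 - 2) x 4 TB = 20 TB, roughly 18.2 TiB as reported by the OS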

I have replaced all seven drives; each one has the following partition structure and has been cleanly added to the array:

sda                                3.6T
└─sda1         linux_raid_member   3.6T
  └─md0        LVM2_member        18.2T
    └─shared-c ext4                9.1T /shared
sdb                                3.6T
└─sdb1         linux_raid_member   3.6T
  └─md0        LVM2_member        18.2T
    └─shared-c ext4                9.1T /shared
sdc                                3.6T
└─sdc1         linux_raid_member   3.6T
  └─md0        LVM2_member        18.2T
    └─shared-c ext4                9.1T /shared

... continues through to sdg in the same way ...

Then, I ran the grow command on the array, which expanded it to the size I was expecting:

// command I ran:
mdadm --grow /dev/md0 --size=max
// output from mdadm --detail /dev/md0 after grow command

/dev/md0:
           Version : 1.2
     Creation Time : Sun May 15 08:22:07 2016
        Raid Level : raid6
        Array Size : 19535080000 (18.19 TiB 20.00 TB)
     Used Dev Size : 3907016000 (3.64 TiB 4.00 TB)
      Raid Devices : 7
     Total Devices : 7
       Persistence : Superblock is persistent

       Update Time : Tue Jun 11 01:26:08 2024
             State : clean
    Active Devices : 7
   Working Devices : 7
    Failed Devices : 0
     Spare Devices : 0
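
(A couple of read-only commands can confirm the kernel itself now sees the larger array, independent of mdadm's own reporting; I've omitted their output here:)

// shows array state and any reshape/resync still in progress
cat /proc/mdstat
// reports the size of the md block device in bytes
blockdev --getsize64 /dev/md0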

Next, I tried resizing the filesystem with resize2fs, using the following command:

resize2fs /dev/mapper/shared-c

But it said there was nothing to do; my guess is that it isn't recognising the additional space in the array. I've since done some research suggesting I should be doing something with the physical volume and logical volume, but no free space is being shown there either...

// output of pvdisplay

  --- Physical volume ---
  PV Name               /dev/md0
  VG Name               shared
  PV Size               9.07 TiB / not usable <5.38 MiB
  Allocatable           yes
  PE Size               64.00 MiB
  Total PE              148681
  Free PE               160
  Allocated PE          148521
  PV UUID               7wxoRZ-LhVL-3WV5-pRcs-LT9q-r0Em-KHClad
// output of lvdisplay

  --- Logical volume ---
  LV Path                /dev/shared/c
  LV Name                c
  VG Name                shared
  LV UUID                IR9ohg-TWPk-loPX-OcVg-vajS-hSb9-eubJMa
  LV Write Access        read/write
  LV Creation host, time ,
  LV Status              available
  # open                 1
  LV Size                <9.07 TiB
  Current LE             148521
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     1280
  Block device           253:0
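
For completeness, the same picture can be had more compactly from LVM's one-line-per-object reporting commands (output omitted):

// one-line summary of each physical volume (size and free space)
pvs
// one-line summary of each volume group; free VG space is what lvextend can use
vgs
// one-line summary of each logical volume
lvs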

I'm not sure what the next step in this process is now. Part of me thinks I should be doing a pvresize, but the pvdisplay output above only shows <5.38 MiB as not usable, whereas I would assume that number would need to be in TB territory for pvresize to have anything to work with.

The array is currently working and all the files on it are intact and fine. Ideally I'd like to keep it that way.

1 Answer

Decided to put my big boy pants on, grit my teeth and push forward.

If your system is using LVM, I recommend a primer on LVM because it will make things somewhat clearer: https://www.techtarget.com/searchdatacenter/definition/logical-volume-management-LVM

This answer also helped a lot: https://serverfault.com/questions/695902/extending-lvm-volume-groups-with-physical-extents-pvresize
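
The short version, mapped onto this particular system (names taken from the lsblk and pvdisplay/lvdisplay output in the question), with each layer needing to be grown in turn from the bottom up:

// sda1..sdg1            raid member partitions on the new 4TB drives
// /dev/md0              raid6 array (already grown to 18.19 TiB)
// PV on /dev/md0        LVM physical volume (still 9.07 TiB at this point)
// VG "shared"           volume group made from that single PV
// LV "c"                logical volume /dev/shared/c (= /dev/mapper/shared-c)
// ext4 on the LV        filesystem mounted at /shared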

pvdisplay was showing this...

  // output of pvdisplay

  --- Physical volume ---
  PV Name               /dev/md0
  VG Name               shared
  PV Size               9.07 TiB / not usable <5.38 MiB
  Allocatable           yes
  PE Size               64.00 MiB
  Total PE              148681
  Free PE               160
  Allocated PE          148521
  PV UUID               7wxoRZ-LhVL-3WV5-pRcs-LT9q-r0Em-KHClad

however lvmdiskscan was showing this...

  // output of lvmdiskscan -l

  WARNING: only considering LVM devices
  /dev/md0    [      18.19 TiB] LVM physical volume
  0 LVM physical volume whole disks
  1 LVM physical volume

This told me that the underlying device (/dev/md0) was reporting its full 18.19 TiB capacity to LVM, but the LVM physical volume sitting on it was still only 9.07 TiB; it hadn't been resized to claim the extra room.
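
Another way to see the same mismatch on a single line is pvs with the device-size column added (dev_size is one of pvs's standard reporting fields; output omitted):

// PV size vs the size of the device it lives on
pvs -o +dev_size /dev/md0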

So the next steps were:

// to expand the physical volume from ~9.1TB to ~18TB and 
// give the extra room to the volume group (VG: shared) automatically

pvresize /dev/md0 

// to expand the logical volume itself...
lvextend -l +100%FREE /dev/shared/c 

// to run a check on the filesystem 
// note: -y answers yes to all of e2fsck's repair prompts; if you're not sure
// your filesystem is clean, omit it and review the prompts yourself

e2fsck -f -y /dev/mapper/shared-c

// and finally, to expand the ext4 at the top layer.
resize2fs /dev/mapper/shared-c
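
Afterwards, a few read-only checks to confirm every layer picked up the new size (output omitted):

// PV/VG/LV sizes after the resize
pvs
vgs
lvs
// mounted filesystem size
df -h /shared

Worth noting for anyone nervous about this: LVM commands accept -t/--test for a dry run, and lvextend has an -r/--resizefs option that runs the filesystem resize in the same step, though I ran the steps separately above.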

Mounted, all data present and expanded successfully. Very stressful process, but lots of learning. I hope this helps people in the future.
