This question grew out of another problem I was trying to solve, which you can see in this thread.
To make it short: my dedicated server has a RAID1 array with 2x3TB HDDs. A week ago, one of them failed. The company that owns the server has replaced it, so now I have one good drive with all the data and a new, completely empty one.
I am a TOTAL NEWBIE in Linux and hardware-related matters, so forgive me if the question is very obvious, but I have no idea how to rebuild the RAID from what I have.
This information might be useful (I understand there is no RAID assembled right now):
root@rescue /dev # lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 2.7T 0 disk
sdb 8:16 0 2.7T 0 disk
├─sdb1 8:17 0 1M 0 part
├─sdb2 8:18 0 127M 0 part
├─sdb3 8:19 0 200M 0 part
├─sdb4 8:20 0 2.7T 0 part
└─sdb5 8:21 0 455.5K 0 part
loop0 7:0 0 1.5G 1 loop
root@rescue /dev # cat /proc/mdstat
Personalities : [raid1]
unused devices: <none>
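For context, a first diagnostic step here (a sketch, assuming the standard mdadm tool that ships with the rescue system) is to check each partition for leftover software-RAID superblocks and, if any are found, let mdadm try to reassemble the arrays:

```shell
# Read-only check: look for Linux software-RAID superblocks on each partition.
mdadm --examine /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb4

# If superblocks are found, mdadm can try to reassemble the arrays from them.
mdadm --assemble --scan
```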
UPDATE 1
Quick info:
CPU1: Intel(R) Core(TM) i7-2600 CPU @ 3.40GHz (Cores 8)
Memory: 15974 MB
Disk /dev/sda: 3000 GB (=> 2794 GiB) doesn't contain a valid partition table
Disk /dev/sdb: 3000 GB (=> 2794 GiB)
Total capacity 5589 GiB with 2 Disks
UPDATE 2:
As suggested by Trinue:
root@rescue ~ # lspci
00:00.0 Host bridge: Intel Corporation 2nd Generation Core Processor Family DRAM Controller (rev 09)
00:01.0 PCI bridge: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port (rev 09)
00:02.0 VGA compatible controller: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller (rev 09)
00:16.0 Communication controller: Intel Corporation 6 Series/C200 Series Chipset Family MEI Controller #1 (rev 04)
00:1a.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #2 (rev 05)
00:1c.0 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 1 (rev b5)
00:1c.5 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 6 (rev b5)
00:1c.6 PCI bridge: Intel Corporation 6 Series/C200 Series Chipset Family PCI Express Root Port 7 (rev b5)
00:1c.7 PCI bridge: Intel Corporation 82801 PCI Bridge (rev b5)
00:1d.0 USB controller: Intel Corporation 6 Series/C200 Series Chipset Family USB Enhanced Host Controller #1 (rev 05)
00:1f.0 ISA bridge: Intel Corporation H67 Express Chipset Family LPC Controller (rev 05)
00:1f.2 SATA controller: Intel Corporation 6 Series/C200 Series Chipset Family SATA AHCI Controller (rev 05)
00:1f.3 SMBus: Intel Corporation 6 Series/C200 Series Chipset Family SMBus Controller (rev 05)
03:00.0 USB controller: ASMedia Technology Inc. ASM1042 SuperSpeed USB Host Controller
04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 06)
05:00.0 PCI bridge: ASMedia Technology Inc. ASM1083/1085 PCIe to PCI Bridge (rev 01)
UPDATE 3:
As suggested by @Koko, I've tried to mount the four partitions, but got errors on three of them. Could this disk be broken as well?
root@rescue / # mount -o ro /dev/sdb1 /mnt/disk
mount: you must specify the filesystem type
root@rescue / # mount -o ro /dev/sdb4 /mnt/disk
ntfs_attr_pread_i: ntfs_pread failed: Input/output error
Failed to calculate free MFT records: Input/output error
NTFS is either inconsistent, or there is a hardware fault, or it's a
SoftRAID/FakeRAID hardware. In the first case run chkdsk /f on Windows
then reboot into Windows twice. The usage of the /f parameter is very
important! If the device is a SoftRAID/FakeRAID then first activate
it and mount a different device under the /dev/mapper/ directory, (e.g.
/dev/mapper/nvidia_eahaabcc1). Please see the 'dmraid' documentation
for more details.
root@rescue / # mount -o ro /dev/sdb2 /mnt/disk
mount: you must specify the filesystem type
root@rescue / # mount -o ro /dev/sdb3 /mnt/disk
root@rescue / # cd /mnt/disk
root@rescue /mnt/disk # dir
EFI
UPDATE 4:
As suggested by Michael Martinez and Koko, I've tried to duplicate the data from sdb to sda, which failed with the following error:
root@rescue /mnt/disk # dd if=/dev/sdb of=/dev/sda
dd: reading `/dev/sdb': Input/output error
6619712+0 records in
6619712+0 records out
3389292544 bytes (3.4 GB) copied, 67.7475 s, 50.0 MB/s
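Since plain dd aborts at the first read error, a more forgiving way to clone a disk with bad sectors (a sketch, assuming GNU ddrescue is installed or installable in the rescue system) would be:

```shell
# GNU ddrescue keeps going past read errors and records its progress in a
# map file, so the copy can be interrupted and resumed.
ddrescue -f -n /dev/sdb /dev/sda /root/rescue.map   # first pass: copy the easy areas, skip bad ones
ddrescue -f -r3 /dev/sdb /dev/sda /root/rescue.map  # second pass: retry the bad areas up to 3 times
```

The `-f` flag is required because the output is a block device rather than a file.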
UPDATE 5:
These are the instructions the owner of the server provides for replacing an HDD in one of their servers: http://wiki.hetzner.de/index.php/Festplattenaustausch_im_Software-RAID/en. However, you will notice that I don't have the RAID or the partitions shown in their examples.
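For reference, the normal software-RAID rebuild that the wiki describes boils down to roughly the following (a sketch assuming GPT disks and the sgdisk tool; the /dev/md0 and /dev/md1 names are placeholders, and this does not match my current state, where no arrays are assembled):

```shell
# Copy the partition table from the surviving disk (sdb) to the new disk (sda).
# For GPT disks this is done with sgdisk; note the argument order:
# -R takes the *destination*, the positional argument is the source.
sgdisk --backup=sdb-table.bak /dev/sdb   # optional: back up the good table first
sgdisk -R /dev/sda /dev/sdb              # replicate sdb's table onto sda
sgdisk -G /dev/sda                       # randomize GUIDs on the new disk

# Then add the new disk's partitions back into the degraded arrays
# (array names are placeholders):
mdadm /dev/md0 -a /dev/sda1
mdadm /dev/md1 -a /dev/sda2
```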
UPDATE 6:
Hetzner already answered me: "Due to the fact that you haven't ordered an hardware RAID controller, it has a
software RAID."
UPDATE 7:
root@rescue / # mount /dev/sda1 /mnt/disk
mount: you must specify the filesystem type
root@rescue / # mount /dev/sda2 /mnt/disk
mount: you must specify the filesystem type
root@rescue / # mount /dev/sda3 /mnt/disk
root@rescue / # mount /dev/sda4 /mnt/disk
mount: you must specify the filesystem type
root@rescue / # mount /dev/sda5 /mnt/disk
mount: you must specify the filesystem type
root@rescue / # cd /mnt/disk
root@rescue /mnt/disk # dir
EFI
UPDATE 8:
I should point out that before running the mount commands above, I dd'd sdb onto sda and started to create new arrays using these commands:
# mdadm --create root --level=1 --raid-devices=2 missing /dev/sdb1
# mdadm --create swap --level=1 --raid-devices=2 missing /dev/sdb2
root@rescue / # mount
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sys on /sys type sysfs (rw,nosuid,nodev,noexec,relatime)
udev on /dev type devtmpfs (rw,relatime,size=8176304k,nr_inodes=2044076,mode=755)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620)
213.133.99.101:/nfs on /root/.oldroot/nfs type nfs (ro,noatime,vers=3,rsize=8192,wsize=8192,namlen=255,acregmin=600,acregmax=600,acdirmin=600,acdirmax=600,hard,nocto,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=213.133.99.101,mountvers=3,mountproto=tcp,local_lock=all,addr=213.133.99.101)
aufs on / type aufs (rw,relatime,si=1848aabe5590850f)
tmpfs on /run type tmpfs (rw,nosuid,noexec,relatime,size=1635764k,mode=755)
tmpfs on /run/lock type tmpfs (rw,nosuid,nodev,noexec,relatime,size=5120k)
tmpfs on /run/shm type tmpfs (rw,nosuid,nodev,noexec,relatime,size=3271520k)
devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,gid=5,mode=620)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
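After creating degraded arrays with `mdadm --create ... missing` as above, the usual way to verify them (a sketch; the device nodes are an assumption derived from the `root` and `swap` names in the --create commands) would be:

```shell
# Array overview; a degraded RAID1 shows up as [_U] or [U_].
cat /proc/mdstat

# Detailed per-array status; with named arrays the nodes are typically
# /dev/md/root and /dev/md/swap (possibly symlinks to /dev/md12x).
mdadm --detail /dev/md/root
mdadm --detail /dev/md/swap
```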
UPDATE 9:
When the server refused to boot the first time, I asked customer service for a manual reboot. The answer they gave me was:
Dear Client,
we have restarted your server, but it seems that there is one hard drive faulty.
If you want we can repalce them, for that please confirm us the data loss on this
drive and the downtime about 15 minutes.
Your server is now in the rescue system:
I immediately went to the Robot website, where I can administer the server, and searched for information about the rescue system. Here's what I found:
After activating the rescue system a config file will be created on our DHCP-server. On the next reboot your server will boot from the network and a minimal operating system will be loaded from our TFTP-server. Then you will be able to use the rescue system as long as you want.
The order for the rescue system will be active for 60 minutes. If you then reboot your server, the usual system will be started from the hard disk.
Please visit our Wiki for further information
The rescue system is a 64-bit Debian.
UPDATE 10
root@rescue ~/.oldroot/nfs # ls /root/.oldroot/nfs
bash_aliases rescue32-wheezy-v006.ext2
check rescue32-wheezy-v007.ext2
copy-vnode-lvs-to rescue32-wheezy-v008.ext2
copy-vnode-lvs-to.bak rescue32-wheezy-v009.ext2
esxi rescue64-lenny-v004.ext2
firmware_update rescue64-squeeze-v011.ext2
freebsd rescue64-squeeze-v012.ext2
functions.sh rescue64-squeeze-v013.ext2
images rescue64-squeeze-v014.ext2
images.old rescue64-squeeze-v015.ext2
install rescue64-squeeze-v016.ext2
ipmi rescue64-test.ext2
iso rescue64-wheezy-v000.ext2
knoppix rescue64-wheezy-v001.ext2
lost+found rescue64-wheezy-v002.ext2
opensolaris rescue64-wheezy-v003.ext2
raid_ctrl rescue64-wheezy-v004.ext2
README rescue64-wheezy-v005.ext2
rescue32-lenny-v004.ext2 rescue64-wheezy-v006.ext2
rescue32-squeeze-v011.ext2 rescue64-wheezy-v007.ext2
rescue32-squeeze-v012.ext2 rescue64-wheezy-v008.ext2
rescue32-squeeze-v013.ext2 rescue64-wheezy-v009.ext2
rescue32-squeeze-v014.ext2 shutdown-h
rescue32-squeeze-v015.ext2 shutdown-h-now
rescue32-squeeze-v016.ext2 tightvnc-vkvm.tar.gz
rescue32-test.ext2 vkvm64-squeeze-v001.ext2
rescue32-wheezy-v000.ext2 vkvm64-squeeze-v002.ext2
rescue32-wheezy-v002.ext2 vkvm64-test.ext2
rescue32-wheezy-v003.ext2 vkvm64-v001.ext2
rescue32-wheezy-v004.ext2 vkvm64-wheezy-overlay.ext2
rescue32-wheezy-v005.ext2 vkvm64-wheezy-overlay-v001.ext2
UPDATE 11:
root@rescue ~ # fdisk -l /dev/sdb
WARNING: GPT (GUID Partition Table) detected on '/dev/sdb'! The util fdisk doesn't support GPT. Use GNU Parted.
Disk /dev/sdb: 3000.6 GB, 3000592982016 bytes
256 heads, 63 sectors/track, 363376 cylinders, total 5860533168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x8ab49420
Device Boot Start End Blocks Id System
/dev/sdb1 1 4294967295 2147483647+ ee GPT
Partition 1 does not start on physical sector boundary.
UPDATE 12:
root@rescue ~ # parted -l
Error: The backup GPT table is corrupt, but the primary appears OK, so that will
be used.
OK/Cancel? OK
Model: ATA ST3000DM001-9YN1 (scsi)
Disk /dev/sda: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Number Start End Size File system Name Flags
1 17.4kB 1066kB 1049kB LDM metadata partition
2 1066kB 134MB 133MB Microsoft reserved partition msftres
3 135MB 345MB 210MB fat16 EFI system partition boot
4 345MB 3001GB 3000GB ntfs LDM data partition
5 3001GB 3001GB 466kB LDM data partition
Model: ATA ST3000DM001-9YN1 (scsi)
Disk /dev/sdb: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Number Start End Size File system Name Flags
1 17.4kB 1066kB 1049kB LDM metadata partition
2 1066kB 134MB 133MB Microsoft reserved partition msftres
3 135MB 345MB 210MB fat16 EFI system partition boot
4 345MB 3001GB 3000GB ntfs LDM data partition
5 3001GB 3001GB 466kB LDM data partition
Model: Linux Software RAID Array (md)
Disk /dev/md126: 133MB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Number Start End Size File system Flags
1 0.00B 133MB 133MB linux-swap(v1)
Model: Linux Software RAID Array (md)
Disk /dev/md127: 983kB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Number Start End Size File system Flags
1 0.00B 983kB 983kB ext4