Forced into using GPT partitions and gdisk instead of the old BIOS methods and MBRs?
Try this if you wish to lose all of your data (i.e. use at your own risk):
Try this if the elf shot the food, and you need to find some before you die:
For a write-up that is much better than mine, follow this link:
http://www.rodsbooks.com/gdisk/bios.html#bios
Especially the bit about pre-2011 Intel BIOSes. It seems that if you follow the rest and it still won't boot,
you will need to run fdisk on your boot drive and set the boot flag on the protective MBR partition!
- note: not gdisk or some other util, but plain old fdisk (ignore the GPT warning).
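That fdisk fix, sketched out (the device name /dev/sda is an assumption; on a GPT disk the protective MBR normally shows a single partition of type 0xEE):

```shell
# Toggle the bootable flag on the protective MBR entry (pre-2011 Intel BIOS quirk).
# /dev/sda is an assumed device - double-check before writing anything!
fdisk /dev/sda
# Inside fdisk (ignore the GPT warning):
#   a    toggle the bootable flag
#   1    pick the protective MBR partition (type 0xEE)
#   w    write the table and exit
```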
GPT partitions (magic crap ahead)... OK, use SystemRescueCd. Create partitions on the new install disk like this with gdisk /dev/sda (at the beginning of the disk, because I don't trust the end of the disk yet ha ha):
- 200MB -> type EF02 (BIOS boot partition). Note: EF02, not EF00 - EF00 is the EFI System Partition for UEFI booting; EF02 is the BIOS boot partition GRUB embeds into on a BIOS+GPT setup like this one. (1MB is actually plenty for it, but 200MB doesn't hurt.)
- 500MB -> type 8300 (regular Linux filesystem) for /boot -> don't put it on RAID; you can always reinstall a kernel to another disk and repoint GRUB later.
- Make the rest of the disk RAID (type FD00) or whatever you want, but keep /boot non-RAIDed (RAID 1 should work, but why bother when you can just pump the kernel back in and grub-install from a rescue disk once you figure out what failed). (Probably a stupid idea, since you don't want to have to remember extra steps when you upgrade the kernel.) ---
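The layout above can be scripted with sgdisk (the non-interactive half of the gdisk package); the device name and sizes are assumptions from the steps above:

```shell
# Build the GPT layout non-interactively; /dev/sda is assumed. DESTROYS ALL DATA.
sgdisk --clear /dev/sda                                   # fresh, empty GPT
sgdisk -n 1:0:+200M -t 1:EF02 -c 1:"BIOS boot" /dev/sda   # GRUB embedding area
sgdisk -n 2:0:+500M -t 2:8300 -c 2:"boot"      /dev/sda   # /boot, non-RAID
sgdisk -n 3:0:0     -t 3:FD00 -c 3:"raid"      /dev/sda   # rest: Linux RAID
sgdisk -p /dev/sda                                        # print table to verify
```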
Next (here is the "special" part) - parted magic:

parted /dev/sda set <partition_number> bios_grub on

Check it: parted /dev/sda print (you should see the bios_grub flag). This marks the partition as a BIOS boot partition for GRUB to pee in later; GRUB embeds its core image there to work with the new GPT partitioning scheme (a replacement for MBR, really). Install your distro of choice: during the install you should install GRUB to /dev/sda (as you set up the special BIOS boot partition for it to be embedded in above) and you should be fine - assuming the distro ships GRUB 2 (versions 1.98 and later) with GPT support.
grub-install /dev/sda should work now.
(You should copy the /boot partition over to the other disks before you lose one, though.) Not sure how to keep them up to date yet, but I would suppose this is where you would RAID the /boot partition as at least a RAID 1.
If RAID 1 is added after the fact, I suppose you could just do:
mdadm --create /dev/mdX --level=1 --raid-devices=2 missing /dev/sd[diskletter][partition-number]
Where your disk letter is the device, and partition number is the partition number that holds /boot (and mdX is the next available software raid device number)...
and then copy the data to the other drive and then do:
mdadm --manage /dev/mdX --add /dev/sd[diskletter][partition-number]
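The whole after-the-fact RAID 1 dance, end to end, with hypothetical device names (/dev/md2 as the next free array, /dev/sda2 as the existing /boot, /dev/sdb2 as the new mirror half - all assumptions):

```shell
# 1. Degraded RAID 1 with only the new partition; "missing" reserves the slot.
mdadm --create /dev/md2 --level=1 --raid-devices=2 missing /dev/sdb2
mkfs.ext4 /dev/md2                  # fresh filesystem on the array
# 2. Copy the existing /boot onto it.
mount /dev/md2 /mnt
cp -ax /boot/. /mnt/
umount /mnt
# 3. Repoint /boot at /dev/md2 in /etc/fstab and remount, THEN pull the old
#    partition into the mirror (its contents get overwritten by the resync).
mdadm --manage /dev/md2 --add /dev/sda2
cat /proc/mdstat                    # watch the resync progress
```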
Or just chuck that and update /boot by hand when upgrading (and edit /etc/fstab to point /boot at the right UUID after removing the original drive).
i.e. mount the copy and cp -ax /boot/. /mnt/newmountpartition (or hell, script it) and be done with it.
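A minimal sketch of that manual sync, scripted; the device and mount point names are assumptions, adjust to your layout:

```shell
#!/bin/sh
# Sync /boot to the spare copy on the other disk (run after kernel upgrades).
# /dev/sdb2 and /mnt/bootcopy are hypothetical names.
set -e
mkdir -p /mnt/bootcopy
mount /dev/sdb2 /mnt/bootcopy
cp -ax /boot/. /mnt/bootcopy/
umount /mnt/bootcopy
# If the primary disk dies: point the /boot line in /etc/fstab at this
# partition's UUID (blkid /dev/sdb2 shows it) and grub-install to that disk.
```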
Better yet, make the RAIDs RAID 6, and overlay them with LVM (leave some space for snapshots!)...
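That RAID 6 + LVM layering might look like this - four member partitions, volume group and LV names are all assumptions (RAID 6 needs at least four devices):

```shell
# sd[abcd]3 are assumed FD00 partitions from the earlier layout.
mdadm --create /dev/md0 --level=6 --raid-devices=4 \
      /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3
pvcreate /dev/md0              # make the array an LVM physical volume
vgcreate vg0 /dev/md0          # one volume group on top of it
lvcreate -L 50G  -n root vg0   # carve out logical volumes...
lvcreate -L 200G -n home vg0
vgs vg0                        # ...and check the free extents left for snapshots
# Later, a snapshot comes out of that reserved space, e.g.:
#   lvcreate -s -L 10G -n home_snap vg0/home
```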
For a single-disk install repair - or BIOS upgrade - see a good write-up here by Laurent Desgrange:
https://blog.desgrange.net/2015/02/16/restore-debian-uefi.html