Interesting. What's the sfdisk for? Is that why my attempts to use --re-add aren't working?
I also tried "mdadm --zero-superblock /dev/sdb1" to make mdadm forget that was ever an array member, but that didn't get me any further.
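For what it's worth, zeroing the superblock actually rules out a re-add: --re-add relies on the member metadata to know where the disk left off, and --zero-superblock wipes exactly that, so afterwards only a plain --add (with a full resync) will work. A rough sketch, with device names purely illustrative:

```shell
# Illustrative device names; run against a real array at your own risk.
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1
mdadm --zero-superblock /dev/sdb1   # wipes the member metadata --re-add needs
mdadm /dev/md0 --add /dev/sdb1     # re-joins as a fresh member: full resync
```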
I use Debian so this won't help me directly, but once I work out why I can't re-add, it should be possible to use something similar in a postinst hook to rebuild all the arrays.
The sfdisk is there because our kickstart file only creates the first partition. After removing the disk from the RAID we repartition it.
I don't see why this couldn't fix up a RAID device generated by the debian installer. The device name could be parameterized if it's not always the same.
We run this script before there's any real data on the device so loss of redundancy for a brief moment is not a huge deal - md0 is a very small device so it doesn't take long to resync.
We repartition it because we're adding partitions.
We have one kickstart file that we use regardless of medium type. For SSDs, we overprovision and leave unused space at the end. Some brands of SSDs were failing before we did that. We don't need to overprovision hard drives.
You could remove the repartitioning and it would do the right thing for your use case.
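The cycle described above can be sketched roughly like this. The partition layout and device names are assumptions for illustration, not the actual kickstart contents; drop the sfdisk step if you don't need to add partitions:

```shell
# Hedged sketch of the remove / repartition / add cycle.
# Device names and partition sizes are made up for illustration.
mdadm /dev/md0 --fail /dev/sdb1
mdadm /dev/md0 --remove /dev/sdb1

# Recreate the RAID partition and append the extra ones
# (sfdisk script format: start,size,type; blank start = next free).
sfdisk /dev/sdb <<'EOF'
,512M,fd
,8G,fd
,,fd
EOF

# md0 is small, so the resync after adding the member back is quick.
mdadm /dev/md0 --add /dev/sdb1
```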
Ah okay. It seems my re-adds were failing on arrays without a write-intent bitmap. I was testing on small arrays, which don't get a bitmap by default.
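In case it helps anyone hitting the same thing: without a write-intent bitmap the array can't tell which blocks changed while the member was out, so --re-add has nothing to catch up from. A bitmap can be added after the fact. A minimal sketch, device names assumed:

```shell
# Illustrative device names. Adding an internal write-intent bitmap
# lets --re-add resync only the blocks dirtied while the disk was out.
mdadm --grow /dev/md0 --bitmap=internal
mdadm /dev/md0 --re-add /dev/sdb1
```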
https://raw.githubusercontent.com/prgmrcom/ansible-role-mdad...
for you.
Apologies, I haven't gotten to your PRs yet, but there is now a ticket in our internal development queue to review and merge them.