• 1 Post
  • 32 Comments
Joined 1 year ago
Cake day: June 15th, 2023

  • Doombot1@beehaw.org to Linux@lemmy.ml · *Permanently Deleted* · 1 year ago

    Great explanation. Yes - I’ve done this before! I built up a system with a RAID array but then realized I wanted a different boot drive. I didn’t really want to wait for dual 15 TB arrays to rebuild - and luckily for me, I didn’t have to, because the metadata is saved on the disks themselves. If I had to guess (I could be wrong, though), ‘sudo mdadm --examine --scan’ or something similar should bring up some info about the disks - rough sketch below.
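    Roughly what that looks like, assuming an mdadm software RAID - the device name /dev/sda1 is just a placeholder, so check it against your own setup:

        sudo mdadm --examine --scan      # print array info from the superblocks on the member disks
        sudo mdadm --examine /dev/sda1   # inspect one member disk’s on-disk metadata in detail
        sudo mdadm --assemble --scan     # reassemble the array from that on-disk metadata

    The whole point is that the array description lives in a superblock on each member disk, so swapping the boot drive or reinstalling the OS doesn’t lose it.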

  • These failures don’t have to do with where the drives are manufactured - it seems like this is some sort of firmware bug. NAND doesn’t just choose to wipe itself at random, and actual NAND chip failures are few and far between, so this is very likely something other than a pure hardware issue.

    That said, I’ve personally done a lot of testing with WD-manufactured NAND compared to other companies’ NAND - and the WD NAND is pretty crap. I can’t really go into further detail than that, though.

    Source - I’m an SSD firmware engineer.

  • I can’t see the SMART data - there may be something in there that gives more information. It seems odd to me that an SSD would just go bad out of the blue, but if you’ve not powered on the drive or laptop in a while, that could be why. Honestly, it may just be fine after a full drive write - it couldn’t hurt to try zeroing it with dd (rough sketch below).

    SSDs don’t like being left unpowered for more than a few months - all flash storage, actually. If you take an SSD out and stick it on a shelf for a few years, it’s unlikely that it’ll lose data, but it’s absolutely technically possible, and many manufacturers won’t cover that kind of data loss under warranty after a specified period of time.
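    Roughly what I’d try - the /dev/sdX name is a placeholder, and the dd step wipes the entire drive, so triple-check you’ve got the right device:

        sudo smartctl -a /dev/sdX                                # dump SMART attributes and the error log (smartctl is from smartmontools)
        sudo dd if=/dev/zero of=/dev/sdX bs=1M status=progress   # full-drive zero write; destroys all data on the target device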