this post was submitted on 16 Dec 2023
56 points (95.2% liked)

Proper HDD clear process? (poptalk.scrubbles.tech)
submitted 11 months ago* (last edited 11 months ago) by scrubbles@poptalk.scrubbles.tech to c/selfhosted@lemmy.world
 

Usually my process is very... hammer and drill related - but I have a family member who is interested in taking my latest batch of hard drives after I upgraded.

What are the best (linux) tools for the process? I'd like to run some tests to make sure they're good first and also do a full zero out of any data. (Used to be a raid if that matters)

Edit: Thanks all, process is officially started, will probably run for quite a while. Appreciate the advice!

[–] IsoKiero@sopuli.xyz 40 points 11 months ago (4 children)

dd. It writes to the disk at the block level and doesn't care whether there's any kind of filesystem or RAID configuration in place; it just writes zeroes (or whatever you ask it to write) to the drive and that's it. Depending on how tight your tin foil hat is, you might want to write a couple of passes from /dev/zero and /dev/urandom to the disk before handing them over, but in general a single full pass from /dev/zero makes it pretty much impossible for any Joe Average to get anything out of it.

And if you're concerned that some three-letter agency is interested in your data, you can use DBAN, which does pretty much the same thing as dd but automates the process and (afaik) does some extra magic to erase the data more thoroughly. In general, though, if you're worried enough about that scenario, I'd suggest using an arc furnace and literally melting the drives into an exciting new alloy.

[–] cmnybo@discuss.tchncs.de 12 points 11 months ago (1 children)

The one thing dd won't overwrite is bad sectors. If the disk has any reallocated sectors, the data in the original sectors may still be there.
Then again, if there are reallocated sectors, the disk is reaching the end of its life and isn't worth reusing anyway.

[–] IsoKiero@sopuli.xyz 13 points 11 months ago (1 children)

And if you're concerned about data written to sectors that have since been reallocated, you should physically destroy the whole drive anyway. With SSDs this is even more complicated, but I like to keep it pretty simple: if the data stored on the drive at any point in its life was under any kind of NDA or other highly valuable contract, it gets physically destroyed. If the drive spent its life storing my family photos, a single pass of zeroes with dd is enough.

In the end, the question is whether the drive ever held bits worth anything even remotely near the cost of a new drive. If it did, it's hammer time; if it didn't, most likely just wiping the partition table is enough. I've given away old drives after just 'dd if=/dev/zero of=/dev/sdx bs=100M count=1'. On any system that appears as a blank drive, and while it's still possible to recover files from it, that's good enough for donated drives. Everything else is either drilled through multiple times or otherwise physically destroyed.

[–] waspentalive@lemmy.one 3 points 11 months ago

Some SSDs can do a secure erase via block-level encryption, where the key is stored on the drive itself. There is a command that simply generates a new key, and voilà, your drive now contains only random bits. I don't know if newer spinning-rust drives have this feature too.
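For SATA SSDs, the usual way to trigger this is the ATA Secure Erase feature set via hdparm. A hedged sketch, where "tmppass" is an arbitrary temporary password and the guard skips everything unless an actual block device is passed:

```shell
# Hedged sketch: ATA Secure Erase on a SATA SSD via hdparm.
# The guard below refuses to run on anything that isn't a block device.
ata_secure_erase() {
    dev="$1"
    if [ -b "$dev" ] && command -v hdparm >/dev/null 2>&1; then
        # 1. Check the drive supports the ATA security feature set and
        #    is "not frozen" (suspend/resume the machine if it is frozen).
        hdparm -I "$dev" | grep -A 8 '^Security:'
        # 2. Set a temporary user password (required before erasing).
        hdparm --user-master u --security-set-pass tmppass "$dev"
        # 3. Issue the erase. Self-encrypting drives just rotate their
        #    internal key, so this often finishes in seconds.
        hdparm --user-master u --security-erase tmppass "$dev"
    else
        echo "refusing: '$dev' is not a block device (expected e.g. /dev/sdX)"
        return 0
    fi
}
```

Call it as `ata_secure_erase /dev/sdX` after triple-checking the device name; everything on the drive is gone afterwards.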

[–] Static_Rocket@lemmy.world 8 points 11 months ago

Yeah, either DD or the dm-crypt trick for filling the drive with crypto-grade randomness https://wiki.archlinux.org/title/Dm-crypt/Drive_preparation

[–] waspentalive@lemmy.one 3 points 11 months ago* (last edited 11 months ago) (1 children)

I claim my new rock band name "exciting new alloy"

[–] Tangent5280@lemmy.world 1 points 11 months ago (1 children)

hi, is the image AI generated?

[–] waspentalive@lemmy.one 2 points 11 months ago

Indeed - Meet Exciting New Alloy, playing on tour near you soon!

[–] SharkAttak@kbin.social 34 points 11 months ago (1 children)

Wow, was your porn that questionable?

[–] waspentalive@lemmy.one 7 points 11 months ago (1 children)

Since the disks are going to a 'family member' any porn at all, even the most tame, might get talked about.

[–] Tangent5280@lemmy.world 8 points 11 months ago

This degenerate has portraits of feminine ankles in his device

[–] rentar42@kbin.social 23 points 11 months ago

Just FYI: the often-cited NIST SP 800-88 standard no longer recommends/requires more than a single pass of a fixed pattern to clear magnetic media. See https://nvlpubs.nist.gov/nistpubs/specialpublications/nist.sp.800-88r1.pdf ("Guidelines for Media Sanitization") for the full text. In Appendix A, "Minimum Sanitization Recommendations", it states:

Overwrite media by using organizationally approved software and perform verification on the
overwritten data. The Clear pattern should be at least a single write pass with a fixed data value,
such as all zeros. Multiple write passes or more complex values may optionally be used.

This is the standard that pretty much birthed the "multiple passes" idea, but modern HDD technology has made that essentially unnecessary (unless you are combating nation-state-sponsored attackers, in which case you should be physically destroying anything anyway, preferably using some high-heat method).
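The "perform verification on the overwritten data" step from the quote can be sketched with plain dd and cmp. In this demo a scratch file (disk.img, an arbitrary 8 MiB) stands in for the real device so nothing gets wiped by accident:

```shell
# Sketch of "clear, then verify" per the NIST quote above.
# disk.img stands in for a real device such as /dev/sdX.
img=./disk.img
dd if=/dev/urandom of="$img" bs=1M count=8 2>/dev/null          # pretend "old data"

dd if=/dev/zero of="$img" bs=1M count=8 conv=fsync 2>/dev/null  # the clear pass

# Verify: the image should now compare equal to /dev/zero
# over its full length.
size=$(stat -c %s "$img")
if cmp -s -n "$size" "$img" /dev/zero; then
    echo "verified: $size bytes, all zeros"
else
    echo "verification FAILED"
fi
```

On a real drive, swap the file path for the device node and drop the first dd; the cmp check reads the whole disk back, so it takes as long as another full pass.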

[–] titey@lemmy.home.titey.net 22 points 11 months ago (2 children)

Usually, I use shred:

shred -vfz -n 2 /dev/device-name
  • -v: verbose mode
  • -f: forces the write permissions if missing
  • -z: zeroes the disk in the final pass
  • -n 2: 2 passes w/ random data
[–] FaceButt9000@lemmy.world 5 points 11 months ago (1 children)

Shred is what I used when destroying a bunch of old drives.

Then I disassembled them to pull out the magnets and platters (because they're shiny and cool). A couple had Torx screws I didn't have the right bit for, so I ran an HDD magnet over the surface and scratched the platters with a screwdriver.

[–] tburkhol@lemmy.world 3 points 11 months ago (1 children)

I have an inch-high stack of platters now. Kind of interesting to see how their thickness has changed over the years, including a color change in there somewhere. Keep thinking I should bury them in epoxy on some table top.

For extra fun, you can melt the casings and cast interesting shapes. I only wish I were smart enough to repurpose the spindle motors.

[–] Tangent5280@lemmy.world 1 points 11 months ago

Make sure you wear lung protection when you deal with those. They're terrible for your insides.

[–] Vilian@lemmy.ca 1 points 11 months ago (2 children)

Why not just write zeroes from the start, instead of writing random data and then zeroing it?

[–] MeanEYE@lemmy.world 6 points 11 months ago* (last edited 11 months ago)

Like u/MrMcGasion said, zeroing makes it easier to recover the original data. Data storage and signal processing is pretty much a game of threshold values. From the digital world you see a 0 or a 1, but in reality it's a charge on a scale, let's say 0 to 100%: anything above 60% might be read as 1 and anything below 45% as 0, or something like that.

When you zero the drive, the drive reduces the charge enough to pass the lower limit, but it will not be exactly 0 by any account. With custom firmware or special tools it is possible to adjust this threshold, and all of a sudden it's as if your data was never removed. Add to this the existence of checksums, and total removal of data becomes a real challenge. Hence why all these tools do more than one pass to make sure the data is really gone.

For this reason, random data is a much better approach than zeroing, because random data alters each block differently instead of just reducing the charge by a fixed amount, as zeroing does. Additional safety comes from multiple random-data passes.

All of this only applies to magnetic storage, that is to say HDDs. An SSD is a completely different beast, and wiping an SSD can reduce the drive's lifespan without actually achieving the desired result. SSDs have wear-leveling algorithms that make sure all blocks are used equally, so while your computer thinks it's writing to the beginning of the drive, in reality that block can be anywhere on the device and the address is just translated internally to the real one.

[–] MrMcGasion@lemmy.world 3 points 11 months ago (1 children)

Doing a single pass of one repeated value, like all zeroes, often still leaves the original data recoverable. Doing passes of random data and then zeroing lowers the chance that the original data can be recovered.

[–] Moonrise2473@feddit.it 6 points 11 months ago

The "can" in can be recovered means "if a state sponsored attacker thinks that you have nuclear secrets on that drive, they can spend millions and recover data by manually analyzing the magnetic flux in a clean room lab" not "you can recover it by running this program"

[–] MonkderZweite@feddit.ch 14 points 11 months ago* (last edited 11 months ago) (2 children)

# cat /dev/zero > /dev/your-disk

If you want a progress bar, use pv instead of cat.

[–] Cupcake1972@mander.xyz 5 points 11 months ago (1 children)

Or # dd if=/dev/zero of=/dev/your-disk status=progress

[–] bfg9k@lemmy.world 3 points 11 months ago

I love how straight up this is and how Linux allows it so easily.

[–] rentar42@kbin.social 11 points 11 months ago (1 children)

It's not much use now, but to avoid this entire issue, just use whole-disk encryption next time. Then the disk is effectively pre-wiped as soon as you "lose" the encryption key: simply deleting the partition table will present the disk as empty, with no chance of recovering any prior content.

[–] waspentalive@lemmy.one 5 points 11 months ago (3 children)

Does one have to supply the password at each boot with what you are describing? This sounds like the password is somewhere in the partition table. If so, what do I google to learn more?

[–] IlliteratiDomine@infosec.pub 3 points 11 months ago* (last edited 11 months ago)

There are many ways to set up full-disk encryption on Linux, but the most common all involve LUKS. Providing a password at mount (during boot for a root partition, or perhaps later for a "data" volume) is a bit more secure and more frequently done, but you can also use things like smart cards (e.g. a YubiKey) or a keyfile (basically a file as the password rather than something typed in) to decrypt.

So, to actually answer your question: if you don't want to type passwords and are okay with the security implications of storing the key with/near the system, putting a keyfile on removable storage that normally stays plugged in but can be removed to secure your disks is a common compromise. Here's an approachable article about it.

Search terms: "luks", "keyfile", "evil maid"
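The keyfile approach can be sketched in two steps: generate a small random file, then enroll it in a spare LUKS key slot. The paths below are made-up examples, and the enrollment is guarded so it only runs against a real block device (as root, with cryptsetup installed):

```shell
# Hedged sketch of the LUKS keyfile approach. ./luks.key and /dev/sdX
# are placeholder names, not anything specific from the thread.
make_keyfile() {
    keyfile="$1"
    dd if=/dev/urandom of="$keyfile" bs=512 count=1 2>/dev/null  # 512 random bytes
    chmod 0400 "$keyfile"                                        # owner-read only
}

enroll_keyfile() {
    dev="$1"; keyfile="$2"
    if [ -b "$dev" ] && command -v cryptsetup >/dev/null 2>&1; then
        # Prompts for an existing passphrase, then adds the keyfile slot.
        cryptsetup luksAddKey "$dev" "$keyfile"
    else
        echo "skipping enrollment: '$dev' is not a block device"
    fi
}

make_keyfile ./luks.key
enroll_keyfile /dev/null ./luks.key
```

With the keyfile on a USB stick, an entry in /etc/crypttab pointing at it lets the volume unlock without a typed passphrase; pull the stick and the disk is just noise.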

[–] vox@sopuli.xyz 2 points 11 months ago

store it in tpm

[–] rentar42@kbin.social 1 points 11 months ago

There are many different ways with different tradeoffs. For example, on my home server I've set it up so that I have to enter it on every boot, which isn't often. But I've also set it up to run an SSH server so I can enter it remotely.

On my work laptop I simply have to enter it on each boot, but it mostly just goes into suspend.

One could also have the key on a usb stick (or better use a yubikey) and unplug that whenever is reasonable.

[–] thenumbersmason@yiffit.net 9 points 11 months ago* (last edited 11 months ago) (2 children)

dd works fine, you'd use it something like this

dd if=/dev/zero of=/dev/[the drive] status=progress conv=fsync bs=4M

if: input file

of: output file

status=progress: shows progress

conv=fsync: does the equivalent of running "sync" after the command; it makes sure all the kernel buffers have actually been written out to the device. This makes the command appear to "hang" near the end, depending on how much RAM is in the computer. It's not actually hanging, it's just finishing writing out the data still cached in RAM, which can take a while depending on drive speed and the amount of cached data.

bs=4M sets the block size to something high enough you're not CPU bottlenecked. Not particularly important exactly what the value is, 4M is a good sane default for most things including this full disk operation.

edit: one pass of zeros is enough to protect against all trivial data recovery techniques. If your threat model includes three letter agencies the hammer and drill bit technique is 👍

[–] scrubbles@poptalk.scrubbles.tech 3 points 11 months ago (1 children)

Thanks! I've used dd for things like recovering/cloning drives but it makes complete sense I can wipe it too. Thanks for the progress trick too, it was always just a blank cursor to me when I ran it before!

[–] WaterWaiver@aussie.zone 4 points 11 months ago* (last edited 11 months ago)

I recommend a different set of flags so you can avoid the buffering problem @thenumbersmason@yiffit.net mentions.

This example prevents all of your RAM from getting uselessly filled up during the wipe (which makes other programs slow down whenever they need more memory; I notice my web browser lags), makes the progress display actually accurate (disk write speed instead of RAM write speed), and prevents the horrible hang at the end.

dd if=/dev/urandom of=/dev/somedisk status=progress oflag=sync bs=128M

"oflag" means output flag (to do with of=/dev/somedisk). "sync" means sync after every block. I've chosen 128M blocks as an arbitrary number, below a certain amount it gets slower (and potentially causes more write cycles on the individual flash cells) but 128MB should be massively more than that and perfectly safe. Bigger numbers will hog more ram to no advantage (and may return the problems we're trying to avoid).

If it's an SSD, I issue TRIM commands afterwards (the "blkdiscard" command); this makes the drive read as all zeroes without having to write the whole drive again with another dd pass.

[–] mouse@midwest.social 7 points 11 months ago (1 children)

While this won't help you now, something to think about going forward is encrypting new drives so that you don't have to worry about erasing/zeroing them: just toss the encryption key and you're good to go.

[–] raldone01@lemmy.world 2 points 11 months ago* (last edited 11 months ago) (1 children)

I would at least overwrite the LUKS header.

[–] mouse@midwest.social 1 points 11 months ago

I like it, then it's even harder to know that it was encrypted in the first place. Thanks for that suggestion.
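Overwriting the header region is a one-liner with dd. This demo uses a scratch file (header.img, 32 MiB, both made up) instead of a real device; the 16 MiB figure assumes a LUKS2 header plus keyslot area at the start of the disk:

```shell
# Sketch: zero the LUKS header region so the disk no longer even looks
# encrypted. header.img is a stand-in for the real device.
dd if=/dev/urandom of=header.img bs=1M count=32 2>/dev/null  # stand-in "drive"

# Wipe the first 16 MiB in place (notrunc keeps the rest of the data).
dd if=/dev/zero of=header.img bs=1M count=16 conv=notrunc,fsync 2>/dev/null

# On a real device, cryptsetup can also destroy the key slots directly:
#   cryptsetup luksErase /dev/sdX
```

Without the header and key slots, the rest of the disk is indistinguishable from random noise, so there's nothing left that advertises "encrypted data here".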

[–] synestine@sh.itjust.works 6 points 11 months ago (1 children)

'dd' works, but I prefer 'shred'. It does a DoD multi-pass shred by default, so I usually use 'shred -vn1z /dev/(drive)'. That gives output, does a one-pass random write followed by one-pass zero of the disk. More than that just wastes time, and this kinda thing takes hours on large spinners. I also use 'smartmontools' to run SMART tests against my drives regularly to check their health.
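The smartmontools workflow mentioned here can be sketched as a small guarded helper; smartctl needs root and a real drive, so the function skips everything otherwise (the attribute names grepped for are the usual suspects, not an exhaustive list):

```shell
# Hedged sketch of routine SMART health checks with smartmontools.
smart_check() {
    dev="$1"
    if [ -b "$dev" ] && command -v smartctl >/dev/null 2>&1; then
        smartctl -H "$dev"                                   # overall health verdict
        smartctl -A "$dev" | grep -Ei 'reallocated|pending'  # the worrying counters
        smartctl -t long "$dev"                              # start an extended self-test
    else
        echo "skipping SMART check: '$dev' is not a block device"
        return 0
    fi
}
```

Run `smart_check /dev/sdX` before wiping: nonzero reallocated or pending sector counts are a good reason to reach for the hammer instead of the donation pile.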

[–] Kid_Thunder@kbin.social 4 points 11 months ago* (last edited 11 months ago)

It does a DoD multi-pass shred by default

Just a heads-up: that's not a thing anymore (since 2006, when the 1995 revision was superseded). Now you have to physically destroy the drive, or do whatever the policy of the CSA that owns it says. Generally the direction for an HDD would be: if available, use a degaussing rod, and then regardless, shred it in an approved HDD shredder (a physical shredder) or incinerate it. For an SSD, incinerate it.

And 5220.22-M (the 1995 version), which most commercial and some not-so-commercial software referenced as the "DoD standard", doesn't even exist anymore. It is now 32 CFR Part 117, and the part about sanitization is §117.18(b)(2).

[–] Grass@sh.itjust.works 6 points 11 months ago (1 children)

So I run encrypted drives, except for the odd thing where it would be a pain in the anoos, but what would... drive... someone to physically destroy the drives?

Literally the only thing I can think of is having massive amounts of illegal data, but what would that even be?

[–] Tangent5280@lemmy.world 6 points 11 months ago

Physically destroying drives is standard operating procedure. Big data centres and data brokers literally have a giant industrial shredder in the basement that shreds and pulverises all obsolete hardware.

Anyone with any sort of fiduciary or legal duty to keep data safe also needs to destroy data storage: think lawyers, doctors with their own clinics, journalists, police officers, etc. Now even people in finance need to worry about their data, because even the most inane shit, like call logs from a mid-to-high-level banker, can be sold for a fuckton of money.

Or it could be plans for 9-11 part deux electric boogaloo, can't tell.

[–] tofubl@discuss.tchncs.de 5 points 11 months ago

I was researching the same question a few days ago and am currently running badblocks -b 4096 -c 8 -svw /dev/sda on my old NAS drive. It makes a few passes writing the disk full of patterns like 0xaa and 0x55 and then reading them back. I have the disk in a USB 2.0 SATA adapter on a Raspberry Pi 3, and it's currently at 70% of pass #2 after 100 hours, so it sure is slow, but I don't mind.

[–] Godnroc@lemmy.world 5 points 11 months ago

I've been using shred to wipe things at work. I give it seven passes of random data, then a pass of zeros. Probably overkill, but everything is going to surplus auctions and some of the data is sensitive.

[–] BaldProphet@kbin.social 4 points 11 months ago (1 children)

You could also use DBAN to perform the erasure from outside the operating system.

[–] AbidanYre@lemmy.world 7 points 11 months ago (1 children)

DBAN was acquired. I believe ShredOS (https://github.com/PartialVolume/shredos.x86_64) is its spiritual successor.

[–] BaldProphet@kbin.social 2 points 11 months ago (1 children)

Oh, thanks for pointing that out. It's been a while since I needed to erase a disk so I didn't realize DBAN isn't really a thing anymore.

[–] AbidanYre@lemmy.world 2 points 11 months ago* (last edited 11 months ago)

Yeah, I was pretty sad to see it when I needed to wipe a disk recently for the first time in like 15 years.

[–] PlexSheep@feddit.de 4 points 11 months ago

Besides the mentioned tools, you can also just use disk encryption and discard the secret used to unlock it.

[–] Vilian@lemmy.ca 3 points 11 months ago

I usually just follow this tutorial:

https://m.youtube.com/watch?v=MHSY4RdVL40

Works flawlessly every time.
