Disk wipe techniques

I frequently need to wipe drives from donated machines before passing them
on. I’ve got a commercial app that will do various wipes up to DOD (Orange
book) standards but what is available for Linux that will do a satisfactory
job without taking forever to run?


Will Honea

On 02/24/2011 01:39 PM, Will Honea wrote:
> I frequently need to wipe drives from donated machines before passing them
> on. I’ve got a commercial app that will do various wipes up to DOD (Orange
> book) standards but what is available for Linux that will do a satisfactory
> job without taking forever to run?

Does “shred” work for you?
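
A typical invocation on a whole device might look like this (just a sketch; /dev/sdX is a placeholder, so double-check the device name before running it):

shred -v -n 3 -z /dev/sdX

That makes three random-data passes, adds a final pass of zeros to hide the shredding, and prints progress as it goes.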

Larry Finger wrote:

> On 02/24/2011 01:39 PM, Will Honea wrote:
>> I frequently need to wipe drives from donated machines before passing
>> them
>> on. I’ve got a commercial app that will do various wipes up to DOD
>> (Orange book) standards but what is available for Linux that will do a
>> satisfactory job without taking forever to run?
>
> Does “shred” work for you?

I’m new enough that I had to look it up. I’ll have to do some more reading
to decide if the exceptions mentioned in the man pages are a factor/hazard.


Will Honea

On Thu, 24 Feb 2011 19:39:57 +0000, Will Honea wrote:

> but what is available for Linux that will do a satisfactory job without
> taking forever to run?

dd if=/dev/urandom of=/dev/[device] bs=1M

dd if=/dev/zero of=/dev/[device] bs=1M

(Of course, only do this when you actually intend to wipe the device out - these are dangerous commands to run.)
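
To spot-check a zero wipe afterwards, one old trick (again with /dev/[device] as the placeholder) is:

cmp /dev/zero /dev/[device]

Since /dev/zero never ends, cmp reporting “EOF on /dev/[device]” means every byte read back as zero; any earlier “differ” message points at a byte the wipe missed.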

Of course, if you need something that does DoD Orange Book standard
wipes, then it’s going to be multi-pass, and the time it takes to run
will depend on the size of the device.
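
For a rough sense of scale (illustrative numbers, not measurements): a 500 GB drive that sustains about 80 MB/s of sequential writes needs roughly 500,000 MB / 80 MB/s ≈ 6,250 seconds, or about 1.75 hours, per pass, so a 7-pass DoD-style wipe ties the machine up for most of a day.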

Obviously, you can’t get around the physical limitations of the
hardware. :-)

Jim


Jim Henderson
openSUSE Forums Administrator
Forum Use Terms & Conditions at http://tinyurl.com/openSUSE-T-C

How about DBAN?

Darik’s Boot and Nuke (“DBAN”) is a self-contained boot disk that securely wipes the hard disks of most computers. DBAN will automatically and completely delete the contents of any hard disk that it can detect, which makes it an appropriate utility for bulk or emergency data destruction.

DBAN is a means of ensuring due diligence in computer recycling, a way of preventing identity theft if you want to sell a computer, and a good way to totally clean a Microsoft Windows installation of viruses and spyware. DBAN prevents or thoroughly hinders all known techniques of hard disk forensic analysis.

Boot and Nuke is my favorite, but I have also used “wipe” successfully.
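
If you want to try it, the image just gets burned to a disc as usual; a sketch, with the ISO file name as a placeholder:

wodim -v dev=/dev/sr0 dban.iso

Then boot the target machine from that disc and follow the prompts.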

For accurate total-to-new deletion, what I use is dd commands on each partition, then delete all partitions, make one huge partition, and use DBAN. It’s the fastest method I know of that assures absolutely zero forensic recovery.

dd and DBAN react to the head azimuth of drives differently. Normally, the head tracks for read and write in the center of the track, with slight spillover on the leading (upper guard band) and trailing (lower guard band) edges. Thus, during forensic recovery, all one needs to do is sample tracks in multiple passes, saving the image caught at each pass. The net result is a 50% to 98% recovery. Writing partitions multiple times gradually catches more and more of the guard bands until nothing can be recovered.

dd increases the flux current slightly during writes, which tends to cover the center write area and the lower guard band effectively in one pass. Removing the partitions to make one huge partition complicates the recovery forensics, making reading and deciphering contents harder but not impossible, as the upper guard band may still be readable. DBAN causes the drive head to track closer to the upper guard band, making recovery success drop to about 2% if one has done the above measures.

If you don’t know what azimuth is, you can think of it like shining a dim penlight on a thick line drawn on paper. Shine the light from directly above and there will be a small spill above and below the line (the act of normal reading and writing). Tilt the light very slightly, from 90 degrees to say 88 degrees, and more spill will land on one side of the line.

Most current HDDs use time-based stepping between tracks rather than the older fixed-space stepping. dd increases the track step time by increasing the flux current slightly (the current is still applied as track stepping starts), and DBAN retards the track step time slightly by shortening the step-on pulse time.
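As a rough sketch of that sequence (the names /dev/sdX, /dev/sdX1 and /dev/sdX2 are placeholders for the actual drive and its partitions; triple-check them before running anything):

dd if=/dev/urandom of=/dev/sdX1 bs=1M
dd if=/dev/urandom of=/dev/sdX2 bs=1M
parted -s /dev/sdX mklabel msdos mkpart primary 0% 100%

then boot DBAN from its own disc for the final multi-pass wipe.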

Most people don’t realize how much an HDD can really do through precise software manipulation of timing loops. It’s this direct HDD control, rather than the regular read and write functions, that gives HDD-wiping software its power. For more on this you can look up HDD port addressing and the data meanings. There were, the last time I looked, 16 I/O addresses for read and write operations. Normal read, write, seek, and step use only the lower 5 ports. The other 11 are HDD performance registers which can be read and written. WARNING! If you intend to change registers 6 through 16, you must first save their contents so you can restore them after doing the manipulation. The registers will be reloaded by the power-on self-test at the next cold boot.
Source: QUE DOS BIOS Function Calls (2001) & SAMS Assembly Language Device System Calls (2007)
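
As an aside, on reasonably modern ATA drives there is a sanctioned way to trigger the drive’s own firmware-level erase without poking task-file registers by hand: the ATA Security feature set, exposed through hdparm. A sketch (the device name and the throwaway password “p” are placeholders):

hdparm -I /dev/sdX
hdparm --user-master u --security-set-pass p /dev/sdX
hdparm --user-master u --security-erase p /dev/sdX

The first command shows whether the Security feature set is supported; the erase itself is then performed by the drive, which in principle covers every sector the firmware manages, including remapped ones.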

Enjoy!

@techwiz03: I do not mean to sound impolite, but the above description leaves me pretty skeptical. I definitely am not a hardware guru at all; basically I do not understand much of what you have written. I admit that, okay? :-)

What I see is that there is a lot of myth about deleting hard drive contents circulating on the net, and I have the strong feeling that these myths have a clear intention: data recovery is a pretty profitable line of business, and well… if there was an easy way to delete a hard drive, not many people would consult companies who offer recoveries, would they (if they know a disk has actually been wiped)? The thing is: if you consult such a company, you will have to pay for their service even if the data remains unrecoverable. To keep customers coming, one has to suggest that even if a disk was wiped, there still might be one way or another to recover it. In the case of a properly wiped hard disk, what you pay for is actually some time of hope - or so it seems to me.

There are one or two things in your description that fit quite well with this notion. 1st: a result of 50% possible recovery is not a result at all. Statistically, if you have one bit and are able to recover it with a chance of 50%, both states of the bit are still equally likely (0 and 1), so you have learned nothing. True, you did not write “50% recovery” but “50% to 98% recovery”; translated into human language, though, that means almost any degree of success is possible (except for 100% recovery). This makes the sources of these theories sound rather dubious to me.

2nd: you mention a “guard band” which might offer the possibility of recovering data from it. I admit I have never heard of such a guard band before; obviously they exist in hard drives (if you say so, I did not check that), but what are they for? Because to me the term sounds as if it is something to divide two things from each other, in this case (that’s my interpretation) hard drive areas such as blocks (or something, I really have no idea). If so, most likely no valid data will be written on the guard band at all, will it? It would not fulfil its job then. Again: this is wild speculation, but it makes me raise an eyebrow.

Finally: one day I stumbled over an article - I am sure it will be an interesting read for you in case you don’t know it yet; it’s called “Overwriting Hard Drive Data: The Great Wiping Controversy” (http://tinyurl.com/357h7tf) and is referring to a paper of the same name. To make it short (esp. since the article will describe the details far better than I ever could): actually overwriting data on a hard disk one time with any kind of data will make anyone unable to recover the original data. Period (the result is really that simple). It is not necessary to overwrite hard drives with zeroes first and then repartition it and overwrite it with random data or even to do it several times. It’s gone.

I am referring only to the areas which get shredded / wiped, of course. One always has to take things like journaling or RAIDs into account (and don’t forget that darn revealing ~/.thumbnails folder… :-) ).

techwiz03 wrote:

> For accurate total to new deletion what I use is dd commands on each
> partition, then delete all partitions, make one huge partition and use
> DBAN.

Thanks for the detail - that splatter was my primary concern with the dd
approach. The timing techniques remind me of some I wrote years back using
variable track selection order to achieve the same thing. The only drawback I
see with DBAN is the self-booting disk part. Some of the old machines I get are
only good for scavenging parts, so booting a machine just to wipe it is problematic.
I’ll tuck this one away and try it if I don’t come across one that runs from
a booted Linux.


Will Honea

On Fri, 25 Feb 2011 02:36:02 +0000, gropiuskalle wrote:

> … if there was an easy way to delete a hard drive, not many people would
> consult companies who offer recoveries, would they (if they know a disk
> has actually been wiped)?

The only way I know of to absolutely guarantee zero chance of recovery of
a hard drive is to take it to a place that grinds it down into dust.

That’s what I’ve done with my hard drives that I no longer wanted or
needed. Too much personal info on them to take a risk of someone being
able to recover anything on most of them (I’ve had a few that I’ve sold
or given to someone, but I’ve not stored significant personal data on
them myself).

Jim

Jim Henderson
openSUSE Forums Administrator
Forum Use Terms & Conditions at http://tinyurl.com/openSUSE-T-C

gropiuskalle wrote:

> Finally: one day I stumbled over an article - I am sure it will be an
> interesting read for you in case you don’t know it yet; it’s called
> →"‘Overwriting Hard Drive Data: The Great Wiping Controversy"’
> (http://tinyurl.com/357h7tf) and is referring to a paper of the same
> name. To make it short (esp. since the article will describe the details
> far better than I ever could): actually overwriting data on a hard disk
> one time with any kind of data will make anyone unable to recover the
> original data. Period (the result is really that simple). It is not
> necessary to overwrite hard drives with zeroes first and then
> repartition it and overwrite it with random data or even to do it
> several times. It’s gone.

You have obviously never been exposed to the magic that certain agencies
employ ;-) My background with wiping disks goes back to 36" platters with
hydraulic head positioning (you carried a crescent wrench and wiping rags in
the tool kit), and totally secure wiping of used disks has been an elusive
goal for as long as magnetic media has existed. Those guard bands are there
to prevent bleed-through between tracks, which are actually fuzzy bands of
magnetic domains. If you access a drive with fractional head positioning,
you will see that the actual track is a band, with the nominal center
being the line of the strongest magnetic domains and decreasing strength
to either side. Much as we would like to believe otherwise, the real world is
an analog domain - there are no instantaneous changes. Reading the splatter is
only one of the simpler recovery techniques.

The story goes around that the Air Force had a drive go bad with top-secret
military plans on it many years ago. It was one of those 36" units with
aluminum platters, and the ONLY approved way to decommission it was to melt
the platter in an approved classified furnace, so it sat in the corner of the
classified vault for years for lack of an approved facility.


Will Honea

And that’s ok. I’ve been a system-level hardware engineer for years.

<snip> profitable line of business, and well… if there was an easy way to delete a hard drive, not many people would consult companies who offer recoveries, would they (if they know a disk has actually been wiped)? <snip>

Yes, there are many theories, and most are false. They give the impression that the hard disk has been wiped; then along comes a recovery company and restores almost all the original data. :-( Take the same hard drive with better wiping software and techniques, and the recovery company comes back and says no dice!

1st: a result of 50% possible recovery is not a result at all. … This makes the sources of these theories sound rather dubious to me.

No magnetic recording technique is pure. If you know the experiment with a magnet under a sheet of paper, where you shake iron filings onto the paper, you get a pretty fair representation of the magnetic field effect, but not all the filings will align perfectly. Hard disk surfaces work much the same way. Each disk platter is recordable from the outer edge to the innermost area (next to the hub). The ultra-mini heads glide fractions of an inch above the surface, controlled by the spinning disk, a head-movement motor, and some rather complex timing, both for write speed vs. rotational speed and for head-move action vs. the time allowed for movement. To this we need to add formatting marks for tracks and sectors, plus a CRC (cyclic redundancy code) to assure data is written correctly. Between these formatting marks goes your data and the CRC check value, as a stream of bits.

Here’s where I think the confusion has arisen. When we talk of recovery being 50 to 98% on a poorly wiped disk, we are talking about the percentage of track surface which could be successfully reconstructed from the residual signal out at the edges of the tracks. The quality of the disk surface and the accuracy of both the head and the movement electronics play a big role. With such fine tracks packed onto a disk, and a head that is slightly wider than a track to compensate for wobble, each track has a theoretical no-write zone (guard) on either side, plus a small double guard, which is simply a small area between the lower guard of one track and the upper guard of the next one.

2nd: … I admit I have never heard of such a guard band before; … but what are they for? Because to me the term sounds as if it is something to divide two things from each other, in this case (that’s my interpretation) hard drive areas such as blocks (or something, I really have no idea). If so, most likely no valid data will be written on the guard band at all, will it? It would not fulfil its job then. Again: this is wild speculation, but it makes me raise an eyebrow.

They exist simply as an unused area that catches write overshoots due to disk wobble, and can be represented like this:

|lower guard|double guard|upper guard|  write area  |lower guard|
            |<------------------ head ------------------->|
                        |<------- normal write ------->|
                          |<---- normal read ---->|
                 |<-------- forensic read --------->|

If this all shows up correctly, you will see that the head is actually recording over a much bigger area than what normal write and read count as valid. Thus, playing with multi-pass writes is done in hopes of causing enough overshoot to really leave nothing to read back. Modifying the track spacing by a small margin slightly moves the head, and varying the pickup amp up or down makes it more or less sensitive, which makes either recovery or forensic erasure possible. Now, of course, you could also open the case and wave a magnet over the spinning disk to wipe it out, both by the magnetics and by introducing dust into a hermetically sealed environment.

Finally: one day I stumbled over an article - I am sure it will be an interesting read for you in case you don’t know it yet; it’s called “Overwriting Hard Drive Data: The Great Wiping Controversy” and is referring to a paper of the same name. To make it short (esp. since the article will describe the details far better than I ever could): actually overwriting data on a hard disk one time with any kind of data will make anyone unable to recover the original data. Period (the result is really that simple). It is not necessary to overwrite hard drives with zeroes first and then repartition it and overwrite it with random data or even to do it several times. It’s gone.

A great thing for the police to have around! It spreads the myth quite well and makes it so easy for them to remove an “erased” hard disk to recover and use as evidence. If it were really as the article says, I’d be out of business. The number of times someone has come to me saying they formatted the wrong drive, asking if I can get their data back!

Hope you learned — er — wanted to learn about that mysterious rectangle.
Next topic: NVM (non-volatile memory), or why you can’t recover an erased flash drive. Short answer: the memory consists of actual cells, one for each bit of each byte, reached by precise addressing that locates the specific group of bits forming a byte. Each cell is simply charged or discharged, hence no recovery once it’s been overwritten.

Interesting discussion; it’s good to know that others with some physics/engineering background can chime in when concepts need to be understood… I love hardware!

This thread pertains to Winchester disks, but for those owning or hoping to own an SSD, you might be surprised by this article:

Study: Nearly Impossible to Delete Data on SSDs

I expect this curious situation will be addressed as SSD products mature, but for now it’s good to be aware of it.

Technically it is the same for hard disks in a Windows system or those using ext3/4. Both use a trash can to simply not show stored info, but it still exists. When you delete items from the trash can, it removes the pointers to the data, but the data is still there until your system decides it needs some of the space; then portions of the original will disappear. Where Windows uses the first available free block to store new data, and fragments the data into multiple blocks if it won’t fit, Linux ext3/4 stores data only to free blocks that are large enough to hold the whole file unfragmented.

Each disk access also carries out one hidden operation: seek a free block near the start of the drive and move a file from further down the drive that will fill that spot. Thus a Linux drive is always optimized for fast access and has the least fragmented layout out there. But as the article alludes, common techniques used to erase HDDs don’t work on SSDs, and the primary reason is the lack of cohesion between block and section writes. The floating structure of the directory and its data clusters is designed to extend life by limiting the number of writes to any one area of the drive. If you were constantly copying a huge file to an SSD, erasing it, and reusing the same space again, write failure would occur rather rapidly. So in SSD technology new writes progress further along the device even if space was freed, and only when further files won’t fit near the end of the device does previously freed space near the start actually get overwritten. Just like HDDs, there is a directory separate from the data storage, but on an SSD it is also constantly being moved around in storage space, because if it weren’t, constant rewrites of the directory would result in drive failure.
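
You can demonstrate the “still there until reused” part yourself. A sketch, where the marker string, the mount point and /dev/sdX1 are placeholders, and reading the raw partition needs root:

echo "WIPE-TEST-MARKER-42" > /mnt/test/marker.txt
sync
rm /mnt/test/marker.txt
grep -a "WIPE-TEST-MARKER-42" /dev/sdX1

Until the filesystem happens to reuse those blocks, grep will usually still find the string on the raw device.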

In early SSD experiments, they found complete failure of the device could occur in under 50 cycles of (write file, delete file, rewrite file) with a stationary directory and stationary placement of the file. But moving the file and directory around extended life to a whopping 150,000 writes at any one location, which in real-life use may work out to 200 million saves or more even on smaller SSDs. The larger the SSD, the longer, in theory, it will take to saturate the substrate and cause device failure.
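
That floating placement is also why the usual dd/shred approach is a poor fit for SSDs: the controller remaps writes, so overwriting a logical block does not necessarily overwrite the flash cells that used to hold it. One device-level route is to tell the controller to discard everything it holds; a sketch with util-linux’s blkdiscard (again /dev/sdX is a placeholder, and this is just as destructive as dd):

blkdiscard /dev/sdX

What the controller actually does with discarded blocks is up to its firmware, which is exactly the caveat the article linked above is raising.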

thermite

http://www.youtube.com/results?search_query=thermite+hard+drive&aq=f

DenverD
CAVEAT: http://is.gd/bpoMD
[NNTP posted w/openSUSE 11.3, KDE4.5.5, Thunderbird3.0.11, nVidia
173.14.28 3D, Athlon 64 3000+]
“It is far easier to read, understand and follow the instructions than
to undo the problems caused by not.” DD 23 Jan 11

I’m glad techwiz explained file allocation on SSDs instead of me; I’d have been twice as verbose and half as clear. :-)

As it pertains to the original post, the important issue I was trying to bring up is that normal disk wipe/shred procedures are totally disregarded by the SSD controller in many, many models, due to the way it attempts to increase its longevity and protect itself. I was not aware of this until the aforementioned article came to my attention.

In some instances, security could be compromised unknowingly.

Hm. While your description sounds plausible in itself, it remains a mere description (meaning: there is no test data [yet]). On the other hand, the article by Craig Wright describes a thesis and a line of extensive tests to prove this thesis. Do you see any flaws in the article (esp. the tests)?

As a matter of fact, a big yes. Firstly, he talks about examination using an electron microscope on flexible floppies to try to find bits of charged particles. The actual particles are so small that an electron microscope would exhibit heavy magnetic resonance on the surface, resulting in an inadvertent flux change to the surface that destroys the very data he sought to recover. He talks of MFM recording techniques (pre-1982) but doesn’t mention RLL, used from 1982 to 1985, or the host of current technologies in use today. Trying to recover using his methods plain won’t work, so yes, he proved that visual extraction of data with an electron microscope would be fruitless, and that reading and writing back to the drive won’t recover things either. He is right on this point.

To recover a hard disk, you use forensic read methods to capture the image in sections and write the results to a new (other) drive, because writing to the same drive you are reading from can change surrounding contents. I noticed that he made no effort to attempt reading along track edges and focused only on the center of tracks; even worse, he totally avoided MRI (magnetic resonance imaging) and drive geometry theory in his evaluation. He did spend some time skimming over flux density and peak value thresholds in an attempt to qualify his results. And he talked about how successive writes will never really align to the exact same position due to head variance; if this were entirely true, we would not have even a semi-reliable method for storing data. Yes, there is slight variance as the head aligns over the track, but each disk has a fixed alignment mark to synchronize the start of tracks, and data record rates are carefully timed from this precise marker so that each sector can be safely read and written repeatedly.

I’ll sign off with this: you can believe me or refute me as you like; it’s no skin off my nose. My aim was just to offer insight from over 35 years of detailed work on drive technology.

techwiz03 wrote:

> Next topic, NVM (Non Volatile Memory) Why can’t I recover an erased
> Flashdrive. Short answer is memory is actual cells one for each bit of
> each byte defined by precise addressing to locate a specific group of
> bits that form a byte. The charge is simply charged or discharged, hence
> no recovery once it’s been overwritten.

Even that is misleading, as recent research reveals that the write-acceleration
algorithms inside SSD drives make it virtually impossible to ensure that every
previously written bit is erased. An aside: ever hear of the microscopic
(visual) methods to “read” the surface bit by bit? They use a magnetic dye,
and you gotta REALLY want the data off the disc, as it is a manual (very, very
expensive) technique. As for erasure, there are some fairly high-frequency AC
devices which can neutralize a platter without opening the drive, but they
also have a tendency to destroy the heads via inductive coupling to the
conductors.


Will Honea

techwiz03 wrote:

> As a matter of fact a big yes. Firstly he talks about examination
> using an electron microscope on flexible floppy’s to try and find bits
> of charged particles.

Quite true. The dye/visual techniques I mentioned earlier are likely not
applicable to current recording methods anyway, since the vertical recording
technique obscures the structure. I haven’t been in the lab for a long
time, so what I learned then is certainly out of date now.

Interesting discussion, anyway. I appreciate your comments.


Will Honea