I dropped my enclosure yesterday and now this 3 TB drive is messed up :{
If I remember right there were 3 Btrfs partitions, and now fdisk -l shows them as: 128 GB 83 Linux, 128 GB 83 Linux and 100 GB HPFS/NTFS/exFAT…
I never took a picture of the partition table, so I can’t remember the start/end of each partition :{
I have a lot of work saved on this drive and I would like to recover as much of the data as possible, but I don’t know how to proceed…
I tried testdisk but it seems all the filesystems are damaged… and this application looks inappropriate for this.
Running btrfs check on all the supposed Btrfs partitions seems to do nothing… except output “No valid Btrfs found on /dev/sde1”, etc…
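Something I may still try: btrfs check can be pointed at one of the backup superblocks (Btrfs keeps copies at 64 MiB and 256 GiB into the device). A sketch, assuming btrfs-progs works as documented:

# try the first backup superblock copy instead of the (possibly damaged) primary
btrfs check --super 1 /dev/sde1
# and the second copy if that one fails too
btrfs check --super 2 /dev/sde1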
I made a mistake using mke2fs… I used -v instead of -n… on /dev/sde1
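For anyone reading along, the difference matters a lot here; as far as I understand the mke2fs man page:

# -n is the dry run: it only prints what mke2fs would do, nothing is written
mke2fs -n /dev/sde1
# -v is merely verbose: an ext2 filesystem IS created, overwriting what was there
mke2fs -v /dev/sde1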
It seems the sizes of the three partitions reported by “fdisk -l” correspond (approximately) to the free space that was left on each partition before this hard disk fell… (it was offline, and fell from about 60 cm).
Each partition was initially about 1 TB.
Why don’t you include the line with the command when you copy/paste from the terminal between the CODE tags? It is only one line, not much effort, is it? Now we have some output but we do not know why it is there!
This seems so difficult for many to understand that I will give an example.
Assuming that what is posted in post #6 above is the output of some fdisk command, it looks like a DOS-partitioned mass storage device of 2.7 TiB, with three partitions that follow each other without gaps, of 128G, 128G and 93.3G.
Together that is ~349.3G, so the rest is unpartitioned. That does not look abnormal at all.
Yes, I ran fdisk -l, and for now… testdisk has recovered at least one partition (600 GB).
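In case it helps, testdisk can also dump what it detects without going through the menus; a sketch, assuming the disk is still /dev/sde:

# non-interactive: print the partitions testdisk can see and exit
testdisk /list /dev/sde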
I think if I go deeper I could recover the whole hard disk… with some losses, undoubtedly…
I now have two USB devices moving data at 30 Mb/s :{ it will take hours to move the data.
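Copying with something resumable like rsync might be safer than a plain file-manager move; a sketch, with made-up mount points (mine differ):

# -a keeps permissions/timestamps, -P shows progress and resumes partial files
rsync -aP /mnt/recovered/ /mnt/backup/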
Once that is done… I will retry on the partitions detected by testdisk and run a deep search on the first partition, which contained the “most important” data, as well as on the one currently being backed up.
The third partition contained some movies I think… it is not very important if I lose that one.
sirius:/ # fdisk -l
…
Disk /dev/sde: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk model: 003-1F216N
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x90ad11ac
The question from my side is still: what do you want help with? Do you have any questions? You are hopping around, doing things faster than you can report them here. So people here have no idea what your computer technical question is. At least, I haven’t.
Yes… I’m a little stressed, even though I now have great hope of recovering the data on the disk.
My questions:
Is it possible to repair a Btrfs partition when the system says “not valid btrfs partition”?
I have seen in testdisk, though I have not yet tried it, that it is possible to set the partition type to Btrfs…
I can’t remember if the last partition was NTFS/exFAT or Btrfs… as I never had an issue with this disk, I never took a snapshot of the fdisk -l output.
I suppose that when I try to mount it and the command says “not valid filesystem…”, that means I have to find another way to tell it that the filesystem is Btrfs for /dev/sde2 (I’m 100% certain of that one) and probably exFAT for the last one, sde3 (sketch below).
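What I plan to try, with my device names; blkid should at least tell me which signature is really on each partition, and mount can be forced to a type, read-only to be safe:

# which filesystem signature, if any, is still detectable?
blkid /dev/sde2
# force the type instead of letting mount autodetect; read-only to avoid more damage
mount -t btrfs -o ro /dev/sde2 /mnt
# Btrfs can also be asked to fall back to a backup tree root
mount -t btrfs -o ro,usebackuproot /dev/sde2 /mnt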
I’m still moving the files from /dev/sda2… two hours remaining :{
Your story is still a confusing one. And we cannot be sure there ever was a Btrfs file system on the partitions you are talking about. I get the idea that those file systems just have data on them and not an operating system (openSUSE). Thus it is very unlikely they were Btrfs. Or do you have a special snapshotting policy for that data?
Also, when mount says, even when you tell it that the file system should be Btrfs, that it cannot detect it as such, then you can be sure it is damaged, probably already in the superblock.
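If it really is only the superblock, btrfs-progs has a command that tries to restore it from the backup copies. A sketch, with no guarantee it finds usable copies on your device:

# scan the backup superblocks and offer to restore the primary from them
btrfs rescue super-recover -v /dev/sde2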
BTW, you again fall back to things like “when the system says ‘not valid btrfs partition’”. I only accept such information when it is copied/pasted, including the prompt/command line, all output, and the next, new prompt line, between CODE tags. And no storytelling. If not, you will not see me back on this thread.
Yep it is confusing. Sorry.
I put Btrfs on at least two partitions of this external drive, which fell Sunday night.
One partition had fortunately been backed up… I’m trying to access the two others, but it seems the first and the third are completely messed up.
The disk wasn’t running and wasn’t powered when it fell, so I think that if I rewrite a filesystem on this disk I could reuse it.
I got no I/O errors when using testdisk or when I moved the data that was recoverable.
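Before reusing it I will probably also check what the drive itself reports; a sketch, assuming smartmontools is installed:

# overall health verdict, then the raw error/reallocation counters
smartctl -H /dev/sde
smartctl -A /dev/sde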
All I could do was use testdisk to “restore” the correct sizes of the different partitions.
Reading your posts, you seem to think that when you dropped it, the shock overwrote the partition table with another one that still makes sense. That is unbelievable to me.