gparted taking very long to create filesystem

Hi, I want to partition a new 6TB disk with a single ext4 partition. It’s not for booting, so making it bootable isn’t necessary.

I attached the disk through USB (2.0).

Gparted reports (in Dutch, so I haven’t got the exact English messages):

  • it has made an empty partition,
  • wiped old partition signatures by writing zeros at several offsets,
  • configured the partition type ext4
  • and is now making a new ext4 filesystem with
mkfs.ext4 -F -O ^64bt -L 'backup-2' '/dev/sdf1'

In the last step it completed a few substeps:

  • allocating group tables succeeded
  • writing inode tables succeeded
  • creating a journal (262144 blocks) succeeded

But gparted has now been busy for a few hours writing superblocks and file system metadata.

That seems very long to me, or am I too impatient?

The disk is still seen by the system, I think:

lsusb:
Bus 002 Device 004: ID 152d:2338 JMicron Technology Corp. / JMicron USA Technology Corp. JM20337 Hi-Speed USB to SATA & PATA Combo Bridge

dmesg:
[ 3265.065949] usb 2-5: new high-speed USB device number 4 using ehci-pci
[ 3265.222793] usb 2-5: New USB device found, idVendor=152d, idProduct=2338
[ 3265.222801] usb 2-5: New USB device strings: Mfr=1, Product=2, SerialNumber=5
[ 3265.222806] usb 2-5: Product: USB to ATA/ATAPI bridge
[ 3265.222809] usb 2-5: Manufacturer: JMicron
[ 3265.222813] usb 2-5: SerialNumber: 000001D91851
[ 3265.258827] usb-storage 2-5:1.0: USB Mass Storage device detected
[ 3265.259066] scsi host10: usb-storage 2-5:1.0
[ 3265.259304] usbcore: registered new interface driver usb-storage
[ 3265.260984] usbcore: registered new interface driver uas
[ 3270.319182] scsi 10:0:0:0: Direct-Access     WDC WD60 EFRX-68L0BN1          PQ: 0 ANSI: 5
[ 3270.319666] sd 10:0:0:0: Attached scsi generic sg6 type 0
[ 3270.320996] sd 10:0:0:0: [sdf] Very big device. Trying to use READ CAPACITY(16).
[ 3270.321370] sd 10:0:0:0: [sdf] 11721045168 512-byte logical blocks: (6.00 TB/5.46 TiB)
[ 3270.322454] sd 10:0:0:0: [sdf] Write Protect is off
[ 3270.322461] sd 10:0:0:0: [sdf] Mode Sense: 28 00 00 00
[ 3270.325781] sd 10:0:0:0: [sdf] No Caching mode page found
[ 3270.325787] sd 10:0:0:0: [sdf] Assuming drive cache: write through
[ 3270.327034] sd 10:0:0:0: [sdf] Very big device. Trying to use READ CAPACITY(16).
[ 3270.340090] sd 10:0:0:0: [sdf] Very big device. Trying to use READ CAPACITY(16).
[ 3270.342472] sd 10:0:0:0: [sdf] Attached SCSI disk
[ 3316.392876] Btrfs loaded, crc32c=crc32c-generic, assert=on
[ 3316.445388] JFS: nTxBlock = 8192, nTxLock = 65536
[ 3316.460421] NILFS version 2 loaded
[ 3316.556429] SGI XFS with ACLs, security attributes, no debug enabled
[ 3374.254138] sd 10:0:0:0: [sdf] Very big device. Trying to use READ CAPACITY(16).
[ 3510.522463] sd 10:0:0:0: [sdf] Very big device. Trying to use READ CAPACITY(16).
[ 3510.531856]  sdf: sdf1
[ 3510.908820] sd 10:0:0:0: [sdf] Very big device. Trying to use READ CAPACITY(16).
[ 3510.918388]  sdf: sdf1
[ 3552.593622] usb 2-5: reset high-speed USB device number 4 using ehci-pci

Should I abort gparted, or wait longer?

First (your remark about a tool reporting in Dutch came first): when you want a tool to “talk English” so you can copy/paste its output here, prefix the command with LANG=C, as in

LANG=C fdisk -l

A 6TB file system is rather large, thus it will take some time to write the inodes all over it. BUT OTOH I have no idea whether the “few hours” you are talking about is reasonable.

Just bought a new 8TB external drive a couple of days ago and went through the same procedure. It took maybe 3-4 minutes. If it’s taking more than 10-20 minutes, something is up.

My experience is that when gparted takes a long time, the problem is always that there is something wrong with the medium. In fact, I use it as one of my tests when someone reports problems with a medium.

What happens if you use YaST’s partitioner? Did you create a new GPT? It really shouldn’t take more than a couple of minutes, even through USB2.

Assuming mkfs.ext4 -F -O ^64bt -L 'backup-2' '/dev/sdf1' included a typo and what was meant was

mkfs.ext4 -F -O ^64bit -L 'backup-2' '/dev/sdf1'

could it be that the delay is a combination of:

1. 32-bit ext4 addressing (instead of 48-bit),
2. the JMicron SATA+PATA bridge, and/or
3. the USB 2.0 bus

simply making it take a very long time to do 6TB? I have a Sabrent-brand USB2+eSATA SATA+PATA adapter that is JMicron based, and its performance has always seemed significantly less than stellar on USB2. ISTR seeing specifications for various external USB storage devices report HD size limitations. Maybe that’s at the root of this problem? It might be worth attempting the same process with that HD connected to a motherboard SATA port.
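One rough way to see whether the bridge or the bus is the bottleneck is a sequential-write test with dd against a directory on the mounted disk. This is only a sketch: TARGET_DIR is a placeholder (it defaults to /tmp here so the snippet is self-contained), and in practice USB 2.0 tops out at roughly 35 MB/s.

```shell
# Point TARGET_DIR at a directory on the USB disk; /tmp is only a stand-in.
TARGET_DIR=${TARGET_DIR:-/tmp}
f="$TARGET_DIR/ddtest.bin"

# Write 64 MiB and flush it to the medium; dd reports the effective rate.
# A rate far below ~35 MB/s on USB 2.0 points at the bridge or the disk.
dd if=/dev/zero of="$f" bs=1M count=64 conv=fsync

rm -f "$f"
```

Running the same test with the disk on a motherboard SATA port should then show whether the USB path is what is slow.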

Apparently it takes very long indeed: by accident the USB cable came loose after 7 hours of formatting. Dmesg reported:

[Thu Apr  5 20:04:41 2018] usb 2-5: USB disconnect, device number 4
[Thu Apr  5 20:04:41 2018] sd 10:0:0:0: Device offlined - not ready after error recovery
[Thu Apr  5 20:04:41 2018] sd 10:0:0:0: [sdf] tag#0 FAILED Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
[Thu Apr  5 20:04:41 2018] sd 10:0:0:0: [sdf] tag#0 CDB: Write(16) 8a 00 00 00 00 01 5d 44 18 d8 00 00 00 f0 00 00
[Thu Apr  5 20:04:41 2018] print_req_error: I/O error, dev sdf, sector 5859711192
[Thu Apr  5 20:04:41 2018] Buffer I/O error on dev sdf1, logical block 732463643, lost async page write
[Thu Apr  5 20:04:41 2018] Buffer I/O error on dev sdf1, logical block 732463644, lost async page write
[Thu Apr  5 20:04:41 2018] Buffer I/O error on dev sdf1, logical block 732463645, lost async page write

The size of the disk, also according to dmesg:

[Thu Apr  5 20:05:05 2018] sd 10:0:0:0: Attached scsi generic sg6 type 0
[Thu Apr  5 20:05:05 2018] sd 10:0:0:0: [sdf] Very big device. Trying to use READ CAPACITY(16).
[Thu Apr  5 20:05:05 2018] sd 10:0:0:0: [sdf] 11721045168 512-byte logical blocks: (6.00 TB/5.46 TiB)

so after 7 hours only about 6% was formatted, which means it would take some 112 hours to format the whole disk.

Is this indeed normal?
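The extrapolation above is simple proportionality: if roughly 6.25% took 7 hours, the whole job takes 7 / 0.0625 hours. In shell integer arithmetic (percentage expressed in hundredths of a percent to avoid fractions):

```shell
hours=7
pct=625                                  # 6.25% in hundredths of a percent
total=$(( hours * 10000 / pct ))
echo "estimated total: ${total} hours"   # 112
```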

I started mkfs again, from the CLI, and it has now been happily running for some 16 hours.

I tried to find more information about the 64bit option (switched off here by -O ^64bit, if that is indeed the typo), but it is not clear to me.

Could it be that it should in fact be on with such a large file system?
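For what it’s worth, the ext4 64bit feature widens block numbers from 32 to 64 bits; without it, a filesystem with the default 4 KiB block size can still address 2^32 blocks, i.e. 16 TiB. A quick back-of-the-envelope check with the sector count dmesg reported (hedged: this assumes the default 4 KiB block size):

```shell
sectors=11721045168          # 512-byte logical blocks, from dmesg
blocks=$(( sectors / 8 ))    # 4 KiB ext4 blocks
limit=$(( 1 << 32 ))         # addressable blocks without the 64bit feature

echo "fs blocks: $blocks, 32-bit limit: $limit"
if [ "$blocks" -lt "$limit" ]; then
    echo "64bit feature not required for this disk"
fi
```

So a 6 TB disk fits comfortably without 64bit; the feature only becomes mandatory above 16 TiB.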

Why do you think that about the 6%?

I don’t know that either. It is what gparted reported, but I don’t know why…

Well, try without it. According to others here it should take minutes, not hours, so not much is lost by trying.

BTW, I know that many love gparted as a partitioner and also as a file system manager, but why do you not use YaST > System > Partitioner? After all, this is about openSUSE.
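Incidentally, which feature set a given mkfs.ext4 invocation produces can be checked without touching the real disk, by running it against a sparse image file (the path below is just an illustrative choice) and reading the result back with dumpe2fs:

```shell
# 1 GiB sparse file standing in for the partition; no root needed
truncate -s 1G /tmp/ext4-test.img

# Same options as gparted used; -F lets mkfs accept a plain file
mkfs.ext4 -q -F -O ^64bit -L 'backup-2' /tmp/ext4-test.img

# With ^64bit, the feature line should NOT list "64bit"
dumpe2fs -h /tmp/ext4-test.img 2>/dev/null | grep -i 'Filesystem features'

rm -f /tmp/ext4-test.img
```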

At the USB hiccup, formatting had arrived at logical block 732463643 according to dmesg.
Which is 6.2% of the disk size of 11721045168 logical blocks, if my arithmetic is correct.
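One caveat with that arithmetic (hedged, since dmesg mixes units): the “logical block” in a buffer I/O error is a filesystem block, typically 4 KiB, while the 11721045168 figure counts 512-byte sectors. The same numbers give quite different percentages depending on which unit is assumed:

```shell
block=732463643
sectors_total=11721045168

# Reading "logical block" as a 512-byte sector:
p1=$(( block * 10000 / sectors_total ))        # hundredths of a percent

# Reading it as a 4 KiB filesystem block (8 sectors each):
p2=$(( block * 8 * 10000 / sectors_total ))

echo "as 512B sectors: $p1 (~6.2%), as 4KiB blocks: $p2 (~50%)"
```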

Yes, I’ve mostly used YaST for these things up till now. To be honest, I haven’t the faintest idea why I chose gparted this time…

I’ll start anew from YaST then. I assume it uses mkfs, so I can see what options YaST gives to mkfs.

Did I already say thanks for thinking along with me? No… so hereby: thanks!

Are you sure that it is still doing something? An access light on the disk flickering or rattling noises?

Ah, now I understand what you did. Well, it could be correct (the maths are correct, but the conclusion may or may not be).

Lights are not available in my setup (the disk is bare, connected through a USB-to-SATA adapter). I don’t hear much either, except for the disc spinning.
But since the superblocks could be written, according to both gparted and YaST, and no error messages are shown on the terminal or in the logs, I gather that the disc is accessible.

Maybe I should try another USB-connection, perhaps even on another system.

The partitioning from YaST has been running for an hour now. It’s a pity YaST does not show any form of progress for the operation.

Yes, when it is possible to try another connection, that might be worth the try.

BTW, there is but one superblock and it has copies spread over the file system.
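Indeed; mkfs.ext4’s -n (dry-run) option prints where those backup copies would go without writing anything, so this can be tried safely, even against an image file (the path below is illustrative):

```shell
# Dry run on a 1 GiB sparse file: -n shows what mkfs *would* do,
# including the backup superblock locations, without creating a filesystem.
truncate -s 1G /tmp/sb-demo.img
mkfs.ext4 -n -F /tmp/sb-demo.img | grep -A1 'Superblock backups'
rm -f /tmp/sb-demo.img
```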

On another system, formatting (using YaST) went well.

The backup on the first system through USB seemed to go well at first, but after a while it stopped, giving USB resets in the log.

So I attached the disk through SATA, and then the backup went well.

After that I attached another new disk, which is meant to be a replacement for a RAID-5 disk whose health is deteriorating (although it has not yet been kicked out by mdadm).

mdadm /dev/md0 --add /dev/sdf

went OK, but after issuing

mdadm /dev/md0 --replace /dev/sda

mdadm started rebuilding, but stopped after a while. That is, the process is still there, the disk LED on the system lights continuously, but nothing seems to happen anymore, and

mdadm --detail /dev/md0

hangs forever. Also the system won’t shutdown properly.
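When mdadm --detail hangs, /proc/mdstat is often still readable, since it is served directly by the kernel rather than through ioctls against the possibly stuck device. A hedged sketch for checking rebuild progress:

```shell
# /proc/mdstat shows resync/recovery progress straight from the kernel
# and usually keeps working even when the array itself hangs on I/O.
if [ -r /proc/mdstat ]; then
    cat /proc/mdstat
else
    echo "no md support on this kernel"
fi
```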

Nothing can be seen in the journal or in the dmesg output.

Should I unmount the RAID-device first?

But maybe I should start a new thread for this…

That is the best way to draw attention to your new problem.