Got Another Idiotic Problem - GParted Won't Format a Flash Drive Because KDE Mounts It Too Soon

OK, so I have a Diesel 8GB flash drive. I used Studio ImageWriter to write Memtest86 to the drive, then ran a memory check to help debug my problem with system freezes. Now I want to reformat the flash drive with an empty EXT4 filesystem.

  1. I plug the drive in. KDE mounts it automatically.
  2. I open GParted and select the drive.
  3. I tell GParted to unmount it.
  4. I tell GParted to delete the existing FAT16 partition, click Apply, and GParted does it (or claims to).
  5. I refresh devices. The flash drive shows as unformatted.
  6. I tell GParted to create a new primary partition formatted as EXT4, taking all the space except 1MB preceding the partition. I set both the partition name and the label to Diesel8GB.
  7. I click Apply.
  8. GParted creates the partition, but when it tries to run mkfs to create the file system, it reports that the drive is mounted and mkfs can't create the file system.
  9. Sure enough, when GParted creates the partition, the KDE Most Recent Device notification pops up showing that the drive has indeed been mounted - which is obviously why mkfs can't create the file system.
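The same steps can be done from the command line, which sidesteps the desktop automounter. This is only a sketch: `/dev/sdX` is a placeholder (double-check the real device with `lsblk` first), and the script defaults to a dry run that just prints the commands it would execute.

```shell
#!/bin/sh
# Sketch of the delete-table / repartition / mkfs flow from the shell.
# /dev/sdX is a PLACEHOLDER - verify the real device with `lsblk`.
# DRY_RUN=1 (the default) only prints the commands.
DRY_RUN=${DRY_RUN:-1}
DEV=${DEV:-/dev/sdX}
PLAN=""
run() {
    echo "+ $*"
    PLAN="$PLAN $1"
    if [ "$DRY_RUN" = 0 ]; then "$@"; fi
}
run udisksctl unmount -b "${DEV}1"                    # detach whatever KDE mounted
run wipefs -a "$DEV"                                  # erase the old FAT16 signatures and table
run parted -s "$DEV" mklabel gpt                      # fresh partition table
run parted -s "$DEV" mkpart Diesel8GB ext4 1MiB 100%  # 1 MiB gap, rest of the stick
run mkfs.ext4 -L Diesel8GB "${DEV}1"                  # the step KDE's automount interrupts
```

`udisksctl` comes with udisks2; the `1MiB` start matches the 1MB of space GParted leaves before the partition.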

I go check the KDE Removable Storage settings. Sure enough, it says Diesel8 is set to automount on being attached - which I never selected. So I uncheck that box and try again - same result, except this time the box stays unchecked. I didn't reboot or log out, so maybe I need to do that?

Worse, if I then unmount the flash drive, exit GParted, reinsert the flash drive, open GParted, and look at the device - it's still showing the FAT16 partition that GParted supposedly already deleted! And this happens even if I reboot between attempts.

What the hell is going on?

I’m going to try to use a GParted DVD I have and format the **** thing that way. But I’d like some guesses as to why this happened. In my opinion, GParted should delete the partition when told; then, after I reinsert the drive and restart or refresh GParted and it shows no partition, it should create the new partition without KDE mounting the drive in the middle of the operation.

I am not having a good day.

Not so fast. Make sure you run all of the tests of **MemTest86 v8 Free Edition** several times. Memory may pass all tests except the Hammer Test, the last and most stressful one. For a conclusive outcome you may need to run the tests overnight for a week.

From memtest/readme.txt:

Reclaiming disk space from the USB drive
You may have noticed that the MemTest86 USB drive you have created may have lost some disk space, and normal formatting will not recover the lost space. For example, this can happen when a UFD contains multiple partitions, such as the MemTest86 image. Formatting will not span across multiple partitions/volumes. To erase the partition records and reclaim the whole disk, you will need to zero the MBR.
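The readme's "zero the MBR" step boils down to overwriting the first 512-byte sector. Here is that step demonstrated on a throwaway image file rather than a live device; on the real stick it would be `dd if=/dev/zero of=/dev/sdX bs=512 count=1` (with `/dev/sdX` a placeholder - run that with great care).

```shell
#!/bin/sh
# Demonstrate zeroing the MBR on a disposable image file, not a device.
IMG=fake-stick.img
truncate -s 1M "$IMG"                                      # stand-in for the drive
printf 'OLD-PARTITION-RECORDS' | dd of="$IMG" conv=notrunc 2>/dev/null
dd if=/dev/zero of="$IMG" bs=512 count=1 conv=notrunc 2>/dev/null
cmp -s -n 512 "$IMG" /dev/zero && echo "first sector zeroed"
```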

OK, I rebooted with a Ventoy-created flash drive with GParted on it. Plugged in the offending flash drive. Opened GParted. Selected the drive. It no longer showed the FAT16 partition type (I had changed the table to GPT for no particular reason); it showed an unknown file system with the partition label I had created, as expected. Told it to format with ext4. Applied. Done. No problem.

Why the hell can’t GParted on KDE do that? Why is KDE mounting the drive on creation of the partition, before there is even a file system on it? I can’t recall having had this problem in the past.

The KDE Removable Storage settings say:

  1. Enable Automatic Mounting of Removable Media: Checked.
  2. Only automatically mount removable media that has been manually mounted before: Unchecked
  3. Mount all removable media at login: Checked (that’s for the SSD I have hanging off the back that I want mounted all the time.)
  4. Automatically mount removable media when attached: Checked
  5. Device Overrides:
    a) The SSD drive has both Automount on Login and Automount on Attach checked.
    b) All the internal hard drives are unchecked (obviously)
    c) Of the Disconnected Devices, all the external backup drives in my docking stations have only Automount on Attach checked.
    d) The SSD also has the Automount on Attach checked
    e) I notice the Ventoy flash drive I just used which is no longer inserted in a port has the Automount on Attach checked - which I did not ask for, so I unchecked it again.

It seems to me that merely checking Automatically Mount Removable Media When Attached means that as soon as an unmounted device has a partition created by GParted, the system immediately mounts it - right in the middle of GParted running mkfs to set up the file system. This looks like a bug, or perhaps an unanticipated interaction between KDE and GParted. I also don’t understand why the Device Overrides settings automatically check Automount on Attach without asking.
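One possible workaround (my assumption, not something GParted documents): udisks2 honors a `UDISKS_IGNORE` flag set from a udev rule, so a specific stick can be hidden from the automounter entirely while you repartition it. A sketch, matching on a hypothetical vendor string - check the real values with `lsblk -o NAME,VENDOR,MODEL` first:

```
# /etc/udev/rules.d/99-ignore-diesel.rules  (hypothetical rule file name)
# Tell udisks2 (and therefore KDE's automounter) to ignore this stick.
SUBSYSTEM=="block", ENV{ID_VENDOR}=="Diesel", ENV{UDISKS_IGNORE}="1"
```

Remove the rule (and run `udevadm control --reload`) when you want normal automounting back.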

Anyway, I got the job done. But I’d like to know why this happens so I can file that away for next time.

I did run the entire suite of tests - including the Hammer Test - the normal four passes, which is supposed to be sufficient to detect the vast majority of errors. I’m quite obviously not going to run the test for an entire week and have my machine down that whole time. David at PassMark said this in a post:

  • The chances of finding an error in the first couple of hours of testing are much higher than finding an error 12 or 24 hours into a test.
  • If you are pressed for time, do a single pass of the test. If you have more time, do 2 to 4 passes. The probability of finding new errors after the 4th pass is pretty low.

As for zeroing the MBR, I recreated the partition table as GPT, but I didn’t explicitly zero the MBR as a separate action. So perhaps that was part of the problem. But I was only reformatting the one FAT16 partition, which is the only one GParted showed, apart from 1MB of unformatted space.

You may run the test overnight while sleeping. When people call me, I insist that they run the test for a prolonged period of time.

Yepp, I do it the same way; the question is: based on what scientific knowledge?

On a machine running with bad memory anything can happen at any time. Only statistics can tell what is going on. More repetitions warrant better statistics.

More data, better info. A common misconception.

Tell us, how often did you need more than 2-3 cycles to find defective RAM?

In the 2-3 cases I had over the last 20 years, it was always detected within the first 3-4 rounds of testing. Just saying…

Trouble with RAM never occurred with new modules. I never ran memtest86 on new ones because all of them worked with new hardware. The first problems I encountered were with a P54C Pentium 90 MHz and 16 MB SIMM PS/2 70ns. When new, everything worked fine, but problems occurred after half a year, mostly during POST and compilation. When I brought the machine to the vendor, the POST problems were gone, but they occurred again at home. See also The SIG11 problem. The vendor readily replaced the processor (FDIV bug), but refused to replace the RAM modules or provide different modules for testing.

The i3-4130 worked flawlessly for several years. All of a sudden, inadvertently touching the cables on the back of the case with my shoes would reboot the machine. It had happened before, but never triggered a reboot. Removing all components but board, processor and RAM made things even worse. Eventually it wouldn’t even boot. When swapping RAM modules I found that slot #3 had gone bad. Every other configuration with one or two modules would work. The machine has been working flawlessly again with slots #1 and #4 for several thousand hours.

Another i3-4130 had run fine since 2017 but rebooted spontaneously in spring 2020. It worked well again until autumn.

Only prolonged operation can tell what is going on. Running memtest86 repeatedly can speed up the process.

The question was/is: how many cycles of memtest are needed to reliably detect faulty RAM? I don’t see anything in your reply supporting the theory that more than 3-4 cycles are needed.

I suppose I should probably bring up the point that none of this is relevant to the question I asked in this thread.

I also think they drift off-topic.

I did not try to answer because I never use gparted (I either use YaST or the command line).

If I understand correctly, you only want to create one partition on the device taking as much space as is available (I do not understand why you then want a partition at all, but you may have your reasons) and then create an ext4 file system on it.

In any case I would switch off any “automatic” mounting by KDE for the user that is logged in when you are doing the action, and I would then probably log out and log in again.
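For what it’s worth, KDE’s automounter settings live in a plain config file, so switching them off for the logged-in user can be done outside the GUI. This is a sketch assuming Plasma 5’s `~/.config/kded_device_automounterrc`; the file name and key names may differ between Plasma versions:

```
# ~/.config/kded_device_automounterrc  (assumed location, Plasma 5)
[General]
AutomountEnabled=false
AutomountOnLogin=false
AutomountOnPlugin=false
```

Restarting kded (or logging out and back in, as suggested above) should make the change take effect.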

It was never meant to be related to the top post, but to this thread: Freezes can be caused by memory problems.

Some users claimed they had checked their RAM on machines I had assembled and found no error. When problems persisted we agreed they would bring their machines. Some indeed had errors, some didn’t have any.

Once I even observed a dozen guys surrounding a computer and desperately trying to boot the machine which worked the day before but wouldn’t the next day. I stopped that by fully inserting the power cable of the monitor.

Errors may be detected by running memtest86 once. Sometimes you need to run it twice or more. Sometimes they aren’t detected at all.

When you think you have fixed a problem think twice and run memtest86 several times. It doesn’t hurt and sometimes an error shows up again when trying harder.

Then please post in the relevant thread. I think the OP here is correct when he complains that the discussion about memory failures is taking place here, where he asked about KDE automatically mounting a device when he does not want that automatic mount.

So please do not continue this discussion about memory problems here.

Yes, I will probably have to do that, or do as I did this last time and run GParted from a live USB. I just wonder why this situation arises. I guess KDE works with udev or something that runs at a lower level than a user application like GParted, so the automount setting takes precedence over GParted’s ongoing operations. Maybe GParted needs a way to signal to the desktop/OS not to touch a drive it’s working on. As I suggested earlier, this looks like an unintended interaction between two levels of the system. What should happen is that the partition gets created, then the file system gets created, and only then does the system mount the drive - which is the way I’ve usually seen it happen when I use GParted to format a new hard drive. But a new hard drive isn’t described to the system as removable or automountable, so it doesn’t happen in that instance.
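The guard a partitioning tool effectively needs is to re-check the kernel’s mount table immediately before running mkfs, because the automounter can grab the partition the moment it is created. A minimal sketch of that check, with `/dev/sdX1` as a placeholder device name:

```shell
#!/bin/sh
# Crude "is this device mounted?" check against the kernel's mount table.
is_mounted() { grep -q "^$1 " /proc/mounts; }

DEV=${DEV:-/dev/sdX1}   # placeholder partition name
if is_mounted "$DEV"; then
    echo "$DEV is mounted; unmount first: udisksctl unmount -b $DEV"
else
    echo "$DEV is not mounted; safe to run mkfs.ext4"
fi
```

Of course this only narrows the race window rather than closing it; really fixing it would need the tool and the automounter to coordinate.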

I guess there’s nothing really that can be done except to remember this next time.