Hi
I run openSUSE 12.3 (Tumbleweed repos) and want to make use of an SSD; however, the documentation only covers 11.4, and the alignment info seems unnecessarily scary, which I believe is now outdated. Can anyone link me to a current guide?
Thanks
Did you find your answer?
Yes thanks. Solved.
On 2013-07-10 04:16, fleamour wrote:
>
> Yes thanks. Solved.
Specifically, what (good) document did you find?
--
Cheers / Saludos,
Carlos E. R.
(from 12.3 x86_64 “Dartmouth” at Telcontar)
I’m not sure there really is any good information around, or whether it’s worth worrying about. The device itself decides where things will be stored, and all sorts of things go on inside them. This page gives a reasonable description; the MLC and SLC links can be followed as well. As far as I am aware SLC isn’t available, or not easily at least, and if it is it will be far more expensive.
Solid-state drive - Wikipedia, the free encyclopedia
MLC layouts on the device can’t look like ordinary disc drives. There isn’t much info around on just how many levels are used, either.
The first link that came up in the search mentions that the best file system to use is EXT4, probably because its journal is checksummed. In other words, if the power goes while the journal is being written, the chances are the system will spot this and leave things as they are, rather than trying to continue a write operation using bad data. Here is a description of what journalling is (writing data to a set-aside area of the storage device before committing it):
Journaling file system - Wikipedia, the free encyclopedia
Perhaps EXT4 wear-levels the journal area, but I doubt it, so the SSD has to do that, or we have to hope it does. Equally, it may be better to use an unjournalled file system, but that results in a longer period during which a power failure may corrupt data. EXT4 reduces that period to a minimum, and in theory at least corruption can’t occur: either the checksum is written correctly or it isn’t.
Personally, I feel that sources like this, written by the pros, are a better guide.
As an aside, the comments about Linux writing in block mode are interesting; in other words, the system knows when space isn’t used. The complications come about because of the way these technologies work. Looking at a byte: if it contains 11111111, anything can be written to it. If it contains some 0 bits that don’t match what is to be written, they have to be erased back to all 1s before the write can go ahead. This is at the actual storage level; from the outside the 1s may appear as 0s. Erasing is a slow process, so the chips can erase a large number of cells in one go, which takes exactly the same time as erasing one. It could be that the size of the erased area matches the block size of the device. This aspect explains some of the seemingly odd comments around on the use of trim.
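A toy model of that constraint, nothing to do with any real firmware, just to illustrate why a rewrite forces a big erase:

    /* Toy NAND model: programming can only clear bits (1 -> 0);
       raising a bit back to 1 needs an erase of the whole block. */
    #include <stdbool.h>
    #include <stdio.h>
    #include <string.h>

    #define ERASE_BLOCK 4096          /* erase granularity, far bigger than a byte */
    static unsigned char blk[ERASE_BLOCK];

    static void erase_block(void)     /* slow and coarse: whole block back to 1s */
    {
        memset(blk, 0xFF, sizeof blk);
    }

    static bool program_byte(size_t off, unsigned char v)
    {
        if ((blk[off] & v) != v)      /* would need a 0 -> 1 transition, */
            return false;             /* which only an erase can provide */
        blk[off] &= v;                /* AND models "can only clear bits" */
        return true;
    }

    int main(void)
    {
        erase_block();
        program_byte(0, 0x0F);        /* fine: only clears bits */
        if (!program_byte(0, 0xF0))   /* needs 0 -> 1, so... */
            printf("rewrite forces a %d-byte erase\n", ERASE_BLOCK);
        return 0;
    }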
My feeling is that the best optimisation is to never write to them at all, except when software is installed or updated. That’s not difficult to do at the Linux level (sketch below), and shouldn’t be too difficult at the application level either. If someone wants no spinning disks, the best option is probably two SSDs: one that is never written to and one that is. The comments on MLC data retention times are a bit discouraging, though.

There might also be some merit in returning to compressed discs for application software, an old way of dealing with small discs in 386 times: two partitions, one compressed and holding the application software (it hardly ever needed defragging etc.), the other for data that is always changing. Machines would often load applications faster used this way, as the processor isn’t doing much while an app loads and so has no problem decompressing it; the bottleneck was the rate the data came off the disc. Done on SSDs it might slow things down or speed them up, hard to say, but it would reduce the storage area used, extending the scope for wear levelling compared with the same-sized SSD used normally.
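Keeping the write-free one write-free is mostly an fstab matter; something like this (device names and mount points made up, adjust to taste):

    /dev/sdb1  /data/static  ext4  ro,noatime        0 2
    /dev/sda1  /data/work    ext4  defaults,noatime  0 2

with a “mount -o remount,rw /data/static” before an install or update, and “remount,ro” afterwards.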
Its goal is a commercial one; that’s my problem with it. I tried and tested the approach some years ago, then stopped looking for “special treatment” of SSDs. I use “discard” and “noatime” in fstab, nothing else. I have used SSDs for years now, trashing them with thousands of files every day, and they all work fine (the oldest must be over 6 years old now).
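For anyone wanting the concrete form, that’s just the options column in /etc/fstab; a line like this (device and mount point made up):

    /dev/sda2  /home  ext4  discard,noatime  0 2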
I’ve wondered whether discard is the best option, but doesn’t that happen immediately, the moment a file is deleted? On the other hand, if the drives’ wear-levelling behaviour isn’t too good, trim might be better issued manually, say once the used area gets up over 80%; probably over 95% would be OK too at many sizes, or even more. The Red Hat link hints at this. The kernel seems to maintain this info, or maybe the device does; pass. It’s basically the sum of used and deleted files. Maybe there is a command that keeps track of this aspect.
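There is: fstrim (in util-linux) issues the trim in a batch against a mounted filesystem whenever you choose to run it, rather than on every delete like the discard option, and with -v it reports how much space it told the device about. A minimal example (mount point made up), run as root or from cron at whatever interval suits:

    fstrim -v /home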
lol! My problem is that I have written very low-level code to handle these sorts of devices in ECUs many times, yet there is so little info available on PC SSDs that it’s hard to make a judgement about anything really, other than that they do wear out.
Thanks, your opinion is appreciated.