After 8 days’ use, tune2fs claims ‘Lifetime writes: 963 GB’. smartctl, assuming the ‘raw’ figure is correct and applying a bit of maths, reports just under 11 GB, a far more realistic figure.
Looking for any ideas on this one. The system is running fine with no problems, but I’d rather be able to do a simple “tune2fs -l /dev/sda1 | grep ‘Lifetime writes:’” than mess around with smartctl, which gives the figure as the number of 512-byte sectors written…
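For reference, since smartctl reports the write attribute as a count of 512-byte sectors, converting it to gigabytes is a single multiplication. A minimal sketch, where the raw sector count is a made-up example value rather than a reading from this drive:

```shell
# Hypothetical raw value of the SMART write attribute, counted in
# 512-byte sectors (substitute the raw figure smartctl reports).
raw_sectors=21300000

# Convert sectors to decimal gigabytes (10^9 bytes).
# Use 1024^3 instead of 1e9 if you prefer GiB.
awk -v s="$raw_sectors" 'BEGIN { printf "%.2f GB\n", s * 512 / 1e9 }'
```

With that example value the result is in the “just under 11 GB” ballpark mentioned above.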
If you’re running ext4, I’m not sure how necessary it is to run tune2fs. Compared with earlier versions of the ext filesystem, ext4 does far more, and does it automatically.
As for your SSD,
I recommend instead that you run smartmontools to get your various statistics more directly. But you may still not get accurate SSD metrics; drive manufacturers can be notorious for either not publishing how to monitor their drives or doing so in some proprietary, non-standard way. So, YMMV.
But at least you can do a search on your drive and/or drive manufacturer plus the metric; you’ll likely find posts from others before you who noticed similar issues and may even have found a solution.
As for tuning for SSD, I do not recommend your method of running a universal trim app as a cron job unless your fs is some exotic type. Typically ext4, which has its own internal trim functionality, is the recommended filesystem for SSDs.
I highly recommend you take a look at the PPT presentation slides I created for openSUSE 12.2; everything related to SSD tuning and disk monitoring is still relevant today for 13.1. The only item that wouldn’t be relevant is the nVidia issue, which is specific to 12.2 (and 12.1). If you implement those tunables, you’ll accomplish practically everything you need to both tune and monitor. If you want to chase a few more possible tiny improvements, there are a variety of additional things you can try, published on the Arch Linux Wiki, but you’d be working awfully hard for no certain additional benefit. https://sites.google.com/site/4techsecrets/slide-presentations-30min
I’m not running tune2fs per se, just using the -l option to (specifically) show lifetime writes.
I recommend instead that you run smartmontools to get your various statistics more directly. But you may still not get accurate SSD metrics; drive manufacturers can be notorious for either not publishing how to monitor their drives or doing so in some proprietary, non-standard way. So, YMMV.
The drive (Corsair LS Series) is not yet in ‘drivedb.h’, and unlike earlier Corsair drives it does not use the SandForce controller. I believe (raw) attribute 241 is correctly returning the lifetime writes; knowing the amount of data initially written, and taking the age of the drive into account, the figure is quite plausible. For the moment I’m taking it as ‘correct’; basically all I wanted was a rough idea of the daily write…
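If attribute 241 is trusted, pulling its raw value out of the `smartctl -A` table and converting it in one step is straightforward. A sketch, assuming the usual attribute-table layout where the raw value is the last field; the device path will of course vary:

```shell
# Read attribute 241 (often labelled Total_LBAs_Written) from the SMART
# attribute table and convert the raw 512-byte-sector count to GB.
# Adjust /dev/sda to your device; smartctl needs root.
smartctl -A /dev/sda |
  awk '$1 == 241 { printf "%.2f GB written\n", $NF * 512 / 1e9 }'
```

If the drive (or its firmware) reports the raw value in some other unit, the awk conversion would need adjusting accordingly.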
As for tuning for SSD, I do not recommend your method of running a universal trim app as a cron job unless your fs is some exotic type. Typically ext4, which has its own internal trim functionality, is the recommended filesystem for SSDs.
From what I’ve read, it seems the best method of applying trim is open to debate. I’m very open to suggestion; I chose the batched method as I know it’s highly unlikely the drive will ever exceed around 20% of its capacity. You’re saying mount with ‘discard’ and leave it all to ext4, is that correct? Out of curiosity only, what’s your rationale behind that?
I highly recommend you take a look at the PPT presentation slides … If you want to chase a few more possible tiny improvements, there are a variety of additional things you can try published at the Arch Linux Wiki …
I’ve only had a quick glance through your presentation, I’ll look in detail later in the day. Thanks for the link, incidentally, my partner has a wormery (if that’s the correct term) at the bottom of our garden - she doesn’t let me go near it…
I’m not after absolute performance, this was just a cost effective way of breathing life into an ageing PC. I’ve changed the I/O scheduling and that’s about it.
My presentation describes in general terms how different SSD manufacturers use different methods (at least 3 known) to keep erased blocks in reserve, ready for writing. You can look up the details of each method; IIRC some information is on Wikipedia and some is in a supplemental (not main) Arch Linux Wiki article.
But if you execute your trim operations in a timely manner, the actual method used by the manufacturer isn’t likely to matter.
Note, the important parameter is timeliness. You just need sufficient erased blocks available when you need to write; it ordinarily doesn’t matter whether you trim weekly, sooner, or later, as long as enough erased blocks are ready.
And that’s the fundamental problem with running a cron job: it runs mechanically at a specified time, regardless of need.
Leaving ext4 to do this, supposedly the system will execute trim operations whenever convenient, as a low-priority process, i.e. whenever your system has little or no activity. So the odds are better that the erase will already have been done, preparing your drive for writes without any noticeable effect on system performance or experience.
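For what it’s worth, handing TRIM over to ext4 as described is just a matter of adding ‘discard’ to the mount options in /etc/fstab. A sketch only, with a made-up UUID and illustrative options; substitute your own device and preferred flags:

```
# /etc/fstab — example entry only; the UUID below is a placeholder.
# 'discard' tells ext4 to issue TRIM commands as blocks are freed.
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /  ext4  defaults,noatime,discard  0  1
```

After editing, a ‘mount -o remount /’ (or a reboot) picks up the new options; ‘mount | grep " / "’ should then show discard among the active options.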
The smartctl database is probably as current as is possible. I don’t know how recently your model was launched, but in general SSD data is YMMV. If the drive has been around for a while, I wouldn’t expect things to change with more time. You can double-check this, though. If your drive’s metrics are publicly available, you’ll probably find them with a common search. You should also be able to discern some things by baselining and then periodically checking the changes. Who knows? Maybe the lifetime-writes figure is actually accurate, and whoever sold you the drive decided not to mention it had been returned or used as a benchmark drive (both very common).
In any case, I’ve had some conversations with various people in the drive industry, and the <more informed> people seem to say that any concern about wearing out drives is likely overblown. You can collect statistics, but the long-term rate of failure will probably be very slight compared with advertised estimates.
Thanks for the informative reply, I’ve now had the opportunity to properly read through your presentation. I’ve taken your advice, added ‘discard’ to the mount options and will let ext4 take care of trim.
With regard to the lifetime writes, as a test I copied a 1.2G ISO image to /tmp, checking the reported figure with smartctl and tune2fs both before and after. I’m confident the SMART data (as read by smartctl) is correct. tune2fs, however, appears somewhat ‘confused’, reporting an increase of 57.6G…
Not really too worried about the life of the drive; barring a catastrophic failure, I’m certain the drive will outlive the PC. I was just curious as to the average amount being written to ‘/’. Not something I’d had reason to think about before.
Used or ex-demo as new … now who would do such a thing? :O rotfl!