Using an SSD Hard Drive with openSUSE and the TRIM Command

I have just installed a new SSD hard drive and I wanted to share my experience with you as to the steps I took to get it working properly. One issue that comes up is support for the TRIM command. Here is a description of just what TRIM means:

TRIM - Wikipedia, the free encyclopedia

And here are my recommendations to allow full TRIM support in Linux with your SSD hard drive.

  1. You need to be using kernel version 2.6.33 or higher; TRIM does work just fine with kernel 2.6.37 in openSUSE 11.4.

I have used it with openSUSE 11.4 RC2 without any problems and expect it to work with the Final release.
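
If you are unsure what you are running, you can check the kernel version from a terminal:

uname -r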

  2. There seems to be a limited number of supported Linux partition types. So, use EXT4 for now, as it does work.

I tested the SSD using the EXT4 Linux partition on an 80 GB Intel X25-M SATA drive without any problems.

  3. Changes need to be made in your fstab entry to properly support the TRIM command. Here is the line I am using in my fstab file to mount the SSD drive as “/”:
/dev/disk/by-id/ata-INTEL_SSDSA2M080G2GC_CVPO037603KN080JGN-part1 /  ext4   defaults,errors=remount-ro,noatime,discard   0 1
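
Before relying on the discard option, you can confirm that the drive itself advertises TRIM support. As root (the device name here is just an example, substitute your own SSD):

hdparm -I /dev/sda | grep -i trim

A TRIM-capable drive reports a line such as “Data Set Management TRIM supported”.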

  4. I have found a method to test whether the TRIM command is properly working. So, does TRIM really work for you or not? Here is the procedure I found that shows whether it works.

My experiments with Linux: Enterprise Kernel 6 has SSD TRIM support
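
In essence, the procedure from that link writes a file, records where it lives on the disk, deletes it, and then reads those raw sectors back; if they come back as all zeroes, the drive really discarded them. Here is a rough sketch, assuming hdparm 9.x and the SSD mounted as root on /dev/sda (the file name is arbitrary and <begin_LBA> stands for the extent address that --fibmap reports):

dd if=/dev/urandom of=/trimtest.bin bs=1M count=1
sync
hdparm --fibmap /trimtest.bin
rm /trimtest.bin
sync
sleep 120
hdparm --read-sector <begin_LBA> /dev/sda

The sleep gives the drive time to process the TRIM in the background before you read the sectors back.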

When using the SSD as the main hard drive, but excluding the /home folder, I found that openSUSE 11.4 was able to boot up from a cold start in under 30 seconds, including PC boot-up, selecting the Linux OS to load and the time for me to log in. I was selecting the OS and logging in as quickly as possible. Another trick I used was to reorganize my fstab file so that the SSD was first and my /home was second, and to place all partitions from the same drive together in order, low to high. This organization of the fstab file actually made a difference in how quickly KDE loaded and played the music at startup, my signal that all is loaded.

I did find suggestions to move the /tmp files off of the SSD to reduce wear, since the drive can only take so many writes, but I chose not to do this because it caused some odd problems when compiling kernels. I also read some suggestions to align partitions on certain boundaries, but those instructions were not clear enough for me to put into action, so I did not use them. Here is a speed check of the new hard drive. You can run this yourself as root at a terminal prompt to check your hard disk speed:

hdparm -t /dev/sdb

/dev/sdb:
 Timing buffered disk reads: 744 MB in  3.01 seconds = 247.51 MB/sec

I would request that you post any comments about your SSD drive if you have one, and whether the steps you took differed from mine.

Thank You,

i suspect that the layout of fstab takes advantage of the superb sequential read rates of the SSD.

great information, i have these in my future also, not really for the performance gain, but drive failure is my number one hardware headache.

Thanks for your comments and kind words, j_xavier. If you should get an SSD, please come back and let us know your results. As you suggest, SSDs are supposed to be very reliable. For me that remains to be seen, but they are fast, and very expensive for what you get. The Intel drive I got is a 2.5" SATA drive and would pop right into a laptop, and one could get by, even with Windows, with that much space (80 GB) if you needed the speed and the alleged reliability. I only say alleged because I have not yet experienced this reliability myself, but we will see. Just this year I had an internal 160 GB hard drive fail on a two-year-old Dell laptop, seemingly after using it outside for work one day in 24 degrees F weather, but I am not really sure what happened.

Thank You,

fyi, i mentioned this in a thread about disk wiping techniques in the “Looking For” category, but it was a surprise to me to read this:

Study: Nearly Impossible to Delete Data on SSDs

makes perfect sense after you consider the device, but it’s obvious that unrealized security problems might arise.

On 03/05/2011 04:36 AM, jdmcdaniel3 wrote:
>
> seemingly after using it in 24 degrees F
> weather outside one day for work, but not really sure what happened.

wow! did it get to 24 F in Austin??
that must’a shutdown the whole town!


DenverD
CAVEAT: http://is.gd/bpoMD
[NNTP posted w/openSUSE 11.3, KDE4.5.5, Thunderbird3.0.11, nVidia
173.14.28 3D, Athlon 64 3000+]
“It is far easier to read, understand and follow the instructions than
to undo the problems caused by not.” DD 23 Jan 11

The cold did not shut us down, but the snow on the last day of the four-day below-freezing spell shut things down for almost a whole day. On the second day of the cold weather, when it never got above 24 F all day and started at 17 F I think, I got sent to a Time Warner data center to start up the HVAC control system and computer. There is no heating (or water or a bathroom) in these data centers, and with no other computers there yet to give off warmth, nothing wanted to run. It was down to 38 F inside the building. This is where my laptop PC started acting kind of funny before the hard drive went bad. Most of my equipment is located outside of the building. I did discover that even though the parking lot was not paved, “mud” is no problem when it is 24 F outside. lol!

Thank You,

Hi
There has been discussion on the mailing lists about TRIM support, which may be worthwhile reading:
[opensuse-factory] SSD discard support? hdparm’s wiper.sh? (http://lists.opensuse.org/opensuse-factory/2010-04/msg00406.html)
This one is about a firmware upgrade to Intel SSDs:
[opensuse-factory] Re: weird problem with SSD drive… (http://lists.opensuse.org/opensuse-factory/2011-02/msg00729.html)
[opensuse-factory] /sbin/fstrim: /home: FITRIM ioctl failed: Operation n (http://lists.opensuse.org/opensuse-factory/2011-02/msg00749.html)
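
For reference, the fstrim tool from that last link ships with newer versions of util-linux; it trims all free blocks of a mounted file system on demand, as a batch alternative to the continuous discard mount option:

fstrim -v /home

The -v flag reports how many bytes were discarded.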

Hello malcolmlewis. The first link produced some verbiage about TRIM not being effective, while the next link produced this text:

wiper.sh is included already in 11.2:
rpm -ql hdparm | grep wiper
/sbin/wiper.sh
/usr/share/doc/packages/hdparm/README.wiper

…to see it go in as the openSUSE 11.3 methodology for calling discard
on a SSD.

That will mean including the script, testing it as part of openSUSE,
and getting it in the nightly cron scripts or something.

in README.wiper I see a lot of things like:
“…
This script may be EXTREMELY HAZARDOUS TO YOUR DATA.

btrfs – DO NOT USE !!!
…”

so creating some default crontab rule with wiper.sh for openSUSE doesn’t look
safe to me…
I am thinking I will stick with what I said about the TRIM function for now, until such time as something better comes down the pike, or kernel. While I do not subscribe to this mailing list, I did do a lot of searching on the TRIM subject, and this is the very first time I have seen such a possibly unsubstantiated claim made about the implementation of the TRIM function in the Linux kernel. There would surely be someone here that works with the kernel who could tell us what this means, but it looks like balderdash (i.e. BS) to me so far.

Your second link, concerning the updating of SSD firmware, is a VERY GOOD SUGGESTION and should be heeded by anyone reading this thread. While I failed to mention this fact, I did, right off the bat, get a COMRESET kernel error on this Intel SSD until I did the latest firmware update. The Intel SSD firmware was dated October 2010 and I found firmware from February 2011 that could be loaded. Once the update was done, using a boot disk which was emulating a floppy disk, the COMRESET error at boot up went away.

As for the last link, I had never even heard of the program /sbin/fstrim, so I am not sure why I would use it, since TRIM is being handled by the kernel and the EXT4 file system. It does not seem useful to me. And, I want to add that I do appreciate ALL links, information and comments, malcolmlewis. Everything of interest on this subject should be put into a message here. Thank you so much for your attention on this subject.

Thank You,

Hi
I think the other one is using 4k blocks as well, have you investigated
that?

gregkh on the mailing lists is the kernel maintainer; I suggest a post
on the devel mailing list for any further insights?

The SSD I’m looking at is an OCZ 60GB Vertex 2 (285MB/sec) for my
netbook. If I was looking at the desktop it would have to be a PCIe one,
or one of those running over 500MB/sec, but I also think some of the
SATA 3.5" devices are up there too…

The speeds you’re seeing I can get with SATA3 RAID0 (striping). Now a
couple of your SSDs in RAID0 would rock along :)


Cheers Malcolm °¿° (Linux Counter #276890)
SUSE Linux Enterprise Desktop 11 (x86_64) Kernel 2.6.32.27-0.2-default
up 12 days 19:16, 2 users, load average: 0.04, 0.07, 0.08
GPU GeForce 8600 GTS Silent - Driver Version: 260.19.26

As far as the speed goes, this is just a 3 Gb/s SATA device. When the next generation comes out at 6 Gb/s, we should really see some speed. I will say that I have one 10K RPM 600 GB hard drive, and while it is the fastest physical hard drive I own, the SSD is still 40% faster. That is no slouch. As for RAID, you can double anything using that method, maybe even quadruple it. I saw one powerhouse PC that used four 40 GB SSDs together in RAID. I also saw the SSD PCIe cards, but it is all really expensive. Even RAIDing two fast hard drives with a good card is not going to be cheap, though the space will be greater. What I see here is an SSD, at 80 GB and $200, large enough to get by and able to go right into a laptop if you wanted, without breaking the bank.

As for the kernel and TRIM, I wonder if Larry Finger might make a comment here, given his expertise on the subject? It would surely be appreciated. I did look at the partitioning issues, but decided to go for the standard setup and see what one gets. Really, what is the average Joe using Linux going to do with an SSD? I was hoping to show that it was not all that hard to set one up to work and what you might gain from it.

Thank You,

On 03/05/2011 10:36 PM, jdmcdaniel3 wrote:
>
> even though the parking lot was not paved, “mud” is no problem when it
> is 24 F outside. lol!

yep!!
i have several friends in Austin and am surprised none mentioned this
extremely cold weather (for there)…


DenverD
CAVEAT: http://is.gd/bpoMD
[NNTP posted w/openSUSE 11.3, KDE4.5.5, Thunderbird3.0.11, nVidia
173.14.28 3D, Athlon 64 3000+]
“It is far easier to read, understand and follow the instructions than
to undo the problems caused by not.” DD 23 Jan 11

I just noticed an error in my original post and should make a correction. For some reason I had a period before the noatime option. I have changed it to a comma so the line looks like this.

/dev/disk/by-id/ata-INTEL_SSDSA2M080G2GC_CVPO037603KN080JGN-part1 /  ext4   defaults,errors=remount-ro,noatime,discard   0 1

I looked back at my fstab and no period is there. I am not sure what happened, but I did also remove some extra spaces that are not in my actual fstab entry. I am sorry for any inconvenience this error may have caused.

Thank You,

I edited your 1st post. Please advise if I made a mistake in the edit.

The edit won’t help NNTP users, but it should help WEB based users.

Thank you for your MANY contributions.

Thanks for your help oldcpu. I also greatly appreciate your efforts in the forum. I don’t know how you do it, but it is worth a million to the openSUSE forums.

Thank You,

I installed an SSD in my laptop a few days ago and I can say that the long and rather painful procedure to make sure you correctly use an SSD on Linux is worth every second you give it from your life! Your machine transforms into an “everything-instant” beast!

I use my laptop as my daily workhorse; it’s my main tool for my job, so it’s pretty important that it stays operational and in good shape. This also means I don’t want to reinstall the OS (openSUSE of course!) with every slight and unimportant (:P) system change, like swapping my HDD for an SSD. I wanted to transfer my existing system onto the new disk with as little fuss and downtime as possible.

So, here’s what I did to transfer my openSUSE 11.4 installation to the SSD, enjoy!
DISCLAIMER: I give NO guarantees that the following procedure will work for every single situation and requirement!! Please view this procedure as an ADVISORY, not as a how-to for blindly copying and pasting commands into a terminal!

Setup:
Machine: Dell Latitude E6510 (Intel QM57 Express chipset SATA-2)
Distro: openSUSE 11.4
Existing Disk: 320GB WD Scorpion Black (7200 rpm)
SSD: OCZ Vertex 2 120GB (OCZSSD2-2VTXE120G)

Step One: Install drive as secondary
I installed my SSD in a disk enclosure and plugged it into the e-SATA port of my laptop. I guess plain USB enclosures should work just fine as well.
Right before plugging in the enclosure, I started a ‘tailf’ on /var/log/messages to watch the detection:

vgdell:~ # tailf /var/log/messages

That showed the system pick up the drive, bring the SATA link up at 3 Gbps and register it as sdb.

Step Two: Access drive and examine details
I wanted to make sure the drive had the latest firmware, so I used hdparm:

vgdell:~ # hdparm -I /dev/sdb

This showed me the drive had the latest firmware at the time.

Step Three: Partitioning
So much Internet ink has been used for SSD partitioning and alignment that if it were true ink, we’d all be blue by now! It all boils down to making partitions start at round multiples of the SSD Erase Block. It seems this is not something SSD makers like to disclose and ranges from 128K to 512K or even 1024K. I opted to use 1024K, as this ensures proper alignment for the smaller sizes as well (1024 = 2x512 = 4x256 = 8x128).
Various resources on the subject instruct you to use fdisk with a specific disk geometry (values for heads and sectors per track), but these are not necessary, at least not in a Linux-only environment, since Linux uses ONLY LBA. If you will be multi-booting with Windows, install that cr@p first, then Linux. This will make sure Windows won’t freak out over any Linux-chosen heads and sectors values.
To determine the starting sectors of your partitions you need two things, in sectors: the alignment unit and your partitions’ sizes. Fire-up your calculator!
Alignment Unit (sectors) = 1024 * 1024 (bytes) / 512 (bytes / sector) = 2048 (sectors)
Here’s my partitioning scheme:

part type     label size
========================
sda1 primary  boot  100M
sda2 extended -     rest
sda5 logical  swap    2G
sda6 logical  /      20G
sda7 logical  /var    2G
sda8 logical  /home rest

To determine the sizes in sectors, just convert to bytes and divide by 512, so the above becomes:

part type     label size (sectors)
==================================
sda1 primary  boot          204800
sda2 extended -               rest
sda5 logical  swap         4194304
sda6 logical  /           41943040
sda7 logical  /var         4194304
sda8 logical  /home           rest
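
If you want to sanity-check one of these numbers, shell arithmetic does the conversion. For the 20G root partition:

vgdell:~ # echo $(( 20 * 1024 * 1024 * 1024 / 512 ))
41943040

which matches the table above.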

Now, the really good part about alignment on 1024K is that with partition sizes exactly divisible by 1024K, everything falls into the correct place naturally! With every new partition you create, fdisk will default the starting sector to an exact multiple of 2048 sectors (1024K), which is exactly what we want! How much better can it get?
The only thing you have to be careful with is the calculation of the last sector of each partition, which should be:
Last Sector = Start Sector + Size Sectors - 1
So, fire up fdisk:
NOTE: I’m using another disk for this run of fdisk, so total size and sector count will differ from the 120G OCZ.

vgdell:~ # fdisk -c -u /dev/sdb

Create a new partition table:

Command (m for help): o
Building a new DOS disklabel with disk identifier 0x65fa13df.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

Create a primary partition of 100M:

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4, default 1): 1
First sector (2048-625142447, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-625142447, default 625142447): 206847

You can see how fdisk nicely selected 2048 as the first sector of the first partition. There, nicely aligned without lifting a finger! The last sector should be 2048 + 204800 - 1 = 206847.
Now, let’s create the big extended partition to host all others:

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
e
Partition number (1-4, default 2): 
Using default value 2
First sector (206848-625142447, default 206848): 
Using default value 206848
Last sector, +sectors or +size{K,M,G} (206848-625142447, default 625142447): 
Using default value 625142447

Move on with the swap partition:

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First sector (208896-625142447, default 208896): 
Using default value 208896
Last sector, +sectors or +size{K,M,G} (208896-625142447, default 625142447): 4403199

You can see how fdisk nicely selected sector 208896, exactly divisible by 2048, as the first for the partition. Last sector is again 208896 + 4194304 - 1 = 4403199. Let’s move on.
Root partition:

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First sector (4405248-625142447, default 4405248): 
Using default value 4405248
Last sector, +sectors or +size{K,M,G} (4405248-625142447, default 625142447): 46348287

Var partition:

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First sector (46350336-625142447, default 46350336): 
Using default value 46350336
Last sector, +sectors or +size{K,M,G} (46350336-625142447, default 625142447): 50544639

Home partition:

Command (m for help): n
Command action
   l   logical (5 or over)
   p   primary partition (1-4)
l
First sector (50546688-625142447, default 50546688): 
Using default value 50546688
Last sector, +sectors or +size{K,M,G} (50546688-625142447, default 625142447): 
Using default value 625142447

Let’s see what we have now:

Command (m for help): p

Disk /dev/sdb: 320.1 GB, 320072933376 bytes
30 heads, 54 sectors/track, 385890 cylinders, total 625142448 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x65fa13df

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            2048      206847      102400   83  Linux
/dev/sdb2          206848   625142447   312467800    5  Extended
/dev/sdb5          208896     4403199     2097152   82  Linux swap / Solaris
/dev/sdb6         4405248    46348287    20971520   83  Linux
/dev/sdb7        46350336    50544639     2097152   83  Linux
/dev/sdb8        50546688   625142447   287297880   83  Linux

Make sure each and every Start sector is exactly divisible by 2048 and your partitions should be aligned just nicely. Write your changes to disk:

Command (m for help): w

Make sure the OS picks up the changes and that you can see the partitions in /dev. This pretty much sums it up for partitioning.
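
One way to do both, assuming the drive is still attached as sdb (partprobe ships with the parted package, and the kernel exposes each partition’s start sector in sysfs):

vgdell:~ # partprobe /dev/sdb
vgdell:~ # for p in /sys/block/sdb/sdb*/start; do echo "$p: $(cat $p) (remainder $(( $(cat $p) % 2048 )))"; done

Every remainder should come out 0 if the partitions sit on the 2048-sector boundaries.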

Step Four: File systems
There really is one way to go with file systems on SSDs under Linux right now, ext4. Btrfs can be another option, but I’ve never really tried it and don’t know how stable and safe it can be. In addition, I’m not sure of the performance benefits it may have over ext4, especially for desktop usage. But that’s a different story.
The ext* file systems were not designed with SSDs in mind, so they cannot really be optimized for them, but they do provide a way to hint at how the underlying medium should be treated: stride and stripe-width. Both are meant to be used for RAID configurations, but they also apply to SSDs, especially stripe-width. In short: stripe-width should be the same as the SSD erase block. If you read mke2fs’s man page, it says stripe-width is expressed as a count of file system blocks. ext4 uses a 4K block by default, so our 1M alignment unit is 256 file system blocks. For smaller partitions, like our 100M boot partition here, ext4 uses a 1K file system block, so in that case stripe-width should be 1024. I’m not sure how much stride affects SSD performance though… mke2fs’s man page says it mainly works at mkfs-time and that it may affect the allocator in normal use. I arbitrarily chose half an erase block, so for 4K file system blocks stride should be 128 and for 1K blocks it should be 512.
So, go ahead and create the file systems:

vgdell:~ # mkfs.ext4 -L boot -E stride=512,stripe-width=1024 -v /dev/sdb1

Note that on your SSD the line “Calling BLKDISCARD…” should report success, indicating a TRIM over the entire partition. An example of a file system with 4K blocks:

vgdell:~ # mkfs.ext4 -L root -E stride=128,stripe-width=256 -v /dev/sdb6
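
If you want to double-check that these values took effect, they are recorded in the file system superblock; a quick look with dumpe2fs (part of e2fsprogs):

vgdell:~ # dumpe2fs -h /dev/sdb6 | grep -i 'stride\|stripe'

For a file system created as above, this should report a RAID stride of 128 and a RAID stripe width of 256.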

Step Five: Copy system
Now, log out of your graphical session and press Ctrl-Alt-F1 to go to the terminal, log in as root and go into single user mode:

vgdell:~ # init 1

This ensures that most of your file system will not be touched, save for the occasional write to /var. You will now have to mount your root partition ("/") and create the skeleton of your file system. This process heavily depends on your partitioning scheme, so I will just show what I did for mine. The basic idea is to create a directory for each partition and mount the partition to it.

vgdell:~ # mount -o noatime,discard /dev/sdb6 /mnt
vgdell:~ # mkdir /mnt/boot
vgdell:~ # mount -o noatime,discard /dev/sdb1 /mnt/boot
vgdell:~ # mkdir /mnt/home
vgdell:~ # mount -o noatime,discard /dev/sdb8 /mnt/home
vgdell:~ # mkdir /mnt/var
vgdell:~ # mount -o noatime,discard /dev/sdb7 /mnt/var

You now basically need to copy your file system into the new one. I’m a control freak, so I went ahead and copied each top-level directory with “cp -a”. DO NOT COPY pseudo file systems and special directories dev, media, mnt, proc, selinux, sys and tmp:

vgdell:~ # cp -a /bin /mnt
vgdell:~ # cp -a /boot/* /mnt/boot
vgdell:~ # cp -a /etc /mnt
vgdell:~ # cp -a /home/* /mnt/home
...

Be careful with top-level directories mounted as separate partitions, as you’ve already created them. Now create the top directories for pseudo file systems and special places:

vgdell:~ # mkdir /mnt/dev
vgdell:~ # mkdir /mnt/media
vgdell:~ # mkdir /mnt/mnt
vgdell:~ # mkdir /mnt/proc
vgdell:~ # mkdir /mnt/selinux
vgdell:~ # mkdir /mnt/sys
vgdell:~ # mkdir /mnt/tmp
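
As an aside, the cp and mkdir steps can be collapsed into a single rsync run; here is a sketch, assuming a bash root shell (rsync still creates the excluded directories themselves, it just leaves them empty):

vgdell:~ # rsync -aAXH --exclude={"/dev/*","/proc/*","/sys/*","/tmp/*","/mnt/*","/media/*","/selinux/*"} / /mnt

The -a flag preserves permissions, ownership and timestamps, -A and -X carry over ACLs and extended attributes, and -H keeps hard links intact.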

Edit your new fstab file to add the SSD-needed mount options. You need to use noatime and discard with every ext4 file system. noatime is good to use for conventional HDDs too. REMEMBER: your NEW fstab is /mnt/etc/fstab.
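
For illustration, with the partition scheme above the new /mnt/etc/fstab could contain lines like these (plain device names shown for brevity; the persistent /dev/disk/by-id paths used earlier in this thread are the safer choice):

/dev/sda5  swap   swap  defaults         0 0
/dev/sda6  /      ext4  noatime,discard  0 1
/dev/sda1  /boot  ext4  noatime,discard  0 2
/dev/sda7  /var   ext4  noatime,discard  0 2
/dev/sda8  /home  ext4  noatime,discard  0 2

Note that the SSD will show up as sda once it replaces the old drive.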

Step Six: Shutdown and swap drives
Big moment! Swap your slow, mechanical, energy-consuming beast for your new, sleek, (hopefully) super-fast SSD! First, though, make sure you properly unmount everything:

vgdell:~ # umount /mnt/var /mnt/home /mnt/boot /mnt

Also, make sure you have a live CD to boot into for the final step. I used the openSUSE 11.4 DVD Rescue mode, which drops you quickly into a terminal.
Shut down, get your favorite screwdriver and do the swap.

Step Seven: Install GRUB
Power on and instruct your BIOS to boot from the CD/DVD, then make the appropriate choices to get dropped into a root terminal. The openSUSE DVD’s terminal asks for a login (!), at which you just enter root.
In my case, I wanted to install GRUB on the MBR of the disk, so this is what I’ll show you. You just enter GRUB’s interactive mode and run a couple of commands:

vgdell:~ # grub
...
grub> 

It is VERY important to know which partition is your boot partition; mine is /dev/sda1, or (hd0,0) in GRUB-speak. You can use GRUB’s find command to be sure:

grub> find /grub/menu.lst

This will indicate your boot partition in GRUB-speak. Next, let GRUB know this is indeed your boot partition with the root command (make sure you type YOUR OWN boot partition):

grub> root (hd0,0)

Note that this step requires the presence of a symbolic link called boot inside the boot partition, pointing to its own containing directory, that is:

vgdell:~ # ls -l /boot
...
lrwxrwxrwx 1 root root        1 Μάρ   2  2011 boot -> .
...

openSUSE (and I think most distributions) ships with this link, so you shouldn’t worry.
Finally, instruct GRUB to install itself into the MBR:

grub> setup (hd0)

You should see a few lines indicating success with various tasks. Quit GRUB:

grub> quit

That’s it!! You are moments away from enjoying the fastest boot your machine has seen!! Reboot your live environment, choose to boot from your hard disk and enjoy!

On 10/15/2011 03:16 AM, vgiannadakis wrote:
> That’s it!! You are moments away from enjoying the fastest boot your
> machine has seen!!

-=WELCOME=- new poster (i’ve not tried your how-to [yet], but so far i
am impressed!!)


DD
openSUSE®, the “German Automobiles” of operating systems

Thanks for the warm welcome, DenverD! I’ve been on and off Linux for many years and on openSUSE full-time for about a year and a half. Installing an SSD on Linux appears more complicated than it should be, so I thought I’d share my simplistic (but fully working) approach…

I prefer to look for /grub/stage2, which is more important and more relevant than menu.lst.
Otherwise, thank you for this great post. I’ll know where to look before installing Linux on an SSD (which might happen to all of us sooner or later).

Thanks for the info and kind words! I tried to edit my post to quote you, but it seems I can’t…

After about a week on SSD, I feel I’ve become addicted!! Raw disk reads at ~250M/sec and file system reads at ~185M/sec! %) Latency? I don’t know this word!

[QUOTE=jdmcdaniel3]

I would request that you post any comments about your SSD drive if you have one and if the steps you took differed from mine.

Thank You,[/QUOTE]

My new Corsair Force 3 120 GB SSD arrived this morning. My intention is to use it for an openSUSE 12.1 system, i.e. swap, / and /home, with a separate data disk.

The DVD Beta 1 installation disk worked well, apart from a problem seeing the desktop after the first reboot (but I had exactly the same problem with an HDD). Rebooting got me to the desktop and all seems well apart from a few minor niggles. I have a mobo without native SATA 3 (see PC1 below), so I have also installed an ASUS U3S6 SATA 3 expansion card to support the SSD.

This is what I get as a speed check. (sdb is a standard sata2 HDD.)

hdparm -t /dev/sda

/dev/sda:
 Timing buffered disk reads: 810 MB in 3.00 seconds = 269.97 MB/sec

hdparm -t /dev/sdb

/dev/sdb:
Timing buffered disk reads: 218 MB in 3.02 seconds = 72.12 MB/sec

I would also like to ask, with regard to TRIM: presumably I should include the “discard” option in fstab for all three partitions - swap, / and /home?

So far my experience is pretty positive.