How do I implement "elevator=noop"?

One of my desktop PCs has a Corsair SSD and an HDD. I’ve read that it is beneficial to use “elevator=noop” for the SSD as this will speed up I/O.

However, the authors don’t give a clear explanation of which file or files the setting should go in (fstab, probably?).

Has anyone successfully tried it?

You add it as a kernel option, e.g. via YaST -> Boot Loader:
select the boot menu entry for which you need it and press Edit.
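
The edited kernel line then simply carries the extra parameter at the end, something along these lines (the rest of the line is only an illustration; your root device and other options will differ):

kernel /boot/vmlinuz root=/dev/sda2 splash=silent quiet elevator=noop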


Thanks Martin - it’s easy when you know how :slight_smile:

Adding it as a kernel option makes it system-wide, which is fine for obtaining a persistent elevator setting on a system with a single disk … but:

Given that noop is not particularly desirable for a mechanical disk, guess what using the kernel boot option is going to do to I/O performance with the HDD :stuck_out_tongue:

What you’d want to do in a mixed-disk system instead is to change the elevator setting on a per-disk basis, which can be done via sysfs.

Assuming /dev/sdX (where X is whatever is applicable to your system setup and the disk that you are interested in), then to check the elevator options available to you (as configured in the kernel you’re running) and to see which of those is currently being utilized for that particular disk, use:

cat /sys/block/sdX/queue/scheduler

The output is a single line listing the available elevator options; the one in [brackets] is the one currently in use. Example:

noop deadline [cfq]

To change to a particular elevator option that is available to you, use (as root):

echo option > /sys/block/sdX/queue/scheduler

where option is one of those listed as available. So, for example, for noop:

echo noop > /sys/block/sdX/queue/scheduler

To make it persistent … well, using the older System V init, and depending upon what is applicable for the distro, you’d write the last command to either /etc/rc.local or /etc/init.d/boot.local (the latter for openSUSE; see the example below).
Not sure what the equivalent would be under the new systemd (hopefully someone else can expand for us on that point).
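
For the SysV init case, for example (sdX again being a placeholder for the disk you want to target):

# appended to /etc/init.d/boot.local, which is run as root at boot time
echo noop > /sys/block/sdX/queue/scheduler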

Note: I use an SSD, but I’ve never bothered to use the noop setting (relying instead on the (default?) cfq), so I would be interested in any subjective feedback

One further note is that you would want to be mindful of udev /dev node assignments, which may or may not change after kernel updates … so some have suggested writing the command using a device-by-id approach, to maintain the elevator setting for the intended disk across kernel updates.
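
For example, something along these lines (the by-id name here is purely hypothetical; look under /dev/disk/by-id/ to find the entry that corresponds to your SSD):

# resolve the stable by-id symlink to whatever sdX it currently points at
SSD=$(basename "$(readlink -f /dev/disk/by-id/ata-Corsair_Force_3_SSD_1234567890)")
echo noop > "/sys/block/$SSD/queue/scheduler"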

A related note to the discussion:

A Proposal To Change The Default I/O Scheduler

… and some more:

I thought that I would resurrect this old thread with a follow-up question. I have 13.1 installed on a new PC where sdb is a 256 GB SSD and sda is a 2 TB spinning hard drive. When I try the quoted command for both sdb (which is the SSD) and sda (which is the HD), I obtain:


oldcpu@4770:~> cat /sys/block/sdb/queue/scheduler
noop deadline [cfq] 
oldcpu@4770:~> cat /sys/block/sda/queue/scheduler
noop deadline [cfq]

i.e. both have ‘noop deadline’. Is this healthy/optimal? Or is there a preferred setting for the HD? (or preferred for the SSD)?

I see now that I read the thread wrong. Both have ‘cfq’, and I understand now that I need to change the SSD (which is sdb) from “cfq” to “deadline”?

Assuming ‘deadline’ is the correct option for sdb (the SSD drive) I tried:


4770:/home/oldcpu # echo deadline > /sys/block/sdb/queue/scheduler

which gave:


4770:/home/oldcpu # cat /sys/block/sdb/queue/scheduler
noop [deadline] cfq 

where ‘4770’ is the ‘name’ of my PC.

Now how to make this permanent on 13.1 … < thinking >
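
One approach I have seen suggested for systemd-based releases (I am not certain whether it is the one recommended for 13.1) is a udev rule keyed on the rotational attribute, so that any non-rotating disk picks up the scheduler automatically; the file name below is just an example:

# /etc/udev/rules.d/60-ssd-scheduler.rules (example name)
# set the deadline elevator for every non-rotational (i.e. SSD) block device
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="deadline"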

Hi
Have a look at user tsu2’s presentation:
https://forums.opensuse.org/showthread.php/479727-FYI-Presentation-Installing-openSUSE-on-SSD?highlight=ssd

Slide 21 has the answer :wink:

Possibly I have something set up wrong on my PCs, but every slide after slide 1 in that presentation (on SSD optimization) is black and not readable.

I downloaded the PDF version of the presentation and was able to read slide-21 that way.

I note that it is the same recommendation as here: SDB:SSD performance - openSUSE Wiki

… one thing I do not have a good appreciation for is the relative merits wrt using noop or deadline for an SSD for an average user.

I never could find a definite answer or explanation either.

On Tue 21 Jan 2014 09:46:01 PM CST, Knurpht wrote:

oldcpu;2618268 Wrote:
> … one thing I do not have a good appreciation for is the relative
> merits wrt using ‘noop’ (Noop scheduler - Wikipedia) or
> ‘deadline’ (Deadline scheduler - Wikipedia) for an SSD
> for an average user.

I never could find a definite answer or explanation either.

Hi
Since this system just has a 128GB SSD, I use noop, set swappiness to
1 and vfs_cache_pressure to 50.


cat /sys/block/sda/queue/scheduler
[noop] deadline cfq

sysctl vm.swappiness
vm.swappiness = 1

sysctl vm.vfs_cache_pressure
vm.vfs_cache_pressure = 50
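
To keep those two sysctl values across reboots they would normally go into /etc/sysctl.conf (or a drop-in file under /etc/sysctl.d/), e.g.:

# /etc/sysctl.conf (or a file under /etc/sysctl.d/) -- same values as shown above
vm.swappiness = 1
vm.vfs_cache_pressure = 50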


Cheers Malcolm °¿° SUSE Knowledge Partner (Linux Counter #276890)

Sorry about the long, and rather vague, acronym-fest. Please ignore, if you really don’t like that kind of thing.

Now that’s a very good question.

First, a bit of housekeeping:

  • for the user tsu2 slide show, I found that I had to go to the ‘30 minute presentations’ section; after that it was readable (Opera)
  • for the two Phoronix articles, well one is a forum thread linking to the other, so there is only one, really

Now, traditionally, the advice that was given (…and that seemed to make sense at the time…) was that while CFQ was the best choice for HDDs, with SSDs elevator-type scheduling makes no sense and you might as well go for ‘noop’ scheduling (effectively, no scheduling), because the phenomenon of head strokes taking a long time is no longer present, so optimisations to work around head stroking just aren’t relevant any more. And noop scheduling should have the lower overhead, so even if noop’s measured performance on an otherwise unloaded system were the same as with some other scheduler, there ought to be more processor performance left over for other things.

So far, so convincing, but…purely looking at the SSD figures from Phoronix (and you’ll have to look at the original article for exact test definitions, if you are interested):

  • FS mark tests (2 tests): no big difference, but the order is CFQ, then NOOP, then Deadline (worst)
  • Blogbench test: NOOP, then Deadline, then CFQ; unlike the FS mark tests these weren’t close results, with the best being twice as good as the worst
  • Compile bench compile: CFQ, then NOOP, then Deadline; the fastest about one third better than the slowest
  • Compile bench initial create: CFQ wins; the order between second and third depends on the hardware platform (Sandy Bridge vs Clarksfield), so you’d probably call this a tie
  • Compile bench read compiled tree: on SB, CFQ wins, with NOOP/DL tied ~20% behind; on Clarksfield, DL wins with CFQ/NOOP ~20% behind
  • IOZONE: CFQ wins, but not by that much (~5%); on SB, NOOP is ahead of Deadline by a couple of percent; on Clarksfield, NOOP and DL are about equal, with DL trivially ahead
  • TIOT (RW, 128 M, 8 threads): on SB, CFQ > NOOP > DL; on Clarksfield, DL > NOOP > CFQ
  • TIOT (RW, 64 M, 16 threads): on SB, not much in it; on Clarksfield, DL > CFQ > NOOP, and although the margins still aren’t large, at least there is an order

Now, call me dumb, but I find it difficult to call a clear winner from that lot: you’d probably say that CFQ is the best overall bet, but if your use case is more like Blogbench, CFQ is the worst, and not by a small margin. This is unexpected, but it does make the point about benchmarking that the results you get can be heavily dependent on exactly what tests you perform.

(And a couple of passing notes: given that the results vary by platform (not just proportionally, the order also changes), who would take bets on how results would turn out on Haswell or Atom, say? Results on an AMD platform would probably be more difficult still to predict. And this was all on ext4; other filesystems would introduce another variable.)

Now, on to traditional rotating bits of magnetic matter: the first disturbing thing is that on some tests the HDD manages perhaps 50% of the SSD’s performance, while on others it gets nowhere close to that. This is probably just a reflection of the fact that on some tests stroke time is a significant proportion of the total, and on others it isn’t. The second is that there are some tests where the results are closer on the HDD than they were on the SSD. See, for example, Blogbench: for the SSD it is NOOP > DL > CFQ, with NOOP handily in front, whereas on an HDD it is CFQ > DL > NOOP, with the differences being much smaller. Now, it isn’t unexpected that HDD vs SSD changes the order, but even if you don’t write Blogbench off as an ‘outlier’, you might say the result has little significance on an HDD but is really significant on an SSD (a 2:1 range, versus more like 20%).

Anyway, I think that on the HDD the thing that you don’t want is NOOP, although there are a couple of tests that it wins. Given that this is the situation that CFQ was designed for, you might expect CFQ to win handily. As so often in these matters, it isn’t really that clear: DL is probably slightly ahead on points, but that could change if your use case is closer to one test than the others.

Now here is another complicating factor, or two. For the HDD, the size of the cache is probably also a factor (as would be rotational speed and interface, but probably all of the drives that you’d consider would be 7200 rpm SATA drives). These days caches can be quite large (32/64 MB) and there are many use cases where disk I/O can primarily fit into the cache, rather than having the CPU wait for the disk mechanics. And most drives that you’d buy these days will have Native Command Queuing (whereas, back in the PATA days, they wouldn’t). Now, NCQ is (vaguely) like elevator seek, so you could probably argue that on an older drive something like CFQ or Deadline would be expected to be more relevant than on a more modern drive with NCQ.
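
(If you want to check whether NCQ is actually active on a given drive, one rough indicator is the queue depth that the kernel reports for it; NCQ-capable SATA drives typically show something like 31, whereas a depth of 1 means no queueing:

cat /sys/block/sdX/device/queue_depth

with sdX, as before, being the disk in question.)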

I’m sorry that I couldn’t come up with something simpler or results with more clarity, but, based on that set of Phoronix results, I didn’t manage it.