Terrible iowait issue on openSUSE 12.1. Is this btrfs-related?

On my freshly installed openSUSE 12.1 (only the home dir was moved over from the old 11.4), iowait reaches 90% when performing simple tasks such as compiling boost with bjam or blender with cmake. The system becomes totally unresponsive, with user and system time below 5%.
Is this an issue with Btrfs (I chose it for my system partition)?
Or is my system configuration simply not powerful enough? [AMD X2 5000+ 2.6 GHz; 2 GB DDR2]
A sample vmstat taken while writing this post:

procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------
 r  b   swpd   free  inact active   si   so    bi    bo   in   cs us sy id wa st
 1  1   2004     69    495    760    0    0  4199    99 2822 3344  6 13 33 47  0
 0  2   2005     59    490    756    0    0  4846   479 2380 2669  6 10 24 61  0
 2  0   2006     73    474    757    0    0  2825   138 3272 4251 13 10 30 48  0
 2  4   2008     63    491    760    0    0 13665   561 2344 2457  5 12 20 63  0
 1  0   2008     72    485    759    0    0  9415     0 2719 3017  8 11 25 57  0
 0  1   2008     71    475    766    0    0  4977   129 2318 2460  8  9 40 44  0
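For reference, a quick way to summarize output like the above is to average the wa column. A minimal sketch, assuming `vmstat 1`-style output with the standard 17-column layout (wa is field 16; the `avg_wa` helper name is made up for illustration):

```shell
# Average the iowait (wa) column from vmstat samples on stdin.
# Data rows are the lines whose first field is numeric; wa is field 16.
avg_wa() {
  awk '$1 ~ /^[0-9]+$/ && NF >= 17 { sum += $16; n++ }
       END { if (n) printf "%.1f\n", sum / n }'
}

# Usage: vmstat 1 10 | avg_wa
```

For the six samples above this comes out to 53.3, i.e. the CPU is spending over half its time waiting on I/O.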

A sample iostat taken while building boost:

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
          13,77    0,00    7,84   78,39    0,00    0,00

Device:         rrqm/s   wrqm/s     r/s     w/s    rkB/s    wkB/s avgrq-sz avgqu-sz   await r_await w_await  svctm  %util
sda              47,60     9,80  195,20   83,20 13327,20   516,00    99,45    96,87  552,79  169,93 1451,03   3,58  99,60
sda2 [swap]     24,00     0,00   16,80   14,40   163,20    75,20    15,28    53,36 3449,99   71,12 7392,00  26,35  82,20
sda3 [/]         5,40     8,00  175,60   67,80 13077,60   369,60   110,49    42,04  184,20  180,21  194,55   4,09  99,58
sda6 [/home]    18,20     1,80    2,80    1,00    85,60    71,20    82,53     1,43  374,16  118,36 1090,40 156,11  59,32
sda7              0,00     0,00    0,00    0,00     0,00     0,00     0,00     0,00    0,00    0,00    0,00   0,00   0,00

I confirm that. Intel 1.73 GHz, 3 GB DDR2: high I/O eats 100% CPU time. It seems Xorg-related; Nvidia card. With nouveau it sometimes moves fast, but with the closed driver every movement freezes for a while, every time.

Sorry, I figured it out: it's chromium. Yes, Xorg and the closed Nvidia driver do eat CPU time, but not to the extent of causing lag.

When chromium is open, and especially when it refreshes a webpage, iowait eats CPU time through the roof.

Firefox does so sometimes too, but only for a second or so, narrow enough that you can barely feel it.

As for konqueror, I think it only works well with the openSUSE and/or KDE official sites; on others it is poor.

So it's not a hardware problem after all. I'll open a new topic in the Applications forum to explain how to reduce such iowait in chromium.

Of course the easiest step is to drop chromium and go back to firefox.

High iowait can also be caused by modern hard disks with physical 4k sectors when the partitions are not aligned to those sectors. Please check whether you have this type of hard disk drive.
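On such drives a partition start is safe when its start sector (counted in 512-byte units, as `fdisk -l` prints them) is divisible by 8, i.e. falls on a 4 KiB boundary. A minimal check (the `is_aligned` helper name is made up for illustration):

```shell
# Returns success (0) when the given start sector is 4 KiB-aligned,
# i.e. divisible by 8 (8 * 512 bytes = 4096 bytes).
is_aligned() {
  [ $(( $1 % 8 )) -eq 0 ]
}

is_aligned 2048 && echo "sector 2048: aligned"      # modern 1 MiB offset
is_aligned 63   || echo "sector 63: NOT aligned"    # old DOS-style offset
```

Old DOS-style partition tables typically started the first partition at sector 63, which is exactly the misaligned case this check catches.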

Nope. I have a Seagate ST3320620AS with 512 bytes per sector.


Disk /dev/sda: 320.1 GB, 320072933376 bytes
255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x000bc140

Device     Boot      Start         End      Blocks   Id  System
/dev/sda1             2048      206847      102400   83  Linux ( /boot )
/dev/sda2           206848     9867263     4830208   82  Linux swap / Solaris
/dev/sda3          9867264    37046271    13589504   83  Linux ( / )
/dev/sda4         56086528   625142447   284527960    f  W95 Ext'd (LBA)
/dev/sda5         56088576   132358143    38134784   83  Linux ( /home )
/dev/sda6        132360543   625142447   246390952+   7  HPFS/NTFS/exFAT


Good. At least you can exclude partition misalignment as a possible cause of your problem. As a side note, the output of fdisk is of no real value in determining physical sector size, because even hard disks with 4k sectors emulate 512-byte sectors and report that to the outside.
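The kernel exposes the real sizes in sysfs, which is more trustworthy than what fdisk prints. A sketch (the `sector_sizes` helper name is made up; the path argument is only there so it can be pointed at a test directory, on a live system you would use /sys/block/sda/queue):

```shell
# Print logical and physical block size from a block device's queue dir.
sector_sizes() {
  q=${1:-/sys/block/sda/queue}
  echo "logical:  $(cat "$q/logical_block_size") bytes"
  echo "physical: $(cat "$q/physical_block_size") bytes"
}
```

A 512e drive with 4k physical sectors will report logical 512 but physical 4096 here, even though fdisk shows 512 everywhere.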