root functions fine, all other users are unusably slow.

Hi folks, I’m hoping you can help me sort this out.

For the past few days my computer has been running extremely slow; logging in as any user gives me a completely unusable system.

Open Firefox and wait about four minutes; maybe it will load a page. Try the same thing as root and there are no problems.

It’s the same with any other application, actually, but Firefox is my lifeline to (hopefully) a solution.

The system is a genuine Intel i3 with 16 GB of RAM; it boots from a 60 GB SSD and I have two 1 TB hard drives configured as a redundant RAID.

The latest kernel is 3.11.10-7-desktop. I just tried booting 3.11.6-33.gf7498bf-desktop, but it was the same issue.

I think things went downhill after a recent update but I’m not sure.

Here are the contents of my /etc/fstab:
emu:~ # cat /etc/fstab
/dev/disk/by-id/ata-OCZ-AGILITY3_OCZ-92936853RA210YG1-part1 / ext4 noatime,acl,user_xattr 1 1
/dev/md0             /home                ext4       acl,user_xattr      1 2
proc                 /proc                proc       defaults            0 0
sysfs                /sys                 sysfs      noauto              0 0
debugfs              /sys/kernel/debug    debugfs    noauto              0 0
usbfs                /proc/bus/usb        usbfs      noauto              0 0
devpts               /dev/pts             devpts     mode=0620,gid=5     0 0

Other issues, while I’m at it; they could be related, I don’t know.
I seem to have problems with USB data cards not being found in my card reader; this used to work.
USB drives in general don’t seem to work.

I recently installed the NVIDIA driver, which fixed another issue where the computer would just randomly crash.

My keyboard and mouse are wireless if that matters.

This is not a new install of 13.1; previously I had 11.x and 12.x installed. 11.x was brilliant; after that things went downhill. I use this system daily for web development. I have the box configured as an Apache server for local development.

I’m not sure what other information is required to help, but please help, I’m getting desperate here.

Thanks,
Jeff

Pure guess. Since root runs OK but users are slow and you do not see any process using large amounts of CPU, I’d say you are having some sort of issue with the RAID array, since root's home is on the root partition, not on the array. So we need more info: what sort of RAID is it? Hardware, software, or fake (BIOS-assisted)?
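If it is Linux software RAID, a couple of standard commands will show it (nothing system-specific here):

cat /proc/mdstat
mdadm --detail /dev/md0

If /proc/mdstat lists an md device, it is software RAID; if it is empty and the BIOS has a RAID setup screen, it is probably fake RAID.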

Have you tried booting the previous kernel? In the boot menu select Advanced and then the previous kernel version. Any difference?

BTW, in future please put any machine code/text in code blocks (the # option in the editor window). It keeps things from getting misformatted by the web editor. Thanks

@gogalthorp thanks, and yes that does make sense; I didn’t realize that root's home directory would be on the SSD.

It was a while ago that I configured the RAID, but it was done at the installation of openSUSE 11.x.

Sorry about not posting in code blocks, I’ll make sure that I do that.


Device: /dev/sda1 (my boot drive)

  - Size: 55.90 GB
  - Encrypted: No
  - Device Path: pci-0000:00:1f.2-scsi-1:0:0:0-part1
  - Device ID 1: ata-OCZ-AGILITY3_OCZ-92936853RA210YG1-part1
  - Device ID 2: scsi-141544120202020204f435a2d4147494c49545933202020202020202020202020202020202020202020202020202020204f435a2d39323933363835335241323130594731-part1
  - Device ID 3: scsi-SATA_OCZ-AGILITY3_OCZ-92936853RA210YG1-part1
  - Device ID 4: wwn-0x5e83a97f37a75d36-part1
  - FS ID: 0x83 Linux native
  - File System: Ext4
  - Mount Point: /
  - Label:

Device: /dev/sdb1

  - Size: 909.50 GB
  - Encrypted: No
  - Device Path: pci-0000:0a:00.0-scsi-0:0:0:0-part1
  - Device ID 1: ata-WDC_WD1002FAEX-00Y9A0_WD-WCAW32993398-part1
  - Device ID 2: scsi-1ATA_WDC_WD1002FAEX-00Y9A0_WD-WCAW32993398-part1
  - Device ID 3: scsi-SATA_WDC_WD1002FAEX-0_WD-WCAW32993398-part1
  - Device ID 4: wwn-0x50014ee25b9adb77-part1
  - FS ID: 0xFD Linux RAID
  - File System:
  - Mount Point:
  - Label:

Device: /dev/sdc1

  - Size: 931.51 GB
  - Encrypted: No
  - Device Path: pci-0000:0a:00.0-scsi-1:0:0:0-part1
  - Device ID 1: ata-WDC_WD1002FAEX-00Y9A0_WD-WCAW33004298-part1
  - Device ID 2: scsi-1ATA_WDC_WD1002FAEX-00Y9A0_WD-WCAW33004298-part1
  - Device ID 3: scsi-SATA_WDC_WD1002FAEX-0_WD-WCAW33004298-part1
  - Device ID 4: wwn-0x50014ee25b9a9415-part1
  - FS ID: 0xFD Linux RAID
  - File System:
  - Mount Point:
  - Label:


I got the info from the partitioner; is there another place to get info about the RAID?

Thanks,
Jeff

Hi jbenetti,

You could have been a bit more precise than x in 11.x.

You probably spent quite some time on the listing.

Yes, there is another way: open a terminal (a text console), say ‘su’ to become root and enter password,
then say ‘parted -l’.
Please post the output of that here.

It is anyway interesting to note that a thread with a very similar topic and a very similar description of the possible cause (a kernel update) is currently the very next thread in this forum:
After update, root works but not regular user - Install/Boot/Login - openSUSE Forums

Good luck
Mike

Looks like software RAID.

Did you try booting the previous kernel? It may be a regression in the current kernel; since the problem seems to have started recently, it may have to do with a kernel upgrade.

I have never actually heard of this happening before, so I'm not sure where to start. All I know is RAID is a pain :stuck_out_tongue:

FYI, the root user's home is /root, on the root partition. LOL, I know, too many roots. You have to be careful of the context of “root” in Linux.
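You can check it yourself with a standard lookup:

getent passwd root

The sixth field of the output is the home directory, /root.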

Sorry, I don’t remember the minor release number; I can try and figure that out if it’s important.

Unfortunately, I am now at home but I will post the output of parted -l tomorrow.

Thanks Mike, I guess I will need some luck.

Jeff

I did try to boot the previous kernel. I can tell you which kernel numbers I booted, but unfortunately I am now at home and I will pick this up tomorrow.

I don’t like the sound of that!

Thanks,
Jeff

On 2014-02-12 22:56, jbenetti wrote:

> Sorry, I don’t remember the minor release number; I can try and figure
> that out if it’s important.

No, not really important.

Just a comment: the “minor” number in openSUSE in fact indicates a major
release. I.e., 12.2, 12.3, 13.1… are all major releases.

How did you do those upgrades, via zypper dup?

Let me paste your fstab here, there is an issue with it.


> emu:~ # cat /etc/fstab
> /dev/disk/by-id/ata-OCZ-AGILITY3_OCZ-92936853RA210YG1-part1 /
> ext4       noatime,acl,user_xattr 1 1
> /dev/md0             /home                ext4       acl,user_xattr      1 2

> proc                 /proc                proc       defaults            0 0
> sysfs                /sys                 sysfs      noauto              0 0
> debugfs              /sys/kernel/debug    debugfs    noauto              0 0
> usbfs                /proc/bus/usb        usbfs      noauto              0 0
> devpts               /dev/pts             devpts     mode=0620,gid=5     0 0

Please comment out the entries for /proc, /sys, and /dev/pts, those five
lines. 13.1 does not use them and they might cause issues. Systemd mounts
those entries on its own now.
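For instance, the tail of the file would then look like this (just a sketch; a leading # disables each line):

#proc                 /proc                proc       defaults            0 0
#sysfs                /sys                 sysfs      noauto              0 0
#debugfs              /sys/kernel/debug    debugfs    noauto              0 0
#usbfs                /proc/bus/usb        usbfs      noauto              0 0
#devpts               /dev/pts             devpts     mode=0620,gid=5     0 0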


Cheers / Saludos,

Carlos E. R.

(from 13.1 x86_64 “Bottle” (Minas Tirith))

On 2014-02-12 22:56, jbenetti wrote:

> gogalthorp;2624328 Wrote:
>> I have never actually heard of this happening before, so I'm not sure
>> where to start. All I know is RAID is a pain
>
> I don’t like the sound of that!

I take it that the root partition is on an SSD disk, thus very fast, but
the “/home” is a software raid using rotating plate disks, thus much slower.

That is, in that configuration, it is normal that the user “root” runs
much faster than any plain user in the system. It has to be.

So the question is, were those plain users “running” way faster before,
some time ago, with this same configuration?

We can run a speed test on that software raid of yours. For instance:


hdparm -tT /dev/md0

will give some figures so that we can compare somewhat.

I don’t know what type of RAID you have. 1? 5? If it is degraded and
rebuilding, it will run slow till it finishes.

You could also check the health of those hard disks, using smartctl.


smartctl -a /dev/sda
smartctl -a /dev/sdb
smartctl -a /dev/sdc
smartctl -a /dev/sdd

Etcetera. Then you can run the short test, check, the long test, check…
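For reference, the usual self-test sequence is something like:

smartctl -t short /dev/sdb
smartctl -l selftest /dev/sdb
smartctl -t long /dev/sdb

The -t option starts a test in the background; -l selftest shows the result once it finishes.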

More things. Instead of working as root, you can create a temporary new
user, and just tell yast to place its home on the SSD disk, like on
“/temporaryhome”. That way people here will not be grinding their teeth
thinking you are running a graphical session as root all the time :wink:
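If you prefer the command line over yast, a rough sketch (the user name and path are only examples):

useradd -m -d /temporaryhome tempuser
passwd tempuser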


Cheers / Saludos,

Carlos E. R.

(from 13.1 x86_64 “Bottle” (Minas Tirith))

@robin_listas
Thank you so much for weighing in.

> I take it that the root partition is on an SSD disk, thus very fast, but
> the “/home” is a software raid using rotating plate disks, thus much slower.
>
> That is, in that configuration, it is normal that the user “root” runs
> much faster than any plain user in the system. It has to be.
>
> So the question is, were those plain users “running” way faster before,
> some time ago, with this same configuration?

Yes, this is entirely correct, and yes, for the user there was no noticeable difference between root and other users time-wise. I am sure that it is measurably slower, but the HDDs are quite fast normally.

> I don't know what type of RAID you have. 1? 5? If it is degraded and
> rebuilding, it will run slow till it finishes.

I forget the RAID numbers, but I set this up as two identical drives mirrored by the RAID for redundancy; I thought that this might protect me from hardware failure.

> More things. Instead of working as root, you can create a temporary new
> user, and just tell yast to place its home on the SSD disk, like on
> "/temporaryhome". That way people here will not be grinding their teeth
> thinking you are running a graphical session as root all the time ;-)

Thanks, I was going to ask if this would be a good idea; I’m not comfortable logged in as root either.

I will address the other issues as soon as I am back in the saddle tomorrow.

Thanks again,
Jeff

@robin_listas

> How did you do those upgrades, via zypper dup?

All upgrades were done through the graphical interface when updates became available. For a while the update notification was buggy, so I may have done some updates on the command line using zypper dup, as you suggested.

Jeff

On 2014-02-13 00:36, jbenetti wrote:
>
> @robin_listas
>
>
> Code:
> --------------------
> How did you do those upgrades, via zypper dup?
> --------------------
>
>
> All upgrades were done through the graphical interface when updates
> became available. For a while the update notification was buggy, so I
> may have done some updates on the command line using zypper dup, as you
> suggested.

Not that. You said you previously installed 11 something, and later
upgraded to other releases. How exactly?


Cheers / Saludos,

Carlos E. R.

(from 13.1 x86_64 “Bottle” (Minas Tirith))

A note about RAID: RAID 1, which you probably have, is great for servers or process machines that require five-nines uptime. But RAID is not a backup solution; RAID is an uptime solution. For desktop operation you are far better off doing a daily or even an hourly backup to a drive rather than running a mirror RAID. RAID simply adds an extra complication, and even with it you still need a backup.
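For example, a daily backup can be as simple as an rsync one-liner run from cron (the destination path here is only an example):

rsync -a --delete /home/ /backup/home/

Even something that simple protects against accidental deletions, which a mirror will happily replicate to both disks.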

On 2014-02-13 05:16, gogalthorp wrote:
>
> A note about RAID: RAID 1, which you probably have, is great for servers
> or process machines that require five-nines uptime. But RAID is not a
> backup solution; RAID is an uptime solution. For desktop operation you are
> far better off doing a daily or even an hourly backup to a drive rather
> than running a mirror RAID. RAID simply adds an extra complication, and
> even with it you still need a backup.

Absolutely. You need a set of disks for the raid, and another for the
backup. I see no sense in using raid if there is no backup.

You also need to keep at least a spare hard disk to replace the one in
the raid when it eventually goes bad, as it should be the same
make and size as the old one (ok, not necessarily, with software raid).


Cheers / Saludos,

Carlos E. R.

(from 13.1 x86_64 “Bottle” (Minas Tirith))

Ok, thanks folks for the comments and help. I thought I was being clever setting up a RAID, but I guess it wasn’t so smart after all.

So far today I have modified the fstab file to remove the unnecessary entries.

Here is the output of parted -l:

emu:/ # parted -l
Model: ATA OCZ-AGILITY3 (scsi)
Disk /dev/sda: 60.0GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  60.0GB  60.0GB  primary  ext4         boot, type=83


Model: ATA WDC WD1002FAEX-0 (scsi)
Disk /dev/sdb: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size   Type     File system  Flags
 1      23.6GB  1000GB  977GB  primary  ext4         raid, type=fd


Model: ATA WDC WD1002FAEX-0 (scsi)
Disk /dev/sdc: 1000GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  1000GB  1000GB  primary  ext4         raid, type=fd


Model: Linux Software RAID Array (md)
Disk /dev/md0: 977GB
Sector size (logical/physical): 512B/512B
Partition Table: loop

Number  Start  End    Size   File system  Flags
 1      0.00B  977GB  977GB  ext4


I created another user with their home directory on the SSD, so I should be able to log in as a regular user. At some point I would like to learn why it is so bad to run a GUI as root. Does this mean it is not recommended to log into a GUI and set up a new user? Anyway, that’s another issue for another thread.

Timing tests: the RAID

emu:/ # hdparm -tT /dev/md0

/dev/md0:
 Timing cached reads:     2 MB in  6.49 seconds = 315.59 kB/sec
 Timing buffered disk reads:   2 MB in  3.99 seconds = 512.78 kB/sec


The SSD

emu:/ # hdparm -tT /dev/sda

/dev/sda:
 Timing cached reads:   12192 MB in  2.00 seconds = 6098.70 MB/sec
 Timing buffered disk reads: 420 MB in  3.01 seconds = 139.43 MB/sec


Drive /dev/sdb

emu:/ # hdparm -tT /dev/sdb

/dev/sdb:
 Timing cached reads:     2 MB in  2.26 seconds = 906.87 kB/sec
 Timing buffered disk reads: 368 MB in  3.01 seconds = 122.34 MB/sec


Drive /dev/sdc

emu:/ # hdparm -tT /dev/sdc

/dev/sdc:
 Timing cached reads:   12070 MB in  2.00 seconds = 6037.46 MB/sec
 Timing buffered disk reads: 370 MB in  3.01 seconds = 122.83 MB/sec


Wow, what do I do with this info? Is drive /dev/sdb failing?

Awaiting your replies and I should be getting some backups together.

Jeff

Before I get asked, here is the output of cat /proc/mdstat:

emu:/admin-home # cat /proc/mdstat
Personalities : [raid1] 
md0 : active raid1 sdc1[0] sdd1[1]
      953684856 blocks super 1.0 [2/2] [UU]
      bitmap: 0/8 pages [0KB], 65536KB chunk

unused devices: <none>


Just learning and anticipating questions.

So is it possible for me to now reconfigure these drives as two separate hard drives and lose the RAID 1?

Thanks,
Jeff

On 2014-02-13 14:26, jbenetti wrote:
>
> Ok, thanks folks for the comments and help. I thought I was being clever
> setting up a RAID, but I guess it wasn’t so smart after all.

It is popular, but it is not as good as people think.

> I created another user with their home directory on the SSD, so I should
> be able to log in as a regular user. At some point I would like to learn
> why it is so bad to run a GUI as root. Does this mean it is not
> recommended to log into a GUI and set up a new user? Anyway, that’s
> another issue for another thread.

There are many opinions. I use it when I really have to.
The danger derives from the fact that everything in that session is
running as root. A program going berserk can do more damage, or a hacker
gaining entry could do anything. And your mistakes can be much more
damaging.

>
> Timing tests: the RAID
>
> Code:
> --------------------
> emu:/ # hdparm -tT /dev/md0
>
> /dev/md0:
> Timing cached reads: 2 MB in 6.49 seconds = 315.59 kB/sec
> Timing buffered disk reads: 2 MB in 3.99 seconds = 512.78 kB/sec
>
>
> --------------------

Wow. Too bad.

> The SSD
> Code:
> --------------------
> emu:/ # hdparm -tT /dev/sda
>
> /dev/sda:
> Timing cached reads: 12192 MB in 2.00 seconds = 6098.70 MB/sec
> Timing buffered disk reads: 420 MB in 3.01 seconds = 139.43 MB/sec
>
>
> --------------------

Not as fast as I thought :-?

> Code:
> --------------------
> emu:/ # hdparm -tT /dev/sdb
>
> /dev/sdb:
> Timing cached reads: 2 MB in 2.26 seconds = 906.87 kB/sec
> Timing buffered disk reads: 368 MB in 3.01 seconds = 122.34 MB/sec
>
>
> --------------------

It only read 2 MB total? Why?

> Code:
> --------------------
> emu:/ # hdparm -tT /dev/sdc
>
> /dev/sdc:
> Timing cached reads: 12070 MB in 2.00 seconds = 6037.46 MB/sec
> Timing buffered disk reads: 370 MB in 3.01 seconds = 122.83 MB/sec
>
>
> --------------------

The speeds are the same, but sdb only reads 2 MB… very strange.

> Wow, what do I do with this info? Is drive /dev/sdb failing?

Probably.

I would look at the output of “smartctl -a /dev/sdb”.

>
> Awaiting your replies and I should be getting some backups together.

Good idea.
Well, raid is made precisely to protect you against hardware failure.
You keep working because of that… There is one side working, at least.

To speed up the backup, you might consider disabling sdb in the raid.
The raid would work “degraded” but faster.
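Something like this should do it (a sketch; verify the device name in /proc/mdstat first):

mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md0 --remove /dev/sdb1
cat /proc/mdstat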

Funny you got no error or warning messages :-?


Cheers / Saludos,

Carlos E. R.

(from 13.1 x86_64 “Bottle” (Minas Tirith))

> It only read 2 MB total? Why?

I have no idea; is it possible the test bailed?

> To speed up the backup, you might consider disabling sdb in the raid.
> The raid would work “degraded” but faster.

I would like to do that but I’m already beyond my level of expertise :slight_smile:

I’ll google and see if I can figure out how to disable one drive from the array.

Thanks,

Jeff
PS:I’ve learned a heck of a lot in the past few hours. :slight_smile:

Before we go any further, a little confusion could arise: I have connected a smaller HDD that was in the case but not hooked up. I am using this drive as a backup drive while I work on restoring my system. This 160 GB drive is now mapped as /dev/sdb; the drives in the array are now mapped as /dev/sdc and /dev/sdd respectively.

I have disabled drive sdc for now.

emu:~ # mdadm --manage /dev/md0 --fail /dev/sdc1
mdadm: set /dev/sdc1 faulty in /dev/md0
emu:~ # cat /proc/mdstat
Personalities : [raid1] 
md0 : active raid1 sdc1[0](F) sdd1[1]
      953684856 blocks super 1.0 [2/1] [_U]
      bitmap: 0/8 pages [0KB], 65536KB chunk

unused devices: <none>


emu:~ # mdadm --manage /dev/md0 --remove /dev/sdc1
mdadm: hot removed /dev/sdc1 from /dev/md0
emu:~ # cat /proc/mdstat
Personalities : [raid1] 
md0 : active raid1 sdd1[1]
      953684856 blocks super 1.0 [2/1] [_U]
      bitmap: 0/8 pages [0KB], 65536KB chunk

unused devices: <none>
emu:~ #
emu:~ # hdparm -tT /dev/md0

/dev/md0:
 Timing cached reads:   12250 MB in  2.00 seconds = 6128.27 MB/sec
 Timing buffered disk reads: 378 MB in  3.00 seconds = 125.93 MB/sec


Ok, now that I am here, and this is going to sound lame, but how can I tell which physical drive is which?

Thanks,
Jeff

Ok, so not knowing exactly what my next step is after removing the drive from the array, I deleted the RAID partition and formatted the entire drive as an extended partition, native Linux ext4.
Here is the output of smartctl:

emu:~ # smartctl -a /dev/sdc
smartctl 6.2 2013-07-26 r3841 [x86_64-linux-3.11.10-7-desktop] (SUSE RPM)
Copyright (C) 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Family:     Western Digital Caviar Black
Device Model:     WDC WD1002FAEX-00Y9A0
Serial Number:    WD-WCAW32993398
LU WWN Device Id: 5 0014ee 25b9adb77
Firmware Version: 05.01D05
User Capacity:    1,000,204,886,016 bytes [1.00 TB]
Sector Size:      512 bytes logical/physical
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   ATA8-ACS (minor revision not indicated)
SATA Version is:  SATA 2.6, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Thu Feb 13 12:53:08 2014 AST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x84) Offline data collection activity
                                        was suspended by an interrupting command from host.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0) The previous self-test routine completed
                                        without error or no self-test has ever 
                                        been run.
Total time to complete Offline 
data collection:                (16260) seconds.
Offline data collection
capabilities:                    (0x7b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new
                                        command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering
                                        power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine 
recommended polling time:        (   2) minutes.
Extended self-test routine
recommended polling time:        ( 168) minutes.
Conveyance self-test routine
recommended polling time:        (   5) minutes.
SCT capabilities:              (0x3035) SCT Status supported.
                                        SCT Feature Control supported.
                                        SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f   198   198   051    Pre-fail  Always       -       5533
  3 Spin_Up_Time            0x0027   176   171   021    Pre-fail  Always       -       4191
  4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       308
  5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
  9 Power_On_Hours          0x0032   075   075   000    Old_age   Always       -       18971
 10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
 11 Calibration_Retry_Count 0x0032   100   100   000    Old_age   Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       305
192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       199
193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       110
194 Temperature_Celsius     0x0022   113   103   000    Old_age   Always       -       34
196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       6
198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       2
199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
200 Multi_Zone_Error_Rate   0x0008   195   195   000    Old_age   Offline      -       1095

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]


SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.

Speed test

emu:~ # hdparm -tT /dev/sdc

/dev/sdc:
 Timing cached reads:   12056 MB in  2.00 seconds = 6031.34 MB/sec
 Timing buffered disk reads:  68 MB in  5.50 seconds =  12.36 MB/sec


A little confused at this point :open_mouth:

Jeff