Using SCSI harddisks - Plug'n play - multiboot without tears

Here’s a bit of experience I gathered during the
last two years or so that I would like to share.

Used SCSI hardware is frequently very cheap on eBay these days.

Recently I got two 68-pin IBM DNES 18 GB harddisks for 1 Euro
(about 1 Dollar) plus shipping costs, at least one of which
had never been used (probably spare disks for a RAID).

I plugged one of them (which was still in its unopened,
sealed packaging) into my computer and installed OpenSUSE 11.1
on it from the standard OpenSUSE DVD, with OpenSUSE 10.2 and an
old Windows ME already residing on two IDE hard disks.

Result:

Plug’n Play triple boot (11.1, 10.2, Windows), by DEFAULT.

No manual configuration etc., really nice.

According to an article I read in a computer magazine,
the same doesn’t seem that easy with a second IDE harddisk,
when OpenSUSE 11.1 is to be installed on that second drive
configured as slave and selected second for booting by the BIOS.

As a bonus, the new SCSI HD, which runs in
SCSI Ultra LVD (low voltage differential) mode,
is perceptibly faster than the two IDE hard drives
already installed in my computer beforehand.
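
If you want to put a rough number on such a difference yourself,
the read-timing test of hdparm is a quick way (run as root; the
device names below are just examples, your IDE disk may be
/dev/hda and the SCSI disk /dev/sda):

MyHost:~ # hdparm -t /dev/hda
MyHost:~ # hdparm -t /dev/sda

hdparm -t performs timed sequential reads from the device, so it
gives a reasonable first impression of raw transfer speed.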

Moreover, the 7200 rpm (revolutions per minute) DNES isn’t too loud.
The IBM DDRS drives are even quieter, albeit a bit slower as well.
But beware of e.g. 36 GB IBM SCSI harddisks with 10000 rpm like
the IBM Ultrastar DDYS-T36950 - they’re as loud as a circular saw.

The only inconvenience I still have to resolve is that
I cannot access my old standard user partition from OpenSUSE 10.2.
On this partition there is only data like mail, photos or downloads
and no applications, so mounting this old partition under 11.1
shouldn’t give rise to any problems.
Besides, is there a quick way on the command line to arrange that
this user partition is mounted by default during startup,
or is it better to use Yast?
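
(A minimal command-line sketch, in case that route is preferred:
adding a line like the following to /etc/fstab as root should
mount the partition at every startup. The device name, mount
point and filesystem type here are only assumptions, adapt them
to your system:

/dev/hda7  /old-home  ext3  defaults  0  2

Create the mount point first, e.g. ‘mkdir /old-home’, and test
with ‘mount /old-home’ before rebooting.)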

OK, now a bit more information about the required SCSI hardware,
which you will need if you would like to extend your system
in a similar way.

(1) Of course you need a free PCI slot to plug in a SCSI controller.

(2) The SCSI controller itself. I use an Adaptec AHA-2940U2W.
It fits in a usual PCI slot, does its job without fuss, is
fast enough for most devices, was sold in large numbers and thus is
easy to get and cheap.
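
Side note: under Linux this 2940 controller family is handled by
the aic7xxx driver; whether it has been loaded can be checked with

MyHost:~ # lsmod | grep aic7xxx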

(3) And of course you need physical space within your computer,
i.e. a place to install/put a new hard drive, if you don’t
want to use an external SCSI harddisk (which is possible,
but brings a bit of inconvenience, see further down below).

(4) You need SCSI cables, and probably an active SCSI terminator
if you want a fast and reliably working SCSI bus.
This may even be the biggest problem, because used active
SCSI terminators in particular aren’t sold too frequently,
at least in my experience.

Useful to know:
While the somewhat older 68-pin IBM SCSI HDs like the 4 GB DCAS
(5400 rpm, very silent, but small and not too fast)
that run in SE (single ended) mode (which uses higher
signal voltages and allows only shorter cables)
can be jumpered such that they provide SCSI bus termination,
the more recent LVD/SE IBM HDs do not provide SCSI termination!

So in order to get that 68-pin LVD/SE SCSI HD working I used
a 68-pin internal SCSI cable with a separate active SCSI
terminator for LVD mode attached.

And active terminators for LVD and SE mode seem to differ,
which becomes important if you only have LVD/SE devices and
the SCSI bus thus runs in LVD mode by default (I didn’t read
the specifications here, but I once encountered problems with
certain combinations of terminators and drives).

The 50-pin SCSI HDs usually provide SCSI termination as well.
The narrower 50-pin SCSI bus is a bit slower but more widespread,
and it is frequently used for external devices like CD burners
as well.

The DNES HDs, e.g., were sold in both variants: 50-pin with
optional SCSI termination, and (faster) 68-pin LVD/SE without any
SCSI termination.

If you are considering a 68-pin LVD/SE HD, you should probably
look for such a cable first (or for a second 68-pin drive
that provides SCSI termination), because SCSI controllers
and HDs are usually easier to come by.

(5) You probably need the small HD jumpers to be able to
configure the drive (set the SCSI ID/number on the bus,
activate termination if possible, set automatic spinup at
power-on, etc.), because used SCSI HDs frequently
come with VERY few such jumpers.
These jumpers are not expensive and are usually available
from shops selling electronics (i.e. chips, capacitors etc.),
but they are a must-have.

Besides, the manuals for the IBM SCSI HDs were, at least half a
year ago, still available online through
www.ibm.com/harddrive
although IBM sold its HD manufacturing to Hitachi,
if I remember right
(when accessing this old IBM web link printed on the labels of
the IBM hard drives, one is redirected to the new location).

(6) Be sure to wear some cotton clothing (no synthetic clothing
or wool, and the like), no shoes with rubber soles,
and if possible work in a room where the air isn’t too dry,
when you open up your computer to exchange or plug in parts.
The reason is to avoid electrostatic discharges, which can destroy
your hardware.

Software:

SCSI devices are natively supported by Linux.

It’s Plug’n Play, as already mentioned above.
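
If you want to see for yourself that the kernel has detected
the controller and the new disk, something like the following
should do (the output of course depends on your system):

MyHost:~ # cat /proc/scsi/scsi
MyHost:~ # dmesg | grep -i scsi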

Any more questions? :wink:

However, there is one tool you urgently need to make use of
when plugging in used harddisks, in order to check the health
of these devices:

The command line tool ‘badblocks’ that comes with OpenSUSE,
which tests hard drives for bad blocks.

This tool is accessible only when you’re logged in as root.

‘man badblocks’ at the command line brings up (almost) all
of the information you need. :wink:

Make sure to use NON-DESTRUCTIVE testing mode if you’re
interested in testing your present harddisk after booting
from the OpenSUSE DVD (the drive must be unmounted),
and in that case make sure that there are no power drop-outs/
power failures/outages, otherwise at least some of your data
on that harddisk may be gone …
And know that if you have never experienced problems with your
present harddisk, using badblocks will usually bring
no advantage (and would only mean an additional risk).

With newly purchased used harddisks, however, this is clearly
different, if you want to make sure that your system stays
reliable.

So connect such a newly purchased HD.
Then use Yast to partition and mount that drive for the first time.
Let’s assume that drive is assigned the name ‘/dev/sdb’.
Running badblocks on the device ‘/dev/sdb’ tests
the whole harddisk.
Running badblocks on a partition like ‘/dev/sdb5’
tests only that partition.
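
Sticking with the device names assumed above, the two variants
look like this (the options are the same as in the example below):

MyHost:~ # badblocks -nsv /dev/sdb
MyHost:~ # badblocks -nsv /dev/sdb5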

A typical run on an intact harddisk when no test pattern is specified
will look like this:

MyHost:~ # badblocks -nsv /dev/hda1
Checking for bad blocks in non-destructive read-write mode
From block 0 to 2096451
Checking for bad blocks (non-destructive read-write test)
Testing with random pattern: done
Pass completed, 0 bad blocks found.
MyHost:~ #

Run badblocks on the whole drive with different test patterns
(at least 0x0000 and 0xffff; these patterns DO matter)
to see if there are any bad blocks and, if so, how many
(this can take up to a few hours, depending on the size of the
disk/partition tested and the speed of your SCSI bus).
badblocks displays the numbers/indices of the bad blocks in the
console window, if it finds any.

Here you can use destructive mode (because you don’t have any
data on the new drive yet), which is a bit faster, but not by much.
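
A sketch of such a destructive run with the two test patterns
mentioned above (-w is the destructive write-mode test, -s shows
progress, -v is verbose, and each -t adds one test pattern;
again assuming the new disk is /dev/sdb):

MyHost:~ # badblocks -wsv -t 0x0000 -t 0xffff /dev/sdb

Careful: -w overwrites everything on the device, so only use it
on disks without data you care about.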

And in destructive mode badblocks may also be used to
completely erase the data on HDs that you want to sell.

If badblocks finds many bad blocks, this indicates that the
drive is hardly usable anymore, but it may still serve
for a bit of experimenting, if you like.

After a successful test with badblocks (i.e. no, or no more,
bad blocks reported) you may install a fresh version of Linux
on that drive (which usually also creates a new partitioning of the
harddisk) to get a multiboot system, or you may create
one or more partitions (a swap partition is possible as well,
useful if you’re running short of memory and swap) to use that
harddisk as additional space.
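
For the swap case, a minimal sketch (assuming you created
/dev/sdb2 as the swap partition): initialize and activate it with

MyHost:~ # mkswap /dev/sdb2
MyHost:~ # swapon /dev/sdb2

and add a line like ‘/dev/sdb2 swap swap defaults 0 0’ to
/etc/fstab so that it is used automatically from the next boot on.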

Notice:
badblocks has one feature that is NOT recommended for use with
the SMART (Self-Monitoring, Analysis and Reporting Technology)
harddisks common today, like the IBM HDs mentioned above:
badblocks can produce a list of the bad blocks found, which may
serve as input for fsck or mke2fs to pass that information to
an ext2 (or probably also ext3) file system, which would then
map out these blocks.
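
Just for reference, NOT as a recommendation (see the reason
below), that mechanism would look roughly like this: -o writes
the list of bad blocks to a file, and mke2fs -l reads it back in
when creating the file system:

MyHost:~ # badblocks -nsv -o /tmp/bad.list /dev/sdb5
MyHost:~ # mke2fs -l /tmp/bad.list /dev/sdb5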
Reason:
Experimenting with a used DNES harddisk that initially had many
bad blocks (not the usual case, in my experience) and which
seemed to continuously produce new bad blocks, I found that
badblocks never displayed the number/index of a certain
bad block twice over several runs (here the information
about the block numbers was vital!).
This means that this harddisk with SMART maps out the bad blocks
by itself when they are detected (self-monitoring), and virtually
replaces them with intact spare blocks.
So passing a list of these blocks to ext2 or ext3 would result
in mapping out those intact spare blocks (!), the bad blocks
having already been removed from use by the harddisk itself.
This would not be a disaster, but it would mean an unnecessary
waste of intact hard disk space.

Experimenting with badblocks on that faulty harddisk also made
it clear that the test pattern (i.e. 0x0000 or 0xffff) matters:
over several runs with one unaltered test pattern, the number of
newly detected bad blocks decreased to almost zero as the number
of runs increased, and then changing the test pattern turned up
new bad blocks again.

One issue, however, remains if you want to REMOVE an additional
harddisk again that was installed in the way described,
and it doesn’t matter here whether this is a SCSI or IDE
harddisk.
When you shut down your computer, physically unplug the harddisk,
and turn the computer on again, OpenSUSE doesn’t boot properly
because it misses the partitions created on the temporarily installed
harddisk.
To fix this, you have to start up the repair system from the OpenSUSE
DVD and have it check your hardware.
No command line necessary and almost everything automatic, as of the
last time I used this feature (OpenSUSE 10.2), but it has to be done
in that case.
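
If you prefer to do it by hand instead: boot the rescue system
from the DVD, mount your root partition, and comment out the
fstab entries of the removed disk. Assuming the root partition
is /dev/sda2 (adapt to your system), roughly:

Rescue:~ # mount /dev/sda2 /mnt
Rescue:~ # vi /mnt/etc/fstab

In the editor, put a ‘#’ in front of the lines referring to the
removed disk, save, and reboot.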

External SCSI HDs:

As just described, unlike USB devices, SCSI harddisks may not be
plugged and unplugged without additional provisions.

This is also different from Windows and at least from old versions
of MacOS (I don’t know OS X, which may be different in this
respect, because it is Unix-based).

As mentioned, Linux insists on mounting again a native volume that
was once attached, partitioned and mounted by default at startup.
This may be a prerequisite for making it possible for Linux to
boot from that drive, because there is no bootloader that
automatically detects all the harddisk volumes which may serve
as a boot volume and then chooses a default or asks the user
which one to take (at least GRUB would have to be extended
for this).

So an external SCSI HD containing native Linux partitions
(e.g. a partition with an ext3 file system) will have to be
turned on EVERY TIME BEFORE Linux starts up (and an internal
SCSI drive will have to be jumpered such that it automatically
spins up at power-on!).

Maybe DOS/Windows FAT partitions on such a drive could be
an exception.
I never tried that, but of course you usually won’t boot Linux
from them, or you may run into problems if you
have more than one Linux system available at boot.

So an external SCSI HD in principle only makes sense under
Linux if you don’t have the physical space in your computer
for an additional internal harddisk, but urgently want to
have an additional harddisk from which you can boot.

Further, the most common SCSI interface for external
devices is the 50-pin bus, which is a bit slower,
and this may be noticeable if you want to run Linux from it.

For devices like additional external CD burners the
50-pin SCSI bus is not a disadvantage, because
Ultra-SCSI is fast enough in this case.
Such devices with a SCSI interface are no longer
manufactured (production ended, I think, 7 to 8 years ago),
but you can still get used ones which run properly.

E.g. the more recent Yamaha CD burners, starting with the
CRW 2100 from around the year 2000, provide 50-pin Ultra-SCSI
termination (older harddisks and CD burners had slow SCSI-2).
The later Yamaha drives (3200 and F1) in addition have
extended capabilities for burning high-quality audio CDs,
which may even be used under Linux with the included
applications like k3b etc.!
Look into the hardware manuals still available online
from Yamaha for details like the SCSI bus specs and
features.

Besides, getting an additional, more recent external CD burner
is simple with SCSI: Get an old external SCSI CD
burner, which is usually cheap. Get a more recent internal
SCSI CD burner (there are usually more offers for internal
drives without casing).
Open the external SCSI casing and check the cabling
of the SCSI ID selector that external SCSI casings usually
have
(no two SCSI devices can have the same ID, which usually
ranges from 0 to 6, and at most 7 devices plus the SCSI
controller, usually at SCSI ID 7, are possible per SCSI bus,
of which you can, however, have more than one).
Exchange the two CD burners.
That’s all.

The same is possible with 50-pin SCSI harddisks.

SCSI usually is downward compatible, i.e. you can
have SCSI-2, Ultra-SCSI, SE and LVD/SE devices
attached to one SCSI bus (the only restriction: using rare
pure-LVD devices together with pure-SE controllers or
other SE hardware may result in damage to the hardware).

When thinking of external SCSI devices, note however
that the total cable length (external cable plus the usually short
internal cable in the casings) is limited differently for
different versions of the SCSI bus (SCSI-2, Ultra, SE and LVD)
and is about 1.5 meters for Ultra-SCSI, as far as I remember
(I already have my cables and don’t measure their length daily … :slight_smile: )
For the other SCSI bus types the maximum cable length is greater.
Information on that can easily be found on the web,
e.g. using Google.

Happy multi-booting!
ratzi

It’s quite easy:

Using the partitioning tool of Yast,
I chose that there should be an entry
in fstab, but that these old partitions shouldn’t
be mounted by default (under 11.1).
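
In fstab terms this choice should correspond to a line roughly
like the following (device name, mount point and filesystem type
are just assumptions):

/dev/hda7  /old-home  ext3  noauto,user  0  0

‘noauto’ keeps the partition from being mounted at boot, and
‘user’ lets an ordinary user mount it, e.g. from the desktop.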

Now these old partitions can be mounted anytime
by clicking on the ‘My Computer’ icon on the desktop.

After experiencing some difficulties when trying to
unmount these old partitions again, I’d like
to give some advice:

Mount them, copy anything you like, and then reboot
the system, to avoid problems.

I.e., get your data.

But don’t try to use these volumes in the same way
as the volumes accessible after a regular boot.

Mike