The error messages you got when booting failsafe suggest that the initrd, or the boot arguments failsafe uses to reduce certain kernel functionality, resulted in the Logical Volume Manager not starting up properly. This may be due to an error in menu.lst, or it may just be a by-product of failsafe when using LVM, which at worst means that failsafe is not available (not a big deal).
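If you want to test that theory, you can try bringing up LVM by hand from the failsafe prompt or a rescue shell. These are the standard LVM2 commands (nothing here is specific to your system), though whether this cures your particular failsafe problem is only a guess:

  lvm pvscan           # look for LVM physical volumes
  lvm vgscan           # look for volume groups
  lvm vgchange -a y    # activate the logical volumes so the /dev/dm-* devices appear

If the volumes activate cleanly that way, the LVM setup itself is fine and the problem is confined to how failsafe is booting.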
Having said that, you have an important decision to make. Somehow openSUSE was installed using Logical Volume Management (this can only happen at the user's request). The dm-0, dm-1, dm-2 devices come from Device Mapper, the kernel facility that maps LVM "logical" volumes (i.e., partitions) onto the physical partitions on the disk. The advantage of LVM is that it lets you change the amount of disk space allocated to a volume on the fly, as opposed to resizing a physical partition, which requires offloading the partition's data, moving partitions around, and reloading the data. Consequently LVM is principally used in server or multi-user environments for dynamic storage allocation across multiple disks. It is used on desktop systems by advanced users with complex or frequently changing disk setups. However, much of LVM's advantage on small systems has been eroded by the relatively recent advent of very large, inexpensive disk drives: when the cost per GB is so low, physical partitions can easily be sized with ample room for data growth.
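To make that on-the-fly resizing concrete, growing a volume under LVM2 looks roughly like this; the volume group name "system" and volume name "home" are only placeholders for illustration, and the filesystem step assumes ext3:

  lvextend -L +10G /dev/system/home    # add 10GB to the logical volume
  resize2fs /dev/system/home           # grow the ext3 filesystem to fill the enlarged volume

Shrinking is also possible but riskier, since the filesystem must be shrunk before the volume is reduced.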
On your 200GB disk, you have 120GB in a single XP NTFS partition and 80GB allocated to the Linux LVM, of which ~50GB has been assigned to 3 logical volumes; the remaining ~30GB is there should you need to increase the size of the root volume (currently ~27GB) or the /home volume (currently ~21GB). Typically, there would instead be 2 physical partitions, with root approximately the same size and /home taking the remaining ~50GB. If space were ever needed beyond the 80GB, it would have to come from the 120GB XP partition (or a 2nd disk) - requiring very careful management of the logical volumes while physically changing the underlying partition - more work than if you didn't have LVM.
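You can check those numbers yourself, as root, with the LVM reporting tools (the exact output will differ from machine to machine; this is just where to look):

  fdisk -l /dev/sda    # the physical partitions, including the NTFS and LVM ones
  pvs                  # the physical volume(s) handed to LVM (your ~80GB partition)
  vgs                  # the volume group, with its total and free (~30GB) space
  lvs                  # the logical volumes: root, /home and swap, with their sizes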
The short of it is that on your system you are gaining nothing from using LVM, but you are paying the cost of additional complexity. If you already had experience with volumes, or there was value in your understanding LVM (for example, in your job), or you plan a considerably more complex system setup, then LVM would make sense. If none of that applies, then your decision is either to learn enough about LVM to handle it effectively (for example, notice sda2 in your fdisk output - that is a separate boot partition, because booting from an LVM volume is problematic, just one of the little LVM-related details you will encounter) or to convert back to traditional partitioning.
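A quick way to confirm the role of sda2 on your machine (the output will vary; this is just the idea):

  df -h /boot             # shows which device /boot is mounted from
  grep boot /etc/fstab    # shows how it is set up to mount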
Conversion can be approached in one of two ways. You can connect a second disk, create 2 partitions on it, and copy dm-0 and dm-1 to those partitions; then on the first disk delete the 2 existing Linux partitions and re-create 3 physical partitions in their place (the 3rd being for swap); then copy the data back from the second disk. If you aren't experienced doing this kind of thing, this will be a challenge.
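If you do go the second-disk route, the data copy itself would look something like this, assuming the new disk appears as /dev/sdb with two freshly formatted partitions (all the device names and mount points here are illustrative only):

  mkdir -p /mnt/newroot /mnt/newhome
  mount /dev/sdb1 /mnt/newroot
  mount /dev/sdb2 /mnt/newhome
  # archive-copy the root and /home volumes, preserving ownership and permissions
  rsync -aH --exclude='/proc/*' --exclude='/sys/*' --exclude='/dev/*' --exclude='/mnt/*' --exclude='/home/*' / /mnt/newroot/
  rsync -aH /home/ /mnt/newhome/

You would then still have to repartition the first disk, copy everything back, and fix fstab and the bootloader to point at the new partitions.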
The second approach is to back up your openSUSE user data to the external drive, reinstall openSUSE from scratch onto new physical partitions, then migrate the user data back in from the backup. This is probably the easier of the two.
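For that route, the user-data part can be as simple as an rsync of /home to the external drive; /media/external below is just a placeholder for wherever your external drive mounts:

  rsync -aH /home/ /media/external/home-backup/    # back up, preserving ownership and permissions
  # ...reinstall openSUSE on ordinary partitions...
  rsync -aH /media/external/home-backup/ /home/    # restore after the reinstall

Note that if the external drive is FAT or NTFS formatted, ownership and permissions will not survive the copy; packing /home into a tar archive first is safer in that case.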
Or you leave it be with LVM. Important note: while Linux LVM is mature and stable, it does introduce an additional, complex I/O layer between the filesystem and the disk. Like any software, it can break, so backups are absolutely critical when using LVM.
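One small consolation if you keep LVM: the unallocated ~30GB makes snapshots cheap, and a snapshot gives you a frozen, consistent view of a volume to back up from while the system stays in use. A rough sketch, again with made-up volume group and volume names:

  lvcreate -s -L 5G -n home_snap /dev/system/home    # temporary 5GB snapshot of the /home volume
  mkdir -p /mnt/snap
  mount -o ro /dev/system/home_snap /mnt/snap
  # ...run your backup against /mnt/snap...
  umount /mnt/snap
  lvremove /dev/system/home_snap                     # discard the snapshot when done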
So, again, you have a decision to make. Any questions, don’t hesitate to ask.