Hi. This is something that I’ve been wondering about for a while now. I wasn’t sure what forum to put it in because it deals with no specific program. I suspect this is the wrong one so go ahead and move it.
On more than one occasion I have installed something, later tried to remove it, and run into a dependency conflict involving something that was already on the system before I ever installed the package I was now trying to remove.
A confusing mouthful, I know.
Anyway, I hope you can understand what I’m asking.
Why does this happen and what does it signify? What are the consequences of removing the newer package despite the conflict when this type of thing occurs?
My natural reaction would have been that what went in last could come right back out with no problem.
Is it possible that this only refers to functionality the newer software added to the already-installed programs, which would be lost, or will removing it somehow break a system that wasn’t broken before the newer packages went in?
Hmm… I must admit I have some difficulty understanding your post.
But I think your problem is related to hard dependencies (obligatory: they must be there if you want the program to run correctly) versus soft dependencies (recommended: they add some functionality to the program, but it can run without them).
I think you have installed a package which is a soft dependency of another one. When you try to remove it, the package manager tells you that this “soft” dependency will be broken, am I right?
Btw, if you have any dependency problems, try “zypper ve” (verify) to fix your RPM system.
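A quick way to see the hard/soft difference described above is to query a package directly. This is just a sketch: `somepackage` is a placeholder for whatever you installed, and the `--recommends` queries need a reasonably recent rpm/zypper.

```shell
# Hard (required) dependencies of an installed package:
rpm -q --requires somepackage

# Soft (recommended) dependencies -- the package can run without these
# (needs a newer rpm that understands weak dependencies):
rpm -q --recommends somepackage

# zypper can show both, on versions where info accepts these options:
zypper info --requires --recommends somepackage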
User installs package X then without changing anything else in the system decides to remove same package X and gets dependency problem concerning another package already existing in the system.
I perfectly understand your frustration, but openSUSE is not Ubuntu/Debian…
SUSE’s dependency handling is a freaking bad joke: there’s no autoremove, but there are these stupid patterns which often install a lot of unnecessary stuff, or remove half of the system without you even noticing…
For example, try to remove xgl and xgl-hardware-list and you will see. Sometimes it removes Compiz, or wants to install KDE libs, and that happens with almost everything… very strange behaviour…
Spyhawk’s soft-deps explanation is the one that makes the most sense. But if you can give an example, we could confirm it.
$ LANG=C zypper rm xgl
Reading installed packages...
The following package is going to be REMOVED:
xgl
After the operation, 5.0 M will be freed.
Continue? [YES/no]: n
$ LANG=C zypper rm xgl-hardware-list
Reading installed packages...
The following packages are going to be REMOVED:
xgl-hardware-list xgl
After the operation, 5.0 M will be freed.
Continue? [YES/no]: n
# rpm -qR xgl | grep xgl-hardware-list
xgl-hardware-list
# rpm -qa "*compiz*" | sort
compiz-0.7.4-31.1
compizconfig-settings-manager-0.7.4-28.1
compiz-fusion-plugins-main-0.7.4-28.1
compiz-kde-0.7.4-31.1
compiz-manager-0.0.1_git080201-24.1
libcompizconfig-0.7.4-28.1
python-compizconfig-0.7.4-28.1
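To see why the solver drags xgl along with xgl-hardware-list, you can ask the tools directly which packages depend on the one you are removing, and preview the removal without committing to it. A sketch, assuming an openSUSE system with rpm and zypper available:

```shell
# Which installed packages require xgl-hardware-list?
rpm -q --whatrequires xgl-hardware-list

# Dry-run the removal to see everything the solver would take with it,
# without actually removing anything:
zypper rm --dry-run xgl-hardware-list
```

The dry run is the safe way to answer “what else will this remove?” before saying yes to the real thing.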
How does a new user formulate a dependency strategy? I just installed Amarok 2.2.1 and I was faced with a bunch of choices. I chose install with vendor change when possible.
Is this right? A few things concern me:
I uninstalled about 20 MB and installed roughly 300-400 MB worth of dependencies just to get a new version of the same program. My drive is going to fill up fast at this rate.
It doesn’t seem reasonable to take half an hour to install a 20 MB program…
I don’t know what I’m doing and installing a new version of amarok seemed to affect every other program in the OS…a bit worrisome lol…
I’m learning about strong/weak dependencies, but don’t understand how to identify them in the installation process.
One more question, I see a lot people being told to post their addresses to get their money back, can I get my money back if I didn’t pay anything?
The thing you have to understand about Linux is it relies almost entirely on dynamic, rather than static linking.
So if I install a new system and then add a new program, it might well pull in hundreds of megs of dependencies. But once those shared libraries are in place, they reduce the chance that future programs will need to pull in hundreds of megs of their own.
Eventually, you hit more or less equilibrium, where most things you install will only require the program itself.
Unfortunately, the more libraries you have installed the more likely you are to get to ‘rpm hell’.
Apparently, that used to be an utter nightmare - and a regular one. It’s never happened to me, and I haven’t seen people complaining about it (in fact zypper’s dependency solver is a SAT solver, which is guaranteed to find a valid solution whenever one exists - that makes a big difference).
Moral of the story? Allocate 30 gig for root, and I will eat my duvet cover if you manage to fill it up through normal use. 20 gig is probably fine…
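The dynamic-linking point above is easy to see for yourself: every binary on the system declares the shared libraries it expects to find at run time, and `ldd` lists them.

```shell
# Each line printed is a shared library resolved at run time,
# rather than code baked into the binary at build time:
ldd /bin/ls
```

Those `.so` files are exactly what the package manager is counting and sharing when it resolves dependencies.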
In your case, you’re not dealing with just dependencies. You are also dealing with repo dependencies or conflicts.
Take xine, for example. In openSUSE 11.2 xine is called libxine1 both in the oss repo and in Packman. Amarok is another example of a package that exists under the same name in more than one repo, yet you can choose to have xine and amarok from Packman (in fact it is recommended). If you pick just one, say amarok, then you may get a message about a vendor change for libxine1.
Now we get into the fun stuff. Package dependencies and conflicts over repositories. Now where this gets tricky is if package-A depends on package-B, but there is another version of package-B on another repo. Depending on how the rpm script was done, the pre-requires, requires, and obsoletes, then this can cause dependencies and nightmares.
At this point you have 2 options. One is to simplify your repositories down to something you can manage. The other, which I use a lot, is the Versions tab in YaST > Software Manager. This tab shows you not just the version but also which repo it comes from. When a package is available from more than one repo, there is more than one radio button, and you can tick the one you want to solve the dependency issue.
Doing this requires some trial and error, some patience, and lots of time. It is an excellent way to get to know the system, rpm, zypper and package management in general. **Be warned though: getting it wrong can end up in a broken system. It can even uninstall your system. You have been warned.**
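For those who prefer the command line, roughly the same information as the Versions tab is available from zypper. A sketch only: the repo alias `packman` below is an assumption, and yours may be named differently (check `zypper lr` first).

```shell
# List enabled repositories and their aliases:
zypper lr

# Show every available version of amarok and which repo provides each
# one (the command-line equivalent of the Versions tab):
zypper se -s amarok

# On newer zypper versions, pull a package from a specific repo:
zypper in --from packman amarok
```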
So this is why one of the new user guides advises keeping only four repos enabled when possible? I should probably pay more attention to what I read:)
On the topic of completely messing up my OS through user error, I have a question about making a backup.
If I understand this correctly, my “system” (the OS, all the applications it runs, and the files that support those applications) is all contained on the root partition?
So if I have the root partition backed up, I can restore the system to its former glory, with all the applications, codecs, customized desktop settings, etc.?
If so, this seems pretty smooth and pain-free as long as I make backups. /home should be unaffected by any user error when manipulating the root partition?
I have about 24 hours of experience with Linux, so bear with me:) I pay my mechanic to winterize my blinker fluid and I own a left handed baseball bat…I’m not the sharpest of wits…
Normally /home is what you’re trying to protect, because it contains the users’ files, and the so-called dotfiles. Dotfiles (files that start with a dot, Unix for ‘hidden’) appear in the users’ home directories and their subdirectories, and contain configuration information. You can see them in the terminal with ‘ls -a’, or in the GUI by selecting show hidden files.
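A minimal demonstration of that dotfile behaviour, safe to try in a throwaway directory (the paths below are just examples):

```shell
# Create a scratch directory with one hidden and one visible file:
mkdir -p /tmp/dotfile-demo
touch /tmp/dotfile-demo/.config-example /tmp/dotfile-demo/visible.txt

# Plain ls hides the dotfile:
ls /tmp/dotfile-demo       # shows only visible.txt

# ls -a reveals it:
ls -a /tmp/dotfile-demo    # shows .config-example too
```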
Normally, programs themselves are easy to replace. If you’re installing something with complicated configuration built into its file structure, like databases, wikis, whatever, you may want to install it in /opt, and make /opt a separate partition - that can facilitate backing up.
But as a general rule, reinstalling stuff is the easy bit - there are even ways to automate getting a list of all your installed packages, then feed said list into your package manager upon reinstall.
The hard part is preserving the data - including user settings - and that’s mostly on /home.
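The package-list trick mentioned above can be sketched like this (the file name is just an example, and the list lives in /home so it survives a root reinstall):

```shell
# Save a sorted list of installed package names:
rpm -qa --qf '%{NAME}\n' | sort > ~/package-list.txt

# After reinstalling the OS, feed the list back to zypper:
zypper in $(cat ~/package-list.txt)
```

Some packages in the old list may no longer exist in the new release’s repos, so expect to confirm or skip a few during the reinstall.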