Repositories: can you explain these to me, and compare them to Windows concepts like libraries (DLLs) etc.?
Take your 10 fingers. Type “repositories” into the Google search window. Take your hand and use the mouse. Click on the Wikipedia result:
This will give you
"Repository commonly refers to a location for storage, often for safety or preservation".
And a link relevant for software repositories:
Software repository - Wikipedia, the free encyclopedia.
You may well be lost and new to this, but IMHO you should search a bit on Google and Wikipedia for the already straightforward information before posting this kind of question.
Edit: well, I also forgot to mention: this is the Soapbox part, i.e. a place where you put strong opinions (you just READ the description of a forum before you choose to post in it). So if you post in the WRONG forum, you will get no answer. If you post UNNECESSARY questions in the WRONG forum… well, that will probably be even less productive.
For the layman a repository is a library, and the packages are like books.
A dumpster for software
My advice for a novice: only add a minimum.
Over and above what is installed by default, you should probably only need Packman (add the VLC repository only to install the package libdvdcss, then remove it).
Possibly you may also need an ATI or NVIDIA repo.
IMHO this is an EXCELLENT question, and deserves a detailed answer.
I do not think there is an MS-Windows equivalent to Linux repositories.
My view is that Linux repositories (which in essence are file servers on the internet containing packaged applications for a Linux distribution) were created to help work around a limitation associated with the open-source, free-software nature of Linux.
Because software used in Linux is to a large extent free (not free as in free beer, but free per the Free Software Foundation's definition of free), a developer does not have to write an entire application themselves. Rather, they can reuse parts of other free applications that have already been written.
Hence a given application (call it app-a) may not run by itself. Rather, app-a may need a dozen other applications installed first, BEFORE it will run. app-a may need an executable from app-b, and libraries from app-c, app-d, and app-e. Plus, app-a may need specific versions of app-d and app-e. In turn, app-e may need app-f, app-g, and app-h, while app-d may need app-i, app-k, and app-l. … etc. … These are called dependencies.
So you can see that if you simply try to install app-a without the other applications, either app-a won't install, or if it does install, it won't work.
So to address that problem, app-a thru app-l were all put on one file server, called a repository, which was placed on the internet.
In turn, software package managers were created to look after installations such as app-a's. All one has to do is click on app-a, and the package manager will check its repositories and appropriately install app-a through app-l.
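To make that dependency chain concrete, here is a minimal sketch (in Python, using the made-up app-a … app-l names from the example above, not any real package manager) of how a package manager might walk a dependency graph and compute an install order:

```python
# Toy dependency resolver: given a "repository" mapping each package to
# the packages it depends on, compute an install order in which every
# dependency is installed before the package that needs it.
# (Illustrative only; real package managers like zypper also handle
# versions, conflicts, and downloads.)

REPO = {
    "app-a": ["app-b", "app-c", "app-d", "app-e"],
    "app-b": [],
    "app-c": [],
    "app-d": ["app-i", "app-k", "app-l"],
    "app-e": ["app-f", "app-g", "app-h"],
    "app-f": [], "app-g": [], "app-h": [],
    "app-i": [], "app-k": [], "app-l": [],
}

def install_order(pkg, repo, done=None, order=None):
    """Depth-first walk: dependencies first, then the package itself."""
    if done is None:
        done, order = set(), []
    if pkg in done:
        return order
    for dep in repo[pkg]:          # install every dependency first
        install_order(dep, repo, done, order)
    done.add(pkg)
    order.append(pkg)
    return order

if __name__ == "__main__":
    print(install_order("app-a", REPO))
```

Clicking “install app-a” amounts to running something like this behind the scenes: all eleven packages come out in an order where app-a is last, so everything it needs is already in place.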
It might be worth your while to read up on some basic openSUSE concepts where this and many other things are explained: Concepts - openSUSE
> Repositories: can you explain these to me, and compare them to Windows
> concepts like libraries (DLLs) etc.?
think of a repository as a software warehouse or storage place…
that you can go to and download all the software you need, for no cost…
except for those very few times that M$ gives you software at NO
COST, there is no Windows[tm] equal…
openSUSE repositories are located on university, government and
donated servers all over the earth (see full list here:
http://mirrors.opensuse.org/list/all.html) each GIVING you all the
software you need…
as caf4926 already wrote, do NOT just start adding enabled
repos…most new users need only four…(i’ve been using Linux for
almost 10 years and i get about 99% of my software from only three
repos…but, i like a stable, dependable, reliable system.)
Hm, to stretch that comparison a little further, it would be a dumpster with a very detailed catalogue of who dumped which trash at what time and place.
A little like a dumpster for radioactive waste when you think about it.
Yep, a fairly correct analogy, except our dumpster does not hold disposable trash. There are packages which get disposed of (e.g. old ones replaced by new ones), but they are not “trash” in the sense of what belongs in a dumpster; it is actually a useful pool of stored software.
I think it helps to realise that they’re not specific servers (or the contents of those servers either), but an abstraction above that.
Let’s take the Google search database as an example. It doesn’t exist on any specific computer; in fact, I don’t doubt that you simply couldn’t build a specific computer it would fit on. It’s a network of information that, using very clever jiggery and pokery (those are technical terms), appears, as far as the user is concerned, as a single thing that you connect to.
Now that’s an extreme example, and copies of the SUSE repositories do exist on specific servers. In fact, if you wanted to, there’d be nothing stopping you making your own mirror of SUSE (in other words, making your own version of the repositories) for use on your own network.
When you connect to a repository through the mirror system you are intelligently connected to a server according to how fast it can send the information to you - usually this boils down mostly to geographic location (and I’ve no idea how the SUSE mirror system picks one for you).
So the repository itself is an information abstraction, a set of packages, which is ‘mirrored’ on all kinds of computers around the world, and which ‘sync’ with a master server (or servers) where the new packages are uploaded.
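As a rough sketch of the mirror-selection idea (assumed behaviour, not how the SUSE mirror system actually decides; the mirror names and timings below are invented), a client could simply measure each mirror and pick the fastest:

```python
# Sketch: pick the "closest" mirror by measured response time.
# The mirror list and latencies are invented for illustration; the real
# openSUSE redirector decides server-side, mostly by geographic location.

def pick_mirror(latencies_ms):
    """Return the mirror with the lowest measured latency."""
    return min(latencies_ms, key=latencies_ms.get)

measured = {
    "ftp.uni-example.de/pub/opensuse": 38.0,
    "mirror.example.edu/opensuse": 112.5,
    "opensuse.mirror.example.org": 74.9,
}

print(pick_mirror(measured))   # the lowest-latency mirror wins here
```

Whichever mirror is chosen, the set of packages it serves is the same, which is exactly the sense in which “the repository” is an abstraction above any one server.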
[this post subject to extensive correction by people who have a much better idea what they’re talking about… ;)]
I think you’re confusing the repository with how software actually gets redistributed from it. A repository is a place where the actual software is stored (e.g. a server, or multiple servers working as one). What you’re (mostly) talking about is how that information (software/packages) gets redistributed to other mirrors, which copy the whole content of the master repository (cloning it, and thus becoming repositories themselves) and offer it to users all over the globe. In essence, this makes the whole system more robust and speedier: with many servers around, if one goes down you can pick another, and depending on location (near to or far away from you), downloads can be faster or slower.
But surely the specific instances of the repositories are the mirrors?
The repository itself is an abstraction… in the same way that a website doesn’t necessarily have to exist on any specific computer, it can be distributed?
[Not that I’m arguing… you know lots more about this than I do. Just trying to get my head around it myself… ;)]
In my case I try hard not to get dragged too much into the mud of the details. As long as it works, I’m typically happy.
I confess I prefer the organized Linux repository system for installing applications over the more ad hoc (and massively larger) Windoze way of going to many ad hoc web sites to find specific applications (followed by downloading a .exe, scanning for viruses, controlling firewall access, etc. … etc. …). … but that’s “just me” and my preferences. Perhaps I’ve been using Linux too long, and a large degree of subjectivity has crept into my views.
A couple of URLs on repositories (repos) that I keep bookmarked are:
- official repositories: Package Repositories - openSUSE
- 3rd party repositories: Additional YaST Package Repositories - openSUSE
where, out of that list of many repositories, I typically recommend new users ONLY keep these repos in their software package manager: OSS, Non-OSS, Update and Packman, where:
- OSS: The main repository, open source software only (typically this is close to what is on the DVD)
- Non-OSS: Non free (as in freedom) software, such as Flashplayer, Java, Opera, IPW-firmware, RealPlayer etc
- Update: Repository for official security and bugfix updates.
- Packman: Packman offers various additional packages for openSUSE. It’s the largest openSUSE external repository. Lots of multimedia and other applications are packaged and kept here
The risk of new users adding more than those 4 repos is that there can be dependency problems (not all of which are handled by the software package manager) when additional repositories are added. Typically only average to advanced users can detect and solve such problems. This has IMHO been amplified now that there is a build service, where many users can create their own repositories. I happen to be a BIG FAN of the build service and a BIG FAN of this repository proliferation, but I also have a view that we need to give thought to this, to ensure there are no side effects from the repository proliferation that will hurt our user base.
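To illustrate the kind of dependency problem extra repos can cause (a toy sketch with invented repo and package names, not the actual solver's algorithm): two enabled repos can offer the same library in different versions, and the clash has to be noticed and resolved somehow:

```python
# Toy illustration of a cross-repo version clash. Repo contents and
# versions are invented; real dependency solvers are far more
# sophisticated and consider version requirements, vendors, etc.

ENABLED_REPOS = {
    "oss":   {"libfoo": "2.1"},   # official repo ships libfoo 2.1
    "extra": {"libfoo": "1.4"},   # third-party repo ships an old libfoo
}

def available_versions(pkg):
    """Which enabled repos offer this package, and at what version?"""
    return {repo: pkgs[pkg]
            for repo, pkgs in ENABLED_REPOS.items() if pkg in pkgs}

def has_conflict(pkg):
    """A crude check: the same package offered in differing versions."""
    return len(set(available_versions(pkg).values())) > 1

print(available_versions("libfoo"))
print(has_conflict("libfoo"))   # True: the two repos disagree
```

With only the four recommended repos enabled, such disagreements are rare; every extra repo multiplies the chances of one.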
Let me give a specific example. I connect via the mirrorbrain (which, incidentally, is a really cool name) to a specific mirror. I am downloading from a repository.
Now the mirrorbrain decides that another server would be faster for me halfway through my update (I don’t think it can actually do this - it’s just a hypothetical example), so it reconnects me to a different mirror.
Am I not still, in an important sense, downloading from the same repository?
Somewhat correct, but not entirely. In order for a website mirror (or in this case a repository) to exist in the first place, it must first get its contents from somewhere else, and that somewhere else (the master server) must already exist. So from the very beginning, the content has to exist in a specific location, which all the others copy/clone from, becoming mirrors of it. You obviously cannot mirror information that doesn’t exist yet; there has to be an original source the information comes from, which the others copy.
You are downloading from a clone of the master repository, which is where the mirror got its information in the first place.
I don’t think we’re really disagreeing at all about the process, it’s just a different way of conceptualising it. I will give this more thought…
[Have to say though - the whole thing is a genuine marvel of engineering. Except when it can’t find content.key. Then it’s just rubbish…]
Note (this is not very relevant to the thread, but it is about the information we are talking about) that one cannot clone information in the quantum world if it is in a superposition and its state is unknown at the time one wants to clone it… obviously, you cannot clone information if you do not have its definitive properties or position.
As for “not necessarily having to exist on any specific computer”: not really true. Consider this:
I want to create a website about something, but it’s only a concept in my mind so far, so it does not materially exist (though it can be argued that it already exists, since my thoughts are part of reality, but that’s a different subject).

1. In order to create that site, I need a “master” or “original” computer I do the work on, which is the “specific computer” you talk about.
2. I publish this information (the website) using this “master” computer, where the information comes from in the first place.
3. Others copy it (mirror) and offer it.

Without point 2, others could never copy it, since it would not exist for them in the first place, so they could not mirror anything. It would exist only locally on the “master” computer.
Mostly that’s true - and you’ll note that I did acknowledge in my first post that there is a master server, and that it isn’t a true distributed system.
There are things that exist only by virtue of the network: the internet didn’t begin to exist on someone’s computer and spread from there; it existed by virtue of the connections themselves. But Linux repositories don’t work like that.
I think it’s just a difference in terminology, or conception.
Let’s take an everyday example; the Oxford English Dictionary. I have a copy of the OED on my shelf. It’s two volumes, kinda old, but very handy to have around.
Now the OED had to be a specific instance of the OED in order to start off - that much is true. Some Oxford don thought “I know! I’ll make a dictionary!” and started writing words down (certain elements of this paragraph not checked for strict historical accuracy ;)).
But there is still a meaningful sense in which “The OED” doesn’t mean any given copy of the OED - even, strictly speaking, the ‘master’ one. If said don went barking mad, and changed the definition of every word to “A type of daffodil”, would we really say that the OED held that every word meant that?
If I’m playing scrabble, and I say “this word is in the OED”, I am not referring to my OED (there might be a page missing), or to the master copy (which I’ve never seen), where the OED’s next edition is thrashed out. I’m referring to an abstracted set of information, with copies dotted around the whole world, called the OED, which in practice is actually probably completely impossible to concretely define.
In that sense, I guess we’re both right.
I don’t think you can correctly compare what you said about the OED with mirrors. A mirror is an exact copy of the master, i.e. it holds exactly the same contents as the master offers, and that’s why syncing is needed: to reflect on the mirror what the master offers. If you rip a page out of one OED but leave the other OEDs intact (i.e. if you block the syncing of one specific package on the mirror despite the master offering it), then it can be said to be, for 99.5% (or so), a mirror of the full OED, but there is information missing (that one page). Since mirrors sync with masters, to keep the comparison you would have to glue the missing page back into that OED so it again reflects the information of the original OEDs. Whenever you change information on a mirror so that it no longer reflects the master, like your “A type of daffodil” word change, or in the case of a repository, giving each package a -custom suffix, or keeping the same package names but changing their contents, then it is no longer a mirror of the original, since you have altered the information. It may have come from the original, but it no longer reflects it; the information has been changed/transformed.
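The syncing point can be sketched in a few lines of Python (a toy model with invented package names; real mirrors sync file trees with tools such as rsync): a mirror is only a mirror while its contents equal the master’s, and a sync restores that equality:

```python
# Toy model of master/mirror syncing. "Repositories" here are just
# dicts mapping package file names to contents.

def sync(master, mirror):
    """Make the mirror an exact copy of the master."""
    mirror.clear()
    mirror.update(master)

def is_mirror(master, mirror):
    """A mirror must hold exactly what the master offers."""
    return master == mirror

master = {"app-a-1.0.rpm": "bytes-a", "app-b-2.3.rpm": "bytes-b"}
mirror = {}

sync(master, mirror)
print(is_mirror(master, mirror))        # an exact copy: True

mirror["app-a-1.0.rpm"] = "tampered"    # alter one package on the mirror
print(is_mirror(master, mirror))        # no longer a mirror: False

sync(master, mirror)                    # the "glue the page back in" step
print(is_mirror(master, mirror))        # True again
```

Which is exactly the OED point: the moment a copy diverges, it stops being “the OED”, and only re-syncing with the master makes it one again.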
As for the internet: before those connections could exist in the first place, there had to be a physical source that allowed them to form. Since the internet is an information highway, that information has to be stored on computers, from which it is offered to others. Without the computers in the first place, you cannot build the Net, as there is nothing else that offers the ability to make said connections (unless you developed something similar).