Protecting open source against sabotage

The open source developer community is full of people providing their software skills free of charge, right?

Can someone answer the following question?
What mechanisms are in place in the community to prevent someone with a vested interest and a lot of money from hiring someone to join it and wreak just enough havoc within a fine product like OpenSuse to detract from its reputation versus the competition?

Developers are almost always working in collaboration with others on large projects, so any code is subject to peer review. That, coupled with the source code being available publicly, makes the inclusion of any potential malicious code very unlikely.

Besides, evil rich guys have better things to do, like buying stuff from the sponsors of FOX News or Rush Limbaugh. rotfl!

Another protection is the nature of open source itself. Even if, say, some big evil company were to take over Novell and try to change the direction of development, there would still be many other distros unaffected. Besides, most of the talented OSS developers would quit and get jobs elsewhere. All that takeover money for very little effect.

On Sat, 07 Feb 2009 10:06:01 +0000, deano ferrari wrote:

>> Can someone answer the following question? What mechanisms are in place
>> in the community to prevent someone with a vested interest and a lot of
>> money from hiring someone to join it and wreak just enough havoc within
>> a fine product like OpenSuse to detract from its reputation versus the
>> competition?
>
> Developers are almost always working in collaboration with others on
> large projects, so any code is subject to peer review. That, coupled
> with the source code being available publicly, makes the inclusion of
> any potential malicious code very unlikely.

Also, for larger projects like the Linux kernel, it’s unlikely that a new
developer would be permitted to commit significant changes right away;
what I’ve always read is that you have to start small in that group and
build a reputation over time before major changes are accepted. That
gives the core developers a chance to get to know the new developer.

At least this is what I’ve heard.

Jim

Plus you can never get more than 2 Meg broadband in a secret volcano lair. :wink:

Also the size of the open source commons should not be underestimated. For example:

‘Counting Source Lines of Code (SLOC)’ (http://www.dwheeler.com/sloc/)

estimates that a distribution contains tens to hundreds of millions of lines of source code, representing thousands of developer-years. You’d be hard pressed just to read that amount of code in any reasonable amount of time, let alone do something wicked to it.

On Sat, 07 Feb 2009 11:26:01 +0000, ken yap wrote:

> Also the size of the open source commons should not be underestimated.
> For example:
>
> ‘Counting Source Lines of Code (SLOC)’ (http://www.dwheeler.com/sloc/)
>
> estimates that a distribution has tens to hundreds of million lines of
> source code, representing thousands of developer years. You’d be hard
> pressed to just read that amount of code in any reasonable amount of
> time, let alone try to do something wicked to it.

Well, arguably, you don’t need to go through those millions of lines of
code to do something evil to it; you just need to find that one program
to use as a target…

Jim

Yes, but you would first have to understand open source to realise that most of the code is pretty decoupled, and there’s not much point in sabotaging something peripheral like, say, gtkpod, just to pick an example at random. You’ll just **** off some people for a while, get banned from the dev team, never be trusted anywhere else, and have very little impact.

You really want to concentrate on the number one target, the kernel. Or maybe Apache or Mozilla. But how do you submit something that looks valid but is dangerous? Some anonymous person actually tried this, submitting a patch that would have opened a backdoor in the kernel, but it never got anywhere near Linus’s tree; too many people were looking at the code.

Anybody with the nous to pull off good sabotage is probably already a developer somewhere their talents are appreciated. :slight_smile:

“Developers are almost always working in collaboration with others on large projects, so any code is subject to peer review. That, coupled with the source code being available publicly, makes the inclusion of any potential malicious code very unlikely.”

However, given that a new release is often, if not always, done under time pressure, the opportunities for peer review before the release may be limited, so by the time a problem is found the damage is done and the reputation marred. And that would be true even if peer review continues after release and repairs the damage.

HN
:\

You do have a point in that mistakes can slip past.

An example of a bug that really did slip past: a Debian developer wanted to suppress a compiler warning and modified, in the Debian package only, a piece of code in (I think) the openssl library. Unfortunately he didn’t realise that the line in question fed randomness into key generation. As a result, crypto keys generated with that version of the package were far less random than they should have been, and therefore weak. It took a big effort to get sites to revoke the weak keys. It also affected Ubuntu packages, because they use Debian packages upstream.

But when the dust cleared from that debacle, open source carried on as usual. Lots of people were not affected: non-Debian distros were untouched. It was a good lesson, though; in that case the developer was not knowledgeable enough to understand the effect of his change, so I think maintainers will be more careful with important packages in future.

Here is an example of someone who did hack the Linux kernel, and how fast they caught it and fixed it. Attacker attempts to plant Trojan in Linux - ZDNet.co.uk

I wonder if that is true. It doesn’t have to be as dramatic or difficult as an attack on the kernel, so much as something which causes a lot of user-level irritation which is enough to ruin a good reputation. As I say, even if the well-meaning community rallies to repair it, the trust in a good product would be diminished or even lost.

Not really.

It was found that a Firefox extension (weatherbug) had spyware in it. It is my understanding that weatherbug has removed the spyware, but people are still reluctant to use it.

The virus threat to Linux

Maybe you should read Hacking Linux Exposed. That book addresses a lot of your concerns.

This is exactly the sort of thing one would hope is in place. Peer reviews are part of my professional experience, but their effectiveness depends very much on how they are organised. If, in the open source world, ‘peer review’ simply means that the source is open for all to see, and that it would therefore be difficult for someone to introduce unhelpful code, then I suspect that is not tight enough, and it is a little naive to assume that just enough destructive damage will not happen to cause users to lose trust in the product(s). Remember, we are not talking about dramatically destructive damage.

Of course, it is comforting to understand that the open source world has a kind of self-regulating quality control, but that may not be enough in itself to pre-empt loss of trust.
:\

On Sat, 07 Feb 2009 14:56:01 +0000, hnimmo wrote:

> Of course, it is comforting to understand that the open source world has
> a kind of self-regulating quality control, but that may not be enough in
> itself to pre-empt loss of trust. :\

Well, it’s hardly going to be regulated from outside, now is it? :wink:

There are plenty of controls in place for the major projects; rather than
ask here, though, why not ask on some of the lists having to do with
those major projects? Or search the archives of those lists, which
probably have lots of discussion about how to prevent someone from
“tainting” the project or causing sabotage.

Jim

On Sat, 07 Feb 2009 13:16:03 +0000, hnimmo wrote:

> However, given that a new release is often, if not always, done under
> time pressure, the opportunities for peer reviews before the release may
> be limited, so that the damage is done, the reputation is marred. And
> that would be true even if the peer reviews continue after release to
> repair any such damage.

That can happen in closed-source development models as well; in fact it’s
probably more likely in closed-source development, since there are fewer
eyes on the code.

Jim

Of course it can happen in a closed source environment as well. I may have a strange view of things, but isn’t there a difference between the way a closed source project team is likely to be put together and the way an open source project comes together? And I can see that the field of questions is rather large…and getting larger…altogether an interesting subject!

On Sat, 07 Feb 2009 21:46:01 +0000, hnimmo wrote:

> Of course it can happen in a closed source environment as well. I may
> have a strange view of things, but isn’t there a difference between the
> way a closed source project team is likely put together and the way an
> open source project comes together?

I don’t know that I’d say the teams are built differently - to program on
a closed source product IME you have to go through an interview process
and be accepted as a member of the team. You have to prove that you have
skills and knowledge that apply to the product being developed.

In the big OSS projects, you have to be vetted as well. The process is a
bit different because you can start by contributing code that is
scrutinized before it is committed, so you do go through a sort of
“interview”.

OSS has been described to me as a meritocracy: you have to prove
yourself.

Jim

But once it’s established who did it and whether it was a genuine mistake or sabotage, then the attention goes to the person responsible, not the product. And the effect on open source would be temporary. People forget quickly. So the attacker has gained very little and would never get to work with an OSS project again. So there’s little motivation to do it.

Of course unintentional bugs and backdoors do exist, in both open and closed source. In at least one case a backdoor was discovered in closed source only after it was released as open source, so a lack of review does let things slip past, and openness helps catch them. Nobody can give you absolute guarantees; it’s a balance of probabilities. I think, though, that the open source community has a better chance of preventing bugs and sabotage than closed source.