Non-free programs - are they tested for security?

After thinking a little and doing some research, I still don’t know whether non-free programs are tested by the openSUSE security team. Not tested in the sense of reading their code (because that’s impossible), but by looking for suspicious activity from these non-free programs, e.g. suspicious changes in the kernel.
It seems logical that the maintainers wouldn’t make a non-free program available for openSUSE without knowing whether it does nasty things, so it seems to me that some general testing is involved. How accurate am I?

Regards,
Junior

On 2015-04-22 03:06, Junior s2 Camila wrote:
>
> After thinking a little and doing some research, I still don’t know
> whether non-free programs are tested by the openSUSE security team.
> Not tested in the sense of reading their code (because that’s
> impossible), but by looking for suspicious activity from these
> non-free programs, e.g. suspicious changes in the kernel.

AFAIK, no.

The user community does, in a sense.

> It seems logical that the maintainers wouldn’t make a non-free program
> available for openSUSE without knowing whether it does nasty things,
> so it seems to me that some general testing is involved. How accurate
> am I?

A non-free package, i.e. one that is available only in binary form, is only redistributed per its license; it is not “maintained”. Flash, for instance.

Besides that, there are other packages whose source code is published but carries a non-free license, and these are therefore put in the non-oss repo as well.
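
For what it’s worth, anyone can check which license and vendor an installed package carries, which is usually enough to tell a binary-only redistributed package from an open-source one. A minimal sketch in Python, assuming only the standard “rpm” tool; “flash-player” below is just an illustrative package name:

#!/usr/bin/env python
# Show the license and vendor recorded in the RPM database for a package.
# Sketch only: it assumes the standard `rpm` command is on the path.
import subprocess
import sys

def package_info(name):
    """Return rpm's name/license/vendor line(s) for an installed package."""
    fmt = '%{NAME}: license=%{LICENSE} vendor=%{VENDOR}\n'
    try:
        out = subprocess.check_output(['rpm', '-q', '--qf', fmt, name])
    except subprocess.CalledProcessError:
        return '%s is not installed\n' % name
    return out.decode('utf-8', 'replace')

if __name__ == '__main__':
    # 'flash-player' is only an example; pass any package name you like.
    pkg = sys.argv[1] if len(sys.argv) > 1 else 'flash-player'
    sys.stdout.write(package_info(pkg))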


Cheers / Saludos,

Carlos E. R.

(from 13.1 x86_64 “Bottle” (Minas Tirith))

I would not count on that.

We do know that flash has to be updated for a security problem every month. Admittedly, it usually runs in user mode.

When I use the Nvidia drivers, I am in effect trusting Nvidia, and that driver does affect kernel code and code running as root. Whether Nvidia warrants that trust, I cannot tell. If a serious security problem happens, it will cause bad press for Nvidia and a loss of profits; I can only hope that is enough incentive.
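
For the curious, the kernel itself records when a proprietary or out-of-tree module such as the Nvidia one has been loaded, via its taint flags. A small sketch of that kind of spot check; the bit values follow the kernel documentation and should be treated as an assumption to verify against your own kernel version:

#!/usr/bin/env python
# Spot-check kernel taint: has a proprietary or out-of-tree module
# (the Nvidia driver, for example) been loaded?  Sketch only; the bit
# values below follow the kernel documentation and may vary.
import glob
import os

TAINT_PROPRIETARY = 1      # 'P': a non-GPL (proprietary) module was loaded
TAINT_OOT_MODULE = 4096    # 'O': an out-of-tree module was loaded

def kernel_taint():
    """Return the kernel's global taint bitmask."""
    with open('/proc/sys/kernel/tainted') as f:
        return int(f.read().strip())

def tainting_modules():
    """Map loaded module names to their non-empty per-module taint flags."""
    tainted = {}
    for path in glob.glob('/sys/module/*/taint'):
        with open(path) as f:
            flags = f.read().strip()
        if flags:
            tainted[os.path.basename(os.path.dirname(path))] = flags
    return tainted

if __name__ == '__main__':
    taint = kernel_taint()
    print('kernel taint value: %d' % taint)
    if taint & TAINT_PROPRIETARY:
        print('  a proprietary (non-GPL) module has been loaded')
    if taint & TAINT_OOT_MODULE:
        print('  an out-of-tree module has been loaded')
    for name, flags in sorted(tainting_modules().items()):
        print('  module %-20s taint flags: %s' % (name, flags))

Of course, this only tells you that such a module is loaded, not whether it behaves.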

That’s pretty much the risk we take.

If I run open source software, the situation isn’t perfect either. For example, I use “sendmail”, which runs as root. I don’t expect the openSUSE team to guarantee the security of sendmail; that’s really up to the sendmail developers. I do expect openSUSE to provide security updates when they become available. The situation is better than with Nvidia because the source is open and there are people who spend time with it. But there are still no guarantees, and most users are not checking the source for problems.
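
Checking whether such updates are pending is cheap, by the way. A minimal sketch that simply wraps zypper’s own patch listing; the output layout differs between zypper versions, so it just prints the table as-is:

#!/usr/bin/env python
# List pending security patches by shelling out to zypper.
# Sketch only: output columns vary between zypper versions, so the
# table is printed verbatim rather than parsed.
import subprocess
import sys

def pending_security_patches():
    cmd = ['zypper', '--non-interactive',
           'list-patches', '--category', 'security']
    try:
        return subprocess.check_output(cmd).decode('utf-8', 'replace')
    except OSError:
        sys.exit('zypper not found - is this an openSUSE/SLE system?')
    except subprocess.CalledProcessError as err:
        sys.exit('zypper failed with exit code %d' % err.returncode)

if __name__ == '__main__':
    sys.stdout.write(pending_security_patches())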

This might be comforting, or scary if you are not aware of how bad it is: http://www.theregister.co.uk/2015/02/18/zemlin_talks_core_infrastructrure_initiative/ The Internet of Things is not helping. I was watching something from a white-hat conference where some people had set up a sort of rescue organization; they were terrified of what they had seen developers and companies ship, so they offered free help and training. It will be like unpatched XP machines all over again if security does not get more focus.

IMO

  • Security is a moving target. Both from the black and white hat sides… A flaw is not generally recognized until a vulnerability is identified and even then maybe not until an exploit (demo) has been created. To that end, there are probably plenty of latent flaws that have not yet been exposed.
  • Computing is complex. There are many ways we abstract simpler, understandable ways to look at how computing works, but computing is still basically complex and subject to error.
  • For approximately 10 years now, Agile software development has been the standard approach to successful development. This philosophy emphasizes economic principles, making software cheaper to deliver at the cost of trying harder to make released software as perfect as possible. So understand that vendors actually know, and accept, that their software contains numerous flaws.
  • The corollary of the above (Agile software development) is that today the focus should be on how responsive a vendor is in addressing flaws, so one of the first places I always look when considering critical LOB software is its bug history. How many bugs are still open? How long does it take to fix bugs? What are the bug severity ratings? (A rough sketch of pulling such numbers from a bug tracker follows this list.) For instance, today VBox has still not fixed a highest-severity critical bug that crashes it when running on Windows (it does not affect anyone else, including deployments on openSUSE). It affects an estimated 70% of all VBox deployments, and a week later there is still no resolution (or even a progress report, for that matter).
  • While incentives and methods to detect and prevent software compromise have been evolving glacially, the economic and political incentives to discover and create new exploits are exploding geometrically, so much so that it may not even be measurable. So hunker down and defend.
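
As mentioned in the list above, here is a rough sketch of pulling the age of a vendor’s open bugs out of a bug tracker. It assumes a Bugzilla-style REST endpoint; the base URL and product name are placeholders, not any specific tracker’s real values:

#!/usr/bin/env python
# Rough gauge of vendor responsiveness: how long have its bugs been open?
# Sketch only: assumes a Bugzilla 5-style REST API; BASE_URL and PRODUCT
# are placeholders, not a real tracker or product.
import json
import urllib.request
from datetime import datetime, timezone
from urllib.parse import quote

BASE_URL = 'https://bugzilla.example.org/rest/bug'   # placeholder
PRODUCT = 'SomeProduct'                              # placeholder

def open_bug_ages(base_url=BASE_URL, product=PRODUCT):
    """Yield (bug id, severity, age in days) for unresolved bugs."""
    query = ('?product=%s&resolution=---'
             '&include_fields=id,severity,creation_time' % quote(product))
    with urllib.request.urlopen(base_url + query) as resp:
        data = json.loads(resp.read().decode('utf-8'))
    now = datetime.now(timezone.utc)
    for bug in data.get('bugs', []):
        created = datetime.strptime(bug['creation_time'],
                                    '%Y-%m-%dT%H:%M:%SZ')
        created = created.replace(tzinfo=timezone.utc)
        yield bug['id'], bug['severity'], (now - created).days

if __name__ == '__main__':
    for bug_id, severity, age in open_bug_ages():
        print('bug %-8s %-10s open for %4d days' % (bug_id, severity, age))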

Somewhere in a very old thread in these Forums we discussed how the software we use is based on a Web of Trust: we trust the integrity of the people and systems that create and distribute the software we use. The system is not based largely on verification, because tools to do that economically don’t really exist at large scale, although some (e.g. the mobile device stores) are trying small things which, unfortunately, are currently easy to evade if someone has malicious intent.
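
That said, one piece of verification we do get more or less for free is the package signature check against the keys already imported into the RPM database. A minimal sketch wrapping the standard “rpm --checksig”; the .rpm paths are whatever you pass on the command line:

#!/usr/bin/env python
# Verify digests and GPG signatures of downloaded .rpm files.
# Sketch only: wraps `rpm --checksig`, which checks each file against
# the GPG keys already imported into the RPM database.
import subprocess
import sys

def check_signature(rpm_path):
    """Return True if rpm reports valid digests/signature for the file."""
    return subprocess.call(['rpm', '--checksig', rpm_path]) == 0

if __name__ == '__main__':
    # Pass one or more .rpm files on the command line.
    failures = [p for p in sys.argv[1:] if not check_signature(p)]
    if failures:
        sys.exit('signature check FAILED for: %s' % ', '.join(failures))
    print('all given packages passed the signature check')

It does not prove the code is benign, only that it is the code the packager signed, which is exactly the Web-of-Trust point.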

TSU