On Sat, 07 Jan 2012 23:06:03 +0000, stakanov wrote:
> Thanks for this statement. That said, I do not feel - or rather, read -
> that users complain because they expect 100%.
I think users “expect” 100% for certain things - but those certain things
are specific to their own configurations.
They expect, for example, that if they buy a webcam, it’s going to work
with a minimum of fuss - preferably no fuss.
The focus is on their experience, not the collective experience of all
users.
And therein lies the issue.
I recently bought a new laptop, which I needed in order to do contract
work. I plugged in a Live USB flash drive to the floor demo unit, and it
booted, so I bought the laptop. I was surprised to find that Bluetooth
worked (and in fact, I get flawless audio from my phone’s Bluetooth
headset paired with the laptop).
The integrated webcam worked out of the box. A pleasant surprise.
Even the HDMI output worked out of the box. A much hoped-for surprise
(kinda hard to check that without an HDMI AV device to plug into in the
store), because that means I now have an alternative for watching video on
our home theater when our son takes his PS3 to his girlfriend’s house.
With VirtualBox, I can even watch Netflix on the big screen.
But what I wanted to have work were the basics: Video driver working,
wifi working, ethernet working, audio working. Plenty of CPU and memory
were my main requirements. Blu-ray would have been nice, but the laptop
didn’t have a Blu-ray drive as sold.
> There are, to my understanding, a number of core components that a
> reasonable user of the KDE SC “expects” to work, such as mail; the
> Internet browser, being so utterly central, is also expected to work.
> Another point is that non-exotic but frequently sold USB hardware such
> as printers (and the printer queue) is expected to work.
That’s one thing the Linux ‘marketing’ people need to do - they need to
get some sort of “Works with Linux” branding campaign going. If a
printer doesn’t work, but it’s not expected to work because the
manufacturer has either done nothing or actively worked to prevent the
hardware from working with Linux (such as not releasing critical specs
when asked for them), that’s not really something one can blame the
developers of Linux software for. That’s entirely on the manufacturers.
> However, users in general, from what I see, do not really “complain”
> about things not being 100%. I think they would accept 80%, but they
> sometimes find in core components a 0% to 50% (where 0-50% means the
> program at least dumps an error message). Log in, log out errors…
I think users need to be introduced to the Pareto Principle (the “80/20
rule” as most people know it). This is a key reason why no software
(not commercial software, not open source software, not the software a
person writes sitting at home that is never actually released - unless
it’s trivially simple software) ever reaches 100%. The effort to get
that last 20% increases exponentially. In commercial terms, the cost
increases exponentially.
That’s the nature of software development (and indeed, most project work).
> Very visible, and in the end maybe worth correcting before shipping?
> This is not because I think things should be perfect. I am not (I know,
> that is shocking ;)), so why would I expect them to be?
Visible to whom, though? If a pool of testers sees a failure to boot 1
time in 100, is it the testers’ fault, or the developers’ fault, that the
bug is pushed to post-release? If after release that fault shows up in 20
out of 100 boots, that points to a lack of diversity in the testing pool.
Since OSS testing pools are typically made up of people who volunteer
their time, does that not point ultimately to the users of the software
not being willing to step up, try pre-release versions, and then report
issues when they have them?
It’s very easy post-release to employ hindsight (which is nearly
always 20/20) and say “this should have been fixed”. If the developers
had known during the development and testing phase what users were
going to experience 6 months after release, then sure, the release would
have been better.
But if those who actually have those experiences don’t test and report
prior to release, then the developers aren’t going to have that insight.
They’re not prognosticators. They’re developers. They might be able to
foresee some situations that they can pre-emptively fix, but they aren’t
going to see every possible permutation when dealing with millions of
lines of code. They can’t test every use case, which is WHY it’s
incumbent on the users of software (OSS or closed source) to be involved
pre-release. That’s why beta programs exist.
So now we’ve had this discussion post-12.1 release. Just like it has
been had after every single release of every single distribution.
And it’ll happen again after 12.2 comes out, and 12.3, and 13.1, and
13.2, ad infinitum.
Because there will always be those who don’t understand how software
development and testing works. (And I’m not directing this comment at
you, stakanov - it’s directed at the ‘ether’ in general. You may well
understand all of this stuff; you may well be a software developer, in
fact - I don’t know.)
There will also always be those who not only don’t know but don’t care -
yet who nevertheless have unrealistic expectations regardless of their
own ignorance of the realities of development. Such is life.
Jim
Jim Henderson
openSUSE Forums Administrator
Forum Use Terms & Conditions at http://tinyurl.com/openSUSE-T-C