Not a serious suggestion, but not a totally frivolous post either. I’m quite interested in the mechanics of how opensource software evolves; how different, ostensibly competing projects share code and cross-fertilise ideas. I think it’s an integral part of the way GNU userspace works - variety tends gradually to perfection.
But interestingly, the one place this is completely avoided is the Linux kernel, under its ‘benevolent dictator’, Linus Torvalds.
Distros sometimes configure kernels in different ways for different uses - servers or laptops, media centres or firewalls. Conversely, there are different *nix flavours or Unix-alikes, such as BSD or OpenSolaris.
But there is clearly a gap in between these two duplications of function. You could have another team, developing another kernel, that is still supposed to be Linux. In theory, I suppose there’s no reason that it couldn’t be compatible with all of the same operating system calls, although whether you could achieve such clean separation as to make it possible to drop a different kernel implementation into a ‘vanilla’ distro is a different matter.
As Linux grows, do you think this will eventually happen? Would it foster innovation, and play to our strength? Is it a needless division of talent, and a waste of resources? Is it inevitable anyway, because of the factious nature of opensource? Or is it simply the case that the kernel uniquely, as the very core of the operating system, must remain under the unitary control of a single, small group, while everything else divides, evolves, and prospers?
and you think other systems like OpenBSD don't have a 'benevolent dictator'? Think again. I think you need a good fiery flaming by Theo de Raadt and his arrogance to really see what a dictator is
As for forking the kernel, go ahead, and good luck attracting the kind of developer base and support from big companies that Linux enjoys today.
And there’s not really a gap between Linux and other UNIXes, as virtually all of them try to comply as closely as possible with the POSIX/SUS specifications. However, in the case of Linux and GNU specifically, GNU does not try to stick rigidly to POSIX. GNU’s mentality is ‘if the standard is broken in specific areas, try to either improve it, or don’t stick to it and implement something saner’. There was a huge discussion recently about file systems on the LKML and how POSIX is flawed in defining how a file system should or should not protect your data.
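For context, the pattern at the heart of that discussion was roughly this: write the new data to a temporary file, fsync() it, then rename() it over the old one, because POSIX only guarantees the data is on disk before the rename if you fsync() it yourself. A minimal sketch of my own (the function name and buffer size are purely illustrative, not taken from the thread):

#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

/* Hypothetical helper: atomically replace 'path' with new contents. */
int save_file(const char *path, const char *data, size_t len)
{
    char tmp[4096];
    snprintf(tmp, sizeof(tmp), "%s.tmp", path);

    int fd = open(tmp, O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd < 0)
        return -1;

    if (write(fd, data, len) != (ssize_t)len) {   /* write the new contents */
        close(fd);
        unlink(tmp);
        return -1;
    }

    if (fsync(fd) < 0) {        /* push the data to stable storage first... */
        close(fd);
        unlink(tmp);
        return -1;
    }
    close(fd);

    return rename(tmp, path);   /* ...then atomically swap in the new file */
}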
As for the “problem” of the core system (the kernel) remaining under the control of a few people: that’s because the kernel is not only huge in code size (millions of lines), but also immensely complex, and very few people can actually wrap their heads around complex subsystems like the VM, CPU scheduling, etc.
Confuseling adjusted his/her AFDB on Monday 18 May 2009 22:56 to write:
> As Linux grows, do you think this will eventually happen?
I hope not!
> Would it foster innovation, and play to our strength?
Where?
> Is it a needless division of talent, and a waste of resources?
Yes
> Is it inevitable anyway, because of the factious nature of opensource?
I don't see many big arguments; considering the number of devs, I think the
percentage is minuscule.
> Or is it simply the case that the kernel, as the very core of the
> operating system, uniquely must remain under the unitary control of a
> single, small group, while everything else divides and prospers?
Where is it dividing? The different arches/distros have been around for years;
the newer ones (Ubuntu, for instance) are based on Debian and can easily be
turned into a full-blown Debian with a simple apt-get.
Do YOU think the kernel is not prospering?
If the kernel stagnates then the distros will too; you cannot have one
without the other.
–
Mark
Nullius in verba
Nil illegitimi carborundum
Microforks are happening all the time in kernel development. Whenever a developer undertakes to try some new idea, it’s a fork. Sometimes these forks are distributed for quite a while to test out the idea in the field. The point is that there is a mechanism to fold these forks back into the mainline; you just have to convince enough people of the worth of your fork to get your code accepted. The tools (git, etc.) exist for such distributed development.
The system is set up so that the value of cooperation outweighs that of going it alone. In return for being held to peer review, you get the benefit of other people’s contributions.
So unless you are a genius and can attract a team of bright people like the current one, you have no chance going it alone. Why would you anyway? If you are dying to be a kernel god, and you are that smart, write your own.
I don’t want to be misunderstood here - I’m not suggesting there’s anything wrong with the kernel (well, nothing I could fix ;)), and I certainly have no intention of doing this.
I’m just curious as to whether people think it would spur development. I can see a case for saying yes. The effort divided between KDE and GNOME, for example, seems to me to allow people to move in different directions simultaneously, and test out new concepts and codepaths in a way I don’t think could be done in a Grand Unified Desktop.
Now obviously the kernel does have a special place, and if it wasn’t just a different *nix, but still a version of Linux (or perhaps just GNU), it would still need to be cross-compatible. I suppose that would leave whoever did it mostly playing catch-up - they’d spend so much time trying to figure out the main kernel’s code that they wouldn’t be able to develop anything much themselves.
Still, I could see it happening one day, whether it goes anywhere or not…
Confuseling adjusted his/her AFDB on Tuesday 19 May 2009 00:56 to write:
> I’m just curious as to whether people think it would spur development.
> I can see a case for saying yes. The effort divided between KDE and
> GNOME, for example, seems to me to allow people to move in different
> directions simultaneously, and test out new concepts and codepaths in a
> way I don’t think could be done in a Grand Unified Desktop.
>
I think the argument over desktops is not relevant here. There are loads of
different desktops/environments etc., some out of necessity (speed, size,
capabilities, etc.), others purely because they want to exist. I still use
windowmanager, Enlightenment and Xfce even on a multicore 64-bit system with
4 GB of RAM - they do have their uses sometimes - as well as having KDE 4.2.3
with all the eye candy installed for when I feel like indulging myself.
Creating a desktop environment is a lot easier than writing a kernel and
getting all the hardware support.
One person can make and maintain a wm/dm; thousands are needed for a kernel.
–
Mark
Nullius in verba
Nil illegitimi carborundum
There are indeed tons of “forks” of Linux out there: for example, the Linux DNA project, where the kernel is patched so that it can be compiled with the Intel C compiler (and a much more immature project trying to do the same with Clang/LLVM); the Unified Kernel project, which wants to add some Windows NT kernel features from ReactOS to the Linux kernel; or the Glendix project, which tries to port the native Plan 9 userland to run on the Linux kernel.
In fact, all these projects hope to one day be accepted into mainline and improve the “official” Linux. Some of the “forks” get upgraded to “subsets”, like the Embeddable Linux Kernel Subset (ELKS).
I would say that at this moment, the GNU user land is the part that suffers from the least amount of competition.
GNU has several kernels, but Linux, on the other hand, is very tied to GNU, and to GCC in particular (I suppose even non-GNU Linux operating systems like Android, with its Dalvik runtime, have their kernels compiled with GCC).
Linux has benefited a lot from being ported to many types of devices, everything from small embedded systems to huge multithreaded supercomputers. In contrast to what the kernel developers think, I suppose a similar “cross-architecture” approach in userland/compilers, like the one taken for hardware, could help reveal more bugs and make for better code… on the other hand, I am just a layman brainfarting…
And what do the kernel developers think? The link you point to is me asking how feasible/easy it would be to support LLVM compilers (yes, that’s me asking, and my email address). If you read the two replies posted by Jeff and Ted, you will realize that until LLVM fixes its problems when it comes to compiling Linux AND adds support for all the possible architectures/platforms Linux supports, it’s just not doable. They do not object to adding LLVM compiler support to the Linux kernel. What they say is that until the LLVM compiler matures and supports most of the archs GCC supports, it won’t be a good idea to add it.
I will not try to second-guess what you think - you certainly know more about it than I do. The way I interpreted the reply, however, was that there is little or no interest in making the code “compiler-agnostic” (if that is even possible) and that the code would remain GCC-centric.
The same kind of reasoning can be found here.
… in there, the main argument is that you need to keep one thing constant in order to distinguish between code (kernel) bugs and compiler bugs. On the other hand, the value of some compilers’ error messages for improving the code is also mentioned (which was sort of what I meant: that different compilations, like different architectures, might reveal stuff that is valuable).
Although I might have misinterpreted you.
On the other hand, I am no coder and thus I might just talk rubbish.
Compiler-agnostic code looks good on paper, but when it comes to highly complex systems like kernels, it can actually increase code size, and thus bugs, by carrying exceptions/special-case code for different compilers. In the case of Linux, a kernel that began with a non-agnostic approach (as did many other kernels), fixing and rewriting subsystems to support different compilers is a very hard task. Kernel folks also often have to consider different versions of the same compiler and tweak their code so it compiles under all of them, as Jeff mentioned is the case with GCC, where Linux has to support several versions. Now add support for, say, another two compilers like LLVM or the Intel compiler, and on top of that different versions of those compilers too, and I think you can see how hard it gets. There was talk recently on the LKML about writing an in-kernel compiler specific to Linux and ditching GCC because of its many problems. Most folks, including Linus, were against it, but I haven’t followed the full discussion closely enough to tell you why… you may search a bit.
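To make that concrete, here is a hypothetical sketch of my own (not actual kernel code) of the kind of per-compiler, per-version special-casing involved; every extra compiler and every extra supported version multiplies these branches:

/*
 * Hypothetical example, not real kernel code: picking a 'noinline'
 * annotation per compiler and per compiler version. The Intel branch
 * pretends, purely for illustration, that the attribute is unavailable.
 */
#if defined(__INTEL_COMPILER)
# define my_noinline                        /* assumed fallback: no attribute */
#elif defined(__clang__)
# define my_noinline __attribute__((noinline))
#elif defined(__GNUC__)
# if __GNUC__ >= 4
#  define my_noinline __attribute__((noinline))
# else
#  define my_noinline                       /* old GCC: attribute unreliable */
# endif
#else
# error "unsupported compiler"
#endif

/* Keep this helper out of line so its big stack frame stays out of callers. */
static my_noinline int expensive_helper(int x)
{
    volatile int buf[64];   /* stand-in for a large local buffer */
    buf[0] = x;
    return buf[0] * 2;
}

int use_helper(void)
{
    return expensive_helper(21);
}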