Like probably all Linux users, I’m excited about the upcoming release of 2.6.31. I’m fairly new to Linux, and my knowledge of programming is close to nothing.
So my question is: when the new kernel is released, being that I’m a complete Linux newbie, would it be overreaching my limits in thinking that I can try and compile the new kernel for my system? Should I just perish the thought entirely, and wait until the update is available via YaST?
As of now I have a stable, fully functioning, and (mostly) happy machine running openSUSE 11.1, but I’m very interested in the new capabilities and hardware support reported for 2.6.31. I’ve also heard from other Linux users that updating the kernel, or simply recompiling it, is “healthy” overall, especially if one wants a kernel tailored to their machine. As for personal experience in this area, I have virtually none, so I would love other people’s opinions and knowledge on the matter.
Not meant to rain on your parade, but you need to weigh the benefit you will receive from a new kernel against the fact that you have a working machine. You said you’re relatively new to Linux with little programming experience, so if you really have the need in your own judgment and are willing to face the challenge of a fair amount of manual prep and learning, by all means go ahead and learn. If, on the other hand, you lack an alternate means of reaching out for help should the new kernel not compile exactly as expected, or you lack the time and patience, then err on the side of caution and wait for YaST, which will do most of the upgrade for you. If you do proceed with a kernel compile, please take the time to document all hardware and settings as they are now, and do it in written or printout form as well as in a file on your system somewhere. Keep running notes as you proceed so others can better help you.
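As a minimal sketch of that documentation step (the directory and file names here are my own choices, not anything openSUSE mandates), something like this captures the current state before you start experimenting:

```shell
# Snapshot the running kernel and hardware state into a notes directory.
# lsmod/lspci may be unavailable on minimal systems, hence the fallbacks.
dir=kernel-upgrade-notes
mkdir -p "$dir"
uname -a > "$dir/uname.txt"
lsmod > "$dir/modules.txt" 2>/dev/null || echo "lsmod unavailable" > "$dir/modules.txt"
lspci > "$dir/pci.txt"     2>/dev/null || echo "lspci unavailable" > "$dir/pci.txt"
echo "notes saved under $dir/"
```

Print the files out as well, per the advice above, so you still have them if the new kernel leaves you without a bootable system.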
Rick is a little pessimistic. I have been compiling kernels for a long
time. If you follow a few simple rules, then you will have no difficulty.
(1) NEVER build a kernel as root. It may not cause a problem;
however, the kernel build scripts are quite complicated. At one time,
anyone building their kernel as root had the device node /dev/null
corrupted, which caused all kinds of funny things to happen.
(2) After you load the source onto your computer, change directory to
the root of that source and run the following commands:
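The commands themselves appear to have been lost from this post; judging by a later reply that does the same job from /proc/config.gz, they were presumably along these lines (the fallback to the /boot copy of the config is my addition, for running kernels built without CONFIG_IKCONFIG_PROC):

```shell
# Seed .config in the kernel source tree from the running kernel's
# configuration, then update it for the new release.
cfg=""
[ -r /proc/config.gz ] && cfg=/proc/config.gz
if [ -z "$cfg" ] && [ -r "/boot/config-$(uname -r)" ]; then
    cfg="/boot/config-$(uname -r)"   # openSUSE installs this copy
fi
if [ -n "$cfg" ]; then
    case "$cfg" in
        *.gz) zcat "$cfg" > .config ;;
        *)    cp "$cfg" .config ;;
    esac
    echo "seeded .config from $cfg"
else
    echo "no saved configuration found; is CONFIG_IKCONFIG_PROC enabled?" >&2
fi
# make oldconfig   # then run this inside the source tree; it prompts
                   # only for options that are new to 2.6.31
```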
These steps will create a configuration that matches your running
kernel. It is not the most efficient, but it will work. Once you have
it working, you can reconfigure and eliminate pieces you don’t need.
That will compile faster and reduce the size. BTW, I am deliberately
not telling you what the above commands represent. That is homework.
(3) Build and install the new kernel with the following:
sudo make modules_install install
(4) The second of the above commands will install the kernel and add
it to the GRUB menu. Leave the standard kernel in the menu just in
case the new one will not boot.
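For illustration, on openSUSE 11.1 the GRUB legacy menu lives in /boot/grub/menu.lst, and after the install step it ends up with roughly two entries like these (the version strings and root device here are made up for the example):

```
title openSUSE 11.1 - 2.6.31-rc8 (self-built)
    root (hd0,0)
    kernel /boot/vmlinuz-2.6.31-rc8 root=/dev/sda2 splash=silent quiet
    initrd /boot/initrd-2.6.31-rc8

title openSUSE 11.1 - 2.6.27.29-0.1-default (stock fallback)
    root (hd0,0)
    kernel /boot/vmlinuz-2.6.27.29-0.1-default root=/dev/sda2 splash=silent quiet
    initrd /boot/initrd-2.6.27.29-0.1-default
```

As long as the second, stock entry stays in the menu, a non-booting self-built kernel is an inconvenience rather than a disaster.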
(5) Be prepared to have to “build from scratch” any out-of-kernel
drivers that you use. This list will include the 3D acceleration for
ATI or nVidia graphics cards. In addition, a standard kernel will not
have AppArmor available.
I’m now successfully running kernel 2.6.31-rc8 thanks to this help. Although lwfinger’s info was very helpful, this was of course easier than compiling the entire new kernel. Thank you very much! And though I took the “easy way out” of sorts, I still value the commands and intend to use them.
You have to compile them manually when using a “non standard” kernel.
Although it’s not that hard to do: simply download the driver from nvidia.com, switch to a VT (Ctrl-Alt-F1), log in as root, change to runlevel 3 with “init 3”, cd to the folder that contains the driver, and type “sh ./N” then press Tab to complete the file name.
Follow the prompts and ignore it when it complains about compiler version ;).
I do apologise to lwfinger and others; I’m not trying to undermine their efforts to teach people how to compile their own kernel. Doing that certainly has its benefits, but it can be extremely frustrating waiting 4 or 5 hours for a compile to finish, only to find that one silly thing was wrong and you have to start all over again :S.
Plus doing it the “easy way” is more environmentally responsible isn’t it?
…that’s my excuse and I’m sticking with it - lol ;).
lol! That’s a nice apology, but did anyone apologize for omitting to mention the frustration or the typical hours of compilation time required? I don’t recall seeing any of that before. It’s better to have all the facts (+ve and -ve), isn’t it? The OP was happy having both options, and so will I when I need to make the choice.
> lol! That’s a nice apology, but did anyone apologize for omitting to
> mention the frustration or the typical hours of compilation time
> required? I don’t recall seeing any of that before. It’s better to have
> all the facts (+ve and -ve), isn’t it? The OP was happy having both
> options, and so will I when I need to make the choice.
If the procedure I outlined is followed, the kernel build is unlikely
to fail, or if it does, it will fail very early. Finishing a
compilation and having the kernel not boot is usually because the
driver for a particular device is missing. That happens because the
step of copying the current configuration was skipped, or because one
started modifying the configuration to reduce the build time.
The latter step is a more advanced topic that I avoided. BTW, a full
kernel build on my HP laptop is < 20 minutes. On my desktop with
faster disks, a full build with all modules is only 30 minutes.
I’m encouraged by your explanation, thanks for that. My comments were aimed at a more general field than your procedure specifically. BTW, do your timings suggest that the longer compile time on the desktop compared to the laptop is down to a CPU/memory difference, or is the build content different for the two examples?
> I’m encouraged by your explanation, thanks for that. My comments were
> aimed at a more general field than your procedure specifically. BTW, do
> your timings suggest that the longer compile time on the desktop
> compared to the laptop is down to a CPU/memory difference, or is the
> build content different for the two examples?
The CPUs are the same; the desktop build has more than twice the
modules, but as I said, the disks are faster.
Incidentally, if you have more than one CPU, the build times are
reduced by “make -jX”, where X is the number of jobs to be run at
once. With a dual processor, the command should be “make -j3”.
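Rather than hard-coding that number, you can derive it; a small sketch using nproc from GNU coreutils, where the “+1” follows the same rule of thumb as “make -j3” for a dual processor:

```shell
# Compute the make job count from the CPU count: jobs = CPUs + 1.
jobs=$(( $(nproc) + 1 ))
echo "would run: make -j${jobs}"
# make -j"${jobs}"   # uncomment when run inside a configured kernel tree
```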
Thou canst compile the kernel; it gives a feeling of confidence and accomplishment. Every curious Linux-er should do it at least once.
The instructions given for unzipping config files aren’t necessary; there’s a “make cloneconfig” target that does it from the /proc/config.gz file. There’s also “make localmodconfig”, which saves compiling modules you don’t have loaded, and helps you get a working kernel for your current system too.
Then you can relax and test KOTDs (Kernel of the Day) from the Build Service, once the novelty has worn off.
There’s very little real benefit to self-compilation these days, and it takes a lot of CPU time and disk space. I really try to avoid doing it; the last few times were to integrate patches obtained from kernel hackers to try to fix driver bugs. When I did it, there wasn’t a “make localmodconfig”, and my need was to override that anyway; but despite plenty of past experience, it took a bit of trial and error before I had a working configuration.