You both are correct, of course; old habits die hard. I’m looking at about half a meter of S.u.S.E. books that came in nice boxes with the install CDs, way back when there still was a S.u.S.E. GmbH. The »u« is an abbreviation of the German »und« (»and«). It’s still extra effort for me to write that capital »U« in »openSUSE«. At work we use SLES/SLED, so the »U« doesn’t come up. That’s also why I like »Leap« and not »TUmbleweed«. Just kidding; it’s all about stability.
Maybe, while your tools are up to date, the tools on that SUSE build server weren’t. I can imagine that kernel specialists like Hubert Mantel have personal, experience-based preferences as to which versions of make, binutils or ld (the GNU linker) to use. They may not use the newest tools, but older versions that have proven to be fast, stable and bug-free.
How can you be so sure? Judging from the SUSE build logs, a set of patches is applied immediately before the kernel build; your and my local builds, however, probably don’t involve any patches. Folks like us just grab the default kernel-source RPMs or tarballs and build kernels from there.
Based on my experience with filesystems, I disagree. Create a directory, copy files into it, then delete some of them, then add some other files. Make an exact copy of the directory (with »cp -a« or »rsync -av«). Do an »ls -f« on both and notice the difference. Or: write a short C program using readdir(3) and the »dirent« struct, as sketched below — this mechanism is probably also what gcc and all the accompanying build tools use. You will find that the »on-disk« order of files in a directory is generally unpredictable and will change as you work within that directory. It would be inefficient for filesystem drivers to handle it any other way.
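For anyone who wants to try it, here is a minimal sketch of that readdir(3) experiment. It just prints a directory’s entries in whatever order the filesystem returns them, the same raw order that »ls -f« shows; the path is taken from the first command-line argument (defaulting to ».«):

```c
/* Minimal readdir(3) experiment: print directory entries in the order
   the filesystem hands them out, without any sorting. */
#include <stdio.h>
#include <stdlib.h>
#include <dirent.h>

int main(int argc, char *argv[])
{
    const char *path = (argc > 1) ? argv[1] : ".";
    DIR *dir = opendir(path);
    if (dir == NULL) {
        perror("opendir");
        return EXIT_FAILURE;
    }

    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        /* Entries appear in whatever order the filesystem stores or
           hashes them, i.e. the same raw order that »ls -f« shows. */
        printf("%s\n", entry->d_name);
    }

    closedir(dir);
    return EXIT_SUCCESS;
}
```

Compile it, run it on a directory and on an exact copy of that directory, and compare the two listings.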
This unpredictability is enough to produce different results when patching, compiling and linking kernels from thousands of source files and hundreds of intermediate object files, results that depend entirely on the physical order in which tar or rpm unpack their archives and in which filesystems like ext4, btrfs, xfs and zfs happen to hash, extend, shrink or serialize their directory contents.
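To make that a bit more concrete, here is a toy illustration (all names in it are made up for the sketch): a pretend »build step« that combines a directory’s file names in raw readdir(3) order into one order-dependent value. Run it on a directory and on a »cp -a« copy of it; the contents are identical, yet the result can differ, which is the kind of divergence a compile-and-link pipeline can accumulate when its input order is filesystem-dependent.

```c
/* Hypothetical illustration only: an order-sensitive "build step" fed by
   raw readdir(3) order. Identical directory contents with a different
   on-disk layout can yield a different result. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <dirent.h>

/* Toy stand-in for "combine inputs in the order we received them",
   e.g. the order in which object files are handed to a linker. */
static unsigned long combine(unsigned long h, const char *name)
{
    for (const unsigned char *p = (const unsigned char *)name; *p; p++)
        h = h * 31 + *p;   /* order-sensitive: abc then def != def then abc */
    return h;
}

int main(int argc, char *argv[])
{
    const char *path = (argc > 1) ? argv[1] : ".";
    DIR *dir = opendir(path);
    if (dir == NULL) {
        perror("opendir");
        return EXIT_FAILURE;
    }

    unsigned long result = 0;
    struct dirent *entry;
    while ((entry = readdir(dir)) != NULL) {
        if (strcmp(entry->d_name, ".") == 0 || strcmp(entry->d_name, "..") == 0)
            continue;
        result = combine(result, entry->d_name);
    }
    closedir(dir);

    /* Same file set, possibly different value on an exact copy. */
    printf("order-dependent result for %s: %lu\n", path, result);
    return EXIT_SUCCESS;
}
```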
It makes me a little worried as well, now that I’m thinking more about it and climbing down the proverbial rabbit hole. But what is »the rule« that you think all that complexity of a build environment would break?
My understanding up to this point was that the repositories are signed with certificates, and so are the packages in those repositories.
But if it is true that the kernel binaries themselves are uniquely signed, then of course the question of any bit-by-bit exact recompile becomes moot.