I haven’t been here in a while, and I’m not even sure this is the right forum for my question.
I recently started brushing up on my C programming skills by writing a couple of simple programs to calculate and display solutions to Pell’s equation. I ran into an unexpected problem. My second program, which is longer and more elaborate than the first one, occasionally makes a horrendous arithmetic error. For instance, the first program tells me that 7150937 + 1 * 22988124 = 30139061, which is correct. The second program insists that 7150937 + 1 * 22988124 = 30139060, which is clearly wrong.
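Just to rule out the obvious, I wrote a throwaway check (this assumes long long is 64 bits, which it is on my system), and the addition itself is exact in integer arithmetic, so whatever is going wrong must be somewhere else:

    #include <stdio.h>

    int main(void)
    {
        /* Sanity check: this sum is exact in 64-bit integer arithmetic. */
        long long x = 7150937LL;
        long long y = 22988124LL;
        printf("%lld\n", x + 1 * y);  /* prints 30139061 */
        return 0;
    }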
I’m using the gcc compiler supplied with openSUSE, and the associated libraries. I’m not exactly a C guru, but I spent about 25 years writing assembly language code on IBM mainframes, so I know this shouldn’t happen. The C source statements at the spot where the error occurs are identical in both programs, so the only thing I can think of is that the math libraries must be implicated somehow.

I can compile and run the first program (the one that works correctly) without any special flags. To link the second one, I have to specify “-lm” on the command line when I invoke gcc, because the second program has “#include <math.h>” at the top. I know the first program is invoking conversion routines (i.e., library functions), because I’m converting integer data to floating point implicitly. (In general, 4-byte words aren’t long enough to hold the numbers I’m calculating with, even though they would have sufficed for the erroneous instance cited above; all those numbers are less than 2**31.) Both programs have to convert decimal input into binary, and binary back to decimal for display, but I’m guessing that’s pulled in via <stdio.h>, which both programs include.

Could my second program be using a different set of data-conversion routines because of the “-lm” switch? I’m not even sure how the compiler handles floating point arithmetic. My CPU (a Celeron 450) has an integrated floating point unit, so I suppose the compiler uses native floating point instructions. Could there be a problem with the conversion routines (decimal to floating point, or floating point to decimal) that comes about because of the -lm switch? If so, is there an easy way to fix it?
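To make my question concrete, here’s a toy sketch (made up for this post, not lifted from my real program) of the kind of conversion problem I’m wondering about. If a value comes back from the floating-point side a hair below the true integer answer, a cast chops it down while printf rounds it, and you get exactly this sort of off-by-one:

    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        /* Hypothetical value: pretend some floating-point step produced
           a result a hair below the true answer of 30139061. */
        double x = 30139061.0 - 1e-7;

        printf("cast to long (truncates): %ld\n", (long)x);          /* 30139060 */
        printf("printf %%.0f (rounds):     %.0f\n", x);               /* 30139061 */
        printf("floor(x + 0.5):           %.0f\n", floor(x + 0.5));  /* 30139061 */
        return 0;
    }

(Compiled with gcc and -lm.) If that’s what’s happening in my program, then presumably rounding explicitly before displaying, with floor(x + 0.5) or lround() from <math.h>, rather than casting, would fix it. But I don’t know whether that’s actually the mechanism here.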
This isn’t a really big deal for me. I can spot the errors when they occur; it’s just kind of annoying. Computers are better at arithmetic than humans are. At least, they’re supposed to be. Oh, yeah – I’ll gladly supply the source code and/or the console output if that would be helpful. Thanks!