32-bit to 64-bit migration, just an observation

Recently upgraded from 32-bit to 64-bit openSUSE 12.2.
Major performance degradation. Slower throughout the system.

Original hardware: Intel Core 2 Quad Q9550 @ 2.83 GHz, nVidia 9500GT, 4 GB DDR2-800
memory.

I was able to return to previous performance levels by upgrading to an
Intel Core i5 quad-core @ 3.2 GHz (same nVidia 9500GT video card) and 16 GB DDR3-1600
memory.

Just thought I would share the info.

I had held off going to 64-bit for as long as I could. Recent
requirements for my testing mandated the 64-bit switch.

This is a very interesting observation, as mine seemed to be quite the opposite. I noticed a dramatic improvement in my overall experience. I am interested in understanding why you would not have seen a performance increase.

On 2013-03-08 04:26, futureboy wrote:
>
> This is a very interesting observation, as mine seemed to be quite the
> opposite. I noticed a dramatic improvement in my overall experience. I
> am interested in understanding why you would not have seen a performance
> increase.

It is possible that when recompiling a piece of software from one
architecture to another, performance decreases if variables increase in size.

I mean.

Suppose you do calculations on a big array of integers. You recompile
for 64 bits, and your integers are “converted”. The array doubles in
size… even if you do not need huge 64-bit integers. That means that
operations on that array have to move that larger amount of memory…
and they are in fact slower. Or can be.

Of course, that piece of software had design flaws, but they do exist.
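
For instance, a minimal sketch of the effect, assuming the array uses the C type long, which is 4 bytes on 32-bit x86 Linux but 8 bytes on x86_64 (LP64), even though plain int stays at 4 bytes:

#include <iostream>

int main()
{
    // Illustrative sketch: static storage so the array does not overflow the stack.
    static long values[1000000];

    std::cout << "sizeof(long)   = " << sizeof(long) << " bytes\n";
    std::cout << "sizeof(values) = " << sizeof(values) << " bytes\n";
    // Built with g++ -m32 this reports 4 and 4000000 (about 4 MB);
    // built for x86_64 it reports 8 and 8000000 (about 8 MB), with no
    // source change, so loops over the array touch twice the memory.
    return 0;
}

Compiling the same file with g++ -m32 and then with the default 64-bit target shows the doubled footprint without any source change.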


Cheers / Saludos,

Carlos E. R.
(from 11.4, with Evergreen, x86_64 “Celadon” (Minas Tirith))

That’s an interesting observation, thanks for sharing.

I tested out both 32-bit and 64-bit installations on my parent’s computer before committing. I found 64-bit offered a bit of a performance boost for the majority of tasks, especially when it came to accessing the hard drive. That observation alone was enough for me to retain the 64-bit install.

I suppose mileage will vary, though.

Do many of your apps still need 32-bit libraries?

On 2013-03-08, Carlos E. R. <robin_listas@no-mx.forums.opensuse.org> wrote:
> Suppose you do calculations on a big array of integers. You recompile
> for 64 bits, and your integers are “converted”. The array doubles in
> size… even if you do not need huge 64-bit integers.

I’m trying to understand how this conversion would take place as you suggest. For example, consider the following basic C++ program (called a.cpp):


#include <iostream>
using namespace std;

int main(int argc, char* argv[])
{
    int a, b, c;
    b = 1;
    c = 2;
    a = b + c;
    cout << a << " = " << b << " + " << c << endl;
    cout << sizeof(int) << endl;  // width of int in bytes
    return 0;
}

On a 32-bit system, the program would output sizeof(int) as 4, because `int` is 32 bits. Now on a 64-bit system:


sh-4.2$ g++ -v
Using built-in specs.
COLLECT_GCC=g++
COLLECT_LTO_WRAPPER=/usr/lib64/gcc/x86_64-suse-linux/4.7/lto-wrapper
Target: x86_64-suse-linux
Configured with: ../configure --prefix=/usr --infodir=/usr/share/info --mandir=/usr/share/man --libdir=/usr/lib64
--libexecdir=/usr/lib64 --enable-languages=c,c++,objc,fortran,obj-c++,java,ada --enable-checking=release
--with-gxx-include-dir=/usr/include/c++/4.7 --enable-ssp --disable-libssp --disable-libitm --disable-plugin
--with-bugurl=http://bugs.opensuse.org/ --with-pkgversion='SUSE Linux' --disable-libgcj --disable-libmudflap
--with-slibdir=/lib64 --with-system-zlib --enable-__cxa_atexit --enable-libstdcxx-allocator=new --disable-libstdcxx-pch
--enable-version-specific-runtime-libs --enable-linker-build-id --program-suffix=-4.7 --enable-linux-futex
--without-system-libunwind --with-arch-32=i586 --with-tune=generic --build=x86_64-suse-linux
Thread model: posix
gcc version 4.7.1 20120723 [gcc-4_7-branch revision 189773] (SUSE Linux)
sh-4.2$ g++ a.cpp
sh-4.2$ g++ -S a.cpp
sh-4.2$ ./a.out
3 = 1 + 2
4
sh-4.2$

As you can see, sizeof(int) is still 4. Now look at an excerpt of the 64-bit assembler (generated by g++ -S) in a.s (please
excuse the disgusting AT&T syntax) that is relevant to the 1 + 2 calculation:


.file	"a.cpp"
.local	_ZStL8__ioinit
.comm	_ZStL8__ioinit,1,1
.section	.rodata
.LC0:
.string	" = "
.LC1:
.string	" + "
.text
.globl	main
.type	main, @function
main:
.LFB970:
.cfi_startproc
pushq	%rbp
.cfi_def_cfa_offset 16
.cfi_offset 6, -16
movq	%rsp, %rbp
.cfi_def_cfa_register 6
subq	$32, %rsp
movl	%edi, -20(%rbp)
movq	%rsi, -32(%rbp)
movl	$1, -4(%rbp)
movl	$2, -8(%rbp)
movl	-8(%rbp), %eax
movl	-4(%rbp), %edx
addl	%edx, %eax
<SNIP>

Look at the last five lines of the assembler. Although the base pointer register is 64-bit (%rbp), the stack slots for
1 and 2 are 4 bytes (i.e. 32 bits) each, and the values are loaded into the lower 32 bits of the accumulator and data
registers (eax and edx) with movl (a dword mov in Intel syntax), meaning 32-bit integer moves. And the arithmetic
itself (addl) is done at 32-bit precision.

So I cannot see how 32-bit integer data are somehow doubled in size when moving from a 32-bit to a 64-bit architecture
without changes in the code, as you seem to suggest.

(apologies if the code tags don’t work - I’ve tried 6 times!)


On 2013-03-09 14:45, flymail wrote:
> So I cannot see how 32-bit integer data are somehow doubled in size when moving from a 32-bit to a 64-bit architecture
> without changes in the code, as you seem to suggest.

It depends on the exact integer type (int, long, etc.) you use in your
variable definitions. The C language definition does not mandate a fixed
byte size for it.
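
As a hypothetical illustration of that point (not the sample mentioned below): 32-bit x86 Linux uses the ILP32 model, so int, long and pointers are all 4 bytes, while x86_64 Linux uses LP64, where long and pointers grow to 8 bytes but int stays at 4. A few sizeof calls make the difference visible:

#include <iostream>

int main()
{
    // ILP32 (32-bit x86 Linux):  int = 4, long = 4, long long = 8, void* = 4
    // LP64  (x86_64 Linux):      int = 4, long = 8, long long = 8, void* = 8
    std::cout << "int       : " << sizeof(int)       << '\n';
    std::cout << "long      : " << sizeof(long)      << '\n';
    std::cout << "long long : " << sizeof(long long) << '\n';
    std::cout << "void*     : " << sizeof(void*)     << '\n';
    return 0;
}

So code written with long (or with lots of pointers) does grow when recompiled for 64-bit, while code written strictly with int does not, which is consistent with the assembler shown above.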

I tested this issue years ago and created sample code to demonstrate
it. I don’t have it on this laptop, and my C skills are rusty, so it
would take me quite some time to recreate the sample. When I get back
to my desktop in a week or so I may try to dig it out again.

Other people with knowledge of the issue may be able to explain the
details better than I can.


Cheers / Saludos,

Carlos E. R.
(from 11.4, with Evergreen, x86_64 “Celadon” (Minas Tirith))