Bad NUMA behavior

Recently we upgraded our Sun4100 server from Fedora Core to openSUSE 11.1. One critical application is now performing badly, and we have traced this to what looks like a NUMA issue. Perhaps somebody can confirm this.

On Red Hat, the “numactl --hardware” output was:
available: 2 nodes (0-1)
node 0 size: 8022 MB
node 0 free: 2151 MB
node 1 size: 8048 MB
node 1 free: 3021 MB
node distances:
node   0   1
  0:  10  10
  1:  10  10

And on SUSE, the output is:
node 0 cpus: 0 1
node 0 size: 8191 MB
node 0 free: 2691 MB
node 1 cpus: 2 3
node 1 size: 8192 MB
node 1 free: 2604 MB
node distances:
node   0   1
  0:  10  20
  1:  20  10
Note the distance of 20 (two hops) for a processor on node 0 to access memory on node 1, and vice versa. This does not seem to be the case with the Red Hat kernel, which reports a distance of 10 everywhere, irrespective of the memory bank.
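To double-check what the kernel itself reports, the distance table can also be read straight from sysfs; as far as we understand, this reflects the ACPI SLIT and should match what numactl prints:

# one row of the distance matrix per node
cat /sys/devices/system/node/node0/distance
cat /sys/devices/system/node/node1/distance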

Also, the NUMA statistics look different when the application is running on Red Hat Fedora versus SUSE Linux.
On Red Hat Fedora, the numastat output is:

                  node0     node1
numa_hit        9048828   8025339
numa_miss             0         0
numa_foreign          0         0
interleave_hit    14723     15258
local_node      9042276   8016405
other_node         6552      8934

While on SLES, the numastat output is:

                  node0     node1
numa_hit        2941092   3070005
numa_miss             0    356979
numa_foreign     356979         0
interleave_hit     6441      6439
local_node      2936777   3065070
other_node         4315    361914

Note the large number of numa_miss events on node 1, matched by the numa_foreign count on node 0: allocations that were intended for node 0 are being satisfied from node 1, so the application ends up working with remote memory.
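As a stopgap we are considering pinning the application to a single node with numactl (./myapp below is only a placeholder for our binary):

# run with CPUs and memory both restricted to node 0
numactl --cpunodebind=0 --membind=0 ./myapp

but we would rather understand why the SUSE kernel places memory differently in the first place.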
Any guesses as to what is going wrong, and how it can be corrected?
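If it helps, we can also post the value of vm.zone_reclaim_mode; we understand the kernel may switch this on automatically when node distances are large, which might explain behavior differences between the two installs:

cat /proc/sys/vm/zone_reclaim_mode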

I would appreciate any help in this matter.
With regards,
Nain