Is xorg 7.5-11.3 really leaking memory?

I am running openSUSE 11.3, with KDE 4.5.2. I am using a dual monitor setup (Twinview) on an NVidia GeForce 8600 GT, 256 MB. NVidia module is at version 256.53. My system is 32-bit, with 4GB of RAM.

When I leave my KDE session running for a long time (2-3 days), I see a huge increase in memory usage. Today it reached 2.2 GB, with a whopping 1.2 GB being used by the xorg process alone. The system was also using around 500 MB of swap space.
After my reboot (due to a kernel upgrade), the xorg process only consumes 184.6 MB. That is a huge decrease. The entire system's memory consumption at this time is around 1 GB, and that is with a video transcoding operation running and 3 Firefox windows open.

As time passes, the xorg process is slowly consuming more memory, almost as if there is a memory leak. Is anyone else seeing this?

joopberis wrote:
> As time passes, the xorg process is slowly consuming more memory,
> almost as if there is a memory leak. Is anyone else seeing this?

probably everyone is seeing something like that (it is normal)…

i looked at the info on your profile and see you are a “sysadmin of
Unix, Linux and Wintendo boxen” but i wonder if you have a full
understanding of the difference in the way *nix and *dows use memory?

this (and the links therein) may be helpful, maybe not:

if you feel a need to go more into this, please run and then copy/paste
back to this thread the output of:

cat /proc/meminfo
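To illustrate why the raw meminfo numbers can look alarming: a rough "memory actually available" figure is MemFree plus Buffers plus Cached, since the page cache is handed back to applications on demand. The snippet below is a sketch; the here-doc values are illustrative sample data, not from any machine in this thread. On a live system, point awk at /proc/meminfo instead.

```shell
# Sum MemFree + Buffers + Cached (kB).  The here-doc is made-up
# sample data; replace it with /proc/meminfo on a real system.
avail=$(awk '/^(MemFree|Buffers|Cached):/ {sum += $2}
             END {printf "free+reclaimable: %d kB", sum}' <<'EOF'
MemTotal:        4045234 kB
MemFree:          102400 kB
Buffers:          204800 kB
Cached:          1048576 kB
SwapTotal:       2097152 kB
EOF
)
echo "$avail"     # free+reclaimable: 1355776 kB
```

If that sum is large, "low free memory" is just the cache doing its job; if it is small while swap is also in use, something really is eating RAM.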

When it comes to chocolate, resistance is futile.
CAVEAT: [posted via NNTP w/openSUSE 10.3]

Sorry to contradict, DenverD, but this behaviour is definitely not normal at all - note that joopberis mentions half a gigabyte of swap being used, which clearly shows that his system ran out of usable memory. A swap partition being used this heavily is quite annoying: it usually makes pretty much any action very sluggish.

joopberis, I recommend a memtest. Since you have 4 GiB of RAM, I suggest running memtest for at least six hours to make sure your memory is working properly.

i am seeing similar things, but with Gnome.
8 GiB RAM, 64-bit OS, and a small image data set (a 768 MB tile of a 24 GiB dataset) almost locks the system up. It never gives up the working RAM (approx. 3 to 4.5 GiB), and after working on 1 or 2 of these i need to reboot.

I am just starting to really look into this; it is no longer an “i am new to suse” issue.

Same problem for, uh… about 2 months, since i started using an nvidia 8600GT and the nvidia proprietary drivers.
i’ve tried a lot of xorg.conf’s and versions of the driver, but nothing helps. After several hours of use the memory “xorg” consumes keeps increasing and never falls back. :frowning:

Do you see a difference if you manually clear the memory cache?
sync; echo 3 > /proc/sys/vm/drop_caches
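One way to act on that suggestion, sketched below. The `rss_kb` helper is a name made up for this example; it reads a process's resident set size from /proc. Note that page-cache memory never counts toward a process's RSS in the first place, so if Xorg's RSS stays high after dropping caches, the memory is held by Xorg itself.

```shell
# rss_kb: print the resident set size (kB) of the given PID.
# (Helper name invented for this sketch.)
rss_kb() {
    awk '/^VmRSS:/ {print $2}' "/proc/$1/status"
}

# Against a live X server (drop_caches needs root):
#   rss_kb "$(pidof Xorg)"                    # before
#   sync; echo 3 > /proc/sys/vm/drop_caches
#   rss_kb "$(pidof Xorg)"                    # after
rss_kb $$     # demo: the current shell's own RSS in kB
```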

Are you using Xinerama ?

I’m personally just beginning my own investigation into what I believe might be memory leaks on my system as well. (I’ve only been running 11.3 for a little over a month, like most everyone else, which IMO is likely too soon to truly recognize any possible leak unless it’s very obvious.)

A personal opinion on DenverD’s reference links regarding the supposed difference between Linux and Windows: although they may be informative about what Linux does, they are wildly inaccurate in describing what happens in Windows, at least for the last several generations of that OS, which may explain some of the skepticism from posters in those threads. Since those references are all relatively recent (< 3 years), I’m disappointed in the accuracy of some of the statements, which could be important to properly understanding and comparing one technology vs. another.

At least since Vista, and mostly in XP as well, Windows memory management is similar to the model described here as “Linux”: there is less emphasis on garbage-collecting resources held by processes that have been inactive, so that if one is re-invoked its resources might not have to be re-allocated. If you also consider the managed application environments that run on top of Windows, like .NET and Java, this is even more the case than for native Windows applications, and the same would apply to any description of Java and Mono applications running on Linux.

But just because the general descriptions of how the memory models work are relatively similar does not mean that in practice both will perform in any way similarly.

There do seem to be at least some clear differences. As one poster noted, Windows will write some data to swap regardless of whether memory is full, recognizing that the user experience can be improved by pushing data and processes that aren’t time-sensitive into the background. If Linux doesn’t do something similar (which would surprise me), that in itself can affect responsiveness.
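For what it’s worth, Linux does expose a knob for exactly this trade-off: vm.swappiness controls how eagerly inactive pages are pushed to swap even before memory is full. A quick way to check it, assuming a stock kernel:

```shell
# vm.swappiness: higher values swap inactive pages out more eagerly.
# The distro default is typically 60.
swappiness=$(cat /proc/sys/vm/swappiness)
echo "$swappiness"
```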

Also, I feel the referenced links only very coarsely describe a memory model, when how well memory management performs is really in the details: details far beyond the scope of general discourse, in the realm of high-level mathematical algorithms and rulesets that try to predict user and machine behavior (and needs).

Maybe the correct answer for the general public, as to which memory model works better, is “It depends.” If you’re looking for a clear technical claim that one is better than the other, it probably doesn’t exist. The only thing that can truthfully be said is that Linux and Windows are different, and if you want to know which works better for you, you just have to try each and compare for yourself.

Now, as for whether memory is leaking: IMO it might be useful to define what a memory leak is, and how it likely has little to do with OS memory management (although that is possible, I suspect nowadays most memory leaks are due to application code).

In general, a memory leak is when an allocated memory resource is never garbage-collected (expired), so the resource never becomes available again.

This would clearly be the case if the same process which originally used the memory is re-invoked and is allocated new resources instead of reusing the previous ones, while the original resource never becomes available to any other process. It’s been a while since I’ve seen this kind of error from the OS; nowadays I usually see it because the application never signals that the resource is no longer needed. Eventually memory is exhausted and responsiveness slows.

Note that this is not the same as simply not expiring resources in a timely manner, which can cause slow responsiveness as the memory manager re-allocates resources on demand (instead of, for instance, during periods of low demand). A true leak therefore has nothing to do with things like the swap file, and is usually identified by running something like top or ps periodically to watch how much memory a process has allocated over time (note this may still be an incomplete picture).
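The periodic sampling described above can be sketched in a few lines. `log_rss` is a name invented for this example; it appends one "unix-time rss-kB" line per call for a given PID. Run it against Xorg’s PID from cron or a loop and plot the second column: a line that climbs and never plateaus, even across idle periods, is the classic leak signature.

```shell
# log_rss PID LOGFILE: append "<unix-time> <rss-kB>" for PID.
# (Helper name invented for this sketch.)
log_rss() {
    printf '%s %s\n' "$(date +%s)" "$(ps -o rss= -p "$1" | tr -d ' ')" >> "$2"
}

# e.g. against X:  while true; do log_rss "$(pidof Xorg)" xorg-rss.log; sleep 60; done
log_rss $$ /tmp/rss-demo.log     # demo: one sample of the current shell
tail -n 1 /tmp/rss-demo.log
```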

Comment and Criticism Expected :slight_smile: ,


top will tell you which process uses how much memory.

Linux does not like unused memory; it will assign disk cache to it until it is needed by some other process, so free memory always looks very low. I have 2 GiB and never see swap used, even when running XP in VBox. Flash processes tend to use a bunch of memory and I’d look there first, but that really should not force swapping.

This is a special case situation since most people are not seeing the problem. So we need to know what is special or different about these machines/setups.

Also we don’t know what kernel is being run, i.e. 32- or 64-bit. This is important with 4+ GiB of RAM.
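A quick sketch of the obvious checks for that question:

```shell
# Which kernel architecture is running?
arch=$(uname -m)        # x86_64 = 64-bit kernel; i586/i686 = 32-bit
echo "$arch"
# With 4+ GiB, a 32-bit non-PAE kernel cannot address all of it:
free -m                 # totals, cache, and swap at a glance
```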

tsu2 wrote:
> Comment and Criticism Expected :slight_smile: ,

just a comment…i intended not to dive into which is best Chocolate
or Vanilla…instead, it seemed to me that the question sprang from
a poster more experienced in one than the other and surprised to see
so much memory being used here…instead of saying mine is best, i
said it is different from what you are used to, and what you see is ok
