MySQL hangs the server?

Hello,
I’m running SUSE 11 in a VMware ESX environment. The server setup is 2 GB RAM, 2 GB swap, and two AMD processors.

It’s being used as a LAMP server. Over the past couple of weeks, MySQL has been filling up the memory and swap, the processors go to 100% usage, and the system grinds to a halt. Running top shows:


top - 15:46:25 up 22:22,  4 users,  load average: 17.59, 5.89, 2.22
Tasks: 114 total,  25 running,  89 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,100.0%sy,  0.0%ni,  0.0%id,  0.0%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   2075172k total,  2026180k used,    48992k free,      212k buffers
Swap:  2216960k total,  2216960k used,        0k free,     4480k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
   20 root      15  -5     0    0    0 R    2  0.0   2:21.47 kswapd0
    1 root      20   0   772   32    0 R    2  0.0   0:05.50 init
10223 mysql     20   0  608m 6116  256 S    1  0.3   4:58.08 mysqld
 2154 root      20   0  1800  332  156 R    1  0.0   0:02.98 klogd
 9142 dfse11    20   0  9252  496  296 R    1  0.0   0:04.76 sshd
13128 wwwrun    20   0  657m 248m 1280 R    1 12.3   1:42.37 httpd2-prefork
29755 wwwrun    20   0 94916 1728  432 D    1  0.1   0:04.32 httpd2-prefork
 3009 wwwrun    20   0  861m 192m 1244 R    1  9.5   2:36.29 httpd2-prefork
28208 wwwrun    20   0  336m 225m 1652 R    1 11.1   0:57.50 httpd2-prefork
 2151 root      20   0  2200  264  124 R    1  0.0   0:05.96 syslog-ng
 5124 wwwrun    20   0  865m 190m 1348 R    1  9.4   2:35.50 httpd2-prefork
 7201 root      20   0 14856 1004  208 R    1  0.0   0:19.98 smbd
15942 wwwrun    20   0  437m 191m 1636 R    1  9.4   1:59.64 httpd2-prefork
20906 wwwrun    20   0  204m 114m  344 R    1  5.7   0:27.88 httpd2-prefork
29752 root      20   0  1728  256  144 R    1  0.0   0:07.18 modprobe
29754 root      20   0 94916 1296   76 R    1  0.1   0:04.56 httpd2-prefork
29756 root      20   0 94916 1388  156 R    1  0.1   0:04.28 httpd2-prefork

I know that there are no exceptional queries being made to the database; I’ve checked over and over. I’ve tried various settings found on Google, increasing various MySQL buffer and cache sizes, and I’ve altered the PHP code to use the mysqli extension rather than the old mysql one.
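To give an idea of what I mean by buffer and cache sizes, the my.cnf tweaks I’ve been trying look roughly like this (the numbers are just examples taken from forum posts, not tuned recommendations):

# /etc/my.cnf (excerpt) - example values only
[mysqld]
key_buffer_size   = 64M    # MyISAM index cache
query_cache_size  = 32M    # cache for repeated SELECTs
table_cache       = 256
sort_buffer_size  = 2M
max_connections   = 100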

Yet it still grinds to a halt.

An alternative I’ve been offered is to move to an MS SQL cluster, something I’m not too keen on.

Can anyone suggest anything? The system is much the same as the SUSE 9 setup that was running previously without any problems.

Thanks,
Karl.

Hi,

Are you sure that MySQL is responsible for this issue? In the posted top output, mysqld is only using 0.3% of the memory, which looks normal. But you have a lot of Apache prefork processes, and they consume a lot of memory. Maybe that is where your problem lies; you can verify it by restarting apache2 when you notice the problem occurring. I have no idea why this happens, but you may have to tweak some of the prefork parameters: prefork - Apache HTTP Server
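For example, on SUSE those limits live in /etc/apache2/server-tuning.conf. The values below are only a sketch of the kind of thing you could try on a 2 GB box, not tested recommendations:

<IfModule prefork.c>
	# keep (MaxClients x memory per child) below the available RAM
	StartServers         5
	MinSpareServers      5
	MaxSpareServers     10
	MaxClients          50
	# recycle children regularly so any leaked memory is given back
	MaxRequestsPerChild 1000
</IfModule>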

hope this helps

Thanks for that. I shall investigate further. Currently /etc/apache2/server-tuning.conf is set at its defaults.

One of the reasons I suspect MySQL is the cause is that, when watching top, I see lots of little httpd forks appear, then MySQL rises to the top of the list, and then the system hangs.

Unfortunately, it happens quickly, so I have little chance of restarting Apache before it freezes. I then have to reset the VM.

Here’s a different top, from a couple of days ago, before we doubled the RAM and swap, and added another processor:


top - 12:31:27 up 4 days,  3:39,  3 users,  load average: 8.81, 3.88, 1.52
Tasks: 101 total,   5 running,  96 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.1%us, 98.9%sy,  0.0%ni,  0.0%id,  0.7%wa,  0.1%hi,  0.1%si,  0.0%st
Mem:   1034864k total,  1022880k used,    11984k free,      256k buffers
Swap:  1052248k total,  1052248k used,        0k free,     4888k cached

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 9987 mysql     20   0  111m 4784  640 S 36.8  0.5   2:09.81 mysqld
   15 root      15  -5     0    0    0 D 13.1  0.0   2:58.89 kswapd0
20191 wwwrun    20   0  171m  82m  176 D 11.4  8.1   0:10.66 httpd2-prefork
 2539 root      20   0  2740  264  116 R 11.2  0.0   4:01.23 vmware-guestd
 7850 root      20   0 60828  332  104 D 11.1  0.0   1:21.88 packagekitd
22080 wwwrun    20   0 94072  884  172 R 11.1  0.1   0:01.68 httpd2-prefork
19446 wwwrun    20   0  185m  94m  148 D 10.0  9.3   0:16.42 httpd2-prefork
22101 root      20   0  2064  524  424 R 10.0  0.1   0:00.92 find
22100 wwwrun    20   0 94072 1012  304 S  8.6  0.1   0:01.58 httpd2-prefork
22099 root      20   0  2272  472  224 R  7.3  0.0   0:01.36 top
18046 wwwrun    20   0  278m 102m  152 R  5.9 10.2   0:18.28 httpd2-prefork
 1558 haldaemo  20   0  6160  344   40 S  5.7  0.0   0:33.31 hald
14855 ddwee2    20   0  9252  316  184 S  5.5  0.0   0:02.58 sshd
22102 root      20   0 94072 1000  300 D  4.1  0.1   0:00.24 httpd2-prefork
 2005 root      20   0  2200  208   76 S  1.4  0.0   0:19.76 syslog-ng
18037 wwwrun    20   0  231m  90m  116 S  1.4  8.9   0:30.36 httpd2-prefork
    1 root      20   0   772   52   28 S  0.7  0.0   0:32.69 init

I keep thinking that MySQL is filling the RAM and swap with cache data, but restarting MySQL doesn’t clear it. I’ll check again in the morning when things have filled up again and restart Apache to see if that clears it (although Apache is already restarted when logrotate runs).
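In case it’s useful, this is what I plan to run (the rc shortcuts are SUSE’s wrappers around the init scripts, so the same should work via /etc/init.d):

free -m              # note used RAM and swap
rcapache2 restart
free -m              # see how much was released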

Karl.

Have you checked, using MySQL’s data repair commands, whether there are any inconsistencies in the database? If you don’t serve anything with Apache, does MySQL still get into trouble? (I understand if you are not able to take the app offline to test this.)
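Something along these lines, with the database and table names as placeholders, run from the mysql client:

-- per-table checks (MyISAM tables)
CHECK TABLE mydb.mytable;
REPAIR TABLE mydb.mytable;
OPTIMIZE TABLE mydb.mytable;

-- or everything at once from the shell:
-- mysqlcheck --all-databases --auto-repair -u root -p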

Doing repairs and optimizations was one of the first things I did. If it were a bad query, wouldn’t it fail every time that query was run from the PHP code?

The Webalizer statistics show the server averages about 32 MB of data transfer a day, from 10,095 hits. Is that a lot, or is it fairly modest?

Karl

32 MB a day seems pretty modest.

You never know with these servers. My theory was that MySQL was busy repairing the damage on some other part of the DB.

You may have to bring out the heavy artillery and strace the server to see what it’s doing.
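Something like this, run as root (the output file name is just an example; -f follows the forked children and -tt adds timestamps):

strace -f -tt -o /tmp/mysqld.trace -p $(pidof -s mysqld)

and the same with -p pointing at one of the busy httpd2-prefork PIDs if you want to compare.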

I’ve just been trying out strace. It looks like its output may mean little to me.

I’ve been using SUSE for over five years and have never had these problems before. It’s only in version 11 that it has all gone wrong.

Perhaps I was wrong to blame MySQL. It seems to be a PHP or Apache2 problem.

What’s happening is that over time, a couple of days, the RAM gradually fills up; once that’s full, the swap fills; and once that’s full, the VM freezes.

Memory use appears to grow when there’s substantial httpd activity.

Restarting httpd releases a large block of that memory.

For example, with httpd running:


top - 09:19:52 up 1 day,  4:18,  1 user,  load average: 0.00, 0.00, 0.00
Tasks:  89 total,   1 running,  88 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.3%us,  0.3%sy,  0.0%ni, 99.0%id,  0.3%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   2075228k total,  1424100k used,   651128k free,     3480k buffers
Swap:  2096472k total,  1185680k used,   910792k free,    29452k cached

and after httpd has been stopped:


top - 09:20:59 up 1 day,  4:19,  2 users,  load average: 0.62, 0.17, 0.06
Tasks:  85 total,   2 running,  83 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  1.3%sy,  0.0%ni, 98.3%id,  0.3%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:   2075228k total,   269604k used,  1805624k free,     4692k buffers
Swap:  2096472k total,    21360k used,  2075112k free,    40416k cached

After starting httpd again, the memory stays at that lower level.

Any suggestions as to what could be causing this?

Thanks

Karl.

A memory leak in one of your web apps.

OK. After some searching I’ve found this:
What’s new in PHP V5.2, Part 1: Using the new memory manager

I wrote a version that writes the information to a database; within two minutes I had over 100 records. This could keep me busy for a while.
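For anyone curious, the logging I bolted on is roughly along these lines (the table, credentials, and file name are made up for the example); it records each request’s peak memory at shutdown using the new PHP 5.2 functions:

<?php
// log_memory.php - included at the top of each page (or via auto_prepend_file)
function log_peak_memory()
{
    // example credentials and table; adjust to suit
    $db = @new mysqli('localhost', 'loguser', 'secret', 'memlog');
    if (mysqli_connect_errno()) {
        return; // never break the page just because logging failed
    }
    $script = $_SERVER['SCRIPT_NAME'];
    $peak   = memory_get_peak_usage(true); // bytes actually taken from the system (PHP 5.2+)
    $stmt = $db->prepare('INSERT INTO request_memory (script, peak_bytes, logged_at) VALUES (?, ?, NOW())');
    if ($stmt) {
        $stmt->bind_param('si', $script, $peak);
        $stmt->execute();
        $stmt->close();
    }
    $db->close();
}
register_shutdown_function('log_peak_memory');
?>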

Unless there’s an issue with PHP 5.2 that I didn’t come across in 5.1?

Karl.