I am not sure if this is the right place to post this question, so don’t take it badly if it is not. I have openSUSE 13.1 and it has been working OK for some years now (well, I’ve done upgrades all the way from version 12). I use this server at home as an internet gateway, Apache server and other stuff. A strange thing happened about two months ago: there was no traffic going over the server and I couldn’t access it using ssh, so I tried to log in on the console. After I entered the username and password, no prompt was shown, so I reset the machine with a “long push” on the power button. After the restart, I checked /var/log/messages and saw a “pause” of almost two hours! (I have crontab jobs running every 5 minutes, so there is normally something written every 5 minutes.) The same thing happened a couple more times after that, with the same symptoms. I then ran zypper up hoping that it would help, but the problem is still here. My questions are:
Is there any other place where I can look to find some logs or something that can give me a “hint” where the problem might be?
As I have a custom application running on the server (using sockets, MySQL, websockets) written in C… is it possible that this application “eats” all TCP ports and that this causes what I described? Is there any command to see the ports used by a process (so I can check whether the ports are released properly)?
You can find more informative detail by inspecting your journal, which replaces the legacy log files. Many sources feed into the journal. Inspect your options for displaying what you want with journalctl.
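For example, a few standard journalctl invocations (exact options can vary slightly with your systemd version, so check man journalctl on your system):

```shell
# Everything from the current boot
journalctl -b

# Kernel messages only (the dmesg equivalent)
journalctl -k

# Only warnings and worse, with explanatory text where available
journalctl -p warning -x

# Follow new entries live, like tail -f
journalctl -f
```

Note that by default the journal may be kept only in memory (under /run) and not survive a reboot; if the directory /var/log/journal exists, systemd stores the journal persistently so you can still read entries from before a hard reset.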
Without knowing anything about your C program: no, apps don’t “eat” TCP ports in the uncontrolled way you might be thinking; TCP ports are opened as needed, within a specified range, for network connections. From your description, I’m guessing it’s some kind of LAMP-type application, but in C?
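To answer the command question, you can see which process holds which sockets with ss (iproute2), netstat (net-tools) or lsof; the process name “myapp” below is just a placeholder for your application:

```shell
# All TCP sockets with the owning process (run as root to see every process)
ss -tanp            # or the older equivalent: netstat -tanp

# Only listening TCP sockets
ss -tlnp

# Count sockets stuck in TIME-WAIT or CLOSE-WAIT; a large CLOSE-WAIT count
# hints that an application is not closing its connections properly
ss -tan state time-wait | wc -l
ss -tan state close-wait | wc -l

# Open network sockets belonging to one process (hypothetical name "myapp")
lsof -i -a -c myapp
```

Comparing these counts while the server behaves normally gives you a baseline to check against in your periodic snapshots.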
In any case, you should describe what you mean when you say “updates from 12.” Are you saying that the system was originally, or maybe still is, applying updates from openSUSE 12.3, 12.2 or 12.1?
You should inspect your currently enabled repos with the following:
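Something like this (output columns vary a bit between zypper versions):

```shell
# List all configured repositories with details (URI, enabled, autorefresh)
zypper lr -d

# Show only the enabled repositories
zypper lr -E
```

Every enabled repo should belong to your installed release (13.1), with no leftovers from 12.x.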
After you’re sure your enabled repos are correct, you should update your system:
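For example:

```shell
# Refresh the repository metadata, then apply all available updates
zypper refresh
zypper up

# If you were actually moving between releases (after switching repos),
# a full distribution upgrade would be:
# zypper dup
```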
Well, journalctl for --system, --boot and --dmesg around the period when the problem happened is empty, and just before log writing stopped nothing suspicious was written (the same as I had already seen in /var/log/messages).
I did the upgrades properly (enabled the update repositories, did the update and then, for the next release, enabled only the updates from that release, disabling the “old” ones). So there is no mixture of releases. If only I could see what happens at the particular moment when the server doesn’t respond! But as I can’t log in, and there is nothing in the logs, I am confused. About my app written in C: it must be that way because there is a special communication protocol involved. I have monitored my app for a couple of days using top and there is no overhead or “memory eating”, so that one is probably not the cause. What’s strange is: how does everything stop so that I can’t even log in on a tty? It looks like processes “work” (I can enter the username and password), but after that everything stops.
I have used top to see what’s happening, but there is nothing strange when the server runs normally. When it “freezes” I can’t log on to check top. The only way I can think of is to make a top “snapshot” every 10 minutes and then, when it freezes, analyze the last snapshot.
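A sketch of such a snapshot job (the script path, log file and what gets captured are just examples, adjust to taste):

```shell
#!/bin/sh
# Hypothetical /usr/local/bin/snapshot.sh: append a timestamped snapshot
# of process, memory and socket state, so the last entries written before
# a freeze can be examined after the reset.
LOG=/var/log/freeze-snapshots.log
{
  echo "=== $(date) ==="
  top -b -n 1 | head -n 25   # batch mode, one iteration, top 25 lines
  free -m                    # memory and swap usage in MB
  ss -s                      # socket summary (TCP state counts, etc.)
} >> "$LOG" 2>&1
```

Run it from cron every 10 minutes with an entry like `*/10 * * * * /usr/local/bin/snapshot.sh`. Since the log is a plain file on disk, the last snapshot survives the hard reset, unlike a volatile journal.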
I am running the server at the multi-user runlevel, without X.