Hi.
Well, I hope this is the right forum to post in; if not, please point me to the correct one. So:
I’m coding a client/server application in C/C++. The client periodically holds a very simple, self-contained TCP/IP conversation with the server: it connects to the server, writes a message, waits for and receives the response, and closes the connection. The server does the mirror image: it accepts the connection, reads the message, writes the response, and closes the connection. Very simple.
I use plain sockets and the usual family of functions (this is a requirement): socket(), connect(), etc. I log the result of every one of these calls, and not a single one reports an error, on either the client or the server side. On each side I first call shutdown(fd, SHUT_RDWR) and then close() on the socket.
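To make it concrete, here is a minimal sketch of one client pulse (simplified from the real code; the address, port, and message are placeholders, and in the real program every return value goes to the log):

/* One client pulse, simplified. Address, port and message are placeholders. */
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <unistd.h>
#include <string.h>
#include <stdio.h>

static int pulse(const struct sockaddr_in *srv)
{
    char buf[256];
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd == -1) { perror("socket"); return -1; }
    printf("socket() returned fd %d\n", fd);   /* this number keeps growing */

    if (connect(fd, (const struct sockaddr *)srv, sizeof *srv) == -1) {
        perror("connect");
        close(fd);
        return -1;
    }
    if (write(fd, "ping", 4) == -1) perror("write");
    if (read(fd, buf, sizeof buf) == -1) perror("read");

    if (shutdown(fd, SHUT_RDWR) == -1) perror("shutdown");
    if (close(fd) == -1) perror("close");      /* never reports an error */
    return 0;
}

int main(void)
{
    struct sockaddr_in srv;
    memset(&srv, 0, sizeof srv);
    srv.sin_family = AF_INET;
    srv.sin_port = htons(12345);                     /* placeholder port */
    inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);  /* placeholder address */

    for (;;) {       /* the periodic pulse */
        pulse(&srv);
        sleep(1);    /* 1 to 20 seconds in my tests */
    }
}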
I usually test with between 1 and 20 seconds of delay between client pulses. The server IP address is resolved just once, when connecting to it for the first time; from then on the address and port are known. (In real conditions this may change: the client may fail to reach the last server and connect to another one from a list it keeps.)
As time elapses, the socket descriptor number (the one returned by the socket() function) keeps increasing and is never reused. This ends in complete descriptor exhaustion (at around number 1023), after which it is impossible for the client to reconnect. Please note that netstat shows no “ghost” connections; it only shows the last few tens of connections in FIN_WAIT state, and the older ones eventually disappear, which is correct. It is the socket descriptor inside the program that is never reused.
The solution should not be, of course, to raise the per-process or system-wide file descriptor limit to the highest possible value, since the program would just keep consuming descriptors without reusing them.
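Just to confirm which ceiling I am hitting, here is a quick standalone check of the per-process descriptor limit (RLIMIT_NOFILE); the soft limit defaults to 1024 on most Linux systems, which matches the ~1023 descriptor numbers I see:

#include <sys/resource.h>
#include <stdio.h>

int main(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) == -1) {
        perror("getrlimit");
        return 1;
    }
    /* Soft limit is typically 1024, i.e. descriptors 0..1023. */
    printf("soft limit: %lu, hard limit: %lu\n",
           (unsigned long)rl.rlim_cur, (unsigned long)rl.rlim_max);
    return 0;
}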
Ah! And I am not calling setsockopt() at all right now – I have tried a few options, but none of them seems relevant to this, as far as I can tell.
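The kind of thing I tried was along these lines, placed right after the socket() call in the sketch above (SO_REUSEADDR being the usual suspect; it affects address/port reuse in TIME_WAIT, not descriptor numbers, so as expected it changed nothing):

int on = 1;
if (setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof on) == -1)
    perror("setsockopt");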
I guess I am missing something very basic here…
Does anybody have any idea, hint, or help to offer?
Thanks!
(BTW: openSUSE 11.0, g++ 4.3.1. What else do you need to know?)