Shared memory

Hello,

I’ve got a question about shared memory under Linux. When I allocate e.g. 20 bytes of shm with shmget() & shmat(), I’m able to write up to ~5200 bytes of data before it crashes (with a symbol lookup error?!).
When I write ~5900 bytes, I get a segmentation fault.

Under Solaris 10 I’m able to write up to PAGE_SIZE bytes until I get a segmentation fault (and no symbol lookup errors).
getconf PAGE_SIZE reports 4k on my Linux machine.

Well, my question is: how many bytes are allocated at once by the OS? 4k or 8k or something like that would IMHO make sense to me, but these ~5500 bytes are quite strange.
And why are there symbol lookup errors? (And no, this isn’t my homework, and yes, I’m aware that this is a silly question :wink: )

I tested this with the following program:

#include <stdio.h>
#include <stdlib.h>
#include <errno.h>
#include <unistd.h>
#include <string.h>
#include <sys/shm.h>

int main(int argc, char* argv[])
{
    int shmid;
    char* message;
    long limit = 0;
    long real = 0;

    /* Parameter handling */
    if (argc == 3) {
	real = atol(argv[1]);
	limit = atol(argv[2]);
    } else {
	printf("Usage: %s <size of shared memory> <nr of bytes to write>
", argv[0]);
	exit(1);
    }

    if ((limit <= 0) || (real <= 0)) {
	printf("Please specify a number above 0
");
	exit(1);
    }

    /* Allocate and attach "real"-Bytes of shared memory */
    shmid = shmget(42, (size_t) real, IPC_CREAT | 0640);
    if (shmid == -1) {
	perror("Couldn't create shared memory");
	exit(errno);
    }

    message = (char*) shmat(shmid, NULL, 0);
    if (message == (char*) -1) { /* shmat() returns (void*) -1 on failure, not NULL */
	shmctl(shmid, IPC_RMID, 0);
	perror("Couldn't attach shared memory");
	exit(errno);
    }

    /* Write "limit"-Bytes to shared memory */
    memset(message, '\0', (size_t) limit);
    printf("%ld Bytes successfully written to shared memory
", limit);

    /* Detach and free shared memory */
    if (shmdt((void*) message) == -1) {
	perror("Couldn't detach shared memory");
	exit(errno);
    }

    if (shmctl(shmid, IPC_RMID, 0) == -1) {
	perror("Couldn't remove shared memory");
	exit(errno);
    }

    return EXIT_SUCCESS;
}

With the following output:

> ./producer 20 4096
4096 Bytes successfully written to shared memory

> ./producer 20 5100
5100 Bytes successfully written to shared memory

> ./producer 20 5300
./producer: symbol lookup error: ./producer: undefined symbol: printf, version GLIBC_2.0

> ./producer 20 5900
Speicherzugriffsfehler
(segmentation fault)

You are not supposed to write anything beyond the allocated space anyway. If you do, the result can be catastrophic.

This is what specification writers call an undefined situation. The shmem routines have satisfied the minimum specifications. If you go beyond the limits, it may blow up on you now, blow up on you later, or do nothing. But since you ask: it probably comes down to the minimum page size plus however much memory beyond that you can overwrite without touching anything important; the overrun presumably clobbers whatever mapping happens to sit next to the segment, which would also explain the symbol lookup error.

./shm.o 20 9393 = success
./shm.o 20 9394 = segfault
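
If you want to see what the kernel actually records for such a tiny request, you can ask for the segment’s bookkeeping with shmctl(IPC_STAT). Here is a minimal sketch, assuming Linux SysV shared memory, where shm_segsz holds the size passed to shmget() while the mapping behind it is page-granular:

/* Sketch: inspect what the kernel recorded for a tiny segment.
 * shm_segsz keeps the size requested in shmget(); the mapping itself
 * is backed by whole pages, so the touchable region is at least one page. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/ipc.h>
#include <sys/shm.h>

int main(void)
{
    struct shmid_ds info;
    int shmid = shmget(IPC_PRIVATE, 20, IPC_CREAT | 0640);

    if (shmid == -1) {
        perror("shmget");
        return EXIT_FAILURE;
    }

    if (shmctl(shmid, IPC_STAT, &info) == -1) {
        perror("shmctl(IPC_STAT)");
        shmctl(shmid, IPC_RMID, NULL);
        return EXIT_FAILURE;
    }

    printf("requested size (shm_segsz): %lu bytes\n", (unsigned long) info.shm_segsz);
    printf("page size:                  %ld bytes\n", sysconf(_SC_PAGESIZE));

    shmctl(shmid, IPC_RMID, NULL);
    return EXIT_SUCCESS;
}

On a 4k-page machine this should print 20 and 4096, which fits the observation that small overruns often go unnoticed; anything past the size you asked for is still undefined territory, of course.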

Hi,

I’ve already written data with a size of some MBs in the past, and I know a lot of programs which write a lot more; maybe you should have a look at Advanced Linux Programming, chapter 5, which describes shared memory usage. Sometimes it is necessary to increase the maximum shared memory segment size (look at /proc/sys/kernel/shmmax).
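
If you’d rather check that limit from a program than with cat, here is a rough sketch that simply reads /proc/sys/kernel/shmmax (assuming the file is present and readable, as on a stock Linux kernel):

/* Sketch: print the system-wide maximum size of a SysV shared memory
 * segment, as exposed in /proc/sys/kernel/shmmax. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    unsigned long long shmmax = 0;
    FILE *f = fopen("/proc/sys/kernel/shmmax", "r");

    if (f == NULL) {
        perror("fopen /proc/sys/kernel/shmmax");
        return EXIT_FAILURE;
    }

    if (fscanf(f, "%llu", &shmmax) != 1) {
        fprintf(stderr, "could not parse shmmax\n");
        fclose(f);
        return EXIT_FAILURE;
    }
    fclose(f);

    printf("shmmax = %llu bytes\n", shmmax);
    return EXIT_SUCCESS;
}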

Maybe you also want to use a more high-level interface for your IPC task; have a look at Chapter 9. Boost.Interprocess, or, if you want to use Qt, Qt 4.5: QSharedMemory Class Reference.

Hope this helps