Can't create QSharedMemory on macOS Catalina



swante
22nd May 2020, 19:47
Hello.

I'm trying to create a shared memory segment between two processes.
Calling QSharedMemory::create() returns false with QSharedMemory::errorString() set to "QSharedMemory::create: out of resources".
strerror(errno) returns "No space left on device" (even though I have 150 GB of free disk space); sometimes it returns "Too many open files" instead.

The same code works fine on Windows. I don't know what to do and hope for your help.

d_stranz
22nd May 2020, 23:23
Let's see, I am trying to peer into my crystal ball at your invisible code, but I am not having much luck.

If you are getting an error saying that no resources are available, that would seem to indicate you have opened too many files, or other things that consume a resource handle, without closing or releasing those handles when you are done with them. Is every fopen() matched with an fclose() (or the equivalent, if you are writing a pure Qt solution)? If a FILE pointer simply goes out of scope, the file is not closed in the process; the file (and the resource handle it uses) stays open until your program exits.
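As a sketch of the difference (the file name here is made up for illustration, not from your code):

#include <QFile>
#include <cstdio>

// A leaked C file handle vs. a scoped QFile.
void leaky()
{
    FILE *f = fopen("data.txt", "r"); // hypothetical file
    if (!f)
        return;
    // ... use f ...
    // missing fclose(f): the handle stays open until the process exits
}

void scoped()
{
    QFile file("data.txt");
    if (!file.open(QIODevice::ReadOnly))
        return;
    // ... use file ...
} // QFile's destructor closes the underlying handle here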

swante
23rd May 2020, 02:38
Sorry, I didn't post the code because I don't think the problem is code-related, but maybe (hopefully) I'm wrong. Here's a minimal example reproducing the problem:


#include <QCoreApplication>
#include <QSharedMemory>
#include <QDebug>
#include <cerrno>   // errno
#include <cstring>  // strerror

int main(int argc, char *argv[]) {
    QCoreApplication a(argc, argv);
    auto memory = new QSharedMemory;
    memory->setKey("foo");
    if (!memory->create(8, QSharedMemory::AccessMode::ReadWrite)) {
        qDebug() << memory->errorString();
        qDebug() << strerror(errno);
    }
    return a.exec();
}
The first line of the output is always "QSharedMemory::create: out of resources"
The second line is funny: at first it was "No space left on device", but after a couple of debug builds it changed to "Undefined error: 0". Before today I also saw "Too many open files". I don't know what it depends on.

d_stranz
23rd May 2020, 18:09
For one thing, shared memory does not use an HDD, so it doesn't matter if you have 150 GB of disk space. It is memory-based.

Second, shared memory is allocated by the OS and is shared among processes. In your code you allocate a new QSharedMemory instance (using new), but you never delete it when the program exits. If the shared memory segment named "foo" is still around, you can't create a new one. If you are running multiple instances of your program, the first one may be able to create() the segment, but a second running instance can't, because the segment already exists; it can attach() to it instead.
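As a minimal sketch of handling that case (createOrAttach() is just an illustrative helper, not a Qt API), you can inspect QSharedMemory::error() after a failed create():

#include <QSharedMemory>
#include <QDebug>

// Create the segment, and fall back to attaching when another process
// (or a leftover segment) already owns the key.
bool createOrAttach(QSharedMemory &memory, int size)
{
    if (memory.create(size))
        return true;                    // fresh segment created
    if (memory.error() == QSharedMemory::AlreadyExists)
        return memory.attach();         // reuse the existing segment
    qDebug() << memory.errorString();   // some other failure
    return false;
}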


"Undefined error: 0"

The QSharedMemory docs say that an error code of 0 means "no error".

swante
23rd May 2020, 21:14
Since heap-allocated memory is freed on program exit, there's no need for an explicit delete. In any case, the object can be allocated on the stack instead, as QSharedMemory memory;, with the same result. There are no other instances of this process running; calling attach() returns "QSharedMemory::handle: UNIX key file doesn't exist". "Undefined error: 0" is the result of strerror(errno), not of memory->errorString(); the latter returns "QSharedMemory::create: out of resources", which has error code 7.

d_stranz
23rd May 2020, 22:27
Since heap-allocated memory is freed on program exit, there's no need for an explicit delete.

That isn't the case for shared memory if other processes are attached to it; it is probably reference counted. As the documentation says, unless the QSharedMemory destructor is called (decrementing the reference count), there is no guarantee that the OS will release the memory. Heap memory that is freed when a process exits is simply released without any destructors being called; that's why a program with a memory leak can still have all the leaked memory freed on exit.

I don't know what the problem is with your program. Maybe your choice of "foo" as the key? It's a pretty common name for programmers to use in testing, so a stale segment with that key may already exist on your machine.

ChrisW67
24th May 2020, 07:35
According to the Qt docs, for UNIX (which I am assuming will include OS X):


Unix: QSharedMemory "owns" the shared memory segment. When the last thread or process that has an instance of QSharedMemory attached to a particular shared memory segment detaches from the segment by destroying its instance of QSharedMemory, the Unix kernel releases the shared memory segment. But if that last thread or process crashes without running the QSharedMemory destructor, the shared memory segment survives the crash.

If I execute your code on my Linux box, you can see the shared segment being created:

$ ipcs -m

------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
...
0x5102002c 57 chrisw 600 8 1
If I then exit with Control-C (because there's no other way to stop it), the program is effectively "crashed":

$ ipcs -m

------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
...
0x5102002c 57 chrisw 600 8 0
You can see that the number of attached processes drops to zero, but the shared memory segment persists. The OS frees memory allocated on the heap when the process terminates, i.e. the space occupied by the controlling QSharedMemory object, but not the resource it controlled, because the destructor is never called.

A second attempt to run your code then fails:

$ ./test
"QSharedMemory::create: already exists"
No such file or directory

If I modify your code so that the program terminates gracefully and ensures the QSharedMemory object is destroyed:
#include <QCoreApplication>
#include <QSharedMemory>
#include <QDebug>
#include <QTimer>
#include <cerrno>   // errno
#include <cstring>  // strerror

int main(int argc, char *argv[]) {
    QCoreApplication a(argc, argv);
    auto memory = new QSharedMemory(&a); // <<< parented to ensure destruction
    memory->setKey("foo");
    if (!memory->create(8, QSharedMemory::AccessMode::ReadWrite)) {
        qDebug() << memory->errorString();
        qDebug() << strerror(errno);
    }
    QTimer::singleShot(5000, &a, &QCoreApplication::quit); // <<< quit in 5 sec
    return a.exec();
}
Then, after deleting the persistent shared memory block with ipcrm (e.g. ipcrm -m 57 for the shmid shown earlier), the program runs to a graceful completion and deallocates the shared memory segment:



$ ipcs -m

------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0x00000000 32773 chrisw 600 20480 2 dest
0x00000000 32774 chrisw 600 20480 2 dest
...

chrisw@newton:/tmp/tt$ ./test &
[1] 5023
chrisw@newton:/tmp/tt$ ipcs -m

------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0x00000000 32773 chrisw 600 20480 2 dest
0x00000000 32774 chrisw 600 20480 2 dest
0x5102002c 32778 chrisw 600 8 1
...

# program terminates after 5 seconds
chrisw@newton:/tmp/tt$ ipcs -m

------ Shared Memory Segments --------
key shmid owner perms bytes nattch status
0x00000000 32773 chrisw 600 20480 2 dest
0x00000000 32774 chrisw 600 20480 2 dest
...


This approach will be more resilient:

#include <QCoreApplication>
#include <QSharedMemory>
#include <QDebug>
#include <QTimer>
#include <cerrno>   // errno
#include <cstring>  // strerror

int main(int argc, char *argv[]) {
    QCoreApplication a(argc, argv);
    QSharedMemory memory("foo"); // scope ensures destructor is called on normal exit
    // attach to shared mem foo if it exists
    if (!memory.attach(QSharedMemory::AccessMode::ReadWrite)) {
        // does not already exist so create it
        if (!memory.create(8, QSharedMemory::AccessMode::ReadWrite)) {
            qDebug() << memory.errorString();
            qDebug() << strerror(errno);
        }
    }
    QTimer::singleShot(5000, &a, &QCoreApplication::quit); // <<< quit in 5 sec
    return a.exec();
}
You can run ten of these simultaneously: the first to start will create the segment, the others will attach to it, and the last one to terminate gracefully will remove the shared memory.

swante
25th May 2020, 12:57
Yep, you were right that the created memory segments were outliving the process, because I never exited it gracefully, only by stopping the debugger or killing the process. I also tried calling detach() on the QSharedMemory object before a.exec(), and that worked too.

Also, my simplified example doesn't tell the whole story: in the real code I create 10 memory segments, and that is the true reason for "QSharedMemory::create: out of resources" and "Too many open files". By default, macOS limits a user to 8 shared memory segments (and 4 MB of shared memory in total), which is why attempts to create the last two segments always returned these errors. Fortunately, the limits can be changed as described here: http://www.spy-hill.com/help/apple/SharedMemory.html

So we can either raise the limits per machine or simply not attach to more than 8 segments simultaneously; a sketch of the latter idea follows.
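For anyone hitting the same wall, here is a minimal sketch of that idea (the "seg_%1" key prefix, the segment size, and createSegments() are hypothetical, not from my real code): create numbered segments and stop at the first failed create() instead of failing hard.

#include <QSharedMemory>
#include <QDebug>
#include <QString>
#include <QVector>

// Create up to 'wanted' segments, stopping when the OS limit is reached.
QVector<QSharedMemory*> createSegments(int wanted)
{
    QVector<QSharedMemory*> segments;
    for (int i = 0; i < wanted; ++i) {
        auto *memory = new QSharedMemory(QStringLiteral("seg_%1").arg(i));
        if (!memory->create(8)) {
            qDebug() << "stopping at segment" << i << ":" << memory->errorString();
            delete memory;  // nothing was created; free the controlling object
            break;          // stay within the OS limit
        }
        segments.append(memory);
    }
    return segments;
}

The caller is still responsible for deleting the returned objects on graceful exit, so that the destructors run and the kernel can release the segments.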

Thanks for the help!