Sublime Forum

Crawl files gobbling up memory on Linux

#1

My system starts to run out of memory when it’s been up for a while. Usually I just reboot, but I did some investigation today. It turns out that my /dev/shm filesystem, which is a tmpfs, is using 19G! (The default config lets it use up to 50% of memory, and my system has 64G.)

/dev/shm contains over 100K files named *crawl* taking up 99.97% of the space. Their names contain what looks like a PID, and the PID in the most recent ones matches the currently running Sublime. However, there are thousands with an older PID left over from previous runs of Sublime, and when I quit the current Sublime the files it was using aren’t cleaned up. When I start Sublime again, a new crop of crawl files appears in /dev/shm. Some of them start with sem. and are 32 bytes long; the rest are 524288 bytes long.

This seems like a pretty serious problem. Is this a bug in Sublime or in my Linux distro? I’m on an x86_64 Linux machine running NixOS Unstable.


#2

What version of ST are you using? This bug was fixed in build 4170.


#3

I’m using 4192.


#4

Is indexing working for you? What’s in the indexing status under Help > Indexing Status…?


#5

Yes, indexing is working, and there’s lots of activity in the status window.


#6

The crawler (which does the indexing) unlinks the shared memory it uses immediately upon being launched. Unless you’ve somehow blocked the unlink through some security mechanism, I don’t see how indexing could both work and leak.


#7

I’m on NixOS, which does some unusual things. It could be a problem with my glibc. I’ll look into it (maybe with strace) and get back to you.

I temporarily limited the size of my /dev/shm filesystem to 1GB and then both Sublime and Slack started crashing.


#8

I did an strace, and it shows that there’s no attempt to unlink the shared memory file. I can provide a fuller trace extract if it would help, but the only mentions of one particular left-over file are:

916110 unlink("/dev/shm/915244crawl4r0") = -1 ENOENT (No such file or directory)
916110 openat(AT_FDCWD, "/dev/shm/915244crawl4r0", O_RDWR|O_CREAT|O_EXCL|O_NOFOLLOW|O_CLOEXEC, 0700) = 201
916110 ftruncate(201, 524288) = 0
916110 mmap(NULL, 524288, PROT_READ|PROT_WRITE, MAP_SHARED, 201, 0) = 0x7fabbef80000
916110 close(201) = 0
916110 clone3({flags=CLONE_VM|CLONE_VFORK|CLONE_CLEAR_SIGHAND, exit_signal=SIGCHLD, stack=0x7fabbf264000, stack_size=0x9000}, 88) = 916113
916113 execve("/nix/store/20sd8x85vxm2s895d7iywqj82b013bm2-sublimetext4-bin-4192/.sublime_text-wrapped", ["/nix/store/20sd8x85vxm2s895d7iyw"..., "--crawl", "915244crawl4s0", "915244crawl4r0", "915244", "/home/user/.config/sublime-te"..., "/home/user/.cache/sublime-tex"..., "/nix/store/20sd8x85vxm2s895d7iyw"...], 0x7fffda6757c0 /* 78 vars */) = 0
916110 munmap(0x7fabbef80000, 524288) = 0

So the parent thread creates the file, sets its length, mmaps it, closes the descriptor, passes the name to the crawler subprocess on its command line, and munmaps it. There’s no sign that either side attempts to unlink it, and I used strace -f, so the tracing included all subprocesses.

However, the child process doesn’t seem to do much:

916113 rt_sigprocmask(SIG_BLOCK, NULL, ~[KILL STOP], 8) = 0
916113 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0
916113 execve("/nix/store/20sd8x85vxm2s895d7iywqj82b013bm2-sublimetext4-bin-4192/.sublime_text-wrapped", ["/nix/store/20sd8x85vxm2s895d7iyw"..., "--crawl", "915244crawl4s0", "915244crawl4r0", "915244", "/home/user/.config/sublime-te"..., "/home/user/.cache/sublime-tex"..., "/nix/store/20sd8x85vxm2s895d7iyw"...], 0x7fffda6757c0 /* 78 vars */) = 0
916113 brk(NULL) = 0x5634f1701000
916113 mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f1f6ab07000
916113 access("/etc/ld-nix.so.preload", R_OK) = -1 ENOENT (No such file or directory)
916113 openat(AT_FDCWD, "/etc/sane-libs/glibc-hwcaps/x86-64-v3/librt.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
916113 newfstatat(AT_FDCWD, "/etc/sane-libs/glibc-hwcaps/x86-64-v3/", 0x7ffc5f2cf260, 0) = -1 ENOENT (No such file or directory)
916113 openat(AT_FDCWD, "/etc/sane-libs/glibc-hwcaps/x86-64-v2/librt.so.1", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
916113 newfstatat(AT_FDCWD, "/etc/sane-libs/glibc-hwcaps/x86-64-v2/", 0x7ffc5f2cf260, 0) = -1 ENOENT (No such file or directory)
916113 +++ killed by SIGKILL +++

So it’s certainly not unlinking the file right after starting, but perhaps the problem is that it’s never getting a chance to do that.

It’s odd that it’s getting a SIGKILL, which prevents it from running any kind of cleanup before it terminates; I’d have expected a SIGTERM or SIGINT. I exited Sublime from the GUI with File > Quit, so there shouldn’t have been any emergency termination, and I watched the progress of indexing in the GUI and waited until it had been idle for a while before quitting. The parent thread is killing it with SIGKILL, though:

916110 kill(916113, SIGKILL) = 0

I should run the trace again with timestamps, so I can see how long it executes before being killed.


#9

Running with timing enabled shows that the subprocess is being killed when it’s been running for only a few milliseconds: one instance ran for 6.25 ms and another for 0.8 ms. In all cases, the subprocess hadn’t even finished loading its shared libraries.

Strangely, I am seeing indexing progress messages in the status window.


#10

The SIGKILL indicates that the crawler is getting stuck (spending more than 10 seconds on a file) and being killed by the main process. I could see a leak happening if the subprocess took more than 10 seconds to get to opening the shared memory, but if that’s the case then something else has gone seriously wrong.

Note that anytime this happens it gets logged in the indexing status. Are you seeing that?


#11

No, I’m not seeing anything unusual in the indexing status.

Also, the kill is happening within 6 ms of the subprocess being forked, so the crawler isn’t getting stuck. It doesn’t even have time to finish loading its shared libraries before it’s killed.
