Workaround: always run `sleep 1` before any command that prints output.
Fix to `exec.py`:
--- exec_old.py 2018-10-11 19:11:54.000000000 +0200
+++ exec_new.py 2019-06-04 14:00:02.157801936 +0200
@@ -129,17 +129,19 @@
def kill(self):
if not self.killed:
self.killed = True
- if sys.platform == "win32":
- # terminate would not kill process opened by the shell cmd.exe,
- # it will only kill cmd.exe leaving the child running
- startupinfo = subprocess.STARTUPINFO()
- startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
- subprocess.Popen(
- "taskkill /PID %d /T /F" % self.proc.pid,
- startupinfo=startupinfo)
- else:
- os.killpg(self.proc.pid, signal.SIGTERM)
- self.proc.terminate()
+ # only hard kill the process if it is still around
+ if self.poll():
+ if sys.platform == "win32":
+ # terminate would not kill process opened by the shell cmd.exe,
+ # it will only kill cmd.exe leaving the child running
+ startupinfo = subprocess.STARTUPINFO()
+ startupinfo.dwFlags |= subprocess.STARTF_USESHOWWINDOW
+ subprocess.Popen(
+ "taskkill /PID %d /T /F" % self.proc.pid,
+ startupinfo=startupinfo)
+ else:
+ os.killpg(self.proc.pid, signal.SIGTERM)
+ self.proc.terminate()
self.listener = None
def poll(self):
@@ -241,7 +243,12 @@
self.encoding = encoding
self.quiet = quiet
+ if self.proc:
+ if not self.quiet:
+ print("Killing previous process")
+ self.proc.kill()
self.proc = None
+
if not self.quiet:
if shell_cmd:
print("Running " + shell_cmd)
@@ -283,9 +290,8 @@
try:
# Forward kwargs to AsyncProcess
- self.proc = AsyncProcess(cmd, shell_cmd, merged_env, self, **kwargs)
-
with self.text_queue_lock:
+ self.proc = AsyncProcess(cmd, shell_cmd, merged_env, self, **kwargs)
self.text_queue_proc = self.proc
except Exception as e:
@@ -306,6 +312,11 @@
if proc != self.text_queue_proc and proc:
# a second call to exec has been made before the first one
# finished, ignore it instead of intermingling the output.
+ #
+ # This shouldn't happen any more, since we kill a stale
+ # process before starting a new one. Keep this check
+ # in case something did go very wrong and we keep
+ # getting output from some old process that won't die.
proc.kill()
return
@@ -363,8 +374,8 @@
self.append_string(proc, "[Finished in %.1fs with exit code %d]\n" % (elapsed, exit_code))
self.append_string(proc, self.debug_text)
- if proc != self.proc:
- return
+ # Process finished cleanly, so stop tracking it.
+ self.proc = None
errs = self.output_view.find_all_results()
if len(errs) == 0:
@@ -464,3 +475,4 @@
w = view.window()
if w is not None:
w.run_command('exec', {'update_phantoms_only': True})
+
The first issue was that assigning `self.proc = AsyncProcess(...)` outside the lock would sometimes let the new process produce output before `self.text_queue_proc` was set. `append_string` would then be called, its `if proc != self.text_queue_proc` check would be true, and the newly started process would incorrectly be killed on its first line of output. This could be worked around by always adding a `sleep 1` to any build command, ensuring there is no output while the threads start up.
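The race can be shown deterministically with a toy model of the two code paths. The `Exec`, `start_buggy`, and `start_fixed` names below are hypothetical stand-ins for the real `exec.py` logic; the point is only the ordering of the assignment relative to the lock:

```python
import threading

class Exec:
    """Minimal sketch (not the real exec.py) of the tracking logic."""
    def __init__(self):
        self.text_queue_lock = threading.Lock()
        self.text_queue_proc = None
        self.proc = None
        self.killed = []

    def append_string(self, proc, s):
        # mirrors the check in exec.py: output from a process we are
        # not tracking gets that process killed
        if proc is not self.text_queue_proc and proc is not None:
            self.killed.append(proc)

    def start_buggy(self, proc, first_line):
        # old order: the process exists (and may emit output) before
        # text_queue_proc is set under the lock
        self.proc = proc
        self.append_string(proc, first_line)  # output races in too early
        with self.text_queue_lock:
            self.text_queue_proc = self.proc

    def start_fixed(self, proc, first_line):
        # new order: tracking is established under the lock before any
        # output can be handled
        with self.text_queue_lock:
            self.proc = proc
            self.text_queue_proc = self.proc
        self.append_string(proc, first_line)

e = Exec()
e.start_buggy("proc-A", "hello")
print(e.killed)   # ['proc-A'] — the new process is killed on its first line

e2 = Exec()
e2.start_fixed("proc-B", "hello")
print(e2.killed)  # [] — output arriving after tracking is set is fine
```

In the real code the "first line of output" arrives from a reader thread, so the bug only fires sometimes; the simulation just forces the bad interleaving.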
The second issue was that a stale process would only be killed if it produced output while a new process was active. Running a simple `echo Test; sleep 5` script twice, 2 seconds apart, showed that the old process was actually killed much later. I changed `exec.py` to kill a stale process before starting a new one, which should also take care of file access/overwriting problems with long-running compiler commands.
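The kill-before-start pattern can be sketched as follows. `Runner` is a hypothetical illustration, not the actual `ExecCommand` class, but it shows the same ordering the patch introduces in the second hunk:

```python
import subprocess
import sys

class Runner:
    """Sketch of the fixed flow: kill any stale process before starting
    a new one, instead of waiting for the old one to produce output."""
    def __init__(self):
        self.proc = None

    def run(self, cmd):
        if self.proc and self.proc.poll() is None:
            self.proc.kill()   # stale process dies immediately...
            self.proc.wait()   # ...and is reaped before we move on
        self.proc = subprocess.Popen(cmd)

r = Runner()
r.run([sys.executable, "-c", "import time; time.sleep(5)"])
old = r.proc
r.run([sys.executable, "-c", "pass"])   # second build, 0 seconds later
print(old.returncode is not None)       # True — the stale process is gone
r.proc.wait()
```

Killing and reaping the old process up front is also what prevents two builds from writing to the same output files at once.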
The third issue (introduced somewhere in the middle of the change timeline from old to new) was that a process which exits cleanly without final output still triggers `append_string`, which, if it was a stale process, would then try to kill it. The `AsyncProcess.kill` call would throw an exception because `os.killpg` could not find the PID (the process had already exited cleanly). So now a process is only killed if we think it is still active.
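The "only kill if still active" guard corresponds to the `if self.poll():` check added in the first hunk. A self-contained sketch of the idea, using a hypothetical `safe_kill` helper on a plain `subprocess.Popen` (the real code signals the whole process group via `os.killpg` on POSIX, or `taskkill` on Windows):

```python
import subprocess
import sys

def safe_kill(proc):
    """Terminate proc only if it is still running.

    Signalling a process that has already exited raises
    ProcessLookupError (e.g. from os.kill/os.killpg on POSIX),
    so check poll() first and report whether anything was done.
    """
    if proc.poll() is None:   # None means the process is still alive
        proc.terminate()
        proc.wait()
        return True
    return False              # already exited; nothing to kill

p = subprocess.Popen([sys.executable, "-c", "pass"])
p.wait()              # let it exit cleanly, like a finished build
print(safe_kill(p))   # False — no exception, no kill attempt
```

Without the guard, killing a cleanly finished stale process would raise exactly the `os.killpg` error described above.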