Sublime Forum

How to avoid ST UI freezing while running several threads in serial?

#1

I got this snippet:

import threading
import time

count = 0


class Listener:

    def on_data(self, text, index):
        print("thread={} listener.on_data={}".format(index, text))

    def on_finished(self, index):
        print("thread={} listener.on_finished\n\n".format(index))


class AsyncProcess:

    def __init__(self, listener, index):
        self.listener = listener
        self.index = index
        self._thread = threading.Thread(target=self.run)
        self._thread.start()

    def run(self):
        for i in range(10):
            self.listener.on_data(
                'subprocess {} line {}'.format(self.index, i), self.index)
            time.sleep(0.5)
        self.listener.on_finished(self.index)

    def join(self):
        self._thread.join()


def UI_main_thread():
    global count
    for j in range(10):
        print('UI_main_thread', str(count))
        time.sleep(0.2)
        count += 1

for i in range(3):
    l = Listener()
    my_obj = AsyncProcess(l, i)
    UI_main_thread()
    my_obj.join()

print('done')

Running this in a Python console displays the output in real time, as expected. But if I run it on the ST main thread, it freezes the UI (e.g. when using UnitTesting)… I roughly understand why this happens… the problem is I don’t know how to fix it and get a responsive equivalent script where I can still see the output in real time.

When I say equivalent I mean getting exactly the same order/output of the printed data as in the attempt above.

Thanks in advance.


#2

I guess the UI freezes because of 2 facts:

  1. You call time.sleep(0.2) in UI_main_thread(), which puts the main thread to sleep in this context. Why do that?
  2. The method join() blocks until all processes/threads are finished.

Not sure what you intend to achieve, but you’d need to create another worker thread, which runs your snippet and can sleep safely without freezing the UI. This thread could then call a global on_finished() event handler once all processes have finished.
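A minimal sketch of that idea, using plain `threading` (the names `blocking_work`, `worker` and the `finished` event are mine, not from the snippet above): the whole serial pipeline moves into one worker thread, so `join()` and `time.sleep()` only ever block that worker, never the main (UI) thread.

```python
import threading
import time

finished = threading.Event()
outcome = []


def blocking_work():
    # Stands in for the original serial loop that spawns the
    # AsyncProcess instances and join()s them one after another.
    time.sleep(0.1)
    return "done"


def worker():
    # Everything blocking happens here; join()/sleep() only stall
    # this worker thread, never the UI (main) thread.
    outcome.append(blocking_work())
    finished.set()  # acts as the global "on_finished" notification


threading.Thread(target=worker).start()

# The main thread stays free to do UI work; for this demo we
# simply wait for the signal at the end.
finished.wait()
```

In Sublime Text specifically, the worker could hand results back to the UI via `sublime.set_timeout()` instead of printing directly, but the blocking pattern is the same.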


#3

Mmm, let me check whether I’ve understood correctly: right now I’m spawning the AsyncProcess threads from the main thread, which is their caller thread, so the moment I call join() this caller thread becomes blocked.

Now, when you talk about creating a “worker” thread, you mean I’d be spawning the AsyncProcess threads from this worker thread (which would then be the caller thread), right? So when I call join in the worker thread, the AsyncProcesses would only block THIS worker thread but not the UI thread. Have I understood correctly?

If so, I think the idea is awesome… and I think I’ve just had my “aha” moment about the join method :wink: . Hopefully it’ll work like this, so I won’t have to use semaphores or any other fancy concurrency technique to solve the problem… Btw, if you’re curious, the above snippet is an oversimplification of the original problem I’m trying to solve, which I posted a few days ago here.

In fact, related to all of this (this and the other thread mentioned before), if you read the ExecCommand code carefully you’ll realize the output of that command is not consistent or deterministic. Users can live with it without any problem, of course, but I’m not sure why they decided to code it that way.


#4

You understood me correctly.

Another solution, to avoid a “root” worker thread, would be to create a list containing a flag for each process to be finished. Each process sets its flag in the on_finished() handler and checks whether other processes are still pending. If not, the global on_finished() could be fired. But this would require omitting any time.sleep() in the main thread.
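The flag-per-process idea could be sketched roughly like this (a hypothetical illustration; `pending`, `global_on_finished` and the lock are my names, and each thread here stands in for one AsyncProcess reaching its on_finished() handler):

```python
import threading

NUM_PROCESSES = 3
pending = set(range(NUM_PROCESSES))   # one flag per process
lock = threading.Lock()
fired = []                            # records the global handler firing


def global_on_finished():
    fired.append(True)


def on_finished(index):
    # Each process clears its own flag under the lock; whichever
    # process happens to finish last fires the global handler,
    # exactly once.
    with lock:
        pending.discard(index)
        last = not pending
    if last:
        global_on_finished()


threads = [threading.Thread(target=on_finished, args=(i,))
           for i in range(NUM_PROCESSES)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The lock matters: without it, two processes finishing at nearly the same time could both see an empty set and fire the global handler twice.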

Why do you think it is not consistent? A process is started and two threads are created, which just forward the process’s stdout/stderr to the console in the order the messages are received. I can’t see any issue, apart from the fact that it would be better implemented via asyncio, if that were available, to avoid creating threads and instead pass the event handlers directly to the OS.

If you run several processes in parallel, this will of course result in mixed output from all of them. But that is simply the nature of running things in parallel. If you want ordered output, you must start each process one after the other, or buffer each process’s output first and print it to the console afterwards according to the desired rules.
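The buffering approach could look something like this sketch (hypothetical names; `run_process` simulates one process emitting lines): each worker writes only into its own buffer, and the buffers are flushed in a fixed order once everything has finished, so thread scheduling never shows up in the final output.

```python
import threading


def run_process(index, buffer):
    # Each simulated process appends only to its own private buffer,
    # so concurrent interleaving cannot corrupt the final ordering.
    for i in range(3):
        buffer.append('subprocess {} line {}'.format(index, i))


buffers = [[] for _ in range(3)]
threads = [threading.Thread(target=run_process, args=(i, buf))
           for i, buf in enumerate(buffers)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Flush the buffers in a deterministic order after all workers finish.
ordered = [line for buf in buffers for line in buf]
```

The trade-off is that nothing appears until a process is done, so you lose the real-time streaming of the original snippet.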


#5

Regarding what I said about the output not being consistent: it’s not just something I think, it’s something I’ve tested/experienced many times. In fact, you can check it quite easily. Pick a random build system and execute it many times through ExecCommand; you’ll see the output of the command won’t be the same 100% of the time (it’s non-deterministic). For instance, take a look at the code referencing the debug text, or the text printed in the on_finished method, and you’ll see what I mean.


#6

I’ve mainly used the build system to run some PowerShell scripts so far, whose output was deterministic in all situations, but I’d guess a build system running multiple threads to compile things in parallel can’t produce 100% identical output on a second run. The order will always change a bit. And if a build system writes messages to both stdout and stderr, those might not be forwarded in the correct order, since they are tracked by two threads racing against each other.
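That race is easy to reproduce with the same two-reader-threads pattern (a standalone sketch, not ExecCommand’s actual code): the child writes one line to stdout and one to stderr, and each stream is drained by its own thread, so the relative order of the two lines in the collected list is not guaranteed across runs.

```python
import subprocess
import sys
import threading

# Child process writes one line to stdout and one to stderr.
child = subprocess.Popen(
    [sys.executable, '-c',
     "import sys; sys.stdout.write('out\\n'); sys.stderr.write('err\\n')"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE)

lines = []
lock = threading.Lock()


def pump(stream):
    # Forward one stream line by line; two of these race against
    # each other, so cross-stream ordering is non-deterministic.
    for raw in stream:
        with lock:
            lines.append(raw.decode().strip())


readers = [threading.Thread(target=pump, args=(s,))
           for s in (child.stdout, child.stderr)]
for t in readers:
    t.start()
for t in readers:
    t.join()
child.wait()
```

Both lines always arrive, but whether `lines` reads `['out', 'err']` or `['err', 'out']` depends on thread scheduling.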
