Sublime Forum

Best practices to create a good plugin test suite

#1

I’m thinking about the best way to create a good set of unit tests. Let’s say I have a bunch of WindowCommand subclasses and I’d like to test them synchronously. I know run_command is synchronous… but the moment the plugins use quick_panel or input_panel they become asynchronous, and the problem here is I’m not sure how to guarantee a certain order {command1 > command2 > … > commandN}.

Any advice here so I can guarantee the tests will run in order? One partial solution, of course, would be unit tests covering only the synchronous code used by the plugins, but I wonder whether there is an option to run the commands directly, in order, instead.

Usually when I write PyQt apps I tend to test them using the QTest framework, something like this, so I was wondering whether it would be possible to have something similar in Sublime Text.

If not, I’d like to hear from you guys any good recommendations on how to properly test plugins.

Thanks in advance.


#2

For commands that take user input through things like input_panel, I prefer to write the command so that it can take the input as an argument, and will skip the user input if it is passed in (and thus stay synchronous).
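A minimal sketch of that pattern (the command and argument names here are hypothetical, and the `try/except` fallback is only there so the logic can also run outside Sublime Text, e.g. in a plain unit test): the command does its work immediately when the input is supplied as an argument, and only falls back to `show_input_panel` when it is not.

```python
# Sketch: a WindowCommand that accepts its input as an argument and
# skips the interactive input panel when it is provided.
try:
    import sublime_plugin
    Base = sublime_plugin.WindowCommand
except ImportError:  # running outside Sublime Text (e.g. in tests)
    class Base:
        def __init__(self, window=None):
            self.window = window

class RenameFileCommand(Base):
    def run(self, new_name=None):
        if new_name is not None:
            # Argument supplied: stay synchronous, no UI involved.
            return self._do_rename(new_name)
        # Interactive path: ask for the name, continue in the callback.
        self.window.show_input_panel(
            "New name:", "", self._do_rename, None, None)

    def _do_rename(self, new_name):
        # The actual work goes here; returning a value makes it easy
        # to assert on in a test.
        return "renamed to " + new_name
```

A test can then call `run` directly with the argument and never touch the panel.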


#3

That’s actually a good idea and I’ll take it. I’ve got a bunch of commands using just a single input_panel or quick_panel, and those can be parametrized as you’re saying; that way you can test them with arbitrary data in a non-interactive way.

So, what about commands using several of those in combination? For instance, I’ve got commands which are some sort of “wizard dialog”: they chain a few input_panels and quick_panels. Any suggestion for those?


#4

I have built a wizard-like command before. However, it is only a single WindowCommand. It runs in a recursive loop asking each question. If you pass in the answers as arguments, then it skips the interactive UI. So you can easily test a long sequence with just a single command.
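A sketch of that wizard shape, under the same assumptions as before (hypothetical names, and an `ImportError` fallback so the loop is runnable outside Sublime Text). Each answered panel re-enters the loop until every question is answered; passing all the answers up front skips the UI entirely:

```python
# Sketch: a single wizard-like WindowCommand that asks its questions
# in a recursive loop, skipping the UI when answers are passed in.
try:
    import sublime_plugin
    Base = sublime_plugin.WindowCommand
except ImportError:  # running outside Sublime Text (e.g. in tests)
    class Base:
        def __init__(self, window=None):
            self.window = window

QUESTIONS = ["project name", "language", "license"]  # hypothetical steps

class ProjectWizardCommand(Base):
    def run(self, answers=None):
        # Pre-supplied answers make the whole sequence synchronous.
        self.answers = list(answers) if answers else []
        return self._ask_next()

    def _ask_next(self):
        if len(self.answers) >= len(QUESTIONS):
            return self._finish()
        prompt = QUESTIONS[len(self.answers)]
        # Interactive path: the on_done callback re-enters the loop.
        self.window.show_input_panel(
            prompt + ":", "", self._on_answer, None, None)

    def _on_answer(self, text):
        self.answers.append(text)
        self._ask_next()

    def _finish(self):
        # The actual work goes here.
        return dict(zip(QUESTIONS, self.answers))
```

A test can drive the full sequence with one call: `ProjectWizardCommand().run(["x", "python", "mit"])`.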


#5

Interesting… so your strategy is quite clear: design the commands so you can feed the input data non-interactively, making them ‘testable’. I think I’m gonna give it a shot. The first idea that had come to my mind was having super thin commands built on a main API, with the unit tests covering just the API… but it felt like the test suite wouldn’t really be complete that way. I’ll give your way a shot, thanks!

Any other suggestion so far to come up with a good test-suite?


#6

I use the UnitTesting plugin (https://github.com/randy3k/UnitTesting) which I highly recommend if you are not already using it.
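UnitTesting runs standard `unittest` test cases from a package's `tests` directory inside Sublime Text. As a sketch, a minimal test module might look like this (the `build_new_name` helper and the `my_rename_file` command name are hypothetical stand-ins for your plugin's own code):

```python
# tests/test_commands.py (sketch for use with the UnitTesting package)
import unittest

def build_new_name(stem, ext):
    """Hypothetical pure helper that a command might delegate to."""
    return "%s.%s" % (stem, ext)

class TestHelpers(unittest.TestCase):
    def test_build_new_name(self):
        # Pure logic tests need no editor state at all.
        self.assertEqual(build_new_name("notes", "txt"), "notes.txt")

    def test_run_command_with_args(self):
        # Inside Sublime Text a test can also drive a command directly,
        # e.g.:
        #   import sublime
        #   sublime.active_window().run_command(
        #       "my_rename_file", {"new_name": "foo.txt"})
        # and then assert on the resulting state.
        pass
```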


#7

Outside of the scope of testing, this is also a good idea where practical, because it allows a user to create a menu entry, key binding or command palette entry that performs a frequently used action without being prompted for input.

For example I do that in OverrideAudit to allow you to generate a diff or report for specific packages without being interactively prompted.

Of course, depending on what the command is actually intended for, that may or may not be applicable in all cases.


#8

Yeah, that’s the one I’m using; it’s quite a useful one. Although I still have, as a pending task, to also try this one (https://github.com/codexns/package_coverage), used for instance here (https://github.com/codexns/newterm).


#9

Yeah, that’s a good point. In fact, I try to force myself to write super thin plugins. Behind the curtains they should use some sort of well-defined, testable API; if a plugin is meaty, the probability that it will have to be refactored is quite high :slight_smile:
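That split might be sketched like this (all names here are hypothetical, and the `try/except` guard only exists so the core stays importable outside Sublime Text): a pure-Python core that the unit tests exercise directly, plus a thin command that merely wires it to the view.

```python
# Sketch of the "thin command over a testable core" split.

def sort_unique(lines):
    """Pure core logic: trivially unit-testable, no editor needed."""
    return sorted(set(lines))

try:
    import sublime
    import sublime_plugin

    class SortUniqueLinesCommand(sublime_plugin.TextCommand):
        """Thin wrapper: only wires the core to the current view."""
        def run(self, edit):
            region = sublime.Region(0, self.view.size())
            lines = self.view.substr(region).splitlines()
            self.view.replace(edit, region, "\n".join(sort_unique(lines)))
except ImportError:
    pass  # outside Sublime Text only the core is importable
```

The tests then only need `sort_unique`; the command stays a few lines of glue.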
