28 Feb 2020

Determining if you are a low performer

How can you tell if you are a low performer?

I always prefer to compare myself against my prior self rather than against others. Thus, I would consider myself a low performer if my throughput is lower than its historical average. This may happen for many reasons: I might be learning something new, and thus spending a good chunk of my time learning rather than executing, or I might be trying out different ideas to find the best one because I'm working on something I've never worked on before.

It's generally easy for a programmer to tell whether he's been more or less productive than the prior week. The assessment is mostly based on feelings: you feel good when you are productive and less good when you're making no progress or running into issues.

If you think and feel that you are performing poorly, start recording what you are working on more thoroughly. Note when you start and finish each task, and when you get blocked, write down why. After a few weeks, review what you wrote and assess what might be causing you to feel like a low performer. Is it because you're working on tasks you are not good at? Is it a lack of motivation for the task you've been assigned?
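
A minimal sketch of what such a plain-text work log could look like (the format here is only a suggestion, not a prescribed one):

    2020-02-28 09:10 START   refactor data loader
    2020-02-28 10:45 BLOCKED refactor data loader - unsure which config format is canonical
    2020-02-28 11:30 DONE    refactor data loader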

With more information in hand to determine why you feel that you are a low performer, you will be able to devise a plan so that you can once again feel like a high performer.

27 Feb 2020

Identifying python files with no coverage

I use pytest with coverage and I want to see the files that have no coverage.
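
For reference, a typical invocation with the pytest-cov plugin looks like this (mypackage is a placeholder for your own top-level package):

    pip install pytest pytest-cov
    pytest --cov=mypackage --cov-report=term-missing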

It appears that pytest and pytest-cov will not list some of the files that are under namespace packages, while they work fine for files in regular packages (see PEP 420 on the topic of implicit namespace packages).

To fix this problem, one solution is to add __init__.py files to all of your directories so that they become regular packages.
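
A quick way to do this, assuming your code lives under src/ (adjust to your own layout), is a small script along these lines:

    from pathlib import Path

    # Turn every directory under src/ into a regular package by giving
    # it an (empty) __init__.py if it doesn't already have one.
    for path in Path("src").rglob("*"):
        if path.is_dir():
            (path / "__init__.py").touch()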

If you are using PyCharm Professional, you can simply run your tests with coverage. This will let you identify all the files that currently have no coverage, as they will appear with coverage = 0%.

27 Feb 2020

User demo walkthrough

What should be defined to make a user demo walkthrough successful?

You need to define what you want to learn from the demo walkthrough:

  • Where does the user ask questions?
  • Where does he get stuck?
  • What is easy/hard for him to do?
  • What does he think about as he goes through the demo?
  • What is/isn't working?
  • What frustrates him?
  • Where does the user want more guidance?

The user doing the walkthrough should be as close as possible to the ideal user, otherwise you may get feedback that is biased by their own experience. A user with too much knowledge compared to your target user will be able to do many things your target user may need help with, and may assume a lot of things simply because he already knows them. On the other hand, a user with too little knowledge will require help in many places where the target user is expected to be knowledgeable, which may make the demo walkthrough slower than desired.

The walkthrough should have a clear scenario. You may give the user only an initial setup and a desired goal and let them figure everything out by themselves. You may also take a more directed approach, where you tell them what to do and observe whether the instructions are clear enough to accomplish the steps. The first approach is interesting because it allows you to observe variability in how different users solve the same problem.

26 Feb 2020

Working on the wrong task

How can you tell when you're working on the wrong task?

You may be working on the wrong task because priorities have changed. To determine if that is the case, ask yourself whether completing the task provides value, either to you or your users. If the answer is no, drop the task. If the answer is yes, determine whether it is the most important task at the moment. If it is not, figure out which task is. If it is, proceed.

You may be working on the wrong task because you don't have the information necessary to complete it in an appropriate amount of time. If you find yourself spending most of your time gathering information instead of accomplishing the task itself, then it may not be the right time to do the task yet. You may have to create a prior task whose purpose is to acquire the knowledge necessary to execute the original one.

If you notice that your task has prerequisites that should have been completed first, then you should work on those instead of the task that depends on them. In some cases you may realize that you can't accomplish a task at all because the necessary tooling or technology is not available yet.

As I suggest in my article Given that you define a ROI on a task, when should you stop working on a task and abandon it given its cost?, you should estimate how long you expect a task to take. At the halfway point, evaluate whether you'll be able to complete the task by the estimated deadline. If you can't, you should either drop the task (if you can) or look for alternative ways to get it completed, such as asking a more experienced person for help or simplifying the task.

26 Feb 2020

Accelerate slow pytests

My pytests take a while to complete, how can I speed up the process?

A fairly cheap solution is to use parallelization to run your tests on multiple CPUs instead of the single CPU used by default. To do so, install the pytest-xdist plugin. Once it is installed, all you need to do is add -n auto when you call pytest.
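
A minimal sketch of the whole process:

    pip install pytest-xdist
    pytest -n auto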

Another approach, which requires more effort, is to investigate which of your tests consume the most CPU time. To do so, use the --durations=0 flag when you call pytest. After your tests have run, a report lists how long setting up, running and tearing down each test took, ordered from longest to shortest duration, so the tests with the most potential for optimization appear at the top. You should focus on these tests, because the longest one determines how long your suite would take even if you had an infinite number of CPU cores.
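
For example:

    pytest --durations=0       # report the duration of every test
    pytest --durations=10      # report only the ten slowest tests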

Investigate why certain tests take a while to execute:

  • Are some tests computing something that takes a while and is computed exactly the same way by multiple tests? Precompute this result once and share it between the tests (think of it as a fixture; see the sketch after this list).
  • Are calls made to a slow external API? If you are not testing whether the remote API has changed, store example responses and emulate receiving them (also sketched below).
  • Is there a loop in the test that runs hundreds of thousands of iterations when the same behavior could be tested with only a thousand?
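
As a minimal sketch of the first two ideas, here is a test file using a session-scoped fixture for a shared expensive computation and monkeypatch to emulate a slow API (fetch_user, expensive_result and EXAMPLE_RESPONSE are hypothetical names standing in for your own code):

    import sys
    import time

    import pytest


    def fetch_user(user_id):
        """Hypothetical stand-in for a slow external API call."""
        time.sleep(2)  # pretend this is a slow HTTP request
        return {"id": user_id, "name": "unknown"}


    @pytest.fixture(scope="session")
    def expensive_result():
        # With scope="session" this body runs only once, and the result
        # is shared by every test that requests the fixture.
        return sum(i * i for i in range(1_000_000))


    def test_result_is_positive(expensive_result):
        assert expensive_result > 0


    # A stored example response used to emulate the slow API.
    EXAMPLE_RESPONSE = {"id": 42, "name": "Alice"}


    def test_fetch_user(monkeypatch):
        # Replace the real (slow) call with the canned response for the
        # duration of this test only.
        monkeypatch.setattr(sys.modules[__name__], "fetch_user",
                            lambda user_id: EXAMPLE_RESPONSE)
        assert fetch_user(42)["name"] == "Alice"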