15 Feb 2020

# Healthy software company

History / Edit / PDF / EPUB / BIB / 2 min read (~271 words)

## Question

What are the signs of a healthy software company?

The following signs are defined assuming a web development software company.

• Projects are completed on time and under budget.
• Development is supported by continuous integration practices.
• Tests are written for the code developed.
• The early stages of development (requirements, design) are not rushed just to get to coding as soon as possible.
• Projects are put in production and monitored.
• Events requiring intervention in production are handled without excessive stress.
• Employees have time to share their knowledge of the codebase with one another.
• Code is reviewed before being merged into the master branch.
• Version control is used.
• Processes are documented, followed, and updated when necessary.
• Traceability is possible from clients' requests to their deployment in a live environment.
• Most of the system has been designed beforehand and only minor sections of the design need to be updated during the sprint iterations.
• Priorities have been established and are well documented.

14 Feb 2020

# Keeping a site up to date while highlighting changes


## Question

How can you keep a website up to date and yet have previous visitors recognize new content as fast as possible?

As a developer, the most straightforward answer to this problem is to use a tool such as diff. When I write articles for my blog, I use Visual Studio Code, which I have configured to save whenever the window loses focus (or when I switch editor tabs). On this save event, I also automatically create a git commit with a deliberately plain message, "Automated save from VS Code.". The point is not to have a fancy commit message, but to have a trace of when the changes were made. This allows me to offer my visitors the ability to view the history of changes made to an article.

The downside to this approach is that diffing an article between two versions through the GitHub web UI is not easy. It requires manually editing the URL to provide the base and latest SHA-1 hashes and then finding the article in the list of changed files, which makes the experience painful enough that hardly anyone will do it.

Given that the git repository is available on the server where the blog is hosted, I could run a git diff command using the last version seen by the visitor. This would allow me to present the changes made since their last visit. For instance, removed sentences would simply not be displayed, since they are likely irrelevant to the visitor, while new sentences would be highlighted in green.
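The additions-only view described above can be sketched as a small filter over git's unified diff output. This is a sketch, not my actual implementation: the `additions_only` helper is hypothetical, and the diff text itself would come from something like `git diff <last-seen-sha> HEAD -- article.md` run on the server.

```python
def additions_only(diff_text: str) -> list[str]:
    """Keep only the added lines from a unified diff.

    Removed lines (prefix '-') are dropped entirely, matching the idea
    that deletions are irrelevant to a returning visitor. The '+++'
    file header is excluded since it is not article content.
    """
    added = []
    for line in diff_text.splitlines():
        if line.startswith("+") and not line.startswith("+++"):
            added.append(line[1:])  # strip the leading '+' marker
    return added
```

The returned lines could then be wrapped in a green-highlighted element when rendering the article.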

14 Feb 2020

# Vocabulary gradient

## Problem

I write a lot of articles and I want them to be understood by most people. How do I use the most common language possible?

## Solution

My approach is to write whatever I want to write about, using whatever language comes to mind first. Then I use a tool I've developed, which I've called the vocabulary gradient. It is a very simple tool: you copy and paste the article you've written and look at the result of the analysis. The tool uses a word frequency list, as specified in its README.md. This list was built from the Project Gutenberg library, which makes it somewhat outdated.

The report generated by the tool presents the minimum, average, maximum, and standard deviation of the indexes of the words used in the provided text. These numbers give you a rough overview of the difficulty of your text based on word frequency alone: the lower the average and maximum, the simpler the article should be to understand. A histogram is also generated, with bins based on each word's index in the frequency list. Finally, the provided text is rendered with each word's index as a subscript. Unknown words are highlighted in yellow, while words with high indexes are shaded in progressively darker gray as their index increases.

With this information in hand, you can spot the words with high frequency indexes and try to replace them with lower-index (more common) words.
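The core of the report can be sketched in a few lines. This is a minimal reconstruction of the idea, not the actual tool: the `vocabulary_report` function and its tokenization are my own simplification, assuming a frequency list ordered from most to least common.

```python
import statistics

def vocabulary_report(text: str, freq_list: list[str]) -> dict:
    """Summarize how common the words in `text` are.

    `freq_list` is ordered from most to least frequent, so a word's
    index is a rough difficulty score (lower = more common).
    """
    index = {word: i for i, word in enumerate(freq_list)}
    # Naive tokenization: split on whitespace, strip punctuation, lowercase.
    words = [w.strip(".,!?;:\"'").lower() for w in text.split()]
    known = [index[w] for w in words if w in index]
    unknown = sorted({w for w in words if w not in index})
    return {
        "min": min(known),
        "avg": statistics.mean(known),
        "max": max(known),
        "stdev": statistics.pstdev(known),
        "unknown": unknown,  # words to highlight in yellow
    }
```

The per-word indexes computed here are also what would drive the subscript rendering and the gray shading.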

13 Feb 2020

# Data anonymizer


## Problem

I want my clients to share confidential data with me without revealing the exact values, so that I can train machine learning models on this data.

## Solution

I wrote a simple Python package that uses pandas and scikit-learn to apply simple transforms to the data. Some transforms change the distribution of the data, altering its statistical properties, while others preserve the distribution and simply rescale the domain.
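The two families of transforms can be illustrated with a stdlib-only sketch (the real package uses pandas and scikit-learn; the function names here are hypothetical):

```python
import random

def rescale(values: list[float]) -> list[float]:
    """Map values onto [0, 1] via min-max scaling.

    The shape of the distribution is preserved; only the domain changes,
    so the true magnitudes are hidden but statistics remain meaningful.
    """
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def add_noise(values: list[float], scale: float = 0.05, seed: int = 0) -> list[float]:
    """Perturb each value with uniform noise proportional to the range.

    Unlike rescaling, this *changes* the statistical properties of the
    column, trading some fidelity for stronger anonymization.
    """
    rng = random.Random(seed)
    spread = (max(values) - min(values)) * scale
    return [v + rng.uniform(-spread, spread) for v in values]
```

A column of salaries, for example, could be min-max rescaled so models can still learn from its relative ordering without the true amounts ever leaving the client.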

Given a dataset anonymized with this tool, it is possible to do a preliminary data audit and possibly train machine learning models on the data, giving clients a quick idea of whether their data looks promising without revealing the true numbers (unless desired).

The main concern with this approach is that most clients are not technical, so having them anonymize their own data is generally difficult, if not impossible. This means such a tool is currently not applicable in the desired context.

13 Feb 2020


## Question

How can a project be well executed through consensus instead of leadership?