21 Feb 2020

Ways to measure intelligence

What are the different ways to measure intelligence?

We can measure intelligence as:

  • the number of CPU cycles necessary to solve a problem. Less is better.
  • the amount of energy spent by a system to solve a problem. Less is better.
  • the time necessary to solve a problem. Less is better.
  • the space (memory, RAM, atoms) necessary to solve a problem. Less is better.

The benefit of all these measurements is that they are scalar, and thus totally ordered: each one induces a ranking among the agents that can successfully solve the problem.

Each of these values can be taken separately as one axis of intelligence. Taken together, they become a multi-dimensional measurement of intelligence in which there is no clear ordering between the different approaches. The agent with the "highest intelligence" is defined by the requirements of the environment: if energy sources are abundant in the environment, then using little energy is less important than using little time or space.
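
When several of these measurements are combined, agents can only be partially ordered: one agent is strictly better than another only if it is at least as good on every axis and strictly better on at least one (Pareto dominance). A minimal sketch in Python; the agent names and cost numbers are invented for illustration:

```python
# Each agent's cost profile: (cpu_cycles, energy, time, space); lower is better.
agents = {
    "A": (100, 5.0, 2.0, 64),
    "B": (120, 3.0, 2.0, 64),
    "C": (150, 6.0, 3.0, 128),
}

def dominates(x, y):
    """x Pareto-dominates y: no worse on every axis, better on at least one."""
    return all(a <= b for a, b in zip(x, y)) and any(a < b for a, b in zip(x, y))

# C is worse than A on every axis, so A dominates it;
# A and B trade off energy against CPU cycles, so neither dominates the other.
print(dominates(agents["A"], agents["C"]))  # True
print(dominates(agents["A"], agents["B"]))  # False
```

Incomparable agents such as A and B are exactly the case described above: which one counts as "more intelligent" depends on which resource the environment makes scarce.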

21 Feb 2020

Learning topics that are overwhelming

I want to learn a new topic but it feels overwhelming. How can I do it?

When I wanted to learn about artificial general intelligence (AGI), I knew almost nothing about the field. I had a background in software engineering and I was curious about intelligence as a whole, but I had no clue what I would need to know in order to work on artificial general intelligence.

My initial approach was to read a lot about the topic. It allowed me to learn about a variety of concepts and the vocabulary associated with AGI. However, as I was doing that, I felt overwhelmed by the amount of information I would have to learn. In 2014 I was reading Artificial Intelligence: A Modern Approach, then in 2017 the Deep Learning book. It was hard to see how the new concepts I was learning were connected to one another.

At the time I already knew of mind maps. I thought it would make sense to try and map out all the knowledge I had acquired so that I could make sense of it and be able to refer to it as I would work.

I initially built a very large concept map using yEd that included everything I could think of that touched on the topic of AGI: machine learning, mathematics, computer science, neuroscience, etc. As the map got larger and larger, I started thinking about how I would organize it so that it was easier to understand, and so that someone who wanted to learn from my experience could progress through it. Thus, I extracted the machine learning, computer science and mathematics concepts into their own respective concept maps.

The approach has been to spend time writing down what I knew, creating a node in the graph for each term, then trying to associate it with other terms already in the graph. As I learned new terms, I would add them to the graph. I would also dedicate a bit of time during the week to brain dump whatever I had learned or knew about that could be added to the graph. This is an iterative process, so the graph itself is never really completed. However, given this graph, it is now possible to know what I have already explored and the areas where I need to do more exploration because my understanding is lacking.
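
The process above amounts to maintaining a graph of terms and their associations. A minimal sketch in Python; the terms shown are placeholders, not the actual map:

```python
# Concept map as an adjacency list: each term maps to the terms it is linked to.
concept_map = {}

def add_term(term, related=()):
    """Add a term and link it to related terms already present in the map."""
    concept_map.setdefault(term, set())
    for other in related:
        if other in concept_map:  # only link to terms already in the graph
            concept_map[term].add(other)
            concept_map[other].add(term)

add_term("machine learning")
add_term("neural networks", related=["machine learning"])
add_term("backpropagation", related=["neural networks", "calculus"])  # "calculus" not yet added

# Sparsely connected terms are candidates for further exploration.
sparse = [t for t, links in concept_map.items() if len(links) < 2]
```

Terms that end up with few connections, like the `sparse` list here, point at the areas where understanding is lacking and more reading is needed.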

20 Feb 2020

Omnifocus "ofocus" format

I use Omnifocus but I'd like to have access to the underlying data and use it in another application. How can I do that?

The first step in solving this problem is figuring out whether we can access the underlying data easily. Sometimes we're lucky and the data is a single file in a format that we only need to manipulate a little. In other cases, like the ofocus format, the data is a directory containing zip archives. Unzipping some of those files, we can observe that there are two kinds: a master file and many transaction files. By playing around a bit with Omnifocus, we can make all the transaction files disappear, effectively merging them into the master file.
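
A quick way to survey such a directory is to list the archives and peek inside each one. A sketch in Python; the directory path is a placeholder, and the assumption that each archive holds XML documents is mine, not the documented format:

```python
import zipfile
from pathlib import Path

# Hypothetical path to an .ofocus directory; adjust to your own database.
ofocus_dir = Path("MyDatabase.ofocus")

for archive in sorted(ofocus_dir.glob("*.zip")):
    with zipfile.ZipFile(archive) as zf:
        # Print each archive's name and the files it contains.
        print(archive.name, "->", zf.namelist())
```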

The next step is to look at the content of the master file. It contains most of what we would expect to find when using Omnifocus, namely contexts, folders and tasks. By inspecting multiple entries of the same type, it is possible to collect the attributes that constitute them and to determine which are optional and which are always present.
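
That attribute survey can be automated by counting, for each entry type, how often each attribute appears. A sketch in Python; the inline XML sample and its tag and attribute names are illustrative inventions, not the real ofocus schema:

```python
from collections import Counter, defaultdict
import xml.etree.ElementTree as ET

# Inline sample standing in for the unzipped master file.
xml_doc = """<omnifocus>
  <task id="t1" parent="f1"><name>Write article</name></task>
  <task id="t2"><name>Review notes</name></task>
  <folder id="f1"><name>Writing</name></folder>
</omnifocus>"""

root = ET.fromstring(xml_doc)
seen = defaultdict(Counter)   # entry type -> attribute -> occurrence count
totals = Counter()            # entry type -> number of entries

for entry in root:
    totals[entry.tag] += 1
    for attr in entry.attrib:
        seen[entry.tag][attr] += 1

for tag, counts in seen.items():
    for attr, n in counts.items():
        status = "required" if n == totals[tag] else "optional"
        print(f"{tag}.{attr}: {status}")
```

An attribute that appears on every entry of a type is likely required; one that appears only sometimes is likely optional.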

When I tried to reverse-engineer the format of ofocus files, I used my own database, which contained many entries. This allowed me to quickly find most of the attributes of each type of entry. Another approach would have been to start with a clean database, create each type of entry, and create two variants: one with all the attributes defined, the other with the fewest attributes necessary.

Once the structure of those entries is identified, it is not too difficult to determine where certain values come from. In some cases, values are references to other entries, similar to foreign keys in a database.

With all this knowledge available to us, it is easy to convert the XML file into a JSON file. Once the data is available as JSON, it is slightly easier to work with in PHP than XML. By understanding the structure and relationships within the data, it is possible to build your own application that reproduces the hierarchy of folders and tasks displayed inside Omnifocus.
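
As a rough illustration of the conversion step (in Python rather than the PHP the author used, and with an invented task element rather than the real ofocus schema):

```python
import json
import xml.etree.ElementTree as ET

def element_to_dict(elem):
    """Recursively convert an XML element into a JSON-friendly dict."""
    node = dict(elem.attrib)          # attributes become plain keys
    for child in elem:
        node.setdefault(child.tag, []).append(element_to_dict(child))
    text = (elem.text or "").strip()
    if text:
        node["text"] = text
    return node

# Hypothetical entry; note "parent" acting like a foreign key to a folder.
xml_doc = '<task id="t1" parent="f1"><name>Write article</name></task>'
data = element_to_dict(ET.fromstring(xml_doc))
print(json.dumps(data, indent=2))
```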

20 Feb 2020

Writing one article per day

What are the benefits of writing one article per day?

The main benefit I get from writing at least one article per day is that it forces me to spend time thinking about a single question and to write down and express my thoughts clearly.

It allows me to practice my writing abilities, even though I use them daily to write down notes. It forces me to ask myself whether my grammar is correct and whether what I wrote makes sense, which is something I care a lot less about when I write down notes.

It lets me get into the habit of writing daily, so that writing and communicating doesn't feel like it requires a lot of energy.

I see writing one article per day as a habit to acquire in order to write more effectively and communicate better. It is like exercising regularly: the purpose is to keep the muscle of writing alive and to strengthen it slowly over time. By writing a lot we slowly acquire a voice that we can work to change and improve. Just as processes that are made explicit can be improved, communication can be improved by spending time writing down what you want to share with others.

19 Feb 2020

PHP semantic versioning checker

I use semantic versioning 2.0.0 for my PHP libraries and I'd like to know when I introduce breaking changes to my code. I would like a tool that tells me as soon as that happens, so I can avoid creating backward-incompatible changes.

In 2015 I was working on a lot of PHP code, releasing various libraries, and using many libraries that did not always respect the semantic versioning 2.0.0 rules. I understood that it was difficult to keep track of all the changes made to a codebase, and that you needed a certain level of expertise to tell what semantic versioning impact a change had. I knew, however, that most of the semantic versioning rules could be codified, such that a program could look at before and after snapshots of a piece of code and tell you how it changed. Those changes could then be categorized according to what they changed and, based on the semantics of semantic versioning, a version change could be suggested for the code change itself.

Thus PHP Semantic Versioning Checker was born. It analyzes a before and after snapshot of a directory of source code and generates a report of all the changes that occurred. It is then possible to review all the changes by their type (class, function, method, trait, interface) and their semantic impact (major, minor, patch, none). It also computes a suggested version change, which is the highest semantic impact found among the different types inspected.
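
The core idea, codified rules plus a suggested bump equal to the highest impact found, can be sketched as follows (in Python for brevity rather than the tool's PHP, and with a simplified, invented rule set):

```python
# Rank semver impacts so the suggested bump is simply the maximum.
IMPACTS = {"none": 0, "patch": 1, "minor": 2, "major": 3}

# Simplified rules mapping a detected change kind to its semver impact.
RULES = {
    "public_method_removed": "major",
    "public_method_signature_changed": "major",
    "public_method_added": "minor",
    "private_method_changed": "patch",
}

def suggested_bump(changes):
    """Given detected change kinds, return the highest semver impact."""
    impacts = [RULES[c] for c in changes]
    return max(impacts, key=IMPACTS.get, default="none")

print(suggested_bump(["public_method_added", "private_method_changed"]))  # minor
```

The real tool detects such change kinds by diffing parsed snapshots of the code; the principle is the same: each mechanical rule carries an impact, and the release only needs to bump the largest one.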

I hope that this library and the concept of scanning source code for semantic versioning changes becomes more mainstream, such that every project runs this kind of tool as part of their CI pipeline.