19 May 2018

Task duration estimation


  • If you are building a product, it is your responsibility to define how much time you're willing to bet/spend on a feature; this is called the appetite
  • Unlike a task duration estimate, an appetite is a fixed amount of time set in order to define the boundaries of what the feature to develop should be, whereas an estimate means that the task should be completed and is expected to take X amount of time (variable)
  • The appetite helps set the amount of time after which a task is deemed to lack a proper scope

  • Record initial task duration estimate
  • Record task duration after its completion

  • Make a high-level, gut feeling estimate of the duration of the whole project
  • Lowest (fastest, optimistic), average (most likely), and highest (slowest, pessimistic) time duration estimates
    • A final value is computed by taking (lowest + 4*average + highest)/6 (see the sketch after this list)
    • List the reasons why you think things may go right/wrong
  • Heuristic estimation
    • At a high level, you should be able to tell how much of the complete project each portion should take
  • Use historical data
    • If you have underestimated in the past, you will keep underestimating if you do not adjust
  • Break down any item that is larger than 10% of the project into smaller items
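
To make the three-point formula and the historical-data adjustment above concrete, here is a minimal sketch in Python. The function names and the adjustment ratio are illustrative assumptions on my part, not part of the original notes.

def three_point_estimate(optimistic, most_likely, pessimistic):
    """PERT-style weighted estimate: (O + 4M + P) / 6."""
    return (optimistic + 4 * most_likely + pessimistic) / 6

def adjusted_estimate(estimate, past_estimates, past_actuals):
    """Scale a new estimate by the historical ratio of actual to estimated
    durations, so that past underestimation is corrected going forward."""
    if not past_estimates:
        return estimate
    return estimate * (sum(past_actuals) / sum(past_estimates))

# Example: a task estimated at 2h (optimistic), 4h (most likely), 9h (pessimistic)
raw = three_point_estimate(2, 4, 9)                    # 4.5 hours
final = adjusted_estimate(raw, [3, 5, 8], [4, 7, 10])  # scaled by 21/16, about 5.9 hours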

  • Duration will depend on the skills of the person completing the task. An expert will generally take less time than a novice to accomplish the same task

  • Being interrupted while working (difficulty tracking beginning and ending)
  • Working on unrelated things at the same time (diluted productivity)
  • Working on multiple tasks at the same time (task overlapping)
  • Determining what is and what isn't part of a task (task isolation)
  • Changes in task definition
  • Estimating tasks for which we have no experience

  • Easy to start/stop tracking a task (less than 15 seconds, otherwise it feels like an interruption)

18 May 2018

Workstack

note taking

The workstack is a very simple idea I had while working. It is based on the concept of a stack, as the name clearly implies. As you work, you, like a computer, process things one at a time; as new things need to be done, you either throw them into a todo list (a queue) or start doing them right away (you stack them).

The workstack is a way to record notes about what you work on. As you work on a task, you either work on it to completion or are interrupted by the necessity of working on another task. In the first case, tasks are simply written one after the other with their start and end times. In the second case, items are also indented, such that it is possible to observe when a task forced you to "switch context".

An example of this note taking format is as follows.


2018-05-18
Task 1 10:00-10:30
Task 2 10:35-10:50
Task 3 11:00-...
    Task 4 11:05-11:15
    Task 6 11:17-...
        Task 7 11:20-...
Task 5 (not begun)

In this case, the person started by working on tasks 1 and 2, then began working on task 3. As he began his work, he noticed that something else was necessary, which spawned task 4. While he was working on task 4, he observed something that could be done, but didn't have to be done right away, which spawned task 5. As he completed task 4, he returned to task 3, but noticed that something else also had to be done, which spawned task 6. During task 6, something else interrupted him, which forced him to work on task 7; this could have been a coworker asking him for help on something. Task 5 could be a coworker asking for help as soon as he is available, but not wanting to interrupt him.
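
A minimal sketch, in Python, of how such a stack-based tracker could work; the class and method names are illustrative assumptions, not an existing tool.

from datetime import datetime

class Workstack:
    # Tasks in progress live on a stack: an interrupting task is pushed on
    # top, and completing it pops it and returns you to the task below.
    def __init__(self):
        self.stack = []  # tasks currently in progress (innermost last)
        self.log = []    # every task ever started, in start order

    def push(self, name):
        entry = {"name": name, "start": datetime.now(), "end": None,
                 "depth": len(self.stack)}
        self.stack.append(entry)
        self.log.append(entry)

    def pop(self):
        # Close the task at the top of the stack and record its end time.
        entry = self.stack.pop()
        entry["end"] = datetime.now()

    def render(self):
        # Print the indented begin/end listing shown in the example above.
        for e in self.log:
            end = e["end"].strftime("%H:%M") if e["end"] else "..."
            print("    " * e["depth"] + f'{e["name"]} {e["start"].strftime("%H:%M")}-{end}')

Pushing "Task 3", then "Task 4", popping it, then pushing "Task 6" and "Task 7" before calling render() reproduces the indented listing above.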

Conceptually, you would want to always complete a stack of operations before moving to a new task. However, it is quite common in programming that a programmer will start going down such a stack while working on code and then never climb back up it, effectively not completing everything he started working on.

This format thus allows a programmer (or anyone working on tasks that can spawn other tasks) to better track what they were doing and what they did and did not complete.

17 Feb 2018

Reusable writing


I'm a programmer. I like to write code and whenever I can, I like to reuse bits of code I've written in the past instead of writing it from scratch.

Lately I've been spending a lot of my time thinking. However, unlike programming, I feel that my writing is repetitive and that I sometimes don't make any progress with my ideas. Oh how I wish writing were more like programming.

There are many ways one can write. You have the top-down approach, where you first lay out a plan of the topics you want to write about. You also have the bottom-up approach, where you write things as they come to mind, sometimes rearranging the content. Generally, when I write, it is to dump what is on my mind about a certain topic, so it is very much like the bottom-up approach. However, the thing I dislike about it is that you can end up with subtly similar sections of content without knowing it. You actually need to go through all of your writing to discover these bits of duplicate writing. That's a lot like writing code and sometimes finding out you have functions/methods that do almost the same thing...

In programming, when you realize that two bits of code are similar, you refactor your code to factor one of them out in order to reduce duplication. When you write, it's not as simple. First off, in code you will generally have encapsulation, which provides you with the means of limiting the scope of the duplicate logic. With text, your context (encapsulation) often comes from the text prior to it.
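
To make the comparison concrete, this is the kind of refactoring the paragraph above refers to in code; the snippets are a generic illustration, not taken from any particular codebase.

# Before: two near-duplicate functions
def report_daily(entries):
    total = sum(e.duration for e in entries)
    return f"Daily total: {total} min"

def report_weekly(entries):
    total = sum(e.duration for e in entries)
    return f"Weekly total: {total} min"

# After: the shared logic is extracted once and reused
def total_minutes(entries):
    return sum(e.duration for e in entries)

def report(label, entries):
    return f"{label} total: {total_minutes(entries)} min"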

One option to deal with this problem might be to extract ideas and then rewrite them in order to make them independent of their previous context. This will help make it possible to read any part of an article without having read what was said previously. However, it will also have the adverse effect of making things much more difficult to follow, as there will appear to be no connection between the different sections.

Maybe a better approach to the problem is to think of how one makes code more reusable. One of the biggest issues with code is often that multiple levels of abstractions are used within the same method, which makes reasoning about the method more difficult.

  • Should one limit the size of an idea, in the same fashion one tries to limit the number of lines or statements in a function?

10 Feb 2018

Brain organization

productivity

  • How can one track progress? Is there progress without explicitly specified goals?

  • Dump everything in my brain in text documents
  • Schedule things that need to be scheduled/reminded
    • Decompose tasks and define time/effort estimate

  • Write everything down in markdown documents
  • Write thoughts in Google Keep while on the move, then merge them in existing documents
  • Attempt to manually aggregate content

  • No time tracking/evolution of articles
  • Manual aggregation is difficult and time consuming
  • Most of what is written is never reviewed and potentially forgotten
    • Content is not consumed/reused
    • Most of the time, the point is to make explicit what is in the brain, so that it can also be referred to later on in a reliable format
  • Notes I leave myself aren't tagged

  • Get feedback on what I wrote that is consumable by others

15 Dec 2017

AI/AGI/ML - A short overview

ai agi ml

After having spent about 200 workdays of my life getting familiar with the fields of AI/AGI/ML, I think it is a good moment for me to reflect on my learning experience.

Of the three domains, AGI is the least well defined. It has to do with general intelligence, but our definition of intelligence is still up for debate. Furthermore, the different known approaches in the field have different opinions regarding what sort of agent an AGI system should be. Should it be a self-contained entity with its own goals and purpose? Or should it be a system that is exhibiting human capabilities while being the servant of a human being or a group of human beings?

Next is AI, which encompasses many different types of approaches to try and solve high-level goals such as deciding, planning, organizing, understanding, etc. AI is about the discovery of strategies to solve problems without having to provide the whole solution. I think it is also where goals for the ML field have been defined.

Finally, ML is about using statistical approaches to solve problems. These problems have to be formalized and their input/output specified in the shape of data, be it text, audio, video, or simply vectors of numbers. Most of the recent and exciting development that has happened in ML is due to deep learning, the ability to build deep neural networks that can learn to accomplish certain tasks without any explicit logic being written. This logic is discovered by the network itself, with some help from the developer, who defines a composition of modules that process the data in order to create associations between the inputs and the outputs.

When I initially started my "adventure", I was interested in AGI. I wanted to know how a machine would be able to exhibit human behavior without having to tell it too much about it. I wanted to learn how you could teach a machine as you would teach a child. The AGI field itself does not currently have any curriculum as to what you should learn, nor a reference book you can refer yourself or others to (when asked "What should I first read if I want to learn AGI?"). One can be interested in anthropomorphic aspects such as memory, intelligence, perception/senses, processing and so on, while others may be more interested in a computation-oriented approach, such as the algorithms, the implementations, the computational complexity of diverse approaches and so on. I think this makes AGI a very interesting topic/field; however, it lacks the definiteness of the other fields. One can see this by reading the various iterations of the book "Artificial General Intelligence", a collection of articles by various researchers that is generally published every year in relation to its conference.

When I started learning about AGI, I decided I would explore various avenues and determine which approaches best matched my own interests and looked the most promising. I looked at different areas such as cognitive science, set theory, logic, universal artificial intelligence, biology, and philosophy, amongst others.

Along the road, AGI research has challenged many of my beliefs as well as how I see life. For those two reasons alone, I'm grateful I have been able to dedicate some of my time to it.