26 Aug 2024

Tools

work

Tools I use on a regular basis at work.

  • Chrome
  • Copilot
  • Cursor
  • Docker
  • Git
  • GitHub
  • Google suite (Gmail, Docs, Meet, Sheets, Slides)
  • IntelliJ IDEA
  • LLMs (ChatGPT, Claude, Poe)
  • iTerm2
  • JetBrains Toolbox
  • Notion.so
  • OpenLens
  • Podman
  • Postman
  • PyCharm
  • Rectangle
  • Slack
  • Stats
  • Visual Studio Code
23 Aug 2024

Calendar reminders

work

Daily

  • 15m to document
  • 1h lunch

Weekly

  • 30m 1:1 with coworkers I work with on a daily basis
  • 30m 1:1 with my manager
  • 2× 1h for continuous learning
08 Jul 2024

Weekly readings - 2024-06-30

weekly-readings

Gradually, then Suddenly: Upon the Threshold
I've recently started playing with Claude 3.5 Artifacts for fun, prototyping an idea I had. While I struggled to get it to do something "fairly" simple, I was impressed by how much easier prototyping was compared to doing it all myself. I expect this to keep improving, reducing the time spent setting up a development environment and giving more immediate results.

[...] I suggest that people and organizations keep an "impossibility list" - things that their experiments have shown that AI can definitely not do today but which it can almost do.

Innovation through prompting
Pretty exciting ideas on how to use LLMs to enable more dynamic teaching, even though it might not be perfect.

Algorithmic progress in language models

[LLM] Models require 2× less compute roughly every eight months

That is three times faster than Moore's law (a doubling every 24 months). As with Moore's law, the big question is when we'll hit a plateau on these performance improvements.
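To make the comparison concrete, here is a rough back-of-the-envelope sketch in Python (my own framing, not from the paper): modeling both trends as clean exponentials, an 8-month halving period compounds to an 8× reduction in required compute over 24 months, versus 2× for Moore's law.

    # Rough comparison of the two improvement rates, assuming clean exponentials.
    def compute_reduction(months: float, halving_period: float) -> float:
        """Factor by which required compute shrinks after `months`."""
        return 2 ** (months / halving_period)

    horizon = 24  # months
    algorithmic = compute_reduction(horizon, 8)   # 2**3 = 8x less compute
    moores_law = compute_reduction(horizon, 24)   # 2**1 = 2x less compute
    print(f"Over {horizon} months: {algorithmic:.0f}x vs {moores_law:.0f}x")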

How to Make Yourself Into a Learning Machine
A lot of things I've found myself doing over the years: reading many books each year, keeping quotes (highlights) and notes from what I read, using Anki to learn and remember languages and concepts/ideas, keeping a Zettelkasten (really, just the habit of writing down any thought as a digital note), etc. Definitely a recommended read if you're into personal information management.

Writing one sentence per line
A good way to make your writing clear and to the point. I also suggest using simple words instead of fancy ones that people don't often use.

28 Jun 2024

Weekly readings - 2024-06-23

weekly-readings

How to... use ChatGPT to boost your writing
Tips from the article:

  • More elaborate and specific prompts work better.
  • You can ask the AI to use specific styles for writing.

Tips from me:

  • Use it to review your syntax, grammar, clarity, tone, and biases, and to identify convoluted or overly long sentences (a minimal sketch of this is shown after this list).
  • Use it to generate alternative sentences when you don't like how yours reads.
  • Ask it to give you feedback on what you have written so far and what gaps remain.
  • Ask it to produce an article with the opposite viewpoint.
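As a minimal sketch of the review tip above, assuming the openai Python package and an API key in the environment (the model name is a placeholder, not a recommendation):

    # Minimal sketch: ask an LLM to review a draft for grammar, clarity, tone, etc.
    # Assumes the `openai` package is installed and OPENAI_API_KEY is set.
    from openai import OpenAI

    REVIEW_PROMPT = (
        "Review the text below for syntax, grammar, clarity, tone, and bias. "
        "Point out convoluted or overly long sentences and suggest rewrites.\n\n"
        "Text:\n{draft}"
    )

    def review(draft: str) -> str:
        client = OpenAI()
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[{"role": "user", "content": REVIEW_PROMPT.format(draft=draft)}],
        )
        return response.choices[0].message.content

    print(review("Their going to refactor the servise, which have many issues."))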

Working with AI: Two paths to prompting

Structured Prompting is about turning the AI into a tool that does a single task well in a way that is repeatable and adapts to its user.

Structured prompts are very powerful. Once you start using an LLM regularly, you'll frequently make the same types of requests, which naturally leads you to collect those prompts so you can copy/paste them and adapt them to each new use case. I think being able to share your structured prompts, edit them easily, and observe how others use them helps you improve them. I've personally found that reading other people's prompts broadened my capabilities and the breadth of my thinking.
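One lightweight way to keep such a collection is as templates with named slots. A minimal sketch, where the prompt wording and slot names are my own examples rather than anything from the article:

    # A tiny, shareable collection of structured prompts with fill-in-the-blank slots.
    # The prompt texts and slot names below are illustrative only.
    from string import Template

    PROMPTS = {
        "code-review": Template(
            "Act as a senior $language developer. Review the following code for "
            "bugs, readability, and naming.\nCode:\n$code"
        ),
        "explain-concept": Template(
            "Explain how $concept works to someone with a background in "
            "$background. Use one concrete example."
        ),
    }

    # Reuse the same prompt across use cases by filling in the blanks.
    print(PROMPTS["explain-concept"].substitute(concept="B-trees", background="Python"))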

LLM prompting guide
Tips:

  • When choosing the model to work with, the latest and most capable models are likely to perform better.
  • Start with a simple and short prompt, and iterate from there.
  • Put the instructions at the beginning of the prompt, or at the very end. When working with a large context, models apply various optimizations to keep attention complexity from scaling quadratically, which may make them more attentive to the beginning or end of a prompt than to the middle.
  • Clearly separate instructions from the text they apply to.
  • Be specific and descriptive about the task and the desired outcome - its format, length, style, language, etc.
  • Avoid ambiguous descriptions and instructions.
  • Favor instructions that say “what to do” instead of those that say “what not to do”.
  • “Lead” the output in the right direction by writing the first word (or even begin the first sentence for the model).
  • Use advanced techniques like few-shot prompting and chain-of-thought prompting (a prompt combining several of these tips is sketched after this list).
  • Test your prompts with different models to assess their robustness.
  • Version and track the performance of your prompts.
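To make a few of these tips concrete, here is one way a prompt might combine them: instructions first, a clear separator between instructions and input, an explicit output format, a single few-shot example, and a led first word. The classification task and labels are invented for illustration.

    # An illustrative prompt applying several of the tips above; the task is made up.
    prompt = """\
    Classify the sentiment of the review as Positive, Negative, or Neutral.
    Answer with the label only.

    Example:
    Review: "The battery lasts two full days."
    Sentiment: Positive

    ###
    Review: "The screen scratched after a week."
    Sentiment:"""

    print(prompt)  # send this string to whichever model you are evaluating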

Cognitive Load is what matters
Interesting way to discuss cognitive load when reading code.

No, you don't owe me a favor

If I take the time to do something for you, it’s not because I’m a matcher looking for something in return. It’s because I aspire to be a giver—I enjoy being helpful. My effort to support you means that I think highly of you and might even care about you. When you say you owe me, it reduces my investment in you to an accounting transaction.

Something that resonated with me quite a lot. When I do things for others, it's not because I expect things in return. Maybe the only thing I hope is that you acknowledge and possibly appreciate the help, but I don't expect reciprocation.

21 Jun 2024

Weekly readings - 2024-06-16

weekly-readings

I read more articles from https://www.oneusefulthing.org this week.

How to... use AI to teach some of the hardest skills
A very insightful article on using LLMs to teach students... or yourself. Based on this article, I started learning about sociology terms and electronics, and tried to have it role-play a senior backend software engineer I could practice mentoring (and get feedback on my mentoring from). I also added the prompt "Explain how X works" to my prompt collection. I love articles that expand my thinking and exploration.

Prompt to learn about a domain through question/review cycles:
Act as an expert in X. Ask me to explain a concept and then correct me if I'm wrong. Then restart the process, continuing endlessly.

How to... use AI to unstick yourself
I've been using LLMs a lot to get a quick sanity check on my thoughts and to see what I might not have considered. I think LLMs are a rather useful tool for staying motivated when you feel a bit stuck or don't particularly want to work on a piece of code. It's like having a peer who's always willing to help.

Thinking companion, companion for thinking
Two heads are generally better than one. LLMs can be your second head when you need to think about what might go wrong or to address gaps in your thinking.
You should also learn about opportunity cost and sunk cost!

ChatGPT is my co-founder
One of LLMs' strengths is their ability to always be somewhat helpful. One helpful thing they do is lower the barrier to doing anything, as long as you know how to ask for help. When I code, this mostly means giving me a small push to finish a task I would otherwise only partially complete. When I write, it's a great tool to stimulate creativity and get feedback I can act on.

Superhuman: What can AI do in 30 minutes?
More and more, how you decide to spend your time will determine how effective (or not) you are. In this article the author spends 30 minutes accomplishing the following with the help of generative AI:

Output: Bing generated 9,200 words or so of text and a couple images, GPT-4 generated a working HTML and CSS file, MidJourney created 12 images, ElevenLabs created a voicefile, and DiD created a movie.

Input: I made less than 20 inputs to all the systems to generate these results.

Assuming there were exactly 20 interactions, that works out to roughly 1.5 minutes per interaction. Over a 30-minute period, most of the time is likely spent reviewing the generated content, deciding the next move, and writing prompts. A time breakdown would have been interesting.