- Don’t anthropomorphize when talking with other humans about AI systems. That is, don’t use the same words you use to talk about humans or other living things: mind, intelligence, consciousness, desire, intention. This will lead to confusion.
- Don’t settle on a mental model for how these systems work until you have experience interacting with them. Pretend they were designed by aliens to interact with us; we have about as much knowledge of how they work as we would if an alien civilization had designed and sent them.
- Their behavior is going to change. The company training the model may release an update that leads to behavior contradicting what you saw in the previous version.
- Experts and non-experts have been equally bad at predicting progress in AI. There is no good general theory of how deep neural networks work, so nobody, not even Sam Altman, is in a position to make if-then statements we should believe, such as: “if we double the computing power, then a large language model’s performance on the LSAT will increase by 10%.”
- Get to know AI systems and learn 1) how they can make you better or faster at tasks and 2) how they can slow you down or degrade your experience.
We are in for a roller coaster over the next few years. If nothing else, these systems are going to shake up the job market. Unless you want a job designing or training AI models, don’t worry about what a deep neural network is; focus on how to successfully leverage the AI models available through companies or open source projects to give yourself new skills or enhance the skills you have.