My default approach to any new technology has always been skepticism. Working in tech makes this a difficult mindset to hold. We're bombarded with new concepts and technologies daily, and we have to decide what to adopt and what to leave behind. When ChatGPT launched in November 2022, I had the same thought I always have about new technology: "Oh good! Time to find something I don't like about this."
The way these tools are marketed does them a disservice. Like any tool, there are good ways and bad ways to use them. After experimenting with them personally and professionally over the last year, I've started to articulate my own principles for using AI. When will I use it, and perhaps more importantly, when won't I?
These are my working principles. The general flow goes something like this:
Literacy → Solve → Collaborate → Verify
Literacy over reliance
My initial skepticism led me to want to understand these tools before using them. There are real ethical and moral questions you have to ask yourself when engaging with a tool like AI. From a moral standpoint, do I really want to spend valuable energy resources shipping a code change a little bit faster? On one hand, no. On the other, the energy cost of a single prompt has been widely overestimated; Epoch AI, for example, has estimated a typical ChatGPT query at roughly 0.3 watt-hours, well below the figures commonly cited.
As for the ethical side, organizations are often setting productivity goals around adopting these tools: move this much faster, do that this many more times. Research has shown that measuring productivity this way is a relatively fruitless pursuit; output does not always equal impact. AI is changing the way everyone works, and it became embedded so quickly that it will be nearly impossible to turn back. Still, I believe the scale at which we interact with it will trend down in the coming years as access to these tools becomes more expensive. We're in a pricing anomaly right now, and when that shifts, so too will our individual and collective reliance on these tools.
My operating principle here is to stay current on how these tools change. I'll continue to explore them cautiously, finding the right way to invite them into my workflow, and I keep an eye on publications like Every and Epoch to help me find my way.
Solve before prompting
Similar to the first principle, I will not start a new workflow with AI until I have solved the problem myself. I solve the problem as well as I can on my own, and only then do I invite AI into my space. If I don't have a clear path toward a solution, I can't verify what a prompt gives me back.
Collaboration over entertainment
These tools are meant to be collaborators, not producers. Right now they're marketed as producers, and I believe this is where much of the general concern around AI comes from. Treat AI as a collaborator, not an image generator. It will strengthen your thinking, but only if you've done your own thinking first.
Along these lines, I also won't let AI collaborate for me, meaning I will never pass instructions on to someone else based solely on what AI has told me to do. Maybe this is related to morals, maybe not, but if someone doesn't want to interact with AI, don't make them.
Only prompt what I can verify
This is the principle I deploy the most. If I can't independently verify the output, it is useless to me. It may work, but if I can't explain it, it has little functional value to me.
At first, I operated with extreme skepticism toward AI, spending most of my time learning from the outside rather than tinkering. As I learned more, these principles took shape in the back of my mind, helping me act on my knowledge without being guided by a vision based purely on productivity. Leaning on these four operating principles has allowed me to be crystal clear about my intent when using AI as a collaborator. I can get in and get out quickly, back to the work that I enjoy.