While the definition of AI agents is debatable and plenty of "workflow automations" are being mislabeled as "agents," there's no doubt that we're starting to solve a lot of our day-to-day problems with some variation of an "AI agent." First, a quick definition: what do I mean when I say agent? This will change a lot as the technology develops, but for right now, when I say "AI agent" I mean any service or framework that uses one or more LLMs to call an outside function to execute an action. I remember a big aha moment last year when I was reading the OpenAI docs and landed on their function calling API. My attitude towards LLMs jumped from "novel chatbot" to "this is going to start acting in our world in ways few are thinking about right now."
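To make that concrete, here's a minimal sketch of what function calling looks like with the OpenAI Python SDK. The `get_weather` tool, its parameters, and the model name are placeholders for illustration, not a real weather integration:

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Describe a function the model is *allowed* to request -- it never runs code itself.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # placeholder tool for illustration
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. 'Berlin'"},
            },
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any tool-capable model works here
    messages=[{"role": "user", "content": "What's the weather in Berlin right now?"}],
    tools=tools,
)

# The model replies with a function name and JSON arguments;
# your code decides whether and how to actually execute the call.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, json.loads(call.function.arguments))
```

The "agent" part is the loop your own code wraps around this exchange: executing the requested function, feeding the result back to the model, and repeating until the task is done.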
Lots of pieces have come together to make this all possible, with LLMs providing the last significant brick in the super cool sci-fi AI future. Although there's an increasing number of AI agent frameworks available, most of the agent discussion still feels more like talking about doing things than actually doing them. I read about new agent frameworks daily, but far less about interesting uses of agents in the real world.
So, let's change that: I'm challenging myself (and you!) to write less about agents in the abstract and more about agents solving a problem you have. The world needs theory and discussion about what agents mean and how to use them ethically and effectively, but I'm a believer that sometimes the best way to form an opinion on a technology is to get into the dirt and try it out for yourself.
I posted about this on LinkedIn, so feel free to jump into the conversation there or send me a message on Bluesky.