April 2025 AI checkpoint
About nine years ago, I was somewhat (in)consistent about posting updates on the various technologies I was interested in. Since those posts are fun to look back at and reminisce about, I thought I would start them up again.
In a different life, I was a young CS graduate student very interested in all things artificial intelligence - but that story is for another day.¹ Fast forward 25 years, and I'm working on projects adjacent to large language models. Whether it's convincing people (and companies) to leverage them, tinkering with various frameworks, reading about their underpinnings, or just ruminating with friends about how we're all using these tools - it's been quite the whirlwind.
I’m sure I’m not alone when I say that my biggest gripe with the technology is that while it may feel magical when we “one-shot” generate a complex piece of software, the frontier is still jagged and rife with (steep) cliffs. I still experience countless moments where I traverse deep LLM rabbit holes that make me quite unproductive, yet I continue to blindly trudge through the muck, hoping that *this* iteration will deliver me one step closer to the promised land.
While various leaderboards and evals exist, at least in userland there don’t seem to be very good quantitative approaches to all of this madness. That little hint of magic is blinding us to the reality that there are probably only a limited number of modes where leveraging a language model makes sense. After all, working too closely with these models for too long sucks the joy out of writing software.
In a gold rush, it’s best to sell shovels. And right now, we’re all still digging.
So without further ado, here’s my April checkpoint of where I currently stand with respect to all things AI.
Evolving Toolkits
I’ve built a thing or two with many of the major Python frameworks - though most are available in other languages. Here are some of the ones I’ve played around with extensively:
- LangChain / LangGraph - The OG framework that pioneered many of the current abstractions. Early on, they suffered from their meteoric growth, but for the most part, their APIs have stabilized.
- Autogen - Microsoft’s attempt at a multi-agent collaboration framework.
- CrewAI - Originally provided a higher level abstraction over LangChain, but they’ve removed that dependency and have since become their own shovel.
- PydanticAI - As the name implies, from the trusted Pydantic folks.
- Agent Development Kit - Google’s entry into the space. Fairly new, I used it in my capstone project for the Google / Kaggle GenAI Intensive course.
For the most part, there’s quite a bit of overlap in the abstractions exposed through these frameworks. I’d suggest just picking one and going deep - it’s relatively trivial to switch between frameworks once you have a decent understanding of the high-level concepts. Since the space is constantly shifting, I’d also suggest re-evaluating the landscape every few months. For example, I maintain a private llm-framework-playground to test the latest and greatest.
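To make that overlap concrete, here’s a minimal sketch of the shared shape (model + system prompt + tools) using PydanticAI. Caveats: this assumes a recent pydantic-ai release (older versions expose `result.data` instead of `result.output`), an `OPENAI_API_KEY` in your environment, and a toy `word_count` tool I made up purely for illustration:

```python
from pydantic_ai import Agent

# The core abstraction nearly every framework shares:
# a model + a system prompt + a set of tools.
agent = Agent(
    "openai:gpt-4o",
    system_prompt="You are a terse assistant. Use tools when helpful.",
)

@agent.tool_plain
def word_count(text: str) -> int:
    """Count the words in a piece of text."""
    return len(text.split())

result = agent.run_sync("How many words are in 'the quick brown fox'?")
print(result.output)
```

Rename a few things and you more or less have the LangGraph, CrewAI, or ADK equivalent.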
Tooling Ecosystem
The tooling ecosystem has absolutely exploded in the past year, with clear winners emerging. Again: sell lots of shovels during a gold rush.
Here are some of the tools that I use daily:
- Claude Desktop (w/ various MCP servers)
- Claude Code / Aider
- Cursor
- Google Gemini
As a note, at some point I also had an OpenAI subscription, but with the release of the Claude Sonnet series, I let it lapse. YMMV here, as I know peeps who get a lot of mileage out of ChatGPT.
Books
Here are books I’d recommend on the topic. I’ve probably read at least 3 or 4 more, but these were the best. Knowledge of Python and linear algebra will help you more easily digest the material.
- Build a Large Language Model from Scratch - Great for understanding the core underlying principles behind large language models
- Hands-On Large Language Models - Guide for practical implementation with a myriad of real-world examples
- AI Engineering - Best guide for production considerations in AI system design
Courses
I tend to be an experiential learner (i.e., I learn by doing). I take small projects and work through an implementation, so I haven’t taken much coursework for the technologies I’ve learned over the years. With that said, since most of my projects have been private, the courses below are what I can actually point to. I’ve completed the following (ordered from least to most useful):
- Google AI Essentials - Sign up for this if you’re just getting started.
- LangChain Academy Intro to LangGraph - High level course about the LangGraph implementation.
- Huggingface Agents course - Walks you through a few of the frameworks; my first time encountering SmolAgents.
- Google / Kaggle Generative AI Intensive course - Most comprehensive, but it’s quite a bit of material to digest. Since one of my hobbies is purchasing an unhealthy number of domain names, for my capstone project I worked on a fun Domain Name Brainstorming Agent.
I found the Google / Kaggle course the most informative (and the deepest dive), but it might be a little overwhelming to some because it’s information-dense and compressed into a tight schedule.
My Current Observations
- Observability tooling is going to end up a massive winner. I’d already been a fan of the observability movement for a few years, but now the benefit is quite clear, especially when running numerous LLM calls through various frameworks (a minimal sketch of what I mean follows this list).
- We’re probably a ways out from agents that can act autonomously - if we can even agree on a definition of what that means. I’d suggest taking a peek at one of the prediction markets (Metaculus / Manifold) if that’s your thing.
- Whoever figures out a generic Agent workflow UX is going to win very big. I may be in the minority, but I still don’t think Cursor is a very good experience, just one we’ve all accepted as ‘good enough’.
- Model evaluation remains frustratingly subjective. Despite advances in benchmarks, the gap between measured performance and real-world utility persists.
- ‘Vibing’ does work and it’s extremely useful, but like anything else, context matters. It still seems difficult for someone without a software background to put something into production.
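On that observability point: even homegrown instrumentation beats flying blind. Here’s a minimal sketch using only the Python standard library - `summarize` is a hypothetical stand-in for a real model call, and in practice you’d want to record token counts and costs too:

```python
import functools
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("llm-observability")

def observe(fn):
    """Log latency and success/failure for any LLM-calling function."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            result = fn(*args, **kwargs)
            log.info("%s succeeded in %.2fs", fn.__name__, time.perf_counter() - start)
            return result
        except Exception:
            log.exception("%s failed after %.2fs", fn.__name__, time.perf_counter() - start)
            raise
    return wrapper

@observe
def summarize(text: str) -> str:
    # Hypothetical stand-in for a real model call.
    return text[:50]

print(summarize("The quick brown fox jumps over the lazy dog."))
```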
What’s next
- I hope we come up with standardized definitions of the various technologies in the space, e.g., what is an agent? What does it mean to be autonomous?
- I hope we develop better protocols to enable interoperability. Anthropic’s MCP and Google’s A2A are great starts (see the toy MCP server after this list).
- I’m excited about where this is all headed, especially in the ‘Agentic’ space, and one day, I hope that my blogging Agent will have written this so I can go back to finding domain names.
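For a taste of what that interoperability looks like today, here’s a toy MCP server built with the FastMCP helper from the official `mcp` Python SDK. The `domain_ideas` tool is purely hypothetical - a nod to my domain-name habit - and a real server would do something more useful:

```python
from mcp.server.fastmcp import FastMCP

# Any MCP-aware client (e.g., Claude Desktop) can discover
# and call the tools this server exposes.
mcp = FastMCP("domain-brainstormer")

@mcp.tool()
def domain_ideas(keyword: str, tld: str = ".com") -> list[str]:
    """Generate a few naive domain-name candidates for a keyword."""
    prefixes = ["get", "try", "use"]
    return [f"{prefix}{keyword}{tld}" for prefix in prefixes] + [f"{keyword}{tld}"]

if __name__ == "__main__":
    # Runs over stdio by default, which is what Claude Desktop expects.
    mcp.run()
```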
If you’ve made it this far… (congrats!)
- What’s your experience been with these technologies?
- Have you found any frameworks or tools that have genuinely improved your workflow?
I’d love to hear about your LLM rabbit holes.
Hope to see y’all next time!
This post was written by yours truly and spellchecked by AI.
I’ll probably catch some flak for saying the following, but if you’re interested in where this is all going, I would encourage you to take a peek ‘under the hood’. The math underlying these systems isn’t very difficult, and the general computing ideas have been around for a very long time. There’s a reason there’s a proliferation of these language models, and it’s not because they’re very complex to understand; while some of the novel architectures / optimizations are interesting (and neat!), they’re definitely within the realm of comprehension. ↩︎
- Tagged:
- Technology,
- Retrospective,
- AI