
I'm a full-stack software engineer with 5+ years of professional experience, mostly focused on AI integration and agentic systems. At Verisk Analytics and ReliaQuest, I've built AI solutions that teams actually use daily. I care about what ships and what sticks, which is harder than it sounds.
My path to software began in biology labs, automating data analysis with Python. That foundation in complex systems thinking now shapes how I approach AI integration: understanding when emergent behavior adds value and when it creates unpredictable risk. Not every problem needs AI; recognizing the difference is half the battle.
At Verisk Analytics, I cut Playwright test code generation time by 40% using agentic AI tools and context engineering practices. At ReliaQuest, I work on agent orchestration systems for real-time security threat detection. If something breaks, a security analyst misses a threat, so the bar for reliability is high.
I've also built my own agent systems from scratch: a Digital Twin chatbot, a Writers' Room with literary AI personas, and a Code Review Command Center. Each one taught me something different about what it takes to go from 'cool demo' to 'actually reliable.' Turns out that's where most of the real work lives.
Current Focus
AI-Augmented Engineering
Integrating AI tools (Claude Code, Cursor, custom MCP servers) into development workflows. The interesting question isn't the initial speed boost; it's whether teams are still using the tools six months later.
Agentic Architecture
Designing multi-agent systems where agents hand off work, stay in their lane, and fail gracefully. The architecture matters more than the model.
AI Implementation Strategy
Figuring out which problems are actually worth throwing AI at, and being honest about the ones that aren't. Most of the value is in scoping and sequencing.
Context Engineering
Crafting prompts, system instructions, and retrieval strategies that make LLMs useful instead of confidently wrong. This is where I spend most of my time.
How I Work
- I ask 'what problem are we solving?' before I ask 'what model should we use?' Most failed AI projects skip this step.
- I test against real outcomes before scaling anything. A demo that impresses leadership is not the same as a tool people use.
- I build the error handling and oversight layer before the flashy parts. Production AI that fails silently is worse than no AI.
- I write down the reasoning behind my integration decisions. Six months from now, someone will need to understand them.
- I still write code every day. Consulting advice from people who stopped building tends to age badly.
Want to talk about a project?