Empathetic Agentic AI Lab
Empathetic Agentic AI Lab Podcast

Welcome to the Lab

An Update and Hello on My Two Months on Substack

This week, I published my paper on Responsibility Engineering.

Responsibility Engineering exists to answer one question:

When an AI system is reused, adapted, or placed under pressure, how do we know it is still doing the job we intended it to do?

Responsibility Engineering gives us a way to notice, describe, and respond to that moment before it turns into a legal, ethical, or operational problem.

Google, OpenAI, Anthropic, and others are deeply focused on agent QA — primarily on whether agents behave correctly.

My work focuses on something adjacent but different: whether agent behavior is explainable, testable, and accountable in practice.

That gap shows up after deployment — when behavior is reused, pressure increases, and someone has to decide whether what they’re seeing is acceptable or needs escalation.

This lab is about how decisions get made when there is no playbook, information is incomplete, and the consequences are real. That’s what makes it unique.

With the paper being published, I’ve locked the terms.

Now I’m moving back into practice with Agent Decision Briefs because, as you might’ve guessed, anyone interested in QA wants to know that their work actually works, and why.

This is why I think of myself as an AI Agent Mechanic. And I want all of you to get good at a few things as well.

I’m building a missing professional discipline that becomes necessary in moments of failure. If you don’t do this, your system will fail in ways you won’t be able to explain.

Everything I prioritize right now must answer this question:

Does this make Responsibility Engineering more understandable, or more necessary?
