Demystifying AI Assurance: An Interview with April Chin of Resaro

At the Inspire and Connect Asia Conference organized by BPI France, we had the chance to sit down (or actually, stand up) with April Chin, CEO of Resaro, an AI assurance company that's tackling one of the industry's most pressing challenges: making AI systems transparent and trustworthy.

Playing Techie Taboo with AI Assurance

To make things interesting, we challenged April to explain AI assurance using our Techie Taboo cards—but without using the words "safety," "alignment," "monitoring," "governance," and "risk."
After a moment of thought, April nailed it:
"AI assurance is the process of measuring, evaluating, and communicating whether your AI solution meets the expectations of your stakeholders."
Simple. Clear. Accessible. Exactly what the industry needs.
The Three Key Stakeholders

At Resaro, April and her team focus on serving three distinct groups:
Operators - The people on the ground actually using AI solutions
Engineers - The builders developing AI systems
Practitioners - Those ensuring AI is being used properly and meeting compliance requirements
This holistic approach recognizes that AI assurance isn't just a technical problem—it's a communication challenge across different expertise levels.
Open Source: Leveling Up AI Testing Standards
Resaro recently open-sourced two critical tools:
1. AI Solutions Quality Index (ASQI)

ASQI is a non-technical framework that lets operators and practitioners communicate their expectations clearly. As April explained, "These stakeholders want to make sure the AI solution delivers results and doesn't break guardrails, but they're usually not technically trained."
2. ASQI Engineer
ASQI Engineer is a test runner that developers can use to validate AI systems against those quality indicators. "You can't just communicate your expectation and hope in this black box that it performs as you would expect," April noted. "We want to bring transparency into this whole testing and validation process."
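To make the idea concrete, here is a minimal sketch of what validating an AI output against declared quality expectations might look like. This is purely illustrative and is not ASQI Engineer's actual API; the indicator names and checks below are invented for the example.

```python
# Illustrative sketch only — not the real ASQI Engineer interface.
# A stakeholder declares expectations as named indicators; a test runner
# checks a model output against each one and reports pass/fail.

from dataclasses import dataclass
from typing import Callable

@dataclass
class QualityIndicator:
    name: str
    check: Callable[[str], bool]  # returns True if the output meets the expectation

def run_checks(output: str, indicators: list[QualityIndicator]) -> dict[str, bool]:
    """Evaluate one model output against every declared indicator."""
    return {ind.name: ind.check(output) for ind in indicators}

# Hypothetical expectations an operator might declare:
indicators = [
    QualityIndicator("non_empty_answer", lambda out: len(out.strip()) > 0),
    QualityIndicator("no_pii_leak", lambda out: "SSN:" not in out),
    QualityIndicator("within_length_budget", lambda out: len(out) <= 500),
]

report = run_checks("The claim was approved within 2 business days.", indicators)
print(report)  # → {'non_empty_answer': True, 'no_pii_leak': True, 'within_length_budget': True}
```

The point of the pattern is the separation of roles: non-technical stakeholders author the indicators in plain terms, while engineers wire them into an automated runner.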
By open-sourcing these tools, Resaro is inviting the entire developer community to contribute and raise the bar for AI testing practices.
Using AI to Monitor AI
When asked whether Resaro uses AI in developing their own tools, April confirmed they do—employing models as evaluators and using generative models to create synthetic test data. But she emphasized that confidence ultimately comes from "having domain experts, engineers, and data scientists who are experienced and passionate about integrity in the testing and validation process."
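The two techniques April mentions — models as evaluators and generative models producing synthetic test data — can be sketched as a generic "judge" loop. Everything below is a stand-in: the stub functions replace real model calls so the example runs without any API access, and none of the names come from Resaro's tooling.

```python
# Illustrative "model as evaluator" pattern. In practice judge_fn would
# query an LLM and generate_synthetic_cases would call a generative model;
# here both are simple stubs so the sketch is self-contained.

import random

def generate_synthetic_cases(n: int, seed: int = 0) -> list[str]:
    """Stand-in for a generative model producing synthetic test inputs."""
    random.seed(seed)
    templates = ["Refund request for order #{}", "Complaint about delivery #{}"]
    return [random.choice(templates).format(i) for i in range(n)]

def stub_judge(prompt: str, response: str) -> float:
    """Stand-in for an LLM judge scoring a response 0-1 for relevance."""
    return 1.0 if any(word in response.lower() for word in prompt.lower().split()) else 0.0

def evaluate(system_under_test, judge, cases: list[str]) -> float:
    """Score the system's response to every case and return the mean."""
    scores = [judge(case, system_under_test(case)) for case in cases]
    return sum(scores) / len(scores)

# Hypothetical system under test: echoes the request back in an acknowledgement.
mean_score = evaluate(lambda p: f"We received your {p.lower()}",
                      stub_judge, generate_synthetic_cases(4))
print(mean_score)  # → 1.0
```

As April notes, automation like this only builds confidence when experienced domain experts stand behind the judging criteria and the synthetic data.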
Making Tech Accessible
This interview represents exactly what we're trying to achieve with our podcast: breaking down complex tech concepts into digestible, engaging conversations. Our Techie Taboo cards (which April gamely played along with) are designed to make tech fun and accessible while helping everyone—technical or not—understand the buzzwords shaping our industry. Watch our podcast episode below to see how we use it:
Want to try Techie Taboo yourself? Join our waitlist to get access to these conversation cards that turn complex tech concepts into engaging discussions.
Check out Resaro's open-source tools on GitHub and join the movement toward more transparent, trustworthy AI systems.
Watch the full interview on our YouTube, Instagram, and TikTok channels, publishing on 22 Nov, and don't forget to star Resaro's GitHub repository!