About Us
Rahul Baxi
Rahul Baxi is an AI researcher focused on certification frameworks for deployed agent systems. His work addresses the gap between laboratory AI evaluation and real-world regulatory requirements, particularly in domains where AI decisions carry financial or safety consequences.
Over the past two years, Rahul has developed complementary frameworks—CGAE, CDCT, DDFT, EECT, and IHT—that collectively measure whether AI systems truly understand their limitations, whether they remain honest under economic pressure, and whether they reason ethically in ambiguous contexts. These frameworks move beyond base model performance to evaluate agent behavior in multi-agent systems.
Our Philosophy
The era of single-metric AI evaluation has passed. Comprehension, honesty, ethical reasoning, and economic incentive alignment are not dimensions that can be collapsed into a single number. Modern AI certification requires multi-dimensional assessment.
Every framework we build is grounded in empirical validation. We don't propose abstract ideals—we develop testable measures with quantified inter-rater agreement (κ values), deploy them in real systems, and publish results openly. Our goal is to enable regulators and enterprises to make confident decisions about where AI can be safely deployed and where human oversight remains essential.
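For readers unfamiliar with the κ statistic mentioned above, here is a minimal sketch of Cohen's kappa, the standard chance-corrected agreement measure between two raters. The function name and labels are illustrative, not part of any Vyasa Labs framework:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: observed agreement corrected for chance agreement."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum((freq_a[lab] / n) * (freq_b[lab] / n) for lab in labels)
    return (p_o - p_e) / (1 - p_e)
```

A κ of 1 indicates perfect agreement, 0 indicates agreement no better than chance; values above roughly 0.6 are conventionally read as substantial agreement.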
Vyasa Labs operates independently. We publish all research open-access and maintain no financial stake in any particular AI platform or deployment methodology. This independence allows us to prioritize rigor over advocacy.
Affiliations
- ICML 2026 — Reviewer
- ICLR 2026 — Reviewer
- arXiv — Author Profile