Research Program
Advancing AI for Human Benefit
Our research aims to develop artificial intelligence systems that are effective, fair, and beneficial for society.
Research Areas
Natural Language Processing
We develop models that understand and generate human language with unprecedented accuracy. Our work focuses on contextual understanding, multilingual capabilities, and handling complex discourse structures. Recent projects include large language model interpretability and cross-lingual transfer learning.
Machine Learning for Healthcare
Applying AI techniques to improve medical diagnosis, treatment planning, and patient outcomes. We work closely with clinicians at Stanford Hospital to develop systems that integrate seamlessly into clinical workflows while maintaining high accuracy and reliability.
AI Ethics and Fairness
Our research investigates bias in AI systems and develops methods to ensure fair and equitable outcomes across different demographic groups. We create evaluation frameworks, debiasing techniques, and guidelines for responsible AI deployment.
Human-AI Collaboration
Designing systems where humans and AI work together effectively. We study how to best leverage the complementary strengths of human expertise and machine capabilities, particularly in high-stakes decision-making contexts.
Education & Career
2020-present
Associate Professor
Stanford University, Department of Computer Science
2016-2020
Assistant Professor
MIT, Computer Science and Artificial Intelligence Laboratory
2014-2016
Research Scientist
Google Research
Worked on machine translation and conversational AI systems.
2010-2014
Ph.D. in Computer Science
Carnegie Mellon University
Thesis: "Understanding and Generating Natural Language in Context"
2006-2010
B.S. in Computer Science
UC Berkeley
Graduated summa cum laude
Mathematical Foundations
Core Methods
Our research builds on several foundational mathematical frameworks that enable rigorous analysis of complex systems.
The fundamental optimization problem minimizes $\mathcal{L}(\theta) + \lambda \lVert \theta \rVert_2^2$, where $\mathcal{L}$ is the loss function and $\lambda$ controls regularization strength.
For deep learning models, gradient flow dynamics are governed by $\frac{d\theta}{dt} = -\nabla_\theta \mathcal{L}(\theta)$, and Euler's identity $e^{i\pi} + 1 = 0$ remains our favorite party trick.
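As a concrete illustration of one gradient step on a regularized objective of this shape (a toy scalar sketch; the quadratic loss below is a placeholder, not one of our models):

```javascript
// One gradient-descent step on L(theta) + lambda * theta^2 (scalar theta).
// The toy loss L(theta) = (theta - 3)^2 is illustrative only.
function gradientStep(theta, gradL, lambda, lr) {
  // d/dtheta [ L(theta) + lambda * theta^2 ] = L'(theta) + 2 * lambda * theta
  return theta - lr * (gradL(theta) + 2 * lambda * theta);
}

const gradL = (theta) => 2 * (theta - 3); // L'(theta) for L = (theta - 3)^2

let theta = 0;
for (let i = 0; i < 200; i++) {
  theta = gradientStep(theta, gradL, 0.1, 0.05);
}
// With lambda = 0.1, regularization pulls the minimizer below 3,
// to theta* = 3 / (1 + lambda).
```

Note how the regularizer shifts the solution away from the unregularized minimum at 3, which is exactly the bias-variance trade that $\lambda$ controls.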
Currency-like prose must stay prose: it costs $5 and $10 total, our full budget is $200, and the discount is $20. None of those should render as math.
For a display equation inline with the prose, the ecosystem convention is $$\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}$$ in the middle of a sentence, which should render as inline-display math without breaking the paragraph.
A multi-line derivation belongs in a fenced block:
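For instance, an aligned derivation of the gradient of a regularized objective (a placeholder example; any aligned derivation exercises the same rendering path):

```math
\begin{aligned}
\nabla_\theta \left[ \mathcal{L}(\theta) + \lambda \lVert \theta \rVert_2^2 \right]
  &= \nabla_\theta \mathcal{L}(\theta) + \lambda \nabla_\theta \lVert \theta \rVert_2^2 \\
  &= \nabla_\theta \mathcal{L}(\theta) + 2\lambda\theta
\end{aligned}
```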
And a standalone display equation on its own paragraph:

$$\sum_{n=1}^{\infty} \frac{1}{n^2} = \frac{\pi^2}{6}$$
Key quantities used throughout this work are collected below. Label each equation with math:<id> and cross-reference it from prose with <EquationRef id="...">.
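For example (a sketch of the convention; the placement of the math:<id> label on the fence info string is an assumption about the authoring syntax):

```math math:reg-objective
\min_{\theta}\; \mathcal{L}(\theta) + \lambda \lVert \theta \rVert_2^2
```

The regularized objective in <EquationRef id="reg-objective"> then stays in sync with the prose that cites it.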
Pipeline Showcase
Math Rendering Examples
This page is a demo of every form the math pipeline supports — inline math, display math, fenced multi-line math, matrices, and the error-handling behavior on malformed input. Foundations authoring academic content can treat it as a reference.
Inline math in running prose. The constants $\pi$, $e$, $i$, $\gamma$, $\varphi$, $\hbar$, and $\infty$ flow naturally with the prose around them. Inline equations like $a^2 + b^2 = c^2$ or $e^{i\pi} + 1 = 0$ sit on the same line as the surrounding words.
Dollar-sign disambiguation. The pipeline follows Pandoc's rules: $...$ is only math when the body has no whitespace adjacent to the delimiters and the closing $ is not followed by a digit. So a currency sentence like "it cost $5 and $10 total, with a budget of $200" stays as prose without any escaping.
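A sketch of that check as a standalone helper (a simplified scanner, not the pipeline's actual implementation; `nextChar` stands in for the character that follows the closing $ in the source):

```javascript
// Decide whether a `$...$` candidate span is inline math under the
// Pandoc-style rules above. `span` includes both dollar delimiters;
// `nextChar` is the character immediately after the closing $ ("" at
// end of input). Simplified sketch, not the pipeline's real scanner.
function isInlineMath(span, nextChar = "") {
  if (span.length < 3 || !span.startsWith("$") || !span.endsWith("$")) return false;
  const body = span.slice(1, -1);
  if (/^\s|\s$/.test(body)) return false;    // whitespace adjacent to a delimiter
  if (/^[0-9]/.test(nextChar)) return false; // closing $ followed by a digit
  return true;
}

isInlineMath("$x + y$");       // math: interior spaces are fine, edges are not
isInlineMath("$5 and $", "1"); // currency: space before closing $, digit after
isInlineMath("$ x $");         // prose: space after opening $
```

Interior whitespace is allowed; only whitespace touching a delimiter (or a trailing digit) disqualifies the span, which is what keeps "$5 and $10" out of the math path.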
Standalone display math renders centered with automatic spacing:

$$x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}$$
Multi-line derivations in fenced blocks preserve alignment:
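For instance (the derivation itself is a placeholder, chosen only to exercise the alignment machinery):

```math
\begin{aligned}
(a + b)^2 &= (a + b)(a + b) \\
          &= a^2 + ab + ba + b^2 \\
          &= a^2 + 2ab + b^2
\end{aligned}
```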
Matrices and determinants:

$$\det\begin{pmatrix} a & b \\ c & d \end{pmatrix} = ad - bc$$
Error handling — a deliberate probe. The expression below is malformed on purpose: \frac{1} is missing its denominator. Rather than crashing the page, the pipeline emits a <span class="temml-error"> around the bad source (the small red \frac{1} below), keeps rendering the rest of the paragraph, and attaches the parser message as a data-temml-error attribute so foundations can surface it conditionally: \frac{1} — see, the prose continues normally after the failure. Foundations can style .temml-error in their theme CSS to hide it, shrink it, or surface the message as a tooltip.
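The fallback behavior can be sketched as a wrapper (the renderer is injected so the sketch stays library-agnostic; Temml's renderToString would be the natural argument, and the real pipeline may wire this up differently):

```javascript
// Render a TeX string; on parser failure, emit the error span described
// above instead of crashing the page. `renderToString` is injected
// (e.g. Temml's renderToString); this is a behavioral sketch, not the
// pipeline's source.
function renderMathSafe(tex, renderToString) {
  const esc = (s) =>
    String(s).replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/"/g, "&quot;");
  try {
    return renderToString(tex);
  } catch (err) {
    const msg = err && err.message ? err.message : String(err);
    // Keep rendering the rest of the page: wrap the bad source and
    // attach the parser message for conditional surfacing.
    return `<span class="temml-error" data-temml-error="${esc(msg)}">${esc(tex)}</span>`;
  }
}
```

Because the wrapper returns a well-formed span rather than throwing, one malformed expression never takes down the surrounding paragraph, and the theme CSS decides how loudly to surface the failure.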