Research Program

Advancing AI for Human Benefit

Our research aims to develop artificial intelligence systems that are effective, fair, and beneficial for society.

Research Areas

Natural Language Processing

We develop models that understand and generate human language with unprecedented accuracy. Our work focuses on contextual understanding, multilingual capabilities, and handling complex discourse structures. Recent projects include large language model interpretability and cross-lingual transfer learning.

Machine Learning for Healthcare

Applying AI techniques to improve medical diagnosis, treatment planning, and patient outcomes. We work closely with clinicians at Stanford Hospital to develop systems that integrate seamlessly into clinical workflows while maintaining high accuracy and reliability.

AI Ethics and Fairness

Our research investigates bias in AI systems and develops methods to ensure fair and equitable outcomes across different demographic groups. We create evaluation frameworks, debiasing techniques, and guidelines for responsible AI deployment.

Human-AI Collaboration

Designing systems where humans and AI work together effectively. We study how to best leverage the complementary strengths of human expertise and machine capabilities, particularly in high-stakes decision-making contexts.

Education & Career

2020-present

Associate Professor

Stanford University, Department of Computer Science

2016-2020

Assistant Professor

MIT, Computer Science and Artificial Intelligence Laboratory

2014-2016

Research Scientist

Google Research

Worked on machine translation and conversational AI systems.

2010-2014

Ph.D. in Computer Science

Carnegie Mellon University

Thesis: "Understanding and Generating Natural Language in Context"

2006-2010

B.S. in Computer Science

UC Berkeley

Graduated summa cum laude

Mathematical Foundations

Core Methods

Our research builds on several foundational mathematical frameworks that enable rigorous analysis of complex systems.

The fundamental optimization problem minimizes $f(\theta) = \sum_{i=1}^{n} \ell(h_\theta(x_i), y_i) + \lambda \|\theta\|^2$, where $\ell$ is the loss function and $\lambda$ controls regularization strength.

For deep learning models, gradient flow dynamics are governed by $\frac{d\theta}{dt} = -\nabla_\theta \mathcal{L}(\theta)$, and Euler's identity $e^{i\pi} + 1 = 0$ remains our favorite party trick.
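
To make these two formulas concrete, here is a minimal sketch, assuming a linear model $h_\theta(x) = \theta^\top x$ with squared loss; a plain gradient-descent step with learning rate $\eta$ is just the Euler discretization of the gradient flow above. All names (`X`, `y`, `lam`, `eta`) are illustrative, not from this page.

```python
import numpy as np

def loss(theta, X, y, lam):
    """f(theta) = sum_i (h_theta(x_i) - y_i)^2 + lam * ||theta||^2."""
    residuals = X @ theta - y              # h_theta(x_i) - y_i for a linear model
    return np.sum(residuals ** 2) + lam * theta @ theta

def grad(theta, X, y, lam):
    """Gradient of the regularized objective above."""
    return 2 * X.T @ (X @ theta - y) + 2 * lam * theta

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = X @ np.ones(5) + 0.1 * rng.normal(size=100)

theta = np.zeros(5)
eta, lam = 1e-3, 0.1                       # Euler step size and regularization strength
for _ in range(500):
    theta = theta - eta * grad(theta, X, y, lam)   # theta <- theta - eta * grad f(theta)
print(round(loss(theta, X, y, lam), 3))
```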

Currency-like prose must stay prose: it costs $5 and $10 total, our full budget is $200, and the discount is $20. None of those should render as math.

For a display equation inline with the prose, the ecosystem convention is $$E = mc^2$$ in the middle of a sentence, which should render as inline-display math without breaking the paragraph.

A multi-line derivation belongs in a fenced block:

```math
\begin{aligned}
\mathcal{L}_{total} &= \mathcal{L}_{recon} + \beta \cdot \mathcal{L}_{KL} \\
&= \|x - \hat{x}\|^2 + \beta \cdot D_{KL}(q(z|x) \| p(z))
\end{aligned}
```

And a standalone display equation on its own paragraph:

$$\int_0^\infty e^{-x^2}\,dx = \frac{\sqrt{\pi}}{2}$$

Key quantities used throughout this work are collected below. Label each equation with math:<id> and cross-reference it from prose with <EquationRef id="...">.

$$\mathcal{L}_{CE} = -\sum_{c=1}^{C} y_c \log(\hat{y}_c)$$

$$\text{Attention}(Q, K, V) = \text{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V$$

$$\mathcal{L}_{ELBO} = \mathbb{E}_{q(z|x)}[\log p(x|z)] - D_{KL}(q(z|x) \| p(z))$$
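
For a concrete reference implementation of two of these quantities, here is a short numpy sketch; the shapes and names are illustrative, not part of the convention above.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)       # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.swapaxes(-1, -2) / np.sqrt(d_k)
    return softmax(scores) @ V

def cross_entropy(y, y_hat, eps=1e-12):
    """L_CE = -sum_c y_c log(y_hat_c); eps guards against log(0)."""
    return -np.sum(y * np.log(y_hat + eps))

Q = K = V = np.random.default_rng(0).normal(size=(4, 8))
print(attention(Q, K, V).shape)                   # (4, 8)
print(cross_entropy(np.array([0, 1, 0]), np.array([0.2, 0.7, 0.1])))
```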

Pipeline Showcase

Math Rendering Examples

This page is a demo of every form the math pipeline supports — inline math, display math, fenced multi-line math, matrices, and the error-handling behavior on malformed input. Foundations authoring academic content can treat it as a reference.

Inline math in running prose. The constants $\alpha$, $\beta$, $\gamma$, $\delta$, $\epsilon$, $\pi$, and $\omega$ flow naturally with the prose around them. Inline equations like $e^{i\pi} + 1 = 0$ or $\sum_{k=0}^{n} k = \tfrac{n(n+1)}{2}$ sit on the same line as the surrounding words.

Dollar-sign disambiguation. The pipeline follows Pandoc's rules: $...$ is only math when the body has no whitespace adjacent to the delimiters and the closing $ is not followed by a digit. So a currency sentence like "it cost $5 and $10 total, with a budget of $200" stays as prose without any escaping.
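
As a rough illustration of that rule (a simplified sketch, not the pipeline's actual tokenizer), a single regular expression captures both conditions:

```python
import re

# Simplified version of the Pandoc-style rule described above: the body of
# $...$ must not have whitespace touching either delimiter, and the closing
# $ must not be immediately followed by a digit.
INLINE_MATH = re.compile(r"\$(?!\s)([^$]+?)(?<!\s)\$(?!\d)")

def find_inline_math(text: str):
    return INLINE_MATH.findall(text)

print(find_inline_math("it cost $5 and $10 total, with a budget of $200"))
# []  -- every candidate span has whitespace next to a delimiter
print(find_inline_math(r"Euler: $e^{i\pi} + 1 = 0$ holds."))
# ['e^{i\\pi} + 1 = 0']
```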

Standalone display math renders centered with automatic spacing:

$$\sum_{k=0}^{\infty} \frac{x^k}{k!} = e^x, \qquad \prod_{p \text{ prime}} \frac{1}{1 - p^{-s}} = \zeta(s)$$

Multi-line derivations in fenced blocks preserve alignment:

```math
\begin{aligned}
(a + b)^2 &= (a + b)(a + b) \\
&= a^2 + ab + ba + b^2 \\
&= a^2 + 2ab + b^2
\end{aligned}
```

Matrices and determinants:

$$A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}, \quad \det(A) = 0$$

Error handling — a deliberate probe. The expression below is malformed on purpose: \frac{1} is missing its denominator. Rather than crashing the page, the pipeline emits a <span class="temml-error"> around the bad source (the small red \frac{1} below), keeps rendering the rest of the paragraph, and attaches the parser message as a data-temml-error attribute so foundations can surface it conditionally: $\frac{1}$ — see, the prose continues normally after the failure. Foundations can style .temml-error in their theme CSS to hide it, shrink it, or surface the message as a tooltip.
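
The error-handling contract described above fits in a few lines; this is a hedged sketch of that behavior, not the pipeline's actual code, and `parse` is a hypothetical stand-in for the real renderer.

```python
import html

def render_math(src: str, parse) -> str:
    """Render math source, degrading gracefully on parse failure."""
    try:
        return parse(src)                  # normal path: rendered markup
    except Exception as err:               # malformed input, e.g. \frac{1}
        # Wrap the raw source so the theme can style it, and attach the
        # parser message for conditional display (tooltip, console, etc.).
        return (
            f'<span class="temml-error" '
            f'data-temml-error="{html.escape(str(err))}">'
            f'{html.escape(src)}</span>'
        )
```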