My first paper at the intersection of AI and medicine started with a bold opening: "We develop a model which can diagnose irregular heart rhythms from single-lead ECG signals better than a cardiologist." My mentor's reasoning: people don't have time to wait for the point. Give them the point early.
That paper got over a thousand citations. Execution gets the work done. Problem selection determines if it matters. Framing determines if anyone notices.
Writing is thinking, not reporting. You don't figure out what you did, then write it down. You write to figure out what you actually did and why it matters.
The papers I write personally go through 10 to 15 rewrites. Students and collaborators I've worked with follow the same principle. Clarity emerges through rewriting. First drafts reveal what you're trying to say. Rewrites reveal what you should say.
Over time, I've converged on a structure that works across papers in different venues:
Paragraph 1: The problem. Why it matters.
Paragraph 2: What's been tried, why it's insufficient.
Paragraph 3: What we did. Why now.
Paragraph 4: What we found. What this unlocks.
One page. Every sentence earns its place.
The introduction matters disproportionately. Reviewers largely decide based on the introduction and the first figure. I spend about 40% of writing time on the introduction, and I rewrite it last, after the results are done.
Here's an example of the difference framing makes:
Bad first sentence: "Machine learning has revolutionized healthcare."
Good first sentence: "200,000 emergency department visits result in preventable deaths annually."
The difference: specificity and urgency. Many papers are hard to understand, even at top venues. This creates an opportunity. Clear papers get accepted more easily, cited more often, implemented more widely.
Research impact follows a power law. A small fraction of papers generate most of the follow-up work—through territories they open, infrastructure they create, collaborators they attract.
Papers that reframe problems compound more than papers that improve numbers.
The question: "Will this change how people think, or just what numbers they report?"
Problem selection matters more over time. I spend more time now deciding what NOT to work on. Taste—the ability to recognize which problems will matter—becomes increasingly valuable.
Consistent work creates opportunities. Publishing regularly leads to collaborations, and collaborations improve the work.
What hasn't changed: still read regularly, still code regularly, still rewrite extensively.
What has changed: reading is faster, projects start faster, drafting is faster. The skills build on each other over time.
The work stops feeling like a slog at some point. Skills become automatic. Confidence builds.
This is a good time to be doing research. AI tools make reading faster, experimentation more productive, writing clearer.
We're in an interesting window. AI is powerful enough to accelerate research significantly, but not yet capable enough to do science autonomously. This period—where AI amplifies researchers rather than replacing them—creates unusual opportunities for productivity.
The joy compounds too. Reading is easier when tools help you find relevant work. Experiments run faster when infrastructure improves. Writing clarifies faster with assistance.
The three capabilities—execution, problems, framing—build on each other over years.
I sometimes think about what advice I'd give 20-year-old me, sitting in that Stanford office in December 2015. Anxious. Uncertain. Hungry.
The uncertainty doesn't go away. But it becomes familiar. The curiosity is an asset, not a liability. And the work itself—when you find the right problems—is deeply satisfying.
The first decade took me from SQuAD to medical AI, from student to professor, from individual contributor to running a lab. The path wasn't linear. It never is.
We'll see what it looks like ten years from now.
Last updated: 12/17/2025