Writing Academic Papers with Claude + Obsidian

A structured workflow for using AI to accelerate academic writing. Applicable across all disciplines.

What This Guide Is

This guide provides a workflow for AI-assisted academic writing. It covers the prose-heavy parts of scholarship: literature synthesis, argumentation, drafting, and revision.

Tools Required

Prerequisites

  • Basic Obsidian familiarity
  • A research question or thesis
  • Claude Pro subscription

You will use Obsidian as your knowledge base and Claude as your writing partner. The web interface works for the full workflow; Claude Code adds the ability to read and write files in your vault directly. Despite its name, Claude Code is not just for programmers. It excels at any task involving structured text and files.

The Workflow Overview

This accelerates: literature synthesis, drafting, structuring arguments, iterative revision.

This does not replace: your domain expertise—running experiments, conducting interviews, close reading, or formal derivation.

The Core Insight

AI output quality depends almost entirely on input context quality.

Dump a vague request → get hollow, generic text.

Provide structured, compressed context → get useful output.

This is Context Engineering: actively managing what the AI sees. The documents you create aren't just notes. They shape what AI can do for you.

The Key Principle

The context documents are the artifact. The generated text is replaceable.

What is a Vault?

A "vault" is Obsidian's term for a folder of interconnected markdown files. Unlike traditional note-taking apps, Obsidian stores everything as plain text files on your computer. No proprietary formats, no lock-in.

Why this matters for AI-assisted writing

  • Persistence: Your intellectual identity (writing style, methodological commitments, theoretical positions) lives in files that persist across projects.
  • Accumulation: Sources build up over time; each paper draws on previous work through internal links.
  • AI-readability: Claude Code can directly access markdown files, reading your context and writing drafts into your project folders.
  • Portability: If tools change, your knowledge base remains. It's just folders and text files.

Vault Structure

Create this folder structure in your Obsidian vault:

/Academic-Vault/
├── CLAUDE.md                    # Your AI constitution
├── me/                          # Your intellectual identity
│   ├── style.md
│   ├── positions.md
│   └── challenges.md
├── References/                  # All sources, linked across projects
├── Attachments/                 # PDFs, data files
└── projects/
    └── Paper-Name/
        ├── CLAUDE.md            # Project-specific rules
        ├── context/
        │   ├── research.md
        │   ├── data.md
        │   ├── requirements.md
        │   └── design.md
        ├── drafts/
        ├── feedback/
        └── journal.md

Why This Structure

  • /me carries your intellectual identity across projects: your writing voice, preferred methods, and theoretical commitments
  • /References accumulates over time. Sources link across papers, building a personal knowledge graph.
  • /projects/Paper-Name/context/ holds the distilled context AI actually uses (not raw materials, but compressed insights)
  • journal.md documents your process for future reuse and methodological transparency
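
The root CLAUDE.md in the tree above is where standing instructions live: what the AI should read before drafting, what it must never do, how it should handle uncertainty. As a rough illustration only (every rule below is a placeholder to adapt, not a prescription), it might look like:

# CLAUDE.md (vault level)
- Before drafting, read me/style.md, me/positions.md, and me/challenges.md.
- Cite only sources that have a note in References/; flag anything else as unverified.
- Match the voice described in me/style.md; no filler phrases.
- Never modify files in References/ or Attachments/.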

Guarding Against Echo Chambers

If the AI only ever sees /me/positions.md, it will reinforce rather than challenge your assumptions. Consider adding a challenges.md file containing the strongest arguments against your positions, and instruct the AI to read it. This forces productive friction into the workflow.
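
A minimal sketch of what a challenges.md entry might look like (the content here is purely illustrative):

## Objection: [strongest counter-position to the thesis]
- Best statement of it: [[Reference-note]], with chapter or page
- What it would take to answer it
- Status: unanswered / partially addressed in section X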

Note on Terminology

The filenames (requirements.md, design.md) borrow from project management to enforce clarity. They apply to any structured argument, not just technical work.

Safety Setup

Before granting Claude Code file access to your vault:

  1. Set up real backups.

    Obsidian's File Recovery is a snapshot tool, not a true backup. For agentic workflows where AI can write files, use Git, Time Machine, or maintain an archive folder that the AI has no write access to. If something goes wrong, you need recovery outside the system the AI can touch (a minimal Git setup is sketched after this list).

  2. Understand the access model.

    Claude Code can read and write files in any folder you give it access to. This is powerful but requires trust. Real backups ensure that trust is recoverable.

  3. Check your target journal's AI policy.

    Some publishers (Nature, Science, Elsevier, and others) prohibit or restrict uploading unpublished manuscripts to generative AI. Verify compliance before using this workflow on work intended for specific venues. Never use this workflow for confidential peer review materials.
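
If you choose Git, a minimal setup looks like the commands below (run once inside the vault folder, assuming Git is installed); commit again before any session in which the AI will write files:

cd Academic-Vault
git init
git add -A
git commit -m "snapshot before AI session"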

Phase 1: Preparation

Goal

Assemble raw materials without organizing them yet.

The purpose of this phase is collection, not analysis. Resist the urge to structure too early. You don't yet know what will matter most.

  1. Create your project folder in your Obsidian vault under /projects/Paper-Name/ with the subdirectories shown above (context/, drafts/, feedback/)
  2. Write your initial thesis and questions in context/requirements.md. This doesn't need to be polished. Just capture what you're trying to argue and what you need to figure out (a minimal sketch appears after this list).
  3. Collect sources: Download PDFs to /Attachments, create notes in /References. How you structure these notes is up to you. Some prefer detailed annotations, others just bibliographic info and a few key quotes. The point is having sources accessible and linkable.
  4. Start journal.md with today's date and your initial state: What do you already know? What are you uncertain about? What's your current best guess at the argument?

Don't organize yet. Just collect.
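
For illustration, the initial requirements.md from step 2 can stay rough; everything below is a placeholder:

# requirements.md (initial)
Working thesis: [one sentence, even if you still doubt it]
Open questions:
- What would count as evidence against this?
- Which literature do I need to position this against?
Constraints: [target venue, length, deadline, if known]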

[Screenshot: Obsidian vault showing the project folder structure, with Attachments, me, projects (context documents, drafts, feedback), References, and CLAUDE.md]

Phase 2: Exploration & Mapping

Goal

Test your thesis against the literature and identify the strongest and weakest points of your argument.

This phase produces working ideas, not polished documents. You're having a conversation with Claude to stress-test your thinking before committing to a structure.

Questions to Explore

  • What's actually possible with my sources/data?
  • What are the strongest objections to my argument?
  • What adjacent fields have relevant frameworks I'm missing?
  • Where does my thesis need refinement?

Key Prompt Pattern

I'm developing a paper arguing [THESIS].

Before I commit:
1. Steel-man the strongest opposing view
2. Identify the 3 weakest points in my argument
3. What evidence would change your assessment?

Be critical, not agreeable.

Document in journal.md

  • What you tested
  • What held up, what didn't
  • Dead ends and why
  • Sources to verify

Early Verification Checkpoint

As you explore, flag uncertain claims by adding [verify] inline. Don't wait until the end to discover your argument rests on a hallucinated source. Check key claims against your actual sources now. AI can misrepresent what sources say.
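
For example, a flagged claim in a working note might read as follows (the source named here is a placeholder, not a real citation):

Smith (2019) reportedly shows the effect holds across contexts [verify: re-read ch. 3, may apply only to subset Y].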

[Screenshot: Claude conversation using the steel-man prompt, with Claude's critical response identifying philosophical tensions in the thesis]

Phase 3: Distillation

Goal

Compress everything you've learned into focused context documents that work well for AI.

This Is Not a Shortcut

This is front-loading the cognitive labor. You are building the blueprint so the AI can build the walls. If the blueprint is weak, the house collapses. Distillation is where you do the hard thinking; drafting is where the AI helps you execute it.

Why this matters: AI performance degrades with context length ("context rot"). More tokens ≠ better output. You want maximum information in minimum space.

The Context Documents

  • research.md: Theoretical framework, state of the field, key sources (linked to /References, not quoted in full)
  • data.md: What you're working with, its structure, access methods, limitations
  • requirements.md: Research questions as clear, testable statements
  • design.md: Paper structure, argument flow, venue requirements
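
As a sketch of what design.md might hold for a standard journal article (every detail below is a placeholder to adapt):

# design.md
Target venue: [journal], ~8,000 words, [citation style]
Argument flow: problem → gap in the literature → claim → evidence → objections and replies → implications
Section plan: Introduction (~800 words), Background (~1,500), Argument (~3,000), Objections (~1,500), Conclusion (~800)
Non-goals: what this paper deliberately does not address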

The Craft of Distillation

Bad distillation = dumping everything potentially relevant

Good distillation = only what's necessary for THIS paper, compressed to essentials

Ask yourself:

  • If I cut this, would the AI output suffer?
  • Am I describing or just copying?
  • Is this already in AI's training data? (Don't explain what regression is, or who Foucault was.)

Link, Don't Paste

Link to source notes in /References instead of pasting their contents: the AI gets the distilled insight, and you keep the full record. When prompting Claude Code, explicitly instruct it to read linked files: "Read all files linked in the References section before generating the draft."
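
In practice, a line in research.md links to the note rather than reproducing it, for example (the source name is hypothetical):

- [[Smith-2019]]: establishes the framework I build on; I rely only on the finding in its section 4, not the broader claims.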

Context Budget

Aim to keep your combined context documents under 3,000 words. If they're longer, you probably aren't distilling; you're dumping. This is a guideline, not a hard rule.

Verification before finalizing: Before you consider these documents complete, verify that key sources actually say what you think they say. Misunderstandings from Phase 2 will propagate through the entire draft if you don't catch them here.

Phase 4: Implementation

Goal

Generate, critique, and refine drafts until the quality is publishable.

Step 1: Generate First Draft

Provide your context documents to Claude. Request a complete draft.

Write a complete draft based on this context:

[Paste or reference research.md]
[Paste or reference requirements.md]
[Paste or reference design.md]

Requirements:
- Every claim grounded in provided sources
- No filler phrases
- Flag uncertainties explicitly

Step 2: Expect It to Be Bad

First drafts are typically hollow: structurally correct but thin on substance. This is normal. The draft gives you something concrete to push against.

Step 3: Push Back with Specificity

Vague critique → vague revision.

Specific critique → actual improvement.

Problems with this draft:

1. Paragraph 3 claims "scholars have debated X" without specifics.
   Name the scholars, state the positions, cite the sources.

2. The transition from section 2 to 3 is asserted, not argued.
   What's the logical connection?

3. "It is important to note" appears 4 times.
   Cut all instances. Make direct claims.

4. You cite Martinez 2021 but this isn't in my sources.
   Remove or replace with provided sources only.

Revise addressing these specifically.

Step 4: Peer Review Cycles

Use a fresh session (new conversation) as critical reviewer. The session that wrote the draft has seen all your context and reasoning. It's primed to defend its own output rather than critique it. A fresh session approaches the text without that history.

You are a critical peer reviewer. Evaluate this draft:

[Paste draft]

Assess:
- Argument structure and logical gaps
- Evidence quality and support for claims
- Actual contribution vs. restatement
- Weaknesses a skeptical reviewer would attack

Be harsh. I need criticism, not encouragement.

Tip

Many academic journals publish their reviewer guidelines online. Feed these criteria into your review prompt for discipline-appropriate feedback.

Save feedback in /feedback → feed back to writing session → iterate 2-3 times.
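
A sketch of the hand-back prompt (the file name and point numbers are illustrative):

Here is peer-review feedback on the current draft (from feedback/review-1.md):

[Paste feedback]

Revise the draft to address points 1, 2, and 4 specifically.
I disagree with point 3: keep the current framing, but strengthen its justification.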

Step 5: Human Editing

Take over. This is where your intellectual fingerprint goes.

Questions to ask yourself:

  • Would I defend this claim in a seminar?
  • Is this MY argument or generic synthesis?
  • Can I explain this without notes?

Step 6: Source Verification

Check every citation against actual sources. AI confidently generates plausible-sounding but wrong references. You should have caught major issues in earlier verification checkpoints, but do a final systematic check.

Create feedback/verification-log.md:

## [Source]
- Claim in draft: "..."
- Actual text: "..." (p. X)
- Status: ✓ Verified / ✗ Remove / ~ Revise

Domain Adaptation

The workflow is universal. What changes is what fills the context documents.

What data.md contains, by research type:

  • Experimental: Protocols, datasets, variables, statistical approach
  • Theoretical: Frameworks, prior results, formal setup
  • Qualitative: Coding schemes, source descriptions, methodology, positionality
  • Interpretive/Hermeneutic: Primary sources, archival materials, interpretive framework. Treat design.md as provisional; you may rewrite it after seeing the first draft, and that's the method, not a failure.
  • Mixed Methods: Combine relevant elements from the above

Note on Sensitive Data

Context documents describe structure and approach. Never paste raw participant data, proprietary datasets, or confidential materials into AI.
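
For instance, a data.md for an interview study might describe the material without containing any of it (all details below are placeholders):

# data.md
Corpus: 18 semi-structured interviews, 45-70 minutes each, transcribed and anonymized
Access: transcripts stay outside the vault; only the coding scheme and aggregate observations are summarized here
Coding: [scheme], two coders, disagreements resolved by discussion
Limitations: single institution, self-selected participants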

Effective Critique

The skill that most improves AI output is specific, substantive pushback.

Weak → Strong:

  • "Make it better" → "Claim X in paragraph 3 has no supporting evidence. Add citation or remove."
  • "This isn't quite right" → "The logical structure is: A → B → C. But B → C isn't argued, only asserted."
  • "More academic" → "This uses positivist framing. My approach is interpretivist. Reframe around meaning-making."

Domain-Specific Examples

  • Quantitative: "You describe 'significant results' without specifying significance level, test used, or effect size."
  • Experimental: "Methods must follow CONSORT structure. Reorganize into: participants, interventions, outcomes."
  • Theoretical: "The proof sketch in section 3 skips from step 2 to step 4. Make the intermediate step explicit."
  • Qualitative: "This reads as naive realism. Add reflexivity statement addressing researcher positionality."

The Journal

journal.md serves three functions:

  1. Process documentation: What you did, what worked, what didn't
  2. Knowledge capture: Insights that don't fit in context docs but matter
  3. Future reference: Reusable patterns for similar projects

Entry Format

## YYYY-MM-DD

### Did
- [Actions taken]

### Learned
- [Insights, surprises, dead ends]

### Next
- [What follows from today]

Final Entry

## YYYY-MM-DD (Complete)

### What worked
### What I'd change
### Time spent
### Reusable for future

Common Problems

AI keeps producing generic output. Your context is too vague or too long. Distill harder. Be more specific in requirements.

AI ignores my theoretical framework. State it explicitly in prompts: "My approach is [X]. Do not use [Y] framing."

AI hallucinates citations. Constrain: "Only cite from these sources: [list]." Verify everything anyway.

Output improves then degrades. Context window filling up. Start fresh session with just context documents.

Workflow feels slow. For shorter papers, combine documents. Skip peer review cycles. The full workflow is for substantial research papers.

Quick Reference

The Four Phases

  1. Preparation — Gather materials, initial questions, start journal
  2. Exploration — Test ideas with AI, map possibilities, verify critical claims
  3. Distillation — Compress into context documents, verify sources
  4. Implementation — Draft → Push back → Review cycles → Verify → Human editing

Context Documents

  • research.md — Framework + sources
  • data.md — What you're working with
  • requirements.md — Your questions
  • design.md — Structure + venue

Core Principles

  • Your thinking first
  • Compress, don't dump
  • Critique specifically
  • Verify throughout (not just at the end)
  • Journal the process

Note on Tools

This guide is written for Claude Code and Obsidian as they exist in early 2026. Tools evolve. New AI assistants will emerge, interfaces will change, capabilities will expand.

The methodology is more durable:

  • Context engineering (compress, don't dump)
  • Iterative critique (push back with specificity)
  • Verification throughout (don't trust, verify)
  • Process documentation (journal your work)

If you're reading this with different tools, the principles still apply. Adapt the specific instructions to whatever AI and knowledge management system you're using.