Writing Analysis That Actually Explains Itself: The Power of Narrative Code

I was reviewing a colleague’s analysis last month, and I found myself staring at a folder containing seven R scripts, an Excel file with “final_results” in the filename, and a PDF report that didn’t mention where any of the numbers came from. It took me two days to piece together what they’d actually done. There has to be a better way.

There is. It’s called literate programming, and it’s transformed how I create data analysis that people can actually understand and trust.

What If Your Code Told Its Own Story?

Literate programming is essentially writing code and explanation together in the same document. Think of it as creating a lab notebook where your experiments can actually run themselves. Instead of writing code in one place and explanations in another, you weave them together into a single narrative.

Here’s why this matters: analysis isn’t just about getting the right answer—it’s about being able to explain how you got there, and why anyone should believe you.

How It Works in Practice

Let me show you what this looks like with a real example. Suppose we’re analyzing customer churn:

````markdown
# Understanding Customer Churn Patterns

## The Business Problem

Our customer success team has noticed an increase in cancellations over the past quarter. This analysis examines which customers are most likely to leave and why.

We'll be working with data from January through March 2025, focusing on subscription-based customers.

## Loading and Preparing Our Data

First, let's load the necessary packages and data:

```{r setup}
library(tidyverse)
library(lubridate)
library(ggplot2)

# Load the customer dataset
customer_data <- read_csv("data/active_customers_2025.csv")
```

The dataset contains `r nrow(customer_data)` active customers as of January 1st, 2025.

## Exploring Customer Characteristics

Let's understand our customer base by looking at their subscription patterns:

```{r customer-profile}
subscription_summary <- customer_data %>%
  summarise(
    avg_tenure_months = mean(account_age_months),
    avg_monthly_spend = mean(monthly_revenue),
    premium_rate = mean(plan_type == "premium")
  )

print(subscription_summary)
```

Our average customer has been with us for `r round(subscription_summary$avg_tenure_months, 1)` months and spends about $`r round(subscription_summary$avg_monthly_spend, 2)` per month. `r round(subscription_summary$premium_rate * 100, 1)`% of customers are on premium plans.
````

See what’s happening here? The code and the explanation live together. When someone reads this, they understand not just what we’re doing, but why we’re doing it.

Beyond Static Reports: Documents That Update Themselves

The magic happens when your data changes. Traditional reports become outdated the moment new data arrives. With literate programming, your documents stay current.
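
As a rough sketch of what "staying current" looks like: re-rendering the document is a single function call, and every chunk and inline `r ...` expression is recomputed against whatever data is currently on disk. The file names here are hypothetical placeholders:

```r
# Re-render the churn analysis after the underlying CSV has been refreshed.
# "churn_analysis.Rmd" and the output file name are placeholders; every
# chunk and inline expression is re-run against the new data.
library(rmarkdown)

render(
  input       = "churn_analysis.Rmd",
  output_file = "churn_analysis_report.html"
)
```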

Parameterized Reports for Different Audiences

Imagine you need to create the same analysis for different departments:

````markdown
---
title: "Department Performance Review"
params:
  department: "Marketing"
  fiscal_quarter: "Q2"
---

# Performance Analysis for `r params$department`

This report examines `r params$department`'s performance during `r params$fiscal_quarter` 2025.

```{r load-department-data}
dept_data <- read_csv("data/performance_2025.csv") %>%
  filter(department == params$department,
         quarter == params$fiscal_quarter)
```

We're analyzing `r nrow(dept_data)` initiatives from the `r params$department` team.
````

Now you can generate customized reports for each department without changing the code. Run it once for Marketing, once for Sales, once for Engineering—the structure stays the same, but the content adapts.
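
A minimal sketch of that batch run, assuming the parameterized document above is saved as `department_review.Rmd` (a hypothetical file name):

```r
# Render one report per department; the params list overrides the YAML defaults.
library(rmarkdown)

departments <- c("Marketing", "Sales", "Engineering")

for (dept in departments) {
  render(
    input       = "department_review.Rmd",
    params      = list(department = dept, fiscal_quarter = "Q2"),
    output_file = paste0("performance_review_", tolower(dept), "_Q2.html")
  )
}
```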

Building Trust Through Transparency

One of my clients once asked, “How do I know this number is right?” With traditional reports, that’s a hard question to answer. With literate programming, I can point to the exact code that generated every figure.

Showing Your Work

````markdown
## Calculating Customer Lifetime Value

There are different ways to calculate lifetime value. We're using the historical method because we have several years of data:

```{r lifetime-value}
calculate_ltv <- function(customer_df) {
  customer_df %>%
    group_by(customer_id) %>%
    summarise(
      total_revenue = sum(transaction_amount),
      total_months = n_distinct(month_year),
      ltv = total_revenue / total_months * 12  # Annualize
    ) %>%
    filter(total_months >= 3)  # Exclude customers with under 3 months of history
}

customer_ltv <- calculate_ltv(transaction_history)
```

The median customer lifetime value is $`r round(median(customer_ltv$ltv), 2)`. The top 10% of customers have LTV over $`r round(quantile(customer_ltv$ltv, 0.9), 2)`.

*Note: We're excluding customers with less than 3 months of history from this calculation, as their LTV estimates are less reliable.*
````

Now anyone questioning the results can see exactly how we calculated lifetime value. They might disagree with our method, but they can’t say we’re being opaque.

From Exploration to Publication

The same document can serve multiple purposes throughout your analysis:

  • Phase 1: Exploration
    You’re figuring things out, trying different approaches, making mistakes. The document becomes your thinking space.
  • Phase 2: Collaboration
    Share the document with teammates. They can see your reasoning, reproduce your results, and suggest improvements.
  • Phase 3: Presentation
    Render the same document as a beautiful HTML report, PDF, or even a slideshow for stakeholders (a YAML sketch of this follows the list).
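
One way to set that up (a sketch using standard R Markdown output formats, not something specific to this analysis) is to list every target format in the YAML header and then pass `output_format = "all"` to `rmarkdown::render()`:

````markdown
---
title: "Customer Retention Analysis"
output:
  html_document: default
  pdf_document: default
  ioslides_presentation: default
---
````

The prose and code stay identical; only the rendering target changes.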

Here’s how you might structure a complete analysis:

````markdown
# Customer Retention Analysis

## Executive Summary

[High-level findings for busy managers]

## Methodology

[How we approached the analysis]

## Detailed Analysis

[The meat of the work with code and explanation]

## Limitations

[What this analysis doesn't show, and why]

## Recommendations

[Actionable insights based on what we found]
````

Making It Work in Team Environments

Literate programming plays nicely with version control. Since everything is plain text, you can use Git to track changes, see who added what, and collaborate without overwriting each other’s work.

Handling Sensitive Information

Sometimes you can’t share the actual data, but you can still share the method:

````markdown
## Revenue Calculation Method

We calculate monthly recurring revenue as:

```{r revenue-method}
# Note: Actual revenue figures are confidential
calculate_mrr <- function(subscriptions) {
  subscriptions %>%
    group_by(month = floor_date(start_date, "month")) %>%
    summarise(
      new_mrr = sum(monthly_price[is_new_customer]),
      expansion_mrr = sum(monthly_price[upgrade_event]),
      churned_mrr = sum(monthly_price[cancel_event]),
      net_mrr = new_mrr + expansion_mrr - churned_mrr
    )
}
```

*For privacy reasons, the actual revenue figures are not included in this public version.*
````

Common Patterns That Work

After years of using this approach, I’ve found some patterns that make life easier:

  • The “One Big Idea Per Chunk” Rule
    Each code chunk should do one clear thing. If a chunk is getting long, break it up and explain the pieces (a short sketch of this follows the list).
  • Narrative Flow
    Structure your analysis like a story: introduction, methods, results, discussion. Guide your reader through your thinking.
  • Progressive Disclosure
    Start with high-level findings, then gradually reveal more detail for interested readers.
  • Honest Accounting
    Include the dead ends and failed approaches. They’re part of the research process and help others avoid the same mistakes.
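
Here's a small sketch of the first pattern: instead of one sprawling chunk, two named chunks each carry a single idea, with a sentence of narrative between them. The `signup_date` column and the tenure calculation are hypothetical, just to illustrate the shape:

````markdown
```{r clean-dates}
# One idea per chunk: standardise the (hypothetical) signup date column
customer_data <- customer_data %>%
  mutate(signup_date = as_date(signup_date))
```

Signup dates are now proper `Date` values, so we can derive tenure from them.

```{r compute-tenure}
# Next idea: tenure in whole months, using lubridate's interval arithmetic
customer_data <- customer_data %>%
  mutate(tenure_months = interval(signup_date, today()) %/% months(1))
```
````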

Conclusion: Analysis That Stands the Test of Time

I recently had to recreate an analysis I’d done three years earlier. Because I’d used literate programming, it took me about an hour instead of days. The document remembered what I’d done better than I did.

Literate programming isn’t about making pretty reports—it’s about creating analysis that’s:

  • Understandable by your future self and others
  • Reproducible with a single click
  • Maintainable when requirements change
  • Trustworthy because everything is transparent

The initial learning curve is real, but the payoff is enormous. Start with your next small analysis. Write it as if you’re explaining it to a colleague. Include the code alongside your thinking. You’ll be surprised at how much clearer your own thinking becomes when you have to explain it.

In a world drowning in data but starving for insight, the ability to create analysis that people can actually understand and trust might be your most valuable skill. Literate programming is how you build that skill into everything you do.
