DIA · DataSpark Squad · March 2026

The AI-First Playbook

A practical guide for the Data Analytics team on working smarter, moving faster, and building with AI as a core part of how we work — not an afterthought.

Nihit Prakash · Manager, Data Analytics · Data Intelligence & Analytics
🧠

This is a mindset shift, not a tool adoption

The goal isn't to use AI tools for the sake of it. It's to rewire how we work — so that reaching for an agent is as natural as opening a SQL editor. The teams that do this well won't just be faster. They'll do work that wasn't possible before.

Core Principles
9 Ways to Work AI-First
These aren't rules. They're habits. Start with one or two and build from there.
Principle 01
✍️

Master Prompt Engineering

The quality of your output is directly proportional to the quality of your input. Vague prompts get vague results. Specific, context-rich prompts unlock genuinely useful work.

Try this: Include your goal, the data source, the format you want, and any constraints — all in one prompt.
📖 Seven Ways to Think About Prompting → 📖 Prompt Engineering Guide →
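A goal + source + format + constraints prompt can be as simple as an assembled string. A minimal sketch — the table name, metric, and wording here are illustrative, not a prompt you must copy:

```python
# Assemble a context-rich prompt: goal, data source, output format, constraints.
# All values below are hypothetical examples for a typical VB request.
goal = "Find the top 10 accounts by MoM usage growth for February 2026"
source = ("BigQuery table analytics.billable_usage_monthly_by_customer "
          "(one row per account per billing month)")
fmt = "a single SQL query plus a one-paragraph summary of the result"
constraints = "billed usage only, exclude internal test accounts, growth as a percentage"

prompt = (
    f"Goal: {goal}\n"
    f"Data source: {source}\n"
    f"Output format: {fmt}\n"
    f"Constraints: {constraints}"
)
print(prompt)
```

One string, four labeled pieces — the agent never has to guess what "done" means.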
Principle 02
📚

Write Obsessively Good Documentation

Documentation is agentic food. The better you document your code, processes, and data definitions, the more effectively an AI agent can help you — and your future self. Undocumented work is invisible to agents.

Try this: Every time you write a BQ query, add a comment block explaining what it does, what table it reads, and what the output means.
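A comment block of that shape might look like this — table, column, and filter values are illustrative:

```sql
-- What:   Monthly billed usage per account for one VB product, feeding the MoM trend view.
-- Reads:  analytics.billable_usage_monthly_by_customer (one row per account per billing month)
-- Output: account_id, billing_month, billed_usage; one row per account-month with non-zero usage
SELECT
  account_id,
  billing_month,
  SUM(billed_usage) AS billed_usage
FROM analytics.billable_usage_monthly_by_customer
WHERE product = 'workers'
GROUP BY account_id, billing_month
HAVING SUM(billed_usage) > 0;
```

Three comment lines, and both your future self and an agent know the what, the source, and the grain without opening a wiki.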
Principle 03
🗣️

Converse, Don't Just Query

Instead of rewriting the same query 10 times or Googling the same thing repeatedly, build a conversation with an agent. Give it context once, then iterate. Let it remember what you told it.

Try this: Next time you need to investigate a BQ table or debug a pipeline, open OpenCode and walk through it conversationally rather than jumping straight to SQL.
Principle 04
📋

Create SKILL.md Files

For any repetitive task — table usage investigations, BQ audits, pipeline debugging, data validation — write a SKILL.md that defines the workflow step by step. Agents use these as playbooks and execute them consistently every time.

Try this: Write a SKILL.md for your most common VB investigation pattern. You'll never have to re-explain it to an agent again.
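A minimal SKILL.md skeleton for a table-usage investigation might look like this — the steps, thresholds, and names are illustrative; adapt them to your own workflow:

```markdown
# SKILL: Table usage investigation

## When to use
A stakeholder asks "who uses table X, and can we deprecate it?"

## Steps
1. Pull the last 90 days of query history for the table from INFORMATION_SCHEMA.JOBS.
2. Group by user / service account; flag anything averaging more than 10 queries per week.
3. Search known repos for hard-coded references to the table name.
4. Summarize: active consumers, last-read date, safe-to-deprecate verdict.

## Output format
A short wiki entry: consumers table + recommendation + evidence links.
```

Once this exists, "investigate table X" becomes a one-line request instead of a re-explained workflow.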
Principle 05
🏃

v1 Doesn't Need to Be Perfect

Ship something real to stakeholders faster, then layer on quality. A working Streamlit prototype in 2 hours beats a perfect Tableau dashboard in 2 weeks. Get eyes on it. Validate the idea before investing in the execution.

Try this: On your next analytics request, ask "what's the fastest version I could put in front of a stakeholder today?" — then build that first.
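The fast version often starts as a few lines of pandas before any dashboarding tool is involved. A sketch of the core MoM logic — the data here is made up:

```python
import pandas as pd

# Hypothetical monthly usage at the same grain as the VB source table:
# one row per account per billing month.
usage = pd.DataFrame({
    "account_id": ["a1", "a1", "a2", "a2"],
    "month":      ["2026-01", "2026-02", "2026-01", "2026-02"],
    "usage":      [100.0, 150.0, 80.0, 40.0],
})

# Month-over-month change per account — the heart of a quick trend prototype.
usage = usage.sort_values(["account_id", "month"])
usage["mom_change"] = usage.groupby("account_id")["usage"].pct_change()

print(usage)
```

Wrap a table like this in a couple of Streamlit calls and you have something a stakeholder can react to today.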
Principle 06
🔍

Look Back at Your Last 5 Tasks

Audit your own recent work. Which parts were repetitive? Which required no judgment? Which took hours but could be described as a clear set of steps? Those are your AI candidates.

Try this: Block 20 minutes this week to list your last 5 tasks and mark each step as "AI can do this" or "needs human judgment."
Principle 07
🤝

Be a Contributor, Not a Blocker

For tasks outside your core skillset — infra setup, frontend code, data engineering — use AI to get 70% of the way there yourself, then bring in an expert. Don't wait for someone else to start. Show up with something already built.

Try this: Next time you need a pipeline change outside your area, draft the code with AI first before raising a ticket.
Principle 08
🔄

Prioritize Dynamically

Quarterly planning is a starting point, not a contract. With AI reducing execution time dramatically, the work that matters can change faster. Stay close to stakeholders. Re-prioritize as context shifts, not just at quarter boundaries.

Try this: At the start of each week, ask: "Given what changed this week, is what I'm working on still the highest value thing I could be doing?"
Principle 09
🗄️

Curate Data for Agents, Not Just Humans

This is one of our biggest advantages as a data team. We don't just consume data, we create and maintain curated datasets. When those datasets are built with agents in mind, they become tools that AI can reason over directly — not just sources that humans query.

What does this look like in practice? ▼ click to expand
  • Column descriptions in BigQuery — every table and column has a plain-English description of what it means, its unit, and any caveats. An agent reading the schema knows exactly what it's working with.
  • Documented grain and lineage — a table that says "one row per account per billing month" is instantly usable by an agent. A table with no grain definition is a guessing game.
  • Curated metric tables — instead of making every consumer re-derive the same metric from raw data, we maintain a single source of truth with the logic already applied and documented.
  • LLM-readable reference documents — like the Metrics Definition Catalog we built: a structured HTML document that both a human stakeholder and an AI agent can read to understand what a metric means, how it's calculated, and who owns it.
  • Consistent naming conventions — tables and columns named with clear, predictable patterns (e.g. billing_period_end_month not bpe_mo) that agents can infer meaning from without needing to ask.
Try this: Pick a BQ table you know well and have a conversation with OpenCode about it — ask it to explain the data, find trends, or investigate an anomaly. Notice how much it already understands from the metadata. Then pick a table you've never worked with before and do the same. See how much you can learn about an unfamiliar dataset just by conversing with an agent rather than spending hours exploring it manually.
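Adding a column description in BigQuery is a single DDL statement. A sketch — the dataset, table, and wording are illustrative:

```sql
ALTER TABLE analytics.billable_usage_monthly_by_customer
  ALTER COLUMN billing_period_end_month
  SET OPTIONS (
    description = 'Last month of the billing period, as a DATE set to the first day of that month. One row per account per billing month; excludes internal test accounts.'
  );
```

One statement per column, and every agent that reads the schema inherits the definition for free.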
Two Superpowers. One Team.
We sit at the intersection of deep business context and emerging AI capability. Neither alone is enough. Together, they're unstoppable.
🏢

Business Context

What we've always had

  • Variable billing definitions and edge cases
  • Cloudflare One and Dev Platform product knowledge
  • Stakeholder relationships and trust
  • Revenue Operations and audit requirements
  • How the data actually flows end-to-end
+
🤖

AI Agents

What we're building now

  • Code generation and pipeline building
  • Table usage investigation (SKILL.md)
  • Rapid prototyping and iteration
  • Documentation and context retention
  • Cross-repo search and analysis

🚀 The Result: Work that wasn't possible before

Faster delivery · Higher quality · More ambitious scope · Fewer blockers · Happier stakeholders

What's Possible — Real Examples
Built with OpenCode in a Single Session
These two artifacts were built in one day working with OpenCode. This is not to show off — it's to show you what becomes possible when you start working this way.
📊

Variable Billing Dashboard

Streamlit Prototype + Production CF App

A full multi-page dashboard showing MoM usage trends, customer movement (expansion/contraction/churn), top accounts, and an overall portfolio view — for all 40+ variable billing products.

Built on top of the exact same data source stakeholders use for invoicing (billable_usage_monthly_by_customer), so numbers match perfectly.

Two versions: a Streamlit prototype for rapid stakeholder validation, and a production app being tested locally and eventually deployed on Cloudflare Workers + Pages — the same infrastructure Cloudflare sells to customers.

Streamlit Cloudflare Workers Cloudflare Pages BigQuery R2 Iceberg
📐

Metrics Definition Catalog

Cloudflare One · Developer Platform · Network Services · Variable Billings

A comprehensive, beautifully formatted HTML metrics reference document covering all key KPIs and metrics across Cloudflare One, Developer Platform, Network Services, and Variable Billings.

Each metric is documented with its definition, owner, data source, calculation methodology, and which function it serves — Product, CSM, Finance, or GTM.

Designed to be LLM-readable as well as human-readable — making it agentic food for any future AI-assisted analysis or investigation against these metrics.

→ View Metrics Definition Catalog
Metrics Reference Cloudflare One Developer Platform Variable Billings LLM-Ready
Your Turn
What Will You Build?
The tiles above are just the start. Add your wins here as you build with AI.
?

Your Next Build

Coming soon

What problem will you solve with AI?

?

Your Next Build

Coming soon

What problem will you solve with AI?

?

Your Next Build

Coming soon

What problem will you solve with AI?

?

Your Next Build

Coming soon

What problem will you solve with AI?

The Numbers
What Changes When You Work AI-First
Based on real numbers from our team's experience.
5x
Faster Pipeline Creation

New data ingestion pipelines that took 4–8 hours now take 20–25 minutes with AI assistance

1 day
VB Product Onboarding

New product additions to the VB pipeline expected to drop from 2–5 days to under 1 day

1 session
Prototype to Production Plan

Full Streamlit dashboard + production architecture planned and built in a single working session

Start Here This Week
Pick the ones that feel most relevant to your current work. You don't need to do all of them at once.

Try OpenCode on a real task
Ask it to help you write or debug a BQ query, investigate a table, or draft a wiki entry

Audit your last 5 tasks
Mark each step as AI-automatable or human-essential. Share your findings with the team

Read the prompting guide
sunilpai.dev/posts/seven-ways — 10 minute read, immediately applicable

Write one SKILL.md
Pick your most repetitive investigation or analysis pattern and document it as a step-by-step workflow

Build a Streamlit prototype
Take your next analytics request and build a quick prototype before going straight to Tableau

Add a comment block to your next query
Describe what it does, what it reads, and what the output means — in plain English

Our superpower has always been business context — we know this data, these products, and these stakeholders better than anyone. AI gives us the execution speed to match that knowledge.
Prompt Engineering Resources
Curated for data analysts — click to expand

No ML background needed. Start with the first two, then go deeper as you build confidence.

🔗

Seven Ways to Think About Prompting

The best practical read on how to work with coding agents. Written by Sunil Pai (Cloudflare engineer). Covers how to structure prompts as constraints + context + oracles + feedback loops rather than just instructions.

Key insight: "Agents make code cheaper. They do not make judgment cheap." — the scarce skill is expressing constraints and designing feedback loops, not magic words.
→ sunilpai.dev/posts/seven-ways
📘

Prompt Engineering Guide (DAIR.AI)

The most comprehensive free reference for prompt engineering techniques — from zero-shot and few-shot prompting to chain-of-thought, prompt chaining, and AI agents. Covers both theory and practical application.

Start with: Basics of Prompting → General Tips → Chain-of-Thought. Skip the academic sections until you need them.
→ promptingguide.ai
🎓

Learn Prompting (Free Course)

Structured, beginner-friendly course covering prompt engineering fundamentals through to advanced techniques. Used by 3M+ learners. Has a specific track for boosting day-to-day efficiency — directly applicable to data analyst work.

Recommended track: "Boost Your Day-to-Day Efficiency With Generative AI" — practical and immediately applicable without a technical background.
→ learnprompting.org
📋

The Context Packet Template

From the Sunil Pai article above — a structured template for writing prompts that actually converge on correct results. Pins the goal, non-goals, constraints, repo anchors, prior art, and a definition of done. Turns "senior intuition" into something an agent can act on.

For data work, adapt it to: goal (what metric/insight), non-goals (what to exclude), constraints (BQ tables, date range, units), oracle (expected row counts or values to validate against).
→ Context Packet Template
🤖

Anthropic's Prompt Engineering Docs

Official guidance from the team that builds Claude (the model powering OpenCode). Covers Claude-specific techniques including how to give clear instructions, use system prompts effectively, and structure multi-step tasks. Directly applicable to your OpenCode sessions.

Most useful section: "Be clear and direct" — simple guidance that improves every prompt immediately.
→ docs.anthropic.com/prompt-engineering
💬

The "Context Packet" in Practice

The single most impactful habit: before starting any non-trivial task with an AI, write down your goal in one sentence, your constraints, what "done" looks like, and 2–3 examples of the output you want. This one habit alone will dramatically improve the quality of your AI interactions.

Template:
Goal: [one sentence]
Constraints: [what must be true]
Not this: [what to avoid]
Done when: [how you'll verify]
Example output: [show don't tell]
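Filled in for a typical data request, the template might read like this — every value below is an illustrative example, not a real task:

```text
Goal: Top 10 accounts by MoM usage growth for Workers, Feb 2026.
Constraints: billable_usage_monthly_by_customer only; billed usage, not raw; growth as %.
Not this: no internal/test accounts; don't re-derive usage from raw events.
Done when: exactly 10 rows, and totals reconcile with the invoicing view.
Example output: account_id | jan_usage | feb_usage | growth_pct
```

Five lines of setup that turn a vague request into something an agent can verify itself against.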