A practical guide for the Data Analytics team on working smarter, moving faster, and building with AI as a core part of how we work — not an afterthought.
The quality of your output is directly proportional to the quality of your input. Vague prompts get vague results. Specific, context-rich prompts unlock genuinely useful work.
Documentation is agentic food. The better you document your code, processes, and data definitions, the more effectively an AI agent can help you — and your future self. Undocumented work is invisible to agents.
Instead of rewriting the same query 10 times or Googling the same thing repeatedly, build a conversation with an agent. Give it context once, then iterate. Let it remember what you told it.
For any repetitive task — table usage investigations, BQ audits, pipeline debugging, data validation — write a SKILL.md that defines the workflow step by step. Agents use these as playbooks and execute them consistently every time.
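As a sketch, a SKILL.md for a recurring table-usage investigation might look like this (the step details below are hypothetical, not an existing playbook):

```markdown
# SKILL: Table usage investigation

## When to use
A stakeholder asks whether a BQ table is still queried, and by whom.

## Steps
1. Pull the last 90 days of job history for the table from the BQ audit logs.
2. Summarize query counts by user and by calling service.
3. Flag any scheduled queries or dashboards that reference the table.
4. Draft a recommendation: keep, deprecate, or migrate.

## Output
A short summary with usage counts and a recommendation, posted to the request thread.
```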
Ship something real to stakeholders faster, then layer on quality. A working Streamlit prototype in 2 hours beats a perfect Tableau dashboard in 2 weeks. Get eyes on it. Validate the idea before investing in the execution.
Audit your own recent work. Which parts were repetitive? Which required no judgment? Which took hours but could be described as a clear set of steps? Those are your AI candidates.
For tasks outside your core skillset — infra setup, frontend code, data engineering — use AI to get 70% of the way there yourself, then bring in an expert. Don't wait for someone else to start. Show up with something already built.
Quarterly planning is a starting point, not a contract. With AI reducing execution time dramatically, the work that matters can change faster. Stay close to stakeholders. Re-prioritize as context shifts, not just at quarter boundaries.
This is one of our biggest advantages as a data team. We don't just consume data, we create and maintain curated datasets. When those datasets are built with agents in mind, they become tools that AI can reason over directly — not just sources that humans query.
Give columns descriptive names (billing_period_end_month, not bpe_mo) that agents can infer meaning from without needing to ask.
What we've always had
What we're building now
Faster delivery · Higher quality · More ambitious scope · Fewer blockers · Happier stakeholders
Streamlit Prototype + Production CF App
A full multi-page dashboard showing MoM usage trends, customer movement (expansion/contraction/churn), top accounts, and an overall portfolio view — for all 40+ variable billing products.
Built on top of the exact same data source stakeholders use for invoicing (billable_usage_monthly_by_customer), so numbers match perfectly.
Two versions: a Streamlit prototype for rapid stakeholder validation, and a production app being tested locally and eventually deployed on Cloudflare Workers + Pages — the same infrastructure Cloudflare sells to customers.
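The customer-movement classification behind a view like this can be sketched in a few lines of Python. The 10% threshold and the "new"/"stable"/"inactive" labels here are assumptions for illustration, not the dashboard's actual logic:

```python
# Sketch of MoM customer-movement classification (expansion / contraction /
# churn) over a usage table shaped like billable_usage_monthly_by_customer.
# Threshold and label names are assumptions, not the production rules.
def classify_movement(prev_usage, curr_usage, threshold=0.10):
    """Label one customer's month-over-month usage change."""
    if prev_usage > 0 and curr_usage == 0:
        return "churn"
    if prev_usage == 0:
        return "new" if curr_usage > 0 else "inactive"
    change = (curr_usage - prev_usage) / prev_usage
    if change > threshold:
        return "expansion"
    if change < -threshold:
        return "contraction"
    return "stable"
```

Running this per customer across consecutive billing months yields the expansion/contraction/churn buckets the dashboard aggregates.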
Cloudflare One · Developer Platform · Network Services · Variable Billings
A comprehensive, beautifully formatted HTML metrics reference document covering all key KPIs and metrics across Cloudflare One, Developer Platform, Network Services, and Variable Billings.
Each metric is documented with its definition, owner, data source, calculation methodology, and which function it serves — Product, CSM, Finance, or GTM.
Designed to be LLM-readable as well as human-readable — making it agentic food for any future AI-assisted analysis or investigation against these metrics.
→ View Metrics Definition Catalog
Coming soon
What problem will you solve with AI?
New data ingestion pipelines that took 4–8 hours now take 20–25 minutes with AI assistance
New product additions to the VB pipeline expected to drop from 2–5 days to under 1 day
Full Streamlit dashboard + production architecture planned and built in a single working session
Try OpenCode on a real task: ask it to help you write or debug a BQ query, investigate a table, or draft a wiki entry
Audit your last 5 tasks: mark each step as AI-automatable or human-essential. Share your findings with the team
Read the prompting guide: sunilpai.dev/posts/seven-ways — a 10-minute read, immediately applicable
Install the DIA skills repo: clone cloudflare/bi/skills and run bash install.sh — pre-built playbooks ready to use in OpenCode immediately
Write one SKILL.md: pick your most repetitive investigation or analysis pattern and document it as a step-by-step workflow — then contribute it back to the shared repo
Build a Streamlit prototype: take your next analytics request and build a quick prototype before going straight to Tableau
Add a comment block to your next query: describe what it does, what it reads, and what the output means — in plain English
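A hedged sketch of what that comment block might look like. The table name comes from our actual invoicing source; the column and product names are made up for illustration:

```python
# Hypothetical example of a documented BQ query: the header comment tells a
# reader (or an agent) what the query does, what it reads, and what it returns.
# Column and product names below are assumptions for illustration.
QUERY = """
-- What:   monthly billable usage per customer for one product line
-- Reads:  billable_usage_monthly_by_customer (same source invoicing uses)
-- Output: one row per (customer_id, usage_month) with total billable units
SELECT customer_id, usage_month, SUM(billable_units) AS total_units
FROM billable_usage_monthly_by_customer
WHERE product = 'example_product'
GROUP BY customer_id, usage_month
"""
```

Three plain-English lines like these are enough for an agent to pick the query up cold, months later, without re-deriving what it was for.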
Our superpower has always been business context — we know this data, these products, and these stakeholders better than anyone. AI gives us the execution speed to match that knowledge.
No ML background needed. Start with the first two, then go deeper as you build confidence.
The best practical read on how to work with coding agents. Written by Sunil Pai (Cloudflare engineer). Covers how to structure prompts as constraints + context + oracles + feedback loops rather than just instructions.
The most comprehensive free reference for prompt engineering techniques — from zero-shot and few-shot prompting to chain-of-thought, prompt chaining, and AI agents. Covers both theory and practical application.
Structured, beginner-friendly course covering prompt engineering fundamentals through to advanced techniques. Used by 3M+ learners. Has a specific track for boosting day-to-day efficiency — directly applicable to data analyst work.
From the Sunil Pai article above — a structured template for writing prompts that actually converge on correct results. Pins the goal, non-goals, constraints, repo anchors, prior art, and a definition of done. Turns "senior intuition" into something an agent can act on.
Official guidance from the team that builds Claude (the model powering OpenCode). Covers Claude-specific techniques including how to give clear instructions, use system prompts effectively, and structure multi-step tasks. Directly applicable to your OpenCode sessions.
The single most impactful habit: before starting any non-trivial task with an AI, write down your goal in one sentence, your constraints, what "done" looks like, and 2–3 examples of the output you want. This one habit alone will 3x the quality of your AI interactions.
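That habit can even be turned into a tiny helper: for example, a function that assembles the four pieces into a prompt skeleton. The field labels here are just one possible convention, not a standard format:

```python
# Minimal sketch of the pre-task habit above: write down the goal, the
# constraints, what "done" looks like, and example outputs before prompting.
# The section labels are our own convention, not a required format.
def build_prompt(goal, constraints, done, examples):
    parts = [f"Goal: {goal}", "Constraints:"]
    parts += [f"- {c}" for c in constraints]
    parts.append(f"Definition of done: {done}")
    parts.append("Examples of desired output:")
    parts += [f"- {e}" for e in examples]
    return "\n".join(parts)
```

Pasting the result at the top of an OpenCode session gives the agent the constraints and the definition of done it needs to converge, instead of guessing.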