Fluent, Not Native: Agentic Tools and Cross-Team Contribution

The ability to improve a design occurs primarily at the interfaces. This is also the prime location for screwing it up.

- Akin’s Laws of Spacecraft Design #15 (Shea’s Law)

Full stack engineers are supposed to be generalists, but at some point in the last few years "full stack" has quietly expanded to include territory that used to belong to dedicated data teams. Dashboards, reporting queries, data models, and BI tooling now fall increasingly within the product engineer's remit.

The data team still owns the warehouse and the pipelines. But product engineers regularly need to make changes that touch both sides: adding a metric, building a dashboard for a feature they've just shipped, debugging why a report shows unexpected values. The traditional options are either to learn the data stack properly (time-consuming, especially in a rapidly evolving product) or to file a ticket and wait (slow, especially on a lean team). Agentic tools offer a third path: assisted contribution that lets engineers work productively in data-adjacent territory without fully context-switching into a new discipline.

The goal isn't fluency indistinguishable from a native speaker. It's being conversational enough to get things done.

The friction at the boundary

A concrete scenario: an engineer ships a feature and needs to add tracking to an existing dashboard. The data's already flowing into the warehouse. Conceptually, the change is straightforward.

But the dashboard is defined in YAML via a sync process. There are conventions for how cards wire up to filters, entity ID formats to follow, parameter mappings to configure correctly. The SQL needs to follow patterns specific to the reporting schema. The engineer could learn all of this, but it’s tooling they’re not going to be using on a daily basis in an environment that’s changing rapidly. It’s a lot of time investment for an uncertain and intermittent reward.
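To make the shape of the problem concrete, a dashboard card in this kind of setup might look something like the sketch below. The schema, field names, entity ID format, and template-tag syntax here are all hypothetical, illustrating the sort of conventions an occasional contributor has to absorb rather than any specific BI tool's format:

```yaml
# Hypothetical dashboard-as-code card; every name and convention here
# is illustrative, not taken from a real tool's schema.
cards:
  - entity_id: card__feature_adoption_7d   # IDs must follow the repo's naming scheme
    title: "Feature adoption (7-day rolling)"
    dataset_query:
      type: native
      native:
        query: |
          SELECT activity_date, COUNT(DISTINCT user_id) AS active_users
          FROM reporting.fct_feature_events  -- reporting schema, not raw tables
          WHERE feature_name = {{feature}}
          GROUP BY activity_date
        template_tags:
          feature:
            type: text
            display_name: "Feature"
    parameter_mappings:                      # wires the dashboard-level filter to this card
      - parameter_id: filter__feature
        target: ["variable", ["template-tag", "feature"]]
```

None of this is conceptually hard, but every field has a convention behind it, and getting one wrong usually fails silently on the next sync rather than loudly at commit time.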

The alternative is to hand it to the data team. But they have their own priorities, and now there's coordination overhead: explaining the requirement, waiting for availability, reviewing something written by someone who doesn't have full context on the feature that motivated the change and isn’t in direct contact with the stakeholders who drive the requirement.

Neither option is particularly appealing. The first burns cognitive load on learning systems you'll rarely touch. The second introduces latency and the inevitable friction of handoffs.

This isn't an argument for reckless autonomy. Sometimes the right answer genuinely is to hand it off. If the change is architecturally significant or touches something fragile, you want the domain expert involved from the start. But plenty of contributions get stuck not because they're genuinely complex, but because the contributor lacks familiarity with the tooling and conventions of an adjacent domain. That's a different problem, and one where agentic tools can help.

Agentic tools as translation layers

The useful framing here is translation rather than replacement. Agentic tools aren't a substitute for the data engineer's expertise; they bridge the gap between "I know what I want to achieve" and "I know how to express that in this system's idioms."

Dashboard contributions are a clear example. The product engineer understands the business logic and the underlying data, with the depth of understanding that only comes from having built the feature in the first place. What they don't have memorised is the YAML schema, the naming conventions, or how filter mappings need to be structured. An agentic tool can scaffold that translation while the engineer focuses on the semantics of what they're trying to display.

Reporting queries follow a similar pattern. Writing SQL against a well-modelled warehouse isn't conceptually difficult, but knowing which tables to join, what the naming conventions are, and where the edge cases live takes time to absorb. Pattern-matching against existing queries, surfacing relevant schema information, suggesting approaches based on similar reports—these are tasks where agentic tools can meaningfully accelerate the work.

Data model exploration is often the real bottleneck. Understanding what's actually in the warehouse—what fields exist, how they relate, what's been deprecated—typically requires either reading documentation (if it exists and is current) or interrupting someone on the data team for orientation. Agentic tools can compress that exploration significantly.
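As a rough sketch of what that compression looks like in practice, here is the kind of metadata query an agentic tool might run on an engineer's behalf. The `reporting` schema name is an assumption; `information_schema` itself is ANSI-standard and available on most warehouses, though dialects differ in casing and quoting details:

```python
def schema_overview_query(schema: str) -> str:
    """Build a query listing every table and column in the given schema,
    in declaration order. In production this should be a parameterised
    query rather than string interpolation; interpolation is used here
    only to keep the sketch self-contained."""
    return (
        "SELECT table_name, column_name, data_type, is_nullable\n"
        "FROM information_schema.columns\n"
        f"WHERE table_schema = '{schema}'\n"
        "ORDER BY table_name, ordinal_position"
    )

# A real tool would execute this through a granted warehouse connection;
# here we only construct the SQL it would run.
sql = schema_overview_query("reporting")
```

The point isn't that this query is clever; it's that an engineer shouldn't need to know it exists. The tool runs it, cross-references the results against the queries already in the repo, and answers "which table has the thing I need?" directly.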

None of this works by magic. There are prerequisites.

The existing codebase needs reasonable conventions and patterns to match against. If the current SQL is an inconsistent mess, the tool will confidently reproduce that inconsistency. Structure in, structure out; garbage in, garbage out applies just as strongly.

There need to be explicit boundaries on what's appropriate for assisted contribution versus what needs data team involvement from the start. There are a lot of use cases where slow and steady really does win the race.

As with all tool-assisted development, the review process is absolutely key. The data team's role shifts from "do all the work" to "review contributions and catch domain-specific errors." This is a different skill, and arguably a better use of their time, but it requires a lot of focus, and detailed review in volume becomes a specialist skill in its own right. If review becomes a bottleneck, you've just moved the ticketing process a couple of stages down the pipeline.

Insoluble problems

Agentic tools can help with expressing intent in unfamiliar systems. They're considerably less useful for forming that intent in the first place. Forming it is its own specialist skillset: choosing which metrics actually matter to the business, understanding why the data model is structured as it is, debugging subtle issues where values seem wrong for non-obvious reasons. All of these require domain knowledge and continuous stakeholder interaction that the tool can't provide.

Architectural decisions about the data stack remain firmly in data team territory. The questions of whether to restructure a fact table, how to handle slowly changing dimensions, or when to materialise a view aren't things you want someone to stumble through with AI assistance. The goal is unblocking routine contributions, not dissolving the boundary between teams entirely.

Where does this leave us?

Agentic tools are genuinely useful when they reduce friction for contributions that would otherwise be blocked by knowledge gaps in adjacent domains. They're a wonderful facilitator for cross-team collaboration, not a magic bullet that eliminates the need for expertise.

The interesting question isn't whether these tools replace specialists, but what effect they have on how specialists spend their time. Ideally, more time is spent on architectural thinking, detailed review, and knowledge transfer. These are areas of work that exert more leverage on the engineering division's overall output.
