The team assigned to your AI project will make or break the engagement. Not the agency's reputation, not their sales pitch, not their fancy client list — the specific people who show up to the kickoff meeting and stay through delivery. Understanding the roles, their real responsibilities, and what strong vs. weak looks like in each position will help you evaluate agencies and manage engagements effectively.
Here's how AI agency teams are typically structured, what each role actually does, and what it costs.
The Core Roles
Project Manager (PM)
What they do: The PM owns the engagement operationally. They manage the timeline, facilitate communication between your team and the agency's technical team, run status meetings, track risks, coordinate dependencies, and generally keep things from falling apart when complexity increases.
A strong PM is the early-warning system for a project in trouble. They know the project well enough to spot when the technical team's estimates are slipping, when requirements are creeping, and when client-side blockers are about to derail a milestone. They translate between business language and technical language in both directions.
What weak looks like: A PM who is primarily a meeting scheduler and status-report writer. If the PM can't tell you, in technical terms, what the team is building and why, they're not adding value — they're adding overhead. Also watch for PMs who are conflict-averse; the PM's job sometimes requires telling you that a request is out of scope or that your data isn't ready for what you want to do.
What it costs: $80–$150/hour at mid-size agencies. Often allocated at 25–50% of their time to your engagement rather than full-time unless the project is large.
Do you need them?: Yes, always. Self-managed technical engagements without a PM usually drift. Even on smaller projects, having a dedicated coordination point on the agency side is worth the cost.
---
ML Engineer / AI Engineer
What they do: The hands-on builders. ML engineers design and implement the AI systems — training data pipelines, model training and fine-tuning, evaluation frameworks, inference infrastructure, and production deployment. On most AI projects, this is where the majority of the actual work happens.
The distinction between "ML engineer" and "data scientist" is meaningful: ML engineers are primarily focused on system design, engineering quality, and production deployment. Data scientists are focused on analysis, experimentation, and statistical methodology. You need both for most projects of substance.
Strong ML engineers write code that other engineers can maintain. They document their decisions, build evaluation harnesses that let you know when the model is degrading, and think about operational concerns from the beginning. They don't just get a model working in a Jupyter notebook — they build a system.
What weak looks like: Engineers who can demonstrate impressive demos but can't ship production code. The gap between a research-quality AI proof of concept and a production AI system is enormous, and agencies that specialize in demos rather than deployment are a common failure mode. Ask specifically: "How many of the systems your team has built are still running in production today?"
What it costs: $150–$300/hour for senior ML engineers. This is the single most expensive line item in most AI agency engagements. A 3-person engineering team billing roughly 20 hours per person per week (a half-time allocation) for 12 weeks at a $200/hour blended average runs $144,000 in engineering time alone, before PM, data science, or overhead costs.
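A quick sanity check on that figure, as a sketch. The ~20 billable hours per engineer per week is an assumption made to match the quoted number; a full-time 40-hour billing week doubles the total:

```python
# Sanity-check the engineering-cost figure quoted above.
# Assumption: each engineer bills ~20 hours/week (a half-time
# allocation); a full-time 40-hour billing week doubles the total.

def engineering_cost(engineers, weeks, hours_per_week, rate):
    """Total billed engineering cost over an engagement."""
    return engineers * weeks * hours_per_week * rate

# 3 engineers, 12 weeks, ~20 billable hrs/week, $200/hr average
print(engineering_cost(3, 12, 20, 200))  # 144000
# Same team at a full-time 40-hour billing week:
print(engineering_cost(3, 12, 40, 200))  # 288000
```

The useful takeaway is less the exact total than the sensitivity: doubling billable hours or headcount doubles the line item, so pin down allocation percentages in the SOW.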
Do you need them?: Yes. Everything else is support staff for the ML engineers. This is who builds the thing.
---
Data Scientist
What they do: Data scientists own the analytical and methodological side of AI projects. They explore and understand the data, design the modeling approach, run experiments, evaluate model performance, and translate business questions into statistical specifications.
In practice, the line between data scientist and ML engineer blurs at many agencies. Smaller agencies often have people who do both. At larger agencies, the roles are more distinct: data scientists do exploratory data analysis and modeling strategy; ML engineers implement the production system.
Strong data scientists know what they can and can't learn from data. They push back on modeling requests that aren't statistically supportable. They communicate uncertainty — "this model predicts X with 85% precision but we're less confident about edge cases in category Y" — rather than overselling results.
What weak looks like: Data scientists who focus on maximizing a metric on a test set rather than ensuring the model performs well on real-world production data. Test-set optimization is a trap. The goal is a system that performs well on the data it will actually encounter, which is often quite different from the training distribution.
What it costs: $120–$250/hour depending on seniority and specialty. Often allocated at 50–100% of their time depending on project phase — heavily front-loaded during data exploration and modeling, lighter during deployment.
Do you need them?: Yes for any project involving custom model development. For purely integration-focused projects (connecting to an existing LLM API with minimal custom modeling), you might get by without dedicated data science resources — but be explicit with the agency about whether the project truly requires custom modeling.
---
Data Engineer
What they do: Data engineers build and maintain the data infrastructure that AI systems depend on. This includes data extraction pipelines (pulling data from your CRM, ERP, databases, or APIs), transformation logic, storage architecture, data quality validation, and the pipelines that feed training data to models and operational data to production systems.
This role is frequently underestimated and often the bottleneck in AI projects. The phrase "our data is a mess" is a data engineering problem before it's an AI problem. Projects that underinvest in data engineering end up with ML engineers spending 60% of their time cleaning and wrangling data — work that's both inefficient and expensive when done at ML engineer rates.
What weak looks like: Treating data engineering as an afterthought. Agencies that don't have a dedicated data engineering capability (or don't scope data engineering work explicitly) are likely to encounter painful surprises when they actually engage with your data.
What it costs: $100–$200/hour. Often the most cost-efficient role to invest in heavily — good data engineering makes the ML work faster and more reliable.
Do you need them?: Yes, if your data requires extraction, transformation, or cleanup before it can be used. That's most projects. The question is whether this is a standalone role or whether your ML engineers cover it.
---
Solution Architect
What they do: Solution architects own the technical design of the system as a whole — how the AI components fit into your existing technology stack, how data flows through the system, what the deployment architecture looks like, and how the system scales and fails gracefully. This is a senior role that requires both deep AI technical knowledge and broad systems engineering experience.
On smaller projects, the ML engineer or technical lead often doubles as the solution architect. On larger or more complex projects — especially those with complex integration requirements — this is a distinct role that's worth the investment.
What weak looks like: Solution architects who design for technical elegance rather than practical deployability. A beautifully designed AI system that requires infrastructure your team can't support is useless. Strong solution architects design for the client's actual operational environment, not for what they'd build if they were starting fresh.
What it costs: $200–$400/hour. Usually engaged at lower time allocation (25–50%) for most projects.
Do you need them?: Yes for complex projects with multiple integration points, compliance requirements, or significant scalability needs. Optional for simple, well-bounded projects.
---
Domain Expert / Industry Specialist
What they do: Domain experts provide subject matter knowledge in the industry or use case the AI system is addressing. They understand why certain patterns in the data matter, what edge cases are likely in production, and whether the model's outputs make sense from a domain perspective.
Some agencies have staff domain experts for their target verticals (healthcare AI agencies with clinical informatics specialists, legal AI agencies with former attorneys). Others source this expertise through the client or through contracted specialists.
What weak looks like: Agencies that skip domain expertise and assume their ML engineers can figure it out. In regulated, high-stakes domains like healthcare, finance, and law, domain knowledge is not optional — it determines whether the system behaves safely and correctly in real-world conditions.
What it costs: Highly variable. Staff domain experts may be included in agency rates. Contracted specialists run $200–$500/hour.
Do you need them?: Yes if your domain is specialized. Absolutely required for healthcare, legal, financial services, and any application where errors have serious consequences. Nice-to-have for general business applications.
---
What You Actually Need vs. Nice-to-Have
For a typical mid-size AI project ($30K–$100K), the minimum viable team is:
Required: 1 PM (half-time), 1–2 ML engineers (full-time), 1 data scientist (half- to full-time), 1 data engineer (half-time).
Add when relevant: Solution architect (complex integrations), domain expert (specialized industries), DevOps/MLOps engineer (complex deployment, ongoing monitoring requirements).
Teams larger than this on sub-$100K projects should raise questions about efficiency. A 10-person team on a 3-month project is not inherently better than a 4-person team; the extra headcount mostly adds cost and coordination overhead.
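To gut-check whether a proposed team fits a budget, the minimum viable team above can be priced out. A minimal sketch, assuming an 8-week engagement, a 40-hour billing week, and the low end of the hourly ranges quoted earlier (all assumptions for illustration; real proposals vary):

```python
# Hypothetical pricing of the minimum viable team described above.
# Assumptions (not from a real proposal): 8-week engagement, 40-hour
# billing week, low-end hourly rates from the ranges quoted earlier,
# allocations matching the "Required" list.

WEEKS = 8
FULL_WEEK = 40  # billable hours at 100% allocation

team = [
    # (role, hourly rate, allocation fraction, headcount)
    ("Project Manager", 80, 0.5, 1),
    ("ML Engineer",    150, 1.0, 1),
    ("Data Scientist", 120, 0.5, 1),
    ("Data Engineer",  100, 0.5, 1),
]

def engagement_cost(team, weeks=WEEKS, full_week=FULL_WEEK):
    """Sum billed cost across all roles for the engagement."""
    return sum(rate * alloc * full_week * weeks * count
               for _role, rate, alloc, count in team)

total = engagement_cost(team)
print(f"Estimated engagement cost: ${total:,.0f}")  # $96,000
```

Note that even this lean team at low-end rates lands near the top of the $30K–$100K band, which is worth keeping in mind when evaluating quotes.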
The Staffing Red Flags
Bait-and-switch on team members: The senior engineers in the sales presentation don't work on your project. Junior staff do. This is common enough that you should explicitly ask to meet the project team — not the company leadership — before signing. Get the names of who will work on your engagement in writing.
Team churn during projects: High staff turnover mid-project is expensive. Every handoff requires knowledge transfer that slows delivery. Ask agencies about their average staff tenure and their approach to continuity if key personnel leave during an engagement.
No ML engineer, just a "prompt engineer": For projects that are genuinely just LLM API integration with custom prompting, a software engineer with LLM experience may be appropriate. For anything requiring custom model development, training, fine-tuning, or evaluation, you need actual ML engineers. These skill sets are not interchangeable.
Oversized teams for small projects: A 12-person team on a $40K project sounds impressive. In practice it's a coordination nightmare, and often a sign of an agency trying to maximize billing on a small engagement. Lean teams with clear ownership outperform large teams with diffuse responsibility.
Managing the Agency Team Effectively
Your side of the engagement matters as much as theirs. The clients who get the best outcomes from AI agencies consistently do three things well:
Provide a decision-maker, not a committee. AI projects involve frequent trade-off decisions — scope, timeline, technical approach. If every decision requires a committee of 6 people and a 2-week approval cycle, the project will stall. Name one person on your side with authority to make calls.
Treat data access as a project dependency, not an afterthought. Agencies frequently spend the first 2–4 weeks of a project waiting for access to the data they need. This is entirely preventable. Identify what data will be needed, prepare access credentials, and have them ready before kickoff.
Engage actively during testing. The testing phase is where the system meets reality. Your team's involvement in testing — running realistic scenarios, reporting edge cases, providing feedback on outputs — is the difference between a model that works in the lab and one that works in production.
Browse aiagencymap.com's agency listings to find firms whose team structure and specialty match what your project requires.
Ready to Find the Right AI Agency?
Browse 700+ verified AI agencies. Filter by tech stack, industry, location, and client ratings.