Strategy · 7 min read · 16 April 2026

Hiring Automation Help: 12 Questions to Ask Before You Sign Anything

A practical guide for business owners evaluating automation consultants, agencies, or freelancers — the specific questions that separate people who'll solve your problem from people who'll create new ones.


Haroon Mohamed

AI Automation & Lead Generation

The market is flooded with people calling themselves automation experts

In 2026, automation has become the "SEO of this decade" — a field full of generalists claiming specialization, results claims that can't be verified, and people who learned a single tool three months ago marketing themselves as experts.

The good ones genuinely exist. They deliver systems that generate measurable revenue. But the signal-to-noise ratio in the market is low.

Here are the questions that separate one from the other.


Before the first call

Question 1: Can I see the exact systems you've built for previous clients?

Good answer: "Yes, here's a case study. Here's a Loom walkthrough of the actual automation. I can set up a test account and show you live."

Bad answer: "I can't share specific client work due to NDAs, but here's a vague description."

NDAs are real, but a genuine operator has enough case studies that they can share at least some with specifics. If everything is vague and anonymized, you're probably talking to someone without a substantial portfolio.

Question 2: What tools do you actually use day-to-day?

Good answer: A specific list. "GoHighLevel for CRM and messaging, Make.com for non-CRM automations, VAPI for AI calling, Supabase for custom data, Clay for enrichment."

Bad answer: "Whatever the client needs."

Specialists have preferences. "Whatever the client needs" usually means "I'll try to learn what you want me to use mid-project."

Question 3: What's a project you took on that didn't work out, and what happened?

Good answer: A specific story. "I tried to automate X for a client but the downstream process wasn't defined clearly enough. We burned 2 weeks before realizing we needed to redesign. Here's what I'd do differently."

Bad answer: "Everything I've built has worked." Or refusal to engage with the question.

Nobody has a 100% track record. People who claim otherwise either haven't done much work or can't self-reflect.


About the scope

Question 4: Before building anything, how do you figure out what needs to be built?

Good answer: A discovery process. "I spend 1–2 hours mapping your current workflow. I identify bottlenecks. I write up a proposed architecture with specific tools and timelines. Only then do we start building."

Bad answer: "I'll just start implementing what you asked for."

An operator who starts building without understanding the current system will miss critical context and build solutions that don't fit.

Question 5: What happens if we discover the scope is bigger than originally agreed?

Good answer: A clear process. "I flag it immediately, describe what we've learned, propose an updated scope and price, and you decide whether to continue."

Bad answer: Either "I'll just do whatever it takes within the original scope" (unsustainable, leads to resentment) or "I'll send you a change order" without a clear framework.

Question 6: How do you hand off the system so we can maintain it ourselves?

Good answer: Documentation deliverables. Loom walkthroughs of every automation. Written runbooks for common changes. Access to all accounts with proper permissions.

Bad answer: "I'll be available if you need changes." That's not handoff — that's permanent dependency.


About pricing

Question 7: How do you price this?

Good answer: A clear model. Either fixed-price per deliverable, hourly with capped scope, or retainer with defined monthly scope. The operator can explain why they chose that model.

Bad answer: "It depends," or an hourly rate with no estimate.

Open-ended pricing usually benefits the operator at the client's expense. Reasonable operators can give you a ballpark range even for custom work.

Question 8: What's included in the quoted price, and what costs extra?

Listen specifically for:

  • Software licenses: Are GHL, VAPI, Make.com subscriptions on you or included?
  • Twilio / telephony costs: Usage is almost always passed through — confirm this is clear.
  • Revisions: After launch, how many rounds of changes are included?
  • Ongoing support: Is 30 days of post-launch support included? 90 days? None?

The common pattern: the implementation fee is clear, but ongoing support is murky. Get the ongoing part in writing.
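Pass-through telephony costs in particular are worth estimating before you sign. A rough sketch of the arithmetic, with placeholder rates (check your provider's current pricing; the figures below are assumptions, not Twilio's actual rates):

```python
# Rough estimate of usage-based telephony spend that is typically
# passed through to the client. All rates are placeholder assumptions.

def monthly_telephony_cost(
    calls_per_month: int,
    avg_minutes_per_call: float,
    per_minute_rate: float = 0.014,   # assumed outbound voice rate, USD/min
    number_rental: float = 1.15,      # assumed monthly phone-number fee, USD
    numbers: int = 1,
) -> float:
    """Estimate monthly usage + number-rental cost."""
    usage = calls_per_month * avg_minutes_per_call * per_minute_rate
    rental = numbers * number_rental
    return round(usage + rental, 2)

# Example: 500 AI calls/month averaging 3 minutes each, on one number.
print(monthly_telephony_cost(500, 3.0))  # → 22.15
```

Even a back-of-the-envelope number like this tells you whether "usage is on you" means pocket change or a meaningful line item.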


About implementation

Question 9: What's the timeline, and what could push it?

Good answer: A week-by-week breakdown with specific dependencies. "Week 1: audit + architecture. Weeks 2–3: build core workflows. Week 4: integration testing. Week 5: user acceptance + launch."

The good operator will also name what could cause delays: "If you're not available for weekly check-ins, timeline slips. If we discover your data is in worse shape than expected, timeline slips. If you want changes during build, timeline slips."

Bad answer: "It'll take about a month." No specificity, no named dependencies.

Question 10: Who else will work on this besides you?

Good answer: Honest answer about team composition. Either "just me" or "I'll handle X, my junior will do Y under my supervision."

Bad answer: Vague references to "my team" or "my agency partners" without specifics.

If juniors will do work, ask to speak to one of them. If the operator won't arrange it, be cautious.


About the operational reality after launch

Question 11: Six months after launch, what's the relationship?

Good answer: Clear options. "You own everything. You can continue paying me monthly for support, or take it in-house. If you want to walk away, here's everything handed over."

Bad answer: Anything that creates lock-in. Accounts the operator owns on your behalf. Integrations that only work if they maintain them. Custom code without source access.

The best automation consultants work toward making themselves unnecessary. The worst make themselves indispensable.

Question 12: What happens when this breaks in ways you didn't anticipate?

Good answer: An honest acknowledgment that things will break, plus a specific protocol. "I maintain a monitoring dashboard. I commit to a response time SLA during support periods. For breakages after the support period, I have an hourly rate for emergency work."

Bad answer: "I build things that don't break." Red flag. Everything breaks eventually.
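To make "monitoring" concrete: one simple form it can take is a staleness check that flags any automation whose last successful run is older than its expected interval. The sketch below is illustrative; the automation names and intervals are hypothetical, not a specific operator's setup.

```python
from datetime import datetime, timedelta

def stale_automations(last_runs: dict[str, datetime],
                      max_age: dict[str, timedelta],
                      now: datetime) -> list[str]:
    """Return automations whose last successful run exceeds the allowed age."""
    return sorted(
        name for name, ts in last_runs.items()
        if now - ts > max_age.get(name, timedelta(hours=24))
    )

# Hypothetical example: an SMS follow-up that should fire at least daily.
now = datetime(2026, 4, 16, 12, 0)
last = {
    "lead-intake": now - timedelta(minutes=10),
    "follow-up-sms": now - timedelta(hours=30),
}
limits = {"lead-intake": timedelta(hours=1),
          "follow-up-sms": timedelta(hours=24)}
print(stale_automations(last, limits, now))  # → ['follow-up-sms']
```

A check like this catches the quiet failure mode — the automation that stops running without throwing an error — which is exactly the breakage nobody anticipates.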


Specific signals to weight heavily

Positive signals:

  • Can explain concepts in terms of YOUR business, not generic automation theory
  • Asks about your goals, metrics, and what success looks like — before pitching solutions
  • References specific tools by name and knows their limitations
  • Has written content, talks, or other public work that demonstrates depth
  • Will say "I don't think automation is the right answer here" when it isn't

Warning signals:

  • Promises specific numbers ("I'll 3x your leads") without seeing your operation
  • Every problem has the same solution (usually involving their preferred tool stack)
  • Can't explain technical decisions in plain English
  • Dismisses your existing setup without understanding it
  • Quotes based on buzzword fit rather than actual scope

The one question most people forget to ask

"What would you NOT recommend automating in our business?"

Good operators have opinions about what shouldn't be automated. A complex customer conversation. A pricing negotiation. A high-value referral discussion.

Operators who can't answer this are likely to over-automate — putting AI where humans belong, and creating problems that cost more than the automation saves.


Red flags specific to AI automation

Some flags specific to the AI-automation space:

  • "We have proprietary AI technology" (usually means they're white-labeling GPT + a prompt library)
  • "Our AI agents have 95%+ booking rates" (almost certainly cherry-picked, or not measuring what they claim)
  • Refusal to let you hear a real call the AI has made
  • No mention of cost management, ongoing prompt maintenance, or failure modes

Real AI automation is more engineering than magic. Practitioners who have done it at scale talk about it like engineers: with specifics, trade-offs, and an awareness of what can go wrong.


Sources

This is a synthesis post based on my own experience as both a consultant and (before that) a client. There's no specific citation for the patterns — they're from deployment experience across client projects.

If you're evaluating automation help for your business, feel free to run these questions past me too. I'd rather have you choose the right person than the wrong one, even if that's not me. Let's talk.

Need This Built?

Ready to implement this for your business?

Everything in this article reflects real systems I've built and operated. Let's talk about yours.


Haroon Mohamed

Full-stack automation, AI, and lead generation specialist. 2+ years running 13+ concurrent client campaigns using GoHighLevel, multiple AI voice providers, Zapier, APIs, and custom data pipelines. Founder of HMX Zone.
