Every vendor promises AI-driven network operations (NetOps).
Every keynote speaker claims autonomous NetOps are just around the corner.
Every enterprise leader wants to believe AI will eliminate outages and free engineers from firefighting.
But EMA’s 2026 AI-Driven NetOps Survey delivers a much harder truth:
We are not in the era of AI-driven NetOps.
We are in the era of AI aspiration.
And aspiration without readiness doesn’t accelerate outcomes—it results in failure.
EMA asked respondents how engaged they really are with their vendors’ AI capabilities.
The results expose the first major maturity gap:
Most teams think they’re using AI because their tools include AI features.
But actual operational usage drops dramatically.
The disconnect looks like this:
| AI promise | Enterprise experience |
| --- | --- |
| AI bundled into platforms | Often enabled by default |
| AI actively used by NetOps | Rare |
| Teams understand AI outputs | Limited |
| AI influences daily operations | Minimal |
Having AI in your tools ≠ using AI in your operations.
Most teams don’t know:
What their tools’ AI features actually do.
How to activate them.
How to validate outcomes.
Whether they’re producing value.
This isn’t incompetence.
It’s a readiness gap.
EMA also asked how well teams can evaluate AI-driven network management.
A majority admitted they struggle to validate AI outputs.
That’s a critical blocker.
Why is AI-driven NetOps failing to scale in enterprises? Because:
You can’t adopt what you can’t measure.
You can’t automate what you can’t verify.
You can’t trust what you can’t explain.
These are the primary reasons AI isn’t scaling in NetOps.
Teams don’t lack ambition.
They lack observability-grade evidence.
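What does observability-grade evidence look like? It can start small. Here's a minimal sketch of measuring an AI feature's hit rate, assuming you can export AI-generated alerts and engineer-confirmed incidents; the field names and numbers are illustrative, not from the EMA survey:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Alert:
    device: str   # device the alert points at
    window: str   # time bucket, e.g. "2026-02-03T14"

# Hypothetical exports: what the AI flagged vs. what engineers confirmed as real incidents.
ai_alerts = {Alert("core-sw-01", "2026-02-03T14"), Alert("edge-rtr-07", "2026-02-03T15")}
confirmed = {Alert("core-sw-01", "2026-02-03T14"), Alert("fw-02", "2026-02-03T16")}

true_positives = ai_alerts & confirmed
precision = len(true_positives) / len(ai_alerts) if ai_alerts else 0.0  # how often the AI is right
recall = len(true_positives) / len(confirmed) if confirmed else 0.0     # how much it actually catches

print(f"precision={precision:.0%}  recall={recall:.0%}")
```

Until numbers like these are tracked per AI feature, "we use AI" is an impression, not a measurement.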
EMA found AI is increasingly influencing product selection.
Enterprise leaders want:
AI-assisted troubleshooting
AI-driven recommendations
AI-enhanced visibility
But here’s the uncomfortable truth the survey exposes:
AI is shaping buying decisions more than operational decisions.
Translation:
Vendors are selling AI.
Organizations are funding AI.
But NetOps teams aren’t operationalizing AI.
That’s not innovation—that’s shelfware.
EMA shows budgets for AI in NetOps are expanding, with many of the increases planned within the next 12 months.
At the same time, teams report:
Low confidence in data quality
Low confidence in AI evaluation
Low confidence in AI accuracy
This creates a dangerous pattern:
We’re funding AI faster than we’re preparing for AI.
That’s how organizations end up with expensive platforms and no operational trust.
EMA asked how frequently AI produces false or mistaken insights.
The answer?
Regularly.
Not occasionally.
Not rarely.
Regularly.
This reshapes the entire autonomous networking narrative.
If AI frequently generates incorrect insights:
Closed-loop automation becomes unsafe
Trust collapses
Engineers revert to manual workflows
AI adoption stalls
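This is why most teams that do automate put a gate between the AI and the network. A minimal sketch of that pattern, confidence-thresholded with human approval as the default; the threshold, fields, and recommendation shape are illustrative assumptions, not a vendor API:

```python
from dataclasses import dataclass

AUTO_EXECUTE_CONFIDENCE = 0.95  # illustrative; a real threshold comes from measured error rates

@dataclass
class Recommendation:
    action: str        # e.g. "shutdown interface Gi0/1"
    confidence: float  # reported by the AI feature, 0.0-1.0
    blast_radius: int  # number of devices the change could affect

def dispatch(rec: Recommendation) -> str:
    """Auto-execute only low-risk, high-confidence recommendations; queue the rest for review."""
    if rec.confidence >= AUTO_EXECUTE_CONFIDENCE and rec.blast_radius <= 1:
        return "execute"
    return "queue_for_approval"

print(dispatch(Recommendation("shutdown interface Gi0/1", confidence=0.72, blast_radius=4)))
# -> queue_for_approval: when false insights are frequent, human review has to be the default path.
```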
Which leads directly to the biggest barrier of all.
EMA asked whether respondents are ready to allow AI to take automated action without human involvement.
Most said “no.”
And even those who were initially willing lost confidence once they accounted for AI error rates.
This isn’t a tooling issue.
This is a trust issue.
And trust isn’t built with dashboards or demos.
Trust is built with:
Clean telemetry
Complete visibility
Correlated signals
End-to-end data integrity
Right now, most enterprises don’t have that foundation.
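That foundation can be checked mechanically before anyone turns an AI feature on. A minimal sketch of a data-readiness gate, assuming you can count devices returning telemetry and devices whose logs, flows, and metrics correlate; the counts and thresholds are illustrative:

```python
# Hypothetical inventory counts; substitute real numbers from your own telemetry pipeline.
expected_devices = 1200
polled_devices = 1130        # devices actually returning metrics
correlated_devices = 870     # devices whose logs, flows, and metrics share IDs and timestamps

coverage = polled_devices / expected_devices
correlation = correlated_devices / expected_devices

checks = {
    "visibility coverage >= 95%": coverage >= 0.95,
    "correlated signals  >= 90%": correlation >= 0.90,
}

for name, passed in checks.items():
    print(f"{'PASS' if passed else 'FAIL'}  {name}")

print("AI-ready foundation" if all(checks.values()) else "Fix the data foundation first")
```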
EMA's report delivers signals industry leaders can't ignore. Across NetOps teams, the same patterns keep emerging: AI features that go unused, outputs that can't be validated, and data that can't be trusted.
These aren't edge cases. They're symptoms of immature observability.
It's a readiness problem.
What are the biggest barriers to autonomous network operations? EMA’s research reveals a consistent story:
✔ Organizational leaders want AI.
✔ Organizational leaders are budgeting for AI.
✔ Organizational leaders believe AI will improve operations.
But simultaneously:
❌ They don’t use the AI they already own.
❌ They can’t measure AI effectiveness.
❌ They see frequent AI errors.
❌ They don’t trust AI for automation.
❌ They lack clean, correlated data.
That’s not an AI failure.
That’s an observability failure.
A data readiness failure.
A NetOps maturity failure.
Here’s the part the hype skips:
AI is an amplifier.
Feed it partial, siloed data—and it amplifies chaos.
Feed it clean, complete, correlated, end-to-end telemetry—and it becomes transformative.
Until you first fix the data foundation, you can’t:
Trust AI
Automate workflows
Predict failures
Reduce MTTR
Enable closed-loop actions
This is why AI belongs last in the NetOps maturity sequence:
Visibility creates trust.
Automation creates scale.
AI creates foresight.
Reverse that order, and you don’t accelerate outcomes—you accelerate mistakes. (See my prior post to find out more about why an AI-first approach is destined to fail.)
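Put differently, the maturity sequence is a set of gates: each stage unlocks only when the one before it is demonstrably in place. A minimal sketch, with the stage criteria as illustrative placeholders rather than industry benchmarks:

```python
# Stages are checked in order; a failure blocks everything after it, which is why AI sits last.
def visibility_ok(coverage: float) -> bool:
    return coverage >= 0.95            # trust: you can actually see the network

def automation_ok(change_success: float) -> bool:
    return change_success >= 0.99      # scale: routine changes are safe and repeatable

def ai_ok(insight_precision: float) -> bool:
    return insight_precision >= 0.90   # foresight: AI output is measurably reliable

stages = [
    ("visibility", visibility_ok(0.97)),
    ("automation", automation_ok(0.96)),
    ("ai", ai_ok(0.88)),
]

for name, passed in stages:
    print(f"{name}: {'unlocked' if passed else 'blocked'}")
    if not passed:
        break  # later stages inherit the weaknesses of earlier ones, so stop here
```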
AI adoption is not the goal.
AI readiness is.
EMA’s survey makes it clear: Most organizations aren’t there yet.
But the path to success is straightforward:
Fix your data → mature observability → build trust → then automate.
That’s how NetOps becomes predictive.
That’s how networks become AI-ready.
And that’s how enterprises move from aspiration to execution.
Q: Why isn’t AI scaling in NetOps today?
A: Because most organizations lack clean, correlated, end-to-end data. Without trusted observability, AI outputs can’t be validated or operationalized.
Q: Is autonomous networking realistic in 2026?
A: Only for teams with mature visibility and automation foundations. For everyone else, autonomous operations remain aspirational.
Q: What’s the biggest blocker to AI-driven NetOps?
A: Trust—and trust depends on observability maturity, not AI features.
Q: What should organizations focus on first?
A: Visibility. Then automation. Then AI. NetOps maturity is sequential, not additive.