WORK-AI Integration on Asio

Design Systems

SaaS Platform

Leadership

AI Integration on Asio™ — AI Automation using SideKick™

How I led the strategic redesign of ConnectWise's AI experience — moving from "bolt-on AI features" to a coherent, trusted, human-centered AI system that transformed how 25,000+ MSPs work every day.

AI buttons everywhere. Nobody used them. MSPs losing trust in the product they paid for. We had one shot to fix it.

MY ROLE

UX Leader

PLATFORM

Asio™ / SideKick™

DURATION

8 Months

USERS

25,000+ MSPs

TEAM

Cross-functional

READ TIME

~2 Min ⚡️

⚡ TL;DR — 3 Key Outcomes From This Initiative

✅ 20% efficiency gain for MSP workflows

✅ Measurable lift in AI feature adoption

✅ Reduced cognitive load — AI trust scores up

01

The AI Arms Race Was Leaving Users Behind

Context

By 2023, nearly 80% of SaaS platforms had launched at least one AI feature. ConnectWise was no different. In the rush to stay competitive, AI capabilities — summarize, generate, recommend — were being shipped fast and shipped often. But there was a growing, uncomfortable truth nobody wanted to say out loud: users weren't using them.

MSPs — the IT professionals managing hundreds of client endpoints — were navigating interfaces bloated with AI buttons they didn't understand, didn't trust, and couldn't predict. What was supposed to be our competitive advantage was becoming a liability. We were building what I came to call AI Debt — the accumulated cost of rushed, disconnected AI features that erode user trust over time.

80%

SaaS platforms launched AI by 2023

$126B

Projected global AI software market by 2027

15%

Decrease in AI feature adoption because of improper onboarding

2X

AI Feature overload without purpose

Fragmented AI buttons ("Generate", "Summarize", "Rewrite") scattered across Asio™ (before)

"The AI Feature Overload"

02

UX Lead — Strategy, Research, Design Direction, and Ship

My Role

This wasn't a solo design project — it was a cross-functional strategy effort. My role was to establish the UX direction, keep it human-centered through constant user evidence, and make sure every AI interaction decision had a clear rationale grounded in research, not assumption.

My Responsibilities

UX Strategy · Design Leadership · User Research · Cross-functional Facilitation · Design System Direction · Stakeholder Alignment

Key Deliverables

AI interaction patterns · Workflow redesigns · Research synthesis · Design system components · Executive readouts

Team

Product Managers · AI/ML Engineering · Partner Success · Sales · Customer Support

Tools Used

Maze

Marvin

Pendo

MS Copilot

Asio Neon™

Confluence

Jira

Figma

FigJam

03

Scattered AI Buttons That Didn't Add Value

The Problem

The AI features we were shipping were feature-driven, not user-driven. We were confusing what was technically possible with what was actually useful. The result? Interfaces that pulled MSPs out of their workflow instead of amplifying it.

What Was Broken

Bolted-on AI. Features like Generate, Summarize, and Rewrite were scattered across interfaces with no coherent context — "glitter on a school project," as I described it internally.

Black box recommendations. SideKick™ would surface suggestions without explaining why. When MSPs couldn't understand AI reasoning, they simply ignored it.

Feature bloat. The race to ship AI had created an interface graveyard of features no one asked for and few actually needed. Users were overwhelmed.

Broken trust loop. One bad AI recommendation eroded confidence in all subsequent ones. Without transparency, even accurate AI became background noise.

Business impact. Low adoption meant we were investing heavily in AI with diminishing returns. Partner satisfaction scores were under threat.

🙌🏻

"In the race to add AI, many SaaS products have become bloated — layering automation where it's not needed and eroding the user experience." — TechCrunch, on the AI arms race in enterprise tools

04

What We Were Chasing and for Whom

Objectives

My framing for the team: AI should feel like a trusted colleague, not a slot machine. Every design decision had to serve the primary persona — the overwhelmed MSP technician managing 200+ endpoints — while also delivering on the business's need for AI product differentiation.

Garrett Black

Technician · MSP Field & Remote

Configure faster · Less code · More output

JOBS TO BE DONE

Configure and customise apps for clients without writing a single line of code

Resolve tickets faster by surfacing the right fix the first time with fewer resources

Reduce repetitive manual steps across every client environment

Pain points

Context switching across 5+ tools to resolve a single service ticket

No way to know if a fix was already applied to a similar client before

Configuration requires specialist knowledge not always available

MENTAL MODEL AROUND AI

"Show me the next step — don't explain the whole system." Garrett trusts AI that acts like a senior peer suggesting the fix, not a chatbot asking clarifying questions. Skeptical of AI that adds steps rather than removing them.

Jessica Cho

Sales · Account Executive

Close more deals · Surface lost revenue

JOBS TO BE DONE

Identify at-risk deals and revenue gaps before they become lost opportunities

Build accurate, tailored quotes without back-and-forth with technical teams

Understand client ROI clearly enough to justify the deal to any stakeholder

Pain points

CRM and PSA data are always out of sync. Can't trust pipeline view

Lost revenue signals buried in service reports nobody reads in time

Quote customisation requires engineering input, slowing deal cycles

MENTAL MODEL AROUND AI

"Give me the insight, not the data dump." Jessica wants AI that surfaces what matters — which deal to prioritise, which client is at risk — without her having to build the query. Trusts AI recommendations only if she can see the reasoning behind them.

Lorena Santos

Finance · Controller or CFO

Manage cashflow · Actionable cost insights

JOBS TO BE DONE

Forecast cashflow with enough confidence to make staffing and vendor decisions

Identify cost overruns and margin leaks by project, client, and service line

Produce accurate board-ready reports without 2 days of spreadsheet work

Pain points

Financial data lives in PSA, billing, and spreadsheets — never reconciled

Operational cost visibility is always lagging — by the time she sees it, it's too late

Manual reconciliation every month consumes 2–3 full days of analyst time

MENTAL MODEL AROUND AI

"I'll trust the number when I can verify the source." Lorena is the most AI-skeptical persona. She needs explainability and an audit trail before acting on any AI-generated insight. Prefers AI that flags anomalies for her to review over AI that makes decisions autonomously.

Ray Jackson

Owner · MSP Founder & CEO

Grow profitably · Business visibility

JOBS TO BE DONE

See business health — margin, utilisation, client risk — in a single view without chasing reports

Make hiring and investment decisions backed by forward-looking data, not last month's actuals

Grow revenue per client without proportionally growing headcount or complexity

Pain points

Business visibility requires pulling from 4+ disconnected systems — always stale by the time it's ready

Can't see which clients are profitable and which are silently draining margin

Growth decisions made on gut instinct — no AI-assisted scenario modelling available

MENTAL MODEL AROUND AI

"Tell me what I should be worried about before I ask." Ray wants proactive AI — surfacing risks and opportunities before they become problems. Believes AI should make him look smarter to clients and investors, not just faster at admin. The highest ROI expectation of any persona.

05

What Users, Competitors, and Sales Told Us — and What Changed

Research

We ran a mixed-methods research sprint — partner interviews, session recordings, support ticket analysis, and sales feedback sessions. The findings were humbling. We had been designing for what engineering could build, not for what partners actually needed.

68%

Users who don't trust the data used to train the AI are hesitant to adopt it

56%

Users said it's difficult to get what they want out of AI

75%

Users who don't trust the data also believe the AI lacks the information needed to be useful

😤

Insight 1: Users - "I don't trust it."

Partners consistently said they avoided AI suggestions after a single bad recommendation. The pattern was clear: trust, once broken, is hard to rebuild — especially in a cybersecurity-adjacent product where a wrong action has real consequences.

Source: Partner interviews · 40+ sessions across all MSP segments

💬

Insight 2: Sales - "They ask what the AI actually does."

Sales reps reported that top objections during demos were about AI transparency. Prospects wanted to see explainability — the "why" behind a recommendation — before considering adoption. This was a pipeline problem, not just a UX problem.

Source: Sales team quarterly feedback sessions

📉

Insight 3: Data - Feature usage was a cliff.

Session data showed users would click AI buttons on first exposure, then never return. Day 1 engagement looked healthy. Day 14 retention was near zero. The experience wasn't building habit — it was burning through curiosity.

Source: Pendo session analysis · 100+ recorded sessions

🖱️

Insight 4: Stakeholders - "We're building too fast."

Internal stakeholders — including Partner Success and Support — flagged that AI errors were generating a spike in support tickets. The velocity of AI feature shipping was outpacing our ability to design safe, recoverable failure states.

Source: Cross-functional office hours · Leadership/Product sessions

We interviewed 35+ Partners (Users) as part of the discovery

Early Interaction with Users Helped us Map Out their Frustrations and Pain Points

06

The Decisions That Actually Mattered

Solutions

This is where I want to be honest: we didn't just redesign screens. We made four strategic decisions that shaped the entire direction of AI in ConnectWise's product ecosystem.

Decision 1: AI that doesn't announce itself — coherent AI over flashy AI

We removed standalone "AI buttons" from primary navigation and embedded AI capabilities contextually within existing workflows. The principle: AI should feel like a natural part of the task, not a separate mode to switch into. If you have to notice the AI, it's not working well enough.

STRATEGIC DECISION

USER JOURNEY

Decision 2: Explainability as a core design requirement, not a feature

Every SideKick™ recommendation now surfaces a brief, plain-language rationale. "Why is SideKick suggesting this?" became a design requirement, not a nice-to-have. We piloted a progressive disclosure pattern — a one-line reason, expandable to full context — to avoid cognitive overload while building trust.
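The progressive disclosure pattern can be sketched as a simple data shape: a one-line reason that is always visible, with the full reasoning context revealed only on demand. This is a hypothetical illustration, not the actual Asio™ API; all names (`Recommendation`, `summaryReason`, `renderReason`) are invented for the sketch.

```typescript
// Hypothetical shape for a recommendation with progressive disclosure:
// a one-line rationale always shown, full context only when expanded.
interface Recommendation {
  id: string;
  action: string;
  summaryReason: string;        // always visible, one line
  fullContext: {                // revealed via "See reasoning"
    dataSources: string[];      // e.g. PSA + RMM telemetry
    matchedHistory: string[];   // previously resolved matching tickets
    confidence: number;         // 0–1
  };
}

// What the UI shows by default vs. after the user expands the reasoning.
function renderReason(rec: Recommendation, expanded: boolean): string {
  if (!expanded) return rec.summaryReason;
  const c = rec.fullContext;
  return [
    rec.summaryReason,
    `Sources: ${c.dataSources.join(", ")}`,
    `Similar resolved items: ${c.matchedHistory.length}`,
    `Confidence: ${Math.round(c.confidence * 100)}%`,
  ].join("\n");
}
```

The design intent is that the collapsed state costs the user almost nothing to read, while the expanded state carries the full audit trail for skeptical users.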

HUMAN FACTORS

Decision 3: Control before automation — always give the user an exit

We established a firm design principle: AI never takes autonomous action without explicit user confirmation for high-stakes tasks. For low-stakes automations (e.g., auto-tagging a ticket), we designed an undo trail. For high-stakes ones (e.g., shifting project timelines), we required a deliberate confirm step. Users stay in control.
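The control principle reduces to a small decision rule: low-stakes actions apply immediately but leave an undo trail, high-stakes actions never execute without explicit confirmation. The sketch below is illustrative only; the tier names and `decideFlow` function are assumptions, not ConnectWise code.

```typescript
// Hypothetical "control before automation" rule: risk tier decides
// whether the AI may act on its own or must wait for the user.
type Stakes = "low" | "high";

interface FlowDecision {
  autoApply: boolean;        // may the AI act without asking?
  requiresConfirm: boolean;  // deliberate confirm step required?
  undoWindowMinutes: number; // undo trail length after execution
}

function decideFlow(stakes: Stakes): FlowDecision {
  if (stakes === "low") {
    // e.g. auto-tagging a ticket: act immediately, keep an undo trail
    return { autoApply: true, requiresConfirm: false, undoWindowMinutes: 10 };
  }
  // e.g. shifting a project timeline: never act without explicit confirm,
  // and still keep the undo window once the user has approved the action
  return { autoApply: false, requiresConfirm: true, undoWindowMinutes: 10 };
}
```

Encoding the rule this way made it easy to audit: every new automation had to declare its stakes tier before it could ship.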

USER JOURNEY

AI-UX DESIGN

Decision 4: Design for failure first — not the happy path

We mapped every AI error state before designing the success path. What happens when AI is wrong? What if it's uncertain? We introduced confidence indicators and graceful fallback flows — so when AI fails, users know it, understand why, and can still complete their task without losing trust in the broader system.
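One way to picture the confidence-indicator logic: below a surfacing threshold the AI stays silent and just monitors; in a middle band it suggests with its uncertainty shown; at very high confidence it offers the action outright (still behind user confirmation). The 0.85 floor mirrors the "surface only at ≥85% confidence" rule in the journey map; the upper band and function name are illustrative assumptions.

```typescript
// Hypothetical confidence gating for AI surfacing. Bands are assumed
// for illustration; only the 85% floor comes from the case study.
type Surfacing = "silent" | "suggest-with-caveat" | "offer-action";

function surfacingFor(confidence: number): Surfacing {
  if (confidence < 0.85) return "silent";              // monitor, don't interrupt
  if (confidence < 0.95) return "suggest-with-caveat"; // show confidence + reasoning
  return "offer-action";                               // propose the fix, user confirms
}
```

The point of mapping the failure states first is that every band has a defined behavior, so a wrong or uncertain model output degrades into silence or a caveat instead of a confidently wrong instruction.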

UX STRATEGY

USER JOURNEY

AI-Focused Design Thinking

1

Context

Focus on the real user tasks and friction points instead of imagining where AI might fit. Identify workflows that are repetitive, require large-scale processing, or need decision support: these are the ideal candidates for AI enhancement. Conduct generative research like in-depth interviews and contextual inquiries.

2

Strategy

Map AI opportunities to both product strategy and user journeys. Identify which type of AI model (predictive, generative, classification, etc.) suits the experience best. Validate use-cases with users through concept walkthroughs.

3

Resolution

Co-design the AI-user relationship: is it a guide, a partner, or a doer? Keep interfaces simple and make the AI’s role clear. Avoid “Generate” buttons everywhere. Think about fallback states and explainability early. AI may fail and users deserve transparency.

4

Evaluation

Test for AI explainability, failure recovery, and user mental models. Evaluate onboarding and how easily users grasp the AI's purpose and limitations. Observe whether AI interruptions disrupt flow or genuinely help.

Context-based SideKick™ menu on the Project Management screen in Asio™

Dedicated contextual AI chat screen in Asio™ — separates the generative AI from the rest of the Asio™ pages

"AI Made Contextual"

Current-State Journey Map · SideKick™ AI Adoption · ConnectWise Asio

SideKick™ AI Workday — Three Personas

6

Phases

3

Personas

5

Pain Points

6

Trust Decisions

🙎🏻‍♂️

Garrett Black - Technician

Primary · Resolves 12–18 tickets/day

👩🏻‍💼

Jessica Cho - Sales

Primary · Closes deals, surfaces revenue gaps

💰

Lorena Santos - Finance

Primary · Finance · Cashflow, margins, board reports

DISCOVERY

AI SURFACES

EXPLAINS

DECIDES

EXECUTES

CONFIRMS

1

8:00 AM

Morning Ticket Queue

What GARRETT does

Opens PSA. Manually scans 14 tickets. Prioritises by gut feel — no pattern detection. Three tickets share the same root error but he can't see the connection yet.

"Same drill every morning. Read everything, figure out what's on fire."

Tools ACTIVE

ConnectWise Asio

Teams / Phone

🔁

Start of Day

No unified overview. Day begins with manual reconciliation across tools.

📍

Pain Point 1

3 tickets share a root cause. Garrett resolves them separately — triple the work, zero visibility.

Trust D1

Surface only at ≥85% confidence. Silent monitoring prevents alert fatigue before trust exists.

−40% irrelevant alerts

😐

Neutral

2

8:22 AM

🔔

SideKick™ Surfaces a Pattern

What GARRETT does

An inline banner inside ticket #4712 — not a modal. "SideKick™ sees a pattern across 3 tickets. 91% match to a known RMM agent config issue."

"Wait — it connected these three? I've been looking at them separately all morning."

AI SURFACING

91% confidence · pattern detected

Risk: Low · no session impact

📍

Inline, not modal

Garrett isn't interrupted. The suggestion lives inside his active ticket. Modals provoke reflex dismissal.

Trust D2

Inline, not modal. Garrett stays in context — no interruption, no reflex close.

2.4× engagement rate

😮

Curious

3

8:24 AM

🔦

Opens Reasoning Layer

What GARRETT does

Taps "See reasoning". Panel shows: data sources used, the pattern detected, 2 previously-resolved matching tickets, plain-English explanation of what changes.

"It found the same pattern from November. That actually makes sense."

REASONING PANEL

Source: PSA + RMM telemetry

2 matching resolved tickets

📙

SHOW YOUR WORK

63% of technicians expand reasoning before acting. This is the critical trust bridge between suggestion and action.

Trust D3

Reasoning one tap away. Source, pattern, history — in plain language every time.

63% read before acting

🤔

Building trust

4

8:26 AM

🔍

The Decision Moment

What GARRETT does

Three paths with equal visual weight: Apply now, Schedule, Not relevant. Each shows its 1-line consequence before tapping.

"No dismiss button — and it tells me what happens with each choice."

Tools open

ConnectWise Asio

Teams / Phone

⏱️

Pain Point 2

Previous design had one button with no context. Garrett skipped it every time — zero signal back to the model.

🎛️

3-path choice architecture

Every path feeds the model. "Not relevant" + 1-tap reason = 4× more feedback than silent dismiss.

Trust D4

Three paths replace one button. No dismiss — every choice teaches the model.

4× model feedback

🧐

Deliberating

5

8:27 AM

▶️

Executes the Fix

What GARRETT does

Taps "Apply now". Fix applies to all 3 endpoints in 4 seconds. All 3 tickets update simultaneously. A 10-minute undo countdown begins automatically.

"That just saved me 40 minutes of going into each one separately."

EXECUTION STATE

3 endpoints · fix applied

10 min undo window · live

🔁

Undo window · #1 trust driver

Confidence to act comes from knowing you can undo. Highest-rated feature in usability testing.

Trust D5

Execution receipt + 10-min undo. Act without anxiety about irreversibility.

#1 trust driver · usability testing

😄

Relieved

6

8:35 AM

Confirmation + Loop

What GARRETT does

3 tickets resolved. Outcome card: "3 endpoints fixed · 38 min estimated time saved · Pattern logged." Receipt saved to timeline. Model improves for next match.

"It tells me what it saved. First time any AI tool has shown me its own value."

Outcome

3/3 tickets resolved

Pattern logged · model improves

📊

MAKE THE WIN VISIBLE

Explicit outcome card surfaces value immediately. Builds the case for repeat use and team adoption.

Trust D6

Outcome card surfaces the value — time saved, pattern logged, model learning.

−80% escalations

😊

Confident

Emotional Arc across the day

Queue

AI Fires

Reasoning

Decides (Dip)

Executes

Confirms

AI action / surfacing

Decision point

Delight / trust built

Overwhelming

Feedback loop

The end-to-end interaction flow for a SideKick™ recommendation — from trigger to confirmation to action

07

What Pushed Back Hard

Challenges

Shipping AI-UX features carries its own challenges: there is no golden industry standard to follow, and as UX advocates we were expected to push the boundaries.

⚡️

Shipping velocity vs. design quality

Engineering timelines were aggressive. The pressure to ship AI features before competitors created constant tension between shipping fast and shipping right. I had to negotiate scope and sequence deliverables to protect design quality on trust-critical patterns while allowing velocity on lower-stakes features.

🧩

AI model uncertainty by design

AI outputs are probabilistic, not deterministic. Designing for a system that can be "wrong sometimes" required entirely new design patterns — confidence levels, graceful errors, undo flows — that didn't exist in our current design system and had to be created from scratch.

🔐

Cybersecurity stakes raised the design bar

MSPs work in cybersecurity-adjacent environments where an AI false positive can have serious downstream consequences. Every AI-assisted action carried real risk. Designing for this level of trust and consequence required significantly more research, testing, and iteration than a typical B2B SaaS workflow.

📐

No established AI UX patterns to follow

In 2023, enterprise AI UX best practices were still being written in real time. We couldn't rely on pattern libraries. I had to build original interaction frameworks for explainability, progressive disclosure of AI confidence, and human-AI control handoffs — then document them for scale across the team.

08

What Actually Moved in Numbers

Results

Design speaks in numbers rather than pixels. Here's what the work delivered.

30%

Engagement Increase

We recorded a significant increase in the number of MSPs interacting with AI features

50%

Adoption of AI Features

MSPs were discovering and adopting the new AI features, thanks to the streamlined onboarding

15%

Retention Increase

The long-term value AI added to the user experience helped reduce churn by 18% and lift customer lifetime value by 25%.

Contextual AI Workflow Assistance

Replaced scattered AI buttons with contextual, in-workflow AI assistance → Partners stopped ignoring AI features and started relying on them

New Explainability Patterns

Introduced explainability patterns into SideKick™ → Sales reported reduced AI objections during demos

AI-UX Interaction Pattern Library

Established AI interaction pattern library in Asio Neon™ → Design team could ship new AI features 40% faster with consistent, tested patterns

Asio Neon™ Applied at Enterprise-density Scale

Proved the design system's viability for complex data-heavy UIs and contributed new reusable patterns back to the component library

The Trust Design Mental Model

Shifted the team's mental model: AI design = trust design — this philosophy is now embedded in ConnectWise's product design standards

Based on SideKick™ feature adoption data from Pendo · ConnectWise partner base ~25,000

09

If I Ran a Similar Project Again Tomorrow

What I Learned

In the end, the real challenge wasn't the AI models or the engineering magic behind them. It was something far more human — designing SideKick™ on the Asio™ platform to feel like a teammate MSPs could trust. Not just smart, but considerate. Not just powerful, but intuitive.

🎓 6 Lessons That Will Shape Every Complex Product Decision I Make

AI debt is real. Rushing AI features creates compounding trust debt that's 10x harder to pay off than tech debt. Slow down to ship right.

Explainability isn't optional in enterprise AI. Users in high-stakes environments need the "why" — not just the "what."

Research the mental model first. How users think about AI — not how engineers built it — is the only foundation worth designing from.

AI-UX design is built on trust. The happy path is easy. The error state, the uncertain state, the override state — that's where trust is built or broken.

Sales is a design feedback channel. The objections prospects raise are user insights in disguise. I would involve sales research from day one next time.

Document AI patterns on the go. The team velocity gains from a living AI pattern library far outweigh the short-term cost of documentation.

Want to read the full design story?

The portfolio case study tells the strategic story. The full story on Medium includes bonus tips on shipping AI feature experiences on a complex SaaS platform without established industry-level guidelines, and on bending some UX rules along the way.

WORK-AI Integration on Asio

Design Systems

SaaS Platform

Leadership

AI Integration on Asio™ — AI Automation using SideKick™

How I led the strategic redesign of ConnectWise's AI experience — moving from "bolt-on AI features" to a coherent, trusted, human-centered AI system that transformed how 25,000+ MSPs work every day.

AI buttons everywhere. Nobody used them. MSPs losing trust in the product they paid for. We had one shot to fix it.

MY ROLE

UX Leader

PLATFORM

Asio™ / SideKick™

DURATION

8 Months

USERS

25,000+ MSPs

TEAM

Cross-functional

READ TIME

~2 Min ⚡️

⚡ TL;DR — 3 Key Outcomes From This Initiative

✅ 20% efficiency gain for MSP workflows

✅ Measurable lift in AI feature adoption

✅ Reduced cognitive load — AI trust scores up

01

The AI Arms Race Was Leaving Users Behind

Context

By 2023, nearly 80% of SaaS platforms had launched at least one AI feature. ConnectWise was no different. In the rush to stay competitive, AI capabilities — summarize, generate, recommend — were being shipped fast and shipped often. But there was a growing, uncomfortable truth nobody wanted to say out loud: users weren't using them.

MSPs — the IT professionals managing hundreds of client endpoints — were navigating interfaces bloated with AI buttons they didn't understand, didn't trust, and couldn't predict. What was supposed to be our competitive advantage was becoming a liability. We were building what I came to call AI Debt — the accumulated cost of rushed, disconnected AI features that erode user trust over time.

80%

SaaS platforms launched AI by 2023

$ 126 B

Global AI software market target in 2027

15%

Decrease in AI feature adoption because of improper onboarding

2X

AI Feature overload without purpose

Fragmented AI buttons ("Generate", "Summarize", "Rewrite") scattered across the Asio™ (Before)

"The AI Feature Overload"

02

UX Lead — Strategy, Research, Design Direction, and Ship

My Role

This wasn't a solo design project — it was a cross-functional strategy effort. My role was to establish the UX direction, keep it human-centered through constant user evidence, and make sure every AI interaction decision had a clear rationale grounded in research, not assumption.

My Responsibilities

UX Strategy · Design Leadership · User Research · Cross-functional Facilitation · Design System Direction · Stakeholder Alignment

Key Deliverables

AI interaction patterns · Workflow redesigns · Research synthesis · Design system components · Executive readouts

Team

Product Managers · AI/ML Engineering · Partner Success · Sales · Customer Support

Tools Used

Maze

Marvin

Pendo

MS Copilot

Asio Neon™

Confluence

Jira

Figma

FigJam

03

SCATTERED ai buttons which didn't add value

The Problem

The AI features we were shipping were feature-driven, not user-driven. We were confusing what was technically possible with what was actually useful. The result? Interfaces that pulled MSPs out of their workflow instead of amplifying it.

What Was Broken

Bolted-on AI. Features like Generate, Summarize, and Rewrite were scattered across interfaces with no coherent context — "glitter on a school project," as I described it internally.

Black box recommendations. SideKick™ would surface suggestions without explaining why. When MSPs couldn't understand AI reasoning, they simply ignored it.

Feature bloat. The race to ship AI had created an interface graveyard of features no one asked for and few actually needed. Users were overwhelmed.

Broken trust loop. One bad AI recommendation eroded confidence in all subsequent ones. Without transparency, even accurate AI became background noise.

Business impact. Low adoption meant we were investing heavily in AI with diminishing returns. Partner satisfaction scores were under threat.

🙌🏻

"In the race to add AI, many SaaS products have become bloated — layering automation where it’s not needed and eroding the user experience." - TechCrunch about the AI arms race in enterprise tools.

04

What We Were Chasing and for Whom

Objectives

My framing for the team: AI should feel like a trusted colleague, not a slot machine. Every design decision had to serve the primary persona — the overwhelmed MSP technician managing 200+ endpoints — while also delivering on the business's need for AI product differentiation.

Garrett Black

Technician · MSP Field & Remote

Configure faster · Less code · More output

JOBS TO BE DONE

Configure and customise apps for clients without writing a single line of code

Resolve tickets faster by surfacing the right fix the first time with less resource

Reduce repetitive manual steps across every client environment

Pain points

Context switching across 5+ tools to resolve a single service ticket

No way to know if a fix was already applied to a similar client before

Configuration requires specialist knowledge not always available

MENTAL MODEL AROUND AI

"Show me the next step — don't explain the whole system." Garrett trusts AI that acts like a senior peer suggesting the fix, not a chatbot asking clarifying questions. Skeptical of AI that adds steps rather than removing them.

Jessica Cho

Sales · Account Executive

Close more deals · Surface lost revenue

JOBS TO BE DONE

Identify at-risk deals and revenue gaps before they become lost opportunities

Build accurate, tailored quotes without back-and-forth with technical teams

Understand client ROI clearly enough to justify the deal to any stakeholder

Pain points

CRM and PSA data are always out of sync. Can't trust pipeline view

Lost revenue signals buried in service reports nobody reads in time

Quote customisation requires engineering input, slowing deal cycles

MENTAL MODEL AROUND AI

"Give me the insight, not the data dump." Jessica wants AI that surfaces what matters — which deal to prioritise, which client is at risk — without her having to build the query. Trusts AI recommendations only if she can see the reasoning behind them.

Lorena Santos

Finance · Controller or CFO

Manage cashflow · Actionable cost insights

JOBS TO BE DONE

Forecast cashflow with enough confidence to make staffing and vendor decisions

Identify cost overruns and margin leaks by project, client, and service line

Produce accurate board-ready reports without 2 days of spreadsheet work

Pain points

Financial data lives in PSA, billing, and spreadsheets — never reconciled

Operational cost visibility is always lagging — by the time she sees it, it's too late

Manual reconciliation every month consumes 2–3 full days of analyst time

MENTAL MODEL AROUND AI

"I'll trust the number when I can verify the source." Lorena is the most AI-skeptical persona. She needs explainability and an audit trail before acting on any AI-generated insight. Prefers AI that flags anomalies for her to review over AI that makes decisions autonomously.

Ray Jackson

Owner · MSP Founder & CEO

Grow profitably · Business visibility

JOBS TO BE DONE

See business health — margin, utilisation, client risk — in a single view without chasing reports

Make hiring and investment decisions backed by forward-looking data, not last month's actuals

Grow revenue per client without proportionally growing headcount or complexity

Pain points

Business visibility requires pulling from 4+ disconnected systems — always stale by the time it's ready

Can't see which clients are profitable and which are silently draining margin

Growth decisions made on gut instinct — no AI-assisted scenario modelling available

MENTAL MODEL AROUND AI

"Tell me what I should be worried about before I ask." Ray wants proactive AI — surfacing risks and opportunities before they become problems. Believes AI should make him look smarter to clients and investors, not just faster at admin. The highest ROI expectation of any persona.

05

What Users, Competitors, and Sales Told Us — and What Changed

Research

We ran a mixed-methods research sprint — partner interviews, session recordings, support ticket analysis, and sales feedback sessions. The findings were humbling. We had been designing for what engineering could build, not for what partners actually needed.

68%

Users who doesn’t trust the data used to train the AI are hesitant to adopt it

56%

Users mentioned that its difficult to get what they want out of AI

75%

Users who doesn’t trust the data also believes that AI lacks the information needed for it to be useful

😤

Insight 1: Users - "I don't trust it."

Partners consistently said they avoided AI suggestions after a single bad recommendation. The pattern was clear: trust, once broken, is hard to rebuild — especially in a cybersecurity-adjacent product where a wrong action has real consequences.

Source: Partner interviews · 40+ sessions across all MSP segments

💬

Insight 2: Sales - "They ask what the AI actually does."

Sales reps reported that top objections during demos were about AI transparency. Prospects wanted to see explainability — the "why" behind a recommendation — before considering adoption. This was a pipeline problem, not just a UX problem.

Source: Sales team quarterly feedback sessions

📉

Insight 3: Data - Feature usage was a cliff.

Session data showed users would click AI buttons on first exposure, then never return. Day 1 engagement looked healthy. Day 14 retention was near zero. The experience wasn't building habit — it was burning through curiosity.

Source: Pendo session analysis · 100+ recorded sessions

🖱️

Insight 4: Stakeholders - "We're building too fast."

Internal stakeholders — including Partner Success and Support — flagged that AI errors were generating a spike in support tickets. The velocity of AI feature shipping was outpacing our ability to design safe, recoverable failure states.

Source: Cross Functional Office Hours · Leadership/Product sessions

We interviewed 35+ Partners (Users) as part of the discovery

Early Interaction with Users Helped us Map Out their Frustrations and Pain Points

06

The Decisions That Actually Mattered

Solutions

This is where I want to be honest: we didn't just redesign screens. We made four strategic decisions that shaped the entire direction of AI in ConnectWise's product ecosystem.

Decision 1: AI that doesn't announce itself — coherent AI over flashy AI

We removed standalone "AI buttons" from primary navigation and embedded AI capabilities contextually within existing workflows. The principle: AI should feel like a natural part of the task, not a separate mode to switch into. If you have to notice the AI, it's not working well enough.

STRATEGIC DECISION

USER JOURNEY

Decision 2: Explainability as a core design requirement, not a feature

Every SideKick™ recommendation now surfaces a brief, plain-language rationale. "Why is SideKick suggesting this?" became a design requirement, not a nice-to-have. We piloted a progressive disclosure pattern — a one-line reason, expandable to full context — to avoid cognitive overload while building trust.

HUMAN FACTORS
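The progressive disclosure pattern above can be sketched as data plus a render rule. The shape below is purely illustrative (the field names and the `renderRationale` helper are assumptions, not the actual SideKick™ API): a one-line reason is always present, and the full context renders only when the user expands it.

```typescript
// Hypothetical shape of a SideKick-style recommendation payload,
// illustrating progressive disclosure: one plain-language line by
// default, full context only on demand.
interface Recommendation {
  suggestion: string;
  shortReason: string;   // always visible: a single plain-language line
  fullContext: {         // revealed only when the user expands
    dataSources: string[];
    matchedHistory: string[];
    confidence: number;  // 0..1
  };
}

// Returns only what the current disclosure level should render.
function renderRationale(rec: Recommendation, expanded: boolean): string[] {
  const lines = [`Why: ${rec.shortReason}`];
  if (expanded) {
    lines.push(`Sources: ${rec.fullContext.dataSources.join(", ")}`);
    lines.push(`Similar resolved items: ${rec.fullContext.matchedHistory.length}`);
    lines.push(`Confidence: ${Math.round(rec.fullContext.confidence * 100)}%`);
  }
  return lines;
}

const rec: Recommendation = {
  suggestion: "Apply known RMM agent config fix",
  shortReason: "3 open tickets match a previously resolved error pattern",
  fullContext: {
    dataSources: ["PSA tickets", "RMM telemetry"],
    matchedHistory: ["#3981", "#4012"],
    confidence: 0.91,
  },
};

console.log(renderRationale(rec, false)); // collapsed: the one-line reason
console.log(renderRationale(rec, true));  // expanded: sources, history, confidence
```

The design point the sketch encodes: the expanded detail is additive, so the collapsed view never lies by omission; it is the same rationale, just shorter.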

Decision 3: Control before automation — always give the user an exit

We established a firm design principle: AI never takes autonomous action without explicit user confirmation for high-stakes tasks. For low-stakes automations (e.g., auto-tagging a ticket), we designed an undo trail. For high-stakes ones (e.g., shifting project timelines), we required a deliberate confirm step. Users stay in control.

USER JOURNEY

AI-UX DESIGN
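A minimal sketch of the control principle, under assumed names (the `Stakes` tiers, `executeAction`, and the exact window length are illustrative, not ConnectWise's implementation): low-stakes actions apply immediately but keep an undo trail, while high-stakes actions never run without an explicit confirm.

```typescript
// "Control before automation": the gate that decides whether an
// AI-proposed action runs now, runs with an undo trail, or waits
// for a deliberate user confirmation.
type Stakes = "low" | "high";

interface AIAction {
  description: string;
  stakes: Stakes;
  confirmed?: boolean; // true only after the user's explicit confirm step
}

interface Outcome {
  applied: boolean;
  undoAvailableMs?: number;  // undo window for auto-applied actions
  needsConfirmation?: boolean;
}

const UNDO_WINDOW_MS = 10 * 60 * 1000; // illustrative 10-minute undo window

function executeAction(action: AIAction): Outcome {
  if (action.stakes === "low") {
    // Low-stakes (e.g. auto-tagging a ticket): apply, but keep an exit.
    return { applied: true, undoAvailableMs: UNDO_WINDOW_MS };
  }
  // High-stakes (e.g. shifting a project timeline): never act autonomously.
  if (!action.confirmed) {
    return { applied: false, needsConfirmation: true };
  }
  return { applied: true };
}
```

The asymmetry is the principle: reversibility substitutes for confirmation only when the blast radius is small.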

Decision 4: Design for failure first — not the happy path

We mapped every AI error state before designing the success path. What happens when AI is wrong? What if it's uncertain? We introduced confidence indicators and graceful fallback flows — so when AI fails, users know it, understand why, and can still complete their task without losing trust in the broader system.

UX STRATEGY

USER JOURNEY
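The failure-first routing can be expressed as one small decision function. The 85% floor mirrors the confidence threshold in the journey map later in this case study; the 60% hint threshold and all names here are assumptions for illustration.

```typescript
// Failure-first confidence routing: below the floor the AI stays
// silent, mid-range it admits uncertainty and offers the manual
// path, and only high-confidence matches surface as actionable.
type Surfacing =
  | { kind: "silent" }                          // keep monitoring, no UI
  | { kind: "uncertain"; fallback: string }     // flag it, keep manual flow open
  | { kind: "actionable"; confidence: number }; // show with confidence indicator

const SURFACE_THRESHOLD = 0.85; // surface only at >= 85% confidence
const HINT_THRESHOLD = 0.6;     // illustrative lower bound for "uncertain"

function routeSuggestion(confidence: number): Surfacing {
  if (confidence >= SURFACE_THRESHOLD) {
    return { kind: "actionable", confidence };
  }
  if (confidence >= HINT_THRESHOLD) {
    // Uncertain: say so, and keep the user's normal workflow available.
    return { kind: "uncertain", fallback: "Continue triaging manually" };
  }
  return { kind: "silent" }; // below the floor: no alert, no alert fatigue
}
```

Because every branch returns an explicit state, the UI can always tell the user what the AI knows, what it suspects, and what it is withholding.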

AI-Focused Design Thinking

1

Context

Focus on the real user tasks and friction points instead of imagining where AI might fit. Identify workflows that are repetitive, involve large-scale processing, or need decision support, since these are the ideal candidates for AI enhancement. Conduct generative research such as in-depth interviews and contextual inquiries.

2

Strategy

Map AI opportunities to both product strategy and user journeys. Identify which type of AI model (predictive, generative, classification, etc.) suits the experience best. Validate use-cases with users through concept walkthroughs.

3

Resolution

Co-design the AI-user relationship: is it a guide, a partner, or a doer? Keep interfaces simple and make the AI’s role clear. Avoid “Generate” buttons everywhere. Think about fallback states and explainability early. AI may fail and users deserve transparency.

4

Evaluation

Test for AI explainability, failure recovery, and user mental models. Evaluate onboarding and how easily users grasp the AI's purpose and limitations. Observe whether AI interruptions disrupt flow or genuinely help.

Context-based SideKick™ menu on the Project Management screen in Asio™

Dedicated contextual AI Chat screen in Asio™ — separates generative AI from the rest of the Asio™ pages

"AI Made Contextual"

Current-State Journey Map · SideKick™ AI Adoption · ConnectWise Asio

SideKick™ AI Workday — Three Personas

6

Phases

3

Personas

5

Pain Points

6

Trust Decisions

🙎🏻‍♂️

Garrett Black - Technician

Primary · Resolves 12–18 tickets/day

👩🏻‍💼

Jessica Cho - Sales

Primary · Closes deals, surfaces revenue gaps

💰

Lorena Santos - Finance

Primary · Finance · Cashflow, margins, board reports

DISCOVERY

AI SURFACES

EXPLAINS

DECIDES

EXECUTES

CONFIRMS

1

8:00 AM

Morning Ticket Queue

What GARRETT does

Opens PSA. Manually scans 14 tickets. Prioritises by gut feel — no pattern detection. Three tickets share the same root error but he can't see the connection yet.

"Same drill every morning. Read everything, figure out what's on fire."

Tools ACTIVE

ConnectWise Asio

Teams / Phone

🔁

Start of Day

No unified overview. Day begins with manual reconciliation across tools.

📍

Pain Point 1

3 tickets share a root cause. Garrett resolves them separately — triple the work, zero visibility.

Trust D1

Surface only at ≥85% confidence. Silent monitoring prevents alert fatigue before trust exists.

−40% irrelevant alerts

😐

Neutral

2

8:22 AM

🔔

SideKick™ Surfaces a Pattern

What GARRETT does

An inline banner inside ticket #4712 — not a modal. "SideKick™ sees a pattern across 3 tickets. 91% match to a known RMM agent config issue."

"Wait — it connected these three? I've been looking at them separately all morning."

AI SURFACING

91% confidence · pattern detected

Risk: Low · no session impact

📍

Inline, not modal

Garrett isn't interrupted. The suggestion lives inside his active ticket. Modals provoke reflex dismissal.

Trust D2

Inline, not modal. Garrett stays in context — no interruption, no reflex close.

2.4× engagement rate

😮

Curious

3

8:24 AM

🔦

Opens Reasoning Layer

What GARRETT does

Taps "See reasoning". Panel shows: data sources used, the pattern detected, 2 previously-resolved matching tickets, plain-English explanation of what changes.

"It found the same pattern from November. That actually makes sense."

REASONING PANEL

Source: PSA + RMM telemetry

2 matching resolved tickets

📙

SHOW YOUR WORK

63% of technicians expand reasoning before acting. This is the critical trust bridge between suggestion and action.

Trust D3

Reasoning one tap away. Source, pattern, history — in plain language every time.

63% read before acting

🤔

Building trust

4

8:26 AM

🔍

The Decision Moment

What GARRETT does

Three paths with equal visual weight: Apply now, Schedule, Not relevant. Each shows its 1-line consequence before tapping.

"No dismiss button — and it tells me what happens with each choice."

Tools open

ConnectWise Asio

Teams / Phone

⏱️

Pain Point 2

Previous design had one button with no context. Garrett skipped it every time — zero signal back to the model.

🎛️

3-path choice architecture

Every path feeds the model. "Not relevant" + 1-tap reason = 4× more feedback than silent dismiss.

Trust D4

Three paths replace one button. No dismiss — every choice teaches the model.

4× model feedback

🧐

Deliberating

5

8:27 AM

▶️

Executes the Fix

What GARRETT does

Taps "Apply now". Fix applies to all 3 endpoints in 4 seconds. All 3 tickets update simultaneously. A 10-minute undo countdown begins automatically.

"That just saved me 40 minutes of going into each one separately."

EXECUTION STATE

3 endpoints · fix applied

10 min undo window · live

🔁

Undo window · #1 trust driver

Confidence to act comes from knowing you can undo. Highest-rated feature in usability testing.

Trust D5

Execution receipt + 10-min undo. Act without anxiety about irreversibility.

#1 trust driver · usability testing

😄

Relieved

6

8:35 AM

Confirmation + Loop

What GARRETT does

3 tickets resolved. Outcome card: "3 endpoints fixed · 38 min estimated time saved · Pattern logged." Receipt saved to timeline. Model improves for next match.

"It tells me what it saved. First time any AI tool has shown me its own value."

Tools open

3/3 tickets resolved

Pattern logged · model improves

📊

MAKE THE WIN VISIBLE

Explicit outcome card surfaces value immediately. Builds the case for repeat use and team adoption.

Trust D6

Outcome card surfaces the value — time saved, pattern logged, model learning.

−80% escalations

😊

Confident

Emotional Arc across the day

Queue

AI Fires

Reasoning

Decides (Dip)

Executes

Confirms

AI action / surfacing

Decision point

Delight / trust built

Overwhelming

Feedback loop

The end-to-end interaction flow for a SideKick™ recommendation — from trigger to confirmation to action

07

What Pushed Back Hard

Challenges

Shipping AI-UX features comes with its own challenges: there is no golden industry standard yet, and as UX advocates we were expected to push the boundaries.

⚡️

Shipping velocity vs. design quality

Engineering timelines were aggressive. The pressure to ship AI features before competitors created constant tension between shipping fast and shipping right. I had to negotiate scope and sequence deliverables to protect design quality on trust-critical patterns while allowing velocity on lower-stakes features.

🧩

AI model uncertainty by design

AI outputs are probabilistic, not deterministic. Designing for a system that can be "wrong sometimes" required entirely new design patterns — confidence levels, graceful errors, undo flows — that didn't exist in our current design system and had to be created from scratch.

🔐

Cybersecurity stakes raised the design bar

MSPs work in cybersecurity-adjacent environments where an AI false positive can have serious downstream consequences. Every AI-assisted action carried real risk. Designing for this level of trust and consequence required significantly more research, testing, and iteration than a typical B2B SaaS workflow.

📐

No established AI UX patterns to follow

In 2023, enterprise AI UX best practices were still being written in real time. We couldn't rely on pattern libraries. I had to build original interaction frameworks for explainability, progressive disclosure of AI confidence, and human-AI control handoffs — then document them for scale across the team.

08

What Actually Moved in Numbers

Results

Design speaks in numbers rather than pixels. Here's what the work delivered.

30%

Engagement Increase

We recorded a significant increase in the number of MSPs interacting with AI features

50%

Adoption of AI Features

MSPs were discovering and adopting the new AI features, thanks to the streamlined onboarding

15%

Retention Increase

The long-term value AI added to the user experience helped reduce churn by 18% and increase customer lifetime value by 25%.

Contextual AI Workflow Assistance

Replaced scattered AI buttons with contextual, in-workflow AI assistance → Partners stopped ignoring AI features and started relying on them

New Explainability Patterns

Introduced explainability patterns into SideKick™ → Sales reported reduced AI objections during demos

AI-UX Interaction Pattern Library

Established AI interaction pattern library in Asio Neon™ → Design team could ship new AI features 40% faster with consistent, tested patterns

Asio Neon™ Applied at Enterprise-density Scale

Proved the design system's viability for complex data-heavy UIs and contributed new reusable patterns back to the component library

The Trust Design Mental Model

Shifted the team's mental model: AI design = trust design — this philosophy is now embedded in ConnectWise's product design standards

Based on SideKick™ feature adoption data from Pendo · ConnectWise partner base ~25,000

09

If I Ran a Similar Project Again Tomorrow

What I Learned

In the end, the real challenge wasn’t the AI models or the engineering magic behind them. It was something far more human — designing SideKick™ on the Asio platform to feel like a teammate MSPs could trust. Not just smart, but considerate. Not just powerful, but intuitive.

🎓 6 Lessons That Will Shape Every Complex Product Decision I Make

AI debt is real. Rushing AI features creates compounding trust debt that's 10x harder to pay off than tech debt. Slow down to ship right.

Explainability isn’t optional in enterprise AI. Users in high-stakes environments need the "why" — not just the "what."

Research the mental model first. How users think about AI — not how engineers built it — is the only foundation worth designing from.

AI-UX design is built on trust. The happy path is easy. The error state, the uncertain state, the override state — that's where trust is built or broken.

Sales is a design feedback channel. The objections prospects raise are user insights in disguise. I would involve sales research from day one next time.

Document AI patterns on the go. The team velocity gains from a living AI pattern library far outweigh the short-term cost of documentation.

Want to read the full design story?

The portfolio case study tells the strategic story. The full story on Medium adds bonus tips on shipping AI feature experiences on a complex SaaS platform without established industry guidelines, and on bending a few UX rules along the way.

WORK-AI Integration on Asio

Design Systems

SaaS Platform

Leadership

AI Integration on Asio™ — AI Automation using SideKick™

How I led the strategic redesign of ConnectWise's AI experience — moving from "bolt-on AI features" to a coherent, trusted, human-centered AI system that transformed how 25,000+ MSPs work every day.

AI buttons everywhere. Nobody used them. MSPs losing trust in the product they paid for. We had one shot to fix it.

MY ROLE

UX Leader

PLATFORM

Asio™ / SideKick™

DURATION

8 Months

USERS

25,000+ MSPs

TEAM

Cross-functional

READ TIME

~2 Min ⚡️

⚡ TL;DR — 3 Key Outcomes From This Initiative

✅ 20% efficiency gain for MSP workflows

✅ Measurable lift in AI feature adoption

✅ Reduced cognitive load — AI trust scores up

01

The AI Arms Race Was Leaving Users Behind

Context

By 2023, nearly 80% of SaaS platforms had launched at least one AI feature. ConnectWise was no different. In the rush to stay competitive, AI capabilities — summarize, generate, recommend — were being shipped fast and shipped often. But there was a growing, uncomfortable truth nobody wanted to say out loud: users weren't using them.

MSPs — the IT professionals managing hundreds of client endpoints — were navigating interfaces bloated with AI buttons they didn't understand, didn't trust, and couldn't predict. What was supposed to be our competitive advantage was becoming a liability. We were building what I came to call AI Debt — the accumulated cost of rushed, disconnected AI features that erode user trust over time.

80%

SaaS platforms launched AI by 2023

$ 126 B

Global AI software market target in 2027

15%

Decrease in AI feature adoption because of improper onboarding

2X

AI Feature overload without purpose

Fragmented AI buttons ("Generate", "Summarize", "Rewrite") scattered across the Asio™ (Before)

"The AI Feature Overload"

02

UX Lead — Strategy, Research, Design Direction, and Ship

My Role

This wasn't a solo design project — it was a cross-functional strategy effort. My role was to establish the UX direction, keep it human-centered through constant user evidence, and make sure every AI interaction decision had a clear rationale grounded in research, not assumption.

My Responsibilities

UX Strategy · Design Leadership · User Research · Cross-functional Facilitation · Design System Direction · Stakeholder Alignment

Key Deliverables

AI interaction patterns · Workflow redesigns · Research synthesis · Design system components · Executive readouts

Team

Product Managers · AI/ML Engineering · Partner Success · Sales · Customer Support

Tools Used

Maze

Marvin

Pendo

MS Copilot

Asio Neon™

Confluence

Jira

Figma

FigJam

03

SCATTERED ai buttons which didn't add value

The Problem

The AI features we were shipping were feature-driven, not user-driven. We were confusing what was technically possible with what was actually useful. The result? Interfaces that pulled MSPs out of their workflow instead of amplifying it.

What Was Broken

Bolted-on AI. Features like Generate, Summarize, and Rewrite were scattered across interfaces with no coherent context — "glitter on a school project," as I described it internally.

Black box recommendations. SideKick™ would surface suggestions without explaining why. When MSPs couldn't understand AI reasoning, they simply ignored it.

Feature bloat. The race to ship AI had created an interface graveyard of features no one asked for and few actually needed. Users were overwhelmed.

Broken trust loop. One bad AI recommendation eroded confidence in all subsequent ones. Without transparency, even accurate AI became background noise.

Business impact. Low adoption meant we were investing heavily in AI with diminishing returns. Partner satisfaction scores were under threat.

🙌🏻

"In the race to add AI, many SaaS products have become bloated — layering automation where it’s not needed and eroding the user experience." - TechCrunch about the AI arms race in enterprise tools.

04

What We Were Chasing and for Whom

Objectives

My framing for the team: AI should feel like a trusted colleague, not a slot machine. Every design decision had to serve the primary persona — the overwhelmed MSP technician managing 200+ endpoints — while also delivering on the business's need for AI product differentiation.

Garrett Black

Technician · MSP Field & Remote

Configure faster · Less code · More output

JOBS TO BE DONE

Configure and customise apps for clients without writing a single line of code

Resolve tickets faster by surfacing the right fix the first time with less resource

Reduce repetitive manual steps across every client environment

Pain points

Context switching across 5+ tools to resolve a single service ticket

No way to know if a fix was already applied to a similar client before

Configuration requires specialist knowledge not always available

MENTAL MODEL AROUND AI

"Show me the next step — don't explain the whole system." Garrett trusts AI that acts like a senior peer suggesting the fix, not a chatbot asking clarifying questions. Skeptical of AI that adds steps rather than removing them.

Jessica Cho

Sales · Account Executive

Close more deals · Surface lost revenue

JOBS TO BE DONE

Identify at-risk deals and revenue gaps before they become lost opportunities

Build accurate, tailored quotes without back-and-forth with technical teams

Understand client ROI clearly enough to justify the deal to any stakeholder

Pain points

CRM and PSA data are always out of sync. Can't trust pipeline view

Lost revenue signals buried in service reports nobody reads in time

Quote customisation requires engineering input, slowing deal cycles

MENTAL MODEL AROUND AI

"Give me the insight, not the data dump." Jessica wants AI that surfaces what matters — which deal to prioritise, which client is at risk — without her having to build the query. Trusts AI recommendations only if she can see the reasoning behind them.

Lorena Santos

Finance · Controller or CFO

Manage cashflow · Actionable cost insights

JOBS TO BE DONE

Forecast cashflow with enough confidence to make staffing and vendor decisions

Identify cost overruns and margin leaks by project, client, and service line

Produce accurate board-ready reports without 2 days of spreadsheet work

Pain points

Financial data lives in PSA, billing, and spreadsheets — never reconciled

Operational cost visibility is always lagging — by the time she sees it, it's too late

Manual reconciliation every month consumes 2–3 full days of analyst time

MENTAL MODEL AROUND AI

"I'll trust the number when I can verify the source." Lorena is the most AI-skeptical persona. She needs explainability and an audit trail before acting on any AI-generated insight. Prefers AI that flags anomalies for her to review over AI that makes decisions autonomously.

Ray Jackson

Owner · MSP Founder & CEO

Grow profitably · Business visibility

JOBS TO BE DONE

See business health — margin, utilisation, client risk — in a single view without chasing reports

Make hiring and investment decisions backed by forward-looking data, not last month's actuals

Grow revenue per client without proportionally growing headcount or complexity

Pain points

Business visibility requires pulling from 4+ disconnected systems — always stale by the time it's ready

Can't see which clients are profitable and which are silently draining margin

Growth decisions made on gut instinct — no AI-assisted scenario modelling available

MENTAL MODEL AROUND AI

"Tell me what I should be worried about before I ask." Ray wants proactive AI — surfacing risks and opportunities before they become problems. Believes AI should make him look smarter to clients and investors, not just faster at admin. The highest ROI expectation of any persona.

05

What Users, Competitors, and Sales Told Us — and What Changed

Research

We ran a mixed-methods research sprint — partner interviews, session recordings, support ticket analysis, and sales feedback sessions. The findings were humbling. We had been designing for what engineering could build, not for what partners actually needed.

68%

Users who doesn’t trust the data used to train the AI are hesitant to adopt it

56%

Users mentioned that its difficult to get what they want out of AI

75%

Users who doesn’t trust the data also believes that AI lacks the information needed for it to be useful

😤

Insight 1: Users - "I don't trust it."

Partners consistently said they avoided AI suggestions after a single bad recommendation. The pattern was clear: trust, once broken, is hard to rebuild — especially in a cybersecurity-adjacent product where a wrong action has real consequences.

Source: Partner interviews · 40+ sessions across all MSP segments

💬

Insight 2: Sales - "They ask what the AI actually does."

Sales reps reported that top objections during demos were about AI transparency. Prospects wanted to see explainability — the "why" behind a recommendation — before considering adoption. This was a pipeline problem, not just a UX problem.

Source: Sales team quarterly feedback sessions

📉

Insight 3: Data - Feature usage was a cliff.

Session data showed users would click AI buttons on first exposure, then never return. Day 1 engagement looked healthy. Day 14 retention was near zero. The experience wasn't building habit — it was burning through curiosity.

Source: Pendo session analysis · 100+ recorded sessions

🖱️

Insight 4: Stakeholders - "We're building too fast."

Internal stakeholders — including Partner Success and Support — flagged that AI errors were generating a spike in support tickets. The velocity of AI feature shipping was outpacing our ability to design safe, recoverable failure states.

Source: Cross Functional Office Hours · Leadership/Product sessions

We interviewed 35+ Partners (Users) as part of the discovery

Early Interaction with Users Helped us Map Out their Frustrations and Pain Points

06

The Decisions That Actually Mattered

Solutions

This is where I want to be honest: we didn't just redesign screens. We made four strategic decisions that shaped the entire direction of AI in ConnectWise's product ecosystem.

Decision 1: AI that doesn't announce itself — coherent AI over flashy AI

We removed standalone "AI buttons" from primary navigation and embedded AI capabilities contextually within existing workflows. The principle: AI should feel like a natural part of the task, not a separate mode to switch into. If you have to notice the AI, it's not working well enough.

STRATEGIC DECISION

USER JOURNEY

Decision 2: Explainability as a core design requirement, not a feature

Every SideKick™ recommendation now surfaces a brief, plain-language rationale. "Why is SideKick suggesting this?" became a design requirement, not a nice-to-have. We piloted a progressive disclosure pattern — a one-line reason, expandable to full context — to avoid cognitive overload while building trust.

HUMAN FACTORS

Decision 3: Control before automation — always give the user an exit

We established a firm design principle: AI never takes autonomous action without explicit user confirmation for high-stakes tasks. For low-stakes automations (e.g., auto-tagging a ticket), we designed an undo trail. For high-stakes ones (e.g., shifting project timelines), we required a deliberate confirm step. Users stay in control.

USER JOURNEY

AI-UX DESign

Decision 4: Design for failure first — not the happy path

We mapped every AI error state before designing the success path. What happens when AI is wrong? What if it's uncertain? We introduced confidence indicators and graceful fallback flows — so when AI fails, users know it, understand why, and can still complete their task without losing trust in the broader system.

UX STRATEGY

USER JOURNEY

AI Focused Design Thinking

1

Context

Focus on the real user tasks and friction points instead of imagining where AI might fit. Identify workflows that are repetitive, require large-scale processing, or decision support ideal for AI enhancement. Conduct generative research like in-depth interviews and contextual inquiries.

2

Strategy

Map AI opportunities to both product strategy and user journeys. Identify which type of AI model (predictive, generative, classification, etc.) suits the experience best. Validate use-cases with users through concept walkthroughs.

3

Resolution

Co-design the AI-user relationship: is it a guide, a partner, or a doer? Keep interfaces simple and make the AI’s role clear. Avoid “Generate” buttons everywhere. Think about fallback states and explainability early. AI may fail and users deserve transparency.

4

Evaluation

Test for AI explainability, failure recovery, and user mental models. Evaluate onboarding and how easily users grasp the AI's purpose and limitations. Observe if AI interruptions disrupt flow or genuinely help.through concept walkthroughs.

Context based SideKick Menu on Project Management screen in Asio™

Dedicated contextual AI Chat screen in Asio™ - Separates the Generative AI from rest of Asio™ pages

"AI Made Contextual”

Current-State Journey Map · SideKick™ AI Adoption · ConnectWise Asio

SideKick™ AI Workday — Three Personas

6

Phases

3

Personas

5

Pain Points

6

Trust Decisions

🙎🏻‍♂️

Garrett Black - Technician

Primary · Resolves 12–18

tickets/day

👩🏻‍💼

Jessica Cho - Sales

Primary · Closes deals, surfaces revenue

gaps

💰

Lorena Santos - Finance

Primary · Finance · Cashflow, margins, board

reports

DISCOVERY

AI SURFACES

EXPLAINS

DECIDES

EXECUTES

CONFIRMS

1

8:00 AM

Morning Ticket Queue

What GARRETT does

Opens PSA. Manually scans 14 tickets. Prioritises by gut feel — no pattern detection. Three tickets share the same root error but he can't see the connection yet.

"Same drill every morning. Read everything, figure out what's on fire."

Tools ACTIVE

ConnectWise Asio

Teams / Phone

🔁

Start of Day

No unified overview. Day begins with manual reconciliation across tools.

📍

Pain Point 1

3 tickets share a root cause. Garrett resolves them separately — triple the work, zero visibility.

Trust D1

Surface only at ≥85% confidence. Silent

monitoring prevents alert fatigue before trust exists.

−40% irrelevant alerts

😐

Neutral

2

8:22 AM

🔔

SideKick™ Surfaces a Pattern

What GARRETT does

An inline banner inside ticket #4712 — not a modal. "SideKick™ sees a pattern across 3 tickets. 91% match to a known RMM agent config issue."

"Wait — it connected these three? I've been looking at them separately all morning."

AI SURFACING

91% confidence · pattern detected

Risk: Low · no session impact

📍

Inline, not modal

Garrett isn't interrupted. The suggestion lives inside his active ticket. Modals provoke reflex dismissal.

Trust D2

Inline, not modal. Garrett stays in context — no interruption, no reflex close.

2.4× engagement rate

😮

Curious

3

8:24 AM

🔦

Opens Reasoning Layer

What GARRETT does

Taps "See reasoning". Panel shows: data sources used, the pattern detected, 2 previously-resolved matching tickets, plain-English explanation of what changes.

"It found the same pattern from November. That actually makes sense."

REASONING PANEL

Source: PSA + RMM telemetry

2 matching resolved tickets

📙

SHOW YOUR WORK

63% of technicians expand reasoning before acting. This is the critical trust bridge between suggestion and action.

Trust D3

Reasoning one tap away. Source, pattern, history — in plain language every time.

63% read before acting

🤔

Building trust

4

8:26 AM

🔍

The Decision Moment

What GARRETT does

Three paths with equal visual weight: Apply now, Schedule, Not relevant. Each shows its 1-line consequence before tapping.

"No dismiss button — and it tells me what

happens with each choice."

Tools open

ConnectWise Asio

Teams / Phone

⏱️

Pain Point 2

Previous design had one button with no context. Garrett skipped it every time — zero signal back to the model.

🎛️

3-path choice architecture

Every path feeds the model. "Not relevant" + 1-tap reason = 4× more feedback than silent dismiss.

Trust D4

Three paths replace one button. No dismiss — every choice teaches the model.

4× model feedback

🧐

Deliberating

5

8:27 AM

▶️

Executes the Fix

What GARRETT does

Taps "Apply now". Fix applies to all 3 endpoints in 4 seconds. All 3 tickets update simultaneously. A 10-minute undo countdown begins automatically.

"That just saved me 40 minutes of going

into each one separately."

EXECUTION STATE

3 endpoints · fix applied

10 min undo window · live

🔁

Undo window · #1 trust driver

Confidence to act comes from knowing you can undo. Highest - rated feature in usability testing.

Trust D5

Execution receipt + 10-min undo. Act without anxiety about irreversibility.

#1 trust driver · testings

😄

Relieved

6

8:35 AM

Confirmation + Loop

What GARRETT does

3 tickets resolved. Outcome card: "3 endpoints fixed · 38 min estimated time saved · Pattern logged." Receipt saved to timeline. Model improves for next match.

"It tells me what it saved. First time any AI

tool has shown me its own value."

Tools open

3/3 tickets resolved

Pattern logged · model improves

📊

MAKE THE WIN VISIBLE

Explicit outcome card surfaces value immediately. Builds the case for repeat use and team adoption.

Trust D6

Outcome card surfaces the value — time saved, pattern logged, model learning.

−80% escalations

😊

Confident

Emotional Arc across the day

Queue

AI Fires

Reasoning

Decides (Dip)

Executes

Confirms

AI action / surfacing

Decision point

Delight / trust built

Overwhelming

Feedback loop

The end-to-end interaction flow for a SideKick™ recommendation — from trigger to confirmation to action

07

What Pushed Back Hard

Challenges

Shipping AI-UX features comes with its own challenges as there are no golden industry standard set and as UX advocates, we are expected to break the boundaries.

⚡️

Shipping velocity vs. design quality

Engineering timelines were aggressive. The pressure to ship AI features before competitors created constant tension between shipping fast and shipping right. I had to negotiate scope and sequence deliverables to protect design quality on trust-critical patterns while allowing velocity on lower-stakes features.

🧩

AI model uncertainty by design

AI outputs are probabilistic, not deterministic. Designing for a system that can be "wrong sometimes" required

entirely new design patterns — confidence levels, graceful errors, undo flows — that didn't exist in our current design system and had to be created from scratch.

🔐

Cybersecurity stakes raised the design bar

MSPs work in cybersecurity-adjacent environments where an AI false positive can have serious downstream

consequences. Every AI-assisted action carried real risk. Designing for this level of trust and consequence required significantly more research, testing, and iteration than a typical B2B SaaS workflow.

📐

No established AI UX patterns to follow

In 2023, enterprise AI UX best practices were still being written in real time. We couldn't rely on pattern libraries. I had to build original interaction frameworks for explainability, progressive disclosure of AI confidence, and human-AI control handoffs — then document them for scale across the team.

08

What Actually Moved in Numbers

Results

Design speaks in numbers rather than pixels. Here's what the work delivered.

30%

Engagement Increase

We recorded a significant increase in number of MSPs interacting with AI features

50%

Adoption of AI Features

MSPs were discovering the new AI features and adopting them. Thanks to the streamlined onboarding

15%

Retention Increase

The long-term value AI added to the user experience helped reduce churn by 18% and drove a 25% increase in customer lifetime value.

Contextual AI Workflow Assistance

Replaced scattered AI buttons with contextual, in-workflow AI assistance → Partners stopped ignoring AI features and started relying on them

New Explainability Patterns

Introduced explainability patterns into SideKick™ → Sales reported reduced AI objections during demos

AI-UX Interaction Pattern Library

Established AI interaction pattern library in Asio Neon™ → Design team could ship new AI features 40% faster with consistent, tested patterns

Asio Neon™ Applied at Enterprise-density Scale

Proved the design system's viability for complex data-heavy UIs and contributed new reusable patterns back to the component library

The Trust Design Mental Model

Shifted the team's mental model: AI design = trust design — this philosophy is now embedded in ConnectWise's product design standards

Based on SideKick™ feature adoption data from Pendo · ConnectWise partner base ~25,000

09

If I Ran a Similar Project Again Tomorrow

What I Learned

In the end, the real challenge wasn’t the AI models or the engineering magic behind them. It was something far more human — designing SideKick™ on the Asio™ platform to feel like a teammate MSPs could trust. Not just smart, but considerate. Not just powerful, but intuitive.

🎓 6 Lessons That Will Shape Every Complex Product Decision I Make

AI debt is real. Rushing AI features creates compounding trust debt that's 10x harder to pay off than tech debt. Slow down to ship right.

Explainability isn’t optional in enterprise AI. Users in high-stakes environments need the "why" — not just the "what."

Research the mental model first. How users think about AI — not how engineers built it — is the only foundation worth designing from.

AI-UX design is built on trust. The happy path is easy. The error state, the uncertain state, the override state — that's where trust is built or broken.

Sales is a design feedback channel. The objections prospects raise are user insights in disguise. I would involve sales research from day one next time.

Document AI patterns on the go. The team velocity gains from a living AI pattern library far outweigh the short-term cost of documentation.

Want to read the full design story?

The portfolio case study tells the strategic story. The full story on Medium includes bonus tips on shipping AI feature experiences on a complex SaaS platform without established industry guidelines, and on the UX rules we bent along the way.