How to Talk to Your Clients About AI
(and Why It’s Really a Conversation About Trust)

By Zoe M.H.


Over the past two years, I have had more conversations about AI than I can count. Not just with engineers or tech enthusiasts, but with people across industries such as healthcare, government, hospitality, and entertainment. These are people who often do complex, invisible work within large, tangled systems.

When AI comes up, it is a coin toss whether I will be met with curiosity and excitement or skepticism and hesitation. However, no matter who I am talking to, the same underlying questions surface: What does this mean for me, my team, my organization, and the people we serve?

As a user experience designer, I see those questions as an opportunity to talk honestly about how we work, where things get stuck, and what people actually need to do their jobs well.

In this article, I will go over how to talk to your clients (real people with thoughts, opinions, and concerns) about AI in a way that builds trust, centers their goals, and respects their expertise. Because in the end, AI only works if the humans in the system feel supported by it, not sidelined.

Throughout, I will use a human-centered lens to share what works and the psychology that explains why.

Start with psychological safety.

If you are introducing AI to a client, stakeholders, or even a coworker, the first step is to create psychological safety: an environment where people feel supported and open to exploring how AI can help them.

Psychological safety, a term introduced by Harvard professor Amy Edmondson, refers to the shared belief among a group that it is safe to express ideas, ask questions, or admit mistakes without fear of negative consequences. (1)

So, before diving in too deeply, take some time to gauge how the person you are talking to feels about AI. Many people worry that AI will replace their jobs, or hesitate because they do not understand the new technology. Reactions like these are normal; we are wired to resist situations that are mentally demanding or uncertain. So ask questions that focus on familiar aspects of how someone works, such as: Which tasks are time-consuming? What problems do you tend to encounter?

These kinds of conversations will naturally help you better understand the person you are talking to and build familiarity with their day-to-day work. That familiarity lays the groundwork for trust. From there, you can help people see AI not as a threat, but as a tool with the potential to make their work easier.

People make decisions emotionally, then justify them with logic.

Behavioral science shows that we tend to make decisions based on emotion first, and then use logic to back them up. (2) So, you have to clearly demonstrate the value of AI. It also helps to understand that most clients aren’t looking for new technology; they are seeking relief from everyday manual processes, unclear next steps, and tasks that drain time and energy from them and their teams.

It’s that emotional weight that AI can help alleviate. Try to reframe the value this way: “This isn’t going to replace your team. It’s going to help them stop getting tied up in low-value tasks so they can instead work on the tasks that actually require their attention.”

People respond well to this because it speaks to job satisfaction, not just efficiency. When the promise of AI is linked to clarity, control, and less stress, it becomes something people can feel good about adopting.

Speak to your client in use cases.

An easy shift you can make in conversations about AI is moving from talking about systems to telling stories. Human-centered design teaches us that people understand problems best through relatable moments, not abstract concepts.

When clients hear about predictive models or generative text tools, they might nod along, but what actually clicks are specific scenarios they can identify with.

For example:

  • “Imagine your team starting their day with five high-risk tickets already flagged and sorted by urgency.”

  • “What if agents opened a ticket and found a first-draft reply already written, ready to review and send?”

Positioning AI through real, familiar scenarios like these helps teams see how it fits into their world as it is right now. You’re helping them imagine a better workday, and that makes the biggest difference.

Design for co-agency.

One of the most important psychological dynamics in AI adoption is control. People will resist tools that make them feel sidelined, but they relax and engage when they feel supported. This reflects a well-studied principle in psychology: perceived autonomy is essential for motivation and satisfaction.

When people believe they still have agency, they are more willing to trust the systems designed to help them. (3)

That is why it is so crucial to design for co-agency. People want to feel important, and it's your job to make sure they understand that has not changed. You might say something like: “AI makes suggestions, but you decide whether a suggestion works for you.”

This framing keeps people in a decision-making role and reinforces that they are still the expert, just with better information or a head start.

When you align AI with existing workflows, you reduce friction and build confidence. You can show teams that AI can handle the noisy, low-level inputs so they can focus on the higher-value work where their expertise matters most.

Get feedback from users.

Consider the elements that make up a good conversation. Is it ongoing? Is the person you're talking to responsive? Can they adapt as the conversation changes? This is how the interaction between people and AI systems should be.

In "Conversational Design," by Erika Hall, she argues that systems should behave more like conversations than broadcasts. They need to respond, adjust, and improve based on real input from users. (4)

In UX, feedback is an essential part of the design process, and just like everything else in design, AI adoption cannot be a one-and-done rollout. Design of any kind thrives when users feel heard and see progress. Iteration that takes user feedback into account and adjusts what isn't working is what builds trust.

Practically, this looks like structuring the AI rollout as a cycle: design, validate, and improve. It also means bringing users into the process as co-creators: ask what's helpful, what's confusing, and where there is still friction. That information will shape what comes next, and when people see their input shaping the system, the system becomes theirs.

Transparency is the key to trust.

Psychologically, many people feel safest when answers are clear-cut and, conversely, uncomfortable in gray areas. Ambiguity is difficult to sit with, and most of us have a sharp eye for pitches that offer more promise than proof. Research shows that individuals with a low tolerance for ambiguity may experience anxiety or discomfort in uncertain situations, which can lead them to resist change. (5)

The answer to this is transparency. Being open and honest is necessary when building trust with people, especially those you're working with. Communicate what AI can and cannot do, and explain the importance of accurate data, regular system tuning, and the indispensable role of human judgment. By doing this, you demystify the technology and build confidence among its users.

It's also just as important to share your own learning journey. At esolutionsONE, we openly discuss our experiences: what strategies have worked, what challenges we have faced, and how we have adapted when things change. Through this openness, we foster stronger collaboration and help clients feel like partners in the process rather than passive recipients of a solution.

When you are honest about limitations and show that you are committed to continuous improvement, trust will naturally follow.

Remember, you’re still in charge.

AI is a really big topic right now, and it makes sense that people feel like it has become the star of the show while everyone else has been cast in a supporting role. So it's important to remind everyone that people are still in charge and making the decisions. AI is here to support your work, not take over.

When you lead conversations about AI with empathy, honesty, and a focus on outcomes, you can help people stop seeing AI as a risk and start seeing it for what it is: a tool.

Sources:

(1) Edmondson, A. C. (1999). Psychological safety and learning behavior in work teams. 

(2) Kahneman, D. (2011). Thinking, Fast and Slow.

(3) Deci, E. L., & Ryan, R. M. (1985). Intrinsic motivation and self-determination in human behavior.

(4) Hall, E. (2018). Conversational Design.

(5) Wikipedia contributors. (n.d.). Ambiguity tolerance–intolerance.