Using Human Relationships to Build Trust in AI

By Hagar Baruch, Senior UX Associate

Artificial Intelligence (AI) opens new possibilities for the design of human experiences. AI can create knowledge, anticipate needs, predict outcomes, and offer solutions in ways no other technology can. But, as with other advances in automation, consumers must first trust AI as a collaborator in the experience. Without designing for trust, companies risk lower adoption of their AI-powered offerings.

Recent studies show that 82% of U.S. adults lean toward concern over excitement about incorporating AI into their daily lives (Pew Research 2022). So how do we get customers to trust AI as a collaborator? First we must examine the nature of trust in human experience and its associated design attributes. In human-to-human interactions, trust is the willingness to be vulnerable to the outcomes of someone else’s decisions and actions.¹


Consider the nerve-racking process of bringing in a new hire. The first few weeks require a little hand-holding to show them the ropes of your company’s tools and culture, but as they absorb new information and begin to make fewer and fewer errors, you start to have confidence in their ability. In no time, you stop seeing them as the new hire and start seeing them as a trusted peer.


Just like trust in human relationships, trust in AI forms from three fundamental experience drivers:

  • The trustor’s trust propensity: how trusting someone is, or how willing they are to rely on another
  • The trustee’s trustworthiness: demonstrated by properly completing a goal
  • Continuation of the trust cycle: the consistency required to build a foundation of trust

We must first overcome the fact that many people simply do not trust AI as a collaborator. AI must prove itself trustworthy. To design new and improved customer experiences, companies must overcome this barrier and develop AI that earns human utilization and cooperation through its trustworthiness.

Trustworthiness boils down to three factors:²

  • Ability: the means or skill required to do something
  • Benevolence: possessing good intentions
  • Integrity: operating with honesty and strong moral principles

The first step in overcoming low trust propensity and building trustworthiness is showcasing what a tool can and cannot do: its ability. This step is imperative, as most users tend to project their own assumptions onto an experience, or form those perceptions from early interactions, the “first moments of truth” in the experience.

Users may approach a new technology with preconceived notions about both its abilities and its intentions or purpose. For example, if a user approaches a tool expecting it to be malicious, no matter how the tool acts, its actions will be misconstrued that way.³ We can eliminate the gray area of assumptions by stating clearly what the tool does and does not do (e.g., “This chatbot is designed to provide medical advice, not replace your doctor”) or how the tool will provide information (e.g., “This tool will contact an outside party to help with your needs. It is not designed to formulate solutions alone.”).


A user’s perception of a tool’s benevolence and integrity is also affected by the tool’s social interaction design. A tool that is impolite or does things the user considers morally wrong may be viewed in a negative light and even tossed aside. If a tool does something that comes off as extreme, it may be written off as untrustworthy, making trust impossible to repair.⁴ To combat this robotic faux pas, companies must construct tools with social interaction and the ethics of the end user in mind.

So just like a human peer or collaborator, AI must be transparent about its capabilities, follow the ethics of the environment in which it is launched, and work efficiently enough to provide value to its human counterpart.


Today, as companies seek to launch their own AI-powered solutions, both internally and to their customers, they must first design those solutions for trust. Much like a new hire who proves themselves trustworthy and dependable, AI must prove itself trustworthy through its actions. Keeping in mind the foundations of human-to-human trust development, and applying those same principles to AI, is the key to overcoming public apprehension and pushing human-AI collaboration to the next stage.

Stay tuned for more on UX principles and designing AI-powered experiences here at Lextant and on our new podcast, Seriously Curious: all things UX for business, strategy, and design.



  1. Mayer, Davis, & Schoorman, 1995
  2. Colquitt et al., 2007
  3. Pataranutaporn et al., 2023
  4. Schelble et al., 2022