8 Essential Guidelines for Creating a Successful Human-AI Experience

Cover image: seventeen multicoloured post-it notes on a whiteboard, each with a hand-drawn sketch answering the prompt “AI is….”

In today’s world, whether we are interacting with AI systems for the first time or the hundredth, these tools are designed to provide a smooth, effortless user experience in which users naturally understand and navigate the tool. For that to happen, users must be able to identify what the AI can and cannot do and set the right expectations. By doing so, they get the most out of the AI tool and build trust, which is essential for a positive experience.

The analysis of 14 commercial AI tools highlighted the essential factors for creating a great user experience with AI. These factors fall into three main themes: 

  1. System Capabilities: Understanding the strengths and limitations of the AI system.
  2. Trust: Building user confidence in the reliability and consistency of the AI. 
  3. User Feedback: Leveraging user feedback to continuously refine and improve the experience. 

These insights will help you develop clear and relevant UX interfaces, ensuring a great user experience with AI.

A. System Capabilities

First, system capabilities encompass the AI’s role and the full range of features it offers. It’s crucial to set the right expectations from the start, so users know what to do and, more importantly, how to do it.

Here are four guidelines to help you effectively convey system capabilities through your interface.

  1. Describe the Role of AI in the User Experience  

From the very first interaction, the AI should inform users of its role and the service it provides. The best way to do this is by placing a concise, clear statement on the homepage of your interface that quickly explains what your AI does (Figure 1).

Figure 1: Introduce the role of AI by briefly highlighting its knowledge (Detangle).

It’s important to keep this statement simple and straightforward to encourage users to read and understand it easily. 

  2. Present the AI as an Assistant

Many tools introduce AI as an assistant when first presented to users. The assistant concept is used to emphasize collaboration, helping users understand that the AI is there to simplify their tasks and provide additional support (Figure 2).

However, it’s important to ensure that users don’t have unrealistic expectations of the assistant to avoid frustration.

Figure 2: The homepage is the perfect place to describe the AI’s role as an assistant (Chatspot). 

To achieve this, it’s crucial to clearly specify the assistant’s area of expertise.

  3. Showcase All AI Features and Encourage Testing

It’s essential for users to be aware of AI’s capabilities. The best way to highlight these features is by presenting them as tools or features within your interface (Figure 3), ensuring they are clearly visible.  

Figure 3: Quickly see and understand what the AI can do (Chatspot).

For chat-based AIs, like Google Gemini or Microsoft Copilot, offering sample prompts or questions is an effective strategy. This not only helps users discover the AI’s features but also encourages them to try them out (Figure 4).  

Figure 4: Prompt examples demonstrate the tasks AI can perform (Microsoft Copilot). 

The more users experiment with these features, the better they’ll understand how to interact effectively with your AI. 
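
As a rough illustration, here is a minimal sketch of this pattern in a React/TypeScript interface: clicking a sample prompt pre-fills the chat input so users can try a capability in a single click. The component name, prompt wording, and markup are hypothetical assumptions, not taken from any of the tools mentioned above.

```tsx
import { useState } from "react";

// Hypothetical sample prompts; the wording should reflect your own AI's feature set.
const SAMPLE_PROMPTS = [
  "Summarize this week's team meeting notes",
  "Draft a follow-up email to a client",
  "Explain this error message in plain language",
];

export function PromptSuggestions() {
  const [draft, setDraft] = useState("");

  return (
    <div>
      {/* Clicking a suggestion pre-fills the input, so users can test a feature in one click. */}
      <div role="list">
        {SAMPLE_PROMPTS.map((prompt) => (
          <button key={prompt} role="listitem" onClick={() => setDraft(prompt)}>
            {prompt}
          </button>
        ))}
      </div>
      <textarea
        value={draft}
        onChange={(e) => setDraft(e.target.value)}
        placeholder="Ask me anything…"
      />
    </div>
  );
}
```

Pre-filling the input rather than sending the prompt immediately keeps the user in control of the final request while still lowering the barrier to experimentation.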

  4. Rely on Familiar Interfaces to Facilitate Understanding

We are already accustomed to certain interface models and designs. For instance, web browsers and messaging apps on smartphones (such as SMS, Messenger, or WhatsApp) share common interface elements that users know well.

This approach helps users save time when getting started with the AI and can assist them in understanding what they can and should do.

Figure 5: Perplexity’s homepage is intentionally designed to resemble a search engine (Perplexity, Google).

For example, Perplexity intentionally drew inspiration from a search engine to indicate that it helps users conduct internet searches (Figure 5).

B. Trust

It’s essential to empower users to trust the results generated by AI. While understanding the AI’s capabilities helps users know what it can do, they also need to determine if the AI’s output is relevant and reliable for them.

To achieve this, references to sources are crucial, as they serve as evidence and a trust-building argument for the user.

Here are two key guidelines to effectively incorporate trust into your interface.

  5. Display and Provide Access to Sources

After generating a response, the AI should display the sources or data it used. This helps users understand the foundation upon which the answers were built (Figure 6).

If the user wants more information, they should have the option to view the full source.

Figure 6: Provide access to sources and in-context references (Perplexity).

However, it’s important not to clutter the interface and only show sources when they are relevant to the user’s needs.
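
One way to structure this is sketched below in React/TypeScript. The `Source` and `AnswerWithSources` types are illustrative assumptions, not any particular tool’s API: each answer carries its supporting sources, each source links to the full document and shows a short excerpt, and the list is only rendered when there is something relevant to show.

```tsx
// Hypothetical shape of an AI answer together with its supporting sources.
interface Source {
  id: string;
  title: string;
  url: string;     // link to the full document for users who want more detail
  snippet: string; // short excerpt actually used to build the answer
}

interface AnswerWithSources {
  text: string;
  sources: Source[];
}

export function Answer({ answer }: { answer: AnswerWithSources }) {
  return (
    <article>
      <p>{answer.text}</p>
      {/* Only render the source list when there is something relevant to show,
          to avoid cluttering the interface. */}
      {answer.sources.length > 0 && (
        <ol aria-label="Sources">
          {answer.sources.map((source) => (
            <li key={source.id}>
              <a href={source.url}>{source.title}</a>
              <blockquote>{source.snippet}</blockquote>
            </li>
          ))}
        </ol>
      )}
    </article>
  );
}
```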

  6. Make It Easy for Users to Read and Understand the Sources They Are Viewing

Sometimes, the sources provided contain an overwhelming amount of information, requiring significant time and effort to read and analyze.

In these cases, the system can be helpful by summarizing the document for the user. For instance, a legal AI might precisely describe the legal source by providing a concise summary.

If the AI doesn’t directly summarize the source, it can highlight the specific section of the document used to generate the response (Figure 7).

Figure 7: The tool highlights in green the section of the source the AI used to generate the response (Lefebvre Sarrut).

The key is to simplify reading and information retrieval within the sources, making them less complex and time-consuming for the user.
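
A possible implementation of the highlighting is sketched below, assuming the retrieval layer can return character offsets for the passage it actually used. The component name and the `CitedSpan` type are hypothetical.

```tsx
// Hypothetical reference to the part of a source document the AI actually used:
// the retrieval layer is assumed to return character offsets into the source text.
interface CitedSpan {
  start: number;
  end: number;
}

export function SourceViewer({ text, cited }: { text: string; cited: CitedSpan }) {
  const before = text.slice(0, cited.start);
  const highlighted = text.slice(cited.start, cited.end);
  const after = text.slice(cited.end);

  return (
    <p>
      {before}
      {/* The highlighted span shows exactly which passage supported the answer. */}
      <mark>{highlighted}</mark>
      {after}
    </p>
  );
}
```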

C. User Feedback

User feedback allows individuals to share their thoughts on the quality of the system with designers and developers.

Here, we’ll explore two essential guidelines that ensure users can leave feedback without spending too much effort or time.

  7. Design a Feedback System that is Simple Yet Specific

To create an effective feedback system, start by asking yourself these key questions as a designer:

  • At what point in the experience do I need user feedback?
  • What specific information do I need?
  • What is the simplest way for my user to provide this information?

By considering these questions, you can design a feedback system that is more targeted and increases the likelihood of users responding. To make the process easy for users, minimize the number of clicks required. Use simple, relevant words that align with the criteria the user is selecting, and pair them with pictograms to enhance understanding (Figure 8).

Figure 8: Simple and precise feedback options tailored to the question (Perplexity).

This approach will make it significantly easier for users to provide feedback.
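
To make this concrete, here is a minimal React/TypeScript sketch of such a feedback flow: one click records a positive rating, a second optional click lets the user pick a predefined reason, and a short closing message thanks them and states the impact of their input (anticipating the next guideline). The component, types, and wording are illustrative assumptions.

```tsx
import { useState } from "react";

// Hypothetical feedback payload: a binary rating plus an optional, predefined reason.
type Rating = "helpful" | "not-helpful";
const REASONS = ["Inaccurate", "Out of date", "Too long", "Off topic"];

export function AnswerFeedback({
  onSubmit,
}: {
  onSubmit: (rating: Rating, reason?: string) => void;
}) {
  const [rating, setRating] = useState<Rating | null>(null);
  const [done, setDone] = useState(false);

  if (done) {
    // Thank the user and explain the impact of their input.
    return <p>Thank you! Your feedback helps improve future answers.</p>;
  }

  if (rating === null) {
    // First click: a single binary choice keeps the effort minimal.
    return (
      <div>
        <button onClick={() => { onSubmit("helpful"); setDone(true); }}>👍 Helpful</button>
        <button onClick={() => setRating("not-helpful")}>👎 Not helpful</button>
      </div>
    );
  }

  // Optional second click: short, predefined reasons worded to match the question.
  return (
    <div>
      {REASONS.map((reason) => (
        <button
          key={reason}
          onClick={() => { onSubmit("not-helpful", reason); setDone(true); }}
        >
          {reason}
        </button>
      ))}
    </div>
  );
}
```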

  8. Communicate the Impact of the Feedback

It’s crucial to communicate the impact of feedback both before and after it is given. Use simple, encouraging phrases and display them at the right moments.

For example, after finishing a conversation with the AI, you could display a message like, “Help me improve—what did you think of this interaction?” followed by the feedback form.

Additionally, it’s important to thank the user for taking the time to provide feedback, emphasizing how their input helps improve the system. If possible, show them how their feedback will influence future developments.

Conclusion: The Most Important Takeaways

These guidelines, divided into three main themes, are crucial for providing an excellent user experience.

First, remember that system capabilities are key to helping users understand the AI’s role and features, ensuring their expectations align with what the AI can actually do.

Trust involves making sure users know whether they can rely on the results by clearly presenting sources. It’s important that users don’t place blind trust in the AI but instead adjust their trust based on the AI’s demonstrated capabilities.

Finally, when it comes to user feedback, the focus should be on creating a simple and encouraging feedback process, prompting users to share their thoughts and thanking them for their input.
