Expectations and the design of AI products
What to do when every customer has their own definition of optimal
For most businesses, customers want to know exactly what they are buying and then receive exactly that (plus a little more). This predictability of customer experience builds trust, a critical goal of any up-and-coming brand.
The story is different for artificial intelligence (AI) data products. AI products cannot deliver perfection, or even promise to behave exactly the same way every time. Why? Artificial intelligence is built on statistics, a discipline centered on being right probabilistically: often, but not always. This means AI products and features inherently have a probabilistic value proposition.
Lucky for those of us working in this space, predictability does not require perfection. Predictability is achievable at a much lower bar. We simply need to meet customer expectations — expectations we also have a role in setting.
One of the most powerful ways to set customer expectations around probabilistic outcomes is within the product itself through the design of the UI/UX. Consider Google’s homepage, an internet fixture that has changed little in over two decades. There are two buttons: Google Search and I’m Feeling Lucky.
Google Search has a probabilistic value proposition. When a query is entered, Google responds with a page of 10 or 15 potential search results, hoping that one of them is the webpage the user intended to find.
Let’s consider the other button for a moment: I’m Feeling Lucky. This takes you directly to the Google algorithm’s optimal answer: the single best answer the data can provide. If that’s true, why doesn’t the button say “Optimal Result” instead?
Optimal doesn’t have a single definition across people and across use cases. If you were to type “great food recipes” into Google’s search engine, you likely do not have the same answers in mind as I do. We are not the same person.
Google’s algorithm is optimal on average—it works well for most users, or for Google’s own most important users. However, very few users are exactly average. We each have our own definition of optimal. No matter how good Google’s algorithm is, some percentage of users in some situations will find themselves to be an edge case. From the user’s perspective, Google will appear to get it wrong some of the time: the hallmark of probabilistic value.
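The gap between average accuracy and individual experience can be made concrete with a toy simulation. This is a hypothetical sketch: the user population and the always-pick-the-majority-answer “algorithm” are invented for illustration.

```python
# Hypothetical illustration: an "algorithm" that always returns the
# single most popular answer, scored against users whose preferences vary.
POPULAR_ANSWER = "A"

# Suppose 90 users want "A" and 10 edge-case users want "B".
preferences = ["A"] * 90 + ["B"] * 10

# On average, the product looks excellent.
hits = sum(1 for wanted in preferences if wanted == POPULAR_ANSWER)
print(f"Average accuracy: {hits / len(preferences):.0%}")  # 90%

# But for every user in the minority, the product is wrong every time.
minority_hits = sum(1 for wanted in preferences
                    if wanted == "B" and wanted == POPULAR_ANSWER)
print(f"Hits for edge-case users: {minority_hits}")  # 0
```

The aggregate metric says 90% accurate; the edge-case user experiences 0%. That mismatch between the average and the individual is exactly what expectation-setting has to absorb.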
Instead of promising optimal, Google designers created a game they could win by asking: are you feeling lucky?
Many users will be delighted to find themselves among the lucky ones. Still, at the level of individual interactions and outcomes, where AI cannot promise to be optimal or even right at all, the feature is presented to the user as a game of chance. By setting this expectation, Google has rigged the game into one it can win most of the time.
It is not reasonable to expect perfection from AI. Founders and product designers should take care to set expectations so that customers do not feel let down when, inevitably, an AI algorithm gives a wrong answer or makes a bad decision.
This is just the tip of the iceberg. There are many other ways to use UI/UX and other simple techniques to provide repeatable, reliable value from AI products. The important part is to do it intentionally, so your customers’ expectations are set appropriately.
Originally published at https://www.fundamentally.ai on June 1, 2020.