The user experience with large language models depends on prompting. It's not surprising that many users turn to ChatGPT and other models for search: it's what they're familiar with, and, having heard that ChatGPT is trained on everything, they expect to find answers from it that they might otherwise look for on Google. But language models aren't search engines, and "learning" from everything doesn't make them capable of retrieving it on demand.
To date, most research and design around language models has followed the monological paradigm of prompt and response: models are trained and designed to "understand" and respond to the user's prompt as given. The AI assumes that the user's intent is well represented in the prompt.
The conversational capabilities of AI, however, make it suitable for more engaging and interesting interactions. Rather than assume the user has made an explicit, coherent, and complete request for information, the model could engage the user to "learn" more about their interest. Using bidirectional prompting, the model would prompt the user in turn: what lies behind the interest, what it assumes, what it is connected to, motivated by, or oriented towards. We haven't had automated technologies of this kind before, and so we lack design concepts and methods for engaging user intents in such an implied fashion. We design for explicit intents, not implicit ones (though, given connected signals, content and advertising networks do stitch together derived intents).
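A minimal sketch of what bidirectional prompting might look like in code, under loose assumptions: the probe questions, the `elicit_intent` helper, and the scripted user are all hypothetical, and a real system would generate its follow-up questions from the model itself rather than a fixed list. The point is only the shape of the loop: the model prompts the user back before answering.

```python
# Hypothetical bidirectional-prompting loop: before responding, the
# assistant elicits the implicit intent behind the user's prompt.

def elicit_intent(prompt, ask_user, max_questions=2):
    """Enrich a bare prompt by prompting the user back (sketch only)."""
    # Illustrative probe questions; a real system would derive these
    # from the prompt via the model, not a hard-coded list.
    probes = [
        f"What draws you to '{prompt}' right now?",
        "Is this for a decision, a project, or just curiosity?",
    ]
    context = {"prompt": prompt, "clarifications": []}
    for question in probes[:max_questions]:
        answer = ask_user(question)  # the model prompts the user, not vice versa
        context["clarifications"].append((question, answer))
    return context

# Scripted stand-in for a live user, for demonstration.
def scripted_user(question):
    return "I'm planning a trip"

ctx = elicit_intent("best hiking boots", scripted_user)
print(ctx["clarifications"])
```

The enriched `ctx` dict, rather than the raw prompt alone, would then condition the model's eventual response.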
Long-form and open dialog is a challenge for language models for a number of reasons, including topic drift, memory, long-context scaling issues, and more. Models lose track of the conversation, become misdirected by their own hallucinations, and, because they lack any ground truth or structure, fail to make relevant connections. But I think this can be partially solved, because the model need only spark the user's interest. Given the capability for continuing open dialog, language models should be able to sustain interesting conversation with users by navigating topical connections supplied by RAG, knowledge graphs, fine-tuning, reasoning and search, or some other method.
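To illustrate one of the grounding methods mentioned above, here is a toy sketch of steering dialog with a small knowledge graph. The graph data and helper name are invented for illustration; in practice the edges would come from a real knowledge base or retrieval layer. The idea is that the model offers adjacent topics from known connections rather than hallucinating links or drifting.

```python
# Toy topic graph (illustrative data only): edges are known topical
# connections a conversational agent could navigate.
TOPIC_GRAPH = {
    "hiking": ["trail running", "national parks", "gear"],
    "gear": ["boots", "backpacks"],
    "national parks": ["conservation", "photography"],
}

def next_topics(current, visited):
    """Suggest unvisited neighbors of the current topic as conversational moves."""
    return [t for t in TOPIC_GRAPH.get(current, []) if t not in visited]

visited = {"hiking"}
suggestions = next_topics("hiking", visited)
print(suggestions)  # ['trail running', 'national parks', 'gear']
```

Because suggestions come only from graph edges, the conversation can wander without losing structure: each move is grounded in an actual connection.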
I don't think we know yet what these models can do, and we certainly don't know what users will do, and will want to do, as they become more familiar with them. I think interestingness, or an AI's ability to engage in interesting conversation, will gain prominence as an aspect of user experience with generative AI. We have been satisfied with utility and use value, with "needs," for too long, as if users knew what their needs were when commencing a search or engaging with social media. Users often don't have a need, a goal, or an objective, but are open to engagement when something interesting captures their attention. AIs will be capable of meeting this inclination, the more we learn to map and conceptualize interestingness.