Why conversational UI is important

Jordanohara
Feb 1, 2022

Introduction

Conversational UIs (CUIs) are a new and incredibly exciting user interface paradigm. They offer novel ways of interacting with technology and hold enormous potential to help humans accomplish complex tasks, as well as feel more emotionally connected to the ever-growing world of smart and AI assistants. However, CUIs require an additional layer on top of our current design and development workflow, one that is not yet integrated into our process. In this article we will explore the benefits of CUIs, how they work, and why we should care about them today and beyond.

The power of Virtual Assistants (VAs) and Automated Agents (AAs) lies in their ability to keep humans away from boring, tedious and repetitive tasks. The idea of a VA or an AA is to organize and simplify complex intellectual tasks by providing intuitive and friendly interfaces that free up time and allow users to go back to living their lives the way they want. While we’ve witnessed significant advances in this field over the last few years through omnipresent virtual assistants such as Siri and Alexa, we still have a long way to go before we see this become a reality.

As of today, VAs and AAs are restricted to a narrow scope of commands they can answer and tasks they can complete; their abilities are bounded by how well they are programmed and how smart their creators can make them. However, as technologies such as deep learning and NLP continue to evolve, we will eventually reach a point where VAs and AAs can truly free us from the screen and allow us to accomplish more than just “ordering pizza” or “playing music”.

Microsoft’s Clippy. You either love him or hate him.

The limitations of UI

The current state of Virtual and Automated Assistants is not very different from the old school GUIs we’ve seen throughout modern computing history; for example, when users interact with their VAs they are presented with an interface that follows a sequence of “pages”, or states that change after users complete some predefined action. The only difference is that in the case of VAs and AAs, the sequence is much less rigid and mostly driven by natural language recognition (NLR) techniques combined with machine learning.

However, this presents us with another issue: since our current design and development workflow is tightly coupled to screens, we cannot easily design or develop virtual assistants through existing channels — the task quickly becomes incredibly tedious, repetitive and overwhelming. If we want to truly unlock the power of VA technology, we have to adapt our workflow for this new paradigm.

The rise of Conversational UI (CUIs)

Conversational UIs are a novel user interface paradigm that works in conjunction with Virtual Assistants. While they rely on the same core technology, their differences are important to highlight, as it is the way they are used by humans that makes them stand out from today’s GUIs.

When users interact with a CUI, instead of being presented with an interface that follows a sequence of predefined screens, they face a single interface that talks to them through multiple “utterances”: textual sentences that are fed through NLR algorithms to elicit certain responses from users. For example, instead of ordering a pizza through an online form, the user might be asked “What kind of pizza do you want?”, and it is up to them to phrase their answer in a way the Virtual Assistant understands.
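To make the idea concrete, here is a toy sketch of how a free-form reply might be reduced to a structured response. The function name `parse_pizza_order` and the keyword-matching approach are illustrative assumptions only; real assistants use trained NLR/NLU models rather than keyword lookup.

```python
# Toy utterance-to-intent mapper. This only illustrates the core
# idea: free-form text must be reduced to a structured response
# the assistant can act on.

KNOWN_TOPPINGS = {"pepperoni", "mushroom", "margherita", "veggie"}

def parse_pizza_order(utterance: str) -> dict:
    """Extract a pizza-order intent from a free-form user reply."""
    words = {w.strip(".,!?").lower() for w in utterance.split()}
    toppings = sorted(words & KNOWN_TOPPINGS)
    if toppings:
        return {"intent": "order_pizza", "toppings": toppings}
    return {"intent": "unknown"}  # the VA would re-prompt the user

print(parse_pizza_order("I'd like a pepperoni pizza, please"))
```

If the reply cannot be mapped to a known intent, the assistant falls back to re-prompting, which is exactly the “phrase it in a way the assistant understands” burden described above.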

This, in turn, requires several changes to our workflow if we want to make CUIs as effective as possible: for starters, designers and developers have to become familiar with methods of designing user interfaces that are no longer anchored to screens. In addition, they have to understand the different kinds of interactions users can have with a CUI and the effects each one produces.

Microsoft’s Cortana is one of the VA products that runs on voice commands, but it also has a traditional GUI for systems that don’t support voice input.

The workflow behind CUIs

As you can see in the diagram, designing and developing CUIs goes beyond building the CUI itself; there are several supporting layers to take into consideration. To start, designers have to understand how users will interact with these interfaces and how the NLR algorithms will handle their responses, focusing on the text users produce when speaking to a CUI. Next, developers have to build the actual Virtual Assistant or Automated Assistant that can understand these responses and decide how to act. This is an iterative process: it isn’t about building a CUI once and moving on. Designers and developers should revisit the design of their CUI once the VA or AA has been built, to make sure the two properly complement one another.
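A minimal sketch of the layering described above: the CUI layer emits utterances and collects replies, while a separate agent layer interprets them and decides how to act. The names `interpret` and `DialogManager` are hypothetical, not from any real framework, and the interpretation step stands in for a full NLR pipeline.

```python
# Sketch of the two layers: interpretation (agent) and
# conversation flow (CUI). Keeping them separate is what lets
# designers revisit the CUI after the agent has been built.

def interpret(reply: str) -> str:
    """Stand-in for the NLR step: reduce a reply to an intent label."""
    return "confirm" if reply.strip().lower() in {"yes", "sure", "ok"} else "decline"

class DialogManager:
    """Drives one prompt/response turn between the CUI and agent layers."""

    def __init__(self, prompt: str):
        self.prompt = prompt  # the utterance the CUI presents first

    def turn(self, reply: str) -> str:
        intent = interpret(reply)  # agent layer decides how to act
        if intent == "confirm":
            return "Great, placing your order."
        return "No problem, maybe next time."  # CUI layer response

dm = DialogManager("Shall I order your usual pizza?")
print(dm.turn("yes"))
```

Because the CUI's wording lives only in `DialogManager`, iterating on the conversation design does not require touching the interpretation logic, which mirrors the iterative workflow described above.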

In addition to this, there are several related roles to take into consideration. For example, data scientists play a more important part in the creation of CUIs, as they must use NLR techniques and machine learning algorithms to make sure that users’ responses are interpreted correctly.

Where to go from here

It is important for companies, startups and developers to understand that creating CUIs means moving away from traditional design and development methods, and that they have to adapt their workflow accordingly. If they are unwilling or unable to do this, it will be very difficult for them to get the most out of CUIs and Virtual Assistants.
