Chatbots and buttons are a bad idea.
The term Chatbot evolved in the 1990s out of its predecessor, Chatterbot, which had been the prevailing term since Joseph Weizenbaum released his computerized therapist Eliza in the 1960s. It was a clever twist on the word “chatterbox” – a person (usually a kid) who wouldn’t shut up.
The essence of a chatbot is chatting. This verb historically referred to oral conversation, but in recent decades “chatting” has come to refer to written dialogs as much as spoken ones. When the first chatterbots arrived on the scene, a graphical user interface was a distant dream. Inputs and outputs were usually plain text, which kept the written chat very similar to the spoken one.
Things started changing with the emergence of GUIs and high-resolution screens that could display anything. This opened a world of possibilities for the entire user experience, and the Chatbot industry, still in its incubation stage, started adopting graphical features into its early releases. It started with emojis and animations, and then, in order to bypass the biggest challenge in NLU – understanding free user input – developers found an ingenious solution: limit the user’s input options, so they could ONLY say one of three or four possible things. These input options would typically show as buttons, with the input text on them – options like “yes”, “no”, “maybe later”, and “leave this subject”.
Needless to say, this makes the conversation designer’s work SO much easier. What could be better? The Chatbot developer not only provides the Chatbot’s responses but also pre-defines the user’s possible inputs! Using such buttons makes chatbot development a piece of cake. It always reminds me of a voice response system: “For ‘yes’, dial 1. For ‘no’, dial 2. For ‘maybe later’, dial 3…”
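To see just how easy buttons make things, here is a minimal sketch in Python. The dialog-node structure and the helper name are hypothetical illustrations, not any real platform’s API:

```python
# A hypothetical dialog node: the designer pre-defines both the bot's
# prompt AND the complete set of inputs the user may give.
node = {
    "prompt": "Would you like to renew your subscription?",
    "buttons": ["Yes", "No", "Maybe later"],
}

def handle_input(node: dict, user_choice: str) -> bool:
    # With buttons there is no NLU at all: an input is valid only if it
    # exactly matches one of the pre-defined button labels.
    return user_choice in node["buttons"]

print(handle_input(node, "Yes"))          # a pre-defined button
print(handle_input(node, "I'm not sure"))  # free text is simply impossible
```

The whole “understanding” problem collapses into a membership check – which is exactly why this shortcut became so popular.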
A Chatbot that uses buttons may be a bot, but it is hardly a Chatbot. The program “chats”, but the user does not. They operate a machine, making choices. Still, whether it’s called a Chatbot or just a bot, it did the trick: easy to develop, easy to deploy, easy for the users to operate. So far so good.
A similar misuse of the term Chatbot also prevails in another area: the growing population of voice assistants. Products like Siri (by Apple), Alexa (by Amazon), and Cortana (by Microsoft) are perfect examples: they are indeed voice assistants, but definitely NOT Chatbots. They simply cannot CHAT. Try to hold a meaningful conversation with any of them. They can sometimes answer a follow-up question, and in rare cases even another one. But try to refer to something you said a moment ago – forget it…
Back to buttoned Chatbots: the big problem with those creatures is not purism. The problem is this: Chatbots with buttons are not Voice-Ready! As long as a Chatbot keeps its dialogs close to the way people actually speak, any text-based bot is automatically voice-ready. But how would you deploy a Chatbot with buttons in a voice environment? Maybe say to the user: “Pick one of the following options: (1) yes, (2) no, (3) maybe later”?…
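The awkwardness is easy to demonstrate. A quick sketch (the function name is mine, purely for illustration) of the only obvious fallback – reading the buttons aloud, IVR-style:

```python
def buttons_to_voice_prompt(buttons: list[str]) -> str:
    # Turn a list of button labels into a spoken, numbered menu --
    # essentially recreating the old phone-tree experience.
    options = ", ".join(
        f"({i}) {label}" for i, label in enumerate(buttons, start=1)
    )
    return f"Pick one of the following options: {options}."

print(buttons_to_voice_prompt(["yes", "no", "maybe later"]))
# prints: Pick one of the following options: (1) yes, (2) no, (3) maybe later.
```

The result is technically functional, but it is a phone tree, not a conversation.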
Chatbots that use buttons need a lot of work to become Voice-Ready. Every reliance on a visual cue – anything that needs a screen and visual attention – must be abandoned and replaced by a suitable voice solution. This reopens the NLU problem that the buttoneers tried to avoid in the first place.
If you have already developed buttoned Chatbots, you have my sympathies. If you are about to develop one, do yourself a favor and adopt one restriction: use ONLY text. Avoid emojis (unless you want your Chatbot to literally say “smiley with stuck-out tongue and winking eye” to the user at the end of a response).
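If your responses already contain emojis, a safeguard along these lines could strip them before the text reaches a TTS engine. This is a rough sketch, not a complete solution: it covers only a few common Unicode emoji blocks, and the helper name is my own invention:

```python
import re

# Hypothetical helper: remove characters from a few common emoji blocks
# so a text-to-speech engine never tries to read them aloud.
EMOJI_PATTERN = re.compile(
    "["
    "\U0001F300-\U0001FAFF"  # symbols, pictographs, emoticons, etc.
    "\u2600-\u27BF"          # misc symbols and dingbats
    "\uFE0F"                 # variation selector
    "]+"
)

def make_voice_ready(text: str) -> str:
    """Strip emojis and collapse the leftover whitespace."""
    return re.sub(r"\s+", " ", EMOJI_PATTERN.sub("", text)).strip()

print(make_voice_ready("Sure thing! 😜 See you later 👋"))
# prints: Sure thing! See you later
```

Better yet, never put the emojis in at all – then the same response works unchanged on screen and in voice.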
Every Conversational Component on the CoCo Marketplace is buttonless and totally voice ready. Check it out here!