This document describes how to use the Embody Digital AI Expert app to create an AI-driven talking avatar from a spreadsheet.
The application allows you to quickly generate a responsive avatar driven by a conversational AI system. Users can ask the avatar questions and receive answers both verbally, through a text-to-speech (TTS) voice, and visually, through an animation of the avatar that includes verbal behavior (lips moving in time with the speech) and nonverbal behavior (arm gestures, facial expressions, head movements, etc.). Users can change the avatar’s face by generating it from a selfie or a photo of someone’s face, and can further customize its appearance (hair, clothes).
1. Download the app
Download the UBeBot Expert AI app from the Google Play Store:
On first launch, the app will download additional data that it needs to run properly. We recommend turning on WiFi to speed up the download. A default female avatar is displayed on startup.
2. Configure your avatar
There are default male and female characters that you may use in the app. To change the avatar, press the ‘Avatar’ button at the bottom of the screen to switch between the male and female avatars. The hair, shirt, pants and shoes of the default avatars can be changed via the ‘Hair & Clothes’ button.
You can create a new avatar based on a selfie or a head shot of someone else. Press the ‘Avatar’ button at the bottom, choose Male or Female, and then take a selfie, or press the photo button in the upper left to retrieve a picture from the mobile device’s photo gallery.
Your photo will be converted into a photorealistic avatar that resembles your face or input photo. This avatar will be given default hair and clothing options, which can in turn be changed by selecting them from the ‘Hair & Clothes’ button.
You can change the background color or use an image for a background by pressing the ‘Background’ button.
3. Create your AI from a spreadsheet
The app uses a conversational AI that you create by entering the questions your avatar will respond to and the answers associated with those questions. For input, we use Google Sheets (Google’s online spreadsheet application). You will need a Google account in order to access Google Sheets.
A template that can be used for this purpose is located here:
Make a copy of this Google Sheet by going to File -> Make a Copy.
Rename your copy.
Replace the questions and answers with your own. You can enter multiple questions for each answer in the same cell by pressing CTRL-ENTER after each question (this inserts a line break within the cell). When the user talks to the app, the AI will find the best match according to both the questions and their associated answers and automatically make the avatar speak and gesture.
The third column is an identifier for the utterance, for later use. You can label it numerically (e.g. 1, 2, 3, 4) or with a descriptive name associated with the answer (e.g. greeting, howmany, howold).
Avoid using any non-alphanumeric characters except for punctuation (periods, question marks, exclamation marks).
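The matching step described above can be sketched in a few lines of Python. This is an illustrative sketch only, not the app’s actual algorithm: the rows mirror the template’s three columns (questions, answer, identifier), the sample rows are made up, and a simple word-overlap score stands in for the real matcher.

```python
# Each row mirrors the spreadsheet: (questions cell, answer, identifier).
# Multiple questions share one cell, separated by line breaks (CTRL-ENTER).
rows = [
    ("hello\nhi there\ngood morning", "Hello! How can I help you?", "greeting"),
    ("how old are you\nwhat is your age", "I was created last year.", "howold"),
]

def best_answer(user_question, rows):
    """Return the answer whose questions share the most words with the input."""
    user_words = {w.strip("?.!,") for w in user_question.lower().split()}
    best, best_score = None, 0
    for questions, answer, _id in rows:
        for q in questions.split("\n"):  # one question per line within the cell
            score = len(user_words & set(q.lower().split()))
            if score > best_score:
                best, best_score = answer, score
    return best

print(best_answer("what is your age?", rows))  # prints: I was created last year.
```

A real matcher would be more robust (synonyms, phrasing variations), but the spreadsheet-to-answer flow is the same idea.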
Your avatar will automatically speak and gesture based on the responses that you give. The avatar is programmed with a set of gestures and movements that correspond to the way most people communicate with each other, including head movements, facial expressions and arm gestures. When you are finished, turn on link sharing for the Google Sheet, which allows AI Expert to access it. Each Google Sheet has a unique ID that can be accessed from the Share button in a browser: press the green Share button in the upper right, then click Get Shareable Link in the dialog box. This turns on sharing and allows that Google Sheet ID to be accessed by the AI Expert application.
4. Connect your avatar to your AI
In order for the avatar to use the AI that you have programmed, you need to link your spreadsheet to the app.
The Google Sheet ID needs to be transferred to the AI Expert app. To do this, download the Google Sheets app from the Google Play Store onto your Android device, open your spreadsheet, then select Copy Link from the three-dot menu; this copies the link, which contains the Google Sheet ID, to the clipboard.
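Share links copied from Google Sheets follow the pattern https://docs.google.com/spreadsheets/d/<SHEET_ID>/edit..., so the ID is the path segment after /d/. The following sketch shows how the ID can be pulled out of such a link; the helper name and the example ID are made up for illustration.

```python
import re

def sheet_id_from_link(link):
    """Extract the spreadsheet ID from a Google Sheets share link, or None."""
    match = re.search(r"/spreadsheets/d/([A-Za-z0-9_-]+)", link)
    return match.group(1) if match else None

link = "https://docs.google.com/spreadsheets/d/1AbC-xyz123/edit#gid=0"
print(sheet_id_from_link(link))  # prints: 1AbC-xyz123
```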
In the UBeBot Expert AI app, press the AI button at the bottom of the screen, paste the copied link into the Google Sheets ID field, and then press the Train AI button.
Press the back button (arrow) in the upper right to return to the main screen. You can then press and hold the microphone button, ask a question, and the AI will find a match to the proper answer, convert it to synthetic speech and automatically animate your avatar.
To make changes to your AI, simply add more questions or answers, press the AI button and touch the Train AI button once again.
5. Try it out!
Press and hold the microphone button on the app, ask a question, then let go of the button. First, your voice will be transcribed into text. Second, the text will be passed to our AI, which returns an answer; the answer is then converted into voice and body language.
You can iterate over your application by modifying your AI in the Google Sheets, pressing the AI button, then Train AI, then asking your question again.
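The loop described above (speech in, transcription, AI match, voice and animation out) can be sketched as follows. The function names transcribe, find_answer and speak_and_animate are hypothetical stand-ins for illustration, not the app’s real API, and the stub bodies are toy implementations.

```python
def transcribe(audio):
    # Stand-in for speech recognition (ASR): pretend the audio is already text.
    return audio

def find_answer(text):
    # Stand-in for the trained AI: a toy lookup table of answers.
    answers = {"how old are you": "I was created last year."}
    return answers.get(text.lower().strip(" ?!."), "Sorry, I don't know that one.")

def speak_and_animate(answer):
    # Stand-in for TTS plus verbal/nonverbal animation.
    print("avatar says:", answer)

def handle_press_and_hold(audio):
    text = transcribe(audio)       # 1. voice transcribed into text
    answer = find_answer(text)     # 2. AI finds the best-matching answer
    speak_and_animate(answer)      # 3. answer becomes voice and body language

handle_press_and_hold("How old are you?")  # prints: avatar says: I was created last year.
```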
6. Adding stress and emotion to your avatar’s responses
Avatar responses can be modified to be more energetic via stress tags, or to be expressed with differing emotion using the emotion tags. In addition, there are a number of special gestures that can be triggered by explicitly indicating those gestures inline in the response utterance.
Each word has a stress value ranging from 0 to 1 (the default is 0). To change the stress of a particular word (for emphasis, for example), you can add the stress markup before the word like this:
 yes, I agree
where the value could be any decimal number between 0 and 1, such as .3 or .7. The stress will continue to affect all following words until changed, so to stress only the first word:
 yes,  I agree
Stressed words will elicit stronger and more frequent nonverbal behavior, such as emphasized facial movements or increased arm gestures.
You can use the following emotional tags:
[angry] [surprised] [sad] [agree] [disagree] [sarcasm] [neutral]
To add emotional tone to words, any of the emotional tags can be added inline to the utterance like so:
I can’t believe you said that.
you said what?
I thought I told you to clean up your room.
It was amazing.
no, that doesn’t sound right to me at all.
That’s a great idea [sarcasm] not!
Like stress values, emotional tags affect each word until changed. You can emphasize only a single word by adding the neutral tag immediately after the word:
excellent! [neutral] now let’s get back to business.
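The carry-forward rule for emotional tags can be illustrated with a short sketch that annotates each word with the most recent tag. Only the tag names come from the list above; the parsing code and the sample sentence are assumptions for illustration, not the app’s internal representation.

```python
import re

EMOTIONS = {"angry", "surprised", "sad", "agree", "disagree", "sarcasm", "neutral"}

def annotate(utterance):
    """Return (emotion, word) pairs, carrying each tag forward until changed."""
    current = "neutral"  # words before any tag are unmarked
    result = []
    for token in re.findall(r"\[\w+\]|\S+", utterance):
        if token.startswith("[") and token.endswith("]") and token[1:-1] in EMOTIONS:
            current = token[1:-1]  # tag changes the emotion for following words
        else:
            result.append((current, token))
    return result

print(annotate("[surprised] you said what? [neutral] tell me again."))
```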
7. Some Features
a. Test AI Mode
You can put the app into an AI testing mode that will show the response as text and will not animate the avatar or produce synthetic speech. This mode is useful if you want to test out your AI quickly. Press the AI button, then turn off the Use text-to-speech? toggle.
b. Use different TTS voices
You can change the TTS voice used. Press the AI button then choose the male or female voice from the voice list.
c. Pregenerate all the voices
By default, the app generates and stores the synthetic voice for each answer as it is needed, which can take a few seconds depending on the speed of your Internet connection. If an answer has already been spoken, its synthetic voice is not regenerated. If you want to generate the voices for all of your answers in advance, which results in a very fast response time for the character, press the GENERATE TTS button. Note that you are limited in the number of voice utterances you can generate. Please contact us at email@example.com to expand this limit.
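The generate-once, reuse-afterwards behavior described here is a simple cache, which can be sketched as follows; synthesize() is a hypothetical stand-in for the real TTS call, not the app’s API.

```python
tts_cache = {}
calls = []  # tracks how often synthesis actually runs

def synthesize(text):
    # Stand-in for the real TTS request, which takes a few seconds.
    calls.append(text)
    return b"audio-bytes-for:" + text.encode()

def get_voice(answer):
    """Return cached audio if this answer was spoken before, else synthesize it."""
    if answer not in tts_cache:
        tts_cache[answer] = synthesize(answer)
    return tts_cache[answer]

get_voice("Hello! How can I help you?")
get_voice("Hello! How can I help you?")  # second call hits the cache
print(len(calls))  # prints: 1
```

Pressing GENERATE TTS effectively fills this cache for every answer up front, so no synthesis delay occurs at question time.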
d. Change your background
You can change the background behind the avatar to a solid color or to use any photo that you have by pressing the Background button.
e. Put your avatar in A/R
You can put your avatar in augmented reality (A/R) by pressing the A/R button, then double tapping on a flat surface in the camera view. The avatar will appear on the surface. You can make the avatar smaller or bigger by pressing the rescale button (upper left, double hoops), using a pinch gesture to resize it, then pressing the rescale button once again.
Example use cases:
A virtual real estate agent in augmented reality.
A virtual doctor that gives non-diagnostic medical information.
An interactive newscaster that lets news viewers ask more in-depth questions about a news topic.
An expert on a particular subject, such as cooking, or an information agent for an event.
How does your platform know how to animate the avatar?
We have a proprietary, patented technology that allows us to transform a line of text into an animated performance that represents how humans communicate to each other through speech and body movement. You can read more details in our blog that describes this technology applied to a human-like robot: https://medium.com/@ariyshapiro/making-robots-more-human-like-through-nonverbal-behavior-a617e4fe3bcd?source=friends_link&sk=0aa526a81a626226792e7c1c97e7675c
What platforms does it run on?
Currently the user application runs on Android devices. You can, however, access and change the AI via Google Sheets on any desktop or mobile platform. It is also possible to run the application on a desktop or kiosk. Please contact us at firstname.lastname@example.org if you are interested in this.
Is there an iOS version available?
Not yet, but we plan on releasing one in the near future.
Can this run in VR?
Potentially, yes. Many of the popular VR platforms use Android. Please contact us at email@example.com if you are interested in this.
Do I need Internet connectivity for the app to work?
Yes, the Internet is needed for Google’s speech recognition (ASR) to create a transcription of what is spoken by the user of the app. The Internet is also needed to retrieve the synthesized voice files from the text-to-speech (TTS) system.
Can the app be connected to other conversational AI systems such as Google’s Dialogflow, Amazon’s Lex, Microsoft’s Conversational AI Engine or IBM’s Watson?
Potentially, yes. The avatar system uses voice as input and outputs a talking, animated avatar based on the response that is given. It is possible to use another vendor’s AI system instead of our own. Please contact us at firstname.lastname@example.org if you are interested in this.
Can this be used in video games?
Yes, the technology can be used for video games. Please contact us at email@example.com if you are interested.
Is there any cost to use the platform?
No, the platform is currently free.
I’d like to use this capability in my own app/platform/device. Can I do this?
Yes! Please contact us at firstname.lastname@example.org to discuss it!
What kind of applications can I build?
Anything that you could build with a chatbot!