Building voice commands

Let's get started with building our voice assistant that we can use to find the weather and turn on/off lights. Because we enabled the weather domain while setting up our Houndify account, we need to add custom commands to turn on/off lights:

  1. On your Houndify dashboard, go to your client's home page (Dashboard | Click on your client).
  2. Locate Custom Commands on the navigation bar to the left. Let's add one custom command each to turn the light on and off.
  3. Delete ClientMatch #1, which comes as a template with the custom commands.
  4. Select Add ClientMatch to add a custom command that turns on the lights. Populate the fields with the following information:
  • Expression: ["Turn"].("Lights").["ON"]
  • Result: {"action": "turn_light_on"}
  • SpokenResponse: Turning Lights On
  • SpokenResponseLong: Turning your Lights On
  • WrittenResponse: Turning Lights On
  • WrittenResponseLong: Turning your Lights On
  5. Repeat the preceding steps to add a command to turn off the lights.

Test and verify that these commands work using sample_wave.py. Make your own recording for the test. We have also provided audio files along with this chapter's download (available in the folder audio_files).

Let's make a copy of sample_wave.py to build our assistant. We recommend reading through the file and familiarizing yourself with how it works. Detailed documentation for the Houndify SDK is available at https://docs.houndify.com/sdks/docs/python:

  1. In sample_wave.py, the StreamingHoundClient class is used to send audio queries, such as requests for weather information and commands to turn the lights on/off.
  2. The MyListener class inherits the HoundListener class (from the houndify SDK).
  3. The MyListener class implements callback functions for three scenarios:
  • Partial Transcription (the onPartialTranscript method)
  • Complete Transcription (the onFinalResponse method)
  • Error State (the onError method)
  4. We need to make use of action intents to turn the lights on/off using voice commands.
  5. When we implemented the custom commands on the Houndify website, we added an action intent for each command. For example, the action intent for turning on the lights was:
        {
            "action": "turn_light_on"
        }
  6. In order to turn the lights on/off based on the received action intent, we need to import the OutputDevice class from gpiozero:
       from gpiozero import OutputDevice
  7. The GPIO pin that controls the light is initialized in the __init__ method of the MyListener class:
        class MyListener(houndify.HoundListener):
            def __init__(self):
                self.light = OutputDevice(3)
  8. On completing the transcription, if an action intent is received, the lights are turned either on or off. This is implemented as follows:
        def onFinalResponse(self, response):
            if "AllResults" in response:
                result = response["AllResults"][0]
                if result["CommandKind"] == "ClientMatchCommand":
                    if result["Result"]["action"] == "turn_light_on":
                        self.light.on()
                    elif result["Result"]["action"] == "turn_light_off":
                        self.light.off()
response is a dictionary that contains the parsed JSON response. Refer to the SDK documentation, and try printing the response yourself to understand its structure.
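To get a feel for that structure, here is a minimal sketch of pulling the action intent out of a parsed response. Note that sample_response below is a simplified, hypothetical stand-in built only from the fields used in the snippet above; the real response contains many more fields, so do print it yourself to confirm:

```python
# Simplified, hypothetical response dictionary -- only the fields
# the handler above actually reads. The real Houndify response is
# much larger.
sample_response = {
    "AllResults": [
        {
            "CommandKind": "ClientMatchCommand",
            "Result": {"action": "turn_light_on"},
            "SpokenResponse": "Turning Lights On",
        }
    ]
}

def extract_action(response):
    """Return the custom-command action string, or None."""
    if "AllResults" not in response:
        return None
    result = response["AllResults"][0]
    if result.get("CommandKind") != "ClientMatchCommand":
        return None
    return result.get("Result", {}).get("action")

print(extract_action(sample_response))  # -> turn_light_on
```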
  9. We also need to announce the voice assistant's actions while turning the lights on/off. We explored different text-to-speech tools, and they sounded robotic compared with off-the-shelf products such as the Google Home or Amazon Echo. Then we came across a script at http://elinux.org/RPi_Text_to_Speech_(Speech_Synthesis) that makes use of Google's text-to-speech engine.
Because the script makes use of Google's text-to-speech engine, it needs an Internet connection to fetch the synthesized audio.
  10. Create a new shell script from the Raspberry Pi's command-line terminal:
              nano speech.sh
  11. Paste the following contents:
               #!/bin/bash
               say() { local IFS=+;/usr/bin/mplayer -ao alsa -really-quiet -noconsolecontrols "http://translate.google.com/translate_tts?ie=UTF-8&client=tw-ob&q=$*&tl=En-us"; }
               say $*
  12. Make the file executable:
              chmod u+x speech.sh
  13. We are going to make use of this script to announce the assistant's actions. Test it from the command line as follows:
              ~/speech.sh "Hello, World!"
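The `local IFS=+` trick in the script joins the words of the message with `+` signs so that they can be embedded in the URL's query string. A rough Python equivalent of building that URL is sketched below; the tts_url helper is our own illustration (the assistant itself simply shells out to speech.sh):

```python
from urllib.parse import quote_plus

def tts_url(message, lang="En-us"):
    """Build a translate_tts URL like the one speech.sh requests.

    quote_plus() percent-encodes the message and replaces spaces
    with '+', mirroring the IFS=+ word-joining in the shell script.
    """
    return (
        "http://translate.google.com/translate_tts"
        "?ie=UTF-8&client=tw-ob"
        f"&q={quote_plus(message)}&tl={lang}"
    )

print(tts_url("Turning Lights On"))
```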
  14. The system calls that announce the voice assistant's actions are implemented as follows (this requires importing the os module at the top of the file):
               if result["Result"]["action"] == "turn_light_on":
                   self.light.on()
                   os.system("~/speech.sh Turning Lights On")
               elif result["Result"]["action"] == "turn_light_off":
                   self.light.off()
                   os.system("~/speech.sh Turning Lights Off")
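Putting the pieces together, the final-response handling can be sketched as a small dispatch table. This is an illustrative restructuring, not the chapter's exact code: LightStub stands in for gpiozero's OutputDevice so the logic can be exercised without Raspberry Pi hardware, and speak defaults to print instead of the os.system call to speech.sh:

```python
class LightStub:
    """Stand-in for gpiozero.OutputDevice, for testing off the Pi."""
    def __init__(self):
        self.is_on = False
    def on(self):
        self.is_on = True
    def off(self):
        self.is_on = False

# Maps the action intents defined on the Houndify dashboard to
# (light method name, spoken announcement) pairs.
ACTIONS = {
    "turn_light_on": ("on", "Turning Lights On"),
    "turn_light_off": ("off", "Turning Lights Off"),
}

def handle_final_response(response, light, speak=print):
    """Dispatch a parsed Houndify response to the light.

    In the real assistant, speak would be something like:
    lambda text: os.system('~/speech.sh ' + text)
    """
    if "AllResults" not in response:
        return None
    result = response["AllResults"][0]
    if result.get("CommandKind") != "ClientMatchCommand":
        return None
    action = result.get("Result", {}).get("action")
    if action in ACTIONS:
        method, announcement = ACTIONS[action]
        getattr(light, method)()   # call light.on() or light.off()
        speak(announcement)
        return action
    return None

light = LightStub()
handle_final_response(
    {"AllResults": [{"CommandKind": "ClientMatchCommand",
                     "Result": {"action": "turn_light_on"}}]},
    light,
)
print(light.is_on)  # -> True
```

A dispatch table like ACTIONS keeps the handler short as you add more custom commands, instead of growing a chain of if/elif branches.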

Let's test what we have built so far in this section. The preceding code snippets are available for download along with this chapter as voice_assistant_initial.py. Make it executable as follows:

chmod +x voice_assistant_initial.py

Test the program as follows (audio files are also available for download with this chapter):

./voice_assistant_initial.py turn_lights_on.wav