Blog

The Role of Ethics in Voice Development

When was the last time a computer spoke to you? If you’re like many consumers, it was probably this morning at home while checking the weather via a smart speaker, or on your way to work getting navigation tips from your in-car assistant.


Dreaming with Data. Traveling for Context.


Multi-modal interaction – How machines learn to understand pointing

Pointing at subjects and objects, whether with language, gestures, or gaze alone, is a very human ability. Smart, multimodal assistants, such as those in your car, account for these forms of pointing, making interaction more human-like than ever before. Made possible by image recognition and Deep Learning technologies, this capability will have significant implications for the autonomous vehicles of the future.


Hear the Call for Help! How Emergency Vehicle Detection Can Make Your Car a Force for Good

In a world of “faster,” Cerence is looking to make sure drivers know when to pull over.


Multimodal input meets visual output

Augmentation isn’t a new technology, but automotive applications are giving it new impetus. Cerence is at the heart of this development, using augmented reality as an additional output modality that, combined with various input modes such as voice, eye tracking, and gesture recognition, enhances the user experience and provides access to a wide range of information.
