
AI helps smart speakers tell where your voice is directed – AI may soon banish wake words from voice assistants.
Carnegie Mellon University researchers have built machine learning models that estimate the direction a voice is facing, spotting your intent to address a device without the need for a wake word or special gesture.
The system relies on the fact that the first, loudest, and clearest copy of a sound to arrive is the one aimed directly at a given device. Anything else tends to be quieter, delayed, and muffled.
The model also accounts for the way human speech frequencies vary with the direction you are facing: lower frequencies tend to be omnidirectional, while higher frequencies are beamed more strongly toward wherever you are looking.
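To make the frequency cue concrete, here is a toy Python sketch, not the researchers' actual model. The function names, signal parameters, and the use of a first-difference filter as a crude high-pass are all illustrative assumptions; the idea is simply that speech aimed at a microphone retains more high-frequency energy than speech facing away.

```python
import math

def facing_score(samples):
    """Ratio of high-frequency energy to total energy.

    First differences act as a crude high-pass filter, so a signal
    whose high frequencies were muffled (speaker facing away) scores
    lower than one captured head-on. Purely illustrative.
    """
    total = sum(s * s for s in samples) + 1e-12
    high = sum((b - a) ** 2 for a, b in zip(samples, samples[1:]))
    return high / total

# Simulated capture: a low tone plus a high tone. The "averted" copy
# keeps the low tone but loses most of its high-frequency component.
rate = 16000
t = [i / rate for i in range(512)]
direct = [math.sin(2 * math.pi * 300 * x) + 0.8 * math.sin(2 * math.pi * 3000 * x) for x in t]
averted = [math.sin(2 * math.pi * 300 * x) + 0.2 * math.sin(2 * math.pi * 3000 * x) for x in t]

print(facing_score(direct) > facing_score(averted))  # the head-on capture scores higher
```

A real system would combine this kind of spectral cue with the timing and loudness differences described above, and feed them to a trained model rather than a hand-written threshold.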
This method is “lightweight,” software-based, and doesn’t require sending audio data to the cloud, the researchers added.
It could be a while before you see the technology in use, although the team has publicly released code and data to help others build on their work. It is easy to see where this might lead, at least.
You could tell a smart speaker to play music without using a wake word or setting off a horde of other connected devices. It might also help with privacy by requiring your physical presence while eschewing the need for gaze-detecting cameras.
In other words, it would be closer to that Star Trek vision of voice assistants that always know when you are talking to them.