By adding some electrodes and some electronics to your headphones, Wisear can make your music experience much more hands-free than before. Clench your teeth twice to pause a track or three times to skip to the next song – without making a sound, a hand gesture, or any visible movement, the technology lets you interact with your music player or AR/VR headset without needing to push any switch. The founders imagine this being particularly useful in scenarios where you have your hands full or your surroundings are too loud for typical voice commands.
The company revealed today that it has raised a total of €2 million (about $2.5 million) with the aim of licensing its technology to existing earphone and headphone manufacturers. The round was led by Paris Business Angels and Kima Ventures, with support from BPI France.
Wisear showed me its neural interface: using the aforementioned electrodes to record brain and facial activity, its patent-pending AI technology turns these signals into controls that let the user perform actions. The company is quite skeptical of its competitors and suggests that other “thought control” startups are trying to pull the proverbial wool over our eyes.
“Anyone who tells you today that they are doing thought control or mind control or anything else is basically distorting the truth,” explains Yacine Achiakh, co-founder of Wisear. “If they really have something, then honestly, take all your money and give it to them, because it will revolutionize everything. It was quite frustrating for us: we saw people claiming mind control whose demo would only work in a very specific setting – no noise around, nobody moving, sunny outside, and at the right temperature.”
To overcome the “works in the lab” syndrome, the company went back to the drawing board and created a new technology suite with ready-to-use components. The idea is to build a prototype of the technology that works well enough to showcase it, and then license it to AR/VR headset and headphone manufacturers.
“We realized that the hardest part of trying to make anything brain-based was actually generalizing it across users and getting it to work in any environment. We took a step back and decided that the neural interface would be based on muscle and eye activity first. The main controls we have are based on jaw activity,” says Achiakh. “We have sensors in the headset that can capture jaw muscle movement and turn it into controls. You don’t need to make any noise. And our goal for 2022 is to have two controls: double and triple jaw clenching. The goal is to scale this to 12 controls over the next three years.”
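Wisear's actual pipeline is a proprietary, patent-pending AI model, so the details aren't public. But the control scheme Achiakh describes – counting jaw clenches and mapping double and triple clenches to player actions – can be illustrated with a toy sketch. Everything here is hypothetical: the threshold value, the function names, and the simple burst-counting logic are assumptions, not the company's method.

```python
# Toy sketch of jaw-clench controls (NOT Wisear's real pipeline):
# count bursts of jaw-muscle (EMG-like) activity that cross an
# amplitude threshold, then map 2 bursts -> pause, 3 -> next track.

def count_clenches(signal, threshold=0.5):
    """Count bursts where the rectified signal crosses the threshold."""
    clenches = 0
    above = False
    for sample in signal:
        if abs(sample) >= threshold and not above:
            clenches += 1      # rising edge: a new clench burst begins
            above = True
        elif abs(sample) < threshold:
            above = False      # burst ended; wait for the next one
    return clenches

def clenches_to_control(n):
    """Map a clench count to a player control (hypothetical mapping)."""
    return {2: "pause/resume", 3: "next track"}.get(n)

# Fake signal containing three distinct bursts of muscle activity.
demo = [0.0, 0.1, 0.9, 0.8, 0.1, 0.0, 0.7, 0.9, 0.0, 0.1, 0.8, 0.1]
n = count_clenches(demo)
print(n, clenches_to_control(n))  # three bursts -> "next track"
```

A real system would of course need per-user calibration, noise rejection, and a time window so slow chewing isn't mistaken for a command – which is exactly the "works for everyone, in any environment" problem Achiakh says was the hard part.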
Achiakh showed off the company’s technology on a video call last week and it was, in a word, impressive. The headphones weren’t confused by noise, movement, or anything else he was doing while talking to me. As he bit down – clenching his jaw, you might call it – the audio player paused and resumed the demo song.
The technology is not yet ready for prime time, but the success rate is quite high.
“We are building the first technology that really works for everyone. At our booth at CES, we got the demo to work for about 80% of the people who tried it – and we’re working to improve it even more,” says Achiakh. “We are building the only neural interface that can work today. Muscle activity is the real new interface you can build in 2022.”