In the 2013 movie "Her," audiences were introduced to Theodore Twombly, an introverted man on the verge of divorce from his high school sweetheart. To cope with his loneliness, Theodore purchases the new OS1, the world's first artificially intelligent operating system, advertised as "not just an operating system, but a consciousness," and names her Samantha. The operating system, designed to adapt and evolve like a human being, communicates entirely by voice and soon moves beyond being a voice assistant to become a love interest for Theodore.
This future may not seem too unrealistic with the rise of voice assistants like Alexa, Siri and Bixby, which have become an essential component of today's connected homes. Put simply, these devices are to the connected home what the brain is to the human body: the hub that ties together all essential functions, without which the rest of the system would be of little use.
Still, as an integral piece of the connected home, voice assistants are not yet being used to their full potential. Most consumers who own a device with a voice assistant use it at least once a day, according to PwC, but the use cases remain quite elementary. The PwC research outlines the most common uses of voice assistants as:
Nonetheless, this tech is flourishing in both capabilities and popularity. According to Juniper Research, voice assistants will be found in most U.S. homes within just a few years: from an estimated 25 million devices in 2018, forecasts anticipate a jump to 275 million U.S. household voice assistants by 2023, growth of 1,000 percent in five years. Google's announced plans to add 22 new languages to its assistant by the end of the year will no doubt aid worldwide adoption, and Amazon's recently announced partnership with Lennar to build Alexa into all future homes may also be a significant contributor.
As voice continues its advancement, how will we see it play out in the connected home?
As we discussed, voice assistants are becoming increasingly smarter thanks to developments in artificial intelligence. While their main function is to respond to commands, in doing so they also learn. The more a person interacts with voice-activated devices, the more trends and patterns the system identifies in the information it receives. This data can then be used to determine user preferences and tastes, which is a long-term selling point for making a home smarter.
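To make the idea concrete, the kind of pattern-finding described above can be sketched in a few lines. This is a toy illustration, not any vendor's actual algorithm: it simply counts which commands a user issues at which times of day and surfaces the dominant habit as a "preference."

```python
from collections import Counter, defaultdict


class PreferenceTracker:
    """Toy model of habit learning: count commands per period of the day."""

    def __init__(self):
        # period of day -> Counter of command frequencies
        self.counts = defaultdict(Counter)

    @staticmethod
    def period(hour):
        # Bucket the 24-hour clock into coarse periods.
        if 5 <= hour < 12:
            return "morning"
        if 12 <= hour < 18:
            return "afternoon"
        return "evening"

    def record(self, hour, command):
        """Log one interaction: an hour (0-23) and the command spoken."""
        self.counts[self.period(hour)][command] += 1

    def suggestion(self, hour):
        """Return the most frequent command for this time of day, if any."""
        bucket = self.counts[self.period(hour)]
        return bucket.most_common(1)[0][0] if bucket else None


tracker = PreferenceTracker()
for _ in range(3):
    tracker.record(7, "play news briefing")   # a repeated morning habit
tracker.record(7, "what's the weather")       # a one-off request
print(tracker.suggestion(8))                  # the dominant morning habit wins
```

Real assistants learn far richer signals (phrasing, devices used, linked accounts), but the principle is the same: repetition becomes a profile, and the profile drives proactive behavior.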
Furthermore, voice assistants may be evolving to perceive more than preferences and tastes. Google and Amazon are looking to integrate voice-enabled artificial intelligence capable of analyzing and responding to human emotion. While the full extent of what this will look like isn’t clear yet (something like "Her"?) the foundation of this project lies in devices being able to identify and adapt to a user’s motivations and concerns.
At CES 2018, one of the top trends sweeping the smart home market was the integration of voice assistants with other connected devices. Internet of Things (IoT) innovations will let assistants become part of every connected device, so an assistant can follow users from room to room: even a room without a dedicated voice-capable device will offer voice control through whatever connected devices it does have.
Recognizing the outstanding potential in this market, big brand names are vying for control, resulting in an increasing number of devices becoming compatible with more than one assistant. But not all devices are created equal. While Alexa boasts an ever-growing catalog of tens of thousands of skills, other in-home voice assistants are barely breaking 100. As adoption rates increase, there are still growing pains in adapting this new technology into existing products, including the economics and expertise required to keep up with design, manufacturing, performance and server-accessibility needs. This raises two questions about the future of voice-activated devices:
The economics of these choices will likely dictate how the market trends. There are other economic considerations as well, such as licensing fees for wake-word detection algorithms and the rising manufacturing costs of additional hardware and software.
This also raises a question: if voice activation is implemented device by device across a large portion of commercial products, such as connected appliances, how will solution providers keep pace with the technology? Future capabilities, such as wake-word customization, server library access, multi-command sentences and true voice recognition (i.e., identifying who is speaking), will need to be delivered quickly, at ever-increasing expense.
With no industry standards available to guide voice-activation performance, designers and manufacturers are developing their own test procedures and processes, and many test-equipment makers are scrambling to provide tools that help validate performance. The question is: without a standard, what is the correct tool, and how will it remain relevant over time? The result is that consumers may own several devices that perform inconsistently under different environmental conditions, with the degree of variation depending on the integrity of the design, the test procedures and the equipment used to verify performance.
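In the absence of a standard, an in-house test procedure often amounts to replaying the wake word under various acoustic conditions and tallying detection rates per condition. The sketch below is a hypothetical harness of that sort; the condition names and trial data are invented for illustration.

```python
from collections import defaultdict


def wake_word_report(trials):
    """Summarize ad-hoc wake-word trials.

    trials: iterable of (condition, detected) pairs, where `condition`
    labels the acoustic environment (e.g. "quiet", "tv_on") and
    `detected` is True if the device woke up on that attempt.
    Returns a dict mapping each condition to its detection rate.
    """
    hits = defaultdict(int)
    totals = defaultdict(int)
    for condition, detected in trials:
        totals[condition] += 1
        hits[condition] += bool(detected)
    return {c: hits[c] / totals[c] for c in totals}


# Simulated results: 9/10 detections in a quiet room, 6/10 with a TV on.
trials = ([("quiet", True)] * 9 + [("quiet", False)]
          + [("tv_on", True)] * 6 + [("tv_on", False)] * 4)
print(wake_word_report(trials))  # {'quiet': 0.9, 'tv_on': 0.6}
```

The catch the article describes is exactly this: each manufacturer picks its own conditions, room geometry and pass thresholds, so two devices with "validated" wake words can still behave very differently in the same living room.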
It is conceivable that future voice-activated devices will be able to anticipate your every need, even before you do. This skill set will be developed through machine learning and will be a useful asset in many markets, including the connected home. With all the in-home data collected, devices will have access to vast amounts of knowledge. As a result, cloud servers will become invaluable and access will become increasingly desirable.
With recent online data breaches and enactment of the General Data Protection Regulation, it’s not difficult to understand why. Data security consistently polls as one of the top reasons current non-owners have shied away from adopting connected devices. Business Insider reports that 70 percent of smart speaker owners have not used their device for online shopping due to concerns about providing payment information. Furthermore, Jabil's 2018 Connected Home and Building Technology Trends survey revealed that recent breaches and events have led solution providers to rethink their method for data collection.
Beyond general privacy concerns, data affects voice-enabled artificial intelligence in a unique way. As mentioned, these devices learn as they gather user information, and they are not only learning about their users but also using that data to improve themselves. Artificial intelligence algorithms are built and refined on the data they ingest, so limited data collection means limited evolution. As a result, regulations like the GDPR have raised concerns among AI developers about how they will be able to keep innovating.
Artificial intelligence providers will need to thoroughly understand data-ownership boundaries and stay in line with data protection measures while still collecting what is necessary to advance AI technology. With those fears assuaged, the industry would likely see even greater adoption.
An inevitable part of our future, voice is the centerpiece of a connected home. As it progresses further into our world, it brings with it the chance to simplify life as we know it. The real question is, what will it take to get there?