voice assistant | Dogtown Media — https://www.dogtownmedia.com

Voice-Activated Features in Mobile Apps: The Next Frontier
https://www.dogtownmedia.com/voice-activated-features-in-mobile-apps-the-next-frontier/ (Tue, 23 Jul 2024)

After reading this article, you’ll:

  • Grasp the current state and future potential of voice-activated features in mobile apps, including their benefits for user experience, accessibility, and AI-driven personalization.
  • Understand the key technologies enabling voice features, such as speech recognition, natural language processing, and voice biometrics, and their applications across various industries.
  • Learn about important considerations for implementing voice technology, including choosing the right use cases, ensuring accuracy and reliability, addressing privacy concerns, and technical implementation best practices.


Voice technology and voice-activated features are rapidly changing the way we interact with our devices. With the rise of popular voice assistants like Siri, Alexa, and Google Assistant, voice recognition capabilities are now readily available to mobile app developers. It is estimated that by the end of 2024 there will be over 8 billion voice assistants in use globally, surpassing the world’s population.

As consumers become increasingly accustomed to controlling devices and services through voice commands, integrating voice-activated features into mobile apps provides an important way to enhance user experience. Apps that leverage voice technology can enable hands-free operation, faster access to key functions, and more personalized experiences powered by AI. Additionally, voice control helps expand accessibility for users with disabilities.

This article explores the current state and future potential of building voice-activated capabilities into mobile apps.

Current State of Voice Technology in Mobile Apps

Voice technology and voice control features have become increasingly commonplace in mobile apps over the past few years. Voice assistants like Siri, Google Assistant, and Alexa have accelerated this trend and normalized voice interactions.

Leading mobile voice assistants from Apple, Google, Amazon, and others are now deeply integrated into the major mobile operating systems. These assistants can activate to handle voice commands, answer questions, control smart home devices and media playback, and more. Additionally, the major tech companies provide speech recognition and natural language processing APIs that allow developers to build custom voice features.

Current voice control capabilities in apps tend to focus on core functions like search, content playback, dictation, and command triggers. For example, users can now use their voice to initiate searches, play/pause media, enter text fields, set reminders, check statuses, and execute app actions hands-free. Voice tech also enables accessibility features like screen readers.

While voice interactions in apps are still evolving, voice technology components have become standardized to the point where most developers can implement basic voice capabilities with relative ease. The rise of conversational interfaces represents the next wave of innovation, pairing voice UIs with chatbots and AI to enable more advanced workflows. Adoption of voice features is likely to rapidly accelerate as consumers become more accustomed to talking to their devices.

Benefits of Implementing Voice-Activated Features

Integrating voice recognition and voice control capabilities provides a number of important benefits for mobile apps. As these features become more advanced, they will revolutionize user experiences across a diverse range of apps and industries.

Enhanced user experience and accessibility

Voice control dramatically enhances overall user experience. Hands-free operation enables efficiency gains, allowing users to complete tasks without having to physically interact with devices. This allows for seamless multitasking. Voice UIs also expand accessibility, providing a more inclusive experience for those with disabilities.

Potential for personalization and AI integration

Voice technology paves the way for more personalized, contextually aware experiences powered by AI. Voice biometrics facilitate customized suggestions and automation based on individual user data. As natural language processing continues to progress, conversational interfaces will become more advanced as well.

Other key benefits include reduced cognitive load, real-time language translation, enhanced privacy compared to screen-based interactions, and the ability to leverage expanded datasets of voice data to uncover unique user insights.

As voice recognition improves and consumers become more comfortable interacting conversationally with devices, the benefits of voice technology will compound. Integrating voice is becoming a vital mobile app feature rather than just a novelty.

Key Technologies for Voice-Activated Features

There are a number of key technologies that enable the functionality of voice-activated features and conversational interfaces in mobile apps. Advances in these underlying technologies are fueling the voice technology revolution.

Speech recognition and NLP

At the most fundamental level, speech recognition technology transcribes spoken audio into machine-readable text in real time. This allows users to speak naturally to control apps instead of using manual input. Natural language processing (NLP) analyzes textual input and determines appropriate responses and actions.
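As a minimal illustration of the NLP step, the sketch below maps a finished transcript to an app intent. This is a hypothetical, rule-based stand-in — production apps would use an NLU service or trained model — but it shows the shape of the transcript-to-action mapping; the intent names and patterns are invented for the example.

```python
import re

# Hypothetical intent table: each app intent is triggered by a few
# keyword patterns found in the recognized transcript.
INTENT_PATTERNS = {
    "play_media":   re.compile(r"\b(play|resume)\b", re.IGNORECASE),
    "pause_media":  re.compile(r"\b(pause|stop)\b", re.IGNORECASE),
    "search":       re.compile(r"\b(search for|find|look up)\b", re.IGNORECASE),
    "set_reminder": re.compile(r"\b(remind me|set a reminder)\b", re.IGNORECASE),
}

def parse_intent(transcript: str) -> str:
    """Return the first matching intent for a transcript, or 'unknown'."""
    for intent, pattern in INTENT_PATTERNS.items():
        if pattern.search(transcript):
            return intent
    return "unknown"
```

A real NLP layer would also extract slots (the search query, the reminder time), but even this toy version shows why the transcription and understanding stages are separate concerns.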

Text-to-speech and voice biometrics

Text-to-speech (TTS) synthesis allows apps to respond to voice commands audibly. TTS generates computerized speech output to confirm actions or provide information to the user via voice. Voice biometrics verify user identity and facilitate personalized experiences based on voice signatures.

Together, these core technologies allow developers to build a wide range of voice user interfaces, conversational interactions, and voice-controlled features. They enable everything from basic voice-driven search and commands to advanced voice-based workflows that can automate complex processes.

As these supporting technologies become more accurate, natural, and scalable, voice features will become an integral component of mobile app experiences rather than a novelty. Voice is the next paradigm for user interaction.

Potential Applications Across Industries

Voice technology and voice-activated features have tremendous potential to transform workflows and processes across practically every industry. As voice UIs and voice AI capabilities continue maturing, we will see expansive enterprise adoption.

E-commerce and healthcare

In e-commerce and retail, voice technology can enable frictionless transactions, personalized recommendations, and seamless omnichannel experiences. Voice is already being integrated into smart retail environments. In healthcare, voice-driven telehealth, remote patient monitoring, and assistive tools for both patients and providers will help drive better health outcomes. Use of voice AI for remote patient monitoring alone is forecasted to grow 25% annually through 2024.

Smart homes and vehicles

Smart homes and IoT ecosystems will incorporate voice technology for unified, conversational control of connected devices and home automation. Voice control is already becoming ubiquitous in smart vehicles, while integration with maps and navigation tools is improving driver safety. Finally, in the enterprise, voice dictation, automation of workflows, and conversational user interfaces can boost productivity across all business functions.

In the coming years, expect to see voice user interfaces become the norm across everything from consumer electronics to business software. Given voice technology's vast scope of application, nearly every app is a potential candidate for integration. Voice promises to be even more disruptive to the digital landscape than touchscreens were.

Key Considerations for Implementing Voice-Activated Features

While integrating voice-activated capabilities offers enormous potential, effectively implementing this emerging technology poses some unique considerations for developers. The utility, usability, and adoption of voice features hinge on accounting for these factors.

Identifying suitable use cases and scenarios

First, suitable use cases must be identified through upfront analysis and user research. Voice excels at particular tasks but may falter at others. Additionally, designing an intuitive, seamless voice user interface (VUI) requires user testing to perfect conversational flow.

Ensuring accuracy and reliability of voice recognition

On the technical side, high accuracy and reliability of speech recognition across languages, accents, and vocabulary is critical for usability. Performance must also meet stringent timing demands, as any lag breaks user engagement. Rigorous testing is key.
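One standard way to quantify the accuracy the paragraph above calls for is word error rate (WER): the word-level edit distance between a reference transcript and the recognizer's output, divided by the reference length. A small sketch (this is the textbook metric, not any particular vendor's API):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference word count,
    computed with the classic Levenshtein dynamic program over words."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution or match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)
```

Tracking WER across languages, accents, and noisy environments turns "high accuracy" from a slogan into a measurable release gate.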

Addressing privacy and security concerns

User privacy and security must also be addressed proactively, as sensitive voice data presents unique risks. Ethical use of data should be ensured as well.

By carefully selecting appropriate applications, crafting a frictionless voice UI, maximizing recognition accuracy, supporting multilingual users, and safeguarding privacy, mobile developers can overcome adoption barriers and successfully unlock the power of voice technology for their apps. Incorporating voice is not simple, but following best practices helps ensure it enhances the user experience.

Technical Implementation

While voice technology has matured remarkably, thoughtfully implementing voice features poses some technical considerations. The right tools and techniques are key to seamless adoption.

Choosing the right voice recognition SDK or API

First, developers must choose an appropriate speech recognition SDK or API layer with the accuracy, language support, and platform coverage their app requires. Top cloud speech services include Google Cloud Speech, Amazon Transcribe, and Azure Speech Services.
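Because the provider choice may change, it helps to hide the chosen SDK behind a small interface of the app's own. The sketch below is a hypothetical pattern, not any vendor's API: `SpeechRecognizer` is the app-side abstraction, and a real adapter for Google Cloud Speech, Amazon Transcribe, or Azure Speech would implement the same method.

```python
from abc import ABC, abstractmethod

class SpeechRecognizer(ABC):
    """App-facing interface; one adapter per cloud speech provider."""

    @abstractmethod
    def transcribe(self, audio_bytes: bytes, language: str = "en-US") -> str:
        """Return the transcript for a chunk of recorded audio."""

class FakeRecognizer(SpeechRecognizer):
    """Stand-in for tests; a real adapter would call the cloud API here."""

    def __init__(self, canned_transcript: str):
        self.canned_transcript = canned_transcript

    def transcribe(self, audio_bytes: bytes, language: str = "en-US") -> str:
        return self.canned_transcript

def handle_voice_search(recognizer: SpeechRecognizer, audio: bytes) -> str:
    # App logic depends only on the interface, never on a vendor SDK.
    query = recognizer.transcribe(audio)
    return f"searching for: {query}"
```

Swapping providers then means writing one new adapter rather than touching every voice feature in the app.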

Integrating voice-activated features into existing app architecture

Voice user interfaces must be tightly integrated into existing app architecture through clean interfaces. This facilitates maintainable code and consistent user experiences across touch and voice modes. Following best practices around modular, test-driven development is critical as well.
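One concrete way to keep touch and voice consistent, sketched hypothetically below, is to route both input modes into a single shared command layer: a button tap and a recognized voice intent both call the same registered action (all names here are invented for illustration).

```python
class Player:
    """Minimal app state shared by touch and voice paths."""

    def __init__(self):
        self.playing = False

ACTIONS = {}

def action(name):
    """Decorator registering a handler in the shared command table."""
    def register(fn):
        ACTIONS[name] = fn
        return fn
    return register

@action("play")
def play(player: Player) -> str:
    player.playing = True
    return "playback started"

@action("pause")
def pause(player: Player) -> str:
    player.playing = False
    return "playback paused"

def dispatch(player: Player, command: str) -> str:
    """Called by both the button tap handler and the voice intent handler."""
    handler = ACTIONS.get(command)
    return handler(player) if handler else "unrecognized command"
```

Because both modes funnel through `dispatch`, fixing a bug or adding a feature updates touch and voice behavior at once.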

Best practices for development and testing

Comprehensive testing across diverse usage scenarios, accents, vocabularies, and environments is crucial for capturing edge cases. Automated testing maximizes coverage. Additionally, performance tuning and resource optimization prevents laggy responses that frustrate users.
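The automated-coverage idea above can be sketched as a data-driven regression suite: collect the varied phrasings real users (and the recognizer's handling of different accents) actually produce for each intent, then assert they all resolve the same way. Everything here — the resolver and the sample utterances — is a hypothetical illustration.

```python
def resolve_intent(transcript: str) -> str:
    """Toy intent resolver standing in for the app's real NLP layer."""
    text = transcript.lower()
    if "pause" in text or "stop" in text:
        return "pause_media"
    if "play" in text or "resume" in text:
        return "play_media"
    return "unknown"

# Phrasing variants gathered from testing sessions, all expected to map
# to the same intents; grow this table as new edge cases are found.
SAMPLE_UTTERANCES = [
    ("pause the music", "pause_media"),
    ("please stop playback", "pause_media"),
    ("play my favourites", "play_media"),
    ("resume where I left off", "play_media"),
]

def run_voice_regression() -> int:
    """Return the number of utterances that resolved to the wrong intent."""
    return sum(1 for text, expected in SAMPLE_UTTERANCES
               if resolve_intent(text) != expected)
```

In practice this table would also feed recorded-audio fixtures through the real recognizer, so accuracy regressions in either the speech or NLP layer are caught before release.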

By leveraging robust voice recognition tools, crafting clean integrations, rigorously testing voice flows, and optimizing speed, developers can overcome the technical hurdles of building voice-activated features. With best practices, almost any app can start vocalizing.

Partnering with a Mobile App Developer

While many mobile development teams have strong engineering talent, building voice-activated capabilities requires specialized expertise. Partnering with an experienced mobile app development firm can fuel innovation and success when implementing voice technology.

Leveraging an external team with proven experience designing, developing, and deploying voice interfaces introduces needed skills and capacity. These partners stay on top of emerging tools and best practices while also excelling at UX design and conversational interface development.

When selecting a firm, key considerations include technical capabilities around speech recognition, NLP and machine learning, past voice app development projects, design philosophy, and cultural fit. The partner should collaborate closely with internal teams and stakeholders throughout the process as well.

An ideal partnership entails frequent workshops to envision voice experiences jointly, iterative prototyping, transparent development practices, and a framework for maintaining voice integrations post-launch.

Building amazing voice-activated mobile features requires a complementary blend of development talent and voice design expertise. Strategic external partnerships unlock innovation, mitigate risks, and produce cutting-edge voice user experiences that delight users.

Frequently Asked Questions (FAQs) on Voice-Activated Features in Mobile Apps

What are the main benefits of implementing voice-activated features in mobile apps?

The main benefits include enhanced user experience through hands-free operation, improved accessibility for users with disabilities, potential for AI-driven personalization, reduced cognitive load, and the ability to enable more efficient multitasking.

Which key technologies are essential for implementing voice-activated features?

The essential technologies include speech recognition for transcribing spoken audio to text, natural language processing (NLP) for understanding user intent, text-to-speech synthesis for generating voice responses, and voice biometrics for user identification and personalization.

How is voice technology expected to impact different industries?

Voice technology is expected to transform various industries, including e-commerce (enabling frictionless transactions and personalized recommendations), healthcare (improving telehealth and remote patient monitoring), smart homes (facilitating unified control of connected devices), and enterprise environments (boosting productivity through voice-driven automation and workflows).

What are some key considerations when implementing voice-activated features in an app?

Important considerations include identifying suitable use cases, ensuring high accuracy and reliability of voice recognition, addressing privacy and security concerns, designing intuitive voice user interfaces (VUIs), and integrating voice features seamlessly into existing app architecture.

Why might a company consider partnering with a specialized mobile app developer for voice feature implementation?

Partnering with a specialized developer can provide access to expertise in voice technology, stay current with emerging tools and best practices, bring experience in designing conversational interfaces, and help mitigate risks associated with implementing complex voice-activated features.

The post Voice-Activated Features in Mobile Apps: The Next Frontier first appeared on Dogtown Media.
Amazon’s Secretive Smart Home Robots Could Be in Your Home Soon
https://www.dogtownmedia.com/amazon-secretive-smart-home-robots-your-home-soon/ (Thu, 03 May 2018)

Amazon’s expanding its lineup of smart home products. They recently rolled out a web app that allows you to create custom Alexa greetings, responses, workflows, and more. The tech giant is also allegedly working on a smart home robot. Amazon has been extremely secretive about details surrounding the new smart home robots.

The Future of… Chores?

We do know that Lab126, Amazon’s California-based hardware research and development arm, is working on the robots. The robot project, codenamed “Vesta”, could possibly launch for sale in 2019.

Vesta began a few years ago, and Amazon has been hiring more talent for the Lab126 department since January. Although there has been a lot of speculation surrounding the project, it’s still unknown what needs the robot will actually fulfill.

It could possibly integrate with Echo, which runs Alexa. Or it could be a robot that takes care of chores like laundry or cleaning at predesignated times. We know that the robot uses advanced cameras and computer vision like autonomous cars do.

Speculation & Imagination

Amazon stated that it “doesn’t comment on rumors and speculation.” It’s hard to imagine the limits for a mobile robot combined with the AI powering Alexa. But that’s probably exactly what the tech titan wants.

Amazon keeping this project shrouded in secrecy actually functions as a superb marketing tactic; the mystery is keeping the public guessing. There are many unanswered questions, like “Will it be able to climb multiple floors?”, “Can it throw together a quick meal?”, or “Will it be able to clean up baby and pet stains?” Maybe it will be able to do all of these things.

Put People Before Profit

Right now, that doesn’t seem like too farfetched a feat for Amazon to pull off. The company is constantly setting a higher bar for record-high profits. But that success does seem to come at the cost of many of its employees. Since 2013, seven employees have died working in Amazon warehouses.

With many employees alleging ridiculous performance quotas, low wages, and even inadequate time for bathroom breaks, it seems the tech titan has some room for improvement in its own quarters. Hopefully, Vesta can lend a helping hand.

The post Amazon’s Secretive Smart Home Robots Could Be in Your Home Soon first appeared on Dogtown Media.
Google’s Wireless Pixel Buds Put a Translator in Your Ear
https://www.dogtownmedia.com/googles-wireless-pixel-buds-put-a-translator-in-your-ear/ (Thu, 05 Oct 2017)


When Apple ditched the headphone jack and introduced the AirPods, many iPhone owners revolted. Consumers weren’t ready to be thrust into the wireless future quite yet. But now that people have had a chance to adjust to AirPods, there’s no doubt that the wireless headphone is here to stay.

Android app developers are stoked about the upcoming November launch of the Pixel Buds, wireless headphones optimized specifically for Google Pixel 2. Well, technically Pixel Buds are not wireless: the left and right bud are connected by a cloth cord that drapes across your neck. They charge inside a stylish cloth case.

The Pixel Buds can auto-pair with Pixel phones. All it takes is opening the case next to your phone. Early reviews indicate that unlike AirPods and many other modern headphones, these are not in-ear ear buds. Instead they rest in your outer ear.

Users can control the headphones with simple gestures. Tapping the right bud plays or pauses, swiping left or right adjusts volume, and double tapping reads alerts and notifications as they come in. There’s reportedly no way to skip tracks or customize controls — at least not yet. Holding down on the right bud gives you instant seamless access to Google Assistant.

As Chicago Android app developers might expect, the sound quality is decent, but not designed for true audiophiles. But how much can you expect from a $159 pair of headphones?

What really sets Pixel Buds apart from Apple’s AirPods is the translation feature. Powered by Google Translate, it enables you to access 40 different languages. It is as convenient — and imperfect — as Google Translate always is.

To use this feature, you conjure the Assistant and ask for a little help with your Portuguese. After you say your piece, the app translates your phrase and speaks it aloud to whoever you’re speaking with. They reply into your phone, and the phrase appears in your Pixel Buds.

It’s still a little clunky, but it moves us a little closer to the sci-fi ideal of instantaneous inner ear translation devices. Who thought we’d even get this close?

The post Google’s Wireless Pixel Buds Put a Translator in Your Ear first appeared on Dogtown Media.
Amazon Is Making Alexa-Connected Smart Glasses
https://www.dogtownmedia.com/amazon-is-making-alexa-connected-smart-glasses/ (Wed, 20 Sep 2017)


The fizzled hype (and missed opportunity) of Google Glass remains fresh in memory. Of course, as we wrote a couple months ago, Google Glass has come back in a big way and found its niche in factories and warehouses. But as the buzz around augmented reality reaches fever pitch in the tech world, many Denver IoT app developers wonder if maybe Glass simply came around too early. It appears that that’s what the people at Amazon’s Lab126 are thinking.

Leaks indicate that Amazon’s product development lab is tinkering with Alexa-connected “smart glasses.” It appears that the online retail giant has recruited several members from the Google Glass team to bring this product to life. These wearables, however, will differ from Glass in some crucial ways. Aside from the obvious Alexa connection, Amazon’s glasses are expected to skip out on the creepier aspects of Glass, like the camera and screen that so bothered those concerned with privacy (not to mention anybody who wanted decent battery life). Given the vogue for AR right now, it seems to most IoT app developers that Amazon would want to ditch the no screen thing eventually. Otherwise, the company risks falling behind competitors like Apple and Facebook who are hard at work on their AR goggles.

But for now, the main goal and draw seem to be an instant connection to Amazon’s increasingly popular voice assistant Alexa. The glasses are designed to look like, well, glasses. They allow Alexa to communicate with the user via a bone-conduction audio system that makes headphones unnecessary. IoT app developers have wondered how Amazon’s voice assistant could compete with Siri and Google Assistant without being a native feature in a smartphone (especially after the Fire fiasco). This, along with the Echo, may be Alexa’s big chance. Reports indicate that Amazon is primarily working on smart home devices, making their smart glasses a bit of an outlier. We can also expect a home security system connected to Echo Show to launch before the year’s end.

The post Amazon Is Making Alexa-Connected Smart Glasses first appeared on Dogtown Media.
The HomePod is Apple’s Smart Speaker (Emphasis on Speaker)
https://www.dogtownmedia.com/the-homepod-is-apples-smart-speaker-emphasis-on-speaker/ (Tue, 06 Jun 2017)


Most iPad app developers were expecting big news about a Siri speaker at this year’s WWDC, and that’s (sort of) what they got with yesterday’s announcement of the brand new HomePod. Apple developers and fanatics felt such an announcement was long overdue. While the Amazon Echo and Google Home have become surprise hits, pushing smart speakers into the mainstream, Apple has hung back on releasing its own device. And now that the HomePod is here, it’s not quite what internet of things industry observers were expecting.

For one thing, Apple knows it’s late to the game and has chosen to position its latest gadget as more of a speaker than a Siri-centric smart home device. Apple’s marketing team realized that audiophiles aren’t playing music through their Echos and Homes and that Sonos speakers don’t have the kind of voice assistant functionality that consumers crave right now. So they have brilliantly pitched the HomePod as the great smart home speaker the IoT market lacks. It’s a move that iPad app developers have to admire.

The HomePod will run $349 and should be available in the U.S., U.K., and Australia by the end of the year. The plump, cylindrical speaker’s sound adjusts according to the dimensions of its environment so that your music can fill any room. It also includes a “Musicologist” feature that essentially turns Siri into your personal DJ and musical encyclopedia. For Chicago iPad app developers looking for something more along the lines of the Echo or Home, the HomePod should meet their needs too. Consumers will be able to activate smart home devices or check the weather by talking to Siri through the device.

It will be interesting for iPad app developers to see if Apple’s gambit of emphasizing the HomePod’s speaker functions will pay off. The device was announced at WWDC on the same day that news broke of major Google Home outages, which suggests that maybe it isn’t too late for a new smart speaker to come in and dominate the market. But with the recent announcement of the Echo Show, with its game-changing screen component, it may seem that Apple is a little behind the times.

The post The HomePod is Apple’s Smart Speaker (Emphasis on Speaker) first appeared on Dogtown Media.
Google Lens Turns Smartphone Cameras Into Search Devices
https://www.dogtownmedia.com/google-lens-turns-smartphone-cameras-into-search-devices/ (Thu, 18 May 2017)


Last year, Google CEO Sundar Pichai proclaimed, “We’re evolving in computing from a ‘mobile-first’ to an ‘AI-first’ world.” Yesterday, during the keynote address of the Google I/O developer conference, the company showed that it’s putting its money where its mouth is. Pichai made it clear Google’s goal is to incorporate AI technology into all of its products in the coming years. The company proved its commitment to this future with its Google.ai initiative, which will be an invaluable resource for the AI community. But perhaps the most immediately exciting announcement for Android app developers was Google Lens, a revolutionary blend of AI, computer vision, and AR that will quite literally change how we see the world.

Facebook has said that it wants to turn the camera into the new keyboard; Google is taking it a step further, turning the camera into an instantaneous search device. Google Lens transforms your smartphone camera into an all-seeing, all-knowing eye of sorts. If you point your phone at a flower, Lens can identify that flower; if you point your phone at a business, Lens will provide you with information about that business. Users will be able to incorporate Lens in interactions with Google Assistant (which is now available on the iPhone too — look out, Siri!). It can translate foreign text (“What does this say?”), or take information from a concert marquee or poster and pull up tickets and add the event to your calendar. The practical applications of Lens make it very exciting for app developers.

Lens leverages the enormous amount of information Google has amassed about the world around us and gives us access to it through our cameras. Bay Area Android app developers have no doubt noticed the trend away from text and toward photos. This technology is only going to accelerate that push. Why type in a search when you can just take aim with your phone? We are entering an age where smartphone cameras don’t just take photos — they give us answers.

The post Google Lens Turns Smartphone Cameras Into Search Devices first appeared on Dogtown Media.
Siri Comes Home in the Form of a New Voice-Activated Speaker
https://www.dogtownmedia.com/siri-comes-home-in-the-form-of-a-new-voice-activated-speaker/ (Tue, 02 May 2017)


It’s a testament to Apple’s supremacy in the marketplace and culture that rumors about new products tend to drum up more excitement than most other companies’ big launches. Internet of things app developers and Apple devotees were thrilled this week to hear that the company’s answer to Amazon Echo and Google Home may be right around the corner. Leaked by Sonny Dickson, who revealed design details for the iPhone 8 a few weeks ago, the voice-activated speaker with the decidedly unsexy code-name “B238” might be one of Apple’s big announcements in early June at the Worldwide Developers Conference (WWDC).

This new speaker would bring Siri’s soothing voice into the smart-home experience — and hopefully leave behind her more annoying glitches. If Dickson is to be believed (and usually his information checks out), Apple has been tinkering with Siri’s capabilities with an eye toward a possible smart-home device launch. Users will reportedly be able to play tracks from Apple Music, schedule reminders, check weather, and use AirPlay to stream or mirror content from another iOS device to your TV. Boston iOS app developers curious about the design will be happy to know that it’s said to be reminiscent of the Mac Pro, with a concave top holding the controls and UE boom speaker mesh covering the device. The actual speakers themselves will incorporate technology from Apple-owned Beats.

The software powering the device is expected to be a variation on iOS (think Apple TV’s tvOS or the Apple Watch’s watchOS). It seems likely that Apple will offer a Siri SDK for internet of things app developers to build for the new device. Apple was an innovator in the voice-assistant game, but competitors like Google and Amazon have lapped them when it comes to bringing their voice-assistants into the home. Judging by the excitement already building behind this product, it looks like the tech giant is definitely going to stir up the competition.

The post Siri Comes Home in the Form of a New Voice-Activated Speaker first appeared on Dogtown Media.
Latest Amazon App Update Brings Alexa to the iPhone
https://www.dogtownmedia.com/latest-amazon-app-update-brings-alexa-to-the-iphone/ (Tue, 21 Mar 2017)


Look out, Siri — there’s a new voice on the iPhone, and its name is Alexa. The Amazon app development team included Alexa as part of the latest update of the online shopping behemoth’s app for iOS, marking the first time the company has directly offered the voice-first service on an iPhone. Before this week’s update, iPhone users could summon the voice assistant via somewhat limited third party apps like Lexi, or through the Alexa app itself so long as they had an Echo already. But now iPhone owners can take advantage of many of Alexa’s skills that were previously limited to the Echo through the Amazon app. This is a major step for Amazon’s mobile app developers, who have been pushing to integrate Alexa into as many devices as possible.

As many Seattle mobile developers already know, Alexa needs to continue to expand to new platforms in order to collect the sort of data it requires to expand and enhance its AI. In the past few months, Amazon’s app development team has aggressively pursued this strategy, partnering with Ford, LG, and Huawei to feature the voice assistant in their products. Alexa’s sudden appearance on the iPhone marks a huge jump in scale, bringing the assistant to the second most popular smartphone on the market via one of the most downloaded apps in the App Store. It also puts it in direct confrontation with Siri, its most formidable competitor. Admittedly, Siri’s native integration gives it a major advantage as users will have to open Amazon’s app in order to access Alexa. But once users have the app open, all they have to do is press the microphone icon at the top of the screen to ask a question, queue up a song, or command a connected device. Siri may find itself with a little more downtime these days.

Reputation-wise, Alexa can already go toe-to-toe with Siri, and its spread to the iPhone and a number of other platforms only means that it’s getting smarter. Its skill set will continue to flourish thanks to an initiative Amazon announced last week that provides free promotional credit for thousands of Alexa app developers to build and host new skills using Amazon Web Services. As incentives like this draw in talented developers, Alexa will become sharper, more useful, and more ubiquitous too.

The post Latest Amazon App Update Brings Alexa to the iPhone first appeared on Dogtown Media.
Huawei will Launch a Google Now AI Assistant Competitor
https://www.dogtownmedia.com/huawei-will-launch-a-google-now-ai-assistant-competitor/ (Thu, 16 Feb 2017)


Huawei isn’t the new kid on the block anymore — in fact, it’s the third-largest smartphone manufacturer in the world. While not the go-to smartphone for San Fran app developers, it’s a household name worldwide thanks to its affordable, dependable hardware. The company is aggressively working towards claiming second place within the next year, and the latest chip on the table could be an in-house AI voice assistant for its products.

For app developers, it’s a move that won’t have immediate effects on US or European markets, but it’s a strong sign that AI is gaining steam, and that Google and Siri will be battling harder for their titles as king and queen of the app AI industry.

Amazon already swiped a huge share from the table with its always-on Alexa devices. App developers will notice a big blind spot in all three tech giants’ strategies — specifically, the Chinese market. That’s where Huawei is poised to have a big upper hand, which could give it the edge it needs to stand out in an increasingly crowded Asian market.

With Samsung also on course to launch their own voice assistant within the year, Google may have to rethink their plan to become the go-to assistant for Android devices. Ultimately, it’ll likely be a good thing for consumers, as more smartphone makers compete to offer the most value on independent platforms, rather than trying to monopolize the market with proprietary “must-have” products.

The move could also lead to a wider variety of options for smart devices and wearables, creating interesting opportunities for third-party mobile app developers in the process. Reports from Bloomberg show that Google has been courting third-party device manufacturers with the goal of getting Google products like Google Now pre-installed on as many products as possible. (Think Android Wear.)

How Huawei’s product will perform if and when the company tries to expand it outside China remains to be seen. So long as anti-Google Internet blocks remain within the country, however, Huawei certainly has an advantage for testing, training, and honing its assistant’s potential.

The post Huawei will Launch a Google Now AI Assistant Competitor first appeared on Dogtown Media.