machine learning app developers | Dogtown Media (https://www.dogtownmedia.com)

Clutch Recognizes Dogtown Media as a Top Global B2B Company for 2021 https://www.dogtownmedia.com/clutch-recognizes-dogtown-media/ Tue, 07 Dec 2021 16:19:03 +0000


As 2021 comes to a close and we anticipate what's to come in 2022, it is with great appreciation and honor that we announce Dogtown Media has been named a Top Global B2B Company for 2021 by the major digital rating agency Clutch.co.

After 10 years in the mobile app space, earning a highly regarded global accolade is a major accomplishment. It points to Dogtown Media's continued dedication to its global client base and its hyper-focus on producing high-quality applications.

Dogtown Media is Los Angeles' leading mobile application company, working with organizations in nearly every vertical to bring their unique ideas and solutions to the app market. Dogtown Media prides itself on the satisfaction, approval, and happiness of its clients, and aims to create cutting-edge solutions that push the boundaries of what's thought to be possible in the mobile application space.


For those who may be unaware, this Clutch.co accolade is only the latest in a series of major awards for Dogtown Media, including Top 2021 B2B Leader in Artificial Intelligence for Robotics, Top 2020 Service Provider, and the 27th Best B2B Service Provider in the World in 2019. These honors reflect a dedication to craft and customer, and they only scratch the surface of the company's long list of accolades from Clutch and other prominent rating agencies in the mobile app space.

“This recognition feels surreal and we are at a loss for words,” notes founder Marc Fischer. “We feel truly honored to be recognized by such a prestigious rating firm, and we hope to continue to provide high-quality, meaningful applications for our clients today and far into the future.”

Here are some of the quotes that stood out most to us:


“They were an effective team, met deadlines, and created a great end product.” — Director, Risk Comm Lab, Temple University

“They built an intuitive and simple design, and the team works quickly to address bugs and solve problems.” — Senior Ops Manager, Hospital Innovation Lab

Let’s build something amazing together! Connect with us and get a free tech consultation.

The post Clutch Recognizes Dogtown Media as a Top Global B2B Company for 2021 first appeared on Dogtown Media.
How AI and Brain-Computer Interfaces Know What You’ll Find Attractive https://www.dogtownmedia.com/how-ai-and-brain-computer-interfaces-know-what-youll-find-attractive/ Mon, 03 May 2021 15:00:25 +0000

You know the saying “Looks aren't everything.” But if that were true, dating apps might look completely different. Matchmaking apps like Bumble and Los Angeles-based Tinder would not lead each potential match's profile with a large photo. In a world where attractiveness didn't matter, their UI and UX might lead each user's profile with education history or a custom message instead.

A new artificial intelligence (AI) algorithm is testing just how important attractiveness is by attempting to figure out who you'll find attractive, and why. A team of researchers from the University of Helsinki and the University of Copenhagen generated images of fake faces and asked people to rate them for attractiveness. The team then used that feedback to further tune the algorithm, making it even better at generating attractive fake faces.

Adversarial Strengths

The application uses a machine learning algorithm called a generative adversarial network (GAN), which creates fake faces by pitting two “adversarial” algorithms against one another. Adversarial means the two algorithms have opposing goals: one, called the generator, creates images based on what it learned during its training phase, while the other, called the discriminator, tries to tell which images are fake and which are real. The discriminator is tested with a mix of photos of real people and generated faces.

The two algorithms train each other in a loop, and each improves greatly with every cycle: the generator gets better at creating realistic images, and the discriminator gets better at spotting fakes. This adversarial relationship may at first sound unproductive, but the algorithms push each other forward precisely because each keeps testing the other.

The GAN algorithm was trained on 200,000 images of celebrities, who usually have attractive faces—at least, according to Hollywood standards.
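The adversarial loop described above can be sketched with a toy one-dimensional GAN in plain numpy. This is a loose illustration of the idea, not the researchers' face-generating model: the "real" data, the logistic discriminator, and all learning rates below are made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def discriminator(x, w, b):
    # Logistic score in (0, 1): probability the sample x is "real".
    return 1.0 / (1.0 + np.exp(-(x * w + b)))

def generator(z, theta):
    # Maps noise z to a sample; here simply a learned shift of the noise.
    return z + theta

# Toy "real" data: samples clustered around 4.0.
real = rng.normal(4.0, 0.5, size=256)

w, b, theta = 1.0, 0.0, 0.0  # discriminator and generator parameters
lr = 0.05
for step in range(2000):
    z = rng.normal(0.0, 0.5, size=256)
    fake = generator(z, theta)

    # Discriminator ascends log D(real) + log(1 - D(fake)).
    d_real = discriminator(real, w, b)
    d_fake = discriminator(fake, w, b)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    b += lr * np.mean((1 - d_real) - d_fake)

    # Generator ascends log D(fake): it is rewarded for fooling the discriminator.
    d_fake = discriminator(generator(z, theta), w, b)
    theta += lr * np.mean((1 - d_fake) * w)

# After training, theta should have drifted toward the real-data mean,
# because fooling the discriminator means matching the real distribution.
```

The same tug-of-war, scaled up to deep convolutional networks and 200,000 training images, is what lets the real system produce photorealistic faces.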

Testing Attractiveness

After the training phase, the generative algorithm produced hundreds of unique faces that it predicted would be as attractive as the celebrities it “knew” to be attractive. These fake faces were shown to real people wearing electroencephalography (EEG) equipment, a simple form of brain-computer interface. Using this data, the researchers could measure each person's brain activity as they viewed each photo.

When a participant saw an image of an attractive face, there was a marked increase in brain activity. This could be partially attributed to the fact that the participants were told to focus harder on faces they thought were attractive. The participants weren’t asked to articulate what specifically they found attractive about any of the images. Instead, the AI stored the EEG datapoints and found the commonalities within each photo.

Those commonalities could be big eyes, high cheekbones, a medium-sized nose, wide-set eyes, small ears, or any other facial feature. The AI found that most participants liked the same aspects of a face in an image. In other words, humans seem to favor most of the same facial features when asked about attractiveness.

Using the common features found by the algorithm, the team distilled this data back into a format that could be fed to the GAN. The generator then took this new information as instruction for its second batch of attractive faces. Now the faces had more chiseled jawlines, darker and more mysterious eyes, curlier hair, and other conventionally attractive features.
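One hedged sketch of that "feed the preferences back to the generator" step: treat each shown face as a latent vector, weight the latents by each viewer's measured response, and nudge new samples toward the weighted average. The shapes, scores, and `strength` parameter below are invented for illustration and are not the paper's actual procedure.

```python
import numpy as np

rng = np.random.default_rng(42)

# One latent vector per face that was shown, plus a per-face response
# score standing in for the EEG-derived attractiveness signal.
latents = rng.normal(size=(200, 16))
scores = rng.uniform(0.0, 1.0, size=200)

# Preference direction: the response-weighted average of the latents.
preference = (scores[:, None] * latents).sum(axis=0) / scores.sum()

def personalised_latent(strength=2.0):
    # A fresh latent nudged toward the inferred preference direction;
    # feeding this to the generator would yield a "personalised" face.
    return rng.normal(size=16) + strength * preference
```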

Real Looks vs. Fake Faces

When this second round of generated photos was shown to participants, they were instructed to rate each face as attractive or unattractive. Participants rated 87% of the newly generated faces as attractive. The remaining 13% either looked too perfect or had something subtly off about their facial features. Even though participants were told to focus on attractiveness, they couldn't look past faces that seemed fake.

AI developers and AI ethics experts worry that this type of well-performing technology could be used to generate faces that look realistic for the purposes of deepfake videos or fake images. Not only do the faces not need to be real, they don’t need to be attractive to cause issues for people or even nations. And the consequences don’t need to be so far-reaching: even social media accounts used for a malicious purpose could use AI-generated fake faces to blend in with the crowd. They might even look normal and real at a quick glance. After all, how much detail can you see in a small circular avatar?


The Future of Dating?

The future of this type of technology extends far beyond dating and social media; it could be used for political gain or even to start a war. The research team is interested in advancing the technology and has ideas for using its application in productive, non-malicious ways. Tuukka Ruotsalo, an associate professor at the University of Helsinki, says the team hopes to dig deeper into attractiveness, as well as explore stereotypes, biases, preferences, and individual differences.

Have you come across an AI-generated face that was attractive but looked off? How did it make you feel? Let us know in the comments below!

The post How AI and Brain-Computer Interfaces Know What You’ll Find Attractive first appeared on Dogtown Media.
Can ‘Quantum Brains’ Accelerate AI? https://www.dogtownmedia.com/can-quantum-brains-accelerate-ai/ Wed, 28 Apr 2021 15:00:45 +0000

There's an unexpected chemical element that could become the basis for a new type of computer: cobalt. Cobalt could help us combine our brain's capabilities with quantum mechanics, paving the way for a kind of computer no one has ever seen before. One of the most innovative aspects of this new computer is that it could learn, as humans do, using the very hardware it's made of, removing the need to bolt on separate artificial intelligence (AI) applications.

The model simulates how a human’s brain processes information using neurons and synapses (our own “hardware”) instead of computer CPUs. Intrigued? Read on, it gets even more interesting.

Quantum Computing

Using the inherent quantum properties of cobalt atoms, a team of researchers from Radboud University in the Netherlands created organized networks of atomic spin states. With these networks, they built a quantum brain that can process information and save it to memory. This is no longer “artificial” intelligence; it's the closest computing model we have to a real human brain and how it works.

It's well known that machine learning applications are built on algorithms that consume a lot of energy and require a lot of data. Google, Apple, and Amazon have enormous data centers to overcome that limitation, but that's not realistic for the hundreds of smaller AI research firms and institutions. Experts also worry that computing power may be reaching its peak, despite what Moore's Law predicts about the rate of technological advancement.

This new computing method is a promising alternative to overcome these limitations. And, according to the lead author, Dr. Alexander Khajetoorians, the new method “could be the basis for a future solution for applications in AI.”

Integrating Neuroscience

Many AI methods, like deep learning, are already modeled loosely after the human brain. But our current computing technology is limited by the fact that memory and computing units are separated from each other, creating a time, energy, and resource issue when data has to be shuffled back and forth for complex algorithms that require a lot of training data. Experts are concerned about how far we can optimize AI algorithms for efficiency with our current computing technology.

In contrast, the cobalt method allows us to store and compute in one unit. It forgoes CPUs, memory, and chips, allowing for faster computation and memory retrieval as well as less energy consumption. The cobalt method is also extremely flexible: if the algorithm learns that a new factor makes it perform better, it has the capacity to store this updated information in relation to the original for faster retrieval next time. This is incredibly similar to the brain, and it could be the future of computing technology.

Cobalt’s Quantum Spin States

The Radboud University research team has been working on this problem for years. In 2018, the group found that a single cobalt atom could unlock a computing model closer to neurons and our brains. They discovered that several properties of quantum spin states could make this a reality. For example, an atom can be in a superposition of multiple spin states, with a certain probability of being found in each. That's loosely similar to how neurons decide to fire and how synapses pass on data.

Another property they dove into was quantum coupling, in which two atoms bind together so that the quantum spin state of one influences the other. This, too, is similar to how neurons communicate.

With these two insights, the team set out to build a computing method modeled after neurons and synapses. They placed multiple cobalt atoms on a superconducting surface made of black phosphorus, then took on the challenge of inducing networking and firing between the atoms: could they simulate a neuron firing, and could they embed information in the atoms' spin states?

After working out a “yes” to those questions, the team used weak currents to feed the system 0s and 1s, which could be translated into probabilities of each atom encoding a 0 or a 1. Then the team applied a small voltage to the atoms to simulate how our neurons receive electrical signals before they act (or don't). The result was surprising and significant: the voltage caused the atoms to behave in two different ways. It made them “fire” and pass information to the next atom, and it slightly changed their structure afterward, as we see with synapses.


Khajetoorians said, “When stimulating the material over a longer period of time with a certain voltage, we were very surprised to see that the synapses actually changed. The material adapted its reaction based on the external stimuli that it received. It learned by itself.”

A New Kind of Future

Our current computing hardware requires the dangerous mining of rare elements and materials, while cobalt quantum states promise more ease, affordability, and efficiency. But it will still be a while before we see this innovation in our data centers and computing models. We will need to prove its ability before it's adopted in Silicon Valley.

The team still must figure out how to scale the system and demonstrate it with a real algorithm, and a machine will need to be built around the new technology. Although there's a lot of work left, Khajetoorians is excited about the future of his research. After all, his team's work may be the foundation of AI's future.

The post Can ‘Quantum Brains’ Accelerate AI? first appeared on Dogtown Media.
Do You Really Need Machine Learning for Your Chatbot? https://www.dogtownmedia.com/do-you-really-need-machine-learning-for-your-chatbot/ Mon, 05 Apr 2021 15:00:46 +0000

Developing chatbots can involve as little or as much complexity as you want, depending on your budget, desired accuracy, and business application. Although chatbots can be as simple as pattern-based applications or as involved as machine learning (ML) applications, they can always be upgraded to newer technologies when the time is right. With the proper chatbot functionality, you can impress your existing customers and convert new leads.

In this post, we'll delve into the different kinds of chatbots, when you should consider natural language processing technology, and some examples of companies that have elevated the chatbot game with their applications.

Types of Chatbots

The best way to categorize chatbots is by the type of technology used to create them. There are three kinds of chatbots that a business can utilize for support inquiries.

Pattern-based chatbots

Pattern-based chatbots are made up of pre-set question-and-answer flows that users follow when they interact with the bot. These are the simplest chatbots available for a business's use cases, and they're easy to create and deploy.

But for users, this type of chatbot is often frustrating and unyielding. Because of the pre-configured bot flow, the chatbot has limited usability and lacks flexibility. Often, pattern-based chatbots direct users to a help article or landing page to help them answer their questions.

An example of pattern-based chatbots is the kind you often find on Facebook. Although some of the chatbots on Facebook use AI, many use a simple keyword-based rule chain to determine the appropriate response to the customer. Bud Light had a memorable chatbot during the 2017 NFL season that allowed fans to get beer delivered to their home. The chatbot only “worked” on game days, sent reminders to fans a few hours before each game, and tried to get the beer to the fan within an hour of purchase. It was a wildly successful bot with an engagement rate of 75%.
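A keyword-based rule chain of the kind described above can be sketched in a few lines. The rules, replies, and URLs below are invented for illustration, not taken from any real bot:

```python
import re

# Each rule maps trigger keywords to a canned reply; the final fallback
# points the user at a help page, as pattern-based bots often do.
RULES = [
    ({"refund", "return"}, "You can start a return at example.com/returns."),
    ({"hours", "open"}, "We're open 9am-5pm, Monday through Friday."),
    ({"order", "status"}, "Check your order status at example.com/orders."),
]
FALLBACK = "Sorry, I didn't catch that. See our help center: example.com/help"

def reply(message: str) -> str:
    words = set(re.findall(r"[a-z]+", message.lower()))
    for keywords, answer in RULES:
        if keywords & words:  # respond on the first rule with any trigger word
            return answer
    return FALLBACK

print(reply("What are your opening hours?"))  # -> We're open 9am-5pm, Monday through Friday.
```

The simplicity is the point: there is no learning and no context, which is exactly why these bots are cheap to build but frustrating when a question falls outside the pre-set flow.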

Machine learning chatbots

ML chatbots use a combination of machine learning and natural language processing (NLP). These chatbots usually result in a much better user experience for customers and interested visitors. ML chatbots are often used by businesses with more complex use cases, like healthcare organizations that want to stay in touch with their patients and educate them on their medical conditions.

Because of the ML aspect, these chatbots can learn and improve their responses over time by storing new information in their memory. When you add NLP into the mix, the chatbot becomes more human: it can recognize tone, keywords, synonyms, and the underlying question before it generates a helpful response. The complexity that ML and NLP introduce means that ML chatbots are more difficult to develop and maintain.

ML chatbots require more investment in both money and time. But in the long run, these chatbots are more promising and friendlier to customers. Marriott deployed its Facebook chatbot in 2016 simply to help customers combine their Marriott and Starwood reward cards, but the use cases for a multidimensional chatbot grew quickly. Marriott used NLP to develop a Facebook chatbot that took care of much more for customers: booking rooms, redeeming rewards, learning about destinations, and even finding a career at Marriott.
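To make the "learning from stored examples" idea concrete, here is a tiny bag-of-words intent classifier (naive Bayes with add-one smoothing) in plain Python. The training phrases and intent labels are made up, and production ML chatbots use far richer models, but the principle of scoring a message against learned word statistics is the same:

```python
import math
import re
from collections import Counter, defaultdict

# Tiny labelled corpus; phrases and intent names are invented.
TRAINING = [
    ("book a room for tonight", "booking"),
    ("reserve a double room", "booking"),
    ("how many points do i have", "rewards"),
    ("redeem my reward points", "rewards"),
]

def tokens(text):
    return re.findall(r"[a-z]+", text.lower())

word_counts = defaultdict(Counter)  # intent -> word frequencies
intent_counts = Counter()
for phrase, intent in TRAINING:
    intent_counts[intent] += 1
    word_counts[intent].update(tokens(phrase))

vocab = {w for counts in word_counts.values() for w in counts}

def classify(message):
    # Score each intent with log P(intent) + sum of log P(word | intent),
    # using add-one smoothing so unseen words don't zero out a score.
    scores = {}
    for intent in intent_counts:
        total = sum(word_counts[intent].values())
        score = math.log(intent_counts[intent] / len(TRAINING))
        for w in tokens(message):
            score += math.log((word_counts[intent][w] + 1) / (total + len(vocab)))
        scores[intent] = score
    return max(scores, key=scores.get)

print(classify("i want to book a room"))  # -> booking
```

Unlike the keyword rule chain, this approach improves as more labelled conversations are added to the training set, which is what lets ML chatbots refine their responses over time.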

Hybrid AI chatbots

Hybrid chatbots combine pattern-based flows with the benefits of artificial intelligence (AI). The result is a contextual chatbot that uses user input to generate an appropriate response. These chatbots are still relatively new; examples include Siri, Cortana, and Alexa.

Of all of the types of chatbots, hybrid AI chatbots cost the most in money, time, and resources to develop and maintain. On the plus side, these chatbots are flexible: they utilize machine learning when needed and prioritize the context of the user’s problem.

Using Natural Language Processing

NLP is a must-have technology if your business needs an ML-enabled chatbot. NLP combines linguistics, computer science, and AI to create more natural human-computer interactions. Its main goal is to read, decode, understand, and contextualize human language, even when that language isn't English.

NLP chatbots aim to understand the tone, meaning, and context behind the user's input before creating a response. For users, the experience is smoother and more natural: they feel free to ask more complex questions and expect a better response than a pattern-based chatbot could generate.

But does your company need NLP for your chatbot needs? It depends on your budget, how you’ll use the chatbot, what its purpose will be, how it will be built, and how often customers will use it. Chatbots can require more budget than you’d think, so your chatbot should be used in a way that will generate the highest return on investment.

Other things to consider: Will the chatbot eventually be full of buttons (preset options for the user to pick from), or will it let users freely type questions? If buttons, a pattern-based chatbot might be the best fit. Will the chatbot have a personality? If so, an ML chatbot would be a good choice.

NLP isn’t a great fit for static user guides, but it’s a wonderful technology for booking travel or discussing medical symptoms. Amtrak’s chatbot is one of the most successful chatbots of all time. Named Julie, it allows customers to accomplish a variety of tasks, from booking train rides to generating hotel recommendations and tourism activities. A year into its deployment, Julie had helped over five million customers book travel and get answers to questions. Over the years, Julie has saved Amtrak over $1 million in customer support costs.


Gyant is a San Francisco-based chatbot development company, and it creates custom chatbots for healthcare facilities. These chatbots are used to communicate with patients, inquire about symptoms, and help patients find a nearby doctor with availability. The chatbot even sends the patient’s symptoms and details to a nearby doctor for diagnosis and prescription. Gyant’s chatbots are multilingual: they can communicate in English, German, Spanish, and Portuguese.

One Thing’s Clear: Chatbots Are Here to Stay

As users, we see chatbots more and more at retailers, on product websites, and even at places like hospital help desks. The chatbot industry grows around 26% every year, making this a technology to watch in the coming years. Chatbots that use ML and AI are getting better every day, so it's only a matter of time before we start interacting with chatbots we can't tell apart from humans. In the next decade, chatbots are likely to become the main way customers interact with companies. So what are you waiting for?

The post Do You Really Need Machine Learning for Your Chatbot? first appeared on Dogtown Media.
Police Claim Right to Use AI Facial Recognition Despite Restrictions https://www.dogtownmedia.com/police-claim-right-to-use-ai-facial-recognition-despite-restrictions/ Mon, 22 Mar 2021 15:00:50 +0000


As a whole, artificial intelligence (AI) applications can be incredibly controversial. AI has been found to be racist, sexist, and biased. Placed in the wrong hands, AI tools like facial recognition could create life-or-death situations. When the Capitol Building was attacked in January, the public worked alongside police departments all over the country to help the FBI identify rioters.

Although facial recognition technology has been shown to be inaccurate and racially biased, it has been widely used in both the public and private sectors in the past few months. The contentious technology has been banned for law enforcement use in several major metropolitan areas, but police departments say there are loopholes around these rules.

Finding a Way Through the Loopholes

In Pittsburgh, Alameda (California), Madison (Wisconsin), Boston, Northampton (Massachusetts), and Easthampton (Massachusetts), officials have publicly stated that law enforcement bans of facial recognition have loopholes. These loopholes allow police to use facial recognition technology to access information and take action on it.

Some experts say these loopholes aren’t bad. The technology helped the public and local police departments track down rioters for the FBI in recent weeks. But other experts say that loopholes allow law enforcement to continue their behavior and actions without facing any consequences. According to Mohammad Tajsar, a senior staff attorney for the American Civil Liberties Union in Southern California, “If you create a carve-out for the cops, they will take it.”

In Pittsburgh, the loophole is the part of the legislation that says police departments can use software produced or shared by other police departments. Specifically, the law “shall not affect activities related to databases, programs, and technology regulated, operated, maintained, and published by another government entity.” Madison, Boston, and Alameda have very similar language in their loopholes.

In Madison, police officers can use facial recognition technology that was supplied by a business, even if it’s banned from government usage. In Easthampton, police officers can use the technology as evidence if it was supplied by another police department, but not if it was supplied by a business. In Northampton, law enforcement can use the technology when provided by other police agencies and by businesses.

Ultimately, according to Kade Crockford, director of the Technology for Liberty program at the ACLU of Massachusetts, a federal ban or restriction on facial recognition would have the best effect on the technology's usage.

Tracking Usage of the Technology

When so many local and state laws allow facial recognition as long as it comes from another police agency, Crockford says, it becomes very difficult to track if and when facial recognition was used during evidence gathering.

Quite often, however, police officers knowingly use the technology against citizens who were never informed of its use. For example, in Miami, police arrested protestors using facial recognition, yet even the protestors' defense attorneys didn't know that facial recognition, rather than unspecified “investigative means”, was used to track down their clients. In another incident, Jacksonville police used facial recognition to arrest a citizen who sold $50 of cocaine, but this wasn't disclosed in the police report.

Beyond law enforcement purposely hiding when facial recognition is used in an investigation, the technology was banned in the first place because it is deeply biased against people of color and women. So even when a police officer uses technology provided by a business, as in the cases of Home Depot, Rite Aid, and Walmart, it's still highly possible the technology isn't working correctly. Jake Laperruque, senior counsel at the Constitution Project, says, “If this is something that's going to lead to a store calling the police on a person, that to me creates a lot of the same risks if you worry about facial recognition misidentifying someone by the police.”


Following Portland’s Lead

Portland, Oregon, seems to be one of the only cities showing real forethought on behalf of its citizens. Last September, the city passed the most exhaustive ban on facial recognition technology to date. The law prohibits law enforcement, as well as public places and businesses, from using the technology, including restaurants, brick-and-mortar stores, and anywhere the public would visit.

Hector Dominguez is the Open Data Coordinator with Portland's Smart City PDX. He says that once the department did its due diligence to develop Portland's facial recognition ban, it started “getting a lot of community feedback and recognizing the role that private businesses are having in connecting people's information.” Even more worrisome were businesses that appeared out of thin air to lobby against the tight regulations.

Amazon, for example, lobbied Portland for the first time ever, spending $12,000. The Oregon Bankers Association asked for an exception allowing the technology when providing law enforcement with video of robberies, and the Portland Business Alliance asked for exemptions for retailers, airlines, banks, hotels, concert venues, and amusement parks. But Portland stayed strong and allowed only one exemption: a business may use the technology when required to comply with federal, state, or local laws. This covers agencies like Customs and Border Protection at the airport.

According to Lia Holland, an organizer in Portland with Fight for the Future, police departments may use facial recognition maliciously, but private businesses use the technology for similarly malicious reasons. One is more hidden (a business connecting, monitoring, and tracking customers' faces against purchase behavior or intent) while the other is more in-your-face. In everyday circumstances, says Holland, businesses have more reason to surveil than law enforcement does.

Policing the Public Today

Although facial recognition is an example of an advanced machine learning application, it is ingrained with bias that could negatively impact someone's life for decades. In that sense, the technology is still in its infancy and needs a lot of fixing, testing, and training to meet even Portland's strict guidelines. Until then, facial recognition is not suitable for use against the public, especially by businesses that would quietly profit from it and law enforcement that may act violently on its findings.

Would you buy stealth clothing that confuses facial recognition algorithms? Let us know in the comments below!

The post Police Claim Right to Use AI Facial Recognition Despite Restrictions first appeared on Dogtown Media.
Clutch Recognizes Dogtown Media as a 2021 B2B Leader in Artificial Intelligence for Robotics https://www.dogtownmedia.com/clutch-recognizes-dogtown-media-as-a-2021-b2b-leader-in-artificial-intelligence-for-robotics/ Thu, 11 Mar 2021 18:00:53 +0000


The goal of robotics is to develop and construct meaningful machines that will support and help human processes. The multidisciplinary field fuses technologies such as artificial intelligence and machine learning together to develop innovative solutions.

Dogtown Media is Los Angeles' leading robotics company, working with enterprises and organizations to solve their business challenges. Our team prides itself on the satisfaction, approval, and happiness of our clients. We want to create cutting-edge solutions that solve simple frustrations and tackle business hurdles.


Just recently, Dogtown Media was hailed as a top agency on Clutch for its excellence in AI for robotics. If you’re not familiar, Clutch is a B2B review platform based in Washington, DC. The site is well respected in the space for its commitment to providing data-driven content and verifying client reviews.

This recognition feels surreal and we are lost for words. Our team wants to send its sincerest thanks to Clutch for this award. We believe that this award is a great sign for our 2021 run, and we are looking forward to a prosperous year. 

We know that this recognition was made possible by our clients’ amazing feedback. We owe this success to our clients, especially those who left us reviews on Clutch.

Here are some of the quotes that stood out most to us:

“They’re a small shop that’s motivated and offers a comprehensive list of services and capabilities. They’re accountable and willing to work by our side to make the best product possible. The entire team is professional and eager to solve our problems.” — Founder, Mobile Sales Training Company

“Dogtown Media developed a solution for a project considered impossible to do in the tech world. They’ve taken every goal we’ve had and delivered above and beyond our expectations, beating our requirement of achieving a 50% accuracy rating on visual search capabilities with a software that is over 90% accurate.” — CTO, Innovengine

Let’s build something amazing together! Connect with us and get a free tech consultation.

The post Clutch Recognizes Dogtown Media as a 2021 B2B Leader in Artificial Intelligence for Robotics first appeared on Dogtown Media.]]>
Has COVID-19 Catalyzed an Automation Revolution? https://www.dogtownmedia.com/has-covid-19-catalyzed-an-automation-revolution/ Mon, 08 Feb 2021 16:00:32 +0000 https://www.dogtownmedia.com/?p=16023 Robots are getting better at their jobs, and robotics engineers are building more life-like robots...

Robots are getting better at their jobs, and robotics engineers are building more life-like robots than ever. The technology is a tool in the larger field of automation, which, like all industries, was largely affected by COVID-19. As stay-at-home orders were enacted and employees became infected, companies that had automation in place were much better equipped to ride out the pandemic compared to competitors that relied on human labor.

Siddhartha Srinivasa, a computer science professor at the University of Washington and director of robotics artificial intelligence (AI) at Amazon, said he wants to make robots unsexy again. For example, he said, we don’t consider our dishwashers to be state-of-the-art and sexy even though they’re incredibly complex mechanical robots. According to Srinivasa, “When something becomes unsexy, it means that it works so well that you don’t have to think about it. … I want to get robots to that stage of reliability.” Although we haven’t reached that stage yet, Srinivasa is one of many AI developers around the world who want to drastically improve the perception of automation and robotics.

The Impact of the Pandemic

The economic, business, industrial, and consumer effects of the pandemic cannot be overstated. Many businesses scrambled to implement automation at the start of the pandemic to keep their employees from taking on higher infection risk. Most of them targeted human-performed work that was necessary to business operations so that the company could continue business as usual.

According to research by Digital Trends, the industries that significantly increased their automation efforts include grocery stores, meatpacking facilities, and manufacturing, among others. In June 2020, 44% of corporate financial officers surveyed said their company was considering adding more automation to its workflows to combat the negative effects of the pandemic. Even that figure may understate the trend: MIT economist David Autor describes the effect of COVID-19 on the economy as one “that forces automation.”

Autor says that there has been no reduction in demand for automation as companies hurry to automate in sectors facing worker shortages. One of the hardest-hit sectors is hospitality: as consumers halted their travel plans and canceled reservations without rebooking, the industry saw demand disappear virtually overnight.

In sectors like agriculture and distribution, automation is boosting revenues while keeping labor costs down. Specifically, in the distribution industry, e-commerce has changed the landscape of shipping, inventory tracking, and package receiving. More and more warehouses are becoming automated, which is increasing productivity and efficiency while keeping employees safe.

China’s Role

Of all the countries in the world, China is best positioned to lead the world into increased automation. Much of the world’s manufacturing is done in China with Chinese labor, and even though the country has an enormous workforce, labor costs there have risen roughly tenfold in the past two decades. As the largest and fastest-growing global market for industrial robotics, China has the biggest incentive to automate factories and manufacturing companies within and outside the mainland.

China’s industrial robotics market grew to $5.4 billion in 2019, representing 33% of global sales. However, like most of the world’s, China’s workforce is aging toward retirement, and the country faces major difficulty finding enough young people to replace retiring workers. For maximum short-term benefit, automation is needed to stabilize the global economy.

In some areas, like restaurant automation, China is ahead of the rest of the world. In early 2020, a UBS Group AG survey found that 17% of consumers in the U.S. ordered meals through their phone once a week or more while 64% of respondents based in China ordered meals once a week or more using their mobile device. Although a mobile app may not be robotic automation, experts believe that robot waiters and chefs aren’t too far away.

The Next Step for Robots

Robots have slowly made their way into the mainstream, but they have mostly been confined to entertainment (looking at you, Boston Dynamics), delivery, and factory automation. During the pandemic, we saw the rise of robots in hospitals, airports, and offices that continuously clean and deliver important medications as needed. In fact, researchers from Pompeu Fabra University have counted over 66 different kinds of these “social” robots.

The robot revolution that everyone imagines, the one where automation, robotics, machine learning development, and AI all seamlessly come together to transform nearly every industry, hasn’t happened yet. Nothing points to a robot revolution happening overnight; instead, we will likely reach it slowly, one step at a time. When 5G is more widely available, automation will accelerate, allowing robotics to grow more rapidly.

People-Facing Robots

Unfortunately, consumer-facing robots are still met with hesitation, fear, admiration, and rejection, all at the same time. For example, Walmart ended its contract with San Francisco robotics development firm Bossa Nova. The end of the contract meant that 1,000 inventory robots were pulled from Walmart stores because the company was worried about how customers would react to the six-foot scanning robots.

Experts aren’t sure that the World Economic Forum’s forecast of machines handling almost 50% of work tasks worldwide by 2025 is feasible or realistic. But it is still possible.

Just The Start of It All

Even with the pandemic’s boost to automation efforts, it seems unlikely that robots will appear in many more aspects of our lives within a short amount of time. It will happen one day, but it will take a lot of time and gradual acceptance before people adapt to robots psychologically and practically. Until then, robots will still be seen as sleek and sexy rather than as unremarkable, reliable tools.

The post Has COVID-19 Catalyzed an Automation Revolution? first appeared on Dogtown Media.]]>
Does AI’s Full Potential Depend on a Physical Body? https://www.dogtownmedia.com/does-ais-full-potential-depend-on-a-physical-body/ Mon, 04 Jan 2021 16:00:19 +0000 https://www.dogtownmedia.com/?p=15897 Artificial intelligence (AI) has been expanding its skillset and improving its performance in a variety...

Artificial intelligence (AI) has been expanding its skill set and improving its performance in a variety of industries. It’s become an integrated part of image processing and identification, healthcare diagnostics, automatic translation, autonomous cars, and speech analysis. But these advancements raise the question: when will AI reach its full potential? When it has infiltrated every part of our lives and the world around us? When it no longer has any challenges to overcome?

Alan Turing, the London-born mathematics, computer science, and philosophy prodigy, speculated about this almost a century ago. He proved that there are computations that will never finish, while others might take years or centuries to complete. However, we know that reality and theory, although closely related, are not always indicative of one another, and AI could push the boundaries of what we think is possible.

A Theory of Mind, Not Computer Science

One example of a problem that could take decades to solve is asking a supercomputer to compute the possible combinations of future moves in a chess game. It doesn’t take much time to calculate a few moves ahead, but the problem arises when you ask it to work out every line of play to the end of an 80-move game. A year after you started the program, the computer would still have explored only a tiny fraction of the game tree. This is what happens when a simple problem is scaled up.
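A quick back-of-the-envelope calculation shows why. Chess has roughly 35 legal moves per position (a commonly cited average), so the number of possible move sequences grows exponentially with depth:

```python
# Rough arithmetic on chess game trees: with ~35 legal moves per
# position (a commonly cited average), move sequences grow as 35^n.
def game_tree_size(branching_factor: int, depth: int) -> int:
    """Number of distinct move sequences of the given depth."""
    return branching_factor ** depth

print(game_tree_size(35, 4))             # a few moves ahead: 1,500,625
print(len(str(game_tree_size(35, 80))))  # 80 moves: a 124-digit number
```

No amount of raw speed catches up with a count that large, which is why brute-force enumeration fails where smarter search does not.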

Decades ago, AI did well at smaller games, but it had difficulties scaling up to larger games like chess. But modern-day AI has used a variety of mathematical concepts and machine learning development techniques to jump over this hurdle. It can now beat the world’s best Go player by looking many more moves ahead than the human player could ever manage to. But if we look closely at the scaling-up problem, it’s more of a computer science problem than an AI limitation.

The ultimate goal for AI is seamless human-computer interaction. It must be able to take a variety of feedback and adjust its performance accordingly; it must act intelligently; it must communicate clearly; and, if at all possible, it should be interactive, friendly, and even social. So how do we scale AI up from playing games on a computer screen to a “normal-looking” intelligent technology? We want AI to eventually act like a human being: one that remembers your past conversations, understands belief systems, reads between the lines, and identifies intentions when someone is speaking.

The psychological term for this is “theory of mind”: knowing that the person you’re interacting with has their own thoughts and experiences, sees the world around them, and wants to connect and relate with you. With a theory of mind, you can map the other person’s thoughts, experiences, and intentions onto your own.

The Self Model

The issue is that AI applications are still mainly screen-based. They are integrated into chatbots that we access through a computer screen, for example. But for an AI to truly have a conversation and pick up all of the nuances of a conversation, like signals, gestures, and unspoken intentions, it needs to have a physical body. And it needs to be aware of its physical body, not only in relation to the other person but also in relation to the world. Human children slowly work their way to such a mental model, so it’s not completely out of the question to program it into an AI.

Social interaction depends on every party having a “sense of self” while maintaining a mental model of the other parties. For an AI, a sense of self would include an understanding of how its body operates, a map of the space it’s in, a catalog of skills and actions, the ability to learn more, and a subjective perspective that could be updated and changed based on a conversation.
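As a rough illustration only (none of these field names come from a real robotics framework; they simply mirror the list above), such a “sense of self” might be sketched as a data structure:

```python
from dataclasses import dataclass, field

# Illustrative sketch of an agent's "sense of self" as described above.
# All field names are hypothetical, not from any real robotics system.
@dataclass
class SelfModel:
    body_state: dict = field(default_factory=dict)   # how its body operates
    spatial_map: dict = field(default_factory=dict)  # a map of the space it's in
    skills: list = field(default_factory=list)       # catalog of known actions
    beliefs: dict = field(default_factory=dict)      # subjective perspective

    def update_belief(self, topic: str, value) -> None:
        # Beliefs can be revised as a conversation unfolds.
        self.beliefs[topic] = value

robot = SelfModel()
robot.skills.append("grasp object")
robot.update_belief("user_mood", "frustrated")
print(robot.beliefs["user_mood"])  # frustrated
```

The hard part, of course, is not declaring these fields but keeping them accurate as the robot moves, acts, and converses.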

As we know, AI can be ingrained with the same biases as its designers and developers. So an AI built with experiences needs to understand the experiences it was programmed with, and it also needs to add new experiences to its memory, just like a human. Thus, to truly relate to and connect with humans, an AI needs a physical body.

Perception and Movement

It’s possible that we will develop AI-enhanced robots that follow the developmental and growth patterns of human infants and children. It takes years for children to learn how to touch, feel, and act, not to mention to learn the consequences of their actions. A massive amount of research and first-hand observation of human children can inform how we build these embodied AI robots.

These days, research teams are experimenting with mimicking infancy in robots: letting a robot “infant” learn from its caregiver, explore its surroundings, teach itself the physics of the world around it, and continue to learn and grow through actions, consequences, and conversations. We’ll have to see how these experiments turn out, but one thing is for sure: the future of AI is not in embedded systems; it’s in embodied systems.

The post Does AI’s Full Potential Depend on a Physical Body? first appeared on Dogtown Media.]]>
Adversarial Machine Learning: A Looming Threat to AI Security https://www.dogtownmedia.com/adversarial-machine-learning-a-looming-threat-to-ai-security/ Mon, 28 Dec 2020 16:00:28 +0000 https://www.dogtownmedia.com/?p=15879 Machine learning is great for niched-down problems that require an artificial intelligence (AI) algorithm to...

Machine learning is great for niched-down problems that require an artificial intelligence (AI) algorithm to pay attention to very specific details. For that reason, it’s becoming a very popular technology. But machine learning applications used in essential fields like healthcare, transportation, and finance carry a lot of responsibility. If one of these algorithms were to turn adversarial through a hack or a malicious developer, it could do a lot of damage to people, to companies, and to the public’s trust in technology.
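To see how such an attack can work in practice, consider the well-known fast gradient sign method (FGSM): nudge each input feature a small step in the direction that most increases the model’s error. Here is a minimal sketch against a toy linear classifier; the weights, input, and step size are all invented for illustration (the step is deliberately exaggerated so the flip is visible):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy "pre-trained" linear classifier: p(y=1 | x) = sigmoid(w.x + b).
# Weights and bias are invented for illustration.
w = [2.0, -3.0]
b = 0.5

def predict(x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def sign(v):
    return (v > 0) - (v < 0)

def fgsm(x, y, epsilon):
    """Fast gradient sign method: step epsilon in the direction of the
    loss gradient with respect to the input, which increases the loss.
    For logistic loss, d(loss)/dx_i = (p - y) * w_i."""
    p = predict(x)
    return [xi + epsilon * sign((p - y) * wi) for xi, wi in zip(x, w)]

x = [1.0, -0.5]                      # correctly classified as class 1
x_adv = fgsm(x, y=1.0, epsilon=1.0)  # exaggerated step for the toy model
print(predict(x) > 0.5, predict(x_adv) > 0.5)  # True False
```

Against a deep network the same idea applies feature by feature, and the perturbation can be small enough to be invisible to a human while still flipping the prediction.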

A research collaboration between 13 organizations focuses on finding ways machine learning algorithms can be compromised. One of the partnership’s biggest outcomes is a framework called the Adversarial ML Threat Matrix (AMLTM) for detecting and responding to the various types of adversarial attacks against machine learning algorithms. The researchers have shown that the threat to machine learning is real and here to stay, and that AI systems need to be secured now.

Protecting Machine Learning

The AMLTM follows the convention and layout of ATT&CK, a tried-and-true framework developed by MITRE, one of the organizations in the collaboration. ATT&CK was created to handle security threats in enterprise networks. Like the AMLTM, ATT&CK uses a matrix that lists adversarial tactics and what malicious actors typically do within each one, helping cybersecurity experts and threat analysts find patterns and warning signs of potential attacks.

The ATT&CK table is well-known in the cybersecurity industry, so it made sense for the AMLTM to mimic its layout, making it understandable and accessible for both cybersecurity engineers and machine learning engineers. Each column represents a tactic, while each cell contains specific techniques to look for. Pin-Yu Chen, an AI researcher at IBM, another company in the collaboration, describes the matrix as one that “bridges the gap by offering a holistic view of security in emerging ML-based systems, as well as illustrating their causes from traditional means and new risks induced by ML.”
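Conceptually, the matrix is just a mapping from tactics (columns) to techniques (cells). A tiny illustrative fragment, with technique names abridged and paraphrased rather than taken verbatim from the published matrix:

```python
# Illustrative fragment of a tactics -> techniques matrix, in the spirit
# of ATT&CK and the AMLTM. Technique names are abridged paraphrases,
# not the matrix's official entries.
threat_matrix = {
    "Reconnaissance": ["Acquire public ML artifacts",
                       "Search victim's published research"],
    "Initial Access": ["Valid accounts",
                       "ML supply chain compromise"],
    "ML Attack Staging": ["Craft adversarial data",
                          "Train proxy model"],
    "Exfiltration": ["Extract model via inference API"],
}

def techniques_for(tactic: str) -> list:
    """One column of the matrix: techniques catalogued under a tactic."""
    return threat_matrix.get(tactic, [])

print(techniques_for("Initial Access"))
```

Analysts read the table column by column: observing one technique suggests which tactic an attacker is pursuing and which techniques might come next.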

Chen says that machine learning will become a mainstay as it expands into other industries undergoing a digital transformation; these applications could even include high-stakes decision-making. In fact, he says, “The notion of ‘system’ has evolved and become more complicated with the adoption of machine learning and deep learning.” For a company that moves from a transparent rule-based system to a black-box machine learning or AI-enhanced system, the new “smarter” system is at a considerably higher risk of attack and infiltration.

Complexities of Securing Machine Learning

With every new emerging technology comes a unique set of security and privacy problems. Web apps with a database backend created SQL injection threats. Improved JavaScript for websites’ frontend and backend created cross-site scripting threats. The Internet of Things created botnet threats (like the Mirai botnet) and proliferated the strength of DDoS attacks. Mobile phones introduced the threat of spying without permission. Although we have developed a host of protective measures, built-in security protocols, and ongoing research groups for these threats, it takes a lot of time, testing, and loss of revenue to create a robust cybersecurity solution.

For machine learning algorithms, the vulnerabilities are embedded within the thousands or millions of parameters of deep neural networks. They’re outside the scope of today’s security tools, and they are extremely difficult to find manually. Chen agrees that machine learning is so new that current software security doesn’t fit the bill yet, but he adds that bringing machine learning into today’s security protocols and landscape helps us develop new insights and improve risk assessment.

The AMLTM doesn’t skip a beat: it comes with case studies that involve adversarial machine learning, traditional security vulnerabilities, and combinations of both. It shows that adversarial attacks on machine learning systems aren’t just limited to the testing phase of an algorithm; they’re found in live systems as well. This really hits home that any machine learning system is vulnerable to malicious attacks, which raises the seriousness of the problem for all developers and engineers involved in the development, testing, and implementation of each system.

In one case study, the Seattle-based Microsoft Azure security team (another company in the AMLTM partnership) researched and consolidated information about a machine learning model. Then, they got access to the model using a valid account on the server. Using the information they’d found, they were able to detect adversarial vulnerabilities and create attacks against the model. Using these case studies, the research group hopes that security vendors will create new tools to secure and find weaknesses within machine learning systems in the future. Chen says, “Security is only as strong as its weakest link.”

Watching for Adversarial Threats

Without the AMLTM table, machine learning and AI developers and companies using these emerging technologies were creating algorithms blindly and without enough security. But the new matrix should give engineers the power to enhance their system’s security.

Chen says he wants machine learning engineers not only to test-drive their algorithms but also to run crash-dummy collision tests that expose the most vulnerable parts of an algorithm’s design. His ultimate hope for the AMLTM is that “the model developers and machine learning researchers can pay more attention to the security (robustness) aspect of the model and looking beyond a single performance metric such as accuracy.”

The post Adversarial Machine Learning: A Looming Threat to AI Security first appeared on Dogtown Media.]]>
What’s an Algorithm? A Look at the Magic Behind Data-Driven Decisions https://www.dogtownmedia.com/whats-an-algorithm-a-look-at-the-magic-behind-data-driven-decisions/ Mon, 21 Dec 2020 16:00:50 +0000 https://www.dogtownmedia.com/?p=15859 The term “algorithm” in computing is inescapable. Found everywhere, it can mean an artificial intelligence...

The term “algorithm” is inescapable in computing. Found everywhere, it can refer to an artificial intelligence (AI) application used in the cloud, in quantum computing, in self-driving cars, on social media, or in a host of other technologies. Like a program, an algorithm is a set of instructions for a computer. But where individual programming statements dole out very specific instructions, algorithms go much deeper: they ask the computer to find the best fit, best probability, or best prediction for a given set of information.

An algorithm gives a computer a set of facts (data) and tells it how to transform it into useful information, like instructions for machines, knowledge for people, or input for another algorithm. Algorithms can work on problems as simple as number sorting or as complex as finding your soulmate. But how do they work? Let’s take a deeper look into the anatomy of an algorithm.
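The number-sorting case makes the “set of facts in, useful information out” idea concrete. Here is insertion sort, spelled out as explicit steps a computer can follow:

```python
def insertion_sort(numbers):
    """Sort a list by repeatedly inserting each item into place."""
    result = list(numbers)            # input: the facts (data)
    for i in range(1, len(result)):
        current = result[i]
        j = i - 1
        # Shift larger items one slot right until current fits.
        while j >= 0 and result[j] > current:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = current
    return result                     # output: useful information

print(insertion_sort([42, 7, 19, 3]))  # [3, 7, 19, 42]
```

Sorting is trivial for a person eyeballing four numbers; the algorithm exists because the computer needs the task reduced to compare-and-shift steps it can repeat mechanically.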

Problem-Solving

At their core, algorithms break a complex problem (one that isn’t built into a programming language and can’t be readily found in a language’s library) down into a multi-step, solvable set of instructions for a computer. Many of these “complex” problems aren’t complex to us (like the number-sorting example), but they’re complex for a computer without being broken down into easier-to-understand steps. For example, how do you choose what you’re going to wear today?

For most people, it’s not too difficult to find something that looks nice and makes you feel confident and throw it on. Easy, right? Try to think about how a computer would approach this problem. You’d have to write some code on what you’ve worn recently, what fits and doesn’t fit anymore, what event you’re dressing up for or the style you’re going for, and even what shoes you have on hand to match with. Suddenly, the problem becomes more in-depth, with each topic involving multiple layers of decision-making and historical data.

The Inner Workings of an Algorithm

All of the aforementioned aspects (clothes worn recently, clothes that fit and don’t fit, the style to dress for, and the shoes to match) are inputs for the algorithm. An input is a piece of information that the algorithm will use directly to make decisions. This information is encoded as data, whether it’s an array of clothes that fit or a number and unit representing the day’s average temperature.

The algorithm then takes this input data and transforms it through a series of steps that involve calculations, comparisons, and decision-making. For the getting dressed example, the transformations would involve using the inputs to make a final decision about what to wear, comparing it against what fits and doesn’t fit, and recalculating the clothing choice accordingly. It might even continuously scrape the web for the most recent fashion trends so that it can develop an outfit for you that’s fashion-forward.

The algorithm then returns an output: the final outfit it has chosen for you, akin to its final answer or response. The output is data that has reached the end of its calculations and comparisons. Often, the output of one algorithm is used as the input for another, a practice known as stringing algorithms together. Each algorithm tackles one step at a time, while the entire ecosystem of connected algorithms works together to solve an overarching question or problem. For example, the chosen outfit might feed another algorithm that builds a weekly schedule of outfits, arranging them for maximum variety. Output can also take the form of visual or auditory answers to the problem presented.
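Put together, the getting-dressed walkthrough follows that same input, transform, output shape. A toy sketch, with all wardrobe data invented for illustration:

```python
def choose_outfit(wardrobe, recently_worn, style):
    """Input -> transformations -> output, as described above."""
    # Transformation 1: keep only clothes that fit.
    candidates = [item for item in wardrobe if item["fits"]]
    # Transformation 2: drop anything worn recently.
    candidates = [c for c in candidates if c["name"] not in recently_worn]
    # Transformation 3: decide based on the occasion's style.
    matches = [c for c in candidates if c["style"] == style]
    # Output: the final answer (None if nothing qualifies).
    return matches[0]["name"] if matches else None

wardrobe = [
    {"name": "black jeans", "fits": True, "style": "casual"},
    {"name": "grey suit", "fits": True, "style": "formal"},
    {"name": "old hoodie", "fits": False, "style": "casual"},
]
print(choose_outfit(wardrobe, recently_worn={"black jeans"}, style="formal"))
```

Its output (a chosen garment) could then become the input to a scheduling algorithm, which is exactly the stringing-together pattern described above.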

How Machine Learning Works

Machine learning is a subset of AI, and it refers to a special category of algorithms that try to learn based on historical data about decision-making. Machine learning is used for prediction, recommendations, and searching for information. For more complex user interfaces, a machine learning algorithm may present a variety of choices that the user can accept or deny.

In our “getting dressed” example, for algorithms that apply machine learning development principles, this user feedback can inform the next day’s clothing choices. If the algorithm records the user’s feedback for long enough, it can eventually surmise that you don’t really like those black jeans: you never choose an outfit with them, and whenever it presents an outfit involving them, you reject it. It could even monitor your social media to see which outfit components get the most likes and engagement to inform the next outfit it chooses for you.
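That rejection-tracking idea can be sketched in a few lines; the threshold and item names here are invented for illustration:

```python
from collections import Counter

# Hypothetical sketch of learning from accept/reject feedback;
# the threshold and item names are invented.
rejections = Counter()

def record_feedback(item: str, accepted: bool) -> None:
    if not accepted:
        rejections[item] += 1

def should_suggest(item: str, threshold: int = 3) -> bool:
    # After enough rejections, infer that the user dislikes the item.
    return rejections[item] < threshold

for _ in range(3):
    record_feedback("black jeans", accepted=False)
record_feedback("grey suit", accepted=True)

print(should_suggest("black jeans"), should_suggest("grey suit"))  # False True
```

Real recommender systems use far richer signals than a rejection count, but the loop is the same: record feedback, update the model, change the next suggestion.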

Applications for Tomorrow

Machine learning and AI have been used for some seriously cool applications. We recently wrote about how an AI algorithm uses your face to estimate your attractiveness, BMI, gender, and even life expectancy. Another great example is an AI algorithm, built on technology developed in Boston, that reads your emotions while you drive and helps you calm down during stressful driving incidents. AI has also been proposed to help judges decide whether someone is innocent or guilty, but that idea has exposed some of AI’s serious flaws: bias, racism, and sexism.

Although these examples may seem far-fetched, kooky, and even downright crazy, they’re a good glimpse into what the future of algorithms holds. Developers will continue to come up with new ways to apply algorithms, and we’ll see a variety of new ideas in the coming decade that utilize the basic concept of taking input data and transforming it until it becomes an output.

The post What’s an Algorithm? A Look at the Magic Behind Data-Driven Decisions first appeared on Dogtown Media.]]>