machine learning applications | Dogtown Media — iPhone App Development

Clutch Recognizes Dogtown Media as a Top Global B2B Company for 2021
https://www.dogtownmedia.com/clutch-recognizes-dogtown-media/ | Tue, 07 Dec 2021


As 2021 comes to a close and we anticipate what’s to come in 2022, it is with great appreciation and honor that we announce Dogtown Media has been named a Top Global B2B Company for 2021 by Clutch.co, a major digital ratings agency.

After 10 years in the mobile app space, earning such a highly regarded global accolade is a major accomplishment. It points to Dogtown Media’s continued dedication to its global client base and its hyper-focus on producing high-quality applications.

Dogtown Media is Los Angeles’ leading mobile application company, working with organizations in nearly every vertical to bring their unique ideas and solutions to the app market. Dogtown Media prides itself on the satisfaction, approval, and happiness of its clients, and it aims to create cutting-edge solutions that push the boundaries of what’s thought to be possible in the mobile application space.


For those who may be unaware, this Clutch.co accolade is just one in a series of major honors awarded to Dogtown Media, including Top 2021 B2B Leader in Artificial Intelligence for Robotics, Top 2020 Service Provider, and the 27th Best B2B Service Provider in the World in 2019. These awards point to a dedication to craft and customer, and they only scratch the surface of the company’s long list of recognitions from Clutch and other prominent rating agencies in the mobile app space.

“This recognition feels surreal, and we are at a loss for words,” notes founder Marc Fischer. “We feel truly honored to be recognized by such a prestigious rating firm, and we hope to continue providing high-quality, meaningful applications for our clients today and far into the future.”

Here are some of the quotes that stood out most to us:


“They were an effective team, met deadlines, and created a great end product.” — Director, Risk Comm Lab, Temple University

“They built an intuitive and simple design, and the team works quickly to address bugs and solve problems.” — Senior Ops Manager, Hospital Innovation Lab

Let’s build something amazing together! Connect with us and get a free tech consultation.

Clutch Recognizes Dogtown Media as a 2021 B2B Leader in Artificial Intelligence for Robotics
https://www.dogtownmedia.com/clutch-recognizes-dogtown-media-as-a-2021-b2b-leader-in-artificial-intelligence-for-robotics/ | Thu, 11 Mar 2021


The goal of robotics is to develop and construct meaningful machines that will support and help human processes. The multidisciplinary field fuses technologies such as artificial intelligence and machine learning together to develop innovative solutions.

Dogtown Media is Los Angeles’ leading robotics company, working with enterprises and organizations to move their businesses forward. Our team prides itself on the satisfaction, approval, and happiness of our clients. We want to create cutting-edge solutions that solve simple frustrations and tackle business hurdles.


Just recently, Dogtown Media was hailed as a top agency on Clutch for its excellence in AI for robotics. If you’re not familiar, Clutch is a B2B review platform based in Washington, DC. The site is well respected in the space for its commitment to providing data-driven content and verifying client reviews.

This recognition feels surreal, and we are at a loss for words. Our team sends its sincerest thanks to Clutch for this award. We believe it is a great sign for our 2021 run, and we are looking forward to a prosperous year.

We know that this recognition was made possible thanks to our clients’ amazing feedback. We owe this success to our clients, especially those who left us a review on Clutch.

Here are some of the quotes that stood out most to us:

“They’re a small shop that’s motivated and offers a comprehensive list of services and capabilities. They’re accountable and willing to work by our side to make the best product possible. The entire team is professional and eager to solve our problems.” — Founder, Mobile Sales Training Company

“Dogtown Media developed a solution for a project considered impossible to do in the tech world. They’ve taken every goal we’ve had and delivered above and beyond our expectations, beating our requirement of achieving a 50% accuracy rating on visual search capabilities with a software that is over 90% accurate.” — CTO, Innovengine

Let’s build something amazing together! Connect with us and get a free tech consultation.

Dogtown Media Is Named a Top Machine Learning Developer of 2021 by Techreviewer!
https://www.dogtownmedia.com/dogtown-media-is-named-a-top-machine-learning-developer-of-2021-by-techreviewer/ | Tue, 09 Feb 2021


We’ve barely just begun 2021, but it’s already shaping up to be an amazing year for Dogtown Media. We were recently featured as a top machine learning developer by Techreviewer.co! Thanks so much to our clients, team, and community for your continued support — you’re the ones who make this possible.

Techreviewer is a trusted B2B information platform that regularly conducts market research across numerous sectors, such as development, design, and marketing. Its meticulous methodology quickly helps companies identify high-quality services for a variety of complex technical tasks. Whether you’re searching for experts in AI app development, the Internet of Things, or business analysis, Techreviewer’s analysts can expedite the process of finding the best technology partner for your business needs.

For Techreviewer’s list of 2021’s top machine learning companies, the review hub evaluated each contender by examining its demonstrated expertise, experience, quality of services, and reliability to deliver products that went above and beyond clients’ requirements. That last component was especially crucial in elucidating the top players in this sector; the product must leverage key aspects of machine learning to help transform a client’s digital presence.

Over 500 companies were considered for this prestigious award. We’re extremely proud to be able to say that we made the cut after quite a few time-intensive assessments!

Besides being recognized as one of the best machine learning app developers of 2021, we’re also grateful to have been featured as one of 2021’s best iPhone app developers as well as a top mobile app developer in Los Angeles by Digital.com!

With this award, we couldn’t be more excited to see what else is in store for our company in the new year! Thanks to Techreviewer for recognizing our hard work. And thanks again to our clients, team, and community — we couldn’t have done this without you!

About TechReviewer.co

Techreviewer is a trusted analytical hub that conducts studies and compiles lists of top development, design, and marketing companies. The platform helps organizations find companies that provide high-quality IT services for technical support, development, system integration, AI, Big Data, and business analysis. Through objective market analysis, Techreviewer identifies the most successful and reliable IT companies and publishes rankings for each service category. Techreviewer’s ranking lists help organizations select the right technology partner for their business needs.

Adversarial Machine Learning: A Looming Threat to AI Security
https://www.dogtownmedia.com/adversarial-machine-learning-a-looming-threat-to-ai-security/ | Mon, 28 Dec 2020

Machine learning is great for narrowly scoped problems that require an artificial intelligence (AI) algorithm to pay attention to very specific details. For that reason, it’s becoming a very popular technology. But machine learning applications used in essential fields like healthcare, transportation, and finance carry a lot of responsibility. If one of these algorithms were compromised by a hack or a malicious developer, it could cause a lot of damage to people, companies, and the public’s trust in technology.

A research collaboration between 13 organizations focuses on finding ways machine learning algorithms can be compromised. One of the biggest outcomes of this partnership is a framework called the Adversarial ML Threat Matrix (AMLTM), designed to detect and respond to various types of adversarial attacks against machine learning algorithms. The researchers have shown that the threat to machine learning is real and here to stay, and that AI systems need to be secured now.
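To make the idea of an adversarial attack concrete, here is a toy sketch of the classic fast-gradient-sign technique, which nudges an input in the direction that most increases a model's loss. The weights, input, and step size are all hypothetical, and the "model" is a two-feature logistic classifier rather than any real system:

```python
import math

# A toy logistic-regression "model" with fixed, hypothetical weights.
w = [2.0, -1.0]
b = 0.0

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def predict(x):
    return 1 if score(x) > 0 else 0

x = [0.5, 0.2]                 # clean input, classified as 1

# FGSM-style step: perturb x along the sign of the loss gradient.
# For logistic loss with true label 1, d(loss)/dx = (sigmoid(score) - 1) * w.
s = score(x)
grad = [(1 / (1 + math.exp(-s)) - 1) * wi for wi in w]
eps = 0.3
x_adv = [xi + eps * math.copysign(1.0, g) for xi, g in zip(x, grad)]

print(predict(x))      # 1
print(predict(x_adv))  # 0: a small, targeted nudge flips the decision
```

Real attacks work the same way against deep networks, except the gradient is taken through millions of parameters, and the perturbation can be small enough to be invisible to a human.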

Protecting Machine Learning

The AMLTM follows the convention and layout of ATT&CK, which is a tried-and-true framework developed by MITRE, one of the companies in the collaboration. ATT&CK was created to handle security threats in enterprise networks. Similar to the AMLTM table, ATT&CK also utilizes a matrix that lists various adversarial tactics and what malicious actors typically do in each tactic. This helps cybersecurity experts and threat analysts find patterns and warning signs of potential attacks.

The ATT&CK table is well known in the cybersecurity industry, so it made sense for the AMLTM to mimic its layout, making the matrix easier to understand and accessible to both cybersecurity engineers and machine learning engineers. Each column represents a tactic, while each cell contains specific techniques to look for. Pin-Yu Chen is an AI researcher at IBM, another company in the collaboration. He describes the matrix as one that “bridges the gap by offering a holistic view of security in emerging ML-based systems, as well as illustrating their causes from traditional means and new risks induced by ML.”
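In code terms, the matrix is essentially a mapping from tactics (the columns) to lists of techniques (the cells). A minimal sketch, with illustrative tactic and technique names rather than the official AMLTM vocabulary:

```python
# Columns are tactics; each cell lists techniques an analyst watches for.
# All names here are illustrative, not the official AMLTM entries.
threat_matrix = {
    "reconnaissance": ["acquire public ML artifacts", "search victim research"],
    "model_access": ["inference API access", "access to model weights"],
    "attack_staging": ["craft adversarial data", "poison training data"],
    "impact": ["evade detection", "erode model integrity"],
}

def techniques_for(tactic):
    """Look up the techniques (cell contents) for one tactic (column)."""
    return threat_matrix.get(tactic, [])

print(techniques_for("attack_staging"))
# ['craft adversarial data', 'poison training data']
```

The tabular shape is the point: an analyst who spots one technique can scan the same column for related techniques, and the neighboring columns for what an attacker is likely to try next.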

Chen says that machine learning is going to become a mainstay as it expands into other industries undergoing digital transformation, and these applications could even extend to high-stakes decision-making. In fact, he says, “The notion of ‘system’ has evolved and become more complicated with the adoption of machine learning and deep learning.” For companies that move from a transparent rule-based system to a black-box machine learning or AI-enhanced system, the new “smarter” system is at a considerably higher risk of attack and infiltration.

Complexities of Securing Machine Learning

With every new emerging technology comes a unique set of security and privacy problems. Web apps with a database backend created SQL injection threats. JavaScript on websites’ frontends and backends created cross-site scripting threats. The Internet of Things created botnet threats (like the Mirai botnet) and amplified the scale of DDoS attacks. Mobile phones introduced the threat of spying without permission. Although we have developed a host of protective measures, built-in security protocols, and ongoing research groups for these threats, it takes a lot of time, testing, and lost revenue to create a robust cybersecurity solution.

For machine learning algorithms, vulnerabilities are embedded within the thousands or millions of parameters of deep neural networks. Finding them is outside the scope of today’s security tools, and they are extremely difficult to find manually. Chen agrees that machine learning is so new that current software security doesn’t yet fit the bill, but he adds that bringing machine learning into today’s security protocols and landscape helps us develop new insights and improve risk assessment.


The AMLTM doesn’t skip a beat: it comes with case studies that involve adversarial machine learning, traditional security vulnerabilities, and combinations of both. It shows that adversarial attacks on machine learning systems aren’t just limited to the testing phase of an algorithm; they’re found in live systems as well. This really hits home that any machine learning system is vulnerable to malicious attacks, which raises the seriousness of the problem for all developers and engineers involved in the development, testing, and implementation of each system.

In one case study, the Seattle-based Microsoft Azure security team (another company in the AMLTM partnership) researched and consolidated information about a machine learning model. Then, they got access to the model using a valid account on the server. Using the information they’d found, they were able to detect adversarial vulnerabilities and create attacks against the model. Using these case studies, the research group hopes that security vendors will create new tools to secure and find weaknesses within machine learning systems in the future. Chen says, “Security is only as strong as its weakest link.”

Watching for Adversarial Threats

Before the AMLTM table, machine learning and AI developers, and the companies using these emerging technologies, were building algorithms blindly and without enough security. The new matrix should give engineers the power to enhance their systems’ security.

Chen says that he wants machine learning engineers to not only test-drive their algorithms but also perform crash-dummy collision tests to bring out the most vulnerable parts of the algorithm’s design. His ultimate hope for the AMLTM table is that “the model developers and machine learning researchers can pay more attention to the security (robustness) aspect of the model and look beyond a single performance metric such as accuracy.”

How AI Algorithms Are Taking Over Employee Management
https://www.dogtownmedia.com/how-ai-algorithms-are-taking-over-employee-management/ | Mon, 05 Oct 2020

Artificial intelligence (AI) has become a major part of our daily lives. We interact with algorithms on social media, we get emails and ads depending on our purchases and browsing behavior, and we even get matched with a driver for Uber or Lyft through this smart technology.

Whether algorithms are good for employees, though, is another question. When AI manages employees and their compensation, sentiment can quickly turn negative. At a growing number of companies, self-learning AI algorithms are being used to help hire, measure productivity, set tasks and goals, and, perhaps worst of all, even terminate employees.

AI Is Already at Work

When AI algorithms are given the responsibility to make and execute decisions that affect employees, it’s called “algorithmic management”. Many employees who’ve experienced algorithmic management attest that it’s impersonal and can entrench a company in pre-existing biases. It also deepens the power imbalance between management and an employee.

More than 700 companies have tested AI algorithms to score an applicant’s likelihood of success in a job interview. HireVue, an AI development company, uses algorithms to score applicants on their language, facial expressions, and tone, and claims its technology speeds up the hiring process by 90%.

For employees who want to challenge algorithmic management, it’s next to impossible because the algorithm’s code is a secret: the decision-making process and analysis are hidden. Unsurprisingly, it can be frustrating to see input go in and output come out without any explanation as to how the algorithm reached its conclusions. It’s also difficult to reach someone who could give you access to scrutinize or understand the algorithm, and even if you did find someone to help, it’s likely that they would be legally bound not to.

AI’s Impact on the Gig Economy

Gig economy companies already treat their workers as contractors rather than employees, so it’s no wonder that new technology is tested on gig workers before full-time employees. Companies like Uber, Lyft, and Deliveroo use machine learning algorithms to allocate, evaluate, monitor, and reward their contractors’ work.

But this can spiral out of control quickly. In the past year, as the pandemic reached full strength, Uber Eats workers complained of unexplained changes to the algorithm that resulted in lower incomes and fewer jobs to complete. However, contractors can’t be 100% sure the algorithm caused this, because no company that uses AI to manage and evaluate its workers is transparent with its code. No worker can do more than guess at how much an algorithm controls their well-being and income.

In interviews with 58 food-delivery workers, most knew that their gigs were allocated by an algorithm, and they knew the app collected data to use later on. But they didn’t know exactly how that data was being used to give them more or fewer gigs. Many tried to game the algorithm with strategies to get more jobs, such as accepting them as fast as possible and waiting in “magic locations.” But no one takes gig work to be forced to wait in one spot for jobs; the flexibility benefit of the gig economy goes out the window when you have to spend all day at a “magic location.”

Ingrained Biases and Issues

Research has shown that AI algorithms can be deeply biased, in large part because they’re developed and tested by homogeneous teams of mostly white men, which introduces bias against women and people of color into the algorithms they code. An unforgettable example is the COMPAS software used by U.S. judges, parole officers, and probation officers to rate the risk that a defendant will re-offend. A 2016 investigation by New York City-based ProPublica concluded that COMPAS was deeply discriminatory: it incorrectly classified black subjects as higher risk 45% of the time, compared with just 23% for white subjects.
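Those percentages are false positive rates: among people who did not go on to re-offend, the share the software nonetheless labeled high-risk. The arithmetic is simple; the counts below are made up for illustration (not ProPublica's actual data), chosen only to mirror the reported disparity:

```python
# Among people who did NOT re-offend:
#   false_positive = labeled high-risk anyway
#   true_negative  = correctly labeled low-risk
def false_positive_rate(false_positive, true_negative):
    return false_positive / (false_positive + true_negative)

# Hypothetical counts for two demographic groups of 1,000 non-re-offenders each.
fpr_group_a = false_positive_rate(450, 550)   # 0.45
fpr_group_b = false_positive_rate(230, 770)   # 0.23

print(fpr_group_a, fpr_group_b)
```

Framing the numbers this way shows why the disparity matters: an equal-sized group of equally harmless people is roughly twice as likely to be flagged, purely as a function of group membership.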

Because algorithmic management gives executives insight into the inner workings of the algorithm while hiding every detail from the employees affected by it, it has two major effects. It entrenches systemic biases, perpetuating the exact type of discrimination the COMPAS software exposed years ago. And it widens the power imbalance between workers and management. For example, when Uber Eats couriers asked corporate why their gig counts were lower than normal, Uber told them that they “have no manual control over how many deliveries you receive.”

In the Australian state of Victoria, where Amazon workers in Melbourne are being timed to scan items by an AI algorithm, the government wrote, “The absence of concrete evidence about how the algorithms operate makes it hard for a driver or rider to complain if they feel disadvantaged by one.” The report also noted that it’s difficult to confirm if there is real “concern over algorithm transparency.”

But that’s the point, isn’t it? There is no list of employees concerned about algorithmic transparency, and because there aren’t any explicitly written articles by employees all over the world complaining about this lack of transparency, it’s difficult to start coming up with a solution for it.

Looking into Algorithms

Until the transparency, impact, and effects of algorithmic management are closely studied and researched, it is imperative that we take this technology with a grain of salt. As with any new tech application, we must make tweaks to improve it and be prepared to shut down the technology if it proves to be destructive to the well-being of others. Without humans providing oversight over this technology, we won’t be able to develop safe automation of traditionally human-centered tasks such as employee management. If we can’t provide safety and support nets for each other, no machine will ever be able to.

What do you think of using AI algorithms for employee management? Have you dealt with this personally? As always, let us know your thoughts in the comments below!

Machine Learning Can’t Fix Algorithmic Bias — but Humans Can
https://www.dogtownmedia.com/machine-learning-cant-fix-algorithmic-bias-but-humans-can/ | Tue, 11 Aug 2020


Original article featured in Quartz at Work.

The fact that tech has a long way to go when it comes to its lack of diversity shouldn’t be news to anyone at this point. The technology sector is the third biggest contributor to the US economy. And the people behind it—from founders to hiring managers to investors—overwhelmingly look like me: white men with a visible degree of affluence.

Similarly, more than 90% of American venture capitalists are white men, and those white men tend to fund startups led by people who also look like them. And as a result, men receive 35 times more funding than women.

Obviously, this should not be the reality. While programs like Girls Who Code work to turn the tide and diversify STEM fields, women still earn just 18% of computer science bachelor’s degrees in the US. Meanwhile, white men are getting the education, the opportunities, and, eventually, the leadership roles.

Ultimately, tech is the arm of innovation in our country. But since it’s largely being programmed by people who look and think alike, the impact on everything in our world couldn’t be more immense.

Robots versus humans?

If there’s one thing we’ve learned, it’s that we can’t rely on technology to fix, well, technology. In other words, don’t blame the algorithm: AI and machine learning won’t fix our tech bias problem when they are inherently biased because of how they were designed, and by whom. Humans got us into this mess, and humans need to solve it.

A host of studies have identified direct links between diversity and workplace performance. In 2017, for example, global consultancy firm McKinsey & Company examined 1,000 companies and found that diverse teams yielded substantial improvements in profitability and long-term valuation. A study that looked at 4,277 companies in Spain illustrated that the companies with the most women had a better chance of bringing radical innovations to market.

When technology like machine learning is designed, coded, built, and scaled by a homogeneous team, the results can be disastrous and even, quite literally, deadly. A recent study from the Georgia Institute of Technology indicated that autonomous vehicles might have a more difficult time detecting pedestrians with darker skin. The study found that the detection systems, built mostly by young white male engineers, were 5% less accurate when recognizing people with darker skin than those with lighter skin, stemming from the fact that fewer images of people with darker skin tones were used during programming and testing.

The impact is heavily felt in the hiring and recruitment process as well. Some suggest letting AI sort and rank job candidates. But that solution runs into the same problem as the driverless cars, because in most cases homogeneous groups are building it. For instance, when Seattle-based Amazon relied on an internal recruiting tool programmed based on past hiring decisions, because those decisions were made by people who had largely favored men over women, the algorithm learned to do the same. When engineers programmed the machine learning tool to ignore overtly gendered words in response, the technology instead started seeking implicitly gendered adjectives to accomplish what it had learned was the goal: selecting and promoting male applicants.

We often subconsciously try to mirror ourselves, spending time and working with people of similar backgrounds. Bias is real and more often than not, creeps into so much of the hiring process. One of the best solutions we’ve found is to intentionally put on blinders. Blinding certain candidate information and focusing on their achievements or experience over other elements like names, zip codes, and backgrounds eliminates the risk of subconscious bias creeping into decisions. Looking instead at where people have excelled or shown willingness to learn new skills and adapt lets our decision-making focus on what matters—professional potential and ability—rather than any preconceived, inherent notions we have about someone.
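The "blinding" step described above can be as simple as stripping identity-linked fields from a candidate record before a reviewer (or a ranking model) ever sees it. A minimal sketch, with a hypothetical record format and field names invented for illustration:

```python
# Fields that carry identity signals and can invite subconscious bias.
BLINDED_FIELDS = {"name", "zip_code", "background", "photo_url"}

def blind(candidate):
    """Return a copy of the record with identity-linked fields removed."""
    return {k: v for k, v in candidate.items() if k not in BLINDED_FIELDS}

candidate = {
    "name": "Jane Doe",
    "zip_code": "90291",
    "achievements": ["shipped 3 mobile apps", "led migration to CI/CD"],
    "skills": ["Swift", "Kotlin"],
}

print(blind(candidate))
# Only achievements and skills remain for the reviewer to score.
```

The design choice is to remove bias-carrying inputs at the boundary, before any human or model touches them, rather than trusting downstream decision-makers to ignore fields they can still see.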

It should come as no surprise that tech visionaries turn to their engineering prowess for everything, but there comes a time when people problems need to be solved by people.

Want to leverage emerging technologies like 5G, AI, and IoT in your organization? Get in touch with my team for a Free Consultation.

OpenAI’s Text Generator Writes Like a Human — and It’s Going Commercial
https://www.dogtownmedia.com/openais-text-generator-writes-like-a-human-and-its-going-commercial/ | Mon, 20 Jul 2020


The development of artificial intelligence (AI) holds unprecedented possibilities — some good and some bad. OpenAI is a research institute originally started to steer the technology away from the latter category. Recently, the startup announced its first commercial product: An AI-powered text generator that was previously deemed too dangerous to release to the public.

Previously Too Dangerous, but Now Yours for a Price

OpenAI was founded in San Francisco in late 2015 by Elon Musk, Sam Altman, Ilya Sutskever, and Greg Brockman. Its main mission? To ensure that the use of artificial general intelligence (AGI) is safe for humanity. Musk resigned from the company’s board in early 2018 but has remained a donor.

By the beginning of 2019, the startup announced it had created GPT-2, a natural language processing (NLP) neural network. GPT-2 could produce text so cogent and natural that it was difficult to distinguish from human writing. This consequently raised valid concerns among its creators that GPT-2 could be leveraged by bad actors to make propaganda or fake news. For this reason, OpenAI initially chose not to release GPT-2 to the public.

General sentiment for the news surrounding GPT-2 was split between two opinions: Either this was a carefully crafted publicity stunt or a warning sign of the imminent automation apocalypse to come. Well, it turns out that the public will have its chance to revisit this dilemma. GPT-2’s successor, GPT-3, is complete. And it’s going commercial.

In the short span between GPT-2 and GPT-3, fake news has become an even more ubiquitous issue in technology and politics. And as the world contends with the coronavirus pandemic and the upcoming US Presidential election, many would say that a human-like AI text generator is the last thing we need right now. And yet, here we are.

Petabytes of Possibilities

Researchers at OpenAI published a paper detailing GPT-3’s capabilities on the open-access repository arXiv. In it, they describe GPT-3 as an autoregressive language model with a whopping 175 billion parameters. That’s a ton. To put it in perspective, GPT-2’s final iteration contained 1.5 billion parameters. And Microsoft’s Turing Natural Language Generation model had 17 billion parameters.

You may be wondering, “What’s a parameter?” Basically, a parameter is an attribute defined by a machine learning model based on its training data. Going from 1.5 billion to 175 billion parameters is obviously no small feat. But, perhaps most surprisingly, the tech behind GPT-3 isn’t necessarily more advanced than comparable tools; it doesn’t even introduce any new training methods or architectures.
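To make "parameter" concrete: in a plain fully connected network, each layer contributes one weight per input-output connection plus one bias per output. Counting the parameters of a tiny hypothetical network shows how the totals explode with scale:

```python
def count_dense_params(layer_sizes):
    """Total weights + biases for a fully connected network."""
    total = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        total += n_in * n_out + n_out  # weight matrix + bias vector
    return total

# A toy net: 10 inputs -> 5 hidden units -> 2 outputs.
# (10*5 + 5) + (5*2 + 2) = 55 + 12 = 67 parameters.
print(count_dense_params([10, 5, 2]))  # 67

# Widen every layer 1000x and the weight-matrix terms grow ~1,000,000x,
# which is how language models climb into the billions of parameters.
print(count_dense_params([10_000, 5_000, 2_000]))
```

GPT-3's actual architecture is a transformer, not a stack of dense layers, but the same bookkeeping applies: 175 billion parameters means 175 billion learned numbers that together define the model's behavior.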

To reach 175 billion parameters, GPT-3’s creators scaled up the input data quantity. All of the data came from the non-profit Common Crawl. As its name implies, Common Crawl scans the open web each month. It then downloads the content of billions of HTML pages and makes it available in a format convenient for mass-scale data mining. Currently, Common Crawl has petabytes of information accessible in over 40 languages. To improve the data’s quality, OpenAI applied a few filtering techniques.

“GPT” is short for Generative Pretrained Transformer. Instead of studying words sequentially and making decisions based on their position, GPTs model the relationships between a sentence’s constituents all at once. With this information in tow, the GPT can weigh the likelihood that a given word will be preceded or followed by another word. It even accounts for how this probability is changed by the inclusion of other words in the sentence.

The algorithm behind a GPT ends up learning from its own inferences after identifying the patterns between words in a gargantuan dataset. This is known as unsupervised machine learning, and it’s not simply restricted to words. For instance, GPT-3 can apply the same methodology to comprehend the relationship between concepts and recognize context.
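The "weighing the likelihood" step described above boils down to turning a vector of scores, one per candidate next word, into a probability distribution, typically with a softmax. A toy sketch with a hypothetical four-word vocabulary and made-up scores (a real GPT scores tens of thousands of tokens at once):

```python
import math

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores the model might assign to candidate next words
# after a prompt like "the cat sat on the ...".
vocab = ["mat", "moon", "dog", "sofa"]
scores = [3.2, 0.1, 0.5, 2.0]

probs = softmax(scores)
best = vocab[probs.index(max(probs))]
print(best)  # mat
```

Generating text is then just repeating this step: score every candidate given the context so far, convert to probabilities, pick (or sample) a word, append it, and score again.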

Whether it was translating, answering questions, or filling in the blanks for incomplete sentences, GPT-3 performed quite well. In the research paper, its creators also noted that it could do “on-the-fly reasoning” and was capable of generating short news articles that were indiscernible from human-written ones.

What Comes Next?

It’s undeniable that GPT-3’s capabilities are amazing — and also frightening. The research paper’s authors acknowledge that it could be misused in myriad ways; spamming, phishing, misinformation generation, and manipulation of legal and governmental processes were all mentioned. On the bright side, GPT-3’s API could be used to create new entertainment experiences, improve chatbot fluency, and much more.

GPT-3’s immense potential is difficult to fathom. Even its creators have admitted that it’s not exactly clear how the system may be used. With that said, OpenAI does plan on taking things slow and keeping a careful eye out for possible nefarious use cases. Each customer will be thoroughly vetted, and the research organization is working on new safety features. GPT-3’s API isn’t available to all yet. Access is invitation-only right now, and the pricing is still undecided.

Currently, around a dozen customers are using GPT-3. SaaS web search provider Algolia is using the API to improve its product’s understanding of search queries that use natural language. Social news aggregation platform Reddit is exploring possibilities for automating content moderation. And mental health platform Koko is leveraging GPT-3 to analyze when its users are in a “crisis.”

Now that OpenAI has taken steps into the commercial arena, there’s no turning back. Many will be watching the startup’s next moves closely and curiously. We hope that the release of GPT-3 does not cause the organization to stray from its original intent. After all, safe AGI isn’t just a business priority — it’s a necessity for humanity.

The post OpenAI’s Text Generator Writes Like a Human — and It’s Going Commercial first appeared on Dogtown Media.
Do AI Algorithms Have a Place in Law? https://www.dogtownmedia.com/do-ai-algorithms-have-a-place-in-law/ Mon, 22 Jun 2020 15:00:22 +0000 https://www.dogtownmedia.com/?p=15233 The development of artificial intelligence (AI) has opened up numerous possibilities. But should they all...

The development of artificial intelligence (AI) has opened up numerous possibilities. But should they all come to fruition? Around the world, algorithms are making probation decisions and predicting whether a person will commit a crime. Opponents of this usage of AI are calling for more human oversight.

How Predictive Algorithms Affect the Probation Experience in Philadelphia

To see AI at work in legal systems, you don’t have to visit tech hubs like San Francisco or Beijing; predictive algorithms are already more ubiquitous than most would assume.

Darnell Gates is currently on probation in Philadelphia. Prior to being released from jail in 2018, he had served time for driving a car into a house and for threatening his former domestic partner. After being deemed “high risk” by an algorithm, he was ordered to visit a probation office once a week. This requirement eventually stretched to every two weeks, then to once a month.

During all of this time, Mr. Gates never realized the monumental role that AI played in his rehabilitation — until The New York Times told him about it in an interview. His response? “You mean to tell me I’m dealing with all this because of a computer?”

But Gates certainly isn’t alone in his predicament. Created by a professor at the University of Pennsylvania, this algorithm has been shaping the experiences of probationers in Philadelphia for more than five years.

Is Automating Life-Altering Decisions Right?

The Philadelphia probation algorithm is but one of many that are making life-changing choices about people in the US and Europe. And authorities are leveraging these predictive algorithms for more than probation rules; they’re also being applied to set prison sentences and police patrols.

In Britain, an algorithm is being used to rate which teenagers could potentially become criminals. In the Netherlands, one is flagging welfare fraud risks. Berlin-based watchdog Algorithm Watch has identified similar use cases in 16 European countries. But in the US, it’s much more widespread.

Per the Electronic Privacy Information Center, almost every American state employs some form of legal governance algorithm. As the practice proliferates, United Nations investigators, lawyers, and communities are becoming more outraged by this growing dependence on automation for law and order. Why? Because they believe it’s removing transparency from legal processes.

It’s not exactly crystal-clear how each system or algorithm is making choices. Are they based on age? Gender? Race? That’s difficult to say; many countries and states don’t require algorithm creators to disclose their formulas. Unsurprisingly, opponents of this automation use are worried that biases are being baked into the decision-making process.

Ideally, these algorithms would cut government costs, reduce burdens on understaffed agencies, and eliminate human bias. But opponents believe that governments aren’t showing much interest in the last category — a recent UN report cautions that governments are risking the possibility of “stumbling zombie-like into a digital-welfare dystopia.”

A Black Box That Eliminates Bias or Promotes It?

At its most basic level, a predictive algorithm functions by using historical data and statistical techniques to calculate a future event’s probability. Thanks to advancements in computing power and increases in available data, they’ve now been augmented to an unprecedented degree.
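In the spirit of that description, here is a deliberately tiny, hypothetical risk-score sketch: invented features and weights stand in for the "historical data and statistical techniques," and a logistic function turns the weighted sum into a probability. No real system's formula is public, which is exactly the transparency problem critics raise.

```python
import math

# Illustrative only: the features, weights, and bias below are invented
# for this sketch and do not come from any real probation or sentencing
# system. Historical outcomes would normally be fit into these weights.
WEIGHTS = {"prior_offenses": 0.8, "age_under_25": 0.5, "employed": -0.6}
BIAS = -1.0

def risk_probability(case):
    """Weighted sum of case features, squashed into a probability."""
    z = BIAS + sum(WEIGHTS[k] * case.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # logistic link: maps any score to (0, 1)

case = {"prior_offenses": 2, "age_under_25": 1, "employed": 0}
p = risk_probability(case)
print(round(p, 3))
```

Note that even in a toy this small, the person being scored never sees `WEIGHTS` or `BIAS` — which is precisely why opponents demand that creators disclose their formulas.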

The private sector uses these tools quite often. Whether it’s to predict an individual’s likelihood to get sick, cause a car accident, default on a loan, or click an internet ad, algorithms are employed everywhere these days. With a vast mountain of data on the public, it’s hardly shocking that governments are eager to utilize them as well.

But back in Philadelphia, implementing algorithms is turning out to be more troublesome and complex than the government initially thought. Pennsylvania has mandated the development of an algorithm to aid courts in deciding sentences after someone is convicted.

Todd Stephens, one of its state representatives, is part of the commission working to make this happen. He explains, “We walked into a hornet’s nest I didn’t even know existed.”

The commission’s original proposal for the algorithm had it leaning strongly on data provided by local county probation departments. But many communities and the American Civil Liberties Union protested this plan for fear that it would expand predictive algorithms’ capabilities in the wrong way. In response, the commission opted for a simpler implementation based on software already being used in the state’s courts.

Unfortunately, even if the government shared how the algorithm arrives at its decision, the math behind it is far too difficult for a layperson to comprehend in a timely manner. Many of the algorithms being used by the Philadelphia criminal justice system were created by Richard Berk, Professor of Statistics and Criminology at the University of Pennsylvania.

There’s no denying that it would be hard for a layperson to easily understand the algorithm. But Dr. Berk says that human judgment suffers from the same problem: “All machine learning algorithms are black boxes, but the human brain is also a black box. If a judge decides they are going to put you away for 20 years, that is a black box.”

Fleeting Controversy or a Problem That’s Here to Stay?

Dr. Berk believes that controversy around legal predictive algorithms will fade as their usage becomes more widespread. He sees them as akin to the algorithms used in commercial airliners’ automatic piloting systems. “Automatic pilot is an algorithm,” he explains. “We have learned that automatic pilot is reliable, more reliable than an individual human pilot. The same is going to happen here.”

Of course, it will take more convincing than that for people whose future is at stake here, such as Mr. Gates: “I can’t explain my situation to a computer. But I can sit here and interact with you, and you can see my expressions and what I am going through.”

What do you think of these predictive algorithms? Do they have a place in law? Or should they be eschewed in favor of a more human touch? As always, let us know your thoughts in the comments below!

The post Do AI Algorithms Have a Place in Law? first appeared on Dogtown Media.
The Internet of Things Can Revamp Research & Development https://www.dogtownmedia.com/internet-of-things-can-revamp-research-development/ Wed, 10 Jun 2020 15:00:25 +0000 https://www.dogtownmedia.com/?p=15185 The progression of technology and scientific knowledge go hand-in-hand. We now live in an era...

The progression of technology and scientific knowledge go hand-in-hand. We now live in an era of constant advancement in both fields. Fueled by unquenchable curiosity and machine-powered efficiency, we’re pushing the boundaries of what we know with every passing day.

Breakthroughs such as curing ailments with gene editing, bioprinting human organs, and making preventative medicine a reality are just some of the topics that consistently take over headlines. Each of these developments has the ability to significantly improve our lives.

But the unfortunate truth is that current paradigms do not support the optimization of research potential. As a result, discovery yield is skewed: some findings hold merit, while many others fall by the wayside after their promise fades. Numerous research papers are retracted, and only a few endeavors go on to shape a better future for society.

Luckily, the Internet of Things (IoT) can address this. Implementing IoT development in the laboratory environment can bring a new level of efficiency to research and development (R&D). In the future, IoT’s data collection and automation capabilities will usher in the arrival of the smart lab.

The Issues With the Status Quo

Numerous barriers in laboratories obstruct scientific progress. They come in a variety of forms: human error, lack of compliance, device malfunctions, and miscommunication, to name only a few.

Perhaps the biggest problem is the manual recording of machine data output. This repetitive, time-intensive task is ripe for human mistakes. Data loss is another risk, arising when a scientist is too selective about what to record. Sometimes, researchers can’t decipher the information at hand, or even the handwritten notes of their colleagues.

All of these issues have one thing in common: a lack of connectivity, both among researchers and between researchers and the equipment they employ. And if you’re an avid reader of our blog, then you know that nothing solves connectivity problems like IoT.

How IoT Overcomes These Obstacles

IoT can bring about a more effective, efficient way of conducting research experiments and collecting the resulting data. This technology can connect every element in the laboratory, from scales to centrifuges and everything in between. Machine output can be transmitted digitally, saving scientists hours of time and effort while eliminating the chance of human transcription error.

With IoT, labs can connect all devices to the cloud or a local server. This enables researchers to access and control experiments and processes anywhere, anytime (as long as they have an internet connection). If a scientist from the Bay Area is visiting Boston, for example, they can remotely check on their operations in San Francisco to ensure everything’s running smoothly.
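As a sketch of what digitally transmitted machine output might look like, the snippet below packages a reading from a hypothetical lab balance as a JSON message bound for the cloud or a local server. The device name and fields are invented for illustration; a real deployment would add a transport layer such as MQTT or HTTPS, plus authentication.

```python
import json
import time

def balance_reading_payload(device_id, grams):
    """Package one instrument reading as a JSON message.

    The schema here is hypothetical; the transport (MQTT, HTTP, etc.)
    that would carry it to the cloud or a local server is omitted.
    """
    return json.dumps({
        "device_id": device_id,          # invented identifier for the sketch
        "type": "analytical_balance",
        "reading_g": grams,
        "timestamp": time.time(),        # when the measurement was captured
    })

msg = balance_reading_payload("balance-07", 12.431)
record = json.loads(msg)
print(record["reading_g"])
```

Because every reading is timestamped and machine-generated, nothing depends on a scientist's handwriting or selective note-taking.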

In the laboratory, IoT can take on various forms. Automation will be one of the most common examples. Automating all lab equipment, even down to material containers, can unlock unparalleled productivity. Currently, this is an expensive route to go, leaving it only available to successful industry lab facilities. As with other technologies, the price should diminish substantially in the future.

IoT Adoption Is on the Rise

Despite steep costs, demand for IoT in the laboratory setting has increased substantially. This is most readily apparent in industrial R&D; the need to compete in the global market means the benefits of IoT easily offset any expensive price tag.

Alongside this demand growth, our society in general has pivoted more towards digital solutions and an emphasis on easier access to data in recent years. And, as in consumer markets, IoT implementation brings numerous advantages to R&D, such as seamless experiment execution, more accurate data documentation, and more accessible research findings.

It’s no surprise that those working in R&D want to experience the same convergence of our digital and physical worlds that consumers around the world are now privy to. With that said, IoT will eventually give rise to the smart lab.

The Future of Laboratories

Everyone has heard of smart homes and smart cities. Both are made possible by a mixture of IoT and artificial intelligence (AI). Similar to these concepts, the smart lab simply refers to connecting all laboratory machines and sensors to the internet.

By controlling all of these devices externally, researchers can execute experiments with unprecedented speed and precision. The application of machine learning (ML) and AI technology further streamlines these benefits and enables far easier data documentation than what researchers contend with today. Connecting every lab tool in this way allows for a smart environment where machines can both predict experiment outcomes and produce hypotheses based on these findings.

With a drastic reduction in the amount of human intervention required, researchers will be freer to dedicate time to more important initiatives. And as all data is stored in the cloud, they can rest assured knowing no research will be lost. Collectively, these advantages will accelerate scientific development.

A Smarter, More Connected Future for Science

With a strong emphasis on efficiency, compliance, and precision, the laboratory environment is the perfect place for IoT integration. IoT-enabled devices could considerably increase both productivity and discovery yield.

Science is governed by a stringent set of principles. Yet, today, many researchers struggle with ensuring their work is FAIR (findable, accessible, interoperable, and reusable). IoT and smart labs will change this by ushering in a new era for R&D — one that will continue to reshape and improve research for years to come.

What do you think of IoT’s future in R&D? Do you think there are any valid concerns about implementing this technology in this field? Or do the pros greatly outweigh the cons? As always, let us know your thoughts in the comments below!

The post The Internet of Things Can Revamp Research & Development first appeared on Dogtown Media.
AI’s Accelerating Our Race Towards a COVID-19 Cure https://www.dogtownmedia.com/ais-accelerating-our-race-towards-a-covid-19-cure/ Wed, 03 Jun 2020 15:00:35 +0000 https://www.dogtownmedia.com/?p=15153 Although most of us are confined to our homes, scientists are still working around the...

Although most of us are confined to our homes, scientists are still working around the clock to find a cure for the coronavirus, and they’re working faster than ever. The COVID-19 pandemic is speeding up research all over the world.

Neural networks are watching incoming data for any indication of a potential cure, and the results are promising. At Argonne National Laboratory, home to one of the nine supercomputing systems managed by the U.S. Department of Energy, researchers are modeling how existing drugs “dock” with viral proteins.

This research focuses on the potential strength of a drug’s attachment to a functional protein of the virus, which could effectively render the virus useless. Using an innovative computer chip from Silicon Valley-based Cerebras Systems, the supercomputer is working around the clock to uncover information that could make the difference between finding a cure tomorrow or years from now.

Drug Discovery with AI

The model at Argonne is a medical application that uses machine learning to predict docking scores. These scores help scientists prioritize and shortlist drugs to experiment with in the lab.

Argonne is a supercomputing lab whose machines already work overtime on research endeavors. But this project is unlike any other: its speed and efficacy affect how many lives can be saved before the virus spreads further.

Rick Stevens is the associate laboratory director at Argonne. He says that the lab is working so fast that it’s accomplishing feats that would normally take years within just months. The lab set up three different neural networks to calculate a combined score, rather than relying on a score from one algorithm.
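The combined-score idea can be sketched in a few lines: three stand-in scoring functions play the role of Argonne's three neural networks (whose actual architectures and weighting scheme are not described in the article), and the final docking score is simply their average.

```python
# A sketch of the ensembling idea described above. The "models" here are
# placeholder functions returning fixed scores; in practice each would be
# a trained neural network scoring how well a drug docks with a protein.
def combined_docking_score(candidate, models):
    """Average the scores from several independent models."""
    scores = [m(candidate) for m in models]
    return sum(scores) / len(scores)

# Three stand-in scorers with invented outputs.
models = [lambda c: 0.72, lambda c: 0.64, lambda c: 0.71]
score = combined_docking_score("candidate-molecule", models)
print(round(score, 3))
```

Averaging several independently trained models tends to smooth out the idiosyncratic errors of any single one, which is the usual motivation for ensembling.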

However, for Stevens, that’s not the part that’s most interesting; he’s more fascinated by the fact that the models use images of molecules to simulate protein docking, whereas most other researchers are using actual chemical models of the molecules.

The team isn’t sure exactly why this works so well, but they’re publishing several papers to accelerate innovation in the scientific community.

Better Images = Speedier Success

Neural networks are the workhorses of deep learning, which is itself a subset of machine learning. They model connections between inputs as graphs, which helps elucidate relationships for the researchers.

In this case, 280 scientists across 20 labs and research facilities, from London to Chicago to San Diego, are working on finding a cure for the coronavirus.

The key to this massive, rapid research effort is Cerebras’s innovative Wafer-Scale Engine, which has been deemed the largest computer chip in the world. Argonne is the first customer to use it, in a system named the CS-1; it allows the researchers to iterate on many different combinations of neural networks without needing additional CPUs or GPUs.

It goes without saying that the new chip can handle far more images than any chip on the market right now. On a standard GPU, memory fills up quickly, especially when you’re training algorithms on high-quality images. But this chip accommodates many more high-resolution images while maintaining its speed and processing power.

Andrew Feldman is the CEO and a co-founder of Cerebras. He says the chip was built for this type of work.

“They’re running 30, 50 days on machines that cost a quarter of a billion dollars to do the work that we’re doing in a single machine that’s the size of a dorm fridge. And that’s enormously exciting,” Feldman says.

Finally Finding a Cure

Argonne has pinpointed several molecules that show inhibition, a promising sign that a molecule could contribute to a cure. However, beyond the computational work, many more steps must be taken before the promising molecules can become part of a treatment.

For example, the compounds must first be tested in the lab with live-virus assays. If they pass there, they will be tested in animals. And if a molecule isn’t already an existing drug, chemists will need time to synthesize it. That’s why it can take anywhere from 18 months to 10 years to develop a vaccine for a novel virus.

Stevens says, “We’re trying to validate whether the computational work actually holds up in the experimental work, and a number of those are progressing to wholesale assays to test” for efficacy against the virus.

Argonne is working on other projects that could impact the virus and its cure, like understanding the virus’s protein structure, looking for antibodies using machine learning, and searching for the right binding agent in humans. This work is tedious and complex, but it could help us answer any questions that arise during the vaccine development process without wasting more time.

Tomorrow’s Good News

Stevens says the teams are working on publishing a few papers about the process, the new chip, and the imaging neural networks, which should be available for reading in the next couple of weeks. He is ensuring the data is properly peer-reviewed and tested before publishing the results, saying that he doesn’t want to publish something that hasn’t been vetted or tested extensively.

Until we’ve got the vaccine, researchers and scientists all over the world are going to be sacrificing sleep to find promising molecules and compounds. How’s the virus impacting you, and what hope do you have for a vaccine? Let us know in the comments below!

The post AI’s Accelerating Our Race Towards a COVID-19 Cure first appeared on Dogtown Media.