ETEAM Blog

The Dark Side of Artificial Intelligence and Machine Learning

Referred to as the “new electricity”, artificial intelligence (AI) is revolutionizing how we learn, work and play. AI broadly refers to machines able to perform human-like tasks. But could there be a downside to all of this?

Machine learning (ML) is one practical way to achieve AI: instead of being explicitly programmed, ML systems learn from data. Though AI benefits humanity, the technology can disrupt our everyday lives once it begins to make decisions that affect us personally.

Do Artificial Intelligence and Machine Learning Have Negative Impacts on Humans?

Futurists such as Ray Kurzweil have predicted that by 2029, computers will be able to outsmart humans, learning from experience and comprehending multiple languages. Like computers, robots may also evolve faster than humans and eventually outsmart us.

The good thing about artificial intelligence is that it combines modern technologies to make life better, from driverless transportation to targeted treatments in healthcare. However, the same innovation is reshaping how we live today and how we will live in the future: AI technologies are taking over many of our jobs and raising data privacy concerns.

Where Artificial Intelligence and Machine Learning Can Go Wrong

As excited as people are about how AI-related technologies are changing their lives, it is time to look at what could go wrong with them. Now and in the future, AI technologies may create opportunities for cyber attacks and raise data privacy concerns. They may also raise ethics and morality concerns, transparency issues and the “black-box problem”, as explained below.

Creates an Opportunity for Cyber Attacks

Cyber security is one of the fields AI technologies are increasingly applied to. However, hackers can use the same techniques that developers use to build AI-driven security mechanisms to create malicious bots. AI systems can also be comparatively easy to break, because their code often combines several programming methodologies and contains exploitable flaws.

AI technologies may also be used to trick unsuspecting users into revealing their passwords. For example, they can help a hacker deliver downloadable malware designed to steal a user's login credentials. The same techniques can give hackers access to autonomous devices.

The “Black-box” Problem

AI applications rely on machine-learning algorithms and neural networks loosely modeled on the functioning of the human brain. The problem is that it is often difficult or impossible to explain how these algorithms arrive at their results. This “black-box problem” is one of the dark sides of artificial intelligence and machine learning: people rarely get access to information about the automated decision-making that AI applications subject them to.
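To see why such systems are opaque, consider a minimal sketch of a neural network in plain Python. The weights and the "approval score" scenario below are entirely made up for illustration; in a real system the weights are learned from data, and nothing about the resulting numbers explains *why* a given input produces a given decision:

```python
import math

# Toy two-layer neural network with arbitrary, pre-set weights.
# These numbers are illustrative only -- a trained model would have
# thousands or millions of them, equally unreadable to a human.
W1 = [[0.9, -0.4], [0.2, 0.8]]   # input -> hidden weights
W2 = [0.7, -0.5]                 # hidden -> output weights

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict(features):
    """Return a score between 0 and 1 for a 2-feature input."""
    hidden = [sigmoid(sum(w * f for w, f in zip(row, features)))
              for row in W1]
    return sigmoid(sum(w * h for w, h in zip(W2, hidden)))

score = predict([1.0, 0.5])
decision = "approve" if score > 0.5 else "reject"
```

The model emits a decision, but the only "explanation" available is the weight matrices themselves, which is exactly the black-box problem: the arithmetic is fully visible, yet the reasoning is not.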

Lack of Transparency

There are also concerns about AI system designers refusing to reveal what input data they feed into their systems. For instance, Google's engineers do not reveal how the search engine ranks results, presumably treating the process as a trade secret. That runs contrary to our expectation that AI applications should be transparent.

This lack of transparency is beginning to raise doubts about whether these technologies can really change our lives for the better. As much as program designers argue that their AI designs are proprietary, it is time for them to make their processes more transparent. Failing to do so may cost them people's trust in future AI technologies.

Values and Morality

People from all walks of life are raising ethical questions about the future of AI. Many jurisdictions now have data protection laws, such as the EU's General Data Protection Regulation (GDPR), that protect people's rights with respect to how AI technologies affect them. Where companies or individuals fail to comply with these policies, they may face penalties or even criminal prosecution.

Increased Data Privacy Concerns

One problem with the new wave of AI applications is that they demand enormous amounts of data from people. To be fair, machine-learning technologies simplify the analysis of large data sets by looking for specified patterns. But when data extraction invades people's privacy, the technologies have become too invasive: data should only be collected when people consent to it.

Unfortunately, data collectors usually ask people to accept lengthy agreements, sometimes running to dozens of pages, before taking their data. Since most of these policies are presented online, reading and understanding the entire document takes more time than most people are willing to invest. Many end up clicking ‘accept’ without carefully reading what the agreement entails, thereby consenting to share their personal information.
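The consent-first principle described above can be sketched in a few lines of Python. The `ConsentRecord` class and its fields are hypothetical, invented here purely to illustrate gating data collection on an explicit opt-in:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ConsentRecord:
    """Hypothetical record of a user's data-sharing choice."""
    user_id: str
    agreed_to_data_sharing: bool = False  # default: no consent

def collect_profile(consent: ConsentRecord,
                    profile: dict) -> Optional[dict]:
    """Collect a user's data only when they have explicitly opted in."""
    if not consent.agreed_to_data_sharing:
        return None  # no consent, no collection
    return {"user_id": consent.user_id, **profile}

# Without an explicit opt-in, nothing is collected:
assert collect_profile(ConsentRecord("u1"), {"age": 30}) is None
```

The key design choice is that consent defaults to `False`: the system must prove a user opted in before any data leaves their device, rather than collecting first and asking forgiveness later.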

Final Thoughts

AI technologies can be manipulative if proper oversight is not in place. It is up to market participants, lawyers and policymakers to work together on effective regulation to guide AI-related decision-making. Since these technologies interact with us directly, we should join those stakeholders in regulating AI decision-making, for instance by setting up an AI watchdog to ensure that AI programs are used fairly. And before any program collects data from us, we have the right to grant or deny it permission.
