Artificial Intelligence's Ethical Concerns

Many people consider artificial intelligence to be a game-changing technology. Will AI systems one day drive us around? Do our laundry? Mow our lawn? Raise our children? Fight our wars? Write articles like this one? Craft political ad campaigns? Questions like these shift the focus away from functional capabilities and toward the ethics of developing such powerful and potentially life-altering technologies. It therefore makes sense to think now about what we want these systems to do, and to address ethical issues early, so that we develop them with humanity's best interests in mind.

Will artificial intelligence (AI) take the place of human workers?

Many people are concerned that AI-enabled systems will eventually replace workers in a variety of industries. When it comes to jobs, AI elicits a range of emotions and viewpoints. However, it is becoming increasingly clear that AI is a job category killer rather than a job killer. As with every wave of technology, from the automatic weaving looms of the early industrial revolution to today's computers, jobs are not simply destroyed; rather, employment shifts from one place to another, and completely new categories of employment are created. We should expect the same in an AI-enabled marketplace. Research and experience suggest that AI will replace entire categories of work, particularly in transportation, retail, government, professional services, and customer service. On the other hand, instead of having people take orders, field basic customer service requests or complaints, or perform data entry, businesses will be able to devote their human resources to higher-value tasks.

Indeed, whether or not AI is used, the shift to this new era of digital transformation is causing concerns about job displacement; AI simply accelerates digital transformation in specific business processes. As businesses seek to adapt and implement AI strategies, we believe it is critical to have open and honest discussions with employees. Experience and research show that businesses that use augmented intelligence approaches, in which AI augments and assists humans in doing their jobs better rather than replacing them entirely, not only see a faster and clearer return on investment but also get a much warmer reception from workers. People prefer to work alongside machines rather than be replaced by them.

Will AI exacerbate the rise of fake news and misinformation?

Artificial intelligence (AI) systems are becoming increasingly adept at creating fake images, videos, conversations, and other forms of content. We already have a hard time believing what we hear, see, or read. What happens when you can't tell whether an image is real or artificially generated, or whether you're speaking with a bot or a real person? Bots were widely reported to have played a role in spreading political propaganda during the 2016 US Presidential election. These automated social media accounts helped create and spread misinformation online in an attempt to influence voters and stoke partisan animosity. Unlike humans, bots never stop working and can produce a large amount of content in a short period of time. Such content, true or not, goes viral once it is shared and retweeted by others, and at that point it is practically unstoppable. These bots are capable of disseminating false or highly distorted information, amplifying messages, and planting thoughts and ideas in people's minds. Criminals and state actors may use fake imagery or audio to damage people or businesses, or to disrupt government activities. All it takes is a few malicious actors spreading false claims to alarm the public and quickly shift public opinion.

Governments and companies alike will need to consider how to limit the potential harm caused by AI-assisted content creation. In fact, we urge businesses and governments to view fake content as malicious in the same way that cybersecurity threats are, and to respond accordingly. Propaganda, misinformation, malicious interference, blackmail, and other types of "information crime" can be just as damaging to systems as physical and electronic attacks. The world is woefully unprepared for the unleashing of AI on unprotected people. Corporations that freely trade in user-generated content are just as responsible for misuse as governments.

Do we want AI technology to be easily accessible to bad actors?

While AI has a lot of potential for good, we must be wary of AI in the hands of malicious users. As the technology advances, artificial intelligence (AI) has the potential to cause significant harm if used maliciously. What happens when individuals, criminal groups, and rogue nations use artificial intelligence for nefarious purposes? Many businesses are already asking themselves these questions and taking steps to protect themselves from malicious AI attacks. New techniques can exploit the vulnerabilities of AI and machine learning-based systems. As AI systems improve, they will alter the nature of threats, making attacks harder to detect, more random in appearance, more adaptable to systems and environments, and more effective at finding and targeting vulnerabilities. This should frighten us. We need to think now about how we build and manage our digital infrastructure and how we design and distribute AI systems, because detecting these malicious attacks will only become more difficult as time goes on.

Furthermore, machine learning service providers, particularly those that provide on-demand cloud-based services, should consider who their customers are. If malicious users are using their platforms to carry out distributed AI-enabled attacks or other criminal acts, governments will follow in the footsteps of financial institutions and impose new “Know Your Customer (KYC)” regulations. If these platform providers do not want to be on the receiving end of regulatory scrutiny, they must get ahead of the curve and begin their own efforts to ensure that they understand who their clients are and what they are doing on their platforms.

Is ubiquitous surveillance already a reality? Is Artificial Intelligence (AI) the new Big Brother?

AI allows businesses and governments to keep an automated and intelligent eye on what people are doing at all times. Will AI put an end to privacy? Will "Big Brother" really watch you all the time? With the advancement of facial recognition technology, it is becoming easier to identify individuals in large crowds at stadiums, parks, and public spaces without their permission. In 2018, Microsoft asked Congress to investigate and regulate the use of facial recognition technology. “We live in a nation of laws, and the government needs to play an important role in regulating facial recognition technology,” said Brad Smith, the company's president. What's interesting about this statement is that tech companies rarely advocate for regulation of their own products, so for Microsoft to ask the US Congress to regulate facial recognition suggests the company is already well aware of the risks.

We believe that in an AI-enabled future, everyone and everything will know everything about everyone else. This means that everyone will be presumptively aware of who we are, what we want, where we are, and what we're up to. This pervasive knowledge will become part of our baseline expectations, just as we now expect to be able to get Internet access, electricity, and information whenever and wherever we need them. We won't be able to simply "unplug" for a while. We may soon find ourselves in a world where only a few corporations and governments have an uncomfortably high level of knowledge of, and control over, everyone's lives.


Will sentient machines be able to exercise their rights?

How should machines be treated and regarded in society as they become more intelligent and we ask more of them? How should machines be governed once they can imitate emotion and behave much like humans? Should machines be considered beings, animals, or inanimate objects? At what point do we ascribe liability and responsibility to the devices themselves rather than to the individuals who are ostensibly in charge of them? In March 2018, an autonomous car struck and killed a pedestrian. The fact that a machine killed a human being infuriated the public.

But why was this accident so upsetting to people? Every day, thousands of people are killed in car accidents caused by humans behind the wheel. What difference does it make that the vehicle was driven by a machine? The reason for the indignation is that society has yet to accept a machine killing a human, and it may never do so. However, the chance of eliminating all traffic-related deaths is almost certainly nil, so if we want self-driving cars on the road, this situation will repeat itself. Even if machine-driven vehicles have much lower overall fatality rates than human-driven vehicles, the issue of liability and control is largely an ethical one. We must therefore ask these questions, determine what we will accept and what is ethical, and enact rules and regulations now to prevent future tragedies.

Making AI decision-making more transparent

Although there are numerous approaches to machine learning, no approach has re-energized the AI market like deep learning. Deep learning, however, is a "black box": we cannot easily explain how a model arrives at a particular decision, which is a serious problem when we rely on it for important decisions such as loan applications, parole recommendations, and hiring. Unexplainable AI systems should not be tolerated, particularly in high-risk situations. If we want AI systems we can trust, explainability has to be part of the mix.
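To make this concrete, one common post-hoc explainability technique is permutation feature importance: shuffle one input feature at a time and see how much the model's accuracy drops. The sketch below is a minimal, hypothetical example; the loan-approval features, the synthetic data, and the simple approval rule are all invented for illustration and do not come from any real lender or product.

```python
# A minimal sketch of post-hoc explainability using permutation importance.
# The "loan" features, data, and approval rule below are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1000
# Hypothetical applicant features: income, debt-to-income ratio, years employed
X = np.column_stack([
    rng.normal(60_000, 15_000, n),   # income
    rng.uniform(0, 1, n),            # debt-to-income ratio
    rng.integers(0, 30, n),          # years employed
])
# Hypothetical approval rule, used only to generate labels for the demo
y = ((X[:, 0] > 50_000) & (X[:, 1] < 0.4)).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance asks: how much does accuracy drop when one feature
# is shuffled? Large drops flag the features driving the model's decisions.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["income", "debt_ratio", "years_employed"],
                       result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Techniques like this do not open the black box itself, but they at least indicate which inputs a decision hinged on, which is a starting point for an explanation.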

Deep learning also makes extensive use of training data, so it should come as no surprise that when biased training data is used to teach these systems, the result is biased AI systems. People assume that training data is always "clean," drawn from a large pool, and representative of society as a whole, but real-world results show that this is often not the case. Google's image recognition system incorrectly classified images of minorities, Goldman Sachs' Apple Card has been criticised for gender bias, and software used to sentence offenders has been found to be biased against minorities. If we are going to use machine learning algorithms to make decisions that matter, we must demand that they be able to explain themselves. Would you accept a human driver hitting your car and then refusing to explain why when asked? Of course not. We should not tolerate it from machines either!
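A first step toward catching this kind of bias is simply to measure outcomes across groups. The sketch below assumes a hypothetical approvals dataset with invented column names and numbers; a large gap between groups does not prove bias on its own, but it is exactly the kind of signal that should prompt a closer look at the training data and the model.

```python
# A minimal bias check on a hypothetical approvals dataset.
# The column names and values are invented for illustration only.
import pandas as pd

df = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [ 1,   1,   0,   1,   0,   0,   1,   0 ],
})

# Outcome rate per group; a large disparity is a warning sign,
# not proof, that the data or model encodes bias.
rates = df.groupby("group")["approved"].mean()
print(rates)
print("Disparity (max - min rate):", rates.max() - rates.min())
```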

Taking action to address these concerns

If we don't ask these questions now and work toward ethical AI, the consequences will be much worse than most people realise. Do we trust corporations to do the right thing? Do we trust countries to act responsibly? We'd like to believe that by gathering public feedback and raising ethical questions and concerns now, we can create a future that isn't so bleak. Bad actors will still attempt to influence, infiltrate, and manipulate the system. But because AI is coming whether or not we're ready, businesses, organisations, and individuals should keep asking questions, working toward ethical AI, and fighting automated bots and malicious attacks.
