AI Safety and How to think about it
When a technology is new and promising, all the focus goes into developing it and realizing its full potential. Little is known of what that potential might be or how profoundly it may affect the world. Even by the pace of today's technological growth, AI has been long in development and is rapidly pervading everyday life, bringing a broad array of benefits to society. The scale it has already reached, and the heights it seems destined to attain, have placed on us the responsibility to ensure it remains within the ethical bounds of the humanity we have evolved to be.

Although malicious use of such a powerful technology is a major concern, unanticipated outcomes pose serious problems too. The ubiquity of this technology makes it vital to identify and address the threats posed by both its intentional and unintentional misuse.

Natural language processing is among the most popular AI applications. A text model meant to interact with humans at a professional level holds the potential to influence them significantly, so an unbiased model is paramount. There have been recent, highly publicized examples of experimental models interacting with humans on social media and producing morally deplorable output, including foul language, racist comments and explicit references. It is therefore an ethical responsibility to train such models on carefully curated data to avoid discriminatory and morally unacceptable outcomes.

AI is at the heart of modern autonomy. We are increasingly delegating critical decisions, including ones that affect our health and lives, to AI systems. In those cases, the margin of error is negligible. Yet a model is only as good as its design, training and testing. It is unethical to shirk responsibility when a model makes a critical error of judgement that costs property or lives, because the model's behaviour can be traced directly back to the quality of the data it was trained on and the rigour of the training and testing cycles it was put through. Laws and regulations are needed to prevent developers from deploying such critical systems in the real world until they have been rigorously tested against all foreseeable scenarios. This starts with setting strict, uncompromising standards that these systems must meet repeatedly to inspire confidence in their safety, followed by dedicated regulatory bodies to monitor and enforce those standards.

Ever since there have been pictures, we have been manipulating them, and the technology has only become more convincing over time. Swapping faces, changing backgrounds or colours, and removing or adding objects have become so easy that they are now a casual pastime on the internet.

While tampering with photos with malicious intent is already causing harm to society, videos had, until recently, retained their reputation of being hard to manipulate convincingly. With the advent of deepfake technology, it is now possible to use AI to alter videos, swap faces and fabricate evidence. Deepfake videos are becoming increasingly hard to identify as fake, and their potential for misuse poses a grave threat to society as a whole. Even if forensic analysis eventually uncovers the truth, such fake videos can cause irreparable damage to the reputation and life of the targeted person or group.

From self-driving cars to the race towards Artificial General Intelligence (AGI), we should keep in mind that the end goal is to benefit humanity, not harm it. Taking the shortest possible path towards ever more powerful AI is no longer the correct course of action. Instead, the way forward should be set through collaboration with social scientists, thorough experimentation, strong regulation and careful preparation for the challenges ahead. The challenges that law and order face at the current pace and proliferation of technology are monumental. Never has there been a more dire need to understand human behaviour and ethics than now, when we are creating machines in our own image, only far more powerful and efficient, to replace ourselves. To keep the machines in check, we need to decode and replicate the human morality that keeps humanity as a whole thriving and on the right track.

What are your views on AI Safety? Feel free to reach out to us at