Chief Justice of India Issues Warning on AI in Judicial Proceedings: Calls for Ethical Framework

CJI Chandrachud Highlights Concerns of Bias and Privacy Rights in AI Utilization

Apr 14, 2024 - 12:20

Chief Justice of India D.Y. Chandrachud issued a warning on Saturday, stating that the use of AI in contemporary processes—including judicial proceedings—raises difficult moral, legal, and practical questions that need careful consideration.

The Chief Justice of India, in his inaugural speech at the two-day conference on "Technology and Dialogue" between the Supreme Courts of Singapore and India, emphasized the need for appropriate consideration and a regulated framework to prevent the misuse of AI that could jeopardize citizens' fundamental and other privacy rights.

"While there is enthusiasm about AI's potential applications, there are worries about mistakes and misunderstandings. Instances of 'hallucinations,' in which AI produces false or misleading replies, may arise in the absence of strong auditing procedures," CJI Chandrachud warned. This might result in incorrect advice and, in the worst cases, miscarriages of justice. He cited a US case in which a New York lawyer filed a legal brief that included fabricated court rulings.

"Bias in AI systems is a complicated problem, especially in the case of indirect discrimination," the CJI said. This kind of discrimination occurs when ostensibly neutral rules or algorithms disproportionately harm certain groups, eroding their legal rights and safeguards.

Indirect discrimination, according to the CJI, may appear at two critical phases of the AI life cycle. First, missing or erroneous data at the training stage can produce skewed results. Second, bias may arise when data is processed, often inside opaque "black-box" algorithms that prevent even developers from seeing how decisions are made.

Algorithms or systems that conceal their internal workings from users or developers are referred to as "black boxes" because they make it difficult to understand how decisions are reached or why certain results are produced.

Automated recruiting systems are a prime example: their algorithms may unintentionally favor certain demographics over others without the creators knowing how or why these biases are being perpetuated. This lack of transparency raises concerns about accountability and the possibility of discriminatory outcomes, according to CJI Chandrachud.

The CJI noted that the European Commission's proposed EU regulation of AI highlights the dangers of using AI in legal contexts, classifying certain algorithms as high-risk in part because they operate as "black box" systems.

Given its inherent intrusiveness and potential for abuse, he said, facial recognition technology (FRT) is a prime example of high-risk artificial intelligence. FRT is the most widely used form of biometric identification and is described as a "probabilistic technology that can automatically recognise individuals based on their face to authenticate or identify them."

International cooperation and collaboration are therefore essential to realizing AI's full potential. AI brings unprecedented opportunities as well as difficult problems, many of them concerning ethics, accountability, and bias. "A concerted effort from stakeholders worldwide, transcending geographical and institutional boundaries, is required to address these challenges," Justice Chandrachud told the gathering.

Rajesh Mondal is the founder of Press Time Pvt Ltd, a news company, as well as a video editor, content creator, and full-stack web developer. https://linksgen.in/rajesh