UNESCO discusses how the use of AI-generated evidence leads to discrimination

Drowsiness detection systems in cars were never designed to serve as evidence, and thus lack the parameters needed to be admissible in court, speakers pointed out.

The United Nations Educational, Scientific and Cultural Organization (UNESCO) recently conducted a webinar on the admissibility of AI-generated evidence in courtrooms. This was the second webinar in UNESCO’s Judges Initiative Webinar Series: Artificial Intelligence and the Rule of Law. The webinar focused heavily on the challenges posed by drowsiness detection systems in legal proceedings and the discrimination that they perpetuate when used as evidence.

What does “AI-generated evidence” even mean?

Let’s look at this with an example. Imagine your car has a drowsiness detection system. One morning, even though you feel alert and vigilant, the system tells you that you should take a break. You trust your own judgment and feel confident that you aren’t drowsy, but despite that, you accidentally hit a bike. In such a situation, those investigating the accident can use your car’s drowsiness detection system, and the alert you received, to establish that you weren’t vigilant. This would fall under the purview of AI-generated evidence.
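To make this concrete, here is a minimal, hypothetical sketch of how such an alert might be computed, assuming a simple eye-closure (PERCLOS) heuristic. Real in-car systems fuse many proprietary signals (steering input, lane position, blink rate), so the names and the 0.3 threshold below are illustrative assumptions, not any vendor’s actual logic:

```python
# Hypothetical sketch of a drowsiness alert heuristic; the PERCLOS metric
# and the 0.3 threshold are illustrative assumptions, not a vendor's logic.
from dataclasses import dataclass

@dataclass
class EyeFrame:
    timestamp: float   # seconds since the monitoring window began
    eye_closed: bool   # whether the eyelids are judged closed in this frame

def perclos(frames: list[EyeFrame]) -> float:
    """Fraction of frames in the window where the eyes were closed."""
    if not frames:
        return 0.0
    return sum(f.eye_closed for f in frames) / len(frames)

def should_alert(frames: list[EyeFrame], threshold: float = 0.3) -> bool:
    """Raise a 'take a break' alert when eye closure exceeds the threshold.

    Note the forensic gap: only a boolean comes out. The frames, threshold,
    and confidence behind the alert are not logged, so the alert alone
    cannot later be audited as evidence.
    """
    return perclos(frames) > threshold
```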

“This is not science fiction. In Europe, a new regulation mandates all cars to be equipped with a driver drowsiness and attention warning system. And that is from 6 July 2022 for new types of cars, and from 7 July 2024 for all new vehicles,” said Sabine Gless, Professor of Criminal Law and Criminal Proceedings at the University of Basel (Switzerland), discussing the drowsiness alert example during the webinar.

Why it matters:

AI systems like the drowsiness detection system suffer from what the webinar called “the white guy problem”: the data used to train and test these systems lacks diversity, because it is gathered predominantly from young white males who volunteer to play a computer game. This leaves the systems biased against, for example, elderly women with drooping eyelids or drivers with dark skin.
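One way such bias could be surfaced, assuming a hypothetical set of labelled test records rather than any dataset the webinar cited, is a per-group false-alert audit along these lines:

```python
# Illustrative bias audit: compare a detector's false-alert rate across
# demographic subgroups. The record format and group labels are hypothetical.
from collections import defaultdict

def false_alert_rate_by_group(records: list[dict]) -> dict[str, float]:
    """records: [{'group': str, 'alerted': bool, 'actually_drowsy': bool}, ...]
    Returns, per group, the share of alerts raised when the driver was not drowsy."""
    alerts = defaultdict(int)
    false_alerts = defaultdict(int)
    for r in records:
        if r["alerted"]:
            alerts[r["group"]] += 1
            if not r["actually_drowsy"]:
                false_alerts[r["group"]] += 1
    return {g: false_alerts[g] / alerts[g] for g in alerts}

# A large gap between groups (say, 'young_male' vs 'elderly_female') would
# indicate exactly the kind of skew the webinar describes.
```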

These systems could also be biased by the roads you drive on; Gless mentioned that most drowsiness detection systems are trained on roads with a middle lane. “So if you drive on a road that has no middle lane, the car will constantly think that you are on the wrong side. It thinks that the left side of the road is the middle lane and it will beep on you,” she added. As AI systems become more widespread across industries, it becomes important to discuss the legal challenges they bring with them.


The challenge of defending yourself against AI-generated evidence: 

Discussing how people can defend themselves against evidence generated by drowsiness detection systems, Gless said, “The traditional way [of defending yourself] would be the parties or the bench appoint an expert witness, and that the expert witness can explain the accessibility, the traceability, and the reproducibility of such evaluative data like Drowsiness Detection Alerts. But the question is, would this enable defendants to meaningfully confront the incriminating data?”

The “black-box” nature of AI systems makes it hard for their decisions to be explained. “The black box problem may be entangled with a trade secret issue that is confidential information that the car maker will not disclose because of their protection of a competitive advantage in car design,” Gless added. 

She further pointed out that the use of these systems as evidence goes beyond their original intended purpose. “Car and driving assistance systems are not designed as forensic tools. They lack a proper metric or certification or any other system that would really document that trustworthiness,” she said, adding that there is a high chance that car producers alert the driver (in the case of AI-enabled drowsiness detection systems) more often than necessary so that they are on the safe side.  

Verifying AI-generated evidence:

The only way to establish whether the data generated by an AI system is a trustworthy piece of evidence is to check whether the system does what it claims to do, and whether it consistently produces accurate results when applied to similar circumstances.

For this, according to Paul W. Grimm, Professor of the Practice of Law at Duke Law School, “the parties [involved in the case must] have access to sufficient information to be able to demonstrate to the judge the method by which the artificial intelligence application was designed, tested, the purpose for which it was intended to accomplish, and whether or not it is being used in the particular case for the purpose for which it was tested and if it is reliable and valid.” He addressed the trade secret concern raised by Gless by saying that judges should “enable disclosure to the litigants under circumstances that will not threaten the competitive advantage for the developer of the software.”
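In code, the “reliable and valid” standard boils down to a reproducibility check: run the system repeatedly on the same recorded inputs and confirm the outputs agree. The sketch below assumes a hypothetical run_detector callable standing in for the vendor-supplied system:

```python
# Sketch of the reproducibility test the speakers allude to. `run_detector`
# is a hypothetical stand-in for the vendor-supplied detection system.

def is_reproducible(run_detector, recorded_inputs, trials: int = 5) -> bool:
    """Re-run the detector on identical inputs; a system whose verdicts vary
    between runs cannot meet a 'consistently accurate' evidentiary standard."""
    baseline = [run_detector(x) for x in recorded_inputs]
    return all(
        [run_detector(x) for x in recorded_inputs] == baseline
        for _ in range(trials - 1)
    )
```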

The liar’s advantage created by AI systems:

Grimm mentioned a case from the UK where a wife attempted to prove that her husband was an unfit parent by bringing a video recording of him to court as evidence, only for the “evidence” to turn out to be a deepfake. Through this, he sought to establish how inauthentic evidence may end up making court judgments unjust.

“But on the other side, the public awareness and the pervasive knowledge that this technology enables evidence to be faked allows a person who has evidence that is authentic, offered against them, which is the result of computer systems, [to] now argue in a conclusory fashion, this is a deep fake,” Grimm said.

Whether you have to prove that the opposing party’s evidence is fake, or that your own evidence is authentic, finding the experts to verify evidence can be costly. “The question, though, in a developing world context is, are we going to say to people, you both need to hire experts, let’s say the state and an accused, or two parties to a civil case?” said Hanani Hlomani, Research Fellow at the non-profit think tank ICT Africa.

Even if the parties in a case are able to overcome financial barriers and hire AI experts to verify evidence, there is a high chance that they wouldn’t be able to fully explain the evidence either. “[In the case of] large language models where there’s an enormous amount of data, the machine then takes over and starts writing its own algorithm. And so, the humans who initially designed [it] are no longer involved in all aspects of its development,” Grimm said.

Preventing tampering or malicious use of AI-generated evidence:

The liar’s advantage perpetuated by AI-generated evidence makes it important for courts to ensure that evidence isn’t tampered with after it is disclosed in court. “One thing that courts can do is to issue orders in cases that restrict who has access to that, have them sign or acknowledge that they have knowledge of that court limitation, and that if they use it for a purpose other than what has been approved in the litigation, that could subject them to contempt of court,” Grimm said.

He also mentioned that a technical solution would be to “make sure that the format in which this information has been provided cannot be manipulated after it has been disclosed and used for an improper purpose in the case.”
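The webinar did not prescribe a specific mechanism, but one common tamper-evidence technique, offered here as an assumption rather than the speakers’ method, is to record a cryptographic fingerprint of each evidence file at the moment of disclosure:

```python
# Tamper-evidence sketch: fingerprint an evidence file with SHA-256 at
# disclosure so that any later modification becomes detectable.
import hashlib
from pathlib import Path

def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of the file's contents."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def verify(path: str, recorded_digest: str) -> bool:
    """True if the file still matches the digest recorded at disclosure."""
    return fingerprint(path) == recorded_digest

# Usage: store fingerprint('alert_log.bin') in the court record when the
# evidence is disclosed; any party can later call verify() to show the
# file has not been altered since.
```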

