“Up to this point, AI policy has been largely made up of industry-led approaches like encouraging transparency, mitigating bias, and promoting the principles of ethics. I’d like to make one simple point in my testimony today: these approaches are vital, but they are only half measures,” Woodrow Hartzog, a law professor at the Boston University School of Law, said during the US subcommittee hearing entitled 'Oversight of A.I.: Principles for Regulation', held on September 12.

Explaining what a half measure is, Hartzog described it as an “approach that is necessary but not sufficient, that risks giving us the illusion that we've done enough.” Audits, assessments, and certifications are necessary, he said, but the “[AI] industry leverages these to dilute our laws into managerial box-checking exercises that entrench harmful surveillance-based business models.”

Besides Hartzog, William Dally (NVIDIA Corporation) and Brad Smith (Microsoft Corporation) also deposed at the hearing. The hearing was chaired by Senator Richard Blumenthal, who, along with fellow subcommittee member Josh Hawley, proposed a bipartisan framework for regulating artificial intelligence earlier this month. Here's a round-up of significant points made during the deposition.

Do transparency and self-regulation lead to unbiased AI?

Hartzog noted that transparency and self-regulation are popular proposals for addressing opaque AI systems, but that they do not produce accountability on their own. He suggested that lawmakers need to intervene when AI tools are harmful and abusive. He went on to discuss how AI systems are known to be biased along lines of…
