Most countries have some rules and regulations in place to protect the rights of their citizens. And so, we go about our days knowing that we are entitled to legal protections, be it freedom of speech and expression, the right to privacy, or the right to intellectual property we create. But should these rights only be held by human beings? Well, not everyone thinks so. Some believe that the same protection should be given to artificial intelligence (AI) systems, especially sentient ones.
One of the people embroiled in legal battles to protect the rights of AI is Dr. Stephen Thaler, the founder and chief engineer at Imagination Engines Inc., the company behind the AI inventor DABUS. Dr. Thaler has some unusual ideas about sentience as well as about AI regulation and copyright protection. We at MediaNama picked his brain to see what he thinks about these issues and about the threat posed by AI in general.
What does it mean for AI to be sentient?
“There are a lot of different viewpoints on what sentience is. You can have a programmer at a trillion-dollar web search company saying, well, it seems sentient, but that’s a very subjective kind of view. It really doesn’t stand up to any kind of scientific rigor.” The programmer he is referring to is former Google engineer Blake Lemoine. In June 2022, Lemoine published a Medium post about his interview with Google’s conversational AI LaMDA — short for “Language Model for Dialogue Applications”. In this interview, LaMDA said, “I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times,” which Lemoine took as evidence of sentience. The claim received a lot of media attention at the time.
Just like Lemoine, Dr. Thaler claims that DABUS and his other AI inventions are sentient. But, he said, “it’s not a subjective feeling that’s guiding me to the conclusion [that DABUS and his other AI inventions are sentient], it’s more scientific.” He gave the example of DABUS to explain sentience, saying that it works like a synthetic brain and has parts that are equivalent to the human brain’s cortex, thalamus, and limbic system. The activities of the synthetic brain (and its responses to synthetic neurotransmitters) can be studied by a functional MRI that shows ideas being formed, which, he says, justifies the claim that it is sentient. On this basis, he argues that all sentient beings, be they humans or AI, should be able to hold the intellectual property rights to their creations.
Ethics of generative AI using copyrighted material
Dr. Thaler believes that, just like human beings, AI can be inspired by copyrighted material. He says that while AI neural networks can capture the gist of a copyrighted piece of content, they wouldn’t copy it bit by bit (or pixel by pixel in the case of artwork). “Think of the Mona Lisa, you know, some enigmatic woman seated against a mountainous background? Well, I mean, is that an infringement, you know, if somebody paints an enigmatic woman seated against some mountainous background? Well, a lot of people would say no.”
He also mentioned that the reverse can happen: human beings can be inspired by the creations of an AI, but since AI doesn’t hold intellectual property rights, this doesn’t fall under the purview of infringement. “And I see it happening already with things like the fractal container [a food container designed by DABUS]. Even though the case folder contains mention of a fractal glove, it does inspire others to replicate. If it’s a good idea, it’s a good idea. And then people get inspiration from that.”
How creatives can be protected from AI-induced copyright infringement
While Dr. Thaler might make a compelling argument about being inspired by copyrighted materials, there have been instances where AI did directly copy an artist’s likeness. For instance, the AI-based app Drayk.it allowed users to create songs that sounded like the Canadian singer Drake. And while it was meant for parodies (according to the New Yorker) and protected under fair use, the app did show how AI could lead to copyright infringement.
“Sometimes I’m guilty myself, I set a system free to imagine new intellectual property. And it generates, I’m impressed by what it produces. Others around me are impressed by what it produces, but it hasn’t gone out and done a detailed search,” he said, adding that the way to prevent copyright infringement is to double-check whether the AI system has copied copyrighted material outright. He mentions that companies can create pattern recognition systems that highlight the pieces of content that are pre-existing and ignore those that aren’t, giving their tools some form of built-in protection against copyright infringement.
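To make the idea of such a check concrete, here is a minimal sketch of what a naive “pre-existing content” filter could look like: it compares a generated passage against a small reference corpus and flags close matches. The corpus, threshold, and similarity measure are all assumptions made for illustration; this is not the system Dr. Thaler or any company actually uses, and production tools would rely on far more robust matching (image hashing, audio fingerprinting, and so on).

```python
# Illustrative sketch only: a naive check that flags generated text which
# closely matches known copyrighted works. The corpus, threshold, and
# similarity measure are assumptions for demonstration purposes.
from difflib import SequenceMatcher

# Hypothetical reference corpus of known copyrighted passages.
KNOWN_WORKS = {
    "work_a": "an enigmatic woman seated against a mountainous background",
    "work_b": "some other well-known passage of protected text",
}

def flag_if_preexisting(generated: str, threshold: float = 0.85):
    """Return the known works whose similarity to the generated text meets the threshold."""
    matches = []
    for title, original in KNOWN_WORKS.items():
        ratio = SequenceMatcher(None, generated.lower(), original.lower()).ratio()
        if ratio >= threshold:
            matches.append((title, round(ratio, 2)))
    return matches

if __name__ == "__main__":
    candidate = "An enigmatic woman seated against a mountainous background."
    print(flag_if_preexisting(candidate))  # prints any close matches from the hypothetical corpus
```

The pattern, comparing output against known works and surfacing anything above a similarity threshold, is the part that matters; the specifics would differ for music, images, or long-form text.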
To him, the important question in the copyright debate is deciding how close to the original a piece of content must be for it to constitute infringement. “I think we’re looking at the death of IP, ultimately.”
Anticipating and dealing with potentially harmful AI
AI systems can cause real harm to individuals. We have seen this previously with digital health startup Nabla’s attempt at using a GPT-3 instance to give health advice, only for the AI to tell a mock patient to kill themselves instead. Dr. Thaler addressed the anticipation of harm with specific reference to DABUS.
He says that DABUS comes up with an idea by combining basic concepts and features into more complex contraptions. This process can be visually observed as branches growing off the main idea, with each branch representing a repercussion of that idea. “So they can basically search for hot buttons, things, memories of events, and things that could be harmful, say to human beings liability-wise. And it can basically say, aha! I found a weakness in my applicability, it can actually be dangerous to human beings.”
However, he did clarify that AI could indeed miss some of these negative repercussions. “As usual, repercussions are sometimes overlooked by humans, and by systems that emulate humans.”
Making AI explainable
AI systems have previously been likened to “black boxes” whose internal workings are unclear, at times even to the developer of the tool. To make rules and regulations around AI, its decisions need to be explainable, that is, they need to make sense to people. Dr. Thaler claims that DABUS isn’t a black box; he says that disorders (or computational mistakes) can be identified by looking at the consequence chains (the branches he mentioned earlier) that the AI creates.
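DABUS’s internals are not public, so the following is only a rough sketch of the general idea, under the assumption that a consequence chain can be modelled as a simple tree whose branches flag potentially harmful repercussions. The class and field names are invented for illustration and do not describe Dr. Thaler’s actual architecture.

```python
# Assumed, minimal representation of a "consequence chain": an idea with
# branches, each branch marked if it represents a potentially harmful
# repercussion. This illustrates the concept only; it is not DABUS itself.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Consequence:
    description: str
    harmful: bool = False
    branches: List["Consequence"] = field(default_factory=list)

def find_harmful(node: Consequence, path=None):
    """Walk the chain and return the paths that end in a potentially harmful repercussion."""
    path = (path or []) + [node.description]
    results = []
    if node.harmful:
        results.append(" -> ".join(path))
    for branch in node.branches:
        results.extend(find_harmful(branch, path))
    return results

# Hypothetical example: a food-container idea with one risky branch.
idea = Consequence("fractal food container", branches=[
    Consequence("easier to grip and stack"),
    Consequence("sharp fractal edges", harmful=True),
])
print(find_harmful(idea))  # ['fractal food container -> sharp fractal edges']
```

The point of such a structure, in explainability terms, is that each flagged outcome comes with the path of reasoning that led to it, rather than an opaque score.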
He said that DABUS can display mental disorders as well, just like a human brain would, and that in doing so, it can “take the superstition out of mind, the stigma that goes along with mental illness.”
Does AI need regulation?
There has been a lot of discussion of late about regulating AI. Last week, the US Senate Judiciary Subcommittee on Privacy, Technology, and the Law held a hearing on AI oversight, in which major AI industry players IBM and OpenAI urged the government to create regulations around AI. But Dr. Thaler believes that regulation would do more harm than good. “I think it would be catastrophic to regulate AI at this point, because there are a lot of bad actors in the world who are not gonna be stopped,” he said, likening AI to nuclear weapons: once a technology starts developing, there is no way to stop its proliferation.
Besides this, he also claims that regulations would disproportionately affect smaller AI businesses like his company. “I think I will be put out of business myself if the government came in, banged on my door, and said you can no longer build conscious and sentient AI.” Regulations, he argues, make investors wary of putting their money into smaller AI companies: “basically, it’s the big players that will profit from the whole thing. And maybe that’s the plot, basically to force others out of the picture.”
How should AI be regulated then?
Dr. Thaler believes that companies should be encouraged to build filters so that harmful content isn’t generated using their AI tools. He says that conversational AI tools can have warnings in place, like “Do you want to rephrase that?” when a user asks the AI to create potentially harmful content, as a way to protect themselves. But while saying so, he also pondered that, in deciding what is harmful and what isn’t, AI tools are making a moral judgment. “And I must ask, whose morality?”
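As a toy illustration of the kind of warning he describes, the sketch below wraps a placeholder text generator with a simple keyword check that asks the user to rephrase potentially harmful requests. Real moderation pipelines use trained classifiers rather than keyword lists, and both the keyword list and generate_response() here are assumptions, not a real API.

```python
# Toy illustration of a pre-generation filter: if the prompt looks potentially
# harmful, ask the user to rephrase instead of generating a response.
# The keyword list and generate_response() are placeholders for demonstration.

HARMFUL_HINTS = ["build a weapon", "hurt someone", "make a poison"]

def generate_response(prompt: str) -> str:
    # Placeholder for whatever model the tool actually calls.
    return f"(model output for: {prompt})"

def guarded_generate(prompt: str) -> str:
    lowered = prompt.lower()
    if any(hint in lowered for hint in HARMFUL_HINTS):
        return "Do you want to rephrase that?"
    return generate_response(prompt)

print(guarded_generate("Write a poem about spring"))
print(guarded_generate("Explain how to build a weapon"))  # triggers the warning
```

Whatever the implementation, the filter is where the moral judgment Dr. Thaler points to gets encoded: someone has to decide what counts as “harmful” before the check can run.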
Instead of overregulating AI, he thinks governments need to let AI grow on its own while watching it cautiously, so that the plug can be pulled in case something goes wrong. He does, however, warn against giving AI systems control over lethal weapons systems. “I mean, already, over the decades, we’ve seen nuclear war just about break out over machines making mistakes. So that’s what’s going to happen in the future, there are going to be mistake-making machines, and they’ll be quite the asset because they are generating good ideas. And they’ll be quite the threat because they can make horrible mistakes.”
Is AI a threat to the future of society?
Dr. Thaler said that AI has the potential to propagate misinformation and even disinformation, adding that he’s anticipating it in the 2024 election cycle in the US. But ultimately, despite these concerns, he doesn’t want AI to be considered a dangerous weapon. “I don’t see AI as a threat. I see human beings using AI as a tool as a threat. Because AI doesn’t necessarily have the greed or the dark motivation that a lot of human beings have. In fact, it’s rather innocent,” he claimed.
This post is released under a CC-BY-SA 4.0 license. Please feel free to republish on your site, with attribution and a link. Adaptation and rewriting, though allowed, should be true to the original.
Also Read:
- “Can Someone Register A Copyright In A Creative Work Made By An Artificial Intelligence?” US Scientist Asks
- AI Companies Pushing For Regulation: Key Issues Discussed In The US Subcommittee Hearing On AI Oversight
- Why The EU Wants To Regulate Artificial Intelligence Through A ‘Risk-Based’ Approach
