Opinion: Shouldn’t we already have AI laws?

“But is it legal?”

I remember working with someone who always asked that when we were confronted with a challenging decision. What the question really meant was: could we get away with doing it?

I would then counter with, “It shouldn’t matter if it’s legal. What we should be asking is, what’s the right thing to do?” 

We often look to the law when we aren’t sure about what the right course of action is. 

That’s one of the problems we face now. We look at AI and think there should be some laws around it. But there aren’t. And the people willing to say, “well, if it isn’t illegal…” are the ones making decisions. They aren’t asking “what is right?” or even “should we do it?”

We can complicate this further by questioning how we define “right.” There’s a deep rabbit hole of discourse we could go down without ever reaching an objective definition of “right,” and unfortunately not enough people are willing to have that conversation. Yet it is crucial that we have it, along with other related discussions around ethics and technology, now.

Technology Needs Regulation

With the rapid advance of technology in general in the last 40 years and AI specifically in the last few years, we are constantly confronted with challenges that are personal, social, and arguably even existential. 

And we can zoom in and be very specific about generative AI and chatbots. These have been deployed without consultation and, dangerously, with limited guardrails. Some will argue that involving the larger public is “iterative deployment,” so that how people use the systems will influence their future development. While there may be some idealized point where that argument holds, in practice this approach has caused disruption and revealed vulnerabilities, with catastrophic outcomes in some cases.

Here’s the thing with software deployments as opposed to other products such as pharmaceuticals, food, or even toys. For the most part, developers can release apps freely, and sometimes they ship something bad or broken. A bad app could crash repeatedly, brick your phone, or introduce a vulnerability into your system.

None of these are good, but so far they have not necessitated aggressive regulation. New medicines, on the other hand, must go through rigorous testing, and we expect toys to be designed and shipped so that they will not harm children. The level of potential harm established a need for guidelines and consumer protections.

Given how deeply digital technology has penetrated our lives and how powerful AI tools have become, we are seeing technology have much broader impacts. In other countries, cases are being litigated over chatbots driving people to violent acts, including murder and suicide. The mass violation of privacy and harassment in the creation of non-consensual images of people by Grok on X, and the DICT’s response of playing hokey-pokey with the problem (one foot in, one foot out, and then you turn yourself around), all lead to another question I’m often asked in conversations about AI:

“Shouldn’t there be a law for that?”

Ethics Should Guide the Law

The simple answer is: yes, there probably should be. Whether it’s about AI use, social media platform behavior, or limits on data collection, we should be developing a regulatory regime that both encourages development and protects citizens, especially in spaces that are complicated and complex and may infringe on rights even when citizens are unaware of it.

At the same time, technology moves fast and legislation moves painfully slowly. The slew of AI bills filed in the 19th Congress all died on the vine. Many have been refiled and new ones are in play. But when we weigh the need to promote and facilitate development so that we can stay competitive against the need to protect people from harm, legislation will always be playing catch-up.

In the absence of “is it legal?” or “is there a law for that?”, I propose that we beef up our ethics discussions. I know that might sound silly to some. After all, when did we ever have an ethics conversation about our laptops or cellphones? It’s not as if we ever had iPod ethics. But this points to the fact that technology has become such a powerful force in our lives that our very use of it necessitates ethical frameworks around use and deployment. In the absence of laws at present, ethics would serve us well.

Ethics asks what the “right” or normative behavior is. How do we want our society to function? Rather than waiting for legislation to define this, we as a society need to have conversations, discussions, and consultations about how we want technology to be deployed around us.

This would shift us from waiting for laws to tell us what is allowed (which lets bad-faith actors release and deploy things simply because there is no law against them) to discussing what we believe is “right” in our society, which should then guide the development of policy.

Right now the questions are often: what do we do about deepfakes? How do we stop cheating in schools? How do I upskill because AI will take my job? What do we do about AI slop?

But there are many more questions we will need to confront. Among them: what is our stance on lethal autonomous weapons systems (LAWS)? And in a world where more technologically advanced countries will have greater access to these, what will the new rules of warfare and peacekeeping look like?

How do we begin to draw the lines around surveillance and privacy, especially as we all walk around with devices that could be used to surveil our incredibly personal activities? What levels of intervention should the government have in our communications? At what point can the government step in to shut things down? And when should it, like in the case of Grok?

These are only a few of the questions we will face. This is obviously a difficult task, given how divided our public spheres are and how often we cannot even agree on ground truth. But if technology, and especially AI, is such a powerful factor in our world, then we need to start wrestling with it from an ethical perspective. We need ethical frames to guide regulatory development.
