
Fordham Professor Chinmayi Sharma Imagines a “Hippocratic Oath” for AI

Earlier this year, Fordham Law School professor Chinmayi Sharma published a paper entitled “AI’s Hippocratic Oath.” In the face of a tech industry that seems unwilling to regulate its own development process, despite growing evidence of harm, Sharma proposes a new professional licensing process for artificial intelligence engineers. She argues that, like doctors, lawyers, and accountants, AI engineers should be held to certain ethical standards, a kind of Hippocratic oath that they must follow.
Her paper has drawn attention not only in the tech and legal spheres but in governmental ones as well. In September, it was mentioned at a Senate Judiciary subcommittee hearing by David Evan Harris, a Senior Policy Advisor at the California Initiative for Technology and Democracy and a former research manager at Meta, one of a number of former tech employees who testified about their concerns over the industry’s lack of self-regulation.
I spoke to Professor Sharma by Zoom earlier this month. This interview has been edited and condensed for clarity.
I’m really fascinated by this idea of a Hippocratic Oath for tech engineers.
Some people love the paper, some people hate it. My goal was provocation. I want you to think about things in a different way than our traditional schools of thought around regulating big technology do.
What got you interested in the topic?
I attended a conference where we were talking about trust and safety in AI. It’s hard to know at an individual level when AI is being harmful. For example, if social media were increasing the likelihood of depression, we could only know that’s happening at the aggregate level. But who has that information? The social media companies.
As a public, we also don’t know what’s possible for technology, because we’re not experts. So, for example, in products liability, a plaintiff needs to show a reasonable alternative design: You have to say not only that this is harmful, but that there was another way to have built it while keeping its benefits. That’s a very hard thing for a plaintiff to show when it comes to AI.
But this isn’t the only place where these problems have existed. It’s true of medicine: There is a huge expertise gap between the thing being done and the public. And in some fields, we think the services provided are so important that we don’t want the field to benefit just itself or its direct clients. For example, we don’t want Enron accountants to care only about Enron; we want them to care about the public at large that’s dependent on accounting reports to make investments.
So I asked, who has this information [in tech]? It’s the engineers. Engineers know how to build things, they know what’s going on in their company, and they have the information relevant to when something is causing harm.
Their interests are not perfectly aligned with their employers’, either.

In the paper you talk about how an “AI engineering for the common good” approach would benefit engineers. How do you see that?
If engineers can be held liable for malpractice, they’re empowered against their employers to say, I can’t do that thing you want me to do. If a company is paying for malpractice insurance, it’s also less likely to ask you to do things that’ll expose it to liability.
The second benefit is information sharing. Right now, AI folks do not feel comfortable talking to other people in their field about discoveries they’re making and things they’re finding, because all of that is considered confidential business information. If you have a professional board that says, No, this kind of information belongs in the public domain so that we can improve our field, that can foster information sharing. A hospital can’t say, We found a new surgical technique that’s been shown to save lives, but this is our competitive advantage, so we’re not going to share this research with others in the profession. [Licensing] has cultivated an information-sharing norm in the discipline. Everything is in the pursuit of better science.
I also really believe, and I think sociology has shown, that there can be a cultural change within an organization if you give it a name and imbue it with a purpose. There are bad doctors out there, for sure, but generally speaking, I think doctors take their jobs very seriously, and prioritizing patient outcomes is important to them.
If you talk to people in the tech world about why they are so passionate about AI, they’ll often say they think it’s better for humanity. They think that they’re going to build things that are capital-“G” Good for the world. If that’s true, then there’s some core there of wanting to do good. And if you can galvanize that, then maybe you can build a culture of social responsibility.
Do you think that has the capacity to trickle down into the nature of the products, as well?
That’s my hope. Start at the very small level: If engineers thought there was no way to use an AI chatbot that doesn’t ultimately lead to mental health issues, then they could refuse, as a profession, to build products for minors that have AI in them.
On a grander scale, [they could say] we will not build law enforcement facial recognition technology, because we do not think that the science is accurate enough. There are examples of some of those tools explicitly saying do not use this for law enforcement, but that’s not an enforceable thing. It’s just a warning. But what if you just said, we won’t build this?
One argument you hear a lot about AI is that there’s just no controlling its development, that it’s just this massive wave coming rather than the work of companies trying to make money.
This is one of the biggest pieces of feedback I get on this paper. Tech has this libertarian undertone. “They would never get on board with something like this.”
And my thought is, Yeah, sure. But doctors didn’t want this either. They actually lobbied to get rid of licensing, until suddenly everyone started suing them out of existence. Then they proactively lobbied for licensing.
In my paper, I argue one of two things will happen: Either we’ll pass some really strong legislation that’ll be worse than what I’m advocating for, which is unlikely given political gridlock, or, after some bad stuff happens, the public will say we hate AI and start suing AI [companies] in some really big ways that make it concerning for the industry.
What other kinds of critiques do you run into?
Generally, there are three interconnected points. One, AI is not advanced enough as a field for us to arrive at any standard practice for developers. Medicine is so well established; AI is obviously different from medicine. It’s just too early on.

Second, this is going to chill innovation. We have been a fertile environment for tech; that’s why most of the tech companies come from the US. If you do something like this, it will chill technology.
The third and biggest critique is that licensing boards have a reputation for being anticompetitive. They’re basically boards made up of professionals in the field who have an incentive to promote scarcity so they can charge more and keep a lot of the work within their jurisdiction. So, for example, nurse practitioners are not allowed to prescribe, only doctors are. But that has been to the detriment of the public.
To the extent that there are disciplinary boards, if they’re staffed only by people in the profession, there’s also a lack of enforcement.
And how do you respond to these questions?
My response to the first is, I disagree. I think there are standards in AI development. Also, under a professional standard of care, there isn’t one version of what reasonable behavior is; there can be multiple schools of thought, as long as they’re backed by science and by others in the profession.
To the second point, I don’t think it’ll chill innovation, because I think of innovation as closer to discovery. Some people think of innovation as commercialization. I don’t think those are the same. I think it’s okay if we bring fewer things to market, as long as we’re allowing research to be robust. And if this means fewer shitty snake oil products make it to the public, I’m very much okay with that.
That actually parallels medicine nicely. We allow drug companies to research all kinds of things, but that doesn’t mean they get to sell whatever they want.
Yes. And they’re also not allowed to say bloodletting is the way to cure yourself of epilepsy, just because someone might believe that to the core of their soul. The medical profession can say we are making the normative decision that that is not valid, because we don’t have any science to support it.
The benefit of a world of expert-driven, legally enforced self-governance is that, as new science comes out, we can change those standards. So at one point in time, Body Mass Index was the standard used to measure health. Now it’s clear that BMI is a very racially discriminatory measure of health, and it’s no longer encouraged practice to use it.
As for licensing boards, I think the concern about them being anticompetitive is very valid. But I think there are ways to structure them to be more balanced and to have more oversight. For example, you could say that you cannot be a practicing member of the profession while you’re on the board. Or decide that half the board is professionals from the field, and the other half is from other disciplines that are related to or impacted by it.
If some kind of regulation like this is not put in place, what would be your fear for the future?
The first thing that comes to mind is a widening disparity between the haves and have-nots. Whatever groups are already marginalized will be further marginalized, because surveillance always impacts minorities more. It’s going to be used in places like immigration and public housing. In law enforcement, there’s already preexisting bias against certain populations that’s going to be supercharged by AI.
In classrooms, it’s going to be the underfunded schools that are more reliant on AI, but that’s going to be AI for surveillance of students or plagiarism detection technology, which is super inaccurate. If you use it for automated grading, those tools also have biases against English-as-a-second-language students.
Then there’s the classic: If there’s really good AI out there, who’s going to be able to afford it? It’s going to be people who already have a lot of resources.
But my biggest concern is, nothing is ever free. If it’s free, then you’re the product. I think AI can influence the public into being more efficient products. Julie Cohen has a great book [Between Truth and Power] about how what these personalized tech interfaces are actually trying to do is influence people into falling into maybe one of three categories. They’re trying to narrow the human experience because it’s more efficient. That’s extremely frightening, both in the fact that they can do that and in the outcome.
If most of the articles end up being written by AI and most of the images end up being made by AI, it’s not impossible to see how that could happen.