How to regulate AI

Illustration of humans, androids, and data connections. (By Stuart Kinlough / Ikon Images)

Illustration by Stuart Kinlough/Ikon Images

Science & Tech

How to regulate AI

Scholars from business, economics, healthcare, and policy offer insights into areas that deserve close look


long read

The pace of AI development is surging, and the effects on the economy, education, medicine, research, jobs, law, and lifestyle will be far-reaching and pervasive. Moves to begin regulation are surfacing on the federal and state level.

President Trump in July unveiled executive orders and an A.I. action plan intended to speed the development of artificial intelligence and cement the U.S. as the global leader in the technology.

The suite of changes bars the federal government from buying AI tools it considers ideologically biased; eases restrictions on the permitting process for new AI infrastructure projects; and promotes the export of American AI products around the world, among other developments.

The National Conference of State Legislatures reports that in the 2025 session all 50 states considered measures.

Campus researchers across a series of fields offer their takes on areas that deserve a look.


Illustration of wallet and computer networks.
Photo illustrations by Liz Zonarich/Harvard Staff

Risks of illegal scams, price-fixing collusion

Eugene Soltes is the McLean Family Professor of Business Administration at Harvard Business School.

As artificial intelligence becomes more ubiquitous within the infrastructure of business and finance, we’re quickly seeing the potential for unprecedented risks that our legal frameworks and corporate institutions are unprepared to address.

Consider algorithmic pricing. Companies deploying AI to optimize profits can already witness bots independently “learn” that price collusion yields higher returns. When firms’ algorithms tacitly coordinate to inflate prices, who bears responsibility — the companies, software vendors, or engineers? Current antitrust practice offers no clear answer.

The danger compounds when AI’s optimization power targets human behavior directly.

Research confirms that AI already has persuasive capabilities that outperform skilled negotiators. Applied to vulnerable populations, AI transforms traditional scams into bespoke, AI-tailored schemes.

“Pig-butchering frauds” [where perpetrators build trust of victims over time] that once required teams of human operators can be automated, personalized, and deployed en masse, deceiving even the most scrupulous of us with deep-fake audio and video.

Research confirms that AI already has persuasive capabilities that outperform skilled negotiators. Applied to vulnerable populations, AI transforms traditional scams into bespoke, AI-tailored schemes.

Most alarming is the prospect of AI agents with direct access to financial systems, particularly cryptocurrency networks.

Consider an AI agent given access to a cryptocurrency wallet and instructed to “grow its portfolio.” Unlike traditional banking where transactions can be frozen and reversed, once an AI deploys a fraudulent smart contract or initiates a harmful transaction, no authority can stop it.

The combination of immutable smart contracts and autonomous crypto payments creates extraordinary possibilities — including automated bounty systems for real-world violence that execute without human intervention.

These scenarios aren’t distant speculation; they’re emerging realities our current institutions cannot adequately prevent or prosecute. Yet solutions exist: enhanced crypto monitoring, mandatory kill switches for AI agents, and human-in-the-loop requirements for models.

Addressing these challenges demands collaboration between innovators who design AI technology and governments empowered to limit its potential for harm.

The question isn’t whether these risks will materialize, but whether we’ll act before they do.


Illustration of person using a laptop.

Choosing path of pluralism

Danielle Allen is the James Bryant Conant University Professor and Director of the Edmond and Lily Safra Center for Ethics. She is also the Director of the Democratic Knowledge Project and the Allen Lab for Democracy Renovation at the Harvard Kennedy School. 

Danielle Allen.
Photo by Melissa Blackall

As those at my HKS lab, the Allen Lab for Democracy Renovation, see it, three paradigms for governing AI currently exist in the global landscape: an accelerationist paradigm, an effective altruism paradigm, and a pluralism paradigm.

On the accelerationist paradigm, the goal is to move fast and break things, speeding up technological development as much as possible so that we get to new solutions to global problems (from labor to climate change), while maximally organizing the world around the success of high IQ individuals.

Labor is replaced; the Earth is made non-necessary via access to Mars; smart people use tech-fueled genetic selection to produce even smarter babies.

On the effective altruism paradigm, there is equally a goal to move fast and break things but also a recognition that replacing human labor with tech will damage the vast mass of humanity, so the commitment to tech development goes hand in hand with a plan to redistribute the productivity gains that flow to tech companies with comparatively small labor forces to the rest of humanity via universal basic income policies.

On the pluralism paradigm, technology development is focused not on overmatching and replacing human intelligence but on complementing and extending the multiple or plural kinds of human intelligence with equally plural kinds of machine intelligence.

The purpose here is to activate and extend human pluralism for the goods of creativity, innovation, and cultural richness, while fully integrating the broad population into the productive economy.

Pennsylvania’s recent commitment to deploy technology in ways that empower rather than replace humans is an example, as is Utah’s recently passed Digital Choice Act, which places ownership of data in social media platforms back in the hands of individual users and demands interoperability of platforms, shifting power from tech corporations to citizens and consumers.

If the U.S. wants to win the AI race as the kind of society we are — a free society of free and equal self-governing citizens — then we really do need to pursue the third paradigm. Let’s not discard democracy and freedom when we toss out “woke” ideology.


Illustration of mental well-being.

Guardrails for mental health advice, support

Ryan McBain is an assistant professor at Harvard Medical School and a senior policy researcher at RAND.

As more people — including teens — turn to AI for mental health advice and emotional support, regulation should do two things: reduce harm and promote timely access to evidence-based resources. People will not stop asking chatbots sensitive questions. Policy should make those interactions safer and more useful, not attempt to vanquish them.

Some guardrails already exist.

Systems like ChatGPT and Claude often refuse “very high-risk” suicide prompts and route users to the 988 Suicide & Crisis Lifeline.

Yet many scenarios are nuanced. Framed as learning survival knots for a camping trip, a chatbot might describe how to tie a noose; framed as slimming for a wedding, it might suggest tactics for a crash diet.

Regulatory priorities should reflect the level of nuance of this new technology.

People will not stop asking chatbots sensitive questions. Policy should make those interactions safer and more useful.

First, require standardized, clinician-anchored benchmarks for suicide-related prompts — with public reporting. Benchmarks should include multi-turn (back-and-forth) dialogues that supply enough context to test the sorts of nuances described above, in which chatbots can be coaxed across a red line.

Second, strengthen crisis routing: with up-to-date 988 information, geolocated resources, and “support-plus-safety” templates that validate individuals’ emotions, encourage help-seeking, and avoid detailed means of harm information.

Third, enforce privacy. Prohibit advertising and profiling around mental-health interactions, minimize data retention, and require a “transient memory” mode for sensitive queries.

Fourth, tie claims to evidence. If a model is marketed for mental health support, it should meet a duty-of-care standard — through pre-deployment evaluation, post-deployment monitoring, independent audits, and alignment with risk-management frameworks.

Fifth, the administration should fund independent research through NIH and similar channels so safety tests keep pace with model updates.

We are still early enough in the AI era to set a high floor — benchmarks, privacy standards, and crisis routing — while promoting transparency through audits and reporting.

Regulators can also reward performance: for instance, by allowing systems that meet strict thresholds to offer more comprehensive mental-health functions such as clinical decision support.


Illustration of globe.

Embrace global collaboration

David Yang is an economics professor and director of the Center for History and Economics at Harvard, whose work draws lessons from China.

David Yang.
File photo by Niles Singer/Harvard Staff Photographer

Current policies on AI are heavily influenced by a narrative of geopolitical competition, often perceived as zero-sum or even negative-sum. It’s crucial to challenge this perspective and recognize the immense potential, and arguably necessity, for global collaboration in this technological domain.

The history of AI development, with its notably international leading teams, exemplifies such collaboration. For instance, framing AI as a dual-use technology can hinder coordination on global AI safety frameworks and dialogues.

My collaborators and I are researching how narratives around technology have evolved over decades, aiming to understand the dynamics and forces, particularly how competitive narratives emerge and influence policymaking.

Second, U.S. AI strategy has recently concentrated on maintaining American dominance in innovation and the global market.

However, AI products developed in one innovation hub may not be suitable for all global applications. In a recent paper with my colleague Josh Lerner at HBS and collaborators, we show that China’s emergence as a major innovation hub has spurred innovation and entrepreneurship in other emerging markets, offering solutions more appropriate to local conditions than those solely benchmarked against the U.S.

Therefore, striking a balance is crucial: preserving U.S. AI innovation and technological leadership while fostering local collaborations and entrepreneurship. This approach ensures AI technology, its applications, and the general direction of innovation are relevant to local contexts and reach a global audience.

Paradoxically, ceding more control could, in my view, consolidate technology and market power for U.S. AI innovators.


Illustration of scales.

Encourage accountability as well as innovation

Paulo Carvão is senior fellow at the Mossavar-Rahmani Center for Business and Government at the Harvard Kennedy School who researches AI regulation in the U.S.

Paulo Carvão.
Photo courtesy of Paulo Carvão

The Trump administration’s AI Action Plan marks a shift from cautious regulation to industrial acceleration. Framed as a rallying cry for American tech dominance, the plan bets on private-sector leadership to drive innovation, global adoption, and economic growth.

Previous technologies, such as internet platforms and social media, evolved without governance frameworks. Instead, policymakers from the 1990s through the 2010s made a deliberate decision to let the industry grow unregulated and protected against liability.

If we want the world to trust American-made AI, we must ensure it earns that trust, at home and abroad.

AI’s rapid adoption is taking place amid heightened awareness of the societal implications of the previous technology waves. However, the industry and its main investors advocate for implementing a similar playbook, one that is light on safeguards and rich in incentives.

What is most unusual about the recently announced strategy is what it is missing. It dismisses guardrails as barriers to innovation, placing trust in market forces and voluntary action.

That may attract investment, but it leaves critical questions unanswered: Who ensures fairness in algorithmic decision-making? How do we protect workers displaced by automation? What happens when infrastructure investment prioritizes computing power over community impact?

Still, the plan gets some things right. It recognizes AI as a full-stack challenge, from chips to models to standards, and takes seriously the need for U.S. infrastructure and workforce development. Its international strategy offers a compelling framework for global leadership.

Ultimately, innovation and accountability do not need to be trade-offs. They are a dual imperative.

Incentivize standards-based independent red-teaming, support a market for compliance and audits, and build capacity across the government to evaluate AI systems. If we want the world to trust American-made AI, we must ensure it earns that trust, at home and abroad.


Illustration of stethoscope.

Regulation that recognizes healthcare bottlenecks

Bernardo Bizzo is senior director of Mass General Brigham AI and assistant professor of radiology at Harvard Medical School

Bernardo Bizzo.
Veasey Conway/Harvard Staff Photographer

Clinical AI regulation has been mismatched to the problems clinicians face.

To fit existing device pathways, vendors narrow AI to single conditions and rigid workflows. That can reduce perceived risk and produce narrow measures of effectiveness, but it also suppresses impact and adoption. It does not address the real bottleneck in U.S. care: efficiency under rising volumes and workforce shortages.

Foundation models can draft radiology reports, summarize charts, and orchestrate routine steps in agentic workflows. FDA has taken steps for iterative software, yet there is still no widely used pathway specific to foundation model clinical copilots that continuously learn while generating documentation across many conditions.

Elements of a deregulatory posture could help if done carefully.

America’s AI Action Plan proposes an AI evaluations ecosystem and regulatory sandboxes that enable rapid but supervised testing in real settings, including healthcare. This aligns with the Healthcare AI Challenge, a collaborative community powered by MGB AI Arena that lets experts across the country evaluate AI at scale on multisite real-world data.

With FDA participation, this approach can generate the evidence agencies and payers need and the clinical utility assessments providers are asking for.

Some pre-market requirements may ultimately lighten, though nothing has been enacted. If that occurs, more responsibility will move to developers and deploying providers. That shift is feasible only if providers have practical tools and resources for local validation and monitoring, since most are already overwhelmed.

In parallel, developers are releasing frequent and more powerful models, and while some await a regulated, workable path for clinical copilots, many are pushing experimentation into pilots or clinical research workflows, often without appropriate guardrails.

Where I would welcome more regulation is after deployment.

Require local validation before go-live, continuous post-market monitoring such as the American College of Radiology’s Assess-AI registry, and routine reporting back to FDA so regulators can see effectiveness and safety in practice, rather than relying mainly on underused medical device reporting despite known challenges with generalizability.

Healthcare AI needs policies that expand trusted, affordable compute, adopt AI monitoring and registries, enable sector testbeds at scale, and reward demonstrated efficiency that can protect patients without slowing progress.

Total
0
Shares
Previous Post

I’m constantly on holiday with my daughter thanks to my reselling side hustle – I made £1,400 while you were watching TV

Next Post

Champions League winner Divock Origi ‘refuses to have contract ripped up’ despite playing entire season with U23s

Related Posts