In Engineering, an AI Regulation Scalpel is Better Than a Broad-Ban Sword


Jeff Albee, Vice President, Stantec

In July, the Senate soundly rejected a proposed 10-year moratorium on state-level artificial intelligence regulation attached to its sweeping budget bill, raising questions among citizens and lawmakers alike about the role of government in regulating new technologies.

Sweeping, top-down mandates like the proposed ban won’t work when it comes to AI. The speed and complexity of AI development demand a small-scale, agile approach: one that evaluates risk and regulation industry by industry.

The Verdict Is In

AI is poised to become the next major driver of economic systems, potentially upending decades of traditional labor practices. AIs of all shapes and sizes, especially large language models, have exploded into professional spaces as employers and employees flock to these systems and test their limits.

The race for AI dominance has divided elected leadership into two broad camps: those who want to establish immediate protections to mitigate some of AI’s potentially more harmful impacts, and those who believe a more pragmatic approach is necessary to ensure American success. 

While the 99-1 defeat of the measure is a clear indication that lawmakers don’t want a broad federal ban limiting states’ powers, more recent announcements from the White House indicate the federal government may refuse to cede the issue wholly to the states. American elected officials now have a question to answer: how big a role should the government play in regulating AI?

One thing is certain: AI is developing faster than any regulatory body can keep up. Any regulation that’s sweeping in nature — whether it aims to promote or limit AI — is likely to be obsolete by the time it’s fully enacted. Two things can also be true at once: AI development and implementation are key to building a competitive economic future for America, and regulation is an essential part of achieving a fully AI-unlocked future.

The issue with something like an outright ban is that it paints AI with too broad a brush, treating every use of AI as more or less equal. However, AI isn’t just one thing; it encompasses more than ChatGPT, Gemini, or Claude, and it’s used for more than crafting silly, surreal videos or drafting memos and essays. Increasingly, we’re seeing AI developed and deployed in riskier, higher-stakes environments. Financial systems, healthcare systems, and infrastructure engineering systems have all begun testing AI to find real answers to real problems. From loan decisions to health diagnoses and triage, the more AI interacts with real human issues, the greater the chance that an error will have devastating consequences.

Especially in industries like infrastructure engineering and science, which depend heavily on accuracy and precision, an outright ban on AI regulation could prove dangerous. Without clear regulations, engineering firms could easily pass off AI-generated designs as their original work. Developers and builders might increasingly rely on machine-produced models and calculations, trusting them as accurate, even without human oversight.

It’s a world in which bridges get less stable, buildings start cracking, and hydroelectric plants can’t contain their water — and it’s one that we’re living in today. Lives could be at real risk due to an over-reliance on AI. 

This is not to say that AI is bad or that those in higher-risk professions shouldn’t use it. Rather, what’s asked of AI in these cases is so specific, and the risk of failure so high, that governments — or some regulatory body — need a role in determining what’s appropriate and what’s not, just as they have a role today in governing these professions writ large.

Regulation, especially in these high-risk industries, levels the playing field, protecting both consumers and the industry itself. Regulation in AI is comparable to brakes on a race car. The point of a race car is not just to go fast but to be the fastest, and brakes help drivers do exactly that, providing control and safety even at top speed. AI needs regulatory ‘brakes’ in high-stakes industries to help companies define the bounds of AI’s use in a way that channels that speed and power toward productive development and use.

Big, sweeping regulations won’t get us there, and neither will big, sweeping bans. The key to AI regulation is to start small, start specific, and be agile. The biggest issue with the use of AI in engineering is that AI is fundamentally a “black box” system, with training data, inner workings, and reasoning that are not visible or fully understandable, even to AI researchers. In a rules-based science like engineering, this opacity poses significant risks, particularly in construction.

Rather than banning the use of AI in engineering altogether, regulators should start small, focusing on a granular assessment of risk and quality. Requiring humans to double-check every AI calculation would undermine AI’s utility, but regulators can and should define the critical components that require human verification and explanation. Can the layout of the beams on a structure be explained? Has a human checked the location and extent of an engineering model’s output for accuracy and safety? How large were the training datasets, and what was their quality?

These are industry-specific regulations, relevant only to engineering and infrastructure, but they’re vital to ensuring that the profession remains reputable and safe. It’s not about tamping down AI’s potential to transform the industry or cut costs; it’s about protecting what makes these industries so vital.

Regulation can provide clear boundaries that reduce uncertainty, allowing competitive and ethical innovation to thrive. With a federal ban no longer on the table, it’s time for lawmakers and regulatory agencies to take up the mantle. America’s AI success depends on it. 

Jeff Albee is the Vice President and Director of Digital Solutions at Stantec.


