Former Rep. Will Hurd (R-Texas) said in an op-ed Tuesday that he was “freaked out” by a briefing while previously serving on the board of ChatGPT-maker OpenAI and called for guardrails on the development of “artificial general intelligence (AGI).”
“At one point in my two years on the board of OpenAI, I experienced something that I had experienced only once in my over two decades of working in national security: I was freaked out by a briefing,” Hurd wrote in the op-ed in Politico Magazine.
The briefing was about the AI system now known as GPT-4, which Hurd suggested represented “the first step in the process” of reaching AGI, a still hypothetical form of AI that has human-like capabilities and can learn on its own.
“Indistinguishable from human cognition, AGI will enable solutions to complex global issues, from climate change to medical breakthroughs,” Hurd said. “If unchecked, AGI could also lead to consequences as impactful and irreversible as those of nuclear war.”
The former Texas representative, who stepped down from OpenAI’s board in June to run for president, pointed to the recent turmoil at the company over CEO Sam Altman’s high-profile ouster and return in calling for guardrails on the rapidly developing technology.
“As this technology becomes more science fact than science fiction, its governance cannot be left to the whims of a few people,” Hurd said. “Similar to the nuclear arms race, there are bad actors, including our adversaries, moving forward without ethical or human considerations.”
“This moment is not just about a company’s internal politics; it’s a call to action to ensure guardrails are put in place so that AGI is a force for good, rather than the harbinger of catastrophic consequences,” he added.
Hurd argued that AI should be held accountable to existing laws and that developers should compensate creators whose work is used to train AI systems.
He also called for a permitting process for powerful AI systems, in which developers would apply for a permit with the National Institute of Standards and Technology (NIST) before releasing their products.
“Just like a company needs a permit to build a nuclear power plant or a parking lot, powerful AI models should have to obtain a permit too,” Hurd said. “This will ensure that powerful AI systems are operating with safe, reliable and agreed-upon standards.”