OpenAI has raised tens of billions of dollars to develop AI technologies that are changing the world.

But there's one glaring problem: it's still struggling to understand how its tech actually works.

During last week's International Telecommunication Union AI for Good Global Summit in Geneva, Switzerland, OpenAI CEO Sam Altman was stumped when asked how his company's large language models (LLMs) really function under the hood.

"We certainly have not solved interpretability," he said, as quoted by the Observer, essentially saying the company has yet to figure out how to trace back their AI models' often bizarre and inaccurate output and the decisions it made to come to those answers.

When pushed during the event by The Atlantic CEO Nicholas Thompson, who asked if that shouldn't be an "argument to not keep releasing new, more powerful models," Altman was seemingly baffled, countering with a half-hearted reassurance that the AIs are "generally considered safe and robust."
