An OpenAI employee explains the reasons for his resignation: “I don’t want to work on the Titanic of AI”

The former employee believes OpenAI is headed for the same fate as the famous ship

OpenAI has a clear policy of moving forward at all costs

It is clear that GPT-4o is one of the most futuristic models we have had the opportunity to test. OpenAI's new AI has proven remarkably capable, and when its improved application finally arrives, it will be an undisputed ally for many people. However, AI has also raised eyebrows over the safety problems it can bring, and many OpenAI workers have resigned precisely over this issue. Recently, one of them spoke on a podcast and drew a pointed historical parallel with the company.

The Titanic of AI

More and more OpenAI workers are leaving the company, convinced that safety is being sidelined in favor of releasing increasingly powerful versions. On Alex Kantrowitz's podcast, a former employee explained why he resigned and pointed out problems at the company that, in his view, should be solved before development continues unchecked:

“I really didn’t want to end up working for the AI Titanic, and that’s why I quit. During my three years at OpenAI, I sometimes asked myself a question: was the path OpenAI was following more like the Apollo program or the Titanic?”

According to William Saunders, the former OpenAI employee, the similarity between OpenAI and the White Star Line is evident: both wanted to ship new products quickly, without weighing the safety issues before launch. As he put it:

“(The Titanic) was called unsinkable, but at the same time there were not enough lifeboats for everyone and that’s why when the disaster happened, many people died.”

For him, this is exactly what OpenAI is doing: moving forward at all costs, launching ever more powerful and impressive models while ignoring the possibility that this development could cause serious problems for today's technology and, by extension, for the humans who use it to become more productive.

In fact, he does not deny that new technologies carry inherent risks, but he insists that damage control must at least be in place, rather than leaping into the abyss for the sake of being the first company to achieve the much-discussed Artificial General Intelligence (AGI). Moreover, OpenAI has not maintained a clear safety policy: Altman disbanded the team dedicated to safety, and its leader eventually resigned to run a new company focused on AI risks.

OpenAI's position, then, looks anything but consistent. Altman has always insisted that safety would be central to the company, but his latest moves suggest his interests are heading in the opposite direction.
