If you cheat, money will follow

Chapter 462 Resignation and Siege

A week later.

The Hinton trio returned to North America.

Hinton resigned from Google.

Google had no objections.

Google had fully mastered the essence of the convolutional neural network algorithm.

It knew that computing power and data were the more important keys.

Moreover, Hinton was seventy years old, and his energy was not what it used to be.

He had already been in a semi-retired state at Google, his role more that of a mentor.

And mentoring doesn’t require a $600 million-plus annual salary.

Sutskever's departure caused a stir.

There was plenty of criticism throughout OpenAI.

Many company employees complained privately.

They believed that the company's first-generation model had only just been built and was still in need of data feeding, learning, and training.

Sutskever's departure at this moment was nothing short of betrayal.

The company's CEO, Altman, had a special talk with him.

"Is the resignation related to Musk's comments?" Altman asked.

Recently Musk, having lost the power struggle, had publicly criticized OpenAI for deviating from its original mission.

"OpenAI was founded as an open-source, non-profit organization designed to serve as a counterweight to Google..."

"But now Altman is getting cozy with Microsoft, and OpenAI may become a closed-source, for-profit company controlled by Microsoft."

At the beginning, the precondition for Sutskever to leave Google and join OpenAI had been:

That OpenAI remain an open-source, non-profit organization.

"You should know that after Musk pulled his funding, the company fell into very difficult straits. The project was close to being suspended. Outside money had to come in, or the company would go bankrupt." Altman explained:

"Microsoft is now very proactive. They will provide US$10 billion for the company's development, and they are not seeking control or dominance of the company."

"All they asked for was a small for-profit unit under the company's umbrella, and that was it."

"I believe you can understand that to achieve the full maturity of AGI, we need money, a lot of money, and nothing can be done without money."

After Altman finished speaking, Sutskever shook his head and said, "Mr. Altman, my resignation has nothing to do with Musk's remarks."

"I admit that Musk approached me and wanted me to join xAI, but I refused."

"Then why are you leaving?" Altman was puzzled:

"Once we close the deal with Microsoft, a large amount of money will flow into the company, and you will have better hardware to continue your research."

"All of us think you are a genius, the person best suited to lead language model development."

"No, there are plenty of people in the company who can lead language model development. I am not that important." Sutskever shook his head:

"Language models ultimately come down to computing power and data. You should understand that."

"Are you planning to return to Google? Admittedly, they can offer salaries the company simply cannot match..." Altman guessed:

"But Google won't buy into your concept of 'super-alignment.'"

"They will only want the language model to keep growing until it matures, then push it to market and seize market share..."

"They believe 'super-alignment' will hinder the development of language models."

"I think you are the same," Sutskever said bluntly.

Altman was stunned for a moment, quickly gathered his words, and then explained: "No, we are different."

"At OpenAI you are the chief scientist, but at Google you would be nothing. That is the essential difference."

"You would get nothing but a higher salary."

"Mr. Altman, I am not going to Google. I am going to Goose Factory, an Internet company in China," Sutskever said.

"Goose Factory?" Altman searched his memory and placed the name.

So it was them.

"Then I have no problem with it. I wish you good luck." Altman raised his eyebrows, completely relieved.

He knew that Goose Factory was a Chinese Internet company.

They had a huge user base, many profitable games, and a solid e-commerce platform...

Soon, the news that Sutskever had left OpenAI to join Goose Factory spread through the AI circle and Silicon Valley.

Many people expressed incomprehension and doubt about his choice.

"Isn't he supposed to be an effective altruist?"

"What's in Goose Factory? Money, a lot of money. There, Mr. Sutskever can get a salary that even Google can't offer."

"Goose Factory wants to make its in-game conversations more intelligent, attract more users to open their wallets, and then make more money."

"Sutskever has finally become a prisoner of money. He was simply dissatisfied with his OpenAI salary."

"Now I understand his 'super-alignment': the salary he received at OpenAI was not aligned with his psychological expectations."

All along, Silicon Valley, the AI circle, and even people inside OpenAI had dismissed and sneered at the so-called "super-alignment."

Because it would seriously hinder AI's development and maturity, and therefore the money-making.

What?

Wait until the "super-alignment" strategy takes effect before putting it to use?

No. What they wanted was not safety, but instant wealth.

Ideally, get rich overnight and achieve financial freedom.

Only sudden wealth and financial freedom counted as safety.

Everything else is bullshit.

Faced with the doubts, ridicule, and sarcasm, Sutskever made a public statement:

"Goose Factory invited me because it agrees with our view on 'super-alignment.'"

"They believe that AGI must be strictly regulated and cannot harm humans."

Naturally, this statement drew another round of ridicule.

This time, though, the mockery was more professional.

The Futurism website wrote: "This is a lame excuse. Although Mr. Sutskever has made outstanding contributions to AI over the past decade..."

"...he should admit that this technology is far below human intelligence, let alone capable of consciously observing the world."

A professor of computer science at Stanford University was more direct: "AI will never have consciousness and intelligence like humans."

"The operation of AI is based on algorithms, and their functions are fundamentally different from human emotions and thinking."

The Nature website interviewed New York University neuroscientist Dürer specifically on the issue of consciousness.

Dürer insisted: "Consciousness can only exist in living organisms. Even if an AI system imitates the mechanisms of biological consciousness, it will not be conscious."

Australia's The Conversation also joined in: "Can AI systems really think and understand? Impossible."

A paper in the journal Neuroscience argued: "Although convolutional neural network algorithms give AI systems complex response capabilities..."

"...these systems lack the specific experiences and neural mechanisms that humans have."

"Neurons are real physical entities that can grow and change shape, while the parameters in an AI system are just meaningless pieces of code."

"We cannot simply equate some of the complex functions of AI with human consciousness."

A paper on arXiv stated: "Google's latest language model fails the Turing test."

In addition, Silicon Valley investors dug up papers from the Annual Conference on Consciousness Science held in New York at the beginning of the year:

"The mechanism by which neurons in the human brain produce consciousness has not been discovered!"

The implication: since no one even knows why human neurons produce consciousness, how can anyone be sure an AI system could produce it?
