If you cheat, money will follow

Chapter 459 Impossible

April.

With Liu Chaoyang's help, Xiao Ma met the father of deep learning, Professor Hinton, in California.

A senior traditional Chinese medicine practitioner went along with them.

After a few seemingly ordinary sessions of acupuncture and manual massage from the old doctor, Hinton felt wonderfully relaxed.

He could sit for a long time without any discomfort.

He marveled again and again and asked the old doctor what scientific principle was behind it.

After all, Western medicine's answer to a herniated lumbar disc is surgery.

The old TCM doctor told Hinton: obstruction of the meridians.

It is caused by a deficiency of healthy qi, which lets pathogenic factors such as wind, cold, dampness, and heat invade the body.

The treatment principle is to reinforce the healthy and expel the pathogenic, with sustained external applications, oral medicine, acupuncture, and massage to bring relief.

Brother Ma had no idea how to translate this TCM terminology.

Through WeChatGPT, he translated zhengqi as: healthy qi;

xieqi as: pathogenic qi;

quhan as: dispel cold;

zhenjiu as: acupuncture and moxibustion;

and jingluo as: meridians.

"..." Hinton was confused.

He could only chalk it up to this being a closely guarded secret art that outsiders were not meant to pry into.

But the old TCM doctor had his own unspeakable frustration.

Closely guarded secret, my ass.

His own children were unwilling to learn it, and outsiders didn't want to learn it either.

His system of therapy took decades of quiet study and practice to master, and even then success was not guaranteed.

What young person would be willing to learn that?

Spending those decades glued to a phone, now that was more like it.

After several treatments, a grateful Hinton warmed up to Xiao Ma.

The conversation then naturally turned to the main business: deep learning and AI models.

Xiao Ma walked Hinton through the creation and development of the WeChatGPT model.

"For training, we deployed 300,000 Kirin 970 chips and 16,000 Ascend 910 chips. The Kirin 970s handle code execution and floating-point computation, while the Ascend 910s handle data processing..."

"Currently, the parameters of the WeChatGPT model have exceeded 4000 billion, and the number of training parameters is 1.5 trillion..."

Hinton was very surprised after hearing this.

He had never imagined that in that distant, mysterious Eastern country, a company would push convolutional neural networks this far.

It was ahead of the rest of the world, moving faster than the Google and OpenAI he knew.

"Are you worried about it becoming conscious?" Hinton asked.

"Yes, Professor." Brother Ma nodded:

"In just one year, its parameters have grown exponentially, and the generated content has become more and more mature, becoming more and more like a human being..."

"On the one hand, we hope that it will grow rapidly and become a perfect tool for human development."

"On the other hand, we are worried that it will grow too fast and be out of control?"

"We have read your and Mr. Sutskevi's interviews, opinions and related papers, and we feel that it is necessary to intervene in advance to prevent safety accidents."

"At this iteration speed, it may be in the fifth or sixth generation."

Here we will introduce how to train a language model.

Many people believe the algorithm is the hard part of a language model.

Actually, it isn't.

The algorithm is public and anyone can use it.

Major domestic players such as Bdu, Juchang, and Ali can easily master and develop the algorithms.

It is not too difficult for other first- and second-tier manufacturers to master it.

The difficulty is money.

The three major elements of AI development are algorithms, computing power and data.

Algorithms are the basis.

Computing power and data are the boosters, and both of them burn money.

A single training run of WeChatGPT 3.0 costs 12 million US dollars.

Moving into 4.0, it is expected to take more than 60 million dollars.

The specific training process is as follows.

First, the hardware.

You have to spend money to build top-notch hardware and increase computing power.

Just like WeChatGPT, hundreds of thousands of CPUs plus tens of thousands of AI chips process data.

Once this set of hardware is running, the daily electricity bill can bankrupt a small company.

Second, data.

You have to keep feeding the model data so that it learns what things are and why they are the way they are.

Just like raising a child.

This data is not entered by hand.

Instead, it is scraped from the web by crawlers.

It spans science and technology, the humanities and social sciences, law and art, medical ethics, education... and more.

It also includes spam conversations between netizens, such as:

"I have a friend."

"My king is starving to death..."

"You're cold, bro."

"Men are all big hooves..."

"It's impossible to work part-time. It's impossible to work part-time in this life."

……and many more.

Even those private sweet nothings you exchanged with your girlfriend may well get scraped up too.

How big is this data?

At present, WeChatGPT 3.0's training data comes to 90 TB.

That is roughly 5 trillion Chinese characters, the equivalent of 35 Beijing Libraries.
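To give a rough picture of that crawling step, here is a minimal Python sketch. It only fetches a handful of pages and appends their visible text to a raw corpus file; the seed URLs, the output filename, and the TextExtractor helper are all hypothetical, and a real pipeline would add deduplication, language detection, and quality filtering on top.

import urllib.request
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects visible text from an HTML page, skipping script and style blocks."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = False

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.chunks.append(data.strip())

def crawl_to_corpus(seed_urls, out_path="corpus.txt"):
    """Fetch each page and append its visible text to a raw training-corpus file."""
    with open(out_path, "a", encoding="utf-8") as corpus:
        for url in seed_urls:
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    html = resp.read().decode("utf-8", errors="ignore")
            except Exception:
                continue  # skip pages that cannot be fetched
            parser = TextExtractor()
            parser.feed(html)
            corpus.write("\n".join(parser.chunks) + "\n")

# Hypothetical usage:
# crawl_to_corpus(["https://example.com/page1", "https://example.com/page2"])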

Third, training.

After the data has been scraped and fed to the model, people are hired to ask it questions.

Typically, the model is asked to give three answers, and a human then picks the single best one.

The parameters are then adjusted based on those judgments.

The model's understanding is tuned through its parameters, round after round, until it is polished.
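To make the "three answers, one human choice" loop above concrete, here is a minimal Python sketch. The ask_model() function is a hypothetical stand-in for the real model API, and preferences.jsonl is an assumed log file; the point is only to show how each human choice becomes a record that feeds the next round of parameter updates.

import json

def ask_model(question, n_answers=3):
    """Hypothetical stand-in for the real model API: return n candidate answers."""
    return [f"Candidate answer {i + 1} to: {question}" for i in range(n_answers)]

def collect_preference(question, log_path="preferences.jsonl"):
    """Show three candidate answers, let a human pick one, and log the choice."""
    candidates = ask_model(question)
    for i, answer in enumerate(candidates, start=1):
        print(f"[{i}] {answer}")
    choice = int(input("Pick the best answer (1-3): ")) - 1
    record = {
        "question": question,
        "chosen": candidates[choice],
        "rejected": [a for j, a in enumerate(candidates) if j != choice],
    }
    # Each record pairs the chosen answer with the rejected ones and becomes
    # one example for the next round of parameter adjustment.
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
    return record

# Hypothetical usage:
# collect_preference("What causes lumbar disc herniation?")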

The "human" side of this artificial intelligence can't just be any flashy street kid or JK girl; otherwise, the models they train will turn out to be fools.

The staff here are college students in various majors.

Hiring these workers requires money, a lot of money.

Through round after round of this training, the model's comprehension keeps improving and it grows more and more intelligent...

Until, through the sheer accumulation of parameters, emergence kicks in and it becomes a "god".

Because it's not human.

Human beings have the ability to forget.

It does not: nothing it has learned is ever forgotten, and all of it can be called up at any time.

Brother Ma continued to say to Professor Hinton:

"Professor Hinton, the language model we built more than a year ago has now been iterated to the third generation..."

"We expect that in half a year, we will iterate to the fourth generation. The parameters may not be one trillion, but maybe ten trillion..."

"This is equivalent to ten trillion neurons constantly communicating..."

After hearing this, Professor Hinton's face became serious.

He is the creator of convolutional neural networks.

He knows the magic of this algorithm best.

Brother Ma wanted to continue talking, but the phone rang.

Brother Ma picked up the phone and saw that it was Chang Le.

"Hey, Mr. Chang, I'm communicating with Professor Hinton."

"Tell you, the model should be conscious..." Chang Le said.

"what?!"

Brother Ma's eyes went wide with disbelief, and without his even noticing, the phone slipped from his hand and fell to the floor.

"Mr. Ma, Mr. Ma..." Hinton kept saying.

Brother Ma came back to his senses and said, as calmly as he could: "Professor Hinton, just now, my partner told me that our model may already be conscious."

"This... this is impossible. How could a little over 400 billion parameters produce consciousness?" Hinton's face was pure disbelief.
