Master Karpathy is no longer satisfied with just building Llama in C!

His latest self-imposed challenge: reproduce OpenAI's classic results, starting with the base version of GPT-2.

That the challenge succeeded is not surprising in itself. What is surprising is that training took only US$20 and 90 minutes, and the loss and eval scores even surpassed the original version. That's! just! a! bit! too! much!

Not only that, he also wrote up a complete tutorial of the reproduction process, and, as expected, it went viral again.
Because Karpathy rented A100s from a cloud service, training the 124M version cost US$20.

Someone then followed the tutorial and ran it on H100s instead: not only was training faster, it was also cheaper, finishing in 43 minutes for just US$14.
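As a quick back-of-envelope check using only the figures reported above (a sketch; real cloud pricing varies by provider and region), the H100 node costs more per hour but finishes so much faster that the total bill still comes out lower:

```python
# Implied hourly rate of each cloud node, derived from the article's numbers.
a100_cost, a100_hours = 20.0, 90 / 60   # $20 for a 90-minute run
h100_cost, h100_hours = 14.0, 43 / 60   # $14 for a 43-minute run

print(f"A100 node: ~${a100_cost / a100_hours:.1f}/hour")  # ~$13.3/hour
print(f"H100 node: ~${h100_cost / h100_hours:.1f}/hour")  # ~$19.5/hour
```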
In addition, Karpathy spent US$200 out of his own pocket to reproduce the 350M version of GPT-2 for everyone.

But the 1.5B large-cup version would, by his estimate, take a week and US$2,500, which is a bit more than he can afford, mainly because he does not have H100s on hand.
Fortunately, the deep-pocketed folks out there are generous and stepped up when it was time to step up:
I’ll give it to you anytime you need it!
I'll charge you only $2 an hour!
This time, Karpathy's GPT-2 reproduction is again based on his llm.c code base, with training done completely end to end.

He has kept improving the code base over the past few days, and kicking off a training run is now very simple:
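For reference, here is a rough sketch of the steps driven from Python. The data-prep script and Make target follow the llm.c repository as I understand it, but flag names and the exact train_gpt2cu arguments are assumptions that change between versions, so follow the linked tutorial (discussion #481) for the real invocation.

```python
# A minimal sketch of kicking off an llm.c-style GPT-2 run from Python.
# Paths, targets, and flags are assumptions -- see llm.c discussion #481.
import subprocess

# 1) Download and tokenize the FineWeb 10B-token sample into .bin shards
#    (dev/data/fineweb.py is the data-prep script in the llm.c repo).
subprocess.run(["python", "dev/data/fineweb.py", "--version", "10B"], check=True)

# 2) Build the CUDA trainer binary.
subprocess.run(["make", "train_gpt2cu"], check=True)

# 3) Launch training across 8 local GPUs via MPI; append the flags from the
#    tutorial (data paths, batch size, learning rate, ...) to this command.
subprocess.run(["mpirun", "-np", "8", "./train_gpt2cu"], check=True)
```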
Specifically, the network architecture is GPT-2, but many of the hyperparameter settings follow GPT-3's recipe.
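To make that concrete, here is an illustrative config sketch. The values come from the published GPT-2 and GPT-3 papers (GPT-3 "Small" column), not from Karpathy's exact run, so treat them as assumptions and consult the tutorial for the real settings.

```python
# Illustrative config: GPT-2 (124M) architecture with GPT-3 "Small"-style
# optimization hyperparameters, as reported in the respective papers.
from dataclasses import dataclass

@dataclass
class GPT2SmallConfig:
    # Architecture (GPT-2 124M)
    n_layer: int = 12
    n_head: int = 12
    n_embd: int = 768
    block_size: int = 1024          # context length
    vocab_size: int = 50257

    # Optimization (GPT-3 "Small" recipe)
    learning_rate: float = 6e-4     # max LR, cosine-decayed to 10% of max
    batch_tokens: int = 524_288     # ~0.5M tokens per optimizer step
    weight_decay: float = 0.1
    adam_betas: tuple = (0.9, 0.95)
    grad_clip: float = 1.0

print(GPT2SmallConfig())
```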
Karpathy's analysis: by the standards of the Chinchilla scaling laws, GPT-2's original training on 100B tokens would count as over-training, with diminishing returns past a certain point; by that calculation, 2.5B tokens is enough for the 124M model.
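The Chinchilla rule of thumb is roughly 20 training tokens per model parameter, which is where the 2.5B figure comes from:

```python
# Chinchilla rule of thumb: compute-optimal training uses ~20 tokens per parameter.
params = 124e6                              # GPT-2 small
optimal_tokens = 20 * params                # ~2.48e9, i.e. ~2.5B tokens
overtrain_factor = 100e9 / optimal_tokens   # GPT-2's original 100B-token budget
print(f"~{optimal_tokens / 1e9:.2f}B tokens is compute-optimal; "
      f"100B is ~{overtrain_factor:.0f}x that")
```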
Still, he trained on 10B tokens himself, and used the newly released FineWeb as training data, whose token quality is higher than OpenAI's original WebText dataset.
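FineWeb is publicly available on Hugging Face, so anyone can inspect the data; below is a hedged sketch of peeking at the 10B-token sample (the dataset name and config are my assumption based on the public release, not something stated in the article):

```python
# Peek at a few FineWeb documents with the Hugging Face `datasets` library.
# "HuggingFaceFW/fineweb" / "sample-10BT" is assumed to be the public
# 10B-token sample; adjust the name if the release differs.
from itertools import islice
from datasets import load_dataset

ds = load_dataset("HuggingFaceFW/fineweb", name="sample-10BT",
                  split="train", streaming=True)
for doc in islice(ds, 3):
    print(doc["text"][:200].replace("\n", " "), "...")
```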
The original WebText was never made public, so a controlled comparison under identical conditions is impossible; on top of that, the distribution of internet data today may be quite different from five years ago.

He speculates that these differences may be why the evaluation scores come out higher than the original.

Some netizens also noticed that GPU utilization during training is higher than in OpenAI's original work, but Karpathy said this is mainly because he trained on a single cloud node and therefore did not have to deal with inter-server communication.

Finally, the 350M version of GPT-2 he trained also achieved results surpassing the original.
A round of applause~
Since resigning from OpenAI in February this year, Karpathy has turned out a string of large-model results in C, working his way from Llama to GPT.

Looking at his GitHub heat map, he only took a short break at the beginning, and his activity has ramped up more and more since April.
Is this the rhythm of quitting your job and grinding 997 (9 a.m. to 9 p.m., seven days a week) at home?
In fact, Karpathy has also traveled during this period and shared the games he has been playing, so it is not quite that grueling.
According to the weekly schedule he posted: a 975 routine while employed, and anywhere from 4 to 20 hours of work after resigning, depending on his mood.
Seeing this, everyone is curious: does a regular schedule feel better, or does randomness work wonders?
Karpathy himself is not sure, but a chaotic schedule is definitely more interesting.
Finally, he also shared a tip from his freelance life:
Start working right after getting up, without reading any news, and only go online after lunch, so that outside information doesn't become a distraction.

Friends who are in a position to do so can give it a try.
Tutorial: https://github.com/karpathy/llm.c/discussions/481.
Reference link:
[1]https://x.com/karpathy/status/1795484547267834137.
[2]https://www.threads.net/@karpathy.