Have you ever worked on a large project where git clone is painfully slow, or even fails outright? What do people usually do about it?
You might switch to a mirror, or try to speed up your network connection. But what if you have tried all of that and cloning is still slow?
I ran into this problem today. I needed to download the TypeScript source from GitHub, but the clone was very slow:
git clone https://github.com/microsoft/TypeScript ts
After waiting a long time the download still had not finished, so I added a parameter:
git clone https://github.com/microsoft/TypeScript --depth=1 ts
The clone became dozens of times faster and finished almost instantly.
Adding --depth=1 downloads only the latest commit, so there is far less data to transfer and the clone is much faster.
A shallow clone like this can still take new commits and new branches, so it does not block subsequent development. The only limitation is that you cannot switch to historical commits or historical branches.
I tested this with one of my own projects. I first downloaded a single commit, then made some changes and ran git add, git commit, and git push; the push went through normally. Creating a new branch and pushing it also worked. The only drawback is that you cannot check out historical commits or historical branches.
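The experiment above can be sketched end to end with a throwaway local repository (all names and paths below are invented for the demo; a file:// URL is used because git ignores --depth for plain local-path clones):

```shell
set -e
work=$(mktemp -d)
cd "$work"

# Build a source repo with three commits to stand in for the real project
git init -q source
cd source
git config user.email demo@example.com
git config user.name Demo
for i in 1 2 3; do
  echo "change $i" > file.txt
  git add file.txt
  git commit -qm "commit $i"
done
cd ..

# Shallow clone: only the newest commit comes across
git clone -q --depth=1 "file://$work/source" shallow
cd shallow
git rev-list --count HEAD    # prints 1

# New work on top of the shallow clone still succeeds
git checkout -qb feature
echo "new work" > file.txt
git commit -qam "commit 4"
git rev-list --count HEAD    # prints 2
```

The new commit and branch exist only locally until pushed, exactly as with a full clone.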
The trick is also useful when you do need a historical branch: work out how many commits back you have to reach, then pass that number as the depth. You still skip the rest of the history, so the clone is still much faster.
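As a minimal sketch of that idea, the demo below (repo and file names invented) shows that --depth=3 leaves exactly the last three of five commits reachable:

```shell
set -e
work=$(mktemp -d)
cd "$work"

# A source repo with five commits
git init -q source
cd source
git config user.email demo@example.com
git config user.name Demo
for i in 1 2 3 4 5; do
  echo "$i" > f.txt
  git add f.txt
  git commit -qm "commit $i"
done
cd ..

# Ask for only the last three commits (file:// so --depth is honored)
git clone -q --depth=3 "file://$work/source" partial
cd partial
git rev-list --count HEAD    # prints 3
```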
Have you ever thought about why this works?
Git stores everything as a few kinds of objects: blobs (file contents), trees (directory listings), and commits (a snapshot plus metadata).
With one commit as the entry point, all the trees and blobs it references make up the contents of that commit.
Commits point to their parents, and HEAD, branches, tags, and so on are simply pointers to specific commits; you can see them under .git/refs. Concepts such as branches and tags are therefore implemented on top of commits.
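You can see this for yourself: a branch ref under .git/refs is just a small text file holding one commit hash. A throwaway repo (names invented for the demo) makes it visible:

```shell
set -e
work=$(mktemp -d)
cd "$work"
git init -q demo
cd demo
git config user.email demo@example.com
git config user.name Demo
git commit -q --allow-empty -m "first commit"

# The current branch ref is a plain file containing a commit hash
branch=$(git symbolic-ref --short HEAD)
cat ".git/refs/heads/$branch"
git rev-parse HEAD    # same hash as the file above
```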
Git implements version management and branch switching with these three object types. All the objects live under .git/objects.
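A quick way to poke at the three object types is git cat-file (again a throwaway repo; the file name is invented):

```shell
set -e
work=$(mktemp -d)
cd "$work"
git init -q demo
cd demo
git config user.email demo@example.com
git config user.name Demo
echo "hello" > hello.txt
git add hello.txt
git commit -qm "first commit"

# One commit, which points at one tree, which points at one blob
git cat-file -t HEAD              # commit
git cat-file -t 'HEAD^{tree}'     # tree
git cat-file -t HEAD:hello.txt    # blob

# All of them are stored under .git/objects
find .git/objects -type f
```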
That is the core of how git works: the three objects blob, tree, and commit, plus refs such as HEAD, tags, branches, and remotes.
Since git reaches every object by starting from some commit as the entry point, it follows that if we do not need the history, we only have to download one commit.
New commits are then created on top of that commit, with new blobs and trees attached to them. The historical commits, trees, and blobs were never downloaded, so you cannot switch back to them, nor to the tags and branches that point at them. That is why a single downloaded commit still supports new branches and new commits.
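A minimal sketch of the limitation: in a --depth=1 clone the parent commit object was never downloaded, so git cannot even resolve HEAD~1 (local throwaway repo, file:// so --depth is honored):

```shell
set -e
work=$(mktemp -d)
cd "$work"
git init -q source
cd source
git config user.email demo@example.com
git config user.name Demo
for i in 1 2; do
  echo "$i" > f.txt
  git add f.txt
  git commit -qm "commit $i"
done
cd ..

git clone -q --depth=1 "file://$work/source" shallow
cd shallow

# The parent of HEAD is not in .git/objects, so this lookup fails
if git rev-parse -q --verify HEAD~1 > /dev/null; then
  echo "parent found"
else
  echo "parent missing: history was not downloaded"
fi
```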
When you hit a large git project, adding the --depth parameter can speed up the clone enormously; the more history the project has, the bigger the win.
The shallow clone remains fully usable for development: you can create new commits, branches, and tags, but you cannot switch to historical commits, branches, or tags.
To recap the internals: files and commit information are stored as the three object types tree, blob, and commit, while branches and tags are pointers into the chain of commits. A commit is the entry point, referencing all of its trees and blobs.
Cloning with --depth=1 therefore downloads one commit plus everything it references (its trees and blobs, and the refs that point at it) and nothing older. That is the principle behind --depth.
I hope this trick helps you speed up git clone for large projects whenever you do not need to switch to historical commits or branches.