Why is my git repository so big?


Git Problem Overview


145M = .git/objects/pack/

I wrote a script to add up the sizes of the diffs between each commit and its parent, walking backwards from the tip of each branch. I get 129MB, and that is without compression and without accounting for identical files across branches or for history common to several branches.

Git takes all of those things into account, so I would expect a much, much smaller repository. So why is .git so big?

I've done:

git fsck --full
git gc --prune=today --aggressive
git repack

To answer the question about how many files/commits: I have 19 branches with about 40 files in each, and 287 commits, found using:

git log --oneline --all|wc -l

It should not be taking tens of megabytes to store information about this.
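To see where the space actually goes, two commands give the headline numbers (shown here against a throwaway repository so the snippet is self-contained; run the last two commands in your own checkout instead):

```shell
# Demo setup: a throwaway repository standing in for the real one.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email you@example.com && git config user.name you
git commit -q --allow-empty -m "initial"

du -sh .git/objects    # total size of the object store
git count-objects -v   # loose vs. packed object counts and sizes
```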

Git Solutions


Solution 1 - Git

Some scripts I use:

git-fatfiles

git rev-list --all --objects | \
    sed -n $(git rev-list --objects --all | \
    cut -f1 -d' ' | \
    git cat-file --batch-check | \
    grep blob | \
    sort -n -k 3 | \
    tail -n40 | \
    while read hash type size; do 
         echo -n "-e s/$hash/$size/p ";
    done) | \
    sort -n -k1

...
89076 images/screenshots/properties.png
103472 images/screenshots/signals.png
9434202 video/parasite-intro.avi

If you want more lines, see also Perl version in a neighbouring answer: https://stackoverflow.com/a/45366030/266720
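On a reasonably recent Git, a rough one-pass variant of the same idea avoids running git rev-list twice; the %(rest) placeholder carries each blob's path through cat-file. This is a sketch, not a drop-in replacement, and the scratch-repo setup lines exist only to make the snippet self-contained:

```shell
# Demo setup so the snippet runs anywhere; point the pipeline at your own repo.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email you@example.com && git config user.name you
echo "some content" > big.txt && git add big.txt && git commit -q -m "add file"

# One pass: list every blob with its size and path, largest 40 last.
git rev-list --objects --all |
  git cat-file --batch-check='%(objecttype) %(objectsize) %(rest)' |
  sed -n 's/^blob //p' |
  sort -n |
  tail -n 40
```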

git-eradicate (for video/parasite.avi):

git filter-branch -f  --index-filter \
    'git rm --force --cached --ignore-unmatch video/parasite-intro.avi' \
     -- --all
rm -Rf .git/refs/original && \
    git reflog expire --expire=now --all && \
    git gc --aggressive && \
    git prune

Note: the second script is designed to remove info from Git completely (including all info from reflogs). Use with caution.
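As a sanity check, the whole eradicate sequence can be rehearsed against a disposable repository first (the file name big.bin is a placeholder; note that recent Git deprecates filter-branch in favour of git-filter-repo, though the commands below still run):

```shell
# Disposable repo with one deliberately large committed-then-deleted file.
export FILTER_BRANCH_SQUELCH_WARNING=1   # skip filter-branch's warning pause
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email you@example.com && git config user.name you
dd if=/dev/urandom of=big.bin bs=1024 count=512 2>/dev/null
git add big.bin && git commit -q -m "add big file"
git rm -q big.bin && git commit -q -m "remove big file"

# Rewrite history without the file, then drop every leftover reference.
git filter-branch -f --index-filter \
    'git rm --force --cached --ignore-unmatch big.bin' \
    -- --all
rm -rf .git/refs/original
git reflog expire --expire=now --all
git gc --aggressive --prune=now --quiet
git count-objects -v   # size-pack is back down to a few KiB
```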

Solution 2 - Git

I recently pulled the wrong remote repository into the local one (git remote add ... and git remote update). After deleting the unwanted remote ref, branches and tags I still had 1.4GB (!) of wasted space in my repository. I was only able to get rid of this by cloning it with git clone file:///path/to/repository. Note that the file:// makes a world of difference when cloning a local repository - only the referenced objects are copied across, not the whole directory structure.

Edit: Here's Ian's one liner for recreating all branches in the new repo:

d1=   # path to the original repo (use an absolute path)
d2=   # path to the new repo (must already exist; absolute path)
cd "$d1"
for b in $(git branch | cut -c 3-)
do
    git checkout "$b"
    x=$(git rev-parse HEAD)
    cd "$d2"
    git checkout -b "$b" "$x"
    cd "$d1"
done
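Alternatively, the checkout loop can be avoided entirely by pushing every branch ref from the old repository into the new one in a single command (a sketch; the demo below uses a bare target for simplicity, unlike the non-bare setup above):

```shell
# Demo: a scratch source repo with two branches and a fresh bare target.
d1=$(mktemp -d)/old && d2=$(mktemp -d)/new
git init -q "$d1" && git init -q --bare "$d2"
cd "$d1"
git config user.email you@example.com && git config user.name you
git commit -q --allow-empty -m "initial"
git branch topic

# One push recreates every branch in the new repository.
git push -q "$d2" 'refs/heads/*:refs/heads/*'
git --git-dir="$d2" branch
```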

Solution 3 - Git

git gc already does a git repack so there is no sense in manually repacking unless you are going to be passing some special options to it.

The first step is to see whether the majority of space is (as would normally be the case) your object database.

git count-objects -v

This should give a report of how many unpacked objects there are in your repository, how much space they take up, how many pack files you have and how much space they take up.

Ideally, after a repack, you would have no unpacked objects and a single pack file, but it's perfectly normal for some objects that aren't directly referenced by current branches to still be present and unpacked.

If you have a single large pack and you want to know what is taking up the space then you can list the objects which make up the pack along with how they are stored.

git verify-pack -v .git/objects/pack/pack-*.idx

Note that verify-pack takes an index file, not the pack file itself. This gives a report of every object in the pack, its true size and its packed size, as well as whether it has been 'deltified' and, if so, the origin of its delta chain.

To see if there are any unusually large objects in your repository, you can sort the output numerically on the third or fourth column (e.g. | sort -k3n).

From this output you will be able to see the contents of any object using the git show command, although it is not possible to see exactly where in the commit history of the repository the object is referenced. If you need to do this, try something from this question.
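The listing and the path lookup can be chained together (a sketch run against a scratch repository so it is self-contained; in a real repo you would drop the setup lines and keep only the pipeline):

```shell
# Demo setup: a scratch repo with a packed object store.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email you@example.com && git config user.name you
echo "hello packfile" > file.txt && git add file.txt && git commit -q -m "add"
git gc --quiet

# The three largest packed objects, each mapped back to a path where one exists.
git verify-pack -v .git/objects/pack/pack-*.idx |
  sort -k 3 -n | tail -3 | cut -d' ' -f1 |
  while read sha; do
    git rev-list --objects --all | grep "$sha" || true
  done
```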

Solution 4 - Git

Just FYI, the biggest reason why you may end up with unwanted objects being kept around is that git maintains a reflog.

The reflog is there to save your butt when you accidentally delete your master branch or somehow otherwise catastrophically damage your repository.

The easiest way to fix this is to truncate your reflogs before compressing (just make sure that you never want to go back to any of the commits in the reflog).

git gc --prune=now --aggressive
git repack

This is different from git gc --prune=today in that it expires the entire reflog immediately.
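Truncating the reflog explicitly is one extra command before the gc (a self-contained sketch; the amend below simply manufactures a commit that only the reflog still references):

```shell
# Scratch repo; an amended commit leaves the old commit reachable only via reflog.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email you@example.com && git config user.name you
git commit -q --allow-empty -m "first"
git commit -q --amend --allow-empty -m "first (amended)"

# Truncate every reflog, then prune: the pre-amend commit is gone for good.
git reflog expire --expire=now --all
git gc --prune=now --aggressive --quiet
git fsck --unreachable   # prints nothing once the reflog no longer pins it
```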

Solution 5 - Git

If you want to find what files are taking up space in your git repository, run

git verify-pack -v .git/objects/pack/*.idx | sort -k 3 -n | tail -5

Then, extract the blob reference that takes up the most space (the last line), and check the filename that is taking so much space

git rev-list --objects --all | grep <reference>

This might even be a file that you removed with git rm, but git remembers it because there are still references to it, such as tags, remotes and reflog.

Once you know what file you want to get rid of, I recommend using git forget-blob

https://ownyourbits.com/2017/01/18/completely-remove-a-file-from-a-git-repository-with-git-forget-blob/

It is easy to use, just do

git forget-blob file-to-forget

This will remove every reference from git, remove the blob from every commit in history, and run garbage collection to free up the space.

Solution 6 - Git

The git-fatfiles script from Vi's answer is lovely if you want to see the size of all your blobs, but it's so slow as to be unusable. I removed the 40-line output limit, and it tried to use all my computer's RAM instead of finishing. Plus it would give inaccurate results when summing the output to see all space used by a file.

I rewrote it in rust, which I find to be less error-prone than other languages. I also added the feature of summing up the space used by all commits in various directories if the --directories flag is passed. Paths can be given to limit the search to certain files or directories.

src/main.rs:

use std::{
    collections::HashMap,
    io::{self, BufRead, BufReader, Write},
    path::{Path, PathBuf},
    process::{Command, Stdio},
    thread,
};

use bytesize::ByteSize;
use structopt::StructOpt;

#[derive(Debug, StructOpt)]
#[structopt()]
pub struct Opt {
    #[structopt(
        short,
        long,
        help("Show the size of directories based on files committed in them.")
    )]
    pub directories: bool,

    #[structopt(help("Optional: only show the size info about certain paths."))]
    pub paths: Vec<String>,
}

/// The paths list is a filter. If empty, there is no filtering.
/// Returns a map of object ID -> filename.
fn get_revs_for_paths(paths: Vec<String>) -> HashMap<String, PathBuf> {
    let mut process = Command::new("git");
    let mut process = process.arg("rev-list").arg("--all").arg("--objects");

    if !paths.is_empty() {
        process = process.arg("--").args(paths);
    };

    let output = process
        .output()
        .expect("Failed to execute command git rev-list.");

    let mut id_map = HashMap::new();
    for line in io::Cursor::new(output.stdout).lines() {
        if let Some((k, v)) = line
            .expect("Failed to get line from git command output.")
            .split_once(' ')
        {
            id_map.insert(k.to_owned(), PathBuf::from(v));
        }
    }
    id_map
}

/// Returns a map of object ID to size.
fn get_sizes_of_objects(ids: Vec<&String>) -> HashMap<String, u64> {
    let mut process = Command::new("git")
        .arg("cat-file")
        .arg("--batch-check=%(objectname) %(objecttype) %(objectsize:disk)")
        .stdin(Stdio::piped())
        .stdout(Stdio::piped())
        .spawn()
        .expect("Failed to execute command git cat-file.");
    let mut stdin = process.stdin.expect("Could not open child stdin.");

    let ids: Vec<String> = ids.into_iter().cloned().collect(); // copy data for thread

    // Stdin will block when the output buffer gets full, so it needs to be written
    // in a thread:
    let write_thread = thread::spawn(|| {
        for obj_id in ids {
            writeln!(stdin, "{}", obj_id).expect("Could not write to child stdin");
        }
        drop(stdin);
    });

    let output = process
        .stdout
        .take()
        .expect("Could not get output of command git cat-file.");

    let mut id_map = HashMap::new();
    for line in BufReader::new(output).lines() {
        let line = line.expect("Failed to get line from git command output.");

        let line_split: Vec<&str> = line.split(' ').collect();

        // skip non-blob objects
        if let [id, "blob", size] = &line_split[..] {
            id_map.insert(
                id.to_string(),
                size.parse::<u64>().expect("Could not convert size to int."),
            );
        };
    }
    write_thread.join().unwrap();
    id_map
}

fn main() {
    let opt = Opt::from_args();

    let revs = get_revs_for_paths(opt.paths);
    let sizes = get_sizes_of_objects(revs.keys().collect());

    // This skips directories (they have no size mapping).
    // Filename -> size mapping tuples. Files are present in the list more than once.
    let file_sizes: Vec<(&Path, u64)> = sizes
        .iter()
        .map(|(id, size)| (revs[id].as_path(), *size))
        .collect();

    // (Filename, size) tuples.
    let mut file_size_sums: HashMap<&Path, u64> = HashMap::new();
    for (mut path, size) in file_sizes.into_iter() {
        if opt.directories {
            // For file path "foo/bar", add these bytes to path "foo/"
            let parent = path.parent();
            path = match parent {
                Some(parent) => parent,
                _ => {
                    eprint!("File has no parent directory: {}", path.display());
                    continue;
                }
            };
        }

        *(file_size_sums.entry(path).or_default()) += size;
    }
    let sizes: Vec<(&Path, u64)> = file_size_sums.into_iter().collect();

    print_sizes(sizes);
}

fn print_sizes(mut sizes: Vec<(&Path, u64)>) {
    sizes.sort_by_key(|(_path, size)| *size);
    for file_size in sizes.iter() {
        // The size needs some padding--a long size is as long as a tabstop
        println!("{:10}{}", ByteSize(file_size.1), file_size.0.display())
    }
}

Cargo.toml:

[package]
name = "git-fatfiles"
version = "0.1.0"
edition = "2018"
[dependencies]
structopt = { version = "0.3"}
bytesize = {version = "1"}

Options:

USAGE:
    git-fatfiles [FLAGS] [paths]...

FLAGS:
    -d, --directories    Show the size of directories based on files committed in them.
    -h, --help           Prints help information

ARGS:
    <paths>...    Optional: only show the size info about certain paths.

Solution 7 - Git

Are you sure you are counting just the .pack files and not the .idx files? They are in the same directory as the .pack files, but do not hold any of the repository data (as the extension indicates, they are nothing more than indexes for the corresponding pack). In fact, if you know the correct command, you can easily recreate them from the pack file, and git itself does so when cloning, since only a pack file is transferred over the native git protocol.
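The command alluded to here is git index-pack; regenerating an .idx from its .pack looks like this (a sketch against a scratch repository so it runs anywhere):

```shell
# Demo: scratch repo with a pack; rebuild the .idx from the .pack alone.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email you@example.com && git config user.name you
echo x > f.txt && git add f.txt && git commit -q -m "initial"
git gc --quiet                       # creates .git/objects/pack/pack-*.pack

pack=$(ls .git/objects/pack/*.pack)
rm "${pack%.pack}.idx"               # discard the index
git index-pack "$pack"               # recreate it from the pack file alone
```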

As a representative sample, I took a look at my local clone of the linux-2.6 repository:

$ du -c *.pack
505888  total

$ du -c *.idx
34300   total

This suggests that an overhead of around 7% for the index files is common.

There are also files outside objects/; in my personal experience, index and gitk.cache tend to be the biggest of them (totaling 11M in my clone of the linux-2.6 repository).

Solution 8 - Git

Other git objects stored in .git include trees, commits, and tags. Commits and tags are small, but trees can get big particularly if you have a very large number of small files in your repository. How many files and how many commits do you have?
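Both counts are cheap to obtain (a sketch; the scratch-repo setup lines only make it self-contained):

```shell
# Scratch repo: two files, two commits.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email you@example.com && git config user.name you
echo a > a.txt && git add a.txt && git commit -q -m "one"
echo b > b.txt && git add b.txt && git commit -q -m "two"

git ls-files | wc -l          # tracked files on the current branch
git rev-list --all --count    # commits across all refs
```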

Solution 9 - Git

Did you try using git repack?

Solution 10 - Git

Before doing git filter-branch and git gc, you should review the tags that are present in your repo. Any real system with automatic tagging for things like continuous integration and deployments will leave unwanted objects still referenced by those tags, so gc can't remove them, and you will keep wondering why the repo is still so big.

The best way to get rid of all the unwanted stuff is to run git filter-branch and git gc, then push master to a new bare repo. The new bare repo will have the cleaned-up tree.
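Auditing and dropping such tags is straightforward (a sketch; the tag name is a placeholder, and the scratch repo only makes the snippet self-contained):

```shell
# Demo: scratch repo with a leftover CI tag pinning old objects.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email you@example.com && git config user.name you
git commit -q --allow-empty -m "initial"
git tag ci-build-42                  # placeholder for an auto-created tag

git for-each-ref refs/tags           # audit every tag before running gc
git tag -d ci-build-42               # drop the unwanted tag locally
```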

Solution 11 - Git

This can happen if you accidentally added and staged a big batch of files, without necessarily committing them. It happens in a Rails app when you run bundle install --deployment and then accidentally git add .: you see all the files added under vendor/bundle and unstage them, but they have already got into the git history. To fix it, apply Vi's answer, replacing video/parasite-intro.avi with vendor/bundle, then run the second command he provides.

You can see the difference with git count-objects -v: in my case, before applying the script the size-pack was 52K, and afterwards it was 3.8K.
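For the Rails case specifically, untracking the directory (the files stay on disk) and ignoring it prevents a repeat (a sketch using the usual Bundler path; the setup lines only make it self-contained):

```shell
# Demo: scratch repo where vendor/bundle was accidentally staged.
tmp=$(mktemp -d) && cd "$tmp" && git init -q
git config user.email you@example.com && git config user.name you
mkdir -p vendor/bundle && echo gem > vendor/bundle/gem.rb
git add .

# Untrack the directory (files stay on disk) and ignore it from now on.
git rm -r -q -f --cached vendor/bundle
echo 'vendor/bundle/' >> .gitignore
git add .gitignore
git status --short
```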

Solution 12 - Git

It is worth checking the stacktrace.log. It is basically an error log for tracing failed commits. I recently found that my stacktrace.log was 65.5GB while my app was 66.7GB.

Solution 13 - Git

I've created a new implementation of the perl script that was originally provided in this answer (which has since been rewritten in rust). After much investigation of that perl script, I realized that it had multiple bugs:

  • Errors with paths with spaces
  • --sum didn't work correctly (it wasn't actually adding up all the deltas)
  • --directory didn't work correctly (it relies on --sum)
  • Without --sum it would report a size of an effectively-random object for the given path, which might not have been the largest one

So I ended up rewriting the script entirely. It uses the same sequence of git commands (git rev-list and git cat-file) but then it processes the data correctly to give accurate results. I preserved the --sum and --directories features.

I also changed it to report the "disk" size (i.e. the compressed size in the git repo) of the files, rather than the original file sizes. That seems more relevant to the problem at hand. (This could be made optional, if someone wants the uncompressed sizes for some reason.)

I also added an option to only report on files that have been deleted, on the assumption that files still in use are probably less interesting. (The way I did that was a bit of a hack; suggestions welcome.)

The latest script is here. I can also copy it here if that's good StackOverflow etiquette? (It's ~180 lines long.)

Solution 14 - Git

Create a new branch in which the current commit becomes the initial commit and all history is gone, to reduce the number of git objects and the size of the history.

Note: Please read the comment before running the code.

  1. git checkout --orphan latest_branch
  2. git add -A
  3. git commit -a -m "Initial commit message" # commit the changes
  4. git branch -D master # delete the master branch
  5. git branch -m master # rename the current branch to master
  6. git push -f origin master # force-push to the remote master branch
  7. git gc --aggressive --prune=all # remove the old objects

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Question: Ian Kelling
Solution 1 - Git: Vi.
Solution 2 - Git: pgs
Solution 3 - Git: CB Bailey
Solution 4 - Git: John Gietzen
Solution 5 - Git: nachoparker
Solution 6 - Git: piojo
Solution 7 - Git: CesarB
Solution 8 - Git: Greg Hewgill
Solution 9 - Git: baudtack
Solution 10 - Git: v_abhi_v
Solution 11 - Git: juliangonzalez
Solution 12 - Git: Nes
Solution 13 - Git: Nathan Arthur
Solution 14 - Git: cyperpunk