Is it possible to get info about how much space is taken up by the changes in every commit, so I can find commits which added big files or a lot of files? This is all to try to reduce the git repo size (rebasing and maybe filtering commits).


You could do this:

git ls-tree -r -t -l --full-name HEAD | sort -n -k 4

This will show the largest files at the bottom (the fourth column is the file (blob) size).

If you need to look at different branches, you'll want to change HEAD to those branch names. Or put this in a loop over the branches, tags, or revs you are interested in.
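
For example, here is a minimal sketch that loops over all local branches (listed with git for-each-ref) and prints each branch's ten largest blobs; adjust the ref pattern or the tail count as needed:

# Show the ten largest checked-in files for every local branch.
for ref in $(git for-each-ref --format='%(refname:short)' refs/heads); do
  echo "== $ref =="
  git ls-tree -r -t -l --full-name "$ref" | sort -n -k 4 | tail -n 10
done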


Forgot to reply earlier; my answer is:

git rev-list --all --pretty=format:'%H%n%an%n%s'    # get all commits
git diff-tree -r -c -M -C --no-commit-id #{sha}     # get new blobs for each commit
git cat-file --batch-check                           # get the size of each blob (blob ids fed on stdin)
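
As a rough sketch, the last two steps can be chained for a single commit like this (the awk field index and the --batch-check format assume git diff-tree's raw output for an ordinary non-merge commit, so treat this as a starting point rather than a finished tool):

sha=HEAD                                             # any commit id you want to measure
git diff-tree -r -c -M -C --no-commit-id "$sha" \
  | awk '{print $4}' \
  | grep -v '^0\{40\}$' \
  | git cat-file --batch-check='%(objectsize) %(objectname)' \
  | sort -n                                          # largest new blobs last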

All of the solutions provided here focus on file sizes, but the original question asked about commit sizes, which in my opinion (and in my case) is more important to find: what I wanted was to get rid of many small binaries introduced in a single commit, which taken together accounted for a lot of size, even though each file was small on its own.

A solution that focuses on commit sizes is this Perl script:

#!/usr/bin/perl
use strict;
use warnings;

# For each commit in the repository, sum the sizes of the blobs it introduced
# and print: <total size in bytes> <commit sha> <number of files touched>.
foreach my $rev (`git rev-list --all --pretty=oneline`) {
  my $tot = 0;
  (my $sha = $rev) =~ s/\s.*$//;   # the first field of each oneline entry is the commit sha
  foreach my $blob (`git diff-tree -r -c -M -C --no-commit-id $sha`) {
    $blob = (split /\s/, $blob)[3];   # fourth raw field is the new blob id (non-merge entries)
    next if $blob eq "0000000000000000000000000000000000000000"; # deleted file, no new blob
    my $size = `echo $blob | git cat-file --batch-check`;
    $size = (split /\s/, $size)[2];   # batch-check prints: <sha> <type> <size>
    $tot += int($size);
  }
  my $revn = substr($rev, 0, 40);
#  if ($tot > 1000000) {   # uncomment to report only commits above ~1 MB
    print "$tot $revn " . `git show --pretty="format:" --name-only $revn | wc -l`;
#  }
}

Each output line is the total size in bytes of the blobs the commit introduced, then the commit hash, then the number of files it touched. I call it like this:

./git-commit-sizes.pl | sort -n -k 1
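
To look at only the heaviest commits, you can pipe the sorted output through tail, for example:

./git-commit-sizes.pl | sort -n -k 1 | tail -n 20    # the 20 largest commits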