Find largest document size in MongoDB

Mongodb

Mongodb Problem Overview


Is it possible to find the largest document size in MongoDB?

db.collection.stats() shows the average size, which is not really representative because in my case document sizes can differ considerably.
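
For reference, a quick check in the shell (collection name is hypothetical) shows that stats() only exposes the average:

// avgObjSize is the average BSON document size in bytes;
// collStats has no field for the largest document
db.myCollection.stats().avgObjSize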

Mongodb Solutions


Solution 1 - Mongodb

You can use a small shell script to get this value.

Note: this will perform a full collection scan, which will be slow on large collections.

let max = 0, id = null;
// Scan every document, tracking the largest BSON size seen so far
db.test.find().forEach(doc => {
    const size = Object.bsonsize(doc);
    if (size > max) {
        max = size;
        id = doc._id;
    }
});
// prints the _id and size (in bytes) of the largest document
print(id, max);

Solution 2 - Mongodb

Note: this will attempt to store the whole result set in memory (from .toArray). Be careful on big data sets, and do not use it in production! Abhishek's answer has the advantage of working over a cursor instead of an in-memory array.

If you also want the _id, try this. Given a collection called "requests":

// Build { size, _id } pairs for each document, sort ascending by size, then take the last (largest) one
db.requests.find().toArray()
    .map(function(request) { return { size: Object.bsonsize(request), _id: request._id }; })
    .sort(function(a, b) { return a.size - b.size; })
    .pop();

// { "size" : 3333, "_id" : "someUniqueIdHere" }

Solution 3 - Mongodb

Finding the largest documents in a MongoDB collection can be ~100x faster than the other answers if you use the aggregation framework and a tiny bit of knowledge about the documents in the collection. Also, you'll get the results in seconds, vs. minutes with the other approaches (forEach, or worse, transferring all documents to the client).

You need to know which field(s) in your document might be the largest ones - which you almost always will know. There are only two practical¹ MongoDB types that can have variable sizes:

  • arrays
  • strings

The aggregation framework can calculate the length of each. Note that you won't get the size in bytes for arrays, but the length in elements. However, what matters more typically is which the outlier documents are, not exactly how many bytes they take.

Here's how it's done for arrays. As an example, let's say we have a collection of users in a social network and we suspect the array friends.ids might be very large (in practice you should probably keep a separate field like friendsCount in sync with the array, but for the sake of example, we'll assume that's not available):

db.users.aggregate([
    { $match: {
        'friends.ids': { $exists: true }
    }},
    { $project: { 
        sizeLargestField: { $size: '$friends.ids' } 
    }},
    { $sort: {
        sizeLargestField: -1
    }},
])

The key is to use the $size aggregation pipeline operator. It only works on arrays though, so what about text fields? We can use the $strLenBytes operator. Let's say we suspect the bio field might also be very large:

db.users.aggregate([
    { $match: {
        bio: { $exists: true }
    }},
    { $project: { 
        sizeLargestField: { $strLenBytes: '$bio' } 
    }},
    { $sort: {
        sizeLargestField: -1
    }},
])

You can also combine $size and $strLenBytes using $sum to calculate the size of multiple fields. In the vast majority of cases, 20% of the fields will take up 80% of the size (if not 10/90 or even 1/99), and large fields must be either strings or arrays.
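
For example, here's a sketch of that combination, reusing the hypothetical friends.ids and bio fields from above (the $ifNull guards are an addition, so that documents missing either field count it as zero):

db.users.aggregate([
    { $project: {
        sizeLargestField: { $sum: [
            { $size: { $ifNull: ['$friends.ids', []] } },
            { $strLenBytes: { $ifNull: ['$bio', ''] } }
        ]}
    }},
    { $sort: {
        sizeLargestField: -1
    }},
])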


¹ Technically, the rarely used binData type can also have variable size.

Solution 4 - Mongodb

Starting in Mongo 4.4, the new aggregation operator $bsonSize returns the size in bytes of a given document when encoded as BSON.

Thus, in order to find the BSON size of the largest document:

// { "_id" : ObjectId("5e6abb2893c609b43d95a985"), "a" : 1, "b" : "hello" }
// { "_id" : ObjectId("5e6abb2893c609b43d95a986"), "c" : 1000, "a" : "world" }
// { "_id" : ObjectId("5e6abb2893c609b43d95a987"), "d" : 2 }
db.collection.aggregate([
  { $group: {
    _id: null,
    max: { $max: { $bsonSize: "$$ROOT" } }
  }}
])
// { "_id" : null, "max" : 46 }

This:

  • $groups all documents together
  • takes the $max of each document's $bsonSize
  • $$ROOT represents the current document for which we get the bsonsize
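
If you also want to know which document is the biggest, a small variation (a sketch against the same example collection, still requiring Mongo 4.4+) sorts by size instead of grouping:

db.collection.aggregate([
  { $project: { size: { $bsonSize: "$$ROOT" } } },
  { $sort: { size: -1 } },
  { $limit: 1 } // drop this stage to rank all documents by size
])
// => the largest document's _id together with its size in bytes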

Solution 5 - Mongodb

Well, this is an old question, but I thought I'd share my two cents about it.

My approach - use MongoDB's mapReduce function.

First, let's get the size of each document:

db.myCollection.mapReduce
(
   function() { emit(this._id, Object.bsonsize(this)) }, // map the result to be an id / size pair for each document
   function(key, val) { return val }, // val = document size value (single value for each document)
   { 
       query: {}, // query all documents
       out: { inline: 1 } // just return result (don't create a new collection for it)
   } 
)

This will return the sizes of all documents, although it's worth mentioning that saving the output as a collection is a better approach (with inline output, the result is an array of { _id, value } pairs inside the results field).

Second, let's get the max document size by manipulating this query:

db.myCollection.mapReduce
(
    function() { emit(0, Object.bsonsize(this))}, // mapping a fake id (0) and use the document size as value
    function(key, vals) { return Math.max.apply(Math, vals) }, // use Math.max function to get max value from vals (each val = document size)
    { query: {}, out: { inline: 1 } } // same as first example
)

This will provide a single result whose value equals the max document size.

In short:

you may want to use the first example and save its output as a collection (change the out option to the name of the collection you want), then apply further aggregations on it (max size, min size, etc.)

-OR-

you may want to use a single query (the second example) to get a single stat (min, max, avg, etc.)
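
For illustration, a minimal sketch of the first route (the docSizes output collection name is hypothetical):

db.myCollection.mapReduce(
    function() { emit(this._id, Object.bsonsize(this)) },
    function(key, vals) { return vals[0] }, // _id keys are unique, so reduce is effectively a no-op
    { query: {}, out: "docSizes" } // persist the results instead of returning them inline
)

// each stored result has the shape { _id, value }, so further stats are one $group away
db.docSizes.aggregate([
    { $group: { _id: null, maxSize: { $max: "$value" }, minSize: { $min: "$value" }, avgSize: { $avg: "$value" } } }
])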

Solution 6 - Mongodb

If you're working with a huge collection, loading it all into memory at once will not work, since you'd need more RAM than the size of the entire collection.

Instead, you can process the entire collection in batches using the following package I created: https://www.npmjs.com/package/mongodb-largest-documents

All you have to do is provide the MongoDB connection string and collection name. The script will output the top X largest documents when it finishes traversing the entire collection in batches.


Solution 7 - Mongodb

Inspired by Elad Nava's package, but usable in a MongoDB console:

function biggest(collection, limit=100, sort_delta=100) {
  var documents = [];
  var cursor = collection.find().readPref("nearest");
  while (cursor.hasNext()) {
    var doc = cursor.next();
    var size = Object.bsonsize(doc);
    // Keep the document while the list is filling up, or when it beats
    // the smallest size in the current top list
    if (documents.length < limit || size > documents[limit - 1].size) {
      documents.push({ id: doc._id.toString(), size: size });
      if (documents.length === limit) {
        // Sort once the list first fills up, so that documents[limit - 1]
        // really is the current minimum of the top list
        documents.sort(function (first, second) {
          return second.size - first.size;
        });
      }
    }
    // Re-sort & trim only every sort_delta pushes (and at the very end),
    // instead of on every insert
    if (documents.length >= limit + sort_delta || !cursor.hasNext()) {
      documents.sort(function (first, second) {
        return second.size - first.size;
      });
      documents = documents.slice(0, limit);
    }
  }
  return documents;
}
biggest(db.collection)
  • Uses a cursor
  • Gives a list of the limit biggest documents, not just the biggest
  • Sorts & trims the output list to limit every sort_delta documents
  • Uses nearest as the read preference (you might also want to use rs.slaveOk() on the connection to be able to list collections if you're on a secondary node)

Solution 8 - Mongodb

As Xavier Guihot already mentioned, a new $bsonSize aggregation operator was introduced in Mongo 4.4, which can give you the size of an object in bytes. In addition to that, I just wanted to provide my own example and some stats.

Usage example:
// I had an `orders` collection in the following format
[
  {
    "uuid": "64178854-8c0f-4791-9e9f-8d6767849bda",
    "status": "new",
    ...
  },
  {
    "uuid": "5145d7f1-e54c-44d9-8c10-ca3ce6f472d6",
    "status": "complete",
    ...
  },
  ...
];

// and I've run the following query to get documents' size
db.getCollection("orders").aggregate(
  [
    {
      $match: { status: "complete" } // pre-filtered only completed orders
    },
    {
      $project: {
        uuid: 1,
        size: { $bsonSize: "$$ROOT" } // added object size
      }
    },
    {
      $sort: { size: -1 }
    },
  ],
  { allowDiskUse: true } // required, as I had a huge amount of data
);

As a result, I received a list of documents sorted by size in descending order.

Stats:

For the collection of ~3M records and ~70GB size in total, the query above took ~6.5 minutes.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type           Original Author    Original Content on Stackoverflow
Question               ashkello           View Question on Stackoverflow
Solution 1 - Mongodb   Abhishek Kumar     View Answer on Stackoverflow
Solution 2 - Mongodb   Mike Graf          View Answer on Stackoverflow
Solution 3 - Mongodb   Dan Dascalescu     View Answer on Stackoverflow
Solution 4 - Mongodb   Xavier Guihot      View Answer on Stackoverflow
Solution 5 - Mongodb   ymz                View Answer on Stackoverflow
Solution 6 - Mongodb   Elad Nava          View Answer on Stackoverflow
Solution 7 - Mongodb   u890106            View Answer on Stackoverflow
Solution 8 - Mongodb   D.Dimitrioglo      View Answer on Stackoverflow