MongoDB aggregation $match vs find speed

Mongodb, Aggregation Framework

Mongodb Problem Overview


I have a MongoDB collection with millions of documents and I'm trying to optimize my queries. I'm currently using the aggregation framework to retrieve data and group it the way I want. My typical aggregation query is something like: $match > $group > $group > $project
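For reference, a pipeline of that shape might look roughly like the sketch below; the collection name (events) and the field names used for grouping are hypothetical, not taken from the question:

    db.events.aggregate([
        { "$match": { "action": "click" } },                          // filter first, so indexes can be used
        { "$group": { "_id": { "item": "$itemId",
                               "day": { "$dayOfYear": "$timestamp" } },
                      "clicks": { "$sum": 1 } } },                    // first grouping: clicks per item per day
        { "$group": { "_id": "$_id.item",
                      "avgClicksPerDay": { "$avg": "$clicks" } } },   // second grouping: average per item
        { "$project": { "_id": 0, "itemId": "$_id", "avgClicksPerDay": 1 } }
    ])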

However, I noticed that the later stages only take a few ms; the beginning of the pipeline is the slowest part.

I tried to perform a query with only the $match filter, and then the same query with collection.find. The aggregation query takes ~80 ms while the find query takes 0 or 1 ms.

I have indexes on pretty much every field, so I guess that isn't the problem. Any idea what could be going wrong? Or is it just a "normal" drawback of the aggregation framework?

I could use find queries instead of aggregation queries, but then I would have to do a lot of processing after the request, and that processing can be done quickly with $group etc., so I would rather keep the aggregation framework.

Thanks,

EDIT :

Here are my criteria:

{
    "action" : "click",
    "timestamp" : {
            "$gt" : ISODate("2015-01-01T00:00:00Z"),
            "$lt" : ISODate("2015-02-011T00:00:00Z")
    },
    "itemId" : "5"
}
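For comparison, the two queries being timed would look roughly like this (the collection name events is assumed, since it is not given in the question):

    var criteria = {
        "action": "click",
        "timestamp": { "$gt": ISODate("2015-01-01T00:00:00Z"),
                       "$lt": ISODate("2015-02-11T00:00:00Z") },
        "itemId": "5"
    };

    db.events.find(criteria);                         // ~0-1 ms in the question
    db.events.aggregate([{ "$match": criteria }]);    // ~80 ms in the question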

Mongodb Solutions


Solution 1 - Mongodb

The main purpose of the aggregation framework is to make it easy to query a large number of documents and produce a small number of results that hold value to you.

As you have said, you could also use multiple find queries, but remember that you cannot create new fields with find queries. The $group stage, on the other hand, lets you define new fields.
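For instance, a $group stage can compute fields that a plain find cannot return; the collection and field names below are only illustrative:

    db.events.aggregate([
        { "$match": { "action": "click" } },
        { "$group": { "_id": "$itemId",
                      "totalClicks": { "$sum": 1 },             // new field computed by the pipeline
                      "lastSeen": { "$max": "$timestamp" } } }  // another computed field
    ])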

If you wanted to achieve the functionality of the aggregation framework with find, you would most likely have to run an initial find (or chain several of them), pull that data out, and manipulate it further in a programming language.

The aggregation pipeline might seem to take longer, but at least you know you only have to take the performance of one system into account: the MongoDB engine.

With a find query, by contrast, you would most likely have to manipulate the returned data in a programming language yourself, which adds complexity depending on the intricacies of the language you choose.

Solution 2 - Mongodb

Have you tried using explain() on your find queries? It will give you a good idea of how long the find() query actually takes and which index it uses. You can do the same for the aggregation's $match stage by running the aggregation with the explain option, and then compare index access and the other parameters.
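Concretely, something like the following; the events collection name is assumed, and the criteria is a shortened version of the filter from the question:

    var criteria = { "action": "click", "itemId": "5" };

    // explain plan for the plain find query
    db.events.find(criteria).explain()

    // explain plan for the aggregation's $match stage
    db.events.aggregate([ { "$match": criteria } ], { "explain": true })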

Also, the $group part of the aggregation framework doesn't use indexes, so it has to process all the documents returned by the $match stage. To better understand how your query behaves, look at the result set that $match returns and whether it fits into memory for MongoDB to process.
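One way to check that is sketched below; the collection name is assumed, and allowDiskUse is a standard aggregate option for pipelines whose stages exceed the in-memory limit:

    var criteria = { "action": "click", "itemId": "5" };   // filter from the question (dates omitted for brevity)

    // how many documents does the $match stage feed into $group?
    db.events.find(criteria).count()

    // if a later stage exceeds the in-memory limit, allow it to spill to disk
    db.events.aggregate(
        [ { "$match": criteria },
          { "$group": { "_id": "$itemId", "clicks": { "$sum": 1 } } } ],
        { "allowDiskUse": true }
    )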

Solution 3 - Mongodb

If you are concerned with performance, there is no doubt that aggregation is a more time-consuming task than a find clause. When you are fetching records on multiple conditions, with lookups, grouping, and a limited (paginated) result set, aggregation is the best approach. Meanwhile, a find query is fast when you have to fetch a very large data set with only population and projection and no pagination; in that case I suggest using a find query.
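As a rough illustration of the two styles (the collection and field names below are made up for the example):

    // aggregation: filter, group, and paginate in one round trip
    db.orders.aggregate([
        { "$match": { "status": "shipped" } },
        { "$group": { "_id": "$customerId", "total": { "$sum": "$amount" } } },
        { "$skip": 20 },
        { "$limit": 10 }
    ])

    // find: fast for large, flat result sets with a simple projection
    db.orders.find({ "status": "shipped" }, { "customerId": 1, "amount": 1 })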

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | Antek | View Question on Stackoverflow
Solution 1 - Mongodb | vladzam | View Answer on Stackoverflow
Solution 2 - Mongodb | harshad | View Answer on Stackoverflow
Solution 3 - Mongodb | Ghazanfar Ali | View Answer on Stackoverflow