Count vs len on a Django QuerySet

Python, Django, Performance

Python Problem Overview


In Django, given that I have a QuerySet that I am going to iterate over and print the results of, what is the best option for counting the objects? len(qs) or qs.count()?

(Also given that counting the objects in the same iteration is not an option.)

Python Solutions


Solution 1 - Python

Although the Django docs recommend using count rather than len:

>Note: Don't use len() on QuerySets if all you want to do is determine the number of records in the set. It's much more efficient to handle a count at the database level, using SQL's SELECT COUNT(*), and Django provides a count() method for precisely this reason.

Since you are iterating this QuerySet anyway, the result will be cached (unless you are using iterator()), so it is preferable to use len(): this avoids hitting the database again, and also avoids the possibility of retrieving a different number of results!
If you are using iterator(), then I would suggest including a counting variable as you iterate (rather than using count()), for the same reasons.
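
A minimal sketch of both patterns, assuming a hypothetical Entry model with a published field (any model you are iterating over would do):

    qs = Entry.objects.filter(published=True)

    # Iterating anyway: len() evaluates the queryset once and reuses the cache.
    total = len(qs)
    for entry in qs:  # no second query - the results are already cached
        print(entry)

    # With iterator() nothing is cached, so keep a running counter instead.
    total = 0
    for entry in Entry.objects.filter(published=True).iterator():
        total += 1
        print(entry)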

Solution 2 - Python

Choosing between len() and count() depends on the situation, and it is worth understanding in depth how they work in order to use them correctly.

Let me provide you with a few scenarios:

  1. (most crucial) When you only want to know the number of elements and you do not plan to process them in any way, use count():

DO: queryset.count() - this will perform a single SELECT COUNT(*) FROM some_table query; all computation is carried out on the RDBMS side, and Python just needs to retrieve the result number at a fixed cost of O(1)

DON'T: len(queryset) - this will perform a SELECT * FROM some_table query, fetching the whole table in O(N) time and requiring an additional O(N) of memory to store it. This is the worst thing that can be done

  2. When you intend to fetch the queryset anyway, it is slightly better to use len(), which won't cause an extra database query the way count() would

len() (one db query)

    len(queryset) # SELECT * fetching all the data - NO extra cost - data would be fetched anyway in the for loop

    for obj in queryset: # data is already fetched by len() - using cache
        pass

count() (two db queries!):

    queryset.count() # First db query SELECT COUNT(*)

    for obj in queryset: # Second db query (fetching data) SELECT *
        pass

3. The reverse of the 2nd case (when the queryset has already been fetched):

    for obj in queryset: # iteration fetches the data
        len(queryset) # using already cached data - O(1) no extra cost
        queryset.count() # using cache - O(1) no extra db query

    len(queryset) # the same O(1)
    queryset.count() # the same: no query, O(1)

Everything becomes clear once you take a glance under the hood:

class QuerySet(object):
   
    def __init__(self, model=None, query=None, using=None, hints=None):
        # (...)
        self._result_cache = None
 
    def __len__(self):
        self._fetch_all()
        return len(self._result_cache)
 
    def _fetch_all(self):
        if self._result_cache is None:
            self._result_cache = list(self.iterator())
        if self._prefetch_related_lookups and not self._prefetch_done:
            self._prefetch_related_objects()
 
    def count(self):
        if self._result_cache is not None:
            return len(self._result_cache)
 
        return self.query.get_count(using=self.db)
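
To see that caching in action, here is a rough sketch (the Entry model is hypothetical, and _result_cache is the private attribute from the source above, inspected purely for illustration):

    qs = Entry.objects.all()
    print(qs._result_cache is None)  # True - nothing has been fetched yet

    list(qs)                         # evaluates the queryset: one SELECT * query
    print(qs._result_cache is None)  # False - the results are now cached

    len(qs)      # len() of the cached list, no query
    qs.count()   # also served from the cache, no COUNT(*) query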

Good references in the Django docs: the QuerySet API reference for count() and the section "When QuerySets are evaluated".

Solution 3 - Python

I think using len(qs) makes more sense here, as you need to iterate over the results. qs.count() is a better option if all you want to do is print the count and not iterate over the results.

len(qs) will hit the database with SELECT * FROM table, whereas qs.count() will hit the db with SELECT COUNT(*) FROM table.

Also, qs.count() returns an integer, so you cannot iterate over it.
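
If you want to verify which SQL each call actually sends, here is a rough sketch (it assumes DEBUG=True, so that django.db.connection.queries records executed statements, and a hypothetical Person model):

    from django.db import connection, reset_queries

    qs = Person.objects.all()

    reset_queries()
    qs.count()                            # SELECT COUNT(*) FROM ...
    print(connection.queries[-1]["sql"])

    reset_queries()
    len(qs)                               # SELECT ... FROM ... (all rows fetched)
    print(connection.queries[-1]["sql"])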

Solution 4 - Python

For people who prefer test measurements (PostgreSQL):

If we have a simple Person model and 1000 instances of it:

class Person(models.Model):
    name = models.CharField(max_length=100)
    age = models.SmallIntegerField()

    def __str__(self):
        return self.name

On average it gives:

In [1]: persons = Person.objects.all()

In [2]: %timeit len(persons)                                                                                                                                                          
325 ns ± 3.09 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)

In [3]: %timeit persons.count()                                                                                                                                                       
170 ns ± 0.572 ns per loop (mean ± std. dev. of 7 runs, 10000000 loops each)

So as you can see, count() is almost 2x faster than len() in this particular test case.

Solution 5 - Python

Summarizing what others have already answered:

  • len() will fetch all the records and iterate over them.
  • count() will perform an SQL COUNT operation (much faster when dealing with big queryset).

It is also true that if, after this operation, the whole queryset will be iterated, then as a whole it could be slightly more efficient to use len().

However

In some cases, for instance when you have memory limitations, it can be convenient (when possible) to split the operation performed over the records. That can be achieved using Django pagination, as sketched below.

Then count() would be the choice, and you could avoid having to fetch the entire queryset at once.
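
A minimal sketch of that pagination idea using Django's Paginator, reusing the Person model from Solution 4 (the page size and the process() call are assumptions for illustration); Paginator itself issues a single COUNT(*) and then fetches one page of rows at a time:

    from django.core.paginator import Paginator

    qs = Person.objects.order_by("id")      # a stable ordering matters when paging
    paginator = Paginator(qs, 100)          # 100 records per page

    print(paginator.count)                  # one SELECT COUNT(*) under the hood

    for page_number in paginator.page_range:
        page = paginator.page(page_number)  # fetches only this page (LIMIT/OFFSET)
        for person in page.object_list:
            process(person)                 # hypothetical per-record work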

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | antonagestam | View Question on Stackoverflow
Solution 1 - Python | Andy Hayden | View Answer on Stackoverflow
Solution 2 - Python | Krzysiek | View Answer on Stackoverflow
Solution 3 - Python | Rohan | View Answer on Stackoverflow
Solution 4 - Python | funnydman | View Answer on Stackoverflow
Solution 5 - Python | Pablo Guerrero | View Answer on Stackoverflow