How much faster is Redis than MongoDB?

Tags: mongodb, benchmarking, redis

Mongodb Problem Overview


It's widely mentioned that Redis is "blazing fast" and that MongoDB is fast too. But I'm having trouble finding actual numbers comparing the two. Given similar configurations, features, and operations (and perhaps showing how the factor changes with different configurations and operations), is Redis 10x faster? 2x faster? 5x faster?

I'm ONLY speaking of performance. I understand that MongoDB is a different tool and has a richer feature set. This is not the "Is MongoDB better than Redis" debate. I'm asking: by what margin does Redis outperform MongoDB?

At this point, even cheap benchmarks are better than no benchmarks.

Mongodb Solutions


Solution 1 - Mongodb

Rough results from the benchmark below: Redis is roughly 2x faster for writes and 3x faster for reads.

Here's a simple benchmark in Python that you can adapt to your purposes; I was looking at how well each would perform at simply setting and retrieving values:

#!/usr/bin/env python2.7
import sys, time
from pymongo import Connection
import redis

# connect to redis & mongodb
redis = redis.Redis()
mongo = Connection().test
collection = mongo['test']
collection.ensure_index('key', unique=True)

def mongo_set(data):
    for k, v in data.iteritems():
        collection.insert({'key': k, 'value': v})

def mongo_get(data):
    for k in data.iterkeys():
        val = collection.find_one({'key': k}, fields=('value',)).get('value')

def redis_set(data):
    for k, v in data.iteritems():
        redis.set(k, v)

def redis_get(data):
    for k in data.iterkeys():
        val = redis.get(k)

def do_tests(num, tests):
    # setup dict with key/values to retrieve
    data = {'key' + str(i): 'val' + str(i)*100 for i in range(num)}
    # run tests
    for test in tests:
        start = time.time()
        test(data)
        elapsed = time.time() - start
        print "Completed %s: %d ops in %.2f seconds : %.1f ops/sec" % (test.__name__, num, elapsed, num / elapsed)

if __name__ == '__main__':
    num = 1000 if len(sys.argv) == 1 else int(sys.argv[1])
    tests = [mongo_set, mongo_get, redis_set, redis_get] # order of tests is significant here!
    do_tests(num, tests)

Results with MongoDB 1.8.1, Redis 2.2.5, and the latest (at the time) pymongo/redis-py:

$ ./cache_benchmark.py 10000
Completed mongo_set: 10000 ops in 1.40 seconds : 7167.6 ops/sec
Completed mongo_get: 10000 ops in 2.38 seconds : 4206.2 ops/sec
Completed redis_set: 10000 ops in 0.78 seconds : 12752.6 ops/sec
Completed redis_get: 10000 ops in 0.89 seconds : 11277.0 ops/sec

Take the results with a grain of salt, of course! If you are programming in another language, using other clients or different implementations, etc., your results will vary wildly. Not to mention your usage will be completely different! Your best bet is to benchmark them yourself, in precisely the manner you intend to use them. As a corollary, you'll probably figure out the best way to make use of each. Always benchmark for yourself!
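
Note that the script above targets Python 2 and the long-removed pymongo Connection API. A rough, untested port to Python 3 and current drivers (assuming pymongo 4+ and redis-py, with local servers on default ports) might look like this:

#!/usr/bin/env python3
import sys, time

import redis
from pymongo import MongoClient

# connect to redis & mongodb (assumes local servers on default ports)
r = redis.Redis()
collection = MongoClient().test['test']
collection.drop()  # so the unique index doesn't reject re-runs
collection.create_index('key', unique=True)

def mongo_set(data):
    for k, v in data.items():
        collection.insert_one({'key': k, 'value': v})

def mongo_get(data):
    for k in data:
        val = collection.find_one({'key': k}, ['value'])['value']

def redis_set(data):
    for k, v in data.items():
        r.set(k, v)

def redis_get(data):
    for k in data:
        val = r.get(k)

def do_tests(num, tests):
    # set up dict with key/values to store and retrieve
    data = {'key' + str(i): 'val' + str(i) * 100 for i in range(num)}
    for test in tests:
        start = time.time()
        test(data)
        elapsed = time.time() - start
        print('Completed %s: %d ops in %.2f seconds : %.1f ops/sec'
              % (test.__name__, num, elapsed, num / elapsed))

if __name__ == '__main__':
    num = 1000 if len(sys.argv) == 1 else int(sys.argv[1])
    do_tests(num, [mongo_set, mongo_get, redis_set, redis_get])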

Solution 2 - Mongodb

Please check this post analyzing Redis and MongoDB insertion performance:

> Up to 5000 entries MongoDB's $push is faster even when compared to Redis's RPUSH; then it becomes incredibly slow, probably because the MongoDB array type has linear insertion time and so it gets slower and slower. MongoDB might gain a bit of performance by exposing a constant-time insertion list type, but even with the linear-time array type (which can guarantee constant-time look-up) it has its applications for small sets of data.
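
If you want to reproduce that curve yourself, a minimal sketch along these lines times both append operations as the collection grows (assuming local servers and current pymongo/redis-py clients; the function names are just illustrative):

#!/usr/bin/env python3
import time

import redis
from pymongo import MongoClient

r = redis.Redis()
coll = MongoClient().test.push_test

def mongo_push(n):
    # append n elements to a single document's array via $push
    coll.drop()
    coll.insert_one({'_id': 1, 'items': []})
    start = time.time()
    for i in range(n):
        coll.update_one({'_id': 1}, {'$push': {'items': i}})
    return time.time() - start

def redis_rpush(n):
    # append n elements to a single Redis list via RPUSH
    r.delete('items')
    start = time.time()
    for i in range(n):
        r.rpush('items', i)
    return time.time() - start

for n in (1000, 5000, 20000):
    print('n=%6d  mongo $push: %6.2fs  redis RPUSH: %6.2fs'
          % (n, mongo_push(n), redis_rpush(n)))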

Solution 3 - Mongodb

Good and simple benchmark.

I re-ran the benchmark from Solution 1 using then-current versions of Redis (2.6.16) and MongoDB (2.4.8), and here's the result:

Completed mongo_set: 100000 ops in 5.23 seconds : 19134.6 ops/sec
Completed mongo_get: 100000 ops in 36.98 seconds : 2703.9 ops/sec
Completed redis_set: 100000 ops in 6.50 seconds : 15389.4 ops/sec
Completed redis_get: 100000 ops in 5.59 seconds : 17896.3 ops/sec

Also, this blog post compares the two using Node.js, showing the effect of an increasing number of entries in the database over time.

Solution 4 - Mongodb

Numbers are going to be hard to find, as the two are not quite in the same space. The general answer is that Redis is 10-30% faster when the data set fits within the working memory of a single machine. Once that amount of data is exceeded, Redis fails. Mongo will slow down by an amount that depends on the type of load. For an insert-only type of load, one user recently reported a slowdown of 6 to 7 orders of magnitude (10,000 to 100,000 times), but that report also admitted that there were configuration issues and that this was a very atypical working load. Normal read-heavy loads anecdotally slow by about 10x when some of the data must be read from disk.

Conclusion: Redis will be faster but not by a whole lot.

Solution 5 - Mongodb

Here is an excellent article, about a year old, on session performance in the Tornado framework. It compares a few different implementations, Redis and MongoDB included. The graph in the article states that Redis is behind MongoDB by about 10% in this specific use case.

Redis comes with a built-in benchmark (the redis-benchmark utility) that will analyze the performance of the machine you are on. There is a ton of raw data from it at the Redis benchmark wiki. You might have to look around a bit for Mongo, though; like here, here, and some random Polish numbers (but it gives you a starting point for running some MongoDB benchmarks yourself).
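
A typical invocation against a local server looks like this (-n sets the request count, -c the number of parallel clients, -t restricts which commands are exercised):

$ redis-benchmark -n 100000 -c 50 -t set,get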

I believe the best solution to this problem is to perform the tests yourself in the types of situations you expect.

Solution 6 - Mongodb

In my case, the determining factor in the performance comparison has been the MongoDB WriteConcern that is used. Most Mongo drivers nowadays set the default WriteConcern to ACKNOWLEDGED, which means "written to RAM" ([Mongo2.6.3-WriteConcern][1]); in that regard, it was very comparable to Redis for most write operations.

But the reality is that, depending on your application needs and production environment setup, you may want to change this concern to WriteConcern.JOURNALED (written to the on-disk journal) or WriteConcern.FSYNCED (written to disk), or even have writes acknowledged by replica sets (back-ups) if needed.

Then you may start seeing some performance decrease. Other important factors include how optimized your data access patterns are, the index miss % (see [mongostat][2]), and indexes in general.
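
As a hedged sketch of this with the Python driver (the Java-style constants above map to pymongo's w/j/fsync options, and 'majority' requires a replica set), the write concern can be varied per collection handle:

from pymongo import MongoClient
from pymongo.write_concern import WriteConcern

db = MongoClient().test

# default: acknowledged once applied in RAM (w=1)
fast = db.get_collection('docs')

# also wait for the write to reach the on-disk journal
journaled = db.get_collection('docs',
                              write_concern=WriteConcern(w=1, j=True))

# wait for a majority of replica-set members (requires a replica set)
replicated = db.get_collection('docs',
                               write_concern=WriteConcern(w='majority'))

fast.insert_one({'x': 1})        # fastest; closest to Redis-like latency
journaled.insert_one({'x': 2})   # slower: waits on a journal flush
replicated.insert_one({'x': 3})  # slower still: network round-trips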

[1]: http://docs.mongodb.org/manual/core/write-concern/#write-concern "MongoDocumentation"
[2]: http://docs.mongodb.org/manual/reference/program/mongostat/ "MongoStatDocumentation"

Solution 7 - Mongodb

I think the 2-3x shown in the benchmark above is misleading, since performance also depends on the hardware you run it on. From my experience, the "stronger" the machine is, the bigger the gap (in favor of Redis) will be, probably because the benchmark hits the memory bounds limit pretty fast.

As for memory capacity, that limitation is only partially true, since there are ways to get around it: there are (commercial) products that write Redis data back to disk, and also clustered (multi-shard) solutions that overcome the memory-size limitation.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type         | Original Author      | Original Content on Stackoverflow
Question             | Homer6               | View Question on Stackoverflow
Solution 1 - Mongodb | zeekay               | View Answer on Stackoverflow
Solution 2 - Mongodb | Andrei Andrushkevich | View Answer on Stackoverflow
Solution 3 - Mongodb | Tareq Salah          | View Answer on Stackoverflow
Solution 4 - Mongodb | John F. Miller       | View Answer on Stackoverflow
Solution 5 - Mongodb | mistagrooves         | View Answer on Stackoverflow
Solution 6 - Mongodb | schwarz              | View Answer on Stackoverflow
Solution 7 - Mongodb | Elior Malul          | View Answer on Stackoverflow