Apache Kafka vs Apache Storm

Apache Storm, Apache Kafka, Data Integration

Apache Storm Problem Overview


Apache Kafka: Distributed messaging system
Apache Storm: Real Time Message Processing

How can we use both technologies in a real-time data pipeline for processing event data?

In terms of a real-time data pipeline, both seem to me to do the same job. How can we use both technologies together in a data pipeline?

Apache Storm Solutions


Solution 1 - Apache Storm

You use Apache Kafka as a distributed and robust queue that can handle high-volume data and lets you pass messages from one endpoint to another.

Storm is not a queue. It is a system with distributed real-time processing abilities, meaning you can execute all kinds of manipulations on real-time data in parallel.

The common flow of these tools (as I know it) goes as follows:

real-time system --> Kafka --> Storm --> NoSQL --> BI (optional)

So your real-time app handles high-volume data and sends it to the Kafka queue. Storm pulls the data from Kafka and applies the required manipulation. At this point you usually want to get some benefit from this data, so you either send it to a NoSQL database for additional BI calculations, or you simply query that NoSQL store from any other system.
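A minimal sketch of such a topology in Java (assuming the storm-kafka-client module is on the classpath; the broker address and topic name are made up, and ProcessBolt / NoSqlWriterBolt are hypothetical stand-ins for your own manipulation and NoSQL-writing bolts):

    import org.apache.storm.Config;
    import org.apache.storm.StormSubmitter;
    import org.apache.storm.kafka.spout.KafkaSpout;
    import org.apache.storm.kafka.spout.KafkaSpoutConfig;
    import org.apache.storm.topology.TopologyBuilder;

    public class EventPipelineTopology {
        public static void main(String[] args) throws Exception {
            // Spout that consumes the events Kafka has buffered for us.
            KafkaSpoutConfig<String, String> spoutConfig =
                    KafkaSpoutConfig.builder("kafka-broker:9092", "events").build();

            TopologyBuilder builder = new TopologyBuilder();
            builder.setSpout("kafka-spout", new KafkaSpout<>(spoutConfig), 2);
            // ProcessBolt and NoSqlWriterBolt are placeholders for your own bolts.
            builder.setBolt("process", new ProcessBolt(), 4).shuffleGrouping("kafka-spout");
            builder.setBolt("nosql-writer", new NoSqlWriterBolt(), 2).shuffleGrouping("process");

            StormSubmitter.submitTopology("event-pipeline", new Config(), builder.createTopology());
        }
    }

The BI layer then reads from the NoSQL store independently of the topology.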

Solution 2 - Apache Storm

Kafka and Storm have slightly different purposes:

Kafka is a distributed message broker that can handle a large number of messages per second. It uses the publish-subscribe paradigm and relies on topics and partitions. Kafka uses ZooKeeper to share and save state between brokers. So Kafka is basically responsible for transferring messages from one machine to another.
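For illustration, a minimal producer sketch showing the publish-subscribe side (the broker address, topic, key, and value are hypothetical):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class EventProducer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka-broker:9092"); // hypothetical broker address
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // The record key determines which partition of the topic the record lands on.
                producer.send(new ProducerRecord<>("page-views", "user-42", "{\"page\":\"/home\"}"));
            }
        }
    }

Any number of consumer groups can subscribe to the same topic and read the messages independently.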

Storm is a scalable, fault-tolerant, real-time analytics system (think of it as Hadoop in real time). It consumes data from sources (spouts) and passes it through a pipeline of processing steps (bolts). You can combine them in a topology. So Storm is basically a computation unit (aggregation, machine learning).
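As a rough sketch of what a bolt looks like in Java (assuming an upstream spout that emits a field named "word"; the class name is made up):

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.storm.topology.BasicOutputCollector;
    import org.apache.storm.topology.OutputFieldsDeclarer;
    import org.apache.storm.topology.base.BaseBasicBolt;
    import org.apache.storm.tuple.Fields;
    import org.apache.storm.tuple.Tuple;
    import org.apache.storm.tuple.Values;

    // A bolt that keeps a running count of every word it has seen.
    public class WordCountBolt extends BaseBasicBolt {
        private final Map<String, Long> counts = new HashMap<>();

        @Override
        public void execute(Tuple tuple, BasicOutputCollector collector) {
            String word = tuple.getStringByField("word");
            long count = counts.merge(word, 1L, Long::sum);
            collector.emit(new Values(word, count)); // downstream bolts receive (word, count)
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("word", "count"));
        }
    }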


But you can use them together: for example, your application uses Kafka to send data to other servers, which use Storm to run some computation on it.

Solution 3 - Apache Storm

I know this is an older thread and the comparisons of Apache Kafka and Storm were valid and correct when they were written, but it is worth noting that Apache Kafka has evolved a lot over the years. Since version 0.10 (April 2016), Kafka has included the Kafka Streams API, which provides stream-processing capabilities without the need for any additional software such as Storm. Kafka also includes the Connect API for connecting to various sources and sinks (destinations) of data.

Announcement blog - https://www.confluent.io/blog/introducing-kafka-streams-stream-processing-made-simple/

Current Apache documentation - https://kafka.apache.org/documentation/streams/
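For a feel of the Streams API, here is a minimal sketch in Java (using the newer StreamsBuilder interface; the application id, broker address, and topic names are hypothetical):

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.KStream;

    public class StreamsExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "event-processor");       // hypothetical app id
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-broker:9092");  // hypothetical broker
            props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
            props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, String> events = builder.stream("raw-events");
            // A trivial transformation; real logic could be filtering, joins, or windowed aggregation.
            events.mapValues(value -> value.toUpperCase()).to("processed-events");

            KafkaStreams streams = new KafkaStreams(builder.build(), props);
            streams.start();
        }
    }

The processing runs inside your own application process, so there is no separate cluster to operate as there is with Storm.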

In Kafka 0.11 the stream-processing functionality was further expanded to provide exactly-once semantics and transactions.

https://www.confluent.io/blog/exactly-once-semantics-are-possible-heres-how-apache-kafka-does-it/
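For illustration, a sketch of the transactional producer API that 0.11 introduced (broker address, transactional id, and topic names are hypothetical; in Kafka Streams the equivalent is setting processing.guarantee to exactly_once):

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class TransactionalProducerExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka-broker:9092");   // hypothetical broker
            props.put("transactional.id", "pipeline-producer-1");  // enables idempotence and transactions
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                producer.initTransactions();
                producer.beginTransaction();
                // Both records commit atomically: consumers using isolation.level=read_committed
                // see either both of them or neither.
                producer.send(new ProducerRecord<>("orders", "order-1", "created"));
                producer.send(new ProducerRecord<>("audit", "order-1", "created"));
                producer.commitTransaction();
            }
        }
    }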

Solution 4 - Apache Storm

This is how it works:

Kafka - provides a real-time stream

Storm - performs some operations on that stream

You might take a look at the GitHub project https://github.com/abhishekgoel137/kafka-nodejs-d3js.

(D3.js is a data-visualization library)

Ideal case:

Realtime application -> Kafka -> Storm -> NoSQL -> d3js

This repository is based on:

Realtime application -> Kafka -> <plain Node.js> -> NoSQL -> d3js

Solution 5 - Apache Storm

As everyone has explained, Apache Kafka is a continuous messaging queue.

Apache Storm is a continuous processing tool.

In this setup, Kafka gets the data from any website, such as Facebook or Twitter, through their APIs; that data is then processed using Apache Storm, and you can store the processed data in any database you like.

https://github.com/miguno/kafka-storm-starter

Just follow it and you will get some idea.

Solution 6 - Apache Storm

When I have a use case that requires me to visualize or alert on patterns (think of Twitter trends) while continuing to process the events, I have several options.

NiFi would allow me to process an event and update a persistent data store with low(er) batch aggregation and very, very little custom coding.

Storm (lots of custom coding) gives me nearly real-time access to the trending events.

If I can wait many seconds, I can batch out of Kafka into HDFS (Parquet) and process from there.

If I need to know within seconds, I need NiFi, and probably even Storm. (Think of monitoring thousands of earth stations, where I need to see small-region weather conditions for tornado warnings.)

Solution 7 - Apache Storm

Simply put, Kafka sends the messages from one node to another, and Storm processes the messages. Check this example of how you can integrate Apache Kafka with Storm.

Attributions

All content for this solution is sourced from the original question on Stackoverflow.

The content on this page is licensed under the Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license.

Content Type | Original Author | Original Content on Stackoverflow
Question | Ananth Duari | View Question on Stackoverflow
Solution 1 - Apache Storm | forhas | View Answer on Stackoverflow
Solution 2 - Apache Storm | Salvador Dali | View Answer on Stackoverflow
Solution 3 - Apache Storm | Hans Jespersen | View Answer on Stackoverflow
Solution 4 - Apache Storm | Abhishek Goel | View Answer on Stackoverflow
Solution 5 - Apache Storm | syed jameer | View Answer on Stackoverflow
Solution 6 - Apache Storm | Daemeon | View Answer on Stackoverflow
Solution 7 - Apache Storm | Al-Mustafa Azhari | View Answer on Stackoverflow