1) Explain what Apache Storm is. What are the components of Storm?
Apache Storm is an open-source, distributed real-time computation system used for processing unbounded streams of big data. Unlike Hadoop's batch processing, Apache Storm is built for real-time processing and can be used with any programming language.
The main components of Apache Storm include:
- Nimbus: the master daemon that distributes code across the cluster and assigns tasks to worker nodes
- Supervisor: the worker-node daemon that starts and stops worker processes as directed by Nimbus
- ZooKeeper: coordinates the cluster and stores its state
- Worker process: executes a subset of a topology's spouts and bolts
2) Why is Apache Storm a first choice for real-time processing?
Apache Storm is preferred for real-time processing because it is fast and low-latency, horizontally scalable, fault tolerant (failed workers are restarted automatically), and it guarantees that every message will be processed at least once.
3) Explain how data streams flow in Apache Storm?
In Apache Storm, data flows as streams through three abstractions: the Spout (the source that reads data and emits it into the topology), the Bolt (which consumes tuples, processes them, and may emit new tuples), and the Tuple (the ordered list of values that is the basic unit of data).
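This flow can be illustrated with a plain-Java simulation. The classes and method names below are illustrative stand-ins, not Storm's actual API; a real topology would use `org.apache.storm` classes instead.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified stand-ins for Storm's Spout -> Bolt -> Tuple flow.
public class StreamFlowSketch {
    // A "tuple" is just a single named value here.
    record Tuple(String value) {}

    // Spout: the source that emits tuples into the stream.
    static List<Tuple> spoutEmit(String[] rawEvents) {
        List<Tuple> out = new ArrayList<>();
        for (String e : rawEvents) out.add(new Tuple(e));
        return out;
    }

    // Bolt: consumes tuples and transforms them (here, upper-casing).
    static List<String> boltExecute(List<Tuple> stream) {
        List<String> results = new ArrayList<>();
        for (Tuple t : stream) results.add(t.value().toUpperCase());
        return results;
    }

    public static void main(String[] args) {
        List<String> out = boltExecute(spoutEmit(new String[]{"click", "view"}));
        System.out.println(out); // [CLICK, VIEW]
    }
}
```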
4) Mention what is the difference between Apache HBase and Storm?

| Apache Storm | Apache HBase |
| --- | --- |
| A framework for real-time stream processing; it does not store data itself. | A NoSQL database that stores data, providing low-latency random reads and writes on top of HDFS. |
| Processes streams of tuples as they arrive. | Often used as a sink where Storm's processed results are persisted. |
5) Explain how you can streamline log files using Apache Storm?
To read from log files, you can configure your spout to emit one tuple per line as it reads the log. The output can then be assigned to a bolt for parsing and analysis.
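The per-line emit pattern can be sketched in plain Java. This is not Storm's API; the class below only mimics the spout lifecycle (`open()` creates the reader once, `nextTuple()` emits one line per call), using a `StringReader` in place of a real log file.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

// Sketch of the per-line emit pattern a log-reading spout uses.
public class LogSpoutSketch {
    private final BufferedReader reader;          // created once, as in open()
    private final List<String> emitted = new ArrayList<>();

    LogSpoutSketch(BufferedReader reader) { this.reader = reader; }

    // Analogue of nextTuple(): emit at most one log line per invocation.
    boolean nextTuple() throws IOException {
        String line = reader.readLine();
        if (line == null) return false;           // log exhausted
        emitted.add(line);                        // real code: collector.emit(new Values(line))
        return true;
    }

    List<String> emitted() { return emitted; }

    public static void main(String[] args) throws IOException {
        LogSpoutSketch spout = new LogSpoutSketch(
            new BufferedReader(new StringReader("INFO start\nERROR oom")));
        while (spout.nextTuple()) { }
        System.out.println(spout.emitted()); // [INFO start, ERROR oom]
    }
}
```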
6) Explain what streams and stream grouping are in Apache Storm?
In Apache Storm, a stream is an unbounded sequence of tuples, while a stream grouping determines how the stream is partitioned among the consuming bolt's tasks.
7) List out the different stream groupings in Apache Storm?
- Shuffle grouping: tuples are distributed randomly but evenly across the bolt's tasks
- Fields grouping: the stream is partitioned by the specified fields, so equal field values always go to the same task
- All grouping: the stream is replicated to every task of the bolt
- Global grouping: the entire stream goes to a single task (the one with the lowest id)
- None grouping: you don't care how the stream is grouped (currently equivalent to shuffle grouping)
- Direct grouping: the producer of a tuple decides which task of the consumer receives it
- Local or shuffle grouping: prefers tasks in the same worker process, falling back to shuffle grouping otherwise
8) Mention how a Storm application can be beneficial in financial services?
In financial services, Storm can be helpful in preventing securities fraud and compliance violations, for example by correlating transaction events in real time to flag anomalous trading patterns before losses occur.
9) Explain what topology.message.timeout.secs is in Apache Storm?
It is the maximum amount of time, in seconds, allotted to the topology to fully process a message emitted by a spout. If the message is not acknowledged within this time frame, Apache Storm fails the message on the spout, which will typically cause it to be replayed.
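In a standard deployment this setting would appear in `storm.yaml` (the value below is only an example):

```yaml
# storm.yaml -- fail tuples that are not fully acked within 60 seconds
topology.message.timeout.secs: 60
```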
10) Explain how a message is fully processed in Apache Storm?
Storm requests a tuple from the spout by calling the nextTuple() method on it. The spout uses the SpoutOutputCollector provided in its open() method to emit a tuple to one of its output streams. When emitting a tuple, the spout assigns it a "message id" that will be used to identify the tuple later.
The tuple is then sent to the consuming bolts, and Storm takes charge of tracking the tree of messages that is produced. Once Storm determines that a tuple's entire tree has been processed, it calls the ack() method on the originating spout task, passing the message id that the spout originally gave to Storm.
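The bookkeeping idea can be modeled with a small plain-Java sketch. This is a deliberate simplification (Storm's real tracker uses a far more compact XOR-based scheme); the class and method names are illustrative, not Storm's API.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of Storm's at-least-once tracking: a spout tags each
// tuple with a message id; ack() removes it, and anything still pending
// past the timeout would be replayed.
public class AckTrackerSketch {
    private final Map<Long, String> pending = new HashMap<>();
    private long nextId = 0;

    // Spout emit: register the tuple under a fresh message id.
    long emit(String payload) {
        long id = nextId++;
        pending.put(id, payload);
        return id;
    }

    // Called when the tuple's whole message tree has been processed.
    void ack(long messageId) { pending.remove(messageId); }

    // Tuples still awaiting acknowledgement (candidates for replay).
    int pendingCount() { return pending.size(); }

    public static void main(String[] args) {
        AckTrackerSketch tracker = new AckTrackerSketch();
        long first = tracker.emit("order-1");
        tracker.emit("order-2");
        tracker.ack(first);
        System.out.println(tracker.pendingCount()); // 1 -- "order-2" not yet acked
    }
}
```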
11) Explain how to write output into a file using Storm?
Writing is done in a bolt rather than a spout: create the file writer (for example, a BufferedWriter) once in the bolt's prepare() method, since prepare() is called once per worker when the bolt is initialized, then write a line for each tuple received in the execute() method, and close the writer in cleanup().
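A common way to write output is from a bolt: open the writer once in prepare() and append one line per tuple in execute(). The class below is a plain-Java sketch of that lifecycle without Storm dependencies; the names mirror the bolt methods but are illustrative only.

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Mirrors a file-writing bolt's lifecycle: prepare() opens the writer
// once per worker; execute() appends one line per tuple; cleanup()
// flushes and closes the file.
public class FileBoltSketch {
    private BufferedWriter writer;

    void prepare(Path outFile) throws IOException {      // Storm: prepare(conf, context, collector)
        writer = Files.newBufferedWriter(outFile);
    }

    void execute(String tupleValue) throws IOException { // Storm: execute(Tuple)
        writer.write(tupleValue);
        writer.newLine();
    }

    void cleanup() throws IOException { writer.close(); }

    public static void main(String[] args) throws IOException {
        Path out = Files.createTempFile("bolt-out", ".txt");
        FileBoltSketch bolt = new FileBoltSketch();
        bolt.prepare(out);
        bolt.execute("event-1");
        bolt.execute("event-2");
        bolt.cleanup();
        System.out.println(Files.readAllLines(out)); // [event-1, event-2]
    }
}
```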
12) Mention what is the difference between Apache Kafka and Apache Storm?
Apache Kafka is a distributed, durable messaging system that stores streams of records and lets consumers read them at their own pace, whereas Apache Storm is a distributed real-time computation system that processes streams but does not store them. The two are commonly combined, with Kafka acting as the source of the stream and a Storm spout consuming from it.
13) Explain, when using fields grouping in Storm, is there any time-out or limit to known field values?
Fields grouping in Storm uses a mod-hash function to decide which task a tuple is sent to, ensuring that tuples with the same field value are always processed by the same task. Because the target task is computed directly from the value itself, no cache of known values is required, so there is no time-out or limit on them.
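The idea can be sketched in a few lines of plain Java (a simplification of the concept, not Storm's internal code):

```java
// Sketch of the consistent-hash idea behind fields grouping: the target
// task is a pure function of the field value, so no lookup table or
// cache of "known values" is needed, and nothing can time out.
public class FieldsGroupingSketch {
    static int taskFor(Object fieldValue, int numTasks) {
        // Same value -> same hash -> same task, on every call.
        return Math.floorMod(java.util.Objects.hashCode(fieldValue), numTasks);
    }

    public static void main(String[] args) {
        int first = taskFor("user-42", 8);
        int later = taskFor("user-42", 8);
        System.out.println(first == later); // true -- deterministic routing
    }
}
```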