Dataflow programming, a classic computing model, is enjoying a revival thanks to the growth of web-scale real-time services. Its simplicity, scalability, and resource efficiency make it a good fit for many engineering problems. Straw is a Node.js framework for building dataflows; it was originally created for real-time financial data processing and can handle thousands of messages per second on modest hardware.
Straw structures code into interconnected nodes: each node receives input, processes it, and outputs results. This modular design simplifies complex problems, enhancing scalability and resilience. This article demonstrates Straw's capabilities by detailing its application in mining Twitter's Firehose for tweet data. The process involves setting up nodes to ingest raw data, perform analysis, and distribute results to an Express server and clients via WebSockets for real-time visualization.
Introduction to Straw and Haystack
Straw defines a topology of nodes, each with at most one input and zero or more outputs. Nodes process incoming messages with user-defined functions and emit output messages to connected nodes. The example application, Haystack, uses nodes to consume raw data from the Firehose, route it for analysis, and perform the analysis itself. Results are relayed to an Express server and on to clients over WebSockets for real-time visualization. To follow along, install Haystack locally; Redis and Bower are prerequisites. Install Bower with:

```shell
npm install -g bower
```

Then clone and set up Haystack:

```shell
git clone https://github.com/simonswain/haystack
cd haystack
npm install
bower install
```
Running the Firehose Data Stream
Accessing the Twitter Firehose requires API credentials, obtained by creating a Twitter app (read permissions are enough). Take the consumer_key, consumer_secret, access_token_key, and access_token_secret from the app's API Keys tab and put them in Haystack's sample config file (config.js):

```javascript
exports.twitter = {
  consumer_key: '{your consumer key}',
  consumer_secret: '{your consumer secret}',
  access_token_key: '{your access token key}',
  access_token_secret: '{your access token secret}'
};
```
Run Haystack in two separate terminals, one for the Straw topology and one for the Express server:

```shell
node run
```

```shell
node server.js
```

Then open the visualization at http://localhost:3000.
Understanding the Straw Topology (run.js)
run.js defines the Straw topology. Nodes and their connections are specified in an object. For example:

```javascript
var topo = new straw.topology({
  'consume-firehose': {
    'node': __dirname + '/nodes/consume-firehose.js',
    'output': 'raw-tweets',
    'twitter': config.twitter
  },
  'route-tweets': {
    'node': __dirname + '/nodes/route-tweets.js',
    'input': 'raw-tweets',
    'outputs': {
      'geo': 'client-geo',
      'lang': 'lang',
      'text': 'text'
    }
  },
  // ... more nodes
});
```
Node implementations live in the nodes directory. consume-firehose has no input and introduces new messages into the topology; route-tweets demonstrates multiple outputs, sending each message only to the outputs it matches.
Example Nodes (consume-firehose.js and route-tweets.js)
consume-firehose.js:

```javascript
// nodes/consume-firehose.js
var straw = require('straw');
var Twitter = require('twitter');

module.exports = straw.node.extend({
  initialize: function(opts, done) {
    this.twit = new Twitter(opts.twitter);
    process.nextTick(done);
  },
  run: function(done) {
    var self = this;
    this.twit.stream('statuses/sample', function(stream) {
      stream.on('data', function(data) {
        self.output(data);
      });
    });
    done(false);
  }
});
```
route-tweets.js receives raw tweets and forwards each one to the geo, lang, and text outputs defined for it in the topology, depending on what the tweet contains.
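The original route-tweets.js listing isn't reproduced in this article. A minimal sketch of the routing it performs, assuming the straw.node.extend API shown in consume-firehose.js and the geo/lang/text output names from the topology (the routeOutputs helper and the process signature are assumptions for illustration, not Haystack's actual code):

```javascript
// Hypothetical sketch of nodes/route-tweets.js, not the original listing.
// Decide which named outputs a raw tweet should be routed to.
function routeOutputs(tweet) {
  var outs = [];
  if (tweet.coordinates) { outs.push('geo'); }  // tweet carries location data
  if (tweet.lang)        { outs.push('lang'); } // tweet carries a language code
  if (tweet.text)        { outs.push('text'); } // tweet carries a body
  return outs;
}

// Wired into a Straw node (assuming the API used by consume-firehose.js,
// with process(msg, done) handling each incoming message):
//
// module.exports = straw.node.extend({
//   process: function(msg, done) {
//     var self = this;
//     routeOutputs(msg).forEach(function(name) {
//       self.output(name, msg);
//     });
//     done();
//   }
// });
```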
The catch-langs Node (for language aggregation)
catch-langs aggregates language counts, periodically emitting totals so clients aren't overwhelmed. It uses setInterval to control emission: counts are incremented as tweets arrive, and totals are emitted only when they have changed.
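The catch-langs listing isn't shown here either, but the throttling pattern it describes can be sketched as follows (makeLangCounter and its shape are assumptions for illustration, not Haystack's actual code):

```javascript
// Throttled language aggregation, as described for the catch-langs node.
// Counts are incremented per message; totals are emitted at most once per
// interval, and only when something has changed.
function makeLangCounter(emit, intervalMs) {
  var totals = {};
  var changed = false;

  function add(lang) {
    totals[lang] = (totals[lang] || 0) + 1;
    changed = true;
  }

  function flush() {
    if (!changed) { return; }   // nothing new; don't spam clients
    changed = false;
    emit(totals);
  }

  // Emit on a timer rather than per message, so thousands of tweets per
  // second become a handful of updates per second for the browser.
  var timer = setInterval(flush, intervalMs);

  return { add: add, flush: flush, stop: function() { clearInterval(timer); } };
}
```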
The Express Server (server.js) and Client-Side Visualization (haystack.js)
server.js uses Express and Socket.IO (or SockJS) to serve the web interface, and streams data out of the Straw topology via a straw.tap. The client-side code (public/js/haystack.js) receives and visualizes this data.
Conclusion
Haystack exemplifies dataflow processing for real-time data streams. Straw's inherent parallelism and modularity simplify complex tasks. Extend Haystack by adding nodes and visualizations.