Java Code Examples for org.apache.storm.topology.TopologyBuilder#setSpout()
The following examples show how to use org.apache.storm.topology.TopologyBuilder#setSpout(). Each example notes its original project, source file, and license.
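As a quick reference before the examples: setSpout registers a spout under a unique component id and returns a SpoutDeclarer for further configuration. Below is a minimal sketch of the two common overloads; the spout and bolt classes are illustrative placeholders, not taken from the examples that follow.

import org.apache.storm.topology.TopologyBuilder;

TopologyBuilder builder = new TopologyBuilder();
// Two-argument overload: the parallelism hint defaults to 1 executor.
builder.setSpout("sentences", new RandomSentenceSpout());
// Three-argument overload: request 4 executors for this spout.
builder.setSpout("words", new TestWordSpout(), 4);
// Downstream bolts subscribe to a spout by its component id.
builder.setBolt("split", new SplitSentence(), 2).shuffleGrouping("sentences");

Component ids must be unique across the whole topology; reusing an id in setSpout or setBolt throws an IllegalArgumentException.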
Example 1
Source File: AckingTopology.java From incubator-heron with Apache License 2.0
public static void main(String[] args) throws Exception {
  if (args.length != 1) {
    throw new RuntimeException("Specify topology name");
  }
  TopologyBuilder builder = new TopologyBuilder();

  int spouts = 2;
  int bolts = 2;
  builder.setSpout("word", new AckingTestWordSpout(), spouts);
  builder.setBolt("exclaim1", new ExclamationBolt(), bolts)
      .shuffleGrouping("word");

  Config conf = new Config();
  conf.setDebug(true);
  // Put an arbitrarily large number here if you don't want to slow the topology down
  conf.setMaxSpoutPending(1000 * 1000 * 1000);
  // To enable acking, set the number of ackers to at least one
  conf.setNumAckers(1);
  conf.put(Config.TOPOLOGY_WORKER_CHILDOPTS, "-XX:+HeapDumpOnOutOfMemoryError");

  // Set the number of workers or stream managers
  conf.setNumWorkers(2);
  StormSubmitter.submitTopology(args[0], conf, builder.createTopology());
}
Example 2
Source File: AnchoredWordCount.java From storm-net-adapter with Apache License 2.0
protected int run(String[] args) throws Exception {
  TopologyBuilder builder = new TopologyBuilder();

  builder.setSpout("spout", new RandomSentenceSpout(), 4);
  builder.setBolt("split", new SplitSentence(), 4).shuffleGrouping("spout");
  builder.setBolt("count", new WordCount(), 4).fieldsGrouping("split", new Fields("word"));

  Config conf = new Config();
  conf.setMaxTaskParallelism(3);

  String topologyName = "word-count";
  conf.setNumWorkers(3);

  if (args != null && args.length > 0) {
    topologyName = args[0];
  }
  return submit(topologyName, conf, builder);
}
Example 3
Source File: StormRangerAuthorizerTest.java From ranger with Apache License 2.0
@org.junit.BeforeClass
public static void setup() throws Exception {
  cluster = new LocalCluster();

  final Config conf = new Config();
  conf.setDebug(true);

  final TopologyBuilder builder = new TopologyBuilder();
  builder.setSpout("words", new WordSpout());
  builder.setBolt("counter", new WordCounterBolt()).shuffleGrouping("words");

  // bob can create a new topology
  final Subject subject = new Subject();
  subject.getPrincipals().add(new SimplePrincipal("bob"));
  Subject.doAs(subject, new PrivilegedExceptionAction<Void>() {
    public Void run() throws Exception {
      cluster.submitTopology("word-count", conf, builder.createTopology());
      return null;
    }
  });
}
Example 4
Source File: MultipleLoggerTopology.java From storm-net-adapter with Apache License 2.0
public static void main(String[] args) throws Exception {
  TopologyBuilder builder = new TopologyBuilder();

  builder.setSpout("word", new TestWordSpout(), 10);
  builder.setBolt("exclaim1", new ExclamationLoggingBolt(), 3).shuffleGrouping("word");
  builder.setBolt("exclaim2", new ExclamationLoggingBolt(), 2).shuffleGrouping("exclaim1");

  Config conf = new Config();
  conf.setDebug(true);
  String topoName = MultipleLoggerTopology.class.getName();
  if (args != null && args.length > 0) {
    topoName = args[0];
  }
  conf.setNumWorkers(2);
  StormSubmitter.submitTopologyWithProgressBar(topoName, conf, builder.createTopology());
}
Example 5
Source File: TopologyFactoryBean.java From breeze with Apache License 2.0
private StormTopology build() {
  run();
  verify();

  Map<String, BoltDeclarer> declaredBolts = new HashMap<>();
  TopologyBuilder builder = new TopologyBuilder();
  for (Map.Entry<ConfiguredSpout, List<ConfiguredBolt>> line : entrySet()) {
    ConfiguredSpout spout = line.getKey();
    String lastId = spout.getId();
    String streamId = spout.getOutputStreamId();
    builder.setSpout(lastId, spout, spout.getParallelism());
    for (ConfiguredBolt bolt : line.getValue()) {
      String id = bolt.getId();
      BoltDeclarer declarer = declaredBolts.get(id);
      if (declarer == null) {
        declarer = builder.setBolt(id, bolt, bolt.getParallelism());
      }
      declarer.noneGrouping(lastId, streamId);
      if (declaredBolts.put(id, declarer) != null) {
        break;
      }
      lastId = id;
      streamId = bolt.getOutputStreamId();
    }
  }
  return builder.createTopology();
}
Example 6
Source File: AbstractStormSimpleBoltTests.java From elasticsearch-hadoop with Apache License 2.0
@Test
public void testSimpleWriteTopology() throws Exception {
  List doc1 = Collections.singletonList(ImmutableMap.of("one", 1, "two", 2));
  List doc2 = Collections.singletonList(ImmutableMap.of("OTP", "Otopeni", "SFO", "San Fran"));

  String target = index + "/simple-write";
  TopologyBuilder builder = new TopologyBuilder();
  builder.setSpout("test-spout-1", new TestSpout(ImmutableList.of(doc2, doc1), new Fields("doc")));
  builder.setBolt("es-bolt-1", new TestBolt(new EsBolt(target, conf))).shuffleGrouping("test-spout-1");

  MultiIndexSpoutStormSuite.run(index + "simple", builder.createTopology(), COMPONENT_HAS_COMPLETED);

  COMPONENT_HAS_COMPLETED.waitFor(1, TimeValue.timeValueSeconds(10));

  RestUtils.refresh(index);
  assertTrue(RestUtils.exists(target));
  String results = RestUtils.get(target + "/_search?");
  assertThat(results, containsString("SFO"));
}
Example 7
Source File: StatisticTopology.java From storm-statistic with Apache License 2.0
public static void main(String[] args) throws Exception {
  TopologyBuilder builder = new TopologyBuilder();
  /**
   * Set up the DAG (directed acyclic graph) of spouts and bolts
   */
  KafkaSpout kafkaSpout = createKafkaSpout();
  builder.setSpout("id_kafka_spout", kafkaSpout);
  // the stream grouping specifies each bolt's upstream component
  builder.setBolt("id_convertIp_bolt", new ConvertIPBolt()).shuffleGrouping("id_kafka_spout");
  builder.setBolt("id_statistic_bolt", new StatisticBolt()).shuffleGrouping("id_convertIp_bolt");

  // build the topology from the builder
  StormTopology topology = builder.createTopology();
  String topologyName = KafkaStormTopology.class.getSimpleName(); // topology name
  Config config = new Config(); // Config extends HashMap and adds some basic configuration helpers

  // launch the topology: LocalCluster for local mode, StormSubmitter for cluster mode
  if (args == null || args.length < 1) { // no arguments: local mode; with arguments: cluster mode
    LocalCluster localCluster = new LocalCluster(); // local development mode uses LocalCluster
    localCluster.submitTopology(topologyName, config, topology);
  } else {
    StormSubmitter.submitTopology(topologyName, config, topology);
  }
}
Example 8
Source File: AbstractSpoutSimpleRead.java From elasticsearch-hadoop with Apache License 2.0
@Test
public void testSimpleRead() throws Exception {
  String target = index + "/basic-read";

  RestUtils.touch(index);
  RestUtils.postData(target, "{\"message\" : \"Hello World\",\"message_date\" : \"2014-05-25\"}".getBytes());
  RestUtils.postData(target, "{\"message\" : \"Goodbye World\",\"message_date\" : \"2014-05-25\"}".getBytes());
  RestUtils.refresh(index);

  TopologyBuilder builder = new TopologyBuilder();
  builder.setSpout("es-spout", new TestSpout(new EsSpout(target)));
  builder.setBolt("test-bolt", new CapturingBolt()).shuffleGrouping("es-spout");

  MultiIndexSpoutStormSuite.run(index + "simple", builder.createTopology(), COMPONENT_HAS_COMPLETED);

  COMPONENT_HAS_COMPLETED.waitFor(1, TimeValue.timeValueSeconds(10));

  assertTrue(RestUtils.exists(target));
  String results = RestUtils.get(target + "/_search?");
  assertThat(results, containsString("Hello"));
  assertThat(results, containsString("Goodbye"));

  System.out.println(CapturingBolt.CAPTURED);
  assertThat(CapturingBolt.CAPTURED.size(), is(2));
}
Example 9
Source File: LocalWordCountJDBCStormTopology.java From 163-bigdate-note with GNU General Public License v3.0
public static void main(String[] args) {
  // build the TopologyBuilder from the Spout and Bolts
  TopologyBuilder builder = new TopologyBuilder();
  builder.setSpout("DataSourceSpout", new DataSourceSpout());
  builder.setBolt("SplitBolt", new SplitBolt()).shuffleGrouping("DataSourceSpout");
  builder.setBolt("CountBolt", new CountBolt()).shuffleGrouping("SplitBolt");

  Map hikariConfigMap = Maps.newHashMap();
  hikariConfigMap.put("dataSourceClassName", "com.mysql.jdbc.jdbc2.optional.MysqlDataSource");
  hikariConfigMap.put("dataSource.url", "jdbc:mysql://192.168.60.11/storm");
  hikariConfigMap.put("dataSource.user", "root");
  hikariConfigMap.put("dataSource.password", "123");
  ConnectionProvider connectionProvider = new HikariCPConnectionProvider(hikariConfigMap);

  String tableName = "wc";
  JdbcMapper simpleJdbcMapper = new SimpleJdbcMapper(tableName, connectionProvider);
  JdbcInsertBolt userPersistanceBolt = new JdbcInsertBolt(connectionProvider, simpleJdbcMapper)
      .withTableName(tableName)
      .withQueryTimeoutSecs(30);
  builder.setBolt("JdbcInsertBolt", userPersistanceBolt).shuffleGrouping("CountBolt");

  // create a local cluster
  LocalCluster cluster = new LocalCluster();
  cluster.submitTopology("LocalWordCountRedisStormTopology", new Config(), builder.createTopology());
}
Example 10
Source File: IntervalWindowTopology.java From twister2 with Apache License 2.0
@Override
public StormTopology buildTopology() {
  TopologyBuilder builder = new TopologyBuilder();
  builder.setSpout("source", new TestWordSpout(), 1);
  builder.setBolt("windower", new IntervalWindowBolt()
      .withTumblingWindow(new BaseWindowedBolt.Duration(2, TimeUnit.SECONDS)), 1)
      .shuffleGrouping("source");
  return builder.createTopology();
}
Example 11
Source File: StormKafkaProcess.java From BigData with GNU General Public License v3.0
public static void main(String[] args)
    throws InterruptedException, InvalidTopologyException, AuthorizationException, AlreadyAliveException {
  String topologyName = "TSAS"; // topology name
  // ZooKeeper host addresses; one of them is picked automatically
  ZkHosts zkHosts = new ZkHosts("192.168.230.128:2181,192.168.230.129:2181,192.168.230.131:2181");
  String topic = "trademx";
  String zkRoot = "/storm"; // Storm's root path in ZooKeeper
  String id = "tsaPro";
  // create the SpoutConfig object
  SpoutConfig spoutConfig = new SpoutConfig(zkHosts, topic, zkRoot, id);

  TopologyBuilder builder = new TopologyBuilder();
  builder.setSpout("kafka", new KafkaSpout(spoutConfig), 2);
  builder.setBolt("AccBolt", new AccBolt()).shuffleGrouping("kafka");
  builder.setBolt("ToDbBolt", new ToDbBolt()).shuffleGrouping("AccBolt");

  Config config = new Config();
  config.setDebug(false);

  if (args.length == 0) { // run locally, for testing
    LocalCluster localCluster = new LocalCluster();
    localCluster.submitTopology(topologyName, config, builder.createTopology());
    Thread.sleep(1000 * 3600);
    localCluster.killTopology(topologyName);
    localCluster.shutdown();
  } else { // submit to the cluster
    StormSubmitter.submitTopology(topologyName, config, builder.createTopology());
  }
}
Example 12
Source File: SlidingWindowTopology.java From twister2 with Apache License 2.0
@Override
public StormTopology buildTopology() {
  TopologyBuilder builder = new TopologyBuilder();
  builder.setSpout("source", new TestWordSpout(), 1);
  builder.setBolt("windower", new SlidingWindowBolt()
      .withWindow(new BaseWindowedBolt.Count(30), new BaseWindowedBolt.Count(10)), 1)
      .shuffleGrouping("source");
  return builder.createTopology();
}
Example 13
Source File: ManualDRPC.java From storm-net-adapter with Apache License 2.0
public static void main(String[] args) throws Exception {
  TopologyBuilder builder = new TopologyBuilder();
  DRPCSpout spout = new DRPCSpout("exclamation");
  builder.setSpout("drpc", spout);
  builder.setBolt("exclaim", new ExclamationBolt(), 3).shuffleGrouping("drpc");
  builder.setBolt("return", new ReturnResults(), 3).shuffleGrouping("exclaim");

  Config conf = new Config();
  StormSubmitter.submitTopology("exclaim", conf, builder.createTopology());

  try (DRPCClient drpc = DRPCClient.getConfiguredClient(conf)) {
    System.out.println(drpc.execute("exclamation", "aaa"));
    System.out.println(drpc.execute("exclamation", "bbb"));
  }
}
Example 14
Source File: App.java From springBoot-study with Apache License 2.0
public static void main(String[] args) {
  // define a topology
  TopologyBuilder builder = new TopologyBuilder();
  builder.setSpout(str1, new TestSpout());
  builder.setBolt(str2, new TestBolt()).shuffleGrouping(str1);
  Config conf = new Config();
  conf.put("test", "test");
  try {
    // run the topology
    if (args != null && args.length > 0) {
      // with arguments, submit the job to the cluster, using the first argument as the topology name
      System.out.println("remote mode");
      StormSubmitter.submitTopology(args[0], conf, builder.createTopology());
    } else {
      // without arguments, submit locally in local mode
      System.out.println("local mode");
      LocalCluster cluster = new LocalCluster();
      cluster.submitTopology("111", conf, builder.createTopology());
      // Thread.sleep(2000);
      // // shut down the local cluster
      // cluster.shutdown();
    }
  } catch (Exception e) {
    e.printStackTrace();
  }
}
Example 15
Source File: TumblingWindowTopology.java From twister2 with Apache License 2.0
@Override
public StormTopology buildTopology() {
  TopologyBuilder builder = new TopologyBuilder();
  builder.setSpout("source", new TestWordSpout(), 1);
  builder.setBolt("windower", new TumblingWindowBolt()
      .withTumblingWindow(new BaseWindowedBolt.Count(10)), 1)
      .shuffleGrouping("source");
  return builder.createTopology();
}
Example 16
Source File: StormTestUtil.java From atlas with Apache License 2.0
public static StormTopology createTestTopology() {
  TopologyBuilder builder = new TopologyBuilder();
  builder.setSpout("words", new TestWordSpout(), 10);
  builder.setBolt("count", new TestWordCounter(), 3).shuffleGrouping("words");
  builder.setBolt("globalCount", new TestGlobalCount(), 2).shuffleGrouping("count");
  return builder.createTopology();
}
Example 17
Source File: MysqlExtractorTopology.java From DBus with Apache License 2.0
public void buildTopology(String[] args) {
  //TODO
  if (parseCommandArgs(args) != 0) {
    return;
  }

  TopologyBuilder builder = new TopologyBuilder();
  builder.setSpout("CanalClientSpout", new CanalClientSpout(), 1);
  builder.setBolt("KafkaProducerBolt", new KafkaProducerBolt(), 1).shuffleGrouping("CanalClientSpout");

  Config conf = new Config();
  conf.put(Constants.ZOOKEEPER_SERVERS, zkServers);
  conf.put(Constants.EXTRACTOR_TOPOLOGY_ID, extractorTopologyId);
  logger.info(Constants.ZOOKEEPER_SERVERS + "=" + zkServers);
  logger.info(Constants.EXTRACTOR_TOPOLOGY_ID + "=" + extractorTopologyId);
  conf.setNumWorkers(1);
  conf.setMaxSpoutPending(50);
  conf.setMessageTimeoutSecs(120);

  if (!runAsLocal) {
    conf.setDebug(false);
    try {
      //StormSubmitter.submitTopology("extractorTopologyId", conf, builder.createTopology());
      StormSubmitter.submitTopology(extractorTopologyId, conf, builder.createTopology());
    } catch (Exception e) {
      e.printStackTrace();
    }
  } else {
    conf.setDebug(false);
    LocalCluster cluster = new LocalCluster();
    //cluster.submitTopology("extractorTopologyId", conf, builder.createTopology());
    cluster.submitTopology(extractorTopologyId, conf, builder.createTopology());
  }
}
Example 18
Source File: SingleJoinExample.java From storm-net-adapter with Apache License 2.0
public static void main(String[] args) throws Exception {
  if (!NimbusClient.isLocalOverride()) {
    throw new IllegalStateException("This example only works in local mode. "
        + "Run with storm local not storm jar");
  }
  FeederSpout genderSpout = new FeederSpout(new Fields("id", "gender"));
  FeederSpout ageSpout = new FeederSpout(new Fields("id", "age"));

  TopologyBuilder builder = new TopologyBuilder();
  builder.setSpout("gender", genderSpout);
  builder.setSpout("age", ageSpout);
  builder.setBolt("join", new SingleJoinBolt(new Fields("gender", "age")))
      .fieldsGrouping("gender", new Fields("id"))
      .fieldsGrouping("age", new Fields("id"));

  Config conf = new Config();
  conf.setDebug(true);
  StormSubmitter.submitTopology("join-example", conf, builder.createTopology());

  for (int i = 0; i < 10; i++) {
    String gender;
    if (i % 2 == 0) {
      gender = "male";
    } else {
      gender = "female";
    }
    genderSpout.feed(new Values(i, gender));
  }
  for (int i = 9; i >= 0; i--) {
    ageSpout.feed(new Values(i, i + 20));
  }
}
Example 19
Source File: DemoTopologyBuilder.java From storm_spring_boot_demo with MIT License
/**
 * Simulates the implementation of a monitoring system
 * @return
 */
@Bean
public TopologyBuilder buildTopology() {
  TopologyBuilder builder = new TopologyBuilder();
  // generate random sentences and write them to Kafka (simulates producers writing messages to Kafka in a real production environment)
  builder.setSpout(kafkaProducerSpoutBuilder.getId(), kafkaProducerSpout, kafkaProducerSpoutBuilder.getParallelismHint());

  // WordCount statistics
  // read from Kafka as the data source and feed sentences downstream
  builder.setSpout(kafkaSpoutBuilder.getId(), kafkaSpout, kafkaSpoutBuilder.getParallelismHint());
  // split sentences into words; the upstream source is the Kafka spout (stands in for real business log cleansing rules)
  builder.setBolt(splitSentenceBoltBuilder.getId(), splitSentenceBolt, splitSentenceBoltBuilder.getParallelismHint())
      .shuffleGrouping(kafkaSpoutBuilder.getId());
  /**
   * Sliding-window word count; upstream is the sentence-splitting bolt. Uses fieldsGrouping, so the same word always goes to the same downstream bolt.
   * Data emitted by rollingWordCountBolt: iterate over Map<Object, Long>, then collector.emit(new Values(obj, count, actualWindowLengthInSeconds)),
   * i.e. each object's count is sent downstream separately.
   */
  builder.setBolt(rollingWordCountBoltBuilder.getId(), rollingWordCountBolt, rollingWordCountBoltBuilder.getParallelismHint())
      .fieldsGrouping(splitSentenceBoltBuilder.getId(), new Fields("word"));
  /**
   * Downstream receives the sliding-window data and writes it to Redis, so Redis then holds the real-time sliding-window data.
   * The front end can poll the keys on a timer for real-time monitoring. For Redis, fieldsGrouping is arguably unnecessary since it is single-threaded anyway.
   */
  builder.setBolt(wordCountToRedisBoltBuilder.getId(), wordCountToRedisBolt, wordCountToRedisBoltBuilder.getParallelismHint())
      .shuffleGrouping(rollingWordCountBoltBuilder.getId());
  // word-count data routed so the same word goes to the same bolt; upstream is the splitting bolt. Data is aggregated once per day.
  builder.setBolt(wordCountBoltBuilder.getId(), wordCountBolt, wordCountBoltBuilder.getParallelismHint())
      .fieldsGrouping(splitSentenceBoltBuilder.getId(), new Fields("word"));
  // the same word goes to one bolt (prevents multiple bolts' SQL connections from handling the same word); write the word counts to MySQL
  builder.setBolt(wordCountToMySQLBoltBuilder.getId(), wordCountToMySQLBolt, wordCountToMySQLBoltBuilder.getParallelismHint())
      .fieldsGrouping(wordCountBoltBuilder.getId(), new Fields("word"));
  // sliding-window topN. Note: by observation (source not yet read), this topN is cumulative, and dropping out of the topN resets the count to zero.
  /**
   * Produces intermediate Rankings, i.e. rankings of the words assigned to each bolt instance. This aggregation is similar to Hadoop's combiner (a mapper-side reduce).
   * The same object (the "obj" field) must be sent to the same bolt for ranking.
   * {@link org.apache.storm.starter.bolt.IntermediateRankingsBolt} receives tuples of the form (object, object_count, additionalField1, additionalField2, ..., additionalFieldN) and ranks objects by object_count.
   * {@link org.apache.storm.starter.tools.Rankings}: natural descending order
   * {@link org.apache.storm.starter.tools.Rankable}: interface for natural descending ordering
   * {@link org.apache.storm.starter.tools.RankableObjectWithFields}: an implementation of Rankable that ranks by Fields
   */
  builder.setBolt(intermediateRankingsWordCountBoltBuilder.getId(), intermediateRankingsWordCountBolt, intermediateRankingsWordCountBoltBuilder.getParallelismHint())
      .fieldsGrouping(rollingWordCountBoltBuilder.getId(), new Fields("obj"));
  /**
   * Gathers the Rankings from every IntermediateRankingsBolt into a single bolt instance (TotalRankingsBolt) for a unified ranking, similar to Hadoop's reduce.
   * globalGrouping means all data is handled by the same task.
   * {@link org.apache.storm.starter.bolt.TotalRankingsBolt} merges the Rankings.
   * PS: since this aggregates the Rankings from every IntermediateRankingsBolt, globalGrouping is needed to guarantee all tuples go to a single task of a single bolt (all data reaches the same bolt instance).
   * TODO: does that make ParallelismHint useless for globalGrouping?
   */
  builder.setBolt(totalRankingsWordCountBoltBuilder.getId(), totalRankingsWordCountBolt, totalRankingsWordCountBoltBuilder.getParallelismHint())
      .globalGrouping(intermediateRankingsWordCountBoltBuilder.getId());
  /**
   * Write the sliding-window topN statistics to Redis
   */
  builder.setBolt(wordCountTopNToRedisBoltBuilder.getId(), wordCountTopNToRedisBolt, wordCountTopNToRedisBoltBuilder.getParallelismHint())
      .globalGrouping(totalRankingsWordCountBoltBuilder.getId());
  return builder;
}
Example 20
Source File: LocalSumStormTopology.java From 163-bigdate-note with GNU General Public License v3.0
public static void main(String[] args) {
  // Build the TopologyBuilder from the Spout and Bolt. In Storm every job is submitted as a Topology,
  // and the Topology must specify the order of its Spouts and Bolts.
  TopologyBuilder builder = new TopologyBuilder();
  builder.setSpout("DataSourceSpout", new DataSourceSpout());
  builder.setBolt("SumBolt", new SumBolt()).shuffleGrouping("DataSourceSpout");

  // create a Storm cluster running in local mode
  LocalCluster cluster = new LocalCluster();
  cluster.submitTopology("LocalSumStormTopology", new Config(), builder.createTopology());
}