Quick test notes for Brisk.
Reference:
http://www.datastax.com/docs/0.8/brisk/about_pig
Set the environment variables:
    vi /etc/profile
    export BRISK_HOME=/usr/local/brisk-1.0
    export PATH=$PATH:$BRISK_HOME/bin
Make them take effect:
    . /etc/profile
On Linux systems, you need to run the following as root:
    su
    echo 1 > /proc/sys/vm/overcommit_memory
This is to avoid OOM errors when tasks are spawned.
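To make the setting survive a reboot, one option (my addition, not from the Brisk docs) is to persist it through sysctl:

    # persist the overcommit policy across reboots
    echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf
    sysctl -p    # reload kernel parameters now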
-- This step is not required --
If you compile from source, you may need ant to compile and download all dependencies:
    wget http://mirrors.kahuki.com/apache/ant/binaries/apache-ant-1.8.2-bin.tar.gz
    tar vxzf apache-ant-1.8.2-bin.tar.gz
    ...
    ant
-- end --
Start Cassandra with the built-in job/task trackers:
    ./bin/brisk cassandra -t
Q: Why the -t option?
A: The -t option starts Cassandra (with CassandraFS) and the Hadoop JobTracker and TaskTracker services.
Because CassandraFS replaces the Hadoop NameNode, there is no additional configuration to run MapReduce jobs in single mode versus distributed mode.
View the JobTracker:
http://localhost:50030
Examine CassandraFS:
    ./bin/brisk hadoop fs -lsr cfs:///
    drwxrwxrwx   - root root          0 2011-09-06 14:36 /tmp
    drwxrwxrwx   - root root          0 2011-09-06 14:36 /tmp/hadoop-root
    drwxrwxrwx   - root root          0 2011-09-06 14:36 /tmp/hadoop-root/mapred
    drwxrwxrwx   - root root          0 2011-09-06 14:36 /tmp/hadoop-root/mapred/system
    -rwxrwxrwx   1 root root          4 2011-09-06 14:36 /tmp/hadoop-root/mapred/system/jobtracker.info
Start the Hive shell or web UI:
    ./bin/brisk hive
or
    ./bin/brisk hive --service hwi
Open a web browser to http://localhost:9999/hwi
It still seems to have quite a few bugs.
Test 1: Pig
Upload the sample file:
    bin/brisk hadoop fs -put demos/pig/files/example.txt /
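To double-check that the upload landed in CFS, a quick listing (plain hadoop fs, nothing Brisk-specific):

    bin/brisk hadoop fs -ls cfs:///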
Create a keyspace. Open the CLI:

    resources/cassandra/bin/cassandra-cli

Connect to Cassandra:
    connect 127.0.0.1/9160;
    create keyspace PigDemo with placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy' and strategy_options = [{replication_factor:1}];
    exit;
Note: in a cluster environment you cannot use 127.0.0.1; you must use a real IP, or you get an error like this:
    [default@unknown] connect 127.0.0.1/9160;
    SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
    SLF4J: Defaulting to no-operation (NOP) logger implementation
    SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
    Connected to: "Brisk Cluster" on 127.0.0.1/9160
    [default@unknown] create keyspace PigDemo with placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy' and strategy_options = [{replication_factor:1}];
    Internal error processing system_add_keyspace
    --
    Connected to: "Brisk Cluster" on 10.129.6.36/9160
    [default@unknown] create keyspace PigDemo with placement_strategy = 'org.apache.cassandra.locator.SimpleStrategy' and strategy_options = [{replication_factor:1}];
    002f2970-d87c-11e0-0000-e2490229bfff
    Waiting for schema agreement...
    ... schemas agree across the cluster
-- aside (a failed attempt, can be skipped) --
Run:
    resources/pig/bin/pig -x demos/pig/001_sort-by-total-cfs.pig
    Exception in thread "main" java.lang.NoClassDefFoundError: jline/ConsoleReaderInputStream
    Caused by: java.lang.ClassNotFoundException: jline.ConsoleReaderInputStream
            at java.net.URLClassLoader$1.run(URLClassLoader.java:202)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:190)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:307)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:301)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:248)
    Could not find the main class: org.apache.pig.Main. Program will exit.
-- end aside --
It turns out that is not the right way to do it. Look at this:
    ./bin/brisk pig
    2011-09-06 15:03:30,470 [main] INFO  org.apache.pig.Main - Logging error messages to: /home/dev/brisk/brisk-1.0~beta2/pig_1315292610468.log
    2011-09-06 15:03:30,744 [main] INFO  org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: cfs:///
    2011-09-06 15:03:31,398 [main] INFO  org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to map-reduce job tracker at: localhost.localdomain:8012
    grunt> help
    Commands:
    <pig latin statement>; - See the PigLatin manual for details: http://hadoop.apache.org/pig
    File system commands:
        fs <fs arguments> - Equivalent to Hadoop dfs command: http://hadoop.apache.org/common/docs/current/hdfs_shell.html
    Diagnostic commands:
        describe <alias>[::<alias] - Show the schema for the alias. Inner aliases can be described as A::B.
        explain [-script <pigscript>] [-out <path>] [-brief] [-dot] [-param <param_name>=<param_value>] [-param_file <file_name>] [<alias>] - Show the execution plan to compute the alias or for entire script.
            -script - Explain the entire script.
            -out - Store the output into directory rather than print to stdout.
            -brief - Don't expand nested plans (presenting a smaller graph for overview).
            -dot - Generate the output in .dot format. Default is text format.
            -param <param_name - See parameter substitution for details.
            -param_file <file_name> - See parameter substitution for details.
            alias - Alias to explain.
        dump <alias> - Compute the alias and writes the results to stdout.
    Utility Commands:
        exec [-param <param_name>=param_value] [-param_file <file_name>] <script> - Execute the script with access to grunt environment including aliases.
            -param <param_name - See parameter substitution for details.
            -param_file <file_name> - See parameter substitution for details.
            script - Script to be executed.
        run [-param <param_name>=param_value] [-param_file <file_name>] <script> - Execute the script with access to grunt environment.
            -param <param_name - See parameter substitution for details.
            -param_file <file_name> - See parameter substitution for details.
            script - Script to be executed.
        kill <job_id> - Kill the hadoop job specified by the hadoop job id.
        set <key> <value> - Provide execution parameters to Pig. Keys and values are case sensitive. The following keys are supported:
            default_parallel - Script-level reduce parallelism. Basic input size heuristics used by default.
            debug - Set debug on or off. Default is off.
            job.name - Single-quoted name for jobs. Default is PigLatin:<script name>
            job.priority - Priority for jobs. Values: very_low, low, normal, high, very_high. Default is normal
            stream.skippath - String that contains the path. This is used by streaming.
            any hadoop property.
        help - Display this message.
        quit - Quit the grunt shell.
    grunt>
# So that explains it: the earlier invocation ran Pig locally; on a Brisk cluster you run scripts from the grunt shell, like this:
    grunt> run demos/pig/001_sort-by-total-cfs.pig
# And now it runs:
    grunt> -- load the score data into a pig relation
    grunt> score_data = LOAD 'cfs:///example.txt' USING PigStorage() AS (name:chararray, score:long);
    grunt>
    grunt> -- group tuples by user
    grunt> -- the PARALLEL keyword controls how many reducers are used
    grunt>
    grunt> name_group = GROUP score_data BY name PARALLEL 3;
    grunt>
    grunt> -- calculate the total score per user
    grunt>
    grunt> name_total = FOREACH name_group GENERATE group, COUNT(score_data.name), LongSum(score_data.score) AS total_score;
    grunt>
    grunt> -- order the results by score in descending order
    grunt>
    grunt> ordered_scores = ORDER name_total BY total_score DESC PARALLEL 3;
    grunt>
    grunt> -- output results to standard output
    grunt>
    grunt> DUMP ordered_scores;
    ...
    ...
Using brisktool: http://www.datastax.com/docs/0.8/brisk/about_pig
    usage: java BriskTool [-h|--host=<hostname>] [-p|--port=<#>] <command> <args>
     -h,--host <arg>   node hostname or ip address
     -p,--port <arg>   remote jmx agent port number

    Available commands:
      jobtracker - Returns the jobtracker hostname and port
      movejt - Move the jobtracker and notifies the Task trakers

    # bin/brisktool jobtracker
    localhost.localdomain:8012
The default JobTracker client port is 8012; if you are not sure which port yours uses, run the command above to find out.
Test 2: portfolio_manager
    cd demos/
    # ls
    pig  portfolio_manager
    # cd portfolio_manager/
    # ls
    10_day_loss.q  bin  portfolio.jar  README.txt  website
    # ./bin/pricer
    Created keyspaces. Sleeping 1s for propagation.
    total,interval_op_rate,interval_key_rate,avg_latency,elapsed_time
    10000,1000,1000,0.0137187,4

    ./bin/pricer -o INSERT_PRICES
    ./bin/pricer -o UPDATE_PORTFOLIOS
    ./bin/pricer -o INSERT_HISTORICAL_PRICES -n 100
In a cluster environment, initialize the keyspace and data as follows. (With Brisk's BriskSimpleSnitch, analytics nodes form a "Brisk" datacenter and real-time nodes a "Cassandra" datacenter, so "Brisk:1,Cassandra:1" requests one replica in each.)
    ./bin/pricer -o INSERT_PRICES --replication-strategy="org.apache.cassandra.locator.NetworkTopologyStrategy" --strategy-properties="Brisk:1,Cassandra:1"
-- Note: this did not seem to work for me; presumably the same IP problem as before.
-- Workaround --
Do it manually:
    bin/brisk hive
    > CREATE TABLE invites (foo INT, bar STRING) PARTITIONED BY (ds STRING);
    > LOAD DATA LOCAL INPATH 'resources/hive/examples/files/kv2.txt' OVERWRITE INTO TABLE invites PARTITION (ds='2008-08-15');
    > LOAD DATA LOCAL INPATH 'resources/hive/examples/files/kv3.txt' OVERWRITE INTO TABLE invites PARTITION (ds='2008-08-08');
    > SELECT count(*), ds FROM invites GROUP BY ds;
    > CREATE DATABASE PortfolioDemo;
    > CREATE DATABASE MyBriskDemoDB;
    > CREATE EXTERNAL TABLE MyTable(row_key string, col1 string, col2 string) STORED BY 'org.apache.hadoop.hive.cassandra.CassandraStorageHandler' TBLPROPERTIES ( "cassandra.ks.name" = "PortfolioDemo" );
    > CREATE EXTERNAL TABLE Users(userid string, name string, email string, phone string) STORED BY 'org.apache.hadoop.hive.cassandra.CassandraStorageHandler' WITH SERDEPROPERTIES ( "cassandra.columns.mapping" = ":key,user_name,primary_email,home_phone") TBLPROPERTIES ( "cassandra.range.size" = "100", "cassandra.slice.predicate.size" = "100" );
    > CREATE EXTERNAL TABLE PortfolioDemo.Stocks (row_key string, column_name string, value string) STORED BY 'org.apache.hadoop.hive.cassandra.CassandraStorageHandler';
    > CREATE EXTERNAL TABLE PortfolioDemo.PortfolioStocks (portfolio string, ticker string, number_shares string) STORED BY 'org.apache.hadoop.hive.cassandra.CassandraStorageHandler' WITH SERDEPROPERTIES ("cassandra.columns.mapping" = ":key,:column,:value" );
    > CREATE EXTERNAL TABLE PortfolioDemo.HistLoss (row_key string, worst_date string, loss string) STORED BY 'org.apache.hadoop.hive.cassandra.CassandraStorageHandler';
Then run the earlier commands again:
    ./bin/pricer -o INSERT_PRICES
    ./bin/pricer -o UPDATE_PORTFOLIOS
    ./bin/pricer -o INSERT_HISTORICAL_PRICES -n 100
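To check whether any rows actually landed, one can query the external tables from the Hive shell; a minimal check, assuming the tables created above:

    bin/brisk hive
    > USE PortfolioDemo;
    > SELECT * FROM Stocks LIMIT 10;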
-- After this the earlier errors were gone, but no data actually made it in.
In the CLI:

    drop keyspace PortfolioDemo;

Then re-run the pricer from the first step, and it works.
Start the demo website:
    cd website
    java -jar start.jar
http://localhost:8983/portfolio
Generate data:
    cd ..
    while true; do ./bin/pricer; sleep 1; done
The amounts shown in the report keep changing, but the 10-day historical stats under each chart show '?'. Right, they have not been computed yet, so let Hive crunch them:
    ./bin/brisk hive -f demos/portfolio_manager/10_day_loss.q
OK, that is all there is to Brisk on a single machine.
So far we have not changed a single configuration file, and the single node runs happily.
Next, let's set up a Brisk cluster.
Reference:
http://www.datastax.com/docs/0.8/brisk/init_brisk_cluster
Before starting, I need to settle the following:
- the cluster name (Cassandra uses the cluster name to tell clusters apart)
- the number of nodes in the cluster
- each node's IP
- each node's token; token generation: http://www.datastax.com/docs/0.8/brisk/init_brisk_cluster#token-gen
- the seed node configuration; Cassandra nodes can be split into real-time serving nodes and analytics nodes. (What is a seed? Cassandra nodes talk to each other via the gossip protocol. Once a cluster grows, you cannot maintain a complete node list on every node; each node only needs a seed through which it discovers the others, and so on. See http://wiki.apache.org/cassandra/GettingStarted)
We have 3 nodes; first, generate the tokens.
A token assigns a specific node its range of the data; assuming RandomPartitioner, this guarantees an even distribution of data across the nodes.
    # vi tokentool
    # enter:
    #!/usr/bin/python
    import sys
    if (len(sys.argv) > 1):
        num = int(sys.argv[1])
    else:
        num = int(raw_input("How many nodes are in your cluster? "))
    for i in range(0, num):
        print 'node %d: %d' % (i, (i * (2 ** 127) / num))
    # chmod a+x tokentool
    # ./tokentool
    How many nodes are in your cluster? 3
    node 0: 0
    node 1: 56713727820156410577229101238628035242
    node 2: 113427455640312821154458202477256070485
    #
Note: the generated tokens go into initial_token in the configuration below.
    10.129.8.58 (cassandra seed) platformA
    10.129.8.74 (brisk node)     platformB
    10.129.6.36 (brisk seed)     platformD
Node 36:
    cluster_name: 'Brisk Cluster'
    initial_token: 0
    seed_provider:
        # Addresses of hosts that are deemed contact points.
        # Cassandra nodes use this list of hosts to find each other and learn
        # the topology of the ring. You must change this if you are running
        # multiple nodes!
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              # seeds is actually a comma-delimited list of addresses.
              - seeds: "10.129.6.36,10.129.8.58"
    listen_address: 10.129.6.36
Node 58:
    cluster_name: 'Brisk Cluster'
    initial_token: 56713727820156410577229101238628035242
    seed_provider:
        # Addresses of hosts that are deemed contact points.
        # Cassandra nodes use this list of hosts to find each other and learn
        # the topology of the ring. You must change this if you are running
        # multiple nodes!
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              # seeds is actually a comma-delimited list of addresses.
              - seeds: "10.129.6.36,10.129.8.58"
    listen_address: 10.129.8.58
Node 74:
    cluster_name: 'Brisk Cluster'
    initial_token: 113427455640312821154458202477256070485
    seed_provider:
        # Addresses of hosts that are deemed contact points.
        # Cassandra nodes use this list of hosts to find each other and learn
        # the topology of the ring. You must change this if you are running
        # multiple nodes!
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              # seeds is actually a comma-delimited list of addresses.
              - seeds: "10.129.6.36,10.129.8.58"
    listen_address: 10.129.8.74
Start the services:
    On a Brisk node:
    brisk cassandra -t

    On a Cassandra node:
    brisk cassandra
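With all three nodes started, it is worth verifying that they actually joined one ring. A sketch using the nodetool bundled in the Brisk tarball (the path is an assumption by analogy with cassandra-cli; adjust to your layout):

    # every node should show Up/Normal and own the token computed above
    resources/cassandra/bin/nodetool -h 10.129.6.36 ring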
Installing JNA (Java Native Access)
JNA improves Brisk's memory performance.
To install JNA with Brisk
Download jna.jar from the JNA project site: http://java.net/projects/jna/sources/svn/show/trunk/jnalib/dist/.
Add jna.jar to $BRISK_HOME/lib/ or otherwise place it on the classpath.
Edit the file /etc/security/limits.conf, adding the following entries for the user or group that runs Brisk:
    $USER soft memlock unlimited
    $USER hard memlock unlimited
    cd lib
    curl -o jna.jar http://java.net/projects/jna/sources/svn/content/trunk/jnalib/dist/jna.jar?rev=1212
-- Ugh: in CFS, / turned out to be a file --
    [root@platformB brisk-1.0]# bin/brisk hive
    Hive history file=/tmp/root/hive_job_log_root_201109061811_84709751.txt
    hive> CREATE TABLE invites (foo INT, bar STRING)
        > PARTITIONED BY (ds STRING);
    FAILED: Error in metadata: MetaException(message:Got exception: java.io.IOException Can't make directory for path cfs://null/ since it is a file.)
    FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
    hive> quit;
—
    [root@pingtai brisk-1.0~beta2]# bin/brisk hadoop fs -put demos/pig/files/example.txt /example.txt
    put: Can't make directory for path / since it is a file.
—
-- Fix: stop all services, wipe the Cassandra data directories, and restart --
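A sketch of that cleanup; the directory paths are assumptions, so use whatever data_file_directories, commitlog_directory, and saved_caches_directory are set to in your cassandra.yaml:

    # on every node, with Brisk/Cassandra stopped:
    rm -rf /var/lib/cassandra/data/* /var/lib/cassandra/commitlog/* /var/lib/cassandra/saved_caches/*
    # then bring the service back up
    bin/brisk cassandra -t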
Command summary:
Check the JobTracker client info:
    bin/brisktool jobtracker
    netstat -ano | grep 8012
Start the Hive web UI on 6.36:
    bin/brisk hive --service hwi
URL: http://10.129.6.36:9999/hwi/show_databases.jsp
Inspect the Cassandra cluster:
    resources/cassandra/bin/cassandra-cli
    connect 10.129.6.36/9160;
    describe cluster;
—
    Cluster Information:
       Snitch: org.apache.cassandra.locator.BriskSimpleSnitch
       Partitioner: org.apache.cassandra.dht.RandomPartitioner
       Schema versions:
            91b5cac0-d86d-11e0-0000-2d3955226ebf: [10.129.6.36, 10.129.8.58, 10.129.8.74]
Configure /etc/hosts on every node, otherwise you may hit an exception like this:
    2011-09-06 20:03:34,675 WARN org.apache.hadoop.mapred.ReduceTask: java.net.UnknownHostException: platformB
            at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:177)
            at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
            at java.net.Socket.connect(Socket.java:529)
            at sun.net.NetworkClient.doConnect(NetworkClient.java:158)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:394)
            at sun.net.www.http.HttpClient.openServer(HttpClient.java:529)
            at sun.net.www.http.HttpClient.<init>(HttpClient.java:233)
            at sun.net.www.http.HttpClient.New(HttpClient.java:306)
            at sun.net.www.http.HttpClient.New(HttpClient.java:323)
            at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:860)
            at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:801)
            at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:726)
            at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.getInputStream(ReduceTask.java:1602)
            at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.setupSecureConnection(ReduceTask.java:1559)
            at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.getMapOutput(ReduceTask.java:1467)
            at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.copyOutput(ReduceTask.java:1378)
            at org.apache.hadoop.mapred.ReduceTask$ReduceCopier$MapOutputCopier.run(ReduceTask.java:1310)
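The fix is to map every hostname on every node. A sketch of the /etc/hosts entries for this 3-node cluster (the same mapping, extended with a 4th node, appears again at the end of these notes):

    # append to /etc/hosts on every node
    10.129.6.36 platformd
    10.129.8.74 platformb
    10.129.8.58 platforma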
[Configuring OpsCenter]
    cd /tmp/
    rpm -Uvh http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-4.noarch.rpm
    vi /etc/yum.repos.d/opscenter.repo
—
    [opscenter]
    name= DataStax OpsCenter
    baseurl=http://m_medcl.net-brisk:UW7gaYSoSh7oNR2@rpm.opsc.datastax.com/free
    enabled=1
    gpgcheck=0
—
    yum install opscenter-free
    /etc/init.d/opscenterd status
    /etc/init.d/opscenterd start
    netstat -ano | grep 88
    vi /etc/opscenter/opscenterd.conf
—
    ...
    [webserver]
    port = 8888
    interface = 10.129.6.36    # replace with the public IP

    [cassandra]
    # a comma-separated list of places to try for a connection to your Cassandra
    # cluster:
    seed_hosts = 10.129.6.36,10.129.8.58
    ...
—
Restart the service:
    /etc/init.d/opscenterd restart
Open the site: http://10.129.6.36:8888/opscenter/index.html
    Error Loading OpsCenter
    OpsCenter is having trouble gathering basic information about your cluster.
    This usually means OpsCenter cannot connect to your cluster via thrift.
    Be sure to check your seed hosts in opscenterd.conf.
—-
Error: No Cassandra connections available
—
Configuring JMX Connectivity on the Monitored Cluster
OpsCenter monitors the cluster via JMX, so apparently Cassandra needs some more configuration.
    vi /usr/local/brisk-1.0/resources/cassandra/conf/cassandra-env.sh

Find (and set):
    JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=10.129.6.36"
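After restarting the node, one can confirm the JMX agent is reachable; this assumes the default JMX_PORT of 7199 from cassandra-env.sh (check yours if unsure):

    netstat -ano | grep 7199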
Restarted OpsCenter: still no luck.
Open the config again:
    vi /etc/opscenter/opscenterd.conf

    # the API (Thrift) port on your Cassandra cluster
    api_port = 9160    # uncomment this line
Restart the service:
    /etc/init.d/opscenterd restart
Still broken. Come on, what is going on?
Check the log:
    vi /var/log/opscenter/opscenterd.log
    2011-09-07 11:22:20+0800 [] INFO: Starting factory <opscenterd.WebServer.OpsCenterdWebServer instance at 0x4b00ab8>
    2011-09-07 11:22:20+0800 [] INFO: Unhandled error in Deferred:
    2011-09-07 11:22:20+0800 [] Unhandled Error
            Traceback (most recent call last):
              File "/usr/lib/python2.6/site-packages/twisted/scripts/_twistd_unix.py", line 317, in startApplication
                app.startApplication(application, not self.config['no_save'])
              File "/usr/lib/python2.6/site-packages/twisted/application/app.py", line 653, in startApplication
                service.IService(application).startService()
              File "/usr/lib/python2.6/site-packages/twisted/application/service.py", line 277, in startService
                service.startService()
              File "/usr/lib/python2.6/site-packages/twisted/internet/defer.py", line 1141, in unwindGenerator
                return _inlineCallbacks(None, f(*args, **kwargs), Deferred())
            --- <exception caught here> ---
              File "/usr/lib/python2.6/site-packages/twisted/internet/defer.py", line 1020, in _inlineCallbacks
                result = g.send(result)
              File "/usr/lib/python2.6/site-packages/opscenterd/OpsCenterdService.py", line 223, in startService
              File "/usr/lib/python2.6/site-packages/twisted/application/service.py", line 277, in startService
                service.startService()
              File "/usr/lib/python2.6/site-packages/opscenterd/jmxadapt/JmxJythonService.py", line 318, in startService
              File "/usr/lib/python2.6/site-packages/opscenterd/jmxadapt/JmxJythonService.py", line 346, in setupJythonEnv
              File "/usr/lib64/python2.6/os.py", line 157, in makedirs
                mkdir(name, mode)
            exceptions.OSError: [Errno 13] Permission denied: '/root/.jython_cache'

    2011-09-07 11:22:26+0800 [] Problem while calling ClusterController
            Traceback (most recent call last):
            Failure: opscenterd.CassandraService.NoCassandraConnection: No Cassandra connections available

    2011-09-07 11:22:28+0800 [] Problem while calling ClusterController
            Traceback (most recent call last):
            Failure: opscenterd.CassandraService.NoCassandraConnection: No Cassandra connections available
Seriously... a permission error on /root/.jython_cache.
Starting it this way instead works, presumably because service runs the init script with a clean environment, so the Jython cache is not written under /root:
    service opscenterd start
Manual agent installation:
    cd /usr/share/opscenter/agent
    scp opscenter-agent.tar.gz platforma:/tmp
    --
    cd /tmp/
    ls
    tar -xzf opscenter-agent.tar.gz
    cd opscenter-agent
    ls
    ./bin/install_agent.sh opscenter-agent.rpm 10.129.6.36 10.129.6.36
Adding a 4th node:
    tar cf brisk.tar brisk-1.0/
    scp brisk.tar dev@10.129.6.62:/tmp/
    ./tokentool
    How many nodes are in your cluster? 4
    node 0: 0
    node 1: 42535295865117307932921825928971026432
    node 2: 85070591730234615865843651857942052864
    node 3: 127605887595351923798765477786913079296
On each node:
    vi /usr/local/brisk-1.0/resources/cassandra/conf/cassandra.yaml
    vi resources/cassandra/conf/cassandra-env.sh
    vi /etc/hosts

    10.129.6.36 platformd
    10.129.8.74 platformb
    10.129.8.58 platforma
    10.129.6.62 platformc
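One caveat (my addition, based on standard Cassandra behavior rather than these notes): initial_token only takes effect the first time a node starts, so the three nodes already in the ring need their recalculated tokens applied with nodetool move, for example:

    # on each pre-existing node, move it to its recalculated token
    resources/cassandra/bin/nodetool -h 10.129.8.58 move 42535295865117307932921825928971026432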