
DataStax Brisk Installation


Download the tarball: https://github.com/riptano/brisk/archives/brisk1

// The tarball contains all the components: Brisk 1.0, Pig, Hive, ...

Alternatively, install from packages.
On Red Hat or CentOS:
Step 1: install EPEL (Extra Packages for Enterprise Linux), which provides packages that Brisk depends on, such as jna and jpackage-utils.
If you are not sure whether EPEL is installed, check for the epel.repo and epel-testing.repo files under /etc/yum.repos.d.
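
For example, a quick way to check whether EPEL is present, and to install it if not (the epel-release RPM URL is a placeholder; use the one for your EL version from the Fedora project):

    # check whether the EPEL repository definitions are already installed
    ls /etc/yum.repos.d/ | grep -i epel

    # if nothing is listed, install the epel-release package for your
    # distribution version (URL is a placeholder)
    sudo rpm -Uvh <epel-release RPM URL for your EL version>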

If you see the warning "RPM-GPG-KEY-EPEL key not being found", you can ignore it or download the key from https://fedoraproject.org/keys

OK, now install Brisk itself.

Add the repository

Replace the placeholder with the value for your own system; there are EL and Fedora variants.

The repo file after substitution looks like this:

Install:
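
A rough sketch of the repository definition and install step (the baseurl is a placeholder and the package name brisk-full is an assumption; confirm both against the DataStax documentation):

    # /etc/yum.repos.d/datastax.repo -- repository skeleton
    [datastax]
    name=DataStax Repo for Brisk
    baseurl=<DataStax yum repository URL, EL or Fedora variant>
    enabled=1
    gpgcheck=0

    # install Brisk (package name assumed)
    sudo yum install brisk-full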

On Debian:
Edit the file /etc/apt/sources.list

Choose one of lenny, lucid, maverick, or squeeze.

For Debian 5.0, use the following:

Add the DataStax key:

Install:
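
A sketch of the Debian flow (the repository URL, key URL, and package name are placeholders or assumptions; substitute the values from the DataStax documentation):

    # /etc/apt/sources.list -- add the DataStax repository
    # (<distro> is one of lenny, lucid, maverick, squeeze)
    deb <DataStax Debian repository URL> <distro> main

    # add the DataStax repository key (key URL is a placeholder)
    curl -L <DataStax repository key URL> | sudo apt-key add -

    # install Brisk (package name assumed)
    sudo apt-get update
    sudo apt-get install brisk-full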


— Source: http://www.datastax.com/docs/0.8/brisk/install_brisk_packages —

About Brisk Packaged Installations

The packaged releases create a user cassandra. When starting brisk as a service, the Cassandra and Hadoop tracker services run as this user. A service initialization script is located in /etc/init.d/brisk. Run levels are not set by the package.
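
Because run levels are not set, you may want to enable the service yourself, for example with the standard service tools (and the /etc/init.d/brisk script noted above):

    # Red Hat / CentOS
    sudo chkconfig --add brisk
    sudo chkconfig brisk on

    # Debian / Ubuntu
    sudo update-rc.d brisk defaults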

The package installs into the following directories:

Brisk / Cassandra Directories

  • /var/lib/cassandra (Cassandra and CassandraFS data directories)
  • /var/log/cassandra
  • /var/run/cassandra
  • /usr/share/brisk/cassandra (Cassandra environment settings)
  • /usr/share/brisk/cassandra/lib
  • /usr/share/brisk-demos (Portfolio Manager demo application)
  • /usr/bin
  • /usr/sbin
  • /etc/brisk/cassandra (Cassandra configuration files)
  • /etc/init.d
  • /etc/security/limits.d
  • /etc/default/

Hadoop Directories

  • /usr/share/brisk/hadoop (Hadoop environment settings)
  • /etc/brisk/hadoop (Hadoop configuration files)

Hive Directories

  • /usr/share/brisk/hive (Hive environment settings)
  • /etc/brisk/hive (Hive configuration files)

Pig Directories

  • /usr/share/brisk/pig (Pig environment settings)
  • /etc/brisk/pig (Pig configuration files)

Next Steps

For next steps see Initializing a Brisk Cluster and then Starting Brisk.

–initial–

Configuring and Initializing a Brisk Cluster

Before you can start Brisk, whether on a single-node or multi-node cluster, there are a few Cassandra configuration properties you must set on each node in the cluster. These are set in the cassandra.yaml file (located in /etc/brisk/cassandra in packaged installations, or in $BRISK_HOME/resources/cassandra/conf in binary distributions).

Initializing a Single-Node Brisk Cluster (for evaluation purposes)

Brisk is intended to be run on multiple nodes; however, you may want to start with a single-node Brisk cluster for evaluation purposes. To start Brisk on a single node:

  1. Set the following properties in the cassandra.yaml file:
  2. Start Brisk.
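
A minimal sketch of these two steps, assuming a packaged install, a node listening on localhost, and that the -t flag described below is what enables the Hadoop trackers:

    # /etc/brisk/cassandra/cassandra.yaml -- assumed single-node settings
    cluster_name: 'Brisk Evaluation'
    initial_token: 0
    listen_address: localhost
    rpc_address: localhost

    # start Brisk with the Hadoop Job Tracker and Task Tracker enabled
    # (command form inferred from the description of the -t option below)
    brisk cassandra -t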

The -t option starts Cassandra (with CassandraFS) and the Hadoop Job Tracker and Task Tracker services. Because there is no Hadoop NameNode with CassandraFS, there is no additional configuration to run MapReduce jobs in single mode versus distributed mode.

When running on a single node, there are no additional steps to configure the Cassandra seed node and Brisk job tracker node, as they are automatically set to localhost.

Initializing a Multi-Node Brisk Cluster

Before you start a multi-node Brisk cluster you must determine the following:

  • A name for your cluster
  • How many total nodes your Brisk cluster will have
  • The IP addresses of each node
  • The token for each node (see Generating Tokens). If you are deploying a mixed-workload Brisk Cluster, make sure to alternate token assignments between Cassandra nodes and Brisk nodes so that replicas are evenly distributed around the Cassandra ring.
  • Which nodes will serve as the seed nodes. If you are configuring a mixed-workload cluster, you should have at least one seed node for each side (the Cassandra real-time side and the Brisk analytics side).
  • If you intend to run a mixed-workload cluster, determine which nodes will serve which purpose.

For example, suppose you are starting a 6 node mixed-workload cluster with 3 Brisk nodes and 3 Cassandra nodes. The nodes have the following IPs:

  • node0 (Cassandra seed) 110.82.155.0
  • node1 (Cassandra) 110.82.155.1
  • node2 (Cassandra) 110.82.155.2
  • node3 (Brisk seed) 110.82.155.3
  • node4 (Brisk) 110.82.155.4
  • node5 (Brisk) 110.82.155.5

The cassandra.yaml file for each node would have the following modified property settings. Note that in a mixed-workload cluster, the token placement alternates between Cassandra and Brisk nodes. This ensures even distribution of replicas on both sides of the cluster. For example:

  • node 0: 0
  • node 3: 28356863910078205288614550619314017621
  • node 1: 56713727820156410577229101238628035242
  • node 4: 85070591730234615865843651857942052864
  • node 2: 113427455640312821154458202477256070485
  • node 5: 141784319550391026443072753096570088106

Node0

Node1

Node2

Node3

Node4

Node5
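
As a sketch of the per-node cassandra.yaml settings for two of the nodes, using the IPs and tokens from the example (property names are standard Cassandra 0.8 settings; the exact seed-list syntax depends on your Cassandra version):

    # node0 (Cassandra seed) -- cassandra.yaml
    cluster_name: 'Brisk Cluster'
    initial_token: 0
    listen_address: 110.82.155.0
    # seeds: list both seed nodes, 110.82.155.0 and 110.82.155.3

    # node3 (Brisk seed) -- cassandra.yaml
    cluster_name: 'Brisk Cluster'
    initial_token: 28356863910078205288614550619314017621
    listen_address: 110.82.155.3
    # seeds: same list on every node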

Generating Tokens

Tokens are used to assign a range of data to a particular node. Assuming you are using the RandomPartitioner, this approach will ensure even data distribution.

  1. Create a new file for your token generator program:
  2. Paste the following Python program into this file (see the sketch after this list):
  3. Save and close the file and make it executable:
  4. Run the script:
  5. When prompted, enter the total number of nodes in your cluster:
  6. On each node, edit the cassandra.yaml file and enter its corresponding token value in the initial_token property.
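
A minimal sketch of such a token generator for the RandomPartitioner, in the Python 2 style current when Brisk shipped; it divides the 2^127 token space evenly, which reproduces the token values in the example above (in a mixed-workload cluster, alternate the resulting tokens between Cassandra and Brisk nodes):

    #!/usr/bin/python
    # print one evenly spaced RandomPartitioner token per node
    import sys

    if len(sys.argv) > 1:
        num = int(sys.argv[1])
    else:
        num = int(raw_input("How many nodes are in your cluster? "))

    for i in range(num):
        print 'token %d: %d' % (i, (i * (2 ** 127)) // num)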

Starting a Brisk Cluster

After you have installed and configured Brisk on one or more nodes, you are ready to start your Brisk cluster. If you want to run a multi-node Brisk cluster, you must first install the Brisk packages on each node, and then configure each node according to the instructions in Initializing a Brisk Cluster.

Packaged installations include startup scripts for running Brisk as a service. Binary packages do not.

Starting Brisk as a Stand-Alone Process

If running a mixed workload cluster, determine which nodes to start as Cassandra nodes and which nodes to start as Brisk nodes. To start Brisk as a service see Starting Brisk as a Service. Otherwise, you can start the Brisk server process as follows:

On a Brisk node:

On a Cassandra node:
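
A sketch of the start commands, based on the description of the -t option earlier (assuming the brisk launcher is on your PATH, or in $BRISK_HOME/bin for binary installs):

    # on a Brisk (analytics) node: Cassandra, CassandraFS, and the Hadoop trackers
    brisk cassandra -t

    # on a Cassandra (real-time) node: Cassandra only
    brisk cassandra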

Starting Brisk as a Service

Packaged installations provide startup scripts in /etc/init.d for starting Brisk as a service. Before starting Brisk as a service on a node, you must first configure the Cassandra service to start the Hadoop Job Tracker and Task Tracker services as well.

Note

For mixed-workload clusters, nodes that are Cassandra-only can simply start the Cassandra service (skip step 1).

  1. Create the file /etc/default/brisk, and add the following line as the contents of this file:
  2. Start the Brisk service:
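
A sketch of these two steps (the HADOOP_ENABLED variable name is an assumption based on later DataStax packaging; confirm it against your package documentation):

    # /etc/default/brisk -- enable the Hadoop Job Tracker / Task Tracker
    HADOOP_ENABLED=1    # variable name assumed

    # start the service
    sudo service brisk start    # or: sudo /etc/init.d/brisk start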

Note

On Enterprise Linux systems, the Brisk service runs as a java process. On Debian systems, the Brisk service runs as a jsvc process.

–binary install–

Installing the Brisk Binary Distribution

Binary distributions of Brisk are available from the DataStax website.

To run Brisk, you will need to install a Java Virtual Machine (JVM). DataStax recommends installing the most recently released version of the Sun JVM. Versions earlier than 1.6.0_19 are specifically not recommended.

  1. Download the distribution to a location on your machine and unpack it:
  2. For convenience, you may want to set the following environment variables:
  3. Create the data and logging directories needed by Brisk Cassandra. By default, Cassandra uses /var/lib/cassandra and /var/log/cassandra. To create these directories, run the following commands, where $USER is the user that will run Brisk:
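
A sketch of these three steps (the tarball name and install location are placeholders):

    # 1. unpack the distribution
    tar -xzf brisk-<version>-bin.tar.gz -C /opt

    # 2. convenience environment variables (install path assumed)
    export BRISK_HOME=/opt/brisk-<version>
    export PATH=$PATH:$BRISK_HOME/bin

    # 3. create the data and log directories and hand them to the Brisk user
    sudo mkdir -p /var/lib/cassandra /var/log/cassandra
    sudo chown -R $USER /var/lib/cassandra /var/log/cassandra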

About Brisk Binary Installations

Brisk Directories

  • bin (Brisk start scripts)
  • demos (Portfolio Manager Demo)
  • interface
  • javadoc
  • lib
  • resources/cassandra/bin (Cassandra utilities)
  • resources/cassandra/conf (Cassandra configuration files)
  • resources/hadoop (Hadoop installation)
  • resources/hive (Hive installation)
  • resources/pig (Pig installation)

Installing JNA

Installing JNA (Java Native Access) on Linux platforms can improve Brisk memory usage. With JNA installed and configured as described in this section, Linux does not swap out the JVM, and thus avoids related performance issues.

To install JNA with Brisk

  1. Download jna.jar from the JNA project site.
  2. Add jna.jar to $BRISK_HOME/lib/ or otherwise place it on the classpath.
  3. Edit the file /etc/security/limits.conf, adding the following entries for the user or group that runs Brisk:
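
The usual limits.conf entries for Cassandra with JNA are memlock limits, for example (replace cassandra with the user or group that runs Brisk):

    # /etc/security/limits.conf
    cassandra  soft  memlock  unlimited
    cassandra  hard  memlock  unlimited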

 

Installing the OpsCenter Dashboard
http://www.datastax.com/docs/opscenter/install_opscenter#opscenterd-install

OpsCenter packages are available from DataStax. You will need a username and password to access the OpsCenter package repositories. If you registered online, these credentials should have been sent to you in an email. If you do not have your OpsCenter credentials, contact DataStax Support.

Installing OpsCenter on Debian and Ubuntu

DataStax provides OpsCenter package repositories for Debian 6.0 (Squeeze), Debian 5.0 (Lenny), Ubuntu Lucid (10.04), Ubuntu Maverick (10.10) and Ubuntu Natty Narwhal (11.04). There are different package repositories for the free and paid versions of OpsCenter.

These instructions assume that you have the aptitude package management application installed, and that you have root access on the machine where you are installing. If you have not already, log in as root. Optionally, you can run the commands using sudo.

  1. Edit the aptitude repository source list file (/etc/apt/sources.list).
  2. In this file, add a line for the DataStax OpsCenter repository, where <username> and <password> are the username and password from your OpsCenter registration email. Note the different repository locations for the free and paid versions of OpsCenter. For the free version of OpsCenter:

    For the paid version of OpsCenter:

  3. In this file, also add a line for the general DataStax repository (for installing dependent packages such as jna). Add the appropriate repository location for your operating system, where OSType is lenny, lucid, maverick, squeeze, or natty:

    For example, if installing on Ubuntu 10.10 (Maverick):

    If installing on Debian 5.0 (Lenny), add the lenny-backports repository definition as well.

  4. Save and close the /etc/apt/sources.list file after you are done adding the appropriate DataStax repositories.
  5. Add the DataStax repository key to your aptitude trusted keys.
  6. (Debian 5.0 Only) If installing on Debian 5.0 (Lenny), run the following commands as well.
  7. Install the OpsCenter package using aptitude. Note the different package names for the free and paid versions of OpsCenter. For the free version of OpsCenter:

    For the paid version of OpsCenter:
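
A sketch of step 7 (the package names opscenter-free and opscenter mirror the free/paid naming used on this page; confirm them against the repository listing):

    # free version
    sudo aptitude install opscenter-free

    # paid version
    sudo aptitude install opscenter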

Installing OpsCenter on RHEL and CentOS

DataStax provides yum repositories for Red Hat Enterprise Linux (RHEL) and CentOS versions 5.4, 5.5, and 5.6. There are different package repositories for the free and paid versions of OpsCenter.

These instructions assume that you have the yum package management application installed, and that you have root access on the machine where you are installing the OpsCenter console. If you have not already, log in as root. Optionally, you can run the commands using sudo.

  1. EPEL (Extra Packages for Enterprise Linux) contains dependent packages required by OpsCenter, such as jna and jpackage-utils. EPEL must be installed on the OpsCenter machine. To install the epel-release package:
  2. Add a yum repository specification for the DataStax OpsCenter repository in /etc/yum.repos.d. For example:
  3. In this file add the following lines where <username> and <password> are the username and password from your OpsCenter registration email. Note the different repository locations for the free and paid versions of OpsCenter.

For the free version of OpsCenter:

For the paid version of OpsCenter:

  4. Install the OpsCenter package using yum. Note the different package names for the free and paid versions of OpsCenter. For the free version of OpsCenter:

    For the paid version of OpsCenter:
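
Similarly, a sketch of the yum install step (package names assumed as above):

    # free version
    sudo yum install opscenter-free

    # paid version
    sudo yum install opscenter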

About Your OpsCenter Installation

The OpsCenter packaged releases create an opscenter user. When starting the OpsCenter dashboard as a service, the service runs as this user. A service initialization script is located in /etc/init.d. Run levels are not set by the package.

Before starting OpsCenter and installing agents, make the required settings described in Configuring OpsCenter.

The OpsCenter package installs into the following directories:

  • /var/lib/opscenter (SSL certificates for encrypted agent/dashboard communications)
  • /var/log/opscenter (log directory)
  • /var/run/opscenter (runtime files)
  • /usr/share/opscenter (jar, agent, web application, and binary files)
  • /etc/opscenter (configuration files)
  • /etc/init.d (service startup script)
  • /etc/security/limits.d (OpsCenter user limits)

OpsCenter requires the following ports:

  • 8888 – The OpsCenter web server listens on this port. Configurable in opscenterd.conf.
  • 61620 – The port agents use to connect to OpsCenter.

Additionally, OpsCenter agents gather JMX information from the local node, so they need all ports (1024-65535) open on the local interface.

 

 
