Tag: Hadoop

A China mirror for Cloudera CDH3

<Category: Hadoop> Comments Off on A China mirror for Cloudera CDH3

Contributing a China mirror for Cloudera CDH3. # How do you use it?
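The how-to itself is behind the cut; as a rough sketch (the baseurl below is a placeholder, not the actual mirror address from the post), using a CDH3 yum mirror usually just means pointing the Cloudera repo file at it:

# /etc/yum.repos.d/cloudera-cdh3.repo -- baseurl is a placeholder for the mirror address
[cloudera-cdh3]
name=Cloudera CDH3
baseurl=http://your-mirror-host/cdh/3/
gpgcheck=0

# then refresh and install (hadoop-0.20 was CDH3's Hadoop package name)
yum clean metadata
yum install hadoop-0.20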

Read the rest of this entry »

Source: A China mirror for Cloudera CDH3

How to run a Hadoop streaming job over Brisk

<Category: Hadoop> Comments Off on How to run a Hadoop streaming job over Brisk

--- error ---
[root@platformD testmr]# ./job.sh
rmr: cannot remove /test_output: No such file or directory.
File: /tmp/testmr/-Dbrisk.job.tracker=10.129.6.36:8012 does not exist, or is not readable
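Judging by the second line, the streaming jar took -Dbrisk.job.tracker=… for an input file, which usually means the -D generic options were placed after the streaming options instead of before them. A minimal sketch of the usual ordering (the jar path, input/output paths, and mapper/reducer scripts are placeholders):

hadoop jar $HADOOP_HOME/contrib/streaming/hadoop-streaming-*.jar \
  -D brisk.job.tracker=10.129.6.36:8012 \
  -input /test_input \
  -output /test_output \
  -mapper ./mapper.py \
  -reducer ./reducer.py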

Read the rest of this entry »

Source: How to run a Hadoop streaming job over Brisk

Brisk deployment and debugging: full notes

<Category: cassandra, Hadoop, nosql> Comments Off on Brisk deployment and debugging: full notes

Quick notes from testing Brisk.
Reference:
http://www.datastax.com/docs/0.8/brisk/about_pig
Read the rest of this entry »

Source: Brisk deployment and debugging: full notes

Installing DataStax Brisk

<Category: cassandra> Comments Off on Installing DataStax Brisk

https://github.com/riptano/brisk/archives/brisk1

// The tarball contains all the components: Brisk 1.0, Pig, Hive, Hadoop, Cassandra

Or install from packages.
On RedHat or CentOS:
Step one: install EPEL (Extra Packages for Enterprise Linux), which carries packages Brisk depends on, such as jna and jpackage-utils.
If you are not sure whether EPEL is already installed, look for the epel.repo and epel-testing.repo files under /etc/yum.repos.d.

If you hit the warning RPM-GPG-KEY-EPEL key not being found, you can ignore it or download the key from https://fedoraproject.org/keys
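A minimal sketch of installing EPEL on CentOS 5 (the exact epel-release version and mirror path are assumptions; check the EPEL wiki for the current one):

rpm -Uvh http://download.fedoraproject.org/pub/epel/5/x86_64/epel-release-5-4.noarch.rpm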

OK, now on to actually installing Brisk.

Add the repository:

Replace it with the flavor matching your system; there are EL and Fedora variants.

After substitution the repo file looks like this:
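The repo file didn't survive in this excerpt; a sketch of what a DataStax repo file of that era looked like (the baseurl is an assumption, take the real one from the DataStax docs):

[datastax]
name=DataStax Repo for Brisk
baseurl=http://rpm.datastax.com/EL/5
enabled=1
gpgcheck=0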

Install:
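The command was stripped here; assuming the package name DataStax used at the time (brisk-full bundled Hadoop, Hive, Pig, and Cassandra):

yum install brisk-full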

On Debian:
Edit /etc/apt/sources.list

Pick one of lenny, lucid, maverick, or squeeze.

For Debian 5.0 use the following:
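The sources.list line is missing from this excerpt; a sketch assuming DataStax's Debian repository layout of the time (host and suite are assumptions; lenny is Debian 5.0's codename):

deb http://debian.datastax.com/lenny lenny main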

Add the DataStax key:
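A sketch, assuming the repo_key path DataStax published:

curl -L http://debian.datastax.com/debian/repo_key | sudo apt-key add -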

Install:
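Again assuming the brisk-full package name:

apt-get update
apt-get install brisk-full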

Read the rest of this entry »

Source: Installing DataStax Brisk

Hadoop and MapReduce: Big Data Analytics [Gartner]

<Category: Hadoop> Comments Off on Hadoop and MapReduce: Big Data Analytics [Gartner]

Bookmarking this. Download link: http://dl.medcl.com/get.php?id=29&path=books%2Fgartner%2CHadoop+and+MapReduce+Big+Data+Analytics.7z

Read the rest of this entry »

Source: Hadoop and MapReduce: Big Data Analytics [Gartner]

Hive: Derby lock and directory permission errors

<Category: Hadoop> Comments Off on Hive: Derby lock and directory permission errors

FAILED: Error in metadata: javax.jdo.JDOFatalDataStoreException: Cannot get a connection, pool error Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection.
NestedThrowables:
org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection.
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask
Hive history file=/tmp/dev/hive_job_log_dev_201107062337_381665684.txt
FAILED: Error in semantic analysis: line 1:83 Exception while processing raw_daily_stats_table: Unable to fetch table raw_daily_stats_table

Check the Hive config file /etc/hive/conf/hive-default.xml to find where your metastore data is kept.

Opening the HDFS directory
/user/hive/warehouse

I found that the raw_daily_stats_table directory had become owned by root, while I was running as dev.

So I ran:
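The command itself was stripped from this excerpt; given the symptom above, the obvious fix is a recursive chown, run as the HDFS superuser (a sketch, with the path taken from above):

hadoop fs -chown -R dev /user/hive/warehouse/raw_daily_stats_table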

Ran it again and it still threw the same error, good grief:

FAILED: Error in metadata: javax.jdo.JDOFatalDataStoreException: Cannot get a connection, pool error Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection.
NestedThrowables:
org.apache.commons.dbcp.SQLNestedException: Cannot get a connection, pool error Could not create a validated object, cause: A read-only user or a user in a read-only database is not permitted to disable read-only mode on a connection.
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask

Opening the config file /etc/hive/conf/hive-site.xml turned up the following property:
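The property didn't make it into this excerpt; for a Derby-backed metastore it would be the JDO connection URL, something along these lines (the value is an assumption, matching the /var/lib/hive path that shows up below):

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:derby:;databaseName=/var/lib/hive/metastore/metastore_db;create=true</value>
</property>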

Then I went to the corresponding directory

and killed db.lck and dbex.lck.
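In shell terms (the path comes from the connection URL above):

rm /var/lib/hive/metastore/metastore_db/*.lck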

Ran the Hadoop scripts again: OK~

Source: Hive: Derby lock and directory permission errors

Trending topics: handling dates and empty directories

<Category: Hadoop, Linux> Comments Off on Trending topics: handling dates and empty directories

 

First check how many files are in the Hadoop directory, then decide whether to add that directory to the input:
[dev@platformB dailyrawdata]$  hadoop fs -ls /trendingtopics |wc -l
3

How to compute dates:
[dev@platformB dailyrawdata]$ lastdate=20110619
[dev@platformB dailyrawdata]$ echo $lastdate
20110619
[dev@platformB dailyrawdata]$ echo `date --date "$lastdate + 1 day" +"%Y%m%d"`
20110620

[dev@platformB dailyrawdata]$ echo D9=`date --date "now -20 day" +"%Y%m%d"`
D9=20110530

 

[dev@platformB dailyrawdata]$ D1=`date --date "now" +"%Y/%m/%d"`
[dev@platformB dailyrawdata]$ echo $D1
2011/06/20

Note: there must be no space after the equals sign, otherwise:

[dev@platformB dailyrawdata]$ D1= `date --date "now" +"%Y/%m/%d"`
-bash: 2011/06/20: No such file or directory

 

Copy today's files to the target directory:

DAYSTR=`date --date "now" +"%Y/%m/%d"`

hadoop fs -copyFromLocal dailyrawdata/* /trendingtopics/data/raw/$DAYSTR

 

Hold on, though: when a directory is empty and the Hadoop streaming job finds no files matching the input pattern you gave it, it throws an exception, and the job fails.

I searched for a long time without finding a good solution (if anyone knows a better way, please do share). For now I settle for checking whether the directory is empty and, if it is, redirecting the input to an empty file.

#touch a blank file on HDFS
BLANK="/your folder/temp/blank"
hadoop fs -touchz $BLANK

#define a function to check hdfs files
function check_hdfs_files(){

#run an hdfs command to look for the files
hadoop fs -ls $1 &>/dev/null

#if nothing matched the pattern,
#point the caller's variable at the blank file instead
if [ $? -ne 0 ]
then
eval "$2=$BLANK"
echo "can't find any files, using the blank file instead"
fi
}

 

D0=`date --date "now" +"/your folder/%Y/%m/%d/${APPNAME}-${TENANT}*"`
D1=`date --date "now -1 day" +"/your folder/%Y/%m/%d/$APPNAME-$TENANT*"`

#check that the files exist
check_hdfs_files $D0 "D0"
check_hdfs_files $D1 "D1"

Source: Trending topics: handling dates and empty directories

Hadoop Thrift client

<Category: Hadoop> Comments Off on Hadoop Thrift client

http://code.google.com/p/hadoop-sharp/
Doesn't look promising; pass.

http://wiki.apache.org/hadoop/HDFS-APIs
http://wiki.apache.org/hadoop/MountableHDFS
http://wiki.apache.org/hadoop/Hbase/Stargate
http://hadoop.apache.org/hdfs/docs/r0.21.0/hdfsproxy.html

None of these are up to it; Thrift it is, then. A look through the SVN tree shows ready-made bindings for Cocoa and the like, so why is there no C#? Faint.
Read the rest of this entry »

Source: Hadoop Thrift client

Hive installation tips

<Category: Hadoop> Comments Off on Hive installation tips

Installing Hive

Download:
http://hive.apache.org/releases.html
Read the rest of this entry »

Source: Hive installation tips

Setting up trendingtopics

<Category: Grapevine> Comments Off on Setting up trendingtopics

https://github.com/datawrangling/trendingtopics
https://github.com/datawrangling/spatialanalytics

Steps for setting up trendingtopics.

Prepare the environment

Configuration files

Install

If you get the error: undefined local variable or method `version_requirements'
vi config/environment.rb
and add at the top:
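The snippet is missing from this excerpt; the fix that circulated for this error (a RubyGems 1.3.6+ vs. Rails 2.x incompatibility) was a monkeypatch along these lines:

if Gem::VERSION >= "1.3.6"
  module Rails
    class GemDependency
      def requirement
        r = super
        (r == Gem::Requirement.default) ? nil : r
      end
    end
  end
end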

Install the MySQL client and the mysql gem.

Configure the database connection.

Set up the database.

Generate 100 articles as demo data.

Once the server is up, visit http://localhost:3000/

Error:

Create the table:
CREATE TABLE raw_daily_stats_table1 (redirect_title STRING, dates STRING, pageviews STRING, total_pageviews BIGINT, monthly_trend DOUBLE) ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t' STORED AS TEXTFILE;

Load the data:
LOAD DATA INPATH '/home/dev/finalresult-a' INTO TABLE raw_daily_stats_table;
// The path is an HDFS path; the one above corresponds to hdfs://platformB/home/dev/finalresult-a

If the load fails, check your HDFS: you'll find a new file there named after yours with a _copy_1 suffix; load that file instead and it works.

hive> show tables;
FAILED: Error in metadata: javax.jdo.JDOFatalDataStoreException: Failed to start database '/var/lib/hive/metastore/metastore_db', see the next exception for details.
NestedThrowables:
java.sql.SQLException: Failed to start database '/var/lib/hive/metastore/metastore_db', see the next exception for details.
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask

cat derby.log
============= begin nested exception, level (3) ===========
ERROR XSDB6: Another instance of Derby may have already booted the database /var/lib/hive/metastore/metastore_db.
at org.apache.derby.iapi.error.StandardException.newException(Unknown Source)
at org.apache.derby.impl.store.raw.data.BaseDataFileFactory.privGetJBMSLockOnDB(Unknown Source)
at org.apache.derby.impl.store.raw.data.BaseDataFileFactory.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at org.apache.derby.impl.store.raw.data.BaseDataFileFactory.getJBMSLockOnDB(Unknown Source)
at org.apache.derby.impl.store.raw.data.BaseDataFileFactory.boot(Unknown Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.boot(Unknown Source)
at org.apache.derby.impl.services.monitor.TopService.bootModule(Unknown Source)
at org.apache.derby.impl.services.monitor.BaseMonitor.startModule(Unknown Source)
at org.apache.derby.iapi.services.monitor.Monitor.bootServiceModule(Unknown Source)

It turned out an earlier session had exited abnormally and its Derby process was still holding the database. Derby is file-based storage that only one process can open at a time, so, you know how it goes. For a production environment a MySQL metastore is clearly the way to go; open the config file hive-default.xml to switch it over.
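A sketch of the MySQL metastore settings (these overrides usually go in hive-site.xml; the host, database name, and credentials are placeholders; the property names are Hive's standard JDO options):

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive_metastore?createDatabaseIfNotExist=true</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>password</value>
</property>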

Hive queries and sorting:
select * from raw_daily_stats_table sort by monthly_trend;
select * from raw_daily_stats_table sort by monthly_trend desc limit 10;
http://www.fuzhijie.me/?p=377
http://wiki.apache.org/hadoop/Hive/AdminManual/MetastoreAdmin

Source: Setting up trendingtopics