Hadoop can run on a single node in so-called pseudo-distributed mode, in which each Hadoop daemon runs as a separate Java process. This article configures Hadoop's pseudo-distributed mode with an automated script. The test environment is CentOS 6.3 in VMware with Hadoop 1.2.1; other versions have not been tested.
Pseudo-distributed configuration script
The script configures core-site.xml, hdfs-site.xml, and mapred-site.xml, and sets up passphraseless SSH login.[1]
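The key trick in the configuration step is a here-document with an unquoted delimiter, which lets the detected host IP expand inside the XML as it is written. A minimal standalone sketch of just that step, using a hypothetical /tmp path and a fixed IP instead of the address detected from eth0:

```shell
#!/bin/bash
# Standalone sketch of the here-document step (hypothetical paths:
# /tmp instead of /etc/hadoop, a fixed IP instead of eth0's address).
IP=192.168.60.128
# The unquoted "eof" delimiter lets $IP expand inside the XML body.
cat > /tmp/core-site.xml <<eof
<?xml version="1.0"?>
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://$IP:9000</value>
    </property>
</configuration>
eof
# Show the substituted line
grep "hdfs://$IP" /tmp/core-site.xml
```

Quoting the delimiter (`<<"eof"`) would instead write the literal string `$IP`, which is why the script leaves it unquoted.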
#!/bin/bash
# Usage: Hadoop pseudo-distributed configuration
# History:
#     20140426 annhe Initial implementation

# Check if user is root
if [ $(id -u) != "0" ]; then
    printf "Error: You must be root to run this script!\n"
    exit 1
fi

# Synchronize the clock
rm -rf /etc/localtime
ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
#yum install -y ntp
ntpdate -u pool.ntp.org &>/dev/null
echo -e "Time: `date` \n"

# Assumes a single NIC; multi-NIC hosts are not handled
IP=`ifconfig eth0 |grep "inet\ addr" |awk '{print $2}' |cut -d ":" -f2`

# Pseudo-distributed configuration
function PseudoDistributed ()
{
    cd /etc/hadoop/
    # Restore earlier backups, if any (harmless to skip on the first run)
    mv core-site.xml.bak core-site.xml 2>/dev/null
    mv hdfs-site.xml.bak hdfs-site.xml 2>/dev/null
    mv mapred-site.xml.bak mapred-site.xml 2>/dev/null
    # Back up the current files
    mv core-site.xml core-site.xml.bak
    mv hdfs-site.xml hdfs-site.xml.bak
    mv mapred-site.xml mapred-site.xml.bak
    # Write the new core-site.xml
    cat > core-site.xml <<eof
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="http://www.annhe.net/configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>fs.default.name</name>
        <value>hdfs://$IP:9000</value>
    </property>
</configuration>
eof
    # Write the new hdfs-site.xml
    cat > hdfs-site.xml <<eof
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="http://www.annhe.net/configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
eof
    # Write the new mapred-site.xml
    cat > mapred-site.xml <<eof
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="http://www.annhe.net/configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>$IP:9001</value>
    </property>
</configuration>
eof
}

# Configure passphraseless SSH login
function PassphraselessSSH ()
{
    # Do not regenerate the private key if one already exists
    [ ! -f ~/.ssh/id_dsa ] && ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
    cat ~/.ssh/authorized_keys |grep "`cat ~/.ssh/id_dsa.pub`" &>/dev/null && r=0 || r=1
    # Append the public key only when it is not present yet
    [ $r -eq 1 ] && cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
    chmod 644 ~/.ssh/authorized_keys
}

# Run
function Execute ()
{
    # Format a new distributed filesystem
    hadoop namenode -format
    # Start the Hadoop daemons
    start-all.sh
    echo -e "\n========================================================================"
    echo "hadoop log dir : $HADOOP_LOG_DIR"
    echo "NameNode - http://$IP:50070/"
    echo "JobTracker - http://$IP:50030/"
    echo -e "\n========================================================================="
}

PseudoDistributed 2>&1 | tee -a pseudo.log
PassphraselessSSH 2>&1 | tee -a pseudo.log
Execute 2>&1 | tee -a pseudo.log
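The grep-before-append pattern in PassphraselessSSH is what makes the script safe to re-run: the public key lands in authorized_keys exactly once no matter how many times it executes. A self-contained sketch of the same pattern on throwaway files under /tmp (the key string is a placeholder, not a real key):

```shell
#!/bin/bash
# Sketch of the idempotent-append pattern from PassphraselessSSH,
# demonstrated on throwaway /tmp files instead of ~/.ssh.
pub=/tmp/demo_id.pub
auth=/tmp/demo_authorized_keys
echo "ssh-dss AAAAB3-demo-key root@hadoop" > "$pub"
: > "$auth"   # start with an empty authorized_keys stand-in
for run in 1 2 3; do
    # Append only when the key is not already present
    grep -q "`cat $pub`" "$auth" || cat "$pub" >> "$auth"
done
# After three runs the key appears exactly once
grep -c . "$auth"
```

Without the grep guard, each run would append another copy of the key, which is harmless to sshd but makes the file grow on every invocation.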
Script test output
[root@hadoop hadoop]# ./pseudo.sh
14/04/26 23:52:30 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoop/216.34.94.184
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 1.2.1
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1503152; compiled by 'mattf' on Mon Jul 22 15:27:42 PDT 2013
STARTUP_MSG:   java = 1.7.0_51
************************************************************/
Re-format filesystem in /tmp/hadoop-root/dfs/name ? (Y or N) y
Format aborted in /tmp/hadoop-root/dfs/name
14/04/26 23:52:40 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop/216.34.94.184
************************************************************/
starting namenode, logging to /var/log/hadoop/root/hadoop-root-namenode-hadoop.out
localhost: starting datanode, logging to /var/log/hadoop/root/hadoop-root-datanode-hadoop.out
localhost: starting secondarynamenode, logging to /var/log/hadoop/root/hadoop-root-secondarynamenode-hadoop.out
starting jobtracker, logging to /var/log/hadoop/root/hadoop-root-jobtracker-hadoop.out
localhost: starting tasktracker, logging to /var/log/hadoop/root/hadoop-root-tasktracker-hadoop.out
========================================================================
hadoop log dir : /var/log/hadoop/root
NameNode - http://192.168.60.128:50070/
JobTracker - http://192.168.60.128:50030/
=========================================================================
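Note the "Format aborted" line above: the Hadoop 1.x NameNode prompt accepts only an uppercase Y, so answering with a lowercase y leaves the filesystem unformatted even though the daemons then start. One way around this (a hedged sketch, assuming the same environment) is to pipe the answer in so the prompt never blocks; the stdin-piping pattern is shown below with a small `read`-based stand-in for the real command:

```shell
#!/bin/sh
# The format prompt reads its confirmation from stdin and accepts only
# an uppercase "Y". Piping the answer avoids the interactive prompt:
#
#     echo Y | hadoop namenode -format
#
# The same stdin-piping pattern, with `read` standing in for hadoop:
confirm() { read ans; [ "$ans" = "Y" ] && echo formatted || echo aborted; }
echo Y | confirm   # prints "formatted"
echo y | confirm   # prints "aborted", mirroring the log above
```

If the NameNode was never formatted, a re-run of the Execute step (answering with an uppercase Y) is needed before HDFS will serve data.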