
Configuring Hadoop 2.3.0 on Ubuntu



Environment
System: Ubuntu 12.04
Hadoop version: 2.3.0

Step 1: Download hadoop-2.3.0.tar.gz and extract it.

Step 2: Edit the configuration files. They all live under ${hadoop-2.3.0}/etc/hadoop.

1. core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop-2.3.0/tmp/hadoop-${user.name}</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:8020</value>
  </property>
</configuration>

2. hdfs-site.xml

<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/hadoop-2.3.0/tmp/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/local/hadoop-2.3.0/tmp/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>

3. mapred-site.xml (if the distribution ships only mapred-site.xml.template, copy it to mapred-site.xml first)

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>

4. yarn-site.xml

<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>localhost</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>

Step 3: Setting up the commands

The Hadoop scripts live under ${hadoop-2.3.0}/bin and ${hadoop-2.3.0}/sbin, and can be run by their full paths.

You can also set environment variables so the commands are shorter to type.

Edit /etc/profile (for example, sudo vi /etc/profile) and append the following lines:


export HADOOP_HOME=/usr/local/hadoop-2.3.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
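
After saving the profile, reload it and confirm that the hadoop command resolves (hadoop version is part of the stock CLI and makes a quick sanity check):

source /etc/profile
hadoop version   # should report Hadoop 2.3.0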

Initialize (format) the Hadoop file system:

hdfs namenode -format
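
If the format succeeds, the NameNode metadata directory configured in hdfs-site.xml above should now be populated. A quick way to confirm (the exact file names can vary between 2.x releases):

ls /usr/local/hadoop-2.3.0/tmp/dfs/name/current
# expect a VERSION file plus fsimage/edits files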

Step 4: Starting and stopping Hadoop

1. Start method one (start each daemon individually):

sujx@ubuntu:~$ hadoop-daemon.sh start namenode
starting namenode, logging to /opt/hadoop-2.2.0/logs/hadoop-sujx-namenode-ubuntu.out
sujx@ubuntu:~$ hadoop-daemon.sh start datanode
starting datanode, logging to /opt/hadoop-2.2.0/logs/hadoop-sujx-datanode-ubuntu.out
sujx@ubuntu:~$ hadoop-daemon.sh start secondarynamenode
starting secondarynamenode, logging to /opt/hadoop-2.2.0/logs/hadoop-sujx-secondarynamenode-ubuntu.out
sujx@ubuntu:~$ jps
9310 SecondaryNameNode
9345 Jps
9140 NameNode
9221 DataNode
sujx@ubuntu:~$ yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /opt/hadoop-2.2.0/logs/yarn-sujx-resourcemanager-ubuntu.out
sujx@ubuntu:~$ yarn-daemon.sh start nodemanager
starting nodemanager, logging to /opt/hadoop-2.2.0/logs/yarn-sujx-nodemanager-ubuntu.out
sujx@ubuntu:~$ jps
9310 SecondaryNameNode
9651 NodeManager
9413 ResourceManager
9140 NameNode
9709 Jps
9221 DataNode
sujx@ubuntu:~$

2. Start method two (start-dfs.sh and start-yarn.sh):

sujx@ubuntu:~$ start-dfs.sh
Starting namenodes on [hd2-single]
hd2-single: starting namenode, logging to /opt/hadoop-2.2.0/logs/hadoop-sujx-namenode-ubuntu.out
hd2-single: starting datanode, logging to /opt/hadoop-2.2.0/logs/hadoop-sujx-datanode-ubuntu.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/hadoop-2.2.0/logs/hadoop-sujx-secondarynamenode-ubuntu.out
sujx@ubuntu:~$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop-2.2.0/logs/yarn-sujx-resourcemanager-ubuntu.out
hd2-single: starting nodemanager, logging to /opt/hadoop-2.2.0/logs/yarn-sujx-nodemanager-ubuntu.out
sujx@ubuntu:~$ jps
11414 SecondaryNameNode
10923 NameNode
11141 DataNode
12038 Jps
11586 ResourceManager
11811 NodeManager
sujx@ubuntu:~$

3. Start method three (start-all.sh, deprecated in 2.x):


sujx@ubuntu:~$ start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [hd2-single]
hd2-single: starting namenode, logging to /opt/hadoop-2.2.0/logs/hadoop-sujx-namenode-ubuntu.out
hd2-single: starting datanode, logging to /opt/hadoop-2.2.0/logs/hadoop-sujx-datanode-ubuntu.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/hadoop-2.2.0/logs/hadoop-sujx-secondarynamenode-ubuntu.out
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop-2.2.0/logs/yarn-sujx-resourcemanager-ubuntu.out
hd2-single: starting nodemanager, logging to /opt/hadoop-2.2.0/logs/yarn-sujx-nodemanager-ubuntu.out
sujx@ubuntu:~$ jps
14156 NodeManager
14445 Jps
13267 NameNode
13759 SecondaryNameNode
13485 DataNode
13927 ResourceManager
sujx@ubuntu:~$

All three approaches produce the same end result; internally they call one another. The corresponding stop commands are just as simple:
1. Stop method one:
sujx@ubuntu:~$ yarn-daemon.sh stop nodemanager
sujx@ubuntu:~$ yarn-daemon.sh stop resourcemanager
sujx@ubuntu:~$ hadoop-daemon.sh stop secondarynamenode
sujx@ubuntu:~$ hadoop-daemon.sh stop datanode
sujx@ubuntu:~$ hadoop-daemon.sh stop namenode
2. Stop method two:
sujx@ubuntu:~$ stop-yarn.sh
sujx@ubuntu:~$ stop-dfs.sh
3. Stop method three:
sujx@ubuntu:~$ stop-all.sh

Check the NameNode / HDFS status at: http://localhost:50070/

Check job status at: http://localhost:8088/
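
A quick command-line check that both web UIs are listening (assuming curl is installed):

curl -sI http://localhost:50070/ | head -n 1
curl -sI http://localhost:8088/ | head -n 1
# any HTTP status line (200, or a redirect for the YARN UI) means the UI is up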

HDFS address and port for client access: 8020

YARN (ResourceManager) address and port for client access: 8032
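
Two quick ways to exercise these client ports (the explicit hdfs:// URI is optional here, since fs.defaultFS already points at it):

hdfs dfs -ls hdfs://localhost:8020/   # lists the HDFS root via port 8020
yarn application -list                # contacts the ResourceManager on port 8032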

At this point, the single-node pseudo-distributed deployment is complete.
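
As a final smoke test, you can run one of the MapReduce examples bundled with the distribution (a minimal sketch, assuming the HADOOP_HOME set above and the stock 2.3.0 examples jar):

hdfs dfs -mkdir -p /user/$USER
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.3.0.jar pi 2 10
# a successful run prints an estimated value of Pi at the end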

