Date: 2023-06-28 19:39:02 | Source: 網(wǎng)站運(yùn)營
Building a Hadoop Cluster with Virtual Machines
1. Modify the hostname
vim /etc/hostname    # change the hostname
2. Modify the hosts mapping
vim /etc/hosts
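For reference, a minimal sketch of the mapping, assuming three VMs with the hypothetical addresses 192.168.88.151-153 (replace with your actual IPs):
192.168.88.151 hadoop01
192.168.88.152 hadoop02
192.168.88.153 hadoop03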
3. Turn off the firewall
# centos
systemctl stop firewalld.service      # stop the firewall
systemctl disable firewalld.service   # disable the firewall on boot
# ubuntu
ufw status    # check firewall status
ufw disable   # disable the firewall
ufw enable    # enable the firewall
4. Passwordless SSH login
ssh-keygen    # generate the public/private key pair
ssh-copy-id hadoop01
ssh-copy-id hadoop02
ssh-copy-id hadoop03
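To verify that passwordless login works, running a remote command from hadoop01 should complete without a password prompt (hostnames as configured above):
ssh hadoop02 hostname   # should print "hadoop02" without asking for a password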
5. Cluster time synchronization
systemctl start chronyd
systemctl status chronyd
systemctl enable chronyd
On Ubuntu you can use timedatectl:
timedatectl                # check time synchronization status
timedatectl set-ntp true   # enable time synchronization
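If chronyd is in use, one hedged way to confirm the node is actually synchronizing is to query chrony directly:
chronyc sources -v   # list the configured NTP sources
chronyc tracking     # show the offset from the reference clock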
6. Install JDK 8
mkdir -p /export/server    # software installation path
mkdir -p /export/data      # data storage path
mkdir -p /export/software  # tarball storage path
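A minimal sketch of the JDK installation itself, assuming the JDK 8u291 tarball (the file name below is an assumption) has already been uploaded to /export/software:
cd /export/software
tar -zxvf jdk-8u291-linux-x64.tar.gz -C /export/server/   # assumed file name, adjust to your download
ls /export/server/jdk1.8.0_291                            # should match the JAVA_HOME used below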
8. Upload the Hadoop tarball and extract it; set HADOOP_HOME to the extraction path, e.g. export HADOOP_HOME=/export/server/hadoop-3.3.1
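A hedged sketch of the extract step, assuming the Apache Hadoop 3.3.1 binary tarball was uploaded to /export/software:
cd /export/software
tar -zxvf hadoop-3.3.1.tar.gz -C /export/server/   # yields /export/server/hadoop-3.3.1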
1. hadoop-env.sh
# configure JAVA_HOME
export JAVA_HOME=/export/server/jdk1.8.0_291
# configure HADOOP_HOME
export HADOOP_HOME=/export/server/hadoop-3.3.1
# set the user that runs each role's shell commands
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
=============== hadoop2 ==============
# configure JAVA_HOME
export JAVA_HOME=/export/server/jdk1.8.0_291
# configure HADOOP_HOME
export HADOOP_HOME=/export/server/hadoop-2.7.2
2. core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop01:9820</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/export/data/hadoop-3.3.1</value>
  </property>
  <property>
    <name>hadoop.http.staticuser.user</name>
    <value>root</value>
  </property>
</configuration>
=============== hadoop2 ==============
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop01:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/export/data/hadoop-2.7.2</value>
  </property>
</configuration>
3. hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>hadoop01:9870</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop02:9868</value>
  </property>
</configuration>
=============== hadoop2 ==============
<configuration>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>hadoop01:50070</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop02:50090</value>
  </property>
</configuration>
4. mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
  </property>
  <property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
  </property>
  <property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
  </property>
</configuration>
5. yarn-site.xml
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop01</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>512</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4</value>
  </property>
</configuration>
6. workers (hadoop3) or slaves (hadoop2)
hadoop01
hadoop02
hadoop03
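Assuming the default distribution layout, all of the configuration files above live under $HADOOP_HOME/etc/hadoop/, e.g.:
vim $HADOOP_HOME/etc/hadoop/workers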
7. Modify environment variables
export JAVA_HOME=/export/server/jdk1.8.0_291
export HADOOP_HOME=/export/server/hadoop-3.3.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source /etc/profile.d/hadoop.sh
All of the configuration above needs to be distributed to every node.
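A minimal sketch of the distribution step using scp, run from hadoop01 and assuming the paths used above:
scp -r /export/server/hadoop-3.3.1 root@hadoop02:/export/server/
scp -r /export/server/hadoop-3.3.1 root@hadoop03:/export/server/
scp /etc/profile.d/hadoop.sh root@hadoop02:/etc/profile.d/
scp /etc/profile.d/hadoop.sh root@hadoop03:/etc/profile.d/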
Format the NameNode (only on the first start):
hdfs namenode -format    # run only on hadoop01
Formatting more than once leaves the master and worker roles with inconsistent data; fix this by deleting the hadoop.tmp.dir directory and formatting again.
Start dfs and yarn
# run only on hadoop01
./sbin/start-dfs.sh    # start the NameNode and DataNodes
./sbin/start-yarn.sh   # start the ResourceManager and NodeManagers
./sbin/mr-jobhistory-daemon.sh start historyserver   # start the HistoryServer
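As a quick sanity check, jps on each node should show the expected daemons; with the workers file above, a plausible distribution is:
jps
# hadoop01: NameNode, DataNode, ResourceManager, NodeManager, JobHistoryServer
# hadoop02: DataNode, NodeManager, SecondaryNameNode
# hadoop03: DataNode, NodeManager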
View the web UIs
hadoop01:9870    # hadoop3 namenode
hadoop01:50070   # hadoop2 namenode
hadoop01:8088    # hadoop resource manager
hadoop01:19888   # history server
# calculate pi
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar pi 2 4
# file write test
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.3.1-tests.jar TestDFSIO -write -nrFiles 10 -fileSize 10MB
# file read test
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.3.1-tests.jar TestDFSIO -read -nrFiles 10 -fileSize 10MB
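TestDFSIO leaves its benchmark data in HDFS; it can be removed afterwards with the built-in clean option:
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.3.1-tests.jar TestDFSIO -clean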