

Setting Up a Hadoop Cluster with Virtual Machines

Date: 2023-06-28 19:39:02 | Source: Website Operations


This article walks through setting up a Hadoop cluster on three virtual machines.

Cluster Planning

Environment Preparation

  1. Prepare three virtual machines; CentOS 8 is used here. When installing, it is best to choose the basic "Server" environment: the system then ships with the necessary tooling but without a resource-hungry graphical desktop.
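For reference, assuming the three machines are named hadoop01, hadoop02, and hadoop03, the hostname and hosts mapping configured in the next two steps might end up looking like this (the IP addresses below are placeholders; substitute your own):

```
# /etc/hosts on every node — addresses are examples only
192.168.56.101 hadoop01
192.168.56.102 hadoop02
192.168.56.103 hadoop03
```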
vim /etc/hostname # set the hostname (hadoop01, hadoop02, or hadoop03)

  2. Edit the hosts mapping

vim /etc/hosts

  3. Turn off the firewall

# CentOS
systemctl stop firewalld    # stop the firewall
systemctl disable firewalld # keep it disabled across reboots
# Ubuntu
ufw status  # check firewall status
ufw disable # disable the firewall
ufw enable  # enable the firewall

  4. Passwordless SSH login

ssh-keygen # generate the public/private key pair
ssh-copy-id hadoop01
ssh-copy-id hadoop02
ssh-copy-id hadoop03

  5. Cluster time synchronization

chrony, the time synchronization tool that ships with CentOS 8, is a flexible implementation of NTP.

systemctl start chronyd
systemctl status chronyd
systemctl enable chronyd

On Ubuntu, use timedatectl instead:

timedatectl              # check time synchronization status
timedatectl set-ntp true # enable time synchronization

  6. Install JDK 8

Oracle JDK is recommended.

  7. Lay out the working directories

mkdir -p /export/server   # software install path
mkdir -p /export/data     # data storage path
mkdir -p /export/software # tarball/package path

  8. Upload and extract the tarball

Define the HADOOP_HOME environment variable, e.g. export HADOOP_HOME=/export/server/hadoop-3.3.1
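As a sketch (tarball name and paths assume the /export layout above), the unpack step on hadoop01 could look like this; the command is printed rather than executed so it can be reviewed first:

```shell
# Hypothetical tarball location, following the /export layout above
PKG=/export/software/hadoop-3.3.1.tar.gz
DEST=/export/server
UNPACK="tar -zxf $PKG -C $DEST"
echo "$UNPACK" # review, then run on hadoop01
```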

Edit the Configuration Files

  1. hadoop-env.sh

=============== hadoop3 ==============

# set JAVA_HOME
export JAVA_HOME=/export/server/jdk1.8.0_291
# set HADOOP_HOME
export HADOOP_HOME=/export/server/hadoop-3.3.1
# users allowed to run the shell commands for each daemon role
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root

=============== hadoop2 ==============

# set JAVA_HOME
export JAVA_HOME=/export/server/jdk1.8.0_291
# set HADOOP_HOME
export HADOOP_HOME=/export/server/hadoop-2.7.2

  2. core-site.xml

=============== hadoop3 ==============

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop01:9820</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/export/data/hadoop-3.3.1</value>
  </property>
  <property>
    <name>hadoop.http.staticuser.user</name>
    <value>root</value>
  </property>
</configuration>

=============== hadoop2 ==============

<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop01:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/export/data/hadoop-2.7.2</value>
  </property>
</configuration>

  3. hdfs-site.xml

=============== hadoop3 ==============

<configuration>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>hadoop01:9870</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop02:9868</value>
  </property>
</configuration>

=============== hadoop2 ==============

<configuration>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>hadoop01:50070</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop02:50090</value>
  </property>
</configuration>

  4. mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>yarn.app.mapreduce.am.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
  </property>
  <property>
    <name>mapreduce.map.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
  </property>
  <property>
    <name>mapreduce.reduce.env</name>
    <value>HADOOP_MAPRED_HOME=${HADOOP_HOME}</value>
  </property>
</configuration>

  5. yarn-site.xml

<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop01</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>512</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4</value>
  </property>
</configuration>

  6. workers (hadoop3) or slaves (hadoop2)

hadoop01
hadoop02
hadoop03

  7. Set the environment variables

vim /etc/profile.d/hadoop.sh

export JAVA_HOME=/export/server/jdk1.8.0_291
export HADOOP_HOME=/export/server/hadoop-3.3.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

source /etc/profile.d/hadoop.sh
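After sourcing the profile script, a quick sanity check (a sketch; it only inspects the current shell) can confirm the variables took effect and the hadoop launcher is on the PATH:

```shell
# Print what the current shell sees; "unset" flags a missing variable
status="JAVA_HOME=${JAVA_HOME:-unset} HADOOP_HOME=${HADOOP_HOME:-unset}"
echo "$status"
if command -v hadoop >/dev/null 2>&1; then
  hadoop version | head -n 1 # first line names the installed version
else
  echo "hadoop not on PATH - re-check /etc/profile.d/hadoop.sh"
fi
```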

The configuration above must be distributed to every node.
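One way to push the configuration directory from hadoop01 to the other nodes is an rsync loop, sketched below (hostnames and paths follow this guide's layout); the commands are printed for review rather than executed:

```shell
# Config directory as laid out earlier in this guide
HADOOP_CONF=/export/server/hadoop-3.3.1/etc/hadoop
SYNC_CMDS=""
for host in hadoop02 hadoop03; do
  cmd="rsync -a $HADOOP_CONF/ $host:$HADOOP_CONF/"
  echo "$cmd" # review, then run (or pipe this loop to sh)
  SYNC_CMDS="$SYNC_CMDS$cmd
"
done
```

/etc/profile.d/hadoop.sh and the JDK install need the same treatment on every node.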

Start the Hadoop Cluster

Format the NameNode

hdfs namenode -format # run only on hadoop01

Formatting more than once leaves the master and worker roles with inconsistent metadata. To recover, delete the hadoop.tmp.dir directory on every node and format again.
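If a re-format does become necessary, the cleanup can be sketched as below (hostnames and the tmp path follow this guide's layout; commands are printed so they can be checked before anything is deleted):

```shell
# hadoop.tmp.dir as set in core-site.xml above (Hadoop 3 layout)
HDFS_TMP=/export/data/hadoop-3.3.1
WIPE_CMDS=""
for host in hadoop01 hadoop02 hadoop03; do
  cmd="ssh $host rm -rf $HDFS_TMP"
  echo "$cmd" # review carefully before running - this deletes HDFS data
  WIPE_CMDS="$WIPE_CMDS$cmd
"
done
```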

Next, start DFS and YARN:

# run only on hadoop01
./sbin/start-dfs.sh  # start the NameNode and DataNodes
./sbin/start-yarn.sh # start the ResourceManager and NodeManagers
./sbin/mr-jobhistory-daemon.sh start historyserver # start the HistoryServer

Check the web UIs:

hadoop01:9870  # hadoop3 NameNode
hadoop01:50070 # hadoop2 NameNode
hadoop01:8088  # ResourceManager
hadoop01:19888 # history server
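A small probe (a sketch; it assumes curl is available on the client machine and uses the Hadoop 3 ports above) can confirm the UIs are reachable:

```shell
# HTTP status 200 means the UI is up; 000 means unreachable
URLS="http://hadoop01:9870 http://hadoop01:8088 http://hadoop01:19888"
for url in $URLS; do
  if command -v curl >/dev/null 2>&1; then
    code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 5 "$url" || true)
    echo "$url -> ${code:-000}"
  else
    echo "$url (curl not installed, cannot probe)"
  fi
done
```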


Official Examples

# compute pi
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar pi 2 4
# DFS write test
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.3.1-tests.jar TestDFSIO -write -nrFiles 10 -fileSize 10MB
# DFS read test
hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.3.1-tests.jar TestDFSIO -read -nrFiles 10 -fileSize 10MB


