How to Install and Deploy Hadoop 2.6 + JDK 8 Using the JAR Package

Here is a walkthrough of installing and deploying Hadoop 2.6 with JDK 8 from the release package. Many people are not yet familiar with this process, so I'm sharing this article for reference; I hope you get something useful out of it. Let's dig in.


Hadoop installation and deployment methods fall into three categories:

I. Automated installation

Ambari: http://ambari.apache.org/, open-sourced by Hortonworks.

Minos: https://github.com/XiaoMi/minos, open-sourced by Xiaomi in China (so that everyone's phones can be turned into a distributed cluster, haha).

Cloudera Manager (commercial, but free when the node count is very small; a clever strategy, and very easy to use)

II. Installation from RPM packages

Not provided by Apache Hadoop

Provided by HDP and CDH

III. Installation from the JAR package (binary tarball)

Provided by every release.

This is the most flexible approach, since you can swap in whatever version you need, but the downside is that it requires a lot of manual work and is not automated.

Hadoop 2.0 installation and deployment workflow

Step 1: Prepare the hardware (a Linux OS; my machine runs Fedora 21 Workstation, and CentOS works the same way)

Step 2: Prepare the installation packages and install the base software (mainly the JDK; I'm using the latest JDK 8)

Step 3: Distribute the Hadoop package to the same directory on every node and extract it

Step 4: Edit the configuration files (critical!)

Step 5: Start the services (critical!)

Step 6: Verify that everything started successfully
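Step 3 above can be sketched as a small shell loop. This is only a sketch: the hostnames node1/node2 are placeholders, and RUN defaults to echo so the commands are printed rather than executed until you clear it.

```shell
# Push the same tarball to every node and unpack it into the same directory.
# Remove the echo prefix (set RUN= ) to actually run the scp/ssh commands.
RUN=${RUN:-echo}
TARBALL=hadoop-2.6.0.tar.gz
DEST=/home/neil/Servers          # must be the same path on every node
for host in node1 node2; do
    $RUN scp "$TARBALL" "$host:$DEST/"
    $RUN ssh "$host" "tar -xzf $DEST/$TARBALL -C $DEST"
done
```

Keeping the directory layout identical on all nodes is what lets one set of configuration files work everywhere.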

The Hadoop distributions:

Apache Hadoop

The original version; all other distributions are derived from it

0.23.x: unstable release line

2.x: stable release line

HDP

Hortonworks' distribution

CDH

Cloudera's Hadoop distribution

Available as two versions, CDH4 and CDH5

CDH4: developed on top of Apache Hadoop 0.23.0

CDH5: developed on top of Apache Hadoop 2.2.0

Compatibility across distributions

The architecture, deployment, and usage are the same; they differ only in a few internal implementation details.

CDH can be installed following the steps below; see the official site for details.

Automated Installation

Ideal for trying Cloudera enterprise data hub, the installer will download Cloudera Manager from Cloudera's website and guide you through the setup process.

Pre-requisites: multiple, Internet-connected Linux machines, with SSH access, and significant free space in /var and /opt.

    $ wget http://archive.cloudera.com/cm5/installer/latest/cloudera-manager-installer.bin

    $ chmod u+x cloudera-manager-installer.bin

    $ sudo ./cloudera-manager-installer.bin

Production Installation

Users setting up Cloudera enterprise data hub for production use are encouraged to follow the installation instructions in our documentation. These instructions suggest explicitly provisioning the databases used by Cloudera Manager and walk through explicitly which packages need installation.

What this article focuses on, though, is the installation and configuration of Apache Hadoop 2.6:

1. First, install the JDK. Note: use the Oracle JDK; I'm using JDK 8 here.

Whatever you do, don't use the OpenJDK that ships with Fedora or openSUSE; it doesn't even seem to include jps.
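A quick way to check whether your installed JDK actually ships the tooling. This is a small sketch: jps lives in a full JDK's bin directory, and headless JRE packages typically omit it.

```shell
# Report whether jps is on the PATH; headless JRE packages usually lack it.
if jps_path=$(command -v jps); then
    echo "jps found at $jps_path"
else
    echo "jps missing: install a full JDK (the Oracle JDK, or a *-devel OpenJDK package)"
fi
```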

2. Download the latest Hadoop 2.6 from the Apache site, then extract it:

[neil@neilhost Servers]$ tar zxvf hadoop-2.6.0.tar.gz 
hadoop-2.6.0/
hadoop-2.6.0/etc/
hadoop-2.6.0/etc/hadoop/
hadoop-2.6.0/etc/hadoop/hdfs-site.xml
hadoop-2.6.0/etc/hadoop/hadoop-metrics2.properties
hadoop-2.6.0/etc/hadoop/container-executor.cfg
... ...
tar: Unexpected EOF in archive
tar: Unexpected EOF in archive
tar: Error is not recoverable: exiting now

The extraction reported an error at the end. I've heard others hit the same thing, and it didn't affect anything later for me. That said, an unexpected EOF usually means a truncated download, so if things misbehave later, re-download the tarball and compare its size or checksum against the one published on the mirror.

3. Configure /etc/hosts

Add the line 127.0.1.1 YARN001. Configuring it as 127.0.0.1 also works.

127.0.0.1		localhost.localdomain localhost
::1		localhost6.localdomain6 localhost6
127.0.1.1	YARN001
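You can confirm the new entry resolves before going any further. A hedged check: getent queries the system resolver (which includes /etc/hosts), and the fallback message keeps the sketch harmless on machines without the entry.

```shell
# Resolve YARN001 through the system resolver (/etc/hosts included).
res=$(getent hosts YARN001 || echo "YARN001 does not resolve yet; check /etc/hosts")
echo "$res"
```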

4. Edit the various Hadoop configuration files:

4.1 etc/hadoop/hadoop-env.sh in the extracted directory

Set JAVA_HOME to your JDK path:

# The java implementation to use.
export JAVA_HOME=/usr/java/jdk1.8.0_40/
#${JAVA_HOME}

4.2 Add a new file etc/hadoop/mapred-site.xml in the extracted directory

But what should this file contain? No need to worry: etc/hadoop/mapred-site.xml.template already has the basic skeleton; just copy it.

Then, inside the configuration element of etc/hadoop/mapred-site.xml, add one property, so the file ends up as follows:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
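The template-to-file copy from step 4.2, demonstrated here in a scratch directory so the pattern is self-contained; in a real install the same cp runs inside etc/hadoop of the extracted tree.

```shell
# Simulate etc/hadoop: the template exists, mapred-site.xml does not yet.
conf=$(mktemp -d)
printf '<configuration/>\n' > "$conf/mapred-site.xml.template"

# The actual step: copy the template to the live file name, then edit it.
cp "$conf/mapred-site.xml.template" "$conf/mapred-site.xml"
ls "$conf"
```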

4.3 etc/hadoop/core-site.xml in the extracted directory

Add the following:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://YARN001:8020</value>
  </property>
</configuration>

Note: the YARN001 in the value is the hostname we added to the system's /etc/hosts earlier. (fs.default.name is the older name of this property; on Hadoop 2.x it still works but is deprecated in favor of fs.defaultFS.)

If you skipped the hosts entry, hdfs://localhost:8020, hdfs://127.0.0.1:8020, or your machine's IP all work here too.

The port can be any open port; I use 8020 here, but something else such as 9001 is fine as well.
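A quick self-check of the value just configured, shown against an inline copy of the file so the sketch is self-contained; point the grep at your real etc/hadoop/core-site.xml instead.

```shell
# Write a miniature core-site.xml and pull the NameNode URI back out of it.
cat > core-site-sample.xml <<'EOF'
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://YARN001:8020</value>
  </property>
</configuration>
EOF
uri=$(grep -A1 'fs.default.name' core-site-sample.xml | grep -o 'hdfs://[^<]*')
echo "$uri"
```

On a configured install, `bin/hdfs getconf -confKey fs.default.name` answers the same question directly from Hadoop's own config loader.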

4.4 etc/hadoop/hdfs-site.xml in the extracted directory

The first property is dfs.replication, the number of block replicas. Set it to 1 here, because this is a single-machine setup. The default is 3, and leaving it at 3 will lead to errors here.

The second and third properties set the NameNode and DataNode storage paths. If you don't set them, they default to the system /tmp directory, and if you're running inside a VM, /tmp is wiped on every reboot and the data is gone. So I recommend setting both directories. They don't have to exist yet; Hadoop creates them at runtime from this configuration.

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/neil/Servers/hadoop-2.6.0/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/neil/Servers/hadoop-2.6.0/dfs/data</value>
  </property>
</configuration>

4.5 etc/hadoop/yarn-site.xml in the extracted directory

<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

<!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

4.6 One more file that may or may not need changing: etc/hadoop/slaves in the extracted directory

You can change localhost to the YARN001 set earlier, or to 127.0.0.1.

5. Now for the real work.

There are many ways to start Hadoop; the sbin directory contains many scripts. Among them is the all-in-one start-all.sh, but I don't recommend it. It's convenient and starts DFS, YARN, and so on automatically, but some intermediate step may well fail, leaving the services only partially started.

The other scripts in sbin, such as start-dfs.sh and start-yarn.sh, don't fully solve this either. For example, starting DFS means starting both the NameNode and the DataNodes, and if one of those fails it's a pain to untangle. So I recommend starting the daemons one at a time.
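The daemon-by-daemon sequence argued for above, wrapped in a small function. A sketch only: it assumes you call it from the install directory after the configuration is in place, and it stops at the first failure so you can check jps and the logs before starting anything else.

```shell
# Start each Hadoop daemon in turn, aborting at the first failure so the
# offending daemon's log can be inspected before anything else comes up.
start_hadoop_stepwise() {
    for d in "hadoop-daemon.sh start namenode" \
             "hadoop-daemon.sh start datanode" \
             "yarn-daemon.sh start resourcemanager" \
             "yarn-daemon.sh start nodemanager"; do
        echo "+ sbin/$d"
        sbin/$d || return 1
    done
}
```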

5.0 The very first time you use Hadoop, the NameNode must be formatted. (As the output below notes, the hadoop namenode -format form is deprecated; bin/hdfs namenode -format is the current equivalent.)

[neil@neilhost hadoop-2.6.0]$ ll bin
total 440
-rwxr-xr-x. 1 neil neil 159183 Nov 14 05:20 container-executor
-rwxr-xr-x. 1 neil neil   5479 Nov 14 05:20 hadoop
-rwxr-xr-x. 1 neil neil   8298 Nov 14 05:20 hadoop.cmd
-rwxr-xr-x. 1 neil neil  11142 Nov 14 05:20 hdfs
-rwxr-xr-x. 1 neil neil   6923 Nov 14 05:20 hdfs.cmd
-rwxr-xr-x. 1 neil neil   5205 Nov 14 05:20 mapred
-rwxr-xr-x. 1 neil neil   5949 Nov 14 05:20 mapred.cmd
-rwxr-xr-x. 1 neil neil   1776 Nov 14 05:20 rcc
-rwxr-xr-x. 1 neil neil 201659 Nov 14 05:20 test-container-executor
-rwxr-xr-x. 1 neil neil  11380 Nov 14 05:20 yarn
-rwxr-xr-x. 1 neil neil  10895 Nov 14 05:20 yarn.cmd
[neil@neilhost hadoop-2.6.0]$ bin/hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

15/04/01 20:57:32 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = neilhost.neildomain/192.168.1.101
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.0
STARTUP_MSG:   classpath = /home/neil/Servers/hadoop-2.6.0/etc/hadoop:... (long classpath elided) ...:/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on 2014-11-13T21:10Z
STARTUP_MSG:   java = 1.8.0_40
************************************************************/
15/04/01 20:57:32 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
15/04/01 20:57:32 INFO namenode.NameNode: createNameNode [-format]
15/04/01 20:57:32 WARN common.Util: Path /home/neil/Servers/hadoop-2.6.0/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
15/04/01 20:57:32 WARN common.Util: Path /home/neil/Servers/hadoop-2.6.0/dfs/name should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-fb38ac3b-414f-4643-b62d-2e9897b5db27
15/04/01 20:57:32 INFO namenode.FSNamesystem: No KeyProvider found.
15/04/01 20:57:33 INFO namenode.FSNamesystem: fsLock is fair:true
15/04/01 20:57:33 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
15/04/01 20:57:33 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
15/04/01 20:57:33 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
15/04/01 20:57:33 INFO blockmanagement.BlockManager: The block deletion will start around 2015 Apr 01 20:57:33
15/04/01 20:57:33 INFO util.GSet: Computing capacity for map BlocksMap
15/04/01 20:57:33 INFO util.GSet: VM type       = 64-bit
15/04/01 20:57:33 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
15/04/01 20:57:33 INFO util.GSet: capacity      = 2^21 = 2097152 entries
15/04/01 20:57:33 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
15/04/01 20:57:33 INFO blockmanagement.BlockManager: defaultReplication         = 1
15/04/01 20:57:33 INFO blockmanagement.BlockManager: maxReplication             = 512
15/04/01 20:57:33 INFO blockmanagement.BlockManager: minReplication             = 1
15/04/01 20:57:33 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
15/04/01 20:57:33 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
15/04/01 20:57:33 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
15/04/01 20:57:33 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
15/04/01 20:57:33 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
15/04/01 20:57:33 INFO namenode.FSNamesystem: fsOwner             = neil (auth:SIMPLE)
15/04/01 20:57:33 INFO namenode.FSNamesystem: supergroup          = supergroup
15/04/01 20:57:33 INFO namenode.FSNamesystem: isPermissionEnabled = true
15/04/01 20:57:33 INFO namenode.FSNamesystem: HA Enabled: false
15/04/01 20:57:33 INFO namenode.FSNamesystem: Append Enabled: true
15/04/01 20:57:33 INFO util.GSet: Computing capacity for map INodeMap
15/04/01 20:57:33 INFO util.GSet: VM type       = 64-bit
15/04/01 20:57:33 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
15/04/01 20:57:33 INFO util.GSet: capacity      = 2^20 = 1048576 entries
15/04/01 20:57:33 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/04/01 20:57:33 INFO util.GSet: Computing capacity for map cachedBlocks
15/04/01 20:57:33 INFO util.GSet: VM type       = 64-bit
15/04/01 20:57:33 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
15/04/01 20:57:33 INFO util.GSet: capacity      = 2^18 = 262144 entries
15/04/01 20:57:33 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
15/04/01 20:57:33 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
15/04/01 20:57:33 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
15/04/01 20:57:33 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/04/01 20:57:33 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
15/04/01 20:57:33 INFO util.GSet: Computing capacity for map NameNodeRetryCache
15/04/01 20:57:33 INFO util.GSet: VM type       = 64-bit
15/04/01 20:57:33 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
15/04/01 20:57:33 INFO util.GSet: capacity      = 2^15 = 32768 entries
15/04/01 20:57:33 INFO namenode.NNConf: ACLs enabled? false
15/04/01 20:57:33 INFO namenode.NNConf: XAttrs enabled? true
15/04/01 20:57:33 INFO namenode.NNConf: Maximum size of an xattr: 16384
15/04/01 20:57:33 INFO namenode.FSImage: Allocated new BlockPoolId: BP-546681589-192.168.1.101-1427893053846
15/04/01 20:57:34 INFO common.Storage: Storage directory /home/neil/Servers/hadoop-2.6.0/dfs/name has been successfully formatted.
15/04/01 20:57:34 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
15/04/01 20:57:34 INFO util.ExitUtil: Exiting with status 0
15/04/01 20:57:34 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at neilhost.neildomain/192.168.1.101
************************************************************/
[neil@neilhost hadoop-2.6.0]$

Be extremely careful here: this step is only for the first deployment of a new cluster, because it wipes all data on DFS. If you get trigger-happy with it in a production environment, you'll be in a world of trouble!!!

At this point you'll find a new dfs directory with a name directory inside it, exactly as we configured in etc/hadoop/hdfs-site.xml earlier:

[neil@neilhost hadoop-2.6.0]$ ll dfs
total 4
drwxrwxr-x. 3 neil neil 4096 Apr  1 20:57 name

5.1 Start the NameNode

Use hadoop-daemon.sh under sbin to start the NameNode.

Once it's started, you can check the JVM processes with the JDK's jps command. Note: I'm on the Fedora/CentOS family, where installing the JDK from an RPM configures everything for you. If you're on Ubuntu, or you downloaded the tarball version of the JDK, you'll need to type the command with its full path, or better, set up the JDK environment variables.

[neil@neilhost hadoop-2.6.0]$ sbin/hadoop-daemon.sh start namenode 
starting namenode, logging to /home/neil/Servers/hadoop-2.6.0/logs/hadoop-neil-namenode-neilhost.neildomain.out
[neil@neilhost hadoop-2.6.0]$ jps
4192 Jps
4117 NameNode

We can see the NameNode started successfully.

Note: if it didn't start, check the NameNode .log file in the logs directory.

[neil@neilhost hadoop-2.6.0]$ ll logs
total 36
-rw-rw-r--. 1 neil neil 31591 Apr  1 21:19 hadoop-neil-namenode-neilhost.neildomain.log
-rw-rw-r--. 1 neil neil   715 Apr  1 21:13 hadoop-neil-namenode-neilhost.neildomain.out
-rw-rw-r--. 1 neil neil     0 Apr  1 21:13 SecurityAuth-neil.audit
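A small helper along those lines. Hedged: it just prints the tail of the newest NameNode .log under a logs directory if one exists, and says so otherwise, so it degrades gracefully anywhere.

```shell
# Show the last lines of the newest NameNode log under $LOGDIR, if any.
LOGDIR=${LOGDIR:-logs}
latest=$(ls -t "$LOGDIR"/hadoop-*-namenode-*.log 2>/dev/null | head -n 1)
if [ -n "$latest" ]; then
    tail -n 50 "$latest"
else
    echo "no NameNode log under $LOGDIR yet"
fi
```

The .out file only captures stdout/stderr at launch; startup failures almost always explain themselves in the .log file.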

5.2 Start the DataNode.

[neil@neilhost hadoop-2.6.0]$ sbin/hadoop-daemon.sh start datanode
starting datanode, logging to /home/neil/Servers/hadoop-2.6.0/logs/hadoop-neil-datanode-neilhost.neildomain.out
[neil@neilhost hadoop-2.6.0]$ jps
4276 DataNode
4117 NameNode
4351 Jps

Now we can also browse DFS over HTTP; the DFS web UI listens on port 50070 by default.

Entering http://yarn001:50070/

http://127.0.0.1:50070/

http://127.0.1.1:50070/

http://localhost:50070/

any of these works.
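The same health check is available from the command line: bin/hdfs dfsadmin -report prints the cluster capacity and the list of live DataNodes. The sketch below is guarded so it just prints a note when no cluster is reachable from where it runs.

```shell
# Ask the NameNode for a cluster report; fall back to a note if it's not up.
report=$(bin/hdfs dfsadmin -report 2>/dev/null || echo "NameNode not reachable from here")
echo "$report"
```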

(screenshot: the HDFS NameNode web UI)

It shows an overview of the current state:

(screenshot: overview of the current cluster state)

You can see that Live Nodes is 1 at this point.

Here, you can click the Live Nodes link to inspect the details:

(screenshot: the Live Nodes / Datanode Information page)

You can also see which files are on HDFS:

Click Browse the file system under the Utilities menu at the top:

(screenshot: Browse the file system under Utilities)

(This article first appeared on the OSChina blog of user HappyBKs; if you repost it, please state the source prominently! http://my.oschina.net/u/1156339/blog/396128)

5.4 Now, let's try adding directories and files to HDFS.

[neil@neilhost hadoop-2.6.0]$ bin/hadoop fs -mkdir /myhome
[neil@neilhost hadoop-2.6.0]$ bin/hadoop fs -mkdir /myhome/happyBKs
[neil@neilhost hadoop-2.6.0]$

Check Browse the file system under Utilities again.

(screenshot: the new /myhome directory in the file browser)

(screenshot: the /myhome/happyBKs directory in the file browser)

Next, let's try adding files.

Here I add two files and an entire directory in one go.

[neil@neilhost hadoop-2.6.0]$ bin/hadoop fs -put README.txt NOTICE.txt logs/  /myhome/happyBKs

Let's look at HDFS again.
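Besides the web UI, the upload can be verified straight from the shell. A sketch against the running cluster; the fallback message just keeps it harmless elsewhere.

```shell
# Recursively list what we just put under /myhome, or say HDFS is down.
listing=$(bin/hadoop fs -ls -R /myhome 2>/dev/null || echo "HDFS not reachable")
echo "$listing"
```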

(screenshot: the uploaded files under /myhome/happyBKs)

6. In section 5 above we started DFS; now it's time to start YARN. YARN doesn't bear directly on the data the way DFS does, so we can start it all at once, or start the pieces separately with sbin's yarn-daemon.sh start resourcemanager and yarn-daemon.sh start nodemanager.

Here I start everything at once with start-yarn.sh:

[neil@neilhost hadoop-2.6.0]$ sbin/start-yarn.sh 
starting yarn daemons
starting resourcemanager, logging to /home/neil/Servers/hadoop-2.6.0/logs/yarn-neil-resourcemanager-neilhost.neildomain.out
localhost: ssh: connect to host localhost port 22: Connection refused
[neil@neilhost hadoop-2.6.0]$ sudo sbin/start-yarn.sh 
[sudo] password for neil: 
starting yarn daemons
starting resourcemanager, logging to /home/neil/Servers/hadoop-2.6.0/logs/yarn-root-resourcemanager-neilhost.neildomain.out
localhost: ssh: connect to host localhost port 22: Connection refused
[neil@neilhost hadoop-2.6.0]$

As you can see, startup was refused: the SSH connection was rejected, and even sudo made no difference.

First, make sure the SSH server is installed:

sudo yum install openssh-server

The web offers a few fixes aimed at Ubuntu:

http://asyty.iteye.com/blog/1440141  http://blog.sina.com.cn/s/blog_573a052b0102dwxn.html

but their commands all failed for me:

/etc/init.d/ssh -start 

bash: /etc/init.d/ssh: No such file or directory

net start sshd

Invalid command: net start

What finally solved it:

[neil@neilhost hadoop-2.6.0]$ service sshd start
Redirecting to /bin/systemctl start  sshd.service
[neil@neilhost hadoop-2.6.0]$ pstree -p | grep ssh
           |-sshd(7937)
[neil@neilhost hadoop-2.6.0]$ ssh locahost
ssh: Could not resolve hostname locahost: Name or service not known
[neil@neilhost hadoop-2.6.0]$ ssh localhost
The authenticity of host 'localhost (127.0.0.1)' can't be established.
ECDSA key fingerprint is 88:17:a4:f2:dd:87:6f:ce:b4:04:07:d5:6c:ca:6c:b1.
Are you sure you want to continue connecting (yes/no)? y
Please type 'yes' or 'no': yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
neil@localhost's password: 
[neil@neilhost ~]$
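Since the start-*.sh scripts reach every worker over SSH, setting up passwordless SSH to localhost avoids the password prompts seen below. A sketch: it creates a key only if one is missing, and only appends it if it isn't already authorized.

```shell
# Create an RSA key if absent and authorize it for ssh-to-self logins.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
pub=$(cat ~/.ssh/id_rsa.pub)
grep -qxF "$pub" ~/.ssh/authorized_keys 2>/dev/null || echo "$pub" >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

After this, `ssh localhost` should log in without asking for a password, and start-yarn.sh stops prompting.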

After that, we try starting YARN again.

[neil@neilhost hadoop-2.6.0]$ sbin/start-yarn.sh
starting yarn daemons
resourcemanager running as process 5115. Stop it first.
neil@localhost's password: 
localhost: starting nodemanager, logging to /home/neil/Servers/hadoop-2.6.0/logs/yarn-neil-nodemanager-neilhost.neildomain.out
[neil@neilhost hadoop-2.6.0]$ jps
[neil@neilhost hadoop-2.6.0]$ jps
10113 Jps
7875 NameNode
9974 NodeManager
8936 ResourceManager
8136 DataNode
8430 SecondaryNameNode

And there it is: the ResourceManager is now properly up. (The PIDs differ between listings because I didn't write this article in one sitting.)

Now browse to yarn001:8088 to reach the UI below. Right after startup, Active Nodes may show 0; wait a while and refresh and it becomes 1.

Note: if it stays at 0, use jps to check whether the NodeManager is running; if it isn't, try sbin/yarn-daemon.sh start nodemanager.
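As a final end-to-end smoke test, the examples jar bundled with the tarball can run a small MapReduce job on YARN. A sketch: it assumes you run it from the install directory against the cluster started above, and it falls back to a note when no cluster is reachable.

```shell
# Estimate pi with 2 mappers x 10 samples; this exercises HDFS and YARN together.
result=$(bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar pi 2 10 2>/dev/null || echo "cluster not reachable; run this from the Hadoop install directory")
echo "$result"
```

While the job runs, it shows up as an application in the ResourceManager UI on port 8088.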

(screenshot: the YARN ResourceManager web UI on port 8088)

That is everything in "How to Install and Deploy Hadoop 2.6 + JDK 8 Using the JAR Package". Thanks for reading; I hope you found it helpful.
