Lu Chunli's Study Notes
1. Environment Preparation

Three virtual machines installed under VMware. OS: CentOS-6.5-x86_64, memory: 2 GB, disk: 20 GB.
Host information:

IP              | Hostname
----------------|---------
192.168.137.117 | nnode
192.168.137.118 | dnode1
192.168.137.119 | dnode2
Note:
To keep VM-to-host communication independent of the physical machine's subnet, the VMs use VMware's NAT networking: Edit --> Virtual Network Editor, select VMnet8, and set the VMnet information to NAT mode.
Change the hostname:

# Check the current hostname
[root@nnode ~]# hostname
nnode
[root@nnode ~]#
# Change the hostname
[root@nnode ~]# vim /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=nnode    # hostname
Note: the hostnames of the three nodes are nnode, dnode1, and dnode2.
Edit the hosts file:

[root@nnode ~]# vim /etc/hosts
# 127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
# ::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.137.117 nnode  nnode
192.168.137.118 dnode1 dnode1
192.168.137.119 dnode2 dnode2

Note: comment out the two original lines (127.0.0.1 and ::1).
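The hosts-file step above can be sketched in a locally runnable form. Here HOSTS_FILE points at a scratch file so the sketch is safe to run anywhere; on the real nodes it would be /etc/hosts, and the edited file would then be pushed to the other two nodes.

```shell
# Hedged sketch: append the cluster name mappings to a hosts file.
# HOSTS_FILE is a scratch stand-in for /etc/hosts.
HOSTS_FILE=$(mktemp)
cat >> "$HOSTS_FILE" <<'EOF'
192.168.137.117 nnode
192.168.137.118 dnode1
192.168.137.119 dnode2
EOF
# On the real cluster the file would then be distributed, e.g.:
#   scp /etc/hosts root@dnode1:/etc/hosts
entries=$(grep -c '^192\.168\.137\.' "$HOSTS_FILE")
echo "cluster entries: $entries"
```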
Disable the firewall:

# Check status
service iptables status
# Stop it for the current session
service iptables stop
# Disable it permanently (survives reboot)
chkconfig iptables off
Disable SELinux:

[root@nnode ~]# vim /etc/sysconfig/selinux
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing  - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled   - No SELinux policy is loaded.
# SELINUX=enforcing
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls      - Multi Level Security protection.
SELINUXTYPE=targeted
Configure a static IP:

[root@nnode ~]# vim /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
# Static IP
BOOTPROTO=static
IPV6INIT="yes"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
UUID="81c9009f-6b9f-4f72-8a38-ab91fecf788a"
HWADDR=00:0C:29:67:EF:06
# IP address
IPADDR=192.168.137.117
# Netmask 255.255.255.0
PREFIX=24
# Gateway address, the one assigned by VMware
GATEWAY=192.168.137.2
# Public DNS server
DNS1=114.114.115.115
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
# Auto-generated interface name
NAME="System eth0"
[root@nnode ~]#
Note:
The settings can be changed either through the GUI or by editing the configuration file; the other two machines are configured the same way.
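After editing ifcfg-eth0, a quick sanity check that the keys a static configuration needs are all present can save a debugging round-trip. This is a hedged sketch using a scratch copy of the file; on the node CFG would be /etc/sysconfig/network-scripts/ifcfg-eth0, followed by `service network restart` to apply the change.

```shell
# Hedged sketch: verify the keys a static-IP config requires.
# CFG is a scratch stand-in for ifcfg-eth0.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
DEVICE="eth0"
BOOTPROTO=static
ONBOOT="yes"
IPADDR=192.168.137.117
PREFIX=24
GATEWAY=192.168.137.2
DNS1=114.114.115.115
EOF
missing=0
for key in BOOTPROTO IPADDR PREFIX GATEWAY DNS1; do
  grep -q "^$key=" "$CFG" || { echo "missing: $key"; missing=1; }
done
[ "$missing" -eq 0 ] && echo "static IP config looks complete"
```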
Check the IP configuration:

[root@nnode ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:67:EF:06
          inet addr:192.168.137.117  Bcast:192.168.137.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe67:ef06/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1814 errors:0 dropped:0 overruns:0 frame:0
          TX packets:658 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:186698 (182.3 KiB)  TX bytes:75045 (73.2 KiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:480 (480.0 b)  TX bytes:480 (480.0 b)
[root@nnode ~]#
Check network connectivity:

[root@nnode ~]# ping www.baidu.com
PING www.a.shifen.com (119.75.218.70) 56(84) bytes of data.
64 bytes from 119.75.218.70: icmp_seq=1 ttl=128 time=7.57 ms
64 bytes from 119.75.218.70: icmp_seq=2 ttl=128 time=7.52 ms
64 bytes from 119.75.218.70: icmp_seq=3 ttl=128 time=6.02 ms
64 bytes from 119.75.218.70: icmp_seq=4 ttl=128 time=3.91 ms

--- www.a.shifen.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 2851ms
rtt min/avg/max/mdev = 3.917/5.585/7.577/1.312 ms
[root@nnode ~]#
2. Install the JDK

To make file sharing between the physical machine and the VMs easy, set up a shared folder in VMware: select a VM -> right-click Settings --> Options -> Shared Folders -> set folder sharing to "Always enabled", then add the shared directory.
Log in to Linux and verify the shared directory:

[root@nnode hgfs]# cd /mnt/hgfs/
[root@nnode hgfs]# ls
Share
[root@nnode hgfs]# cd Share/

The JDK package used here is jdk-7u75-linux-x64.tar.gz.
Extract it:

[root@nnode lucl]# tar -xzvf jdk-7u75-linux-x64.tar.gz
Notes:
Extract the JDK tarball on each of the three machines.
All big-data software is deployed under /lucl.
The programs are run as the hadoop user (chown -R hadoop:hadoop /lucl).
Configure the Java environment variables:

[root@nnode lucl]# vim /etc/profile
# Append:
export JAVA_HOME=/lucl/jdk1.7.0_75
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
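The effect of the three export lines above can be sketched in isolation. This uses a scratch directory as a stand-in for JAVA_HOME so it runs anywhere; on the nodes JAVA_HOME is /lucl/jdk1.7.0_75.

```shell
# Hedged sketch: what the /etc/profile additions do.
# JAVA_HOME here is a scratch stand-in for /lucl/jdk1.7.0_75.
JAVA_HOME=$(mktemp -d)
mkdir -p "$JAVA_HOME/bin" "$JAVA_HOME/lib"
touch "$JAVA_HOME/lib/dt.jar" "$JAVA_HOME/lib/tools.jar"
export JAVA_HOME
export PATH=$PATH:$JAVA_HOME/bin          # java/javac become resolvable
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
echo ":$PATH:" | grep -q ":$JAVA_HOME/bin:" && echo "PATH ok"
```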
Verify:

[root@nnode lucl]# source /etc/profile
[root@nnode lucl]# java -version
java version "1.7.0_75"
Java(TM) SE Runtime Environment (build 1.7.0_75-b13)
Java HotSpot(TM) 64-Bit Server VM (build 24.75-b04, mixed mode)
[root@nnode lucl]#
3. Set Up Passwordless SSH Login

Generate a key pair on each of the three hosts:

[hadoop@nnode ~]$ ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
48:03:e8:86:0a:59:77:94:f1:68:80:fc:28:29:a4:be hadoop@nnode
The key's randomart image is:
+--[ RSA 2048]----+
|  .    oo.oo     |
|   .=   .ooo     |
|o* + .= .        |
|O +  .o o        |
|=o  .   S        |
|..               |
| .               |
|  E              |
|                 |
+-----------------+
[hadoop@nnode ~]$
Log in to each of the three hosts and run:

[hadoop@nnode ~]$ ll .ssh
total 8
-rw------- 1 hadoop hadoop 1671 Jan  9 19:15 id_rsa
-rw-r--r-- 1 hadoop hadoop  394 Jan  9 19:15 id_rsa.pub
[hadoop@nnode ~]$ cd .ssh/
[hadoop@nnode .ssh]$ cp id_rsa.pub authorized_keys
Copy authorized_keys from dnode1 and dnode2 to nnode:

# Copy dnode1's authorized_keys to nnode
[hadoop@dnode1 .ssh]$ scp authorized_keys hadoop@nnode:/home/hadoop/dnode1_authorized_keys
The authenticity of host 'nnode (192.168.137.117)' can't be established.
RSA key fingerprint is 90:7a:48:8d:b9:ed:c9:92:56:01:f2:e7:49:99:1c:93.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'nnode,192.168.137.117' (RSA) to the list of known hosts.
hadoop@nnode's password:
authorized_keys                          100%  395     0.4KB/s   00:00
# Rename the local authorized_keys file
[hadoop@dnode1 .ssh]$ mv authorized_keys authorized_keys_backup
[hadoop@dnode1 .ssh]$

# Copy dnode2's authorized_keys to nnode
[hadoop@dnode2 .ssh]$ scp authorized_keys hadoop@nnode:/home/hadoop/dnode2_authorized_keys
The authenticity of host 'nnode (192.168.137.117)' can't be established.
RSA key fingerprint is 90:7a:48:8d:b9:ed:c9:92:56:01:f2:e7:49:99:1c:93.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'nnode,192.168.137.117' (RSA) to the list of known hosts.
hadoop@nnode's password:
authorized_keys                          100%  395     0.4KB/s   00:00
# Rename the local authorized_keys file
[hadoop@dnode2 .ssh]$ mv authorized_keys authorized_keys_backup
[hadoop@dnode2 .ssh]$
Merge the authorized_keys files:

[hadoop@nnode .ssh]$ cat /home/hadoop/dnode1_authorized_keys >> authorized_keys
[hadoop@nnode .ssh]$ cat /home/hadoop/dnode2_authorized_keys >> authorized_keys
[hadoop@nnode .ssh]$ cd ..
# Must be 700
[hadoop@nnode ~]$ chmod 700 ~/.ssh
# Should be 600
[hadoop@nnode ~]$ chmod 600 ~/.ssh/authorized_keys
Distribute the merged authorized_keys to dnode1 and dnode2:

[hadoop@nnode .ssh]$ scp authorized_keys hadoop@dnode1:/home/hadoop/.ssh
hadoop@dnode1's password:
authorized_keys                          100% 1184     1.2KB/s   00:00
[hadoop@nnode .ssh]$ scp authorized_keys hadoop@dnode2:/home/hadoop/.ssh
hadoop@dnode2's password:
authorized_keys                          100% 1184     1.2KB/s   00:00
[hadoop@nnode .ssh]$
Adjust the file permissions:

# Run on dnode1 and dnode2
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
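The merge-and-permissions steps above can be sketched in a locally runnable form. SSH_DIR is a scratch directory standing in for ~/.ssh, and the key strings are hypothetical placeholders for the real public keys.

```shell
# Hedged sketch of the authorized_keys merge and permission fix.
# SSH_DIR stands in for ~/.ssh; key contents are placeholders.
SSH_DIR=$(mktemp -d)
echo "ssh-rsa PLACEHOLDER-key-nnode hadoop@nnode"   > "$SSH_DIR/authorized_keys"
echo "ssh-rsa PLACEHOLDER-key-dnode1 hadoop@dnode1" > "$SSH_DIR/dnode1_authorized_keys"
echo "ssh-rsa PLACEHOLDER-key-dnode2 hadoop@dnode2" > "$SSH_DIR/dnode2_authorized_keys"
# Merge the collected keys, as done on nnode above
cat "$SSH_DIR/dnode1_authorized_keys" >> "$SSH_DIR/authorized_keys"
cat "$SSH_DIR/dnode2_authorized_keys" >> "$SSH_DIR/authorized_keys"
# sshd refuses keys behind loose permissions: 700 on the dir, 600 on the file
chmod 700 "$SSH_DIR"
chmod 600 "$SSH_DIR/authorized_keys"
keys=$(wc -l < "$SSH_DIR/authorized_keys")
echo "merged keys: $keys"
```

The same three chmod/cat lines apply verbatim on the real nodes, with ~/.ssh in place of SSH_DIR.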
Test and verify:

# Verify from nnode
[hadoop@nnode ~]$ ssh nnode
Last login: Sat Jan  9 19:25:18 2016 from nnode
[hadoop@nnode ~]$ exit
logout
Connection to nnode closed.
[hadoop@nnode ~]$ ssh dnode1
Last login: Sat Jan  9 19:25:08 2016 from dnode2
[hadoop@dnode1 ~]$ exit
logout
Connection to dnode1 closed.
[hadoop@nnode ~]$ ssh dnode2
Last login: Sat Jan  9 19:25:12 2016 from dnode2
[hadoop@dnode2 ~]$ exit
logout
Connection to dnode2 closed.
[hadoop@nnode ~]$

# Verify from dnode1
[hadoop@dnode1 ~]$ ssh dnode1
Last login: Sat Jan  9 19:35:08 2016 from nnode
[hadoop@dnode1 ~]$ exit
logout
Connection to dnode1 closed.
[hadoop@dnode1 ~]$ ssh nnode
Last login: Sat Jan  9 19:35:04 2016 from nnode
[hadoop@nnode ~]$ exit
logout
Connection to nnode closed.
[hadoop@dnode1 ~]$ ssh dnode2
Last login: Sat Jan  9 19:35:12 2016 from nnode
[hadoop@dnode2 ~]$ exit
logout
Connection to dnode2 closed.
[hadoop@dnode1 ~]$

# Verify from dnode2
[hadoop@dnode2 ~]$ ssh dnode2
Last login: Sat Jan  9 19:35:59 2016 from dnode1
[hadoop@dnode2 ~]$ exit
logout
Connection to dnode2 closed.
[hadoop@dnode2 ~]$ ssh nnode
Last login: Sat Jan  9 19:35:56 2016 from dnode1
[hadoop@nnode ~]$ exit
logout
Connection to nnode closed.
[hadoop@dnode2 ~]$ ssh dnode1
Last login: Sat Jan  9 19:35:52 2016 from dnode1
[hadoop@dnode1 ~]$ exit
logout
Connection to dnode1 closed.
[hadoop@dnode2 ~]$
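The nine login checks above (every node to every node) can be scripted rather than typed by hand. This is a hedged sketch: it enumerates the (source, target) pairs, and runs the actual ssh probe only when DO_SSH=1 is set, so the enumeration itself can run anywhere without a cluster.

```shell
# Hedged sketch: enumerate every passwordless-login pair to verify.
# The real ssh probe is gated behind DO_SSH=1 (hypothetical switch).
HOSTS="nnode dnode1 dnode2"
pairs=0
for src in $HOSTS; do
  for dst in $HOSTS; do
    echo "check: $src -> $dst"
    pairs=$((pairs + 1))
    if [ "${DO_SSH:-0}" = "1" ]; then
      # BatchMode=yes fails instead of prompting, so a password prompt
      # (i.e. a misconfigured key) shows up as "FAILED" rather than a hang.
      ssh -o BatchMode=yes -o ConnectTimeout=5 "$dst" true \
        && echo "ok: $dst" || echo "FAILED: $dst"
    fi
  done
done
echo "total pairs: $pairs"
```

On the cluster, run it once from each node with DO_SSH=1; three FAILED-free runs cover the same matrix as the manual transcript above.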