Configuring Ubuntu Networking in VMWare
Author | WenasWei
Preface
While installing the Ubuntu 18.04 Linux operating system in VMWare I ran into some system configuration problems, so I am taking the opportunity to share some basic Ubuntu 18.04 operations and network configuration. The focus is on NAT networking and network setup, covering the following points:
- Changing the hostname
- Configuring VMWare's NAT network on Windows
- Setting up public network access and configuring a static IP
- Modifying the hosts file
- Setting up passwordless login
1. Ubuntu System Information and Changing the Hostname
1.1 Checking the Linux System Version
1. Commands for checking the Linux kernel version (two methods):
- cat /proc/version
- uname -a
Linux kernel version information:
$ cat /proc/version
Linux version 4.15.0-88-generic (buildd@lgw01-amd64-036) (gcc version 7.4.0 (Ubuntu 7.4.0-1ubuntu1~18.04.1)) #88-Ubuntu SMP Tue Feb 11 20:11:34 UTC 2020
$ uname -a
Linux wenas 4.15.0-88-generic #88-Ubuntu SMP Tue Feb 11 20:11:34 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
2. Commands for checking the Linux distribution version
(1) lsb_release -a
This lists all the version information:
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 18.04.2 LTS
Release:        18.04
Codename:       bionic
This command works on Linux distributions in general, including RedHat, SUSE, Debian and others.
(2) cat /etc/redhat-release
This method only works on RedHat-family distributions:
$ cat /etc/redhat-release
CentOS release 6.5 (Final)
(3) cat /etc/issue
This command also works on Linux distributions in general.
Enter the command in a terminal window:
$ cat /etc/issue
Ubuntu 18.04.2 LTS \n \l

$ hostname
wenas
This shows that the current host runs Ubuntu 18.04.2 LTS and that its hostname is wenas.
1.2 Changing the Hostname
(1) Permanently changing the hostname
- Edit the /etc/hostname file:
$ vi /etc/hostname
hadoop1
- Reboot the server:
$ reboot
# After the reboot, reconnect and the hostname change has taken effect
Last login: Wed May 12 00:38:12 2021 from 192.168.254.1
root@hadoop1:~#
(2) Temporarily changing the hostname
$ hostname testname
$ uname -a
Linux testname 4.15.0-88-generic #88-Ubuntu SMP Tue Feb 11 20:11:34 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
The "new hostname" here can be any legal string. With this approach, however, the new hostname is not saved to the system; after a reboot the hostname reverts to its original value.
The hostname is now temporarily changed to testname, but the current terminal will not show the new name right away; open a new terminal window to see it (a terminal connected over ssh needs to reconnect).
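As a side note, on Ubuntu 18.04 (which runs systemd) hostnamectl can change the hostname both immediately and persistently in one step, so it can stand in for the edit-and-reboot sequence above; a minimal sketch:
$ hostnamectl set-hostname hadoop1   # writes /etc/hostname and updates the running hostname
$ hostnamectl                        # the output should show "Static hostname: hadoop1"
As with the methods above, an ssh session that is already open keeps showing the old name in its prompt until you reconnect.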
2. Configuring VMWare's NAT Network on Windows
2.1 Checking the Host's VMnet8 Network
(1) Check the VMnet8 network with ipconfig (a sketch of the relevant output follows the list below)
VMnet8 network adapter information:
- IPv4 address: 192.168.254.1
- Subnet mask: 255.255.255.0
- Default gateway: 192.168.254.2
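These values come from running ipconfig on the Windows host; the VMnet8 block of the output looks roughly like the sketch below (the adapter name can vary with the Windows and VMware versions installed):
C:\> ipconfig

Ethernet adapter VMware Network Adapter VMnet8:

   IPv4 Address. . . . . . . . . . . : 192.168.254.1
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Default Gateway . . . . . . . . . : 192.168.254.2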
(2) Open the "Network and Internet" settings
Confirm that the IP address, subnet mask and default gateway shown there match the VMnet8 values above.
2.2 VMWare Virtual Machine Settings
- Click Edit and choose Virtual Network Editor
- In the Virtual Network Editor, click Change Settings
- Select VMnet8, open NAT Settings, and make it match the VMnet8 network checked in section 2.1
- In the virtual machine settings, set the network adapter to NAT mode
3. Linux Gateway Setup and Static IP Configuration
3.1 Configuring the Gateway and a Static IP
- Edit /etc/netplan/50-cloud-init.yaml:
$ vi /etc/netplan/50-cloud-init.yaml
network:
    ethernets:
        ens33:
            # Static IP: 192.168.254.130
            addresses: [192.168.254.130/24]
            # Gateway: the VMnet8 default gateway from section 2.1, 192.168.254.2
            gateway4: 192.168.254.2
            nameservers:
                addresses: [192.168.254.2]
            dhcp4: true
    version: 2
- Apply the configuration with netplan:
$ sudo netplan apply
- Check that the configuration has taken effect with ifconfig:
$ ifconfig
If everything worked, ens33 now carries the static address 192.168.254.130, roughly as sketched below.
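The original post shows the result as a screenshot; roughly, the ens33 block of the output should look like the sketch below (only the address line comes from the configuration above, the other fields are illustrative):
$ ifconfig ens33
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.254.130  netmask 255.255.255.0  broadcast 192.168.254.255
        ...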
3.2 Modifying the DNS Resolver Configuration
- Edit the configuration file /etc/systemd/resolved.conf:
root@hadoop1:~# vi /etc/systemd/resolved.conf
[Resolve]
DNS=1.1.1.1 8.8.8.8
#FallbackDNS=
#Domains=
#LLMNR=no
#MulticastDNS=no
#DNSSEC=no
#Cache=yes
#DNSStubListener=yes
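If you do not want to reboot straight away (the next section does reboot), restarting systemd-resolved should be enough for the new DNS servers to take effect; a small sketch:
$ sudo systemctl restart systemd-resolved
$ systemd-resolve --status | grep -A2 'DNS Servers'   # should list 1.1.1.1 and 8.8.8.8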
3.3 Rebooting and Going Online
Reboot the server, then go online and update apt:
$ reboot

root@hadoop2:~# apt-get update
Hit:1 http://mirrors.aliyun.com/ubuntu xenial InRelease
Get:2 http://mirrors.aliyun.com/ubuntu xenial-security InRelease [109 kB]
Get:3 http://mirrors.aliyun.com/ubuntu xenial-updates InRelease [109 kB]
Get:4 http://mirrors.aliyun.com/docker-ce/linux/ubuntu bionic InRelease [64.4 kB]
Get:5 http://mirrors.aliyun.com/ubuntu xenial-backports InRelease [107 kB]
Get:6 http://mirrors.aliyun.com/ubuntu xenial-security/main amd64 Packages [1,646 kB]
Get:7 http://mirrors.aliyun.com/ubuntu xenial-security/main Translation-en [380 kB]
Get:8 http://mirrors.aliyun.com/ubuntu xenial-security/restricted amd64 Packages [9,824 B]
Get:9 http://mirrors.aliyun.com/ubuntu xenial-security/universe amd64 Packages [786 kB]
Get:10 http://mirrors.aliyun.com/ubuntu xenial-security/universe Translation-en [226 kB]
Get:11 http://mirrors.aliyun.com/ubuntu xenial-security/multiverse amd64 Packages [7,864 B]
Get:12 http://mirrors.aliyun.com/ubuntu xenial-security/multiverse Translation-en [2,672 B]
Get:13 http://mirrors.aliyun.com/ubuntu xenial-updates/main amd64 Packages [2,048 kB]
Get:14 http://mirrors.aliyun.com/ubuntu xenial-updates/main Translation-en [482 kB]
Get:15 http://mirrors.aliyun.com/ubuntu xenial-updates/restricted amd64 Packages [10.2 kB]
Get:16 http://mirrors.aliyun.com/ubuntu xenial-updates/universe amd64 Packages [1,220 kB]
Get:17 http://mirrors.aliyun.com/ubuntu xenial-updates/universe Translation-en [358 kB]
Get:18 http://mirrors.aliyun.com/ubuntu xenial-updates/multiverse amd64 Packages [22.6 kB]
Get:19 http://mirrors.aliyun.com/ubuntu xenial-updates/multiverse Translation-en [8,476 B]
Get:20 http://mirrors.aliyun.com/docker-ce/linux/ubuntu bionic/stable amd64 Packages [18.1 kB]
Get:21 http://mirrors.aliyun.com/ubuntu xenial-backports/main amd64 Packages [9,812 B]
Get:22 http://mirrors.aliyun.com/ubuntu xenial-backports/universe amd64 Packages [11.3 kB]
Fetched 7,637 kB in 4s (1,738 kB/s)
Reading package lists... Done

root@hadoop2:~# ping www.baidu.com
PING www.wshifen.com (104.193.88.77) 56(84) bytes of data.
64 bytes from 104.193.88.77: icmp_seq=1 ttl=128 time=167 ms
64 bytes from 104.193.88.77: icmp_seq=2 ttl=128 time=166 ms
64 bytes from 104.193.88.77: icmp_seq=3 ttl=128 time=166 ms
64 bytes from 104.193.88.77: icmp_seq=4 ttl=128 time=166 ms
64 bytes from 104.193.88.77: icmp_seq=5 ttl=128 time=167 ms
4. Modifying the hosts File on Linux
4.1 Purpose and Preparing the Additional Hosts
Purpose: the hosts file is modified mainly so that the servers can reach each other directly by server name as well as by IP address.
Process: clone the VMWare Linux server 192.168.254.130 into 3 more hosts via Manage > Clone, then give each one the IP address and hostname listed below; the network configuration and permanent hostname are set with the steps described above.
Host IP to hostname mapping:
192.168.254.130 hadoop1
192.168.254.131 hadoop2
192.168.254.132 hadoop3
192.168.254.133 hadoop4
4.2 Modifying the hosts File
- Edit the /etc/hosts file:
$ vi /etc/hosts
Add the entries marked below; the file then looks like this:
127.0.0.1 localhost.localdomain localhost
::1 localhost6.localdomain6 localhost6
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
# New entries
192.168.254.130 hadoop1
192.168.254.131 hadoop2
192.168.254.132 hadoop3
192.168.254.133 hadoop4
- Test access by hostname; as shown below, hadoop1 responds normally:
$ ping hadoop1
64 bytes from hadoop1 (192.168.254.130): icmp_seq=1 ttl=64 time=0.018 ms
64 bytes from hadoop1 (192.168.254.130): icmp_seq=2 ttl=64 time=0.079 ms
64 bytes from hadoop1 (192.168.254.130): icmp_seq=3 ttl=64 time=0.079 ms
64 bytes from hadoop1 (192.168.254.130): icmp_seq=4 ttl=64 time=0.026 ms
64 bytes from hadoop1 (192.168.254.130): icmp_seq=5 ttl=64 time=0.026 ms
64 bytes from hadoop1 (192.168.254.130): icmp_seq=6 ttl=64 time=0.029 ms
64 bytes from hadoop1 (192.168.254.130): icmp_seq=7 ttl=64 time=0.028 ms
64 bytes from hadoop1 (192.168.254.130): icmp_seq=8 ttl=64 time=0.029 ms
4.3 Copying the hosts File to the Other Hosts with scp
On 192.168.254.130, run the following for 192.168.254.131, 192.168.254.132 and 192.168.254.133 in turn (see the loop sketch after the transcript):
root@hadoop1:~# scp /etc/hosts root@hadoop2:/etc/
# enter the password for hadoop2
root@hadoop2's password:
hosts                                      100%  371   194.6KB/s
root@hadoop1:~# scp /etc/hosts root@hadoop3:/etc/
# enter the password for hadoop3
root@hadoop3's password:
hosts                                      100%  371   194.6KB/s
root@hadoop1:~# scp /etc/hosts root@hadoop4:/etc/
# enter the password for hadoop4
root@hadoop4's password:
hosts                                      100%  371   194.6KB/s
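The three copies can also be collapsed into a single loop run on hadoop1; a sketch (root SSH access is still password based at this point, so each iteration prompts once):
# Push /etc/hosts from hadoop1 to the other three hosts
for h in hadoop2 hadoop3 hadoop4; do
    scp /etc/hosts root@$h:/etc/
done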
- Log in to hadoop2, hadoop3 and hadoop4 and confirm that the network test succeeds:
root@hadoop2:/# ping hadoop3
64 bytes from hadoop3 (192.168.254.132): icmp_seq=1 ttl=64 time=0.220 ms
64 bytes from hadoop3 (192.168.254.132): icmp_seq=2 ttl=64 time=0.234 ms
64 bytes from hadoop3 (192.168.254.132): icmp_seq=3 ttl=64 time=0.417 ms
5. Passwordless Login on Linux
5.1 Background
When the hosts above log in to one another, the login password of the target machine has to be typed every single time, which quickly becomes unbearable. Hence this section on passwordless login: configuring a Linux machine to log in to the other servers without a password.
5.2 Goal
Four servers: 192.168.254.130 should log in to 192.168.254.131, 192.168.254.132 and 192.168.254.133 without a password; taken to its conclusion, any of the servers can log in to any of the others without a password.
5.3 Generating the Public and Private Keys
Generate the key pair with the following command:
$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:2wrX44H3QnG4PLCi/ujwrDLalGwLrAxiX1iP/GlHNQI root@hadoop1
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|  E              |
| . .             |
|  ..oo.          |
| .  S+o+.        |
|.. .+ + .*=      |
|+o*. =.o=.*.     |
|O=.=o..+.=.+     |
|+++=*.ooo ...    |
+----[SHA256]-----+
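When repeating this on all four hosts it can be convenient to generate the key pair non-interactively; a sketch that matches the interactive run above (RSA key, no passphrase, default path):
$ ssh-keygen -t rsa -b 2048 -N "" -f /root/.ssh/id_rsa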
5.4 Installing the Local SSH Public Key on a Remote Host
Install the local ssh public key on 192.168.254.131 (hadoop2) with the following command:
$ ssh-copy-id root@hadoop2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@hadoop2's password:
- Note: every machine that needs passwordless access to the others must generate its own key pair (once, with ssh-keygen) and install its public key on each remote host it will log in to; a loop sketch follows.
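For example, to let hadoop1 reach the other three hosts without a password, the ssh-copy-id step can be repeated in a loop; a sketch (each iteration asks for that host's root password once):
# Run on hadoop1 after ssh-keygen has been run there
for h in hadoop2 hadoop3 hadoop4; do
    ssh-copy-id root@$h
done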
5.5 Copying the Public Keys to Every Host
- Log in to 192.168.254.131 and copy its authorized_keys, which now contains the public key generated on hadoop1, over to 192.168.254.130:
root@hadoop2:/# cd ~/.ssh/
root@hadoop2:~/.ssh# ll
total 24
drwx------ 2 root root 4096 May 12 23:25 ./
drwx------ 6 root root 4096 May 12 22:35 ../
-rw------- 1 root root 1182 May 12 23:23 authorized_keys
-rw------- 1 root root 1675 May 12 23:24 id_rsa
-rw-r--r-- 1 root root  394 May 12 23:24 id_rsa.pub
-rw-r--r-- 1 root root  444 May 12 23:25 known_hosts
root@hadoop2:~/.ssh# scp authorized_keys root@hadoop1:~/.ssh/
root@hadoop2:~/.ssh#
- Copy the public keys from 192.168.254.131, 192.168.254.132 and 192.168.254.133 to 192.168.254.130:
root@hadoop2:~# ssh-copy-id root@192.168.254.130
root@hadoop3:~# ssh-copy-id root@192.168.254.130
root@hadoop4:~# ssh-copy-id root@192.168.254.130
- Copy the authorized_keys on 192.168.254.130, which now holds the public keys of all 4 servers, to the other 3 hosts:
root@hadoop1:/# hostname -i
192.168.254.130
root@hadoop1:/# cd ~/.ssh/
root@hadoop1:~/.ssh# ll
total 24
drwx------ 2 root root 4096 May 12 23:42 ./
drwx------ 7 root root 4096 May 12 22:31 ../
-rw------- 1 root root 1576 May 12 23:42 authorized_keys
-rw------- 1 root root 1675 May 12 23:10 id_rsa
-rw-r--r-- 1 root root  394 May 12 23:10 id_rsa.pub
-rw-r--r-- 1 root root 1776 May 12 23:42 known_hosts
root@hadoop1:~/.ssh# scp authorized_keys root@192.168.254.131:~/.ssh/
root@hadoop1:~/.ssh# scp authorized_keys root@192.168.254.132:~/.ssh/
root@hadoop1:~/.ssh# scp authorized_keys root@192.168.254.133:~/.ssh/
As you can see, these scp transfers to the other three hosts no longer ask for a password, and ssh hadoop2 by hostname now logs in to 192.168.254.131 without a password.
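A quick way to confirm that the full mesh works is to run a remote command against every host and check that no password prompt appears; a sketch (assumes the hostnames from section 4.1 resolve on every machine):
# Run on each of the four hosts in turn
for h in hadoop1 hadoop2 hadoop3 hadoop4; do
    ssh root@$h hostname
done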
END
This article mainly prepares the network side of deploying hadoop and other big data components later on; the key steps are configuring a static IP, changing the hostname and setting up passwordless login. Feel free to follow the WeChat official account 进击的梦清. I am a worker riding the internet wave, hoping to learn and improve together with you, holding on to one belief: the more you know, the more you realize you don't know.