An Oracle 11G RAC Test Environment on VMware vSphere

Today we will walk through deploying an Oracle RAC test environment on VMware vSphere virtualization. As a rule, virtualization is only used for Oracle RAC test environments; for production we recommend physical servers with dedicated storage rather than a virtualized RAC. In a virtualized environment the recommendation is single-instance Oracle or Data Guard (Data Guard provides disaster recovery, read/write splitting, and fast failover, though its switchover still requires manual intervention). So why is a virtualized environment not recommended for Oracle RAC?

Drawbacks of deploying Oracle RAC on VMware vSphere:

1. If an Oracle RAC deployed on VMware vSphere runs into problems, Oracle will not provide vendor support;

2. With Oracle RAC on VMware vSphere, the shared storage can become an I/O bottleneck;

3. Oracle RAC on VMware vSphere typically loses 5%-20% in performance;

4. Once deployed on VMware vSphere, an Oracle RAC tends to create pitfalls for later operations, especially around virtualization-specific issues. For example, the virtualization environment may be managed by the network team while the databases are managed by the database operations team; when a storage or virtualization problem occurs, the coordination process is slow. If we administer the virtualization platform ourselves and hold the necessary operational rights, deploying Oracle RAC on it is actually workable. On balance, though, whether for test environments or small production environments, virtualization is currently a poor fit for Oracle RAC (there are also many small virtualization details to debug and tune, such as storage reconfiguration and restarts, host reboots during setup, NIC adjustments, and so on);

5. The main purpose of Oracle RAC is high availability, but virtualization already provides its own HA; from that angle there is no need to deploy RAC on virtualization, because if the virtualization environment as a whole collapses, Oracle RAC is of no help at all;

6. In a virtualized environment you can mirror the primary database, or use Data Guard (which needs no shared storage) for mirroring and backup switchover;

7. Running Oracle RAC in a virtualized environment adds the risk of hypervisor software faults and inserts an extra layer that can affect the database; overall the risk is higher than deploying on physical hardware.

Advantages of deploying Oracle RAC on VMware vSphere:

1. Deploying Oracle RAC on VMware vSphere is quick and convenient: no cabling or server hardware work is needed, and NICs and storage can be provisioned rapidly;

2. Virtualization is the direction the industry is heading; it also saves rack space, lowers overall data-center power consumption, and reduces the number of physical hosts to maintain;

3. Virtual machines can be migrated live to another VMware vSphere host, making relocation easy;

4. From the operations DBA's point of view, initial provisioning on VMware vSphere is fast and convenient.

With all that said, how do we quickly deploy an Oracle RAC test environment on VMware vSphere?

1. Base Environment

1.1 Installation Environment

Hypervisor: VMware vSphere 5.5

RAC node OS: CentOS Linux release 7.6.1810 (Core)

Oracle Database software: Oracle 11G R2 11.2.0.4

Cluster software: Oracle Grid Infrastructure 11G R2

Shared storage: ASM

1.2 Network Plan

Item          Node 1            Node 2
Hostname      oracle11grac01    oracle11grac02
Public IP     172.16.200.21     172.16.200.22
Private IP    192.168.0.21      192.168.0.22
VIP           172.16.200.23     172.16.200.24
SCAN IP       172.16.200.25     172.16.200.25

Initial NIC plan: for the installation it is enough that the public IP, VIP, and SCAN IP share one subnet, and that the private IPs share another.
Note: the public IP is what administrators use to make sure they are operating on the right machine; think of it as the machine's real IP. The private IP carries the heartbeat/interconnect traffic and can be ignored at the user level; in short it keeps the two servers' data in sync. The VIP supports client failover: if one node goes down, the other takes over the address automatically and clients notice nothing. The SCAN IP is new in 11gR2; the original CRS VIPs still exist, and SCAN mainly simplifies client connections.

1.3 ASM Disk Group Plan

Component            File system   Size    ASM disk group   Redundancy   Disk name
OCR/voting disk      ASM           5GB     CRS              External     ASMDisk_OCR_5G
Fast recovery area   ASM           50GB    FRA              External     ASMDisk_FRA_50G
Database area        ASM           500GB   DATA             External     ASMDisk_DATA_500G

1.4 Oracle Components

Component             Owner    Secondary groups           Home directory   Oracle base       Oracle home
Grid Infrastructure   grid     asmadmin,asmdba,asmoper    /home/grid       /u01/app/grid     /u01/app/11.2.0/grid
Oracle RAC            oracle   dba,oper,asmdba,asmadmin   /home/oracle     /u01/app/oracle   /u01/app/oracle/product/11.2.0/db_1

1.5 Oracle Installation Packages

Download links for Oracle 11G R2 11.2.0.4 for Linux:

# x86 (32-bit)
https://updates.oracle.com/Orion/Services/download/p13390677_112040_LINUX_1of7.zip?aru=16720989&patch_file=p13390677_112040_LINUX_1of7.zip  
https://updates.oracle.com/Orion/Services/download/p13390677_112040_LINUX_2of7.zip?aru=16720989&patch_file=p13390677_112040_LINUX_2of7.zip  
https://updates.oracle.com/Orion/Services/download/p13390677_112040_LINUX_3of7.zip?aru=16720989&patch_file=p13390677_112040_LINUX_3of7.zip  
https://updates.oracle.com/Orion/Services/download/p13390677_112040_LINUX_4of7.zip?aru=16720989&patch_file=p13390677_112040_LINUX_4of7.zip  
https://updates.oracle.com/Orion/Services/download/p13390677_112040_LINUX_5of7.zip?aru=16720989&patch_file=p13390677_112040_LINUX_5of7.zip  
https://updates.oracle.com/Orion/Services/download/p13390677_112040_LINUX_6of7.zip?aru=16720989&patch_file=p13390677_112040_LINUX_6of7.zip  
https://updates.oracle.com/Orion/Services/download/p13390677_112040_LINUX_7of7.zip?aru=16720989&patch_file=p13390677_112040_LINUX_7of7.zip  

# x86-64 (64-bit)
https://updates.oracle.com/Orion/Services/download/p13390677_112040_Linux-x86-64_1of7.zip?aru=16716375&patch_file=p13390677_112040_Linux-x86-64_1of7.zip  
https://updates.oracle.com/Orion/Services/download/p13390677_112040_Linux-x86-64_2of7.zip?aru=16716375&patch_file=p13390677_112040_Linux-x86-64_2of7.zip  
https://updates.oracle.com/Orion/Services/download/p13390677_112040_Linux-x86-64_3of7.zip?aru=16716375&patch_file=p13390677_112040_Linux-x86-64_3of7.zip  
https://updates.oracle.com/Orion/Services/download/p13390677_112040_Linux-x86-64_4of7.zip?aru=16716375&patch_file=p13390677_112040_Linux-x86-64_4of7.zip  
https://updates.oracle.com/Orion/Services/download/p13390677_112040_Linux-x86-64_5of7.zip?aru=16716375&patch_file=p13390677_112040_Linux-x86-64_5of7.zip  
https://updates.oracle.com/Orion/Services/download/p13390677_112040_Linux-x86-64_6of7.zip?aru=16716375&patch_file=p13390677_112040_Linux-x86-64_6of7.zip  
https://updates.oracle.com/Orion/Services/download/p13390677_112040_Linux-x86-64_7of7.zip?aru=16716375&patch_file=p13390677_112040_Linux-x86-64_7of7.zip  


Note: you can also download from Oracle's official site yourself (an account is required): https://updates.oracle.com/download/13390677.html. After downloading, upload the zips to any location on the virtual machines; we will need them later when installing grid and the Oracle database.
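Once the zips are on the node, they can be unpacked into a staging directory. A minimal sketch, assuming the files sit in /u01/soft; for the 11.2.0.4 patchset, parts 1 and 2 of 7 typically contain the database software and part 3 the Grid Infrastructure (verify against the README of your download):

```shell
# Unpack the grid and database media into a staging directory.
# STAGE and the part-to-product mapping above are assumptions.
STAGE=/u01/soft
for f in "$STAGE"/p13390677_112040_Linux-x86-64_[123]of7.zip; do
  if [ -f "$f" ]; then
    unzip -q -d "$STAGE" "$f"   # yields $STAGE/database and $STAGE/grid
  fi
done
```

Unzipping parts 1 and 2 into the same directory merges them into a single database/ tree.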

2. Creating the Virtual Machines

Note: creating the virtual machines and installing the operating system will not be covered in detail here; look those steps up if needed.

One point deserves special emphasis: after the OS installation we must delete the automatically created virtual NIC. Below, besides the two normal NICs there is an extra virtual bridge (virbr0); it needs to be removed so that the later system checks do not report errors.

# Removing the default virtual network uses the libvirt tools; install them with yum if they are missing
[root@oracle11grac01 ~]# yum -y install libvirt
[root@oracle11grac01 ~]# ifconfig
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.200.21  netmask 255.255.255.0  broadcast 172.16.200.255
        inet6 fe80::6af4:fd6f:545a:e003  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:33:68:71  txqueuelen 1000  (Ethernet)
        RX packets 5483013  bytes 7454425719 (6.9 GiB)
        RX errors 0  dropped 12  overruns 0  frame 0
        TX packets 1784431  bytes 101204215 (96.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens192: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.21  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::4136:e030:d6a2:7922  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:33:68:7b  txqueuelen 1000  (Ethernet)
        RX packets 202161  bytes 17118489 (16.3 MiB)
        RX errors 0  dropped 43281  overruns 0  frame 0
        TX packets 245  bytes 25208 (24.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 2  bytes 98 (98.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2  bytes 98 (98.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

virbr0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:99:e9:56  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@oracle11grac01 ~]# virsh net-list
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes

[root@oracle11grac01 ~]# virsh net-destroy default 
Network default destroyed

[root@oracle11grac01 ~]# virsh net-undefine default
Network default has been undefined

[root@oracle11grac01 ~]# systemctl restart libvirtd
[root@oracle11grac01 ~]# ifconfig
ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.200.21  netmask 255.255.255.0  broadcast 172.16.200.255
        inet6 fe80::6af4:fd6f:545a:e003  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:33:68:71  txqueuelen 1000  (Ethernet)
        RX packets 5483268  bytes 7454451456 (6.9 GiB)
        RX errors 0  dropped 12  overruns 0  frame 0
        TX packets 1784560  bytes 101225661 (96.5 MiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens192: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.21  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::4136:e030:d6a2:7922  prefixlen 64  scopeid 0x20<link>
        ether 00:0c:29:33:68:7b  txqueuelen 1000  (Ethernet)
        RX packets 202498  bytes 17145690 (16.3 MiB)
        RX errors 0  dropped 43358  overruns 0  frame 0
        TX packets 245  bytes 25208 (24.6 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 2  bytes 98 (98.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2  bytes 98 (98.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

[root@oracle11grac01 ~]#

Note: the virtual NIC is now gone.

2.1 Disabling the Firewall

[root@oracle11grac01 ~]# systemctl stop firewalld  
[root@oracle11grac01 ~]# systemctl disable firewalld
[root@oracle11grac01 ~]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
[root@oracle11grac01 ~]# 

2.2 Disabling SELinux

Edit the file with vi /etc/selinux/config and make sure it contains SELINUX=disabled.

[root@oracle11grac01 ~]# cat /etc/selinux/config 

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of three values:
#     targeted - Targeted processes are protected,
#     minimum - Modification of targeted policy. Only selected processes are protected. 
#     mls - Multi Level Security protection.
SELINUXTYPE=targeted 


[root@oracle11grac01 ~]# setenforce 0

2.3 Configuring Hostnames and the hosts File

# Replace the node names below with your own hostnames
hostnamectl set-hostname oracle11grac01
hostnamectl set-hostname oracle11grac02

# Configure the hosts file
cat >> /etc/hosts <<EOF
# Public ip ens160
172.16.200.21   oracle11grac01
172.16.200.22   oracle11grac02

# Private ip ens192
192.168.0.21   oracle11grac01-priv
192.168.0.22   oracle11grac02-priv

# VIP ens160
172.16.200.23   oracle11grac01-vip
172.16.200.24   oracle11grac02-vip

# Scan ip ens160
172.16.200.25   oracle11grac-scan
EOF

Note: no DNS is configured here. If DNS cannot resolve the hostnames, add the name/IP mappings to the /etc/hosts file on every machine; then log out and back in as root and the new hostname takes effect. One point worth stressing: keep hostnames lowercase wherever possible.
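After editing /etc/hosts, a quick sanity check confirms that every planned cluster name resolves on each node. A small sketch; getent consults both DNS and /etc/hosts:

```shell
# Report which of the planned RAC hostnames resolve on this node.
check_rac_hosts() {
  for h in oracle11grac01 oracle11grac02 \
           oracle11grac01-priv oracle11grac02-priv \
           oracle11grac01-vip oracle11grac02-vip \
           oracle11grac-scan; do
    if getent hosts "$h" >/dev/null; then
      echo "OK      $h"
    else
      echo "MISSING $h"
    fi
  done
}
check_rac_hosts
```

Run it on both nodes; every line should read OK before continuing.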

2.4 Configuring a Local Yum Repository

Note: the steps below are for environments without internet access; if you do have network access you can skip the local Yum repository. All of the following is configured on both oracle11grac01 and oracle11grac02.

[root@oracle11grac01 ~]# mount -t auto /dev/cdrom /mnt/
mount: /dev/sr0 is write-protected, mounting read-only
[root@oracle11grac01 ~]# cd /etc/yum.repos.d/
[root@oracle11grac01 yum.repos.d]# cat CentOS-Media.repo 
# CentOS-Media.repo
#
#  This repo can be used with mounted DVD media, verify the mount point for
#  CentOS-7.  You can use this repo and yum to install items directly off the
#  DVD ISO that we release.
#
# To use this repo, put in your DVD and use it with the other repos too:
#  yum --enablerepo=c7-media [command]
#  
# or for ONLY the media repo, do this:
#
#  yum --disablerepo=\* --enablerepo=c7-media [command]

[c7-media]
name=CentOS-$releasever - Media
baseurl=file:///media/CentOS/
        file:///media/cdrom/
        file:///media/cdrecorder/
        file:///mnt/
gpgcheck=0
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

[root@oracle11grac01 yum.repos.d]# 



[root@oracle11grac01 yum.repos.d]# yum clean all
Loaded plugins: fastestmirror, langpacks
Cleaning repos: base c7-media extras updates
Cleaning up list of fastest mirrors
[root@oracle11grac01 yum.repos.d]# yum makecache
Loaded plugins: fastestmirror, langpacks
Determining fastest mirrors
 * base: ftp.sjtu.edu.cn
 * c7-media: 
 * extras: ftp.sjtu.edu.cn
 * updates: ftp.sjtu.edu.cn
base                                                                                                        | 3.6 kB  00:00:00     
file:///media/CentOS/repodata/repomd.xml: [Errno 14] curl#37 - "Couldn't open file /media/CentOS/repodata/repomd.xml"
Trying other mirror.
file:///media/cdrecorder/repodata/repomd.xml: [Errno 14] curl#37 - "Couldn't open file /media/cdrecorder/repodata/repomd.xml"
Trying other mirror.
file:///media/cdrom/repodata/repomd.xml: [Errno 14] curl#37 - "Couldn't open file /media/cdrom/repodata/repomd.xml"
Trying other mirror.
c7-media                                                                                                    | 3.6 kB  00:00:00     
extras                                                                                                      | 2.9 kB  00:00:00     
updates                                                                                                     | 2.9 kB  00:00:00     
c7-media/group_gz              FAILED                                          
file:///media/CentOS/repodata/bc140c8149fc43a5248fccff0daeef38182e49f6fe75d9b46db1206dc25a6c1c-c7-x86_64-comps.xml.gz: [Errno 14] curl#37 - "Couldn't open file /media/CentOS/repodata/bc140c8149fc43a5248fccff0daeef38182e49f6fe75d9b46db1206dc25a6c1c-c7-x86_64-comps.xml.gz"
Trying other mirror.
c7-media/filelists_db          FAILED                                          
file:///media/cdrecorder/repodata/c4a7811896f3a65404455f3631907adaca3bb9bcd93acb6e476a9a7708abe8c7-filelists.sqlite.bz2: [Errno 14] curl#37 - "Couldn't open file /media/cdrecorder/repodata/c4a7811896f3a65404455f3631907adaca3bb9bcd93acb6e476a9a7708abe8c7-filelists.sqlite.bz2"
Trying other mirror.
c7-media/primary_db            FAILED                                          
file:///media/cdrom/repodata/2ff4471767c82fed8b27981c603e3dc9d6559ca69e162f5ca2bb53f2450c8b08-primary.sqlite.bz2: [Errno 14] curl#37 - "Couldn't open file /media/cdrom/repodata/2ff4471767c82fed8b27981c603e3dc9d6559ca69e162f5ca2bb53f2450c8b08-primary.sqlite.bz2"
Trying other mirror.
(1/14): c7-media/other_db                                                                                   | 1.3 MB  00:00:00     
(2/14): base/7/x86_64/group_gz                                                                              | 153 kB  00:00:00     
(3/14): base/7/x86_64/primary_db                                                                            | 6.1 MB  00:00:00     
(4/14): base/7/x86_64/other_db                                                                              | 2.6 MB  00:00:00     
(5/14): extras/7/x86_64/other_db                                                                            | 143 kB  00:00:00     
(6/14): extras/7/x86_64/filelists_db                                                                        | 235 kB  00:00:00     
(7/14): updates/7/x86_64/other_db                                                                           | 680 kB  00:00:00     
(8/14): c7-media/group_gz                                                                                   | 166 kB  00:00:00     
(9/14): base/7/x86_64/filelists_db                                                                          | 7.2 MB  00:00:00     
(10/14): c7-media/filelists_db                                                                              | 3.2 MB  00:00:00     
(11/14): updates/7/x86_64/primary_db                                                                        | 8.8 MB  00:00:00     
(12/14): extras/7/x86_64/primary_db                                                                         | 242 kB  00:00:00     
(13/14): c7-media/primary_db                                                                                | 3.1 MB  00:00:00     
(14/14): updates/7/x86_64/filelists_db                                                                      | 5.1 MB  00:00:00     
Metadata Cache Created
[root@oracle11grac01 yum.repos.d]#

2.5 Installing Dependency Packages

yum install -y binutils compat-libcap1 compat-libstdc++-33 compat-libstdc++-33.i686 gcc gcc-c++ glibc glibc.i686 glibc-devel glibc-devel.i686 ksh libgcc libgcc.i686 libstdc++ libstdc++.i686 libstdc++-devel libstdc++-devel.i686 libaio libaio.i686 libaio-devel libaio-devel.i686 libXext libXext.i686 libXtst libXtst.i686 libX11 libX11.i686 libXau libXau.i686 libxcb libxcb.i686 libXi libXi.i686 make sysstat unixODBC unixODBC-devel readline libtermcap-devel bc compat-libstdc++ elfutils-libelf elfutils-libelf-devel fontconfig-devel libXi libXtst libXrender libXrender-devel libgcc librdmacm-devel libstdc++ libstdc++-devel net-tools nfs-utils python python-configshell python-rtslib python-six targetcli smartmontools

Note: RHEL 7 additionally needs one standalone package: rpm -ivh compat-libstdc++-33-3.2.3-72.el7.x86_64.rpm. A RAC installation is built on Grid Infrastructure (GI) plus the RDBMS, and the required packages are the same as for an Oracle RDBMS install; refer to the RDBMS installation documentation, or install whatever packages GI's prerequisite check reports as missing. For most packages the 64-bit version alone suffices, but the packages listed above must be installed, otherwise the later checks will fail.
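A quick way to see which required packages are still missing is to query rpm for each one. A minimal sketch; the list below is only a subset, extend it from the full yum line above:

```shell
# Print OK/MISSING for a subset of the Oracle prerequisite packages.
check_prereq_rpms() {
  for p in binutils gcc gcc-c++ glibc ksh libaio libaio-devel make sysstat; do
    if rpm -q "$p" >/dev/null 2>&1; then
      echo "OK      $p"
    else
      echo "MISSING $p"
    fi
  done
}
check_prereq_rpms
```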

2.6 Disabling NTP

Check that the two nodes agree on time and time zone, then disable NTP.
Note: when configuring RAC, time can be synchronized with either NTP or CTSS. If NTP is chosen, CTSS runs in observer mode; if CTSS is chosen, NTP must be disabled, CTSS switches to active mode automatically, and time is synchronized between the cluster nodes without contacting any external server.

[root@oracle11grac01 ~]# systemctl disable ntpd
[root@oracle11grac01 ~]# systemctl stop ntpd
[root@oracle11grac01 ~]# mv /etc/ntp.conf /etc/ntp.conf.orig 
[root@oracle11grac01 ~]# systemctl status ntpd
● ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
[root@oracle11grac01 ~]# timedatectl list-timezones | grep Shanghai
Asia/Shanghai
[root@oracle11grac01 ~]# timedatectl set-timezone Asia/Shanghai
[root@oracle11grac01 ~]# 
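Once the grid media has been unpacked, Oracle's cluster verification utility can confirm that clock synchronization is in a state the installer will accept. A hedged sketch; the media path /u01/soft/grid is an assumption:

```shell
# Verify cluster time synchronization across both nodes with cluvfy.
GRID_MEDIA=/u01/soft/grid   # assumed unzip location of the grid media
if [ -x "$GRID_MEDIA/runcluvfy.sh" ]; then
  "$GRID_MEDIA/runcluvfy.sh" comp clocksync -n oracle11grac01,oracle11grac02 -verbose
else
  echo "grid media not found at $GRID_MEDIA; adjust GRID_MEDIA"
fi
```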

2.7 Configuring SSH Host Equivalence

su grid
cd /home/grid/
mkdir ~/.ssh
chmod 700 ~/.ssh
ssh-keygen -t rsa  
ssh-keygen -t dsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys 
ssh oracle11grac02 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh oracle11grac02 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys oracle11grac02:~/.ssh/authorized_keys
ssh oracle11grac02 date;ssh oracle11grac01 date;ssh oracle11grac01-priv date;ssh oracle11grac02-priv date


su oracle
cd /home/oracle/
mkdir ~/.ssh
chmod 700 ~/.ssh
ssh-keygen -t rsa  
ssh-keygen -t dsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys 
ssh oracle11grac02 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
ssh oracle11grac02 cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
scp ~/.ssh/authorized_keys oracle11grac02:~/.ssh/authorized_keys
ssh oracle11grac02 date;ssh oracle11grac01 date;ssh oracle11grac01-priv date;ssh oracle11grac02-priv date

3. Configuring the CentOS System

3.1 Creating the Required Directories

mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/grid
mkdir -p /u01/app/oracle
mkdir -p /u01/app/oraInventory
mkdir -p /u01/app/oracle/product/11.2.0/db_1
chown -R grid:oinstall /u01/app/grid
chown -R grid:oinstall /u01/app/11.2.0/grid
chown -R grid:oinstall /u01/app/oraInventory
chown -R oracle:oinstall /u01/app/oracle
chmod -R 775 /u01/

Note: a single-instance Oracle 11G that uses ASM also requires grid to be installed, and there the grid home must be placed under ORACLE_BASE; with Oracle 11G RAC that is not allowed, and its grid home must be placed elsewhere, as in the /u01/app/11.2.0/grid layout with ORACLE_BASE at /u01/app/grid used here.

3.2 Configuring Linux Kernel Parameters

Append the following settings to the file:

# /etc/sysctl.conf
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 18446744073692774399
kernel.shmmax = 18446744073692774399
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576

[root@oracle11grac01 ~]# cat /etc/sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
fs.aio-max-nr = 1048576
fs.file-max = 6815744
kernel.shmall = 18446744073692774399
kernel.shmmax = 18446744073692774399
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
net.ipv4.ip_local_port_range = 9000 65500
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
[root@oracle11grac01 ~]# 

# Run the following for the settings to take effect
/sbin/sysctl -p
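To confirm the kernel actually picked the values up, read a few of them back directly from /proc/sys (a small sketch):

```shell
# Read back selected kernel parameters directly from /proc/sys.
show_kernel_params() {
  for k in fs.aio-max-nr fs.file-max kernel.shmmni kernel.sem; do
    printf '%s = %s\n' "$k" "$(cat /proc/sys/$(echo "$k" | tr . /))"
  done
}
show_kernel_params
```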


Also disable the Avahi daemon; its Zeroconf networking can interfere with the cluster interconnect:

[root@localhost ~]# systemctl disable avahi-daemon.socket
Removed symlink /etc/systemd/system/sockets.target.wants/avahi-daemon.socket.
[root@localhost ~]# systemctl disable avahi-daemon.service
Removed symlink /etc/systemd/system/multi-user.target.wants/avahi-daemon.service.
Removed symlink /etc/systemd/system/dbus-org.freedesktop.Avahi.service.
[root@localhost ~]#

[root@localhost ~]# ps -ef | grep avahi-daemon
avahi     5256     1  0 18:12 ?        00:00:00 avahi-daemon: running [linux.local]
avahi     5311  5256  0 18:12 ?        00:00:00 avahi-daemon: chroot helper
root      5935  4335  0 21:24 pts/2    00:00:00 grep --color=auto avahi-daemon
[root@localhost ~]# kill -9 5256 5311
[root@localhost ~]# ps -ef | grep avahi-daemon
root      5946  4335  0 21:24 pts/2    00:00:00 grep --color=auto avahi-daemon
[root@localhost ~]# 

# Also disable Zeroconf route assignment in the network config
vi /etc/sysconfig/network
# Created by anaconda
NOZEROCONF=yes

3.3 Setting Shell Limits for the Oracle Users

# /etc/security/limits.conf
..............................................................
#@student        -       maxlogins       4

#ORACLE SETTING
grid            soft    nproc           16384
grid            hard    nproc           16384
grid            soft    nofile          1024
grid            hard    nofile          65536           
grid            soft    stack           10240
grid            hard    stack           10240
oracle          soft    nproc           16384
oracle          hard    nproc           16384 
oracle          soft    nofile          1024
oracle          hard    nofile          65536
oracle          soft    stack           10240
oracle          hard    stack           32768           
oracle          soft    memlock         6291456
oracle          hard    memlock         6291456
# memlock should be slightly below physical RAM, i.e. configured generously. Unit: KB
# 4194304 means 4 GB, 8388608 means 8 GB
# End of file
..............................................................

..............................................................
# /etc/pam.d/login
session    required     pam_limits.so
..............................................................
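The memlock comment above says the value should sit slightly below physical RAM. One way to derive a candidate value (in KB, as limits.conf expects) is to take roughly 90% of MemTotal; the 90% factor here is an assumption, not an Oracle-mandated figure:

```shell
# Suggest a memlock value (KB) as roughly 90% of physical memory.
suggest_memlock() {
  awk '/^MemTotal:/ {printf "%d\n", $2 * 0.9}' /proc/meminfo
}
suggest_memlock
```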

3.4 Creating User Groups and Accounts

groupadd -g 11001 oinstall
groupadd -g 11002 dba
groupadd -g 11003 oper
groupadd -g 11004 asmdba
groupadd -g 11005 asmoper
groupadd -g 11006 asmadmin
useradd -u 11007 -g oinstall -G asmadmin,asmdba,asmoper grid
useradd -u 11008 -g oinstall -G dba,oper,asmdba,asmadmin oracle
passwd grid
passwd oracle
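After creating the accounts, verify the UIDs and group memberships on both nodes; the check below degrades gracefully if run before the accounts exist:

```shell
# Show uid/gid/groups for the grid and oracle users, if they exist.
check_accounts() {
  for u in grid oracle; do
    if id "$u" >/dev/null 2>&1; then
      id "$u"
    else
      echo "$u: not created yet"
    fi
  done
}
check_accounts
```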

3.5 Configuring Environment Variables

# /home/grid/.bash_profile 
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/.local/bin:$HOME/bin

export PATH
export ORACLE_SID=+ASM2;    # use +ASM1 on node 1, +ASM2 on node 2
export ORACLE_BASE=/u01/app/grid;
export ORACLE_HOME=/u01/app/11.2.0/grid;
export NLS_DATE_FORMAT="yyyy-mm-dd HH24:MI:SS";
export PATH=.:$PATH:$HOME/bin:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib:/lib:/usr/lib
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib


# /home/oracle/.bash_profile 
# .bash_profile

# Get the aliases and functions
if [ -f ~/.bashrc ]; then
        . ~/.bashrc
fi

# User specific environment and startup programs

PATH=$PATH:$HOME/.local/bin:$HOME/bin

export PATH
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.2.0/db_1
export ORACLE_SID=RACDB1    # use RACDB2 on node 2
export PATH=$ORACLE_HOME/bin:$PATH
export LD_LIBRARY_PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/local/bin
export CLASSPATH=$ORACLE_HOME/JRE:$ORACLE_HOME/jlib:$ORACLE_HOME/rdbms/jlib

Note: when editing the oracle user's profile, keep in mind that ORACLE_SID must match the node: RACDB1 on node 1 and RACDB2 on node 2 (and likewise +ASM1/+ASM2 for the grid user).

4. Adding Shared Storage

4.1 Adding the Shared Disks

VMware vSphere offers three disk provisioning types: thick provision lazy zeroed, thick provision eager zeroed (thick), and thin provision (thin). They differ as follows:

1. Thick provision lazy zeroed: the default format; all of the virtual disk's space is allocated at creation, but any data remaining on the physical device is not erased, i.e. there is no up-front zeroing. When an I/O arrives, it must wait for the zeroing to complete first. In short: space is allocated up front, and blocks are zeroed on demand at first write.

2. Thick provision eager zeroed (thick): creates a thick disk that supports clustering features; the space is allocated at creation and the data remaining on the physical device is zeroed, so creating this format can take longer than the others. In short: space is allocated and zeroed up front, and later I/O proceeds immediately with no extra waiting.

3. Thin provision (thin): regardless of the provisioned size, the disk only occupies as much storage as is actually in use. When the guest issues an I/O, the VMkernel first allocates the needed space and zeroes it; only after both steps complete can the operation proceed. So while thin provisioning saves storage space, I/O-heavy workloads will see some performance loss.

Log in to the host over SSH and use the commands below to create three shared disks in a folder on the target datastore: one 5 GB disk, which will later back the CRS disk group dedicated to the OCR and voting disk; one 500 GB disk for the DATA disk group holding the database; and one 50 GB disk for the FRA disk group used as the recovery area. Detailed steps for attaching the shared disks on oracle11grac01: first shut down node oracle11grac01, then select it, right-click and choose Edit Settings to enter the configuration screen, and click Add to add a disk:

# Log in to the vSphere host over SSH and create the 3 shared disks with the commands below
vmkfstools -c 5120m -a lsilogic -d eagerzeroedthick /vmfs/volumes/55db0a12-56604c4c-f3e1-7ca23e8d333c/ASMDisk/ASMDisk_OCR_5G.vmdk
vmkfstools -c 51200m -a lsilogic -d eagerzeroedthick /vmfs/volumes/55db0a12-56604c4c-f3e1-7ca23e8d333c/ASMDisk/ASMDisk_FRA_50G.vmdk
vmkfstools -c 512000m -a lsilogic -d eagerzeroedthick /vmfs/volumes/55db0a12-56604c4c-f3e1-7ca23e8d333c/ASMDisk/ASMDisk_DATA_500G.vmdk

# The full creation session looks like this
~ # find / -name ASMDisk
/vmfs/volumes/55db0a12-56604c4c-f3e1-7ca23e8d333c/ASMDisk
~ # cd /vmfs/volumes/55db0a12-56604c4c-f3e1-7ca23e8d333c/ASMDisk
/vmfs/volumes/55db0a12-56604c4c-f3e1-7ca23e8d333c/ASMDisk # vmkfstools -c 5120m -a lsilogic -d eagerzeroedthick /vmfs/volumes/55db0a12-56604c4c-f3e1-7ca23e8d333c/ASMDisk/ASMDisk_OCR_5G.vmdk
Creating disk '/vmfs/volumes/55db0a12-56604c4c-f3e1-7ca23e8d333c/ASMDisk/ASMDisk_OCR_5G.vmdk' and zeroing it out...
Create: 100% done.
/vmfs/volumes/55db0a12-56604c4c-f3e1-7ca23e8d333c/ASMDisk # vmkfstools -c 51200m -a lsilogic -d eagerzeroedthick /vmfs/volumes/55db0a12-56604c4c-f3e1-7ca23e8d333c/ASMDisk/ASMDisk_FRA_50G.vmdk
Creating disk '/vmfs/volumes/55db0a12-56604c4c-f3e1-7ca23e8d333c/ASMDisk/ASMDisk_FRA_50G.vmdk' and zeroing it out...
Create: 100% done.
/vmfs/volumes/55db0a12-56604c4c-f3e1-7ca23e8d333c/ASMDisk # vmkfstools -c 512000m -a lsilogic -d eagerzeroedthick /vmfs/volumes/55db0a12-56604c4c-f3e1-7ca23e8d333c/ASMDisk/ASMDisk_DATA_500G.vmdk
Creating disk '/vmfs/volumes/55db0a12-56604c4c-f3e1-7ca23e8d333c/ASMDisk/ASMDisk_DATA_500G.vmdk' and zeroing it out...
Create: 100% done.
/vmfs/volumes/55db0a12-56604c4c-f3e1-7ca23e8d333c/ASMDisk #

Note: with the shared disks created, we can attach and use them on the oracle11grac01 and oracle11grac02 nodes.


Note: one thing to watch here; be sure to select SCSI (1:0) as the virtual device node.


In oracle11grac01's virtual machine settings, select the newly added SCSI controller 1 and set it to physical (or virtual) SCSI bus sharing to support sharing, because this new disk will be accessed by both oracle11grac01 and oracle11grac02. Repeat the steps above to add the other two disks, choosing SCSI 1:1 and SCSI 1:2 as their device nodes. After that, all three shared disks appear in oracle11grac01's configuration.


4.2 Configuring the ASM Shared Disks

With the virtual disks added, we partition them; partition /dev/sdb, /dev/sdc, and /dev/sdd in turn using the scheme below.

[root@oracle11grac01 ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0xbc71b181.

# Create a new partition
Command (m for help): n
Partition type:
   p   primary (0 primary, 0 extended, 4 free)
   e   extended

# Choose primary as the partition type
Select (default p): p
Partition number (1-4, default 1): 
First sector (2048-10485759, default 2048): 
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-10485759, default 10485759): 
Using default value 10485759
Partition 1 of type Linux and of size 5 GiB is set

# Write the partition table and exit
Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
[root@oracle11grac01 ~]# 

# Review the partitions
[root@oracle11grac01 ~]# fdisk -l

Disk /dev/sdb: 5368 MB, 5368709120 bytes, 10485760 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdc: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sdd: 536.9 GB, 536870912000 bytes, 1048576000 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sda: 107.4 GB, 107374182400 bytes, 209715200 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x000d143d

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048     2099199     1048576   83  Linux
/dev/sda2         2099200   209715199   103808000   8e  Linux LVM

Disk /dev/mapper/centos-root: 53.7 GB, 53687091200 bytes, 104857600 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/centos-swap: 8455 MB, 8455716864 bytes, 16515072 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/mapper/centos-home: 44.1 GB, 44149243904 bytes, 86228992 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

[root@oracle11grac01 ~]# 
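The interactive fdisk dialogue above can also be scripted. A hedged alternative using parted; the device names are assumed to match this environment, and it should only run on one node since the disks are shared:

```shell
# Create one primary partition spanning each shared disk, non-interactively.
partition_whole_disk() {
  parted -s "$1" mklabel msdos mkpart primary 1MiB 100%
}
for d in /dev/sdb /dev/sdc /dev/sdd; do
  if [ -b "$d" ]; then
    partition_whole_disk "$d"
  fi
done
```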

This step must be performed on both nodes; the disks are managed and configured via udev. After the hosts boot, confirm udev is installed with rpm -qa | grep udev. If the package is present, run the commands below to obtain each disk's SCSI ID, and note the IDs down, since the disk configuration that follows depends on them (run on both oracle11grac01 and oracle11grac02):

[root@oracle11grac01 rules.d]# /usr/lib/udev/scsi_id -g -u /dev/sdb
36000c29c05c7be2193f3b7701aeae954
[root@oracle11grac01 rules.d]# /usr/lib/udev/scsi_id -g -u /dev/sdc
36000c29a85ed16a5a001d5e8c6c3dc68
[root@oracle11grac01 rules.d]# /usr/lib/udev/scsi_id -g -u /dev/sdd
36000c2960860983e6db13b278bdd1c87
[root@oracle11grac01 rules.d]# cat /etc/udev/rules.d/99-oracle-asmdevices.rules
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block",  PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",  RESULT=="36000c29c05c7be2193f3b7701aeae954", OWNER="grid", GROUP="asmadmin", RUN+="/bin/sh -c 'mknod /dev/asmdiskOCR b  $major $minor; chown grid:asmadmin /dev/asmdiskOCR; chmod 0664 /dev/asmdiskOCR'"
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block",  PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",  RESULT=="36000c29a85ed16a5a001d5e8c6c3dc68", OWNER="grid", GROUP="asmadmin", RUN+="/bin/sh -c 'mknod /dev/asmdiskFRA b  $major $minor; chown grid:asmadmin /dev/asmdiskFRA; chmod 0664 /dev/asmdiskFRA'"
KERNEL=="sd*[!0-9]", ENV{DEVTYPE}=="disk", SUBSYSTEM=="block",  PROGRAM=="/usr/lib/udev/scsi_id -g -u -d $devnode",  RESULT=="36000c2960860983e6db13b278bdd1c87", OWNER="grid", GROUP="asmadmin", RUN+="/bin/sh -c 'mknod /dev/asmdiskDATA b $major $minor; chown grid:asmadmin /dev/asmdiskDATA; chmod 0664 /dev/asmdiskDATA'"
[root@oracle11grac01 rules.d]# 

With the rules file in place, verify how the disks are bound:

# Trigger device change events
[root@oracle11grac01 rules.d]# /sbin/udevadm trigger --type=devices --action=change

# Reload the udev rules
[root@oracle11grac01 rules.d]# /sbin/udevadm control --reload    

# Test udev handling for one of the disks
[root@oracle11grac01 rules.d]# /sbin/udevadm test /sys/block/sdb

# Check whether the bindings were created
[root@oracle11grac01 rules.d]# ls -la /dev/asm*
brw-rw-r--. 1 grid asmadmin 8, 48 Jun 26 13:08 /dev/asmdiskDATA
brw-rw-r--. 1 grid asmadmin 8, 32 Jun 26 13:08 /dev/asmdiskFRA
brw-rw-r--. 1 grid asmadmin 8, 16 Jun 26 13:08 /dev/asmdiskOCR
[root@oracle11grac01 rules.d]# 

5. Installing Grid Infrastructure

5.1 Installing Grid Infrastructure

Note: this procedure only needs to be run on the oracle11grac01 node.

With the CentOS base environment complete, enter the graphical session, open a terminal, and run xhost + as root; then switch to the grid user and perform the following steps as grid.

[grid@oracle11grac01 grid]$ pwd
/u01/app/11.2.0/grid/grid
[grid@oracle11grac01 grid]$ ls
install  readme.html  response  rpm  runcluvfy.sh  runInstaller  sshsetup  stage  welcome.html
[grid@oracle11grac01 grid]$ ./runInstaller -jreLoc /etc/alternatives/jre_1.8.0

Note: when installing Oracle on CentOS 7 the installer windows often render far too small, sometimes as a bare vertical sliver; adding the -jreLoc /etc/alternatives/jre_1.8.0 argument to runInstaller avoids the problem.


After the cluster name is defined, only the first node appears in the list; click "Add" to add the second node.

image-20210623231153452

After adding it, click Next, then assign the network interfaces according to the network plan above.

image-20210623231238210

image-20210623231306070

Here change the disk group name to CRS and select "External" redundancy, then click "Change Discovery Path" at the bottom right and set the Disk Discovery Path to the /dev/asmdisk* pattern mentioned earlier.

image-20210623231731374

After confirming, the three ASM disks configured earlier are listed; select the third one, /dev/asmdiskOCR.

image-20210623231759350

When asked for the account passwords, choose the second option to use the same password for all accounts. If warned that the password does not meet the complexity standard, click OK to use it anyway.

image-20210623231835926

image-20210623231854585

The OS groups configured earlier are detected automatically; just change the paths to the ones configured earlier.

image-20210623232310299

image-20210623232518155

image-20210623232552144

If the prerequisite checks flag NTP (time synchronization) and resolv.conf (DNS), those two issues can be ignored; click "Ignore All" at the top right. If the check reports that cvuqdisk is not installed, download the RPM (it also ships in the rpm directory of the Grid installation media) and install it manually on both nodes.

[root@oracle11grac01 ~]# rpm  -ivh cvuqdisk-1.0.10-1.rpm 
Preparing...                          ################################# [100%]
        package cvuqdisk-1.0.10-1.x86_64 is already installed
[root@oracle11grac01 ~]#

image-20210623232634217

One more note: the cvuqdisk-1.0.10-1.rpm package is required; if the installer does not detect it, the run will fail, so download and install it first. I ignored the pdksh-5.2.14 warning and the installation completed without errors.

image-20210623234151750

image-20210623234637621

image-20210623234822619

image-20210623235214774

When the installation completes, you are prompted to run two scripts as root on both nodes, in order. The order matters: first run sh /u01/app/grid/oraInventory/orainstRoot.sh on oracle11grac01, then run sh /u01/app/grid/oraInventory/orainstRoot.sh on oracle11grac02. Next run sh /u01/app/grid/11.2.0/root.sh on oracle11grac01, then sh /u01/app/grid/11.2.0/root.sh on oracle11grac02. The detailed output follows:

[root@oracle11grac01 oraInventory]# sh /u01/app/grid/oraInventory/orainstRoot.sh 
Changing permissions of /u01/app/grid/oraInventory.
Adding read,write permissions for group.
Removing read,write,execute permissions for world.

Changing groupname of /u01/app/grid/oraInventory to oinstall.
The execution of the script is complete.

[root@oracle11grac01 oraInventory]# sh /u01/app/grid/11.2.0/root.sh              
Performing root user operation for Oracle 11g 

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/grid/11.2.0

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/grid/11.2.0/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer

CRS-2672: Attempting to start 'ora.mdnsd' on 'oracle11grac01'
CRS-2676: Start of 'ora.mdnsd' on 'oracle11grac01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'oracle11grac01'
CRS-2676: Start of 'ora.gpnpd' on 'oracle11grac01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'oracle11grac01'
CRS-2672: Attempting to start 'ora.gipcd' on 'oracle11grac01'
CRS-2676: Start of 'ora.cssdmonitor' on 'oracle11grac01' succeeded
CRS-2676: Start of 'ora.gipcd' on 'oracle11grac01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'oracle11grac01'
CRS-2672: Attempting to start 'ora.diskmon' on 'oracle11grac01'
CRS-2676: Start of 'ora.diskmon' on 'oracle11grac01' succeeded
CRS-2676: Start of 'ora.cssd' on 'oracle11grac01' succeeded

ASM created and started successfully.

Disk Group CRS created successfully.

clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4256: Updating the profile
Successful addition of voting disk 2cecd81df5c04f29bfeca3e7c5017913.
Successfully replaced voting disk group with +CRS.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   2cecd81df5c04f29bfeca3e7c5017913 (/dev/asmdiskOCR) [CRS]
Located 1 voting disk(s).

CRS-2672: Attempting to start 'ora.asm' on 'oracle11grac01'
CRS-2676: Start of 'ora.asm' on 'oracle11grac01' succeeded
CRS-2672: Attempting to start 'ora.CRS.dg' on 'oracle11grac01'
CRS-2676: Start of 'ora.CRS.dg' on 'oracle11grac01' succeeded
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@oracle11grac01 oraInventory]#


[root@oracle11grac02 ~]# sh /u01/app/grid/11.2.0/root.sh
Performing root user operation for Oracle 11g 

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/grid/11.2.0

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/grid/11.2.0/crs/install/crsconfig_params
User ignored Prerequisites during installation
Installing Trace File Analyzer

CRS-4402: The CSS daemon was started in exclusive mode but found an active CSS daemon on node oracle11grac01, number 1, and is terminating
An active cluster was found during exclusive startup, restarting to join the cluster

Configure Oracle Grid Infrastructure for a Cluster ... succeeded
[root@oracle11grac02 ~]#

[root@oracle11grac02 ~]# sh /u01/app/grid/11.2.0/root.sh
Performing root user operation for Oracle 11g 

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/grid/11.2.0

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
   Copying dbhome to /usr/local/bin ...
   Copying oraenv to /usr/local/bin ...
   Copying coraenv to /usr/local/bin ...


Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/grid/11.2.0/crs/install/crsconfig_params
Creating trace directory
User ignored Prerequisites during installation
Installing Trace File Analyzer
OLR initialization - successful
Adding Clusterware entries to inittab

ohasd failed to start
Failed to start the Clusterware. Last 20 lines of the alert log follow: 
2021-06-24 00:30:30.156: 
[client(25565)]CRS-2101:The OLR was formatted using version 3.

^C^C^C^C^CINT at /u01/app/grid/11.2.0/crs/install/crsconfig_lib.pm line 1446.
Failed to write the checkpoint:'ROOTCRS_STACK' with status:FAIL.Error code is 256
/u01/app/grid/11.2.0/perl/bin/perl -I/u01/app/grid/11.2.0/perl/lib -I/u01/app/grid/11.2.0/crs/install /u01/app/grid/11.2.0/crs/install/rootcrs.pl execution failed
Oracle root script execution aborted!
[root@oracle11grac02 ~]# ^C

While running root.sh you may hit the error "Failed to start the Clusterware. Last 20 lines of the alert log follow". This happens because CentOS 7 uses systemd rather than initd to run and restart processes, while root.sh starts the ohasd process the traditional initd way. A small workaround is needed: as root, create an ohas.service unit file:

touch /usr/lib/systemd/system/ohas.service
chmod 777 /usr/lib/systemd/system/ohas.service

# vi /usr/lib/systemd/system/ohas.service
[Unit]
Description=Oracle High Availability Services
After=syslog.target

[Service]
ExecStart=/etc/init.d/init.ohasd run >/dev/null 2>&1
Type=simple
Restart=always

[Install]
WantedBy=multi-user.target

# Enable and start the service
systemctl daemon-reload
systemctl enable ohas.service
systemctl start ohas.service
systemctl status ohas.service

# Then rerun the root.sh script
sh /u01/app/grid/11.2.0/root.sh

Note: be sure to run sh /u01/app/grid/11.2.0/root.sh first and start ohas.service only after it fails; otherwise ohas.service cannot start, because the /etc/init.d/init.ohasd script it wraps is only put in place by root.sh.

After the scripts complete, follow the remaining prompts to finish the installation.
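The cross-node ordering described above can be summarized in a small sketch. run_root_scripts is a hypothetical helper that only prints the commands in the required sequence (the ssh form is illustrative; in practice each script is run in a root shell on the node itself):

```shell
#!/bin/sh
NODE1=oracle11grac01
NODE2=oracle11grac02

# Print the four root-script steps strictly in the required order:
# orainstRoot.sh on node 1, then node 2; root.sh on node 1, then node 2.
run_root_scripts() {
  for step in \
      "$NODE1:/u01/app/grid/oraInventory/orainstRoot.sh" \
      "$NODE2:/u01/app/grid/oraInventory/orainstRoot.sh" \
      "$NODE1:/u01/app/grid/11.2.0/root.sh" \
      "$NODE2:/u01/app/grid/11.2.0/root.sh"; do
    echo "ssh root@${step%%:*} sh ${step#*:}"
  done
}

order=$(run_root_scripts)
echo "$order"
```

Running root.sh on the second node before the first completes is the most common cause of a failed Grid configuration.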

image-20210624003927891

image-20210624004303446

image-20210624004327083

Finally, click Close to finish installing the GRID software on both nodes; the GRID clusterware is now installed.

After the installation, run the following commands as the grid user to check the CRS status:

# Check CRS status
[grid@oracle11grac01 bin]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

# Check Clusterware resources
[grid@oracle11grac01 bin]$ crs_stat -t -v
Name           Type           R/RA   F/FT   Target    State     Host        
----------------------------------------------------------------------
ora.CRS.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    orac...ac01 
ora.DATA.dg    ora....up.type 0/5    0/     ONLINE    ONLINE    orac...ac01 
ora.FRA.dg     ora....up.type 0/5    0/     ONLINE    ONLINE    orac...ac01 
ora....ER.lsnr ora....er.type 0/5    0/     ONLINE    ONLINE    orac...ac01 
ora....N1.lsnr ora....er.type 0/5    0/0    ONLINE    ONLINE    orac...ac01 
ora.asm        ora.asm.type   0/5    0/     ONLINE    ONLINE    orac...ac01 
ora.cvu        ora.cvu.type   0/5    0/0    ONLINE    ONLINE    orac...ac01 
ora.gsd        ora.gsd.type   0/5    0/     OFFLINE   OFFLINE               
ora....network ora....rk.type 0/5    0/     ONLINE    ONLINE    orac...ac01 
ora.oc4j       ora.oc4j.type  0/1    0/2    ONLINE    ONLINE    orac...ac01 
ora.ons        ora.ons.type   0/3    0/     ONLINE    ONLINE    orac...ac01 
ora....SM1.asm application    0/5    0/0    ONLINE    ONLINE    orac...ac01 
ora....01.lsnr application    0/5    0/0    ONLINE    ONLINE    orac...ac01 
ora....c01.gsd application    0/5    0/0    OFFLINE   OFFLINE               
ora....c01.ons application    0/3    0/0    ONLINE    ONLINE    orac...ac01 
ora....c01.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    orac...ac01 
ora....SM2.asm application    0/5    0/0    ONLINE    ONLINE    orac...ac02 
ora....02.lsnr application    0/5    0/0    ONLINE    ONLINE    orac...ac02 
ora....c02.gsd application    0/5    0/0    OFFLINE   OFFLINE               
ora....c02.ons application    0/3    0/0    ONLINE    ONLINE    orac...ac02 
ora....c02.vip ora....t1.type 0/0    0/0    ONLINE    ONLINE    orac...ac02 
ora.racdb.db   ora....se.type 0/2    0/1    ONLINE    ONLINE    orac...ac01 
ora.scan1.vip  ora....ip.type 0/0    0/0    ONLINE    ONLINE    orac...ac01 

# Check cluster nodes
[grid@oracle11grac01 bin]$ olsnodes -n
oracle11grac01  1
oracle11grac02  2

# Check the Oracle TNS listener processes on both nodes
[grid@oracle11grac01 bin]$ ps -ef |grep lsnr |grep -v 'grep'|grep -v 'ocfs'|awk '{print$9}' 
LISTENER
LISTENER_SCAN1

# Confirm Oracle ASM is running
[grid@oracle11grac01 bin]$ srvctl status asm -a
ASM is running on oracle11grac01,oracle11grac02
ASM is enabled.
[grid@oracle11grac01 bin]$ 
# Note: to confirm Oracle ASM for the Oracle Clusterware files: if OCR and the voting disk files are stored on Oracle ASM, run the command above as the Grid Infrastructure installation owner to confirm that the installed Oracle ASM instance is running.
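The `srvctl status asm -a` output can also be checked mechanically. A sketch (asm_ok is a hypothetical helper; the sample text is the output shown above):

```shell
#!/bin/sh
# Return HEALTHY only if ASM is reported running on both nodes and enabled.
asm_ok() {
  printf '%s\n' "$1" | grep -q 'ASM is running on.*oracle11grac01' &&
  printf '%s\n' "$1" | grep -q 'oracle11grac02' &&
  printf '%s\n' "$1" | grep -q '^ASM is enabled\.' &&
  echo HEALTHY || echo CHECK
}

# Sample output from `srvctl status asm -a` above:
asm_status='ASM is running on oracle11grac01,oracle11grac02
ASM is enabled.'
state=$(asm_ok "$asm_status")
echo "$state"
```

A wrapper like this is handy in a cron job or monitoring probe once the cluster is in use.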

5.2. Configure the ASM Disk Groups

With Grid installed, create the ASM disk groups for the data and fast recovery areas. Do this on node oracle11grac01: log in to the graphical desktop, open a terminal, and run xhost + as root.

image-20210624010343696

As the screen above shows, the CRS disk group configured during the Grid installation already exists; now the FRA and DATA disk groups must be added. Click the Create button at the bottom left and create the FRA disk group first as shown below, clicking "OK" to finish; then add the DATA disk group the same way.

image-20210624010505015

image-20210624010526389

image-20210624010546926

image-20210624010604447

image-20210624010611614

All three disk groups are now mounted, as shown above; click Exit to leave the ASM configuration assistant.

6. Install the Oracle Database

6.1. Install the Oracle Database Software

We install the Oracle database software from node oracle11grac01. First unpack the Oracle installation archives, then in the graphical desktop open a terminal, run xhost + as root, and switch to the oracle user to perform the installation:

[oracle@oracle11grac01 ~]$ ls
Desktop  Documents  Downloads  install2021-06-24_14-45-00.log  Music  Oracle11G  Pictures  Public  Templates  Videos
[oracle@oracle11grac01 ~]$ cd Oracle11G/
[oracle@oracle11grac01 Oracle11G]$ ls
database  p13390677_112040_Linux-x86-64_1of7.zip  p13390677_112040_Linux-x86-64_2of7.zip
[oracle@oracle11grac01 Oracle11G]$ ./runInstaller -jreLoc /etc/alternatives/jre_1.8.0

The GUI comes up; skip the software updates, as follows:

image-20210624091310108

image-20210624091405632

Select "Install database software only"; on the next screen keep the default "Real Application Clusters database installation" option and make sure both Oracle RAC nodes are checked in the Node Name list.

image-20210624091434810

image-20210624091454808

Click the [SSH Connectivity] button, enter the oracle user's OS password, and click [Setup] to start the SSH connectivity configuration. Once SSH equivalence is verified successfully, click OK and continue:

image-20210624091528222

image-20210624093003757

image-20210624091910735

Select the first option to install the Enterprise Edition, then Next:

image-20210624093059775

Choose the Oracle software installation paths; ORACLE_BASE and ORACLE_HOME should both be the locations configured earlier. Next:

image-20210624091933179

image-20210624093231209

image-20210624091957510

Select the oracle OS groups and run the pre-installation checks, then Next:

image-20210624092042150

image-20210624092140623

If the checks flag NTP (time synchronization) and resolv.conf (DNS), those two issues can be ignored; click "Ignore All" at the top right:

image-20210624093402714

image-20210624092224200

image-20210624092310257

During installation the following error appeared: "Error in invoking target 'agent nmhs' of makefile…", i.e. the agent nmhs target failed while being built. The fix is to link against the libnnz11 library: add the -lnnz11 flag to the MK_EMAGENT_NMECTL invocation in /u01/app/oracle/product/11.2.0/db_1/sysman/lib/ins_emagent.mk, as shown below:

[oracle@oracle11grac01 ~]$ cat /u01/app/oracle/product/11.2.0/db_1/sysman/lib/ins_emagent.mk | grep -C 5 MK_EMAGENT_NMECTL 
#===========================
#  emdctl
#===========================

$(SYSMANBIN)emdctl:
        $(MK_EMAGENT_NMECTL) -lnnz11

#===========================
#  nmocat
#===========================

[oracle@oracle11grac01 ~]$ 

After saving the change, return to the installer and click Retry; the installation then continues.
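Instead of editing ins_emagent.mk by hand, the same -lnnz11 change can be applied with sed. The sketch below demonstrates the substitution on a throwaway copy of the makefile fragment rather than the real file (leading spaces stand in for the makefile's tab; always back up the real file first):

```shell
#!/bin/sh
# The relevant fragment of ins_emagent.mk:
fragment='$(SYSMANBIN)emdctl:
        $(MK_EMAGENT_NMECTL)'

# Append -lnnz11 to the MK_EMAGENT_NMECTL invocation; & is the whole match.
patched=$(printf '%s\n' "$fragment" | sed 's/\$(MK_EMAGENT_NMECTL)$/& -lnnz11/')
echo "$patched"

# On a real install the target is the file mentioned above; back it up first:
# sed -i.bak 's/\$(MK_EMAGENT_NMECTL)$/& -lnnz11/' \
#   /u01/app/oracle/product/11.2.0/db_1/sysman/lib/ins_emagent.mk
```

The `$` anchor ensures only the bare invocation line is changed, so rerunning the command does not append the flag twice.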

image-20210624094100143

image-20210624094843905

As prompted, run the root.sh script as root on each of the two nodes, then Next:

[root@oracle11grac01 ~]# sh /u01/app/oracle/product/11.2.0/db_1/root.sh
Performing root user operation for Oracle 11g 

The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME=  /u01/app/oracle/product/11.2.0/db_1

Enter the full pathname of the local bin directory: [/usr/local/bin]: 
The contents of "dbhome" have not changed. No need to overwrite.
The contents of "oraenv" have not changed. No need to overwrite.
The contents of "coraenv" have not changed. No need to overwrite.

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Finished product-specific root actions.
[root@oracle11grac01 ~]# 

Finally, click Close to finish installing the Oracle software on both nodes.

image-20210624095020278

The Oracle database software is now installed on both RAC nodes.

6.2. Create the Cluster Database

Next, use DBCA to create the RAC database. Log in to the graphical desktop as the oracle user and run dbca; in the DBCA GUI, choose the first option to create a RAC database:

(DBCA wizard screenshots)

At about 2% progress the installation fails with the error shown below, because incorrect ASM device permissions prevent the database files from being written. Three points must all be in place: 1) the /dev/asm* devices must be owned by the grid user and asmadmin group; 2) the /dev/asm* devices must have mode 664; 3) the oracle user must be a member of the asmdba and asmadmin groups. Fix them as follows:

# Run on both oracle11grac01 and oracle11grac02
[root@oracle11grac01 ~]# chmod 664 /dev/asm*
[root@oracle11grac01 ~]# chown -R grid:asmadmin /dev/asm*
[root@oracle11grac01 ~]# ls -la /dev/asm*
brw-rw-r--. 1 grid asmadmin 8, 48 Jun 26 11:03 /dev/asmdiskDATA
brw-rw-r--. 1 grid asmadmin 8, 32 Jun 26 11:03 /dev/asmdiskFRA
brw-rw-r--. 1 grid asmadmin 8, 16 Jun 26 11:03 /dev/asmdiskOCR
[root@oracle11grac01 ~]# 

image-20210624110812558

image-20210624100601797

image-20210625233257558

The cluster database has now been created.

7. Verify Oracle RAC

7.1. Check Service Status

Note: the OFFLINE GSD resources below can be ignored; they do not affect normal database use. GSD is an interaction tool supporting dbca, srvctl, OEM and the like, retained only for backward compatibility with 9i.

[root@oracle11grac01 ~]# su - grid
[grid@oracle11grac01 ~]$ cd /u01/app/grid/11.2.0/bin/
[grid@oracle11grac01 bin]$ crs_stat -t
Name           Type           Target    State     Host        
------------------------------------------------------------
ora.CRS.dg     ora....up.type ONLINE    ONLINE    orac...ac01 
ora.DATA.dg    ora....up.type ONLINE    ONLINE    orac...ac01 
ora.FRA.dg     ora....up.type ONLINE    ONLINE    orac...ac01 
ora....ER.lsnr ora....er.type ONLINE    ONLINE    orac...ac01 
ora....N1.lsnr ora....er.type ONLINE    ONLINE    orac...ac01 
ora.asm        ora.asm.type   ONLINE    ONLINE    orac...ac01 
ora.cvu        ora.cvu.type   ONLINE    ONLINE    orac...ac01 
ora.gsd        ora.gsd.type   OFFLINE   OFFLINE               
ora....network ora....rk.type ONLINE    ONLINE    orac...ac01 
ora.oc4j       ora.oc4j.type  ONLINE    ONLINE    orac...ac01 
ora.ons        ora.ons.type   ONLINE    ONLINE    orac...ac01 
ora....SM1.asm application    ONLINE    ONLINE    orac...ac01 
ora....01.lsnr application    ONLINE    ONLINE    orac...ac01 
ora....c01.gsd application    OFFLINE   OFFLINE               
ora....c01.ons application    ONLINE    ONLINE    orac...ac01 
ora....c01.vip ora....t1.type ONLINE    ONLINE    orac...ac01 
ora....SM2.asm application    ONLINE    ONLINE    orac...ac02 
ora....02.lsnr application    ONLINE    ONLINE    orac...ac02 
ora....c02.gsd application    OFFLINE   OFFLINE               
ora....c02.ons application    ONLINE    ONLINE    orac...ac02 
ora....c02.vip ora....t1.type ONLINE    ONLINE    orac...ac02 
ora.racdb.db   ora....se.type ONLINE    ONLINE    orac...ac01 
ora.scan1.vip  ora....ip.type ONLINE    ONLINE    orac...ac01 
[grid@oracle11grac01 bin]$ 

7.2. Check the Cluster Database Status

[grid@oracle11grac01 bin]$ srvctl status database -d racdb
Instance racdb1 is running on node oracle11grac01
Instance racdb2 is running on node oracle11grac02
[grid@oracle11grac01 bin]$

7.3. Check CRS Status

# Check CRS status on the local node
[grid@oracle11grac01 bin]$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[grid@oracle11grac01 bin]$ 

# Check CRS status cluster-wide
[grid@oracle11grac01 bin]$ crsctl check cluster
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
[grid@oracle11grac01 bin]$ 

# Check the node configuration of the cluster
[grid@oracle11grac01 bin]$ olsnodes
oracle11grac01
oracle11grac02
[grid@oracle11grac01 bin]$ olsnodes -n
oracle11grac01  1
oracle11grac02  2
[grid@oracle11grac01 bin]$ olsnodes -n -i
oracle11grac01  1       oracle11grac01-vip
oracle11grac02  2       oracle11grac02-vip
[grid@oracle11grac01 bin]$ olsnodes -n -i -s
oracle11grac01  1       oracle11grac01-vip      Active
oracle11grac02  2       oracle11grac02-vip      Active
[grid@oracle11grac01 bin]$ 

# Check the cluster voting disks
[grid@oracle11grac01 bin]$ crsctl query css votedisk
##  STATE    File Universal Id                File Name Disk group
--  -----    -----------------                --------- ---------
 1. ONLINE   2cecd81df5c04f29bfeca3e7c5017913 (/dev/asmdiskOCR) [CRS]
Located 1 voting disk(s).
[grid@oracle11grac01 bin]$ 

# Check the cluster SCAN VIP configuration
[grid@oracle11grac01 bin]$ srvctl config scan
SCAN name: oracle11grac-scan, Network: 1/172.16.200.0/255.255.255.0/ens160
SCAN VIP name: scan1, IP: /oracle11grac-scan/172.16.200.25

[grid@oracle11grac01 bin]$ srvctl config scan_listener
SCAN Listener LISTENER_SCAN1 exists. Port: TCP:1521
[grid@oracle11grac01 bin]$
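With the SCAN listener up, client connections can go through a single tnsnames.ora entry. A minimal sketch using the SCAN name and port shown above, and assuming the database's default service name racdb (adjust SERVICE_NAME if a different service was created):

```text
RACDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = oracle11grac-scan)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = racdb)
    )
  )
```

In 11gR2 the SCAN resolves to the SCAN VIP, so clients no longer need to list each node VIP in the address list.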

7.4. Start and Stop the Cluster Database

# Check cluster database status
[grid@oracle11grac01 bin]$ srvctl status database -d racdb
Instance racdb1 is running on node oracle11grac01
Instance racdb2 is running on node oracle11grac02
[grid@oracle11grac01 bin]$ 

# Stop the cluster database
[grid@oracle11grac01 bin]$ srvctl stop database -d racdb

# Start the cluster database
[grid@oracle11grac01 bin]$ srvctl start database -d racdb

# Stop Clusterware on the local node (run on each node as root; 'crsctl stop cluster -all' stops it on all nodes)
[grid@oracle11grac01 bin]$ ./crsctl stop crs

# Stop the node applications (VIP, listener, ONS, GSD) on a single node
[grid@oracle11grac01 bin]$ srvctl stop nodeapps -n oracle11grac02
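One common approach to an orderly full-cluster shutdown, sketched from the commands above (shutdown_plan is a hypothetical helper that only prints the commands; crsctl stop crs must be run as root on each node, and GRID_HOME is the path used in this install):

```shell
#!/bin/sh
GRID_HOME=/u01/app/grid/11.2.0

# Stop the database across the whole cluster first, then stop Clusterware
# on each node. This helper only prints the commands in order.
shutdown_plan() {
  echo "srvctl stop database -d racdb"
  for n in oracle11grac01 oracle11grac02; do
    echo "ssh root@$n $GRID_HOME/bin/crsctl stop crs"
  done
}

plan=$(shutdown_plan)
echo "$plan"
```

Reversing the order for startup (crsctl start crs on each node, then srvctl start database) brings the stack back up cleanly.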

8. Routine Oracle RAC Maintenance

9. Oracle RAC Performance Tuning

10. Managing Oracle RAC with EM
