Notes on Installing the Oracle 10g Database on CentOS 5.4
1: Check memory: grep MemTotal /proc/meminfo
Check swap: grep SwapTotal /proc/meminfo
2: Check the required RPM packages.
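The two checks above can be wrapped in a small helper that compares the values against the commonly cited 10g minimums (1 GB RAM, swap of roughly 1.5x RAM). The thresholds and the function name `check_mem` are assumptions for illustration; check the install guide for your exact release.

```shell
# check_mem: compare MemTotal/SwapTotal (both in kB, as /proc/meminfo
# reports them) against assumed minimums: 1 GB RAM, swap >= 1.5x RAM.
check_mem() {
  mem_kb=$1
  swap_kb=$2
  if [ "$mem_kb" -lt 1048576 ]; then
    echo "FAIL: RAM below 1 GB"
  elif [ "$swap_kb" -lt $(( mem_kb * 3 / 2 )) ]; then
    echo "FAIL: swap below 1.5x RAM"
  else
    echo "OK"
  fi
}

# Usage on the install host:
# check_mem "$(awk '/MemTotal/  {print $2}' /proc/meminfo)" \
#           "$(awk '/SwapTotal/ {print $2}' /proc/meminfo)"
```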
Red Hat Enterprise Linux 4.0          CentOS 5.4
binutils-2.15.92.0.2-13.EL4           binutils-2.17.50.0.6-6.el5
compat-db-4.1.25-9                    compat-db-4.2.52-5.1
compat-libstdc++-296-2.96-132.7.2     compat-libstdc++-33-3.2.3-61
control-center-2.8.0-12               control-center-2.16.0-16.el5
gcc-3.4.3-22.1.EL4                    gcc-4.1.2-42.el5
gcc-c++-3.4.3-22.1.EL4                gcc-c++-4.1.2-42.el5
glibc-2.3.4-2.9                       glibc-2.5-24
glibc-common-2.3.4-2.9                glibc-common-2.5-24
gnome-libs-1.4.1.2.90-44.1            libgnome-2.16.0-6.el5
libstdc++-3.4.3-22.1                  libstdc++-4.1.2-42.el5
libstdc++-devel-3.4.3-22.1            libstdc++-devel-4.1.2-42.el5
make-3.80-5                           make-3.81-3.el5
pdksh-5.2.14-30                       ksh-20060214-1.7
sysstat-5.0.5-1                       sysstat-7.0.2-1.el5
xscreensaver-4.18-5.rhel4.2           gnome-screensaver-2.16.1-8.el5
setarch-1.6-1                         setarch-2.0-1.1
(no RHEL4 counterpart listed)         libXp-1.0.0-8.1.el5
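A quick way to check the CentOS 5.4 column in one pass is to grep the output of `rpm -qa` for each required name. `missing_pkgs` below is a hypothetical helper; the package names are taken from the table above and the version suffixes are ignored.

```shell
# missing_pkgs: print each required package name that does not appear
# in the given `rpm -qa` output. Names come from the table above.
required="binutils compat-db compat-libstdc++-33 control-center gcc gcc-c++ \
glibc glibc-common libgnome libstdc++ libstdc++-devel make ksh sysstat \
gnome-screensaver setarch libXp"

missing_pkgs() {                       # $1 = output of `rpm -qa`
  for p in $required; do
    echo "$1" | grep -q "^$p-[0-9]" || echo "$p"
  done
}

# Usage on the install host:
# missing_pkgs "$(rpm -qa)"
```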
The most important one is libXp-1.0.0-8.1.el5.
If it is missing, running the installer fails immediately:
[oracle@linux database]$ ./runInstaller
Starting Oracle Universal Installer...
Checking installer requirements...
Checking operating system version: must be RedHat-3, SUSE-9, redhat-4, UnitedLinux-1.0, asianux-1 or asianux-2
Passed
All installer requirements met.
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2010-11-18_10-08-45PM.
Please wait ...[oracle@linux database]$ Exception in thread "main" java.lang.UnsatisfiedLinkError:
/tmp/OraInstall2010-11-18_10-08-45PM/jre/1.4.2/lib/i386/libawt.so: libXp.so.6: cannot open shared object file: No such file or directory
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
at java.lang.ClassLoader.loadLibrary0(Unknown Source)
at java.lang.ClassLoader.loadLibrary(Unknown Source)
at java.lang.Runtime.loadLibrary0(Unknown Source)
at java.lang.System.loadLibrary(Unknown Source)
at sun.security.action.LoadLibraryAction.run(Unknown Source)
at java.security.AccessController.doPrivileged(Native Method)
at sun.awt.NativeLibLoader.loadLibraries(Unknown Source)
at sun.awt.DebugHelper.<clinit>(Unknown Source)
at java.awt.Component.<clinit>(Unknown Source)
This problem took a long time to track down; in the end it was simply that this one package was not installed.
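On CentOS 5 the fix is simple, since libXp is in the base repository. On x86_64, installing the 32-bit build as well is worth considering, because the installer's bundled JRE is 32-bit (the `.i386` suffix is the usual package-name convention on CentOS 5):

```shell
# Install the library the installer's JRE needs (run as root):
yum install libXp
# On x86_64, also pull the 32-bit build for the 32-bit JRE:
yum install libXp.i386
```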
Create the groups:
groupadd oinstall
groupadd dba
groupadd oper
Add the user:
useradd -g oinstall -G dba,oper oracle
Set the password: passwd oracle
This puts the oracle user in the oinstall primary group, with dba and oper as its two supplementary groups.
Create a u01 directory under the oracle user's home directory, then change its ownership:
chown -R oracle:oinstall /home/oracle/u01
Next, configure the Linux kernel parameters.
In the /etc/sysctl.conf file:
kernel.shmall = 2097152
kernel.shmmax = 2147483648
kernel.shmmni = 4096
kernel.sem = 250 32000 100 128
fs.file-max = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default = 262144
net.core.rmem_max = 262144
net.core.wmem_default = 262144
net.core.wmem_max = 262144
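The kernel.shmmax value above (2147483648 bytes = 2 GB) follows the common rule of thumb of half the physical RAM; `half_ram_bytes` is a hypothetical helper that derives such a value from the MemTotal figure checked at the start. After editing the file, `sysctl -p` applies the settings without a reboot.

```shell
# half_ram_bytes: derive a shmmax candidate (half of RAM, in bytes)
# from MemTotal given in kB. "Half of RAM" is a rule of thumb here,
# not an Oracle-mandated value.
half_ram_bytes() {
  echo $(( $1 / 2 * 1024 ))
}

# e.g. on a 4 GB machine (MemTotal = 4194304 kB):
# half_ram_bytes 4194304   -> 2147483648
# Apply the edited /etc/sysctl.conf without rebooting:
# sysctl -p
```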
/etc/security/limits.conf file:
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
/etc/pam.d/login
session required pam_limits.so
/etc/profile
if [ $USER = "oracle" ]; then
if [ $SHELL = "/bin/ksh" ]; then
ulimit -p 16384
ulimit -n 65536
else
ulimit -u 16384 -n 65536
fi
fi
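After re-logging in as oracle, the limits can be verified. With the limits.conf and profile settings above in place, the two commands should report 65536 and 16384 respectively; those values come from this configuration, not from system defaults.

```shell
# Show the effective per-process limits for the current shell:
ulimit -n    # max open file descriptors
ulimit -u    # max user processes
```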
Check the current shell: echo $SHELL
Set the oracle user's environment variables:
su oracle
vi .bash_profile
ORACLE_BASE=/home/oracle/u01/oracle
ORACLE_SID=orcl
ORACLE_HOME=$ORACLE_BASE/product/10.2.0/db_1
PATH=$ORACLE_HOME/bin:$PATH
export ORACLE_BASE ORACLE_SID ORACLE_HOME PATH
(If ORACLE_HOME or TNS_ADMIN were already set from an earlier install, clear them with "unset ORACLE_HOME" and "unset TNS_ADMIN" before running the installer.)
Check that the variables took effect (after re-logging in or sourcing .bash_profile): env | grep ORA
With all the configuration done, install from the graphical desktop:
Change into the database directory and run the installer:
./runInstaller
The rest is much the same as installing on Windows.
One difference: after the database software is installed, you are asked to run two scripts as root.
/home/oracle/u01/oraInventory/orainstRoot.sh
/home/oracle/u01/oracle/product/10.2.0/db_1/root.sh
Run both as root; when root.sh prompts for the local bin directory, just press Enter to accept the default.
