Completely Removing MySQL on CentOS (when installed via rpm/yum)
I'm running CentOS 6, and MySQL had given me all sorts of problems, so I wanted to reinstall a completely fresh copy. Running yum remove mysql-server mysql and then installing again did not give me a clean MySQL: the original /etc/my.cnf was unchanged and the data in the datadir was untouched. Manually deleting /etc/my.cnf, /usr/lib/mysql, /usr/share/mysql, and /var/lib/mysql did not help either. After a lot of searching, it became clear that the packages had simply not been removed completely.
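A quick way to see why such files survive is to ask rpm which installed package still owns them; if the query prints a package name, that package was never touched by the earlier yum remove. The output below is only illustrative:

rpm -qf /etc/my.cnf
# e.g. mysql-libs-5.5.25-1.el6.remi.i686  -> the libs package is still installed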
The commands are as follows:
[root@data lib]# rpm -qa | grep mysql
mysql-5.5.25-1.el6.remi.i686
mysql-libs-5.5.25-1.el6.remi.i686
compat-mysql51-5.1.54-1.el6.remi.i686
mysql-server-5.5.25-1.el6.remi.i686
[root@data lib]# yum remove mysql mysql-server mysql-libs compat-mysql51
Only after removing all of these packages is MySQL completely uninstalled; reinstalling then gives you a brand-new MySQL.
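If the reinstall should also start with an empty datadir and a default config, the leftover directories mentioned above need to go once the packages are removed. A minimal sketch of the whole sequence, assuming the default paths used in this post (back up /var/lib/mysql first if any of the data is still needed, and adjust the datadir if your my.cnf pointed somewhere else; mysqld is the usual service name on CentOS 6):

# 1. see which mysql packages are actually installed
rpm -qa | grep -i mysql

# 2. remove every package the previous step listed
yum remove mysql mysql-server mysql-libs compat-mysql51

# 3. wipe the leftover config and data
rm -f /etc/my.cnf
rm -rf /var/lib/mysql /usr/lib/mysql /usr/share/mysql

# 4. reinstall and start a fresh instance
yum install mysql-server mysql
service mysqld start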
