Migrating a MySQL Database to PostgreSQL
I went through quite a bit of material and tried several approaches; what I finally settled on is the following:
1. Export the MySQL table definitions (no data):
mysqldump --no-data [dbname] >dbdef.sql
2. Use mysql2postgres to convert the script to PostgreSQL syntax.
3. The generated script is not necessarily perfect. Try importing it into PostgreSQL, debug any errors and fix them by hand. I only ran into one real problem: the zerofill in MySQL column definitions has to be removed manually. Some unsigned definitions also generate constraints, which can be dropped if you do not need them. In addition, the triggers all come out broken and can only be rebuilt by hand later (see step 7). An example of this kind of manual fix is sketched below.
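For illustration only (the table, column and constraint names here are hypothetical, not from the original article): a MySQL column such as
quantity INT(10) UNSIGNED ZEROFILL NOT NULL
has no ZEROFILL equivalent in PostgreSQL, so that keyword simply gets deleted from the converted definition. If the conversion turned the UNSIGNED part into a CHECK constraint you do not want, it can be dropped with:
ALTER TABLE sample DROP CONSTRAINT sample_quantity_check; -- hypothetical constraint name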
4. Export the MySQL data:
mysqldump -v -nt --complete-insert=TRUE --compact --no-create-info --skip-quote-names [dbname] >dbdata.sql
With older versions of PostgreSQL that do not support multi-row inserts you also need --extended-insert=FALSE, which costs a great deal of performance.
5. Escape characters
By default MySQL treats the backslash '\' inside string literals as an escape character, while PostgreSQL does not. Adjust postgresql.conf:
backslash_quote = on
escape_string_warning = off
standard_conforming_strings = off
Once the data import is finished these can be changed back to their defaults. A sketch of the kind of statement that trips over this difference follows.
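To illustrate (a made-up row, not data from the original article): with MySQL-style escaping, mysqldump can emit a statement like
INSERT INTO sample (id, name) VALUES (1, 'O\'Brien');
With standard_conforming_strings = on (the default in modern PostgreSQL) the backslash is taken literally and the statement fails to parse; with the three settings above, PostgreSQL accepts the MySQL-style escape while the data is being loaded.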
6. Import the table definitions and the data into PostgreSQL (feed the converted .sql files into psql):
psql -d [dbname]
7. Rebuild the triggers. They do not survive the conversion and have to be recreated by hand using PostgreSQL's trigger syntax; a sketch follows.
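A minimal sketch of rebuilding one trigger in PostgreSQL syntax (the table, column and trigger names are hypothetical; adapt them to the triggers you actually had in MySQL):
-- Keep an updated_at column current on every update (illustrative example).
CREATE OR REPLACE FUNCTION sample_set_updated_at() RETURNS trigger AS $$
BEGIN
  NEW.updated_at := now();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER sample_set_updated_at
  BEFORE UPDATE ON sample
  FOR EACH ROW EXECUTE PROCEDURE sample_set_updated_at();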
8. Handle auto-increment primary key columns. Since the column already has values in the imported data, the PostgreSQL sequence does not advance. Set the sequence's current value with a statement like:
SELECT setval('sample_id_seq', max(id)) FROM sample;
Finally, if the data volume is large, you can drop primary keys, indexes and constraints before the import for performance and add them back afterwards (a sketch follows the parameter list below). Also, feeding the psql client from a pipe does not seem fast enough; connecting over TCP with psql -h localhost helps. There are also a few parameters worth tuning for a large bulk load, roughly:
autovacuum = off
wal_level = minimal
archive_mode = off
full_page_writes = off
fsync = off
checkpoint_segments = 50
checkpoint_timeout = 1h
maintenance_work_mem: as large as your memory allows
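A sketch of dropping and re-adding a primary key and an index around the bulk load (the table, column, constraint and index names are hypothetical):
-- before the import
ALTER TABLE sample DROP CONSTRAINT sample_pkey;  -- hypothetical primary key name
DROP INDEX sample_name_idx;                      -- hypothetical index name
-- ... load dbdata.sql ...
-- after the import
ALTER TABLE sample ADD PRIMARY KEY (id);
CREATE INDEX sample_name_idx ON sample (name);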
Author: RuralHunter
