Bulk-loading a huge volume of log files into the database
The log directory contains 10 log files, each around 60 MB compressed, with a .gz suffix (e.g. a.gz, b.gz). Each line in a file looks like:
id=2112112,[email protected], ... (other fields)
id=2112112,[email protected], ... (other fields)
id=2112112,[email protected], ... (other fields)
I now want to insert the entire contents of every file in this directory into the database. The tables are sharded by email, roughly log_1, log_2, and so on up to log_1000. Could someone give a detailed solution, i.e. how to get each file loaded very quickly so the script runs more efficiently?
Here is the code first:
<?php
error_reporting(E_ALL & ~E_NOTICE);

// parameters
$mysql_host = 'XX.XX.XX.XX';
$mysql_user = 'XXX';
$mysql_pass = 'XX';
$mysql_port = 3306;
$mysql_db   = 'test';
$table_pre  = 'log_';
$gz_log_file = 'a.gz';

// script execution log
$exec_log = '/data_log/record.txt';
file_put_contents($exec_log, '*****************************************START***********************************'."\r\n", FILE_APPEND);
file_put_contents($exec_log, 'param is mysql_host='.$mysql_host.' mysql_user='.$mysql_user.' mysql_pass='.$mysql_pass.' mysql_port='.$mysql_port.' mysql_db='.$mysql_db.' table_pre='.$table_pre.' gz_log_file='.$gz_log_file.' start_time='.date("Y-m-d H:i:s")."\r\n", FILE_APPEND);

function microtime_float() {
    return microtime(true);   // was called but never defined in the original
}

// read the log and insert into the database
$z_handle = gzopen($gz_log_file, 'r');
$time_start = microtime_float();

// connect to the database
$conn = mysql_connect("$mysql_host:$mysql_port", $mysql_user, $mysql_pass);
if (!$conn) {
    file_put_contents($exec_log, 'Could not connect database error, error='.mysql_error()."\r\n", FILE_APPEND);
    exit;
}
$select_db = mysql_select_db($mysql_db);
if (!$select_db) {
    file_put_contents($exec_log, 'select database error, database='.$mysql_db."\r\n", FILE_APPEND);
    exit;
}

while (!gzeof($z_handle)) {
    $each_gz_line = gzgets($z_handle, 4096);
    $line_to_array = explode("\t", $each_gz_line);
    // skip invalid log lines
    if (!empty($line_to_array[3]) && !empty($line_to_array[2]) && !empty($line_to_array[4])) {
        // pick the shard table first (the original built the SQL before $table_name was set)
        $table_id   = abs(crc32($line_to_array[2]) % 1000);
        $table_name = $table_pre.$table_id;
        // the value list must match the column list (uid, email, ip, ctime)
        $insert_value = "('".$line_to_array[3]."','".$line_to_array[2]."','".$line_to_array[1]."','".$line_to_array[4]."')";
        $insert_sql = "insert into $table_name (uid,email,ip,ctime) values $insert_value";
        $result = mysql_query($insert_sql);
        if (!$result) {
            // log failed inserts
            file_put_contents($exec_log, 'table_name='.$table_name.' email='.$line_to_array[2]."\r\n", FILE_APPEND);
        }
    }
}

$time_end = microtime_float();
$diff = $time_end - $time_start;
file_put_contents($exec_log, 'success to insert database, log_file is '.$gz_log_file.' time-consuming is='.$diff."s\r\n", FILE_APPEND);
file_put_contents($exec_log, '*******************************************END***********************************'."\r\n", FILE_APPEND);
gzclose($z_handle);
The code above runs unbearably slowly. Could some expert please help?
------Solution--------------------
Change the table engine to InnoDB and then do the inserts inside a transaction (a minimal sketch follows below).
If that is still not fast enough, switch to LOAD DATA INFILE.
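A minimal sketch of that idea, assuming InnoDB shard tables log_0 through log_999 with columns (uid, email, ip, ctime). It uses mysqli instead of the removed mysql_* functions; the host, credentials, batch size and field positions are placeholders, not values confirmed by the question:

<?php
// Group rows per shard table, flush them as multi-row INSERTs,
// and commit everything in a single transaction at the end.
$db = new mysqli('127.0.0.1', 'user', 'pass', 'test', 3306);
$db->begin_transaction();

$batches    = array();   // table name => array of "(...)" value tuples
$batch_size = 1000;      // rows per INSERT statement

$z = gzopen('a.gz', 'r');
while (!gzeof($z)) {
    $cols = explode("\t", rtrim(gzgets($z, 4096)));
    if (empty($cols[2])) continue;                  // the email field is required
    $table = 'log_' . abs(crc32($cols[2]) % 1000);  // same sharding rule as the question
    $batches[$table][] = sprintf("('%s','%s','%s','%s')",
        $db->real_escape_string($cols[3]),
        $db->real_escape_string($cols[2]),
        $db->real_escape_string($cols[1]),
        $db->real_escape_string($cols[4]));
    if (count($batches[$table]) >= $batch_size) {
        $db->query("INSERT INTO $table (uid,email,ip,ctime) VALUES " . implode(',', $batches[$table]));
        $batches[$table] = array();
    }
}
gzclose($z);

// flush the remainders, then commit once
foreach ($batches as $table => $rows) {
    if ($rows) {
        $db->query("INSERT INTO $table (uid,email,ip,ctime) VALUES " . implode(',', $rows));
    }
}
$db->commit();

Multi-row INSERTs cut the per-statement round-trip overhead, and committing once at the end avoids a log flush for every single row.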
------Solution--------------------
With InnoDB, opening an explicit transaction should not make things slower: even without one, every single statement is already its own transaction, so opening one transaction and committing once at the end should be faster than implicitly beginning and committing for every statement (although, as I recall, the gain is not huge). MyISAM, on the other hand, is definitely faster than InnoDB when there is only one insert thread and the total amount of data in the table is small, especially with only about 60 MB of data.
LOAD DATA INFILE will certainly be much faster, but you first have to convert the file into a tab-separated "xxx \t xxx" form and then LOAD DATA INFILE it; that should be several times faster than inserting row by row (see the sketch below).
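A sketch of that conversion, assuming LOCAL INFILE is allowed on both the server and the client and that /tmp can hold the re-split data; paths, credentials and field positions are illustrative only:

<?php
// Pre-split the gzipped log into one tab-separated file per shard table,
// then bulk-load each file with LOAD DATA LOCAL INFILE.
$db = mysqli_init();
$db->options(MYSQLI_OPT_LOCAL_INFILE, true);   // allow LOCAL INFILE from this client
$db->real_connect('127.0.0.1', 'user', 'pass', 'test', 3306);

$handles = array();                            // shard id => temp file handle
$z = gzopen('a.gz', 'r');
while (!gzeof($z)) {
    $cols = explode("\t", rtrim(gzgets($z, 4096)));
    if (empty($cols[2])) continue;
    $id = abs(crc32($cols[2]) % 1000);
    if (!isset($handles[$id])) {
        $handles[$id] = fopen("/tmp/log_$id.tsv", 'w');
    }
    // write the columns in the order the table expects: uid, email, ip, ctime
    fwrite($handles[$id], implode("\t", array($cols[3], $cols[2], $cols[1], $cols[4])) . "\n");
}
gzclose($z);

foreach ($handles as $id => $fh) {
    fclose($fh);
    $db->query("LOAD DATA LOCAL INFILE '/tmp/log_$id.tsv'
                INTO TABLE log_$id
                FIELDS TERMINATED BY '\\t'
                LINES TERMINATED BY '\\n'
                (uid, email, ip, ctime)");
    unlink("/tmp/log_$id.tsv");
}

Note that up to 1000 shard files may be open at once, so the process can hit the open-file limit; raise the limit or handle the shards in several passes if necessary.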
------Solution--------------------
Just use LOAD DATA, and compare row counts after loading; don't bother with transactions. The chance of an error is very low, and even if something does go wrong, deleting the rows and re-importing is quick. PS: this amount of data does not count as "massive".
------Solution--------------------
I don't see why this has to go into a database at all.
By your description, each file expands to roughly 60 MB × 20 (over 1 GB) after decompression, or even more.
Inserting that row by row is bound to be slow.
------Solution--------------------
Loading historical data is a one-off job, so "efficiency" hardly matters.
You could import each file into a text column first and then split it apart with UPDATE statements (see the sketch after this reply).
If you don't plan to change the way the logs are produced, then appending the incremental logs to the database is also just a periodic job (with a period of at least one day),
so again there is no real notion of efficiency.
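A minimal sketch of that staging-table approach. The staging table raw_log(line TEXT, uid VARCHAR(20), email VARCHAR(100), ip VARCHAR(20), ctime VARCHAR(20)) is hypothetical, and the SUBSTRING_INDEX expressions assume "key=value" fields separated by commas, as in the sample lines; adjust them to the real format:

<?php
// Load raw lines into a text column, split them with UPDATE,
// then fan the rows out into the shard tables.
$db = mysqli_init();
$db->options(MYSQLI_OPT_LOCAL_INFILE, true);
$db->real_connect('127.0.0.1', 'user', 'pass', 'test', 3306);

// 1) decompress the file (e.g. gunzip -k a.gz) and load every line as raw text
$db->query("LOAD DATA LOCAL INFILE '/path/a'
            INTO TABLE raw_log
            LINES TERMINATED BY '\\n' (line)");

// 2) split the fields out of the text column; ip and ctime would be
//    extracted the same way once their positions are known
$db->query("UPDATE raw_log SET
              uid   = SUBSTRING_INDEX(SUBSTRING_INDEX(line, ',', 1), '=', -1),
              email = SUBSTRING_INDEX(SUBSTRING_INDEX(SUBSTRING_INDEX(line, ',', 2), ',', -1), '=', -1)");

// 3) route the rows into the shard tables (shown for one shard; loop over 0..999)
$db->query("INSERT INTO log_0 (uid, email, ip, ctime)
            SELECT uid, email, ip, ctime FROM raw_log
            WHERE CRC32(email) % 1000 = 0");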
