
Minimalist SQL optimization rules you need to know

coldplay.xixi
Release: 2021-01-02 11:23:42

SQL, as the standard language of relational databases, is one of the essential skills for IT practitioners. SQL itself is not hard to learn, and writing query statements is easy; what is hard is writing queries that run efficiently.



Query optimization is a complex undertaking. It touches everything from hardware and parameter configuration to each database's parser and optimizer implementation, the execution order of SQL statements, indexes, statistics collection, and even the overall architecture of the application and system. This article introduces several key rules that help us write efficient SQL queries; for beginners especially, these rules can at least keep us from writing queries with poor performance.

The following rules apply to most relational databases, including but not limited to MySQL, Oracle, SQL Server, PostgreSQL, and SQLite.

Rule 1: Only return the required results

Always specify a WHERE condition so that unnecessary rows are filtered out. Generally speaking, an OLTP system only needs to return a handful of records out of a large data set at a time; specifying query conditions lets the database return results through an index instead of a full table scan. Performance is better in most cases when an index is used, because index structures (B-tree, B+-tree, B*-tree) support lookups in logarithmic rather than linear time. Take the MySQL clustered index as an example: assuming each index branch node can hold 100 records, 1 million (100³) records need only a 3-level B-tree. Finding a row through the index then takes 3 index reads (each disk IO reads an entire branch node) plus 1 more disk IO to read the data row.

By contrast, a full table scan may require orders of magnitude more disk IO. When the data volume grows to 100 million (100⁴) rows, the B-tree index needs only one more level, and therefore one more index IO, while a full table scan needs orders of magnitude more IO still.

Similarly, we should avoid SELECT *, which returns every column of the table. This usually makes the database read more data than needed, and more data also has to be transferred over the network, so performance drops.
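As a minimal sketch (the column list and the dept_id value are illustrative), prefer an explicit column list plus a selective WHERE condition over SELECT *:

-- Reads and transfers more data than needed
SELECT * FROM employee;

-- Returns only the columns and rows the application actually needs
SELECT emp_id, emp_name, salary
  FROM employee
 WHERE dept_id = 10;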

Rule 2: Make sure the query uses the correct index

If the appropriate index is missing, the database cannot locate the data through an index even when query conditions are specified. So the first thing to do is make sure suitable indexes have been created. Generally speaking, the following fields should be indexed (a sketch follows the list):

  • Fields that often appear in WHERE conditions — indexing them avoids full table scans;
  • Fields used in ORDER BY — adding them to an index avoids extra sorting operations;
  • Join fields in multi-table queries — indexing them improves the performance of join queries;
  • Fields used in GROUP BY — adding them to an index lets the grouping be completed using the index.
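A minimal sketch of such indexes on the employee table used in the examples below (idx_emp_dept appears in the later execution plans; the other index names and column choices are illustrative):

-- MySQL
CREATE INDEX idx_emp_dept ON employee (dept_id);              -- join / GROUP BY field
CREATE INDEX idx_emp_name ON employee (emp_name);             -- frequent WHERE condition
CREATE INDEX idx_emp_dept_salary ON employee (dept_id, salary); -- WHERE plus ORDER BY combination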

Even if a suitable index exists, the database will not use it if there is a problem with the SQL statement itself. Common causes of index failure include (see the sketch after this list):

  • Performing expression operations or applying functions to an indexed field in the WHERE clause prevents the index from being used. This also covers data type mismatches, such as comparing a string column with an integer;
  • With LIKE matching, a wildcard at the left of the pattern prevents index use. For fuzzy matching on large text, consider the database's full-text search features, or even a dedicated full-text search engine (Elasticsearch, etc.);
  • If an indexed field appears in a WHERE condition, try to define it as NOT NULL; not every database can use an index for IS [NOT] NULL predicates.
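A minimal sketch of the first two pitfalls, assuming an index on emp_name (the predicates are illustrative):

-- Applying a function or expression to the indexed column defeats the index
SELECT emp_id FROM employee WHERE SUBSTRING(emp_name, 1, 1) = '张';
-- Better: express the condition on the bare column so the index can be used
SELECT emp_id FROM employee WHERE emp_name LIKE '张%';

-- A leading wildcard also defeats the index
SELECT emp_id FROM employee WHERE emp_name LIKE '%飞';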

An execution plan (also called a query plan or explain plan) describes the concrete steps the database takes to execute a SQL statement: whether table data is accessed through an index or a full table scan, how join queries are implemented and in what order tables are joined, and so on. If a SQL statement does not perform well, the first thing to do is check its execution plan (EXPLAIN) and confirm that the query uses the correct index.
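A minimal sketch in MySQL (the dept_id value is illustrative; output columns vary by database):

-- MySQL
EXPLAIN
SELECT emp_id, emp_name
  FROM employee
 WHERE dept_id = 10;
-- In the output, check that the `key` column shows the expected index (e.g. idx_emp_dept)
-- and that `type` is not ALL, which would indicate a full table scan.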

Rule 3: Try to avoid using subqueries

Taking MySQL as an example, the following query returns the employees whose monthly salary is greater than the average monthly salary of their department:

EXPLAIN ANALYZE
 SELECT emp_id, emp_name
   FROM employee e
   WHERE salary > (
     SELECT AVG(salary)
       FROM employee
       WHERE dept_id = e.dept_id);
-> Filter: (e.salary > (select #2))  (cost=2.75 rows=25) (actual time=0.232..4.401 rows=6 loops=1)
    -> Table scan on e  (cost=2.75 rows=25) (actual time=0.099..0.190 rows=25 loops=1)
    -> Select #2 (subquery in condition; dependent)
        -> Aggregate: avg(employee.salary)  (actual time=0.147..0.149 rows=1 loops=25)
            -> Index lookup on employee using idx_emp_dept (dept_id=e.dept_id)  (cost=1.12 rows=5) (actual time=0.068..0.104 rows=7 loops=25)

As the execution plan shows, MySQL uses a Nested Loop Join-like strategy: the dependent subquery is executed 25 times (once per outer row), even though the average salary of every department could be computed and cached with a single scan. The following statement replaces the subquery with an equivalent JOIN, performing the subquery unnesting (Subquery Unnest) by hand:

EXPLAIN ANALYZE
 SELECT e.emp_id, e.emp_name
   FROM employee e
   JOIN (SELECT dept_id, AVG(salary) AS dept_average
           FROM employee
          GROUP BY dept_id) t
     ON e.dept_id = t.dept_id
  WHERE e.salary > t.dept_average;
-> Nested loop inner join  (actual time=0.722..2.354 rows=6 loops=1)
    -> Table scan on e  (cost=2.75 rows=25) (actual time=0.096..0.205 rows=25 loops=1)
    -> Filter: (e.salary > t.dept_average)  (actual time=0.068..0.076 rows=0 loops=25)
        -> Index lookup on t using <auto_key0> (dept_id=e.dept_id)  (actual time=0.011..0.015 rows=1 loops=25)
            -> Materialize  (actual time=0.048..0.057 rows=1 loops=25)
                -> Group aggregate: avg(employee.salary)  (actual time=0.228..0.510 rows=5 loops=1)
                    -> Index scan on employee using idx_emp_dept  (cost=2.75 rows=25) (actual time=0.181..0.348 rows=25 loops=1)

The rewritten query uses materialization: the subquery result is generated into an in-memory temporary table, which is then joined with the employee table. The actual execution times show that this approach is faster.

In Oracle and SQL Server, the example above is unnested automatically, so both forms perform the same. PostgreSQL behaves like MySQL: the first statement uses a Nested Loop Join, while the JOIN rewrite is executed with a Hash Join and performs better.

Similar conclusions hold for IN and EXISTS subqueries. Because optimizer capabilities differ across databases, we should try to avoid subqueries and consider rewriting them with JOIN.
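A minimal sketch of such a rewrite, using the department table from the later examples (the JOIN form assumes dept_id is unique in department, otherwise duplicates may appear):

-- IN subquery
SELECT e.emp_id, e.emp_name
  FROM employee e
 WHERE e.dept_id IN (SELECT d.dept_id
                       FROM department d
                      WHERE d.dept_name = '行政管理部');

-- Equivalent JOIN rewrite
SELECT e.emp_id, e.emp_name
  FROM employee e
  JOIN department d ON e.dept_id = d.dept_id
 WHERE d.dept_name = '行政管理部';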

Rule 4: Do not use OFFSET to implement pagination

Pagination works by skipping a specified number of rows and then returning the top-N records. Databases generally support FETCH/LIMIT together with OFFSET to implement top-N rankings and pagination. When the table holds a lot of data, this style of pagination can cause performance problems. Take MySQL as an example:

-- MySQL
SELECT *
  FROM large_table
 ORDER BY id
 LIMIT 10 OFFSET N;

As OFFSET grows, this query gets slower and slower: even though we only need 10 records, the database still has to access and discard N rows (for example 1,000,000), and even with an index this involves unnecessary scanning.

A better approach for this kind of pagination is to remember the largest id fetched last time and pass it in as a condition of the next query:

-- MySQL
SELECT *
  FROM large_table
 WHERE id > last_id
 ORDER BY id
 LIMIT 10;

As long as there is an index on the id field, this style of pagination is essentially unaffected by the amount of data in the table.
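A minimal sketch, assuming id is not already the primary key (the index name is illustrative):

-- MySQL
CREATE INDEX idx_large_table_id ON large_table (id);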

Rule 5: Understand the logical execution order of SQL clauses

Here is the syntactic order of the clauses in a SQL statement; the numbers in parentheses indicate their logical execution order:

(6) SELECT [DISTINCT | ALL] col1, col2, agg_func(col3) AS alias
(1)   FROM t1 JOIN t2
(2)     ON (join_conditions)
(3)  WHERE where_conditions
(4)  GROUP BY col1, col2
(5) HAVING having_condition
(7)  UNION [ALL]
    ...
(8)  ORDER BY col1 ASC, col2 DESC
(9) OFFSET m ROWS FETCH NEXT num_rows ROWS ONLY;

In other words, SQL does not execute SELECT first and then FROM in the order in which they are written. Logically, the clauses of a SQL statement execute in the following order:

  • First, FROM and JOIN are the first step of execution. Their logical result is a Cartesian product that determines the data set to be operated on next. Note that logical execution order is not the same as physical execution order; in practice the database uses the ON and WHERE conditions to optimize access before reading all the table data;
  • Next, the ON condition filters the result of the previous step and produces a new data set;
  • Then, the WHERE clause filters that data set again. WHERE and ON have the same effect in most cases, but they differ for outer join queries; an example is given below;
  • After that, rows are grouped by the expressions in the GROUP BY clause, and the aggregate function agg_func is computed for each group. Once GROUP BY has been applied, the structure of the data set changes: only the grouping fields and the results of aggregate functions remain;
  • If there is a GROUP BY clause, HAVING can filter the grouped result further, usually on the results of aggregate functions;
  • Next, SELECT picks the columns to return; if DISTINCT is specified, the result set is deduplicated. Aliases declared with AS are also generated at this point;
  • If there are set operators (UNION, INTERSECT, EXCEPT) combining other SELECT statements, those queries are executed and the result sets merged. Databases can usually run the multiple SELECT statements of a set operation concurrently;
  • Then, ORDER BY sorts the result. If there is a GROUP BY clause or the DISTINCT keyword, only grouping fields and aggregate functions can be used for sorting; otherwise any field of the FROM and JOIN tables can be used;
  • Finally, OFFSET and FETCH (LIMIT, TOP) limit the number of rows ultimately returned.

Understanding the logical execution order of SQL helps with optimization. For example, since the WHERE clause executes before HAVING, we should filter data with WHERE whenever possible and avoid needless work, unless the business requirement is to filter on the result of an aggregate function.
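A minimal sketch of the difference (the hire_date column and the numbers are illustrative):

-- A row-level filter belongs in WHERE: rows are discarded before grouping
SELECT dept_id, AVG(salary)
  FROM employee
 WHERE hire_date >= '2020-01-01'
 GROUP BY dept_id;

-- HAVING is for conditions on aggregate results, which cannot go in WHERE
SELECT dept_id, AVG(salary)
  FROM employee
 GROUP BY dept_id
HAVING AVG(salary) > 8000;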

Beyond that, understanding the logical execution order of SQL also helps us avoid some common mistakes, such as the following statement:

-- Incorrect example
SELECT emp_name AS empname
  FROM employee
 WHERE empname = '张飞';

The mistake is that the WHERE condition references a column alias; as the logical order above shows, the SELECT clause has not yet been executed when the WHERE condition is evaluated, so the alias does not exist at that point.
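A minimal sketch of the fix: reference the original column name in WHERE (the alias can still be used for the output):

SELECT emp_name AS empname
  FROM employee
 WHERE emp_name = '张飞';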

Another operation to watch out for is GROUP BY, for example:

-- Incorrect GROUP BY example
SELECT dept_id, emp_name, AVG(salary)
  FROM employee
 GROUP BY dept_id;

Because only the grouping fields and aggregate results survive GROUP BY, the emp_name field in this example no longer exists; and in business terms, displaying one employee's name after aggregating by department makes no sense. If you need to show employee details alongside their department's aggregate, use a window function, as sketched below.
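A minimal sketch using a window function (requires window-function support, e.g. MySQL 8.0+):

SELECT dept_id, emp_name, salary,
       AVG(salary) OVER (PARTITION BY dept_id) AS dept_average
  FROM employee;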

Some logic problems do not make the query fail outright but silently return incorrect results; the placement of ON and WHERE conditions in an outer join query is one example. Here is a left outer join example:

SELECT e.emp_name, d.dept_name
  FROM employee e
  LEFT JOIN department d ON (e.dept_id = d.dept_id)
 WHERE e.emp_name ='张飞';
emp_name|dept_name|
--------|---------|
张飞     |行政管理部|
SELECT e.emp_name, d.dept_name
  FROM employee e
  LEFT JOIN department d ON (e.dept_id = d.dept_id AND e.emp_name ='张飞');
emp_name|dept_name|
--------|---------|
刘备     |   [NULL]|
关羽     |   [NULL]|
张飞     |行政管理部|
诸葛亮   |   [NULL]|
...
  • The first query specifies the join condition in the ON clause and finds '张飞' through the WHERE clause.
  • The second query puts all filter conditions into the ON clause, and as a result every employee is returned. This is because a left outer join returns all rows of the left table, so the employee-name condition in the ON clause does not reduce the result; the WHERE condition, by contrast, logically filters the result after the join has been performed.

Summary

SQL optimization essentially means understanding how the optimizer works, creating suitable indexes, and writing correct statements accordingly; and when the optimizer is not smart enough, rewriting the query by hand to help it along.
