Understanding and Summarizing the MySQL Query Cache and the SQL Server Procedure Cache
I. The MySQL Query Cache
1. Query Cache
The MySQL Query Cache caches executed SELECT statements together with their result sets. Its implementation resembles a typical key-value store: MySQL hashes the SELECT statement and maps it to its result set, held in a dedicated memory area. When a client issues a query, the lookup works like this: the server first performs the usual privilege checks on the SQL, then probes the Query Cache for a matching result. A hit skips the Optimizer module's plan analysis entirely and never touches any storage engine, saving a large amount of disk I/O and CPU work, which is why it can be extremely efficient.
2. Query Cache configuration parameters
The Query Cache is enabled and tuned through MySQL server parameters; the five main ones are listed below (a configuration sketch follows the list):
(1) query_cache_limit: the maximum size of a single cached result set, 1 MB by default; result sets larger than this are not cached;
(2) query_cache_min_res_unit: the minimum block of memory the Query Cache allocates at a time, i.e. the smallest amount of memory a single cached query occupies;
(3) query_cache_size: the total memory given to the Query Cache. The default is 0, and the value must be a multiple of 1024; if it is not, MySQL rounds it down to the nearest multiple of 1024;
(4) query_cache_type: switches the Query Cache feature on and off. It accepts three values:
a. 0 (OFF): the Query Cache is disabled and never used;
b. 1 (ON): the Query Cache is enabled, but SELECT statements carrying the SQL_NO_CACHE hint bypass it;
c. 2 (DEMAND): the Query Cache is enabled, but only SELECT statements carrying the SQL_CACHE hint use it.
(5) query_cache_wlock_invalidate: controls whether a write lock on a table immediately invalidates that table's cached entries. If set to 1 (TRUE), taking the write lock invalidates all Query Cache entries referencing the table; if set to 0 (FALSE), cached results for the table may still be read while it is locked.
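The sketch below shows, as a hedged illustration, how these parameters might be set in my.cnf and how the per-statement hints look. The values and the table name users are purely illustrative, and it assumes a MySQL version that still ships the Query Cache (the feature was removed in MySQL 8.0).

# my.cnf (illustrative values)
[mysqld]
query_cache_type  = 1          # ON: cache SELECT results unless SQL_NO_CACHE is given
query_cache_size  = 64M        # total Query Cache memory, must be a multiple of 1024
query_cache_limit = 1M         # do not cache result sets larger than 1 MB

-- per-statement hints (SQL)
SELECT SQL_CACHE    id, name FROM users WHERE id = 1;    -- opt in (required when query_cache_type = 2/DEMAND)
SELECT SQL_NO_CACHE id, name FROM users WHERE id = 1;    -- bypass the cache for this statement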
3. The Query Cache and performance
Too much of a good thing can hurt, especially on write-heavy systems: enabling the Query Cache may not improve performance at all and can even degrade it. The reason is that MySQL must keep cached results strictly consistent with the underlying data, so whenever a table is updated, deleted from, or inserted into, MySQL forcibly invalidates every cached query that references that table. Under intensive writes this causes constant cache invalidation, which in turn can drive memory usage and CPU load sharply upward, a heavy extra burden on an already busy database server.
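To judge whether the cache is helping or hurting on a given workload, the server's standard status counters can be inspected (again assuming a MySQL version that still has the Query Cache); the statements below are ordinary MySQL commands:

SHOW VARIABLES LIKE 'query_cache%';   -- current Query Cache configuration
SHOW STATUS    LIKE 'Qcache%';        -- Qcache_hits, Qcache_inserts, Qcache_lowmem_prunes, ...
-- A low ratio of Qcache_hits to Qcache_inserts, or a steadily growing
-- Qcache_lowmem_prunes counter, usually means the cache is costing more than it saves.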
4. Other notes
Strictly speaking, the Query Cache is a server-level component and behaves the same regardless of storage engine; what differs between engines is how data pages are cached. MyISAM relies largely on the OS cache for its data (its own key cache holds only indexes), while InnoDB, the most popular engine, keeps data pages in its Buffer Pool.
II. The SQL Server Procedure Cache
SQL Server has no mechanism equivalent to MySQL's Query Cache, but it has its own caching. Rather than caching SQL result sets directly, SQL Server caches the data pages it has read (the Data Buffer) and also caches execution plans (the Procedure Cache). The rest of this section covers the procedure cache.
1. How a SQL statement is executed
A SQL statement must first be compiled, then optimized by the SQL Server query engine to produce an execution plan, and finally executed according to that plan.
2. The Procedure Cache
Creating an execution plan costs CPU, so once a plan has been built the SQL Server query engine caches it automatically by default.
For statements that are identical in shape and differ only in parameter values, SQL Server can reuse the cached plan.
For statements whose text differs, however, SQL Server cannot reuse an earlier plan and must compile a new one; since plans are cached automatically, every new plan consumes SQL Server memory.
As long as SQL Server has enough available memory, the query engine does not proactively evict previously cached plans. Consequently, a query that is logically the same but merely written differently can spawn many extra plans; these redundant plans sit in memory for no benefit and greatly inflate the number of plans held in the cache.
In that situation, if a maximum server memory has been configured, the redundant plans reduce the memory actually available to SQL Server, causing more page exchange with disk during queries, especially large ones; if no maximum is set, the excess cached plans simply push SQL Server's memory consumption ever higher.
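As a hedged illustration, the DMV query below is one common way to see which ad-hoc plans are sitting in the procedure cache and how often each has been reused; the column selection and TOP (20) limit are illustrative choices, not a prescribed diagnostic.

-- T-SQL: inspect cached ad-hoc plans (requires VIEW SERVER STATE permission)
SELECT TOP (20)
       cp.usecounts,      -- how many times the plan has been reused
       cp.objtype,        -- 'Adhoc', 'Prepared', 'Proc', ...
       cp.size_in_bytes,
       st.text AS sql_text
FROM   sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE  cp.objtype = 'Adhoc'
ORDER BY cp.size_in_bytes DESC;
-- Two textually different but logically identical statements show up here
-- as two separate single-use plans.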
3. How to reduce procedure-cache usage
The main way to reduce procedure-cache consumption is to use parameterized queries.
The key point of a parameterized query is that the query optimizer creates a single reusable cached plan (SQL Server rewrites the query as a parameterized SQL statement), which removes the need to build a new plan for every execution of a similar statement. With one reusable plan, SQL Server needs far less memory to store plans for that family of similar statements.
As developers, we generally have two ways to issue parameterized queries (a sketch of the second follows this list):
(1) execute the SQL from a stored procedure;
(2) execute the SQL through sp_executesql.
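A minimal sketch of option (2), assuming a hypothetical dbo.Orders table and @CustomerId parameter; the sp_executesql call pattern itself is the standard one.

-- T-SQL: parameterized execution via sp_executesql
DECLARE @CustomerId INT = 42;

EXEC sys.sp_executesql
     N'SELECT OrderId, OrderDate
       FROM   dbo.Orders
       WHERE  CustomerId = @CustomerId',   -- statement text stays constant across calls
     N'@CustomerId INT',                   -- parameter declaration
     @CustomerId = @CustomerId;            -- parameter value for this call

-- Every call with a different customer id reuses the same cached plan, whereas
-- concatenating the id into the SQL string would create a new ad-hoc plan per value.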
A side note on executing SQL through stored procedures: they have always been controversial. The ORM camp, for instance, argues that stored procedures are purely procedural and hard to extend and maintain. In my own development experience, short stored procedures with little or no logic are fine to use freely, but complex ones tend to become bug farms with extremely high maintenance costs later on (our architect once described an 8,000-plus-line stored procedure with more than two hundred variables in a critical business system's database that nobody dared touch). Fast-changing business rules are better expressed in the business-logic layer, which exists to absorb such change. Now that mature ORMs and layered architectures are available, overly long and logically complex stored procedures should be strictly avoided in development; otherwise, given enough accumulation, another 8,000-line procedure is entirely possible.
Reference:
http://www.sql-server-performance.com/2004/data-cache/