We've all been taught the benefits of normalizing our data. I won't go into the details, but it boils down to this:
Normalization is the process of organizing data in a database. It includes creating tables and establishing relationships between those tables according to rules designed both to protect the data and to make the database more flexible by eliminating redundancy and inconsistent dependency.
Microsoft 365 - Description of the database normalization basics
To be honest, I hadn't given normalization much thought until recently, when I had to work with several "highly normalized" legacy applications. And when I say "highly normalized", I mean normalized to the point where things no longer make sense. That reminded me of this great article by Coding Horror: Maybe Normalizing Isn't Normal.
The thing is, unless you're lucky enough to work at this kind of scale, you never have to worry about any of this. So instead of talking about it hypothetically, let's look at a specific scenario and try out different techniques to understand the complexities of the topic. Once we've walked through the scenario, we'll dig into the technical details to better understand why a highly normalized architecture can become a problem, and which optimizations we can consider to improve the experience.
You can find the code for this article here.
You're working on an established, legacy, large-scale SaaS (Software as a Service) inventory management system. The system consists of inventory items, and each item has various attributes such as a category, a supplier, and a warehouse. A client has requested a report that displays item details, including the supplier and warehouse names.
Here's a simplified schema, with multi-tenancy left out (to keep things simple).
Each item references an entry in the categories, suppliers, and warehouses tables. Each item's attributes are stored in the item_attributes table. This all makes sense and is pretty straightforward to build.
CREATE TABLE categories (
  id INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(255) NOT NULL,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);

CREATE TABLE suppliers (
  id INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(255) NOT NULL,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);

CREATE TABLE warehouses (
  id INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(255) NOT NULL,
  location VARCHAR(255) NOT NULL,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);

CREATE TABLE items (
  id INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(255) NOT NULL,
  category_id INT,
  supplier_id INT,
  warehouse_id INT,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  FOREIGN KEY (category_id) REFERENCES categories(id),
  FOREIGN KEY (supplier_id) REFERENCES suppliers(id),
  FOREIGN KEY (warehouse_id) REFERENCES warehouses(id)
);

CREATE TABLE item_attributes (
  id INT AUTO_INCREMENT PRIMARY KEY,
  item_id INT,
  attribute_name VARCHAR(255) NOT NULL,
  attribute_value VARCHAR(255) NOT NULL,
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  FOREIGN KEY (item_id) REFERENCES items(id)
);

-- To illustrate the denormalization strategy mentioned, here's an example of a
-- denormalized items_denormalized table. The reference id columns are included
-- because the seed script and the queries later in the article rely on them.
CREATE TABLE items_denormalized (
  id INT AUTO_INCREMENT PRIMARY KEY,
  item_id INT,
  name VARCHAR(255) NOT NULL,
  category_id INT,
  category_name VARCHAR(255),
  supplier_id INT,
  supplier_name VARCHAR(255),
  warehouse_id INT,
  warehouse_name VARCHAR(255),
  attribute_name VARCHAR(255),
  attribute_value VARCHAR(255),
  created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
);

CREATE INDEX idx_items_id ON items(id);
CREATE INDEX idx_categories_id ON categories(id);
CREATE INDEX idx_suppliers_id ON suppliers(id);
CREATE INDEX idx_warehouses_id ON warehouses(id);
CREATE INDEX idx_item_attributes_item_id ON item_attributes(item_id);
For any performance work we do, it's important to be able to reproduce the expected scale so that we have a good understanding of how the application behaves. To that end, I put together the following seed script:
require 'faker'

def create_records(message, &block)
  puts "Creating #{message}."
  starting = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  yield if block_given?
  ending = Process.clock_gettime(Process::CLOCK_MONOTONIC)
  elapsed = ending - starting
  puts "#{message.capitalize} created. #{elapsed}"
end

puts 'Truncating database...'
ActiveRecord::Tasks::DatabaseTasks.truncate_all
puts 'Database truncated.'

create_records('categories') do
  10.times do
    Category.create(name: Faker::Book.genre)
  end
end

create_records('suppliers') do
  25.times do
    Supplier.create(name: Faker::Company.name)
  end
end

create_records('warehouses') do
  1000.times do
    Warehouse.create(name: Faker::Company.name, location: Faker::Address.full_address)
  end
end

create_records('items') do
  categories = Category.all.to_a
  suppliers = Supplier.all.to_a
  warehouses = Warehouse.all.to_a

  items = 100_000.times.map do
    {
      name: Faker::Commerce.product_name,
      category_id: categories.sample.id,
      supplier_id: suppliers.sample.id,
      warehouse_id: warehouses.sample.id
    }
  end

  items.each_slice(1000) do |batch|
    Item.insert_all(batch)
  end
end

create_records('item attributes') do
  items = Item.all

  # We'll bump this up later to 1_000_000 in order to see
  # the perf issues come up.
  item_attributes = 100_000.times.map do
    {
      attribute_name: Faker::Lorem.word,
      attribute_value: Faker::Lorem.word,
      item_id: items.sample.id
    }
  end

  item_attributes.each_slice(1000) do |batch|
    ItemAttribute.insert_all(batch)
  end
end

create_records('denormalized items') do
  items_with_associations = Item.includes(:category, :supplier, :warehouse)
  denormalized_items_attributes = []

  items_with_associations.find_each(batch_size: 1000) do |item|
    denormalized_items_attributes << {
      name: item.name,
      item_id: item.id,
      category_name: item.category.name,
      category_id: item.category.id,
      supplier_name: item.supplier.name,
      supplier_id: item.supplier.id,
      warehouse_name: item.warehouse.name,
      warehouse_id: item.warehouse.id,
      created_at: DateTime.now,
      updated_at: DateTime.now
    }
  end

  denormalized_items_attributes.each_slice(1000) do |batch|
    ItemDenormalized.insert_all(batch)
  end
end
This seed script creates records for every entity. We can tweak it to create more (or fewer) records in order to stress test the architecture, which is exactly what we're about to do.
Keep in mind that this runs on a local machine, so we're not testing against production-level resources here. Ideally we would have a production-like environment to try out the different strategies, but the point isn't to replicate production; it's to get a good feel for the complexities of working with a highly normalized architecture.
Running the seed gives us the following log:
bundle exec rails db:seed
Truncating database...
Database truncated.
Creating categories.
Categories created. 3.226257999893278
Creating suppliers.
Suppliers created. 0.1299410001374781
Creating warehouses.
Warehouses created. 4.184017000021413
Creating items.
Items created. 7.629256000043824
Creating item attributes.
Item attributes created. 59.715396999847144
Creating denormalized items.
Denormalized items created. 12.066422999836504
Now, let's start running some queries.
Performance? What performance?!
Let's say we want all items except those from McDermott-Casper, a supplier that has gone out of business. We also don't want any items that have the attributes enim or modi associated with them:
With ActiveRecord, the query is pretty straightforward to build:
excluded_suppliers = Supplier
  .select('id')
  .where(name: "McDermott-Casper")
  .to_sql

excluded_attributes = ItemAttribute
  .select(:item_id)
  .where(attribute_name: ['enim', 'modi'])
  .to_sql

Item
  .distinct
  .select('items.id, items.name, categories.name AS category_name, suppliers.name AS supplier_name, warehouses.name AS warehouse_name')
  .joins(:category, :supplier, :warehouse)
  .left_outer_joins(:item_attributes)
  .where("items.supplier_id NOT IN (#{excluded_suppliers})")
  .where("items.id NOT IN(#{excluded_attributes})")
  .to_a
The conditions that exclude items based on our scenario are used in the WHERE clause as embedded subqueries, and we join categories, suppliers, warehouses, and (via a left outer join) item attributes to make sure we only fetch items that match the criteria.
Now, let's test this out:
bundle exec rails c
Loading development environment (Rails 7.1.3.4)
irb(main):001* excluded_suppliers =
irb(main):002> Supplier
irb(main):003> .select('id')
irb(main):004> .where(name: "McDermott-Casper")
irb(main):005> .to_sql
=> "SELECT \"suppliers\".\"id\" FROM \"suppliers\" WHERE \"suppliers\".\"name\" = 'McDermott-Casper'"
irb(main):006* excluded_attributes =
irb(main):007> ItemAttribute
irb(main):008> .select(:item_id)
irb(main):009> .where(attribute_name: ['enim', 'modi'])
irb(main):010> .to_sql
=> "SELECT \"item_attributes\".\"item_id\" FROM \"item_attributes\" WHERE \"item_attributes\".\"attribute_name\" IN ('enim', 'modi')"
irb(main):011> Item
irb(main):012> .distinct
irb(main):013> .select('items.id, items.name, categories.name AS category_name, suppliers.name AS supplier_name, warehouses.name AS warehouse_name')
irb(main):014> .joins(:category, :supplier, :warehouse)
irb(main):015> .left_outer_joins(:item_attributes)
irb(main):016> .where("items.supplier_id NOT IN (#{excluded_suppliers})")
irb(main):017> .where("items.id NOT IN(#{excluded_attributes})")
irb(main):018> .to_a
  Item Load (535.5ms)  SELECT DISTINCT items.id, items.name, categories.name AS category_name, suppliers.name AS supplier_name, warehouses.name AS warehouse_name FROM "items" INNER JOIN "categories" ON "categories"."id" = "items"."category_id" INNER JOIN "suppliers" ON "suppliers"."id" = "items"."supplier_id" INNER JOIN "warehouses" ON "warehouses"."id" = "items"."warehouse_id" LEFT OUTER JOIN "item_attributes" ON "item_attributes"."item_id" = "items"."id" WHERE (items.supplier_id NOT IN (SELECT "suppliers"."id" FROM "suppliers" WHERE "suppliers"."name" = 'McDermott-Casper')) AND (items.id NOT IN(SELECT "item_attributes"."item_id" FROM "item_attributes" WHERE "item_attributes"."attribute_name" IN ('enim', 'modi')))
=>
Great! The fetch comes back in under a second.
Ok. Now let's see what happens when we increase the number of attributes in the system to, say, one million. To do that, we'll run the following code, extracted from the seed script:
items = Item.all

# 900_000 more on top of the 100_000 from the seed brings the total
# to 1_000_000, enough to see the perf issues come up.
item_attributes = 900_000.times.map do
  {
    attribute_name: Faker::Lorem.word,
    attribute_value: Faker::Lorem.word,
    item_id: items.sample.id
  }
end

item_attributes.each_slice(1000) do |batch|
  ItemAttribute.insert_all(batch)
end
Note that, up to this point, there were 1,187 item_attributes records matching enim or modi.
irb(main):001* excluded_suppliers =
irb(main):002> Supplier
irb(main):003> .select('id')
irb(main):004> .where(name: "McDermott-Casper")
irb(main):005> .to_sql
irb(main):006> => "SELECT \"suppliers\".\"id\" FROM \"suppliers\" WHERE \"suppliers\".\"name\" = 'McDermott-Casper'"
irb(main):007* excluded_attributes =
irb(main):008> ItemAttribute
irb(main):009> .select(:item_id)
irb(main):010> .where(attribute_name: ['enim', 'modi'])
irb(main):011> .to_sql
irb(main):012> => "SELECT \"item_attributes\".\"item_id\" FROM \"item_attributes\" WHERE \"item_attributes\".\"attribute_name\" IN ('enim', 'modi')"
irb(main):013> Item
irb(main):014> .distinct
irb(main):015> .select('items.id, items.name, categories.name AS category_name, suppliers.name AS supplier_name, warehouses.name AS warehouse_name')
irb(main):016> .joins(:category, :supplier, :warehouse)
irb(main):017> .left_outer_joins(:item_attributes)
irb(main):018> .where("items.supplier_id NOT IN (#{excluded_suppliers})")
irb(main):019> .where("items.id NOT IN(#{excluded_attributes})")
irb(main):020> .to_a
irb(main):021>   Item Load (3002.4ms)  SELECT DISTINCT items.id,
Whoa! Ok, we're now at 3 seconds.
As more items are added to the system over time, the problem only gets worse, and the item_attributes associated with them keep weighing on this particular query. With the additional 900,000 attributes, the number of records matching enim or modi grew as well: the count went from 1,187 to 12,154.
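If you want to double-check those counts yourself, they come straight from the attributes table; this is just a sanity-check query, not part of the report:

-- Count the item_attributes rows that match the excluded attribute names.
-- We saw 1,187 matches before the extra inserts and 12,154 after.
SELECT COUNT(*)
FROM item_attributes
WHERE attribute_name IN ('enim', 'modi');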
This kind of scale is completely normal and really shouldn't be unexpected, since the number of attributes on items can grow significantly over time in an inventory management system for all sorts of reasons. Ok, so more records were added - of course performance would be impacted. But what exactly is happening?
Is normalization really the issue here?
I’m going to remove the joins to categories and warehouses:
irb(main):029> Item
irb(main):030> .distinct
irb(main):031> .select('items.id, items.name, suppliers.name AS supplier_name')
irb(main):032> .joins(:supplier)
irb(main):033> .left_outer_joins(:item_attributes)
irb(main):034> .where("items.supplier_id NOT IN (#{excluded_suppliers})")
irb(main):035> .where("items.id NOT IN(#{excluded_attributes})")
irb(main):036> .to_a
irb(main):037>   Item Load (1938.4ms)  SELECT DISTINCT items.id, items.name, suppliers.name AS supplier_name FROM "items" INNER JOIN "suppliers" ON "suppliers"."id" = "items"."supplier_id" LEFT OUTER JOIN "item_attributes" ON "item_attributes"."item_id" = "items"."id" WHERE (items.supplier_id NOT IN (SELECT "suppliers"."id" FROM "suppliers" WHERE "suppliers"."name" = 'McDermott-Casper')) AND (items.id NOT IN(SELECT "item_attributes"."item_id" FROM "item_attributes" WHERE "item_attributes"."attribute_name" IN ('enim', 'modi')))
=>
Ok, so yeah, we get roughly a 35% improvement just by removing the joins (1938.4ms vs. 3002.4ms). Let's run an explain on these and try to understand what's going on.
Unique  (cost=80266.89..84016.89 rows=250000 width=99)
  ->  Sort  (cost=80266.89..80891.89 rows=250000 width=99)
        Sort Key: items.id, items.name, categories.name, suppliers.name, warehouses.name
        ->  Hash Join  (cost=20105.00..44177.93 rows=250000 width=99)
              Hash Cond: (items.warehouse_id = warehouses.id)
              ->  Hash Join  (cost=20066.50..43480.40 rows=250000 width=89)
                    Hash Cond: (items.supplier_id = suppliers.id)
                    ->  Hash Join  (cost=20030.63..42785.86 rows=250000 width=78)
                          Hash Cond: (items.category_id = categories.id)
                          ->  Hash Right Join  (cost=19998.80..42094.91 rows=250000 width=54)
                                Hash Cond: (item_attributes.item_id = items.id)
                                ->  Seq Scan on item_attributes  (cost=0.00..19471.00 rows=1000000 width=8)
                                ->  Hash  (cost=19686.30..19686.30 rows=25000 width=54)
                                      ->  Seq Scan on items  (cost=16933.30..19686.30 rows=25000 width=54)
                                            Filter: ((NOT (hashed SubPlan 1)) AND (NOT (hashed SubPlan 2)))
                                            SubPlan 1
                                              ->  Seq Scan on suppliers suppliers_1  (cost=0.00..24.38 rows=1 width=8)
                                                    Filter: ((name)::text = 'McDermott-Casper'::text)
                                            SubPlan 2
                                              ->  Gather  (cost=1000.00..16878.93 rows=11996 width=8)
                                                    Workers Planned: 2
                                                    ->  Parallel Seq Scan on item_attributes item_attributes_1  (cost=0.00..14679.33 rows=4998 width=8)
                                                          Filter: ((attribute_name)::text = ANY ('{enim,modi}'::text[]))
                          ->  Hash  (cost=19.70..19.70 rows=970 width=40)
                                ->  Seq Scan on categories  (cost=0.00..19.70 rows=970 width=40)
                    ->  Hash  (cost=21.50..21.50 rows=1150 width=27)
                          ->  Seq Scan on suppliers  (cost=0.00..21.50 rows=1150 width=27)
              ->  Hash  (cost=26.00..26.00 rows=1000 width=26)
                    ->  Seq Scan on warehouses  (cost=0.00..26.00 rows=1000 width=26)
The plan above is telling us the output of each join is funneled into the next one:
(items <> warehouses) -> (items <> suppliers) -> (items <> categories)
Because of the multiple joins, the performance impact compounds as more data is spread out across the database, which is exactly what normalization does.
Now, let’s look at the plan after we remove the joins:
Unique  (cost=73750.91..76250.91 rows=250000 width=49)
  ->  Sort  (cost=73750.91..74375.91 rows=250000 width=49)
        Sort Key: items.id, items.name, suppliers.name
        ->  Hash Join  (cost=20034.68..42789.45 rows=250000 width=49)
              Hash Cond: (items.supplier_id = suppliers.id)
              ->  Hash Right Join  (cost=19998.80..42094.91 rows=250000 width=38)
                    Hash Cond: (item_attributes.item_id = items.id)
                    ->  Seq Scan on item_attributes  (cost=0.00..19471.00 rows=1000000 width=8)
                    ->  Hash  (cost=19686.30..19686.30 rows=25000 width=38)
                          ->  Seq Scan on items  (cost=16933.30..19686.30 rows=25000 width=38)
                                Filter: ((NOT (hashed SubPlan 1)) AND (NOT (hashed SubPlan 2)))
                                SubPlan 1
                                  ->  Seq Scan on suppliers suppliers_1  (cost=0.00..24.38 rows=1 width=8)
                                        Filter: ((name)::text = 'McDermott-Casper'::text)
                                SubPlan 2
                                  ->  Gather  (cost=1000.00..16878.93 rows=11996 width=8)
                                        Workers Planned: 2
                                        ->  Parallel Seq Scan on item_attributes item_attributes_1  (cost=0.00..14679.33 rows=4998 width=8)
                                              Filter: ((attribute_name)::text = ANY ('{enim,modi}'::text[]))
              ->  Hash  (cost=21.50..21.50 rows=1150 width=27)
                    ->  Seq Scan on suppliers  (cost=0.00..21.50 rows=1150 width=27)
Ok, so we get a better query plan. Fewer joins and less data to scan, and therefore better performance. However, doing this won't meet the requirements. Remember, the report needs the names of the associated suppliers and warehouses. Let's see what happens when we denormalize the data and simplify the lookup process.
irb(main):074* excluded_suppliers =
irb(main):075> Supplier
irb(main):076> .select('id')
irb(main):077> .where(name: "McDermott-Casper")
irb(main):078> .to_sql
irb(main):079>
irb(main):080* excluded_attributes =
irb(main):081> ItemAttribute
irb(main):082> .select(:item_id)
irb(main):083> .where(attribute_name: ['enim', 'modi'])
irb(main):084> .to_sql
irb(main):085>
irb(main):086> ItemDenormalized
irb(main):087> .distinct
irb(main):088> .select('items_denormalized.id as id, items_denormalized.category_name as category_name, items_denormalized.supplier_name as supplier_name, items_denormalized.warehouse_name as warehouse_name')
irb(main):089> .joins(:supplier)
irb(main):090> .left_outer_joins(:item_attributes)
irb(main):091> .where("items_denormalized.supplier_id NOT IN (#{excluded_suppliers})")
irb(main):092> .where("items_denormalized.item_id NOT IN(#{excluded_attributes})")
irb(main):093> .to_a
irb(main):094>   ItemDenormalized Load (1107.3ms)  SELECT DISTINCT items_denormalized.id as id,
In this example, the lookup on the denormalized table performed comparably to the version where we removed the joins (1107.3ms vs. 1938.4ms), with the difference that we still get the category and warehouse names. Denormalization does introduce complexities that need to be handled, such as redundancy and data integrity: what happens when a category is updated, or when a warehouse is deleted?
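To make that concern concrete, here is a minimal, hypothetical sketch of one way to keep the denormalized table in sync when a supplier is renamed, using a database trigger. The trigger name is made up, and it assumes items_denormalized carries a supplier_id column, as the seed script implies; an application-level callback or a background job would be an equally valid choice.

-- Hypothetical trigger: propagate supplier renames into the denormalized table.
-- Assumes items_denormalized stores supplier_id alongside supplier_name.
DELIMITER //
CREATE TRIGGER trg_suppliers_sync_items_denormalized
AFTER UPDATE ON suppliers
FOR EACH ROW
BEGIN
  IF NEW.name <> OLD.name THEN
    UPDATE items_denormalized
    SET supplier_name = NEW.name,
        updated_at = CURRENT_TIMESTAMP
    WHERE supplier_id = NEW.id;
  END IF;
END//
DELIMITER ;

Deletes would need the same treatment, which is exactly the kind of extra moving part that denormalization asks you to own.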
Putting that aside, though, we can see that denormalization handles certain scenarios well when it comes to performance, and we should consider its benefits when building applications that will inevitably need to scale. In our example above, with just a million records we already start to run into performance bottlenecks.
Let's think through what bottlenecks start to come into play after running through the examples above.
Highly normalized schemas often require complex queries with multiple joins, which can be slow and resource-intensive.
SELECT DISTINCT items.id,
    items.name,
    categories.name AS category_name,
    suppliers.name AS supplier_name,
    warehouses.name AS warehouse_name
FROM "items"
    INNER JOIN "categories" ON "categories"."id" = "items"."category_id"
    INNER JOIN "suppliers" ON "suppliers"."id" = "items"."supplier_id"
    INNER JOIN "warehouses" ON "warehouses"."id" = "items"."warehouse_id"
    LEFT OUTER JOIN "item_attributes" ON "item_attributes"."item_id" = "items"."id"
WHERE (items.supplier_id NOT IN (
        SELECT "suppliers"."id"
        FROM "suppliers"
        WHERE "suppliers"."name" = 'McDermott-Casper'))
    AND (items.id NOT IN (
        SELECT "item_attributes"."item_id"
        FROM "item_attributes"
        WHERE "item_attributes"."attribute_name" IN ('enim', 'modi')));
I wouldn't consider the above too complex; however, conditions that execute subqueries can start to get complicated when you're joining on joins, which happens a lot in large-scale applications that have evolved over time. Again, normalization is great in an ideal world, but it's also important to understand what other complexities it introduces.
Each table lookup can lead to additional I/O operations, slowing down overall query performance. When we start talking through I/O operations in the database, it's important to understand, at a high level, why they're such an important part of the puzzle. So let's dive into some of the issues that come up at scale.
Read/Write: Each join that involves disk-based temporary tables or large data sets will increase the number of disk reads and writes. This can cause a significant I/O load, especially in busy applications (background jobs, high traffic, etc.).
Buffer Pool Pressure: Joins can put pressure on the MySQL buffer pool, especially with larger data sets. When the buffer pool is full, MySQL has to evict pages to make room for new data, causing additional disk I/O.
Temporary Tables: MySQL may create temporary tables to hold intermediate results during complex join operations. These temporary tables can be stored in memory or on disk, depending on their size. Disk-based temporary tables increase I/O operations, leading to slower performance.
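If you suspect joins are spilling intermediate results to disk, MySQL exposes counters you can check. A quick sketch; what counts as "too many" disk-based temporary tables depends entirely on your workload:

-- How many internal temporary tables have been created, and how many
-- of those ended up on disk rather than in memory.
SHOW GLOBAL STATUS LIKE 'Created_tmp%';

-- The in-memory limits; intermediate results that exceed both are
-- converted to on-disk temporary tables.
SHOW VARIABLES LIKE 'tmp_table_size';
SHOW VARIABLES LIKE 'max_heap_table_size';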
In a highly concurrent environment, frequent access and updates across multiple tables can lead to lock contention and further degrade performance.
Lock Types: MySQL uses different types of locks (e.g., shared, exclusive) depending on the operation. Complex queries with multiple joins can require various locks, leading to contention if different parts of the query need the same resources.
Row-Level vs. Table-Level Locks: InnoDB uses row-level locking, which is generally more efficient than table-level locking used by MyISAM. However, even row-level locks can cause contention if multiple transactions try to modify the same rows simultaneously.
Increased Lock Duration: Queries involving joins on joins often take longer to execute. The longer a transaction holds locks, the higher the chance of contention with other transactions.
Lock Escalation: Although InnoDB uses row-level locking, high contention can sometimes cause lock escalation, where the database engine escalates to table-level locks to manage the contention, leading to broader performance issues. This is typically due to missing or inadequate indexes.
Lock Waits: When a transaction needs a lock held by another transaction, it must wait, leading to increased query execution time and potential timeouts.
Deadlocks: Complex queries with multiple joins increase the risk of deadlocks, where two or more transactions are waiting for each other’s locks, causing the database to automatically roll back one of the transactions to resolve the deadlock, typically the "victim" is rolled back.
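When contention or deadlocks are suspected, MySQL gives you a few places to look. A hedged sketch, assuming InnoDB and a recent MySQL with the sys schema installed:

-- Latest detected deadlock and a summary of current lock activity.
SHOW ENGINE INNODB STATUS;

-- Which transactions are currently waiting on locks, and who holds them.
SELECT * FROM sys.innodb_lock_waits;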
To mitigate performance issues in highly normalized architectures, consider the following strategies:
The process for denormalizing data involves adding redundant data to tables to reduce the number of joins required. While this increases storage requirements and the risk of data anomalies, it can significantly improve read performance.
SELECT i.id,
    i.name,
    i.category_name,
    i.supplier_name,
    i.warehouse_name,
    i.attribute_value
FROM items_denormalized i
WHERE i.id = ?
In this example, the items_denormalized table combines data from the categories, suppliers, warehouses, and item_attributes tables, eliminating the need for multiple joins.
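For completeness, here is a rough sketch of how such a table could be (re)built from the normalized schema. It assumes the reference id columns the seed script populates exist on items_denormalized; a real system would refresh incrementally (triggers, callbacks, or a background job) rather than rebuilding wholesale:

-- Bulk-populate the denormalized table from the normalized tables.
-- One row per item/attribute pair; items without attributes still get a row.
INSERT INTO items_denormalized
  (item_id, name, category_id, category_name, supplier_id, supplier_name,
   warehouse_id, warehouse_name, attribute_name, attribute_value)
SELECT i.id, i.name,
       c.id, c.name,
       s.id, s.name,
       w.id, w.name,
       ia.attribute_name, ia.attribute_value
FROM items i
    JOIN categories c ON c.id = i.category_id
    JOIN suppliers s ON s.id = i.supplier_id
    JOIN warehouses w ON w.id = i.warehouse_id
    LEFT JOIN item_attributes ia ON ia.item_id = i.id;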
Proper indexing can dramatically improve query performance. Ensure that all columns used in joins and WHERE clauses are indexed. Remember, indexes are also important to avoid scanning, and therefore locking, entire tables. Keep in mind that even this won't help if your joins create internal temporary tables, which will NOT have those indexes.
CREATE INDEX idx_items_id ON items(id);
CREATE INDEX idx_categories_id ON categories(id);
CREATE INDEX idx_suppliers_id ON suppliers(id);
CREATE INDEX idx_warehouses_id ON warehouses(id);
CREATE INDEX idx_item_attributes_item_id ON item_attributes(item_id);
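Beyond single-column indexes, composite indexes (mentioned again in the conclusion) can help specific queries. These two are hypothetical suggestions for the report query in this article; verify them with EXPLAIN before committing to the write overhead they add:

-- Lets the attribute-name subquery be answered from the index alone.
CREATE INDEX idx_item_attributes_name_item_id
  ON item_attributes(attribute_name, item_id);

-- Covers the foreign keys the report query filters and joins on.
CREATE INDEX idx_items_supplier_category_warehouse
  ON items(supplier_id, category_id, warehouse_id);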
Implement caching mechanisms to store frequently accessed data in memory, reducing the need for repeated database queries. There are multiple strategies for implementing caching, which will be covered in a different post, but these strategies can range from utilizing summary tables, to integrating different technologies that can store results temporarily.
# Example using Ruby on Rails with Redis cache
item = Rails.cache.fetch("item_#{id}", expires_in: 12.hours) do
  Item.includes(:category, :supplier, :warehouse, :item_attributes).find(id)
end
Analyze and optimize your queries to ensure they are as efficient as possible. Use tools like MySQL’s EXPLAIN ANALYZE statement to understand the execution plan and identify bottlenecks.
EXPLAIN
SELECT i.id,
    i.name,
    c.name AS category,
    s.name AS supplier,
    w.name AS warehouse,
    ia.attribute_value
FROM items i
    JOIN categories c ON i.category_id = c.id
    JOIN suppliers s ON i.supplier_id = s.id
    JOIN warehouses w ON i.warehouse_id = w.id
    JOIN item_attributes ia ON i.id = ia.item_id
WHERE i.id = 1;
Normalization is a powerful technique for maintaining data integrity, but it can lead to performance challenges in large-scale applications. Knowing the tradeoffs can help you scale your application over the long term, treating denormalization as just another strategy in the toolbox. If denormalization isn't a good fit, consider reviewing your indexes (including composite indexes), result caching, and query optimization to improve performance. Thank you for reading, and please reach out if you have any questions!
Microsoft 365 - Description of the database normalization basics
Coding Horror - Maybe Normalizing Isn't Normal
informIT - When You Can't Change a SQL Database Design
PureStorage - Denormalized vs. Normalized Data.
MySQL - Buffer Pool
MySQL - InnoDB Disk I/O
MySQL - Internal Temporary Table Use in MySQL
MySQL - Locks Set by Different SQL Statements in InnoDB
Percona - Understanding Hash Joins in MySQL 8
Percona - Horizontal Scaling in MySQL – Sharding Followup
PlanetScale - How to Scale your Database and when to Shard MySQL
Awesome - Database Design
Awesome - MySQL