
Commit 6fadc39

Change some of the tpmC in the images to tpmTOTAL
1 parent 335f659 commit 6fadc39

11 files changed: 39 additions, 39 deletions

Chapter2.md

Lines changed: 1 addition & 1 deletion
@@ -100,7 +100,7 @@ From the above, it can be seen that there are 1000 warehouses, with a concurrenc
 
 The following figure illustrates the throughput over time during long-term testing. The TPC-C throughput shows a decline rate that significantly surpasses expectations, nearing a 50% decrease.
 
-<img src="media/image-20240829081832578.png" alt="image-20240829081832578" style="zoom:150%;" />
+<img src="media/image-degrade.png" alt="image-degrade" style="zoom:150%;" />
 
 Figure 2-7. Performance degradation exposed during BenchmarkSQL testing of MySQL 8.0.27.

Chapter6.md

Lines changed: 37 additions & 37 deletions
@@ -26,28 +26,28 @@ Optimizing a DBMS for a specific workload is complex and often requires expertis
 
 For performance testing in this book, the following strategies are applied:
 
-1. **Conduct Pre- and Post-Optimization Comparisons:** Where feasible, perform performance comparisons before and after optimization, aiming for minimal variation.
-2. **Match Configuration Parameters:** Align configuration parameters as closely as possible with the production environment.
-3. **Conduct Performance Comparisons on Identical Hardware**: Perform tests on the same x86 machine in a NUMA environment with identical configurations. Reinitialize MySQL data directories and clean SSDs (via TRIM) to avoid interference. Test across a range of concurrency levels to assess throughput and determine whether optimizations improve performance or scalability.
-4. **Repeat Testing with NUMA Node Binding**: After binding to NUMA node 0, repeat the tests to compare performance in an SMP environment.
-5. **Test on x86 Machines with NUMA Disabled**: Conduct comparative performance testing on x86 machines with identical hardware but with NUMA disabled (in the BIOS).
-6. **Evaluate Performance on ARM Machines**: Test comparative performance on ARM machines in a NUMA environment with similar MySQL configurations.
-7. **Verify Consistency with Different Tools**: Use various testing tools to compare results and ensure consistency. For example, employ BenchmarkSQL and modified versions of tpcc-mysql for TPC-C testing.
-8. **Assess Performance Under Varying Network Latency**: Examine performance effects under different network latency conditions.
-9. **Test Performance with Different "Thinking Time" Scenarios**: Evaluate how performance varies with different "thinking time" scenarios to gauge consistency.
-10. **Perform Closed-Loop Testing**: Ensure no interference during testing by repeating the initial tests and comparing results with the first round. Small differences in test results indicate that the environment is relatively stable.
-11. **Verify Bottleneck Interference**: Confirm whether interference from other bottlenecks under high concurrency has distorted the performance comparison.
-12. **Analyze Theoretical Basis and Anomalies**: Evaluate whether the performance optimization has a theoretical basis and if any anomalies can be explained. Analyze the type of optimization, its general applicability, and which environments benefit most. Investigate anomalies to determine their causes.
+1. **Conduct Pre- and Post-Optimization Comparisons:** Where feasible, perform performance comparisons before and after optimization, aiming for minimal variation.
+2. **Match Configuration Parameters:** Align configuration parameters as closely as possible with the production environment.
+3. **Conduct Performance Comparisons on Identical Hardware**: Perform tests on the same x86 machine in a NUMA environment with identical configurations. Reinitialize MySQL data directories and clean SSDs (via TRIM) to avoid interference. Test across a range of concurrency levels to assess throughput and determine whether optimizations improve performance or scalability.
+4. **Repeat Testing with NUMA Node Binding**: After binding to NUMA node 0, repeat the tests to compare performance in an SMP environment.
+5. **Test on x86 Machines with NUMA Disabled**: Conduct comparative performance testing on x86 machines with identical hardware but with NUMA disabled (in the BIOS).
+6. **Evaluate Performance on ARM Machines**: Test comparative performance on ARM machines in a NUMA environment with similar MySQL configurations.
+7. **Verify Consistency with Different Tools**: Use various testing tools to compare results and ensure consistency. For example, employ BenchmarkSQL and modified versions of tpcc-mysql for TPC-C testing.
+8. **Assess Performance Under Varying Network Latency**: Examine performance effects under different network latency conditions.
+9. **Test Performance with Different "Thinking Time" Scenarios**: Evaluate how performance varies with different "thinking time" scenarios to gauge consistency.
+10. **Perform Closed-Loop Testing**: Ensure no interference during testing by repeating the initial tests and comparing results with the first round. Small differences in test results indicate that the environment is relatively stable.
+11. **Verify Bottleneck Interference**: Confirm whether interference from other bottlenecks under high concurrency has distorted the performance comparison.
+12. **Analyze Theoretical Basis and Anomalies**: Evaluate whether the performance optimization has a theoretical basis and if any anomalies can be explained. Analyze the type of optimization, its general applicability, and which environments benefit most. Investigate anomalies to determine their causes.
 
 ### 6.1.3 Overly-specific Tuning
 
 These problems can be mitigated by conducting a range of experiments beyond standardized benchmarks. While standardized benchmarks provide a useful baseline, some systems may be heavily optimized for them, reducing their effectiveness for comparison. Thus, additional queries should be tested and measured [9].
 
 To address these problems, MySQL configuration should meet the following criteria:
 
-1. **Minimize Impact of Configuration Parameters**: Ensure parameters, like buffer pool size, do not hinder other optimizations.
-2. **Use Default Configurations**: Apply default settings for uncertain parameters, such as spin delay.
-3. **Match Production Configurations**: Align test settings with production configurations, e.g., sync_binlog=1 and innodb_flush_log_at_trx_commit=1.
+1. **Minimize Impact of Configuration Parameters**: Ensure parameters, like buffer pool size, do not hinder other optimizations.
+2. **Use Default Configurations**: Apply default settings for uncertain parameters, such as spin delay.
+3. **Match Production Configurations**: Align test settings with production configurations, e.g., sync_binlog=1 and innodb_flush_log_at_trx_commit=1.
 
 To overcome the limitations of single-type testing, employ a variety of test scenarios. For TPC-C, include tests with varying conflict severity, thinking time, and network latency. For SysBench, use tests with Pareto distributions, read/write, and write-only operations.
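The configuration criteria in the hunk above name two production durability settings, sync_binlog=1 and innodb_flush_log_at_trx_commit=1. As a minimal sketch (not part of the commit), these can be rendered into a my.cnf fragment; only the two settings quoted in the text are included, since any further values would be assumptions:

```python
# Production-matching durability settings quoted in the text above.
settings = {
    "sync_binlog": 1,
    "innodb_flush_log_at_trx_commit": 1,
}

def render_mysqld_section(opts: dict) -> str:
    """Render options as a [mysqld] section of a my.cnf fragment."""
    lines = ["[mysqld]"] + [f"{key}={value}" for key, value in opts.items()]
    return "\n".join(lines)

print(render_mysqld_section(settings))
```

Keeping every other parameter at its default, as criterion 2 above suggests, means the fragment stays this small by design.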

@@ -99,33 +99,33 @@ The TPC-C benchmark is the gold standard for database concurrency control in bot
 
 Experiment settings can significantly impact evaluation results. In TPC-C:
 
-- Introducing wait time makes experiments I/O intensive.
-- Removing wait time makes experiments CPU/memory intensive.
-- Reducing the number of warehouses makes experiments contention intensive.
+- Introducing wait time makes experiments I/O intensive.
+- Removing wait time makes experiments CPU/memory intensive.
+- Reducing the number of warehouses makes experiments contention intensive.
 
 TPC-C can stress test almost every key component of a computer system, but this versatility poses challenges for fair comparisons between different systems [8].
 
 At a high level, the following factors reduce contention:
 
-- More warehouses
-- Fewer cross-warehouse transactions
-- Fewer workers/users per warehouse
-- Adding wait time
-- Short or no I/Os within a critical section
+- More warehouses
+- Fewer cross-warehouse transactions
+- Fewer workers/users per warehouse
+- Adding wait time
+- Short or no I/Os within a critical section
 
 In low-contention settings, throughput is limited by the system's slowest component:
 
-- Disk I/O if data exceeds DRAM size
-- Network I/O if using traditional TCP stack and data fits in DRAM
-- Centralized sequencers or global dependency graphs may also cause scalability bottlenecks
+- Disk I/O if data exceeds DRAM size
+- Network I/O if using traditional TCP stack and data fits in DRAM
+- Centralized sequencers or global dependency graphs may also cause scalability bottlenecks
 
 Conversely, the following factors increase contention:
 
-- Fewer warehouses
-- More cross-warehouse transactions
-- More workers/users per warehouse
-- No wait time
-- Long I/Os within a critical section
+- Fewer warehouses
+- More cross-warehouse transactions
+- More workers/users per warehouse
+- No wait time
+- Long I/Os within a critical section
 
 In high-contention settings, throughput is determined by the concurrency control mechanism. Systems that can release locks earlier or reduce aborts will have advantages [8].
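The warehouse/worker contention factors listed in the hunk above can be illustrated with a toy birthday-bound model (not from the commit or the book): the probability that at least two of a set of concurrent workers share a home warehouse, assuming each worker picks one uniformly at random:

```python
from math import prod

def collision_probability(workers: int, warehouses: int) -> float:
    """Chance that at least two workers pick the same home warehouse,
    assuming uniform random assignment (classic birthday bound)."""
    if workers > warehouses:
        return 1.0  # pigeonhole: a shared warehouse is guaranteed
    return 1.0 - prod((warehouses - i) / warehouses for i in range(workers))

# Fewer warehouses (or more workers per warehouse) raise the collision chance:
low = collision_probability(50, 1000)   # many warehouses, low contention
high = collision_probability(50, 100)   # few warehouses, high contention
```

With 50 workers, shrinking the warehouse count from 1000 to 100 raises the collision probability sharply, which matches the text's claim that reducing warehouses makes experiments contention intensive.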

@@ -167,22 +167,22 @@ According to the TPC-C benchmark, the database must operate in a steady state fo
 
 To meet these stability requirements in MySQL testing, the following measures were implemented:
 
-1. Regularly cleaning the binlog to prevent SSD performance degradation due to I/O space constraints.
-2. Utilizing a larger number of warehouses.
-3. Adding indexes.
-4. Deploying multiple SSDs.
+1. Regularly cleaning the binlog to prevent SSD performance degradation due to I/O space constraints.
+2. Utilizing a larger number of warehouses.
+3. Adding indexes.
+4. Deploying multiple SSDs.
 
 Following these measures, TPC-C testing was performed using BenchmarkSQL. The figure below illustrates the stability test comparison between MySQL 8.0.27 and the improved MySQL 8.0.27.
 
-<img src="media/image-20240829093722953.png" alt="image-20240829093722953" style="zoom:150%;" />
+<img src="media/image-degrade2.png" alt="image-degrade2" style="zoom:150%;" />
 
 Figure 6-4. Comparison of stability tests: MySQL 8.0.27 vs. improved MySQL 8.0.27.
 
 From the figure, it is evident that although MySQL and the improved MySQL start with similar throughput, the throughput of MySQL decreases more rapidly over time than expected, while the improved MySQL remains significantly more stable.
 
 Additionally, comparisons were made for the improved MySQL at different concurrency levels. The figure below shows the throughput over time: the deep blue curve represents 100 concurrency, while the deep red curve represents 200 concurrency.
 
-<img src="media/image-20240829093752930.png" alt="image-20240829093752930" style="zoom:150%;" />
+<img src="media/image-degrade3.png" alt="image-degrade3" style="zoom:150%;" />
 
 Figure 6-5. Stability test comparison: 100 vs. 200 concurrency.
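Measure 1 in the hunk above, regularly cleaning the binlog, maps onto MySQL's standard PURGE BINARY LOGS statement. A small sketch that builds the cleanup statement for a retention window; the helper name and the 24-hour window in the usage line are illustrative, not from the book:

```python
from datetime import datetime, timedelta

def purge_binlog_sql(retain_hours: int, now: datetime) -> str:
    """Build a binlog cleanup statement. PURGE BINARY LOGS BEFORE
    '<datetime>' is standard MySQL syntax; the retention window is
    a hypothetical choice, not a value from the text."""
    cutoff = now - timedelta(hours=retain_hours)
    return f"PURGE BINARY LOGS BEFORE '{cutoff:%Y-%m-%d %H:%M:%S}'"

# Example: keep the last 24 hours of binlogs.
stmt = purge_binlog_sql(24, datetime(2024, 8, 30, 12, 0, 0))
```

Note that sync_binlog=1 (used elsewhere in this commit's test setup) makes the binlog grow durably with every commit, which is why periodic purging matters for long SSD-bound runs.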

Chapter8.md

Lines changed: 1 addition & 1 deletion
@@ -137,7 +137,7 @@ From the figure, it is evident that this patch has significantly improved MySQL'
 
 Finally, let's examine the results of the long-term stability testing for TPC-C. The following figure shows the results of an 8-hour test under 100 concurrency, with throughput captured at various hours (where 1 ≤ n ≤ 8).
 
-<img src="media/image-20240829102722393.png" alt="image-20240829102722393" style="zoom:150%;" />
+<img src="media/image-degrade4.png" alt="image-degrade4" style="zoom:150%;" />
 
 Figure 8-13. Comparison of stability tests: MySQL 8.0.27 vs. improved MySQL 8.0.27.
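The stability comparisons in these diffs come down to how far throughput falls between the first and last sample of a long run (Chapter2's text cites a decline "nearing a 50% decrease"). A small helper for that calculation; the sample numbers are hypothetical, not measurements from the book:

```python
def degradation(samples: list[float]) -> float:
    """Fractional throughput drop from the first to the last sample,
    e.g. hourly tpmTOTAL readings from an 8-hour BenchmarkSQL run."""
    if not samples or samples[0] == 0:
        raise ValueError("need samples with a nonzero first reading")
    return (samples[0] - samples[-1]) / samples[0]

# Hypothetical hourly tpmTOTAL readings showing a near-50% decline:
drop = degradation([100_000.0, 82_000.0, 65_000.0, 52_000.0])
```

The commit message's switch from tpmC to tpmTOTAL in the figures only changes which BenchmarkSQL metric is plotted; the degradation ratio is computed the same way for either.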

media/image-20240829081832578.png: -18 KB (binary)

media/image-20240829093722953.png: -22.5 KB (binary)

media/image-20240829093752930.png: -23.9 KB (binary)

media/image-20240829102722393.png: -22.5 KB (binary)

media/image-degrade.png: 18.9 KB (binary)

media/image-degrade2.png: 25.9 KB (binary)

media/image-degrade3.png: 27.9 KB (binary)
