
Self-healing for cloud-native high-performance query processing Pt. 3: System model

As we analyzed in the previous parts (part 1 and part 2), checkpointing can be promising from the perspective of a single query, especially for queries with a low output/input factor. However, the number of parameters makes a detailed analysis difficult. Additionally, we concentrated on cases where a failure occurred, whereas in a real system failures are rare events. For this reason, in this part we will evaluate how both methods perform in systems with different query types, sizes, and failure probabilities.
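To build intuition for why the failure probability matters at the system level, here is a minimal Python sketch of such a comparison. The formulas, parameter names, and values below (base_time, checkpoint_overhead, restore_time, and the assumption of at most one failure striking halfway through) are illustrative assumptions for this example only, not the models derived in the series.

```python
# Illustrative sketch: expected query time when a failure may strike mid-query,
# comparing plain recomputation with checkpointing. All formulas and numbers
# are simplifying assumptions for intuition, not the post's actual model.

def expected_time_recompute(base_time: float, p_fail: float) -> float:
    """Without checkpoints, a failure forces rerunning the whole query.
    Assume at most one failure, striking on average at the halfway point."""
    return base_time + p_fail * 0.5 * base_time

def expected_time_checkpoint(base_time: float, p_fail: float,
                             checkpoint_overhead: float, restore_time: float) -> float:
    """With checkpointing, every run pays a constant overhead, but a failure
    only costs restoring from the last checkpoint."""
    return base_time + checkpoint_overhead + p_fail * restore_time

if __name__ == "__main__":
    for p_fail in (0.001, 0.01, 0.1):
        r = expected_time_recompute(base_time=600.0, p_fail=p_fail)
        c = expected_time_checkpoint(base_time=600.0, p_fail=p_fail,
                                     checkpoint_overhead=30.0, restore_time=60.0)
        print(f"p_fail={p_fail}: recompute={r:.1f}s checkpoint={c:.1f}s")
```

Under these assumptions, checkpointing only pays off once failures become frequent enough for the saved recomputation to outweigh the constant checkpointing overhead, which is exactly the trade-off the system model explores.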

Self-healing for cloud-native high-performance query processing Pt. 2: Cost model

In the previous part, we focused on creating time models for two self-healing methods: recomputation and checkpointing. We stated that the recomputation model can be described using the following parameters: input size, processing speed, and failure point. The checkpointing model required the following additional parameters: output/input factor, network connection, checkpoint file size, and CPU overhead for data transfer. However, we can often achieve processing times close to zero in the cloud by scaling.
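As a quick illustration of how these parameters interact, here is a minimal Python sketch of the two models written as plain functions. The linear functional forms and the concrete parameter names are assumptions made for this example; the actual models from part 1 and part 2 are more detailed.

```python
# A minimal sketch of the two time models as functions of the parameters above.
# The linear relationships are illustrative assumptions, not the series' formulas.

def recomputation_time(input_size_gb: float, processing_speed_gbps: float,
                       failure_point: float) -> float:
    """If a failure strikes after a fraction `failure_point` of the work,
    that work is simply redone from scratch."""
    base = input_size_gb / processing_speed_gbps
    return (1.0 + failure_point) * base

def checkpointing_time(input_size_gb: float, processing_speed_gbps: float,
                       output_input_factor: float, network_gbps: float,
                       cpu_overhead_factor: float) -> float:
    """Normal processing plus writing the intermediate result (its size given by
    the output/input factor) over the network with some CPU overhead for the
    transfer, and reading it back once after the failure."""
    base = input_size_gb / processing_speed_gbps
    checkpoint_gb = input_size_gb * output_input_factor
    transfer = checkpoint_gb / network_gbps
    return base + (1.0 + cpu_overhead_factor) * transfer + transfer

print(recomputation_time(100, 1.0, 0.5))            # failure halfway through
print(checkpointing_time(100, 1.0, 0.1, 2.0, 0.2))  # low output/input factor
```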

Self-healing for cloud-native high-performance query processing Pt. 1: Time model

The cloud is one of the most impactful innovations of recent years. More and more companies are moving to the cloud instead of using on-premise servers, and as a consequence the amount of data processed in the cloud keeps growing. For these reasons, new services for query execution are being developed, e.g., Snowflake, Amazon Redshift, and Google BigQuery. However, the cloud is built on unreliable hardware and failures are common. Consequently, high-performance query processing has to detect such failures and execute self-healing operations that hide them from customers in order to provide a high-quality service.

How to have your cake and eat it too with modern buffer management Pt. 2: VMCache

In the first and second part of our series about modern database storage we have already learned about the history of database storage technology and about pointer swizzling as a first modern buffer management technique. I have also described some of the drawbacks of pointer swizzling, mainly that it leaks into your entire system because you cannot have a clean, loosely coupled buffer manager API that simply dereferences page ids to pointers.
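To make that "clean buffer manager API" concrete, here is a minimal Python sketch of the traditional interface that swizzling gives up. The class and method names are made up for illustration and are not taken from any of the systems discussed.

```python
# Hypothetical sketch: with a classic buffer manager, callers only ever hold
# page ids and ask the buffer manager to translate them. Swizzling instead
# stores raw pointers inside pages, so every structure holding references
# has to know about the translation.

class Page:
    def __init__(self, page_id: int):
        self.page_id = page_id
        self.data = bytearray(4096)

class BufferManager:
    """Loosely coupled interface: the rest of the system only sees page ids."""
    def __init__(self):
        self._cache: dict[int, Page] = {}

    def fix(self, page_id: int) -> Page:
        # On a miss, a real implementation would read the page from SSD and
        # possibly evict another page; here we just fabricate it.
        if page_id not in self._cache:
            self._cache[page_id] = Page(page_id)
        return self._cache[page_id]

    def unfix(self, page: Page) -> None:
        pass  # a real buffer manager would track pin counts here

# Callers stay oblivious to caching and eviction:
bm = BufferManager()
root = bm.fix(page_id=1)
child = bm.fix(page_id=42)   # every traversal goes through the buffer manager
bm.unfix(child)
bm.unfix(root)
```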

How to have your cake and eat it too with modern buffer management Pt. 1: Pointer Swizzling

In the first part of this series about modern database storage we have already seen how SSDs became the go-to base storage for modern database systems over the late 2010s, and especially today. As long as most of the data that most of your queries touch fits into main memory, SSDs can easily handle the rest at a very small performance cost. There are multiple approaches to achieving in-memory speeds in the in-memory case while still utilizing fast NVMe SSDs well in the out-of-memory case.

Why databases found their old love of disk again

“640K ought to be enough for anybody.” - Bill Gates claims to have never actually said that. 640KB is definitely not enough to hold most data sets in memory. That’s why old database systems were built on the assumption that basically every operation has to access the disk. Since HDDs were essentially the only available storage medium for databases at the time (other than tape), the cost of I/O accesses dominated database performance.

The Case for a Unified, JIT Compiling Approach to Data Processing

Over the past 12 years, Just-In-Time compilation for SQL query plans (pioneered by Thomas Neumann at TUM) has gained popularity in the development of high-performance analytical database management systems. The main idea sounds simple: the system generates specialized code for an individual query and thereby avoids the interpretation overhead of traditional query engines. LingoDB, a new research project from Michael Jungmair at TUM, aims to drastically improve the flexibility and extensibility of this approach.
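To illustrate the contrast between interpretation and query-specialized code, here is a toy Python sketch. It does not reflect how HyPer, Umbra, or LingoDB actually generate code (they emit low-level code, e.g. via LLVM or MLIR); Python's compile/exec merely stands in for that idea, and the example data and predicate are invented.

```python
# Toy illustration only: a generic expression-tree interpreter versus a
# specialized function generated for one particular query predicate.

rows = [{"a": i, "b": i % 7} for i in range(100_000)]

# Interpreted: every row pays for walking a generic expression tree.
predicate = ("and", (">", "a", 10), ("==", "b", 3))

def eval_expr(expr, row):
    op = expr[0]
    if op == "and":
        return eval_expr(expr[1], row) and eval_expr(expr[2], row)
    if op == ">":
        return row[expr[1]] > expr[2]
    if op == "==":
        return row[expr[1]] == expr[2]
    raise ValueError(op)

interpreted = [r for r in rows if eval_expr(predicate, r)]

# "Compiled": generate straight-line code for this exact predicate once,
# then run it for every row without any per-row dispatch.
src = "def q(row):\n    return row['a'] > 10 and row['b'] == 3\n"
namespace = {}
exec(compile(src, "<generated>", "exec"), namespace)
compiled = [r for r in rows if namespace["q"](r)]

assert interpreted == compiled
```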

Data Processing on AWS S3 Express One Zone

At this year’s re:Invent conference, AWS announced a new storage class, S3 Express One Zone (S3 EOZ), that offers single-digit millisecond latency, an up to 100x improvement over S3 Standard, which leaves us wondering whether it could disrupt cloud database design as we know it today. Snowflake pioneered the decoupling of compute and storage for analytical query processing and set the industry standard of using blob storage like S3 as the storage layer.