Geek Logbook

Tech sea log book

Apache Cassandra vs Apache Parquet: Understanding the Differences

In modern data architectures, it’s common to encounter both Apache Cassandra and Apache Parquet, particularly when dealing with large-scale, distributed systems. Both technologies are associated with columnar data models, which often leads to confusion. However, Cassandra and Parquet serve fundamentally different purposes and operate at different layers of the data stack. This article clarifies their differences and when to reach for each.
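
To make the distinction concrete, here is a minimal sketch, assuming pyarrow and the DataStax cassandra-driver are installed and a local Cassandra node with a hypothetical demo_keyspace.scores table exists: Parquet is a columnar file format you write and scan, while Cassandra is a distributed database you query by key.

```python
import pyarrow as pa
import pyarrow.parquet as pq

# Parquet: a columnar *file format*, good for analytical column scans.
table = pa.table({"user_id": [1, 2], "score": [9.5, 7.2]})
pq.write_table(table, "scores.parquet")                     # write once
print(pq.read_table("scores.parquet", columns=["score"]))   # read only one column

# Cassandra: a distributed *database*, good for low-latency access by key.
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("demo_keyspace")   # hypothetical keyspace
session.execute(
    "INSERT INTO scores (user_id, score) VALUES (%s, %s)", (1, 9.5)
)
row = session.execute("SELECT score FROM scores WHERE user_id = 1").one()
print(row)
```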

Import Live Crypto Prices into Google Sheets

Are you tired of checking crypto prices manually? Want to automate your portfolio tracking or build a custom crypto dashboard? Good news — with just a few steps, you can pull live cryptocurrency prices directly into Google Sheets. In this guide, we’ll show you three simple methods to get real-time crypto data, whether you’re a beginner or a spreadsheet power user.
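
As a rough illustration of the kind of data source these methods rely on, the sketch below pulls spot prices from CoinGecko’s public simple-price endpoint with Python’s requests library; the coin ids and currency are assumptions chosen for the example.

```python
# Minimal sketch: fetch spot prices from a public API (the same sort of
# endpoint a Sheets formula or script would call behind the scenes).
import requests

url = "https://api.coingecko.com/api/v3/simple/price"
params = {"ids": "bitcoin,ethereum", "vs_currencies": "usd"}

prices = requests.get(url, params=params, timeout=10).json()
print(prices)  # e.g. {"bitcoin": {"usd": ...}, "ethereum": {"usd": ...}}
```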

How Dynamo Reshaped the Internal Architecture of Amazon S3

Amazon S3 launched in 2006 as a scalable, durable object storage system. It avoided hierarchical file systems and used flat key-based addressing from day one. However, early versions of S3 ran into architectural challenges—especially in metadata consistency and fault tolerance. Meanwhile, another internal team at Amazon was building Dynamo, a distributed key-value store optimized for high availability.
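
One of Dynamo’s core ideas is consistent hashing, which spreads keys across nodes so that membership changes move only a small slice of the data. Below is a toy Python ring, purely illustrative and not Amazon’s internal implementation.

```python
# Toy consistent-hash ring in the spirit of Dynamo (illustrative only).
import hashlib
from bisect import bisect

def ring_position(key: str) -> int:
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

nodes = ["node-a", "node-b", "node-c"]
ring = sorted((ring_position(n), n) for n in nodes)

def owner(key: str) -> str:
    # Walk clockwise to the first node at or after the key's position.
    positions = [p for p, _ in ring]
    idx = bisect(positions, ring_position(key)) % len(ring)
    return ring[idx][1]

print(owner("photos/cat.jpg"))
```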

What’s Behind Amazon S3?

When you upload a file to the cloud using an app or service, there’s a good chance it’s being stored on Amazon S3 (Simple Storage Service). But what powers it under the hood? What is Amazon S3? Amazon S3 is an object storage service that allows users to store and retrieve any amount of data, at any time, from anywhere on the web.
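
At heart, the object model is just keys and values. Here is a minimal sketch with boto3, assuming AWS credentials are configured; the bucket name is hypothetical.

```python
# Minimal sketch of the S3 object model: store and retrieve a value by key.
import boto3

s3 = boto3.client("s3")

# Write an object under a flat key (no real directories, just key names).
s3.put_object(Bucket="my-example-bucket", Key="notes/hello.txt", Body=b"hello s3")

# Read it back by the same key.
obj = s3.get_object(Bucket="my-example-bucket", Key="notes/hello.txt")
print(obj["Body"].read())  # b"hello s3"
```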

Summary: Teaching HDFS Concepts to New Learners

Introducing Hadoop Distributed File System (HDFS) to newcomers can be both exciting and challenging. To make the learning experience structured and impactful, it’s helpful to break down the core topics into digestible parts. This blog post summarizes a beginner-friendly teaching sequence based on real questions and progressive discovery, outlining the key topics to cover, practical teaching tips, and a concluding wrap-up.

How HDFS Achieves Fault Tolerance Through Replication

One of the core strengths of the Hadoop Distributed File System (HDFS) is its fault tolerance. In a world of distributed computing, failures are not rare—they’re expected. HDFS tackles this by using block-level replication to ensure that data is never lost, even when individual nodes fail. What Is Replication in HDFS? When a file is written to HDFS, it is split into fixed-size blocks, and each block is copied to multiple DataNodes (three by default).
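
As a rough mental model (not HDFS source code), the sketch below places each block of a file on several distinct DataNodes, so any single node failure still leaves every block with surviving copies.

```python
# Toy illustration of block-level replication: each block is copied to
# `replication` distinct DataNodes, so losing one node never loses a block.
import random

datanodes = ["dn1", "dn2", "dn3", "dn4", "dn5"]

def place_blocks(num_blocks: int, replication: int = 3):
    placement = {}
    for block_id in range(num_blocks):
        # Pick `replication` *distinct* nodes for each block.
        placement[block_id] = random.sample(datanodes, replication)
    return placement

for block, nodes in place_blocks(num_blocks=4).items():
    print(f"block {block} -> {nodes}")
```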

How Spark and MapReduce Handle Partial Records in HDFS

When working with large-scale data processing frameworks like Apache Spark or Hadoop MapReduce, one common question arises: What happens when a record (e.g., a line of text or a JSON object) is split across two HDFS blocks? Imagine a simple scenario where the word "father" lands across the boundary between two blocks. How do distributed frameworks reassemble the full record correctly?
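
The usual rule, sketched below on toy byte ranges: a split skips its first partial line (the previous split owns it) and reads past its own end to finish its last line. The offsets and data here are made up for illustration.

```python
# Sketch of the line-record rule used by split-based readers: each split
# skips its first (possibly partial) line unless it starts the file, and
# reads past its own end to finish the last line it started.
data = b"alpha\nbravo father\ncharlie\n"
splits = [(0, 10), (10, len(data))]  # pretend these are two HDFS blocks

def records_for_split(data: bytes, start: int, end: int):
    pos = start
    if start != 0:
        # Skip the partial line; the previous split reads it in full.
        pos = data.index(b"\n", start) + 1
    while pos < end:  # a line *starting* before `end` belongs to this split
        nl = data.index(b"\n", pos)  # may read past `end`, into the next block
        yield data[pos:nl]
        pos = nl + 1

for i, (s, e) in enumerate(splits):
    print(i, list(records_for_split(data, s, e)))
# split 0 emits b"bravo father" whole, even though it crosses the boundary
```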

How Clients Know Where to Read or Write in HDFS

Hadoop Distributed File System (HDFS) is designed to decouple metadata management from actual data storage. But how does a client—like a Spark job or command-line tool—know where to read or write the bytes of a file across a distributed system? Let’s break down what happens when a client interacts with HDFS, starting with the role of the NameNode.
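
A toy model of that exchange is sketched below; the names and structures are illustrative, not the real HDFS RPC interface. The client asks the NameNode for block locations, then streams bytes straight from the DataNodes.

```python
# Toy model of the HDFS read path: the NameNode serves metadata only;
# the client then reads data directly from the DataNodes it names.
block_map = {  # NameNode's in-memory metadata (file -> ordered blocks)
    "/logs/app.log": [
        {"block": "blk_001", "locations": ["dn1", "dn3", "dn4"]},
        {"block": "blk_002", "locations": ["dn2", "dn3", "dn5"]},
    ]
}

def get_block_locations(path: str):
    """What a client receives from the NameNode before reading."""
    return block_map[path]

for block in get_block_locations("/logs/app.log"):
    # The client picks a replica and streams bytes from it directly;
    # the NameNode never touches the data itself.
    print(f"read {block['block']} from one of {block['locations']}")
```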

How HDFS Avoids Understanding File Content

One of the defining features of Hadoop Distributed File System (HDFS) is that it doesn’t understand the contents of the files it stores. This is not a limitation—it’s an intentional design choice that makes HDFS flexible, scalable, and efficient for big data workloads. HDFS is content-agnostic: it handles files as byte streams. It doesn’t care whether a file holds CSV, JSON, images, or logs—every file is just a sequence of bytes.
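
Here is a small sketch of what content-agnostic splitting means in practice, with a deliberately tiny block size (HDFS defaults to 128 MB):

```python
# Sketch of content-agnostic splitting: blocks are fixed-size byte ranges,
# cut without ever inspecting the file's format.
def split_into_blocks(data: bytes, block_size: int = 8):
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

payload = b'{"user": "ada", "msg": "hello"}'  # could equally be CSV or a JPEG
for i, block in enumerate(split_into_blocks(payload)):
    print(i, block)  # cuts can land mid-record; readers handle that, not HDFS
```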