Geek Logbook

Tech sea log book

How Hadoop Made Specialized Storage Hardware Obsolete

In the early 2000s, enterprise data processing was dominated by high-end hardware. Organizations relied heavily on centralized storage systems such as SAN (Storage Area Networks) and NAS (Network Attached Storage), typically connected to symmetric multiprocessing (SMP) servers or high-performance computing (HPC) clusters. These environments were expensive to scale, difficult to manage, and designed to avoid…

EMR vs AWS Glue: Choosing the Right Data Processing Tool on AWS

When working with big data on AWS, two commonly used services for data processing are Amazon EMR and AWS Glue. Although both support scalable data transformation and analytics, they differ significantly in architecture, control, use cases, and cost models. Choosing the right tool depends on your specific workload, performance needs, and operational preferences. In this…

Why You Should Use the -out Option with terraform plan

When working with Terraform, a common workflow involves running terraform plan followed by terraform apply. However, you may have come across the following warning: “You didn’t use the -out option to save this plan, so Terraform can’t guarantee to take exactly these actions if you run ‘terraform apply’ now.” This message is more than a…
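The workflow the warning points to can be sketched as follows (the plan file name `tfplan` is arbitrary, chosen here for illustration):

```shell
# Save the plan to a file so apply executes exactly what was reviewed.
terraform plan -out=tfplan

# Apply the saved plan; Terraform does not re-plan, so the actions
# taken match what the plan showed.
terraform apply tfplan

# Optionally inspect the saved plan before applying,
# in human-readable or JSON form.
terraform show tfplan
terraform show -json tfplan
```

Without `-out`, `terraform apply` computes a fresh plan, which may differ from the one you reviewed if the infrastructure or configuration changed in between.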

How Google Changed Big Data: The Story of GFS, MapReduce, and Bigtable

In the early 2000s, Google faced a unique challenge: how to store, process, and query massive amounts of data across thousands of unreliable machines. The traditional systems of the time—designed for a world of smaller datasets and centralized infrastructure—simply couldn’t keep up. Google responded by designing an entirely new architecture. It wasn’t just about solving…

Secure Database Access in AWS Using SSH Tunneling

Accessing databases located in private subnets within AWS Virtual Private Clouds (VPCs) is a common requirement in enterprise architectures. To ensure secure connectivity without exposing the database to the public internet, developers and operations engineers often employ SSH tunneling via a bastion host. Background: Databases in a private subnet cannot be accessed directly from external…
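A minimal sketch of such a tunnel, assuming a bastion host reachable at `bastion.example.com` and a PostgreSQL database in a private subnet (all hostnames, ports, usernames, and key paths below are placeholders):

```shell
# Forward local port 5433 through the bastion to the private database.
# -N: do not run a remote command, just forward ports.
# -L: bind a local port to the remote database host:port.
ssh -N -L 5433:mydb.internal.example.com:5432 \
    ec2-user@bastion.example.com -i ~/.ssh/bastion-key.pem

# In another terminal, connect as if the database were local:
psql -h 127.0.0.1 -p 5433 -U dbuser mydatabase
```

The bastion sits in a public subnet with its security group restricted to SSH from trusted IPs, while the database's security group accepts connections only from the bastion.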

The Origin and Evolution of the DataFrame

When working with data today—whether in Python, R, or distributed computing platforms like Spark—one of the most commonly used structures is the DataFrame. But where did it come from? This post explores the origin, evolution, and growing importance of the DataFrame in data science and analytics. What is a DataFrame? A DataFrame is a two-dimensional…

Understanding ORM: Bridging the Gap Between Objects and Relational Databases

In modern software development, working with databases is a fundamental requirement. Most applications need to persist, retrieve, and manipulate data stored in relational databases such as PostgreSQL, MySQL, or SQLite. Traditionally, this interaction has been done through SQL queries. However, Object-Relational Mapping (ORM) has emerged as a powerful alternative that simplifies and streamlines this process.