Ajitabh Pandey's Soul & Syntax

Exploring systems, souls, and stories – one post at a time

Author: Ajitabh

  • Solving Ansible’s Flat Namespace Problem Efficiently

    In Ansible, the “Flat Namespace” problem is a frequent stumbling block for engineers managing multi-tier environments. It occurs because Ansible merges variables from various sources (global, group, and host) into a single pool for the current execution context.

    If you aren’t careful, trying to use a variable meant for “Group A” while executing tasks on “Group B” will cause the play to crash because that variable simply doesn’t exist in Group B’s scope.

    The Scenario: The “Mixed Fleet” Crash

    Imagine you are managing a fleet of Web Servers (running on port 8080) and Database Servers (running on port 5432). You want a single “Security” play to validate that the application port is open in the firewall.
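
    For concreteness, here is a minimal sketch of the inventory this scenario assumes (group names, hosts, and ports are hypothetical, chosen to match the code below):

    # inventory/hosts (INI format)
    [web_servers]
    web01

    [db_servers]
    db01

    # inventory/group_vars/web_servers.yml
    web_custom_port: 8080

    # inventory/group_vars/db_servers.yml
    db_instance_port: 5432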

    The Failing Code:

    - name: Apply Security Rules
      hosts: web_servers:db_servers
      vars:
        # This is the "Flat Namespace" trap!
        # Ansible tries to resolve BOTH variables for every host.
        app_port_map:
          web_servers: "{{ web_custom_port }}"
          db_servers: "{{ db_instance_port }}"

      tasks:
        - name: Validate port is defined
          ansible.builtin.assert:
            that: app_port_map[group_names[0]] is defined

    This code fails because when Ansible runs it against a web server, it looks at app_port_map. To build that dictionary, it must resolve db_instance_port. But since the host is a web server, the database group variables aren’t loaded. The result: fatal: 'db_instance_port' is undefined.

    Solution 1: The “Lazy” Logic

    By using Jinja2 whitespace control and conditional logic, we prevent Ansible from ever looking at the missing variable. It only evaluates the branch that matches the host’s group.

    - name: Apply Security Rules
      hosts: app_servers:storage_servers
      vars:
        # Use whitespace-controlled Jinja to isolate variable calls
        target_port: >-
          {%- if 'app_servers' in group_names -%}
          {{ app_service_port }}
          {%- elif 'storage_servers' in group_names -%}
          {{ storage_backend_port }}
          {%- else -%}
          22
          {%- endif -%}

      tasks:
        - name: Ensure port is allowed in firewall
          community.general.ufw:
            rule: allow
            port: "{{ target_port | int }}"

    The advantage of this approach is that it’s very explicit, prevents “Undefined Variable” errors entirely, and allows for easy defaults. However, it can become verbose/messy if you have a large number of different groups.

    Solution 2: The Dynamic Variable Lookup

    If you don’t want a giant if/else block, you can build the variable name dynamically and look it up in the vars dictionary (hostvars works similarly for other hosts), but you must provide a default to keep the namespace “safe.”

    - name: Validate ports
      hosts: all
      tasks:
        - name: Check port connectivity
          ansible.builtin.wait_for:
            port: "{{ vars[group_names[0] + '_port'] | default(22) }}"
            timeout: 5

    This approach is very compact and follows a naming convention (e.g., groupname_port). But it’s harder to debug and relies on strict variable naming across your entire inventory.
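
    For that convention to work, every group’s variables must follow the pattern exactly. A minimal sketch of what those group_vars files could look like (file names and port values are illustrative, matching the wait_for example above):

    # inventory/group_vars/web_servers.yml
    web_servers_port: 8080

    # inventory/group_vars/db_servers.yml
    db_servers_port: 5432

    With these in place, vars[group_names[0] + '_port'] resolves to web_servers_port on a web host and db_servers_port on a database host, and the default(22) only fires for hosts in neither group.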

    Solution 3: Group Variable Normalization

    The most “architecturally sound” way to solve the flat namespace problem is to use the same variable name across different group_vars files.

    # inventory/group_vars/web_servers.yml
    service_port: 80

    # inventory/group_vars/db_servers.yml
    service_port: 5432

    # Playbook - main.yml
    ---
    - name: Unified Firewall Play
      hosts: all
      tasks:
        - name: Open service port
          community.general.ufw:
            port: "{{ service_port }}"  # No logic needed!
            rule: allow

    This is the cleanest playbook code and the most “Ansible-native” way of handling polymorphism, but it requires refactoring your existing variable names and can be confusing if you need to see both ports at once (e.g., in a load-balancer config).
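
    If you do hit that load-balancer case and need another group’s value from inside a play, one escape hatch is reading it off a host in that group via hostvars. A hedged sketch, assuming the db_servers group from the files above is non-empty:

    - name: Render LB config needing both ports
      hosts: web_servers
      tasks:
        - name: Show the database port from the other group
          ansible.builtin.debug:
            msg: "DB port is {{ hostvars[groups['db_servers'][0]]['service_port'] }}"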

    The “Flat Namespace” problem is really just a symptom of Ansible’s strength: it’s trying to make sure everything you’ve defined is valid. I recently solved this problem in a multi-play playbook, which I wrote for Digital Ocean infrastructure provisioning and configuration using the Lazy Logic approach, and I found this to be the best way to bridge the gap between “Group A” and “Group B” without forcing a massive inventory refactor. While I have generalized the example code, I actually faced this problem in a play that set up the host-level firewall based on dynamic inventory.

  • The Ghost Force

    In my last post, we looked at the possibility that we’re living in a giant cosmic ghost town, a 2-billion-light-year void. It’s a compelling idea because it explains why our “Local Team” sees the universe rushing away so quickly without breaking the laws of physics.

    But as I read further, I realized the plot thickens. Even if the “void” explains part of the mystery, we still have to ask: Is our equipment lying to us?

    Checking Hubble’s Homework

    My first thought was similar to what NASA had: “Maybe the Hubble Space Telescope is just getting old and blurry?” After all, it was launched in 1990, around 35 years back. Perhaps it’s miscounting stars or confusing distant galaxies with their neighbors (an effect called “crowding”), making them look closer than they really are.

    But unlike me, NASA sent in the James Webb Space Telescope (JWST) to settle this. One of Webb’s secret missions was to “check the homework” of the Hubble Telescope. So, in 2024-2025, Webb looked at the same stars Hubble did, but with much sharper infrared eyes. The results were a blow to people like me who were hoping for a simple mistake. It turns out, Hubble was right. The measurements were rock solid. The “crowding” wasn’t the issue. The universe really is expanding faster in our neighborhood.

    The Great Data Civil War

    Just when it seemed the “Local Team” had won, a new twist emerged. A separate group, the Chicago-Carnegie program, used a totally different type of star, JAGB stars (J-region Asymptotic Giant Branch stars), to measure the same distances. The result? The JAGB stars gave a speed of ~67.9 km/s/Mpc. Now, this matches the “Baby Picture” team (the Early Universe), not the Local team!

    The JAGB stars are aging, “sooty” red giant stars that have entered a very specific phase of life. They are called carbon-rich giants because they’ve dredged up so much carbon from their cores that it creates a “smoky” atmosphere. For astronomers, they are the perfect “standard candles.” Because these stars have a very consistent, predictable brightness in the near-infrared, they act like a cosmic lightbulb of a known wattage. If we know how bright they should be, we can compare their actual brightness to calculate exactly how far away their galaxy is. Unlike other stars that can be finicky or hidden by dust, JAGB stars are bright, easy to spot, and incredibly reliable. This is why it’s so shocking that they’re currently giving us a different answer than the other “Local” teams!

    So now, we have a literal “civil war” in the data. One reliable method says the universe is sprinting at 73+, while another equally reliable method says it’s cruising at 67. JWST was supposed to solve the problem; instead, it proved that the problem is even deeper than we imagined.

    The Ghost in the Machine

    If the measurements are all correct, then our understanding of physics must be wrong. I started looking into the leading theory of Early Dark Energy (EDE).

    “Dark Energy” is the invisible force pushing the universe apart today. But some physicists think there was a second, hidden burst of energy in the universe’s youth, acting in the era before the cosmic microwave background was released at around the 380,000-year mark.

    Imagine the universe was expanding normally, and then – WHOOSH – a temporary “ghost” energy field kicked in, shoved everything apart faster for a few millennia, and then vanished without a trace.

    This “Early Dark Energy” would essentially “shrink the ruler” we use to measure the cosmos. If the ruler we use for the early universe is actually smaller than we thought, the faster speeds we see today would suddenly make perfect sense. The Tension would vanish.

    The Catch

    It’s a beautiful theory, but reality is proving to be a harsh critic. New data from the Dark Energy Spectroscopic Instrument (DESI) released recently is making it hard for this “ghost” energy to fit the facts. The data suggests that if this energy existed, it had to be so incredibly precise that it’s almost “too lucky” to be true.

    We are left with a universe that potentially had a massive growth spurt we can’t explain, driven by a force we can’t find.

    Whether we are living in a Cosmic Void, or witnessing the remnants of Early Dark Energy, one thing is clear: our “Standard Model” of the universe is missing a few chapters. We’re living in a cosmic ghost town, watching a ghost force, waiting for the next big discovery to tell us where we truly stand.

  • Are We Living in A Cosmic Ghost Town?

    In the last post about the Observable Universe, I discussed the sheer scale of the cosmos and that mind-bending 46.5 billion light-year edge. But as I kept digging into how we actually measure that expansion, I stumbled into a disagreement among astronomers. This pushed me to explore the Hubble Tension further, and now I finally understand why astronomers might be freaking out.

    The universe is currently presenting us with two different answers to the same basic question: “How fast are we growing?” During my school days, I studied Hubble’s Law, which states that “galaxies are moving away from Earth at speeds proportional to their distance, providing key evidence for the expansion of the universe.” In 1929, Edwin Hubble proposed the Hubble constant, which quantifies the rate of the universe’s expansion. This constant can be measured by observing the distances of celestial objects and the speeds at which they are moving away from us.

    We have two primary methods for measuring the Hubble Constant, and currently these two methods are at a standoff. To keep things simple while I was wrapping my head around this, I started calling them the “Baby Picture” team and the “Local” team.

    On one side, we have the “Baby Picture” Team (which scientists formally call the Early Universe or CMB measurements). They look at the Cosmic Microwave Background, the afterglow of the Big Bang, to calculate how fast the universe should be expanding based on its initial conditions. Their math gives us a speed of 67.4 km/s/Mpc.

    On the other side, there’s the “Local” Team (officially known as the Late Universe or Distance Ladder measurements). Instead of looking at the beginning of time, they look at what’s happening right now, measuring actual stars and galaxies in our neck of the woods. Their measurement comes in much higher, at roughly 73 km/s/Mpc.

    A gap of five or six units might not seem like a big deal, but in the world of physics, it’s a total disaster. It’s like two people measuring your height: one insists you’re 5’8″ and the other is positive you’re 6’1″, and both are certain their tape measures are perfect.

    This disagreement is what scientists call the Hubble Tension. It’s the ultimate “it doesn’t add up” moment, creating a massive conflict between the “blueprints” we see in the early universe and the “finished house” we see in our local neighborhood today.

    I found a paper published in January 2025 titled “The Hubble Tension in Our Own Backyard: DESI and the Nearness of the Coma Cluster,” which addresses the ongoing debate about the expansion rate of the universe. The research team, led by Dan Scolnic, used the Dark Energy Spectroscopic Instrument (DESI) to obtain the most precise measurement to date: 76.5 km/s/Mpc. This is even faster than previously anticipated, which effectively rules out measurement error and indicates that the universe around us really is expanding faster than expected.

    NOTE

    In the post, the term km/s/Mpc is used. “Mpc” just stands for Megaparsec, which is about 3.26 million light-years. One parsec is 3.26 light-years.

    Think of it like a speed-per-distance rule. If the rate is 73 km/s/Mpc, it means a galaxy located at 1 Megaparsec away is moving at 73 km/s, while a galaxy twice as far away is moving at 146 km/s. The further out you go, the faster the “stretch” happens!
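
    Written as a formula, that speed-per-distance rule is just Hubble’s Law, with the numbers from above:

    $$ v = H_0 \, d = 73\ \mathrm{km/s/Mpc} \times 2\ \mathrm{Mpc} = 146\ \mathrm{km/s} $$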

    Are We Currently in a Bubble?

    So, if the measurements aren’t wrong, what is?

    To answer this, I searched further and came across a new study, announced by the Royal Astronomical Society as “Earth Inside Huge Void May Explain Big Bang Expansion Rate Puzzle”, which suggested that the problem is not the math; it’s our own address.

    To understand this better, let us think of it as a giant game of cosmic tug-of-war. Usually, matter is spread out evenly, so everyone is pulling on each other with equal strength. But because we live in a ‘void’ (an empty pocket), there are way fewer people on our side of the rope. Meanwhile, the rest of the universe outside our bubble is packed with matter. Because they have more ‘players’ (more gravity), they are pulling galaxies away from us much harder and faster than they normally would. From our perspective in the middle, it looks like everything is rushing away, but really, they’re just being winched out by the heavy-hitters outside our neighborhood.

    The study suggests Earth and the Milky Way are drifting through a massive cosmic void spanning 2 billion light-years. This region contains significantly less matter than the rest of the universe. Because we are in this “empty” pocket, gravity from the denser universe outside the bubble is pulling galaxies away from us faster than normal.

    If this theory is right, then we can say for sure that the universe isn’t breaking the laws of physics; we just happen to live in a weird, lonely neighborhood. In fact, the researchers calculate it is 100 times more likely that we reside in such a void than in a normal region.

    It’s a humbling thought: We might be looking out at the universe from the inside of a giant cosmic ghost town, wondering why everything is running away from us so fast.

  • Why We See Only a Fraction of the Universe

    When I was reading about the universe’s age the other night, I stumbled onto a Wikipedia page about the “observable universe,” and it honestly blew my mind. It’s one of those things that sounds like science fiction, the idea that there’s a hard limit to what we can see, and it’s way further away than you’d think.

    If the universe is roughly 13.8 billion years old, you’d assume the furthest thing we can see is 13.8 billion light-years away. But the actual number is about 46.5 billion light-years in every direction. That makes the whole observable universe a giant sphere about 93 billion light-years across. Here is what I discovered about the boundaries of our cosmic neighborhood.

    Observable Edge

    The observable edge, or cosmic horizon, can be thought of as a time-delay boundary rather than a physical wall. It represents the furthest limit from which light has had enough time to travel and reach our eyes since the Big Bang. Everything within this boundary forms a perfect sphere with us at the center. This isn’t because we are in the middle of the universe, but because we are the center of our own perspective. The scale of this sphere is difficult to visualize, but the data gives us a framework for just how much “room” we have to explore.

    So, how is the radius 46.5 billion light-years if the light has only been traveling for 13.8 billion years? While light travels toward us, the space through which it travels is actually expanding. It’s like a runner trying to finish a marathon while the road itself is being stretched behind and in front of them. The light eventually reaches us, but the source of that light has since moved much further away.

    Why the math doesn’t seem to add up

    The edge is so much further away than the universe’s age would suggest because of the way space stretches. A lot of people use the “expanding balloon” analogy to explain this, and it’s probably the best way to visualize it.

    Imagine you have a balloon that hasn’t been blown up yet. You draw two dots on it with a Sharpie to represent galaxies. If you start blowing air into that balloon, the rubber stretches and the dots move away from each other. Now, imagine a tiny ant crawling from one dot to the other. While the ant is walking, the balloon is growing. By the time the ant reaches the second dot, the actual distance it covered is much longer than the distance between the dots when it first started its journey.

    In this scenario, light is the ant. While light is traveling toward us, the space through which it’s traveling is expanding. So, by the time the light from a distant galaxy finally hits our telescopes, that galaxy has been pushed much further away than it was when it first emitted that light.

    The trippy part is that because this expansion is actually speeding up, there are parts of the universe that are basically “dropping off” our map. These regions are moving away from us faster than the speed of light. That doesn’t mean the galaxies themselves are breaking physics; it just means the space between us is growing so fast that light can never bridge the gap. It’s like trying to run up a down-escalator that’s moving way faster than you can sprint. You’ll just never reach the top.

    The first snapshot of the universe

    If we look as far back as possible, to the very edge of that 46.5 billion light-year radius, we find the Cosmic Microwave Background (CMB). This is essentially the afterglow of the Big Bang.

    For the first 380,000 years, the universe was so hot and crowded that light couldn’t even move; it was like a thick, glowing fog. Eventually, things cooled down enough for light to break free, and that light has been traveling through space for billions of years. Because the universe has stretched so much since then (the expanding balloon again), those light waves have been stretched until they became microwaves.

    New problems in 2025

    Scientists are actually in a bit of a crisis over this right now. As of early 2025, data from the James Webb Space Telescope and recent studies from the Atacama Cosmology Telescope have confirmed something called the “Hubble Tension.”

    Basically, when we look at the CMB to see how fast the universe should be expanding, we get one answer. But when we look at actual stars and galaxies today, they seem to be moving much faster than the early data predicted. Research published throughout 2024 and into 2025 suggests we might need “New Physics” to explain the gap, maybe a weird version of dark energy that only existed for a little while right after the universe began.

    It’s a bit humbling to realize that even with our best tech, we’re essentially sitting inside a bubble, looking at a “baby picture” of the cosmos that we’re still trying to fully understand.

    Where does this leave us?

    Realizing that our maps of the universe are still being redrawn is actually pretty exciting. We often think of science as a finished book, but the “Hubble Tension” reminds us that we’re still very much in the middle of the story. The fact that the universe we see today doesn’t quite match the “baby picture” from the CMB doesn’t mean we’re wrong, it just means there’s something massive and invisible still waiting to be discovered.

    At the end of the day, the 46.5 billion light-year edge isn’t a wall, it’s just the limit of our current perspective. We are small observers in a vast, stretching fabric, trying to decode a message that has been traveling for billions of years. It’s a reminder that no matter how much we think we’ve figured out, the cosmos still has plenty of ways to surprise us. Whether the answer lies in new physics or a deeper understanding of dark energy, the search itself is what keeps us looking up.

  • From /etc/hosts to 127.0.0.53: A Sysadmin’s View on DNS Resolution

    If you’ve been managing systems since the days of AT&T Unix System V Release 3 (SVR3), you remember when networking was a manual affair. Name resolution often meant a massive, hand-curated /etc/hosts file and a prayer.

    As the Domain Name System (DNS) matured, the standard consolidated around a single, universally understood text file: /etc/resolv.conf. For decades, that file served us well. But the requirements of modern, dynamic networking, involving laptops hopping Wi-Fi SSIDs, complex VPN split-tunnels, and DNSSEC validation, forced a massive architectural shift in the Linux world, most notably in the form of systemd-resolved.

    Let’s walk through history, with hands-on examples, to see how we got here.

    AT&T SVR3: The Pre-DNS Era

    Released around 1987-88, SVR3 was still rooted in the hosts file model. The networking stacks were primitive, and TCP/IP was available but not always bundled. I still remember that around 1996-97, I used to install AT&T SVR3 version 4.2 using multiple 5.25-inch DSDD floppy disks, then, after installation, use another set of disks to install the TCP/IP stack. DNS support was not native, and we relied on /etc/hosts for hostname resolution. By SVR3.2, AT&T started shipping optional resolver libraries, but these were not standardized.

    # Example /etc/hosts file on SVR3
    127.0.0.1 localhost
    192.168.1.10 svr3box svr3box.local

    If DNS libraries were installed, /etc/resolv.conf could be used:

    # /etc/resolv.conf available when DNS libraries were installed
    nameserver 192.168.1.1
    domain corp.example.com

    dig did not exist then, so we used nslookup.

    nslookup svr3box
    Server: 192.168.1.1
    Address: 192.168.1.1#53

    Name: svr3box.corp.example.com
    Address: 192.168.1.10

    Solaris: Bridging Classical and Modern

    When I was introduced to Sun Solaris around 2003-2005, I realized that DNS resolution was very well structured (at least compared to the SVR3 systems I had worked on earlier). Mostly, I remember working on Solaris 8 (with a few older SunOS 5.x systems). These systems required both /etc/resolv.conf and /etc/nsswitch.conf.

    # /etc/nsswitch.conf
    hosts: files dns nis

    This /etc/nsswitch.conf entry simply instructed the C library to consult /etc/hosts first, then DNS, then NIS. Of course, you could change the sequence.
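
    On any system honoring nsswitch.conf, getent resolves names through that same libc switch, which makes it a quick way to verify the lookup order (host name and output here are illustrative, matching the earlier hosts file):

    $ getent hosts svr3box
    192.168.1.10    svr3box svr3box.local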

    The /etc/resolv.conf file defined the nameservers:

    nameserver 8.8.8.8
    nameserver 1.1.1.1
    search corp.example.com

    Solaris 10 introduced SMF (Service Management Facility), and by Solaris 11 /etc/resolv.conf was auto-generated from the dns/client SMF service. Manual edits were discouraged, and we were learning to use:

    svccfg -s dns/client setprop config/nameserver=8.8.8.8
    svcadm refresh dns/client
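
    To read back what the service actually holds, svcprop should do the trick (a sketch from memory; the exact property path may vary by release):

    svcprop -p config/nameserver network/dns/client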

    For me, this marked the shift from text files to managed services, although I did not work much on these systems.

    BSD Unix: Conservatism and Security

    The BSD philosophy is simplicity, transparency, and security first.

    FreeBSD and NetBSD still rely on the /etc/resolv.conf file, which dhclient updates automatically. This keeps debugging very straightforward.

    cat /etc/resolv.conf
    nameserver 192.168.1.2

    nslookup freebsd.org

    OpenBSD, famous for its “secure by default” stance, includes modern, secure DNS software like unbound in its base installation, but its default system resolution behavior remains classical. Unless the OS is explicitly configured to use a local caching daemon, applications on a fresh OpenBSD install still read /etc/resolv.conf and talk directly to external servers. OpenBSD prioritizes a simple, auditable baseline over complex automated magic.
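
    If you do want that local caching daemon on OpenBSD, wiring it up is only a few commands (a sketch; this leaves unbound at its default configuration, and on DHCP-managed interfaces you would also have to stop the lease daemon from rewriting resolv.conf):

    # rcctl enable unbound
    # rcctl start unbound
    # echo "nameserver 127.0.0.1" > /etc/resolv.conf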

    The Modern Linux Shift

    On modern Linux distributions (Ubuntu 18.04+, Fedora, RHEL 8+, etc.), the old way of simply “echoing” a nameserver into a static /etc/resolv.conf file is effectively dead. The reason for this is that the old model couldn’t handle race conditions. If NetworkManager, a VPN client, and a DHCP client all tried to write to that single file at the same time, the last one to write won.

    In modern Linux systems, systemd-resolved acts as a local middleman, a DNS broker that manages configurations from different sources dynamically. The /etc/resolv.conf file is no longer a real file; it’s usually a symbolic link pointing to a file managed by systemd, which directs DNS traffic to a local stub listener on 127.0.0.53.

    systemd-resolved adds features like:

    • Split-DNS to route VPN domains separately.
    • Local caching for faster repeated lookups.
    • DNS-over-TLS for encrypted queries.

    The symlink arrangement is easy to verify:

    ls -l /etc/resolv.conf
    lrwxrwxrwx 1 root root 39 Dec 24 11:00 /etc/resolv.conf -> ../run/systemd/resolve/stub-resolv.conf

    This complexity buys us features needed for modern mobile computing: per-interface DNS settings, local caching to speed up browsing, and seamless VPN integration.

    Modern Linux systems use dig and resolvectl for diagnostics:

    $ dig @127.0.0.53 example.com

    ; <<>> DiG 9.16.50-Raspbian <<>> @127.0.0.53 example.com
    ; (1 server found)
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17367
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 1232
    ;; QUESTION SECTION:
    ;example.com. IN A

    ;; ANSWER SECTION:
    example.com. 268 IN A 104.18.27.120
    example.com. 268 IN A 104.18.26.120

    ;; Query time: 9 msec
    ;; SERVER: 127.0.0.53#53(127.0.0.53)
    ;; WHEN: Wed Dec 24 12:49:43 IST 2025
    ;; MSG SIZE rcvd: 72

    $ resolvectl query example.com
    example.com: 2606:4700::6812:1b78
    2606:4700::6812:1a78
    104.18.27.120
    104.18.26.120

    -- Information acquired via protocol DNS in 88.0ms.
    -- Data is authenticated: no; Data was acquired via local or encrypted transport: no
    -- Data from: network

    Because editing the file directly no longer works reliably, we must use tools that communicate with the systemd-resolved daemon.

    Suppose you want to force your primary ethernet interface (eth0) to bypass DHCP DNS and use Google’s servers temporarily:

    sudo systemd-resolve --set-dns=8.8.8.8 --set-dns=8.8.4.4 --interface=eth0

    To check what is actually happening (which DNS servers are bound to which interface scopes), run:

    systemd-resolve --status

    and to clear the manual overrides and go back to whatever setting DHCP provided:

    sudo systemd-resolve --revert --interface=eth0
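
    Note that on newer releases the systemd-resolve command has been deprecated in favor of resolvectl; as far as I can tell, the equivalents of the three commands above are:

    resolvectl dns eth0 8.8.8.8 8.8.4.4
    resolvectl status
    resolvectl revert eth0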

    We’ve come a long way from System V R3. While the simplicity of the classical text-file approach is nostalgic for those of us who grew up on it, the dynamic nature of today’s networking requires a smarter local resolver daemon. It adds complexity, but it’s the price we pay for seamless connectivity in a mobile world.