Ajitabh Pandey's Soul & Syntax

Exploring systems, souls, and stories – one post at a time

Author: Ajitabh

  • From /etc/hosts to 127.0.0.53: A Sysadmin’s View on DNS Resolution

    If you’ve been managing systems since the days of AT&T Unix System V Release 3 (SVR3), you remember when networking was a manual affair. Name resolution often meant a massive, hand-curated /etc/hosts file and a prayer.

    As the Domain Name System (DNS) matured, the standard consolidated around a single, universally understood text file: /etc/resolv.conf. For decades, that file served us well. But the requirements of modern, dynamic networking, involving laptops hopping Wi-Fi SSIDs, complex VPN split-tunnels, and DNSSEC validation, forced a massive architectural shift in the Linux world, most notably in the form of systemd-resolved.

    Let’s walk through history, with hands-on examples, to see how we got here.

    AT&T SVR3: The Pre-DNS Era

    Released around 1987-88, SVR3 was still rooted in the hosts file model. The networking stacks were primitive, and TCP/IP was available but not always bundled. I still remember that around 1996-97, I used to install AT&T SVR3 version 4.2 using multiple 5.25-inch DSDD floppy disks, then, after installation, use another set of disks to install the TCP/IP stack. DNS support was not native, and we relied on /etc/hosts for hostname resolution. By SVR3.2, AT&T started shipping optional resolver libraries, but these were not standardized.

    # Example /etc/hosts file on SVR3
    127.0.0.1 localhost
    192.168.1.10 svr3box svr3box.local

    If DNS libraries were installed, /etc/resolv.conf could be used:

    # /etc/resolv.conf available when DNS libraries were installed
    nameserver 192.168.1.1
    domain corp.example.com

    dig did not exist then, so we used nslookup:

    nslookup svr3box
    Server: 192.168.1.1
    Address: 192.168.1.1#53

    Name: svr3box.corp.example.com
    Address: 192.168.1.10

    Solaris: Bridging Classical and Modern

    When I was introduced to Sun Solaris around 2003-2005, I realized that DNS resolution was very well structured (at least compared to the SVR3 systems I had worked on earlier). Mostly, I remember working on Solaris 8 (with a few older SunOS 5.x systems). These systems required both /etc/resolv.conf and /etc/nsswitch.conf:

    # /etc/nsswitch.conf
    hosts: files dns nis

    This /etc/nsswitch.conf line simply instructed the C library (libc) to look in /etc/hosts first, then DNS, and then NIS. The sequence could, of course, be changed.
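
    To check which source actually answered, you could query through the same switch with getent, available on Solaris and Linux alike (the hostname and address here are just illustrative):

    getent hosts svr3box
    192.168.1.10    svr3box svr3box.local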

    The /etc/resolv.conf defined the nameservers –

    nameserver 8.8.8.8
    nameserver 1.1.1.1
    search corp.example.com

    SMF (Service Management Facility) arrived with Solaris 10, and by Solaris 11 the DNS client settings had moved into SMF, so /etc/resolv.conf was auto-generated from the dns/client service properties. Manual edits were discouraged, and we were learning to use:

    svccfg -s dns/client setprop config/nameserver=8.8.8.8
    svcadm refresh dns/client

    For me, this marked the shift from text files to managed services, although I did not work much on these systems.

    BSD Unix: Conservatism and Security

    The BSD philosophy is simplicity, transparency, and security first.

    FreeBSD and NetBSD still rely on the /etc/resolv.conf file, and dhclient updates it automatically, which keeps debugging very straightforward:

    cat /etc/resolv.conf
    nameserver 192.168.1.2

    nslookup freebsd.org

    OpenBSD, famous for its “secure by default” stance, includes modern, secure DNS software like unbound in its base installation, yet its default resolution behavior remains classical. Unless the OS is explicitly configured to use a local caching daemon, applications on a fresh OpenBSD install still read /etc/resolv.conf and talk directly to external servers. The project prioritizes a simple, auditable baseline over complex automated magic.

    The Modern Linux Shift

    On modern Linux distributions (Ubuntu 18.04+, Fedora, RHEL 8+, etc.), the old way of simply “echoing” a nameserver into a static /etc/resolv.conf file is effectively dead. The reason for this is that the old model couldn’t handle race conditions. If NetworkManager, a VPN client, and a DHCP client all tried to write to that single file at the same time, the last one to write won.

    On modern Linux systems, systemd-resolved acts as a local middleman, a DNS broker that manages configuration from different sources dynamically. The /etc/resolv.conf file is usually no longer a real file; it is a symbolic link to a file managed by systemd that points applications at a local stub listener on 127.0.0.53.

    systemd-resolved adds features like:

    • Split-DNS to route VPN domains separately.
    • Local caching for faster repeated lookups.
    • DNS-over-TLS for encrypted queries.

    The symlink is easy to spot:

    ls -l /etc/resolv.conf
    lrwxrwxrwx 1 root root 39 Dec 24 11:00 /etc/resolv.conf -> ../run/systemd/resolve/stub-resolv.conf
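
    These features are controlled through /etc/systemd/resolved.conf. A minimal sketch with illustrative values (not recommendations):

    # /etc/systemd/resolved.conf (illustrative values)
    [Resolve]
    DNS=8.8.8.8 8.8.4.4
    DNSOverTLS=opportunistic   # use 'yes' to require encrypted queries
    Cache=yes

    sudo systemctl restart systemd-resolved   # apply the change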

    This complexity buys us features needed for modern mobile computing: per-interface DNS settings, local caching to speed up browsing, and seamless VPN integration.

    Modern Linux systems use dig and resolvectl for diagnostics:

    $ dig @127.0.0.53 example.com

    ; <<>> DiG 9.16.50-Raspbian <<>> @127.0.0.53 example.com
    ; (1 server found)
    ;; global options: +cmd
    ;; Got answer:
    ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 17367
    ;; flags: qr rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 1

    ;; OPT PSEUDOSECTION:
    ; EDNS: version: 0, flags:; udp: 1232
    ;; QUESTION SECTION:
    ;example.com. IN A

    ;; ANSWER SECTION:
    example.com. 268 IN A 104.18.27.120
    example.com. 268 IN A 104.18.26.120

    ;; Query time: 9 msec
    ;; SERVER: 127.0.0.53#53(127.0.0.53)
    ;; WHEN: Wed Dec 24 12:49:43 IST 2025
    ;; MSG SIZE rcvd: 72

    $ resolvectl query example.com
    example.com: 2606:4700::6812:1b78
    2606:4700::6812:1a78
    104.18.27.120
    104.18.26.120

    -- Information acquired via protocol DNS in 88.0ms.
    -- Data is authenticated: no; Data was acquired via local or encrypted transport: no
    -- Data from: network

    Because editing the file directly no longer works reliably, we must use tools that communicate with the systemd-resolved daemon.

    Suppose you want to force your primary ethernet interface (eth0) to bypass DHCP DNS and use Google’s servers temporarily:

    sudo systemd-resolve --set-dns=8.8.8.8 --set-dns=8.8.4.4 --interface=eth0

    To check what is actually happening—seeing which DNS servers are bound to which interface scopes—run:

    systemd-resolve --status

    and to clear the manual overrides and go back to whatever setting DHCP provided:

    sudo systemd-resolve --revert --interface=eth0
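
    On recent releases the systemd-resolve wrapper is deprecated in favor of resolvectl; the equivalents of the commands above are:

    sudo resolvectl dns eth0 8.8.8.8 8.8.4.4
    resolvectl status
    sudo resolvectl revert eth0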

    We’ve come a long way from System V R3. While the simplicity of the classical text-file approach is nostalgic for those of us who grew up on it, the dynamic nature of today’s networking requires a smarter local resolver daemon. It adds complexity, but it’s the price we pay for seamless connectivity in a mobile world.

  • Book Review: Day by Day Armageddon Series

    I recently finished all four books in the Day by Day Armageddon series on Audible. All of them are narrated by Jay Snider, who does an amazing job across the series. His voice brings the tension, fear and loneliness of a zombie apocalypse to life in a very natural way. Even when the story dipped for me at times, his narration kept me going.

    The series as a whole was enjoyable. I liked the atmosphere and the survival focus. Some books worked better than others, but I am happy I listened to all four.

    Here are my thoughts on each book.

    Book 1: Day by Day Armageddon

    The first book is still my favorite. It is written in a diary format and pulls you in right from the start. You feel the panic and confusion as the world collapses around the main character. The simple day by day entries make it feel real and personal.

    Jay Snider’s narration fits perfectly. His calm but tense delivery keeps the suspense high and makes you want to know what happens next. This book gave me a strong start to the series and made me excited to continue.

    Book 2: Beyond Exile

    The second book continues the story, but it did not land as well for me. The first book focused on personal survival and the emotional weight of the apocalypse. This one moves more into a military style adventure. It has missions, more action and a wider plot. I missed the close and intimate tone from the first book.

    Characters like John, Annabelle, Jan and William appear only for a short while. Even Tara, who seems important at first, gets very little time. The shift in tone made me feel less connected to the story.

    The one thing that kept the book enjoyable was Jay Snider’s narration. He always brings life to the story. Even when the plot did not work for me as much, his voice kept me engaged.

    Book 3: Shattered Hourglass

    The third book grows the world even more by adding many new military characters. This makes the story feel bigger, but it also takes away the personal feeling from the earlier books. The original protagonist hardly has any role this time, which was a surprise.

    At times, I felt like this could have been the final book. The story expands, the stakes rise and it feels like things are moving toward an ending. But the author chose to continue with a fourth book.

    Jay Snider once again delivers a great performance.

    Book 4: Ghost Run

    The fourth book returns to a more personal journey. Most of the story follows the main character alone as he moves through empty towns and dangerous spaces. Other people appear, but only for short moments. At first I wanted more interaction, but later I saw how well it matched the feeling of a dead and broken world.

    The plot has some tense moments and I enjoyed many parts of it. The ending could have been a bit stronger, but it does wrap things up in a way that works. It also leaves a small space for future books if the author ever decides to continue.

    Jay Snider shines again with the narration. His tone captures both the silence and the danger around every corner.

    Final Thoughts

    The Day by Day Armageddon series is a mixed experience, but a good one overall. I enjoyed the survival theme, the lonely atmosphere and the sense of a world falling apart. Some places in the books drifted away from what I liked most about the series, but there were always enough good moments to keep me listening.

    Jay Snider is the real star for me. His narration lifted every book and made the whole series more immersive.

    If you enjoy zombie stories with a mix of survival, action and tension, this series is worth trying. My final rating for the whole series is 3 out of 5.

  • Book Review: Flight of the Intruder by Stephen Coonts

    Stephen Coonts’ Flight of the Intruder takes readers straight into the tense, roaring heart of the Vietnam War — not from the jungles, but from the cockpit of an A-6 Intruder bomber. The novel follows Navy pilot Jake Grafton, who launches from a U.S. carrier to strike targets deep inside North Vietnam.

    Where this book truly soars is in its flying scenes. Coonts, himself a former naval aviator, writes with authenticity and precision. Each mission feels real — from the preflight checks to the disorienting flashes of anti-aircraft fire. When Grafton straps into the cockpit, you feel the adrenaline, the discipline, and the quiet fear of what’s ahead.

    Equally compelling is the portrayal of carrier life: the hierarchy, the routines, and the fragile balance between boredom and chaos. Coonts brings the world below deck to life as effectively as the one above the clouds.

    However, the novel’s main plotline is an illegal bombing run on Hanoi. Perhaps something like that happened in Coonts’ own experience, but for me it strains credibility. It’s hard to imagine a disciplined Navy pilot jeopardizing his career and future on a rogue mission, no matter how frustrated he feels about the war’s politics. This stretch of believability weakens an otherwise solid narrative.

    Still, the thrill factor remains undeniable. The air combat scenes are cinematic, and Coonts’ insider perspective adds a layer of realism that most military thrillers lack.

    Benjamin L. Darcie’s audiobook narration deserves special mention. His delivery captures both the tension of flight and the quieter moments of introspection, making the story engaging from takeoff to landing.

    In the end, Flight of the Intruder is an exciting, well-crafted piece of military fiction — a mix of technical precision, human drama, and the moral gray zones of wartime decision-making. Even with a few implausible turns, it’s a journey worth taking for anyone fascinated by aviation or naval life.

  • Three Simple Rules for Making AI Work

    Everyone agrees that data powers AI, but very few use it wisely. Data is often described as the fuel for the machine learning engine, yet it rarely arrives clean or ready to use. Real success in AI depends not only on having data but on knowing what kind of data matters, how to build around it, and how to avoid common mistakes.

    Rule One: Data is the Heart of Your Business

    Data is not just numbers in a file. It is a reflection of your business. You define what the inputs and outputs mean. That definition shapes how AI learns and what it can do.

    Data Set                                  | Potential A (Input) | Potential B (Output)
    House Size, Bedrooms, Price               | (Size, Bedrooms)    | Price
    Machine Temperature, Pressure             | (Temp, Pressure)    | Machine Failure (Yes or No)
    Customer Purchase History, Price Offered  | (History, Price)    | Product Purchase (Yes or No)

    Each of these examples shows how data directly connects to your business question. You are not just collecting numbers. You are deciding what matters.
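
    As a toy version of the first row, here is a minimal sketch using scikit-learn with made-up numbers; the only point is that A is (Size, Bedrooms) and B is Price:

    # Toy A -> B mapping: (Size, Bedrooms) -> Price, with invented data
    from sklearn.linear_model import LinearRegression

    X = [[1400, 3], [1600, 3], [2000, 4], [2400, 4]]   # A: the inputs you chose
    y = [240000, 275000, 330000, 390000]               # B: the output to predict

    model = LinearRegression().fit(X, y)
    print(model.predict([[1800, 3]]))   # rough price estimate for an unseen house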

    Data also comes in two main forms. Structured data fits neatly in tables and spreadsheets, such as housing prices or temperature logs. Unstructured data includes things like images, audio, and text that humans understand easily but machines need help with. Generative AI often works best with unstructured data, while supervised learning handles both types very well.

    When you think about your data, start by asking what problem you are solving. The value of data appears only when it connects to a real business case.

    Rule Two: Keep Improving Through Iteration

    Building an AI system is not a one-time task. It is a loop that repeats again and again. Every successful AI follows this same pattern.

    First, you collect the data that contains your inputs and the matching outputs. Next, you train the model so it can learn to move from A to B. The first version usually fails. That is expected. The team must adjust, fine-tune, and try again many times.

    Once the model starts performing well, it is deployed into the real world. That is where the real learning begins. For example, a speech model might work perfectly in a lab but fail to understand accents or noisy environments once it is in use. A self-driving car might misread new vehicle types like golf carts.

    Every time this happens, the data from those failures becomes valuable. It flows back to the AI team, who retrain and improve the model. This constant cycle of feedback, learning, and updating is what makes AI systems smarter and more reliable over time.
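
    A compressed sketch of that cycle, using a toy classifier and invented readings (any real pipeline is far more involved):

    # Train, let "deployment" surface failures, fold them back in, retrain
    from sklearn.linear_model import LogisticRegression

    X_train = [[60, 1.0], [62, 1.1], [95, 2.0], [98, 2.2]]   # (temperature, pressure)
    y_train = [0, 0, 1, 1]                                    # 0 = healthy, 1 = failure

    model = LogisticRegression().fit(X_train, y_train)

    # Cases the deployed model got wrong come back labelled from the field
    X_prod, y_prod = [[75, 1.6], [80, 1.7]], [1, 1]
    missed = [i for i, x in enumerate(X_prod) if model.predict([x])[0] != y_prod[i]]

    X_train += [X_prod[i] for i in missed]
    y_train += [y_prod[i] for i in missed]
    model = LogisticRegression().fit(X_train, y_train)        # retrain on the enriched set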

    Rule Three: Avoid the Common Misuses of Data

    Many organizations stumble not because they lack data, but because they misunderstand it. Here are three mistakes that leaders often make and how to avoid them.

    Mistake One – The Long IT Plan

    A company decides to build the perfect IT setup first and promises to collect the perfect dataset in a few years. By the time they are ready, the business needs have already changed.
    The better approach is to get your AI engineers involved early. They can tell you what kind of data to record and how often. A small change, like capturing machine readings every minute instead of every ten, can make a big difference in model quality.

    Mistake Two – Assuming More Data Means Better Data

    Some teams believe that having terabytes of data automatically means success. In reality, most of that data may not even connect to the problem they are trying to solve.
    Before collecting or buying more, talk to your AI team about what kind of data is truly useful. Quality and relevance matter much more than size.

    Mistake Three – Ignoring the Quality of Data

    Data is rarely perfect. It may contain wrong labels, missing values, or strange entries. If these errors go unchecked, the model will learn incorrect patterns.
    A skilled AI team can clean and organize the data so that the system learns the right things. This process may not sound exciting, but it determines whether your AI succeeds or fails.
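
    A first pass at that kind of cleanup is often only a few lines. A sketch using pandas, where the file and column names are invented for illustration:

    import pandas as pd

    df = pd.read_csv("machine_readings.csv")             # hypothetical sensor log

    df = df.drop_duplicates()                             # remove repeated rows
    df = df.dropna(subset=["temperature", "pressure"])    # drop rows missing key inputs
    df = df[df["failure"].isin(["yes", "no"])]            # keep only valid labels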

    Turning Data into Real Value

    True AI success does not come from hype or futuristic dreams. It comes from disciplined use of supervised learning and a smart, iterative approach to data. When you understand your business inputs and outputs, build your models step by step, and keep refining your data, you unlock real and measurable value.

    AI is not about chasing magic. It is about turning A into B, one clean, well-understood dataset at a time.

  • How A to B Mapping Powers Modern AI

    If AI is the car, then Machine Learning is the engine. And the most powerful engine inside it is something called Supervised Learning. Understanding this one idea helps you see how most of today’s AI really works.

    The Simple Idea of A to B Mapping

    At its heart, supervised learning is about learning how to go from one thing to another. The input is called A, and the desired output is called B. It may sound almost too simple, but this pattern is the foundation of nearly every AI system in use today.

    Input (A)            | Output (B)        | Example Use
    Email text           | Spam or Not Spam  | Email filters
    Audio clip           | Text words        | Speech recognition
    Image and radar data | Car positions     | Self-driving cars
    Ad and user info     | Click or No Click | Online advertising
    A few words          | The next word     | Generative AI like ChatGPT

    This basic mapping is how machines learn patterns. It turns data into predictions and predictions into decisions.
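
    A toy version of the first row, as a minimal sketch with scikit-learn and a handful of made-up emails:

    # Toy A -> B mapping: email text (A) -> spam or not spam (B)
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    emails = ["win a free prize now", "meeting moved to 3pm",
              "claim your free prize", "lunch tomorrow?"]
    labels = ["spam", "not spam", "spam", "not spam"]

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(emails, labels)                     # learn the mapping from examples
    print(model.predict(["free prize waiting"]))  # -> ['spam']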

    Why Supervised Learning Took Off

    Supervised learning has been around for many years. For a long time, its progress was slow and steady, and the results felt limited. Then something changed. The arrival of neural networks and the rise of deep learning completely transformed what was possible.

    To understand why, imagine plotting a graph where performance rises as you feed more data into an AI system. In older systems, the performance curve would rise a little and then flatten out. No matter how much extra data you added, the system would stop improving. It simply could not learn more.

    Neural networks, on the other hand, behaved differently. They kept improving as more data was added. A small network showed some improvement. A larger one did better. And when the models grew huge, their performance kept climbing higher and higher. The curve never seemed to flatten.

    This change was not just about smarter ideas. It was about scale. Two things came together at the right time. First, companies started collecting massive amounts of data from the internet, apps, and sensors. Second, hardware like GPUs made it possible to train very large models much faster.

    These two forces, data and compute, gave supervised learning a new life. Suddenly, models that once struggled could now learn patterns far beyond human imagination. That breakthrough is what pushed AI from the lab into the real world, powering tools like speech recognition, image search, and later, large language models such as ChatGPT.

    Generative AI is Just Bigger Supervised Learning

    Large Language Models such as ChatGPT may look magical, but they are built on the same foundation as supervised learning. The only real difference is scale. Instead of training on small datasets, they learn from hundreds of billions of words gathered from the internet.

    The task they perform is simple. The model reads a sequence of words and tries to predict the next one. For example, if the training text says “My favorite drink is lychee bubble tea,” the model learns that the phrase “My favorite drink is” is usually followed by “lychee.” It stores this connection as one of countless A to B mappings.
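
    A deliberately tiny sketch of that idea (nothing like a real language model, just counting which word follows which in a made-up snippet of text):

    from collections import Counter, defaultdict

    text = "my favorite drink is lychee bubble tea my favorite drink is lychee iced tea"
    words = text.split()

    # Crude A -> B table: for each word, count the words seen right after it
    next_word = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        next_word[a][b] += 1

    print(next_word["is"].most_common(1))   # [('lychee', 2)] -- the likeliest next word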

    When this process is repeated millions of times, the model slowly builds an understanding of language. It learns how words fit together, how ideas connect, and how context shapes meaning. Over time, it becomes capable of generating text that sounds natural, answers questions, and even reasons through complex topics.

    So while it feels like the model is thinking or creating, it is really applying the same principle that powers all supervised learning. It looks at an input and predicts an output. The scale and training data make it seem intelligent, but at its core, it is still the same A to B mapping that drives every part of modern AI.

    The Hidden Power of Simplicity

    The beauty of supervised learning is how something so simple powers almost everything. From your phone’s photo app to voice assistants to autonomous cars to AI writing tools, it all begins with the same idea — learn to go from input to output.

    Big data and big models turned that small idea into a trillion-dollar industry. And the journey from A to B is still far from over.