Ajitabh Pandey's Soul & Syntax

Exploring systems, souls, and stories – one post at a time

Author: Ajitabh

  • Book Review: Flight of the Intruder by Stephen Coonts

    Stephen Coonts’ Flight of the Intruder takes readers straight into the tense, roaring heart of the Vietnam War — not from the jungles, but from the cockpit of an A-6 Intruder bomber. The novel follows Navy pilot Jake Grafton, who launches from a U.S. carrier to strike targets deep inside North Vietnam.

    Where this book truly soars is in its flying scenes. Coonts, himself a former naval aviator, writes with authenticity and precision. Each mission feels real — from the preflight checks to the disorienting flashes of anti-aircraft fire. When Grafton straps into the cockpit, you feel the adrenaline, the discipline, and the quiet fear of what’s ahead.

    Equally compelling is the portrayal of carrier life: the hierarchy, the routines, and the fragile balance between boredom and chaos. Coonts brings the world below deck to life as effectively as the one above the clouds.

    However, the novel’s main plotline is an illegal bombing run on Hanoi. Perhaps something like it happened in Coonts’ own experience, but for me it strains credibility. It’s hard to imagine a disciplined Navy pilot jeopardizing his career and future on a rogue mission, no matter how frustrated he is with the war’s politics. This stretch of believability weakens an otherwise solid narrative.

    Still, the thrill factor remains undeniable. The air combat scenes are cinematic, and Coonts’ insider perspective adds a layer of realism that most military thrillers lack.

    Benjamin L. Darcie’s audiobook narration deserves special mention. His delivery captures both the tension of flight and the quieter moments of introspection, making the story engaging from takeoff to landing.

    In the end, Flight of the Intruder is an exciting, well-crafted piece of military fiction — a mix of technical precision, human drama, and the moral gray zones of wartime decision-making. Even with a few implausible turns, it’s a journey worth taking for anyone fascinated by aviation or naval life.

  • Three Simple Rules for Making AI Work

    Everyone agrees that data powers AI, but very few use it wisely. Data is often described as the fuel for the machine learning engine, yet it rarely arrives clean or ready to use. Real success in AI depends not only on having data but on knowing what kind of data matters, how to build around it, and how to avoid common mistakes.

    Rule One: Data is the Heart of Your Business

    Data is not just numbers in a file. It is a reflection of your business. You define what the inputs and outputs mean. That definition shapes how AI learns and what it can do.

    Data Set                                  | Potential A (Input) | Potential B (Output)
    House Size, Bedrooms, Price               | (Size, Bedrooms)    | Price
    Machine Temperature, Pressure             | (Temp, Pressure)    | Machine Failure (Yes or No)
    Customer Purchase History, Price Offered  | (History, Price)    | Product Purchase (Yes or No)

    Each of these examples shows how data directly connects to your business question. You are not just collecting numbers. You are deciding what matters.

    Data also comes in two main forms. Structured data fits neatly in tables and spreadsheets, such as housing prices or temperature logs. Unstructured data includes things like images, audio, and text that humans understand easily but machines need help with. Generative AI often works best with unstructured data, while supervised learning handles both types very well.

    When you think about your data, start by asking what problem you are solving. The value of data appears only when it connects to a real business case.
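
    To make this concrete, here is a minimal Python sketch of the house-price row from the table above, expressed as (A, B) pairs for a supervised model. The numbers and column choices are invented for illustration, not real data.

    ```python
    # A minimal, illustrative sketch: the house-price row from the table above
    # expressed as (A, B) pairs. The numbers are invented for demonstration only.
    from sklearn.linear_model import LinearRegression

    # A: the inputs the business decided matter (size in sq ft, number of bedrooms)
    A = [
        [1400, 3],
        [1800, 4],
        [1100, 2],
        [2400, 4],
    ]

    # B: the output the business cares about (price)
    B = [250_000, 340_000, 190_000, 450_000]

    model = LinearRegression()
    model.fit(A, B)                      # learn the A -> B mapping

    print(model.predict([[1600, 3]]))    # estimated price for an unseen house
    ```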

    Rule Two: Keep Improving Through Iteration

    Building an AI system is not a one-time task. It is a loop that repeats again and again. Every successful AI follows this same pattern.

    First, you collect the data that contains your inputs and the matching outputs. Next, you train the model so it can learn to move from A to B. The first version usually fails. That is expected. The team must adjust, fine-tune, and try again many times.

    Once the model starts performing well, it is deployed into the real world. That is where the real learning begins. For example, a speech model might work perfectly in a lab but fail to understand accents or noisy environments once it is in use. A self-driving car might misread new vehicle types like golf carts.

    Every time this happens, the data from those failures becomes valuable. It flows back to the AI team, who retrain and improve the model. This constant cycle of feedback, learning, and updating is what makes AI systems smarter and more reliable over time.
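
    To make the shape of that loop concrete, here is a deliberately toy Python sketch. Every function is a trivial stand-in for whatever your team actually uses; the structure of the cycle, not the contents, is the point.

    ```python
    # A toy sketch of the collect -> train -> deploy -> learn-from-failures loop.
    # All functions below are placeholders invented for illustration.

    def collect_data():
        # inputs (A) paired with the matching outputs (B)
        return [(1, 2), (2, 4), (3, 6)]

    def train_model(dataset):
        # "learn" the A -> B rule; here it trivially averages the B/A ratio
        return sum(b / a for a, b in dataset) / len(dataset)

    def evaluate(model, dataset):
        # total error of the current model on the data it has seen
        return sum(abs(a * model - b) for a, b in dataset)

    def gather_real_world_failures():
        # cases that deployment surfaced and the lab data never covered
        return [(10, 21)]

    dataset = collect_data()
    model = train_model(dataset)          # the first version usually falls short

    for iteration in range(3):            # adjust, fine-tune, try again
        error = evaluate(model, dataset)
        dataset += gather_real_world_failures()   # failures flow back to the team
        model = train_model(dataset)               # retrain and redeploy
        print(f"iteration {iteration}: error {error:.2f}")
    ```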

    Rule Three: Avoid the Common Misuses of Data

    Many organizations stumble not because they lack data, but because they misunderstand it. Here are three mistakes that leaders often make and how to avoid them.

    Mistake One – The Long IT Plan

    A company decides to build the perfect IT setup first and promises to collect the perfect dataset in a few years. By the time they are ready, the business needs have already changed.
    The better approach is to get your AI engineers involved early. They can tell you what kind of data to record and how often. A small change, like capturing machine readings every minute instead of every ten, can make a big difference in model quality.

    Mistake Two – Assuming More Data Means Better Data

    Some teams believe that having terabytes of data automatically means success. In reality, most of that data may not even connect to the problem they are trying to solve.
    Before collecting or buying more, talk to your AI team about what kind of data is truly useful. Quality and relevance matter much more than size.

    Mistake Three – Ignoring the Quality of Data

    Data is rarely perfect. It may contain wrong labels, missing values, or strange entries. If these errors go unchecked, the model will learn incorrect patterns.
    A skilled AI team can clean and organize the data so that the system learns the right things. This process may not sound exciting, but it determines whether your AI succeeds or fails.
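
    As a small illustration of what those checks can look like in practice, here is a Python sketch that flags missing values, strange entries, and wrong labels before training. The column names, values, and valid ranges are made up for the example.

    ```python
    # A simple illustration of basic data-quality checks before training.
    # Column names, values, and valid ranges are invented for this example.
    import pandas as pd

    readings = pd.DataFrame({
        "temperature": [71.2, None, 68.9, 540.0],     # one missing, one implausible
        "pressure":    [30.1, 29.8, None, 30.3],
        "failure":     ["no", "no", "yes", "maybe"],  # "maybe" is a wrong label
    })

    # Flag rows the model should not learn from
    problems = (
        readings.isna().any(axis=1)                    # missing values
        | ~readings["temperature"].between(-40, 150)   # strange entries
        | ~readings["failure"].isin(["yes", "no"])     # wrong labels
    )

    clean = readings[~problems]
    print(f"kept {len(clean)} of {len(readings)} rows")
    ```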

    Turning Data into Real Value

    True AI success does not come from hype or futuristic dreams. It comes from disciplined use of supervised learning and a smart, iterative approach to data. When you understand your business inputs and outputs, build your models step by step, and keep refining your data, you unlock real and measurable value.

    AI is not about chasing magic. It is about turning A into B, one clean, well-understood dataset at a time.

  • How A to B Mapping Powers Modern AI

    If AI is the car, then Machine Learning is the engine. And the most powerful engine inside it is something called Supervised Learning. Understanding this one idea helps you see how most of today’s AI really works.

    The Simple Idea of A to B Mapping

    At its heart, supervised learning is about learning how to go from one thing to another. The input is called A, and the desired output is called B. It may sound almost too simple, but this pattern is the foundation of nearly every AI system in use today.

    Input (A)            | Output (B)        | Example Use
    Email text           | Spam or Not Spam  | Email filters
    Audio clip           | Text words        | Speech recognition
    Image and radar data | Car positions     | Self-driving cars
    Ad and user info     | Click or No Click | Online advertising
    A few words          | The next word     | Generative AI like ChatGPT

    This basic mapping is how machines learn patterns. It turns data into predictions and predictions into decisions.
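
    As an illustration of how small the core idea is, here is a minimal Python sketch of the first row of the table, email text (A) mapped to spam or not spam (B), using scikit-learn. The messages and labels are invented for the example.

    ```python
    # A minimal A -> B example: email text (A) mapped to spam / not spam (B).
    # The messages and labels are invented purely for illustration.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    A = [
        "win a free prize now",
        "meeting moved to 3pm",
        "claim your free reward today",
        "lunch tomorrow?",
    ]
    B = ["spam", "not spam", "spam", "not spam"]

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(A, B)                                # learn the A -> B mapping

    print(model.predict(["free prize waiting"]))   # most likely -> ['spam']
    ```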

    Why Supervised Learning Took Off

    Supervised learning has been around for many years. For a long time, its progress was slow and steady, and the results felt limited. Then something changed. The arrival of neural networks and the rise of deep learning completely transformed what was possible.

    To understand why, imagine plotting a graph where performance rises as you feed more data into an AI system. In older systems, the performance curve would rise a little and then flatten out. No matter how much extra data you added, the system would stop improving. It simply could not learn more.

    Neural networks, on the other hand, behaved differently. They kept improving as more data was added. A small network showed some improvement. A larger one did better. And when the models grew huge, their performance kept climbing higher and higher. The curve never seemed to flatten.

    This change was not just about smarter ideas. It was about scale. Two things came together at the right time. First, companies started collecting massive amounts of data from the internet, apps, and sensors. Second, hardware like GPUs made it possible to train very large models much faster.

    These two forces, data and compute, gave supervised learning a new life. Suddenly, models that once struggled could now learn patterns far beyond human imagination. That breakthrough is what pushed AI from the lab into the real world, powering tools like speech recognition, image search, and later, large language models such as ChatGPT.

    Generative AI is Just Bigger Supervised Learning

    Large Language Models such as ChatGPT may look magical, but they are built on the same foundation as supervised learning. The only real difference is scale. Instead of training on small datasets, they learn from hundreds of billions of words gathered from the internet.

    The task they perform is simple. The model reads a sequence of words and tries to predict the next one. For example, if the training text says “My favorite drink is lychee bubble tea,” the model learns that the phrase “My favorite drink is” is usually followed by “lychee.” It stores this connection as one of countless A to B mappings.
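
    A rough sketch of how a sentence like that becomes A-to-B training pairs might look like this. Real models split text into subword tokens and train on billions of examples; the naive word-level split below only shows the shape of the data.

    ```python
    # Turning one sentence into A -> B training pairs for next-word prediction.
    # Real LLMs use subword tokens over hundreds of billions of words; this
    # naive word-level split is only meant to show the shape of the data.
    sentence = "My favorite drink is lychee bubble tea"
    words = sentence.split()

    pairs = [(" ".join(words[:i]), words[i]) for i in range(1, len(words))]

    for a, b in pairs:
        print(f"A: {a!r:45} -> B: {b!r}")
    # e.g. A: 'My favorite drink is' -> B: 'lychee'
    ```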

    When this process is repeated millions of times, the model slowly builds an understanding of language. It learns how words fit together, how ideas connect, and how context shapes meaning. Over time, it becomes capable of generating text that sounds natural, answers questions, and even reasons through complex topics.

    So while it feels like the model is thinking or creating, it is really applying the same principle that powers all supervised learning. It looks at an input and predicts an output. The scale and training data make it seem intelligent, but at its core, it is still the same A to B mapping that drives every part of modern AI.

    The Hidden Power of Simplicity

    The beauty of supervised learning is how something so simple powers almost everything. From your phone’s photo app to voice assistants to autonomous cars to AI writing tools, it all begins with the same idea — learn to go from input to output.

    Big data and big models turned that small idea into a trillion-dollar industry. And the journey from A to B is still far from over.

  • Rethinking Resilience in the Age of Agentic AI

    A short while back, I wrote a series on Resilience, focusing on why automated recovery isn’t optional anymore. (If you missed the first post, you can find it here: [The Unseen Heroes: Why Automated System Recovery Isn’t Optional Anymore]).

    The argument that human speed cannot match machine speed is now facing its ultimate test. We are witnessing the rise of Agentic AI: a new class of autonomous attacker that operates at machine velocity, capable of learning, adapting, and executing a complete breach before human teams even fully wake up.

    This evolution demands more than recovery; it requires an ironclad strategy for automated, complete infrastructure rebuild.

    Autonomy That Learns and Adapts

    For years, the threat landscape escalated from small hacking groups to the proliferation of the Ransomware-as-a-Service (RaaS) model. RaaS democratized cybercrime, allowing moderately skilled criminals to rent sophisticated tools on the dark web for a subscription fee (learn more about the RaaS model here: What is Ransomware-as-a-Service (RaaS)?).

    The emergence of Agentic AI is the next fundamental leap.

    Unlike Generative AI, which simply assists with tasks, Agentic AI is proactive, autonomous, and adaptive. These AI agents don’t follow preprogrammed scripts; they learn on the fly, tailoring their attack strategies to the specific environment they encounter.

    For criminals, Agentic AI is a powerful tool because it drastically lowers the barrier to entry for sophisticated attacks. By automating complex tasks like reconnaissance and tailored phishing, these systems can orchestrate campaigns faster and more affordably than hiring large teams of human hackers, ultimately making cybercrime more accessible and attractive (Source: UC Berkeley CLTC).

    Agentic ransomware represents a collection of bots that execute every step of a successful attack faster and better than human operators. The implications for recovery are profound: you are no longer fighting a team of humans, but an army of autonomous systems.

    The Warning Signs Are Already Here

    Recent high-profile incidents illustrate that no industry is safe, and the time-to-breach window is shrinking:

    • Change Healthcare (Early 2024): This major incident demonstrated how a single point of failure can catastrophically disrupt the U.S. healthcare system, underscoring the severity of supply-chain attacks (Read incident details here).
    • Snowflake & Ticketmaster (Mid-2024): A sophisticated attack that exploited stolen credentials to compromise cloud environments, leading to massive data theft and proving that third-party cloud services are not magically resilient on their own (Learn more about the Snowflake/Ticketmaster breach).
    • The Rise of Non-Human Identity (NHI) Exploitation (2025): Security experts warn that 2025 is seeing a surge in attacks exploiting Non-Human Identities (API keys, service accounts). These high-privilege credentials, often poorly managed, are prime targets for autonomous AI agents seeking to move laterally without detection (Read more on 2025 NHI risks).

    The Myth of Readiness in a Machine-Speed World

    When faced with an attacker operating at machine velocity, relying solely on prevention-focused security creates a fragile barrier.

    So, why do well-funded organizations still struggle? In many cases, the root cause lies within. Organizations are undermined by a series of internal fractures:

    Siloed Teams and Fragmented Processes

    When cybersecurity, cloud operations, application development, and business-continuity teams function in isolation, vital information becomes trapped inside departmental silos: knowledge of application dependencies, network configurations, or privileged credentials may live in only one team or one tool. Here are some examples –

    • A Cisco Systems white paper shows how siloed NetOps and SecOps teams delay the detection and containment of vulnerability events, undermining resilience.
    • An industry article highlights that when delivering a cloud-based service like Microsoft Teams, issues span device, network, security, service-owner, and third-party teams; when each team only asks “is this our problem?”, finding the root cause is delayed.

    Organizations must now –

    • Integrate cross-functional teams and ensure shared ownership of outcomes.
    • Map and document critical dependencies across teams (apps, networks, credentials).
    • Use joint tools and run-books so knowledge isn’t locked in one group.

    Runbooks That Are Theoretical, Not Executable

    Policies and operational run-books often exist only as Wiki or Confluence pages and are rarely tested end-to-end against a real-world crisis. When a disruption hits, these “prepare-on-paper” plans prove next to useless because they have never been executed, updated, or validated in context. Some examples that illustrate this –

    • A study of cloud migration failures emphasises that most issues aren’t purely technical, but stem from poor processes, unclear roles, and untested plans.
    • In the context of cloud migrations, the guidance “Top 10 Unexpected Cloud Migration Challenges” emphasises that post-migration testing and refinement are often skipped. This means that even when systems are live, recovery paths may not exist.

    The path forward is to –

    • Validate and rehearse run-books using realistic simulations, not just table-top reviews.
    • Ensure that documentation is maintained in a form that can be executed (scripts, automation, playbooks), not just “slides” – a minimal sketch follows this list.
    • Assign clear roles, triggers and escalation paths—every participant must know when and how they act.
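
    As an illustration of “executable rather than slides”, here is a minimal Python sketch of a single runbook step with an explicit action and verification. The service name, restart logic, and health check are placeholders, not a real procedure.

    ```python
    # A minimal sketch of a runbook step as executable code rather than a wiki page.
    # The service name, restart logic, and health check are placeholders.

    def restart_service(name: str) -> None:
        # Placeholder action: replace with your real tooling (systemctl, kubectl, an API call...)
        print(f"restarting {name}")

    def is_healthy(name: str) -> bool:
        # Placeholder verification: replace with a real health check
        print(f"health-checking {name}")
        return True

    def step_restart_api_gateway() -> None:
        """Runbook step: restart the API gateway and verify it is healthy."""
        restart_service("api-gateway")               # action
        if not is_healthy("api-gateway"):            # verification: the step is not done until this passes
            raise RuntimeError("api-gateway failed its health check; escalate to the on-call lead")

    if __name__ == "__main__":
        step_restart_api_gateway()
    ```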

    Over-Reliance on Cloud Migration as a Guarantee of Resilience

    Many organisations assume that migrating to the cloud automatically improves resilience. In reality, cloud migration only shifts the complexity: without fully validated rebuild paths, end-to-end environment re-provisioning and regular recovery testing, cloud-based systems can still fail under crisis.

    Real-world examples bring this challenge into focus –

    • A recent issue reported by Amazon Web Services (AWS) showed thousands of organisations facing outages due to a DNS error, reminding us that even “trusted” cloud platforms aren’t immune, and that simply “being in the cloud” doesn’t equal resilience.
    • Research shows that “1 in 3 enterprise cloud migrations fail” to meet schedule or budget expectations, partly because of weak understanding of dependencies and recovery requirements.

    These examples underscore the importance of the following –

    • Treat cloud migration as an opportunity to rebuild resiliency, not assume it comes for free.
    • Map and test full application environment re-builds (resources, identities, configurations) under worst-case conditions.
    • Conduct regular fail-over and rebuild drills; validate that recovery is end-to-end and not just infrastructure-level.

    The risk is simple: the very worst time to discover a missing configuration file or an undocumented dependency is during your first attempt at a crisis rebuild.

    Building Back at Machine Speed

    The implications of Agentic AI are clear: you must be able to restore your entire infrastructure to a clean point-in-time state faster than the attacker can cause irreparable damage. The goal is no longer recovery (restoring data to an existing system), but a complete, automated rebuild.

    This capability rests on three pillars:

    1. Comprehensive Metadata Capture: Rebuilding requires capturing all relevant metadata—not just application data, but the configurations, Identity and Access Management (IAM) policies, networking topologies, resource dependencies, and API endpoints. This is the complete blueprint of your operational state.
    2. Infrastructure as Code (IaC): The rebuild process must be entirely code-driven. This means integrating previously manual or fragmented recovery steps into verifiable, executable code. IaC ensures that the environment is built back exactly as intended, eliminating human error.
    3. Automated Orchestration and Verification: This pillar ties the first two together. The rebuild cannot be a set of sequential manual scripts; it must be a single, automated pipeline that executes the IaC, restores the data/metadata, and verifies the new environment against a known good state before handing control back to the business. This orchestration ensures the rapid, clean point-in-time restoration required.

    By making your infrastructure definition and its restoration process code, you match the speed of the attack with the speed of your defense.
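
    As a very rough sketch of what “restoration as code” can look like, here is a Python outline of an orchestration pipeline that applies infrastructure-as-code, restores data and metadata, and verifies the result before handing control back. Every step below is a print placeholder; substitute your real IaC, restore, and verification tooling.

    ```python
    # A rough outline of an automated rebuild pipeline: recreate infrastructure from
    # code, restore data and metadata, then verify against a known-good baseline.
    # All steps are placeholders for illustration only.

    def apply_infrastructure_as_code(environment: str) -> None:
        print(f"applying IaC templates for {environment}")   # e.g. Terraform, CloudFormation, Pulumi

    def restore_data_and_metadata(environment: str, restore_point: str) -> None:
        print(f"restoring data, configs, IAM policies, and network topology "
              f"for {environment} as of {restore_point}")

    def verify_against_baseline(environment: str) -> bool:
        print(f"verifying {environment} against the known-good baseline")
        return True

    def rebuild(environment: str, restore_point: str) -> None:
        apply_infrastructure_as_code(environment)
        restore_data_and_metadata(environment, restore_point)
        if not verify_against_baseline(environment):
            raise RuntimeError(f"{environment} failed verification; do not hand control back")
        print(f"{environment} rebuilt and verified from {restore_point}")

    if __name__ == "__main__":
        rebuild("staging", "2024-01-01T00:00:00Z")
    ```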

    Resilience at the Speed of Code

    Automating the full rebuild process transforms disaster recovery testing from an expensive chore into a strategic tool for cost optimization and continuous validation.

    Traditional disaster recovery tests are disruptive, costly, and prone to human error. When the rebuild is fully automated:

    • Validated Resilience: Testing can be executed frequently—even daily—without human intervention, providing continuous, high-confidence validation that your environment can be restored to a secure state.
    • Cost Efficiency: Regular automated rebuilds act as an audit tool. If the rebuild process reveals that your production environment only requires 70% of the currently provisioned resources to run effectively, you gain immediate, actionable insight for reducing infrastructure costs.
    • Simplicity and Consistency: Automated orchestration replaces complex, documented steps with verifiable, repeatable code, lowering operational complexity and the reliance on individual expertise during a high-pressure incident.

    Agentic AI has closed the window for slow, manual response. Resilience now means embracing the speed of code—making your restoration capability as fast, autonomous, and adaptive as the threat itself.

  • The AI Gold Rush: Why Narrow Intelligence Will Generate Trillions

    There’s no doubt that Artificial Intelligence (AI) is the technology of our time, promising to reshape industries and create unprecedented economic value. But beneath the headlines and hype, what exactly is driving this revolution, and where is the estimated $13 to $22 trillion in annual value going to come from by 2033?

    The source for this valuation is a landmark 2018 report from the McKinsey Global Institute (MGI) on the impact of AI on the world economy. You can find the full details of this modeling here: Notes from the AI frontier: Modeling the impact of AI on the world economy.

    The answer requires demystifying AI and understanding its three core ideas.

    The Three Faces of AI

    The term “AI” can be confusing because it refers to three distinct concepts:

    Artificial Narrow Intelligence (ANI)

    Artificial Narrow Intelligence (ANI) is the powerhouse driving the vast majority of AI value creation today. Unlike the broader, theoretical concept of human-level AI, ANI refers specifically to focused AI systems designed to master one specific task incredibly well. These systems are essentially “one-trick ponies,” but when the trick is appropriately chosen, the resulting impact is transformative and lucrative. Common examples include –

    • the spam filters that protect your inbox
    • the algorithms that power object detection in self-driving cars
    • the smart speakers that recognize your wake word, and
    • the automated visual inspection systems used in manufacturing to spot tiny defects in products coming off the assembly line.

    This high-impact, focused nature is why studies, such as those by the McKinsey Global Institute, consistently suggest that the largest portion of the projected multi-trillion dollar future value of AI will be unlocked through these narrow, yet highly optimized, applications, frequently relying on the machine learning technique known as Supervised Learning.

    Generative AI (GenAI)

    Generative AI, or GenAI, is absolutely the newest superstar in the tech world. This kind of AI has rapidly become famous because it can produce amazing, high-quality content—things like coherent text, realistic images, and even audio. It’s fundamentally expanded what we thought AI could do. For instance, tools like ChatGPT aren’t just single-task programs; they can jump between being a helpful copy editor, a creative brainstorming partner, or a concise text summarizer all in one go. Even though GenAI grabs a lot of headlines, it’s expected to account for a smaller, but still massive, slice of the total economic pie; we’re talking about $4 trillion annually. Its success is blurring the lines, making it seem like the previously narrow AI (ANI) can now handle much more general-purpose tasks.

    Artificial General Intelligence (AGI)

    Artificial General Intelligence, or AGI, is the stuff of science fiction. It’s the big dream: creating an AI smart enough to handle any intellectual task a person can do, maybe even becoming super-intelligent, capable of far more than any human. But here’s the reality check: we are still very far away from achieving AGI. The recent exciting and valuable steps we’ve made with narrow AI (ANI) and generative AI (GenAI) are impressive, sure, yet they often trick people into thinking AGI is right around the corner. Instead, it’s better to think of our current progress as tiny, promising baby steps leading toward what is still a distant, long-term goal.

    Beyond Software: Where AI Will Strike Next

    AI has already created tremendous value in the world of software. But we must understand that the biggest future opportunities, and the most exciting ones, lie outside the tech sector itself. We’re talking about massive transformation coming to industries like retail, travel, transportation, manufacturing, and the automotive sector. Seriously, it’s hard to name a single industry that won’t be hugely impacted by this technology in the next few years. The real trick, the big challenge for companies that don’t build software, is figuring out exactly which specific, narrow “tricks” those powerful, focused AI applications (ANI) can perform to unlock this huge, multi-trillion dollar potential right within their own daily operations.