Ajitabh Pandey's Soul & Syntax

Exploring systems, souls, and stories – one post at a time

Category: Tips/Code Snippets

  • Why Systemd Timers Outshine Cron Jobs

    For decades, cron has been the trusty workhorse for scheduling tasks on Linux systems. Need to run a backup script daily? cron was your go-to. But as modern systems evolve and demand more robust, flexible, and integrated solutions, systemd timers have emerged as a superior alternative. Let’s roll up our sleeves and dive into the strategic advantages of systemd timers, then walk through their design and implementation.

    Why Ditch Cron? The Strategic Imperative

    While cron is simple and widely understood, it comes with several inherent limitations that can become problematic in complex or production environments:

    • Limited Visibility and Logging: cron offers basic logging (often just mail notifications) and lacks a centralized way to check job status or output. Debugging failures can be a nightmare.
    • No Dependency Management: cron jobs are isolated. There’s no built-in way to ensure one task runs only after another has successfully completed, leading to potential race conditions or incomplete operations.
    • Missed Executions on Downtime: If a system is off during a scheduled cron run, that execution is simply missed. This is critical for tasks like backups or data synchronization.
    • Environment Inconsistencies: cron jobs run in a minimal environment, often leading to issues with PATH variables or other environmental dependencies that work fine when run manually.
    • No Event-Based Triggering: cron is purely time-based. It cannot react to system events like network availability, disk mounts, or the completion of other services.
    • Concurrency Issues: cron doesn’t inherently prevent multiple instances of the same job from running concurrently, which can lead to resource contention or data corruption (a common workaround is sketched right after this list).
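
    As an aside, the traditional cron-side mitigation for the concurrency problem is to wrap the job in flock(1). A minimal sketch, with an illustrative script path and lock file:

    # Skip this run if the previous one still holds the lock (-n = don't wait)
    */5 * * * * /usr/bin/flock -n /tmp/my-job.lock /usr/local/bin/my-job.sh

    systemd needs no such wrapper: a service that is still running is simply not started again.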

    systemd timers, on the other hand, address these limitations by leveraging the full power of the systemd init system. (We’ll dive deeper into the intricacies of the systemd init system itself in a future post!)

    • Integrated Logging with Journalctl: All output and status information from systemd timer-triggered services is meticulously logged in the systemd journal, making debugging and monitoring significantly easier (journalctl -u your-service.service; see the monitoring sketch after this list).
    • Robust Dependency Management: systemd allows you to define intricate dependencies between services. A timer can trigger a service that requires another service to be active, ensuring proper execution order.
    • Persistent Timers (Missed Job Handling): With the Persistent=true option, systemd timers will execute a missed job immediately upon system boot, ensuring critical tasks are never truly skipped.
    • Consistent Execution Environment: systemd services run in a well-defined environment, reducing surprises due to differing PATH or other variables. You can explicitly set environment variables within the service unit.
    • Flexible Triggering Mechanisms: Beyond simple calendar-based schedules (like cron), systemd timers support monotonic timers (e.g., “5 minutes after boot”) and can be combined with other systemd unit types for event-driven automation.
    • Concurrency Control: systemd inherently manages service states, preventing multiple instances of the same service from running simultaneously unless explicitly configured to do so.
    • Granular Control: Timers can fire with much finer precision than cron‘s minute-level resolution when you tighten AccuracySec= (which defaults to one minute).
    • Randomized Delays: RandomizedDelaySec can be used to prevent “thundering herd” issues where many timers configured for the same time might all fire simultaneously, potentially overwhelming the system.
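
    To make the visibility gains concrete, here is a quick monitoring sketch (the unit names are the placeholders used later in this post):

    # List all timers with their last and next activation times
    systemctl list-timers --all

    # Review everything the triggered service logged today
    journalctl -u your-task.service --since today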

    Designing Your Systemd Timers: A Two-Part Harmony

    systemd timers operate in a symbiotic relationship with systemd service units. You typically create two files for each scheduled task:

    1. A Service Unit (.service file): This defines what you want to run (e.g., a script, a command).
    2. A Timer Unit (.timer file): This defines when you want the service to run.

    Both files are usually placed in /etc/systemd/system/ for system-wide timers or ~/.config/systemd/user/ for user-specific timers.

    The Service Unit (your-task.service)

    This file is a standard systemd service unit. A basic example:

    [Unit]
    Description=My Daily Backup Service
    # Optional: ensure the network is up before running
    # (note: in unit files, comments must be on their own line)
    Wants=network-online.target
    After=network-online.target

    [Service]
    # oneshot: for scripts that run and exit
    Type=oneshot
    # The script to execute (always an absolute path)
    ExecStart=/usr/local/bin/backup-script.sh
    # Run as a specific user and group (optional, but good practice)
    User=youruser
    Group=yourgroup
    # Example: set a custom PATH
    # Environment="PATH=/usr/local/bin:/usr/bin:/bin"

    [Install]
    # Not strictly necessary for timers, but useful for direct invocation
    WantedBy=multi-user.target
    

    Strategic Design Considerations for Service Units:

    • Type=oneshot: Ideal for scripts that perform a task and then exit.
    • ExecStart: Always use absolute paths for your scripts and commands to avoid environment-related issues.
    • User and Group: Run services with the least necessary privileges. This enhances security.
    • Dependencies (Wants, Requires, After, Before): Leverage systemd‘s powerful dependency management. For example, Wants=network-online.target combined with After=network-online.target ensures the network is up before the service starts.
    • Error Handling within Script: While systemd provides good logging, your scripts should still include robust error handling and exit with non-zero status codes on failure (see the sketch after this list).
    • Output: Direct script output to stdout or stderr. journald will capture it automatically. Avoid sending emails directly from the script unless absolutely necessary; systemd‘s logging is usually sufficient.
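
    To tie the last two points together, here is a minimal sketch of what /usr/local/bin/backup-script.sh might look like (the paths and the rsync invocation are illustrative, not prescriptive):

    #!/bin/bash
    # Fail fast on errors, unset variables, and failed pipeline stages
    set -euo pipefail

    SRC="/home/youruser/data"
    DEST="/backup/data"

    # Plain echo is enough -- journald captures stdout/stderr automatically
    echo "Starting backup of ${SRC} to ${DEST}"

    if ! rsync -a --delete "${SRC}/" "${DEST}/"; then
        echo "Backup failed" >&2
        exit 1   # non-zero exit marks this service run as failed in systemd
    fi

    echo "Backup completed successfully"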

    The Timer Unit (your-task.timer)

    This file defines the schedule for your service.

    [Unit]
    Description=Timer for My Daily Backup Service

    [Timer]
    # Explicitly link the timer to its service (defaults to the timer's base name)
    Unit=your-task.service
    # Run every day at midnight (what 'daily' means)
    OnCalendar=daily
    # Other examples:
    # OnCalendar=*-*-* 03:00:00    (every day at 3 AM)
    # OnCalendar=Mon..Fri 18:00:00 (weekdays at 6 PM)
    # OnBootSec=5min               (5 minutes after boot)
    # If the system was off at the scheduled time, run immediately on next boot
    Persistent=true
    # Add up to 5 minutes of random delay to prevent stampedes
    RandomizedDelaySec=300

    [Install]
    # Essential for the timer to be enabled at boot
    WantedBy=timers.target
    

    Strategic Design Considerations for Timer Units:

    • OnCalendar: This is your primary scheduling mechanism. systemd offers a highly flexible calendar syntax (refer to man systemd.time for full details). Use systemd-analyze calendar "your-schedule" to test your expressions, as demonstrated after this list.
    • OnBootSec: Useful for tasks that need to run a certain duration after the system starts, regardless of the calendar date.
    • Persistent=true: Crucial for reliability! This ensures your task runs even if the system was powered off during its scheduled execution time. The task will execute once systemd comes back online.
    • RandomizedDelaySec: A best practice for production systems, especially if you have many timers. This spreads out the execution of jobs that might otherwise all start at the exact same moment.
    • AccuracySec: Defaults to 1 minute, meaning activations may be coalesced and fire up to a minute late. Lower it (e.g. AccuracySec=1s) when you need tighter precision.
    • Unit: This explicitly links the timer to its corresponding service unit.
    • WantedBy=timers.target: This ensures your timer is enabled and started automatically when the system boots.
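
    Before enabling a timer, it is worth sanity-checking the schedule. A quick sketch using expressions from the example above (systemd-analyze prints the normalized form and, on recent systemd versions, the next elapse time):

    # Validate a calendar expression and see when it would next fire
    systemd-analyze calendar "Mon..Fri 18:00:00"

    # Normalize a shorthand spec such as 'daily'
    systemd-analyze calendar daily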

    Implementation and Management

    1. Create the files: Place your .service and .timer files in /etc/systemd/system/.
    2. Reload systemd daemon: After creating or modifying unit files: sudo systemctl daemon-reload
    3. Enable the timer: This creates a symlink so the timer starts at boot: sudo systemctl enable your-task.timer
    4. Start the timer: This activates the timer for the current session: sudo systemctl start your-task.timer (steps 3 and 4 can be combined; see the sketch after this list)
    5. Check status: sudo systemctl status your-task.timer; sudo systemctl status your-task.service
    6. View logs: journalctl -u your-task.service
    7. Manually trigger the service (for testing): sudo systemctl start your-task.service
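
    As a small convenience, steps 3 and 4 can be collapsed into a single command, and systemctl list-timers confirms the schedule took effect:

    # Enable at boot and start immediately, in one step
    sudo systemctl enable --now your-task.timer

    # Confirm the timer is active and check its next elapse time
    systemctl list-timers your-task.timer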

    Conclusion

    While cron served its purpose admirably for many years, systemd timers offer a modern, robust, and integrated solution for scheduling tasks on Linux systems. By embracing systemd timers, you gain superior logging, dependency management, missed-job handling, and greater flexibility, leading to more reliable and maintainable automation. It’s a strategic upgrade that pays dividends in system stability and ease of troubleshooting. Make the switch and experience the power of a truly systemd-native approach to scheduled tasks.

  • Is crontab not a shell script….really?

    While trying to figure out an error, I found the following line in one of the crontab files and I could not stop myself from smiling.

    PATH=$PATH:/opt/mysoftware/bin

    And that single line perfectly encapsulated the misconception I want to address today: No, a crontab is NOT a shell script!

    It’s a common trap many of us fall into, especially when we’re first dabbling with scheduling tasks on Linux/Unix systems. We’re used to the shell environment, where scripts are king, and we naturally assume crontab operates under the same rules. But as that PATH line subtly hints, there’s a fundamental difference.

    The Illusion of Simplicity: What a Crontab Looks Like

    At first glance, a crontab file seems like it could be a script. You define commands, specify execution times, and often see environmental variables being set, just like in a shell script. Here’s a typical entry:

    0 2 * * * /usr/bin/some_daily_backup.sh

    This tells cron to run /usr/bin/some_daily_backup.sh every day at 2:00 AM. Looks like a command in a script, right? But the key difference lies in how that command is executed.

    Why Crontab is NOT a Shell Script: The Environment Gap

    The critical distinction is this: When cron executes a job, it does so in a minimal, non-interactive shell environment. This environment is significantly different from your interactive login shell (like Bash, Zsh, or even a typical non-login shell script execution).

    Let me break down the implications, and why that PATH line I discovered was so telling:

    Limited PATH

    This is perhaps the most frequent culprit for “my cron job isn’t working!” errors. Your interactive shell has a PATH variable populated with directories where executables are commonly found (e.g., /usr/local/bin, /usr/bin, /bin). The default PATH for cron jobs is often severely restricted, sometimes just to /usr/bin:/bin.

    This means if your script or command relies on an executable located outside of cron’s default PATH (like /opt/mysoftware/bin/mycommand), it simply won’t be found, and the job will fail. Hence that PATH line in the crontab – though note the irony that made me smile: cron does not perform shell expansion on variable assignments, so PATH=$PATH:/opt/mysoftware/bin sets PATH to the literal string $PATH:/opt/mysoftware/bin. The correct form spells out the full value, e.g. PATH=/usr/bin:/bin:/opt/mysoftware/bin.

    Minimal Environment Variables

    Beyond PATH, most other environment variables you rely on in your interactive shell (like HOME, LANG, TERM, or custom variables you’ve set in your .bashrc or .profile) are often not present or have very basic values in the cron environment.

    Consider a script that needs to know your HOME directory to find configuration files. If your cron job simply calls this script without explicitly setting HOME, the script might fail because it can’t locate its resources.
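
    A minimal illustration of the fix – set what the script needs at the top of the crontab, since cron does honor plain KEY=value assignments (the values and script name here are placeholders):

    # Plain assignments work, but remember: no shell expansion happens here
    HOME=/home/youruser
    LANG=en_US.UTF-8
    0 3 * * * /usr/local/bin/uses_home_config.sh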

    No Interactive Features

    Cron jobs run non-interactively. This means:

    • No terminal attached.
    • No user input (prompts, read commands, etc.).
    • No fancy terminal features (like colors or cursor manipulation).
    • No aliases or shell functions defined in your dotfiles.

    If your script assumes any of these, it will likely behave unexpectedly or fail when run by cron.

    Specific Shell Invocation

    While you can specify the shell to be used for executing cron commands (often done with SHELL=/bin/bash at the top of the crontab file), even then, that shell is invoked in a non-login, non-interactive mode. This means it won’t necessarily read your personal shell configuration files (.bashrc, .profile, .zshrc, etc.) unless explicitly sourced.
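
    Putting these pieces together, a defensive crontab preamble might look like the following sketch (values are illustrative; adjust for your system):

    # Choose the shell cron uses -- still non-login and non-interactive
    SHELL=/bin/bash
    # Spell PATH out in full; $PATH would NOT be expanded here
    PATH=/usr/local/bin:/usr/bin:/bin
    # Where cron mails job output (set MAILTO="" to disable mail)
    MAILTO=admin@example.com

    0 2 * * * /usr/bin/some_daily_backup.sh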

    The “Lot of Information” Cron Needs: Practical Examples

    So, if crontab isn’t a shell script, what “information” does it need to operate effectively in this minimalist shell? It needs explicit instructions for everything you take for granted in your interactive session.

    Let’s look at some common “incorrect” entries, what people expected, and how they should be corrected.

    Example 1: Missing the PATH

    The incorrect entry would look something like this:

    0 * * * * my_custom_command
    

    The user’s expectation here was: “I want my_custom_command to run every hour. It works perfectly when I type it in my terminal.”

    my_custom_command is likely located in a directory that’s part of the user’s interactive PATH (e.g., /usr/local/bin/my_custom_command or /opt/mysoftware/bin/my_custom_command). However, cron’s default PATH is usually minimal (/usr/bin:/bin), so it cannot find my_custom_command. The error usually manifests as a “command not found” message mailed to the crontab owner or logged to syslog.

    The fix here would be to always use the full, absolute path to your executables, as shown in the sample entry below:

    0 * * * * /usr/local/bin/my_custom_command
    

    Or, if multiple commands from that path are used, you can set the PATH at the top of the crontab:

    # Add other directories as needed (in a crontab, comments must be on their own line)
    PATH=/usr/local/bin:/usr/bin:/bin
    0 * * * * my_custom_command
    

    Example 2: Relying on Aliases or Shell Functions

    The incorrect entry would look like this:

    @reboot myalias_cleanup
    

    The user’s assumption: “I have an alias myalias_cleanup='rm -rf /tmp/my_cache/*' defined in my .bashrc. I want this cleanup to run every time the system reboots.”

    But aliases and shell functions are defined within your interactive shell’s configuration files (.bashrc, .zshrc, etc.). Cron does not source these files when executing jobs. Therefore, myalias_cleanup is undefined in the cron environment, leading to a “command not found” error.

    The correct approach is to replace aliases or shell functions with the actual commands, or to create a dedicated script.

    # If myalias_cleanup was 'rm -rf /tmp/my_cache/*'
    @reboot /bin/rm -rf /tmp/my_cache/*
    

    Or, if it’s a complex set of commands, put them into a standalone script and call that script:

    # In /usr/local/bin/my_cleanup_script.sh:
    #!/bin/bash
    /bin/rm -rf /tmp/my_cache/*
    # ... more commands
    
    # In crontab:
    @reboot /usr/local/bin/my_cleanup_script.sh
    

    Example 3: Assuming User-Specific Environment Variables

    The incorrect entry in this case looks like:

    0 0 * * * my_script_that_uses_MY_API_KEY.sh
    

    Inside my_script_that_uses_MY_API_KEY.sh:

    #!/bin/bash
    curl "https://api.example.com/data?key=$MY_API_KEY"
    

    The user’s expectation: “I have export MY_API_KEY='xyz123' in my .profile. I want my script to run daily using this API key.”

    This assumption is wrong: just as with aliases, cron does not load your .profile or other user-specific environment files. The MY_API_KEY variable will be undefined in the cron environment, causing the curl command to fail (e.g., “authentication failed” or an empty key parameter).

    To fix this, explicitly set the required environment variables within the crontab or directly within the script. There are two options:

    Option A: In Crontab (good for a few variables specific to the cron job):

    MY_API_KEY="xyz123"
    0 0 * * * /path/to/my_script_that_uses_MY_API_KEY.sh
    

    Option B: Inside the Script (often preferred for script-specific variables):

    #!/bin/bash
    export MY_API_KEY="xyz123" # Or read from a secure config file
    curl "https://api.example.com/data?key=$MY_API_KEY"
    

    Example 4: Relative Paths and Current Working Directory

    The incorrect entry for this example looks like:

    0 1 * * * python my_app/manage.py cleanup_old_data
    

    The user’s expectation: “My Django application lives in /home/user/my_app. When I’m in /home/user/my_app and run python manage.py cleanup_old_data, it works. I want this to run nightly.”

    Again, this assumption is incorrect: when cron executes a job, the current working directory is typically the user’s home directory, not the directory you happened to be in when testing. Relative paths like my_app/manage.py resolve against that working directory, so the entry breaks as soon as the crontab belongs to a different user (e.g. root) or the project moves; likewise, a bare python depends on cron’s minimal PATH. The result is “file not found” or “command not found” errors.

    To fix this, either use absolute paths for the script or explicitly change the directory before executing. Here are the two options:

    Option A: Absolute Path for Script:

    0 1 * * * /usr/bin/python /home/user/my_app/manage.py cleanup_old_data
    

    Option B: Change Directory First (useful if the script itself relies on being run from a specific directory):

    0 1 * * * cd /home/user/my_app && /usr/bin/python manage.py cleanup_old_data
    

    Note the && which ensures the python command only runs if the cd command is successful.

    Example 5: Output Flooding and Debugging

    To illustrate this case, look at the following incorrect example entry:

    */5 * * * * /usr/local/bin/my_chatty_script.sh
    

    The user’s expectation: “I want my_chatty_script.sh to run every 5 minutes.”

    And the job will indeed run; the trap is elsewhere. By default, cron mails any standard output (stdout) or standard error (stderr) from a job to the crontab owner. If my_chatty_script.sh produces a lot of output, it will quickly fill up the user’s mailbox, potentially causing disk space issues or overwhelming the mail server. While not a “failure” of the job itself, it’s a major operational oversight.

    The correct way is to redirect output to a log file or /dev/null for production jobs.

    Redirect to a log file (recommended for debugging and auditing):

    */5 * * * * /usr/local/bin/my_chatty_script.sh >> /var/log/my_script.log 2>&1
    
    • >> /var/log/my_script.log appends standard output to the log file.
    • 2>&1 redirects standard error (file descriptor 2) to the same location as standard output (file descriptor 1).

    Discard all output (for jobs where output is not needed):

    */5 * * * * /usr/local/bin/my_quiet_script.sh > /dev/null 2>&1
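
    One final debugging trick worth keeping handy: have cron dump the environment it actually gives your jobs, then compare it with your interactive shell (the output path is arbitrary):

    # Temporary entry -- capture cron's environment, then remove the entry
    * * * * * env > /tmp/cron-env.txt 2>&1

    Diffing /tmp/cron-env.txt against the output of env in your terminal usually explains a mysterious failure in seconds.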
    

    The Takeaway

    The smile I had when I saw that PATH line in a crontab file was the smile of recognition – recognition of a fundamental operational truth. Crontab is a scheduler, a timekeeper, an orchestrator of tasks. It’s not a shell interpreter.

    Understanding this distinction is crucial for debugging cron job failures and writing robust, reliable automated tasks. Always remember: when cron runs your command, it’s in a stark, bare-bones environment. You, the administrator (or developer), are responsible for providing all the context and information your command or script needs to execute successfully.

    So next time you’re troubleshooting a cron job, don’t immediately blame the script. First, ask yourself: “Does this script have all the information and the right environment to run in the minimalist world of cron?” More often than not, the answer lies there.

  • Using Telegram for Automation Using Python Telethon Module


    Telegram is a cloud-based messaging application which provides an excellent set of APIs that allows developers to build automation on top of the platform. It is increasingly being used to automate various notifications and messages, and it has become a platform of choice for creating bots which interact with users and groups.

    Telethon is an asyncio Python 3 library for interacting with the Telegram API. It is one of the most exhaustive libraries available, allowing you to interact with the Telegram API as a user or as a bot.

    Recently I have written some AWS Lambda functions to automate certain personal notifications. I could have run the code as a container on one of my VPSs or on Heroku or other platforms, but I took this exercise as an opportunity to learn more about serverless and functions. Also, my kind of load is something which can easily fall under the Lambda free tier.

    In this post we will look into the process of how to start with the development and write some basic python applications.

    Registering As a Telegram Developer

    The following steps can be followed to obtain the API ID for Telegram –

    • Sign up for Telegram using any application
    • Login to the https://my.telegram.org/ website using the same mobile number. Telegram will send you a confirmation code in the Telegram application. After entering the confirmation code, you will see the following screen –
    Screenshot of Telegram Core Developer Page
    • In the above screen, select API Development Tools and complete the form. This page will provide some basic information in addition to the api_id and api_hash.

    Setting up Telethon Development Environment

    I assume that the reader is familiar with basic Python and knows how to set up a virtual environment, so rather than explaining that, I will focus on the quick commands to get the development environment up and running.

    $ mkdir telethon-dev && cd telethon-dev 
    $ python3 -m venv venv-telethon
    $ source venv-telethon/bin/activate
    (venv-telethon) $ pip install --upgrade pip
    (venv-telethon) $ pip install telethon
    (venv-telethon) $ pip install python-dotenv

    Obtaining The Telegram Session

    I will be using a .env file to store the api_id and api_hash so that they can be used in the code we will write. Replace NNNNN with your api_id and XX with your api_hash –

    TELEGRAM_API_ID=NNNNN
    TELEGRAM_API_HASH=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

    Next we will need to create a session to be used in our code. For full automation, we need to store the session either as a file or as a string. Since cloud environments destroy the ephemeral storage they provide, I will store the session as a string. The following Python code will obtain it.

    #! /usr/bin/env python3
    
    import os
    
    from dotenv import load_dotenv
    
    from telethon.sync import TelegramClient
    from telethon.sessions import StringSession
    
    load_dotenv()
    
    # api_id must be an integer; os.getenv() returns strings
    with TelegramClient(StringSession(), int(os.getenv("TELEGRAM_API_ID")), os.getenv("TELEGRAM_API_HASH")) as client:
        print(client.session.save())

    When this code is executed, it will prompt for your phone number, which you need to enter with the country code. Next, an authorization code will arrive in the Telegram application; enter it at the prompt. Once the authorization code is typed correctly, the session will be printed as a string on standard output. You will need to save this string.

    (venv-telethon) $ ./get_string_session.py
     Please enter your phone (or bot token): +91xxxxxxxxxx
     Please enter the code you received: zzzzz
    Signed in successfully as KKKKKK KKKKKKK
    9vznqQDuX2q34Fyir634qgDysl4gZ4Fhu82eZ9yHs35rKyXf9vznqQDuX2q34Fyir634qgDyslLov-S0t7KpTK6q6EdEnla7cqGD26N5uHg9rFtg83J8t2l5TlStCsuhWjdzbb29MFFSU5-l4gZ4Fhu9vznqQDuX2q34Fyir634qgDysl9vznqQDuX2q34Fyir634qgDy_x7Sr9lFgZsH99aOD35nSqw3RzBmm51EUIeKhG4hNeHuF1nwzttuBGQqqqfao8sTB5_purgT-hAd2prYJDBcavzH8igqk5KDCTsZVLVFIV32a9Odfvzg2MlnGRud64-S0t7KpTK6q6EdEnla7cqGD26N5uHg9rFtg83J8t2l5TlStCsuhWjdzbb29MFFSU5=

    I normally put the string session along with the API ID and hash in the .env file. All three values need to be protected and should never be shared with a third party.

    For the next code sample, I will assume that you have stored this string in a variable named TELEGRAM_STRING_SESSION. So the final .env file will look like below –

    TELEGRAM_API_ID=NNNNN
    TELEGRAM_API_HASH=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    TELEGRAM_STRING_SESSION=YYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYY

    Sending a Message to A Contact

    Now that the groundwork is done, we will write a simple Python application to send a message to a contact. The important point to note here is that the recipient must be in your Telegram contacts.

    #! /usr/bin/env python3
    
    import os
    
    from telethon.sync import TelegramClient
    from telethon.sessions import StringSession
    from dotenv import load_dotenv
    
    load_dotenv()
    
    try:
    client = TelegramClient(StringSession(os.getenv("TELEGRAM_STRING_SESSION")), int(os.getenv("TELEGRAM_API_ID")), os.getenv("TELEGRAM_API_HASH"))
        client.start()
    except Exception as e:
        print(f"Exception while starting the client - {e}")
    else:
        print("Client started")
    
    async def main():
        try:
            # Replace the xxxxx in the following line with the full international mobile number of the contact
            # In place of mobile number you can use the telegram user id of the contact if you know
            ret_value = await client.send_message("xxxxxxxxxxx", 'Hi')
        except Exception as e:
            print(f"Exception while sending the message - {e}")
        else:
            print(f"Message sent. Return Value {ret_value}")
    
    with client:
        client.loop.run_until_complete(main())
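
    A side note for experimentation: Telethon also accepts the shorthand entity "me", which delivers to your own Saved Messages, so you can test without messaging a real contact. A minimal variant of the main() coroutine above:

    async def main():
        # "me" is Telethon shorthand for your own Saved Messages chat
        await client.send_message("me", "Test message from Telethon")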

    Next Steps

    The Telethon API is quite versatile; detailed API documentation can be found at https://tl.telethon.dev/. I hope this post helps the reader quickly get started with Telegram messaging using the Telethon module.

  • Upgrading Raspbian 8 (Jessie) to Raspbian 9 (Stretch)

    I decided to upgrade my oldest Raspberry Pi to the latest Raspbian. Since I was two releases behind, I decided to do it step-by-step. Today I upgraded from 8 to 9. I plan to perform similar steps to upgrade from 9 to 10.

    Following is the quick sequence of steps I followed to perform the upgrade. This is a Model B Rev 2 Pi, so it was considerably slow to update, and the whole process took me more than 4 hours to complete.

    Step 1 – Prepare The System For Upgrade

    Apply the latest updates to the system.

    $ sudo apt update && sudo apt upgrade -y && sudo apt-get dist-upgrade -y

    The next step is to search for packages which have been only partially installed on the system, using the dpkg -C command.

    $ sudo dpkg -C

    dpkg may indicate what needs to be done with these. I did not find anything in this category, which was good. Lastly, I ran the apt-mark showhold command to get a list of all packages which have been marked as held.

    $ sudo apt-mark showhold

    While I did not get any packages in this list, if there are any, they should be resolved before proceeding to Step 2 (e.g., with apt-mark unhold, as shown below).
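
    For completeness, a held package can be released with apt-mark (the package name here is illustrative):

    $ sudo apt-mark unhold some-package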

    Step 2 – Prepare the APT System for Upgrade

    $ sudo sed -i 's/jessie/stretch/g' /etc/apt/sources.list
    $ sudo sed -i 's/jessie/stretch/g' /etc/apt/sources.list.d/raspi.list
    $ echo 'deb http://archive.raspberrypi.org/debian/ stretch main' | sudo tee -a /etc/apt/sources.list

    I am updating only these two files, but if your system has any other source files, then you need to update them appropriately as well. A list of such files can be found using – grep -lr jessie /etc/apt

    In addition to this, I also removed the apt-listchanges package, which displays what changed in the new version of a Debian package compared to the version currently installed on the system. Removing it is expected to speed up the entire process. This is not mandatory, so you can skip it.

    # optional step
    $ sudo apt-get remove apt-listchanges

    Step 3 – Perform The Upgrade and Cleanup

    As the last step, initiate the upgrade process. This is the point where you can just leave the system alone for a few hours.

    $ sudo apt update && sudo apt upgrade -y && sudo apt-get dist-upgrade -y

    I faced issues with chromium-browser: at the last command (dist-upgrade), dpkg bailed out with a message indicating archive corruption in the chromium-browser package. Since the Pi runs at runlevel 3 and, being headless, does not need Chromium, I decided to remove the following three packages. In the absence of Chromium, the Debian system will automatically use update-alternatives and choose epiphany-browser to satisfy the gnome-www-browser requirement.

    $ sudo apt-get remove chromium-browser chromium-browser-l10n rpi-chromium-mods

    After removing the Chromium browser, I did another round of update, upgrade and dist-upgrade just to make sure, before initiating the cleanup as below –

    $ sudo apt-get autoremove -y && sudo apt-get autoclean

    The new OS version can be verified with –

    $ cat /etc/debian_version; cat /etc/os-release

    I also took this opportunity to update the firmware of the Raspberry Pi by running the following command. Please note this step is absolutely optional, and it is recommended that you do not perform it unless you know what you are doing or have been asked to by a support person.

    $ sudo rpi-update

  • Raspberry Pi – rsyslog fixes on Raspbian 8 (Jessie)


    One of my Raspberry Pis (a Raspberry Pi Model B Rev 2) is running quite an old version of Raspbian – although fully updated and patched. Today, while checking the syslog on this Pi, I noticed the following error appearing very frequently – almost every minute – and thus filling up the syslog.

    Dec 24 20:59:35 rads rsyslogd-2007: action 'action 17' suspended, next retry is Thu Dec 24 21:01:05 2020 [try http://www.rsyslog.com/e/2007 ]

    Thanks to logrotate I was not in immediate need of action, but I still thought it would be better to fix this – at least it would reduce writes and increase the life of my SD card.

    The URL at the end of the message was very helpful. According to it, the message simply means that rsyslog.conf contains an instruction to write to the xconsole pipe, but this pipe is never read. On editing /etc/rsyslog.conf, the following section can be commented out or removed completely.

    daemon.*;mail.*;\
        news.err;\
        *.=debug;*.=info;\
        *.=notice;*.=warn       |/dev/xconsole

    A simple systemctl restart rsyslog after that will fix the issue.
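
    To double-check, one can push a test message through logger(1) after the restart and confirm it lands in the syslog:

    $ sudo systemctl restart rsyslog
    $ logger "test message after removing the xconsole pipe"
    $ tail -n 5 /var/log/syslog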

    I did not see this issue on my other Raspberry Pi, which runs Raspbian based on Debian Buster (Debian 10). I checked /etc/rsyslog.conf on that machine and could not find the offending lines there. So my understanding is that this issue is specific to Raspbian releases based on Jessie.