<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Random Bits]]></title><description><![CDATA[Thoughts, stories and ideas.]]></description><link>https://blog.randombits.host/</link><image><url>https://blog.randombits.host/favicon.png</url><title>Random Bits</title><link>https://blog.randombits.host/</link></image><generator>Ghost 5.40</generator><lastBuildDate>Wed, 08 Apr 2026 13:07:18 GMT</lastBuildDate><atom:link href="https://blog.randombits.host/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Why You Should Be Using Git Worktrees]]></title><description><![CDATA[<p></p><p>&#x2003;It&apos;s no surprise to anyone reading this that git is the standard for source control today. It has completely replaced SVN, Mercurial, and many other systems we&apos;re glad to leave in the past, becoming without doubt the industry standard source control system. Maybe someday it</p>]]></description><link>https://blog.randombits.host/why-you-should-be-using-git-worktrees/</link><guid isPermaLink="false">68e29d3cd5d41d00015b8dc5</guid><category><![CDATA[Quick Tip]]></category><category><![CDATA[Git]]></category><dc:creator><![CDATA[Conor]]></dc:creator><pubDate>Wed, 15 Oct 2025 21:39:27 GMT</pubDate><content:encoded><![CDATA[<p></p><p>&#x2003;It&apos;s no surprise to anyone reading this that git is the standard for source control today. It has completely replaced SVN, Mercurial, and many other systems we&apos;re glad to leave in the past, becoming without doubt the industry standard source control system. Maybe someday it will be superseded by <a href="https://jj-vcs.github.io/jj/latest/?ref=blog.randombits.host">jujutsu</a> or something even fancier, but for now, <code>git</code> is here to stay. 
It&apos;s also not surprising to say that people are more interested in programming than in source control management systems, so naturally people often learn <em>just enough</em> git to make it get out of the way for them, and little else. There are a number of features that are criminally underused for this reason. The main ones I&apos;m thinking of are <code>git bisect</code>, staging hunks with <code>git add -p</code>, and the topic of today&apos;s post, worktrees.</p><p>&#x2003;Worktrees are branches on steroids. However, unlike steroids (or some other git features), worktrees are not anger-inducing. They are in fact almost a drop-in replacement for branches, which is one of the few git features you can be certain anyone using a source control system is familiar with. It&apos;s best to get this kind of information <a href="https://git-scm.com/docs/git-worktree?ref=blog.randombits.host">straight from the source</a>, but as a brief explanation (i.e. just enough working knowledge to understand this post), they are simply a means of checking out a new branch <em>in a separate directory</em>. As a horrible pseudo-equivalence, you can imagine <code>git worktree add ../&lt;branch name&gt; &lt;branch name&gt;</code> to be roughly equivalent to <code>git clone . ../&lt;branch name&gt; &amp;&amp; cd ../&lt;branch name&gt; &amp;&amp; git checkout -b &lt;branch name&gt;</code>.</p><p>&#x2003;The reason this is useful is that you can work on multiple branches concurrently with far less effort spent switching contexts.</p><p>&#x2003;I can already hear the cries of detraction: Context switching is the devil! Just keep your working directory clean! This is already solved by &lt;some bloated GUI&gt;!</p><p>&#x2003;Well, these complaints just aren&apos;t practical. You often <em>need</em> to leave your current state of a repo dirty. 
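</p><p>&#x2003;As a minimal illustration of the basic commands, leaving one checkout dirty while spinning up a second worktree (a throwaway sketch run in a temp directory; every path and branch name below is invented):</p>

```bash
#!/usr/bin/env bash
# Throwaway demo: builds a scratch repo in a temp dir and never touches real work.
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email "demo@example.invalid"  # identity just for this scratch repo
git config user.name "Demo"
git commit -q --allow-empty -m "initial commit"

echo "half-finished idea" > notes.txt  # leave the main checkout dirty

# Check out a new branch in a sibling directory; the dirty state stays put:
git worktree add ../review-branch -b review-branch
git worktree list

# Done with it? Remove it; the uncommitted notes.txt is untouched:
git worktree remove ../review-branch
cat notes.txt
```

<p>&#x2003;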
Sometimes the alternative tooling doesn&apos;t fit your workflow, and while nobody wants to context switch frequently, it&apos;s an inevitability in the workplace. You can&apos;t simply sit around with one open pull request being worked on and do nothing else until you get feedback on it! Given that you are going to be context switching, and that worktrees make it much less painful, let&apos;s get into some examples to show this.</p><h2 id="demo">Demo</h2><figure class="kg-card kg-video-card kg-width-wide kg-card-hascaption"><div class="kg-video-container"><video src="https://blog.randombits.host/content/media/2025/10/worktree_demo.mp4" poster="https://img.spacergif.org/v1/2940x1912/0a/spacer.png" width="2940" height="1912" loop autoplay muted playsinline preload="metadata" style="background: transparent url(&apos;https://blog.randombits.host/content/images/2025/10/media-thumbnail-ember106.jpg&apos;) 50% 50% / cover no-repeat;"></video><div class="kg-video-overlay"><button class="kg-video-large-play-icon"><svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24"><path d="M23.14 10.608 2.253.164A1.559 1.559 0 0 0 0 1.557v20.887a1.558 1.558 0 0 0 2.253 1.392L23.14 13.393a1.557 1.557 0 0 0 0-2.785Z"/></svg></button></div><div class="kg-video-player-container kg-video-hide"><div class="kg-video-player"><button class="kg-video-play-icon"><svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24"><path d="M23.14 10.608 2.253.164A1.559 1.559 0 0 0 0 1.557v20.887a1.558 1.558 0 0 0 2.253 1.392L23.14 13.393a1.557 1.557 0 0 0 0-2.785Z"/></svg></button><button class="kg-video-pause-icon kg-video-hide"><svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24"><rect x="3" y="1" width="7" height="22" rx="1.5" ry="1.5"/><rect x="14" y="1" width="7" height="22" rx="1.5" ry="1.5"/></svg></button><span class="kg-video-current-time">0:00</span><div class="kg-video-time">/<span class="kg-video-duration"></span></div><input type="range" class="kg-video-seek-slider" max="100" 
value="0"><button class="kg-video-playback-rate">1&#xD7;</button><button class="kg-video-unmute-icon"><svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24"><path d="M15.189 2.021a9.728 9.728 0 0 0-7.924 4.85.249.249 0 0 1-.221.133H5.25a3 3 0 0 0-3 3v2a3 3 0 0 0 3 3h1.794a.249.249 0 0 1 .221.133 9.73 9.73 0 0 0 7.924 4.85h.06a1 1 0 0 0 1-1V3.02a1 1 0 0 0-1.06-.998Z"/></svg></button><button class="kg-video-mute-icon kg-video-hide"><svg xmlns="http://www.w3.org/2000/svg" viewbox="0 0 24 24"><path d="M16.177 4.3a.248.248 0 0 0 .073-.176v-1.1a1 1 0 0 0-1.061-1 9.728 9.728 0 0 0-7.924 4.85.249.249 0 0 1-.221.133H5.25a3 3 0 0 0-3 3v2a3 3 0 0 0 3 3h.114a.251.251 0 0 0 .177-.073ZM23.707 1.706A1 1 0 0 0 22.293.292l-22 22a1 1 0 0 0 0 1.414l.009.009a1 1 0 0 0 1.405-.009l6.63-6.631A.251.251 0 0 1 8.515 17a.245.245 0 0 1 .177.075 10.081 10.081 0 0 0 6.5 2.92 1 1 0 0 0 1.061-1V9.266a.247.247 0 0 1 .073-.176Z"/></svg></button><input type="range" class="kg-video-volume-slider" max="100" value="100"></div></div></div><figcaption>A live demo of how easy it is to use worktrees (I&apos;ll show you the gwc script at the end!)</figcaption></figure><h2 id="examples">Examples</h2><h3 id="demo-meeting">&#x2003;Demo Meeting</h3><p>&#x2003;&#x2003;It&apos;s time for you to show off and get feedback on the features you&apos;ve been working on! You arrange a meeting with some colleagues and you&apos;re ready to go. Are you going to demo the cool new features from some branch you had to messily <a href="https://git-scm.com/docs/git-merge.html?ref=blog.randombits.host#Documentation/git-merge.txt-octopus">octopus merge</a> together? Maybe instead you plan to stop the demo halfway through so you can checkout another branch, set up the environment correctly, and then begin? Well with a worktree based workflow, you can spend 30 seconds before the meeting starts, and run each feature you want to demo on a server on a different port! 
Now, when you&apos;re moving from one feature to another, you simply open another tab! Easy!</p><h3 id="bug-squashing">&#x2003;Bug Squashing</h3><p>&#x2003;&#x2003;The new feature has been released, and customers are <em>loving it</em>. Unfortunately, they&apos;re expressing their love in a funny way - by pointing out many small UI inconsistencies... &quot;These buttons are out of alignment!&quot;, &quot;When I have 5,000 options in this dropdown, the search is slow!&quot;, you know the drill.</p><p>&#x2003;Your options are:</p><ul><li>Bundling all these fixes into one PR, slowing down the deployment of every change you&apos;re making and frustrating the review process</li><li>Checking out a new branch for each bug fix, and when you (inevitably) read the review comments showing you made a typo, or that there was a linting error, you have to stop your current work in progress, stash the changes, check out a different branch, make the quick fix, push the change, switch back to your WIP branch, pop your stash....</li></ul><p>&#x2003;If you were using worktrees, however, you could just open the file in another directory to make a quick fix and push the change without ever having to interfere with your current working state.</p><h3 id="proof-of-concepts">&#x2003;Proof of Concepts</h3><p>&#x2003;&#x2003;You have a great idea for a new feature, but you want to experiment with it before sharing with a wider audience. It&apos;s low priority, maybe just for fun, and because of that you can only really work on it in downtime or when blocked. Instead of forgetting that you ever checked out a branch for this investigation and never opening it again, or accumulating 25 different unnamed stashes in the stack, you can see, every time you look at your <code>Code/</code> directory, that it&apos;s right there. 
Waiting for you in the exact same state as when you left it.</p><h3 id="comparisons-benchmarks">&#x2003;Comparisons &amp; Benchmarks</h3><p>&#x2003;&#x2003;Suppose you&apos;re writing a new API endpoint and you want to run some benchmarks. You can quickly and easily run two instances of your backend simultaneously and just flip the port to compare the implementations. No need to <code>git clone</code> the repo to another location and disrupt your workflow. This is immediately possible with worktrees.</p><h2 id="limitations">Limitations</h2><p>&#x2003;However, there is <em>one</em> minor limitation, and it relates to dependencies. I&apos;m hesitant to really call it a limitation, but I want to get ahead of the <code>git branch</code> diehards and address it. Very likely, most of your branches don&apos;t end up requiring new dependencies. That means you don&apos;t need to install anything new as you&apos;re still operating within the same directory. However, since worktrees are actually completely separate directories, you do have to install all your dependencies in each new worktree. Depending on your tooling and overall environment, this mightn&apos;t be too big of a deal. For example, with <code>bun</code> and <code>uv</code>, most of my installs are <em>extremely</em> fast. So fast that this limitation doesn&apos;t really affect me.</p><p>&#x2003;On the other side of that coin, though, you have the benefit of being able to try out new runtimes, major version changes, etc. without any risk to your primary worktree.</p><p>&#x2003;I&apos;m sure that there is a smart way to bypass this limitation if you rarely end up changing dependencies. For example, could you simply symlink the <code>venv/</code> or <code>node_modules/</code> in your new worktree? 
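</p><p>&#x2003;One untested sketch of that symlink idea (the directory names below are invented, and beware that postinstall scripts or tools which resolve the real path may misbehave):</p>

```bash
#!/usr/bin/env bash
# Untested idea-sketch: dummy directories stand in for real worktrees.
set -e
base="$(mktemp -d)"
mkdir -p "$base/main/node_modules" "$base/feature-branch"
echo "installed" > "$base/main/node_modules/marker.txt"

# Point the new worktree's node_modules at the primary worktree's copy:
ln -s "$base/main/node_modules" "$base/feature-branch/node_modules"

# Both worktrees now see the same installed packages...
cat "$base/feature-branch/node_modules/marker.txt"
# ...but a dependency change made in either one silently affects the other.
```

<p>&#x2003;The same trick would presumably apply to a <code>venv/</code>, with the extra caveat that virtualenvs embed absolute paths. 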
Let me know if you have any experience of doing this or something similar!</p><h2 id="im-convinced-let-me-use-them">I&apos;m convinced, let me use them!</h2><p>&#x2003;First of all, <code>git worktree list</code> or <code>git worktree remove --force</code> is far too much to type out each time. I have the following very basic set of aliases:</p><pre><code class="language-zshrc">alias gw=&quot;git worktree&quot;
alias gwl=&quot;git worktree list&quot;
alias gwr=&quot;git worktree remove&quot;

export PATH=&quot;$PATH:/home/username/scripts&quot;</code></pre><p>&#x2003;However, the real trick is setting up your own script for <code>git worktree create</code> to suit your workflow.</p><p>&#x2003;There are only two reasons I create a new worktree. Either I want to test and review a colleague&apos;s work, or I want to create my own worktree to mess around with. Either way, I would have to manually create the worktree, <code>cd</code> to it, run my personal <code>justfile</code> rules for instantiating the environment, and only then could I start on whatever I was doing. Around the fifth time I was auto-piloting my way through this sequence of typing, I decided to spend 10 minutes creating a script to do exactly this. It was a good idea. Now when I run <code>gwc &lt;name&gt;</code>, the script will pull the branch from the remote if it exists and set up a new worktree for it, or if it doesn&apos;t exist, it will create a new worktree for it. Finally, it will copy over my personal, not git tracked utilities, and initialize the project. I highly recommend customizing this to suit your needs!</p><figure class="kg-card kg-code-card"><pre><code class="language-bash">#!/bin/bash

set -e

if [ $# -eq 0 ]; then
    echo &quot;Usage: gwc &lt;branch-name&gt;&quot;
    exit 1
fi

BRANCH_NAME=&quot;$1&quot;
WORKTREE_PATH=&quot;../$BRANCH_NAME&quot;

# If worktree already exists, just cd to it:
# NOTE: Can&apos;t cd from a script, so only option is to manually cd.
# Other option is to have this as an alias in zshrc to run
# `gwc ... &amp;&amp; cd &lt;new-dir&gt;` or a wrapper function in general -_-
if [ -d &quot;$WORKTREE_PATH&quot; ]; then
    echo &quot;Worktree already exists!&quot;
    # cd &quot;$WORKTREE_PATH&quot;
    exit 0
fi

# Always fetch first to ensure we have latest remote refs
git fetch origin

# Check if branch exists on remote
if git show-ref --verify --quiet refs/remotes/origin/&quot;$BRANCH_NAME&quot;; then
    # Remote branch exists - ensure worktree points to remote head
    if git show-ref --verify --quiet refs/heads/&quot;$BRANCH_NAME&quot;; then
        # Local branch exists - delete it and recreate from remote to ensure sync
        git branch -D &quot;$BRANCH_NAME&quot; 2&gt;/dev/null || true
    fi
    # Create tracking branch and worktree from remote
    git worktree add --track -b &quot;$BRANCH_NAME&quot; &quot;$WORKTREE_PATH&quot; origin/&quot;$BRANCH_NAME&quot;
elif git show-ref --verify --quiet refs/heads/&quot;$BRANCH_NAME&quot;; then
    # Only local branch exists - use it as is
    git worktree add &quot;$WORKTREE_PATH&quot; &quot;$BRANCH_NAME&quot;
else
    # Branch doesn&apos;t exist anywhere - create new branch
    git worktree add &quot;$WORKTREE_PATH&quot; -b &quot;$BRANCH_NAME&quot;
fi

# Copy files that live out of git tracking
cp .env &quot;$WORKTREE_PATH/.env&quot;
cp web/.env &quot;$WORKTREE_PATH/web/.env&quot;
cp justfile &quot;$WORKTREE_PATH/justfile&quot;

# Set strictPort to false so that worktrees can run concurrently
# NOTE: `sed -i &apos;&apos;` is the BSD/macOS form; on GNU sed, use plain `sed -i`
sed -i &apos;&apos; &apos;s/strictPort: true,/strictPort: false,/g&apos; &quot;$WORKTREE_PATH/web/vite.config.ts&quot;

# Install dependencies
cd &quot;$WORKTREE_PATH/web&quot;
bun install
# When working on backend stuff, do just init_project also

# Switch to the worktree directory
# NOTE: Can&apos;t cd from a script, so only option is to manually cd.
# Other option is to have this as an alias in zshrc to run
# `gwc ... &amp;&amp; cd &lt;new-dir&gt;` or a wrapper function in general -_-
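#
# One hedged sketch of such a wrapper (untested; function name and path
# assumed) for your ~/.zshrc:
#   gwc() { ~/scripts/gwc &quot;$@&quot; &amp;&amp; cd &quot;../$1&quot;; }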
# cd &quot;$WORKTREE_PATH&quot;</code></pre><figcaption>~/scripts/gwc - There are a lot of comments here, but I promise you it&apos;s very straightforward!</figcaption></figure><h2 id="conclusion">Conclusion</h2><ul><li>I hope you either are already using worktrees, or that I have managed to convince you to try them out!</li><li>Keep learning your tools! Maybe new features haven&apos;t been released since you first learned to use them, but maybe you can now understand different means of using the existing features.</li></ul>]]></content:encoded></item><item><title><![CDATA[Naming Variables Just Got Harder]]></title><description><![CDATA[<p><a href="https://www.martinfowler.com/bliki/TwoHardThings.html?ref=blog.randombits.host">It&apos;s a joke that has been done to death</a>, but it is true. Naming things is one of the hardest aspects of Computer Science. It affects the readability, maintainability, and every facet of the lossy interface between the concept in your mind, and the cold reality of the</p>]]></description><link>https://blog.randombits.host/naming-variables-just-got-harder/</link><guid isPermaLink="false">6509ab2cf653220001ce6f13</guid><dc:creator><![CDATA[Conor]]></dc:creator><pubDate>Tue, 19 Sep 2023 21:43:25 GMT</pubDate><content:encoded><![CDATA[<p><a href="https://www.martinfowler.com/bliki/TwoHardThings.html?ref=blog.randombits.host">It&apos;s a joke that has been done to death</a>, but it is true. Naming things is one of the hardest aspects of Computer Science. It affects the readability, maintainability, and every facet of the lossy interface between the concept in your mind, and the cold reality of the code. This has always been the way since <a href="https://en.wikipedia.org/wiki/Short_Code_(computer_language)?ref=blog.randombits.host">variables could be named</a>. 
However, I believe that variable naming has become much more important in the past few months with the increasing presence of &quot;AI&quot;-assisted tools.</p><p>I was initially stubborn when it came to personally using &quot;AI&quot;-powered tools. I still am, but I have now accepted that these tools are here to stay. The cat is out of the bag, Pandora&apos;s Box has been opened, etc., etc. For better or worse, we should all be operating under the assumption that these tools are currently in use, and will become more relied upon in future. This is why naming in software development has become even more important.</p><p>These tools rely, at least partially, on the semantic meaning of your variable and function names. That means if your names are inconsistent or poorly conceived, you&apos;re going to get worse output. Some examples in a regular code block (because I also reject Jupyter Notebooks):</p><pre><code class="language-python">import openai
import os
import sys
import time

from pydantic import BaseModel, Field


openai.api_key = os.getenv(&quot;OPENAI_API_KEY&quot;)


class GermanTranslationResponse(BaseModel):
    message_in_german: str = Field(
        description=&quot;The user message translated into German&quot;
    )

class FrenchTranslationResponse(BaseModel):
    message_in_french: str = Field(
        description=&quot;The user message translated into French&quot;
    )

class FrenchTranslationResponse2(BaseModel):
    message_in_german: str = Field(
        description=&quot;The user message translated into French&quot;
    )

class FrenchTranslationResponse3(BaseModel):
    message_in_french: str = Field(
        description=&quot;The user message translated into German&quot;
    )


def get_response(message: str, language: str, model) -&gt; str:
    context = [{
        &quot;role&quot;: &quot;user&quot;,
        &quot;content&quot;: message
    }]

    chat_completion = openai.ChatCompletion.create(
        model=&quot;gpt-3.5-turbo-0613&quot;,
        messages=context,
        functions=[
            {
                &quot;name&quot;: f&quot;get_{language}_translation&quot;,
                &quot;description&quot;: f&quot;get the message translated to {language}&quot;,
                &quot;parameters&quot;: model.schema(),
            },
        ],
        function_call={&quot;name&quot;: f&quot;get_{language}_translation&quot;},
        temperature=0.0,
    )

    return chat_completion.choices[0].message.function_call.arguments

if __name__ == &quot;__main__&quot;:
    print(f&quot;Input message: {sys.argv[1]}&quot;)

    # Translates to German
    print(&quot;German&quot;)
    print(get_response(sys.argv[1], &quot;German&quot;, GermanTranslationResponse))
    time.sleep(1)

    # Translates to French
    print(&quot;French&quot;)
    print(get_response(sys.argv[1], &quot;French&quot;, FrenchTranslationResponse))
    time.sleep(1)

    # Translates to English
    print(&quot;French and German&quot;)
    print(get_response(sys.argv[1], &quot;French&quot;, GermanTranslationResponse))
    time.sleep(1)

    # Translates to English
    print(&quot;German and French&quot;)
    print(get_response(sys.argv[1], &quot;German&quot;, FrenchTranslationResponse))
    time.sleep(1)

    # Translates to German
    print(&quot;German and French2&quot;)
    print(get_response(sys.argv[1], &quot;German&quot;, FrenchTranslationResponse2))
    time.sleep(1)

    # Translates to French
    print(&quot;German and French3&quot;)
    print(get_response(sys.argv[1], &quot;French&quot;, FrenchTranslationResponse3))

</code></pre><pre><code class="language-bash">$ python3 ex.py &quot;what&apos;s up&quot;
Input message: what&apos;s up
German
{
  &quot;message_in_german&quot;: &quot;Was gibt&apos;s Neues?&quot;
}
French
{
  &quot;message_in_french&quot;: &quot;Quoi de neuf&quot;
}
French and German
{
  &quot;message_in_german&quot;: &quot;what&apos;s up&quot;
}
German and French
{
  &quot;message_in_french&quot;: &quot;what&apos;s up&quot;
}
German and French2
{
  &quot;message_in_german&quot;: &quot;Was gibt&apos;s Neues?&quot;
}
German and French3
{
  &quot;message_in_french&quot;: &quot;Quoi de neuf&quot;
}</code></pre>]]></content:encoded></item><item><title><![CDATA[Dealing With Being Distrusting of HomeAssistant Automations]]></title><description><![CDATA[<p><a href="https://www.home-assistant.io/?ref=blog.randombits.host">HomeAssistant</a> is something I&apos;m sure everyone is aware of - an open source tool for managing your smart home devices. I have used it for a number of years, starting with a rough and ready setup on a Raspberry Pi 3B. Back then, I only had three smart</p>]]></description><link>https://blog.randombits.host/dealing-with-being-untrustworthy-with-homeassistant-automations/</link><guid isPermaLink="false">64cc063637a8b80001401b8e</guid><dc:creator><![CDATA[Conor]]></dc:creator><pubDate>Thu, 03 Aug 2023 20:49:54 GMT</pubDate><content:encoded><![CDATA[<p><a href="https://www.home-assistant.io/?ref=blog.randombits.host">HomeAssistant</a> is something I&apos;m sure everyone is aware of - an open source tool for managing your smart home devices. I have used it for a number of years, starting with a rough and ready setup on a Raspberry Pi 3B. Back then, I only had three smart light bulbs and essentially all I did was manually turn them on/off from my phone. Today, I have a slightly more elaborate setup, with lights, plugs, and sensors in almost every room of my house. The real power of Home Assistant however comes with the <em><a href="https://www.home-assistant.io/docs/automation/?ref=blog.randombits.host">automations</a></em>.</p><p>From having lights turn on a little dim before the morning alarm on my phone, to making sure all devices are off whenever there is nobody home, the depth of what you can do with automation is immense. However, there is one UX issue I have with these automations - I just don&apos;t quite <em>trust</em> them. Too many times I&apos;ve come home and realized that my automation didn&apos;t realize I had left the house and never turned off my devices. 
Similar to alarms and logs in software development, automations are only worthwhile if they are reliable and you can trust them. I set out to find a way to increase my trust in the automations I have at home, and also have an easier path to debugging them if they don&apos;t run when I expected them to.</p><p>My solution was to have a simple notification on my phone whenever an automation runs on Home Assistant. The notification medium was obvious for my workflow/style - use <a href="https://ntfy.sh/?ref=blog.randombits.host">ntfy.sh</a> to trigger a notification on my phone so that whenever I wondered whether an automation had run, I would already have the information available in a zero-click manner. I thought this might be a little difficult, but as it turns out, there is an <a href="https://github.com/caronc/apprise?ref=blog.randombits.host">Apprise</a> notification service built into Home Assistant (which I think was a great idea!). My second issue was making sure that this notification would fire for <em>all</em> of my automations and that I wouldn&apos;t have to manually add it to each and every one. This turned out to be easier than expected, as there is an event you can trigger on called <code><a href="https://www.home-assistant.io/docs/configuration/events/?ref=blog.randombits.host#automation_triggered">AUTOMATION_TRIGGERED</a></code> which fires on every single automation run. The plan was set!</p><!--kg-card-begin: markdown--><p>The first thing I had to do was add the Apprise integration to my <code>configuration.yaml</code> for Home Assistant with the URLs I wanted to trigger notifications on: <sup>[1]</sup></p>
<!--kg-card-end: markdown--><pre><code class="language-yaml">notify:
  - platform: apprise
    url: ntfy://ntfy.sh/home-assistant-notification-topic</code></pre><p>After that, I created an automation that would trigger whenever an automation ran, and would then notify me with the name of the automation that ran:</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.randombits.host/content/images/2023/08/Screenshot-from-2023-08-03-22-17-43.png" class="kg-image" alt loading="lazy" width="972" height="865" srcset="https://blog.randombits.host/content/images/size/w600/2023/08/Screenshot-from-2023-08-03-22-17-43.png 600w, https://blog.randombits.host/content/images/2023/08/Screenshot-from-2023-08-03-22-17-43.png 972w" sizes="(min-width: 720px) 720px"><figcaption>Home Assistant UI screenshot showing the details of how to set up an automation to notify you whenever a different automation runs</figcaption></figure><p>That was the sum total of what I had to do! This has made me feel much more at ease and trusting of when my automations have run, and if they don&apos;t run when expected, I can easily debug what <em>didn&apos;t</em> happen when I expected it to. One interesting thing to note is that the automation that sends a notification doesn&apos;t end up triggering the automation to fire again. There is no recursive behavior from it. I hope this helps anyone looking to set up automations on Home Assistant using Apprise, ntfy.sh, or just trying to trigger an automation to run whenever a different automation runs! If you have any fun automations for Home Assistant, or any thoughts/comments, please let me know through whatever means you found this post, and thanks for reading!</p><p>[1] I <a href="https://docs.ntfy.sh/install/?ref=blog.randombits.host">self-host ntfy.sh</a> so this URL is just a dummy! 
If you want another blog post on self-hosting ntfy.sh, let me know, but those docs are great and it&apos;s one of the most stable services I have ever run.</p>]]></content:encoded></item><item><title><![CDATA[Vanity, Recognition, and Fighting Perfectionism - A Buildlog for git-vain]]></title><description><![CDATA[<p><em>&#x2003;This post is 50% a build log, 10% thoughts on vanity, and 40% about dev ops/project structure/CI/CD/etc. Everything should be easily navigable from the headings below. If you want to get in touch, find me on <a href="https://mastodon.social/@conorf?ref=blog.randombits.host">Mastodon</a>, write me an email, or follow me on</em></p>]]></description><link>https://blog.randombits.host/git-vain/</link><guid isPermaLink="false">648b754d1edd0a00011d7d8a</guid><category><![CDATA[Docker]]></category><category><![CDATA[Side Project]]></category><category><![CDATA[Self Hosted]]></category><category><![CDATA[Non-Tech]]></category><dc:creator><![CDATA[Conor]]></dc:creator><pubDate>Wed, 12 Jul 2023 11:00:56 GMT</pubDate><content:encoded><![CDATA[<p><em>&#x2003;This post is 50% a build log, 10% thoughts on vanity, and 40% about dev ops/project structure/CI/CD/etc. Everything should be easily navigable from the headings below. If you want to get in touch, find me on <a href="https://mastodon.social/@conorf?ref=blog.randombits.host">Mastodon</a>, write me an email, or follow me on <a href="https://github.com/conor-f?ref=blog.randombits.host">Github</a> (my <code>vanity</code> will appreciate it!).</em></p><h1 id="introduction">Introduction</h1><p>&#x2003;Everybody is a little vain. Vanity frequently spurs people to action where other, more <em>pure</em>, motives fail. There&apos;s an interesting relationship between vanity and perfectionism. Perfectionism is often driven by vanity, the wish to be perceived as being a master in whatever actions you are currently undertaking. 
This is clearly an impossible task, but striving towards an impossible ideal can often lead to successful results along the way.</p><p>&#x2003;Recognizing that perfectionist tendencies could have a different root cause than you previously thought leads to a different perspective on them. For me, perfectionist tendencies have been counterproductive to my desire for acknowledgement and recognition in the things I (at least think!) am knowledgeable about. However, you need to <a href="https://brooker.co.za/blog/2023/04/20/hobbies.html?ref=blog.randombits.host">be visible in order to be recognized</a>, so perfectionism really gets in the way. If my perfectionist tendencies were allowed to run free, I would rarely, if ever, release anything. The concept of &quot;Building in Public&quot; has been a great perspective for me in this regard, even though almost all it ever leads to is me writing <a href="https://blog.randombits.host/piframe/">half-baked blog posts</a> and creating things like my most recent project - <a href="https://github.com/conor-f/git-vain?ref=blog.randombits.host">git-vain</a>.</p><h1 id="buildlog">Buildlog</h1><p>&#x2003;Gitvain is a simple Python project that sends you a notification whenever the followers/stargazers on a Github repository change. It supports notifications using <a href="https://github.com/caronc/apprise?ref=blog.randombits.host">Apprise</a>, which is a fantastic tool that allows you to specify some simple configuration and then send notifications using many, many services. <a href="https://github.com/PyGithub/PyGithub?ref=blog.randombits.host">PyGithub</a> is used to fetch changes on the watched repositories. There are many outstanding issues that will most likely never be completed. 
This is by design, however, as I set out a plan to get to a point where I could be satisfied enough to leave it alone, say I sufficiently completed it, and thereby prevent my perfectionism from getting in the way.</p><p>&#x2003;A major part of my recent work in software development has focused around best practices and ops-adjacent work. I recognized that setting up CI/CD at the start of a project is very easy compared to getting it set up midway through, and that actually <em>defining </em>the feature you want to work on and writing short functions with no side-effects to get it done makes you progress quickly. Being realistic with the scope of what you&apos;re working on allows you to keep motivation and leave your work in a state that&apos;s trivial to restart whenever you next get the chance. To achieve this, I simply started by writing out a minimum set of features I wanted to have before being happy to cut a <code>v1.0.0</code>. I then broke these features down into small components and added them to <a href="https://vikunja.io/?ref=blog.randombits.host">Vikunja</a>, my favorite todo list application.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.randombits.host/content/images/2023/07/Screenshot-from-2023-07-11-00-35-33.png" class="kg-image" alt loading="lazy" width="1150" height="986" srcset="https://blog.randombits.host/content/images/size/w600/2023/07/Screenshot-from-2023-07-11-00-35-33.png 600w, https://blog.randombits.host/content/images/size/w1000/2023/07/Screenshot-from-2023-07-11-00-35-33.png 1000w, https://blog.randombits.host/content/images/2023/07/Screenshot-from-2023-07-11-00-35-33.png 1150w" sizes="(min-width: 720px) 720px"><figcaption>A satisfyingly complete todo list!</figcaption></figure><p>&#x2003;I started by getting a <a href="https://github.com/conor-f/git-vain/tree/8b5c9232bf9e08f7387ad8984908cd53973157b3?ref=blog.randombits.host">basic Python project set up</a> with continuous deployment using Github 
Actions to build a Docker image that would deploy to my home server. At this stage it is so simple that the Dockerfile literally just prints a Hello World statement! Even though this seems insignificant, from here you have the perfect jumping-off point: a stable base you can solidly build on.</p><p>&#x2003;After this, I prioritised one type of event I wanted to be notified on - stargazers - and wrote a <a href="https://github.com/conor-f/git-vain/pull/1/files?ref=blog.randombits.host">basic script</a> to star/unstar a repo, and <a href="https://github.com/conor-f/git-vain/pull/2?ref=blog.randombits.host">another</a> to use <code>ntfy</code> to send a notification when this list changed. Already it was starting to take shape enough to keep my motivation going.</p><p>&#x2003;Once this was complete, I used the simple Python stdlib module <a href="https://docs.python.org/3/library/shelve.html?ref=blog.randombits.host">shelve</a> to add some persistence, and integrated with <a href="https://github.com/caronc/apprise?ref=blog.randombits.host">Apprise</a> to support a wider variety of channels for notification.</p><p>&#x2003;There&apos;s no point in going into any further detail, because A) it&apos;s not very interesting, and B) it didn&apos;t progress much further! But this is a feature, not a bug - I set out with the intention of cutting a <code>v1.0.0</code> and this was almost there already!</p><p>&#x2003;But what next?</p><h1 id="next-steps">Next Steps</h1><p>&#x2003;I realised I had a few things I wanted to add as I was going through the development process. This is typical, and it&apos;s the source of all <a href="https://en.wikipedia.org/wiki/Feature_creep?ref=blog.randombits.host">feature creep</a>. I curtailed this by putting anything I wanted to do into Vikunja, and deciding to convert them to issues once <code>v1.0.0</code> had been completed. 
If I didn&apos;t, I would certainly still be adding little bits and pieces here and there now as opposed to enjoying the feeling of having &quot;finished&quot; a piece of work. Even if it only lasts as long as it takes to go back to working on <code>v2.0.0</code>!</p><p>&#x2003;Some things I want to improve in <code>v2.0.0</code> or in the next side project I work on are as follows:</p><!--kg-card-begin: markdown--><ul>
<li>Use pre-commit to add more stability and standardisation to the work
<ul>
<li>This feels particularly powerful and would be a good way to finally upgrade my development process which still consists of a pretty vanilla vim setup. Who knows? Maybe I&apos;ll even switch to NeoVim with this new-fangled LSP!</li>
</ul>
</li>
<li>There are some bugs, such as not accounting for pagination of the responses from Github.
<ul>
<li>I deliberately didn&apos;t try to address these during this effort, as I could easily have gotten bogged down. They are small, contained pieces of work that should be easy to pick up in the future, even if I leave the project and come back to it after a long break.</li>
</ul>
</li>
<li>SemVer and Conventional Commits seem to be a fantastic way to completely automate the release cycle.
<ul>
<li>If there&apos;s no Github Action for it, it shouldn&apos;t be too difficult to make one either - I haven&apos;t found a service suited to my needs so far. Next project, maybe??</li>
</ul>
</li>
<li>Python has a good few issues when it comes to naming conventions and project setup.
<ul>
<li>This is potentially exacerbated by the fact that the tooling I&apos;m familiar with is outdated by now.
<ul>
<li>Using setup.py works nicely (especially if you have experience with it) for a small project like this, but it&apos;s not a good idea for more serious work.</li>
</ul>
</li>
<li>The naming conventions I was having issues with were small things like <code>-</code> vs <code>_</code> vs camelCase, and package names vs module names vs binary names - small things that nonetheless added some annoying decision fatigue and cognitive load to keep consistent.</li>
</ul>
</li>
</ul>
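<p>&#x2003;As a sketch of that first point, a minimal <code>.pre-commit-config.yaml</code> might look something like this (the hooks and revisions here are illustrative, not what git-vain actually uses):</p>
<pre><code class="language-yaml">repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
  - repo: https://github.com/psf/black
    rev: 23.3.0
    hooks:
      - id: black
</code></pre>
<p>&#x2003;A single <code>pre-commit install</code> then wires these hooks into every commit.</p>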
<!--kg-card-end: markdown--><h1 id="conclusion">Conclusion</h1><p>&#x2003;The Python standards and conventions are a bit of a mess. Dev Ops/best practices actually <em>do</em> provide great help when used in conjunction with common sense, and the &quot;perfect&quot; project template likely cannot exist due to the subtle variations in what &quot;perfect&quot; looks like for different purposes.</p><p>&#x2003;Nonetheless, that&apos;s what my vanity is currently driving me towards finding, so for now, I&apos;ll keep looking.</p><p><em>&#x2003;&#x2003;Hopefully not in vain!</em></p>]]></content:encoded></item><item><title><![CDATA[Monitoring Self-Hosted Services]]></title><description><![CDATA[<p>I have been self-hosting for almost <s>two</s> three years now, and one thing I have never quite figured out is how to monitor all the applications I host. At this stage, there are approximately <em>forty</em> running Docker containers so I really should have some means of monitoring what&apos;s</p>]]></description><link>https://blog.randombits.host/monitoring-self-hosted-services/</link><guid isPermaLink="false">640205addcdb77000189f267</guid><dc:creator><![CDATA[Conor]]></dc:creator><pubDate>Wed, 07 Jun 2023 10:49:47 GMT</pubDate><content:encoded><![CDATA[<p>I have been self-hosting for almost <s>two</s> three years now, and one thing I have never quite figured out is how to monitor all the applications I host. At this stage, there are approximately <em>forty</em> running Docker containers so I really should have some means of monitoring what&apos;s going on on them and the general health of the server they are running on. Professionally, I have used <a href="https://www.splunk.com/?ref=blog.randombits.host">Splunk</a> and <a href="https://www.sumologic.com/?ref=blog.randombits.host">Sumo Logic</a> for monitoring services, but the open source solution I would prefer to use for this is <a href="https://grafana.com/?ref=blog.randombits.host">Grafana</a>. 
I have already set up Grafana to get logs from the <a href="https://via.randombits.host/?ref=blog.randombits.host">Via</a> app, and it seems to be a widely used tool industry-wide, so it would be nice to not be completely in the dark on it! In particular, I will be using <a href="https://grafana.com/oss/loki/?ref=blog.randombits.host">Loki</a>, <a href="https://grafana.com/oss/prometheus/?ref=blog.randombits.host">Prometheus</a>, <a href="https://grafana.com/docs/loki/latest/clients/promtail/?ref=blog.randombits.host">Promtail</a>, <a href="https://github.com/prometheus/node_exporter?ref=blog.randombits.host">Node-Exporter</a>, and <a href="https://github.com/google/cadvisor?ref=blog.randombits.host">cAdvisor</a>. As I have basically no experience with any of these tools, I will summarize my research on them for you, and document how they interact with each other in my setup. After that, I will describe which data I wish to collect and for what purpose, before finally showing the dashboards/alerts I have made. Let&apos;s go!</p><h2 id="the-tools">The Tools</h2><h3 id="grafana">&#x2003;Grafana</h3><p>&#x2003;&#x2003;&#x2003;Let&apos;s start with the main one - what is Grafana? Grafana is at its core a web-based data visualization platform. It acts as a front end to many time-series databases, and uses plugins to consume data from different sources and support custom dashboard visualizations. It also has a simple graphical tool to help you craft queries on the data. The best place to try the Grafana platform out is at <a href="https://play.grafana.org/?ref=blog.randombits.host">play.grafana.org</a>.</p><h3 id="prometheus">&#x2003;Prometheus</h3><p>&#x2003;&#x2003;&#x2003;Prometheus is a time-series database which operates on a <code>pull</code> model. You configure exporters, and Prometheus requests metrics from them on a regular schedule. 
There is a suite of <a href="https://prometheus.io/docs/introduction/overview/?ref=blog.randombits.host#components">components</a> it can make use of, but one core feature we will be using is <a href="https://prometheus.io/docs/prometheus/latest/querying/examples/?ref=blog.randombits.host">PromQL</a> - the Prometheus Query Language. We will use this through Grafana to aggregate metrics collected by Prometheus. One important thing to note is that Prometheus is designed to work with numeric information only. This means it cannot be used to search through textual logs like you might do in Splunk or Sumo Logic.</p><h3 id="loki">&#x2003;Loki</h3><p>&#x2003;&#x2003;&#x2003;Being restricted to just working with metrics is quite a limitation, so we will also be using <a href="https://grafana.com/oss/loki/?ref=blog.randombits.host">Loki</a>. Loki encompasses a set of tools/services, but my working model of it doesn&apos;t extend much further than &quot;Prometheus for log lines&quot;. It accepts log data in any format and, similar to Prometheus, allows you to build metrics and alerts on top of it.</p><h3 id="promtail">&#x2003;Promtail</h3><p>&#x2003;&#x2003;&#x2003;<a href="https://grafana.com/docs/loki/latest/clients/promtail/?ref=blog.randombits.host">Promtail</a> is responsible for delivering log lines from log files to Loki. It is roughly the equivalent component in the Loki stack to what Node-Exporter is in the Prometheus stack. This is confusing, as <em>Prom</em>tail looks like it should be part of the <em>Prom</em>etheus stack, but alas, the naming of open source tooling is never great!</p><p>&#x2003;&#x2003;&#x2003;Promtail will be used to collect log lines from containers of my own services, or of services being debugged.</p><h3 id="node-exporter">&#x2003;Node-Exporter</h3><p>&#x2003;&#x2003;&#x2003;<a href="https://grafana.com/oss/prometheus/exporters/node-exporter/?ref=blog.randombits.host">Node Exporter</a> monitors and exports hardware and kernel level metrics to Prometheus. 
It is highly configurable with a <a href="https://github.com/prometheus/node_exporter?ref=blog.randombits.host#collectors">long list</a> of metrics it can collect if you desire. Despite the warnings, we will be running <code>node-exporter</code> from a Docker container for now. This is just for ease of encapsulation until I can move my personal home server to using NixOS or similar.</p><p>&#x2003;&#x2003;&#x2003;This will provide the host-level metrics we need, such as CPU usage, RAM usage, free space, etc.</p><h3 id="cadvisor">&#x2003;cAdvisor</h3><p>&#x2003;&#x2003;&#x2003;From the <a href="https://github.com/google/cadvisor?ref=blog.randombits.host">cAdvisor Github page</a>:</p><blockquote>[cAdvisor] is a running daemon that collects, aggregates, processes, and exports information about running containers.</blockquote><p>&#x2003;&#x2003;&#x2003;<a href="https://github.com/google/cadvisor/blob/master/docs/storage/prometheus.md?ref=blog.randombits.host">These metrics</a> can be exposed for Prometheus, and will provide the per-container resource usage metrics we need.</p><h2 id="stack">Stack</h2><p>&#x2003;&#x2003;&#x2003;So now we have all the components explained, it&apos;s worthwhile visualizing the stack we will have. One crucial thing to remember is that while this seems like a large number of services, each one is very small and modular, so it won&apos;t be consuming a huge amount of resources.</p><figure class="kg-card kg-image-card"><img src="https://blog.randombits.host/content/images/2023/03/grafana.png" class="kg-image" alt loading="lazy" width="711" height="321" srcset="https://blog.randombits.host/content/images/size/w600/2023/03/grafana.png 600w, https://blog.randombits.host/content/images/2023/03/grafana.png 711w"></figure><p></p><h2 id="the-data">The Data</h2><p>&#x2003;&#x2003;&#x2003;Now we know the <em>how</em> of observability, we need to get to the <em>what</em>. 
Honestly, I spent a long time putting this off, probably because this was the largest gap in my knowledge! However, I think an iterative approach works best here anyways - both in iteratively building up to &quot;complete&quot; observability/insight, and in iteratively building up my knowledge of the Grafana stack.<br>&#x2003;&#x2003;&#x2003;I suppose it makes sense to think about <em>why</em> I&apos;m setting up monitoring for these services. Primarily it&apos;s to see what my server is capable of, i.e. do I need to add some RAM/storage/replace the entire CPU? How many additional containers can I run? Has there been a large spike in usage? If so, by which containers/services? How much network input/output is each service going through? As a percentage of the whole input/output? How much storage is each container using? Secondly, I want to have insights into the actual logs of my own services (or others&apos; if I really want, I guess - but primarily my homemade services). This should be all logs for debug purposes and usage metrics in general.<br>&#x2003;&#x2003;&#x2003;Let&apos;s make a list:</p><!--kg-card-begin: markdown--><ul>
<li>Host
<ul>
<li>Metrics
<ul>
<li>CPU Usage</li>
<li>RAM Usage</li>
<li>Storage Usage %</li>
<li>Load (1 min, 5 min, and 15 min averages seem standard)</li>
<li>Network Throughput (Input/Output volume)</li>
</ul>
</li>
<li>Logs
<ul>
<li>syslog</li>
<li>auth.log</li>
</ul>
</li>
</ul>
</li>
<li>Per-Service
<ul>
<li>Metrics
<ul>
<li>CPU Usage %</li>
<li>RAM Usage</li>
<li>Storage Usage %</li>
<li>Network Throughput</li>
</ul>
</li>
<li>Logs
<ul>
<li>For specific containers</li>
</ul>
</li>
</ul>
</li>
</ul>
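<p>&#x2003;&#x2003;&#x2003;As a rough sketch of how some of these items will translate into queries later (these use the standard <code>node_exporter</code>/<code>cAdvisor</code> metric names, so labels may need adjusting for your setup), the PromQL looks something like:</p>
<pre><code class="language-promql"># Host CPU usage %, averaged across cores
100 - (avg by (instance) (rate(node_cpu_seconds_total{mode=&quot;idle&quot;}[5m])) * 100)

# Host RAM usage %
100 * (1 - node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes)

# Host storage usage % on the root filesystem
100 * (1 - node_filesystem_avail_bytes{mountpoint=&quot;/&quot;} / node_filesystem_size_bytes{mountpoint=&quot;/&quot;})

# Per-container CPU usage (from cAdvisor)
rate(container_cpu_usage_seconds_total{name!=&quot;&quot;}[5m])
</code></pre>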
<!--kg-card-end: markdown--><h2 id="the-implementation">The Implementation</h2><p>&#x2003;&#x2003;&#x2003;Now we know <em>what</em> we&apos;re observing, and <em>how</em> we&apos;re going to ingest it, we just need to do it! </p><figure class="kg-card kg-embed-card kg-card-hascaption"><iframe width="200" height="113" src="https://www.youtube.com/embed/RZGV9Z5Gvgs?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen title="Completing the plan"></iframe><figcaption>I wish I could have found the author&apos;s site to link to and not Youtube...</figcaption></figure><p>&#x2003;&#x2003;&#x2003;Since we painstakingly mapped out the different component services, we can tell instantly that we need <code>cAdvisor</code> for the per-service metrics, <code>NodeExporter</code> for the host metrics, and <code>loki</code> for all the log lines. Let&apos;s start with the metrics.<br>&#x2003;&#x2003;&#x2003;The metrics all need to feed into <code>prometheus</code> in order to end up in Grafana, so we need to edit the prometheus config file to wire them up. For getting all our container metrics from <code>cAdvisor</code>, we just need a few lines. For <code>NodeExporter</code>, just a few more:</p><pre><code class="language-yaml">scrape_configs:
  - job_name: &quot;cadvisor&quot;
    scrape_interval: 15s
    static_configs:
      - targets: [&quot;cadvisor:8080&quot;]
  - job_name: &quot;node_exporter&quot;
    scrape_interval: 15s
    static_configs:
      - targets: [&quot;node-exporter:9100&quot;]</code></pre><p>&#x2003;&#x2003;&#x2003;Then Loki needs to be configured for <code>syslog</code> and <code>auth.log</code>. This was achieved with a simple promtail config and mapping <code>/var/log:/var/hostlogs</code> in docker-compose:</p><pre><code class="language-yaml">scrape_configs:
- job_name: hostlogs_job
  static_configs:
  - targets:
      - localhost
    labels:
      job: hostlogs
      __path__: /var/hostlogs/*log
- job_name: docker_container_logs
  docker_sd_configs:
  - host: unix:///var/run/docker.sock
    refresh_interval: 5s
  relabel_configs:
    - source_labels: [&apos;__meta_docker_container_name&apos;]
      regex: &apos;/(.*)&apos;
      target_label: &apos;container&apos;
</code></pre><h2 id="alerting">Alerting</h2><p>&#x2003;Finally we have the full stack set up. The last remaining thing needed for a semi-professional setup (emphasis on the semi!) is some alerting. For alerting, I&apos;m going to use <a href="https://github.com/binwiederhier/ntfy?ref=blog.randombits.host">ntfy</a> and a small Grafana integration I found called <a href="https://github.com/kittyandrew/grafana-to-ntfy?ref=blog.randombits.host">grafana-to-ntfy</a>. This took a little more work than expected, but eventually I got it all working. Firstly, I set up a personal <code>ntfy</code> instance, then added the <code>grafana-ntfy</code> container to my docker-compose along with a simple env file as explained in the README. I then integrated it with Grafana alerting. One of the key things to note here is that I just used plain <code>http</code> for communication with the <code>grafana-ntfy</code> container as I couldn&apos;t get it set up with SSL! I kept getting invalid cert errors with reference to a cert only valid for Traefik. Also not fully documented, but the <code>BAUTH</code> variables need to be passed too, although they should really be optional. May submit a PR for that... Follow the README to do a test notification.</p><p>Set up a query to make sure notifications are coming through and then just get on with standard alarming!</p><h2 id="conclusion">Conclusion</h2><p>&#x2003;So, as you&apos;ve probably realized, I <em>really</em> lost steam towards the end of this post. I have been working on this post/stack setup for about three months and it has been frustrating me to no end and stopping me from writing about things I would prefer to, and from following my current tech interests. I try to balance doing things I feel I <em>should</em> do with things that I have a strong (but usually fleeting) motivation to do, as these rarely overlap. 
This time however, even though I can see the huge benefit of having a well set up monitoring stack for my home server and how all aspects of this will improve my quality of life when debugging/doing basic admin, the balance has just tipped to being more stressful than beneficial to me.</p><p>&#x2003;I will update my stack in the future, and hopefully write a more concise post on setting up a home server monitoring stack, but for now, this is all you get!</p>]]></content:encoded></item><item><title><![CDATA[PiFrame V2.0]]></title><description><![CDATA[<p>&#x2003;&#x2003;&#x2003;During my move, the microSD card that was driving my LED matrix to show Spotify covers broke. As I&apos;m stupid and had nothing backed up or in source control, I&apos;m now tasked with recreating it from scratch. I&apos;ll be using Docker</p>]]></description><link>https://blog.randombits.host/piframe/</link><guid isPermaLink="false">6410ed1bdcdb77000189f3ef</guid><category><![CDATA[Docker]]></category><category><![CDATA[Docker Compose]]></category><category><![CDATA[Self Hosted]]></category><category><![CDATA[Side Project]]></category><dc:creator><![CDATA[Conor]]></dc:creator><pubDate>Sat, 25 Mar 2023 18:21:32 GMT</pubDate><media:content url="https://blog.randombits.host/content/images/2023/03/cover.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://blog.randombits.host/content/images/2023/03/cover.jpeg" alt="PiFrame V2.0"><p>&#x2003;&#x2003;&#x2003;During my move, the microSD card that was driving my LED matrix to show Spotify covers broke. As I&apos;m stupid and had nothing backed up or in source control, I&apos;m now tasked with recreating it from scratch. I&apos;ll be using Docker sitting on top of a Ubuntu 22.04 server edition running on a Raspberry Pi 3B+. I will be doing it all headless as I find it to be a much more streamlined approach than getting a HDMI cable, Bluetooth keyboard, and other accessories all in one place. 
</p><h2 id="preparation">Preparation</h2><p>&#x2003;&#x2003;&#x2003;First, grab a microSD card and get the Ubuntu image from <a href="https://ubuntu.com/download/raspberry-pi/thank-you?version=22.04.2&amp;architecture=server-arm64+raspi&amp;ref=blog.randombits.host">here</a>. Extract it using <code>xz</code> and finally use a <code>dd</code> command to flash it onto the card.<br><br>&#x2003;&#x2003;&#x2003;Be careful! Make sure you specify the correct device! I am deliberately using something wrong in the snippet below so you don&apos;t blindly copy paste your way to wiping your system. Find the correct device by running <code>lsblk</code> before and after you plug in the SD card. It should look something like <code>mmcblk0</code> or <code>sd[a|b|c|...]</code>. Also note that you don&apos;t use a partition number with the <code>dd</code> command when you are flashing a bootable image.</p><pre><code class="language-shell">$ xz -d ubuntu-22.04.2-preinstalled-server-arm64+raspi.img.xz
$ sudo dd if=ubuntu-22.04.2-preinstalled-server-arm64+raspi.img of=/dev/REPLACE_WITH_YOUR_DEVICE_ID bs=4M status=progress &amp;&amp; sync</code></pre><p>&#x2003;&#x2003;&#x2003;Once this completes, you have to configure your Pi for remote access. The easiest way to do this is by setting the hostname, and then plugging in your Pi via Ethernet. If you were also too lazy to go find an Ethernet cable like me, then you will also have to update your <code>network-config</code> file to automatically connect to your WiFi network on boot.</p><pre><code class="language-shell">$ sudo mount /dev/sda1 /mnt
$ # Set the hostname:
$ sudo vim /mnt/user-data
... Search for &quot;hostname&quot; and modify the line as needed before exiting the editor
$ # Set your network-config:
$ sudo vim /mnt/network-config
... Modify as needed and quit.
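... For reference, a minimal WiFi stanza in network-config (SSID and
... password are placeholders) looks roughly like this:
...   version: 2
...   wifis:
...     wlan0:
...       dhcp4: true
...       optional: true
...       access-points:
...         &quot;YOUR_SSID&quot;:
...           password: &quot;YOUR_PASSWORD&quot;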
$ # Finally tell cloud-init to reboot after applying your settings:
$ # (use printf so the newline is interpreted, and tee so the write happens as root)
$ printf &apos;power_state:\n  mode: reboot\n&apos; | sudo tee -a /mnt/user-data
$ sudo umount /dev/sda1</code></pre><p>&#x2003;&#x2003;&#x2003;For more details on the cloud-init and network-config modifications, check <a href="https://github.com/DavidUnboxed/Ubuntu-20.04-WiFi-RaspberyPi4B?ref=blog.randombits.host">here</a> for a good explanation. Either way, by now you should have a Pi you can simply SSH directly into with a set hostname!</p><h3 id="installing-docker">Installing Docker</h3><p>&#x2003;&#x2003;&#x2003;To ease the development process, and to make the service easy to maintain/update in the future, I&apos;m going to be running everything through Docker. Setting up Docker is a pretty straightforward process. I&apos;ve copied the commands here for ease of getting started.</p><pre><code class="language-shell">$ sudo apt update &amp;&amp; sudo apt upgrade -y &amp;&amp; sudo reboot
$ curl -fsSL https://get.docker.com -o get-docker.sh &amp;&amp; sudo sh ./get-docker.sh
$ # Now allow running docker commands as non-root user:
$ sudo groupadd docker  # This is likely already created
$ sudo usermod -aG docker $USER
$ newgrp docker
$ # Test it:
$ docker run hello-world
$ # Now configure Docker to start on boot with systemd
$ sudo systemctl enable docker.service
$ sudo systemctl enable containerd.service</code></pre><h2 id="testing-a-docker-composeyaml">Testing a docker-compose.yaml</h2><p>&#x2003;&#x2003;&#x2003;The easiest way I have found to run multiple containers is to manage them through <code>docker compose</code>. I often test my setup with a simple <code>docker-compose.yaml</code> file once I think I have everything set up and ready to go. Here&apos;s the file I used to test this setup. If you are connected to your local network, you should then be able to type in the hostname of your device into your browser and see the <code>whoami</code> page.</p><pre><code class="language-yaml">version: &quot;3.3&quot;

services:

  traefik:
    image: &quot;traefik:v2.9&quot;
    container_name: &quot;traefik&quot;

    restart: &quot;unless-stopped&quot;

    command:
      #- &quot;--log.level=DEBUG&quot;
      #- &quot;--api.insecure=true&quot;
      - &quot;--providers.docker=true&quot;
      - &quot;--providers.docker.exposedbydefault=false&quot;
      - &quot;--entrypoints.web.address=:80&quot;
    ports:
      - &quot;80:80&quot;
      - &quot;8080:8080&quot;
    volumes:
      - &quot;/var/run/docker.sock:/var/run/docker.sock:ro&quot;

  whoami:
    image: &quot;traefik/whoami&quot;
    container_name: &quot;simple-service&quot;

    restart: &quot;unless-stopped&quot;

    labels:
      - &quot;traefik.enable=true&quot;
      - &quot;traefik.http.routers.whoami.rule=Host(`piframe`)&quot;
      - &quot;traefik.http.routers.whoami.entrypoints=web&quot;</code></pre><h2 id="v001">V0.0.1</h2><p>&#x2003;&#x2003;&#x2003;Now that all the prerequisites are set up, it&apos;s time to move onto <code>V0.0.1</code> of the project. I always find it good to spend some time setting out clearly defined goals to get to a <code>V1.0.0</code> and for me, <code>V0.0.1</code> always involves setting up continuous deployment and making sure I can view log messages/errors clearly. This involved setting up Github Actions to automatically build a Docker image and push it to Docker Hub on a push to the main branch or when a tag is pushed of the format <code>Vx.y.z</code>. You can see how the code looked at this point over at the <a href="https://github.com/conor-f/piframe/tree/v0.0.1?ref=blog.randombits.host">repo</a>. At this point, I also set up <a href="https://containrrr.dev/watchtower/?ref=blog.randombits.host">Watchtower</a> via a few lines in my <code>docker-compose.yaml</code>, which monitors my container for updates, and automatically pulls and restarts the container when it changes.</p><figure class="kg-card kg-image-card kg-width-wide kg-card-hascaption"><img src="https://blog.randombits.host/content/images/2023/03/working_caching.png" class="kg-image" alt="PiFrame V2.0" loading="lazy" width="1919" height="871" srcset="https://blog.randombits.host/content/images/size/w600/2023/03/working_caching.png 600w, https://blog.randombits.host/content/images/size/w1000/2023/03/working_caching.png 1000w, https://blog.randombits.host/content/images/size/w1600/2023/03/working_caching.png 1600w, https://blog.randombits.host/content/images/2023/03/working_caching.png 1919w" sizes="(min-width: 1200px) 1200px"><figcaption>On the left, the container building (with nice caching!) 
through Github Actions, and on the right, the container running, updating, and logging on my host.</figcaption></figure><h2 id="v002">V0.0.2</h2><p>&#x2003;&#x2003;&#x2003;This version was centred around getting <em>anything</em> to display on the LED matrix. I completed all of this outside of a Docker container in the interest of development speed, keeping in mind that everything I do outside of the container should be reproducible within one. <a href="https://learn.adafruit.com/adafruit-rgb-matrix-bonnet-for-raspberry-pi?view=all&amp;ref=blog.randombits.host">Here</a> is the Adafruit documentation on the specific hardware I was using which was essential for configuring the hardware (which I&apos;m not going to cover unless requested!). During this, I found the authoritative resource on all things Raspberry Pi + LED Matrix related: <a href="https://github.com/hzeller/rpi-rgb-led-matrix/?ref=blog.randombits.host">hzeller&apos;s rpi-rgb-led-matrix repository</a>. Then it was simply a matter of running the following to get a basic demo square running.</p><pre><code class="language-shell">ubuntu@piframe:~/matrix_test$ git clone https://github.com/hzeller/rpi-rgb-led-matrix/
ubuntu@piframe:~/matrix_test/rpi-rgb-led-matrix$ make -C examples-api-use</code></pre><h2 id="v003">V0.0.3</h2><p>&#x2003;&#x2003;&#x2003;This step was to make the work I had done outside of Docker reproducible and working from my CD workflow. I additionally added some tools to the Docker image at this point to ease working with the container. e.g. installing <code>vim</code> and <code>git</code>. This may seem very pedantic and slow of an approach, but my goal when I am doing personal projects like this is to keep each step achievable, reproducible, and small. It allows me to easily leave a project and return to it days or weeks later and pick up where I left off. I find that breaking personal projects down into these achievable blocks makes finishing them much more likely. As usual, you can see the repo at this point <a href="https://github.com/conor-f/piframe/tree/v0.0.3?ref=blog.randombits.host">here</a>.</p><h2 id="v004">V0.0.4</h2><p>&#x2003;&#x2003;&#x2003;As the ultimate goal is to display full images on the LED matrix, I sought out an example that did just that. At this point we can display an image on the screen only using a command line interface which is workable, but definitely something to try improve in the future. Regardless <code><a href="https://github.com/conor-f/piframe/tree/v0.0.4?ref=blog.randombits.host">V0.0.4</a></code> was complete!</p><h2 id="v005">V0.0.5</h2><p>&#x2003;&#x2003;&#x2003;This is where we integrate with Spotify. Plan of action is to make a loop that periodically checks if Spotify is online and if so, display the album artwork. If nothing is playing, clear the screen. 
As I had worked with the Spotify API before while making <a href="https://github.com/conor-f/spotibar?ref=blog.randombits.host">Spotibar</a>, I decided to just pull that in as a dependency because I was already familiar with setting it up.<br>&#x2003;&#x2003;&#x2003;As I had time, and was really annoyed at the command line interface for displaying an image, I decided to set up Python bindings as shown <a href="https://github.com/hzeller/rpi-rgb-led-matrix/tree/master/bindings/python?ref=blog.randombits.host">here</a>. This made development much smoother. <a href="https://github.com/conor-f/piframe/commit/de498eb6035b15cd29c7a148f2372d66e05b5b6c?ref=blog.randombits.host">Here</a> is the code as of that point. One thing of note is that I am continually SSHing into the Pi to test commands on the Docker container before adding them to the Dockerfile! </p><h2 id="the-doldrums">The Doldrums</h2><p>&#x2003;&#x2003;&#x2003;At this point, I had begun to slack on setting clear targets as I thought I was so close to the finish line. However, this is where the <a href="https://en.wikipedia.org/wiki/Pareto_principle?ref=blog.randombits.host">Pareto principle</a> struck. I had a very difficult time authorizing the Spotify API without having to take awkward steps. I tried a few different things (including switching to trying Last.fm instead of Spotify!) 
but ended up making a number of changes to <code>spotibar</code> so that installation only needs a short set of extra steps, as follows:</p><p>&#x2003;1) Add the <code>docker-compose.yaml</code> file to your Pi<br>&#x2003;2) Run <code>docker compose pull</code><br>&#x2003;3) Run <code>sudo docker compose run -it piframe spotibar --init</code><br>&#x2003;4) Follow the Spotibar <a href="https://github.com/conor-f/spotibar?ref=blog.randombits.host#installation">instructions</a><br>&#x2003;&#x2003;4.1) For config filepath, put in <code>/app/config/spotibar_config.json</code><br>&#x2003;&#x2003;4.2) For auth path, put in <code>/app/config/spotibar_auth_cache</code><br>&#x2003;&#x2003;4.3) Ignore any errors. Just look for the line &quot;Successfully authenticated.&quot;<br> &#xA0; &#xA0;Optional) <code>sudo chmod -R 777 config/</code> to remove some errors from the logs<br>&#x2003;5) <code>docker compose up -d</code></p><h2 id="finishing-touches">Finishing Touches</h2><p>&#x2003;&#x2003;&#x2003;As the project was transitioning from PoC to MVP stage, I wanted to put some gloss on it. The main issue was that the image still flickered quite a lot. To address that, I changed the boot options to disable audio as explained in the hzeller repo&apos;s <a href="https://github.com/hzeller/rpi-rgb-led-matrix?ref=blog.randombits.host#troubleshooting">Troubleshooting section</a>. However, there was a slight issue where the documentation used the wrong path, so I diligently created a <a href="https://github.com/hzeller/rpi-rgb-led-matrix/pull/1524?ref=blog.randombits.host">PR</a> to update the documentation and save people some time in the future. 
Alongside this, I supported the configuration of parameters of the LED Matrix through <a href="https://github.com/conor-f/piframe/blob/v1.0.0/src/piframe.py?ref=blog.randombits.host#L30">environment variables</a> as suggested by the <a href="https://12factor.net/config?ref=blog.randombits.host">12-factor app</a>.</p><h2 id="v100">V1.0.0</h2><p>&#x2003;&#x2003;&#x2003;At this point, I was happy to cut <code>V1.0.0</code>. I had achieved all my initial goals, and now had something I could plug into my wall and leave running confident in its reliability. I tested everything from scratch, took some pictures, and called it a day!</p><h2 id="finishing-thoughts">Finishing Thoughts</h2><ul><li>Continuous Deployment is <em>not</em> only for large projects or for your job. It makes your life far easier and allows you to jump in/out of a project at ease, knowing you won&apos;t forget some magic incantation if you stop working on it for a week. It&apos;s also quite easy to set up, and is almost templatable.</li><li>I hit major issues when I stopped breaking down my work into small chunks. This could be a coincidence, but the quality of my output was significantly lower when I didn&apos;t set clear goals.</li><li>The 80/20 rule hits hard.</li></ul><p>Thank you for reading this, and if you have any feedback, you can reach me on <a href="https://mastodon.social/@conorf?ref=blog.randombits.host">Mastodon</a>, by email (via anything at this hostname.tld), or any other way you can find me! Anything from content changes to advice on my writing style is appreciated!</p>]]></content:encoded></item><item><title><![CDATA[DIYnDNS - The Lengths I Will Go to Not Pay My ISP for a Static IP Address]]></title><description><![CDATA[<figure class="kg-card kg-image-card"><img src="https://blog.randombits.host/content/images/2023/02/4fc53ae8bb19ab51a9d72da70318a149-3.jpg" class="kg-image" alt loading="lazy" width="500" height="700"></figure><p>DNS is infuriating to me. 
It exists in the uncanny valley of technologies where on paper I know a decent amount about it, but in reality I just flounder aimlessly and ultimately end up in an IRC channel somewhere spilling my woes to any (seemingly mystically intelligent) kind soul who</p>]]></description><link>https://blog.randombits.host/diyndns/</link><guid isPermaLink="false">63f3c7be32cd480001b97dbb</guid><category><![CDATA[Docker]]></category><category><![CDATA[Self Hosted]]></category><category><![CDATA[DNS]]></category><dc:creator><![CDATA[Conor]]></dc:creator><pubDate>Mon, 20 Feb 2023 20:55:00 GMT</pubDate><content:encoded><![CDATA[<figure class="kg-card kg-image-card"><img src="https://blog.randombits.host/content/images/2023/02/4fc53ae8bb19ab51a9d72da70318a149-3.jpg" class="kg-image" alt loading="lazy" width="500" height="700"></figure><p>DNS is infuriating to me. It exists in the uncanny valley of technologies where on paper I know a decent amount about it, but in reality I just flounder aimlessly and ultimately end up in an IRC channel somewhere spilling my woes to any (seemingly mystically intelligent) kind soul who will help me debug my issues. This does not pair well with trying to host your own server in your living room.</p><p>I had all my DNS issues tidied up enough to forget about them (the ideal state of affairs for DNS in my opinion) while I was living in Ireland and my ISP by default gave out static IPv4 addresses, or if not, changed them so infrequently that I didn&apos;t experience a change in over a year. However, with a new ISP in Berlin, I had quite a different experience. I had just congratulated myself on the quality of the infrastructure-as-code I had developed: even after 5 months of downtime, I could simply plug in my server again, connect it to the WiFi, and it was back online. I went to bed happy. The following morning, however, I woke up to find that my IP address had changed overnight. 
I gave the ISP a call, and they informed me that a static IP address could easily be provided, but it would cost an additional ten euro per month. They reassign IP addresses every 24 hours too, just to be annoying about it. This means you can&apos;t rely on a rarely-updating dynamic IP address either.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.randombits.host/content/images/2023/02/Screenshot-from-2023-02-20-21-14-36.png" class="kg-image" alt="Screenshot of a chart and some raw data showing an IP address being updated once per 24 hours." loading="lazy" width="1840" height="582" srcset="https://blog.randombits.host/content/images/size/w600/2023/02/Screenshot-from-2023-02-20-21-14-36.png 600w, https://blog.randombits.host/content/images/size/w1000/2023/02/Screenshot-from-2023-02-20-21-14-36.png 1000w, https://blog.randombits.host/content/images/size/w1600/2023/02/Screenshot-from-2023-02-20-21-14-36.png 1600w, https://blog.randombits.host/content/images/2023/02/Screenshot-from-2023-02-20-21-14-36.png 1840w" sizes="(min-width: 720px) 720px"><figcaption>What&apos;s this wonderful tool updating my IP address every 24 hours?? Also, ignore the day I turned my server off by accident and was very confused why nothing was working...</figcaption></figure><p>I felt personally attacked at the thought of paying for a static IP address, and for some inexplicable reason, I didn&apos;t want to use existing Dynamic DNS solutions. Maybe it was some vague notion of not wanting another external resource that I&apos;d have to pay for or that could break due to a misconfiguration, but most likely it was just stubbornness. Enter <a href="https://github.com/conor-f/diyndns?ref=blog.randombits.host">DIYnDNS</a>! 
A simple Python script that checks every N minutes whether my IP address has changed, and if so, connects to Cloudflare, updates my DNS records to point at the new IP address, then goes back to sleep until it notices the IP address has changed again.</p><p>This is all pretty straightforward, but there are a few nice aspects and things I learned along the way.</p><!--kg-card-begin: markdown--><ul>
<li>First of all, this is all packaged up in a nice and small Docker container, which also lets me provide a simple <code>docker-compose</code> file so it can be installed with ease alongside any reverse proxy setup like <code>Caddy</code> or <code>Traefik</code>.</li>
<li>Secondly, it uses a plain <code>.ini</code> file for configuration, a format whose appeal I never understood before. I think I&apos;ll be using <code>.ini</code> files for my configuration in future Python projects, primarily due to the inbuilt <code>configparser</code> library and how easy it was to use.</li>
<li>Thirdly, there&apos;s a <a href="https://containrrr.dev/watchtower/?ref=blog.randombits.host">watchtower</a> container defined in the <code>docker-compose</code> file too, which will automatically update the <code>diyndns</code> container whenever there&apos;s a new image pushed to Docker Hub.</li>
<li>Finally, this container is updated on every push to the repository&apos;s <code>main</code> branch using some straightforward Github Actions CI.</li>
</ul>
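Schematically, the loop the script runs looks something like this. This is a rough sketch rather than the actual `diyndns` code: the function names, the ipify lookup, and the Cloudflare zone/record placeholders are all illustrative.

```python
import json
import time
import urllib.request
from typing import Optional

CHECK_INTERVAL_MINUTES = 15  # illustrative; diyndns reads its settings from an .ini file


def get_public_ip() -> str:
    """Ask an external service (ipify here, as an example) what our public IP is."""
    with urllib.request.urlopen("https://api.ipify.org?format=json") as resp:
        return json.load(resp)["ip"]


def needs_update(current_ip: str, last_seen_ip: Optional[str]) -> bool:
    """The whole 'dynamic DNS' decision: did the ISP hand us a new address?"""
    return last_seen_ip is None or current_ip != last_seen_ip


def update_cloudflare_record(zone_id: str, record_id: str, name: str, ip: str, token: str) -> None:
    """PUT the new address into a Cloudflare A record via their v4 REST API."""
    req = urllib.request.Request(
        f"https://api.cloudflare.com/client/v4/zones/{zone_id}/dns_records/{record_id}",
        data=json.dumps({"type": "A", "name": name, "content": ip}).encode(),
        headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
        method="PUT",
    )
    urllib.request.urlopen(req)


def run_forever() -> None:
    last_seen_ip = None
    while True:
        current_ip = get_public_ip()
        if needs_update(current_ip, last_seen_ip):
            # Placeholders below stand in for values from the config file.
            update_cloudflare_record("<zone>", "<record>", "example.com", current_ip, "<token>")
            last_seen_ip = current_ip
        time.sleep(CHECK_INTERVAL_MINUTES * 60)
```

The nice property is that the decision itself (`needs_update`) is trivial; everything else is plumbing that Docker, watchtower, and the CI take care of shipping.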
<!--kg-card-end: markdown--><p>The fact that all of this is possible with just some glorified configuration files and one easy-to-throw-together Python script is amazing to me. I really feel like anyone who isn&apos;t on board with self-hosting simple services, and with understanding even the basics of how to put together a full CI/CD system like this, is missing out.</p><p>Finally, there are a good few areas that need to be improved. For starters, I should be able to configure the <code>cron</code> from the config file somehow, but I had horrific trouble trying to get the <code>crontab</code> to respect env vars, follow my <code>PATH</code>, etc. The best solution for this is likely to put it in a <code>systemd</code> service file, or else some scheduling baked into the Python script itself. I also think my usage of <code>configparser</code> could be better. I want to explore the library more so I can use it for other projects in the near future.</p><p>Thank you for reading, and please get in touch if you have any comments, advice, or suggestions. My next post will be about the hassle I had and the effort I put in trying to get this setup working with Namecheap, my original DNS provider. It was <em>significantly</em> more involved than this and ultimately was ridiculous enough that I moved my DNS to Cloudflare!</p>]]></content:encoded></item><item><title><![CDATA[Writing on your Github Contributions Heatmap]]></title><description><![CDATA[<p>I&apos;ve thought for a while now that the Github contributions heatmap is a particularly uninteresting page. 
This thought popped into my head once more after reading a <a href="https://jwiegley.github.io/git-from-the-bottom-up/?ref=blog.randombits.host">fantastic ebook on the fundamentals of Git</a>, so armed with new confidence and knowledge of Git commits, I decided to fake</p>]]></description><link>https://blog.randombits.host/writing-on-your-github-contributions-heatmap/</link><guid isPermaLink="false">62d3d95a3c4bf30001cd20ce</guid><dc:creator><![CDATA[Conor]]></dc:creator><pubDate>Sun, 17 Jul 2022 10:23:09 GMT</pubDate><media:content url="https://blog.randombits.host/content/images/2022/07/Screenshot-from-2022-07-17-10-41-31.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.randombits.host/content/images/2022/07/Screenshot-from-2022-07-17-10-41-31.png" alt="Writing on your Github Contributions Heatmap"><p>I&apos;ve thought for a while now that the Github contributions heatmap is a particularly uninteresting page. This thought popped into my head once more after reading a <a href="https://jwiegley.github.io/git-from-the-bottom-up/?ref=blog.randombits.host">fantastic ebook on the fundamentals of Git</a>, so armed with new confidence and knowledge of Git commits, I decided to fake contributions on my profile in order to spell out words on my heatmap.</p><p>I knew that the contributions chart responded to commits in a Git repo, along with issues, pull requests, and other Github-specific interactions. The commits seemed to be the easiest to fake (although I am a big fan of the relatively new <a href="https://cli.github.com/?ref=blog.randombits.host">Github CLI</a>), so I found where dates are considered in a commit object and tried faking them to see if Github would accept them. The only two places I could see were the author date and the committer date. Faking the committer date was a simple matter of prefixing the <code>git commit</code> command with the env variable <code>GIT_COMMITTER_DATE</code>. 
Changing the author date was even easier as <code>git commit</code> accepts a <code>--date</code> argument! So making a commit for a specific date turned out to be:</p><p><code>$ LC_ALL=C GIT_COMMITTER_DATE=&quot;$(date --date=&apos;01/01/1970 12:00&apos;)&quot; git commit -a -m&quot;01/01/1970 12:00&quot; --no-edit --date &quot;$(date --date=&apos;01/01/1970 12:00&apos;)&quot;</code></p><p>Running this will give you a commit from 01/01/1970, which unsurprisingly is the earliest date you can have a commit on. We&apos;re not stopping there though, even though it is funny how the UI is broken on your profile page after doing that.</p><figure class="kg-card kg-image-card"><img src="https://blog.randombits.host/content/images/2022/07/Peek-2022-07-17-11-04.gif" class="kg-image" alt="Writing on your Github Contributions Heatmap" loading="lazy" width="1136" height="636" srcset="https://blog.randombits.host/content/images/size/w600/2022/07/Peek-2022-07-17-11-04.gif 600w, https://blog.randombits.host/content/images/size/w1000/2022/07/Peek-2022-07-17-11-04.gif 1000w, https://blog.randombits.host/content/images/2022/07/Peek-2022-07-17-11-04.gif 1136w" sizes="(min-width: 720px) 720px"></figure><p>Now with a basis to build on, I decided to package this up into a simple Python script, and add some helper methods around it. Namely, being able to &quot;draw&quot; at a particular <code>(x, y)</code> position on the heatmap.</p><!--kg-card-begin: markdown--><pre><code class="language-python">def get_origin_datetime(year):
    &quot;&quot;&quot;
    Returns the datetime of (0,0) on the contributions heatmap.

    This is horrific code.
    &quot;&quot;&quot;
    d = datetime.datetime(year, 1, 1, 12)

    while d.weekday() != 6:
        d += datetime.timedelta(1)

    return d


def xy_to_datetime(year, x, y):
    return get_origin_datetime(year) + datetime.timedelta((7 * x) + y)
</code></pre>
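As a quick sanity check of that coordinate system, here is how the helpers behave for 2022; the heatmap's origin is the first Sunday of the year since Github renders Sunday as the top row. The two functions are repeated from above so this snippet runs standalone:

```python
import datetime


def get_origin_datetime(year):
    # As above: (0, 0) on the heatmap is the first Sunday of the year.
    d = datetime.datetime(year, 1, 1, 12)
    while d.weekday() != 6:
        d += datetime.timedelta(1)
    return d


def xy_to_datetime(year, x, y):
    # Columns (x) are weeks, rows (y) are days within the week.
    return get_origin_datetime(year) + datetime.timedelta((7 * x) + y)


print(get_origin_datetime(2022).date())   # 2022-01-02, the first Sunday of 2022
print(xy_to_datetime(2022, 1, 3).date())  # one week and three days later: 2022-01-12
```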
<!--kg-card-end: markdown--><p>Then I used a handy <a href="http://dotmatrixtool.com/?ref=blog.randombits.host">Dot Matrix Tool</a> to create a font which was 3x5 pixels, and translated each character into a simple template:</p><!--kg-card-begin: markdown--><pre><code>    &apos;s&apos;: [
        (0, 0), (1, 0), (2, 0),
        (0, 1),
        (0, 2), (1, 2), (2, 2),
                        (2, 3),
        (0, 4), (1, 4), (2, 4),
    ],
</code></pre>
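Before burning commits onto your real profile, it helps to preview a template as ASCII art. A small hypothetical helper for that (the `preview` function is my addition, not part of the gist; the `LETTERS` dict just holds the 's' template from above):

```python
# Render a 3x5 letter template as ASCII art so you can eyeball it
# before making any commits with it.
LETTERS = {
    's': [
        (0, 0), (1, 0), (2, 0),
        (0, 1),
        (0, 2), (1, 2), (2, 2),
                        (2, 3),
        (0, 4), (1, 4), (2, 4),
    ],
}


def preview(letter, width=3, height=5):
    coords = set(LETTERS[letter])
    return "\n".join(
        "".join("#" if (x, y) in coords else "." for x in range(width))
        for y in range(height)
    )


print(preview('s'))
```

For 's' this prints an S shape: `###`, `#..`, `###`, `..#`, `###` on five lines.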
<!--kg-card-end: markdown--><p>With this done, there was only one method left to implement and we were good to go:</p><!--kg-card-begin: markdown--><pre><code>def draw_letter(year, letter, letter_number):
    &quot;&quot;&quot;
    The letter_number is to calculate the offset to apply in the grid.
    &quot;&quot;&quot;
    x_offset = letter_number * (LETTER_WIDTH + INTER_LETTER_SPACE_WIDTH)
    # This centres the letter vertically
    y_offset = 1

    for coord in LETTERS[letter]:
        commit_on_xy(year, x_offset + coord[0], y_offset + coord[1])
</code></pre>
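The one piece not shown above is `commit_on_xy`. A rough sketch of how it might wrap the shell command from earlier; the split into a command-builder plus a `subprocess` call, and the `--allow-empty` flag (so the sketch works without staged changes, unlike the post's `-a`), are my choices here — see the linked gist for the real implementation:

```python
import datetime
import os
import subprocess


def get_origin_datetime(year):
    # First Sunday of the year = heatmap cell (0, 0), as defined earlier.
    d = datetime.datetime(year, 1, 1, 12)
    while d.weekday() != 6:
        d += datetime.timedelta(1)
    return d


def build_commit_command(year, x, y):
    """Build the `git commit` invocation for heatmap cell (x, y): the author
    date comes from --date, the committer date from GIT_COMMITTER_DATE."""
    when = (get_origin_datetime(year) + datetime.timedelta((7 * x) + y)).strftime(
        "%Y-%m-%dT%H:%M:%S"
    )
    cmd = ["git", "commit", "--allow-empty", "--no-edit", "-m", when, "--date", when]
    extra_env = {"GIT_COMMITTER_DATE": when}
    return cmd, extra_env


def commit_on_xy(year, x, y):
    cmd, extra_env = build_commit_command(year, x, y)
    subprocess.run(cmd, env={**os.environ, **extra_env}, check=True)
```

Run inside any git repo, `commit_on_xy(2022, 1, 3)` would create an empty commit dated 2022-01-12, i.e. one cell at week 1, row 3 of the 2022 heatmap.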
<!--kg-card-end: markdown--><p>To view this more sensibly, check out this <a href="https://gist.github.com/conor-f/93221bb74fb522d0e98029c48417eac0?ref=blog.randombits.host">gist</a>. The only improvement I&apos;m thinking about at the minute is to include an opacity/darkness option on each pixel that is drawn to add a bit more detail to the lettering. But for a few hours on a Saturday morning, I&apos;m happy with the result :)</p>]]></content:encoded></item><item><title><![CDATA[Infrastructure-First Development]]></title><description><![CDATA[<p>Unless you&apos;re a very particular kind of person, project management and system infrastructure are the last things on your mind when you undertake a new side project or startup. For a side project, the fun is in taking the motivation from having an idea, prototyping an MVP as</p>]]></description><link>https://blog.randombits.host/infrastructure-first-development/</link><guid isPermaLink="false">6208f86c31a6d600011a47a9</guid><category><![CDATA[Side Project]]></category><category><![CDATA[Self Hosted]]></category><category><![CDATA[Quick Tip]]></category><dc:creator><![CDATA[Conor]]></dc:creator><pubDate>Sun, 13 Feb 2022 13:37:56 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1519389950473-47ba0277781c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDN8fGluZnJhc3RydWN0dXJlJTIwZGV2ZWxvcG1lbnR8ZW58MHx8fHwxNjQ0NzU1MDk4&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1519389950473-47ba0277781c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDN8fGluZnJhc3RydWN0dXJlJTIwZGV2ZWxvcG1lbnR8ZW58MHx8fHwxNjQ0NzU1MDk4&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Infrastructure-First Development"><p>Unless you&apos;re a very particular kind of person, project management and system infrastructure are the last things on your mind when you undertake a new side project 
or startup. For a side project, the fun is in taking the motivation from having an idea, prototyping an MVP as quickly as possible, and following the shiny thoughts as much as you can. In a startup environment, the impetus for change is now customer feedback/demand, a pivoting idea, or features that were previously unaccounted for. In both cases, however, the priority is speed. Speed of development and the ability to dynamically adjust is what will make your side project fulfilling to work on (or even lead to you actually feeling you completed it!), or let your startup gain an edge over your competitors.</p><p>Most people will actively agree with this as a concept, but will then barrel straight into working as hard as possible, leading to frustration and burnout. Inevitably, when you couple rapidly changing goals with trying to keep pace with them, you&apos;re in for a bad time due to the simple asymmetry of the work. It takes 30 seconds to point out an improvement that could take 3 days to implement! Once you think about this asymmetry, how frequently it occurs, and the imbalances it causes, you&apos;ll see it everywhere and will try to temper it with a renewed desire for planning and prioritization. Maybe you&apos;ll even find some of those dreaded &#xA0;<em>a g i l e &#xA0;t e c h n i q u e s</em> useful...</p><p>I&apos;m not there yet, don&apos;t worry. I&apos;m not going to turn into some agile-scrum-coach-master; all I want to do is preach the virtue of having an infrastructure-first work focus. When rapidly changing goals are made the top priority, things get out of hand very quickly, especially if there&apos;s no reconciliation period to deal with the issues. Tech debt racks up, work is frequently duplicated as quickly developed, non-refactored code is hard to reuse, and ultimately you end up with <em>something</em> that, while potentially fulfilling the goals, is near-impossible to sustainably work with. 
I suggest treating your infrastructure management almost as a pre-requisite of doing the work.</p><p>Given all the issues I pointed out above, the quickest antidote I can see is optimizing your common tasks. You will have to engage with rapidly changing goals if you&apos;re doing interesting things; it&apos;s almost a fact of doing them! You can, however, in many cases predict the specific infrastructure you&apos;re going to be working with, as you have a finite set of primary skills. For me (currently) they revolve around Vue for front-end, Python for back-end, and a mix of Github Actions, Docker, AWS Lambda, SQL-like databases, and some more bits for the glue. I have a set of infrastructure tooling built around these technologies that will let me spin up an end-to-end setup of a Vue static site calling a REST API that interacts with a MySQL database or reaches out to some serverless methods, all running on a custom domain with prod/test environments, in less than half a day. That&apos;s just not possible for someone sitting down to manually do all this work, and then, worse again, what happens when they want to add a few new endpoints!</p><p>This realization of the outsized impact of infrastructure tooling is new to me. When I get the chance I will push all of these to Github to share with everyone, but for now a list will have to suffice! I suggest as a minimum the following:</p><ul><li>A Github Action to publish a Python package to PyPI.</li><li>A Github Action to publish a Docker image to a hub.</li><li>A Github Action to publish a serverless method live.</li><li>A Github Action to run tests on PRs.</li><li>A few <a href="https://github.com/cookiecutter/cookiecutter?ref=blog.randombits.host">cookiecutter</a> templates to package combinations of these together with dummy code to get up and running with development (e.g. 
a Makefile that has rules for running tests and building a virtualenv, a <code>src</code> and <code>test</code> directory, and a basic <code>main.py</code> that will give a simple HTTP 200 return code for a serverless method cookiecutter).</li><li>A logging framework that incorporates invocation IDs and ideally logs to some central infrastructure. Invocation IDs are simply an ID that you reuse throughout one execution of the code path. This allows you to grep your logs for one ID and see all logs relevant to just that execution, without guessing which logs are related from a group of random logs!</li><li>And as an extra, a mindset change. If there are tasks that you foresee being time-consuming or frequently disruptive (sending notification texts/emails, creating new users, etc.) then try to find a way to automate them. Having the ability to start a long-running command and then just pipe it to a generic notifier is incredibly useful!<br></li></ul>]]></content:encoded></item><item><title><![CDATA[Why You Should Walk In The Bus Lane]]></title><description><![CDATA[<p>I&apos;m a huge advocate for public transport, and as is typical in <a href="https://europa.eu/eurobarometer/surveys/detail/1110?ref=blog.randombits.host">Europe</a>, my attitude towards private cars in an urban environment is on the negative side. My opinion, however, may be much more negative than most! 
To preface this, I have a driver&apos;s license, and</p>]]></description><link>https://blog.randombits.host/why-you-should-walk-in-the-bus-lane/</link><guid isPermaLink="false">61e6ad402d5db10001f734d6</guid><category><![CDATA[Light Hearted]]></category><category><![CDATA[Non-Tech]]></category><category><![CDATA[Cycling]]></category><category><![CDATA[Socio-political Hot Takes]]></category><dc:creator><![CDATA[Conor]]></dc:creator><pubDate>Wed, 26 Jan 2022 06:57:53 GMT</pubDate><media:content url="https://images.unsplash.com/9/barcelona-traffic.jpg?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDQ1fHxidXMlMjBsYW5lfGVufDB8fHx8MTY0MjUwNzM2Ng&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/9/barcelona-traffic.jpg?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDQ1fHxidXMlMjBsYW5lfGVufDB8fHx8MTY0MjUwNzM2Ng&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Why You Should Walk In The Bus Lane"><p>I&apos;m a huge advocate for public transport, and as is typical in <a href="https://europa.eu/eurobarometer/surveys/detail/1110?ref=blog.randombits.host">Europe</a>, my attitude towards private cars in an urban environment is on the negative side. My opinion, however, may be much more negative than most! To preface this, I have a driver&apos;s license, and I do see the need for some private car ownership and use within a city (for example, people with mobility issues or work vehicles), but cities almost universally have given far too much importance to the private car and not half enough to alternative forms of transport like cycling, walking, and public transit.</p><p>I&apos;ve always believed that people will do the easiest thing over the &quot;right&quot; thing if there is any form of friction to doing the &quot;right&quot; thing<sup>[1]</sup>. 
In this vein, governments posit that they want to increase use of these alternative forms of transport, but haven&apos;t concretely made them easier to use than private cars, and so all their &quot;efforts&quot; end up having unsatisfactory results. I think the problem is two-fold. On the one hand, the &quot;effort&quot; being put in really isn&apos;t substantial enough to actually make the difference expected; on the other, the investment is just in the wrong place. An example of both of these in action was the boom in cycling during the COVID-19 pandemic. As widely written about, cities all around Europe <a href="https://www.bbc.com/news/world-europe-54353914?ref=blog.randombits.host">allocated extra money towards cycling infrastructure</a>, and there were so many people buying bikes that <a href="https://www.theguardian.com/world/2020/jun/09/we-sold-eight-bikes-in-20-minutes-will-the-cycling-boom-last?ref=blog.randombits.host">shops just couldn&apos;t keep up</a>. While great on the surface, what has transpired is that the extra money and infrastructure were temporary, and all the new bike owners are giving way to the danger they are put in by poor infrastructure and reverting to previous habits. People will do the easiest thing, and in this case, that is the safest.</p><p>The primary reason that roads are so dangerous for cyclists is private car use coupled with this lack of cyclist-first infrastructure (you only need to look at <a href="https://joyride.city/blog/amsterdam-tips-during-a-bike-boom/?ref=blog.randombits.host">Amsterdam&apos;s focus on infrastructure</a> to see how safety should be done at that level, but how and ever). With the assumption that people cannot force the development of infrastructure from an unwilling government, what can people do? In my opinion, people need to assert their ownership of their city. 
A city-space is not, and should not be, designed for the speed of cars over the safety and comfort of individuals. One of the few examples of <a href="https://en.wikipedia.org/wiki/Shared_space?ref=blog.randombits.host">shared space</a> is the common bus lane, meant to accommodate buses, taxis, and cyclists in the one lane of road. Bus lanes are a huge benefit to cyclists over the alternative of fighting with all traffic, but they are constantly abused by entitled private car drivers to skip traffic, or by design in some cases where they are legally allowed to be used by private cars at certain times of day! Our cities would all have less noise and air pollution, be orders of magnitude safer, and encourage small businesses and green spaces all over if the private car were removed from them. The city streets would rightfully prioritize people instead of cars, and this is the reason I think people should walk in bus lanes (under certain conditions!).</p><p>You have more of a right to walk where you like in the city than a private car does. If you walk in the bus lane when there are no buses, bikes, or taxis around, then you are asserting to everyone who passes that this space could be used by people and it could be designed in a new way to maximize human comfort and enjoyment. It also has the added benefit of stopping entitled drivers from deciding they deserve to skip everyone in the traffic, which I&apos;m sure everyone appreciates!</p><p>[1] I looked for a source for this as I expected there to have been studies done on this phenomenon, but I didn&apos;t find anything. Let me know if you are aware of anything in this area.</p>]]></content:encoded></item><item><title><![CDATA[Sane Vim Configs on Remote Instances]]></title><description><![CDATA[<p>Vim is my editor of choice. Alongside the shell, it&apos;s my full IDE and I don&apos;t see much reason to change that. I can build all scaffolds and supports necessary to be as proficient (if not faster!) 
than people who rely solely on the capabilities of</p>]]></description><link>https://blog.randombits.host/sane-vim-configs-on-remote-instances/</link><guid isPermaLink="false">61d583c22d5db10001f7346f</guid><category><![CDATA[Vim]]></category><category><![CDATA[Side Project]]></category><category><![CDATA[Quick Tip]]></category><dc:creator><![CDATA[Conor]]></dc:creator><pubDate>Wed, 05 Jan 2022 08:08:00 GMT</pubDate><media:content url="https://blog.randombits.host/content/images/2022/01/Screenshot-from-2022-01-05-11-45-01.png" medium="image"/><content:encoded><![CDATA[<img src="https://blog.randombits.host/content/images/2022/01/Screenshot-from-2022-01-05-11-45-01.png" alt="Sane Vim Configs on Remote Instances"><p>Vim is my editor of choice. Alongside the shell, it&apos;s my full IDE and I don&apos;t see much reason to change that. I can build all scaffolds and supports necessary to be as proficient (if not faster!) than people who rely solely on the capabilities of their graphical IDE. Not only does building these types of supports make me learn more about the nitty-gritty of any project, like the way different packages are built/deployed or how imports are managed, but once it&apos;s done for one style of project, I usually end up with a readily transferable add-in to all similar projects! Whenever I &quot;have&quot; to use an IDE (I&apos;m still looking for a way to sanely work with Android, for example!), the only plugin or setting I need to find before I can do anything meaningful is the <a href="https://plugins.jetbrains.com/plugin/164-ideavim?ref=blog.randombits.host">relevant</a> <a href="https://marketplace.visualstudio.com/items?itemName=vscodevim.vim&amp;ref=blog.randombits.host">Vim</a> <a href="https://github.com/XVimProject/XVim2?ref=blog.randombits.host">plugin</a> for that IDE. 
Really says a lot that there are so many, doesn&apos;t it....</p><p>For that reason, I was getting really frustrated when I was connecting to some Raspberry Pis I was doing some side projects on (blogs to come!), or when I had to SSH into a fresh AWS instance and I couldn&apos;t use my own <code>.vimrc</code>! I decided I needed to address this, so introducing <sub>some of the most shoddily thrown together work I have ever done...</sub> <a href="https://github.com/conor-f/vim-stuff?ref=blog.randombits.host">vim-stuff</a>! The name, along with everything else, is clearly still a work in progress!</p><p>This will essentially let you set up a nice set of defaults for a Vim config quickly on any machine. All you have to do is clone and run <code>make install</code>. This will give you <a href="https://github.com/jacoborus/tender.vim?ref=blog.randombits.host">a nice colour scheme</a> + <a href="https://github.com/sheerun/vim-polyglot?ref=blog.randombits.host">syntax highlighting</a>, <a href="https://github.com/preservim/nerdtree?ref=blog.randombits.host">NERDTree</a>, <a href="https://github.com/pechorin/any-jump.vim?ref=blog.randombits.host">AnyJump</a> (a personal underdog favourite), and a good few more! Give it a try if you need to get a decent Vim install quickly and easily on the go :)</p>]]></content:encoded></item><item><title><![CDATA[Robot Olympics]]></title><description><![CDATA[<p>I&apos;m not a staunch nationalist, nor am I an athlete of any description, yet I find myself in the same quadrennial boat as most others - the Olympics is an entertaining event to watch. 
Good-spirited competition drives people to push themselves to the extremes and work together to</p>]]></description><link>https://blog.randombits.host/robot-olympics/</link><guid isPermaLink="false">6182a608fa38ec000157388d</guid><category><![CDATA[Light Hearted]]></category><category><![CDATA[Non-Tech]]></category><dc:creator><![CDATA[Conor]]></dc:creator><pubDate>Wed, 03 Nov 2021 19:28:00 GMT</pubDate><media:content url="https://images.unsplash.com/photo-1501514799070-290ae1c889fe?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDh8fGZsYWdzfGVufDB8fHx8MTYzNTk1MjI2NQ&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" medium="image"/><content:encoded><![CDATA[<img src="https://images.unsplash.com/photo-1501514799070-290ae1c889fe?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDh8fGZsYWdzfGVufDB8fHx8MTYzNTk1MjI2NQ&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" alt="Robot Olympics"><p>I&apos;m not a staunch nationalist, nor am I an athlete of any description, yet I find myself in the same quadrennial boat as most others - the Olympics is an entertaining event to watch. Good-spirited competition drives people to push themselves to the extremes and work together to achieve their goals. The athlete is only the tip of the iceberg, backed by coaches, nutritionists, therapists, and I&apos;m sure more teams of supports. Together, they dedicate themselves to one singular goal, and I want to see that happen for technology. 
<a href="https://www.inverse.com/science/olympic-world-records-broken-2021-science?ref=blog.randombits.host">The rate of improvement in human performance</a> backing these records is diminishing, and technology improvements, be it in <a href="https://www.businessinsider.com/nike-runners-trounce-olympics-competitors-super-spike-shoe-technology-2021-8?r=US&amp;IR=T&amp;ref=blog.randombits.host">shoes</a>, <a href="https://apnews.com/article/2020-tokyo-olympics-elaine-thompson-track-and-field-dalilah-muhammad-technology-b9e92bd41a7dd218d942abccc2318079?ref=blog.randombits.host">track</a>, or anything in between, are increasingly the reason records are being shaved down. So why not embrace this? What would that even look like?</p><p>To foster the same level of excitement and dedication as the Olympics, let&apos;s mirror the structure of the Olympics as closely as possible and replicate as many events as we can with robot competitors. This will allow people who aren&apos;t interested in purely technical feats of engineering to recognize the level of performance on display by comparison to the main events. The primary entry requirement is that the robot must be able to autonomously perform the event to a standard above the human record. In short, let&apos;s have country teams make robots compete in the Olympic events!</p><p>In keeping with the theme of the Olympics, and to allow the host nation to have another event to try to recoup the enormous (and wildly varying) <a href="https://en.wikipedia.org/wiki/Cost_of_the_Olympic_Games?ref=blog.randombits.host#Table">costs</a>, let&apos;s host our new robot Olympics in the same locations as the Summer Olympics, but a few months after the main event. There will also have to be some regulation of the robots in order to keep the event running smoothly. The competitors must be able to negotiate their way to and from &quot;neutral&quot; zones autonomously. 
We want to get rid of the idea of someone wheeling out a huge, static trebuchet right up to the javelin line and saying it&apos;s good to go!</p><p>On the topic of trebuchets, there&apos;s one important caveat to these Robot Olympics. Repeatability. A competitor&apos;s skill should be somewhat consistent, and to that end, instead of it being a simple &quot;what&apos;s the best you can do?&quot;, the result a competitor achieves is the simple average of their attempts. In events where there&apos;s usually only one result considered (e.g. did you clear the high jump bar or not? Did you win the 100m race or not?) the success must be repeated (e.g. for these examples, you must clear the high jump bar on 3/5 attempts, or the 100m is raced three times and each result is the average of them). After all, nobody is interested in how high you can jump by strapping explosives underneath a robot and shooting it out of the stadium if it can&apos;t do it again!</p><p>Finally, there shouldn&apos;t be any explosive/inhuman elements to a competitor. They can&apos;t be 10m tall doing the long jump or 3,000 kg doing the hammer throw! Similarly, using explosives/combustion/similar is forbidden. We&apos;re focusing on kinetic movement here. The robots don&apos;t need to be humanoid, bipedal, or anything like <a href="https://www.youtube.com/watch?v=tF4DML7FIWk&amp;ref=blog.randombits.host">Boston Dynamics&apos; newest dystopian concept</a>, just:</p><ol><li>Be capable of directing itself to the event start point, fulfilling the event on cue, and returning to where it was released from.</li><li>Be able to perform the event consistently above the peak human level without destroying itself.</li><li>Not use any form of combustion.</li></ol><p>I really want this to be a legitimate event and I don&apos;t see much reason why something better fleshed out couldn&apos;t be a reality in the future. 
I would be more than willing to be on the steering committee if someone wants to give me a warm introduction to the Olympic council. Naturally before I do this, I&apos;ll want to hear feedback, thoughts and further restrictions/improvements anyone has so leave a comment below or reach out to me by email so we can make this a reality!</p>]]></content:encoded></item><item><title><![CDATA[Syncing Mobile Photos with Photoprism]]></title><description><![CDATA[<figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://demo-cdn.photoprism.org/static/img/logo-avatar.svg" class="kg-image" alt="Photoprism Logo" loading="lazy"><figcaption>Photoprism Logo</figcaption></figure><p><a href="https://demo.photoprism.org/?ref=blog.randombits.host">Photoprism</a> is the self-hosted photo/video library that does it all for me. It has labeling of images so you can search by picture content, it has calendar and map features for more natural finding of the pictures you&apos;re trying to find and it lets you</p>]]></description><link>https://blog.randombits.host/syncing-mobile-photos-with-photoprism/</link><guid isPermaLink="false">616e707dfa38ec00015737d3</guid><category><![CDATA[Quick Tip]]></category><category><![CDATA[Self Hosted]]></category><dc:creator><![CDATA[Conor]]></dc:creator><pubDate>Tue, 19 Oct 2021 21:35:05 GMT</pubDate><content:encoded><![CDATA[<figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://demo-cdn.photoprism.org/static/img/logo-avatar.svg" class="kg-image" alt="Photoprism Logo" loading="lazy"><figcaption>Photoprism Logo</figcaption></figure><p><a href="https://demo.photoprism.org/?ref=blog.randombits.host">Photoprism</a> is the self-hosted photo/video library that does it all for me. It has labeling of images so you can search by picture content, it has calendar and map features for more natural finding of the pictures you&apos;re trying to find and it lets you have public/private content as you choose. 
To be clear, I want us to rid ourselves of our reliance on huge mega-corps. I don&apos;t think that necessarily implies some form of <a href="https://en.wikipedia.org/wiki/Neo-Luddism?ref=blog.randombits.host">neo-luddism</a> or that we should have to deal with worse products. We get what we create and support, and this project is definitely worthy of both. Pictures are some of the most personal things we routinely create, so we should try to protect these moments from prying eyes. Moreover, with <a href="https://9to5google.com/2021/06/18/google-photos-storage-guide/?ref=blog.randombits.host">Google Photos continuing to degrade their product</a> by reneging on their promise of free unlimited storage and by silently <a href="https://www.theverge.com/2021/5/24/22451607/google-photos-high-quality-storage-saver-tool-free-space-blurry-screenshots?ref=blog.randombits.host">degrading your picture quality</a>, an alternative should be welcomed!</p><p>Self-hosting Photoprism has been quite a smooth process relative to the size and scale of its features. The one thing I found difficult to get working, however, was one of the most crucial: syncing with my Android phone. I mostly take 35mm analog pictures (because I&apos;m insufferable), but I appreciate being able to scroll back through my screenshots of conversations with friends and to take a quick picture on the go. 
However, I found Photoprism&apos;s <a href="https://docs.photoprism.org/user-guide/sync/mobile-devices/?ref=blog.randombits.host">guide</a> lacking for my use case of having my phone automatically upload new content to Photoprism and have it imported ready for viewing, so I tried a few different apps and settled on <a href="https://play.google.com/store/apps/details?id=dk.tacit.android.foldersync.lite&amp;hl=en_IE&amp;gl=US&amp;ref=blog.randombits.host">FolderSync</a>.</p><p>FolderSync allows you to sync files between different locations on your phone, or to use <a href="https://en.wikipedia.org/wiki/WebDAV?ref=blog.randombits.host">WebDAV</a> to sync with an external location. Photoprism supports WebDAV, so this should be a cinch! I decided to set up one folder on my device that would one-way upload to Photoprism, and then locally sync files from the other media folders on my phone to that folder. This might be a needless step, but I think it&apos;s useful for supporting multiple ingestion locations and for not indiscriminately syncing all media from your phone to a potentially public library! 
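</p><p>Before wiring up FolderSync, it can be worth confirming that your Photoprism WebDAV endpoint accepts your credentials at all (Photoprism typically serves WebDAV under <code>/originals/</code>). As a rough sketch using only the Python standard library (the hostname and login below are placeholders; substitute your own):</p><pre><code class="language-python">import base64
import urllib.request

def propfind_request(base_url, user, password):
    # Build (but don't send) a WebDAV PROPFIND request with HTTP Basic auth.
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(base_url, method="PROPFIND")
    req.add_header("Authorization", f"Basic {token}")
    req.add_header("Depth", "1")  # only list the collection's direct children
    return req

# To actually probe the server, open the request and check the status:
# urllib.request.urlopen(propfind_request(
#     "https://photoprism.example.com/originals/", "admin", "secret"))
</code></pre><p>A 207 Multi-Status response means WebDAV is reachable and the login works; an authentication error means it&apos;s your credentials, not FolderSync, that need fixing.</p><p>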
From screenshots, my FolderSync setup looks something like this for SD Card Sync:</p><figure class="kg-card kg-gallery-card kg-width-wide kg-card-hascaption"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://blog.randombits.host/content/images/2021/10/Screenshot_20211019-161644_FolderSync.jpg" width="1080" height="2400" loading="lazy" alt srcset="https://blog.randombits.host/content/images/size/w600/2021/10/Screenshot_20211019-161644_FolderSync.jpg 600w, https://blog.randombits.host/content/images/size/w1000/2021/10/Screenshot_20211019-161644_FolderSync.jpg 1000w, https://blog.randombits.host/content/images/2021/10/Screenshot_20211019-161644_FolderSync.jpg 1080w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://blog.randombits.host/content/images/2021/10/Screenshot_20211019-161653_FolderSync.jpg" width="1080" height="2400" loading="lazy" alt srcset="https://blog.randombits.host/content/images/size/w600/2021/10/Screenshot_20211019-161653_FolderSync.jpg 600w, https://blog.randombits.host/content/images/size/w1000/2021/10/Screenshot_20211019-161653_FolderSync.jpg 1000w, https://blog.randombits.host/content/images/2021/10/Screenshot_20211019-161653_FolderSync.jpg 1080w" sizes="(min-width: 720px) 720px"></div></div><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://blog.randombits.host/content/images/2021/10/Screenshot_20211019-161703_FolderSync.jpg" width="1080" height="2400" loading="lazy" alt srcset="https://blog.randombits.host/content/images/size/w600/2021/10/Screenshot_20211019-161703_FolderSync.jpg 600w, https://blog.randombits.host/content/images/size/w1000/2021/10/Screenshot_20211019-161703_FolderSync.jpg 1000w, https://blog.randombits.host/content/images/2021/10/Screenshot_20211019-161703_FolderSync.jpg 1080w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img 
src="https://blog.randombits.host/content/images/2021/10/Screenshot_20211019-161714_FolderSync.jpg" width="1080" height="2400" loading="lazy" alt srcset="https://blog.randombits.host/content/images/size/w600/2021/10/Screenshot_20211019-161714_FolderSync.jpg 600w, https://blog.randombits.host/content/images/size/w1000/2021/10/Screenshot_20211019-161714_FolderSync.jpg 1000w, https://blog.randombits.host/content/images/2021/10/Screenshot_20211019-161714_FolderSync.jpg 1080w" sizes="(min-width: 720px) 720px"></div></div></div><figcaption>SD Sync Settings Screenshots&#xA0;</figcaption></figure><p>And for WebDAV we have the following:</p><figure class="kg-card kg-gallery-card kg-width-wide kg-card-hascaption"><div class="kg-gallery-container"><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://blog.randombits.host/content/images/2021/10/Screenshot_20211019-161736_FolderSync.jpg" width="1080" height="2400" loading="lazy" alt srcset="https://blog.randombits.host/content/images/size/w600/2021/10/Screenshot_20211019-161736_FolderSync.jpg 600w, https://blog.randombits.host/content/images/size/w1000/2021/10/Screenshot_20211019-161736_FolderSync.jpg 1000w, https://blog.randombits.host/content/images/2021/10/Screenshot_20211019-161736_FolderSync.jpg 1080w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://blog.randombits.host/content/images/2021/10/Screenshot_20211019-161743_FolderSync.jpg" width="1080" height="2400" loading="lazy" alt srcset="https://blog.randombits.host/content/images/size/w600/2021/10/Screenshot_20211019-161743_FolderSync.jpg 600w, https://blog.randombits.host/content/images/size/w1000/2021/10/Screenshot_20211019-161743_FolderSync.jpg 1000w, https://blog.randombits.host/content/images/2021/10/Screenshot_20211019-161743_FolderSync.jpg 1080w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img 
src="https://blog.randombits.host/content/images/2021/10/Screenshot_20211019-161749_FolderSync.jpg" width="1080" height="2400" loading="lazy" alt srcset="https://blog.randombits.host/content/images/size/w600/2021/10/Screenshot_20211019-161749_FolderSync.jpg 600w, https://blog.randombits.host/content/images/size/w1000/2021/10/Screenshot_20211019-161749_FolderSync.jpg 1000w, https://blog.randombits.host/content/images/2021/10/Screenshot_20211019-161749_FolderSync.jpg 1080w" sizes="(min-width: 720px) 720px"></div></div><div class="kg-gallery-row"><div class="kg-gallery-image"><img src="https://blog.randombits.host/content/images/2021/10/Screenshot_20211019-161755_FolderSync.jpg" width="1080" height="2400" loading="lazy" alt srcset="https://blog.randombits.host/content/images/size/w600/2021/10/Screenshot_20211019-161755_FolderSync.jpg 600w, https://blog.randombits.host/content/images/size/w1000/2021/10/Screenshot_20211019-161755_FolderSync.jpg 1000w, https://blog.randombits.host/content/images/2021/10/Screenshot_20211019-161755_FolderSync.jpg 1080w" sizes="(min-width: 720px) 720px"></div><div class="kg-gallery-image"><img src="https://blog.randombits.host/content/images/2021/10/Screenshot_20211019-161851_FolderSync.jpg" width="1080" height="2400" loading="lazy" alt srcset="https://blog.randombits.host/content/images/size/w600/2021/10/Screenshot_20211019-161851_FolderSync.jpg 600w, https://blog.randombits.host/content/images/size/w1000/2021/10/Screenshot_20211019-161851_FolderSync.jpg 1000w, https://blog.randombits.host/content/images/2021/10/Screenshot_20211019-161851_FolderSync.jpg 1080w" sizes="(min-width: 720px) 720px"></div></div></div><figcaption>WebDAV FolderSync Settings Screenshots</figcaption></figure><p>Using these, you should be able to set up automatic photo syncing from your Android phone to Photoprism! 
If you have any issues, refer back to the <a href="https://docs.photoprism.org/user-guide/sync/mobile-devices/?ref=blog.randombits.host">Photoprism guide</a> on this topic, as it&apos;s definitely more up to date, thorough, and useful than this short post. However, I strongly recommend FolderSync over their suggestions of PhotoSync and SMBSync2!</p>]]></content:encoded></item><item><title><![CDATA[Publishing a Package to Pip]]></title><description><![CDATA[<figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.randombits.host/content/images/2021/10/image.png" class="kg-image" alt loading="lazy" width="248" height="186"><figcaption>PyPi - The Python Package Index</figcaption></figure><p>I love Python. It&apos;s easy to read, even easier to write, and best of all, the easiest language (I&apos;ve used) to interact with other people&apos;s code. I spent too long being only on one end of that equation</p>]]></description><link>https://blog.randombits.host/publishing-a-package-to-pip/</link><guid isPermaLink="false">6165e485b1d5c50001eba661</guid><dc:creator><![CDATA[Conor]]></dc:creator><pubDate>Tue, 12 Oct 2021 21:14:33 GMT</pubDate><content:encoded><![CDATA[<figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://blog.randombits.host/content/images/2021/10/image.png" class="kg-image" alt loading="lazy" width="248" height="186"><figcaption>PyPi - The Python Package Index</figcaption></figure><p>I love Python. It&apos;s easy to read, even easier to write, and best of all, the easiest language (I&apos;ve used) to interact with other people&apos;s code. I spent too long being only on one end of that equation though, consuming code other people had written rather than publishing the things I was making so the cycle could continue. It felt like too much hassle to get started with, and for no reason: who would be using the libraries I write anyways? 
After finally learning how to create and publish packages to <a href="https://pypi.org/?ref=blog.randombits.host">PyPi</a>, I&apos;m glad to report it&apos;s super easy and definitely worthwhile. Even if it&apos;s just so you can easily <code>import</code> code you wrote for a previous project without going into the horrors of relative imports, I feel it&apos;s worth the five-minute investment to get up and running with it.</p><p>First of all, you need to <a href="https://pypi.org/account/register/?ref=blog.randombits.host">make a PyPi account</a> and then add your credentials to <code>~/.pypirc</code>. Your file should look something like this:</p><pre><code class="language-toml">[distutils]
    index-servers = pypi
    
[pypi]
    repository: https://upload.pypi.org/legacy/
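# Tip: rather than storing your real password here, you can create an API
# token on PyPI and use these values instead:
#   username: __token__
#   password: pypi-&lt;your token&gt;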
    username: &lt;Your PyPi username&gt;
    password: &lt;Your PyPi password&gt;</code></pre><p>Realistically, you should set up API tokens, but I am just trying to show the bare minimum to get up and running with PyPi. I am in no way an authority on this and most often don&apos;t even follow best practices.</p><p>Next, take a simple project or tool you&apos;ve been working on and impose some <em>structure.</em> Personally, I am a fan of simple tools that do things I understand. Consequently, I end up using tools that started falling out of fashion in the 80s. I find <code>Makefile</code> to be just the right level of build tool for me. I understand the file at a glance, I can hack it to get what I want done, and it <strong>is</strong> going to be the topic of another blog post, so I&apos;m not going to dwell on it too much here. Suffice it to say, you should add a <code>Makefile</code> to the root of your repository/project with these key rules:</p><pre><code class="language-make">PYTHON=python3.8

ENV_DIR=.env_$(PYTHON)
IN_ENV=. $(ENV_DIR)/bin/activate &amp;&amp;

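# NB: `make upload_pip` below builds a source distribution and uploads it to
# PyPI with twine. Invoking setup.py directly (as the sdist/bdist_wheel
# recipes here do) is deprecated upstream; `python -m build` is the modern
# equivalent.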
upload_pip: build_dist
    twine upload --repository pypi dist/*

build:
    $(IN_ENV) $(PYTHON) -m pip install --editable .
    rm -fr dist/
    $(IN_ENV) $(PYTHON) setup.py sdist bdist_wheel

build_dist:
    rm -fr dist/
    $(IN_ENV) $(PYTHON) setup.py sdist

setup:
    $(PYTHON) -m pip install --upgrade virtualenv
    $(PYTHON) -m virtualenv -p $(PYTHON) $(ENV_DIR)
    $(IN_ENV) $(PYTHON) -m pip install --upgrade -r requirements.txt
    $(IN_ENV) $(PYTHON) -m pip install --editable .</code></pre><p>This will let you run <code>make upload_pip</code> and magically have your package arrive on PyPi once you&apos;ve installed twine (<code>python3 -m pip install twine</code>). But wait, what is it going to upload? That&apos;s defined in your <code>setup.py</code>:</p><pre><code class="language-python">from setuptools import (
    find_packages,
    setup
)

INSTALL_REQUIRES = []

setup(
    name=&apos;my-first-pip-package-name&apos;,
    description=&apos;This is my first Pip package!&apos;,
    version=&apos;0.0.1&apos;,
    url=&apos;https://github.com/link/to/your/repo&apos;,
    python_requires=&apos;&gt;=3.6&apos;,
    packages=find_packages(&apos;src&apos;),
    package_dir={&apos;&apos;: &apos;src&apos;},
    install_requires=INSTALL_REQUIRES,
    entry_points={
        &apos;console_scripts&apos;: []
    }
)</code></pre><p>This should be pretty self-evident as to what goes where, but examples are the best way to grok something, and here are two <a href="https://github.com/conor-f/timewarp?ref=blog.randombits.host">personal</a> <a href="https://github.com/conor-f/spotibar?ref=blog.randombits.host">repos</a> where I use these structures for <code>Makefile</code> and <code>setup.py</code> to publish Pip packages.</p><p>I know this is quite a low-quality introduction to making a Pip package, so please leave me feedback. I want to improve my writing, and hearing responses is the best way I can get there!</p>]]></content:encoded></item><item><title><![CDATA[Auto-Updating Docker Containers]]></title><description><![CDATA[<p>Docker is the biggest advance to production software engineering in the past decade. If you&apos;re like me though, you slept on it this entire time and are now feeling like you&apos;re too far behind to catch up and finally start using them. I&apos;m going</p>]]></description><link>https://blog.randombits.host/auto-updating-docker-containers/</link><guid isPermaLink="false">615c1fd7bb9dca00013d8ef3</guid><category><![CDATA[Docker]]></category><category><![CDATA[Docker Compose]]></category><category><![CDATA[Quick Tip]]></category><dc:creator><![CDATA[Conor]]></dc:creator><pubDate>Tue, 05 Oct 2021 10:07:34 GMT</pubDate><content:encoded><![CDATA[<p>Docker is the biggest advance to production software engineering in the past decade. If you&apos;re like me though, you slept on it this entire time and now feel too far behind to catch up and finally start using containers. I&apos;m going to make a few short posts on the rudimentary basics of getting up and running with Docker: creating a <code>Dockerfile</code>, publishing it to <a href="https://hub.docker.com/?ref=blog.randombits.host">Docker Hub</a>, and deploying/running it live on a server. 
This post will show you how to set up a <code>docker-compose.yaml</code> file which will automatically update when a new image is pushed to Docker Hub. Let&apos;s go!</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://images.unsplash.com/photo-1464725220624-b292bb3a600c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDN8fHdhdGNodG93ZXJ8ZW58MHx8fHwxNjMzNDI3NzIz&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2000" class="kg-image" alt loading="lazy" width="6000" height="4000" srcset="https://images.unsplash.com/photo-1464725220624-b292bb3a600c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDN8fHdhdGNodG93ZXJ8ZW58MHx8fHwxNjMzNDI3NzIz&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=600 600w, https://images.unsplash.com/photo-1464725220624-b292bb3a600c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDN8fHdhdGNodG93ZXJ8ZW58MHx8fHwxNjMzNDI3NzIz&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=1000 1000w, https://images.unsplash.com/photo-1464725220624-b292bb3a600c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDN8fHdhdGNodG93ZXJ8ZW58MHx8fHwxNjMzNDI3NzIz&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=1600 1600w, https://images.unsplash.com/photo-1464725220624-b292bb3a600c?crop=entropy&amp;cs=tinysrgb&amp;fit=max&amp;fm=jpg&amp;ixid=MnwxMTc3M3wwfDF8c2VhcmNofDN8fHdhdGNodG93ZXJ8ZW58MHx8fHwxNjMzNDI3NzIz&amp;ixlib=rb-1.2.1&amp;q=80&amp;w=2400 2400w" sizes="(min-width: 720px) 720px"><figcaption>Photo by <a href="https://unsplash.com/@veroz?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Andy Mai</a> / <a href="https://unsplash.com/?utm_source=ghost&amp;utm_medium=referral&amp;utm_campaign=api-credit">Unsplash</a></figcaption></figure><p>First of all, I would not suggest doing this for anything you want to remain stable or have any decent uptime on. 
This is for keeping up with the latest bleeding edge of a project, and it can be useful for development and side projects: when you merge your code, you can show it to people live a few minutes later. If you do this with containers you&apos;re looking to rely on (such as Matrix, Vikunja or Home Assistant (ohh, foreshadowing?)), you can easily see how you could accidentally deploy a breaking version and have to do some messy sysadmin work! However, there are valid reasons to automatically update to the <code>latest</code> tag, and to do this, we will be using <a href="https://github.com/containrrr/watchtower?ref=blog.randombits.host">Watchtower</a>, a Docker container that monitors other containers; when a watched container&apos;s image updates, Watchtower will <code>SIGTERM</code> it, pull down the newest version, and restart it. Pretty meta! Onto the code:</p><pre><code class="language-yaml">version: &apos;3.3&apos;

services:
  via-web:
    container_name: &quot;via-web&quot;
    image: conorjf/via-web:latest
    restart: unless-stopped

    volumes:
      - /var/log/instance_logs/via:/var/log

  via-watchtower:
    container_name: &quot;via-watchtower&quot;
    image: containrrr/watchtower
    restart: unless-stopped

    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

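    # Positional args name the containers to watch; --interval is how often
    # (in seconds) Watchtower polls for a newer image.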
    command: via-web --interval 30</code></pre><p>This relatively simple <code>docker-compose.yaml</code> shows how easy this is. We define two services, <a href="https://github.com/RobertLucey/via?ref=blog.randombits.host">Via</a> and <a href="https://github.com/containrrr/watchtower?ref=blog.randombits.host">Watchtower</a>. Via&apos;s image is set to <code>latest</code> from Docker Hub and it uses a simple trick to easily view logs explained <a href="https://blog.randombits.host/simple-splunk-log-ingestion-from-docker-containers/">here</a>. This is clearly something under development and so it&apos;s safe to set it up to deploy <code>latest</code> blindly. The <code>via-watchtower</code> service is also very simple, really only requiring two statements apart from the image. The <code>volumes</code> entry is there so Watchtower can be informed about Docker state changes, and the <code>command</code> is of the format <code>&lt;container to watch for updates on&gt; --interval &lt;seconds between update checks&gt;</code>. It&apos;s that easy! Using this and some continuous integration to build and deploy the Docker image, Via is kept constantly up to date for development and beta testing.</p><p>I hope this was instructive, and please reach out if you have any questions, suggestions or comments in general :)</p>]]></content:encoded></item></channel></rss>