[
        {
          "id": "posts-apt-alternative",
          "title": "Nala is faster than apt",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Linux",
          "tags": "debian, package_management, ubuntu",
          "url": "/posts/apt-alternative/",
          "content": "Nala does two things that apt doesn’t and maintains compatibility with the familiar existing commands.\n\nThe first thing it does, and the thing you want to do right after installing it is nala fetch. What the fetch command does is is finds the mirrors that are fastest for you and allows you to select multiple mirrors to download from.\n\nThe second thing it does, is it downloads more than one package at a time, it will try up to 3 concurrent downloads per mirror, as well as running downloads from multiple mirrors simultaneously. The 3 package limit is a throttle to prevent it from overwhelming mirrors. Apt only downloads 1 item at a time as a result Nala is much faster on the download portions of install, and apt update benefits as well.\n\nNala is in the repos for Trixie (Debian) and Questing Quagga (Ubuntu). On older releases you’ll need to install it directly.\n\nFor documentation and alternate install see: Nala’s Github Page."
        },
        {
          "id": "posts-dns-masq-fault-tolerant",
          "title": "Fault Tolerant DNSMasq With Redundancy",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Networking",
          "tags": "dns",
          "url": "/posts/dns-masq-fault-tolerant/",
          "content": "After ISC DHCP Server was deprecated by it’s authors, I followed their recommendation and switched to Kea DHCP Server, and then switched to dnsmasq configured for DHCP only. When I upgraded to an OpenWRT Router, switching BIND to Master/Slave gave me redundant synchronized DNS, with a little effort, I was able to make dnsmasq fault tolerant as well.\n\nFirst, why I chose dnsmasq over Kea.\n\nKea DHCP Server\n\nISC Kea is designed as an enterprise solution to replace the legacy ISC DHCP Server. It looks great on paper.\n\n\n  Modern JSON configuration\n  API‑driven dynamic updates\n  SQL‑backed lease storage\n  Built‑in HA with state synchronization\n  The official successor to ISC DHCP\n\n\nIn practice, the JSON configuration is both cumbersome and surprisingly inflexible. My environment had two profiles: one for unreserved hosts and one for reserved hosts. Kea has no mechanism to assign a profile to a reservation, so any deviation from the default profile had to be manually duplicated into each reservation block. This quickly becomes tedious and error‑prone.\n\nHigh Availability requires a database backend — and that database must be highly available. Once the system is fully built out, any failure means troubleshooting multiple moving parts: the DHCP daemon, the control agent, the database, the HA state machine, and the JSON configuration itself.\n\ndnsmasq DHCP Server\n\ndnsmasq DHCP is a simple one line configuration for reservations. It supports tagging entries to apply different profiles, a feature Kea lacks, it can associate scopes with interfaces to serve multiple subnets. My Kea configuration was 608 lines, my dnsmasq configuration is 84 lines, not including the guest vlan added after the conversion.\n\nAs a DNS Server dnsmasq is a host file with caching, but it’s easy to turn off and switch to BIND or Unbound.\n\nI’ve already touched on an important basic feature dnsmasq has that Kea doesn’t. If you need a feature it lacks, a systemd path unit triggered by changes to the reservations or leases file can run a script to create the feature.\n\nConfiguration Example\n\nKea Reservation\n\n  {\n    \"hostname\": \"hifiberry1\",\n    \"hw-address\": \"a0:ae:a1:87:e7:a0\",\n    \"ip-address\": \"192.168.1.31\",\n    \"option-data\": [\n      {\n        \"space\": \"dhcp4\",\n        \"name\": \"domain-name-servers\",\n        \"code\": 6,\n        \"data\": \"192.168.1.1,192.168.1.254\"\n      }\n    ]\n  }\n\n\n\ndnsmasq Reservation\n\n\n# dnsmasq config allows comments like this one, where JSON does not.\n# The tag can be added to multiple reservations unlike kea which\n# requires copying the option to each reservation.\ndhcp-option=tag:reserved-dns,6,192.168.1.1,192.168.1.254\ndhcp-host=a0:ae:a1:87:e7:a0,hifiberry1,192.168.1.31,set:reserved-dns\n\n\n\nImplementing dnsmasq With Failover\n\nSplit the Config\n\ndnsmasq supports breaking the configuration into smaller files, and I strongly recommend doing this. Jobs that synchronize leases and reservation files become simpler, and editing is easier when each file has a clear purpose.\n\nIn your main dnsmasq.conf, add:\n\nconf-dir=/etc/dnsmasq.d/,*.conf\n\nThen break your configuration into sections and move them into .conf files in that directory. Order doesn’t matter unless you accidentally define the same option twice.\n\nHousekeeping\n\nIf you want both servers to be equals, make sure both specify dhcp-authoritative on both. If you want a primary/secondary setup, only the primary should be authoritative. Non‑authoritative servers delay about a second before answering requests.\n\nTurn off DNS with port=0 if you’re using another DNS server.\n\nDefine Your Scopes\n\nFor my network I’ve divided up a class C space into 3 blocks: one for reservations and static addresses, and two for dhcp scopes. Create a reservations.conf in the conf.d directory. Create a scope.conf file in conf.d, where one server has the first scope, the other the second. If you’re running equal servers the scopes should be the same size, and each should be large enough to accommodate all connections when the other is down. If you’re running a Primary and Failover the Failover scope can be smaller, the failover scope can be smaller and should use a short lease time so clients quickly return to the primary after an outage.\n\nConfigure SSH Access\n\nEven if you’re running equal servers, you’ll need to pick which host will act as the configuration master, in failover configuration the primary is the logical choice. You’ll need to setup ssh from the configuration primary to the configuration secondary.\n\nSet up SSH from the configuration primary to the secondary. Using root is simpler, but a compromise of the primary’s root account compromises the secondary as well.\n\nA Script to Synchronize Reservations\n\nIf you’re using root, the following script will test and reload dnsmasq, then push the updated reservations. Placing it in cron.daily ensures it runs even if you forget to use the script and reload dnsmasq manually.\n\nFor better security, use a non‑privileged account on the secondary. Symlink the reservations file into a writable location, and use a systemd path unit (or inotifyd on OpenWrt) to reload dnsmasq when the file changes.\n\n#!/bin/bash\n\n# you may want to use the ip address instead of host name.\ndnsmasq --test || { echo \"dnsmasq config test failed\" ; exit 1; }\nsystemctl reload dnsmasq.service\nrsync /etc/dnsmasq.d/reservations.conf root@secondary:/etc/dnsmasq.d/reservations.conf\n# OpenWRT or other non systemd secondary\nssh root@secondary '/etc/init.d/dnsmasq reload'\n# Secondary with systemd\nssh root@secondary 'systemctl reload dnsmasq'\n\n\nConclusion\n\ndnsmasq’s DNS features are basic by design and easy to outgrow, but its DHCP engine is outstanding. With a few small scripts and a clean configuration model, dnsmasq becomes a reliable and maintainable DHCP server — even at scale."
        },
        {
          "id": "posts-css-frameworks",
          "title": "Picking a CSS Framework",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Web Development",
          "tags": "css",
          "url": "/posts/css-frameworks/",
          "content": "For some recent coding projects, I experimented with a few popular CSS framework options.\n\nI’ve used Bootstrap for most of my projects for a long time. It consistently remains one of the top CSS frameworks and is probably still the most widely used. Tailwind and Foundation are also popular. My primary focus isn’t design, although I think this site and the Eastern Pennsylvania Gaming Society site I designed both look good. They’re both Bootstrap sites. When I was working on ParkPin, I wanted something lighter and ultimately chose Bulma.\n\nFor this article, I built some framework demos, which you can reach at https://techinfo.brainbuz.org/frameworkdemo. I also made an archive of the source files, which you can download at https://techinfo.brainbuz.org/assets/frameworksdemo.tar.gz. While developing the demo page, I started with Bulma, then translated it to a Pure CSS version, and then to the other frameworks. You’ll see that I was able to get all of them to produce fairly similar sites. Pico was the exception: because it’s a lighter framework, I chose to lean into its strengths and paradigm rather than forcing it to reproduce a design built with another framework.\n\nAccording to W3Techs Market share trends for CSS frameworks, Bootstrap is still the king, with a 75% market share, while Foundation and Tailwind are each around 2%. Aguko also has similar numbers. My unscientific impression is that Bootstrap has a majority marketshare, with a significant cadre of Tailwind supporters, and a larger group that are looking at other alternatives where Bulma and Pico have the most buzz at the moment, I’m a little surprised that the data puts Foundation ahead of Tailwind and that Bootstrap still has over 15 times Tailwind and Foundation’s combined market share.\n\nBootstrap\n\nBootstrap has the advantage of excellent documentation and lots of forum discussion, and it gives you a pretty complete kit out of the box. It’s easy to get started and easy to find answers when you get stuck. On the downside, I sometimes feel like I’m fighting with it.\n\nWhen I’m creating a site, one of the first things I tackle is the color scheme. In theory, you should be able to build from Sass and override colors in Sass. But Bootstrap 5.x isn’t fully compatible with modern Sass and won’t be fully compatible with Dart Sass until version 6. I’m writing this in 2026, and Bootstrap 6 might still be several years away. In practice, you often need to use Bootstrap precompiled and then load an override stylesheet after it.\n\nBootstrap gives you a very solid foundation, but when you need to go beyond it, it can feel like a fight. For example, Bootstrap defines a lot of sub-elements in table styling, which can make restyling difficult.\n\nSites built with Bootstrap can look similar if they aren’t heavily customized. This site is built with Bootstrap (as of early 2026), and hopefully it doesn’t look like a run-of-the-mill Bootstrap site.\n\nFoundation\n\nBefore writing this page, I hadn’t done anything with Foundation. For the demo page, I found it comparable to Bootstrap. The general consensus seems to be that it has a steeper learning curve (AI helped me on the demo, so I didn’t get the full experience), offers better customization than Bootstrap, but isn’t as ready out of the box.\n\nBootstrap and Foundation are about the same age, and Foundation shares Bootstrap’s issues with modern Dart Sass, including no clear timeline for a fully compatible update.\n\nTailwind\n\nTailwind is favored by many UI developers, but it was also the most difficult for me to use. In Tailwind, you’ll use a lot of classes. When I customized Tailwind to match the appearance of the demo that was initially created with Bulma, the SCSS file was very short: I defined 3 colors and set 2 theme colors, everything else was done by applying many utility classes.\n\nBy contrast, the original Bulma version defined a 7-color palette and redefined 13 theme colors. For my demo, the Tailwind customization was mostly CSS, which fits Tailwind’s philosophy of writing very little custom CSS. For basic palette setup, this worked well; if you need more, the Tailwind project recommends PostCSS.\n\nI can see the appeal for developers who do a lot of design and want precise control. It has built-in classes for very complex layouts. For me, it’s a harder sell. Without AI assistance, it would have taken me a long time to learn it well enough to build this demo page, and everything needs many class tags. Tailwind is the hardest to master and takes the most effort to use.\n\nBulma\n\nBulma is a unique animal: semantic elements don’t dictate style. In Bulma, &lt;p&gt;, &lt;h1&gt;, and &lt;div&gt; can look the same without classes. In addition to the Bulma Framework Demo Page, I created a more specific Bulma Example to show how this works.\n\nInstead of semantic context, elements need classes. For example, an H1 might be &lt;h1 class=\"title\"&gt;, but if semantic hierarchy is irrelevant in your site, it could just as easily be &lt;div class=\"title\"&gt;.\n\nBulma also doesn’t include JavaScript, focusing purely on CSS. In the example, there’s a small helper JavaScript snippet for the menu. If you want to make text small with danger styling and a warning background, you can do this: &lt;span class=\"is-size-6 has-text-danger has-background-warning\"&gt;.\n\nDespite how different it is, I like using it.\n\nMy site ParkPin has no semantic structure, so Bulma was a perfect fit.\n\nPico\n\nPico is the opposite of Bulma: it can be used without classes. Pico provides 20 theme templates with different text color schemes, and it also has a set of 380 predefined colors that can be referenced by name. You’ll see those predefined colors in the SCSS source for the Pico Framework Demo Page. For a site where formatting follows semantic tagging, Pico is a great choice.\n\nPico is also minimalist. It’s tiny compared to the others (fast to load) and better suited for sites with simpler design needs.\n\nConclusion\n\nFor anyone inexperienced with CSS frameworks, I recommend starting with Bootstrap. It still has a huge lead in popularity, there is lots of coverage on the web, the docs are good, and it’s designed to be easy to learn.\n\nFoundation seems like a reasonable alternative to Bootstrap, but I don’t see a strong reason to pick it, and it has a steeper learning curve. Tailwind might be the best choice for design-oriented developers, but it also has a steep learning curve and requires extensive class usage.\n\nPico is lightweight and designed for classless use. It may limit your design options, but if your project is tightly tied to semantic document hierarchy, you can pick one of its 20 color schemes and move quickly. Bulma is a great platform, and once you get going, it becomes intuitive.\n\nMy CSS framework toolbox includes Bootstrap, Bulma, and Pico. Pico won’t fit every project, but for something like a documentation site, it can be a great choice. I already have several Bootstrap sites, with no reason to switch them. For new sites, Bulma is my current go-to."
        },
        {
          "id": "posts-openwrt-bind",
          "title": "Bind on OpenWRT",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Linux, Networking",
          "tags": "openwrt, dns",
          "url": "/posts/openwrt-bind/",
          "content": "Installation\n\nopkg update\nopkg install bind-server bind-tools\n\n\nStop DNSMasq from serving DNS\n\n# /etc/config/dhcp\nconfig dnsmasq\n  option domain ''\n  option port 0```\n\n\n/etc/init.d/dnsmasq restart\n/etc/init.d/named start\n\n\nBIND root hints on OpenWrt\n\nDebian typically uses /usr/share/dns/root.hints or /usr/share/dns/bind.keys.\nOpenWrt usually does not ship a root hints file, so define one explicitly. I copied this to /etc/bind/named.conf.root-hints and included it via named.conf.options.\n\nFailures priming and dnssec failures could be caused by time issues, make sure the router is synchronized via NTP and showing the time in UTC.\n\nTroubleshooting configuration\n\nstop the init script\n\n/etc/init.d/named stop\n\nrun bind in the foreground with logging\n\n/usr/sbin/named -g -d 2 -c /etc/bind/named.conf\n\nor increase debug level if nothing shows\n\n/usr/sbin/named -g -d 4 -c /etc/bind/named.conf\n\ncheck syslog output in another shell\nlogread -f\n\nquick config checks\n\n/usr/sbin/named-checkconf -z /etc/bind/named.conf\n\n/usr/sbin/named-checkzone example.com /etc/bind/zones/db.example.com\n\ncheck file permissions for zone files and keys\nls -la /etc/bind /etc/bind/zones\n\nrestart normally after fixing\n/etc/init.d/named start\n\nQUERY Logging and RNDC\n\nOpenWRT’s implementation effectively breaks the rndc utility, by trying to regenerate part of the rndc config every time named starts, so rndc won’t work to enable and disable query logging interactively. The fix is to add a config named.conf.logging and include it when needed via named.conf, which requires restarting named to activate and de-activate.\n\nlogging {\n    channel query_log {\n        syslog daemon;\n        severity dynamic;\n    };\n    category queries { query_log; };\n};"
        },
        {
          "id": "posts-openwrt",
          "title": "Networking With OpenWRT",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Linux, Networking",
          "tags": "openwrt",
          "url": "/posts/openwrt/",
          "content": "I’ve tried open router distributions before. Installation has always been an ordeal, and even though a piece of hardware is supported, installation doesn’t always succeed. I did have DDWRT running for a while, until that router died. This time I decided to upgrade my router, rather than rush to replace a failed one. I considered these popular open firmwares: DDWRT, Fresh Tomato, and OpenWRT. DDWRT, while established and maintained, is often behind on hardware compatibility. Fresh Tomato is Broadcom Specific, limiting compatible hardware, and also lags on features. OpenWRT targets ARM based hardware, and is able to support a lot of newer hardware, and is ahead of the other distributions in both implementing new features.\n\nI chose the Flynt 2 / GL.iNet GL-MT6000: it is wifi 6 capable, comes with a Quad Core 2Ghz ARMv8 processor, 1 GB of RAM and 8 GB of storage, making it a capable small server. It also ships with OpenWRT based firmware, installation is a simple firmware upgrade. It was easy, not just easy, impressively easy. GL.iNet’s own firmware is a fork of OpenWRT, and all one need to do is find the appropriate OpenWRT image, download it, and install it just as if it was an upgrade of the OEM firmware.\n\nNow that installation was easy and it wasn’t an emergency replacement for a failed router, I really enjoyed that I could customize which services were installed and configure it like a Linux Server.\n\nBasic Setup\n\nOnce flashed, OpenWRT defaults to 192.168.1.1, connect through ethernet and open a browser to that address. The initial page requires you to set a root password, this will also apply to the console root password.\n\nFirst Steps\n\nEach time you change settings, remember to click ‘Save and Apply’.\n\nCheck that NTP is turned on. System ⮞ System ⮞ Time_Synchronization.\n\nEnable SSH access, System ⮞ Administration ⮞ SSH_Access, my recommended settings are not allow root to use a password and to only answer on the lan, then tab over and add your ssh keys.\n\nSwitch the web interface to https, System ⮞ Administration ⮞ HTTP(S)_Access. The router ships with a self signed certificate which you’ll want to replace, there are two routes for doing this: using acme, you can read the instructions at https://openwrt.org/docs/guide-user/services/tls/acmesh, or you can distribute certificates to it, I’ll talk a more about that later.\n\nSet the IP address of your device. If you want the default with the router at 192.168.1.1, you don’t need to change anything, but I wanted to drop it in to replace a router on a different subnet, with a different ip address. The router will be pre-configured with 3 interfaces: lan, wan and wan6. By the way during setup giving the lan interface a temporary static ip on my network and then daisy chaining wan off the old router worked fine, which is helpful when you’re building the router and your setup computer isn’t isolated from the network.\n\nAdd missing packages. By design OpenWRT tries to select the minimal set of packages that will get you to a functional wifi router setup. If your device isn’t tight on storage install some common utilities, here are my suggestions: tcpdump ss bind-dig bind-host rsync bind-host bind-dig. FYI, netstat isn’t available, as the project considers it obsoleted by ss.\n\nWhen you configure Wifi, check the country code on each of your connections, on my router this defaulted to 00-world, which limits the radio to a very low power level.\n\nOpenWRT Linux Specifics\n\nOpenWRT is minimal, and a lot of things that would normally go on persistent storage are on volatile storage. The reason it prefers volatile storage is that OpenWRT supports a lot of very limited devices, and concerns for SSD wear.\n\nThe web management interface is called LuCI, and it wraps the command line management interface uCI. You can edit the uCI configurations by hand in /etc/config, but you always need to be careful about whether a particular service is managed by uCI or it’s regular config files. A minor flaw is that if you disable uci for a service to use normal configs, if it is a core feature, you can’t disable it in LuCI or tell LuCI to only display the config file, if you make changes in LuCI, they’ll be ignored, which is much better than having two frontends trying to configure the same thing. My preference is to either work through LuCI or edit the uci config directly rather than use uci at the cli, it is important to remember that you should always make sure your changes are saved if youbetween LuCI and either of the CLI options.\n\nOpenWRT has it’s own package manager: opkg. The package selection is limited, but this distribution has a narrow focus, the selection is pretty reasonable in context. Don’t forget to run opkg update if you’re installing from the cli, not only can the cache get stale, but it lives on volatile storage.\n\nOpenWRT is lightweight and uses /etc/init.d to control daemons. The service command is a wrapper, so /etc/init.d uhttpd restart and service uhttpd are the same thing.\n\nOpenWRT uses Dropbear as its ssh server, which uses both a system configuration for ssh keys in /etc/dropbear.\n\nInstalling Your Own Certificates\n\nFor my home environment I already have infrastructure using certbot and letsencrypt to generate wildcard certficates for my home network use the DNS API.\n\nMake a directory for your certificates, and install rsync if you haven’t already.\n\n/etc/config/uhttpd:\n#       option cert '/etc/uhttpd.crt'\n#       option key '/etc/uhttpd.key'\n        option cert '/etc/mycert/fullchain.pem'\n        option key '/etc/mycert/privkey.pem'\n\n\nCreate an ssh cert for root on the router and add it to authorized_keys for certagent on the infra server.\n\nmkdir -p /root/.ssh\nchmod 0700 /root.ssh\ncd /root/.ssh\nssh-keygen -t ed25519 -f id_dropbear -C certagent@openwrt\n\n\nEdit root’s crontab\n\n# Run weekly on Sunday at 05:00\n0 5 * * 0 rsync -r certagent@mycerthost:/ssl/mycert /etc/\n# Reload utthpd which hosts LuCI after getting new cert c\n# could be done with '&amp;&amp;' but I find separate entries more readable.\n05 5 * * 0 /etc/init.d/uhttpd reload\n\n\nBacking Up\n\nYou can run the sysupgrade command or use the LuCI interface. Small gzip tar files are created which are trivial to store. You can use a utility like ark to browse the configs from etc in the archive. When you customize configs you can add the files to /etc/sysupgrade.conf.\n\nConfiguring A Service Manually\n\nFor my home environment I was already running BIND for DNS and DNSMasq for DHCP only. During outages of my primary server I’ve temporarily stood up a second server, but I wanted to take this opportunity to go to a fault tolerant design, BIND is designed for it, doing it with DNSMasq will be another topic.\n\nOpenWRT has prebuilt packages for named (BIND), and doesn’t expect it to be managed by uCI. There were still some considerations, which may be the topic of another article.\n\nFor DNSMasq I had to disable the built in service and create the ‘real’ service. Also note that you’ll want to remove the standard dnsmasq package and then install dnsmasq-full.\n\nCreate Standalone DNSMasq Config\n\n/etc/init.d/dnsmasq-real\n\n#!/bin/sh /etc/rc.common\nSTART=60\n\nstart() {\n    /usr/sbin/dnsmasq -C /etc/dnsmasq.conf\n}\n\nstop() {\n    killall dnsmasq\n}\n\n\nEnable the real dnsmasq config\n\n/etc/init.d/dnsmasq stop\n/etc/init.d/dnsmasq disable\nchmod +x /etc/init.d/dnsmasq-real\n /etc/init.d/dnsmasq-real enable\n /etc/init.d/dnsmasq-real start\n\n# check for correct dnsmasq server in running processes.\npgrep -fa dnsmasq\n\n\nAdd Custom Config to Backup\n\nAdd the following to /etc/sysupgrade.conf, changes take effect after reboot.\n\n/etc/dnsmasq.conf\n/etc/dnsmasq.d/"
        },
        {
          "id": "posts-linux-firewalls",
          "title": "Linux Firewalls",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Linux",
          "tags": "firewalld, ufw, netfilter, nftables, iptables",
          "url": "/posts/linux-firewalls/",
          "content": "During the course of its history, Linux has had several built-in firewall stacks. ipfirewall was superseded early in the 2.x kernel series by ipchains, which was replaced by iptables in 2.4. The project (netfilter) responsible for iptables replaced it in 2014 with nftables and made iptables a front end for nftables.\n\nOn a modern Linux system, the command iptables -V will produce output like: iptables v1.8.11 (nf_tables). The nf_tables in parentheses indicates that iptables wraps nf_tables. The nft utility can be used to manage firewall rules. For modern releases of iptables, ufw, and firewalld, the rules created can be seen with the nft list ruleset command. However, the command doesn’t show which interface created the rule! Rules created directly with nft are invisible to the other front ends.\n\nComparison of Front Ends\n\n\n  \n    \n      Feature\n      iptables\n      nft\n      ufw\n      firewalld\n    \n  \n  \n    \n      Persisting Changes\n      requires iptables-save command and iptables service to load rules from saved file\n      similar to iptables; rules in /etc/nftables.conf or /etc/nftables/*.nft will be loaded by the service\n      changes made interactively are permanent; the config is stored in /etc/ufw/user.rules\n      interactive changes are ephemeral but can be made permanent with the --permanent flag or firewall-cmd --runtime-to-permanent\n    \n    \n      Complexity\n      rule syntax is terse and cryptic\n      rule syntax is comparable to iptables but less terse, parts of syntax inspired by tcpdump\n      simplest rule syntax\n      core syntax for basic rules is comparable to ufw, with the addition of having to specify zone; complex rules can be created with the rich-rules feature\n    \n    \n      Interface/Zone Binding\n      tables can be defined and rules bound to adapters\n      tables and chains are similar to iptables\n      rules can be applied per adapter\n      zone concept similar to Windows Firewall, cumbersome since all commands need to specify zone; rarely a useful feature on servers\n    \n    \n      Capability\n      can implement sophisticated and complex rulesets\n      goes beyond firewall rules; more advanced features like bridge interfaces are built in nft on modern distros\n      handles most common rules\n      has ability to create more complex rules with rich rules, less powerful than iptables and nft\n    \n  \n\n\nAvoiding Conflict\n\nThe easiest way to avoid conflict is to use only one front end. When you enable one, you should ensure the others are disabled. However, you can’t disable nftables or prevent packages from creating rules. nft list ruleset is a command you need to know, even if you don’t write rules in it. For example, Docker creates a lot of networking rules, and setting up a bridge interface for Kernel Virtualization creates nft rules to implement it. The only place you can see all running rules is through nft.\n\nWriting Rules\n\nSince all rules are really translated to nft and the only place they are completely visible is through nft, nft is the best place to manage firewall rules. There are practical barriers to just switching to it. iptables has been around for a long time, organizations with more complex rules already have iptables in place, and nft is another complex system to master. In environments where ufw and firewalld are sufficient, they remain good choices.\n\nHere are a few commands that are useful to know:\n\nnft list ruleset        # View all active nftables rules\nnft flush ruleset       # Remove all nftables rules (use with caution!)\nnft delete rule inet filter input handle 17 # handle number from nft list ruleset\n\n\nHere is a comparison of a basic rule to allow SSH. The nft version of the rule is a little friendlier than the iptables version. The ufw version of the command is much cleaner than the firewalld version.\n\n# both iptables and nft require permanent rules in a file, and may also\n# require setting up chains and tables in the configuration.\n# the example presumes the inet table and chains already exist for both iptables and nft.\niptables -A INPUT -p tcp --dport 22 -j ACCEPT\nnft add rule inet filter input tcp dport 22 accept\nufw allow ssh\nfirewall-cmd --zone=public --add-service=ssh --permanent\n# alternately you may use the port instead of the predefined service\n# in both ufw and firewalld\nufw allow 22/tcp\nfirewall-cmd --zone=public --add-port=22/tcp --permanent\n\n\nHere’s an example of allowing SSH only from a single host.\n\niptables -A INPUT -p tcp --dport 22 -s 192.168.1.100 -j ACCEPT\nnft add rule inet filter input ip saddr 192.168.1.100 tcp dport 22 accept\nufw allow from 192.168.1.100 to any port 22 proto tcp\nfirewall-cmd --permanent --zone=public \\\n  --add-rich-rule='rule family=\"ipv4\" source address=\"192.168.1.100\" service name=\"ssh\" accept'\nfirewall-cmd --reload # must reload to activate new rich rule.\n\n\nWhy I Strongly Prefer ufw to firewalld\n\nThe examples above demonstrate that ufw is a much easier basic firewall front end. Beyond syntax simplicity, firewalld has several practical drawbacks:\n\n\n  \n    Zone complexity: Every command requires specifying a zone, adding unnecessary complexity for most server use cases where a single security policy applies.\n  \n  \n    Two-step process: Changes require either the --permanent flag or a separate --runtime-to-permanent command, creating opportunities for mistakes.\n  \n  \n    Rich rule syntax: Complex rules require XML-like rich rule syntax that’s prone to escaping issues. From the examples we can see that you often have to go to rich rules for things that ufw can handle with simpler syntax.\n  \n  \n    Historical baggage: During the RHEL 7 lifecycle, firewalld used an outdated interface to iptables, which exacerbated compatibility issues as nftables phased in.\n  \n\n\nIn contrast, ufw provides immediate permanent changes, intuitive syntax, and handles the most common firewall scenarios without the overhead of zones or complex rule formats. For the 90% of use cases involving basic allow/deny rules, ufw is simply the better choice.\n\nHelpful Links\n\nIf you’re interested in transitioning to nftables, here are some resources for getting started.\n\n\n  RedHat Documentation: Getting started with nftables\n  Arch Linux Wiki nftables article\n  netfilter project homepage\n  nftables wiki\n  linuxconfig.org article on transitioning from iptables\n  How to Build a Firewall with nftables\n  fosslife.org How to Use nftables for Firewall Rules\n\n\nufw on non Debian Systems\n\nArch\n\nInstall iptables-nft and mask the iptables and ip6tables services before installing ufw. There are no default rules, so you must create a rule for ssh before enabling it.\n\nRHEL and clones\n\nNot recommended. While the package is available, even after applying similar fixes as needed for Arch it still doesn’t work well."
        },
        {
          "id": "posts-which-primer",
          "title": "Which Primer is Best?",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Home Improvement",
          "tags": "",
          "url": "/posts/which-primer/",
          "content": "PVA vs Latex\n\nPoly Vinyl Acetate Primers are widely used for fresh Dry Wall, and are often labelled as Dry Wall Primers, PVA for short. Latex Acrylic Primers have been around for longer and are more general purpose, referred to as Latex in this article.\n\nLatex Primers will bond to a wider variety of surfaces, including wood and plaster. PVA, only bonds to highly porous surfaces, such as fresh dry wall and joint compound. PVA also creates a complete seal preventing any moisture from passing through it, Latex does not create as complete a seal and moisture can penetrate it. For fresh dry wall both work really well. In damp environments PVA won’t hold up as well because moisture can be trapped under it. Plaster is very smooth and PVA won’t adhere to it well, and any humidity will cause it to peel off. Durabond and similar joint compounds are closer to plaster than easy sanding and drying type compounds (premixed in the bucket), they set shiny, and I would treat them as plaster if they’re not covered by the other compounds.\n\nThe advantage of PVA is cost. Leftover Latex Primer has a longer shelf life and is useful for other projects, offsetting the PVA cost advantage on smaller projects.\n\nOther Primers\n\nWhile Latex is the best all around general primer, there are many other types. Here are a few that I use regularly.\n\nOil based Cover Stain\n\nBonds well to metal and wood, and won’t raise the grain (a concern with any water based product), Cover Stain is also high hide and seals like PVA. It is great for painting over discoloration like water stains and smoke stains. While it is my first choice on metal and hard wood, I don’t consider it a great choice for fresh drywall or drywall repairs, and as a sealing primer it shouldn’t be used over plaster. If your kid drew all over the walls it is great for covering over the crayon marks you couldn’t scrub off the wall!\n\nShellac\n\nShellac based primers are great for sealing in stains, smoke damage, and odors. There are few things it won’t stick to, making it great for difficult surfaces. Shellac is a stronger sealant than PVA (making it a bad choice for plaster), and is the best knot sealer available. It also won’t raise the grain on wood, making it the best primer for wood with knots. Shellac is very fast drying and its solvent is alcohol, so it is harder to work with.\n\nBonding Primers\n\nBonding primers are usually acrylic based and are meant to bond to difficult surfaces. If you’re not comfortable with solvent based products, they can do a lot of the jobs I would pick oil or shellac for. Stix is a popular bonding primer, I dislike it because it brushes like Oatmeal, and requires sanding the globs instead of a light scuffing.\n\nLatex Primer can do a lot!\n\nSometimes Latex will work well on a difficult surface with the right prep — I’ve had great results with Latex Primer on PVC pipe where I sanded the pipe first then wiped it down with MEK immediately before priming."
        },
        {
          "id": "posts-wire-nuts",
          "title": "Wire Nuts are Absolete",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Home Improvement",
          "tags": "electrical",
          "url": "/posts/wire-nuts/",
          "content": "Before the twist on wire Nut, electricians had to solder wires together, then tape them. Ceramic twist on wire connectors were introduced in the 1920s, by the 1950s they were plastic and the current color codes for the different sizes were established.\n\nPros and Cons of Wire Nuts.\n\n\n  \n    Pro: Can be removed to allow reconfigurations.\n  \n  \n    Pro: Color coding makes it easy to select the right size nut from a mixed container.\n  \n  \n    Con: Some colors are similar, often Orange and Red connectors appear the same, I refer to them as Small Red And Large Red.\n  \n  \n    Con: Sometimes a wire will work lose, especially when pushing wires into a box after working on them, but they can work loose over time (typically if they were partially loosened while being pushed in).\n  \n  \n    Con: Twisting wires tightly requires effort.\n  \n  \n    Con: Take up space in a cramped box.\n  \n  \n    Con: When reconfiguring pigtails old twists often need to be straightened out or the wire trimmed.\n  \n  \n    Con: The most wires a nut can handle is 6, the more connections in a nut the more likely a loose wire.\n  \n  \n    Pro: For household wiring they are much easier to work with than the older alternatives: Screw Terminal Blocks, Push In, and Soldered.\n  \n\n\nWhile searching for more compact wire nuts I happened on Wago lever wire connectors. Initially I was skeptical, having previously experienced crimp on connectors (common in automotive wiring) and push in wire connectors.\n\nAdvantages of Wago 221 Connectors over Wire Nuts\n\n\n\n\n  \n    See through. Connectors can be visually checked to confirm all wires are properly seated.\n  \n  \n    Lever Lock. Holds the wire tight. Can be released if changes need to be made, wires aren’t twisted so don’t need to be untwisted or trimmed to reseat.\n  \n  \n    Compact. Takes up less space than twist on connectors.\n  \n  \n    Splice Connectors (included in most starter assortments). Don’t require a pigtail to extend a wire that is too short.\n  \n  \n    Color Coding is for the wire size. You can see by looking at the connector how many wires it holds.\n  \n  \n    Come in sizes for up to 10 wires!\n  \n\n\nThe orange connectors can handle 12-24 AWG in each socket, if you’re working with heavy wire the green connectors are 12 AWG specific and the grey ones are meant for 10 AWG. Most household wiring is 12 and 14 AWG, contractors often only run the heavier 12 AWG which can be used in both 15 and 20 Amp circuits; they do this so they are handling only 1 wire type and don’t have to keep track of which circuit is going to be which amperage, for them the that outweighs the higher cost for 12 AWG and that 12 AWG is harder to bend. 12 AWG and 14 AWG are the only two wires that are commonly mixed so it is important that connectors support this.\n\nOlder Push Ins\n\nWorking with the type of Push In connector used in auto wiring required tight crimping to make a solid connection, and changes require trimming the wires. If one side failed to crimp, the other side still needed to be cut to retry the connection. The push in connectors for outlets and switches were particularly bad, known for working lose, and once pushed in you had to cut the wire to replace the outlet.\n\nThe new lever connections are much better than the old push in connections: they’re see through so you can see that all of your wires are in place, if something does work lose, you can see it without having to take everything apart (as with wire nuts). If you need to rearrange your wires, the ends in the connectors are straight and not twisted, and they lever right out. The old push in outlets used a spring to grab the wire, the lever requires force to lock, making it likely to be more reliable in the long term.\n\nOther Lever Options\n\nThere was an earlier Wago Lever Nut which isn’t considered as good as the 221 Series, so make sure you’re getting 221. Other manufacturers are making similar products, but be wary of Push In connectors that might look similar to the lever connectors. Leviton is using a lever on their GFCI outlets, and maybe we’ll see it on regular outlets."
        },
        {
          "id": "posts-a-working-serial-console-in-kvm-guests",
          "title": "A Working Serial Console in KVM Guests",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Linux",
          "tags": "kvm",
          "url": "/posts/a-working-serial-console-in-kvm-guests/",
          "content": "By default guests don’t have a working console.\n\nSince, spice is being effectively deprecated, and in reality it never worked as reliably as vnc anyway (despite promises to be much better), I have to replace the spice displays on existing vms. For server vms, the serial console is a much better choice than VNC — lower overhead and works with virsh console command.\n\nYou’ll need to edit the guest with virsh edit to make sure it has the needed entries, some of which may already be present:\n\n&lt;controller type='virtio-serial' index='0'&gt;\n  &lt;address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/&gt;\n&lt;/controller&gt;\n\n&lt;serial type='pty'&gt;\n  &lt;target type='isa-serial' port='0'&gt;\n    &lt;model name='isa-serial'/&gt;\n  &lt;/target&gt;\n&lt;/serial&gt;\n&lt;console type='pty'&gt;\n  &lt;target type='serial' port='0'/&gt;\n&lt;/console&gt;\n\n\n\nand need to add these lines to /etc/default/grub on RedHat, or create a new /etc/default/grub.d/00_console.cfg (0644) on Debian.\n\nGRUB_TERMINAL=\"console serial\"\nGRUB_SERIAL_COMMAND=\"serial --speed=115200 --unit=0 --word=8 --parity=no --stop=1\"\nGRUB_CMDLINE_LINUX_DEFAULT=\"console=tty0 console=ttyS0,115200n8\"\n\n\nthen run the command:\n\n# redhat\ngrub2-mkconfig -o /boot/grub2/grub.cfg\n# debian\nupdate-grub\n#\nsystemctl enable serial-getty@ttyS0.service\nsystemctl start serial-getty@ttyS0.service\n\n\nIf you create your guests with virt-install, then adding the flag will create the console entry:\n\n  --console pty,target_type=virtio\n\n\nIf you create your guests with VMM, it defaults to creating a spice display, which you’ll need to change to VNC. After you create the machine you can add the console with virsh edit.\n\nThis article originally created 2025-03-13, revised 2025-08-01."
        },
        {
          "id": "posts-ssl-settings-for-web",
          "title": "Keeping Web Server SSL Settings Secure",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Security, Web Servers",
          "tags": "apache, nginx, kali",
          "url": "/posts/ssl-settings-for-web/",
          "content": "Recently I was updating my certificate issuance automation (which uses certbot to get certificates from Let’s Encrypt), and decided it was time to review my SSL web server settings. SSL Labs was rating my sites ‘A’, so nothing was urgent.\n\nKali Linux includes several tools for checking SSL, I decided to see what they turned up.\n\n\n  sslyze\n  sslscan\n  openssl\n\n\nThen I used Mozilla’s SSL Configuration Generator to review my web server configurations.\n\nopenssl\n\nThis utility is likely to already be installed on any Linux distribution. It can easily be used to read an SSL certificate from a remote host: openssl s_client -connect parkpin.me:443. It can also read a locally installed certificate: openssl x509 -text -noout -in /etc/ssl/mycerts/parkpin/cert.pem. It has options for forcing protocol versions and ciphers. I use this utility in my personal toolkit wrapped in scripts to parse the most important fields from certificates, as the raw output is several screenfuls.\n\nsslyze\n\nIs Python-based and very informative. It will tell you which protocols, ciphers and curves are in use, and also check compliance with the ssl-config.mozilla.org recommendations. It complained about two minor issues: a negative serial number for a certificate, and an older curve in my certificate chain.\n\nsslscan\n\nProvides less information than sslyze and does not check compliance with any recommendation sets. Compared to openssl it offers an easier cli syntax for simple queries and more concise output.\n\nFixing The Issues\n\nThe sslyze warning about the serial number comes on STDERR while everything else is on STDOUT. The complaint is originating within a Python library sslyze is using, the sslyze developers don’t consider it worth repeating in the main output. The library may be interpreting certain leading bytes as a sign. The specification calls for an integer, without excluding negative integers. The openssl s_client utility shows an integer without a sign. My conclusion is that this is not an issue — if it exists at all and is a false positive.\n\nThere is an option to request a stronger curve in /etc/letsencrypt/cli.ini, and that clears up the curve issue when I reissue the certificate.\n\nkey-type = ecdsa\nelliptic-curve = secp384r1\n\n\nmoz://a SSL Configuration Generator\n\nThis tool will generate an example configuration for many popular servers with three profiles: Modern, Intermediate, Old. Intermediate still allows TLS 1.2, Modern does not. At this time I see no reason to support TLS 1.2 and older protocols on general websites. Since this tool is actively maintained by one of the major web browsers and gives it’s recommendations in the configuration of any number of supported server config formats, it’s my guide of choice.\n\nApache config suggestion\n\n# this configuration requires mod_ssl, mod_rewrite, mod_headers, and mod_socache_shmcb\n&lt;VirtualHost *:80&gt;\n    RewriteEngine On\n    RewriteCond %{REQUEST_URI} !^/.well-known/acme-challenge/\n    RewriteRule ^.*$ https://%{HTTP_HOST}%{REQUEST_URI} [R=301,QSA,L]\n&lt;/VirtualHost&gt;\n\n&lt;VirtualHost *:443&gt;\n    SSLEngine on\n    SSLCertificateFile      /path/to/signed_cert_and_intermediate_certs\n    SSLCertificateKeyFile   /path/to/private_key\n    # enable HTTP/2, if available\n    Protocols h2 http/1.1\n    # HTTP Strict Transport Security (mod_headers is required) (63072000 seconds)\n    Header always set Strict-Transport-Security \"max-age=63072000\"\n&lt;/VirtualHost&gt;\n\n# modern configuration\nSSLProtocol             -all +TLSv1.3\nSSLOpenSSLConfCmd       Curves X25519:prime256v1:secp384r1\nSSLHonorCipherOrder     off\nSSLSessionTickets       off\n\nSSLUseStapling On\nSSLStaplingCache \"shmcb:logs/ssl_stapling(32768)\"\n\n\nWhen applying the recommendation, pay attention to the VirtualHost sections and code outside them. The code outside should be added to the global server configuration. On Debian/Ubuntu, create /etc/apache2/conf-available/sslserveroptions.conf and enable it for the global configuration. On RedHat and others that use httpd.conf, add the server-wide config there. Ignore the port 80 VirtualHost if you’re still allowing insecure HTTP traffic or have other logic for redirecting HTTP requests.\n\nnginx config suggestion\n\nhttp {\n\n    server {\n        listen 443 ssl;\n        listen [::]:443 ssl;\n        http2 on;\n        ssl_certificate /path/to/signed_cert_plus_intermediates;\n        ssl_certificate_key /path/to/private_key;\n\n        # HSTS (ngx_http_headers_module is required) (63072000 seconds)\n        add_header Strict-Transport-Security \"max-age=63072000\" always;\n    }\n\n    # modern configuration\n    ssl_protocols TLSv1.3;\n    ssl_ecdh_curve X25519:prime256v1:secp384r1;\n    ssl_prefer_server_ciphers off;\n\n    # OCSP stapling\n    ssl_stapling on;\n    ssl_stapling_verify on;\n    # verify chain of trust of OCSP response using Root CA and Intermediate certs\n    ssl_trusted_certificate /path/to/root_CA_cert_plus_intermediates;\n\n    # replace with the IP address of your resolver;\n    # async 'resolver' is important for proper operation of OCSP stapling\n    resolver 127.0.0.1;\n\n    server {\n        listen 80 default_server;\n        listen [::]:80 default_server;\n        return 301 https://$host$request_uri;\n    }\n}\n\n\nIn nginx, the ‘server {‘ blocks are equivalent to Apache’s VirtualHost. HSTS and other ssl directives need to be in the ‘server { ssl’ blocks for each of your sites and the other directives need to be in the main body of your config.\n\nStapling and HSTS\n\nOCSP stapling allows the server to include a signed response from a CA that the certificate isn’t revoked, but depending on the CA, these responses can be up to a week old. Stapling reduces CRL checks, saving browsers time and CA resources, at the cost of slowing propagation of revocations.\n\nHSTS tells the browser the site is HTTPS only, the maxage directives mean that once a browser sees it on a site it will remember that forever and always require ssl for that site.\n\nLet’s Encrypt\n\nAfter applying the suggestions, there were errors from OSCP stapling in my error logs. What I found is that Let’s Encrypt no longer supports OSCP. If you use them you should turn off stapling, the errors are harmless, the effect is the feature isn’t used. Let’s Encrypt considers OCSP to be a privacy risk in comparison to CRLs (Certificate Revocation Lists). OSCP had been adapted to reduce traffic generated from checking CRLS, improvements from browsers have made CRLs more efficient, and requiring less resources for Let’s Encrypt. As a Certificate Authority, they must maintain CRLs while OSCP has always been optional."
        },
        {
          "id": "posts-digikam-projects",
          "title": "DigiKam Projects",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Audio and Video",
          "tags": "media",
          "url": "/posts/digikam-projects/",
          "content": "DigiKam is a great FOSS tool available on Windows, Mac, and of course, Linux. A severe limitation is that it has no concept of Projects or Profiles, and its album/collection management isn’t intuitive.\n\nWhat DigiKam does is manage libraries of digital images. It reads and edits tags, and writes metadata to the files and XMP sidecars. For example, I have a large archive of family photos. With DigiKam, I’ve scanned for face tags, which both automates tagging and identifies individuals within a photo, as well as tagged images manually (face tagging doesn’t always work). DigiKam keeps both a database of my images and updates the metadata in my files with my added information. I recommend using both embedded and sidecar metadata, because the sidecars can be read as text files and not every image viewing utility will show all the metadata. Of course, it presents galleries of your images and allows you to search on metadata.\n\nThe way DigiKam both embeds and generates sidecars makes your collections very portable. When I was last evaluating alternatives, I was surprised at the uneven support of these features, and that some platforms could read but not write extended data.\n\nAlbums and Collections are related features that impact each other, but the dialogs are in completely separate places and don’t link to each other. Collections can be on any local, remote, or removable file system that is mounted. When you add a Collection, it automatically creates an Album. However, from the Albums dialog you cannot add a Collection. Within an Album, child albums are just folders, and moving items between albums does move (or copy) the files.\n\nMany of us have multiple collections of photos; for example, we might have work-related and personal collections. It follows that when we open our photo manager, we would want the option of working on only one project at a time.\n\nWhen you first run DigiKam, it asks where your pictures are and to set up a database. The default is SQLite, and I recommend using a local SSD for performance. One thing to keep in mind is that once you’ve configured it to write metadata into the files and sidecars, the database is redundant — it can be rebuilt by re-scanning your files (which can take a while). To add photos from other locations, you can either import them (which makes a copy), or go to settings and find the Collections dialog to add an alternate collection. Unfortunately, there is no built-in mechanism for keeping different profiles or project sets.\n\nMy workaround is to create multiple profiles and a script to switch between them.\n\nCreating Switchable Profiles\n\nThe DigiKam Config File and Your First Profile\n\nOn Linux, the configuration file is likely to be ~/.config/digikamrc. I created a ~/.config/digikam folder to hold my profiles. If you want to keep your existing profile, give it a name with the extension .digikamrc, and move it into the new digikam folder. When you run the script, it will show a list with the name you gave your profile. After you select it, ~/.config/digikamrc will be a symlink to that file.\n\nAdditional Profiles\n\nTo create a new profile, you’ll need to unlink ~/.config/digikamrc. Then launch DigiKam; it will prompt you to select a location for pictures. I recommend picking a place in your file system that has fast storage and free space to create all of your profile data directories under. When you launch DigiKam, you will go through the setup again. You can choose the location where your project images are located as the image location. On the database tab, create a subfolder matching the name you’re going to give this profile; your SQLite files will go there. After completing the wizard, check the settings to make sure that you are both writing data to files and creating sidecar files. Exit DigiKam.\n\nJust as for the first profile, move ~/.config/digikamrc to ~/.config/digikam/&lt;profileName&gt;.digikamrc. When you run the script, your new profile will be available.\n\nDigiKam Switch Script\n\nCopy this script into your path and make it executable. Adjust any of the paths as appropriate.\n\n#!/bin/bash\n\n# https://techinfo.brainbuz.org/posts/digikam-projects/\n\n# Define the config directory\nCONFIG_DIR=~/.config/digikam\n\n# Get a list of available config files\nCONFIG_FILES=($(ls \"$CONFIG_DIR\"/*.digikamrc))\n\nread -r -d '' HELP &lt;&lt;\"EOF\"\n\nDigiKam Profile Switcher\n\nLink ~/.config/digikamrc to a file in ~/.config/digikam/ to switch DigiKam profiles between projects.\n\nTo create a new profile, manually unlink ~/.config/digikamrc, launch DigiKam to create a new profile,\nthen move the digikamrc into the digikam folder with the extension digikamrc.\n\nEOF\n\n# Check if there are any config files\nif [ ${#CONFIG_FILES[@]} -eq 0 ]; then\n  echo \"No config files found in $CONFIG_DIR\"\n  exit 1\nfi\n\n# Print the available config files\necho \"Available config files:\"\nfor i in \"${!CONFIG_FILES[@]}\"; do\n  echo \"$((i+1)). ${CONFIG_FILES[i]##*/}\"\ndone\n\nCUR=$(readlink -f ~/.config/digikamrc)\necho \"----\"\necho \"Current config is: ${CUR}\"\necho \"----\"\n\n# Ask the user to choose a config file\nread -p \"Enter the number of the config file you want to use: \" CHOICE\n\n# Validate the user's choice\nif [ \"$CHOICE\" -lt 1 ] || [ \"$CHOICE\" -gt ${#CONFIG_FILES[@]} ]; then\n  echo \"Invalid choice\"\n  echo \"$HELP\"\n  exit 1\nfi\n\n# Kill running DigiKam processes\npkill --signal 3 digikam\n# Create a symlink to the chosen config file\nln -sf \"${CONFIG_FILES[CHOICE-1]}\" ~/.config/digikamrc\n\necho \"Config file switched to ${CONFIG_FILES[CHOICE-1]##*/}\""
        },
        {
          "id": "posts-choosing-new-blog-platform",
          "title": "Choosing the New Platform for this Blog",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Web Development",
          "tags": "bridgetown",
          "url": "/posts/choosing-new-blog-platform/",
          "content": "Every time I start to think about something that might be a web project I start with WordPress, and usually decide to do something else. At the moment, this blog, and the site of a club I belong to, are the only things running on my multi-site WordPress installation. I do most of my writing in AsciiDoc or Markdown, and I prefer working in those formats to working in Post Editor on WordPress.\n\nBecause the Text Editor -&gt; Commit -&gt; Push workflow fits my style of working, I’ve thought about going to a static site generator and tried out Jekyll and some others in the past. On this round of thinking about it, Grav and Jekyll were the leading choices. Another platform that interested me is Payload CMS. All three of these are considered Headless CMS, which means that in contrast to WordPress, Drupal and Joomla, they are not dependent on a front end application.\n\nPayload\n\nWhile both Grav and Jekyll work with Markdown files, Payload is radically different. Payload is a framework written on top of the Nextjs React Framework. Everything in Payload is TypeScript. It will work with any backend data you will it to. To look at it in the proper prospective it is a collection of components that you build your TypeScript/NEXTjs/React with to get CMS components pre-built. When I started working with it I was hoping that it would work as an on ramp to gradually learning those platforms.\n\nWhat I found is that, if I wanted it to be my platform, I needed to work from the other end and build my Java Script developer skills first. While I liked how everything became code in Payload, it wasn’t a practical choice, and it was going to be tied to a database, even if completely up to me which one.\n\nGrav\n\nThe next choice on the list was Grav. Grav is a flat file CMS written in PHP. Like WordPress, you need to run the application live on a server. For the club site I host, Grav has a working (but not great) Administration Plugin, and Jekyll does not, so I decided to give that a go next. After working on converting both the club’s site and this blog, I came to a number of points of frustration, some of which were limitations imposed by the twig templating system it uses.\n\nJekyll\n\nSo finally I came back to Jekyll. After putting the effort into Grav, I found Jekyll a lot easier to work with. While Jekyll has a jekyll-admin plugin, it had dependency problems that could only be resolved by installing really old components.\n\nAnother group I belong to was organizing a major event and I decided to make a calendar app for it, in Jekyll. I was able to put together a simple app. Having a job rebuild the site every hour, I had no trouble figuring out how to make it do the opposite of normal blog behavior — only show future posts and not past ones.\n\nSinatra\n\nBecause the event app needed to be updated on the fly and by people who would not be comfortable doing it in vi via ssh, I needed to create a way of doing this in a browser. This is why I had gone to Grav first. In Jekyll I was able to create some forms for updating/creating posts, I know enough Ruby that I was able to create a really simple sinatra app to act as an api backend. My complaint about Sinatra, is that it is only willing to run behind a proxy if the environment is production, but I needed to proxy it in development because the Jekyll webrick development server and sinatra are two separate processes on two different ports, so I had to set Sinatra’s environment to production in development. To deploy, there had to be two sites, the public static generated site, and the other using web server authentication for the admin site running the jekyll development server and sinatra.\n\nBridgetown\n\nAfter being able to provide limited Content functions by adding Sinatra, Jekyll was much easier to work with. Although superficially similar to Grav’s twig, the Liquid Template engine is superior. While Liquid is adequate for my needs, the BridgeTown fork of Jekyll supports erb out of the box, and lets you use both templates on the same site.\n\nThere are small changes between the two, which means you have to make adjustments, since there is far more coverage of the primary Jekyll branch. When using a coding assistant you need to remind them that you’re working with Bridgetown rather than Jekyll. As an example you might typically use site.posts to access the default posts collection in Jekyll, but in Bridgetown collections.posts.resources. Bridgetown also implements esbuild, using the build default of src/_components/js | css is easy enough, creating a src/assets folder is an easy way to bypass the build system (which merges your component files). For a project where the Static Site Generator was backing a more involved TypeScript application, having the plumbing already in place is a plus (the Bridgetown docs recommend using // @ts-check in vscode and not to use Typescript). IMHO, as long as scss gets compiled (which Jekyll does without a build system), the reason to want a build system is trans-piling TypeScript and managing larger JavaScript projects (that you likely want to convert to TypeScript).\n\nThe Winner\n\nBridgetown. For this blog both Jekyll and Bridgetown will work well. The Gaming Club website is deciding what they want to do. At the moment I have time to build them a site in BridgeTown and pair it with a Sinatra app, even better, Bridgetown comes with Roda and doesn’t need the external app to handle the admin site, a clear win. Markdown has great portability for the next platform switch. Hosted WordPress is inexpensive if they choose to stay with it."
        },
        {
          "id": "posts-keepass-and-keepassxc-totp-and-why-i-picked-aegis-as-my-totp-app",
          "title": "KeePass and KeePassXC TOTP (and why I picked Aegis as my TOTP App)",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Security",
          "tags": "aegis, keepass, otp",
          "url": "/posts/keepass-and-keepassxc-totp-and-why-i-picked-aegis-as-my-totp-app/",
          "content": "One of the neat things about both KeePass variants is that they can manage OTP keys, to try it out I wanted to import some existing keys. Most of the commercial vaults don’t allow you to export or see your OTP secrets. Neither version natively supports QR scanning, so you have to cut and paste the strings. This prompted me to look at some alternatives and the one I like is Aegis.\n\nAegis is FOSS software and it allows import and export of keys from other applications. Unfortunately, while it can read many other Vault program’s data, you have to root your phone first to try. Google Authenticator, does have the ability to export your data as QR codes, which Aegis can read. If you’ve used Authenticator Apps before Aegis is straightforward. It supports several methods of backup, including the android backup system that lets you restore apps when you get a new device, encrypting your backups with your password.\n\nAegis is a great tool for migrating. It can read Google Authenticator’s QR code exports. When you’re forced to re-register your accounts from other proprietary apps it will read the QR codes. It can generate QR codes to import to other authenticator apps, you can view (and copy) your secrets by editing the entry, and it will allow you to make an unencrypted export of your database, that you can access by mounting your phone on your PC. Don’t forget to delete the file when done.\n\nThere are several KeePass compatible android clients, I only looked at the most popular one Keepass2Android, which can store the TOTP field and read a QR code into it, but did not generate codes for me.\n\nAdding TOTP to an Entry in KeePass2\n\nEdit the entry you want to add TOTP to.\n\n\n\n\n\n\n\n\n\n\n\nTOTP in KeePassXC\n\nFrom the list of entries it is easy to access the TOTP functions.\n\n\n\n\n\n\n\nWhen I create a TOTP entry and need a place to save the recovery codes, the most convenient place is (not the best place, but better than in a manila folder on my desk) is the notes field of my password manager. By default both KeePass programs will show the notes.\n\nIn KeePass2, From the View Menu, Configure Columns and then uncheck the notes field so it is no longer in the entry display.\n\nIn KeePassXC click the Settings Icon, then the security settings (Shield), and check ‘Hide entry notes by default’.\n\nGetting your TOTP codes and your passwords reunited is a win for convenience, but if your vault is stolen and cracked, game over. If you’re using your vault correctly, you have a different random password for each site, which already contains the damage from a password breach at one site, to that site.\n\nThere are several Plugins for KeePass2 for TOTP, they try to enhance the existing features and don’t add significant features.\n\nIf your passwords aren’t on your phone, or at least only a subset, then using a separate app that lives on your phone is providing an additional layer of security because both your computer and your phone will need to be compromised to get your TOTP protected logins (provided you keep your recovery codes somewhere else). If you do decide that you want to unite TOTP in one application, both KeePass implementations support having multiple files open at the same time, splitting the login and the totp to different databases requires a hacker to steal and crack both files.\n\nAt least for now, I’m still syncing only a subset of my passwords on my phone (using non-foss Bitwarden, but they manage sync between devices), and switching to Aegis for TOTP."
        },
        {
          "id": "posts-keepass-keepassxc-evaluation",
          "title": "KeePass KeePassXC Evaluation",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Security",
          "tags": "keepass",
          "url": "/posts/keepass-keepassxc-evaluation/",
          "content": "A long time ago I started using an open source program called Password Safe to manage my passwords. On Linux Password Gorilla is file compatible. There hasn’t been a commit to Password Gorilla in 5 years, and there isn’t much activity on Password Safe.\n\nEven though it’s about the same age, KeePass is still very actively developed. Like Password Safe-Gorilla, KeePass started on Windows, then its’ community developed a cross platform version. KeePass still actively supports its’ legacy version 1, active development is on version 2. The Cross Platform version is KeePass XC, the two projects have a cooperative relationship. With mono, Key Pass 2 installs easily and runs well on Linux and Mac, KeePass XC was always intended to run on all three.\n\nKeePass also has an active Plugins community. KeePassXC does not support plugins. While KeePass maintains a registry of known Plugins, it does not have a plugin infrastructure or vetting process – you have to download each plugin and copy it into your KeePass installation, and the Plugins I did try to use either didn’t work or didn’t add a lot. Both programs integrate enough of a feature set that plugins are unnecessary.\n\nWhen setting up a new database, both variants rated my password quality. Which points to an inherent conflict, stronger passwords are a nuisance to type every time you open the vault, but medium passwords are no longer adequate. The option to add a file or hardware (yubikey) based key is offered, but unless one has the discipline to take the key out and place it somewhere else (and do this every time they need to unlock the vault), this doesn’t help if a burglar takes the computer with the key in it!\n\nFor the import process, you’ll want to use KeePass 2, because XC can only import from Bitwarden and 1Password. KeePass 2 has an import filter for Password Safe XML. Delete the export as soon as you’ve imported it, you don’t want to leave the un-encrypted file around.\n\nOn Windows both Browser and Biometrics Integration worked fairly well, but on Linux, my main OS, Biometrics didn’t work and the Browser Integration had issues. On Windows Bitwarden of course worked well with Browsers and Windows Hello, on Linux it has solid browser integration, including in browser Passkey Support.\n\nWhile KeePassXC is my direct successor for PassSafe, its’ victory is tainted by the fact that BitWarden works better and lets me use PassKeys on Linux today."
        },
        {
          "id": "posts-clean-install-2024",
          "title": "Clean Install 2024",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Linux, Debian and Ubuntu",
          "tags": "mint, ubuntu",
          "url": "/posts/clean-install-2024/",
          "content": "My current desktop is 6 years old and has been in place upgraded, and probably had a lot of miscellaneous things installed but not used since. I’m delighted that performance isn’t pushing me to a new system. Whether I went to the new version of Mint or another Distribution it was time for a clean install.\n\nDistribution Choice\n\nI have a problem with my main system locking up. My working theory is that a browser tab crashes (in the infinitely looping sense) and the whole system locks up, sometimes it comes back and sometimes it doesn’t and I have to do a hard reboot. I’ve never been able to get to the root of this, until recently I saw a post pointing at the Cinnamon Desktop Environment. My Windows PC doesn’t freeze or blue screen nearly as often. Let me qualify that, video conferencing apps, both Zoom and Teams crash either computer all the time, and given that one is an ASUS and one an Azrock and one Intel the other AMD, that has to be a bug in those apps with certain hardware types (I suspect the two brands have a lot of similar components and the boards are the same age, and my fanless graphics cards have similar nvidia gpus). My laptop running Kubuntu doesn’t have the problem.\n\nThe only way to truly know is to live away from Cinnamon for an extended period of time. I wanted to try living on Plasma 6. Unfortunately, Plasma 6 isn’t ready enough. I’m not comfortable switching my main system to Arch, although I’m happy with it on my laptop – I don’t live there, if an update breaks arch, reinstalling is just an inconvenience, not a disruption. I considered KDE Neon, but they’re still backed by 22.04, and Kubuntu is trying to have Plasma 6 ready for 24.10, with no promise of when Plasma 6 will backport to 24.04. I neither wanted to wait, nor have to upgrade a non-LTS twice a year. I can’t expect a stable up to date Debian or Ubuntu base for Plasma 6 for at least a few months. On my Mint system I did try Plasma 5 for a bit, and felt it to be a downgrade from Cinnamon.\n\nUbuntu is the base of Mint, Kubuntu and Neon, with the ability to install Cinnamon, should I find it not guilty, or worth a weekly crash, and Ubuntu will have the ability to install the Plasma 6 backport or PPA when the Kubuntu team makes it available.\n\nMoving Configurations\n\nSince my M2 hard drive was reporting itself at 50% of lifetime wear, bringing in a new M2 hard drive would allow me a dual boot option, with 2 M2 slots on my motherboard. Except that when I opened it up, the second M2 slot was a short slot and neither of my M2 drives would fit there! So I backed up my home directory and /etc to the Winchester drive living in the box, and began my install.\n\nAfter setup, file access to my old drive should have permitted me to bring back a number of programs. I was not able to do that with the snaps of thunderbird or firefox that come with Ubuntu, nor any snaps that I wanted to recover. Firefox maintains a debian repo for firefox, but not thunderbird, the latter I reinstalled through flatpak. Even packaged firefox did not want to accept my profile and I relented to setting up a mozilla account to sync the settings. I generally don’t want my browsers doing this, but as the fox is my primary browser, I decided to accept it and that syncing my primary linux with my primary windows browser was a benefit. Mozilla encrypts the sync data at the client so it should be safe if my mozilla account has a strong password. The Thunderbird flatpak was willing to import my old profile when copied (flatpak keeps user config in .var/app, snap in .cache). Incidentally, VSCode moved by simply copying my .config/Code folder into the new .config folder.\n\nDon’t bother with backing up a Thunderbird profile through the menu in Thunderbird, it won’t import a backup larger than 2GB. The snap version would not import an existing profile, but the flatpak one did. When there is a choice of flat or snap, choose flat.\n\nTo sign into firefox sync I had to boot from my old drive. To do that I had to get an add-on board to allow me to install a second M2. Which brings me to another interesting issue. There are two different types of M2 drive and slot, and while external enclosures and the one usable slot on my motherboard can use either, most add in cards only support the PCIE type. The two types are differentiated by the notches in the connecting edge, some have both notches to add to the confusion. Also the weirdness of the M2 connector conflicting with a SATA port makes sense, the SATA type needs to connect to a SATA port and mobo manufacturers didn’t want to add another SATA controller for the M2 slots or have fewer SATA connections. I found this inexpensive board that supports 1 module of each M2 type and requires a connection to a SATA port for the older type. It’s not staying permanently installed, but will be permanently in my tool kit for future upgrades and M2 replacements.\n\nhttps://www.newegg.com/p/17Z-013G-00007 or https://www.amazon.com/gp/product/B07JKH5VTL/ref=ppx_yo_dt_b_asin_title_o03_s00?ie=UTF8&amp;psc=1"
        },
        {
          "id": "posts-how-debian-broke-ssh-server-on-upgrades-on-bookworm-and-ubuntu-noble",
          "title": "How Debian Broke SSH Server on Upgrades on Bookworm and Ubuntu Noble.",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Linux, Debian and Ubuntu",
          "tags": "ssh",
          "url": "/posts/how-debian-broke-ssh-server-on-upgrades-on-bookworm-and-ubuntu-noble/",
          "content": "In the Bookworm release cycle someone decided that there was some reason ssh was a bad group name and that because only 1 other package was affected, changing to _ssh wouldn’t break anything.\n\nExcept, that when there are local changes to ssh group membership the upgrade still renames the existing group to _ssh. For everyone who had used the builtin ssh group, this change broke remote ssh access on the upgrade to Bullseye, and subsequently Ubuntu’s Noble.\n\nThe fix is simple, as long as you remember to do it before upgrading a system. Create a new group (you’ll want it to have different gid than the builtin one) something like sshusers, copy the users to it and then update the sshd configuration to ‘AllowGroups sshusers’ and restart the sshd service. Remember to update your ansible playbooks as needed."
        },
        {
          "id": "posts-ditching-google-voice-for-voip-ms",
          "title": "Ditching Google Voice for VOIP.MS",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Audio and Video, Messaging",
          "tags": "sms, telephone, voip",
          "url": "/posts/ditching-google-voice-for-voip-ms/",
          "content": "Sometimes calls got put straight to voicemail, sometimes it won’t make an outbound call and I had to switch to my Cell.\n\nThe problem with being unhappy with a free service is that every thing I seriously considered was a lot more expensive. Recently a friend recommended voip.ms – 3500 residential minutes per month for $4.25 + 1.50 for E911 and SMS for less than a penny a message.\n\nOf course there is a catch, while voip.ms has great features, it is a very raw, no nonsense SIP product, with a lot of features. Where GV worked in any web browser and had an app for my cell, I was going to need to replace those pieces.\n\nSetting Up My VOIP.MS Account\n\nAs soon as you’ve created the primary SIP (which you’re never going to use), carefully review the Account Settings, go through each tab. There was a field needed that wasn’t filled automatically when I enrolled. The good news is that their support was easy to reach via chat and quickly directed me to the setting that was missing. You’ll need to pick from a long list which proxy you’re going to use for your primary SIP, you’ll probably want to stick to it, if you want to use extensions they all need to be on the same proxy. When I set up the service I allowed them to assign me a number, it takes a few days to port in your number, and I wasn’t going to initiate a transfer until I was satisfied.\n\nIf there is a device active (registered) on your primary SIP none of your others will ring. You’ll have to create your main SIP, but don’t plan on using it. As soon as you finish setup create a sub-account for each device you plan to use, I have a VOIP Adapter, a client on my main desktop, and client on my smartphone. When you create your SIPs select Encrypted. I initially setup without, but then had to reconfigure each client, and a flaw of their interface is that every time you edit a sub account SIP you have to set a new password for it.\n\nSetting up VoiceMail, Ring Group and IVR\n\nYou can upload recordings, when configuring your Voice Mail you can select one of these or do the normal Voice Mail setup to record a greeting, you can choose to transcribe and email messages, and to delete them after sending.\n\nTo have more than one device answer a number, you’ll have to setup a Ring Group and assign each of your sub-account SIPs to it. Assign your DID to point to your Ring Group.\n\nOptionally you may want to set up an IVR for call screening. When GV screened my calls the phone would ring once, and I would still end up answering a call that was disconnected. I created two extensions in the IVR, one for the Ring Group and one to go straight to Voice Mail. The greeting on my IVR directs callers to select either of these extensions, most current robots won’t be able to select the right choice and my phone won’t ring. There is an option to upload contacts and automatically bypass the IVR for them. I also created a more aggressive IVR that doesn’t announce the ring through mailbox, but decided not to use it since I’d need to give out an extension with my number.\n\nwiki.voip.ms has a library of instructions for popular devices and softphones.\n\nSetting up My Adapter\n\nWhen I went to setup my VOIP adapter, I discovered that PolyCom had End of Lifed it, the GrandStream 801 device is so inexpensive that I ended up replacing my EOL Obi 200 device after some initial testing with it.\n\nVOIP Clients\n\nThe most important client for me is that the Phone Adapter., but it’s nice to be able to have the headphones on the Computer and to be able to take my number on the go with my Cell. The good and bad news is you have to bring your own client. I was initially successful with Zoiper (not free). It integrated directly with my contacts, and worked on both the Linux Desktop and Mobile. When I upgraded my Mobile, the settings did not transfer over and I had difficulty getting it working, their tech support blamed voip.ms, and voip.ms worked with me to determine the problem was with the Zoiper configuration. I grabbed MizuDroid and configured it in five minutes rather than working through configuration screens on Zoiper to fix it. I’m not thrilled with MizuDroid, it seems like I have to have the application open in order for it to work, where Zoiper was listening for calls all the time.\n\nWhat Isn’t Great\n\nTwo Factor Authentication\n\nSome institutions requiring Two Factor Authentication require that a number be serviced by a recognized mobile provider and won’t allow VOIP numbers for texting, but they may allow it for Phone Me 2FA. If you have an IVR setup you will need to either find out the number they call from and add it to your VOIP.ms phonebook or you will need to remove the number from your IVR to allow the 2FA to happen, then resume IVR.\n\nOn the other side of 2FA, I had set it up, only to eventually get locked out of my account. It looked like a time drift issue as the window to enter the code from my device got shorter and shorter until I was unable to login at all.\n\nPhone Book\n\nTo have numbers bypass your IVR you have to import them to the VOIP.ms Phonebook. Their format is 3 fields [‘Speed Dial’, ‘Name’, ‘Number’]. You can leave the Speed Dial Field empty if you don’t use it, but you need to begin with an empty field in your csv. Phone Number must be only digits. Finally, if contacts have multiple numbers, you’ll need to import each as a different name (ie ‘John Karr VOIP’, ‘John Karr CELL’). To refresh your phone book, you’ll need to export your contacts, you can use this GIST for my current python script to cleanup. To do a fresh import you must delete your old phonebook first. If you have two contacts who share a number, there will be a duplicate error. When you click ok on the error, don’t click import again, verify that the numbers are present. The only use I have for the Phone Book is for IVR bypass for recognized numbers. I could also use it to setup a block list, by creating a blocked group, this would be a multi-step every time a number has to be blacklisted and isn’t nearly as convenient as the equivalent is in the phone or messages app.\n\nThe script I wrote to help with the phonebook import is up on a GIST.\n\nSMS\n\nWhile SMS and RCS are supported, and inexpensive, when you count up the OTP messages you get in a day, it definitely costs more than the free messaging I had with GV and that I get on my Mobile. Interfacing with SMS is more cumbersome. I have email forwarding setup, so a copy of all SMS goes into a dedicated folder in email. Unfortunately, the SMS messages don’t thread in the inbox. VOIP.MS have a web interface, which requires opening their site, going through login and captcha every session, the interface isn’t great. When I’m using my Mobile, the VOIP.MS android app (created by one of their users, not them) works fairly well, it can read my contacts. I’ve found that often times long messages and image attachments don’t work. Overall the experience is much inferior to Google Voice, and I’ve been favoring my Mobile for texting, since once I’ve signed into Google on a browser I can just open the Google Messages Web App, I find myself preferring to use my Mobile number for text, even responding to messages sent to my voip line there."
        },
        {
          "id": "posts-tip-how-to-fix-windows-security-prompt-for-mapped-drives",
          "title": "Tip: How to Fix Windows Security Prompt for Mapped Drives",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Windows",
          "tags": "",
          "url": "/posts/tip-how-to-fix-windows-security-prompt-for-mapped-drives/",
          "content": "Windows applies internet security rules in annoying ways to a local system. Forcing you to click through a security warning on every file on a mapped drive. Even after you dig your way through settings to find the internet security tab, whitelisting the IP addresses on your local area network still doesn’t work for mapped drives. On a whim I tried entering my mapped drive letters (drive letter colon i.e. H:), and much to my surprise the annoying dialog went away."
        },
        {
          "id": "posts-getting-date-to-default-to-24-hour-format",
          "title": "Getting Date to Default to 24 Hour Format",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Linux",
          "tags": "ansible",
          "url": "/posts/getting-date-to-default-to-24-hour-format/",
          "content": "A previous employer of mine had started with their server clocks set to their local timezone, on which they’re continuously accumulating technical debt in their code maintaining the extra layer of timezone conversion. Learn from that mistake.\n\nSetting the Time Zone\n\nThe easier part is setting the timezone.\n\n# show current\ntimedatectl status\n# see all 600 or so timezonestimedatectl\ntimedatectl list-timezones\n# set to UTC\nset-timezone UTC\n\n\nSetting the Locale\n\nThe standard locale en_US.UTF-8 sets the time format to 12 hour AM/PM. It is possible to override just the time display by setting LC_TIME to a locale that defaults to 24 hour time, such as en_GB.UTF-8. The C.UTF-8 locale defaults to English with a 24 hour clock, which is what I’m currently using.\n\n# show all known locales/languages\n# If the locale you need is not present,\n# on Debian family install the package: locales-alllocalectl\nlocalectl list-locales\n# override just time displaylocalectl\nset-locale LC_TIME=C.UTF-8\n# My preferred solution# set LC_TIME to current locale to delete LC_TIME.\nset-locale C.UTF8\n\n\nSetting the Time Zone With Ansible\n\nSince this is something we want to do when setting up every server, it is just a few lines in Ansible.\n\nroles/rolename/defaults/main.yml\n\n...\ntimezone: 'UTC'\nsetlocale: 'C.UTF-8'\n\n\nroles/rolename/tasks/settimelocale.yml\n\n- name: Set Timeformat\n  ansible.builtin.shell: \"{{ item }}\"\n  tags:\n    - timezone\n  loop:\n    - \"localectl set-locale {{ setlocale }}\"\n    - \"timedatectl set-timezone {{ timezone }}\""
        },
        {
          "id": "posts-vrrp-and-keepalived",
          "title": "vrrp and keepalived",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Web Servers",
          "tags": "vrrp, keepalived, load-balancers",
          "url": "/posts/vrrp-and-keepalived/",
          "content": "Briefly keepalived is a utility for attaching virtual ip addresses to hosts that can fail over to another host when a check condition fails. keepalived uses the vrrp protocol, which sends status messages to a multicast address.\n\nAt the command line keepalived provides no utility that can tell whether nodes are exchanging their broadcast messages or which node is currently MASTER for a VIP.\n\nkeepalived can be made to dump some stats, but it requires sending a signal to the application. The stats will give an indication of packets sent and received, it doesn’t tell you directly which node currently has a VIP, but it can be used to deduce where master is – subtract the number of times the node released master from the number of times it became master, if the result is positive that node is master (if its greater than 1 there might be problem).\n\nIt takes two commands to see the stats, fortunately it is easy to wrap it in a shell script.\n\n#!/usr/bin/bash\n\nkill -s $(keepalived --signum=STATS) \\\n    $(cat /var/run/keepalived.pid)\ncat /tmp/keepalived.stats\n\n\nThe keepalived stats dump has to be run against each host, and perhaps the most important item, which host is master for each of your virtual ips, needs to be inferred.\n\nSince keepalived uses vrrp, it sends out vrrp broadcasts, at a frequency specified by the advert_int in each vrrp_instance block, most commonly this is 1. The vrrp broadcasts go to multicast 224.0.0.18, any host on the subnet (possibly further depending your router configuration) can hear all of these broadcasts.\n\nThe following command will capture 4 seconds of activity:\n\n tcpdump -v -i ${interface} host 224.0.0.18 -w /tmp/tcp.out -W 1 -G 4\n\n\nTo dump a readable version to both your screen and a text file:\n\ntcpdump -r /tmp/tcp.out -e -n -v | tee /tmp/tcp.txt\n\n\nYou’ll get output that looks like:\n\n\n  1  11:56:12.537357 52:54:00:00:00:00 &gt; 01:00:5e:00:00:12,\n   &gt;&gt; ethertype IPv4 (0x0800), length 60:\n   &gt;&gt; (tos 0xc0, ttl 255, id 22132, offset 0,\n   &gt;&gt; flags [none], proto VRRP (112), length 40)\n2  10.10.1.2 &gt; 224.0.0.18: vrrp 10.10.1.2 &gt; 224.0.0.18:\n   &gt;&gt; VRRPv2, Advertisement, vrid 201, prio 106,\n   &gt;&gt; authtype simple, intvl 1s, length 20,\n   &gt;&gt; addrs: 10.10.1.200 auth \"8 char key in clear text\"\n  \n\n\nYou can see that the auth strings the nodes use to authenticate each other aren’t secure at all, they’re broadcast in the clear, the insecure authentication has been removed from the vrrp specification, but keepalived still supports it. Given how insecure it is, and that other hosts would have no way of knowing the legitimate host if a rogue were advertising the same ip address, you may as well remove the authentication section from your configs."
        },
        {
          "id": "posts-how-to-use-an-ubuntu-cloud-image",
          "title": "How to Use an Ubuntu or Debian Cloud Image",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Linux, Debian and Ubuntu",
          "tags": "kvm",
          "url": "/posts/how-to-use-an-ubuntu-cloud-image/",
          "content": "You can find the current cloud images at: https://cloud-images.ubuntu.com and https://cloud.debian.org/images/cloud/. They’re built daily so that when you launch a host from a recent one there are no updates needed. There are images for many architectures and virtualization hosts KVM, Azure, LXD, VMware, VirtualBox, and Vagrant, there are also minimal builds. Once you’ve downloaded your image, use qemu-img to inspect it, possibly convert or rename it to qcow2. The images use the .img extension but internally are usually qcow2. Finally the images are very small, you’ll want to use qemu-img resize to specify the desired size when making a copy to use.\n\nThe official procedure involves creating a small ISO image (preseed) with information for the host you’re deploying, that will do things like set the password and networking configuration. For just bringing a simple vm in a simple environment it can be done more simply.\n\nBefore attempting to start the vm, you’ll need to create a root password, and you can set the hostname\n\nvirt-customize -a myhost.qcow2 \\\n   --root-password password:ubuntu \\   --hostname myhost\n\n\nThe utility can be used to install software and copy files, even run commands, into the vm at creation, set the timezone and add ssh keys.\n\nWhen the machine first boots networking will be disabled and the host ssh keys won’t have been created. The command ssh-keygen -A run in /etc/ssh will generate the keys. You could also do this with guestmount or virt-customize –run-command ‘cd /etc/ssh; ssh-keygen -A’ before starting the vm.\n\nBecause this is a headless machine you want to be able to attach the terminal at the command line and have to ensure that the serial console is working. I’ve never been able to get an ubuntu or debian vm to work with the serial console if it had been created with a graphical console, the cloud images are set up to work with the serial console, the grub parameters have already been added.\n\n# Example virt-install commandvirt-install \\--name myhost \\--ram 1024 \\--disk /var/lib/libvirt/images/myhost.qcow2,bus=sata \\--import \\--vcpus 1 \\--os-type linux \\--os-variant ubuntu20.04 \\--network bridge=br0,mac=54:54:54:54:54:54 \\--graphics none \\--console pty,target_type=serial \\--debug\n\n\nIf you omit ,mac=…. virt-install will generate a random mac. If you’re not using bridged networking, see the virt-install docs for specifying other network setups.\n\nOnce you login to the machine use lshw to find the generated name for the network interface and configure it. With Ubuntu cloud Jammy you have a lot of choices on how, You can use systemd-networkd or NetworkManager, or NetPlan or even go back to legacy interfaces. I recommend systemd-networkd over NetworkManager and NetPlan.\n\nIf you’re building multiple vms, once you know the interface name to use you can use guest-umount or guest-fish (virt-edit can’t create a file) or have virt-customize copy a file into the image before starting it.\n\n# /etc/systemd/network/20-wired.network# enp1s0 is a common assignment on KVM[Match]Name=enp1s0[Network]DHCP=yes\n\n\nYou’ll be able to connect from the command line with the virsh console.\n\nAnother issue to deal with is that while you resized the image externally, the filesystems won’t have been resized. On first login you’ll want to expand it with growpart /dev/sda 1, which will  allocate all the new space to that partition. Growpart is in cloud-guest-utils and should already be installed. You may also have to resize the file system, for any of the ext variants use resize2fs, for brfs and xfs the utilities are btrfs filesystem resize and xfs_growfs\n\nSteps to use the Image\n\n\n  Download\n  Make a copy for your VM\n  Resize the Image (qemu-img resize xxG)\n  Set the hostname and root password (virt-customize)\n  Create the ssh host key (virt-customize)\n  Optional use guestmount to install ssh keys, and enable networking before first boot.\n  Create the VM with virt-install.\n  Immediately after booting resize the filesystem with growpart to fill the expanded image.\n  Expand the filesystem with resize2fs or xfs_growfs.\n  Enable networking if you haven’t."
        },
        {
          "id": "posts-a-simple-script-for-listing-every-cron",
          "title": "A Simple Script for Listing Every Cron",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Linux",
          "tags": "",
          "url": "/posts/a-simple-script-for-listing-every-cron/",
          "content": "This script without arguments will give you all your local cron jobs, and with the arguments you would use to the ssh command for connecting as a remote user will run on the remote host, and prompt for the user’s sudo password. Since systemctl list-timers spews screenfuls of information I used jq to filter the output.\n\nListing of everycron:\n\n#!/bin/bash\n\n# list all the crons and systemd timers\n# for the systemd timers output is piped through jq\n# to produce a more readable list if you don't\n# expect jq on your systems use\n# systemctl list-timers by itself.\n\nfunction EC () {\n   echo -e \"\\n*** $1 ***\\n\"\n}\n\nif [ $# -eq 0 ]; then\n  if [ \"$EUID\" -ne 0 ]\n    then echo \"Must run as root for local cron listing\"\n    exit\n  fi\n  EC '/etc/cron*'\n  ls -b /etc/cron*\n  EC 'User Crons'\n  cd /var/spool/cron/crontabs ; grep -rvH '#' *\n  EC \"systemd timers\"\n  systemctl list-timers --output=json --no-pager | jq '.[].unit'\nelse\n  EC '/etc/cron*'\n  ssh \"$@\" -t 'ls -b /etc/cron*'\n  EC 'User Crons'\n  ssh \"$@\" -t \"sudo sh -c \\\"grep -rvH '#' /var/spool/cron/*\\\"\"\n  EC \"systemd timers\"\n  ssh \"$@\" -t \"sudo sh -c \\\"systemctl list-timers --output=json --no-pager | jq '.[].unit'\\\"\"\nfi\n\necho -e \"\\n*****\\n\""
        },
        {
          "id": "posts-working-with-splunk",
          "title": "Working with Splunk",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Database",
          "tags": "splunk, ansible",
          "url": "/posts/working-with-splunk/",
          "content": "The other articles in this series:\n\n\n  Splunk and Fluentd\n  Rejecting Fluend\n\n\nConfiguration File Layout\n\nIn the documentation $SPLUNKHOME refers to the location Splunk is installed. Configuration is in $SPLUNKHOME/etc. Every folder has a default and optionally local subfolder, configuration you create or modify should only go in local in case an upgrade overwrites default, local takes precedence. Most of the configuration you create will be in etc/system/local. When you use the gui to create configuration the app that writes the configuration file will create a file in its local folder. I’ve found that these configs can safely get moved to system/local, and that the gui will even preserve the file location if I modify it later. Moving configuration requires a stop before making the change. As you become familiar with the configuration you will rely on find and grep to help locate where the configuration you seek currently resides.\n\nSplunk has a lot of configuration files scattered about. If that weren’t bad enough, apps install their code into etc/apps, so when you try to put etc under version control you end up with a huge repository. I use a script to add just conf files. #!/bin/bash cd /opt/splunk/etc find . -name '*.conf' | xargs git add\n\nGetting Splunk\n\nDownloading Splunk requires registration of a Splunk account. You’ll then have to download each splunk and splunk forwarder package you need. This requires several clicks for each. If you want to download with the command line, you’ll need to cancel the download and cut and paste the wget script (which at least they put right on the page once you start the download).\n\n\nUpdate Feb 2025\n\nIt is no longer necessary to start and stop a download to see the wget command, it is now on the download pages.\n\n\nEvery time you need to update Splunk you’ll need to repeat this process.\n\nYou can use these links to shortcut in once you’ve registered with splunk:\n\nhttps://www.splunk.com/en_us/download/splunk-enterprise.html https://www.splunk.com/en_us/download/universal-forwarder.html\n\nPost Install Tasks\n\nweb.conf\n\nEvery time you access Splunk it will alert you about a new version, and new versions are released very frequently.\n\nAdd the following to $SPLUNKHOME/etc/system/local/web.conf\n\n[settings]\n# stop nuisance new release notices\nupdateCheckerBaseURL = 0\n# running behind a proxy\ntools.proxy.on = True\n\n\nThe second setting will be needed when you put Splunk behind a proxy. The free version has no access control, you need Apache or Nginx in front of it. The default port is 8000.\n\nLocal User, Accept License, and Switch the License\n\nThe splunk CLI requires an admin user to be created, and frequently requires logon by that user (the credential is cached for a limited time), Splunk have made it clear they will ever allow disabling this logon even for root. Every time you install or update Splunk, it is required to accept their license. Finally on your servers but not forwarders will always be installed with an enterprise trial license, using this license will result in your eventually being locked out of you data, switching to the free license is an important setup step to protect yourself.\n\nHere are some task steps for ansible to help do this:\n\nuser-seed.conf\n\n[user_info]\nUSERNAME = admin\nPASSWORD = ***secret***\n\n\n- name: seed admin user\n  ansible.builtin.template:\n    src: templates/user-seed.conf.j2\n    dest: \"/etc/system/local/user-seed.conf\"\n    owner: \"\"\n    group: \"\"\n- name: accept the license\n  ansible.builtin.shell:\n    cmd: \"\"\n    tags: [skip_ansible_lint]\n  loop:\n     - \"/bin/splunk stop\"\n     - \"/bin/splunk start --accept-license --answer-yes\"\n\n\n\n# On servers only\n\n- name: switch splunk license to free\n  ansible.builtin.lineinfile:\n    path: \"/etc/system/local/server.conf\"\n    state: present\n    regexp: '[license]'\n    insertafter: EOF\n    line: |\n      [license]\n      active_group = Free\n      #\n  when: splunk_server\n\n\n\nSetting up Indexes\n\nBy default Splunk creates a main index, in the simplest configuration can send everything there. Setting up indexes for different events can improve performance and allows finer grained control over event retention.\n\nThe default index settings are to keep everything forever. Splunk subdivides indexes into buckets, when it is time to remove old events, Splunk can only delete entire buckets not individual events, for smaller installations the defaults will never roll buckets. If for an index you only want to keep 90 days, you need Splunk to use a bucket for no more than 90 days. Making your buckets too small can hurt performance.\n\nThe values for aging buckets (which are not available in the gui) are: frozenTimePeriodInSecs and maxHotSpanSecs. There was some issue that could come up if those numbers were round, so add a small random integer to your values. freezing buckets is archiving or deleting them, the HotSpanSecs is how long a bucket can remain in use."
        },
        {
          "id": "posts-splunk-and-fluentd",
          "title": "Splunk and Fluentd",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Database",
          "tags": "fluentd, splunk",
          "url": "/posts/splunk-and-fluentd/",
          "content": "The Options\n\nSplunk is a commercial product with a limited free use tier. Fluentd/Fluent-Bit and TimeScaleDB are both Open Source projects that should fit together to make a great stack.\n\nThe other articles in this series:\n\n\n  Rejecting Fluend\n  Working with Splunk\n\n\nELK Elastic + Logstash + Kibana\n\nCon\n\nNo longer an Open Source Solution\n\nELK was previously available under the Apache license, but new versions are only on a proprietary license. While the new license may still permit more free use than Splunk, ELK can no longer be considered an Open Source solution.\n\nThe last time I looked seriously at this product was years ago, before the licensing changes. At that time I concluded that Splunk was a much better solution for the environment and worth paying extra for the licensing.\n\nThe core of the ELK stack, Elastic Search is a search engine,built on Apache’s Lucene Core, modern versions of Apache’s SOLR are very similar in capability and how data is queried, and as Apache projects they will always remain under Apache licensing.\n\nSplunk\n\nPro\n\nSplunk is well supported by the vendor and established in its market role.\n\n\n  It is a complete multi-platform solution that can handle just about any log type data.\n  It scales to massive environments.\n  It provides a graphical front end that makes it easy to begin searching.\n  It provides tools for graphing, monitoring, and alerting.\n\n\nCon\n\n\n  Not Free. Splunk is commercial software, limitations are imposed on the free version that make the free version suitable only for small environments.\n  When license limits are exceeded Splunk locks you out of your data for 30 days after you stop the violations.\n  The configuration is gnarly.\n  Uses a proprietary query language, which creates a steep learning curve for advancing beyond basic queries.\n\n\nIf a temporary situation such as an attack on your website or a mis-configuration issue cause splunk to lock out, support can remove the lock for paid versions. Splunk installs with a trial license, if you don’t convert it to the free license before it expires, you’ll find yourself locked out of your data and needing to purchase a license or wait for 30 days after switching to the free license to get back in.\n\nThe configuration structure follows the Java paradigm of having as many configuration files as possible. Splunk also places non-configuration such as installed Apps code into the configuration directory structure. Changes made in the GUI can be written anywhere.\n\nSplunk’s Query Language while suited for the product is very different from SQL, for administrators who only spend a limited amount of time with the product it will take much more time to develop queries than if the language had been based on SQL.\n\nFluentd, FluentBit, and TimeScaleDB\n\nPro\n\n\n  Fluentd supports a wide range of storage back ends including Postgres, MariaDB, Mongo, Splunk, ElasticSearch and SOLR.\n  TimeScaleDB is a modified Postgres designed for timescale data. The Query Language is Postgres, the performance is similar on large time series datasets to what Splunk and NoSQL databases offer.\n  The Query Language is SQL.\n  Fluentd/FluentBit and TimeScaleDB are available in Open Source and Paid versions.\n\n\nCon\n\n\n  A lot of Fluent’s plugins are abandoned, including the Postgres plugin!\n  You will need to configure everything by hand, unlike Splunk which installs to a state which is at least workable enough to begin experimenting with.\n  You’ll need to use a SQL tool to access your data and develop queries. To create dashboards and visualizations you’ll need to use a charting tool like Grafana or create your site with a library like ChartJS. This compares very poorly with Splunk’s Graphical interface. Plus Grafana is under AGPL, which I consider a commercial license.\n  You will likely use FluentBit on Forwarders, Fluentd on Indexers, which is two different products/ configurations.\n\n\nOverall Fluentd will require more work, and have an inferior end experience, with the one big advantage being that you don’t need to learn a specialized language to query the data."
        },
        {
          "id": "posts-rejecting-fluentd",
          "title": "Rejecting Fluentd",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Database, Linux",
          "tags": "fluentd, splunk",
          "url": "/posts/rejecting-fluentd/",
          "content": "My experience with Fluentd was a disappointment. Trying to use it in real  production will either result in living with a lot less or spending what you’re saving to build what’s missing.\n\nThe other articles in this series:\n\n\n  Splunk and Fluentd\n  Working with Splunk\n\n\nFirst architecturally, the Fluentd client, written in Ruby, is fairly big, which is fine for back-ends dedicated to processing events, but can become costly when embedded in micro-servers or containers. The solution is to use the light weight FluentBit for forwarding, it is written in Go but has a different configuration format.\n\nAs an aside the approach they’ve taken clearly shows that Perl would have been a better choice than Ruby in this specific case. Perl would permit the light client and the full server to share the same code and configuration format. Although the Perl to C Compiler is difficult to use and imposes limitations, a super minimal non-extendable tail and forward client could be compiled, and would probably be even smaller than the FluentBit client.\n\nAlthough the documentation was good, I found getting to a working configuration took quite a lot of effort. The configuration files quickly grew.\n\nIn its favor FluentD is a very plug-able architecture, and it is this flexibility that makes it attractive. You can write a custom parser for an unusual log, or find a plugin someone already wrote for it. There is a plugin for the backend storage to use every major SQL and NoSQL database you can think of. While there is a huge selection of Plugins I found that many of them are unmaintained, including important ones like the Postgres plugin!\n\nI found a bug in and re-wrote the documentation for the Postgres plugin, submitted a PR, and had a favorable review of my PR from another developer. The owner of the Plugin never responded, despite repeated nudges. If you do decide that Fluend + Postgres is the right solution for you, then you’ll need to install the Postgres Plugin from my fork, which is unlikely to be maintained going forward. You can install Fluentd from binary packages or RubyGems, and I recommend using Gems, either way you are breaking with process to manually install a GEM from GitHub. Since, I decided to stick with Splunk free Tier for my personal logging, and I don’t do a lot with Ruby,  I was not interested in trying to take over ownership of the Plugin."
        },
        {
          "id": "posts-pulsecast-update-ansible-role-published-to-galaxy",
          "title": "PulseCast Update, Ansible Role Published to Galaxy",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Audio and Video, Linux",
          "tags": "sbc, pi, pulse-audio, ansible",
          "url": "/posts/pulsecast-update-ansible-role-published-to-galaxy/",
          "content": "Updates to Linux Mint Ulyana (Focal based) on my workstation and Debian 10 (Buster) on the devices broke my setup.\n\nOn my workstation (Sender Side) Paprefs does not work and the sender needs to be configured through the configuration file. Pulseaudio is switching from deprecated gconf to  gsettings which may have something to do with this. Given that this is something that might get fixed upstream, the workaround is to not use paprefs at all and just configure through /etc/pulse.  I also found that in Mint the pavucontrol settings  to control which sink (soundcard or virtual/null sink) were not always respected by streams, and application settings to select the sound target were ignored.\n\nOn LXDE (on Debian 10.5) I did not have these problems, but on my LXDE system the only soundcard is the RTP Send Null Sink.\n\nFor the devices the upgrade from Stretch to Buster left some package errors and on a clean install the copied configuration settings did not work.\n\nAlthough it was not an issue on stretch and probably isn’t really part of the problem on buster, there is the –user systemd pulseaudio service and socket. These user services are set for interactive logins, not for running  as a service under a user account. These services are installed by the pulseaudio package, so simply deleting the unit files would not last past the next update. The –user flag to systemctl only affects the logged in user. The global switch to systemctl allows manipulation of user services globally. The disabled state still permits the service to be run either manually or at the request of another service, with masked the service can never start.\n\nsystemctl --global mask pulseaudio.socket &gt; systemctl --global mask pulseaudio.service\n\n\nIn the original presentation I only changed things in the pulse configs that needed to change, I became much more aggressive in editing /etc/pulse/default.pa while getting to a working state in buster.\n\nI’ve also published a role for ansible:\n\nhttps://galaxy.ansible.com/brainbuz/pulsecast\n\nor install with the command:\n\nansible-galaxy install brainbuz.pulsecast\n\nThe original slidedeck is still up https://techinfo.brainbuz.org/assets/pulsecast.pdf. Press P to toggle my presenter notes."
        },
        {
          "id": "posts-backup-with-restic-and-backblaze-b2",
          "title": "Backup with Restic and Backblaze B2",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Linux",
          "tags": "backup",
          "url": "/posts/backup-with-restic-and-backblaze-b2/",
          "content": "Restic with B2\n\nFor years I’ve been running rsync to keep a recent copy of my personal data on a VPS, which has saved my bacon a couple of times. But my VPS is not a cost effective destination for larger data.\n\nTo backup larger data like my music library I’ve relied on old hard drives and toasters (usb/esata devices that let you drop a hard drive in kind of like bread into a toaster) and rsyncing copies to multiple hosts in my house.\n\n\n\nThe master of my music library resides on one computer and all files get rsynced to my workstation which is where I listen from, this makes adding music a two step process download music or rip cd to master location then run rsync. Additional copies are made less often to other computers or old drives inserted in a toaster. I use the slave copy as an extra protection layer against accidental deletion, and enforcing that new music gets an immediate replica.\n\nGiven the abundance of inexpensive cloud storage options and that I would also like to be able to have point in time restore capability, I have several times considered new options.\n\nThe platform I chose is Backblaze B2 because Backblaze are the cool people who publish the dirt on hard drive reliability, and they’re a lot cheaper than AWS, Google or Azure. Storing a Terrabyte of data on their platform costs (2020) $5 a month, plus $10 for bandwidth if you ever need to download it.\n\nRestic is a de-duplicating backup solution that works with a wide assortment of cloud storage and local options such as SFTP and filesystem.\n\nThe Restic Mindset\n\nrepository\n\nLocations where restic will store your backup\n\nsnapshots\n\nEach backup you make is a snapshot. The first time you backup a target to a repository restic has to make a full copy. The next time you backup it creates a new snapshot which only reflects the difference.\n\nBackBlaze and B2\n\nB2 is very similar to S3, except that BackBlaze has two product lines: B2 and consumer and small business backups for Windows and Mac. To use B2 you create an account with Backblaze and create a Master application keys and optionally additional keys that will have more restricted access. You can download the B2 command line client. There is plenty of documentation on their site. It is written in Python so you will need to follow their instructions for installation.\n\nNote On Buckets\n\nB2 buckets have a handful of settings, for restic most of the defaults are correct, and we specified a private bucket in our create command. The settings are bucketinfo which is just whatever JSON you want for use by applications that use B2 as storage, lifecycle settings which default to keeping only the current version of a file, CORS rules which are about validating requests when B2 is used as web faced storage, and finally an option to snapshot a bucket (A B2 snapshot is something completely different than a Restic snapshot, it creates a zip file for download).\n\nInstalling Restic\n\nRestic distributes an official binary https://github.com/restic/restic/releases/latest, download it, bunzip it and move to /usr/bin/restic, then make sure it’s executable. Recent versions of the official binary include a selfupdate feature which makes it easy to update.\n\nRestic is now generally available from the official repositories of all of the major distributions, if your distribution has a recent version available this is probably preferable.\n\nIf you installed manually, execute the following to generate the man pages and shell completion\n\n  restic generate  --bash-completion /etc/bash_completion.d/restic\n  manpath # to get the paths available for manpages (manpages go in man3 under path)\n  restic generate --man /usr/local/man/man3\n  mandb # rebuild manpage index\n\n\nSetting Up a Test Backup Folder\n\nYou’ll need the ID of the Key and the Key. You’ll also need to create a bucket, you can do this through the b2 command or from the gui. To make it easier we’re going to use environment variables to hold information that will change. Because the default lifecycle is to keep all copies the command line requires passing json strings to change to keep only the latest. allPrivate specifies a private bucket.\n\n  export B2_ACCOUNT_ID=\"&lt;MY_KEY_ID&gt;\"\n  export B2_ACCOUNT_KEY=\"&lt;MY_SECRET_KEY&gt;\"\n\n  # will write .b2_account_info to current users home containing credentials.\n  b2 authorize-account $B2_ACCOUNT_ID $B2_ACCOUNT_KEY\n  # bucket names must be unique accross b2, use a prefix for yours\n  # underscores and spaces are not allowed but dashes are ok.\n  b2 create-bucket \\\n    --lifecycleRules '[{\"daysFromHidingToDeleting\": 1,\"fileNamePrefix\": \"\"}]' \\ myaccount-testing  \\\n    allPrivate\n  # b2 echos the ID of the bucket.\n\n  # you can use the name of the bucket you created here.\n  export RESTIC_REPOSITORY=\"b2:&lt;my-bucket&gt;\"\n  export RESTIC_PASSWORD_FILE=\"&lt;/path/to/&gt;restic-pw.txt\"\n\n  # generate a password with apg or however, create restic-pw.txt and enter it.\n  restic -r $RESTIC_REPOSITORY init\n  # provide the password when prompted\n  restic -r $RESTIC_REPOSITORY backup /path/to/test\n\n\nCreate a non-privileged account\n\nCreate a user (restic) to run your backups, create a private to them copy of the restic binary, use the setcap command to grant that binary unrestricted read on the system.\n\nFollow https://restic.readthedocs.io/en/latest/080_examples.html#backing-up-your-system-without-running-restic-as-root.\n\nIn restic’s home folder copy .b2_account_info and restic-pw.txt, confirm the file permissions.\n\nCreate your real Repositories\n\nsu or sudo to restic\n\n  export RESTIC_PASSWORD_FILE=\"&lt;/path/to/&gt;restic-pw.txt\"\n  /home/restic/bin/restic -r b2:&lt;my real bucket name&gt; init\n\n\nCreate a Backup Script\n\nNow create a backup script for it, it will be run by restic so put it in /home/restic.\n\n    #!/bin/bash\n\n    # Visually divide entries and date each new entry in the log.\n    fprint \"\\n***********************************\\n\\n\"\n    date\n    export B2_ACCOUNT_ID=\"&lt;MY_KEY_ID&gt;\"\n    export B2_ACCOUNT_KEY=\"&lt;MY_SECRET_KEY&gt;\"\n    export RESTIC_PASSWORD_FILE=\"&lt;/path/to/&gt;restic-pw.txt\"\n    export TARG=/what/to/backup\n    export REPO=b2:my-repo\n\n    fprint \"backing up $TARG to $REPO\\n\"\n\n    # prune if prune is passed as a script argument else do the backup\n    if [ $1 = 'prune' ]\n    then\n      /home/restic/bin/restic -r $REPO  forget \\\n        --keep-hourly 24 \\\n        --keep-daily 7 \\\n        --keep-weekly 5 \\\n        --keep-monthly 24 \\\n        --prune\n    else\n      /home/restic/bin/restic -r $REPO  backup $TARG\n    fi\n\n\nAn hourly backup will create 8760 snapshots per year, while they may each be tiny if there aren’t a lot of changes, you probably want to retain far fewer as shown in the example. The forget and prune operation to clean up do seem to take a bit of time and processing (they’re actually separate operations so the forget –prune command runs the two sequentially) so I’ve decided to do it once a day for my hourly backups by scheduling the basic cronjob every hour and the prune version at a different minute once a day.\n\nCreate A Cron Job\n\nBefore doing this create a new folder in /var/log for restic and grant restic ownership. The cronjob is going to redirect restic’s output to a file, and the log needs to be writable. In the event you need to see more information you can add -v to the restic command. The following runs the snapshot at 39 minutes past every hour and the pruning at 19 minutes after midnight.\n\n  # m h  dom mon dow   command\n  39 * * * * /home/restic/backup_home.sh &gt;&gt; /var/log/restic/home.log 2&gt;&amp;1\n  19 0 * * * /home/restic/backup_home.sh prune &gt;&gt; /var/log/restic/home.log 2&gt;&amp;1\n\n\nDon’t forget logrotate\n\n/etc/logrotate.d/restic\n/var/log/restic {\n  rotate 5\n  weekly\n  missingok\n  notifempty\n  nocompress\n}\n\n\nBackup Your Keys\n\nIf you lose the restic key you’ll never be able to access your backups again. So make some copies and secure them well.\n\nThe Music Folder\n\nRestic carries a fair amount of overhead for its comparisons and encryption and takes a while to work through large file sets. My music collection is about 100 times the combined size of all of the other data I’m backing up to B2. Because of the sheer size of my music folder and the fact that the data isn’t sensitive and doesn’t need encryption, I opted to back it up. I also changed the folder settings in the gui to keep old versions for 2 years.\n\nFor the backup script replace the if/else construct with the following, there is no need for the pruning job.\n\n  /usr/local/bin/b2 sync \\\n  --dryRun \\ # remove this line once you confirm your command is correct\n  --delete \\\n  --excludeRegex '^\\.' \\\n  --excludeDirRegex '\\.' \\\n  --replaceNewer \\\n  $TARG $REPO"
        },
        {
          "id": "posts-resolving-preferential-ballots-with-votecount",
          "title": "Resolving Preferential Ballots with Vote::Count",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Perl",
          "tags": "voting, votecount",
          "url": "/posts/resolving-preferential-ballots-with-votecount/",
          "content": "Preferential Ballots are a much better way of conducting public elections than the simple Plurality Ballot currently in use. I wrote a library to help resolve them.\n\nView the documentation on Metacpan, checkout the repository from GitHub."
        },
        {
          "id": "posts-getting-valid-certificates-for-development-environment",
          "title": "Getting Valid Certificates for Development Environment",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Linux, Web Servers",
          "tags": "certbot, certificates, dns",
          "url": "/posts/getting-valid-certificates-for-development-environment/",
          "content": "As more services strive for greater security, setting up labs where you don’t\ncare about the security (particularly certificate and ssl security), becomes more of a headache. Where once apon a time you had to figure out how to secure the thing, you have to work to turn off certificates or force trust of a self-signed certificate.\n\nIt becomes easier to just install a certificate, but certificates themselves take effort and usually cost money.\n\nMy solution is to use the free lets encrypt service and cheap domain names. Frequently alternative TLDs will run sales, so for example I just picked up brainbuz.xyz on sale for two years for $8US. Alternately if you don’t mind some extra typing a sub-domain of a domain you already own is free.\n\nSince brainbuz.xyz is only for my lab environment I only configured it on my private bind server. And then I just used one of my hosting accounts for public DNS.\n\nTo use lets encrypt you normally configure a web-server. You can configure a publicly accessible webserver and wildcard both the site and all domain hosts. You should be fine installing the certbot package in any recent linux distribution. However, if you want to use a wildcard cert you’ll need to use dns validation for Lets Encrypt to issue you one.\n\n\n  # command to use webserver for validation\n\n  certbot certonly –webroot -w /var/www/html -d brainbuz.xyz -d someservice.brainbuz.xyz\n\n\nIf you choose a wildcard cert I also recommend using pip3 to install certbot instead of your package manager. This is because wildcard support is only about a year old (as of this writing), and not all plugins are packaged or recent enough.\n\nI was able to use my api key and certbot’s appropriate dns plugin for my provider and issue the certificate.\n\nI had to create an certbot.ini to hold the credential (chmod 600 recommended).\n\n\n  # certbot.ini\n\n  dns_digitalocean_token = ******replace with yours******\n\n\n \n\n\n  # command to get the cert for brainbuz.xyz:\n\n  certbot certonly –dns-digitalocean –dns-digitalocean-credentials ~/certbot.ini -d *.brainbuz.xyz\n\n\nA certificate valid for 90 days was installed to  /etc/letsencrypt/live/brainbuz.xyz. Then I copied fullchain.pem and privkey.pem to the dev server and configured the service to use them. To renew the certificate just type certbot renew and copy the new certificate over again."
        },
        {
          "id": "posts-linux-kvm-talk",
          "title": "Linux KVM Talk",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Linux",
          "tags": "kvm, qemu",
          "url": "/posts/linux-kvm-talk/",
          "content": ""
        },
        {
          "id": "posts-kvm-virtualization-vmm-and-spice-on-ubuntu",
          "title": "KVM Virtualization, VMM, and Spice on Ubuntu",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Linux, Debian and Ubuntu",
          "tags": "kvm, spice",
          "url": "/posts/kvm-virtualization-vmm-and-spice-on-ubuntu/",
          "content": "The first issue you’re going to encounter is that the networking setup is a little complex and manual. My first recommendation is that on your host you’ll want to install ifupdown and resolvconf to switch away from NetPlan if your host is on NetPlan. There are plenty of other guides and articles that will help you install and configure Bridged Networking.\n\nAnother issue you’ll encounter is that video performance is pretty poor if you’re trying to crate a Virtual Desktop environment. The current workaround is to install extra video cards to give each virtual desktop a dedicated video card. I’ve read that improvements are in the works, but I’m still running Windows on hardware not KVM (I rarely need Windows anyway).\n\nKVM supports two virtual display drivers: VNC and Spice. The newer Spice Driver offers better performance than legacy VNC. However, Virtual Machine Manager (VMM) does not work work well with Spice on Debian based distros. I’ve gotten it to work occasionally, hunted the internet for fixes, when it doesn’t; by the way by occasionally I mean literally had it randomly working some of the time, not got it working and some update broke it. Meanwhile, the popular remote desktop viewers: Gnome’s Vinaigre, KDE’s Remote Viewer, and Remmina, all support Spice.\n\nTo use another viewer, open a virtual machine in VMM, the tab will open with the connection error. Click the information icon and scroll down to Display Spice to see what port it is on. Then you can open your preferred viewer to make the connection.\n\nBy default Spice uses unencrypted communications. If you have the host listen on all interfaces anyone will be able to connect to your spice console, and if you left a session logged in have immediately access to that session. When you create a new Virtual Machine you’ll want to confirm that it is listening only to local connections. I always review a newly created vm before starting it.\n\nThe first time you try this you might want to let your vm listen for spice on all interfaces (if your workstation and virtualization host are different machines). After that, you’ll want to use an ssh tunnel to the virtualization host to connect to the spice session, and make sure all of your guests are only listening for spice locally.\n\nIf you want to secure Spice without using ssh tunnels for security you’ll need to install a certificate and configure the spice server to use it, optionally you can also add client certificates. You can add a password to your virtual machines by specifying passwd=”****” in the&lt;graphics ….&gt; tag that assigns the port.\n\nAnother issue is that your hosts will be randomly assigned ports for spice viewer, beginning with the VNC port of 5900. To see what ports are in use virsh will pass commands through to qemu-monitor, which is a useful trick to remember any time you are trying to get information not directly available through documented virsh commands.\n\nvirsh qemu-monitor-command $host --hmp info spice\n\n\nAssigning the Spice Port\n\nWhile you can do this through VMM I prefer to edit the config file directly. On the virtualization host ‘virsh edit hostname’,\n\nFind the section:\n\n&lt;graphics type='spice' autoport='yes'&gt;   &lt;listen type='address'/&gt;&lt;/graphics&gt;\n\n\nand replace it:\n\n&lt;graphics type='spice' port='6001' autoport='no' listen='127.0.0.1'&gt;   &lt;listen type='address' address='127.0.0.1'/&gt;&lt;/graphics&gt;\n\n\nI start my manually numbered ports at 6000 to stay far away from the auto-assigned ports which begin at 5900.\n\nAfter saving the change you should be able to see the changed xml file in /etc/libvirt/qemu (qemu is an older emulator which still provides a lot of functionality to kvm, the two have become entwined, it is still possible to qemu without kvm).\n\nI’ve also written a short script to list the VMs and their spice listening ports from a shell on the hosting server. Save this in your path as lsspice.\n\nlsspice source at https://gist.github.com/brainbuz/3fb0139a2116ede60e687bf372379592"
        },
        {
          "id": "posts-aliasing-a-systemd-unit",
          "title": "Aliasing a SystemD Unit",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Configuration Management, Linux",
          "tags": "systemd",
          "url": "/posts/aliasing-a-systemd-unit/",
          "content": "I recently encountered a situation where different versions of a debian package were creating different service names for SystemD, wreaking havoc with deployment and management scripts.\n\nFortunately SystemD has nice support for aliasing its services. Of course the documentation could be better; fortunately someone else discovered and wrote about this and the duck found the clues.\n\n\n\nWhile there is an alias directive you can use within a unit file, SystemD supports symlinking an alias to a file in the directory containing the units.\n\nTo find the name of the unit file: systemctl list-unit-files\n\nThen find the unit file that was loaded by looking for the line Loaded: in the output of systemctl status _unit_file_name_\n\nThen just create a symbolic link to whatever name you would like to use and execute systemctl daemon-reload. systemctl status will show identical out for both your alias and the real unit name, and any systemctl or service commands may be directed at either name."
        },
        {
          "id": "posts-pulsecast-using-pulseaudio-to-create-a-whole-house-audio-system",
          "title": "PulseCast: Using PulseAudio to Create a Whole House Audio System",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Audio and Video, Linux",
          "tags": "sbc, pi, pulseaudio",
          "url": "/posts/pulsecast-using-pulseaudio-to-create-a-whole-house-audio-system/",
          "content": "Presentation to Philly Linux User’s Group 7 February 2018 See the complete slide deck with Presenters’ Notes here: https://techinfo.brainbuz.org/assets/pulsecast.pdf. Press P to toggle my presenter notes."
        },
        {
          "id": "posts-watts-to-lumens",
          "title": "Watts to Lumens",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Home Improvement",
          "tags": "lumens",
          "url": "/posts/watts-to-lumens/",
          "content": "Tungsten Incandescent Watt consumption was never a good measure of light output. With the changes in available lighting elements, it is pretty irrelevant but when your brain knows lighting in terms of Watts instead of Lumens, you have to figure out the equivalences every time you need to replace a place a bulb.\n\nFor my own convenience I’m putting a handy equivalence table right here.\n\n\n  15-watt incandescent bulb = 100 lumens\n  25-watt incandescent bulb = 300 lumens\n  40-watt incandescent bulb = 450 lumens\n  60-watt incandescent bulb = 800 lumens\n  75-watt incandescent bulb = 1050 lumens\n  100-watt incandescent bulb = 1600 lumens\n  150-watt incandescent bulb = 2650 lumens\n\n\nWatts to Lumens was always variable, halogen was always more efficient, while long life was less efficient than regular bulbs, and clear slightly more efficient than frosted."
        },
        {
          "id": "posts-owasp-crs-3-0-and-modsecurity-presentation-march-2017",
          "title": "OWASP CRS 3.0 and ModSecurity Presentation March 2017",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Web Servers, Security",
          "tags": "apache, owasp",
          "url": "/posts/owasp-crs-3-0-and-modsecurity-presentation-march-2017/",
          "content": "Slides for talk at March 2017 Philadelphia Linux User’s Group Meeting.\n\nThe slides are posted here. The live demo which shows more of the technical side is not archived.\n\nOWASP CRS 3.0 Presentation"
        },
        {
          "id": "posts-how-i-finally-got-my-car-radio-to-play-mp3-files-in-the-correct-order",
          "title": "How I finally got my car's radio to play mp3 files in the correct order.",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Audio and Video, Linux, Windows",
          "tags": "",
          "url": "/posts/how-i-finally-got-my-car-radio-to-play-mp3-files-in-the-correct-order/",
          "content": "When I bought a new Scion IQ, I was impressed with the full featured factory radio made by Pioneer that it came with. Except, I found that sometimes it wouldn’t sort the MP3’s in a folder correctly, and my files are both id3 tagged and prefixed with the track number with leading 0. I reported the bug to Toyota and had the dealer look at it. The dealer said that there was nothing wrong, that they even went so far as to play the stick on a computer (running Windows) and the files played in the same incorrect order. Their conclusion was that my files must be defective.\n\nThere is nothing wrong with my files. When you copy files to a Fat32 device on Windows, it will usually physically order the directory entries by alphanumeric sort. Even if it works with Windows Media Player, depending on the File System to write the files in the order the listener wants them played is the laziest possible solution.\n\nI recently discovered a utility that fixes this, its in the Debian and Ubuntu repositories so its easy to install. fatsort. You’ll need root privileges. Insert your thumb drive, use mount (if it automounted) to find the device it is on, then unmount the device, or try fdisk -l. fatsort -c %device, in seconds everything on your thumb drive has been sorted alphanumerically and your car or other mp3 player will play them in that order. Although there is an option to sort while mounted I had better results with the device unmounted.\n\n\nMeet Tyrion, my half-car."
        },
        {
          "id": "posts-how-to-use-pg_bulkload",
          "title": "deprecated article: How to use pg_bulkload",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Database",
          "tags": "",
          "url": "/posts/how-to-use-pg_bulkload/",
          "content": "update and deprecation 2018 &#8212; \n\nOn non-redhat systems pg_bulkload continues to develop new compilation difficulties, making it a nuisance to have to work through missing and misplaced dependencies each time I need to deploy. I&#8217;ve gone  ahead and written a replacement for my old utility: the new version can be downloaded from CPAN:  Pg::BulkLoad.  I  recommend using it instead of pg_bulkload. Pg::BulkLoad will require you to write a perl script or wrapper script, and is probably a little slower that pg_bulkload (I couldn&#8217;t compare because it wasn&#8217;t compiling when I decided to write my replacement). \n\n\n\n\n\n\nThe github project page for pg_bulkload is pg_bulkload.\n\n\n\nIn 2010 I wrote a bulk loader for Postgres because getting data in from somewhere else is one of the few places where postgres is dead last among popular SQL implementations, and I needed to get a lot of data loaded.\n\n\n\nMy Pg::BulkCopy program re-implemented the same basic idea as the Python program which was at the time the only game in town, which I unfortunately found completely unusable.\n\n\n\nThe problem is that the Postgres Copy command is extremely limited and very temperamental, and was the only way to move large amounts of data in or out (you might be able to use some of the newer features like foreign data wrappers as in import method which make this no longer true, but still mostly true). The solution was to break large data sets into smaller batches, load them, figure out what line failed, remove it from the batch and try again. Unless your data is pretty clean to begin with this was never going to be efficient but at least it worked.\n\n\n\npg_bulkload being written as an extension in C has a performance edge on my utility which could only use the COPY command via DBI. And since my program was one of the first things I ever put up on CPAN it really needed a complete re-write.\n\n\n\nWhile there are RPMs for pg_bulkload, binary packages for Debuntu and Arch aren&#8217;t available, which means that most of us need to install from source.\n\n\n\nBecome root or put sudo before every command.\n\n\n\nOn Debuntu install these packages: build-essential, git, postgresql-server-dev-X.X (where X.X is your Major.Minor version number), libpam0g-dev, libedit-dev, libselinux1-dev.\n\n\n\nmkdir /opt/pg_bulkload (or put it wherever you want).\n\n\n\ngit clone https://github.com/ossc-db/pg_bulkload.git\n\n\n\ncd /opt/pg_bulkload\nmake USE_PGXS=1\nmake USE_PGXS=1 install\nln -s /opt/pg_bulkload/bin/pg_bulkload \\ /usr/local/sbin/pg_bulkloadsudo -u postgres psql demodb &lt; /opt/pg_bulkload/lib/pg_bulkload.sql\n\n\n\nBecause pg_bulkload like the COPY command can only be executed by a superuser, I set up a group and directory for imports to take place in.\n\n\n\nmkdir /home/pgbulk\naddgroup pgbulk\nadduser pgbulk postgres\nadduser pgbulk myuser\nchgrp pgbulk /home/pgbulk\nchmod 770 /home/pgbulk\n\n\n\nCopy some data into the directory. The below is an example ctl file for a tab separated file with a header. pg_bulkload inherits COPY&#8217;s inability to comprehend a header row, and similarly requires the fields in your data to line up with the columns of your table. You can either deal with this when preparing the data or use the FILTER feature of pg_bulkload.\n\n\n\nINPUT = /home/pgbulk/demo_list.txt\nOUTPUT = demo\nLOGFILE = /home/pgbulk/demo.log\nPARSE_BADFILE = /home/pgbulk/bad.log\nDUPLICATE_BADFILE = /home/pgbulk/dupe.log\nSKIP = 1\nTYPE = CSV\nDELIMITER = \"\t\"\n\n\n\n# there needs to be a tab character between the quotes.\n\n\n\nFinally import the data (using sudo to run as the postgres account):sudo -u postgres pg_bulkload -d demodb demo.ctl\n\n\n\nMy utility Pg::BulkCopy is now deprecated and will eventually be removed from CPAN."
        },
        {
          "id": "posts-living-with-network-manager",
          "title": "Living With Network Manager",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Linux, Debian and Ubuntu",
          "tags": "gnome-network-manager, gnome, ubuntu, linux-mint",
          "url": "/posts/living-with-network-manager/",
          "content": "The gnome-network-manager is embedded in Ubuntu and Mint in the main Unity and Cinnamon desktop installs so deeply that you cannot get rid of it. When we want to manage our connection manually, instinctively we want to get rid of it, unfortunately a huge amount of work by the desktop environment developers has gone into insuring that this will either break networking or your desktop environment or both.\n\nAfter wasting many hours after upgrades and installs I’ve finally figured out how to live with it by editing its configuration files!\n\nUpdate 2019\n\nNetwork-Manager should respect a network/interfaces file, although its’ applets will show the network-manager network configuration rather than the real configuration.\n\nThe easiest way to disable network manager is with systemctl mask network-manager. Disable will not prevent a service that requests network-manager as a pre-requisite from loading it. Mask will cause systemd to not only disable it but to lie that it is running to services requesting it. The NM applet will show networking as not running which is better than showing the wrong configuration as running.\n\nYour connections are configured in files in: /etc/NetworkManager/system-connections\n\nAn example of a manual configuration file for a connection is:\n\n[802-3-ethernet]\nduplex=full\nmac-address=11:22:33:49:94:24\n\n\n[connection]\nid=Wired connection 1\nuuid=3a2e741c-9834-4784-bbbc-65209eba6fb5\ntype=802-3-ethernet\ndomain=\"brainbuz.org\"\n\n[ipv6]\nmethod=auto\n\n[ipv4]\nmethod=manual\ndns=172.1.1.2;172.3.1.3;\ndns-search=brainbuz.org;\naddress1=172.1.1.41/16,172.1.1.1\n\n\nYou can force network manager to reload with the command service network-manager restart (which may change to systemctl with the switch to systemd), but I’ve found that nm doesn’t always respect my changes if I make them while running so it is better to stop the service before editing the file and start it again after.\n\nIf you remove the Network Manager Applet and want to manually launch it the command is nm-applet"
        },
        {
          "id": "posts-formdiva-lightening-talk",
          "title": "Form::Diva Lightening Talk",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Web Development, Perl",
          "tags": "template-toolkit, mojolicious, mojo-template, form-diva",
          "url": "/posts/formdiva-lightening-talk/",
          "content": "Form::Diva is an HTML5 Form Element Generator.\n\nI have a slide deck from my lightening talk on April 11, 2015 at the DC Baltwash Perl Workshop: Click to view the Slide Deck.\n\nView the Documentation on MetaCPAN.\n\nRepository at GitHub"
        },
        {
          "id": "posts-talk-on-programming-and-databases-for-philadelphia-linux-users-group",
          "title": "Talk on Programming and Databases for Philadelphia Linux Users Group",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Database",
          "tags": "",
          "url": "/posts/talk-on-programming-and-databases-for-philadelphia-linux-users-group/",
          "content": "A review of the topic and then some ideas I had and an initial idea for a project I had for an anti-orm, creating an interface to data that carried the SQL paradigm but was easy to code. I never made much progress, but a few years later Sri and the Mojolicious project created Mojo::Pg which is a programmatically friendly wrapper for SQL. Mojo::Pg and Mojo::mysql are a joy to work with if you think about data in sql terms, and an undersold reason to use Perl."
        },
        {
          "id": "posts-hot-handles-a-new-solution-for-an-old-headache",
          "title": "Autoflush for Hot Handles in Perl.",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Perl",
          "tags": "",
          "url": "/posts/hot-handles-a-new-solution-for-an-old-headache/",
          "content": "By default Perl buffers writes to your file handles, most of the time this gives us a performance advantage. Sometimes it is a real headache like when we’re writing tests and want to test our application’s logging. Also if our program dies traumatically the last event may not get written or the buffer completely fails to flush.\n\nThe traditional ugly way of dealing with this was to write something like this.\n\n{  my $ofh = select LOG; $| = 1; select $ofh; }\n\nIf you do a web search on perl hot handles most of what you’ll find suggests this or using a module like IO:Handle.\n\n\n  \n    \n      Except that there is a much cleaner way of doing it: autoflush, that goes back to at least Perl 5.005, which I just came across in the Perl Documentation as I was revisiting the $\n      issue one more time.\n    \n  \n\n\nopen( my $FileHandle, '&gt;', $filename ); autoflush $FileHandle 1;\n\nThere is still no method I’ve discovered to selectively flush a file handle on demand. I’m using Log::Fast for logging and passing it a FileHandle so all I had to do was use an if statement to make the file hot before creating the logger when debugging.\n\nUsing either FileHandle or IO::Handle will get you some other features, the most relevant being to make autoflush a method, but if all you wanted was autoflush, you don’t need them.\n\nUpdate 2022, in recent Perls IO::File is loaded by open and all you need in your code is $FileHandle-&gt;autoflush(1); ."
        },
        {
          "id": "posts-catalyst-advent-2013-webservice-solr-errata",
          "title": "Catalyst Advent Calendar 2013: WebService Solr Errata",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Web Development, Perl",
          "tags": "catalyst, solr",
          "url": "/posts/catalyst-advent-2013-webservice-solr-errata/",
          "content": "Update August 2015:  I’ve switched over to building my own raw queries because WebService::Solr isn’t being maintained and its more effective to deal directly with the Solr API than through the abstracted interface. Search::Elasticsearch is superior to any of the Solr Modules – its a direct mapping of the Elasticsearch JSON API to Perl data structures. For my latest project I’ve been looking at Postgres Tri-Gram and Full Text indexing and ElasticSearch as well as considering sticking with Solr. \n\nThe tarball of the article and code is here   https://techinfo.brainbuz.org/articles/solrcattut.tgz"
        },
        {
          "id": "posts-alternate-perls-revisited",
          "title": "Alternate Perls Revisited",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Perl",
          "tags": "perl-build",
          "url": "/posts/alternate-perls-revisited/",
          "content": "https://techinfo.brainbuz.org/posts/alternate-perls/\n\nSince then my approach has refined.\n\nFirst, I’ve switched from using perl5 as the alternate location of my custom perl to setting my environment to use the correct Perl and env perl in the shebang line. I find this is more maintainable because scripts installed via package management usually specify the path to the system installed perl, but scripts that expect a custom perl can still try to use system perl. While the perl5 approach was very explicit, it became confusing when I moved scripts to new hosts that didn’t have perl5 linked, the env approach falls back to system perl when no alternate perl is set in the environment.\n\nThe other change is that I like the plenv and perl-build combination better than I like perlbrew.\n\nFor development I use plenv to manage my perls, and install perl-build as a plug-in to actually install them. For deployment I can use perl-build standalone.\n\nThe documentation for both plenv and perl-build is on cpan but the author recommends installing them both from github. I recommend following the directions for installing plenv to manage user copies of perl.\n\nPlenv has a few nice features that PerlBrew lacks, for me the killer is the ability to migrate modules, which actually reinstalls them. While this can remain a problem for some of the hard to build modules you might have installed from package management, it still saves you having to remember or document everything you installed.\n\nWhen it is time to deploy an alternate Perl, you only need perl-build, run this as the user that is going to manage the installation, likely either yourself or root. In cases where you’re going to only use one system-wide custom Perl I recommend not using plenv, because you won’t need the switching functionality. The nice thing about perl-build is it lets you specify exactly where to install Perl (even if when used in conjunction with plenv the latter wants to bury them in your home directory) so you can pick something nice and clean like /opt/perl or /opt/perl-x.xx.\n\nOnce you’ve installed your perl just add a line to the end of /etc/profile or bash.bashrc: source PATH=\"/opt/perl/bin:$PATH\". Then in your scripts use the shebang line: #!/usr/bin/env perl.\n\nAnother nice part of this approach is that it is extremely easy to tar /opt/perl and copy it to another system running the same architecture."
        },
        {
          "id": "posts-adventures-in-version-control-servers-featuring-gitolite-and-subversion",
          "title": "Adventures in Version Control Servers Featuring Gitolite and Subversion",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Version Control",
          "tags": "git",
          "url": "/posts/adventures-in-version-control-servers-featuring-gitolite-and-subversion/",
          "content": "Article Update 26 December 2013\n\nBoth viewgit and gitweb have been unable to display the contents of repositories modified with the most recent version of git.\n\nBitBucket offers unlimited private repositories and then charges by the user if you need to allow others access. Since I have numerous private repositories with few others who need access, I’ve been migrating to BitBucket. Both GitHub and BitBucket offer unlimited public repositories. Since, a lot of the Open Source community already use GitHub, as I work on my OpenSource projects I’m going to migrate them there to encourage collaboration and move their bug reporting from rtcpan, since github has a nicer ticket system.\n\nOriginal Article\n\nRecently I had the need to migrate two version Control Servers, one on Linux running Gitosis, the other on Windows running Subversion. The Windows Subversion Repository was being moved to an Ubuntu/12.04 Linux Server. The git server was simply moving from an older 10.04 server to a 12.04 server, unfortunately the Gitosis project has been abandoned so I had to switch to Gitolite.\n\nThe Gitolite Experience.\n\nThere are numerous articles on how to set up your own git server. Despite numerous how-to articles I spent an excessive amount of time trying to make this work because of a handful of things that none of the other authors explained.\n\nGitolite does not run as a service (daemon), it is a process triggered at ssh login. The config files are .hidden files in the git/gitolite user’s home directory. As a sysadmin I expected it to be a service and found it confusing that it had no global configuration and no running processes.\n\nUntil you’ve checked out and added keys to the gitolite-admin project access is a little screwy. The best advice here is to use the public key of the user who will first checkout gitolite-admin as the key specified to the setup program. However, if you need to manually add a key, you need to add the key to authorized keys (like any key being added for ssh), but follow the format of another entry – gitolite shell commands to wrap the session preceding the key. Once you commit the gitolite-admin project expect your manual changes to get wiped away.\n\nIf you’re having trouble with gitolite try to initiate a regular ssh session ‘ssh gitolite@targethost’, you should get some messages from gitolite, if you get a bash shell you did not properly prefix the key with the commands gitolite needs.\n\nOnce the new server was up and running migrating the repositories was trivial. Define the repositories in gitolite-admin/conf, then from each repository run the following  commands.\n\ngit remote rm origin\ngit remote add origin git@myserver:mynewrepo.git\ngit push origin master\n\n\nOther than these issues you’ll find a number of how to documents which will do a pretty good job of getting you going. I didn’t find the Gitolite documentation very helpful. One final word, I strongly prefer viewgit to gitweb, even though more of the how-to docs seem to point to the latter. Either install both and choose for yourself or pick viewgit.\n\nIt makes no sense repeating good instructions by other authors so I’ll link a few that were relevant at the writing of this document.\n\nThe Official Documentation:\n\nhttp://gitolite.com/gitolite/install.html\n\nhttp://blog.countableset.ch/2012/04/29/ubuntu-12-dot-04-installing-gitolite-and-gitweb/\n\nhttp://marian.schedenig.name/2012/07/29/setting-up-a-gitolite-server-in-ubuntu/\n\nInstalling ViewGit\n\nSince there is a little less documentation for viewgit, I’ll go over it here.\n\nAssuming that your gitolite repositories are at:\n\n/home/git/repositories\n\n\nPre-requisite packages (ubuntu): apache2 libapache2-mod-php5 php-geshi\n\ncd /var/www\nsudo git clone http://repo.or.cz/r/viewgit.git\nsudo chown -vR www-data:www-data viewgit\n\ncd /var/www/viewgit/inc\nsudo cp config.php localconfig.php\nsudo chown www-data:www-data localconfig.php\nsudo $EDITOR localconfig.php\n\n\nChange the directive for $conf[‘projects … # watch out for a comment in the middle of the assignment. To  $conf[‘projects_glob’] = array(‘/home/git/repositories/*.git’);\n\nNow edit the apache config for the virtual host that will serve viewgit.\n\nAlias /viewgit /var/www/viewgit\n\n\n&lt;Directory /var/www/viewgit/&gt;\n AuthType Digest\n AuthName \"whateveryoucallit\"\n AuthDigestDomain http://yourserver\n AuthDigestProvider file\n AuthUserFile /etc/secrets/.gitpasswords\n Require valid-user\n &lt;/Directory&gt;\n\n\nUse htdigest to create the password file.\n\nRestart apache, you should be challenged to get to your repo. If you don’t want to password protect the repository then just the alias directive should suffice.\n\nThe Subversion Experience\n\nI like git a lot better than subversion but I will observe that the Documentation for the subversion server was a lot better than for gitolite. To perform the migration required completely dumping and then importing the repository, which took a long time. With Gitolite, once the new server was up cloning back to the new server was much quicker since git uses both compression and deltas as opposed to subversion which repeats the whole repo over with each revision."
        },
        {
          "id": "posts-finding-the-right-speakers",
          "title": "Finding the Right Speakers",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Audio and Video",
          "tags": "",
          "url": "/posts/finding-the-right-speakers/",
          "content": "A couple of months ago I finally repaired them. Then I fixed the Baby Advent speakers that I bought new over 20 years ago, circa 1990. And purchased (and then repaired) a spare pare from Ebay.\n\nThe Aquarius IV is a very special design, it is a unique pillar, and a radiant design. A friend comparing it to my other speakers noticed immediately how differently the sound diffused in the bedroom where they live.\n\nAn employee at a HiFi store offered  $1100 for my pair, and the inflation adjusted price from new in 1977 to 2013 works out to about $1,600 a pair.\n\n\n\nCompared to these the small bookshelf speakers in my office started to sound gratingly bad, my old Baby Advents, especially after repair still sounded good, but were clearly not as good. However the Baby Advents are awesome as AV main speakers in a small room. They handle two things really well – human speech and the low end (except for ultra low), during movies the floor vibrates, there is no reason to ever consider a sub-woofer with them in the room. Coupled with the Yamaha Bookshelves ejected from my office my surround sound is pretty awesome.\n\nBack to the office, the second pair of Baby Advents would have been awkward to wallmount, and there is really no other place to put speakers. The Yamahas got away with not being on the wall by being small. So I figured that if adjusted for inflation the Baby Advents were about $375, I should be able to find a better speaker for less than that price that was wall-mountable.\n\nAt BestBuy nothing I listened to in the target price range sounded good at all, I finally went for Martin-Logan Motion IV speakers at $500 a pair because they came with wall mounting hardware and sounded better than any of the other small speakers. First they needed a few days burn in to be properly heard. To their credit their high end was excellent. Unfortunately with such tiny enclosures they couldn’t do justice to the low end or to human speech and after a week, back to Best Buy.\n\nNext I previewed a Pair of Klipsch Reference Monitor 61 speakers (about $550 and not easy to wall mount). I pointed out that the Baby Advents are weak in the Ultra Low, not so Klipsch and pretty much everything shook the floor. Additionally a DJ or Baseball announcer speaking had a springy artifact (in both cases the voice is typically run through some reverb). For speech the Advents were the better speaker plus they only shake the floor when asked to do so. If I were ignorant of price I would still choose the Baby Advent over the Klipsch 61, back they went.\n\n\n\nYou can find these on sale for about the inflation adjusted price of my Baby Advents. But for about $125 including shipping and the foam repair kit you’ll need, a used Baby Advent is a better overall speaker and a much better buy.\n\nOnce the Klipsches are returned more shopping and listening. The final result: Bowers and Wilkins 685 ($650), about twice the inflation adjusted price of the Baby Advents. The Abbey Road studio (where Dark Side of the Moon and Abbey Road were recorded) has always been a B&amp;W shop, and even in the entry level 600 series demonstrates why. Getting these mounted on the wall is worth an article of its own, but the important thing is I got them there and they compare well with the JBL speakers in my bedroom.\n\nWhile it was easy to criticize the Klipsch and Martin Logan, the B&amp;W 685s produce consistent quality across the spectrum, they don’t have strong characteristics to like or dislike. What they do show is that the JBL speakers in the next room have a very distinct character, when the JBLs were compared to everything else they were just so much better that they just set the standard. This is no surprise, the truly good speakers with designs similar to the Aquarius IV are expensive, as in $60,000 a pair. The Aquarius IV is the most attractive piece of furniture JBL ever produced and is far less sensitive to placement than a pair of 684 or 683 speakers (the floor standing members of the same series as my 685s), making it the ideal for a living room (if someone wants to gift me some vintage JBL studio monitors or another pair of B&amp;Ws they will move there, but right now putting my best speakers in a place where I don’t often listen seems dumb). The B&amp;W 600 Series in contrast are entry level Studio Monitors, and revealing weaknesses in other speakers (even when the other speaker still overall sounds better) is exactly what a Monitor should be able to do. When I was shopping, I listened to the 685 against the CM5 (the same size cabinet in the next series up), the difference in quality was distinct but only incremental, most notably a greater clarity at the high end. Next to the 685s the Baby Advents which sound pretty good by themselves show a distinct fuzziness at times, unlike the other current speakers I rejected, there is nothing that the Baby does better than the 685 which is just a better speaker.\n\n Wall Mounted B&amp;W 685 Speakers\n\n\n\n  \n    \n  \n  \n    \n  \n\n\nConclusion:\n\nThe Baby Advent remains the best value you will find in hi-fi speakers at about $125 per pair used, for an AV system in a small room they may as well be $1,000 a pair speakers for the way they perform.\n\nThe JBL Aquarius IV is an awesome niche speaker, looks like furniture, sound quality still competitive in the under $2,000 price range, ideal for a living room. They’re rare and somewhat of a collectors item, plus I’ve put a lot more than $30 into fixing them. For what you’ll ultimately end up spending a better value would be a pair of B&amp;W CM9s for about $3,000 if you could afford it, or B&amp;W 683/684 ($1,500/$1,1100) if you couldn’t.\n\nThe B&amp;W 685 was the least expensive satisfactory speaker I could find, if your budget is more than Baby (as in Advent) size but you’re not ready or able to spend a lot, the only question is which color 685 you’re going to get (they come in Black, White, and Cherry). If you need to get something up a wall like I did you’re not going to do better for anywhere near this price, and no setup with small speakers plus a sub-woofer will sound nearly as good for anything other than computer games.\n\nProducts Reviewed:\n\nMartin Logan Motion IV ($500)\n\nKlipsch RB-61 ($550 at High Fi dealers, $400 at Crutchfield)\n\nBowers and Wilkins 685 ($650)\n\nAdvent Baby II (inflation adjusted $375 new, used $40-$80 plus shipping and $30 for a foam repair kit).\n\nJBL Aquarius IV S-109 (inflation adjusted $1,600. Used $300-$500 in repairable condition plus shipping, plus parts)."
        },
        {
          "id": "posts-dac-and-soundcards",
          "title": "DAC and Soundcards",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Audio and Video",
          "tags": "",
          "url": "/posts/dac-and-soundcards/",
          "content": "Products Reviewed:\n\n\n  Generic Realtek Integrated Sound.\n  Yamaha RX-471 HDMI AV Reciever with unspecified Burr-Brown DAC.\n  Shiit Audio Modi USB DAC\n\n\nWhen I set up my Yamaha RX-471 I percieved that music playing sounded better than music playing through identical speakers on my other system (using a an old stereo reciever). The Yamaha AV Tuner has an integrated Burr-Brown DAC and probably overall a better amplifier than the old tuner on my main stereo.\n\n\n\nPutting an AV tuner on a purely stereo system seamed a little silly and advice from hi-fi sales in Cherry Hill and some research led to the conclusion that for the stereo a standalone DAC would be a better choice and permit the upgrade of each component individually. Due to the licensing issues surrounding HDMI most small manufacturers have gone with USB for their standalone DACs, Linux recognizes this as an external sound card. With an external DAC, you only have to replace the DAC to upgrade it, since amplifier technology doesn’t change very rapidly while DACs are still rapidly evolving. If my 20 year old receiver weren’t a very entry level model I probably wouldn’t be thinking about replacing it.\n\nThe Hifi store recommended a Dragonfly. My research showed that in its price range ($250) it was probably the best available product. But for about half of that the Schiit Modi was really well reviewed. I decided that the price of the Modi was so low that I could afford to not be happy with it. Besides everything I read about the technical aspects of DACS says that older Burr-Brown DACS were the best mass produced chips. Basically there are 2 approaches and a middle ground to building a Digital to Analog Converter. The cheap way is to use a single switch operating at a very high frequency, this approach, known as Sigma Delta, generates a lot of overhead noise which must be filtered out. The expensive way is known as ladder or R2R, and it uses a separate switch for each bit, so a 24 bit R2R chip would have 24 switches for each channel. R2R chips are much more expensive to manufacture than Delta Sigma. The chipmakers did two things: everything they could to improve their Delta Sigma designs and came up with a hybrid approach – having more than one switch but less than the full number, resulting in less noise to filter out, and these chips are still considerably cheaper than R2R chips. The unfortunate result is that TI no longer produces the Burr Brown designed R2R chips. This means that to get a true R2R DAC the chip either has to be custom fabricated or the manufacturer solders transistors onto a PCB board to make one by hand.\n\nThe question was whether I would be satisfied with a Sigma Delta DAC, and the modi seemed a great way to run the experiment – I could compare it to my sound card, and then have a point of reference (ears already used to listening to better than soundcard output) if and when I decided to buy a better DAC.\n\nInitially I felt the Modi was an improvement over the integrated Realtek Sound. The reviewers who didn’t like the Modi called it Harsh. The more I listened the harsher it sounded to the point that I used the graphic eq in my software to lower the high-end.\n\nI’m now back on the integrated sound, and need to plot my next step. One thought is to get a better sound card, which will also help when I rip vinyl. The other thought is to get a better DAC than the Modi, and the question is whether the Dragonfly will make me any happier. The Modi uses an AKM4393, the Dragonfly uses a more expensive Sabre ESS chip.\n\nAt the moment the RX-471 with the unspecified Burr-Brown chip is in the lead. I think the decision whether to move up one step on the soundcard front vs buying a DAC will come down to whether the Dragonfly or something near to it in price satisfies me, if not I’ll take the small cheap step. And I have a new Shiit Modi on sale for 30% off retail.\n\n\n\n\n\nUpdate 2019\n\nEventually I went with an better sound card for the dedicated stereo, I later picked up a Dragonfly on sale at a really low price. The Dragonfly is definitely better than the DAC I tried out a few years ago. Due to my whole home audio solution I’m using the HifiBerry sound-card in a Pi and am pretty satisfied with the result."
        },
        {
          "id": "posts-in-place-upgrade-of-linux-mint",
          "title": "In Place Upgrade of Linux Mint",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Linux, Debian and Ubuntu",
          "tags": "linux-mint, ubuntu",
          "url": "/posts/in-place-upgrade-of-linux-mint/",
          "content": "My workstation at work was still on Maya and I wanted to update it to Olivia. Mint recommends backing up your data and doing a fresh install, which is how I upgraded my home computers. Even though the install itself is much faster with a fresh install, the total time and work is more than an upgrade.\n\nOn Ubuntu there is a do-release-upgrade command which isn’t available on Mint, however, it will let you go forward only one release at a time or from one LTS to the next one. Using this technique should be equally valid for jumping further forward on Ubuntu as it is on Mint.\n\n\nIf you are a beginner at this and don’t know how to fix problems follow their advice. You can use dpkg -l to create a list of your installed packages and you probably want to back that up along with a tarball of your etc directory, and of course your home and any data directories elsewhere on your system. Then do a clean install, reformatting the drive, restore your data and use the backup of dpkg -l and etc to install applications and restore any configuration you had done.\n\nIf you can troubleshoot problems on your system, I can both report success and that it was faster than the two other upgrades I’ve done following the reinstall method.\n\nFor most of the following you will either need to be root or use sudo.\n\n1. Update your sources.\n\nsed -i ‘s/precise/raring/g’ /etc/apt/sources.list\n\nsed -i ‘s/maya/olivia/g’ /etc/apt/sources.list\n\nThen run apt-get update.\n\n2. The upgrade\n\napt-get dist-upgrade\n\n3. You will not have a working system\n\nThe dist upgrade will fail to update many of your packages due to dependencies processed out of order.\n\nrun apt-get -f upgrade to attempt to force packages to upgrade. Then go back to dist-upgrade. You may need to repeat this until there are no or very few un-upgraded packages.\n\n4. Remove leftover packages that didn’t upgrade. Then reinstall them if you need/want them.\n\n5. Reboot the system. As you troubleshoot, reboot as appropriate. This is not Windows, you don’t need to reboot every time you do something, but remember that the boot process sets up things that may not get set/unset the same way when you add/remove something.\n\n6. Deal with the unexpected.\n\nThere were X startup errors relating to VirtualBox, since I don’t use VirtualBox, I identified the installed related packages with dpkg -l, and then used apt-get purge to completely remove them and any configurations.\n\nMy workstation did not participate in NIS, but the installer had found NIS Servers and defaulted to NIS for authentication. Purging NIS set me back to my local accounts.\n\nI had a problem with MDM (Mint Display Manager), to fix it I removed it and then reinstalled it and some related packages:\n\napt-get install mdm\n\napt-get install –reinstall mint-meta-core mintinstall mint-mdm-themes mint-info-cinnamon\n\n7. That was it, cycling through the upgrade/dist-upgrade process a few times, and resolving three  glitches, took about 2 hours, and I had exactly the same system (only better) as a started with, where I would have spent most of the day following the recommended procedure."
        },
        {
          "id": "posts-linux-initialization-daemons",
          "title": "Linux Initialization Daemons",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Linux",
          "tags": "systemd",
          "url": "/posts/linux-initialization-daemons/",
          "content": "Slides from Presentation at Philadelphia Linux User’s Group February 1, 2012."
        },
        {
          "id": "posts-the-mysteries-of-apache-passwords-revealed",
          "title": "The Mysteries of Apache Passwords Revealed",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Web Servers, Perl",
          "tags": "apache",
          "url": "/posts/the-mysteries-of-apache-passwords-revealed/",
          "content": "If you’re reading this you probably already know that Apache has two primary authentication mechanisms: Basic and Digest. We have at times needs to manipulate these in Perl – either to directly work with a password file or for a database to be accessed by mod_auth_dbd (or one of its predecessors from older Apaches).\n\nUpdate Feb 2025. Digest is trivial to crack with current tools and shouldn’t be used.\n\nDigest has a single and very straightforward password format. Use the hexadecimal representation of the MD5 encryption of the the string “username:realm:password” where realm is your arbitrary digest authentication realm. In my article on deploying Catalyst with Starman I already provided this code to manage it:\n\nuse Digest::MD5 qw(md5 md5_hex md5_base64); my $authname = 'advent' ; my $user = 'billy' ; my $password = 'themountain' ; my $result = md5_hex( \"$user:$authname:$password\" ) ; say \"$user:$authname:$result\" ;\n\nBasic is the more complex (and should only be used over a secured connection). Modern versions of the htpasswd utility use an algorithm that the documentation describes as “the result of … an iterated (1,000 times) MD5 digest of various combinations of a random 32-bit salt and the password.” The only example of the code is the C source for the module itself, and I lack the expertise to create a Perl Module to wrap it to or re-implement the algorithm in Pure Perl. I would like to tell you that someone already did that, but I can’t find it. Not to despair, it eventually sunk in that Basic Authentication supports at least 3 other methods of encoding a password. PlainText and old-fashioned Crypt aren’t very secure if our passwords file falls into the wrong hands. The third, is based on SHA1, which is stronger than the algorithm used by Digest, and there are Perl Modules existing to do the hard stuff.\n\nUpdate Feb 2025. Digest::SHA1 does not support longer values, other modules on CPAN and Apache support SHA256 and SHA512.\n\nuse Digest::SHA1 qw(sha1 sha1_base64); my ( $realm, $user, $password ) = @ARGV ; my $sha1 = sha1_base64($password); say qq / User $user Password $password Result $sha1 ApacheSHA1 {SHA}$sha1= / ;\n\nThis is simple, and the new mod_auth_form module currently only works with basic authentication, so we need it."
        },
        {
          "id": "posts-deploying-catalyst-with-starman-and-apache",
          "title": "Deploying Catalyst with Starman and Apache",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Web Servers, Perl",
          "tags": "catalyst",
          "url": "/posts/deploying-catalyst-with-starman-and-apache/",
          "content": "Article Published in 2011 Catalyst Advent Calendar.\n\nThis article was published on December 16, 2011 as part of the Catalyst Advent Calendar. If you’re not familiar with this resource I highly recommend it as a great source of useful articles on the Catalyst MVC Framework.\n\nClick here to read the article\n\nErrata\n\n\n  Github Link. There is a link to the TestApp in the article which is wrong. Either get the Controller from the entry on this blog or https://github.com/brainbuz/Catalyst-Debugging-Controller.\n  NameVirtualHost *:80 is mistyped.\n  The Catalyst Config Directive for the proxy is using_frontend_proxy 1, if you copy and paste you will get the underscores but they are invisible when you read the screen.\n  NameVirtualHost is deprecated in Apache 2.4. It is no longer needed.\n  If you are locating your proxied catalyst application at a non-root location you’ll  need to add a base value into myapp.conf file.\n  If you are changing ports (one of the many reasons to hide starman behind apache is for apache to provide ssl) you’ll need to add these immediately prior to the rewrite rule: RequestHeader set X-Forwarded-Port 443 RequestHeader set X-Forwarded-Path     %{REQUEST_URI}s\n  Upstart currently does not support setuid, although the version in development for Precise Pangolin will have that option. To run your starman job as the ubuntu apache user: exec sudo -u www-data ….."
        },
        {
          "id": "posts-a-catalyst-controller-for-debugging",
          "title": "A Catalyst Controller for Debugging",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Web Development",
          "tags": "",
          "url": "/posts/a-catalyst-controller-for-debugging/",
          "content": "This is a single controller application with no dependencies beyond Catalyst\n5.90. It provides useful information about your application and environment. If you’re working on your server configuration, create a dummy app and drop this in as your root controller, see what is getting sent back to your server. If you’re having an issue with a specific application add it as a Controller in that application. If you point a form at it, it will dump the raw values being returned from that form.\n\nClick here to view the Source for the Debug Controller.\n\nThe single controller provides 4 pages:\n\n\n  (/) The Default Catalyst Page.\n  brief - A short page.\n  spew - a long page (Includes output of form).\n  form - dumps all form values from a submitted form.\n\n\nIf you create a new app just replace the Root.pm file with this one, except for the first Package line. If you are putting this in a Controller in your application, remove the lines that say \\_\\_PACKAGE\\_\\_-&gt;config(  and sub default Path {.\n\nWhen you deploy your application remember that spew dumps out the contents of your catalyst configuration file and that you may want protect that page or disable the controller in your production environment."
        },
        {
          "id": "posts-goodbye-exchange",
          "title": "Goodbye Exchange",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Messaging, Linux, Windows",
          "tags": "exchange",
          "url": "/posts/goodbye-exchange/",
          "content": "I believe the root cause to have been a replication failure in Active Directory. If this was a work or client situation I would have paid Microsoft for a support incident, but answers not forthcoming, I had to ask myself if running exchange for 1 user was worth the resources it consumed. The answer was NO. In fact, I’ve decided that running active directory for 1 user wasn’t making sense. When my career was supporting Windows maintaining an enterprise (windows) environment at home made sense, now that I’ve professionally shifted to the Unix world, hanging on to a Microsoft-Centric home infrastructure only made sense if it was low maintenance.\n\nFriday night with active directory fairly broken but Login, DNS and DHCP services functional it was time to find another solution.\n\nI saved my postfix configuration and installed dovecot. I installed my certificate to sasl.\n\nThen I installed squirrelmail, it worked right off the bat with my temporary configuration.\n\nIt took some playing around to merge the two configurations, which turned out to be a waste of time because the only thing that needed to change were the transport maps which had formerly specified my home exchange server (on port 58 because verizon blocks 25) to local:node.brainbuz.org.\n\nWith a couple of hours of playing my former relay host server had become a viable imap and webmail server.\n\nThere was one hiccough with Thunderbird being unable to move deleted items to “Trash”. Modifying the imap directive in dovecot.conf fixed it.\n\nprotocol imap {\n        mail_plugins = autocreate\n        }\n\n\nI have more clients now: outlook running on windows and thunderbird on linux, and squirrel as a third client replacing owa. The client that gave me the most trouble was alpine which I eventually got working as an imap client, if I find myself using it I’ll figure out how to make it stop requiring my password. Ironically because I had to set up Alpine as an imap client, I now know how to set Alpine up as an Exchange Client! Both support IMAP.\n\nThe last thing was rules with 4+ clients, rules have to run server side not client side. Activating Sieve for Dovecot was pretty easy, and getting avelseive activated in squirrelmail was  trivial. So squirrelmail is the client that rules my rules (because it is the client I can get to from anywhere).\n\nDone Mail Migrated from Exchange to Dovecot. And a couple of longterm Exchange issues, resolved (but not solved).\n\n\n  Proxying Outlook Web Access behind Apache. no longer an issue. I no longer have to use port 8443 as my work around.\n  Getting task scheduler to run the backup script written in powershell. No longer matters. I’m using maildir and it is just files. If I can’t backup up a bunch of files I’m in the wrong line of work, now the problem is how do I want to back them up!\n  Needing another certificate for my home mail server. OK this wasn’t worth $13 a year to fix, squirrelmail is using the certificate I already bought for brainbuz.org.\n\n\nNext up, the internal network and I think transfering DHCP and DNS to linux is first. I see an rsync based replication and backup strategy that will also lead to replacing my KineticD backup service."
        },
        {
          "id": "posts-apache-2-3-and-2-4-from-source",
          "title": "Apache from Source",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Web Servers, Linux",
          "tags": "build from source, apache",
          "url": "/posts/apache-2-3-and-2-4-from-source/",
          "content": ""
        },
        {
          "id": "posts-alternate-perls",
          "title": "Alternate Perls",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Perl",
          "tags": "cpanminus, perlbrew",
          "url": "/posts/alternate-perls/",
          "content": "It is a running issue of contention between the Perl Community and Linux/BSD Distributions that the system’s Perl tends to be out of date. For Distribution maintainers they have a huge amount of plumbing written in Perl and until recently major releases of perl happened about every 6 years, and now suddenly Perl has a new release every year and a 2 year shelf life. Distribution Plumbing and nearly everything else have different needs, and the solution as I see it is to have two Perls on your system. My preference would be for the distribution maintainers to install an ancient Perl (or possibly several) with an alternate name, and leave a current perl to answer #!/usr/bin/perl. Since I have absolutely no influence with anyone who maintains a major distribution, the other option is to ignore /usr/bin/perl entirely and put a current Perl at /usr/bin/perl5. I prefer this solution to the env perl approach many others have chosen. It does require that for your scripts you change the #! line by appending the 5, but it is much easier to append a 5 to the line than remember where your alternate perl is buried in the file system (for scripts run outside your user environment).\n\nTo begin, you’ll need to fire up your system CPAN for its first and only ever run. But even before that you may need to install a few things like gcc. On ubuntu it is as simple as (sudo) apt-get install build-essential. You also need to decide which user is going to maintain perlbrew, it is not unreasonable to choose root, it is also quite reasonable to choose a different user.\n\n(sudo) cpan App:perlbrew Say yes to installing dependincies 89 times. There are two tricks you can use to make this a little easier: install App::cpanminus first, it has fewer dependencies and can install perlbrew quietly, alternately Ubuntu Oneric has a perlbrew package, install it and then use perlbrew’s self update option.\n\nIf installation of perlbew fails, open up a cpan session and type\n\nforce notest install App::perlbrew.\n\nFrom this point onwards we consider ourselfs to have 2 perls, our real Perl (perl5) the system perl which we will never touch, but which is a dependency for packages needing perl we might have installed.\n\nexport PERLBREW_ROOT=/opt/perlbrew (/usr/perlbrew and /usr/local/perlbrew are also good choices) perlbrew init\n\n/opt/perlbrew/bin/perlbrew available /opt/perlbrew/bin/perlbrew install\n\nUsing Perl 5.14.1 as an example, type the following two lines\n\nln -s /opt/perlbrew/perls/perl-5.14.1/bin/perl /usr/bin/perl5\nln -s /opt/perlbrew/perls/perl-5.14.1/bin/perl5.14.1 /usr/bin/perl5.14.1\n\n\nThis will make Perl5.14.1 your default perl5, and it will give you a specific alias to perl5.14.1 if in the future you need to invoke it specifically you can type perl5.14.1 from anywhere without having to remember the full path.\n\nInstall cpanminus\n\nNow that you have your new perl installed, even though perlbrew has an option to do this I recommend using the cpan in your new perlbrew directory. ./cpan App::cpanminus, configure cpan and answer the prompts.\n\nWhen this is done go to /usr/bin, if you don’t have links there for cpan and cpanm copy them from your new perl’s directory, if you do, they’re just wrappers, edit the #! line to /usr/bin/perl5, so that if in the future you have a newer perl5, you won’t need to touch them.\n\nSpecial case: Padre.\n\nPadre requires a multi-threaded Perl, Perlbrew does not build a multi-threaded Perl by default. I update Padre about once a month. You have a two choices, you can tell Perlbrew to build a threaded Perl, or you can leave Padre on your system Perl, which means that you need to make sure that you update it in your system Perl, not your real Perl. If you don’t want a threaded perl a third option would be to build a special threaded perl for Padre (but as long as it works with system perl, why bother?).\n\n/opt/perlbrew/bin/perlbrew install perl-5.xx.x \\\n  -Dusethreads -Duselargefiles -Dcccdlflags=-fPIC \\\n  -Doptimize=-O2 -Duseshrplib -Dcf_by=\"Your_name_here\" \\\n  -Dcf_email=\"Your_email@here\""
        },
        {
          "id": "posts-openshot-and-friends-accessible-open-source-video-editing",
          "title": "OPENSHOT AND  FRIENDS: Accessible Open Source Video Editing",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Linux, Audio and Video",
          "tags": "",
          "url": "/posts/openshot-and-friends-accessible-open-source-video-editing/",
          "content": "Back in January 2009 when I gave the mini-presentation on Linux Multimedia, I said that there wasn’t a good Open Source choice for Video Editing. Coincidentally the first release of OpenShot was in January 2009.\n\nRead the presentation from  September 11, 2009 (PDF) OpenShot and Friends."
        },
        {
          "id": "posts-the-foolproof-guide-to-apache-virtual-host-configuration",
          "title": "The Foolproof Guide to Apache Virtual Host Configuration",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Web Servers",
          "tags": "apache",
          "url": "/posts/the-foolproof-guide-to-apache-virtual-host-configuration/",
          "content": "Having trouble setting up an Apache Virtual Host Configuration? This little article will set you straight.\n\nUpdate April 9, 2013 -- This article applies to apache 2.2, for apache 2.4 NameVirtualHost is deprecated, among other significant changes."
        },
        {
          "id": "posts-open-source-rescue-disks",
          "title": "Open Source Rescue Disks",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Linux, Windows",
          "tags": "",
          "url": "/posts/open-source-rescue-disks/",
          "content": "Presentation about Rescue Disks.\n\nPDF Document:  RescueDisksPresentation\n\nLinks to some of the Rescue Disks:\n\nPartedMagic\n\nTrinity Rescue Kit\n\nClonezilla"
        },
        {
          "id": "posts-exchange-2007-and-2010-relocate-ssl-port",
          "title": "Exchange 2007 and 2010 Relocate SSL Port",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Messaging, Windows",
          "tags": "exchange, iis",
          "url": "/posts/exchange-2007-and-2010-relocate-ssl-port/",
          "content": "This will show you how to relocate https://outlook_web_access to a port other than 443. Although I haven’t updated the article, the IIS changes for 2010 remain the same."
        },
        {
          "id": "posts-exchange-2007-standby-continuous-replication",
          "title": "Exchange 2007: Standby Continuous Replication",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Messaging, Windows",
          "tags": "exchange",
          "url": "/posts/exchange-2007-standby-continuous-replication/",
          "content": ""
        },
        {
          "id": "posts-linux-multimedia-pc",
          "title": "Linux Multimedia PC",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Linux, Audio and Video",
          "tags": "",
          "url": "/posts/linux-multimedia-pc/",
          "content": ""
        },
        {
          "id": "posts-landesk-packaging-case-study-adobe-acrobat-6-0x",
          "title": "LANDesk. Packaging Case Study: Adobe Acrobat 6.0x",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Configuration Management, Windows",
          "tags": "acrobat, landesk, package-management",
          "url": "/posts/landesk-packaging-case-study-adobe-acrobat-6-0x/",
          "content": ""
        },
        {
          "id": "posts-landesk-rebooting-a-computer-if-no-one-is-logged-in",
          "title": "LANDesk: Rebooting a Computer if No One is logged in",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Configuration Management, Windows",
          "tags": "landesk, wmi",
          "url": "/posts/landesk-rebooting-a-computer-if-no-one-is-logged-in/",
          "content": ""
        },
        {
          "id": "posts-restoring-windows-2000-system-state",
          "title": "Restoring Windows 2000 System State",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Windows",
          "tags": "",
          "url": "/posts/restoring-windows-2000-system-state/",
          "content": "This is an article is from 2002, it applies to an obsolete version of Windows, and while it might work with newer versions, there are some features of newer Windows versions that would change how you would want to do this."
        },
        {
          "id": "posts-expect-and-using-expect-with-perl",
          "title": "Expect and using Expect with PERL",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Perl",
          "tags": "expect",
          "url": "/posts/expect-and-using-expect-with-perl/",
          "content": "Presentation for Philadelphia PERLMongers"
        },
        {
          "id": "posts-restoring-active-directory-in-windows-2000",
          "title": "Restoring Active Directory in Windows 2000",
          "collection": {
            "label": "posts",
            "name": "Posts"
          },
          "categories": "Windows",
          "tags": "active-directory",
          "url": "/posts/restoring-active-directory-in-windows-2000/",
          "content": ""
        },
        {
          "id": "404",
          "title": "404",
          "collection": {
            "label": "pages",
            "name": "Posts"
          },
          "categories": "",
          "tags": "",
          "url": "/404",
          "content": "404\n\nPage Not Found :(\n\nThe requested page could not be found."
        },
        {
          "id": "410",
          "title": "410",
          "collection": {
            "label": "pages",
            "name": "Posts"
          },
          "categories": "",
          "tags": "",
          "url": "/410",
          "content": "410\n\nContent Removed :(\n\nThe most likely reason is that an article pointed to a link that no longer exists and was updated to this page instead."
        },
        {
          "id": "500",
          "title": "500",
          "collection": {
            "label": "pages",
            "name": "Posts"
          },
          "categories": "",
          "tags": "",
          "url": "/500",
          "content": "500\n\nInternal Server Error :(\n\nThe requested page could not be delivered."
        },
        {
          "id": "archive",
          "title": "Archive",
          "collection": {
            "label": "pages",
            "name": "Posts"
          },
          "categories": "",
          "tags": "",
          "url": "/archive/",
          "content": ""
        },
        {
          "id": "search",
          "title": "Search",
          "collection": {
            "label": "pages",
            "name": "Posts"
          },
          "categories": "",
          "tags": "",
          "url": "/search/",
          "content": ""
        },
        {
          "id": "",
          "title": "John Karr’s Techinfo",
          "collection": {
            "label": "data",
            "name": "Posts"
          },
          "categories": "",
          "tags": "",
          "url": "",
          "content": ""
        },
          {
            "id": "",
            "title": "Index",
            "categories": "",
            "tags": "",
            "url": "/",
            "content": ""
          },
          {
            "id": "page-2",
            "title": "Index (Page 2)",
            "categories": "",
            "tags": "",
            "url": "/page/2/",
            "content": ""
          },
          {
            "id": "page-3",
            "title": "Index (Page 3)",
            "categories": "",
            "tags": "",
            "url": "/page/3/",
            "content": ""
          },
          {
            "id": "page-4",
            "title": "Index (Page 4)",
            "categories": "",
            "tags": "",
            "url": "/page/4/",
            "content": ""
          },
          {
            "id": "page-5",
            "title": "Index (Page 5)",
            "categories": "",
            "tags": "",
            "url": "/page/5/",
            "content": ""
          },
          {
            "id": "page-6",
            "title": "Index (Page 6)",
            "categories": "",
            "tags": "",
            "url": "/page/6/",
            "content": ""
          },
          {
            "id": "page-7",
            "title": "Index (Page 7)",
            "categories": "",
            "tags": "",
            "url": "/page/7/",
            "content": ""
          }
]
