The shell

In the early 1990s, during the glory days of UNIX culture, being able to score a telnet window, a shell account on a UNIX server, was a big deal.

Back then, UNIX shell accounts came from a combination of borrowed credentials, academic accounts, and commercial providers, and they provided finger and talk and FTP and pine mail and usenet readers and IRC.

UNIX culture, what remains of it, has been subsumed into the Linux server culture, which itself is being eaten by cloud and devops. But one thing remains, for those who want it: the shell. I remember deploying a Linux server 20 years ago — it was non-trivial and required the re-purposing of Wintel metal. That choice remains (a tiny netbook running Linux is like having a tiny mainframe with its own UPS and console), but other choices, like $5 per month cloud servers and VMware Player guest instances and Raspberry Pi servers, make the shell available to anyone who wants it.

We do not realize just how lucky we are.

De-clouding: hosting virtual servers on-premises to reduce hosting telecom burn

Enterprises that have a significant monthly cloud bill should business-case an approach that uses the cloud for public-facing assets but considers on-premises virtual hosting with no incremental telecom cost. A local deployment of a set of virtual servers can be done in a Linux or Windows context. Depending on hardware, platform, and workload, an on-premises server should be able to host between 1 and 7 virtual server guests.

Most server deployments are now virtual. Aside from edge cases that rely on raw horsepower or low latency, like file servers and voice over IP servers, baremetal rarely wins the business case.

The flexibility and benefits of virtualization have led to practices and tools that require multiple versions of a server image, for devops or redundancy, and powerful automation tools that can script the creation, orchestration, and destruction of virtual servers as needed.

There are also network-effect reasons why being left with an AWS account by a web developer is not such a bad thing for a small business.

However, even enterprises in the 20 employee range will accumulate a number of server processes, most hosted on public cloud services, which will each incur recurring monthly fees. Some of those enterprises would save money by bringing the processes on-premises and in-house.

There are business cases that make sense for the cloud. The web site should live in the cloud. As a related example, though, the web site’s backup can be hosted on a local server connected to the on-premises DSL line.

For Linux workloads, virt-manager with KVM and Qemu is a good combination — Boxes leverages this toolset as well.
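As a minimal sketch of what that toolset looks like in practice, the following creates a KVM/QEMU guest from the command line. The guest name, sizes, and ISO path are illustrative assumptions, not a prescription.

```shell
# Install the KVM/QEMU toolset and start the libvirt daemon
sudo dnf install virt-install virt-manager qemu-kvm libvirt
sudo systemctl enable --now libvirtd

# Create a guest (name, memory, disk size, and ISO path are assumptions)
sudo virt-install \
  --name backoffice-01 \
  --memory 2048 --vcpus 2 \
  --disk size=20 \
  --cdrom /var/lib/libvirt/images/Fedora-Server.iso \
  --os-variant fedora29
```

Once created, the same guest can be managed graphically in virt-manager or Boxes.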

A hybrid approach typically keeps the web marketing server, along with email and calendar services, on public clouds, while backoffice, ERP, database, and backup operations run in virtual servers hosted on on-premises equipment, at a lower cost than the equivalent service from an asset hosted externally by a vendor. Of course, this comes with the responsibility for an offsite backup and disaster recovery plan. Start with 2 hard drives, and take one offsite each week. Then get fancier, maybe with another on-premises server at another campus.

Systems can even be hybrid, with a public-facing website on a cloud service mounting cheap assets stored on an on-premises server.
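One way to wire up such a hybrid is an SSH filesystem mount from the cloud web server back to the on-premises box, ideally over a VPN. This is a sketch; the hostname, user, and paths are hypothetical.

```shell
# Mount an on-premises asset directory on the cloud web server over SSH.
# Hostname, user, and paths are hypothetical.
sudo dnf install sshfs
sshfs assets@onprem.example.com:/srv/assets /var/www/html/assets \
  -o reconnect,ro
```

The `reconnect` option makes the mount tolerate brief network interruptions; `ro` keeps the public-facing side read-only.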

For Windows, some shops use VMware quite effectively, especially with its server and management tools. However, I would suggest a strong look at Windows Hyper-V, which does just as well hosting Linux guests as it does Windows guests, and fits nicely into a corporate environment.

In the same big-company theme, the Azure AD cloud deserves a look. Microsoft has shown a vision of the future in which the cloud acts to orchestrate a mix of cloud and on-premises assets with a common active directory.

By considering where the public cloud adds value to a server deployment, and finding savings by bringing some virtual server workloads back on-premises and in-house, enterprises can achieve significant savings that can be re-purposed to other priorities.

(Almost) off the grid

Sitting on the deck in front of a lake in the Laurentians north of Montreal, I find myself almost off the grid. There is no cell phone coverage for about 20KM before the driveway, so no 3G wifi hotspot. A rural data wireless provider with antennas on mountaintops usually provides a decent wifi connection, but a power surge destroyed the radio base station, and here I find myself reduced to my last 2 lines of communication: satellite TV and an old-school voice landline.

Yes, I did make a dialup connection over the landline last week: it was 24Kbps, slow even by dialup standards, and modern web pages, even those optimized for lower-speed connections like the HTML version of Gmail, are completely unusable.

Colleagues are covering for technical support responsibilities in civilization, and my brother will drive me this afternoon to the community center, 7KM away. Until then, I find myself essentially cut off: no WhatsApp texts, no checking for the latest headlines, weather, or trivia, no streaming audio for my AirPods.

So here I am typing on a computer in offline mode, to be pasted to the Internet later today. This reminds me of a project I have put off several times: a complete offline web development environment. Hosting a LAMP server is trivially easy, whether on the baremetal of a Linux laptop, or as a vm guest on a Windows laptop, but one must take precautions to be productive offline: I need to install a local copy of the documentation, and I have found some interface code that must be redone to invoke local copies of JavaScript libraries, rather than pulling them in from remote locations at run time.
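Fixing the remote-library problem is mostly a matter of fetching local copies and pointing the script tags at them. A sketch, with jQuery as the example library (the version and URL are examples; use whatever your pages actually reference):

```shell
# Keep a local copy of a JavaScript library for offline development.
# The jQuery version/URL is an example.
mkdir -p ~/www/js
curl -o ~/www/js/jquery.min.js https://code.jquery.com/jquery-3.6.0.min.js

# Then change pages from the CDN reference:
#   <script src="https://code.jquery.com/jquery-3.6.0.min.js"></script>
# to the local copy:
#   <script src="/js/jquery.min.js"></script>
```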

People tell me that I will benefit from being “unplugged,” that it will relax me. They are mistaken, although I will survive until Monday morning when I return to the city, sustained this afternoon by a half hour of the community center’s free wifi. The rural data wireless base station will be replaced at some point, I hope soon – I will be back in the city on Monday morning, but my Mom spends the summer up here – I hope for her that she will soon get wifi for her iPad.

By the way, here at the community center: wifi is awesome, never take it for granted.

Using dialup at the cottage due to a rural wireless outage

Back from a weekend at the family cottage. Barbecue in front of the lake, good weather, my brother’s birthday party.

The family cottage is outside cell phone range. Normally, the cottage has wifi from a rural wireless provider, a satellite TV link, and a landline.
The rural data wireless was out. Using a US Robotics USB 56K modem, I was able to make a 24Kbps connection, which is a low speed even by dialup standards. This poor performance is due to the analog exchange and noise on a rural line: in the city one would expect 50Kbps. There are “light” versions of sites like Gmail that load faster on slower connections, but even the simplest requests would often time out and require a reload.

It was fortuitous that I had left a US Robotics USB modem in the cottage 10 years ago.

I was able to make a dialup connection with my Windows 10 laptop, but the experience was not as good as with previous versions: sharing the connection via mobile hotspot did not work, and using connection sharing via the wifi did not trigger a wizard with ad-hoc networking set up on the wifi adapter, things that worked well in prior versions of Windows, as recently as Windows 8.1.

At the community center 7KM away, near the dépanneur, there is free wifi and a picnic table. On my Linux laptop, I was able to apt install wvdial on the free wifi. wvdialconf autodetected the modem and the man page made it easy to create a dialup file /etc/wvdial.conf (even to find the option for pulse dialing: “ATDP”):

[Dialer Defaults]
Init1 = ATZ
Init2 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
Modem Type = USB Modem
Phone = xxxxxxxxxx
ISDN = 0
Password = xxxxxxxx
New PPPD = yes
Username = xxxxxxxx
Modem = /dev/ttyACM0
Baud = 33600
Dial command = ATDP

wvdial was able to make a 24Kbps ppp connection. I gained some insights, and learned enough to complete a dialup wifi server, based on wvdial, hostapd and dnsmasq. Given the limited speed, there is little point in deploying a dialup server. I will, however, continue to maintain the ability to connect as a dialup workstation, from both my windows and linux laptops.
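For the record, the server side of that experiment is roughly two config fragments: hostapd to run the access point, and dnsmasq to hand out addresses, with NAT toward the ppp link. This is a sketch; interface names, SSID, passphrase, and address range are all assumptions.

```
# /etc/hostapd/hostapd.conf (minimal sketch; SSID and passphrase assumed)
interface=wlan0
ssid=cottage-dialup
hw_mode=g
channel=6
wpa=2
wpa_passphrase=changeme123
wpa_key_mgmt=WPA-PSK

# /etc/dnsmasq.conf (hand out addresses on the wifi side)
interface=wlan0
dhcp-range=192.168.50.10,192.168.50.50,12h

# plus NAT from the wifi side out the ppp link:
#   iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
```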

Modern websites and their I/O demands make dialup almost useless. There may be edge cases, especially involving security or remote telemetry, but for consumer use, I suggest driving to the free wifi at the community center.

The 2 simplest devices in my home

The 2 machines in my home that I like best are simple and not smart. Received as gifts: a new convection toaster oven that goes tick-tick-tick, and a bluetooth soda can speaker with very little intelligence.

Sony SRS-XB10 portable wireless speaker with Bluetooth

This speaker can pair with a phone, iPad, or a computer. It can play audio. It can act as a speakerphone. It is small, rechargeable, wireless, and sounds bigger than it is. It does not have AI, a personal assistant, skills, and does not tie into any home automation. It is just a speaker.

Black and Decker TO1950SBD convection toaster oven

This toaster oven is convection, which means that it has a fan that blows the air around while baking. It is good at baking croissants. It has a temperature control, and a timer. With a spring. That goes tick-tick-tick.

A picture of croissants baked in the toaster oven

Connecting to a Checkpoint VPN from Fedora 29

One of the systems I maintain requires access to a Checkpoint VPN. Until recently, this has meant that I needed a Windows laptop or vm when I traveled. The recipe to connect to the vpn using a command line client called “snx” seems obvious, but is not. Here is how I was able to connect a Fedora 29 Linux machine with version 800007075 of the snx command line client.

Install the Oracle Java JRE

Download Linux x64 RPM:

Use rpm at the command line instead of using the software installer gui.

(change version number as needed)

rpm -ivh jre-8u191-linux-x64.rpm

dnf install pkgconf-pkg-config

dnf install libcanberra-gtk2.i686

dnf install /lib/

According to this link:

versions of the snx command line client > 800007075 are not compatible with recent Linux kernels. So we will obtain a copy of that specific version of the SNX command line client:

[root@server etc]# cd ~desktop/tmp/
[root@server tmp]# wget -O
–2018-12-30 07:34:08–
Resolving (…
Connecting to (||:443… connected.
HTTP request sent, awaiting response… 200 OK
Length: 973618 (951K) [application/x-sh]
Saving to: ‘’

100%[====================>] 950.80K 378KB/s in 2.5s

2018-12-30 07:34:26 (378 KB/s) – ‘’ saved [973618/973618]

and now we make the script executable:

[root@server tmp]# chmod 755

run the installation script:

[root@server tmp]# ./
Installation successful

test a command line connection (use values appropriate for your username and vpnservername)

[root@server tmp]# snx -s vpnservername -u
Check Point’s Linux SNX
build 800007075
Please enter your password:
SNX authentication:
Please confirm the connection to gateway: *
Do you accept? [y]es/[N]o:
SNX – connected.

Session parameters:
Office Mode IP : x.x.x.x
DNS Server : x.x.x.x
Secondary DNS Server: x.x.x.x
DNS Suffix :
Timeout : 12 hours

Some useful links:

Installing Fedora 29 on the Raspberry Pi 3 B+


The recent release of Fedora 29 for the Raspberry Pi means that the hobbyist hardware platform can finally be considered as a viable alternative to Windows-on-Intel (“Wintel”) hardware to host Linux server applications. Although too slow to operate as a useful Graphical User Interface (“GUI”) desktop, the Pi does a good job running a text-only web, file, and vpn server. This makes Fedora on Pi perfect for serving video files at home, or as a vpn server for a small home office or satellite office.

Actual practical HOWTO section

This section contains actual procedural information. After the practical information, there is a “Rant section.”

Choosing the version you will install

For now, I suggest you avoid the GUI desktop altogether and stick with a text-only web, file, and vpn server based on Fedora 29 Minimal.

Downloading the Fedora 29 image file

Download the file for Fedora Minimal:

How this is different from a Wintel install

A Wintel build consists of boot media, either a usb drive or a dvd drive, which contains a bootable image that includes an installer used to format another device, typically a hard drive, with boot and other partitions.

The Raspberry Pi Fedora installer consists of a raw disk image you will write to a micro-SD card. In the Pi world, everything boots from a FAT32 UEFI partition on a micro-SD card.

Understanding the Fedora 29 image file

The Fedora 29 image contains a FAT32 partition with an implementation of UEFI, and several ext4 partitions.

Decompressing the file containing the Fedora 29 image file

In Windows, use WinRAR to decompress the image file.

In Linux:


xzcat Fedora-IMAGE-NAME.raw.xz | sudo dd status=progress bs=4M of=/dev/XXX # Location of your media (will be sdX or mmcblkX depending on hardware)

Formatting the micro-SD card with the image



(Decompression and the image write to media are part of the same operation above, under “Decompressing the file containing the Fedora 29 image file”.)

Using a partition tool to expand the / partition on the micro-SD card



gparted /dev/XXX
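If you prefer to do this non-interactively from the command line, growpart plus resize2fs is one alternative. The device and partition number here are assumptions; check yours with `lsblk` first.

```shell
# Non-interactive alternative to gparted (partition number 4 is an
# assumption; check with `lsblk` first).
sudo dnf install cloud-utils-growpart
sudo growpart /dev/mmcblk0 4      # grow the partition to fill the card
sudo resize2fs /dev/mmcblk0p4     # grow the ext4 filesystem to match
```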

Following the text setup wizard

On first boot, at the bottom of the screen, you will see a set of questions regarding initial system username, password, and other settings. Follow the wizard and make sure you create a root password. The system will then boot.

Rebooting into a standard machine

You will boot into a standard Linux login screen. Login as root.

Doing standard housekeeping and a standard build

From this point on in the build, the machine feels like a “normal” Linux box.

Using nmcli to set a static ip address
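A minimal sketch of the nmcli commands; the connection name, addresses, and gateway are assumptions (list yours with `nmcli con show`).

```shell
# Static IPv4 with nmcli. Connection name and addresses are assumptions.
nmcli con mod "Wired connection 1" \
  ipv4.method manual \
  ipv4.addresses 192.168.1.50/24 \
  ipv4.gateway 192.168.1.1 \
  ipv4.dns 192.168.1.1
nmcli con up "Wired connection 1"
```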

Using dnf to install nano, rsync, and net-tools

A lot of things that you take for granted, like nano, rsync, and ifconfig (part of net-tools), do not exist until you add them with dnf.
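The command itself:

```shell
# Pull in the basics that Fedora Minimal leaves out
dnf install nano rsync net-tools
```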

Editing the selinux config file
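One common approach (whether to relax SELinux is a judgment call) is to set it to permissive in the config file; a sketch:

```shell
# Set SELinux to permissive at boot (edit takes effect after reboot);
# setenforce 0 changes the running system immediately.
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config
setenforce 0
```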

Modifying or disabling firewalld

systemctl stop firewalld; systemctl disable firewalld

(some people like firewalls; I think they are lazy – just turn off unneeded ports!)

Enabling an SSHD server

systemctl start sshd; systemctl enable sshd

Adding the rpmfusion repos
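The rpmfusion project's documented install command works as-is on aarch64:

```shell
# Add the rpmfusion free and nonfree repos for the running Fedora release
dnf install \
  https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
  https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
```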

dnf clean all; dnf update


Doing a standard LAMP build

(notes: dnf install mariadb mariadb-server instead of dnf install mysql mysql-server, dnf install php-mysqlnd instead of dnf install php-mysql)
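Putting those substitutions together, a minimal Fedora LAMP sequence looks like this (a sketch of the standard build, not an exhaustive one):

```shell
# Web server, database, and PHP with the MariaDB/php-mysqlnd substitutions
dnf install httpd mariadb mariadb-server php php-mysqlnd
systemctl enable --now httpd mariadb
# Set the database root password and remove test artifacts
mysql_secure_installation
```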

Installing a free cert with Let’s Encrypt
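With Apache in place, certbot's Apache plugin handles the cert; the domain below is a placeholder.

```shell
# Obtain and install a Let's Encrypt cert for Apache
# (domain is a placeholder)
dnf install certbot python3-certbot-apache
certbot --apache -d www.example.com
```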

Installing Nextcloud

Normally, I would just say, dnf install nextcloud and watch yum dependencies in action. Unfortunately, there is a missing dependency for php-composer(ircmaxell/password-compat) which breaks nextcloud in yum/dnf. This is not a good thing, however it is not specific to the Pi, it is a Fedora 29/Nextcloud thing (and it would appear that Owncloud and Nextcloud do not get a lot of maintainer love in the yum repos).

As it happens, I was able to deploy nextcloud 2 weeks ago on a cloud server using this script, and it worked just as well on this installation of Nextcloud on Fedora on the Pi:

Doing a standard Samba build

Optimizing Samba file shares, especially for MacOS Finder clients:
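The usual approach for Mac clients is the vfs_fruit module; a sketch of an smb.conf fragment (the share name and path are examples):

```
# /etc/samba/smb.conf fragment for MacOS Finder clients (sketch;
# share name and path are examples)
[global]
   vfs objects = catia fruit streams_xattr
   fruit:metadata = stream
   fruit:model = MacSamba
   fruit:veto_appledouble = no

[media]
   path = /srv/media
   read only = no
```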

Doing a standard OpenVPN build

Enabling the rc-local service in systemd
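On Fedora, systemd activates rc-local when /etc/rc.d/rc.local exists and is executable; a sketch:

```shell
# systemd runs rc-local at boot when this file exists and is executable
cat > /etc/rc.d/rc.local <<'EOF'
#!/bin/sh
# commands to run once at boot go here
exit 0
EOF
chmod +x /etc/rc.d/rc.local
systemctl start rc-local
```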

Rant section

Although some people like the rant, others just want a HOWTO. So the rant goes here, after the practical.

It is a big deal that Fedora treats the Pi’s aarch64 as equal to Wintel’s x64

The ability to use standard Red Hat software tools and procedures means that aside from differences in the installation process, the Pi feels like a normal, if slightly slow, Red Hat machine. Because Fedora on Pi has full standard repositories, you can use standard howtos and procedures to do a build.

Until now, Linux was effectively relegated to dumpster-diving Wintel boxen

Intel hardware originally designed for Windows is the commodity computing platform on which Linux was born. It is comforting to know that Linux now has a second viable hardware platform, which will grow in capability over time.

A motorcycle engine trying to power a car

I remember a TV show where a team of mechanics scoured a junkyard and found a motorcycle engine powerful enough to drive a car-sized frame, chassis, and wheels. The motorcycle engine was large by motorcycle standards – and was able to power the car form factor, but it struggled with the task. Like the motorcycle engine in the TV show, the Pi is now powerful enough to run a full Red Hat Linux (Fedora) server, but it struggles with a full GUI desktop like Gnome 3 or XFCE.

Vision and leadership and big decisions by Fedora and rpmfusion yum repo maintainers

The big decision by Fedora to provide full support for the Pi’s aarch64 ARM cpu gives Linux its own hardware platform, for the first time. The Fedora project maintainers and the rpmfusion repo maintainers did an excellent job of ensuring that the yum repositories contained aarch64 binary rpm packages for everything that had an x64 package. Let me just say it is inspiring to see people with vision actually execute and do something like this.

Using standard procedures and software libraries

What is impressive is that the main yum repositories for fc29, as well as the rpmfusion free and non-free repositories, fully support aarch64. That means you can dnf install vlc filezilla, and it will work. Some repositories are not there yet, such as the remi rpm repo for php56 support on modern fedora, so I will be limited to php72 for the moment. Some third-party repos, like google-chrome, do not yet support aarch64, however I was able to install Chromium.

What the Pi is not so good at: GUI desktop

When I first installed Fedora 29 Workstation on the Pi, the GUI was virtually unusable. It became better over time, but would sometimes freeze. I turned off the Gnome 3 desktop, did all the dnf updates, then turned on Gnome 3. After a few tweaks with the Gnome Tweaks tool I was able to run LibreOffice Writer, Chromium, and FileZilla, but only very slowly. XFCE was slightly faster, but not enough to make a difference. Although rpmfusion allowed me to dnf install vlc, VLC was virtually unusable – but props to everyone in that value chain for vision – a hardware rev or 2 from now, VLC will be usable.

The Pi is fast as a text server

I decided to go the other way and install a text-only server from the ground up. There are some Raspberry Pi-specifics to the build I will address in the Install procedure section. However, the rest was identical to the way one would build a Wintel Linux box. On my brother’s advice, I decided to use Fedora 29 Minimal. It really is minimal: I had to use dnf to install nano and rsync. However, I was able to do a dnf update including the rpmfusion free and non-free repositories. Because I had full standard repositories, I could use standard howtos and procedures to do my build.

I then built a standard LAMP web server, an SSL cert with Let’s Encrypt, an installation of the Nextcloud media server, a Samba file share server, and an OpenVPN server. The server performed well – so well that I am already planning to deploy a few in the field as OpenVPN servers and rsync backup data dumps.

A random reference to a satirical book about home servers