Installing Fedora 29 on the Raspberry Pi 3 B+


The recent release of Fedora 29 for the Raspberry Pi means that the hobbyist hardware platform can finally be considered a viable alternative to Windows-on-Intel (“Wintel”) hardware for hosting Linux server applications. Although too slow to be useful as a Graphical User Interface (“GUI”) desktop, the Pi does a good job running a text-only web, file, and VPN server. This makes Fedora on the Pi perfect for serving video files at home, or as a VPN server for a small home office or satellite office.

Actual practical HOWTO section

This section contains actual procedural information. After the practical information, there is a “Rant section.”

Choosing the version you will install

For now, I suggest you avoid the GUI desktop altogether and stick with a text-only web, file, and VPN server based on Fedora 29 Minimal.

Downloading the Fedora 29 image file

Download the file for Fedora Minimal:

How this is different from a Wintel install

A Wintel build starts with boot media, either a USB drive or a DVD, containing a bootable image that includes an installer. The installer is used to format another device, typically a hard drive, with boot and other partitions.

The Raspberry Pi Fedora installer is a raw disk image (a .raw.xz file) that you write directly to a micro-SD card. In the Pi world, everything boots from a FAT32 UEFI partition on the micro-SD card.

Understanding the Fedora 29 image file

The Fedora 29 image contains a FAT32 partition with an implementation of UEFI, and several ext4 partitions.

Decompressing the file containing the Fedora 29 image file

In Windows, use WinRAR to decompress the image file.

In Linux:


xzcat Fedora-IMAGE-NAME.raw.xz | sudo dd status=progress bs=4M of=/dev/XXX # Location of your media (will be sdX or mmcblkX depending on hardware)

Formatting the micro-SD card with the image



(Decompression and writing the image to media are part of the same operation, shown above under “Decompressing the file containing the Fedora 29 image file.”)

Using a partition tool to expand the / partition on the micro-SD card



gparted /dev/XXX
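If you prefer the command line to the gparted GUI, the same expansion can be done with growpart and resize2fs. A minimal sketch, assuming the root (/) filesystem is ext4 on the third partition; check the actual layout with lsblk first, since partition numbers and device names vary:

```shell
# Assumes /dev/XXX is the micro-SD card and / is ext4 on partition 3.
# On mmcblk devices, the partition node is /dev/mmcblkXp3, not /dev/mmcblkX3.
sudo dnf install -y cloud-utils-growpart   # provides the growpart tool
sudo growpart /dev/XXX 3                   # grow partition 3 to fill the card
sudo resize2fs /dev/XXX3                   # grow the ext4 filesystem to match
```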

Following the text setup wizard

On first boot, at the bottom of the screen, you will see a set of questions regarding the initial system username, password, and other settings. Follow the wizard, and make sure you create a root password. The system will then boot.

Rebooting into a standard machine

You will boot into a standard Linux login prompt. Log in as root.

Doing standard housekeeping and a standard build

From this point on in the build, the machine feels like a “normal” Linux box.

Using nmcli to set a static ip address
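The static IP setup can be sketched with a few nmcli commands. A hedged example; the connection name, addresses, and gateway below are placeholders, so check nmcli con show for the real connection name first:

```shell
# Find the connection name (e.g. "Wired connection 1" or "eth0").
nmcli con show

# Set a static IPv4 address, gateway, and DNS on the connection.
nmcli con mod eth0 ipv4.addresses 192.168.1.50/24
nmcli con mod eth0 ipv4.gateway 192.168.1.1
nmcli con mod eth0 ipv4.dns "192.168.1.1 8.8.8.8"
nmcli con mod eth0 ipv4.method manual

# Bounce the connection to apply the change.
nmcli con down eth0 && nmcli con up eth0
```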

Using dnf to install nano, rsync, and net-tools

A lot of things that you take for granted, like nano, rsync, and ifconfig (part of net-tools), do not exist until you add them with dnf.
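All three can be added in a single dnf transaction:

```shell
sudo dnf install -y nano rsync net-tools   # net-tools provides ifconfig
```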

Editing the selinux config file
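A minimal sketch of the edit, switching SELinux from enforcing to permissive in /etc/selinux/config (a reboot is needed for the file change to take full effect):

```shell
# Back up the config, then flip enforcing to permissive.
sudo cp /etc/selinux/config /etc/selinux/config.bak
sudo sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config

# To change the running system immediately (until reboot):
sudo setenforce 0
```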

Modifying or disabling firewalld

systemctl stop firewalld; systemctl disable firewalld

(Some people like firewalls; I think they are lazy. Just turn off unneeded ports!)

Enabling an SSHD server

systemctl start sshd; systemctl enable sshd

Adding the rpmfusion repos

dnf clean all; dnf update
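The rpmfusion free and non-free repositories themselves are added by installing the release packages from the rpmfusion mirrors (this is rpmfusion's documented install method; the URLs assume the standard mirror layout):

```shell
# Add the rpmfusion free and non-free release packages for this Fedora version.
sudo dnf install -y \
  https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm \
  https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm

# Then refresh metadata and update.
sudo dnf clean all && sudo dnf update
```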


Doing a standard LAMP build

(Notes: use dnf install mariadb mariadb-server instead of dnf install mysql mysql-server, and dnf install php-mysqlnd instead of dnf install php-mysql.)
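A minimal sketch of the LAMP install on Fedora 29, using the MariaDB and php-mysqlnd package names from the notes above:

```shell
# Apache, MariaDB (the MySQL replacement), and PHP with the native driver.
sudo dnf install -y httpd mariadb mariadb-server php php-mysqlnd

# Start the services now and enable them at boot.
sudo systemctl enable --now httpd mariadb

# Set a database root password and clean up defaults.
sudo mysql_secure_installation
```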

Installing a free cert with Let’s Encrypt
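A hedged sketch using certbot, the EFF's Let's Encrypt client; the domain is a placeholder, and the exact python-certbot-apache package name can vary between Fedora releases:

```shell
# Install certbot with its Apache plugin.
sudo dnf install -y certbot python-certbot-apache

# Request and install a certificate for a (placeholder) domain.
sudo certbot --apache -d example.com
```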

Installing Nextcloud

Normally, I would just say dnf install nextcloud and watch dnf dependency resolution in action. Unfortunately, there is a missing dependency for php-composer(ircmaxell/password-compat) which breaks nextcloud in yum/dnf. This is not a good thing; however, it is not specific to the Pi, it is a Fedora 29/Nextcloud issue (and it would appear that Owncloud and Nextcloud do not get a lot of maintainer love in the yum repos).

As it happens, I deployed Nextcloud on a cloud server 2 weeks ago using this script, and it worked just as well on this installation of Fedora on the Pi:

Doing a standard Samba build

Optimizing Samba file shares, especially for MacOS Finder clients:

Doing a standard OpenVPN build

Enabling the rc-local service in systemd
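systemd still supports the classic rc.local, provided the file exists and is executable; a minimal sketch:

```shell
# Create an executable rc.local; systemd's rc-local service picks it up.
sudo tee /etc/rc.d/rc.local >/dev/null <<'EOF'
#!/bin/bash
# Commands to run at the end of boot go here.
exit 0
EOF
sudo chmod +x /etc/rc.d/rc.local
sudo systemctl enable --now rc-local
```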

Rant section

Although some people like the rant, others just want a HOWTO. So the rant goes here, after the practical.

It is a big deal that Fedora treats the Pi’s aarch64 as equal to Wintel’s x64

The ability to use standard Red Hat software tools and procedures means that aside from differences in the installation process, the Pi feels like a normal, if slightly slow, Red Hat machine. Because Fedora on Pi has full standard repositories, you can use standard howtos and procedures to do a build.

Until now, Linux was effectively relegated to dumpster-diving Wintel boxen

Intel hardware originally designed for Windows is the commodity computing platform on which Linux was born. It is comforting to know that Linux now has a second viable hardware platform, which will grow in capability over time.

A motorcycle engine trying to power a car

I remember a TV show where a team of mechanics scoured a junkyard and found a motorcycle engine powerful enough to drive a car-sized frame, chassis, and wheels. The motorcycle engine was large by motorcycle standards – and was able to power the car form factor, but it struggled with the task. Like the motorcycle engine in the TV show, the Pi is now powerful enough to run a full Red Hat Linux (Fedora) server, but it struggles with a full GUI desktop like Gnome 3 or XFCE.

Vision and leadership and big decisions by Fedora and rpmfusion yum repo maintainers

The big decision by Fedora to provide full support for the Pi’s aarch64 ARM cpu gives Linux its own hardware platform, for the first time. The Fedora project maintainers and the rpmfusion repo maintainers did an excellent job of ensuring that the yum repositories contained aarch64 binary rpm packages for everything that had an x64 package. Let me just say it is inspiring to see people with vision actually execute and do something like this.

Using standard procedures and software libraries

What is impressive is that the main yum repositories for fc29, as well as the rpmfusion free and non-free repositories, fully support aarch64. That means you can dnf install vlc filezilla, and it will work. Some repositories are not there yet, such as the remi rpm repo for php56 support on modern Fedora, so I will be limited to php72 for the moment. Some third-party repos, like google-chrome, do not yet support aarch64; however, I was able to install Chromium.

What the Pi is not so good at: GUI desktop

When I first installed Fedora 29 Workstation on the Pi, the GUI was virtually unusable. It got better over time, but would sometimes freeze. I turned off the Gnome 3 desktop, did all the dnf updates, then turned Gnome 3 back on. After a few adjustments with the Gnome tweaks tool, I was able to run LibreOffice Writer, Chromium, and FileZilla, but only very slowly. XFCE was slightly faster, but not enough to make a difference. Although rpmfusion allowed me to dnf install vlc, VLC was virtually unusable. Still, props to everyone in that value chain for vision: a hardware rev or two from now, VLC will be usable.

The Pi is fast as a text server

I decided to go the other way and install a text-only server from the ground up. There are some Raspberry Pi specifics to the build, which I address in the install procedure section. The rest, however, was identical to the way one would build a Wintel Linux box. On my brother’s advice, I decided to use Fedora 29 Minimal. It really is minimal: I had to use dnf to install nano and rsync. However, I was able to do a dnf update including the rpmfusion free and non-free repositories. Because I had full standard repositories, I could use standard howtos and procedures to do my build.

I then built a standard LAMP web server, an SSL cert with Let’s Encrypt, a Nextcloud media server, a Samba file share server, and an OpenVPN server. The server performed well, so well that I am already planning to deploy a few in the field as OpenVPN servers and rsync backup data dumps.

A random reference to a satirical book about home servers


Using redirection and a free webmail account to host branded email for a domain

A friend registered a domain name, and wanted to send and receive branded email using that domain. If your project has a modest budget, you can send and receive branded domain email using a combination of a free webmail account and a redirection account for US$20/year.

You can use it as your receiving post office, and have it forward your inbound email messages for that domain to a free webmail account. You can use its SMTP server as an outbound SMTP gateway, with username and password authentication.

By publishing SPF and DKIM records in the DNS zone file for your domain, you can greatly increase the chances that branded email sent via the server will be accepted by the remote party and not be mistaken for spam.

Checklist: what you need for branded email:

A domain (

A DNS control panel for the domain. (I don’t let my hosting ISPs take control of my DNS; I control it via the free DNS control panel that came from my registrar, GoDaddy. You could probably do the same with your registrar.)

A free webmail account (for example, a free account).

A redirection account (US$20/year)

Setting up DNS

Log into DNS control panel

create MX records for your domain:

MX @ priority 12
MX @ priority 24
MX @ priority 36

create SPF and DKIM records:

TXT “v=spf1”

For the DKIM record, refer to the custom value generated for your domain, and available in the control panel for your account.
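For illustration only, here is the generic shape of SPF and DKIM records in a zone file. The include host, selector, and key below are placeholders; use the exact values your provider generates for your domain:

```
; SPF: authorize the provider's servers to send mail for this domain.
@                    IN TXT "v=spf1 include:_spf.example-provider.com ~all"

; DKIM: the public key, published under <selector>._domainkey.
selector._domainkey  IN TXT "v=DKIM1; k=rsa; p=PLACEHOLDER-PUBLIC-KEY"
```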

Setting up a mail client

Start with an email client, like the Mail app on an iPad or iPhone.

Instead of choosing a branded email service with a logo, like Gmail or Yahoo, choose “other” and define a custom email service.

Name: Firstname Lastname
Password: passwordforgmailaccount

Incoming mail server

Host name:
User Name:
Password: passwordforgmailaccount

Outgoing mail server

Host name:
password: passwordforpoboxaccount

Fixing slow MacOS Finder on Samba file share, optimizing for Windows clients

If you are trying to figure out why your MacOS Finder is slow when it connects to a Samba file share on a Linux server, you are in the right place.

I found the solution in this post:

Here is what you need to add to /etc/samba/smb.conf on the Samba server:

vfs objects = fruit
fruit:aapl = yes
fruit:encoding = native
fruit:locking = none
fruit:metadata = stream
fruit:resource = file

While I was searching for things that could speed up a MacOS Finder client’s session, I found a number of optimizations that helped speed Windows clients connected to a Samba file share.

The best of these was a post:

with these suggestions for /etc/samba/smb.conf on the Samba server:


# "strict allocate = yes" helps on filesystems with extent support
# (such as ext4 and XFS); on filesystems without extents,
# set "strict allocate = no".

   strict allocate = Yes

   allocation roundup size = 4096

   read raw = Yes

# Thanks to Joe in the comments section!

   server signing = No

   write raw = Yes

# When "strict locking = no", the server performs file lock checks
# only when requested by the client.
# "strict locking = auto" or "strict locking = no" is acceptable.

   strict locking = No

   socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=131072 SO_SNDBUF=131072

# SMBwriteX calls greater than "min receivefile size" are passed
# directly to the kernel's recvfile/splice interface.
# Max value = 128k.

   min receivefile size = 16384

   use sendfile = Yes

   aio read size = 16384

   aio write size = 16384



Where CPU power matters, and where it does not

The other day, I was thinking about 3 systems, 2 with modest specifications, and 1 system with great specs.

A 2009 desktop: an old Vista-class Core 2 Duo, 4GB RAM, 120GB SSD

A 2018 netbook: a Celeron CPU (more like an Atom), 4GB RAM, 64GB eMMC

A 2016 laptop: a Core i7, 16GB RAM, 1TB SSD

It would not be a difficult quiz if the object were to identify the good system vs the bad ones. Hint: it’s the i7.

However, I have learned that some tasks run quite well on limited hardware.

The 2009 desktop was never designed to run with 4GB of RAM and a 120GB SSD from my junkpile, but they certainly have the effect of speeding up the system. This machine, running Fedora Linux, is a VPN server, a file server, a web server, a database server, and can play back 1080p video beautifully over a DVI connection.

The 2018 netbook, which cost less than US$200 new, is essentially a Chromebook case with modest Wintel guts. Its CPU is called a Celeron, but given its clock speed and meagre 2 cores, it may as well be an Atom. And yet, this netbook is able to run Fedora Linux and Windows 10 Pro quite well. It can even run Photoshop.

I have tried to run virtual machine emulation under both of these systems. Even with a stripped-down OS installer, the results were not pretty. For some applications, specs matter.

Although I have not yet spent serious time with a Raspberry Pi device, the full support in Fedora 29 has made me take a serious look at the platform. I predict results similar to those on the systems I described earlier.

Of course, if you throw good specs at a problem, like a recent laptop with a core i7, 8 cores, 16GB RAM, and a 1TB SSD, a lot of other things are possible. I am able to run multiple virtual machines under KVM, and have had a situation where a Linux guest was connected to one VPN, a Windows guest was connected to another, and the main desktop (“baremetal computer”) was on the main network connection, not even slowing down while the virtual machine guests did their work.

A recent sighting of a 13″ MSI and a sale for a Dell XPS 13 made me long for a small, but powerful computer. However, for travel, all I need is that little netbook. In theory, it would be fun to virtualize a few server environments for portable LAMP development, but I have been exploring “containers” like Docker that will allow me to isolate the systems with different PHP/MySQL versions without the overhead of a full virtual machine.

So the question is not whether you need more power. The question is how much power do you need for a specific use?

The containers thing is getting important. My goal is to build 2 containers: one with MySQL and PHP 5.x, and one with MySQL and PHP 7.x.

The Linux dialup WIFI server, part 1

My mother has a great cottage on a lake. There is no cell phone service for a few kilometers around. There is a landline. There is a form of rural data wireless that is often down. Thus the need for a dialup wifi server as a backup, for when rural data wireless service is unavailable. Too slow for web surfing, but enough for email and texting. This seems like a perfect task for Linux. After all, Linux is usually a good foundation for web servers, email servers, vpn servers, file share servers, voip servers, and more.

I started this project last year, and found that although Windows could drive the built-in 56K modem on a circa 2009 laptop, Linux could not. Fortunately, I have a US Robotics USB 56K modem, which is recognized by Linux.  I got as far as a dialup connection without DNS.

This year I learned how to override some settings in wvdial to enable DNS, so I was able to surf on Fedora 28 via dialup. However, I was unable to share the connection via the GUI NetworkManager.

To explain why, let me tell you a few stories:

I should point out that sharing a connection like this is something that Windows can do without breaking a sweat. I first installed such a gateway, albeit dialup shared as a NAT over wired Ethernet, on a Windows 98 box in 1999.

Last year (2017), while on vacation at a hotel with a 2 device limit, I confronted the dilemma: how does one choose between a laptop, an iPad, and 2 cell phones? Answer: use a laptop with a second wifi adapter to provide a repeated hotspot under my control.

NetworkManager on Gnome 3 on Fedora offered to share a wired Ethernet connection as a local WIFI hotspot, but was unable to do other permutations — like a second wifi adapter on USB sharing a connection to the hotel wifi. Fortunately, the machine was able to dual-boot into Windows and share the hotel wifi via a second wifi USB adapter.

Back to the Linux dialup wifi server project.

It would seem that I must configure hostapd in order to create a hotspot that can share the dialup connection. I have followed the instructions, but have not succeeded so far — there are some things I can try. I find myself consulting blog posts from 2002-2011. I suppose that is the kind of deep time audience that reads these blog posts, a few at a time, in the future.
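For reference, a minimal hostapd.conf of the kind those howtos describe; the interface name, SSID, and passphrase are placeholders, and the driver and channel depend on the wifi adapter:

```
# /etc/hostapd/hostapd.conf -- minimal WPA2 access point sketch
interface=wlan1
driver=nl80211
ssid=CottageNet
hw_mode=g
channel=6
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase=ChangeMePlease
rsn_pairwise=CCMP
```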

More to follow at the beginning of October, when I return to the cottage to take another run at the problem. For now, I will simulate the hostapd problem with a test machine here, then bring that solution to the cottage and test the integrated solution with the already-working dialup connection.

I suppose I could try to test the dialup connection via a voip analog telephone adapter device, but that seems wrong.

[edit 20180924] My brother had a somewhat similar but more successful experience in 2015:

The days of the small email server are over

Although it is still possible to run a small hobby web server, it has become impractical to run a private email server. The challenges of dealing with spam and viruses go back to the beginning of the web hosting industry, in the late 1990s. During that era and until recently, a person or small team with minimal resources were able to deploy a web and email server with a combination of anti-spam and anti-virus countermeasures, using the same tools available to larger companies. That is no longer the case due to collateral damage from the spam wars.

Since the late 1990s, anyone with a Windows-compatible computer could reformat that computer with Linux, an operating system similar to UNIX. A person or small team could host web sites and email accounts with DNS, web, database, and email hosting on a single computer (hopefully with a second server as a backup).

Reputation management and blacklisting services minimize the spam arriving in inboxes, but they can also cause mistaken blacklisting, or collateral damage: innocent firms hosted by a provider with one bad client on its infrastructure get blocked along with everyone else. Until recently, this could be managed. However, since about 2016, the email threat environment has escalated even further.

This new email threat environment has resulted in increased resource use to process inbound mail (the majority of which is spam, some of it carrying viruses). Worse still, it has become almost impossible to ensure that one’s email will be accepted by the remote party. Even when following all protocols, smaller email providers are no longer equals with larger ones.

The blocking of email from small servers is not due to any commercial conspiracy; it is simply that some activities require scale, and email has become one of those activities. The bad actors may still exist, now hosted on the big providers, but no one will blacklist Gmail or Hotmail, and no one will blacklist a company’s mail if it is hosted on one of those systems. There may be 10 or 20 hosting companies of scale that fall into this category.

Of course, there are still thousands of hobby servers exchanging email, and small 1-10 person hosting operations still offering email services to small companies. There are people who have found a successful recipe, for now, to stay on the air. For now. Good luck to them.

To newcomers considering the vocation: try it for the challenge and the knowledge, and build your own email server for fun, but do not try to host email for anything remotely important, like a paying customer.

Some will say, sure, but a big company can host its own email. To which I would say they can, but they shouldn’t. We have seen telcos and even governments outsource their email. They saw this coming, and got out of the business. So should we all. Gmail for Business is good. Microsoft 365 is good. Others might be good.

Not all is lost. For the web, we are still in a golden age. Linux servers big and small, some virtual, allow people and companies to have unprecedented degrees of control over the most minute aspects of their web computing. An old laptop can become a Linux web server. A virtual server can be deployed for US$6/month. Those willing to work and learn can still have a web server that is the equal of any other web server.

UEFI Mode is necessary for Fedora 28 to create a multiple-boot system

I have concluded that, for Fedora 28 at least, it is better to leave a system in UEFI/Secure Boot mode if your goal is to install a multiple-boot system with a boot menu offering a choice between Windows and Linux.

How I discovered that you need UEFI for multiple-boot with Fedora 28

I am not a fan of UEFI/Secure Boot, but have found that modern hardware has fewer installation issues with Windows and Linux in UEFI mode. So in the past 12 months I have been leaving the BIOS in UEFI/Secure Boot mode.

When I reformatted a system in Legacy BIOS mode, first installing Windows while leaving some unallocated space on the hard drive, then installing Fedora 28 Linux, the grub boot menu did not include a boot menu entry for Windows. Attempts to create a manual grub menu entry for Windows did not result in a successful boot.

How to create a multiple-boot system

Until recently, I installed Linux by first reducing the size of the Windows partition to create unallocated space, then installing Linux with automatic partitioning enabled. In each case, the Fedora 28 installer used the unallocated space for the Linux partitions and created a multiple-boot menu with Windows as an option. What I did not realize was that the Fedora 28 installer only does this operation correctly if the BIOS is set for UEFI/Secure Boot.

Why use a multiple-boot setup?

Linux on the desktop offers functionality, performance, and security benefits over Windows. However, there are certain edge cases that require Windows. Rather than simply giving up and letting Windows win because of one occasional “must-have” application, I configure some of my machines as dual-boot between Windows and Linux.

Using multiple boot and virtualization with Linux to (almost) eliminate Windows

First, a small and trivial confession: I have been a Linux system administrator since 1998, running big Linux servers that host web pages and email systems. I have also, reluctantly, become a Windows server system administrator, but that is a story for another time. For most of that time, I have earned at least part of my living due to Linux servers, but used a Windows desktop for my personal workstation. I used Linux exclusively on my laptop from 2003-2006, and had a good experience, although during that era you had to be prepared to re-compile your XFree86 subsystem to support DRI for video playback, and things like wifi and power sleep mode were tricky to configure. I have also had Linux laptops for “salesman’s demos” where I wanted to be sure that my website would load during the demo no matter what. I have kept a Linux server at home for many years, mostly for experimentation, but also to serve as a VPN entry point and media file server. However, I have never been “pure” in my adoption or advocacy.

Over the past few weeks since returning from a vacation in Las Vegas, I have intended to write several blog posts about using Linux as a personal desktop. In business, there is the concept of the 80/20 rule: the 20% of features that are needed 80% of the time. With the Windows desktop, I have found, there is a 1/10 rule: the 1% of features needed 10% of the time. Put simply, while I must use Windows at work because it is the corporate standard, I would prefer to use a Linux desktop at home. This post is about keeping Windows in its place: not allowing the occasional need for Windows to let it dominate desktop computing on the strength of one killer feature. I have done this using a combination of multiple-boot, virtualization, platform diversity, and remote desktop access. Yes, I know about Wine and Crossover emulation, but I classify those as stupid computer tricks.

People can argue about the relative technical merits of Windows vs other desktop operating systems, and the strengths and deficiencies of Linux as a desktop. For personal use, my most common tools are Chrome for the web, VLC for videos, and a few utilities like FileZilla for file transfer, VNC for remote desktop work, and Putty for SSH terminal access (it is worth observing that all of these tools are themselves open source). The occasional Word file can usually be read by LibreOffice, the free Word clone included in most desktop Linux distributions. As for the rest of the edge cases that require an actual Windows workstation, read on.

Free as in freedom (libre), not just free as in free beer

2 of my computers include legal licenses for Windows that came with the hardware, so cost or license compliance is not my primary concern. I think that the term Libre is more accurate than Free, as it immediately dispenses with the distinction “free as in freedom, not just free as in free beer.”

Windows as a security vector

As a desktop operating system, Windows is dangerous. It has poor security, despite hard work by Microsoft and others to improve its defenses. Linux provides a faster, more stable desktop experience, and is more secure, by virtue of its architecture and the intense peer review of open source code. On the minus side, Linux on the desktop is limited in the number of apps it supports. Although several core apps such as Chrome, common on Windows, are now available on Linux, there is always a mission-mandatory application, like Sage 50 Accounting or Photoshop, that does not exist for desktop Linux and for which there is no acceptable substitute.

Last summer, before a corporate merger, I traveled to Toronto for a long weekend, carrying an Acer netbook (weak CPU, 2GB RAM, 32GB eMMC hard drive) that only ran Linux. I was able to support my internal IT clients using an OpenVPN client, remote desktop sessions to servers, and TeamViewer (which actually produces a Linux version!) to handle remote tech support tickets. I also had the option of connecting to a Windows computer via remote desktop in order to run Windows-specific software. Post-merger, my new employer uses a kind of VPN for which there is no current Linux support — there is documentation about an older 32 bit version, and I have seen and tried a few howtos to add older 32 bit libraries, logical links to .so files, and other tricks to support the obsolete Linux version of the VPN client, but have thus far not been successful in connecting to the new corporate VPN. A perfect illustration of how Windows needs only one critical app to “win” and ensure its place on a computer desktop.

The netbook: multiple-boot for vacations

Often, when I visit Las Vegas, my flight arrives several hours before I can check into my hotel, and there is not even a paid option to check in early. In those cases, I must leave my luggage with the bell captain and wander the Strip, homeless until checkin. So I travel with the tiniest laptop ever, a netbook that fits into a half-size laptop case, with enough room for an iPad and a few accessories like the power brick, a mouse, and a usb charger battery for my phone. Having a small computer is great when you are stuck in an airport lounge, on a train or bus.

Late last year, I purchased an HP Stream 11, with an 11.6” display and a limited CPU that is branded Celeron, and is technically a 64 bit CPU with 2 cores, but is essentially 1.5 times the speed of an old Atom CPU. This matters less than you would think as the Intel graphics card is fast for video playback and makes graphical desktops fast and responsive despite a weak CPU. Unlike most netbooks on the market, this machine has 4GB instead of 2GB RAM, and a 64GB instead of 32GB eMMC drive. This means that a) the machine has enough RAM to run Windows 10, and b) the drive is big enough to house partitions for both Windows and Linux boot partitions.

Multiple boot because Windows not optional and netbook too limited for virtualization

There are many things you can do with a netbook. Virtualization is not one of them: I did the experiments. There is something cruel about asking a 1.5Ghz Celeron CPU with 2 cores and 4GB of RAM to host a virtual guest, and the results were not pretty. I had a second chance to work with virtualization on another laptop with better specs, but that is discussed later in this blog post.

Creating a multiple boot between Windows and Linux

There are many good howtos on formatting a computer for multiple boot with Windows and Linux, but here are the essentials. If you are formatting an empty hard drive, only partition some of the space on the hard drive, and leave the rest as “unallocated.” Do the full Windows install. Then, run the Linux installer and tell it to use automatic partitioning – it will create a second boot partition for Linux, and even install a multiple-boot menu allowing you to choose between Linux and Windows at boot time. Of the usable 57GB portion of the 65GB eMMC drive, I allocated 35GB to Windows and 22GB to Linux – next time, I may allocate 40GB windows and 17GB Linux. If you want to add a Linux boot partition to a machine that already has Windows, you can use a bootable USB “Live” version of Linux and the gparted utility to re-partition the Windows partition to free up space that is then “unallocated” on the hard drive.

Docker containers for Linux/Apache/MySQL/PHP (LAMP) development

I support several large enterprise applications written in PHP. Although I have seen PHP run on Windows, I consider it a stupid computer trick. PHP works best as part of a LAMP stack. The problem with PHP is that its developers deprecate (drop as obsolete) functions and features quite aggressively. This means that although in theory a Linux laptop would make an excellent LAMP server, a modern desktop distribution of Linux contains a version of PHP that is too modern to run the enterprise code that I maintain (yes, there are re-factoring projects underway).

I listened to several presentations about Docker and Snap containers, and related technologies like Puppet and Ansible, during meetings at my Linux Meetup group. Each time, I thought the presentations were on a subject too esoteric to be of use to me, but as with most presentations to which I have listened in that group, the information was useful to me at a later time. In my case, I have chosen to use Docker containers for PHP 5.x and PHP 7.x development (see my previous blog post, “Formatting a netbook with dual boot Windows and Linux, plus a Docker container for travel and offline LAMP development.”) These containers allow me to host several incompatible versions of PHP with full isolated environments of Apache and MariaDB (MySQL), without the CPU and RAM overhead of virtualized guest machines. The performance is quite good.
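The container setup described above can be sketched with the official Docker Hub images. A hedged example, assuming the php:5.6-apache and mariadb images; the container names, port, and password are placeholders:

```shell
# A throwaway MariaDB container for the legacy stack.
docker run -d --name lamp56-db \
  -e MYSQL_ROOT_PASSWORD=changeme \
  mariadb:10.3

# PHP 5.6 + Apache, linked to the database, serving ./src on port 8056.
docker run -d --name lamp56-web \
  --link lamp56-db:mysql \
  -v "$PWD/src":/var/www/html \
  -p 8056:80 \
  php:5.6-apache

# Repeat with php:7.2-apache on another port for the modern stack.
```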

The server: a Linux server at home to serve media files

At home, I have an old Vista-class (Core 2 Duo) computer upgraded to 4GB RAM and 120GB SSD, which I use as a personal server. The RAM and SSD make this machine faster than it was when originally deployed in 2007. Unlike my other personal computers, this machine does not have dual-boot, virtual guests, or Docker containers, it is a pure bare-metal Linux server with no multiple boot. I use it as a file server for media files at home, a VPN server to access my home network, and as an rsync server to backup my personal VPS web server. This machine has a Linux desktop which offers mediocre performance. Although the computer can display a 1080p movie without stutter using VLC, its Gnome desktop is not snappy – my 2017 netbook with a weaker CPU but better video card provides a better desktop experience – dialogs and interfaces respond more quickly.

Formatting my best personal computer as multiple boot between Windows and Linux

I have a personal laptop with good specs – i7 CPU, 16GB RAM, 1TB SSD. This machine is connected to an external monitor, keyboard, and mouse to serve as my desktop computer at home. Until recently, this computer (the computer I use the most while at home) ran Windows. Why did this computer, the best computer in the house, get Windows? Because Windows always wins due to the occasional need for a Windows-only tool. What if I need to connect to the VPN at work? What if I have to run Photoshop? What if I need real Excel to run a spreadsheet with VBA macros?

After mostly using a Linux laptop during my vacation (although I had the Windows boot available for Windows-only tasks like the work VPN), I was curious about how well Linux could host virtual machine guests running Windows and Linux on decent hardware. I was also determined to free myself from the need for Windows at home, for security, stability, and privacy. Given today’s threat environment, using a Windows computer seems dangerous. I also figured that the performance of a Linux desktop would be even faster than the same hardware under Windows.

I booted a live USB of Fedora Linux, installed the gparted tool using dnf (yum), then I used gparted to reduce the size of the Windows partition. This created free “unallocated” space. I then ran the Linux installer, and selected auto-partitioning. Linux used the unallocated space, created its own partition, and a multiple-boot menu that allows me to choose between Windows and Linux at boot time.

Building virtual machines under Linux with KVM/QEMU

Using the virt-manager tool, I was able to build a Windows virtual guest running Windows 10. Based on advice from the Internet, I used a raw file image, and allocated 4GB RAM and 2 CPU cores. I discovered a few quirks — for example, you have to install a non-existent “EVTouch USB Graphics Tablet” input device to get rid of mouse stutter, and you have to change the buffer values for pulseaudio. I was able to create a virtual Windows desktop that had great performance for everything except video and audio playback, which had stutter. Unless you tried to play a movie or music file, this virtual guest performed so well that in full-screen it was almost indistinguishable from a desktop running on real bare metal. This guest vm allowed me to connect to the work VPN.

For fun, I also created a virtual guest of Fedora 28 desktop. It ran extremely well. I have used this virtual guest to perform experiments (for example, if a desktop is connected via VPN client to another system, is its desktop still accessible remotely via VNC? Answer: yes). I have been able to run both the Windows and Linux virtual machines at the same time, with each connected to a different VPN while the main foreground desktop runs on the regular Internet connection.

Keeping the Windows multiple-boot option available

Although I have not yet needed to reboot into real, bare-metal Windows on my main personal computer at home, I feel better knowing that it is there. If I need to spend a day converting video, or running Photoshop, I can boot back into Windows. Preserving the ability to run Windows makes it possible for me to run Linux as the primary operating system on the bare metal of that machine.

A final note: multiple physical computers

After all this experimentation, I found that instead of using virtual guests for downloading and access to Windows, I was relying on separate physical computers. In a typical session, my main desktop runs on Linux, my server runs Linux connected via a privacy VPN, my netbook runs in Windows mode. My server and netbook are available physically as well as virtually, via VNC remote desktop software.

Of course, I would never have known just how well Linux can run a virtual machine guest had I not reformatted the machine. I am glad that I have retained the ability to choose between Windows and Linux at boot time on 2 of my 3 personal computers. Mostly, I am glad to be able to run Linux on my main home and travel machines, by keeping the theoretical ability to run Windows for the edge cases that usually let it win the war for the desktop.

Formatting a netbook with dual boot Windows and Linux, plus a Docker container for travel and offline LAMP development

When I travel, I like to carry the smallest and cheapest computer that can serve my needs, a netbook: the HP Stream 11” netbook (C$250) has a modest single-core, two-thread 64-bit Intel CPU, 4GB of RAM, and 57GB of usable space on a 64GB eMMC drive. This computer is inexpensive, light, and disposable, and it can be reformatted often, as it is not my main personal computer. Although it has light specs, the RAM and drive are double the size of typical netbooks. The amount of drive space makes a multiple-boot environment possible, and the amount of RAM makes Windows bearable. I have also added a 64GB micro SD card for extra storage.

I use Linux for a lot of PHP programming. I also prefer Linux as a desktop for the performance, and for the privacy and freedom. I have to retain the ability to use Windows in case I need to connect to certain systems for work, and the computer is too limited in terms of CPU and RAM for virtualization. Don’t even get me started on Wine.

Partitioning a multiple-boot system with Windows and Linux

This has led me to divide the 57GB drive into a 35GB Windows 10 partition, and a 22GB Fedora 28 Linux partition. Windows must be installed first, with unallocated drive space available to the Linux installer to create a new drive partition. The Linux installer will also install a multiple boot manager which will list the Windows boot partition as an available option on startup.

Installing Windows

If you are modifying an existing Windows installation to become multiple-boot, resize the Windows partition to create free, “unallocated” space, which can be used by the Linux installer.

If you are doing a fresh installation of a multiple-boot system, use the Windows installer to destroy all existing partitions, then install Windows first, partitioning only the drive space needed for Windows and leaving the rest of the drive space unallocated for later use by the Linux installer.

I don’t plan to provide much further detail on how to install a Windows system; the world gives enough love to Windows already. The rest of this blog post contains my notes on installing Linux and setting up Docker containers for Linux/Apache/MySQL/PHP (LAMP) web development.

Installing Linux

Changing Fedora 28 from a sudo/wheel group implementation to a traditional box with root

Prior to Fedora 28, the operating system reflected a traditional RHEL-style box, with sudo available but requiring a true root password for some operations. It would seem that Red Hat has chosen to emulate the Ubuntu permissions model: a privileged user who owns the desktop, belongs to the wheel group (BSD-style permission to use the sudo command), and supplies their own password to escalate to root via sudo. To my mind, this means the system effectively has no sandboxing or root password protection. I used the sudo su command to escalate to a root prompt, set a root password using the passwd command, edited the group file to remove the desktop user from the wheel group, then rebooted.
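For reference, the group-file edit can be scripted rather than done by hand. The sketch below runs against a local sample copy of the group file so that /etc/group is untouched; “alice” is a hypothetical desktop user, and on a real box gpasswd is the cleaner tool for the same change:

```shell
# Demonstrated on a local sample so /etc/group is never touched.
# 'alice' is a hypothetical desktop user; on a real system, set a root
# password with passwd first, then run: gpasswd -d alice wheel
printf 'wheel:x:10:alice\n' > group.sample

# Drop 'alice' from the wheel group's member list.
sed -i 's/^\(wheel:[^:]*:[^:]*:\)alice$/\1/' group.sample

grep '^wheel:' group.sample   # prints: wheel:x:10:
```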

Some general notes on setting up a Fedora 28 workstation

(My brother is the king of this kind of list.) I noticed that even when planning to do a minimal install for a temporary format for some experiments, I needed to perform the following steps to get the machine where I wanted it to be, so I decided to note them in a text file:

systemctl stop firewalld; systemctl disable firewalld

systemctl start sshd; systemctl enable sshd

dnf install nano (cause you always need a text editor)

Disable selinux:

cd /etc/selinux

nano config

set SELINUX=disabled

Set hostname:

cd /etc

nano hostname

Change gdm from Wayland back to Xorg:

cd /etc/gdm

nano custom.conf

remove # in front of WaylandEnable=false


dnf clean all; dnf update

dnf install denyhosts

enable desktop sharing

adjust power settings: change timeouts for screen and hibernation on ac and battery power.

add the following repositories:



remi-release-28 (note: disabled by default; you must edit the .repo file to enable it before running dnf)

(again) dnf clean all; dnf update

Accept GPG signatures, watch for missing RPM dependencies or conflicts between repositories.
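Two of the edits above (SELinux and gdm) are one-line changes that sed can make non-interactively. Here is a sketch run against local sample files rather than the live system; on a real box the targets are /etc/selinux/config and /etc/gdm/custom.conf:

```shell
# Sample copies stand in for the real files so nothing on the system changes.
printf 'SELINUXTYPE=targeted\nSELINUX=enforcing\n' > selinux.sample
printf '[daemon]\n#WaylandEnable=false\n' > gdm-custom.sample

# Disable SELinux (same effect as editing /etc/selinux/config in nano).
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' selinux.sample

# Uncomment WaylandEnable=false to switch gdm back to Xorg.
sed -i 's/^#WaylandEnable=false/WaylandEnable=false/' gdm-custom.sample

grep '^SELINUX=' selinux.sample            # prints: SELINUX=disabled
grep '^WaylandEnable=' gdm-custom.sample   # prints: WaylandEnable=false
```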

Installing some favorite open-source desktop applications

vlc: video and audio player

vncviewer: install tightvnc package

Filezilla: FTP and SSH file transfer client

rdesktop/rdp123: Windows remote desktop client

Installing binaries of proprietary software packages on Fedora 28 workstation

Some important utilities are only available as binary-only installations; you can find the installers by using Google:



Installing Google Chrome on Fedora 28 workstation

On Fedora 27, installing Google Chrome was easy: the 64-bit RPM installed without a hiccup. However, on Fedora 28, I got broken RPM dependency errors, which I had to resolve one by one, by googling and finding the following commands:

dnf install redhat-lsb

dnf install libXScrnSaver

dnf install libappindicator

dnf install libappindicator-devel

dnf install libappindicator-gtk3

Installing Google Earth on Fedora 28 workstation

Once Google Chrome is installed with its dependencies, you can install Google Earth.

Using Docker containers to create isolated web development platforms without virtualization

On the Linux side, although the Fedora 28 workstation environment can easily support a Linux/Apache/MySQL/PHP (LAMP) server for offline web programming, it ships with PHP 7.x, which is incompatible with some older software that is still in production on CentOS 7.x boxen (Fedora 19 era, PHP 5.x). This code is being re-factored, but the new versions are not ready for production. Rant: PHP deprecates far too aggressively, and has created a demand for legacy-version PHP parsers. I chose not to play with batch files and symbolic links to binaries, as I was worried about creating version mismatches, especially with the glue drivers that link PHP and MySQL.

Again, because of the limitations of the hardware, creating and running a virtual machine guest is not a viable option. At a Linux meetup about a year ago, I learned about Docker and containers and thought they were stupid. Now, I realize that many things I learn at Linux Meetup will be useful later on.

Installing Docker

dnf install docker docker-compose docker-common docker-devel

systemctl start docker; systemctl enable docker

Using Docker images and docker compose files to install a Docker container environment

Rather than installing a traditional LAMP stack, I have decided to install containerized environments, one for the old PHP 5.x environment, and one for the current PHP 7.x environment. I found the following links to be helpful:

Despite the Ubuntu-specific reference in one of the above links, I was able to follow the procedure on a Fedora 28 workstation.

There are endless permutations to Docker, some involving virtual machines, which I tried to avoid given the limited specs of the system on which I am installing. The two links above created self-contained environments that can be started using the command “docker-compose up -d” and stopped using the command “docker-compose down” from within the build context of the Docker container’s directory structure.
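For a rough idea of what such an environment looks like on disk, here is a minimal docker-compose.yml sketch for the PHP 5.x container (not the exact file from the guides I followed; the image tags, host port, and password are placeholder assumptions):

```yaml
# Sketch of a two-service LAMP container environment; values are placeholders.
version: "2"
services:
  web:
    image: php:5.6-apache        # swap in php:7.2-apache for the modern stack
    ports:
      - "8080:80"                # site served at http://localhost:8080
    volumes:
      - ./src:/var/www/html      # edit code here from your normal shell
    depends_on:
      - db
  db:
    image: mariadb:10.1
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder password
```

With a file like this in a directory, running docker-compose up -d from that directory starts the pair, and docker-compose down stops it.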

Docker has a lot of commands and options. Two commands to help get you started:

docker ps (lists running Docker containers; note that what you consider a single container could actually be multiple linked containers, i.e., one for Apache and PHP, another for MySQL, and so on)

docker exec -it 6ca756ef1b50 bash -l (in this case, a shell login to the isolated instance of the Docker container running MariaDB/MySQL so I could run the mysql command line utility)

Due to the nature of LAMP development, most of the time access to the files from your normal shell in the Docker build directories should be enough, along with phpMyAdmin on the local web server.

Tip: if you expect to be truly isolated and offline during your development, install an offline copy of the website on your local system.


Operating systems and freedom: deep thoughts on replacing a cell phone

In 2014, I was in Alaska and wanted to use a Red Pocket SIM card with my iPhone 4 to roam with a lot of data (3GB was included in the package). I ordered the SIM card, installed it (with some difficulty), and was able to connect voice, text, and data to the local AT&T cell network. However, my personal hotspot for wifi tethering was disabled. I made a point of replacing the iPhone with a hackable Android phone, the Google Nexus 5. The software environment on the Nexus 5 was ideal, but the hardware died early. I then got a OnePlus One, which is now in its third year of service. Both the Nexus and the OnePlus were unlocked, and I reflashed both of them with rooted versions of Android. That gave me the ability to swap SIM cards and to edit the various registries that control things like whether tethering is permitted on a prepaid SIM. As it turned out, I only used this capability once, on a trip in 2015. Since then, roaming plans for Canadian cell phones have improved considerably.

I had planned to replace my OnePlus One with a OnePlus 6 in July of this year (2018). However, Google’s war with Amazon has produced some collateral damage: Google apps will no longer run on unofficial builds of the Android kernel. There is a mechanism for registering as a developer, but the upshot is that I would be better off staying with stock OTA updates and a non-rooted image on an Android phone.

Google picked a bad time to do this: rumors are that a cheaper 6.1″ LCD iPhone will be released in September 2018, at US$550 (C$720). If an iPhone only costs C$60 more than a OnePlus 6, I may as well just buy the iPhone. I have been lusting after wifi calling, call handoff to the iPad, and Airplay to the Apple TV.

This got me thinking about vendor lock-in. Microsoft is trying to push everything through its app store; if it follows the macOS path, this will soon be the default, and we could see a future where apps are fully locked down on both Windows and macOS.

Where does that leave freedom? The multiple-boot partition that runs Linux on my personal laptop is in many ways the last place I will truly be free to control my own computer. I used to see Linux as a great server and a mediocre desktop. I now see it as a free desktop: free as in freedom, not simply free as in beer.