Where CPU power matters, and where it does not

The other day, I was thinking about 3 systems: 2 with modest specifications, and 1 with great specs.

A 2009 desktop: an old Vista-class Core 2 Duo, 4GB RAM, 120GB SSD

A 2018 netbook: a Celeron CPU (more like an Atom), 4GB RAM, 64GB eMMC

A 2016 laptop: a Core i7, 16GB RAM, 1TB SSD

It would not be a difficult quiz if the object were to identify the good system versus the bad ones. Hint: it’s the i7.

However, I have learned that some tasks run quite well on limited hardware.

The 2009 desktop was never designed to run with 4GB of RAM and a 120GB SSD from my junkpile, but they certainly have the effect of speeding up the system. This machine, running Fedora Linux, is a VPN server, a file server, a web server, a database server, and can play back 1080p video beautifully over a DVI connection.

The 2018 netbook, which cost less than US$200 new, is essentially a Chromebook case with modest Wintel guts. Its CPU is called a Celeron, but given its clock speed and meagre 2 cores, it may as well be an Atom. And yet, this netbook runs Fedora Linux and Windows 10 Pro quite well. It can even run Photoshop.

I have tried to run virtual machines on both of these systems. Even with a stripped-down OS installer, the results were not pretty. For some applications, specs matter.

Although I have not yet spent serious time with a Raspberry Pi device, the full support in Fedora 29 has made me take a serious look at the platform. I predict results similar to those on the systems I described earlier.

Of course, if you throw good specs at a problem, like a recent laptop with a Core i7, 8 cores, 16GB RAM, and a 1TB SSD, a lot of other things become possible. I am able to run multiple virtual machines under KVM, and have had a situation where a Linux guest was connected to one VPN, a Windows guest was connected to another, and the main desktop (the bare-metal computer) was on the main network connection, not even slowing down while the virtual machine guests did their work.

A recent sighting of a 13″ MSI and a sale on a Dell XPS 13 made me long for a small but powerful computer. However, for travel, all I need is that little netbook. In theory, it would be fun to virtualize a few server environments for portable LAMP development, but I have been exploring containers like Docker, which allow me to isolate systems with different PHP/MySQL versions without the overhead of a full virtual machine.

So the question is not whether you need more power. The question is how much power you need for a specific use.

Containers are becoming important: my goal is to build 2 containers, one with MySQL and PHP 5.x, and one with MySQL and PHP 7.x.

The Linux dialup WIFI server, part 1

My mother has a great cottage on a lake. There is no cell phone service for a few kilometers around. There is a landline. There is a form of rural data wireless that is often down. Thus the need for a dialup wifi server as a backup, for when rural data wireless service is unavailable. Too slow for web surfing, but enough for email and texting. This seems like a perfect task for Linux. After all, Linux is usually a good foundation for web servers, email servers, vpn servers, file share servers, voip servers, and more.

I started this project last year and found that although Windows could drive the built-in 56K modem on a circa-2009 laptop, Linux could not. Fortunately, I have a US Robotics USB 56K modem, which is recognized by Linux. I got as far as a dialup connection without DNS.

This year I learned how to override some settings in wvdial to enable DNS, so I was able to surf on Fedora 28 via dialup. However, I was unable to share the connection via the NetworkManager GUI.
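I will not claim these are exactly the overrides I used, but a minimal wvdial configuration with DNS enabled looks roughly like the following. The modem device, phone number, and ISP credentials are placeholders (a USB modem usually appears as /dev/ttyACM0 or /dev/ttyUSB0):

nano /etc/wvdial.conf

[Dialer Defaults]
Modem = /dev/ttyACM0
Baud = 57600
Init1 = ATZ
Phone = 5551234567
Username = exampleuser
Password = examplepassword
Auto DNS = on
Stupid Mode = on

wvdial (dial, then check /etc/resolv.conf for the nameservers supplied by the ISP)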

To explain why, let me tell you a few stories:

I should point out that sharing a connection like this is something that Windows can do without breaking a sweat. I first installed such a gateway, albeit dialup shared as a NAT over wired Ethernet, on a Windows 98 box in 1999.

Last year (2017), while on vacation at a hotel with a 2-device limit, I confronted a dilemma: how does one choose between a laptop, an iPad, and 2 cell phones? Answer: use a laptop with a second wifi adapter to provide a repeated hotspot under my control.

NetworkManager on Gnome 3 on Fedora offered to share a wired Ethernet connection as a local wifi hotspot, but could not handle other permutations, such as a second USB wifi adapter sharing the first adapter’s connection to the hotel wifi. Fortunately, the machine was able to dual-boot into Windows and share the hotel wifi via the second USB wifi adapter.

Back to the Linux dialup wifi server project.

It would seem that I must configure hostapd in order to create a hotspot that can share the dialup connection. I have followed the instructions but have not succeeded so far; there are still some things I can try. I find myself consulting blog posts from 2002-2011. I suppose that is the kind of deep-time audience that reads these blog posts, a few at a time, in the future.
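For the record, here is a sketch of the moving parts I am trying to assemble, not a working recipe: the interface names (wlan0 for the wifi adapter, ppp0 for the dialup link), addresses, and passphrase are assumptions, and the wifi adapter must support AP mode.

dnf install hostapd dnsmasq

nano /etc/hostapd/hostapd.conf

interface=wlan0
driver=nl80211
ssid=CottageNet
hw_mode=g
channel=6
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase=changeme123
rsn_pairwise=CCMP

ip addr add 192.168.50.1/24 dev wlan0 (static address for the hotspot side)
dnsmasq --interface=wlan0 --dhcp-range=192.168.50.10,192.168.50.50,12h (hand out leases to wifi clients)
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
iptables -A FORWARD -i wlan0 -o ppp0 -j ACCEPT
iptables -A FORWARD -i ppp0 -o wlan0 -m state --state ESTABLISHED,RELATED -j ACCEPT
systemctl start hostapd

In other words: hostapd provides the access point, dnsmasq hands out addresses, and the kernel NATs the wifi clients out over the ppp0 dialup link.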

More to follow at the beginning of October, when I return to the cottage to take another run at the problem. For now I will simulate the hostapd problem with a test machine here, then bring that solution to the cottage and test the integrated setup with the already-working dialup connection.

I suppose I could try to test the dialup connection via a voip analog telephone adapter device, but that seems wrong.

[edit 20180924] My brother had a somewhat similar but more successful experience in 2015: http://www.malak.ca/blog/index.php/2015/03/05/having-to-find-multiple-levels-of-internet-access-oh-fun/

The days of the small email server are over

Although it is still possible to run a small hobby web server, it has become impractical to run a private email server. The challenges of dealing with spam and viruses go back to the beginning of the web hosting industry in the late 1990s. From that era until recently, a person or small team with minimal resources was able to deploy a web and email server with a combination of anti-spam and anti-virus countermeasures, using the same tools available to larger companies. That is no longer the case, due to collateral damage from the spam wars.

Since the late 1990s, anyone with a Windows-compatible computer could reformat that computer with Linux, an operating system similar to UNIX. A person or small team could host web sites and email accounts with DNS, web, database, and email hosting on a single computer (hopefully with a second server as a backup).

Reputation management and blacklisting services minimize the spam arriving in inboxes, but they can also result in mistaken blacklisting and collateral damage, such as innocent firms being blocked because one bad client shares their provider’s infrastructure. Until recently, this could be managed. However, since about 2016 the email threat environment has escalated even further.

This new email threat environment has resulted in increased resource use to process inbound mail (the majority of which is spam, some of it carrying viruses). Worse still, it has become almost impossible to ensure that one’s email will be accepted by the remote party. Even when following all the protocols, smaller email providers are no longer equals with larger ones.

The blocking of email from small servers is not due to any commercial conspiracy; it is simply that some activities require scale, and email has become one of those activities. It may be that the bad actors still exist and are now hosted on the big providers, but no one will blacklist Gmail or Hotmail, and no one will blacklist a company’s mail if it is hosted on one of those systems. There may be 10 or 20 hosting companies of scale that can be described as being in this category.

Of course, there are still thousands of hobby servers exchanging email, and small 1-10 person hosting operations still offering email services to small companies. There are people who have found a successful recipe, for now, to stay on the air. For now. Good luck to them.

To newcomers considering the vocation: do it for the challenge and the knowledge, and build your own email server for fun, but do not try to host email for anything remotely important, like a paying customer.

Some will say, sure, but a big company can host its own email. To which I would say they can, but they shouldn’t. We have seen telcos and even governments outsource their email. They saw this coming, and got out of the business. So should we all. Gmail for Business is good. Microsoft 365 is good. Others might be good.

Not all is lost. For the web, we are still in a golden age. Linux servers big and small, some virtual, allow people and companies to have unprecedented degrees of control over the most minute aspects of their web computing. An old laptop can become a Linux web server. A virtual server can be deployed for US$6/month. Those willing to work and learn can still have a web server that is the equal of any other web server.

UEFI Mode is necessary for Fedora 28 to create a multiple-boot system

I have concluded that, for Fedora 28 at least, it is better to leave a system in UEFI/Secure Boot mode if your goal is to install a multiple-boot system with a boot menu offering a choice between Windows and Linux.

How I discovered that you need UEFI for multiple-boot with Fedora 28

I am not a fan of UEFI/Secure Boot, but I have found that modern hardware has fewer installation issues with Windows and Linux in UEFI mode. So for the past 12 months I have been leaving the BIOS in UEFI/Secure Boot mode.

When I reformatted a system in Legacy BIOS mode, first installing Windows while leaving some unallocated space on the hard drive, then installing Fedora 28 Linux, the grub boot menu did not include an entry for Windows. Attempts to create a manual grub menu entry for Windows did not result in a successful boot.
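For reference, the kind of manual entry the old howtos describe (and that I was attempting) goes into /etc/grub.d/40_custom and looks roughly like this, assuming Windows lives on the first partition of the first disk:

nano /etc/grub.d/40_custom

menuentry "Windows" {
    insmod part_msdos
    insmod ntfs
    set root=(hd0,msdos1)
    chainloader +1
}

grub2-mkconfig -o /boot/grub2/grub.cfg (regenerates the menu on a BIOS-mode Fedora install)

In UEFI mode, by contrast, os-prober finds the Windows Boot Manager on the EFI system partition and the installer builds the entry automatically, which appears to be why the UEFI installs behaved correctly.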

How to create a multiple-boot system

Until recently, I installed Linux by first reducing the size of the Windows partition to create unallocated space, then installing Linux with automatic partitioning enabled. In each case, the Fedora 28 installer used the unallocated space for the Linux partitions and created a multiple-boot menu with Windows as an option. What I did not realize was that the Fedora 28 installer only does this correctly if the BIOS is set for UEFI/Secure Boot.

Why use a multiple-boot setup?

Linux on the desktop offers functionality, performance, and security benefits over Windows. However, there are certain edge cases that require Windows. Rather than simply giving up and letting Windows win because of one occasional “must-have” application, I configure some of my machines as dual-boot between Windows and Linux.

Using multiple boot and virtualization with Linux to (almost) eliminate Windows

First, a small and trivial confession: I have been a Linux system administrator since 1998, running big Linux servers that host web pages and email systems. I have also, reluctantly, become a Windows server system administrator, but that is a story for another time. For most of that time, I have earned at least part of my living from Linux servers, but used a Windows desktop as my personal workstation. I used Linux exclusively on my laptop from 2003-2006, and had a good experience, although during that era you had to be prepared to re-compile your XFree86 subsystem to support DRI for video playback, and things like wifi and power sleep mode were tricky to configure. I have also had Linux laptops for “salesman’s demos” where I wanted to be sure that my website would load during the demo no matter what. I have kept a Linux server at home for many years, mostly for experimentation, but also to serve as a VPN entry point and media file server. However, I have never been “pure” in my adoption or advocacy.

Over the past few weeks since returning from a vacation in Las Vegas, I have intended to write several blog posts about using Linux as a personal desktop. In business, there is the concept of the 80/20 rule: the 20% of features that are needed 80% of the time. With the Windows desktop, I have found, there is a 1/10 rule: the 1% of features needed 10% of the time. Put simply, while I must use Windows at work as it is the corporate standard, I would prefer to use a Linux desktop at home. This post is about keeping Windows in its place: not letting the occasional need for Windows allow it to dominate desktop computing on the strength of one killer feature. I have done this using a combination of multiple boot, virtualization, platform diversity, and remote desktop access. Yes, I know about Wine and Crossover emulation, but I classify those as stupid computer tricks.

People can argue about the relative technical merits of Windows vs other desktop operating systems, and the strengths and deficiencies of Linux as a desktop. For personal use, my most common tools are Chrome for the web, VLC for videos, and a few utilities like FileZilla for file transfer, VNC for remote desktop work, and Putty for SSH terminal access (it is worth observing that all of these tools are themselves open source). The occasional Word file can usually be read by LibreOffice, the free Word clone included in most desktop Linux distributions. As for the rest of the edge cases that require an actual Windows workstation, read on.

Free as in freedom (libre), not just free as in free beer

2 of my computers include legal licenses for Windows that came with the hardware, so cost or license compliance is not my primary concern. I think that the term Libre is more accurate than Free, as it immediately dispenses with the distinction “free as in freedom, not just free as in free beer.”

Windows as a security vector

As a desktop operating system, Windows is dangerous. It has poor security, despite hard work by Microsoft and others to improve its defenses. Linux provides a faster, more stable desktop experience, and is more secure, by virtue of its architecture and the intense peer review of open source code. On the minus side, Linux on the desktop is limited in terms of the number of apps that it supports. Although several core apps such as Chrome, common on Windows, are now available on Linux, there is always a mission-mandatory application, like Sage 50 Accounting or Photoshop, that does not exist for desktop Linux and for which there is no acceptable substitute.

Last summer, before a corporate merger, I traveled to Toronto for a long weekend, carrying an Acer netbook (weak CPU, 2GB RAM, 32GB eMMC drive) that only ran Linux. I was able to support my internal IT clients using an OpenVPN client, remote desktop to manage servers, and TeamViewer (which actually produces a Linux version!) to handle remote tech support tickets. I also had the option of connecting to a Windows computer via remote desktop in order to run Windows-specific software. Post-merger, my new employer uses a kind of VPN for which there is no current Linux support. There is documentation for an older 32 bit version, and I have seen and tried a few howtos that add older 32 bit libraries, symbolic links to .so files, and other tricks to support the obsolete Linux version of the VPN client, but I have so far not succeeded in connecting to the new corporate VPN. A perfect illustration of how Windows needs only one critical app to “win” and ensure its place on a computer desktop.

The netbook: multiple-boot for vacations

Often, when I visit Las Vegas, my flight arrives several hours before I can check into my hotel, and there is not even a paid option to check in early. In those cases, I must leave my luggage with the bell captain and wander the Strip, homeless until check-in. So I travel with the tiniest laptop ever, a netbook that fits into a half-size laptop case, with enough room for an iPad and a few accessories like the power brick, a mouse, and a USB battery pack to charge my phone. Having a small computer is great when you are stuck in an airport lounge, on a train, or on a bus.

Late last year, I purchased an HP Stream 11, with an 11.6″ display and a limited CPU: it is branded Celeron, and is technically a 64 bit CPU with 2 cores, but it is essentially 1.5 times the speed of an old Atom. This matters less than you would think, as the Intel graphics hardware is fast for video playback and keeps graphical desktops responsive despite the weak CPU. Unlike most netbooks on the market, this machine has 4GB instead of 2GB of RAM, and a 64GB instead of a 32GB eMMC drive. This means that a) the machine has enough RAM to run Windows 10, and b) the drive is big enough to hold both Windows and Linux boot partitions.

Multiple boot, because Windows is not optional and the netbook is too limited for virtualization

There are many things you can do with a netbook. Virtualization is not one of them: I did the experiments. There is something cruel about asking a 1.5GHz Celeron CPU with 2 cores and 4GB of RAM to host a virtual guest, and the results were not pretty. I had a second chance to work with virtualization on another laptop with better specs, which is discussed later in this blog post.

Creating a multiple boot between Windows and Linux

There are many good howtos on formatting a computer for multiple boot with Windows and Linux, but here are the essentials. If you are formatting an empty hard drive, partition only some of the space and leave the rest as “unallocated.” Do the full Windows install. Then run the Linux installer and tell it to use automatic partitioning: it will create a second boot partition for Linux, and even install a multiple-boot menu allowing you to choose between Linux and Windows at boot time. Of the usable 57GB portion of the 64GB eMMC drive, I allocated 35GB to Windows and 22GB to Linux; next time, I may allocate 40GB to Windows and 17GB to Linux. If you want to add a Linux boot partition to a machine that already has Windows, you can use a bootable USB “Live” version of Linux and the gparted utility to shrink the Windows partition, freeing up space that becomes “unallocated” on the hard drive.

Docker containers for Linux/Apache/MySQL/PHP (LAMP) development

I support several large enterprise applications written in PHP. Although I have seen PHP run on Windows, I consider it a stupid computer trick. PHP works best as part of a LAMP stack. The problem with PHP is that its developers deprecate (drop as obsolete) functions and features quite aggressively. This means that although in theory a Linux laptop would make an excellent LAMP server, a modern desktop distribution of Linux contains a version of PHP that is too modern to run the enterprise code that I maintain (yes, there are re-factoring projects underway).

I listened to several presentations about Docker and Snap containers, and related technologies like Puppet and Ansible, during meetings at my Linux Meetup group. Each time, I thought the presentations were on a subject too esoteric to be of use to me, but as with most presentations to which I have listened in that group, the information was useful to me at a later time. In my case, I have chosen to use Docker containers for PHP 5.x and PHP 7.x development (see my previous blog post, “Formatting a netbook with dual boot Windows and Linux, plus a Docker container for travel and offline LAMP development.”) These containers allow me to host several incompatible versions of PHP with full isolated environments of Apache and MariaDB (MySQL), without the CPU and RAM overhead of virtualized guest machines. The performance is quite good.

The server: a Linux server at home to serve media files

At home, I have an old Vista-class (Core 2 Duo) computer, upgraded to 4GB RAM and a 120GB SSD, which I use as a personal server. The RAM and SSD make this machine faster than it was when originally deployed in 2007. Unlike my other personal computers, this machine has no dual-boot, virtual guests, or Docker containers; it is a pure bare-metal Linux server. I use it as a file server for media files at home, a VPN server to access my home network, and an rsync server to back up my personal VPS web server. This machine has a Linux desktop which offers mediocre performance. Although the computer can display a 1080p movie without stutter using VLC, its Gnome desktop is not snappy; my 2017 netbook, with a weaker CPU but better video hardware, provides a better desktop experience, with dialogs and interfaces that respond more quickly.

Formatting my best personal computer as multiple boot between Windows and Linux

I have a personal laptop with good specs – i7 CPU, 16GB RAM, 1TB SSD. This machine is connected to an external monitor, keyboard, and mouse to serve as my desktop computer at home. Until recently, this computer (the computer I use the most while at home) ran Windows. Why did this computer, the best computer in the house, get Windows? Because Windows always wins due to the occasional need for a Windows-only tool. What if I need to connect to the VPN at work? What if I have to run Photoshop? What if I need real Excel to run a spreadsheet with VBA macros?

After mostly using a Linux laptop during my vacation (although I had the Windows boot available for Windows-only tasks like the work VPN), I was curious about how well Linux could host virtual machine guests running Windows and Linux on decent hardware. I was also determined to free myself from the need for Windows at home, for security, stability, and privacy. Given today’s threat environment, using a Windows computer seems dangerous. I also figured that the performance of a Linux desktop would be even faster than the same hardware under Windows.

I booted a live USB of Fedora Linux, installed the gparted tool using dnf (yum), then used gparted to reduce the size of the Windows partition. This created free “unallocated” space. I then ran the Linux installer and selected auto-partitioning. Linux used the unallocated space, created its own partitions, and installed a multiple-boot menu that allows me to choose between Windows and Linux at boot time.
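In command form, the live-session portion amounts to something like this (the package name for the NTFS resize tools is from memory):

sudo dnf install gparted ntfsprogs

sudo gparted (select the Windows NTFS partition, Resize/Move to shrink it, then Apply)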

Building virtual machines under Linux with KVM/QEMU

Using the virt-manager tool, I was able to build a virtual guest running Windows 10. Based on advice from the Internet, I used a raw file image, and allocated 4GB RAM and 2 CPU cores. I discovered a few quirks: for example, you have to add a (physically non-existent) “EVTouch USB Graphics Tablet” input device to get rid of mouse stutter, and you have to change the buffer values for pulseaudio. The resulting virtual Windows desktop had great performance for everything except video and audio playback, which stuttered. Unless you tried to play a movie or music file, this guest performed so well that in full-screen it was almost indistinguishable from a desktop running on real bare metal. This guest VM allowed me to connect to the work VPN.
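For those who prefer the command line, roughly the same guest can be defined with virt-install; the name, ISO path, disk location, and sizes below are examples rather than a record of what I actually typed:

virt-install \
  --name win10-guest \
  --memory 4096 \
  --vcpus 2 \
  --os-variant win10 \
  --disk path=/var/lib/libvirt/images/win10.img,format=raw,size=60 \
  --cdrom /home/example/Downloads/Win10.iso \
  --graphics spice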

For fun, I also created a virtual guest running the Fedora 28 desktop. It ran extremely well. I have used this virtual guest to perform experiments (for example: if a desktop is connected via VPN client to another system, is its desktop still accessible remotely via VNC? Answer: yes). I have been able to run both the Windows and Linux virtual machines at the same time, with each connected to a different VPN while the main foreground desktop runs on the regular Internet connection.

Keeping the Windows multiple-boot option available

Although I have not yet needed to reboot into real, bare-metal Windows on my main personal computer at home, I feel better knowing that it is there. If I need to spend a day converting video, or running Photoshop, I can boot back into Windows. Preserving the ability to run Windows makes it possible for me to run Linux as the primary operating system on the bare metal of that machine.

A final note: multiple physical computers

After all this experimentation, I found that instead of using virtual guests for downloading and access to Windows, I was relying on separate physical computers. In a typical session, my main desktop runs Linux, my server runs Linux connected via a privacy VPN, and my netbook runs in Windows mode. My server and netbook are available physically as well as remotely, via VNC remote desktop software.

Of course, I would never have known just how well Linux can run a virtual machine guest had I not reformatted the machine. I am glad that I have retained the ability to choose between Windows and Linux at boot time on 2 of my 3 personal computers. Mostly, I am glad to be able to run Linux on my main home and travel machines, by keeping the theoretical ability to run Windows for the edge cases that usually let it win the war for the desktop.

Formatting a netbook with dual boot Windows and Linux, plus a Docker container for travel and offline LAMP development

When I travel, I like to carry the smallest and cheapest computer that can serve my needs. The HP Stream 11″ netbook (C$250) has a modest single-core 64 bit Intel CPU with 2 threads, 4GB of RAM, and 57GB of usable space on a 64GB eMMC drive. It is inexpensive, light, and disposable, and can be reformatted often, as it is not my main personal computer. Although it has light specs, the RAM and hard drive are double the size of typical netbooks: the drive space makes a multiple-boot environment possible, and the RAM makes Windows bearable. I have also added a 64GB micro SD card for extra storage.

I use Linux for a lot of PHP programming. I also prefer Linux as a desktop for the performance, and for the privacy and freedom. I have to retain the ability to use Windows in case I need to connect to certain systems for work, and the computer is too limited in terms of CPU and RAM for virtualization. Don’t even get me started on Wine.

Partitioning a multiple-boot system with Windows and Linux

This has led me to divide the 57GB of usable space into a 35GB Windows 10 partition and a 22GB Fedora 28 Linux partition. Windows must be installed first, with unallocated drive space left available for the Linux installer to create its own partitions. The Linux installer will also install a multiple-boot manager which lists the Windows boot partition as an available option on startup.

Installing Windows

If you are modifying an existing Windows installation to become multiple-boot, resize the Windows partition to create free, “unallocated” space, which can be used by the Linux installer.

If you are doing a fresh installation of a multiple-boot system, use the Windows installer to destroy all existing partitions, then install Windows first, partitioning only the drive space needed for Windows and leaving the rest unallocated for later use by the Linux installer.

I don’t plan to provide much further detail on how to install a Windows system; the world gives enough love to Windows already. The rest of this blog post contains my notes on installing Linux and setting up Docker containers for Linux/Apache/MySQL/PHP (LAMP) web development.

Installing Linux

Changing Fedora 28 from a sudo/wheel group implementation to a traditional box with root

Prior to Fedora 28, the operating system reflected a traditional RHEL-style box, with sudo available but a true root password required for some operations. It would seem that Red Hat has chosen to emulate the Ubuntu permissions model: a privileged user owns the desktop, belongs to the wheel group (BSD-style permission to use the sudo command), and escalates to root by giving their own password to sudo. To my mind, this means that the system effectively has no sandboxing or root password protection. I used the sudo su command to escalate to a root prompt, set a root password using the passwd command, edited the group file to remove the desktop user from the wheel group, then rebooted.
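In command form, that amounts to something like the following; the user name is a placeholder, and gpasswd is an equivalent to editing /etc/group by hand:

sudo su -

passwd (set a real root password)

gpasswd -d exampleuser wheel (remove the desktop user from the wheel group)

reboot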

Some general notes on setting up a Fedora 28 workstation

(My brother is the king of this kind of list.) I noticed that even when planning to do a minimal install for a temporary format for some experiments, I needed to perform the following steps to get the machine where I wanted it to be, so I decided to note them in a text file:

systemctl stop firewalld; systemctl disable firewalld

systemctl start sshd; systemctl enable sshd

dnf install nano (because you always need a text editor)

Disable selinux:

cd /etc/selinux

nano config

set SELINUX=disabled

Set hostname:

cd /etc

nano hostname

Change gdm from Wayland back to x.org:

cd /etc/gdm

nano custom.conf

remove # in front of WaylandEnable=false

reboot

dnf clean all; dnf update

dnf install denyhosts

enable desktop sharing

adjust power settings: change timeouts for screen and hibernation on ac and battery power.

add the following repositories:

rpmfusion-free

rpmfusion-non-free

remi-release-28 (note: disabled by default; you must edit the .repo file to enable it before running dnf. See the sketch after this list.)

(again) dnf clean all; dnf update

Accept GPG signatures, watch for missing RPM dependencies or conflicts between repositories.
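As for the remi note above, the enabling step is a sketch like this (assuming the repository id is simply “remi”; check the .repo file to confirm):

nano /etc/yum.repos.d/remi.repo (change enabled=0 to enabled=1 in the [remi] section)

or, using the dnf plugin instead of a manual edit:

dnf install dnf-plugins-core

dnf config-manager --set-enabled remi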

Installing some favorite open-source desktop applications

vlc: video and audio player

vncviewer: install tightvnc package

Filezilla: FTP and SSH file transfer client

rdesktop/rdp123: Windows remote desktop client

Installing binaries of proprietary software packages on Fedora 28 workstation

Some important utilities are only available as binary-only installations; you can find the installers by using Google:

Teamviewer

Skype

Installing Google Chrome on Fedora 28 workstation

On Fedora 27, installing Google Chrome was easy: the 64 bit RPM installed without a hiccup. However, on Fedora 28, I got broken RPM dependency errors, which I had to resolve one by one by googling and finding the following commands:

dnf install redhat-lsb

dnf install libXScrnSaver

dnf install libappindicator

dnf install libappindicator-devel

dnf install libappindicator-gtk3

Installing Google Earth on Fedora 28 workstation

Once Google Chrome is installed with its dependencies, you can install Google Earth.

Using Docker containers to create isolated web development platforms without virtualization

On the Linux side, although the Fedora 28 workstation environment can easily support a Linux/Apache/MySQL/PHP (LAMP) server for offline web programming, it supports PHP 7.x, which is incompatible with some older software that is still in production on CentOS 7.x boxen (Fedora 19, PHP 5.x). This code is being re-factored, but the new versions are not ready for production. Rant: PHP deprecates far too aggressively (the old mysql_* functions, for example, were removed outright in PHP 7), and has created a demand for legacy-version PHP parsers. I chose not to play with batch files and symbolic links to binaries, as I was worried about creating version mismatches, especially with the glue drivers that link PHP and MySQL.

Again, because of the limitations of the hardware, creating and running a virtual machine guest is not a viable option. At a Linux meetup about a year ago, I learned about Docker and containers and thought they were stupid. Now, I realize that many things I learn at Linux Meetup will be useful later on.

Installing Docker

dnf install docker docker-compose docker-common docker-devel

systemctl start docker; systemctl enable docker

Using Docker images and docker compose files to install a Docker container environment

Rather than installing a traditional LAMP stack, I have decided to install containerized environments, one for the old PHP 5.x environment, and one for the current PHP 7.x environment. I found the following links to be helpful:

https://github.com/sp0ker/lamp-docker

https://linuxconfig.org/how-to-create-a-docker-based-lamp-stack-using-docker-compose-on-ubuntu-18-04-bionic-beaver-linux

Despite the Ubuntu-specific reference in one of the above links, I was able to follow the procedure on a Fedora 28 workstation.

There are endless permutations to Docker, some involving virtual machines, which I tried to avoid given the limited specs of the system on which I am installing. The 2 links above created self-contained environments that can be started using the command "docker-compose up -d" and stopped using the command "docker-compose down" from within the build context of the Docker container's directory structure.
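To give a flavour of what those guides produce, here is a minimal compose file for the PHP 5.x environment; the image tags, port, and password are examples, and the PHP 7.x environment is the same layout in a separate directory with a php:7.2-apache image. Note that the stock php images need the MySQL extension added (docker-php-ext-install mysqli) via a small Dockerfile, which the linked guides cover.

nano docker-compose.yml

version: "3"
services:
  web:
    image: php:5.6-apache
    ports:
      - "8056:80"
    volumes:
      - ./html:/var/www/html
  db:
    image: mariadb:10.3
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - ./dbdata:/var/lib/mysql

docker-compose up -d (run from the directory containing docker-compose.yml)

docker-compose down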

Docker has a lot of commands and options. Two commands to help get you started:

docker ps (lists running Docker containers; note that what you consider a single container could actually be multiple linked containers, i.e. one for Apache and PHP, another for MySQL, and so on)

docker exec -it 6ca756ef1b50 bash -l (in this case, a shell login to the isolated instance of the Docker container running MariaDB/MySQL so I could run the mysql command line utility)

Due to the nature of LAMP development, most of the time, access to the files from your normal shell in the Docker directory structure should be enough, along with phpMyAdmin on the local web server.

Tip: if you expect to be truly isolated and offline during your development, install an offline copy of the php.net website on your local system.
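A rough sketch of that last tip; the archive file name is from memory, and the manual is offered as a set of HTML files on the php.net documentation download page:

mkdir -p ~/docs/php-manual

tar -xzf ~/Downloads/php_manual_en.tar.gz -C ~/docs/php-manual (then bookmark the extracted index.html in a browser)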

 

Operating systems and freedom: deep thoughts on replacing a cell phone

In 2014, I was in Alaska and wanted to use a Red Pocket SIM card with my iPhone 4 to roam with a lot of data (3GB was included in the package). I ordered the SIM card, installed it (with some difficulty), and was able to connect voice, text, and data to the local AT&T cell network. However, my personal hotspot for wifi tethering was disabled. I made a point of replacing the iPhone with a hackable Android phone, the Google Nexus 5. The software environment on the Nexus 5 was ideal, but the hardware died early. I then got a OnePlus One, which is now in its 3rd year of service. Both the Nexus and the OnePlus were unlocked, and I reflashed both of them with rooted versions of Android. I then had the ability to use a SIM card and to edit the various registries that control things like whether tethering is permitted on a prepaid SIM. As it turned out, I only used this capability once, on a trip in 2015. Since then, roaming plans for Canadian cell phones have improved considerably.

I had planned to replace my OnePlus One with a OnePlus 6 in July of this year (2018). However, Google’s war with Amazon has produced some collateral damage: Google apps will no longer run on unofficial builds of the Android kernel. There is a mechanism for registering as a developer, but the point is, at that point I would be better off staying with stock OTA updates and a non-rooted image on an Android phone.

Google picked a bad time to do this: rumors are that a cheaper 6.1″ LCD iPhone will be released in September 2018, at US$550 (C$720). If an iPhone only costs C$60 more than a OnePlus 6, I may as well just buy the iPhone. I have been lusting after wifi calling, call handoff to the iPad, and Airplay to the Apple TV.

This got me thinking about vendor lock-in. Microsoft is trying to push software through its app store; if it follows the MacOS path, this will soon be the default, and we could see a future in which apps are fully locked down on both Windows and MacOS.

Where does that leave freedom? The multiple-boot partition that runs Linux on my personal laptop is, in many ways, the last place I will truly be free to control my own computer. I used to see Linux as a great server and a mediocre desktop. I now see it as a free desktop: free as in freedom, not simply free as in beer.

Windows 10: setting network profile to private or public

If you are experiencing problems with file and print sharing, either as a server or as a client, it may be due to the current network profile of your Internet connection. Windows makes a distinction between private networks (home and work) and public networks (hotel wifi, Starbucks wifi, etc.) The idea is to avoid sharing your episodes of Gilligan’s Island with other people at the Starbucks by accident.

Windows often asks you to select whether a network profile should be private or public, but sometimes the issue is unclear.

To see and change the current network profile, right-click on the network icon (wired or wifi) at the bottom right, near the clock, and click on “Change connection properties.” You will then be able to view and change the network profile.

 

Muting Chrome audio by default and un-muting tabs selectively

Many web sites play audio without permission, so I usually keep audio muted for the entire desktop. However, sometimes I like to watch Netflix, or a media file in VLC, on a second screen while I load other web pages on my main screen. This makes it necessary to mute audio in Chrome itself while allowing other applications to play sound, or to allow one web page to play Netflix or Youtube while others stay muted.

Enabling mute function

To allow muting on individual Chrome tabs, enter the following address in the Chrome URL bar:

chrome://flags/#sound-content-setting

Enable the option “Sound content setting.”

Click “Relaunch now.”

Enabling mute controls per tab

To enable a control that allows for muting of individual Chrome tabs, enter the following address in the URL bar:

chrome://flags/#enable-tab-audio-muting

Enable the option “Tab audio muting UI control.”

Click “Relaunch now.”

Muting all Chrome tabs by default

To enable a control that allows for muting all Chrome tabs by default, enter the following address in the URL bar:

chrome://settings/content/sound

Disable the option “Allow sites to play sound (recommended).” When this option is disabled, its label reads “Mute sites that play sounds.”

Click “Relaunch now.”

Selectively unmuting or muting Chrome tabs playing audio

When a tab is playing audio, an audio icon appears on the tab, indicating whether sound is muted; click the icon to mute or unmute the tab. There is also a sound control at the right of the URL bar which offers more detailed settings. You can also right-click on the tab label and select mute or unmute from the context menu.