The shell

In the early 1990s, during the glory days of UNIX culture, being able to score a telnet window, a shell account on a UNIX server, was a big deal.

Back then, UNIX shell accounts came from a combination of borrowed credentials, academic accounts, and commercial providers, and they gave access to finger and talk and FTP and pine mail and usenet readers and IRC.

UNIX culture, what remains of it, has been subsumed into the Linux server culture, which itself is being eaten by cloud and devops. But one thing that remains, for those who want it: the shell. I remember deploying a Linux server 20 years ago — it was non-trivial and required the re-purposing of Wintel metal. That choice remains (a tiny netbook running Linux is like having a tiny mainframe with its own UPS and console), but other choices, like $5 per month cloud servers and VMware Player guest instances and Raspberry Pi servers, make the shell available to anyone who wants it.

We do not realize just how lucky we are.

De-clouding: hosting virtual servers on-premises to reduce hosting telecom burn


Enterprises that have a significant monthly cloud bill should business-case an approach that uses the cloud for public-facing assets but considers on-premises virtual hosting with no incremental telecom cost. A local deployment of a set of virtual servers can be done in a Linux or Windows context. Depending on hardware, platform, and workload, an on-premises server should be able to host between 1 and 7 virtual server guests.

Most server deployments are now virtual. Aside from edge cases that rely on raw horsepower or low latency, like file servers and voice-over-IP servers, baremetal rarely wins the business case.

The flexibility and benefits of virtualization have led to practices and tools that require multiple versions of a server image, for devops or redundancy, and powerful automation tools that can script the creation, orchestration, and destruction of virtual servers as needed.

There are also network-effect reasons why it is not such a bad thing for a small business to be left with an AWS account by a web developer.

However, even enterprises in the 20 employee range will accumulate a number of server processes, most hosted on public cloud services, which will each incur recurring monthly fees. Some of those enterprises would save money by bringing the processes on-premises and in-house.

There are business cases that make sense for the cloud. The web site should live in the cloud. As a related example, though, the web site’s backup can be hosted on a local server connected to the on-premises DSL line.

For Linux workloads, virt-manager with KVM and QEMU is a good combination — GNOME Boxes leverages this toolset as well.
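
For the command line, virt-install (part of the same tooling) can create a guest in one step. A sketch, with a hypothetical guest name, sizes, and ISO path:

```shell
# Create a KVM guest with virt-install (all values are examples).
virt-install \
  --name backoffice01 \
  --memory 4096 \
  --vcpus 2 \
  --disk size=40 \
  --cdrom /var/lib/libvirt/images/Fedora-Server.iso \
  --os-variant fedora29
```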

A hybrid approach typically keeps the web marketing server, as well as email and calendar services, on public clouds, but performs backoffice, ERP, database, and backup operations in virtual servers hosted on on-premises equipment, at a lower cost than the equivalent service from an asset hosted externally by a vendor. Of course, this comes with the responsibility for an offsite backup and disaster recovery plan. Start with 2 hard drives, and take one offsite each week. Then get fancier, maybe with another on-premises server at another campus.

Systems can even be hybrid, with a public-facing website on a cloud service mounting cheap assets stored on an on-premises server.

For Windows, some shops use VMware quite effectively, especially with its server and management tools. However, I would suggest a strong look at Windows Hyper-V, which hosts Linux guests just as well as it does Windows guests, and fits nicely into a corporate environment.

In the same big-company theme, the Azure AD cloud deserves a look. Microsoft has shown a vision of the future in which the cloud orchestrates a mix of cloud and on-premises assets with a common Active Directory.

By considering where the public cloud adds value to a server deployment, and finding savings by bringing some virtual server workloads back on-premises and in-house, enterprises can achieve significant savings that can be re-purposed to other priorities.

(Almost) off the grid

Sitting on the deck in front of a lake in the Laurentians north of Montreal, I find myself almost off the grid. There is no cell phone coverage for about 20KM before the driveway, so no 3G wifi hotspot. A rural data wireless provider with antennas on mountaintops usually provides a decent wifi connection, but a power surge destroyed the radio base station, and here I find myself reduced to my last 2 lines of communication: satellite TV and an old-school voice landline.

Yes, I did make a dialup connection over the landline last week: it was 24Kbps, slow even by dialup standards, and modern web pages, even those optimized for lower-speed connections like the HTML version of Gmail, are completely unusable.

Colleagues are covering for technical support responsibilities in civilization, and my brother will drive me this afternoon to the community center, 7KM away. Until then, I find myself essentially cut off: no WhatsApp texts, no checking for the latest headlines, weather, or trivia, no streaming audio for my AirPods.

So here I am typing on a computer in offline mode, to be pasted to the Internet later today. This reminds me of a project I have put off several times: a complete offline web development environment. Hosting a LAMP server is trivially easy, whether on the baremetal of a Linux laptop, or as a vm guest on a Windows laptop, but one must take precautions to be productive offline: I need to install a local copy of the php.net documentation, and I have found some interface code that must be redone to invoke local copies of JavaScript libraries, rather than pulling them in from remote locations at run time.
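
Redirecting pages to local JavaScript copies can be done with a one-line sed rewrite. A sketch, where the CDN URL and library filename are hypothetical examples:

```shell
# Point a page at a local copy of a JavaScript library for offline work.
mkdir -p js
printf '<script src="https://code.jquery.com/jquery-3.3.1.min.js"></script>\n' > page.html
# While still online, fetch the library once (shown commented out here):
# curl -o js/jquery-3.3.1.min.js https://code.jquery.com/jquery-3.3.1.min.js
sed -i 's#https://code.jquery.com/#js/#g' page.html   # rewrite CDN prefix to local path
```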

People tell me that I will benefit from being “unplugged,” that it will relax me. They are mistaken, although I will survive until I return to the city Monday morning, sustained this afternoon by a half hour of the community center’s free wifi. I hope the rural data wireless base station is replaced soon: my Mom spends the summer up here, and I hope she will soon have wifi for her iPad again.

By the way, here at the community center: wifi is awesome, never take it for granted.

Using dialup at the cottage due to a rural wireless outage

Back from a weekend at the family cottage. Barbecue in front of the lake, good weather, my brother’s birthday party.

The family cottage is outside cell phone range. Normally, the cottage has wifi from a rural wireless provider, a satellite TV link, and a landline.
The rural data wireless was out. Using a US Robotics USB 56K modem, I was able to make a 24Kbps connection, which is low, even by dialup standards. The poor performance is due to the analog exchange and noise on a rural line: in the city one would expect 50Kbps. There are “light” versions of sites like Gmail that load faster on slower connections, but even the simplest requests would often time out and require a reload.

It was fortuitous that I had left a US Robotics USB modem in the cottage 10 years ago.

I was able to make a dialup connection with my Windows 10 laptop, but the experience was not as good as with previous versions: sharing the connection via mobile hotspot did not work, and connection sharing via wifi did not trigger the wizard that sets up ad-hoc networking on the wifi adapter. These things worked well in prior versions of Windows, as recently as Windows 8.1.

At the community center 7KM away, near the dépanneur, there is free wifi and a picnic table. On my Linux laptop, I was able to apt install wvdial on the free wifi. wvdialconf autodetected the modem and the man page made it easy to create a dialup file /etc/wvdial.conf (even to find the option for pulse dialing: “ATDP”):

[Dialer Defaults]
Init1 = ATZ
Init2 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
Modem Type = USB Modem
Phone = xxxxxxxxxx
ISDN = 0
Password = xxxxxxxx
New PPPD = yes
Username = xxxxxxxx
Modem = /dev/ttyACM0
Baud = 33600
Dial command = ATDP

wvdial was able to make a 24Kbps ppp connection. I gained some insights, and learned enough to complete a dialup wifi server, based on wvdial, hostapd and dnsmasq. Given the limited speed, there is little point in deploying a dialup server. I will, however, continue to maintain the ability to connect as a dialup workstation, from both my windows and linux laptops.
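
For reference, once wvdial has brought up ppp0, and hostapd and dnsmasq are serving clients on the wifi adapter, the core of the dialup wifi server is just IP forwarding and NAT. A sketch, run as root, with assumed interface names:

```shell
# Forward wifi client traffic out over the dialup ppp link (interface names assumed).
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
iptables -A FORWARD -i wlan0 -o ppp0 -j ACCEPT
iptables -A FORWARD -i ppp0 -o wlan0 -m state --state RELATED,ESTABLISHED -j ACCEPT
```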

Modern websites and i/o make dialup almost useless. There may be edge cases, especially involving security or remote telemetry, but for consumer use, I suggest driving to the free wifi at the community center.

The 2 simplest devices in my home

The 2 machines in my home that I like best are simple and not smart. Received as gifts: a new convection toaster oven that goes tick-tick-tick, and a bluetooth soda-can speaker with very little intelligence.

Sony SRS-XB10 portable wireless speaker with Bluetooth

This speaker can pair with a phone, iPad, or a computer. It can play audio. It can act as a speakerphone. It is small, rechargeable, wireless, and sounds bigger than it is. It does not have AI, a personal assistant, or skills, and does not tie into any home automation. It is just a speaker.

Black and Decker TO1950SBD convection toaster oven

This toaster oven is convection, which means that it has a fan that blows the air around while baking. It is good at baking croissants. It has a temperature control, and a timer. With a spring. That goes tick-tick-tick.

A picture of croissants baked in the toaster oven

Connecting to a Checkpoint VPN from Fedora 29

One of the systems I maintain requires access to a Checkpoint VPN. Until recently, this has meant that I needed a Windows laptop or vm when I traveled. The recipe to connect to the vpn using a command line client called “snx” seems obvious, but is not. Here is how I was able to connect a Fedora 29 Linux machine with version 800007075 of the snx command line client.

Install the Oracle Java JRE

Download Linux x64 RPM:

https://www.java.com/en/download/linux_manual.jsp

Use rpm at the command line instead of using the software installer gui.

(change version number as needed)

rpm -ivh jre-8u191-linux-x64.rpm

dnf install pkgconf-pkg-config

dnf install libcanberra-gtk2.i686

dnf install /lib/ld-linux.so.2 libX11.so.6 libpam.so.0 libstdc++.so.5 libnsl.so.1

According to this link:

https://unix.stackexchange.com/questions/450229/getting-checkpoint-vpn-ssl-network-extender-working-in-the-command-line

versions of the snx command line client > 800007075 are not compatible with recent Linux kernels. So we will obtain a copy of that specific version of the SNX command line client:

[root@server etc]# cd ~desktop/tmp/
[root@server tmp]# wget https://www.fc.up.pt/ci/servicos/acesso/vpn/software/CheckPointVPN_SNX_Linux_800007075.sh -O snx_install.sh
--2018-12-30 07:34:08--  https://www.fc.up.pt/ci/servicos/acesso/vpn/software/CheckPointVPN_SNX_Linux_800007075.sh
Resolving www.fc.up.pt (www.fc.up.pt)… 193.137.24.4
Connecting to www.fc.up.pt (www.fc.up.pt)|193.137.24.4|:443… connected.
HTTP request sent, awaiting response… 200 OK
Length: 973618 (951K) [application/x-sh]
Saving to: ‘snx_install.sh’

snx_install.sh 100%[====================>] 950.80K 378KB/s in 2.5s

2018-12-30 07:34:26 (378 KB/s) - ‘snx_install.sh’ saved [973618/973618]

and now we make the script executable:

[root@server tmp]# chmod 755 snx_install.sh

run the installation script:

[root@server tmp]# ./snx_install.sh
Installation successful

test a command line connection (use values appropriate for your username and vpnservername)

[root@server tmp]# snx -s vpnservername -u username@domain.com
Check Point’s Linux SNX
build 800007075
Please enter your password:
SNX authentication:
Please confirm the connection to gateway: *.domain.com
Root CA fingerprint: XXX XXX XXXX XXX XXX XXXX XXXX XXX XXX XXXX
Do you accept? [y]es/[N]o:
y
SNX - connected.

Session parameters:
===================
Office Mode IP : x.x.x.x
DNS Server : x.x.x.x
Secondary DNS Server: x.x.x.x
DNS Suffix : domain.com
Timeout : 12 hours

Some useful links:

https://www.java.com/en/download/linux_manual.jsp

https://kenfallon.com/checkpoint-snx-install-instructions-for-major-linux-distributions/

https://kenfallon.com/installing-snx-on-fedora-28/

https://unix.stackexchange.com/questions/450229/getting-checkpoint-vpn-ssl-network-extender-working-in-the-command-line

https://www.fc.up.pt/ci/servicos/acesso/vpn/software/CheckPointVPN_SNX_Linux_800007075.sh

Installing Fedora 29 on the Raspberry Pi 3 B+

TL;DR

The recent release of Fedora 29 for the Raspberry Pi means that the hobbyist hardware platform can finally be considered as a viable alternative to Windows-on-Intel (“Wintel”) hardware to host Linux server applications. Although too slow to operate as a useful Graphical User Interface (“GUI”) desktop, the Pi does a good job running a text-only web, file, and vpn server. This makes Fedora on Pi perfect for serving video files at home, or as a vpn server for a small home office or satellite office.

Actual practical HOWTO section

This section contains actual procedural information. After the practical information, there is a “Rant section.”

Choosing the version you will install

For now, I suggest you avoid the GUI desktop altogether and stick with a text-only web, file, and vpn server based on Fedora 29 Minimal.

Downloading the Fedora 29 image file

Download the file for Fedora Minimal:

https://alt.fedoraproject.org/alt

How this is different from a Wintel install

A Wintel build consists of boot media, either a usb drive or a dvd drive, which contains a bootable image that includes an installer used to format another device, typically a hard drive, with boot and other partitions.

The Raspberry Pi Fedora installer consists of a raw disk image you will write to a micro-SD card. In the Pi world, everything boots from a FAT32 UEFI partition on a micro-SD card.

Understanding the Fedora 29 image file

The Fedora 29 image contains a FAT32 partition with an implementation of UEFI, and several ext4 partitions.

Decompressing the file containing the Fedora 29 image file

In Windows, use WinRAR to decompress the image file.

In Linux:

from: https://fedoraproject.org/wiki/Architectures/ARM/Raspberry_Pi

xzcat Fedora-IMAGE-NAME.raw.xz | sudo dd status=progress bs=4M of=/dev/XXX # Location of your media (will be sdX or mmcblkX depending on hardware)

Formatting the micro-SD card with the image

Windows:

https://sourceforge.net/projects/win32diskimager/

Linux:

(Decompression and the image write to media are part of the same operation above, under “Decompressing the file containing the Fedora 29 image file”)

Using a partition tool to expand the / partition on the micro-SD card

Windows:

https://www.easeus.com/partition-manager/epm-free.html

Linux:

gparted /dev/XXX

Following the text setup wizard

On first boot, at the bottom of the screen, you will see a set of questions regarding the initial system username, password, and other settings. Follow the wizard, and make sure you create a root password. The system will then boot.

Rebooting into a standard machine

You will boot into a standard Linux login screen. Login as root.

Doing standard housekeeping and a standard build

From this point on in the build, the machine feels like a “normal” Linux box.

Using nmcli to set a static ip address

https://unix.stackexchange.com/questions/290938/assigning-static-ip-address-using-nmcli
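
The linked answer boils down to a few nmcli commands. A sketch, with a hypothetical connection name and addresses:

```shell
# Set a static IPv4 address on an existing connection (example values).
nmcli con mod "Wired connection 1" ipv4.addresses 192.168.1.10/24 \
  ipv4.gateway 192.168.1.1 ipv4.dns 192.168.1.1 ipv4.method manual
nmcli con up "Wired connection 1"
```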

Using dnf to install nano, rsync, and net-tools

A lot of things that you take for granted, like nano, rsync, and ifconfig (part of net-tools), do not exist until you add them with dnf.

Editing the selinux config file

https://www.centos.org/docs/5/html/5.1/Deployment_Guide/sec-sel-enable-disable.html
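
On Fedora, the file is /etc/selinux/config. The relevant lines look like this (pick the mode that fits your deployment, then reboot to apply):

```
# /etc/selinux/config
SELINUX=permissive        # enforcing | permissive | disabled
SELINUXTYPE=targeted
```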

Modifying or disabling firewalld

systemctl stop firewalld; systemctl disable firewalld

(some people like firewalls, I think they are lazy – just turn off unneeded ports!)

Enabling an SSHD server

systemctl start sshd; systemctl enable sshd

Adding the rpmfusion repos

https://rpmfusion.org/Configuration

dnf clean all; dnf update

reboot

Doing a standard LAMP build

https://www.digitalocean.com/community/tutorials/how-to-install-lamp-linux-apache-mysql-php-on-fedora

(notes: dnf install mariadb mariadb-server instead of dnf install mysql mysql-server, dnf install php-mysqlnd instead of dnf install php-mysql)

Installing a free cert with Let’s Encrypt

https://letsencrypt.org/

Installing Nextcloud

Normally, I would just say, dnf install nextcloud and watch yum dependencies in action. Unfortunately, there is a missing dependency for php-composer(ircmaxell/password-compat) which breaks nextcloud in yum/dnf. This is not a good thing, however it is not specific to the Pi, it is a Fedora 29/Nextcloud thing (and it would appear that Owncloud and Nextcloud do not get a lot of maintainer love in the yum repos).

As it happens, I was able to deploy nextcloud 2 weeks ago on a cloud server using this script, and it worked just as well on this installation of Nextcloud on Fedora on the Pi:

https://help.nextcloud.com/t/fully-automated-nextcloud-on-fedora-installation-script/27276

Doing a standard Samba build

https://www.digitalocean.com/community/questions/installing-configuring-samba-on-centos-7-droplet

Optimizing Samba file shares, especially for MacOS Finder clients:

http://blog.gordonbuchan.com/blog/index.php/2018/10/21/fixing-slow-macos-finder-on-samba-file-share-optimizing-for-windows-clients/

Doing a standard OpenVPN build

http://blog.gordonbuchan.com/blog/index.php/2018/01/28/a-corrected-procedure-for-the-installation-of-openvpn-on-fedora-27/

Enabling the rc-local service in systemd

https://www.linuxbabe.com/linux-server/how-to-enable-etcrc-local-with-systemd

Rant section

Although some people like the rant, others just want a HOWTO. So the rant goes here, after the practical.

It is a big deal that Fedora treats the Pi’s aarch64 as equal to Wintel’s x64

The ability to use standard Red Hat software tools and procedures means that aside from differences in the installation process, the Pi feels like a normal, if slightly slow, Red Hat machine. Because Fedora on Pi has full standard repositories, you can use standard howtos and procedures to do a build.

Until now, Linux was effectively relegated to dumpster-diving Wintel boxen

Intel hardware originally designed for Windows is the commodity computing platform on which Linux was born. It is comforting to know that Linux now has a second viable hardware platform, which will grow in capability over time.

A motorcycle engine trying to power a car

I remember a TV show where a team of mechanics scoured a junkyard and found a motorcycle engine powerful enough to drive a car-sized frame, chassis, and wheels. The motorcycle engine was large by motorcycle standards – and was able to power the car form factor, but it struggled with the task. Like the motorcycle engine in the TV show, the Pi is now powerful enough to run a full Red Hat Linux (Fedora) server, but it struggles with a full GUI desktop like Gnome 3 or XFCE.

Vision and leadership and big decisions by Fedora and rpmfusion yum repo maintainers

The big decision by Fedora to provide full support for the Pi’s aarch64 ARM cpu gives Linux its own hardware platform, for the first time. The Fedora project maintainers and the rpmfusion repo maintainers did an excellent job of ensuring that the yum repositories contained aarch64 binary rpm packages for everything that had an x64 package. Let me just say it is inspiring to see people with vision actually execute and do something like this.

Using standard procedures and software libraries

What is impressive is that the main yum repositories for fc29, as well as the rpmfusion free and non-free repositories, fully support aarch64. That means you can dnf install vlc filezilla, and it will work. Some repositories are not there yet, such as the remi rpm repo for php56 support on modern fedora, so I will be limited to php72 for the moment. Some third-party repos, like google-chrome, do not yet support aarch64, however I was able to install Chromium.

What the Pi is not so good at: GUI desktop

When I first installed Fedora 29 Workstation on the Pi, the GUI was virtually unusable. It became better over time, but would sometimes freeze. I turned off the Gnome 3 desktop, did all the dnf updates, then turned on Gnome 3. After a few tweaks with the Gnome Tweaks tool, I was able to run LibreOffice Writer, Chromium, and FileZilla, but only very slowly. XFCE was slightly faster, but not enough to make a difference. Although rpmfusion allowed me to dnf install vlc, VLC was virtually unusable. Still, props to everyone in that value chain for vision: a hardware rev or 2 from now, VLC will be usable.

The Pi is fast as a text server

I decided to go the other way and install a text-only server from the ground up. There are some Raspberry Pi specifics to the build, addressed in the HOWTO section above. However, the rest was identical to the way one would build a Wintel Linux box. On my brother’s advice, I decided to use Fedora 29 Minimal. It really is minimal: I had to use dnf to install nano and rsync. However, I was able to do a dnf update including the rpmfusion free and non-free repositories. Because I had full standard repositories, I could use standard howtos and procedures to do my build.

I then built a standard LAMP web server, an SSL cert with Let’s Encrypt, a Nextcloud media server, a Samba file share, and an OpenVPN server. The server performed well – so well that I am already planning to deploy a few in the field as OpenVPN servers and rsync backup data dumps.

A random reference to a satirical book about home servers

https://gizmodo.com/342499/microsofts-brainwashing-childrens-book-mommy-where-do-servers-come-from

 

Using pobox.com redirection and a free webmail account to host branded email for a domain

A friend registered a domain name, and wanted to send and receive branded email using that domain. If your project has a modest budget, you can send and receive branded domain email using a combination of a free webmail account and a pobox.com redirection account for US$20/year.

You can use pobox.com as your receiving post office, and have it forward your inbound email messages for that domain to a free webmail account. You can use the pobox.com SMTP server as an outbound SMTP gateway, with username and password authentication.

By publishing SPF and DKIM records in the DNS zone file for your domain, you can greatly increase the chances that branded email sent via the pobox.com server will be accepted by the remote party and not be mistaken for spam.
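
Once the records have propagated, they can be checked from any machine with dig (example.com is a placeholder for your domain):

```shell
dig +short MX example.com     # should list mx-1/2/3.pobox.com
dig +short TXT example.com    # should include the v=spf1 record
```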

Checklist: what you need for branded email:

A domain (example.com)

a DNS control panel for the domain (I don’t let my hosting ISPs take control of my DNS; I control it via the free DNS control panel that came with my registrar, GoDaddy. You can probably do the same with yours.)

A free webmail account (for example, a free @gmail.com account).

A pobox.com redirection account (US$20/year)

Setting up DNS

Log into DNS control panel

create MX records for your domain:

MX @ mx-1.pobox.com priority 12
MX @ mx-2.pobox.com priority 24
MX @ mx-3.pobox.com priority 36

create SPF and DKIM records:

TXT “v=spf1 include:pobox.com”

For the DKIM record, refer to the custom value generated for your domain, and available in the control panel for your pobox.com account.

Setting up a mail client

Start with an email client, like the Mail app on an iPad or iPhone.

Instead of choosing a branded email service with a logo, like Gmail or Yahoo, choose “other” and define a custom email service.

Name: Firstname Lastname
Email: firstname.lastname@example.com
Password: passwordforgmailaccount
Description: firstname.lastname@example.com

Incoming mail server

Host name: imap.gmail.com
User Name: username@gmail.com
Password: passwordforgmailaccount

Outgoing mail server

Host name: smtp.pobox.com
username: username@pobox.com
password: passwordforpoboxaccount

Fixing slow MacOS Finder on Samba file share, optimizing for Windows clients

If you are trying to figure out why your MacOS Finder is slow when it connects to a Samba file share on a Linux server, you are in the right place.

I found the solution in this post:

https://medium.com/@augusteo/fixing-slow-macos-finder-connection-to-linux-samba-server-ed7e5ea784c1

Here is what you need to add to /etc/samba/smb.conf on the Samba server:

vfs objects = fruit
fruit:aapl = yes
fruit:encoding = native
fruit:locking = none
fruit:metadata = stream
fruit:resource = file

While I was searching for things that could speed up a MacOS Finder client’s session, I found a number of optimizations that helped speed Windows clients connected to a Samba file share.

The best of these was a post:

https://eggplant.pro/blog/faster-samba-smb-cifs-share-performance/

with these suggestions for /etc/samba/smb.conf on the Samba server:

[global]

# FORCE THE DISK SYSTEM TO ALLOCATE REAL STORAGE BLOCKS WHEN
# A FILE IS CREATED OR EXTENDED TO BE A GIVEN SIZE.
# THIS IS ONLY A GOOD OPTION FOR FILE SYSTEMS THAT SUPPORT
# UNWRITTEN EXTENTS LIKE XFS, EXT4, BTRFS, OCS2.
# IF YOU USE A FILE SYSTEM THAT DOES NOT SUPPORT UNWRITTEN
# EXTENTS, SET "strict allocate = no".
# NOTE: MAY WASTE DRIVE SPACE EVEN ON SUPPORTED FILE SYSTEMS
# SEE: https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=798532

   strict allocate = Yes

# THIS IS TO COUNTERACT SPACE WASTAGE THAT CAN BE 
# CAUSED BY THE PREVIOUS OPTION 
# SEE: https://lists.samba.org/archive/samba-technical/2014-July/101304.html

   allocation roundup size = 4096

# ALLOW READS OF 65535 BYTES IN ONE PACKET.
# THIS TYPICALLY PROVIDES A MAJOR PERFORMANCE BENEFIT.

   read raw = Yes

# SERVER SIGNING SLOWS THINGS DOWN WHEN ENABLED.
# THIS WAS DISABLED BY DEFAULT PRIOR TO SAMBA 4.
# Thanks to Joe in the comments section!

   server signing = No

# SUPPORT RAW WRITE SMBs WHEN TRANSFERRING DATA FROM CLIENTS.

   write raw = Yes

# WHEN "strict locking = no", THE SERVER PERFORMS FILE LOCK
# CHECKS ONLY WHEN THE CLIENT EXPLICITLY ASKS FOR THEM.
# WELL-BEHAVED CLIENTS ALWAYS ASK FOR LOCK CHECKS WHEN IT IS
# IMPORTANT, SO IN THE VAST MAJORITY OF CASES,
# "strict locking = auto" OR "strict locking = no" IS ACCEPTABLE.

   strict locking = No

# TCP_NODELAY:
#    SEND AS MANY PACKETS AS NECESSARY TO KEEP DELAY LOW
# IPTOS_LOWDELAY:
#    [Linux IPv4 Tweak] MINIMIZE DELAYS FOR INTERACTIVE TRAFFIC
# SO_RCVBUF:
#    ENLARGE SYSTEM SOCKET RECEIVE BUFFER
# SO_SNDBUF:
#    ENLARGE SYSTEM SOCKET SEND BUFFER

   socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=131072 SO_SNDBUF=131072

# SMBWriteX CALLS GREATER THAN "min receivefile size" WILL BE
# PASSED DIRECTLY TO KERNEL recvfile/splice SYSTEM CALL.
# TO ENABLE POSIX LARGE WRITE SUPPORT (SMB/CIFS WRITES UP TO 16MB),
# THIS OPTION MUST BE NONZERO.
# THIS OPTION WILL HAVE NO EFFECT IF SET ON A SMB SIGNED CONNECTION.
# MAX VALUE = 128k

   min receivefile size = 16384

# USE THE MORE EFFICIENT sendfile() SYSTEM CALL FOR EXCLUSIVELY
# OPLOCKED FILES.
# NOTE: ONLY FOR CLIENTS HIGHER THAN WINDOWS 98/Me

   use sendfile = Yes

# READ FROM FILE ASYNCHRONOUSLY WHEN SIZE OF REQUEST IS BIGGER
# THAN THIS VALUE.
# NOTE: SAMBA MUST BE BUILT WITH ASYNCHRONOUS I/O SUPPORT

   aio read size = 16384

# WRITE TO FILE ASYNCHRONOUSLY WHEN SIZE OF REQUEST IS BIGGER
# THAN THIS VALUE
# NOTE: SAMBA MUST BE BUILT WITH ASYNCHRONOUS I/O SUPPORT

   aio write size = 16384

 

 

Where CPU power matters, and where it does not

The other day, I was thinking about 3 systems, 2 with modest specifications, and 1 system with great specs.

A 2009 desktop: an old Vista-class Core 2 Duo, 4GB RAM, 120GB SSD

A 2018 netbook: a Celeron CPU (more like an Atom), 4GB RAM, 64GB eMMC

A 2016 laptop: a Core i7, 16GB RAM, 1TB SSD

It would not be a difficult quiz were the object to identify the good system vs the bad one. Hint: it’s the i7.

However, I have learned that some tasks run quite well on limited hardware.

The 2009 desktop was never designed to run with 4GB of RAM and a 120GB SSD from my junkpile, but they certainly have the effect of speeding up the system. This machine, running Fedora Linux, is a VPN server, a file server, a web server, a database server, and can play back 1080p video beautifully over a DVI connection.

The 2018 netbook, which cost less than US$200 new, is essentially a Chromebook case with modest Wintel guts. Its CPU is called a Celeron, but given its clock speed and meagre 2 cores, it may as well be an Atom. And yet, this netbook is able to run Fedora Linux and Windows 10 Pro quite well. It can even run Photoshop.

I have tried to run virtual machine emulation under both of these systems. Even with a stripped-down OS installer, the results were not pretty. For some applications, specs matter.

Although I have not yet spent serious time with a Raspberry Pi device, the full support in Fedora 29 has made me take a serious look at the platform. I predict results similar to those on the systems I described earlier.

Of course, if you throw good specs at a problem, like a recent laptop with a core i7, 8 cores, 16GB RAM, and a 1TB SSD, a lot of other things are possible. I am able to run multiple virtual machines under KVM, and have had a situation where a Linux guest was connected to one VPN, a Windows guest was connected to another, and the main desktop (“baremetal computer”) was on the main network connection, not even slowing down while the virtual machine guests did their work.

A recent sighting of a 13″ MSI and a sale for a Dell XPS 13 made me long for a small, but powerful computer. However, for travel, all I need is that little netbook. In theory, it would be fun to virtualize a few server environments for portable LAMP development, but I have been exploring “containers” like Docker that will allow me to isolate the systems with different PHP/MySQL versions without the overhead of a full virtual machine.

So the question is not whether you need more power. The question is how much power do you need for a specific use?

The containers thing is getting important: my goal is to build 2 containers, one with MySQL and PHP 5.x, and one with MySQL and PHP 7.x.
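
A sketch of that goal, assuming Docker is installed, following the one-process-per-container convention, and using official Docker Hub image tags that were current at the time:

```shell
# Two PHP stacks sharing one MySQL container (names, ports, and password are examples).
docker network create lampnet
docker run -d --name db --network lampnet -e MYSQL_ROOT_PASSWORD=secret mysql:5.7
docker run -d --name web56 --network lampnet -p 8056:80 -v "$PWD/site:/var/www/html" php:5.6-apache
docker run -d --name web72 --network lampnet -p 8072:80 -v "$PWD/site:/var/www/html" php:7.2-apache
```

The same site directory is then served on localhost:8056 under PHP 5.6 and on localhost:8072 under PHP 7.2, with both reaching the database at hostname db.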