Making man pages easier to read

The abundance of man (short for manual) pages in Unix and Linux systems is amazing: (almost) all commands are well-documented, and so are the programming APIs that a developer like me needs to access on a daily basis. The only problem is that the default reader for them, the man command, displays them as plain text in a terminal window.

You can coerce the GNU man program into outputting manual pages as PostScript, but then you need to feed the result into a program that can display PostScript. I use evince, the GNOME Document Viewer, for that. Unfortunately, there is some sort of interoperability issue that, at least on my installation, blocks copy-paste from the generated PostScript.

I found that if I convert the PostScript to PDF using ps2pdf, copy-paste works. This is a lot to type just to open a single manual page, however, so I ended up writing a script for it. Since I used to write a lot of Perl scripts, the script also lets me look up Perl documentation using perldoc (which can be told to output nroff, the input format for manual pages, so that it can go through the same chain and end up as PDF).
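
At its core, the chain is just three steps; for example, to read the bash manual page:

man -Tps bash > /tmp/bash.ps
ps2pdf /tmp/bash.ps /tmp/bash.pdf
evince /tmp/bash.pdf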

This is what my little script has evolved into by now. Since I run it on several different machines, it does some detection of which software is installed, but it needs GNU man and GNOME evince at the very least (or you can replace evince with the PS/PDF viewer of your choice). If ps2pdf is installed, it displays a PDF version for better interoperability. You can ask it for a manual page, have it read perldoc input from a file, or look up a perldoc page by passing perldoc and the page name as parameters. YMMV.

#!/bin/bash
# gman: Display manual page as PDF
# Written by Peter Krefting <peter AT softwolves.pp.se>
# SPDX-License-Identifier: BSD-2-Clause

if [ -z "$1" ]; then
  echo "Usage: gman perldoc <page>"
  echo "Usage: gman <file> | <manpage>..."
  echo
  echo "  Displays manual or perldoc documents as PDF"
  echo "  <page>     Perldoc page to display"
  echo "  <file>     File to open (perldoc or man)"
  echo "  <manpage>  Manual page to display"
  exit 0
fi

if [ -x /usr/bin/mktemp ]; then
  TEMPFILE="$(mktemp "$TMPDIR/XXXXXXXXXX.ps")"
elif [ -x /usr/bin/tempfile ]; then
  # Disable "tempfile is deprecated. Use mktemp instead", since we tried that...
  # shellcheck disable=SC2186
  TEMPFILE="$(tempfile --suffix .ps)"
else
  echo 'gman: Unable to create temporary file!' >&2
  exit 1
fi
if [ -f "$1" ]; then
  # File argument: run it through perldoc, unless it looks like a manual page file
  if [[ "$1" == *.[0-9]* ]]; then
    man -Tps "$1" > "$TEMPFILE"
  else
    perldoc -onroff -d"$TEMPFILE.nroff" "$1"
    man -Tps "$TEMPFILE.nroff" > "$TEMPFILE"
    rm "$TEMPFILE.nroff"
  fi
elif [ "$1" = "perldoc" ]; then
  # Perldoc manual
  if [ -n "$3" ]; then
    perldoc -onroff -d"$TEMPFILE.nroff" "$2" "$3"
  else
    perldoc -onroff -d"$TEMPFILE.nroff" "$2"
  fi
  man -Tps "$TEMPFILE.nroff" > "$TEMPFILE"
  rm "$TEMPFILE.nroff"
else
  # Assume manual page
  man -Tps "$@" > "$TEMPFILE"
fi
if [ -x /usr/bin/ps2pdf ]; then
  ps2pdf "$TEMPFILE" "$TEMPFILE.pdf"
  ( evince "$TEMPFILE.pdf" 2> /dev/null ; rm "$TEMPFILE.pdf" ) &
  rm "$TEMPFILE"
else
  ( evince "$TEMPFILE" 2> /dev/null ; rm "$TEMPFILE" ) &
fi
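
For example, to read the printf(3) manual page or the perlrun documentation:

gman 3 printf
gman perldoc perlrun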

Memories of childhood

I was born in 1976, in the early days of the home computer revolution. As a small child, I was amazed by computers like the Luxor ABC-80, which my father had at his workplace, or the Commodore VIC-20 that a friend of mine had. Computers back then cost a fortune, but in 1982, my father imported a Sinclair ZX-81 kit from the U.K. through a friend who was visiting. Having an education in TV repair and working as a teacher in consumer electronics in upper secondary school, he had no trouble assembling the kit into a working computer, and it was much cheaper than buying it pre-assembled. The machine came with a whopping 1 (one) kilobyte of RAM, and that included the memory needed to display the screen, so the first programs I wrote were fairly small. The first attempts (I was five years old, remember) were mainly a couple of PRINT statements to display some graphics gibberish and a SCROLL to scroll the screen, repeated over and over. The user manual was of course in English, so I could only enter the examples and try them out.

We did have a cassette with a few games that worked in 1K mode, though (my father later got a 4K RAM expansion, which he hacked into a 16K RAM expansion by stacking RAM chips and some clever cabling), but I wanted to write my own games. I wasn’t very good at that, but a local (OK, half an hour’s drive away, but still) shop started getting books with computer games in them. I bought the two titles I found there: Stridsspel and Rymdspel, Swedish translations of U.K. books with BASIC games for several computers at once. I have lost my copy of Rymdspel, but I still have Stridsspel:

Book: Stridsspel - for VIC, PET, SPECTRUM, ZX81, BBC, TRS-80 and APPLE
Stridsspel from the publisher Brombergs bokförlag

I loved these games so much that I tended to port them to new computers and languages as I went along. When I set up a web page after starting at university back in 1995, I even made web versions of two of the games, Iceberg and Space Mines. I wrote them in C, which I was learning at the time, as CGI programs. Since this was before cookies were widely supported for storing state and conveying it back and forth, they store their entire state in CGI parameters, run a single iteration, and output a new web page that creates the next step.

As I said, I lost one of the books several years ago, and I am still sad about that. I have never seen them in any second-hand bookstores, and web searches for them have come up blank. One reason is that I never knew the titles of the original English editions; they were never given in any of the books I have from the series (I also have the titles Bättre BASIC and Maskinkod och assembler). But today I made a breakthrough: I found a similar title archived at the Internet Archive, Creepy Computer Games, and it listed the original titles of the two books I had: Computer Battlegames and Computer Spacegames. I guess I could have guessed those; the Swedish titles are just simple translations.

Book: Computer Battlegames. FOR C64, VIC20, APPLE, TRS80, ELECTRON, BBC, SPECTRUM & ZX81

Searching for those titles led me to the publisher’s website, where they actually have the titles available as PDFs for download. Oh, the joy! Comparing the editions, I see that there are a few minor changes: the English original uses the ZX81 version as the base, whereas the Swedish one uses the “PET” version as standard. The English version also has Commodore 64 versions of the programs, which the Swedish one doesn’t; perhaps it is a later edition. My copy of the Swedish edition of Battlegames does have an insert with some errata, and modifications of the games for the Spectravideo SV-318 and SV-328 that the original does not have. I didn’t have that for the Spacegames book.

I never did try to convert the “graphic” programs from the VIC-20 versions to the Commodore 64. I will have to have a go at the versions from the English on-line edition. I’ll try to feed the listings into bastext and see if I can get anything useful out of that.

Ah, the memories!

Reading iCalendar (.ics) files from Outlook on Linux

At $DAYJOB, email is handled through Microsoft’s Office 365, and with that I occasionally get event invitations in Microsoft’s internal format. As I am using an IMAP-based e-mail client (I cannot stand Outlook Web Access), actually reading those invites can be a bit difficult.

With the default settings, the invitations are presented as a link into the Outlook Web Access client, with only the subject of the event readable (as the email subject). Everything else is completely hidden from the user. Thunderbird does have some built-in code that downloads the calendaring information and displays it to the user, but I am using a different email client and only get the web link.

Going into the settings in Outlook Web Access, I found an option to present invites as iCalendar files (MIME type text/calendar, extension .ics). Enabling this changes the emails so that the event text is presented in the message body, but all the important details, such as start time and location, are only present in the iCalendar file. And while the calendar file is “readable” in the sense that it is a text file, it is not readable in the sense that it is easy to find out what it says.

I am running Linux on my desktop, and do not have any calendaring software installed, so nothing wants to pick up the .ics file. And reading it in a text editor isn’t easy. There are several timestamps, and it takes a while to figure out that it is the third DTSTART entry that contains the event start time:

$ grep DT attachment.ics
DTSTART:16010101T030000
DTSTART:16010101T020000
DTSTART;TZID=W. Europe Standard Time:20211103T100000
DTEND;TZID=W. Europe Standard Time:20211103T142500
DTSTAMP:20211102T150149Z

Trying to find software that will just view an ics file in a readable format isn’t easy. I don’t need calendaring software on my desktop (I do have a calendar app on my phone that I could use, though), but it would be nice to display it.

After some intense web searching, I found mutt-ics, a plug-in for the textual Mutt e-mail client. I am not using Mutt, but running the script on the ics file did produce readable output:

$ python ./mutt_ics/mutt_ics.py /tmp/attachment857.ics
[...]
Start: Wednesday, 03 November 2021, 10:00 CET
End: Wednesday, 03 November 2021, 14:25 CET

That’s a step forward. The next issue is that I am using a graphical e-mail client, and this is a text-mode script. The e-mail software runs “xdg-open” to open the file, so I had to create a few pieces to get it working. First, a script wrapper that runs the script and shows the output using “xmessage” (other software would also work; I have not yet found out how to get xmessage to display UTF-8 text properly, so I might need to replace it eventually):

#!/bin/bash
# view_ics: Display an iCalendar file as readable text in a window.
# xmessage cannot display UTF-8, so convert the text to Latin-1 first.
python /home/peter/mutt-ics/mutt_ics/mutt_ics.py "$1" | iconv -c -f UTF-8 -t ISO8859-1 | xmessage -file -
exit 0

The next step was to make a .desktop file that defines the script as a handler for the text/calendar MIME type:

$ cat /home/peter/.local/share/applications/view-ics.desktop
[Desktop Entry]
Type=Application
Version=1.0
Name=View iCalendar
Exec=/home/peter/bin/view_ics
Terminal=false
MimeType=text/calendar;
StartupNotify=false

And to tie it all together, I registered it as the default handler for text/calendar by running xdg-mime:

xdg-mime default view-ics.desktop text/calendar
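
To verify that the registration took, xdg-mime can also query the current handler; this should print the name of the .desktop file:

$ xdg-mime query default text/calendar
view-ics.desktop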

There, now running “xdg-open file.ics” opens an xmessage dialog showing the calendar details in a new window. I managed to get it working just in time; the meeting starts in twenty minutes…

Running memtest86 on a Mac Mini

At $DAYJOB, we are having issues with a Mac Mini that is acting up. It crashed on boot, and re-installing macOS didn’t help, as the installer complained about the file system being damaged no matter whether I reformatted (“erased” in Apple-speak) or repartitioned the disk. The built-in Apple Diagnostics tool crashed after about 16 minutes, so I thought I’d run memtest86+ on the machine. But without a working OS boot, I was unable to get it up and running, and googling for information didn’t help.

To get it running, I had to create a bootable USB stick, for which I had to find a Windows machine and run their USB Key installer. However, the stick did not show up in the list of boot options when booting the Mac Mini with the Option key pressed. To find it, I had to install rEFInd on a second USB stick (they have a USB flash image ready for download, so no Windows machine is needed for that one).
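
For reference, writing the rEFInd flash image to a USB stick from a Linux machine is a plain dd copy (file and device names here are examples; double-check the device before overwriting anything):

sudo dd if=refind-flashdrive.img of=/dev/sdX bs=4M status=progress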

With both USB sticks in the Mac, booting with the Option key let me select the rEFInd USB stick, which in turn found the memtest86+ stick as a “Legacy Windows” image. Now the test started fine.

Sound output from the wrong jack

Debian recently released an update to their stable release, version 8.7, and with it an update to a slightly more recent Linux kernel (from 3.2 to 3.16). Well, that would be nice to have, I thought, so I updated my office workstation and rebooted. Everything looked fine; it even picked up and updated the Nvidia graphics driver that I always have problems with. But then, when I tried to play radio over the Internet, the sound suddenly started blaring out of a speaker inside the chassis that I didn’t even know the machine had, instead of from the proper speakers I had connected.

At first I thought the new driver was broken, so I rebooted back to the old kernel. Still the wrong output. Then I turned the power off and back on and started the old kernel. Still the wrong output. Strange.

I have an HP Z220 Workstation (from 2013) at the office, with an “Intel Corporation 7 Series/C210 Series Chipset Family High Definition Audio Controller (rev 04)” and a Realtek ALC221 codec chip (as per the output from lspci -v and /proc/asound/card0/codec#0). It took me an hour of intense googling to find the right keywords; apparently most English-language threads use “jack” for the outputs. I should have known that.
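
In case it helps anyone else searching, this is roughly how the controller and codec can be identified (the sample output below is reconstructed; the PCI address will vary):

$ lspci -v | grep -i audio
00:1b.0 Audio device: Intel Corporation 7 Series/C210 Series Chipset Family High Definition Audio Controller (rev 04)
$ head -1 /proc/asound/card0/codec#0
Codec: Realtek ALC221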

I eventually stumbled on an ArchLinux forum thread from 2014 which mentioned a tool called hdajackretask that can be used to rearrange the outputs of HDA cards. Debian distributes this utility in the alsa-tools-gui package. After installing the package and changing the output type, I managed to get sound playing through my speakers again.

Screenshot from hdajackretask, used to select output devices on an HDA audio card; here setting “Green Line Out, Rear side” to “Line out (back)”

Now to actually get some work done. That is Mondays for you.

The futility of OSX parental control and web browsers

I have two kids; the youngest is five and the oldest is about to turn eight years old. Since they see me and my wife use a computer regularly, they of course also want to use it. The oldest has access to computers at school, and if they are going to be proficient with computers, they need to start using them at an early age. I have a MacBook Pro that they both have accounts on, both set up with OSX’s default “Parental Control” feature.

That works fairly well when they use local applications (Photo Booth is a favourite; if I hadn’t blocked it, their little clips would probably have ended up on YouTube, had they known how to upload them). Well, before even getting to the applications, there are all these pesky little pieces of software that phone home on every start-up, under the guise of doing software updates. No matter how many times I block “Google Software Update” or “Paragon Updater” and the like, every time the kids log in to their accounts, they get a message that they cannot run them. Well, they have learned to click “OK” and go on with their lives. Using a web browser is a lot more hassle, though.

I had initially set up a whitelist in the Parental Control settings, to only allow them to access certain web sites. That doesn’t work, since every site in the universe now includes stuff from other places, be it CDNs, Google’s web tracking stuff, or a JavaScript library that they are too bored to copy to their own domain. I could live with that, as a lot of it can be blocked with Ghostery or similar, but that is if you can even get that far.

Trying to even run a web browser on an account that has Parental Control enabled is a chapter in itself. First there is the phone-home auto-update stuff that kicks in every few moments. Then there are the pre-installed shortcuts (at least in Opera) that want to download screenshots to display inside the Speed Dial screen (why can’t they just ship with default images?). Then even typing a web address keeps trying to send every single keystroke to Google, requiring me to close a dialog after every single letter in the URL. In Google Chrome, it seems utterly and completely impossible to disable this behavior. Opera has a setting for it, hidden deep inside its configuration options, but then I have to enter a magic key combination to remove the Search field. And fight the blocked URL pop-ups to remove the pre-installed Speed Dials.

I need to try out Vivaldi for the kids’ accounts. I know it can be configured to be less intrusive, and it doesn’t send all keystrokes to the search engine. When I set up the account for my oldest daughter there wasn’t a stable version around, but it should be fine now.

End of an era

The day had to come, I knew it; I just postponed it for as long as possible. But now it is time to move on, time to close down my Fidonet system for good, over twenty years after setting up my first system. My Fidonet history went through a lot of different setups: starting out reading mail off BBSes using Blue Wave, through a simple point setup with Terminate on MS-DOS, moving on to an OS/2-based system using SquishMail with timEd and Fleet Street as readers, and even serving as the Swedish shareware registration agent for Fleet Street for a few years at the Fidonet peak in the late 1990s.

I then moved to a Linux-based system using CrashMail II (for a while running timEd through an MS-DOS emulator under Linux, before GoldEd was ported to Linux), and lately I have been using a Usenet news reader together with the JAMNNTPd software. During my tenure as a Debian developer, I had a lot of this stuff packaged for Debian, but I haven’t checked if the packages are still there. I have just been using the binaries I compiled several years ago, but lately they have simply stopped working. Maybe my message bases have broken completely, I don’t know, and considering how seldom I read them, I figured now was the time to shut the system down for good.

It is still a bit sad. I remember the peak around 1996–1998, when I moderated a chat area and had to enforce a limit of 50 posts per day per author, or it would overflow completely (remember, this was at a time when it could take a day or three for messages to propagate). Now I don’t know how many years it has been since anyone even posted a single message in any of the derelict Swedish areas. There is still some activity in the international areas, though.

Good-bye, Fidonet!

OS X Time Machine recovery does not find my USB disk

Today the root file system on my MacBook developed an “Invalid index key” error that I was unable to fix by booting into recovery mode and using Disk Utility, or even by booting into single-user mode and using the fsck_hfs tool, no matter what flags I threw at it. Paragon HFS for Windows could still read (and write) the partition from my Windows installation, so I was able to read the file system, but I couldn’t boot from it.
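
For the record, these are the kinds of fsck_hfs invocations I am talking about, run from single-user mode (the device name is an example; it depends on the partition layout):

/sbin/fsck_hfs -fy /dev/disk0s2   # force a full check, answer yes to all repairs
/sbin/fsck_hfs -r /dev/disk0s2    # rebuild the catalog B-tree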

After a few hours of trying to fix the problem, I simply gave up. I saw several mentions of a tool called DiskWarrior that can supposedly fix a lot of the problems fsck can’t, but I was a bit reluctant to throw over 100 US dollars at a tool without knowing whether it would make any difference.

I do have backups, even if the MacBook isn’t set up to do daily backups like most of my machines are (I never got the Time Machine interface in my Synology NAS to work with it), so the last backup I had was from December last year. Better than nothing, and I don’t really keep that many important files on the laptop – most of the important files are shared with other computers (using Git version control to synchronize) or live in Dropbox.

So I booted from the recovery partition, selected Restore from Time Machine and … my backup didn’t appear.

So I rebooted. Still nothing.

Rebooting again, this time from the backup disk (which conveniently has an OS image installed on it). Still no disk. I only saw my (failed) attempt at a backup node on the Synology NAS listed (and I was unable to connect to it, just as Time Machine itself was).

Meh.

Then it struck me: what if I power off the Synology and then open the recovery program? So that is what I tried, and there it was! The recovery tool finally let me select the disk that was physically connected to the machine, rather than the network share over WiFi. (Still, it is quite impressive that it finds the network share when booting from the recovery partition on the backup disk; I must say that Apple are rather good at making those things just work, even if it failed at what I really wanted to do.)

Now the backup is finally restoring. The clock is approaching half past midnight and it is at 7.5 % restored, so I guess I will have to wait until the morning to see if it actually worked, but at least it is trying now…

Time to go to sleep.

Making OVF images using Packer

At my $DAYJOB, the need recently arose not only to make our software available as an installer that the end-user can install on their own machines, but also to provide pre-built OVF (Open Virtualization Format) images, mainly targeted towards customers who run VMware vSphere and do not want the software running on bare metal. They can of course run the regular installer, but providing a pre-installed image cuts deployment time considerably and eliminates many of the mistakes that can be made while performing the installation.

Hunting around for a way to generate these images through some kind of automated procedure, since we will regenerate them several times and in slightly different configurations, I eventually landed on Packer. Packer lets me drive VMware Workstation by submitting a configuration file that lists an ISO image to install from and gives the commands necessary to run the installation automatically.
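
As a rough sketch, such a template can look something like this (all names, credentials and commands here are made-up placeholders, not our actual configuration; the vmware-iso builder is the one that drives VMware Workstation):

{
  "builders": [
    {
      "type": "vmware-iso",
      "iso_url": "./product-installer.iso",
      "iso_checksum": "none",
      "ssh_username": "install",
      "ssh_password": "install",
      "boot_command": ["<esc><wait>", "auto<enter>"],
      "shutdown_command": "poweroff"
    }
  ],
  "provisioners": [
    { "type": "shell", "script": "cleanup.sh" }
  ]
}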

One of the issues with doing this is that most installations will leave some unique identifiers in the image, and we do not want that. For instance, SSH host keys are generated, as are MAC addresses for the network cards, and various other unique data is dropped around the file system. Fortunately, I was not the first one to face this problem, so it was fairly easy to find a solution that cleans up the generated image. In addition to that, I had the post-install script install VMware Tools in the virtual image, and then go on to remove various UUIDs and MAC addresses from the generated VMware configuration file.
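
The cleanup provisioner is a shell script along these lines (a sketch assuming a Debian-style guest; the exact paths depend on the distribution):

#!/bin/sh
# Remove identifiers that must be unique per deployed instance.
rm -f /etc/ssh/ssh_host_*                        # SSH host keys (regenerated on first boot)
rm -f /etc/udev/rules.d/70-persistent-net.rules  # recorded MAC addresses
truncate -s 0 /etc/machine-id                    # systemd machine ID
rm -f /var/lib/dhcp/*.leases                     # DHCP leases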

The result of running Packer is, however, still a VMware image. Packer does have a builder that outputs OVF, but that one drives Oracle VirtualBox instead. OVF is supposed to be platform-independent, but there are enough differences in how the images are built to create trouble if we use the wrong build platform. Instead we landed on running VMware OVF Tool on the generated VMware image, converting it into an OVF archive (.ova). This is the part that takes the longest time in our build process, which starts out by generating the ISO to install from on-the-fly. But in the end, we have an OVA file that can be imported into VMware (vSphere, Workstation and Player all work fine) and be up and running in under two minutes.
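
The conversion step itself is a single ovftool invocation on the .vmx file that Packer produces (paths here are illustrative):

ovftool output-vmware/product.vmx product.ova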