macOS Server 5.3

macOS Server 5.3 contains a few traps for the unwary—traps which aren’t all mentioned in the release notes.

  1. It only installs on 10.12.4 (or later, one assumes). This is mentioned in the release notes, but not in the App Store notes. NOTE: 10.12.3 and earlier are not supported.
  2. It will control a remote server running version 5.2 on macOS 10.12.
  3. It will not control a remote server, running any version, on OS X 10.11!

This last point is a horrible gotcha! If you are running a server on a previous OS X version (for example, because it is on older hardware which can’t be updated to macOS 10.12) and you update to Server 5.3 on another machine, you can no longer control/manage the server instance running on the El Capitan machine.

Server 5.2 and Open Directory

Server 5.2 has its own version issues, or rather Open Directory does.
Server 5.2 will happily run on 10.11 and 10.12. However, if you had a Master/Replica pair on OS X 10.11/Server 5.2, and you upgrade to a mixture of OS X 10.11/Server 5.2 and macOS 10.12/Server 5.2, the Master/Replica pairing breaks, simply because Open Directory insists (for no good reason that I can see) that both ends run the same OS X version!

Server 5.2 can control Server 5.3

If you have a system running Server 5.2, it will happily control remote instances of:

  • Server 5.2 on OS X 10.11
  • Server 5.2 on macOS 10.12
  • Server 5.3 on macOS 10.12
Target (macOS / Server)     Manager: Server 5.2     Manager: Server 5.3
10.11 / 5.2                 ✔                       ✖︎
10.11 / 5.3                 N/A                     N/A
10.12 / 5.2                 ✔                       ✔
10.12 / 5.3                 ✔                       ✔

Let’s Encrypt OS X Server

(Or, letsencrypt macOS Server if you prefer)

I have been using CAcert as my free SSL certificate provider for some time now, and it’s fine, with one exception—CAcert root certificates are not trusted by default by many systems, including, most significantly, iOS and Android. That in turn means that I can’t retrieve email off my home server from a company-provided iPhone, since the company-mandated security profile demands SSL authentication.

Letsencrypt aims to address this problem (among others) with their free certificates, which are trusted by Android and iOS. However, Letsencrypt uses a highly automated system (to make things easy for the user) which originally did not support OS X (macOS).

I recently decided to revisit Letsencrypt and have indeed managed to get it to do what I want, albeit with some interesting discoveries along the way.

Measure Twice, Cut Once

Since I did not wish to blow up my existing home server, especially the current certificate, I decided to test things out in a Parallels virtual machine. So, I duly fired up Parallels and started a clean install of El Capitan.

Other things can go wrong

It failed, saying the installer image could not be verified. So I tried a backup image, and got the same result. Then I tried a Yosemite install, and it failed with the same error.

A little research showed a possible reason, so I tried resetting the clock, but to no avail. Then I had a light-bulb moment. The page says the date must be correct in order to install OS X, specifically the year, because if the date is set prior to the release of that OS X version, the error will trigger. It turns out that not only must the date not be too far in the past, it can’t be too recent either. In particular, the current date is too recent to install older releases!

The solution is to decouple the virtual machine’s clock from the real one (by default, Parallels keeps the virtual clock synchronised to the real machine’s clock) and then set the date back a year or so, to a little after the release of the relevant operating system. Voilà! It installed.
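
If you hit the same error, one way to set the date by hand (a sketch, assuming an El Capitan installer) is to open Terminal from the installer’s Utilities menu and use the BSD date command, picking a date a little after the release of the version you are installing:

# Set the clock to 1 November 2015, 12:00 (format is mmddHHMMyy), shortly after El Capitan shipped
date 1101120015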

I then ran the certbot certonly script in the VM and, after a little fiddling, got things installed.

The Real Thing (with the fiddling done)

The certbot page for Apache on OS X shows how to create the certificate for OS X. It doesn’t work—or at least not on OS X Server.

The problem is neatly explained in the file /Library/Server/Web/Config/apache2/httpd_server_app.conf in the comments at the top:

#
# macOS Server
#
# When macOS Server is installed and set up (promoted), this file is copied
# to /Library/Server/Web/Config/apache2/httpd_server_app.conf. Both macOS
# and macOS Server use the same httpd executable, but macOS uses the config
# file in /etc/apache2/httpd.conf while macOS Server’s Websites service uses
# this config file.
#

The $ certbot --apache command works on the macOS config file, not the macOS Server config file.

The solution is to use the $ certbot certonly command, and then select webroot, as follows:

Webroot

Place the files in /Library/Server/Web/Data/Sites/Default
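
Put together, the non-interactive form looks something like this (a sketch; the domain is a placeholder for your server’s hostname):

% sudo certbot certonly --webroot \
      -w /Library/Server/Web/Data/Sites/Default \
      -d server.example.com

certbot drops its challenge files under .well-known/acme-challenge/ inside that webroot, so the site needs to be reachable over plain HTTP (port 80) from the internet while the certificate is issued.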

Import the files into Server’s Certificates and all is good.

El Capitan and me, panic not

I wrote that I had experienced a panic at boot in El Capitan, and that I was hanging off reinstalling some software until I had found the problem.

I found the problem.

I had cause to reboot the system, and it hung again at the boot screen.

So, I rebooted in Single User mode (Command-S) and this time I looked closely at the screen.

Enter com_eltima_async_Navel::start(this=<ptr>,provider=<ptr>).
panic(cpu 0 caller 0xffffff80017d6a9a): Kernel trap at …

Debugger called: <panic>
Backtrace (CPU 0), Frame : Return Address

     Kernel Extensions in backtrace:
        com.eltima.SyncMate.kext(0.2.5b15)…
       

A little research pointed to Eltima’s SyncMate, which I had installed to sync an Android phone. I had removed it some time ago, but it had left behind a kernel extension, which sometimes hangs El Capitan at boot (though once the system has started, it doesn’t seem to cause trouble).

I completely removed the vestiges of SyncMate, by removing the files/folders:

  • /System/Library/Extensions/EltimaAsync.kext
  • /Library/Application Support/EltimaSyncMate/

and all is now fixed.
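
If you suspect a similar leftover on your own machine, a quick way to list the third-party kernel extensions that are actually loaded (kextstat ships with OS X) is:

% kextstat | grep -v com.apple

Anything in that list belonging to software you have already uninstalled is a candidate for the same treatment.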


El Capitan and me

I’ve installed OS X 10.11 (El Capitan) on three machines in the household, and while it’s just fine mostly, I have one—significant—problem and some things I’ve learned along the way.

OS X Server

Just before El Capitan arrived, Apple released OS X Server 5 (which was rapidly bumped to 5.0.4). This Server release brings a much-welcomed change: unlike previous versions, it runs on both Yosemite and El Capitan. Previously, upgrading Server had been somewhat of a pain, since the machines running Server (both headless Mac Minis in my case) and the machine running Server simply as a console all had to be in lockstep. The console machine could not talk to a newer, or older, Server instance, and as soon as you upgraded OS X on any machine you had to upgrade Server on that machine as well. Essentially, that meant you had to upgrade Mac OS X and OS X Server simultaneously on all machines. (The clients are OK: Server will work with them more or less regardless of their OS version.)

Great! I was able to upgrade all three machines to the new version of Server in anticipation of the later upgrade to El Capitan.

The fragility of Open Directory

Alas, things were not quite so simple. OS X Server seems to have a long-standing problem where the Master and Replica OD instances get confused. In this case, the Master decided it didn’t have a Replica any more, and the Replica decided it couldn’t run OD. OD was always off, and if you turned it on it wouldn’t offer to create a new Master or join as a Replica; it just turned itself off. However, it still worked (network users from the Master were still available on the Replica). So I ignored that, pending El Capitan.

I started by upgrading the Master server to El Capitan, which worked fine (it took a little over 30 minutes). It then needed to upgrade OS X Server the first time I ran Server again.

So, while Server 5.0 runs on both OS X 10.10 and 10.11, it’s not quite the same thing on each. While this was going on, the Replica decided its (unacknowledged) Master had vanished and immediately forgot the network users! (It was now prepared to replicate a Master or create a new one.)

After the El Capitan upgrade, OD replication was still broken (the Replica had the network users back but did not appear linked to the Master), so I did what I’ve done before: forcibly remove the Replica and add it back to the Master, using

sudo slapconfig -destroyldapserver diradmin
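
(Re-joining the Replica to the Master is normally done from Server.app’s Open Directory pane; the command-line equivalent is roughly the following, where the Master’s hostname and the directory administrator’s short name are placeholders.)

% sudo slapconfig -createreplica odmaster.example.com diradmin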

But, when I tried to add the Replica, it refused, saying that the OS X versions of the Server had to be the same!

So, lesson learned. All the Open Directory servers have to be running the same OS X version.

Which raises the question: WHY? Is this really necessary? It’s extremely irritating!

I upgraded the Replica server to El Capitan (which is also running O3X ZFS, so I was prepared for trouble), fortunately without incident (including the ZFS upgrade), and all is now fine with my servers.

El Capitan: Mysterious Hangs at Boot time

I then upgraded the main machine to El Capitan. I had some software that I was a little concerned about (Adobe CS3) but the collective wisdom of the web seemed to indicate that it was OK, so I went ahead.

Incompatible Software

The upgrade went fine, with only a few items put into the “Incompatible Software” folder:

  • GlimmerBlocker (LaunchDaemon and PrefPane)
  • GPGMail.mailbundle
  • WacomTablet.prefPane

I don’t care about the Wacom Tablet. GPGMail and GlimmerBlocker claim to be OK with El Capitan, so I reinstalled the latest versions.

BOINC also would not run and asked to be re-installed (as is usual for BOINC after an OS X upgrade).

Reboot to hung screen

Then I restarted, and the machine hung.

It sits at the Apple boot screen with the progress bar at zero (no pixels of progress at all).

I restarted it several times with the same result. I restarted (Command-R) into Recovery mode and ran Disk First Aid. This worked and reported no problems. Then I restarted again.

It hung at the Boot Screen again.

I restarted with Verbose mode (Command-V) and Single User mode (Command-S), and it showed a panic (but not a kernel panic) and stopped. Single User mode would not accept typed input.

So I reinstalled El Capitan from the Recovery Boot, which worked. I noted that it removed GlimmerBlocker, again. I put it back.

I put this down to a one-off until the machine restarted (for reasons unknown) and returned me to the same hung boot screen, with the same symptoms (it can boot into Recovery; Disk First Aid shows no issues; panic in Single User mode). I resolved the problem the same way, by a reinstall. And I’m typing this blog post using that machine.

However, I’ve not reinstalled GlimmerBlocker, or BOINC, or GPGMail, or anything else that stopped working and asked to be reinstalled. We’ll see if it continues to work, and if so I’ll consider adding back items one by one.

To be continued…

Fix for Time Machine “Backup verification incomplete!”

I was getting some issues with Time Machine, where, after backing up, it would always attempt a verification and then complain that the verification was incomplete. These messages were only visible in the system log—no alert was popped up.

Sep  9 08:37:30 XXXXX.local com.apple.backupd[72558]: Verifying backup disk image.
Sep  9 08:37:34 XXXXX.local com.apple.backupd[72558]: Backup verification incomplete!

The backup is to an OS X Server, and the TM backup is kept in a disk image, called Machine-Name.sparsebundle.

I thought that I would manually verify the backup, and opened up Disk Utility. There I noticed that the Machine-Name.sparsebundle was already in the sidebar, even though the remote volume was not mounted. I then noticed that the path to the volume was

/Root Volume Name/private/var/db/com.apple.backupd.backupVerification/Machine-Name.sparsebundle

Aha!

Closer investigation showed that /private/var/db/com.apple.backupd.backupVerification contained a slightly modified copy of an out-of-date version of the Time Machine disk image.

% sudo ls -lhFG /private/var/db/com.apple.backupd.backupVerification/Machine-Name.sparsebundle/
total 24
-rw-r--r--  1 root  wheel    11B  5 Jul 22:18 HC.progress.txt
-rw-r--r--  1 root  wheel   500B  5 Jul 22:15 Info.bckup
-rw-r--r--  1 root  wheel   500B  5 Jul 22:15 Info.plist
drwxr-xr-x  3 root  wheel   102B  5 Jul 23:49 bands/
-rw-r--r--  1 root  wheel     0B  5 Jul 22:15 token

Looking closely at this revealed that the disk image was corrupted, so I simply removed it:

% sudo rm -rf /private/var/db/com.apple.backupd.backupVerification/Machine-Name.sparsebundle

and then did a manual verification (hold down option while selecting the Time Machine menu and choose Verify Backups). When the verification finished (successfully), the directory was left empty.

So, in summary, if you end up with the “Backup verification incomplete!” message, try deleting /private/var/db/com.apple.backupd.backupVerification/Machine-Name.sparsebundle and see if that fixes it.

rubycocoa + rvm + Mavericks fixed!

As I wrote previously, rubycocoa did not work properly with Mavericks.

Well, I’m very pleased to discover that it’s been fixed, with a new version of rubycocoa available at SourceForge.

Before:

% ./testRubyCocoa.rb
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:55:in `require': cannot load such file -- osx/cocoa (LoadError)
   from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:55:in `require'
   from ./testRubyCocoa.rb:9:in `<main>'

After:

% ./testRubyCocoa.rb
Module RubyCocoa awakes!

testRubyCocoa.rb is:

#!/usr/bin/env ruby
require "osx/cocoa"
include OSX
OSX.ns_import :NSString

module TestRubyCocoa
   puts "Module RubyCocoa awakes!"
end

Adventures with ZFS and Time Machine part 4

This should really be part 5, but part 4 was uneventful enough at the time that I didn’t blog it.

The MacPro upgrade

The main family computer used to be a first generation Mac Pro (model MacPro1,1). This did splendid service for many years, until Mountain Lion (OS X 10.8) came out. The MacPro1,1 was one of a number of models stuck on Lion (OS X 10.7). That was OK, but the machine was also starting to need more expansion than I could afford. Where 5GB of RAM had once seemed fine, it was now starting to be a little tight. Alas, the MacPro1,1 uses special FB-DIMMs (DDR2), which are eye-wateringly expensive compared to modern RAM pricing. So, eventually, a shiny 27” iMac was purchased and the MacPro became the Time Machine server.

That was effortless, and worked perfectly. Given its internal SATA bays, it was much easier to fill it up with 3 disks (plus a fourth for the operating system), and this worked a lot faster than the previous FireWire/USB setup.

O3X

(Or part 3.5)

The new server worked fine, with only minor annoyances from MacZFS not being actively developed and the odd interaction between AFP and MacZFS.

Then, along came the OpenZFS project, and O3X in particular. I tried out O3X on the iMac (with Mavericks, OS X 10.9) and it worked well. It promised to be faster; to have modern ZFS features; to play nicely with AFP; and it seemed to be stable enough to use for the backups. So, I decided to transition the Time Machine server to O3X.

This has proved to be a little more exciting than I expected.

O3X on Lion

The current (as of writing this blog) version of O3X is 1.2.7. This works on OS X versions up to 10.9—and beyond to the as-yet-unreleased Yosemite (10.10). However, it requires a 64-bit kernel. The only version of O3X that supports a 32-bit kernel is 1.2.0—and that lacks enough features and bug fixes that I’m not happy using it.

While Lion supports a 64-bit kernel (so O3X will happily run under Lion), the MacPro1,1 won’t run a 64-bit kernel. Well, not without some hacking.

Hackintosh

EFI

The MacPro1,1 has a perfectly fine 64-bit processor. However, the EFI firmware that allows it to boot is only 32-bit, so although Lion fully supports 64-bit apps, the kernel itself, along with its low-level drivers, runs 32-bit on 32-bit EFI machines. Since ZFS is a kernel extension, I need either a 32-bit O3X (1.2.0) or a 64-bit OS X kernel.

NVIDIA GeForce 7300 GT

The MacPro1,1 shipped with a number of video card options. The base (and the card I have) is the NVIDIA GeForce 7300 GT.

This card lacks the horsepower to support some of the heavy lifting that Mountain Lion and later offload to the GPU, so it is unsupported by 10.8 and beyond, even in an otherwise supported Mac. It also lacks 64-bit firmware, and thus the ability to fully function in 64-bit mode.

boot.efi

The hacker community has come up with a solution to the EFI problem, in the form of Tiamo’s shim boot.efi, which translates between 32-bit and 64-bit calls. This allows a MacPro1,1 to boot and run Mavericks and even Yosemite.

That leaves the problem of the GeForce 7300 GT. Either replace it, or try running using it.

Replacement video card

Although there were a number of cards compatible with the MacPro1,1 available from Apple at one time, they are no longer available except second-hand. There are several modern cards that the community attests work fine, but they tend to be:

  • High-end (and expensive)
  • 64-bit EFI only, so no graphics during boot, until the login screen
  • Reflashed PC cards

Live with the GeForce 7300 GT

Posters to the forums indicate that the GeForce 7300 GT does work, but very slowly, and with a lot of flickering. However, since this machine mostly runs as a headless server, that may be just fine. I’m not going to be playing games or doing any graphical work on it apart from installing the software.

So, I determined to hackintosh my MacPro1,1 and live with the stock GeForce 7300 GT.

Background

  • The Time Machine server runs purely as a backup server. It contains no original data itself, so that if it is lost, all we’ve lost is the backups. Various family machines back up using Time Machine to the server (into sparsebundle disk images).
  • There are three disks, configured as RAIDZ.
  • A LaunchDaemon job snapshots each day and thins the snapshots to the last 7 days and then one per month (a sketch of such a job follows this list). This has proved useful for preserving ancient history when Time Machine has decided that a disk is corrupt and wants to start anew.
  • Another job rsyncs the mail from a mail server running OS X Server to a ZFS file system, which is snapshotted hourly and thinned to hourly for 7 days, then one per month.
  • Finally, BOINC runs in the background to make good use of any spare cycles.
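
For the curious, the daily snapshot-and-thin job looks roughly like the following (a minimal sketch rather than the actual script; the dataset name is a placeholder, and “one per month” here means keeping whichever daily snapshot fell on the 1st):

#!/bin/bash
# Minimal sketch of a daily snapshot-and-thin job; the dataset name is a placeholder.
FS="tank/TimeMachine"
TODAY=$(date +%Y-%m-%d)
CUTOFF=$(date -v-7d +%Y-%m-%d)        # BSD date: seven days ago

# Take today's snapshot
zfs snapshot "${FS}@${TODAY}"

# Thin: destroy daily snapshots older than seven days, keeping those taken on the 1st of a month
zfs list -H -t snapshot -o name | grep "^${FS}@" | while read -r SNAP; do
    SDATE=${SNAP#*@}                  # e.g. 2014-08-03
    SDAY=${SDATE##*-}                 # day-of-month part
    if [[ "$SDATE" < "$CUTOFF" && "$SDAY" != "01" ]]; then
        zfs destroy "$SNAP"
    fi
done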

Step 1: backup the backup

Firstly, since I wanted to allow for 4k blocks (the disks are 4k, but the MacZFS pool was not set up with ashift=12), I needed to back up the existing 2TB of data before erasing the current disks and starting them up on O3X.

Since there are no spare disk slots, I needed to back up to an external disk. Fortunately, I have a spare 2TB USB 3.0 drive. Unfortunately, the MacPro1,1 has only USB 2.0.

So, I attached the disk to the iMac (which has USB 3.0), and copied the data across over 1Gbps Ethernet.

Not so fast

MacZFS supports neither zfs send -R nor piping directly between zfs send and zfs recv, so I had to produce a script that iterated through the file systems and snapshots and copied between systems using a FIFO.
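
For a single file system, the shape of that script is roughly as follows (a sketch only; hostnames and dataset names are placeholders, and ssh stands in for however the stream actually gets to the iMac):

#!/bin/bash
# Rough sketch of a per-snapshot copy through a FIFO; hosts and dataset names are placeholders.
SRC_FS="tank/TimeMachine"
DST_FS="Backup/TimeMachine"
FIFO=/tmp/zfs-copy.fifo
mkfifo "$FIFO"

PREV=""
zfs list -H -t snapshot -o name -s creation | grep "^${SRC_FS}@" | while read -r SNAP; do
    NAME=${SNAP#*@}
    # Reader: receive the stream on the backup machine, fed from the FIFO
    ssh imac.local "zfs receive ${DST_FS}" < "$FIFO" &
    # Writer: send the snapshot (incrementally after the first one) into the FIFO
    if [ -z "$PREV" ]; then
        zfs send "${SNAP}" > "$FIFO"
    else
        zfs send -i "@${PREV}" "${SNAP}" > "$FIFO"
    fi
    wait
    PREV=$NAME
done
rm "$FIFO"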

USB strikes again

The transfer being anything but instant (several days’ worth, in fact), the iMac went to sleep, the external USB drive followed suit, and the copy was corrupted beyond ZFS’s ability to correct.

So, I bought a second 2TB drive, configured it in a ZFS mirror—and also prevented the iMac from sleeping.

gzip and dedup

The amount of data to be backed up was perilously close to the 2TB capacity of the mirror set, so I set compression to gzip and also turned on dedup. As it happened, dedup only saved me 100GB or so, but that was probably enough to let the copy finish where it might not otherwise have done so.
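
Setting up the backup mirror amounts to something like this (a sketch; the pool name and disk identifiers are placeholders, and caffeinate is just one way to keep the iMac awake):

# Create the mirrored backup pool on the two external drives (disk identifiers are placeholders)
sudo zpool create Backup mirror disk3 disk4

# Squeeze the data down and deduplicate it
sudo zfs set compression=gzip Backup
sudo zfs set dedup=on Backup

# Keep the iMac awake while the multi-day copy runs
caffeinate -s &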

It took several days.

Step 2: repartition the proto-Hackintosh

Since I want a fall-back position in case the Hackintosh experiment fails, I decided to partition the system disk into two: one partition with Lion and one with the hacked boot loader and Mavericks.

Disk Utility does this nicely, and non-destructively.

Or not

Disk Utility started the repartition, and then got stuck at about 50%. A couple of hours later, it’s still stuck. Oops. What happens if I interrupt it, I wonder? The more observant among you have realised that Step 3 should have been to back up the system disk and are laughing at my foolishness.

I stop it. I repair the disk. I attempt a reboot.

It works.

Always Have Two Backups

Fortunately, I have spare space on the iMac, and a copy of Carbon Copy Cloner, so I back up everything to there. (Time passes.)

Then, since I also have a Mac Mini running OS X Server, I enable Time Machine and back up again, using Time Machine. (Somewhat more time passes.)

Repartition

I boot into the Recovery Partition and ask to repartition into two equal partitions. Which it won’t do, since it can’t unmount the entire disk. Sigh. Time for the USB key, then.

Step 3: Create an Install Drive on a USB key

The instructions from Tiamo on how to do this are somewhat concise. In particular,

3.insert your board-id into OSInstall.mpkg(please google it)

is way more complicated than it seems, since I had to find the Flat Package Editor. I found a video that helps. Suffice it to say that I managed to create an OSInstall.mpkg with the appropriate board-id. The board-id for the MacPro1,1 is Mac-F4208DC8, found with the following command:

% ioreg -p IODeviceTree -r -n / -d 1
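
(If you only want that one property, the output can be narrowed; a small convenience, not part of Tiamo’s instructions:)

% ioreg -p IODeviceTree -r -n / -d 1 | grep board-id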

Eventually, I have a bootable USB drive.

Step 4: Repartition (again)

The MacPro1,1 happily boots off the USB key and lets me repartition the disk. After which I have two empty partitions, and no way to boot off the hard drive until I’ve got some sort of O/S back there.

Step 5: Restore Lion from Backup

Time Machine

The Recovery system will happily restore off a Time Machine backup. I point it at the one I just made.

Nope, it finds something wrong with it. Sigh.

I am so very glad I have two backups.

Carbon Copy Cloner

  • I power off the MacPro.
  • I extract the newly partitioned but otherwise empty drive and put it into an external (USB 3) enclosure.
  • I attach it to the iMac
  • I restore using Carbon Copy Cloner.

At this point I discover that I forgot to save the recovery partition when I cloned the drive! Carbon Copy Cloner is happy to create me one, but only using Mavericks since that’s what the iMac runs.

  • Unmount and power off the external drive
  • Remove the disk and replace it in the MacPro1,1, losing one of the four mounting screws in the process. (The last step is purely optional, but somehow seems appropriate. I found the screw on the floor later and replaced it).

Step 6: Boot into the newly restored Lion

Which works, much to my delight. I can at least resume using MacZFS where I left off, if the Hackintosh install fails.

Step 7: Download Lion from the App Store

In order to get a Lion recovery partition, I download it again from the App Store (which is how I upgraded in the first place, so I don’t need to purchase it again). I squirrel it away so I’ll have a copy after it deletes itself (which it does during an install).

Step 8: Re-install Lion

Alarming as this sounds, all it does is create a Lion Recovery partition.

Step 9: Clone Lion to the new partition

More time passes. This is actually slower than copying via USB, since the head is having to shuttle all over the disk.

Step 10: Reboot off the USB Key and install Mavericks

More time passes. Much more time (this process has taken several days so far). The install gets to the migration bit, where the log tells me it is migrating applications. And there it stops.

Step 11: Reboot into Mavericks

Hah! Fooled you there. Regardless of which partition I select, it boots into Lion.

I try reinstalling off the USB key and it refuses to do anything other than boot into Lion.

Step 12: Zap Mavericks partition

I’ll do a clean install and use Migration assistant afterwards.

Step 13: Recreate the USB key

Actually, I do this a couple of times, using various recipes found on the net, trying to find one that works. Eventually I succeed.

Step 14: Install Mavericks

And this time, it works! I resist any inclination to interrupt the process, and it does indeed finish cleanly and reboot. Into Mavericks! Just to be sure, I verify that it will also boot into Lion.

Step 15: Migration Assistant

Mavericks boots and asks me if I want to migrate data. Which I do and it does. It takes its time, but it works just fine. During this, the GeForce 7300 GT seems to perform just fine.

Step 16: Mavericks

And at last, I have a Hackintosh MacPro1,1 running OS X 10.9.4.

I update assorted apps, remove MacZFS and install O3X.

I’m using Screen Sharing and it is really slow. The mouse is quite decoupled at times which makes opening windows challenging. However, ssh and Terminal sessions are fine. System Profiler reports that the graphics card has 3MB of memory (Lion reports 256MB).

Step 17: OS X Server

Since I purchased OS X Server, I install it on the Hackintosh. Without event, and now I have a snappy GUI, at least for Server functionality.

Step 18: O3X

More leaps of faith. I don’t upgrade my pool, but rather reformat using O3X and then restore from the backup USB external drive mirror set.
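
Reformatting means recreating the three-disk RAIDZ pool under O3X, this time with 4k-aligned blocks (a sketch; the pool name and disk identifiers are placeholders):

% sudo zpool create -o ashift=12 tank raidz disk1 disk2 disk3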

Alas, the MacPro1,1 only has USB 2.0, so the restore takes a long time. However, at least I can use a pipe and a single command per file system!

# zfs send -R Backup/Snapshot | zfs receive Target/Filesystem

And it works.

Ongoing

Slow Screen

I stopped using Screen Sharing, and everything sped up tremendously.

It seems that the GeForce 7300 GT doesn’t play well with Screen Sharing under Mavericks.

It does mean that I need a screen and keyboard to boot.

Turn off Spotlight

A couple of times I found things locked up, including Terminal, after a zpool or zfs command; a reboot was necessary to clear it. Then I remembered the instructions to turn off Spotlight, so a sudo mdutil -i off pool/fs or several later, and all is working well.
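
(That means pointing mdutil at the mount point of each ZFS file system; the paths below are placeholders:)

% sudo mdutil -i off /Volumes/tank
% sudo mdutil -i off /Volumes/tank/TimeMachine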

Summary

It seems to be working well. Our mail is being backed up, Time Machine is backing up, and this time it’s running through OS X Server. I am happy.

rubycocoa + rvm + Mavericks = fail

I am very happy with OS X Mavericks, by and large. It’s slightly more refined than Mountain Lion, and noticeably faster.

I also noted with pleasure that it installed Ruby 2.0 by default, instead of maintaining the out of date Ruby 1.8.

However, yesterday I tried running a script that invokes a module I wrote that uses rubycocoa. It doesn’t work. A little digging reveals that rubycocoa is not (yet?) supported on either Ruby 1.9 or 2.0. While there is a Mavericks update for rubycocoa, it just makes it work with version 1.8. I could live with that, since rvm does a good job of managing this sort of thing, but in the case of rubycocoa rvm has come up short. If I explicitly invoke the script with

#!/System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/bin/ruby

It works, but

rvm use 1.8

#!/usr/bin/env ruby

fails with

/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:45:in `require': cannot load such file -- osx/cocoa (LoadError)
    from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:45:in `require'

Sigh.

Things I’d like to see from Apple, part 5

My first musings on what I’d like to see from Apple were basically a home server edition of Mac OS X Server, and suitable hardware, which, as I said, was “pretty much a Mac Mini, with the exception of the 802.11n”.

That was in 2008. Since then, the Mac Mini has moved on, with the addition of 802.11n, plus Thunderbolt and USB 3.0.

Mac OS X Server has also moved on, and in many ways what Apple is now delivering is a home server, with, if not a Squid cache, at least App Store caching, a mail server, Time Machine, etc. There are excellent reviews of Mac OS X Server (Mountain Lion) at Ars Technica (updated in January 2013 when new features were released).

Significantly, the price has plummeted to being eminently affordable. It’s basically US$20.00—the same price as Mountain Lion itself.

That being the case, I’ve installed it as the home server, and I’m happy with it in that mode. I’ve repurposed my Mac Mini that was running as the Time Machine server (that function has gone to another machine) and it happily acts as the family mail server together with some other functions such as providing a VPN (so we can check our mail on the road).

Mac OS X Server is now basically just Server.app, with all the server components hidden away inside the App bundle. It is also very encouraging to see Apple adding new functionality, like the Caching server which appeared as a new feature in the first Server update.

That leads to a little list of things I’d like to see added (or fixed):

  • Cache Server reporting. At the moment there is no display of what things are in the cache (which I’d like to know for curiosity’s sake).
  • More caching. At the moment, iTunes purchases are not cached, nor is general web browsing.
  • Fetchmail support. I pull mail onto the server for all the family members off multiple ISP accounts. Among other things, this keeps GB—and years—of mail on a local server which is not subject to merger, change of terms of service, bankruptcy or pre-emptive shutdown by foreign powers. It may be antediluvian and I should probably have the mail SMTP direct to my own domain, but in the meanwhile it suits me quite well and I surely can’t be alone in this.
  • The Mail server stops. Well, actually, it doesn’t. Server.app says that it has, but the server is really running quite happily.
  • Documentation. Really, the Apple documentation is inadequate. In the transition from an enterprise product to a home product, the documentation got left behind. I bought Server because I figured the price was low enough that if it wasn’t useful to me I would not have wasted much; I couldn’t figure out how useful it would be from reading the documentation. In the end, the web provided more useful how-tos and advice than the Apple documentation ever did.