Reverse SSH to a server behind a firewall

I recently needed to establish an ad-hoc ssh connection to a server behind a firewall. I didn’t control the firewall and couldn’t get a port mapped through it for incoming ssh access, so I had to use a reverse ssh connection. What is a reverse ssh connection, why would you want to establish one and how do you do it?
Read on…

As I was not able to have a port mapped through the firewall for an incoming ssh connection, I needed some other way of establishing a secure shell connection to the server. Enter the technique of reverse ssh.

This will work if you either have incoming ssh access to your workstation, or you have an intermediate server that you can ssh into from the target server.

Assuming you’re going via an intermediate server, the chain then looks something like:

Workstation <---> Intermediate Server <---> Target Server

As long as your workstation and the target server can both ssh to the intermediate server, you are good to go.

Step 1 – On the Target Server

ssh -f -N -T -R 22022:localhost:22 intermediate-server

This establishes an ssh connection from the target server to the intermediate server (and assumes that you can reach the intermediate server on port 22).

The various options are as follows:

-f : Tells ssh to put itself in the background after it authenticates. It allows ssh to ask for the password, but then after it’s done so, it puts itself in the background. This is so that you don’t need to keep the terminal window open that established the connection.

-N : This tells ssh not to execute a remote command – normally ssh will start a remote shell and let you type into it to run commands on the remote computer. This option is generally only useful when you’re forwarding ports (with -L or -R) and means that a remote shell isn’t executed, so it saves a small amount of system resources.

-T : This option tells ssh not to allocate a pseudo-TTY; again, this saves a small amount of system resources if you’re not using the connection for a remote shell.

-R : This is where the magic happens. I often use SSH to forward a local port to a remote machine with -L. This does the reverse, forwarding a port on the remote machine to the local machine.

22022:localhost:22 : This instruction for -R says to forward remote port 22022 (that is, port 22022 on the intermediate server) through to port 22 on localhost. This is from the point of view of the target server, so any traffic sent to port 22022 on the intermediate server is forwarded to port 22 on the target server.

intermediate-server : This is the hostname or IP address of the intermediate server. Don’t forget to use a username if you need to, e.g. user@intermediate-server. You can also specify -p if the intermediate server is listening on a port that’s not 22.
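Putting those pieces together, the Step 1 command can be assembled from its parts. Here’s a sketch where intermediate-server and the port numbers are placeholders – the function just prints the command so you can eyeball it before running it:

```shell
# Assemble the reverse-tunnel command from Step 1. All names here are
# placeholders - substitute your own intermediate server and ports.
build_reverse_tunnel() {
  intermediate="$1" # hostname or IP of the intermediate server
  remote_port="$2"  # port the intermediate server will listen on
  local_port="$3"   # the target server's own sshd port (usually 22)
  echo "ssh -f -N -T -R ${remote_port}:localhost:${local_port} ${intermediate}"
}

# Prints the Step 1 command:
build_reverse_tunnel intermediate-server 22022 22
```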

Step 2 – On the Workstation

Simply ssh to the intermediate server as normal:

ssh intermediate-server

Use a username if required, and specify the port with -p if it’s not listening on port 22.

Step 3 – On the Intermediate Server

Once you have ssh’d from your workstation to the intermediate server, you then ssh again to the port specified in the -R command above (port 22022 in this example) like so:

ssh localhost -p 22022

Again, use a username if you need to.

If the intermediate port (in this example 22022) is open to you on the intermediate server, then you can combine steps 2 and 3 above into one:

ssh intermediate-server -p 22022

This establishes a connection to port 22022 on the intermediate server – the port that is being listened on and forwarded through to port 22 on the target server. So while ssh is being told to connect to port 22022 on the intermediate server, the first ssh session from the target server listens on this port and forwards the traffic to port 22, where its own ssh daemon picks up the connection.
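If your workstation’s OpenSSH is version 7.3 or newer, you can also collapse steps 2 and 3 into a single command with -J (ProxyJump). A sketch, using the same placeholder names (user and intermediate-server are examples):

```shell
# Build the one-hop command: ssh jumps via the intermediate server, then
# connects to localhost:22022 *from the intermediate's point of view*,
# which is the reverse tunnel back to the target server.
build_jump_command() {
  jump_host="$1"   # e.g. user@intermediate-server
  tunnel_port="$2" # the -R port from Step 1
  echo "ssh -J ${jump_host} -p ${tunnel_port} localhost"
}

build_jump_command user@intermediate-server 22022
```

Running the printed command from the workstation gives you a shell on the target server in one step.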


Now, you have an ssh connection being relayed from your workstation, via the intermediate server to the target server. Everything you type into ssh on your workstation will be running on the target server.

This post contains information that’s been condensed and re-worded from a post on StackExchange.

Fix SMB Permissions on OS X Server for Newly Created Files and Folders

I don’t know why it is, but SMB on OS X Server is slower and less reliable than the AFP it replaces. Despite Apple making it the default for OS X to OS X Server file sharing connections, AFP seems to be more reliable and has fewer problems with permissions.

The following commands may fix wonky inheritance of permissions on newly created files and folders:

SHAREPOINT="/Volumes/Storage/Shared Items/Share"
sudo serveradmin settings sharing:sharePointList:_array_id:"$SHAREPOINT":smbCreateMask = "0644"
sudo serveradmin settings sharing:sharePointList:_array_id:"$SHAREPOINT":smbDirectoryMask = "0755"
sudo serveradmin settings sharing:sharePointList:_array_id:"$SHAREPOINT":smbInheritPermissions = yes
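If you have several share points to fix, the same settings can be applied in a loop. This is a sketch: it assumes serveradmin will accept settings on standard input (stock OS X Server supports reading settings that way), and the DRY_RUN wrapper and example share path are my additions so you can preview what would be written:

```shell
# Apply the same SMB masks to a share point. serveradmin is only present
# on OS X Server, so the call is wrapped: DRY_RUN=1 prints each setting
# line instead of feeding it to serveradmin.
apply_smb_masks() {
  sharepoint="$1"
  prefix="sharing:sharePointList:_array_id:${sharepoint}"
  for setting in "smbCreateMask = \"0644\"" \
                 "smbDirectoryMask = \"0755\"" \
                 "smbInheritPermissions = yes"; do
    line="${prefix}:${setting}"
    if [ "${DRY_RUN:-0}" = "1" ]; then
      echo "${line}"
    else
      echo "${line}" | sudo serveradmin settings
    fi
  done
}

# Preview the settings for one (example) share point:
DRY_RUN=1
apply_smb_masks "/Volumes/Storage/Shared Items/Share"
```

Unset DRY_RUN (or set it to 0) on the server to actually apply the settings, and call the function once per share point.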

Set OS X Server to Deny SMB 3 Connections

Sometimes an OS X Server will have very poor SMB file sharing performance – whilst I haven’t been able to ascertain with 100% certainty what causes it, something that may be a factor is the use of SMB 3 connections over SMB 2.

SMB 3 connections can be signed and encrypted, and this can put a significant amount of overhead on the server.

To disable SMB 3 at the server end, type the following into Terminal:

sudo scutil --prefs
get /
d.add ProtocolVersionMap # 2
set /
commit
apply
quit

ProtocolVersionMap is a bitmask (1 = SMB 1, 2 = SMB 2, 4 = SMB 3), so a value of 2 permits SMB 2 connections only.

Info from

Mount a Windows partition of an optical disc on OS X

Today I needed to install some software on a Windows server – the software only came on an optical disc and the Windows server didn’t have an optical drive.
“No problem” I thought to myself, I’ll just put the DVD in my Mac and copy the files over, how difficult can it be?
As it turns out, when an optical disc has an HFS filesystem layer on it, the Mac will ignore the underlying ISO9660 filesystem (and its Joliet and Rock Ridge extensions) and head straight for what it knows best – and this is usually exactly what you want.
It is possible, however, to burn discs with a completely different set of files for Windows and OS X, relying on the fact that the Mac will ignore the Joliet extension if there’s an HFS layer, whereas Windows doesn’t care about the HFS layer and will happily show you the files in the Joliet filesystem.

Fortunately, there’s a way around this.

Insert the disc, head over to Terminal and type:

mount

This will show the mounted filesystems – you’ll see something like this:

[user@mac ~]$ mount
 /dev/disk1 on / (hfs, local, journaled)
 devfs on /dev (devfs, local, nobrowse)
 map -hosts on /net (autofs, nosuid, automounted, nobrowse)
 map auto_home on /home (autofs, automounted, nobrowse)
 /dev/disk2s2 on /Volumes/ArchiCAD 18 (hfs, local, nodev, nosuid, read-only, noowners)

Unmount the optical drive (you can’t just select it and hit eject because it will be physically ejected)

sudo umount "/Volumes/ArchiCAD 18"

Make a temp directory to mount the disc on

mkdir /tmp/AC18

Then mount the disc, but tell the system to ignore any extended attributes and the Rock Ridge extensions

sudo mount_cd9660 -er /dev/disk2 /tmp/AC18

Finally, open the mounted disc window in the Finder

open /tmp/AC18

When you’re done, you can then unmount the disc:

sudo umount /tmp/AC18
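The whole dance can be wrapped in a small script. A sketch – the device, volume and mount point names are the ones from this example (check yours with mount first), and the DRY_RUN mode is my addition so you can preview the commands before running them:

```shell
# Mount the Windows (ISO9660/Joliet) layer of a dual-format disc.
# DRY_RUN=1 prints the commands instead of running them.
mount_windows_layer() {
  device="$1"      # e.g. /dev/disk2
  volume="$2"      # e.g. /Volumes/ArchiCAD 18 (as auto-mounted)
  mountpoint="$3"  # e.g. /tmp/AC18
  for cmd in "sudo umount \"${volume}\"" \
             "mkdir -p \"${mountpoint}\"" \
             "sudo mount_cd9660 -er ${device} \"${mountpoint}\"" \
             "open \"${mountpoint}\""; do
    if [ "${DRY_RUN:-0}" = "1" ]; then
      echo "${cmd}"
    else
      eval "${cmd}"
    fi
  done
}

# Preview the commands for this example disc:
DRY_RUN=1
mount_windows_layer /dev/disk2 "/Volumes/ArchiCAD 18" /tmp/AC18
```

When you’re done, unmount with sudo umount /tmp/AC18 as above.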

Change an Open Directory Group’s GeneratedUID (UUID)

I occasionally see OS X Server’s Open Directory flip out, sometimes a simple repair of the LDAP databases seems to fix it, sometimes you need to go deeper.

If repairing the databases doesn’t work, then I try to recover the databases from a recent backup. If that doesn’t work, then it’s probably time to destroy and recreate Open Directory.

In this particular case, a restore from backup appeared to work, except I couldn’t authenticate as the Directory Administrator, or anyone else in the directory for that matter. This meant I couldn’t reset anyone’s passwords either.

I tried resetting the Directory Administrator password from Terminal, but that didn’t work. I was, however, able to make an export of the Users and Groups to text files.

First I destroyed OD, and set it up again from scratch. Next I imported all the users and then finally I imported the groups. All that was left was to reset the passwords for each user. Or so I thought.

Even though I imported all the groups from the export file (ensuring they kept the same numeric GIDs as before), setting up a fresh Open Directory also creates the workgroup group, so that group ended up with a different GeneratedUID from before, and importing it merely updated its membership. This group had been used in ACLs, and ACLs reference the GeneratedUID, not the simple numeric GID, so none of my ACLs matched up any more.

Fortunately it’s not difficult to change a group’s GeneratedUID. Here’s how to do it.

  1. Find the current GeneratedUID for the group:
    sudo dscl /LDAPv3/127.0.0.1 -read /Groups/workgroup GeneratedUID
  2. Take note of the GeneratedUID:
    GeneratedUID: <Old-UUID>
  3. Using dscl, update the existing group and change its GeneratedUID:
    sudo dscl -u diradmin -p /LDAPv3/127.0.0.1 -change /Groups/workgroup GeneratedUID <Old-UUID> <New-UUID>

I then turned Open Directory off and on again just to flush any changes. Checking with ls -ale to show the ACLs, I could see that it had picked up the correct group and was no longer showing me a UUID instead of the group name.

Android adb on Mac OS X recognising Google Pixel C

I’ve had a bit of trouble getting a Google Pixel C to be recognised on my Mac – I wasn’t able to get it showing up in System Profiler, and wasn’t able to see it with adb.

I was initially trying various combinations of the Apple USB-C to USB-A Female adapter and then different USB cables and dongles. No go.

What worked in the end was using a Belkin USB-C to USB 2.0 cable with a male A plug. I have read that some people are having problems with using USB-C to USB 3.0 cables, so I played it safe (and saved myself an extra ten bucks in the process) and stuck with a USB 2.0 cable. I went with Belkin because they’ve never let me down in the past with cables not working.

I’m able to plug it into a USB3 hub that I’m using and my Mac sees the Pixel C with no problems. The Mac can see the tablet and Android can see that USB debugging is connected. I didn’t need to install any drivers on the Mac either – just plug it in and away I went.

The Sennheiser Orpheus Experience or the day I went to a holistic health clinic to listen to the best headphones in the world.

Hurt. One song. Two definitive versions. Reznor’s is painful, vivid, fresh and raw. Cash’s is tempered by looking back from the vantage of time.

My 25 words or less that ended up with sitting in a comfy chair in a clinic above an art gallery on a rainy Sydney day with a pair of $75k headphones on my head.

Humans are constantly pushing the boundaries of what’s possible, often creating something for no other reason than “because we can”. Bugatti Veyron, the world’s fastest production car. Over $2 Million of luxury, technology and sheer horsepower. Gravity Probe B, the most perfectly spherical objects ever made, with no imperfections larger than 40 atoms high. Burj Khalifa, the tallest tower in the world, soaring over 800 metres into the sky. Sennheiser Orpheus, the lowest distortion audio reproduction hardware ever made.

Back in 1991 the engineers at Sennheiser were given free rein to create the absolute best headphones in the world, with no compromise. Cost was not a consideration. The end result was the Orpheus HE90 – the pinnacle of audio engineering at the time. Sold for $15,000 USD (over $35,000 in 2016 Australian pesos), these headphones came with a matching amplifier built with a lovely Art-deco aesthetic. 6 valves for the pre-amp were mounted front and centre, chrome was everywhere, there was a beautiful rosewood trim and the electrostatic headphones had people raving about their sound.

Over the years, Sennheiser’s audio engineers often thought back to the technology, materials and construction in the HE90 and wondered if they could do any better – could they improve on the Orpheus in any way? It took until the mid-2000s for them to finally stand up and say “Yes, we can make it better”, and thus began a 10-year journey to create the duo of the HE 1060 / HEV 1060.

A decade in the making, the team at Sennheiser were again given an open chequebook to source any materials, use any build techniques, do whatever they had to do to achieve ultimate clarity and fidelity in sound with no compromise whatsoever. If there were no off-the-shelf components that could achieve the sound quality desired, then they went out and had them made specially. Ear cups machined from a solid block of aluminium. Handmade leather and microfibre ear pads. Vacuum tubes housed in their own individual clear quartz envelopes. 8 digital-to-analogue converters. A Carrara marble plinth for the base. A platinum-coated membrane so thin that you would need to stack 40 of them together to reach the thickness of a sheet of paper.

One of the areas they noticed could be improved from the previous version was in the transmission of the signals from the amplifier to the headphones. Electrostatic drivers require quite high voltages with a low current, and are subject to losses and interference when sent down a couple of metres of cable. This time around the valve preamp sends a low level signal to the headphones where active Class-A amplifiers in both of the cups step it up to the high voltages required, with the power for these amplifiers supplied via the cable.

I was pretty excited when Sennheiser told me that I won their recent Sennheiser Experience Facebook competition. The prize was return flights to Sydney, a pair of Momentum Wireless headphones and some one-on-one time to experience the new Orpheus HE 1060 / HEV 1060 first-hand.

Riding in an Audi A8 long-wheelbase limo

When I arrived in Sydney, I was picked up at the airport in an Audi A8 long wheelbase limo. Suitably cocooned against the miserable Sydney weather we drove around some tiny back-streets in Darlinghurst trying to find the venue – Muse. After squeezing down Little Oxford Street, we located the ivy-covered front of the building and in I went to have my mind blown.

I met Heather and the rest of the team from Sennheiser Australia, who had just three days of hands-on time to demo the Orpheus for the lucky few. There were some prominent musicians invited, product managers for some of their larger customers, prospective purchasers and the lucky trio who were chosen: Sarah, Tom and myself.

We arrived with plenty of time to spare before our allotted listening slot, and had a good time relaxing in the downstairs gallery at Muse where Sennheiser had set up an installation. There were half a dozen pairs of headphones and a couple of microphones in perspex boxes on display and some really nice product-related artwork on the walls. The special 70th birthday edition HD800 headphones, with custom blue accents by ColorWare were a particularly special pair, as it turned out they weren’t for listening to.

Muse in Little Oxford Street

While we were waiting, we discussed many topics, particularly the Orpheus. Some of the more interesting facts about these headphones: at the moment there are only 3 pairs in the world. Around 200-300 people have heard them by this stage, however there are possibly only 50 people or so who have had a chance to listen to their own music selection on them. This set was in Australia for less than a week, at one of the few venues on the planet where Sennheiser were offering this listening experience. They were really excited to be able to see the expression on people’s faces after their auditions. No-one left without a smile.

Each pair of headphones and the attached amplifier are completely hand-made by a team of 10 or more people and they can only produce 250 pairs a year. Yes, there is already a waiting list if you want to buy a set. They retail in Australia for $75,000 however customisation options can take that into the hundreds of thousands. Some of the options are black or white Carrara marble. You can get silver or gold plated knobs if you want. Really, the sky is the limit – if you have the money, Sennheiser will customise them for you pretty much any way you want them.

Everything about the listening experience is absolutely first-class. Even before you have a chance to put them on your head, the way the whole system powers on is a show on its own. While the system is turned off, the storage case on top is closed, the valves are retracted flush with the top surface, the volume dial is turned down and the knobs are retracted into the marble plinth. When you power it up, the knobs slide out, the valves rise up (all 8 of them), the volume dial returns to the last level you had set and the piano black lacquer and smoked glass storage case opens up – you don’t even have to get any fingerprints on the high gloss finish. The opening sequence is choreographed so that by the time you can remove the headphones from their case, the valves have had time to come up to operating temperature and the system is ready to go.

The frequency response of the headphones is flat from 8Hz all the way up to 100kHz. Although humans can only hear from 20 to 20k Hz, Sennheiser wanted to ensure that if there were any flaws in the response that they would be pushed out to either end where only elephants or bats would be able to hear it.

Listening to the Orpheus

Enough about the technology, all of this is secondary to how they actually sound.

The detail and depth in the music was amazing. Turning up the volume just made more sound, not noise, not distortion, just sound and lots of it. They could play loud, but were never noisy. Even on heavily textured and complex passages like the last parts of Hurt, every track, every layer, every instrument had its own space and its own definition.

The reproduction of sound was unlike anything else I’ve heard. The bass was a physical presence – warm, smooth and without limit. The treble was clear and distinct, without being harsh or sharp. The mids were all where they were supposed to be – everything was presented as-is without any colouration, exactly as the music was mastered.

The headphones in their storage case

Nothing seemed to worry the headphones, everything sounded so clear and effortless like they were just striding along and not even breaking a sweat. Even turning up the volume to uncomfortably loud levels just resulted in more sound with no loss of clarity, no distortion, it was purely louder.

The sound was clear and tangible. Instruments were all given their own place in the soundstage. Nothing sounded hurried or strained; everything was clearly composed.

In Hurt, you could clearly hear the raw emotion in Trent Reznor’s voice, the pain was right there. The guitar sounded like it was right in front of you and when that first kick drum comes in, it was like a physical impact. Even through the distortion and digital noise, the other sounds were not masked out or blurred, they were still there.

On Johnny Cash’s rendition, you could hear every detail in his fingers on the guitar strings, and his voice was front and centre, every intonation, every inflection was there for the taking. You could even hear that as the song progresses his mouth gets dry and the sound of him opening his mouth to take a breath is like he’s in the room with you.

The two Daft Punk tracks were selected for their use of real instruments, high dynamic range and quality mastering. Oh, and the bass. O.M.G. It was like I’d never heard bass quite like this before. It was deep, warm, inviting and full, all at the same time. There was absolutely no distortion, no breakup, no clipping, just an ocean of clean, pure bass. Of course everything else was there with absolute clarity as well, the bass didn’t overwhelm the vocals or the other instruments, rather it provided a soft velvet cushion for it to all rest on.

All up, it was the most pure listening experience I’ve had. Everything that happened that day all came together to ensure this. It’s the plane and limo ride, the happy and welcoming staff from Sennheiser, the venue, the headphones, the technology, the music. It’s the vibe and, no, that’s it. It’s the vibe. I could have easily spent hours sitting there in my own world, having the music wash over me but unfortunately time was limited.

The 8 valves for the preamp section

While I’m not sure that I heard things in the music that simply weren’t there when listening on lesser equipment, what I did notice was that subtle details were clearly presented with no effort – you didn’t have to dig around and concentrate as much to hear them. Nothing was blurred or smeared together; it was all there for you on a silver platter. Without even trying you could easily pick out any individual element from the composition and feel it sitting there.

Worth the money? Hard to say. I suppose if you had the kind of disposable income where a $75k pair of cans was even a consideration, and if you really enjoyed music, then they’d probably be worth every cent. I would likely get more enjoyment out of these than, say, a $75k Jaeger-LeCoultre or Rolex. Are they 10x better than a pair of HD800’s with the matching amp? Hard to say, possibly not. Are they the best headphones I’ve ever listened to? Absolutely.

Source equipment: Bryston BDP-2 Digital Player. Music was delivered as FLAC on USB.

NIN – Hurt (High-res)
Johnny Cash – Hurt (CD Quality)
Daft Punk – Lose Yourself to Dance (High-res)
Daft Punk – Get Lucky (High-res)
Johnny Cash – Personal Jesus (CD quality)

More information and technical specs:

Set Microsoft Outlook 2016 as default mail client on OS X 10.11 El Capitan

I’ve had a few issues trying to change the default mail client on El Cap. In nearly every case, after changing it in Mail (seemingly the only place you can actually change it), the change doesn’t stick. After quitting and relaunching Mail, it’s back to the default of Apple Mail.

I’ve found that if you clear the Launch Services database, this may allow the change to persist.

So, quit all running apps.

Open Terminal and enter in the following (all on one line)

/Versions/A/Support/lsregister -kill -r -all local,system,user

When it returns, you can quit Terminal.

Launch Mail, go into Preferences > General and set the Default email reader to Microsoft Outlook

Quit Mail and the change should stick.

Fix a broken Open Directory

I don’t know why the databases that OpenLDAP uses are so fragile, and therefore why Open Directory loses its shit nearly every single time you have to force a server to restart, but they are and it does.

In the majority of cases, it’s pretty straightforward to fix – and again I’ve got no idea why this isn’t part of the startup process for OpenLDAP if something goes wrong…

Anyway, if Open Directory won’t load, or isn’t showing you any users, nine times out of ten, it’s one or the other of the OpenLDAP databases that are corrupt.

Fix them like so:

sudo launchctl unload /System/Library/LaunchDaemons/org.openldap.slapd.plist
sudo /usr/libexec/slapd -Tt
sudo db_recover -cv -h /var/db/openldap/openldap-data/
sudo db_recover -cv -h /var/db/openldap/authdata/
sudo /usr/libexec/slapd -Tt
sudo launchctl load /System/Library/LaunchDaemons/org.openldap.slapd.plist

If this sequence of commands doesn’t fix it, then you will need to restore the LDAP databases from backup, which can generally be done with the following command:

sudo slapconfig -restoredb /private/var/backups/ServerBackup_OpenDirectoryMaster.sparseimage
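The repair sequence above can be wrapped in a function so it can be re-run as a unit. A sketch – the paths and commands are the ones from above, while the run wrapper and DRY_RUN preview are my additions:

```shell
# Wrap each command so DRY_RUN=1 prints it instead of running it via sudo.
run() {
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "would run: $*"
  else
    sudo "$@"
  fi
}

repair_od_databases() {
  run launchctl unload /System/Library/LaunchDaemons/org.openldap.slapd.plist
  run /usr/libexec/slapd -Tt                  # test the databases
  run db_recover -cv -h /var/db/openldap/openldap-data/
  run db_recover -cv -h /var/db/openldap/authdata/
  run /usr/libexec/slapd -Tt                  # re-test after recovery
  run launchctl load /System/Library/LaunchDaemons/org.openldap.slapd.plist
}

# Preview what would run:
# DRY_RUN=1 repair_od_databases
```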

Re-running a Unix command, until it completes successfully – e.g.: imapsync

I’m doing an email migration for a client from an old 2008 SBS Server into Office 365. For some reason, there were two mailboxes that just wouldn’t migrate using the migration wizard in Office 365.

I switched to the ever-trusty imapsync which I’ve used to migrate more mailboxes than I care to remember.

As an aside, I had a few issues with imapsync from MacPorts so ended up downloading a fork from GitHub that resolved the issue, however I had to install a few CPAN modules for Perl manually. I’ve lost the link to the GitHub version, however it was easy to find initially by searching on the error string that it was returning when trying to run it – something about an SSL error.

Anyway, after building and installing everything required, imapsync kept erroring out on these two mailboxes after some random number of emails migrated. After logging in and restarting it manually a few times, I thought that there had to be a better way.

Looking further into the issues, imapsync was exiting with a return code of 2, indicating that an error occurred. When it completes successfully, it should exit with a return code of 0. This makes it easy to just keep running it until it exits with zero:

until imapsync --option1 --option2 ... --optionn; do
    echo "Exited with an error, rerunning..."
done

Nice and easy…

The until loop keeps re-running the command it is given (here, imapsync) until that command exits with a true (zero) exit status. The echo statement runs each time round the loop, but it’s really just a side effect – the useful work happens because imapsync itself is the until loop’s predicate.
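To see the pattern work without waiting on a real mailbox, here’s a self-contained demo – flaky_task is a stand-in for imapsync that fails twice before succeeding:

```shell
# A runnable demo of the until pattern: flaky_task exits non-zero on its
# first two attempts, so the loop re-runs it until the exit status is 0.
attempts=0
flaky_task() {
  attempts=$((attempts + 1))
  [ "$attempts" -ge 3 ]   # failure (non-zero exit) on the first two tries
}

until flaky_task; do
  echo "Exited with an error, rerunning..."
done
echo "Succeeded after ${attempts} attempts"
```

The loop body prints twice, then the third attempt succeeds and the loop ends – exactly the behaviour you want from the imapsync one-liner.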