Disable Azure AD users from having to set up a PIN on Windows 10

This information is condensed from Håvard Siegel Haukeberg’s blog over at https://haukeberg.wordpress.com/2016/02/24/disable-pin-code-when-joining-azure-ad/

You need to log in as the actual admin user for the domain; you can't do this through a partner account with delegated administration.

First up, go to the Azure management portal:

  1. https://manage.windowsazure.com
  2. Go to Active Directory
  3. Select your Domain
  4. Select Applications
  5. Select Microsoft Intune
  6. Select Configure
  7. Under "Manage devices for these users", select All and click Save.

Next, go to Intune

  1. https://manage.microsoft.com
  2. Go to: Admin > Mobile Device Management > Windows > Passport for Work.
  3. Select: Deactivate Passport for Work on registered devices


Give an Azure AD user Local Administrator user privileges on Windows 10

When you join a Windows 10 machine to Azure AD, the user account you use to join the domain is automatically given local administrator permissions on the machine.

If you noodle around in the Azure management portal, there doesn’t seem to be an easy way to give additional users local administrator permissions on the same machine.

A client of mine is running some practice management software that, as it turns out, requires all users to have local administrator privileges. Yes, I know.

Fortunately there's an easy way to elevate a local user's account and give them administrator permissions.

Launch a CMD shell with Administrator privileges and type in:

net localgroup Administrators AzureAD\UserName /add

where UserName is their first and last names as specified in Azure; e.g. mine would be KaiHowells

Fix file timestamps with dates in the future on OS X or Unix/Linux

I recently needed to fix a heap of files with invalid timestamps for a client to sync between two sites.

The issue was that some files had modification dates in the year 2032 or thereabouts. With a two-way sync, if one file changes, the modification date is used to determine which copy wins. If the unchanged file has its date set in the future, it can overwrite the changes made to the file with the current date and time.

Fortunately fixing this is really easy:

touch /tmp/now
find /Path/To/Server/Share\ Point -xdev -newer /tmp/now -print0 | xargs -0 touch -t 201610131030
rm /tmp/now

I specifically set the files to an exact date and time (-t 201610131030), rather than letting touch use the current date and time, so that the timestamps at each end didn't end up a few seconds apart and cause the files to be resynchronised.

The number given to touch -t is of the format YYYYMMDDhhmm (optionally with .SS appended for seconds). The timestamp is interpreted in the local time zone, so I also had to adjust the timestamp given to the files on the remote server, as it's in a different time zone.
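To see the technique end to end, here's a self-contained sketch that reproduces the fix in a scratch directory (the paths and dates are made up for illustration):

```shell
# Create a scratch directory containing a file dated in the future (year 2032).
dir=$(mktemp -d)
touch -t 203201010000 "$dir/future.txt"

# Marker file set to "now"; anything newer than it has a future timestamp.
touch "$dir/now"

# Reset every future-dated file to a fixed timestamp, then remove the marker.
find "$dir" -newer "$dir/now" -print0 | xargs -0 touch -t 201610131030
rm "$dir/now"

# The file's modification date is now 13 Oct 2016, 10:30.
ls -l "$dir/future.txt"
```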

Issues authenticating Windows 10 to OS X/macOS with Server.app

Ever since Apple were forced to ditch Samba when its licensing changed to GPLv3, we've been stuck with a second-rate SMB implementation on OS X and Server called smbx.

As it turns out, smbx has numerous issues that may or may not ever get fixed, most of which were solved problems in Samba. On top of that, there's a lot more support out there for Samba, across all of the platforms it runs on, than there is for smbx.

Oh well, we must make do with what we’ve got…

Windows 10, it seems, has issues connecting to smbx on recent versions of OS X or macOS – something to do with the LanMan compatibility level – i.e. the version of SMB/CIFS that Windows speaks. What it looks like is happening is that Windows is trying to use the older LM or NTLM authentication whereas the Mac wants the newer, and more secure, NTLMv2 authentication method to be used. The result however is that you’re told you have an incorrect username or password.

I’m still looking for a fix on the Mac server side, however I have found a fix that works when applied to the Windows client.

On Windows, open Regedit and go to:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa

Find the value LmCompatibilityLevel and change it to 3 (from the default value of 1). If the value doesn't exist, create it as a DWORD.

Quit out of Regedit and be amazed that you can now connect to the Mac server.

There is more info over on TechNet if you want to know exactly what this key does.
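If you prefer not to click through Regedit, the same change can be expressed as a .reg file that you import with a double-click (a sketch; the Lsa key is the standard location for this value):

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa]
"LmCompatibilityLevel"=dword:00000003
```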

I haven’t tested to see if denying SMB3 connections at the server end makes any difference…

Troubleshooting: Fetch a web page as Googlebot

I’ve had a couple of clients come to me recently after their WordPress site was pwned. Sometimes you’re able to use a tool like Wordfence to clean it up and secure the site and it’s all OK. Sometimes however it goes a bit deeper than this.

With one hack recently, even after the WordPress core files were cleaned, all plugins were replaced with fresh versions from the repository and many miscellaneous PHP files scattered within the wp-content directory were removed, everything looked OK – except when Google indexed the site.

Whilst Google have a utility in their webmaster tools (Search Console) to fetch a page as Google, there are a few limitations. First of all, you need to have the website verified in your Search Console account (or be logged into an account that owns the property), and it's not instant.

I was trying to track down something that didn’t show up on any Wordfence scan and wasn’t a malicious plugin or hacked core file. I was quite sure about this as the spam links on the page persisted even after WordPress was reinstalled and all plugins were disabled.

The one thing that did fix it however was switching the theme.

As it turns out, the hackers had inserted some conditional code into the main theme files so that whenever a regular browser was viewing the site, everything looked as it should. When Googlebot viewed the site (actually, any user agent on a long list of user agents used by search engine spiders) there were a huge number of spam links inserted into the page.

I was able to find the snippet of code in one of the theme files and removed it easily, however I found that the Fetch as Google was slow to use in practice. Through using a quick trick with Curl, I could give the website a user agent that triggered the spam links like so:

curl -L -A "Googlebot/2.1 (+http://www.google.com/bot.html)" http://example.com

The above command runs curl with -L (or --location) and -A (or --user-agent) to set the user agent. The --location switch tells curl that if it's given a redirect code, it should send the same request to the new location (i.e. it will send the same user agent to the new location).

This was able to quickly show me the spam content in the page and give me instant feedback that the html output was clean once I had found and removed the suspect chunk of code.
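Taking it one step further, you can diff what a regular browser sees against what Googlebot sees; any conditionally inserted spam links show up immediately in the diff output (example.com is a placeholder for the affected site):

```shell
# Fetch the same page twice with different user agents.
curl -sL -A "Mozilla/5.0" http://example.com -o /tmp/as-browser.html
curl -sL -A "Googlebot/2.1 (+http://www.google.com/bot.html)" http://example.com -o /tmp/as-googlebot.html

# Lines present only in the Googlebot version are prime suspects.
diff /tmp/as-browser.html /tmp/as-googlebot.html
```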

Product Review: Bidsketch – easily and quickly send out proposals to clients.

I’ve been trialling Bidsketch as a more efficient way to get written proposals to my clients. I don’t write a huge number of proposals, and am prone to reinvent the wheel each time I need to do one, which is a large waste of my time.

Bidsketch is a great way for me to have modular proposal templates, and I can pick and choose which sections to include and then customise them if required.

Most of my proposals follow a fairly standard template that generally has three main sections. The first starts with a brief introduction of who Automatica is, a paragraph about the client and their identified requirements and an overview of the problem we are solving. The middle section is the nuts and bolts, the technology and services that we are recommending and pricing information. The final section is a brief recap (if necessary) and a call to action.

By using a set of predefined chunks, Bidsketch makes it quick to reuse content (and this content can have variables in it, such as the client name that’s automatically replaced) and lets me focus on writing the actual meat of the proposal – the technical recommendations.

The ability to add pricing information brings some nice additions that are more flexible than a straight quote from Xero. You can separate out hardware and services, add optional items, and include monthly and annual subscription services, presenting them to the client in a straightforward fashion that's clear and easy to understand.

Once you have completed the proposal, it can be viewed in a browser, downloaded as a PDF or sent to the client directly. Bidsketch allows you to brand your instance of their service, so instead of having proposals come from automatica.bidsketch.com, they can come from proposals.automatica.com.au (or whatever you want).

You can get a notification when a client visits the site to view the proposal, which is an incredibly powerful tool. It really gives you the ability to strike while the iron is hot: give your client 5 minutes after they've opened the proposal and then call them; you won't get better timing than that.

Pretty much the only downside to the service that I have encountered so far is the default proposal templates are not amazing. Whilst I would consider them to be OK to send to “business” clients, I work with a lot of clients in creative and visual industries, and really need my proposals to stand out.

I contacted support about this and received a very quick and friendly reply that pointed me to information where I can build my own templates – this flexibility is very good, but now I need to find the time to tweak it.

I was able to find one proposal template to use that was acceptable – it was clean and modern looking and not too fancy – and it must look OK as the client accepted my proposal. That’s one for one so far. I hope I can keep up this strike rate.

All in all, I can see that there is definitely value to be had by making the proposal generation process simpler and breaking it down into a more modular fashion. By creating the pieces of the puzzle ahead of time, you can standardise the information in your proposals, and ensure consistency between different proposals created by different people within your organisation. Tracking when a client opens the proposal is a powerful tool in your toolkit and can ensure you’re having the right conversation at the right time with your client.

Reverse SSH to a server behind a firewall

I recently needed to establish an ad-hoc ssh connection to a server behind a firewall. I didn’t control the firewall and couldn’t get a port mapped through it for incoming ssh access, so I had to use a reverse ssh connection. What is a reverse ssh connection, why would you want to establish one and how do you do it?
Read on…

As I was not able to have a port mapped through the firewall for an incoming ssh connection, I needed some other way of establishing a secure shell connection to the server. Enter the technique of reverse ssh.

This will work if you either have incoming ssh access to your workstation, or you have an intermediate server that you can ssh into from the target server.

Assuming you’re going via an intermediate server, the chain then looks something like:

Workstation <---> Intermediate Server <---> Target Server

As long as your workstation and the target server can both ssh to the intermediate server, you are good to go.

Step 1 – On the Target Server

ssh -f -N -T -R22022:localhost:22 intermediate.example.com

This establishes an ssh connection from the target server to the intermediate server (and assumes that you can reach the intermediate server on port 22)

The various options are as follows:

-f : Tells ssh to put itself in the background after it authenticates. It allows ssh to ask for the password, but then after it’s done so, it puts itself in the background. This is so that you don’t need to keep the terminal window open that established the connection.

-N : This tells ssh not to execute a remote command – normally ssh will start a remote shell and let you type into it to run commands on the remote computer. This option is generally only useful when you’re forwarding ports (with -L or -R) and means that a remote shell isn’t executed, so it saves a small amount of system resources.

-T : This option tells ssh not to allocate a pseudo TTY, again it saves a small amount of system resources if you’re not using it for a remote shell.

-R : This is where the magic happens. I often use SSH to forward a local port to a remote machine with -L. This does the reverse, forwarding a port on the remote machine to the local machine.

22022:localhost:22 : This instruction for -R says to forward remote port 22022 (so, port 22022 on the intermediate server) through to port 22 on localhost. This is from the point of view of the target server, so any traffic sent to port 22022 on the intermediate server is forwarded to port 22 on the target server.

intermediate.example.com : This is the hostname or IP address of the intermediate server. Don't forget to use a username if you need to, e.g. user@intermediate.example.com. You can also specify -p if the intermediate server is listening on a port that's not 22.

Step 2 – On the Workstation

Simply ssh to the intermediate server as normal:

ssh intermediate.example.com

Use a username if required, and specify the port with -p if it’s not listening on port 22.

Step 3 – On the Intermediate Server

Once you have ssh’d from your workstation to the intermediate server, you then ssh again to the port specified in the -R command above (port 22022 in this example) like so:

ssh localhost -p 22022

Again, use a username if you need to.

If the intermediate port (in this example 22022) is open to you on the intermediate server, then you can combine steps 2 and 3 above into one:

ssh intermediate.example.com -p 22022

This establishes a connection to port 22022 on the intermediate server, which is being listened on and forwarded through to port 22 on the target server. So while ssh is being told to connect to port 22022 on the intermediate server, the first ssh session from the target server is listening on that port and forwards the traffic to port 22, where its own ssh daemon picks up the connection.


Now, you have an ssh connection being relayed from your workstation, via the intermediate server to the target server. Everything you type into ssh on your workstation will be running on the target server.
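If you need the tunnel regularly, the target-server side can live in ~/.ssh/config instead of a long one-liner (a sketch; the Host alias is made up, and ExitOnForwardFailure makes ssh bail out if the remote port can't be bound):

```
# ~/.ssh/config on the target server
Host reverse-tunnel
    HostName intermediate.example.com
    RemoteForward 22022 localhost:22
    ServerAliveInterval 30
    ExitOnForwardFailure yes
```

With that in place, step 1 becomes simply `ssh -f -N -T reverse-tunnel`.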

This post contains information that’s been condensed and re-worded from a post on StackExchange.

Fix SMB Permissions on OS X Server for Newly Created Files and Folders

I don’t know why it is, but SMB on OS X Server is slower and less reliable than the AFP that it replaces. Despite Apple making it the default for OS X to OS X Server file sharing connections, AFP seems to be more reliable and has less problems with permissions.

The following script may fix the inheritance of permissions being a bit wonky:

SHAREPOINT="/Volumes/Storage/Shared Items/Share"
sudo serveradmin settings "sharing:sharePointList:_array_id:$SHAREPOINT:smbCreateMask" = "0644"
sudo serveradmin settings "sharing:sharePointList:_array_id:$SHAREPOINT:smbDirectoryMask" = "0755"
sudo serveradmin settings "sharing:sharePointList:_array_id:$SHAREPOINT:smbInheritPermissions" = yes
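If the mask values look opaque, they're just standard octal permission modes; a quick sanity check in any shell shows what they translate to:

```shell
# 0644 = rw-r--r-- (owner read/write, everyone else read-only)
# 0755 = rwxr-xr-x (owner full access, everyone else read and traverse)
f=$(mktemp)
d=$(mktemp -d)
chmod 0644 "$f"
chmod 0755 "$d"
ls -ld "$f" "$d"
```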

Set OS X Server to Deny SMB 3 Connections

Sometimes an OS X Server will have very poor SMB file sharing performance – whilst I haven’t been able to ascertain with 100% certainty what causes it, something that may be a factor is the use of SMB 3 connections over SMB 2.

SMB 3 connections can be signed and encrypted, and this can put a significant amount of overhead on the server.

To disable SMB 3 at the server end, type the following into Terminal. The first line launches scutil; the remaining lines are entered at its interactive prompt (the # tells scutil to store the value as a number):

sudo scutil --prefs com.apple.smb.server.plist
get /
d.add ProtocolVersionMap # 2
set /
quit

Info from https://support.apple.com/en-au/HT204021

Mount a Windows partition of an optical disc on OS X

Today I needed to install some software on a Windows server – the software only came on an optical disc and the Windows server didn’t have an optical drive.
“No problem” I thought to myself, I’ll just put the DVD in my Mac and copy the files over, how difficult can it be?
As it turns out, when an optical disc has an HFS filesystem layer on it, the Mac will ignore the underlying ISO9660 filesystem (along with its Joliet and Rock Ridge extensions) and head straight for what it knows best, and this is usually exactly what you want.
It is possible however to burn discs with a completely different set of files for Windows and OS X, and rely on the fact that the Mac will ignore the Joliet extension if there’s an HFS extension whereas Windows doesn’t care about the HFS extension and will happily show you the files on the Joliet extension.

Fortunately, there’s a way around this.

Insert the disc, head over to Terminal and type:

mount

This will show the mounted filesystems; you'll see something like this:

$ mount
 /dev/disk1 on / (hfs, local, journaled)
 devfs on /dev (devfs, local, nobrowse)
 map -hosts on /net (autofs, nosuid, automounted, nobrowse)
 map auto_home on /home (autofs, automounted, nobrowse)
 /dev/disk2s2 on /Volumes/ArchiCAD 18 (hfs, local, nodev, nosuid, read-only, noowners)

Unmount the optical drive (you can't just select it in the Finder and hit Eject, because the disc will be physically ejected)

sudo umount "/Volumes/ArchiCAD 18"

Make a temp directory to mount the disc on

mkdir /tmp/AC18

Then mount the disc, but tell the system to ignore any extended attributes and the Rock Ridge extensions

sudo mount_cd9660 -er /dev/disk2 /tmp/AC18

Finally, open the mounted disc window in the Finder

open /tmp/AC18

When you’re done, you can then unmount the disc:

sudo umount /tmp/AC18