Installing WSUS on Windows Server 2012 Server Core

Installing the WSUS Windows Feature
This only covers a default installation using the locally installed Windows Internal Database. For a more comprehensive walkthrough, have a read of this article by Boe Prox.

  1. Open an elevated PowerShell session on the server
  2. Run: Install-WindowsFeature -Name UpdateServices -IncludeManagementTools
  3. Run: wsusutil postinstall CONTENT_DIR=D:\Wsus

The Wsusutil.exe utility can be found by default under “C:\Program Files\Update Services\Tools”.

The CONTENT_DIR directive is optional, but given how large the update repository can become, it’s fairly common to dedicate a separate drive to it. Amongst other things, the postinstall command creates the WSUS database within the WID.
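Putting steps 2 and 3 together, a minimal sketch looks like the following. The D:\Wsus path is just the example from above; create it first if it doesn’t already exist.

```powershell
# Install the WSUS role plus its management tools (uses the WID by default)
Install-WindowsFeature -Name UpdateServices -IncludeManagementTools

# Complete the post-installation tasks, pointing the content store at D:\Wsus.
# WsusUtil.exe lives under C:\Program Files\Update Services\Tools by default.
& 'C:\Program Files\Update Services\Tools\WsusUtil.exe' postinstall CONTENT_DIR=D:\Wsus
```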

If the host you’re installing WSUS onto happens to be a virtual guest (though this isn’t a bad idea for physical hosts either), you might want to specify an upper memory limit for the WID, much as you would for SQL Server itself. You can do this as follows:

Optional: Configuring WID (SQL Server 2012 base) memory usage

  1. Download and install the SQL 2012 native client from here. See installation notes below.
  2. Download and install the SQL command line tools also from here. See installation notes below.
  3. Open an elevated command prompt
  4. Change directory to “C:\Program Files\Microsoft SQL Server\110\Tools\Binn”
  5. Run: sqlcmd -S \\.\pipe\MICROSOFT##WID\tsql\query -E
  6. Run each of the following at the interactive prompt:
    sp_configure 'show advanced options', 1
    GO
    RECONFIGURE
    GO
    sp_configure 'max server memory', 256
    GO
    RECONFIGURE
    GO

The figure of 256 indicates 256MB. You can tune that upwards or downwards as you see fit. Just keep in mind that the W3WP.exe processes will end up consuming a fair bit of memory as well, and you don’t want the two fighting each other for physical memory to only end up seeing one lose and subsequently thrashing the page file.

With the SQL components downloaded in steps 1 and 2 above, you can install them on Server 2012 Server Core with the following commands:

  • sqlncli.msi /qb IACCEPTSQLNCLILICENSETERMS=YES
  • SqlCmdLnUtils.msi /qb



Enabling the IIS Management Service on Server Core 2012

Install the IIS Management Service (assuming IIS is already installed)

  • Open an elevated PowerShell session
  • Run: Install-WindowsFeature -Name Web-Mgmt-Service
  • Run: sc.exe config WMSVC start= auto
  • Run Regedit.exe and navigate to HKLM\SOFTWARE\Microsoft\WebManagement\Server
  • Change the DWORD value of EnableRemoteManagement from 0 to 1
  • Run: Start-Service WMSVC
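The bullet points above can also be run as a single PowerShell pass. This is a sketch assuming the default registry path; note the registry value needs to be set before the service is started.

```powershell
Install-WindowsFeature -Name Web-Mgmt-Service

# Equivalent of the Regedit step: allow remote management connections
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\WebManagement\Server' `
    -Name EnableRemoteManagement -Value 1

# Equivalent of: sc config WMSVC start= auto
Set-Service -Name WMSVC -StartupType Automatic
Start-Service -Name WMSVC
```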

Optional: Enrol a certificate from an internal AD CA

  • Open an elevated PowerShell session
  • Launch Notepad
  • Add the following lines to the new file:
  • Save the file as something ending in .inf, for example iis.inf
  • Run: certreq -new d:\temp\iis.inf d:\temp\request.txt
  • Run: certreq -submit d:\temp\request.txt d:\temp\iiscert.cer
  • Run: certreq -accept d:\temp\iiscert.cer
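The contents of the .inf file didn’t survive this post. As a hedged example only, a minimal request file for a web server certificate might look like the following; the Subject value and template name are assumptions you should substitute with your own.

```
[Version]
Signature = "$Windows NT$"

[NewRequest]
; Assumption: use your server's FQDN here
Subject = "CN=server01.example.internal"
KeyLength = 2048
MachineKeySet = TRUE
Exportable = FALSE

[RequestAttributes]
; Assumption: your CA publishes the default Web Server template
CertificateTemplate = WebServer
```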

Optional: Changing the listener certificate

  • Open an elevated PowerShell session
  • Run: Get-ChildItem -Path "cert:\localmachine\my"
  • Copy the thumbprint for the certificate you enrolled above
  • Run the following (these are netsh http subcommands; the WMSVC listener defaults to port 8172):
    netsh http delete sslcert ipport=0.0.0.0:8172
    For the next command, replace yourCert with the thumbprint copied above:
    netsh http add sslcert ipport=0.0.0.0:8172 certhash=yourCert appid={00000000-0000-0000-0000-000000000000} certstorename=MY verifyrevocationwithcachedclientcertonly=disable usagecheck=enable dsmapperusage=disable clientcertnegotiation=disable
  • Run: netsh http show sslcert, just to check the binding was successfully applied with the nominated settings (even if the output from the add command indicated success)

Assuming you completed the optional steps, you can now bind to the IIS Management Service without receiving the certificate trust warning.

If you elected to skip the optional procedures, you will still be able to connect, you’ll just have to put up with the warnings.


Installing and configuring the Outlook Live Management Agent with Forefront Identity Manager 2010.

Hi folks,

As much as it surprises me, I still receive the odd question about the Outlook Live Management Agent when used in conjunction with Forefront Identity Manager 2010. It’s with that in mind that I’m providing the following brief write-up on how to manually install the OLMA, and although there’s not a lot of value in covering the attribute flow in depth, I’ll at least provide a guideline on how to configure the management agent to work with Live@EDU.

Please pay attention to the fact I mentioned this relates to a manual installation of the agent and subsequent configuration. We do not use the Self Service Portal component from FIM 2010, as SharePoint is not our university’s standard for collaboration. As such, we only use the Synchronisation Service Manager along with writing the code ourselves.

Part 1: Installing the Outlook Live Management Agent.

  1. Download the “OLSync R4 Download” file from – you’ll need to use your Live@EDU registered admin account to do this;
  2. Extract the contents of the .zip file;
  3. Run the Galsync_R4_v2.msi installer:
    1. Welcome screen = Next;
    2. License agreement page = I agree & Next;
    3. Installation option = Extract files for manual installation & Next;
    4. Extract files = choose a directory & Extract;
    5. Finish.
  4. Using Explorer, navigate to the location you extracted the files to in step 3 above, where you should see the following three sub-directories:
    1. Extensions;
    2. SourceCode;
    3. UIShell
  5. Copy all of the contents of each directory as follows (I’m just using the default installation directory for FIM as the destination):
    1. Extensions -> C:\Program Files\Microsoft Forefront Identity Manager\2010\Synchronization Service\Extensions
    2. SourceCode -> C:\Program Files\Microsoft Forefront Identity Manager\2010\Synchronization Service\SourceCode
    3. UIShell\XMLs\PackagedMAs -> C:\Program Files\Microsoft Forefront Identity Manager\2010\Synchronization Service\UIShell\XMLs\PackagedMAs
  6. You have now completed a manual installation of the Outlook Live Management Agent.
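The copy operations in step 5 can be sketched as follows; C:\Temp\OLSync is an assumed extraction path from step 3, so substitute your own.

```powershell
$src = 'C:\Temp\OLSync'  # assumption: your extraction directory from step 3
$dst = 'C:\Program Files\Microsoft Forefront Identity Manager\2010\Synchronization Service'

Copy-Item -Path "$src\Extensions\*" -Destination "$dst\Extensions" -Recurse
Copy-Item -Path "$src\SourceCode\*" -Destination "$dst\SourceCode" -Recurse
Copy-Item -Path "$src\UIShell\XMLs\PackagedMAs\*" -Destination "$dst\UIShell\XMLs\PackagedMAs" -Recurse
```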

Part 2: Configuring the Outlook Live Management Agent.

  1. Start the FIM Synchronisation Service Manager;
  2. Select the “Management Agents” tab;
  3. Choose the Create action either from the side menu, or the context-sensitive menu;
  4. Choose “Outlook Live Management Agent” from the drop-down list named “Management agent for”;
  5. Give it whatever name you feel suits the purpose;
  6. Next;
  7. Set “Connect to” equal to “;”
  8. Set “User” equal to the account you created as your service account, which by default will look something like “”;
  9. Set “Password” equal to whatever you set the password to be;
  10. Next;
  11. Click the New button:
    1. Set “Parameter Name” equal to “ProvisioningDomain”;
    2. Set “Value” equal to your Live@EDU domain name, for example “”;
    3. For a list of this and other parameters you can set, have a read of this help page.
  12. OK;
  13. Next;
  14. Next (skipping “Configure Attributes”);
  15. Next (skipping “Map Object Types”);
  16. Next (skipping “Define Object Types”);
  17. Next (skipping “Configure Connector Filter” – though you may want to come back to this depending on your requirements);
  18. Configuring the “Join and Projection Rules” section depends on your current FIM topology and could take an age to discuss. If you’ve worked with ILM/FIM before, just do what you do best here. If you have absolutely no idea, then you can use the following as a simplistic example for creating mailboxes. We use the metaverse attribute “accountName” as our primary key, meaning our configuration for this screen is as follows:
    1. Highlight “Mailbox”;
    2. Click “New Join Rule”;
    3. On the left side (“Data source”) choose “Alias”;
    4. On the right side (metaverse) choose “accountName”;
    5. Click the “Add Condition” button, and if you’re prompted about the attribute being non-indexed, just accept that and move on;
    6. OK;
  19. Next;
  20. Okay, with this screen you’re largely on your own – sorry. There’s just too much scope for variance here between organisations/institutions, and it’s extremely likely you’re also going to be dealing with writing your own rule extensions here, too. Still, just so you have some point of reference, here are the attributes we populate, with what they’re based on in brackets:
    1. UserPrincipalName (custom e-mail address attribute);
    2. Name (accountName metaverse attribute);
    3. DisplayName (displayName metaverse attribute);
    4. Alias (accountName metaverse attribute);
    5. WindowsLiveID (custom e-mail address attribute – same as UserPrincipalName);
    6. FirstName (firstName metaverse attribute);
    7. LastName (lastName metaverse attribute);
    8. EmailAddresses (rule extension as there are multiple addresses added to accounts, and we also have to be able to handle name changes – as I suspect you will, too);
  21. Next;
  22. Next (skipping “Deprovisioning” – again, it’s up to you as to how you handle this – if at all);
  23. If you have enabled PCNS – or intend to, then you can use this final screen (“Configure Extensions”) to enable password management, and if you have written one, to include the “Rules extension name” (a .DLL file – which is beyond the scope of this article).
  24. You have now finished defining the structure of your Outlook Live Management Agent.

Part 3: Rules extensions.

This is an exceptionally important part of the process, but beyond the scope of this article. Essentially, if you’re not already familiar with ILM/FIM then you’re possibly not aware that you will need to create at least one rule extension which handles the provisioning of new objects into the Live@EDU connector space.

If your deployment requires it, you may also need to write another rules extension that handles the customised calculation of values to flow back out from the metaverse to the connector space for the OLMA. To give a simple example, the code might do something as simple as combining a student’s given name and surname to produce a display name. You can’t do this with the “Direct” flow (in the attribute flow screen of the MA); it needs to be an “Advanced” export flow, for which you specify the rule name and write the code to go along with it.

At this point you have done enough to get the OLMA talking to Live@EDU – so long as there are no other peripheral issues such as ports being blocked by firewalls and whatnot. You can proceed to run Full Import and Full Synchronisation cycles to populate the connector space and metaverse respectively, though before you can provision accounts into Live@EDU, you’ll have to write your own code to handle the provisioning of the object within the Provision() function.


AD LDS SSL woes

This is a classic case of one of those problems you spend too much time on and the root cause is quite simple to resolve.

I created a new AD LDS installation last week and spent a bit of time plodding through the security side of things setting up a couple of AD LDS accounts and the appropriate delegation so that the appropriate service accounts could be used by the applications sitting either side of the directory service in only the intended manner.

Yesterday, I quickly knocked up the provisioning side of things in Forefront Identity Manager 2010 and pushed the accounts across just as a test, and everything seemed fine. That changed when I added the code required for setting the initial password, at which point I started getting an error in FIM of the following nature:

  • In the FIM Synchronisation Service Manager output: cd-error
  • In the properties of the connector space object: “Illegal modify operation. Some aspect of the modification is not permitted.”

When I ran a quick test in LDP I connected just fine, but this was over 389 and without SSL. What I didn’t realise until I checked the Microsoft FIM forums is that SSL connections are required if you’re going to be performing password sets (on the unicodePwd attribute).

So, no big deal. I added a certificate to the workgroup machine at the computer store level and thought I’d be good to go. Wrong!

Okay, so I’d seen this issue before where the service was configured to use the Network Service account, so I dug up the relevant TechNet article and quickly ran through the directory permission changes required, restarted the AD LDS instance and tried again. Still no joy.

At this point I was feeling a bit lost. As you do, I checked the event log, and fortunately, the Security event log held the key to the problem. Here’s the event details:

Event ID: 5061
Source: Microsoft Windows Security Auditing
Keywords: Audit Failure

Cryptographic Parameters:
Provider Name: Microsoft Software Key Storage Provider
Algorithm Name: Not Available.
Key Name: {removed}
Key Type: Machine key.

Cryptographic Operation:
Operation: Open Key.
Return Code: 0x80090011

So, I still had a problem opening the key, which was a little confusing as I expected the above article to have resolved that issue.

Fortunately, I had Process Monitor lying around on my machine, so running it against the server proved to be the trick that solved the problem. Running it while filtering for a Result of ACCESS DENIED while trying to connect to AD LDS with LDP, I trapped the issue and determined that the directory published in the TechNet article didn’t apply! (This was with AD LDS on Server 2008 R2.)

So, the key I was meant to edit was actually under C:\ProgramData\Microsoft\Crypto\Keys. After finding the correct key, I simply added the Network Service account with Read access and LDP immediately was able to bind with the AD LDS account over SSL! (And so was FIM, more to the point!)
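For reference, the permission change itself can be made from the command line rather than Explorer. This is a hedged sketch; the key file name is a placeholder for whichever file Process Monitor identifies as the one being denied.

```
icacls "C:\ProgramData\Microsoft\Crypto\Keys\<keyFileName>" /grant "NETWORK SERVICE":R
```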

I’m not sure if R2 changed the location from the one referenced in the Server 2008 TechNet article, but for such a simple fix, it was an obscure troubleshooting process to have worked through!


Exchange 2010 dynamic distribution lists using non-standard Active Directory attributes.

Update 01/02/2017:

Just a quick note to point out that this still works with Exchange Server 2013 and 2016.


Hi again,

This is a very brief post about a topic I’d often thought about but only acted upon recently, and it’s to do with the creation of dynamic distribution lists in Exchange 2010 where the attributes you want to base the query on are not among those made available through the OPath syntax. This often comes up where you already have useful data stored in Active Directory and do not wish to store redundant copies of it in the CustomAttribute attributes (data redundancy peeves me!).

Firstly, it’s important to note that you cannot manage dynamic distribution lists created in this manner through the EMC – they won’t even appear. You can’t truly manage them from the PowerShell console either, as the property in question, LdapRecipientFilter, is deemed read-only. What you’re left with is having to manage this kind of dynamic distribution list directly with something like Users and Computers, AdsiEdit, LDP – or anything of that nature where you can edit Active Directory attributes directly.

Speaking of Active Directory attributes, these custom queries primarily revolve around two:

  • msExchQueryFilter: Holds the OPath syntax query;
  • msExchDynamicDLFilter: Holds the LDAP syntax query.

The first step in this process is straightforward enough. Use either the EMC or PowerShell environments to create your dynamic distribution list. From my perspective, I don’t worry about what the user query component involves, as I’ll be changing this straight afterwards anyway. Just make sure things like your recipientContainer and organizationalUnit values are what you’d like them to be and leave it at that.

Next, open up the group in Adsiedit – or whatever your tool of choice is, so that you can see real Active Directory attributes (as opposed to the OPath aliases). Here, you want to clear the value from msExchQueryFilter so that it is null.

Next, edit the msExchDynamicDLFilter attribute, placing the LDAP syntax query you wish to run inside. Again, make sure you understand LDAP queries and put the correct value in here, as you’re not going to get any user friendly feedback as you might from either the EMC or Powershell environments.

Quick tip: LDP and Users and Computers are nice for the above phase, as with both of these you can construct your query and test it before you get to the point of pasting it into msExchDynamicDLFilter, saving you from using trial and error (which you shouldn’t be doing anyway) to get your distribution list working!
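As a hedged sketch of the whole process from the Exchange 2010 shell plus the ActiveDirectory module: create the group, then edit the raw attributes directly. The group name, recipient container, and LDAP filter below are all examples, not values from this post.

```powershell
Import-Module ActiveDirectory

# Create the group normally; the initial recipient filter will be replaced
$ddl = New-DynamicDistributionGroup -Name 'All Students' `
    -RecipientContainer 'contoso.com/Accounts' -IncludedRecipients AllRecipients

# Null out the OPath query and set the raw LDAP query instead
Set-ADObject -Identity $ddl.DistinguishedName `
    -Clear msExchQueryFilter `
    -Replace @{ msExchDynamicDLFilter = '(&(objectCategory=person)(objectClass=user)(employeeType=Student))' }
```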

Technically, that’s all you need to do most of the time. I say most of the time because there’s one factor you need to remember, and it may lead to you having a few more steps to do.

As you no doubt remember, Exchange queries are executed against the global catalog (listening on port 3268) rather than LDAP (port 389). And as you no doubt also remember, the global catalog only holds a partial attribute set compared against the data stored in Active Directory for a given object. What this means is that you may have to enable additional attributes to be stored in the global catalog in order to utilise your desired dynamic distribution list. Don’t take this decision lightly, as one of the considerations to keep in mind when working with the global catalog is to keep it lightweight.

If you do find that you pick an attribute to filter on – such as employeeType, that isn’t included in the global catalog by default, you can register schmmgmt.dll from the command line with:

regsvr32 schmmgmt.dll

After which you can add the Active Directory Schema MMC snap-in into a new MMC, navigate to the attribute you want to enable, and do so.
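If you want to check whether an attribute is already replicated to the global catalog before firing up the schema snap-in, a hedged sketch using the ActiveDirectory module follows; employeeType is just the example attribute from above.

```powershell
Import-Module ActiveDirectory

# An attribute is in the global catalog when isMemberOfPartialAttributeSet is True
Get-ADObject -SearchBase (Get-ADRootDSE).schemaNamingContext `
    -LDAPFilter '(lDAPDisplayName=employeeType)' `
    -Properties isMemberOfPartialAttributeSet |
    Select-Object Name, isMemberOfPartialAttributeSet
```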

And now, the only thing left to do is make use of your new dynamic distribution list!

As I mentioned at the start, you won’t be able to manage it from the EMC at all; however, you can manage all facets other than the query itself from PowerShell. For the query itself, you are restricted to maintaining it with whatever Active Directory editing tool you prefer.


Volume activation and you!

It seems to be that time of the season again where folks come out in force to tackle what seems to be one of the more enduring topics of discussion: volume activation for Windows Server 2008 and Vista / Windows 7.
There’s a lot of information and documentation around on this topic, perhaps so much that it overwhelms people. With that in mind, I thought I’d drop a few lines in here to help simplify (not so much demystify) the process people embark on. For those of you who love technical documents, take a look at this article, one of the more recent and helpful Microsoft documents; in fact, I’ll recommend it to anyone working with KMS or MAK activation at all. Beware, though: it’s a comprehensive article that contains far more information than you need to actually get activation working, so if you’re not the type of person who can identify just the parts you need, you might find it overwhelming.
Anyway, on we go with the simplified version.
First, let me clarify the audience. This write-up is aimed at people wanting to get either a KMS host working in a corporate environment, or just folks looking to get a MAK activation working.
Assumptions for the corporate folks:
1. You want to set up a KMS 2.0 host that can activate Windows Server 2008 R2 and Windows 7.
2. You access the Internet through a proxy server.
3. You have either a Server 2008 R2 (preferred) or Server 2008 server that you intend to have perform this role.
Licensing keys
This is always something people get stumped on. For the purposes of this discussion there’s two kinds of keys you need to know about:
  • KMS: This is a special type of key that is typically only available to volume licensing customers. To find your KMS key, you can either ask your account manager for it – if that’s who handles your licensing, or if you are a little more hands-on and have configured an account with which you can log into Microsoft’s Volume Licensing Service Center, then you can get it from there. The important thing to note is you only need one of them!
  • MAK: This key is what folks that own TechNet or TechNet Plus-style subscriptions typically have access to. Multiple Activation Keys (MAK) are used to activate a product once, and once only. Unlike KMS activations where the client checks in every 180 days (by default), MAK activations never check in again once activation has successfully completed. The important thing to note here is that you do not use MAK keys with a KMS host-style topology! Just think of MAK and KMS as being mutually exclusive.
So, which key do you need? If you are looking to deploy a KMS licensing topology, where the KMS host activates all your clients, then you need just the one KMS key. If you are running a smaller environment and don’t require the services of a KMS host (or don’t meet the minimum licensing quantities to be able to use one), then MAK keys are your best option. In order for a KMS host to start activating your clients, you need to have at least five Server 2008 registrations or 25 Windows 7 / Vista registrations. The KMS host will make no attempt to register your licenses with Microsoft until either of these minimum requirements has been met.
So, onwards into the technical.
One of the most common mistakes I come across is that people do not have the correct proxy configuration in place. It’s important to remember that regardless of whether you use a KMS topology or MAK keys, your KMS service hosts or MAK clients need to be able to talk to specific Microsoft resources out in the wild Internet (note the deliberate exclusion of KMS clients, since they only need to talk to the KMS host). I’m not talking about whether you can get out to the Internet, but whether the computer can – there’s a difference.
Now, not all proxy servers are equal here. Here’s a quick guideline:
  • No authentication: Simplest configuration. Simply run the NETSH command that follows to configure the server for Internet access.
  • NTLM authentication: Still simple, though you will need to ensure that either the computer account has Internet access, or that the sites (also found after this section) are put into a group associated with anonymous authentication. Other than this one caveat, it’s the same as the no authentication scenario: just run the NETSH statement.
  • Basic authentication: The computer won’t be able to authenticate with basic credentials, so you will need to organise it so that the list of sites used in activation are added to an anonymous authentication group.

So at this point we have identified the key(s) we need and can get on with the job of setting up the KMS host.

  1. Log onto the server that you wish to be your KMS activation host,
  2. Bring up an elevated command prompt,
  3. Run (optional, based on the above points): netsh winhttp set proxy proxy-server="<proxy:port>" (substitute your own proxy address and port)
  4. Run: slmgr /ipk <5-part KMS key>
  5. That’s all!
For reference, here are the URLs that the KMS host (or MAK client, for those of you waiting on that section) needs to be able to get to:
You can verify the success of the KMS key with the following command: slmgr /dlv
This command will display whether or not this server is now enrolled in the KMS volume activation channel. The following is an example of the first couple of lines of output:
Software licensing service version: 6.0.6002.18112
Name: Windows Server(R), ServerStandard edition
Description: Windows Operating System – Windows Server(R), VOLUME_KMS_R2_B channel
The VOLUME_KMS_R2_B channel name in the Description line is the crucial section, as this confirms the server is capable of responding to Server 2008 R2 and Windows 7 requests. Further down you will notice another section, for which the following is an example:
Key Management Service is enabled on this machine
    Current count: 50
    Listening on Port: 1688
    DNS publishing enabled
    KMS priority: Normal
This is a good section if you have to do any troubleshooting, as firstly it tells you that it is actually configured as a KMS host. Importantly, it also tells you what port it is listening on – which can be good to know when checking if the inbound firewall rule exception has been made. (Please don’t tell me you’re one of those strange people who disables the firewall?!)
There’s also some stats listed, but the only thing that’s important to know about these stats is that until either 25 clients or 5 servers have registered, the KMS host will not even try to contact the Internet to validate their activation requests.
Just for reference, if you have successfully set up the KMS host, and you aren’t having dramas with any of the following, then you should find you now have a working KMS activation server:
  • Proxy server authentication.
  • DNS SRV record registration (_VLMCS._tcp record).
  • Windows Firewall blocking inbound traffic for the KMS listening port.
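Each of those three points can be checked quickly from a command prompt. These are hedged examples; the domain name below is a placeholder for your own.

```
REM DNS SRV record registration
nslookup -type=srv _vlmcs._tcp.example.internal

REM Proxy configuration on the KMS host
netsh winhttp show proxy

REM Firewall: look for an inbound allow rule covering TCP 1688
netsh advfirewall firewall show rule name=all | findstr /i "1688"
```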
As an example of a successful client registration, if you run the slmgr /dlv command on a Windows 7 machine, you will see a bit of output that starts with the following:
Software licensing service version: 6.1.7600.16385
Name: Windows(R) 7, Enterprise edition
Description: Windows Operating System – Windows(R) 7, VOLUME_KMSCLIENT channel
Note the different channel name? This is indicative of what you should be seeing on all of your clients if they’re successfully registering (and once you have passed the initial activation requirements of 5 servers or 25 clients – prior to that the activation status will differ).
MAK activation:
Okay! This is the simple one!
For MAK activation, this couldn’t be any simpler. In terms of the steps required, you have some of the same concerns as the KMS folk in that you need to be sure that each and every computer can talk to the relevant Microsoft URLs required for successful activation (this is one of the benefits of a KMS infrastructure: only the KMS server needs to talk to the Internet!).
Assuming the clients can all talk to the Internet, then your process for activation is similar to that for setting up a KMS host:
  1. Log onto the client that requires activation.
  2. Open an elevated command prompt.
  3. Run: slmgr /ipk <5-part MAK key>
  4. Done!
Again, as with the KMS folks, you can also verify your MAK activation success (or failure) by running the slmgr /dlv command.

Posted March 15, 2010 by Lain Robertson in Windows Server 2008 R2


IIS7 and UNC-based virtual directories.

Today threw up another one of those wonderful “if you say so” type errors. This time, it was with IIS7 on Windows Server 2008 SP2 (not R2, in this case). If you want to skip the story, head to the end for some tips on troubleshooting the issue (keep in mind though that this is for a specific issue, mainly revolving around apparent access denied and HTTP 500.19 server errors).
We house our intranet photos on the server that also houses related data, and one of our team is in the process of drumming up a new web application that he wanted to use the photos in. This led to the fairly common scenario of wanting to create a virtual folder on his web server that pointed to the JPEGs on the external server.
Now, in the days of IIS6, this was both simple and intuitive, but woe unto me for trying to get this going in a hurry on IIS7! Strictly speaking, the resolution is very few steps, but as per usual, I was trying to jump into something feet first without having kept myself up to date with yet another component’s changes. As a refresher, under IIS6, all one had to do was create the new virtual directory, utilise the “connect as” option, and voila, you were done. The “tricksy” (just watched Lord of the Rings yesterday, and I had to find a reason to use that word!) nature of this task in IIS7 is that the UI looks pretty much the same. However, I ran into half a day’s worth of dramas while I tried to work through what the issue was.
As part of the diagnostic process, I quickly established via the Security event log that for whatever reason, IIS7 was trying to verify this “connect as” account existed locally before using it to connect with the foreign system. This was borne out by the fact that there were no entries in that server’s event log – not even anonymous ones. That problem was taken care of inside the first five minutes of looking at this issue by simply creating a local account with the same username and credentials. Okay, but we still didn’t have liftoff, Captain. Why not?!
At this point, I was still getting a lot of errors, and they all seemed to come back to the good old “Access Denied” root cause, which of course I didn’t buy, because I had verified with a net use command that access wasn’t in fact the issue. At least, it wasn’t an issue with the “connect as” account, and a quick scan of the foreign Security event log confirmed that the computer account – for whatever reason – was trying to authenticate with the foreign machine. That was never going to happen, since the foreign machine is a workgroup machine, unlike the IIS7 server, which is domain joined. The error indicated was HTTP error 500.19.
Naturally, I decided that was an oversight on my part and headed straight to the Authentication applet to enable Windows authentication, except I couldn’t get into the applet. After double clicking it, the status column remained stuck on something to the effect of “refreshing”, much to my annoyance, and trying to open any option brought up a new error that the web.config file couldn’t be found. Suffice to say I was a little bewildered. A web.config for a virtual directory? I don’t think so, Tim.
So, jumping to the end of this chain of events, I ended up deleting the virtual directory, going back to the site root and enabling Windows Authentication at that level, then recreating the virtual directory, and lo and behold, it worked exactly as it should have from the outset!
So, in point form, here’s what I found to be different from IIS6:
  • Connect As account needs to exist locally on the IIS7 server if both the IIS7 server and the foreign host do not live in the same domain (or forest, if you want to expand into larger topologies).
  • You may need to enable one of the additional authentication mechanisms before you create the virtual directory to avoid the 500.19 errors the server will throw if it’s trying to track down the web.config file. Personally, I used the Windows Authentication option, but I suspect the others would have their uses depending on what you’re hosting your content on.
  • When filling out the Connect As window, do NOT use the local hostname in front of the account name, as that creates a disassociation between the local version of the Connect As account and the foreign system’s version. This is one point you should pay extra attention to, as I read a number of articles suggesting you should in fact use the hostname. As I said already, do NOT do this under IIS7. So, to use an example, you want to use “JohnDoe” as the username, and not “hostname\JohnDoe”.
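For what it’s worth, the same configuration can be made with appcmd rather than the UI. This is a sketch only; the site name, virtual path, UNC share, and account details below are all examples.

```
%windir%\system32\inetsrv\appcmd add vdir /app.name:"Default Web Site/" /path:/photos /physicalPath:\\fileserver\photos

%windir%\system32\inetsrv\appcmd set vdir "Default Web Site/photos" -userName:JohnDoe -password:yourPassword
```

Note the username follows the same rule as the UI bullet above: no hostname prefix.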

Anyway, as I say, the change itself is just as quick to make as it ever was, there’s just a few things to watch out for in terms of new configuration traps.


Posted February 9, 2010 by Lain Robertson in Windows Server 2008


2008 R2 Certificate Services

I could almost cry over this one – it’s a classic case of trying to be too thorough being my undoing.
I’ve had issues with Server 2003 machines not enrolling correctly against a 2008 R2 enterprise CA, but no matter what error I typed verbatim into the various search engines, I couldn’t find much at all, let alone anything useful. Eventually, I plonked “Server 2003” SHA512 error into Google and lo and behold, the best possible result came up: a knowledgebase article! (And as a tidbit of interest, I put the same search into Bing, and it returned the result second in the list, compared to Google’s seventh! Way to go, Bing!)
Here are the details:
Server 2008 R2 CA information
Signing is in SHA512 (though the article applies to SHA256 as well)
Server 2003 information
Partial error text:
  • From viewing the Trusted Root certificate:
    • General tab: The integrity of this certificate cannot be guaranteed. The certificate may be corrupted or may have been altered.
    • Certification Path tab: Certificate status: The certificate has a nonvalid digital signature.
  • From trying to request a new certificate:
    • The wizard cannot be started because of one or more of the following conditions:
      – There are no trusted certification authorities (CAs) available.
      – You do not have the permissions to request certificates from the available CAs.
      – The available CAs issue certificates for which you do not have permissions.

In reality, that last dialog should have had a fourth option, I think: You might need an update to your cryptographic provider! Yeah! Anyway, without further ado, you can find the Microsoft support article here! Hopefully this helps if you’re having trouble resolving the same problem I had! (The Server 2003 problem, that is – not the poor searching issue!)
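
If you want to confirm which hash algorithm the CA is actually signing with before applying the client-side fix, certutil can read it straight out of the CA’s registry configuration. A sketch, to be run on the CA itself – the CNGHashAlgorithm value applies to CNG-based CAs such as the 2008 R2 one described here:

```shell
rem On the CA: show the hash algorithm used for signing (for example, SHA512).
certutil -getreg ca\csp\CNGHashAlgorithm

rem On the Server 2003 client: verify the exported root against the local chain.
rem (rootca.cer is a placeholder for your own exported root certificate file.)
certutil -verify rootca.cer
```

On an unpatched Server 2003 machine, the second command should reproduce the “nonvalid digital signature” status shown above.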


PS, here are two other links I stumbled across that are potentially really good to have on hand, even if they apply to different versions of ADCS:

Posted February 5, 2010 by Lain Robertson in Windows Server 2008 R2


Group policy: Bug with Scheduled Tasks preferences.   Leave a comment

Just a quick update that might interest anyone trying to utilise multiple Scheduled Tasks Preferences. Rather than rehashing the post here, take a read of the bug description and acknowledgement in the Microsoft Group Policy forums.

Automatic Updates and Windows Server 2008 Server Core installations.   Leave a comment

Just a quick update to share a script used to install updates that have already been downloaded by the Windows Update client. Thinking about it, it’s a shame that the wuauclt.exe application doesn’t have an /install switch, since it already has the pre- and post-requisites of /detectnow and /reportnow respectively. Oh well, the script was simple enough.
As an important aside, I wrote this for the WSUS 3 client, since that’s what we have here in our environment. This isn’t an issue for us given it’s destined for use on Server 2008 Core installations, but if you intend to use it elsewhere, just keep that in mind.
This is not designed to be a detection script, since you should already have configured your Windows Update settings via Group Policy. It’s purely meant to allow updates to be installed from the command line.
To execute the script, copy and save the content below into a JavaScript file (for example, installUpdates.js), then run it from the command line with the following syntax:
cscript //nologo installUpdates.js
You will not be able to run this through the wscript host (which is by design). In any case, here’s the code.
/*
  A simple script written for Server 2008 Server Core installations to install updates that
  have already been downloaded.
*/

var oWSUSSession, oWSUSSearcher, oWSUSUpdates, oWSUSInstaller;
var oUpdates, oUpdate, oInstallResult;

var oError, oDebug = true;  // Switch oDebug to false to avoid some of the text output.

try {
  oWSUSSession = WScript.CreateObject("Microsoft.Update.Session");
  oWSUSUpdates = WScript.CreateObject("Microsoft.Update.UpdateColl");
  oWSUSSearcher  = oWSUSSession.CreateUpdateSearcher();
  oWSUSInstaller = oWSUSSession.CreateUpdateInstaller();

  // Perform a search against the WSUS server to see what updates are required.
  oUpdates = oWSUSSearcher.Search("IsInstalled=0 AND Type='Software'");
  if (oDebug) WScript.StdOut.WriteLine("Detected updates = "+ oUpdates.Updates.Count);

  if (oDebug) WScript.StdOut.WriteLine("Enumerating search results:");

  for (var i = 0; i < oUpdates.Updates.Count; i++) {
    oUpdate = oUpdates.Updates.Item(i);

    if (oDebug) WScript.StdOut.WriteLine("- Downloaded="+ oUpdate.IsDownloaded + "," + oUpdate.Title);

    // We're only interested in updates that are already downloaded.
    // We're only interested in updates that are already downloaded.
    if (oUpdate.IsDownloaded) {
      oWSUSUpdates.Add(oUpdate);
    }
  }

  if (oDebug) WScript.StdOut.WriteLine();

  // If we have some updates that are classed as downloaded, then it's now time to install them!
  if (oWSUSUpdates.Count > 0) {

    WScript.StdOut.WriteLine("Updates to install = "+ oWSUSUpdates.Count);
    oWSUSInstaller.Updates = oWSUSUpdates;

    // Run the installer.
    oInstallResult = oWSUSInstaller.Install();
    WScript.StdOut.WriteLine("Reboot required = "+ oInstallResult.RebootRequired);
  }
} catch(oError) {
  WScript.StdOut.WriteLine("Error: "+ oError.message);
  WScript.Quit(1);
}
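
Since Server Core has no Windows Update UI, you may find it handy to run the script from a scheduled task rather than remembering to do it by hand. A hedged example only – it assumes the script was saved to C:\Scripts\installUpdates.js, and the schedule and task name are placeholders to adjust to taste:

```shell
rem Run the update-install script every Saturday at 3am as SYSTEM.
schtasks /create /tn "Install downloaded updates" /tr "cscript.exe //nologo C:\Scripts\installUpdates.js" /sc weekly /d SAT /st 03:00 /ru SYSTEM
```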

Posted January 29, 2010 by Lain Robertson in Server Core, Windows Server 2008 R2
