Installing WSUS on Windows Server 2012 Server Core

Installing the WSUS Windows Feature
This only covers a default installation using the locally installed Windows Internal Database. For a more comprehensive walkthrough, have a read of this article by Boe Prox.

  1. Open an elevated Powershell session on the server
  2. Run: Install-WindowsFeature -Name UpdateServices -IncludeManagementTools
  3. Run: wsusutil postinstall CONTENT_DIR=D:\Wsus

The Wsusutil.exe utility can be found by default under “C:\Program Files\Update Services\Tools”.

The CONTENT_DIR directive is optional, but given how large the update repository can become, it’s fairly common to dedicate a separate drive to it. The command itself, amongst other things, creates the WSUS database within the WID.
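
For reference, the whole install can be run as a short script from the same elevated PowerShell session. A minimal sketch, assuming D:\Wsus is the content path you want (substitute your own):

    # Install the WSUS role and management tools (no GUI needed on Server Core).
    Install-WindowsFeature -Name UpdateServices -IncludeManagementTools

    # Run the post-install configuration, pointing the content store at a dedicated drive.
    # D:\Wsus is an assumption - use whichever volume you've set aside for updates.
    & 'C:\Program Files\Update Services\Tools\WsusUtil.exe' postinstall CONTENT_DIR=D:\Wsus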

If the host you’re installing WSUS onto happens to be a virtual guest (and even on physical hardware this isn’t a bad idea), you might want to specify an upper memory limit for the WID, much as you would for SQL Server itself. You can do this as follows:

Optional: Configuring WID (SQL Server 2012 base) memory usage

  1. Download and install the SQL Server 2012 Native Client (part of the SQL Server 2012 Feature Pack). See installation notes below.
  2. Download and install the SQL Server 2012 command line utilities, also from the Feature Pack. See installation notes below.
  3. Open an elevated command prompt
  4. Change directory to “C:\Program Files\Microsoft SQL Server\110\Tools\Binn”
  5. Run: sqlcmd -S \\.\pipe\MICROSOFT##WID\tsql\query -E
  6. Run each of the following at the interactive prompt:
    sp_configure 'show advanced options', 1
    reconfigure
    go
    sp_configure 'max server memory', 256
    reconfigure
    go
    exit

The figure of 256 indicates 256MB. You can tune that upwards or downwards as you see fit. Just keep in mind that the W3WP.exe processes will end up consuming a fair bit of memory as well, and you don’t want the two fighting over physical memory, only for one to lose and end up thrashing the page file.
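
If you’d prefer to avoid the interactive prompt entirely, the same change can be pushed through in a single command. A minimal sketch, assuming the same 256MB cap and the default WID pipe name used above:

    sqlcmd -S \\.\pipe\MICROSOFT##WID\tsql\query -E -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE; EXEC sp_configure 'max server memory', 256; RECONFIGURE;"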

With the SQL components downloaded in steps 1 and 2 above, you can install them on Server 2012 Server Core with the following commands:

  • sqlncli.msi /qb IACCEPTSQLNCLILICENSETERMS=YES
  • SqlCmdLnUtils.msi /qb

Cheers,
Lain


Enabling the IIS Management Service on Server Core 2012

Install the IIS Management Service (assuming IIS is already installed). A scripted equivalent of these steps follows the list below.

  • Open an elevated Powershell session
  • Run: Install-WindowsFeature -Name Web-Mgmt-Service
  • Run: sc.exe config WMSVC start= auto (note the space after the equals sign; using sc.exe rather than sc also avoids PowerShell’s sc alias)
  • Run Regedit.exe and navigate to HKLM\Software\Microsoft\WebManagement\Server
  • Change the EnableRemoteManagement DWORD value from 0 to 1
  • Run: Start-Service WMSVC
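
If you’d rather avoid Regedit, the same steps can be scripted from the elevated PowerShell session. A minimal sketch, assuming only the default registry location used above:

    # Install the management service feature.
    Install-WindowsFeature -Name Web-Mgmt-Service

    # Allow remote connections (the equivalent of the Regedit step above).
    Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\WebManagement\Server' -Name EnableRemoteManagement -Value 1

    # Set the service to start automatically, then start it now.
    Set-Service -Name WMSVC -StartupType Automatic
    Start-Service -Name WMSVC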

Optional: Enrol a certificate from an internal AD CA

  • Open an elevated Powershell session
  • Launch Notepad
  • Add the following lines to the new file:
    [NewRequest]
    Subject="cn=yourServer.yourDomain.com"
    Exportable=TRUE
    [RequestAttributes]
    CertificateTemplate="WebServer"
  • Save the file as something ending in .inf, for example iis.inf
  • Run: certreq -new d:\temp\iis.inf d:\temp\request.txt
  • Run: certreq -submit d:\temp\request.txt d:\temp\iiscert.cer
  • Run: certreq -accept d:\temp\iiscert.cer
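
Given Server Core doesn’t lend itself to comfortable text editing, it can be easier to script the request end to end. A rough sketch only, assuming D:\Temp exists and your CA publishes the WebServer template; yourServer.yourDomain.com is a placeholder:

    # Write the request .inf (same content as above).
    $inf = '[NewRequest]',
           'Subject="cn=yourServer.yourDomain.com"',
           'Exportable=TRUE',
           '[RequestAttributes]',
           'CertificateTemplate="WebServer"'
    Set-Content -Path D:\Temp\iis.inf -Value $inf

    # Generate, submit and install the certificate.
    certreq -new D:\Temp\iis.inf D:\Temp\request.txt
    certreq -submit D:\Temp\request.txt D:\Temp\iiscert.cer
    certreq -accept D:\Temp\iiscert.cer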

Optional: Changing the listener certificate

  • Open an elevated PowerShell session
  • Run: Get-ChildItem -Path “cert:\localmachine\my”
  • Copy the thumbprint for the certificate you enrolled above
  • Run the following
    netsh
    http
    delete sslcert ipport=0.0.0.0:8172
    For the next command, replace yourCert with the thumbprint copied from step 3:
    add sslcert ipport=0.0.0.0:8172 certhash=yourCert appid={00000000-0000-0000-0000-000000000000} certstorename=MY verifyrevocationwithcachedclientcertonly=disable usagecheck=enable dsmapperusage=disable clientcertnegotiation=disable
  • Run: show sslcert, just to check the binding was successfully applied with the nominated settings (even if the output from the previous command reported success)
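
To save hand-copying the thumbprint, PowerShell can look the certificate up and hand it to netsh for you. A rough sketch only; it assumes the certificate subject contains your server’s FQDN, so adjust the filter to suit:

    # Find the certificate enrolled above by its subject (an assumption - adjust the match).
    $cert = Get-ChildItem -Path 'cert:\LocalMachine\My' |
        Where-Object { $_.Subject -like '*yourServer.yourDomain.com*' } |
        Select-Object -First 1

    # Rebind the management service listener on port 8172 to that certificate.
    netsh http delete sslcert ipport=0.0.0.0:8172
    netsh http add sslcert ipport=0.0.0.0:8172 certhash=$($cert.Thumbprint) appid='{00000000-0000-0000-0000-000000000000}' certstorename=MY

    # Confirm the binding took.
    netsh http show sslcert ipport=0.0.0.0:8172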

Assuming you completed the optional steps, you can now bind to the IIS Management Service without receiving the certificate trust warning.

If you elected to skip the optional procedures, you will still be able to connect, you’ll just have to put up with the warnings.

Cheers,
Lain

Installing and configuring the Outlook Live Management Agent with Forefront Identity Manager 2010.

Hi folks,

As much as it surprises me, I still receive the odd question about the Outlook Live Management Agent when used in conjunction with Forefront Identity Manager 2010. It’s with that in mind that I’m providing the following brief write-up on how to manually install the OLMA, and although there’s not a lot of value in covering the attribute flow in depth, I’ll at least provide a guideline on how to configure the management agent to work with Live@EDU.

Please pay attention to the fact that this relates to a manual installation of the agent and its subsequent configuration. We do not use the Self Service Portal component of FIM 2010, as SharePoint is not our university’s standard for collaboration. As such, we only use the Synchronisation Service Manager along with writing the code ourselves.

Part 1: Installing the Outlook Live Management Agent.

  1. Download the “OLSync R4 Download Package.zip” file from connect.microsoft.com – you’ll need to use your Live@EDU registered admin account to do this;
  2. Extract the contents of the .zip file;
  3. Run the Galsync_R4_v2.msi installer:
    1. Welcome screen = Next;
    2. License agreement page = I agree & Next;
    3. Installation option = Extract files for manual installation & Next;
    4. Extract files = choose a directory & Extract;
    5. Finish.
  4. Using Explorer, navigate to the location you extracted the files to in step 3 above, where you should see the following three sub-directories:
    1. Extensions;
    2. SourceCode;
    3. UIShell
  5. Copy all of the contents of each directory as follows (a PowerShell sketch of this copy follows the list; I’m just using the default installation directory for FIM as the destination):
    1. Extensions -> C:\Program Files\Microsoft Forefront Identity Manager\2010\Synchronization Service\Extensions
    2. SourceCode -> C:\Program Files\Microsoft Forefront Identity Manager\2010\Synchronization Service\SourceCode
    3. UIShell\XMLs\PackagedMAs -> C:\Program Files\Microsoft Forefront Identity Manager\2010\Synchronization Service\UIShell\XMLs\PackagedMAs
  6. You have now completed a manual installation of the Outlook Live Management Agent.
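
A minimal sketch of the copy in step 5, assuming D:\Temp\OLSync is where you extracted the package (substitute your own paths):

    # Source is wherever the installer extracted to in step 3 (an assumption).
    $src  = 'D:\Temp\OLSync'
    $dest = 'C:\Program Files\Microsoft Forefront Identity Manager\2010\Synchronization Service'

    Copy-Item -Path "$src\Extensions\*" -Destination "$dest\Extensions" -Recurse -Force
    Copy-Item -Path "$src\SourceCode\*" -Destination "$dest\SourceCode" -Recurse -Force
    Copy-Item -Path "$src\UIShell\XMLs\PackagedMAs\*" -Destination "$dest\UIShell\XMLs\PackagedMAs" -Force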

Part 2: Configuring the Outlook Live Management Agent.

  1. Start the Forefront Synchronisation Manager;
  2. Select the “Management Agents” tab;
  3. Choose the Create action either from the side menu, or the context-sensitive menu;
  4. Choose “Outlook Live Management Agent” from the drop-down list named “Management agent for”;
  5. Give it whatever name you feel suits the purpose;
  6. Next;
  7. Set “Connect to” equal to “https://ps.outlook.com/powershell”;
  8. Set “User” equal to the account you created as your service account, which by default will look something like “olsync@university.edu.au”;
  9. Set “Password” equal to whatever you set the password to be;
  10. Next;
  11. Click the New button:
    1. Set “Parameter Name” equal to “ProvisioningDomain”;
    2. Set “Value” equal to your Live@EDU domain name, for example “university.edu.au”;
    3. For a list of this and other parameters you can set, have a read of this outlook.com help page.
  12. OK;
  13. Next;
  14. Next (skipping “Configure Attributes”);
  15. Next (skipping “Map Object Types”);
  16. Next (skipping “Define Object Types”);
  17. Next (skipping “Configure Connector Filter” – though you may want to come back to this depending on your requirements);
  18. Configuring the “Join and Projection Rules” section depends on your current FIM topology and could take an eternity to discuss. If you’ve worked with ILM/FIM before, just do what you do best here. If you have absolutely no idea, then you can use the following as a simplistic example for creating mailboxes. We use the metaverse attribute “accountName” as our primary key, meaning our configuration for this screen is as follows:
    1. Highlight “Mailbox”;
    2. Click “New Join Rule”;
    3. On the left side (“Data source”) choose “Alias”;
    4. On the right side (metaverse) choose “accountName”;
    5. Click the “Add Condition” button, and if you’re prompted about the attribute being non-indexed, just accept that and move on;
    6. OK;
  19. Next;
  20. Okay, with this screen you’re largely on your own – sorry. There’s just too much scope for variance here between organisations/institutions, and it’s extremely likely you’re also going to be dealing with writing your own rule extensions here, too. Still, just so you have some point of reference, here are the attributes we populate, with what they’re based on in brackets:
    1. UserPrincipalName (custom e-mail address attribute);
    2. Name (accountName metaverse attribute);
    3. DisplayName (displayName metaverse attribute);
    4. Alias (accountName metaverse attribute);
    5. WindowsLiveID (custom e-mail address attribute – same as UserPrincipalName);
    6. FirstName (firstName metaverse attribute);
    7. LastName (lastName metaverse attribute);
    8. EmailAddresses (rule extension as there are multiple addresses added to accounts, and we also have to be able to handle name changes – as I suspect you will, too);
  21. Next;
  22. Next (skipping “Deprovisioning” – again, it’s up to you as to how you handle this – if at all);
  23. If you have enabled PCNS – or intend to, then you can use this final screen (“Configure Extensions”) to enable password management, and if you have written one, to include the “Rules extension name” (a .DLL file – which is beyond the scope of this article).
  24. You have now finished defining the structure of your Outlook Live Management Agent.

Part 3: Rules extensions.

This is an exceptionally important part of the process, but beyond the scope of this article. Essentially, if you’re not already familiar with ILM/FIM then you’re possibly not aware that you will need to create at least one rule extension which handles the provisioning of new objects into the Live@EDU connector space.

If your deployment requires it, you may also need to write another rules extension that handles the customised calculation of values to flow back out from the metaverse to the connector space for the OLMA. To give you a simple example, the code might do something as simple as combine a student’s given name and surname to produce a display name. You can’t do this with the “Direct” flow (in the attribute flow screen of the MA). It needs to be an “Advanced” export flow, for which you specify the rule name and write the code to go along with it.

At this point you have done enough to get the OLMA talking to Live@EDU – so long as there are no other peripheral issues such as ports being blocked by firewalls and whatnot. You can proceed to run Full Import and Full Synchronise cycles to populate the connector space and metaverse respectively, though before you can provision accounts into Live@EDU, you’ll have to write your own code to handle the provisioning of the object within the Provision() function.

Cheers,
Lain

AD LDS SSL woes

This is a classic case of one of those problems you spend too much time on, only to find the root cause is quite simple to resolve.

I created a new AD LDS installation last week and spent a bit of time plodding through the security side of things, setting up a couple of AD LDS accounts and the delegation needed so that the relevant service accounts could be used by the applications sitting either side of the directory service in only the intended manner.

Yesterday, I quickly knocked up the provisioning side of things in Forefront Identity Manager 2010 and pushed the accounts across just as a test, and everything seemed fine. That changed when I added the code required for setting the initial password, at which point I started getting an error in FIM of the following nature:

  • In the FIM Synchronisation Service Manager output: cd-error
  • In the properties of the connector space object: “Illegal modify operation. Some aspect of the modification is not permitted.”

When I ran a quick test in LDP I connected just fine, but this was over 389 and without SSL. What I didn’t realise until I checked the Microsoft FIM forums is that SSL connections are required if you’re going to be performing password sets (on the unicodePwd attribute).

So, no big deal. I added a certificate to the workgroup machine’s computer certificate store and thought I’d be good to go. Wrong!

Okay, so I’d seen this issue before where the service was configured to use the Network Service account, so I dug up this article (http://technet.microsoft.com/en-us/library/cc725767(WS.10).aspx), quickly ran through the directory permission changes required, restarted the AD LDS instance and tried again. Still no joy.

At this point I was feeling a bit lost. As you do, I checked the event log, and fortunately, the Security event log held the key to the problem. Here’s the event details:

Event ID: 5061
Source: Microsoft Windows Security Auditing
Keywords: Audit Failure

Cryptographic Parameters:
Provider Name: Microsoft Software Key Storage Provider
Algorithm Name: Not Available.
Key Name: {removed}
Key Type: Machine key.

Cryptographic Operation:
Operation: Open Key.
Return Code: 0x80090011

So, I still had a problem opening the key, which was a little confusing as I expected the above article to have resolved that issue.

Fortunately, I had Process Monitor lying around on my machine, so running it from a remote session on the server proved to be the trick that solved the problem. When I ran it while filtering for a Result of ACCESS DENIED while trying to connect to AD LDS with LDP, I trapped the issue and determined that the directory published in the Technet article didn’t apply! (This is while using AD LDS on Server 2008 R2.)

So, the key I was meant to edit was actually under C:\ProgramData\Microsoft\Crypto\Keys. After finding the correct key, I simply added the Network Service account with Read access, and LDP was immediately able to bind with the AD LDS account over SSL! (And so was FIM, more to the point!)
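
For reference, the fix itself boils down to something like the following from an elevated PowerShell session. A rough sketch only: you still need to work out which file under Keys holds your certificate’s private key, and sorting by LastWriteTime is just a heuristic for spotting a recently installed one.

    # List the machine keys, newest first - the key created when the certificate was
    # installed will usually be near the top (a heuristic, not a guarantee).
    Get-ChildItem 'C:\ProgramData\Microsoft\Crypto\Keys' | Sort-Object LastWriteTime -Descending

    # Grant Network Service read access to the relevant key file (placeholder file name).
    icacls 'C:\ProgramData\Microsoft\Crypto\Keys\<yourKeyFileName>' /grant 'NETWORK SERVICE:R'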

I’m not sure if R2 changed the location from the one referenced in the Server 2008 Technet article, but for such a simple fix, it was an obscure troubleshooting process to have worked through!

Cheers,
Lain

Exchange 2010 dynamic distribution lists using non-standard Active Directory attributes.

Update 01/02/2017:

Just a quick note to point out that this still works with Exchange Server 2013 and 2016.

Cheers,
Lain


Hi again,

This is a very brief post about a topic I’d often thought about but only acted upon recently, and it’s to do with the creation of dynamic distribution lists in Exchange 2010 where the attributes you want to base the query on are not part of those made available through the OPath syntax. This often comes up where you already have useful data stored in Active Directory and do not wish to store redundant copies of it in the CustomAttribute attributes (data redundancy peeves me!).

Firstly, it’s important to note that you cannot manage dynamic distribution lists created in this manner through the EMC – they won’t even appear. You can’t truly manage them from the Powershell console either, as the property in question, LdapRecipientFilter, is read-only. What you’re left with is having to manage this kind of dynamic distribution list directly from something like Users and Computers, AdsiEdit, LDP, or anything else that lets you edit Active Directory attributes directly.

Speaking of Active Directory attributes, these custom queries primarily revolve around two of them:

  • msExchQueryFilter: Holds the OPath syntax query;
  • msExchDynamicDLFilter: Holds the LDAP syntax query.

The first step in this process is straightforward enough. Use either the EMC or Powershell environments to create your dynamic distribution list. From my perspective, I don’t worry about what the user query component involves, as I’ll be changing it straight afterwards anyway. Just make sure things like your RecipientContainer and OrganizationalUnit values are what you’d like them to be and leave it at that.

Next, open up the group in Adsiedit – or whatever your tool of choice is, so that you can see real Active Directory attributes (as opposed to the OPath aliases). Here, you want to clear the value from msExchQueryFilter so that it is null.

Next, edit the msExchDynamicDLFilter attribute, placing the LDAP syntax query you wish to run inside. Again, make sure you understand LDAP queries and put the correct value in here, as you’re not going to get any user friendly feedback as you might from either the EMC or Powershell environments.

Quick tip: LDP and Users and Computers are nice for the above phase, as both let you construct and test your query before you get to the point of pasting it into msExchDynamicDLFilter, saving you from using trial and error (which you shouldn’t be doing anyway) to get your distribution list working!
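
If you’d rather script the attribute changes than click through AdsiEdit, the Active Directory PowerShell module can do it. A rough sketch, where the group’s distinguished name and the employeeType-based LDAP query are purely placeholders for your own values:

    Import-Module ActiveDirectory

    # The dynamic distribution list created earlier via the EMC or shell (placeholder DN).
    $ddl = 'CN=All Contractors,OU=Groups,DC=yourDomain,DC=com'

    # Clear the OPath filter, then drop the raw LDAP query into msExchDynamicDLFilter.
    Set-ADObject -Identity $ddl -Clear msExchQueryFilter
    Set-ADObject -Identity $ddl -Replace @{msExchDynamicDLFilter = '(&(objectCategory=person)(objectClass=user)(mailNickname=*)(employeeType=Contractor))'}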

Technically, that’s all you need to do most of the time. I say most of the time because there’s one factor you need to remember, and it may lead to you having a few more steps to do.

As you no doubt remember, Exchange queries are executed against the global catalog (listening on port 3268) rather than LDAP (port 389). And as you no doubt also remember, the global catalog holds only a partial attribute set compared to the data stored in Active Directory for a given object. What this means is that you may have to enable additional attributes for replication to the global catalog before you can utilise your desired dynamic distribution list. Don’t take this decision lightly, as one of the considerations to keep in mind when working with the global catalog is to keep it lightweight.

If you do find that the attribute you want to filter on – such as employeeType, isn’t included in the global catalog by default, you can register schmmgmt.dll from the command line with:

regsvr32 schmmgmt.dll

After which you can add the Active Directory Schema MMC snap-in into a new MMC, navigate to the attribute you want to enable, and do so.
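
For what it’s worth, the setting behind that snap-in change is the isMemberOfPartialAttributeSet flag on the attribute’s schema object, so if you prefer the shell (and hold Schema Admins rights), a rough sketch of the same change looks like this, with employeeType as the example attribute:

    Import-Module ActiveDirectory

    # Locate the schema object for the attribute you want in the global catalog.
    $schemaNC = (Get-ADRootDSE).schemaNamingContext
    $attr = Get-ADObject -SearchBase $schemaNC -LDAPFilter '(lDAPDisplayName=employeeType)'

    # Flag it for inclusion in the partial attribute set; schema writes go to the schema master.
    $schemaMaster = (Get-ADForest).SchemaMaster
    Set-ADObject -Identity $attr -Replace @{isMemberOfPartialAttributeSet = $true} -Server $schemaMaster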

And now, the only thing left to do is make use of your new dynamic distribution list!

As I mentioned at the start, you won’t be able to manage it from the EMC at all, however you can manage all facets other than the query itself from Powershell. For the query itself, you are restricted to whatever Active Directory editing tool you prefer to maintain it.

Cheers,
Lain

Volume activation and you!

It seems to be that time of the season again where folks come out in force to tackle one of the more enduring topics of discussion: volume activation for Windows Server 2008 and Vista / Windows 7.
 
There’s a lot of information and documentation around on this topic, perhaps so much so that it overwhelms people. With that in mind, I thought I’d drop a few lines in here to help simplify – not so much to demystify, the process people embark on. For those of you who love technical documents – in fact, I’ll recommend this reading to anyone working with KMS or MAK activation at all, take a look at this article, which is one of the more recent and helpful Microsoft documents. Though beware, it’s a comprehensive article that contains far more information than you need to actually get activation working, so if you’re not the type of person who can identify just the parts you need, you might find it overwhelming.
 
Anyway, on we go with the simplified version.
 
First, let me clarify the audience. This write-up is aimed at people wanting to get either a KMS host working in a corporate environment, or just folks looking to get a MAK activation working.
 
Assumptions for the corporate folks:
1. You want to set up a KMS 2.0 host that can activate Windows Server 2008 R2 and Windows 7.
2. You access the Internet through a proxy server.
3. You have either a Server 2008 R2 (preferred) or Server 2008 server that you intend to use for this role.
 
Licensing keys
This is always something people get stumped on. For the purposes of this discussion there’s two kinds of keys you need to know about:
  • KMS: These are special keys that are typically only available to volume licensing customers. To find your KMS key, you can either ask your account manager for it – if that’s who handles your licensing, or if you are a little more hands-on and have configured an account with which you can log into Microsoft’s Volume Licensing Service Center, then you can get it from there. The important thing to note is you only need one of them!
  • MAK: This key is what folks that own Technet or Technet Plus-style subscriptions typically have access to. Multiple Activation Keys (MAK) are used to activate a product once, and once only. Unlike KMS activations, where the client checks in every 180 days (by default), MAK activations never check in again once activation has successfully completed. The important thing to note here is that you do not use MAK keys with a KMS host-style topology! Just think of MAK and KMS as being mutually exclusive.
So, which key do you need? If you are looking to deploy a KMS licensing topology, where the KMS host activates all your clients, then you need just the one KMS key. If you are running a smaller environment and don’t require the services of a KMS host (or don’t meet the minimum licensing quantities to be able to use one), then MAK keys are your best option. In order for a KMS host to start activating your clients, you need to have at least five Server 2008 registrations or 25 Windows 7 / Vista registrations. The KMS host will not activate any clients until either of these minimum requirements has been met.
 
So, onwards into the technical.
 
One of the most common mistakes I come across is that people do not have the correct proxy configuration in place. It’s important to remember that regardless of whether you use a KMS topology or MAK keys, your KMS service hosts or MAK clients need to be able to talk to specific Microsoft resources out in the wild Internet (note the deliberate exclusion of KMS clients, since they only need to talk to the KMS host). I’m not talking about whether you can get out to the Internet, but whether the computer can – there’s a difference.
 
Now, not all proxy servers are equal here. Here’s a quick guideline:
  • No authentication: Simplest configuration. Simply run the NETSH command that follows to configure the server for Internet access.
  • NTLM authentication: Still simple, though you will need to ensure that either the computer account has Internet access, or that the sites (also found after this section) are put into a group associated with anonymous authentication. Other than this one caveat, it’s the same as the no authentication scenario: just run the NETSH statement.
  • Basic authentication: The computer won’t be able to authenticate with basic credentials, so you will need to organise it so that the list of sites used in activation are added to an anonymous authentication group.

So at this point we have identified the key(s) we need and can get on with the job of setting up the KMS host.

  1. Log onto the server that you wish to be your KMS activation host,
  2. Bring up an elevated command prompt,
  3. Run (optional based on the above points): netsh winhttp set proxy proxy-server="proxy.yourDomain.com:8080"
  4. Run: slmgr /ipk <5-part KMS key>
  5. That’s all!
For reference, here are the URLs that the KMS host (or MAK client, for those of you waiting on that section) needs to be able to get to:
 
You can verify the success of the KMS key with the following command: slmgr /dlv
 
This command will display whether or not this server is now enrolled in the KMS volume activation channel. The following is an example of the first couple of lines of output:
 
Software licensing service version: 6.0.6002.18112
Name: Windows Server(R), ServerStandard edition
Description: Windows Operating System – Windows Server(R), VOLUME_KMS_R2_B channel
 
The channel name (VOLUME_KMS_R2_B in the description above) is the crucial part, as this confirms the server is capable of responding to Server 2008 R2 and Windows 7 requests. Further down you will notice another section, for which the following is an example:
 
Key Management Service is enabled on this machine
    Current count: 50
    Listening on Port: 1688
    DNS publishing enabled
    KMS priority: Normal
 
This is a good section if you have to do any troubleshooting, as firstly it tells you that it is actually configured as a KMS host. Importantly, it also tells you what port it is listening on – which can be good to know when checking if the inbound firewall rule exception has been made. (Please don’t tell me you’re one of those strange people who disables the firewall?!)
 
There are also some stats listed, but the only thing that’s important to know about these stats is that until either 25 clients or 5 servers have registered, the KMS host will not even try to contact the Internet to validate their activation requests.
 
Just for reference, if you have successfully set up the KMS host, and you aren’t having dramas with any of the following, then you should find you now have a working KMS activation server:
  • Proxy server authentication.
  • DNS SRV record registration (_VLMCS._tcp record).
  • Windows Firewall blocking inbound traffic for the KMS listening port.
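
A couple of quick checks for the last two points; a sketch only, so substitute your own domain name, and skip the firewall rule if the built-in KMS exception is already enabled:

    # Confirm the KMS host has published its SRV record in DNS (yourDomain.com is a placeholder).
    nslookup -type=srv _vlmcs._tcp.yourDomain.com

    # Open the default KMS listening port inbound if it isn't already.
    netsh advfirewall firewall add rule name="KMS (TCP 1688 in)" dir=in action=allow protocol=TCP localport=1688
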
As an example of a successful client registration, if you run the slmgr /dlv command on a Windows 7 machine, you will see a bit of output that starts with the following:
 
Software licensing service version: 6.1.7600.16385
Name: Windows(R) 7, Enterprise edition
Description: Windows Operating System – Windows(R) 7, VOLUME_KMSCLIENT channel
 
Note the different channel name? This is indicative of what you should be seeing on all of your clients if they’re successfully registering (and once you have passed the initial activation requirements of 5 servers or 25 clients – prior to that the activation status will differ).
 
 
MAK activation:
Okay! This is the simple one!
 
For MAK activation, this couldn’t be any simpler. In terms of the steps required, you have some of the same concerns as the KMS folk in that you need to be sure that each and every computer can talk to the relevant Microsoft URLs required for successful activation (this is one of the benefits of a KMS infrastructure: only the KMS server needs to talk to the Internet!).
 
Assuming the clients can all talk to the Internet, then your process for activation is similar to that for setting up a KMS host:
  1. Log onto the client that requires activation.
  2. Open an elevated command prompt.
  3. Run: slmgr /ipk <5-part MAK key>
  4. Done!
Again, as with the KMS folks, you can also verify your MAK activation success (or failure) by running the slmgr /dlv command.
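
For completeness, here’s a minimal end-to-end example; the /ato step simply forces an immediate online activation attempt rather than waiting for the background task to get around to it, and the key is obviously a placeholder:

    slmgr /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
    slmgr /ato
    slmgr /dlv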
 
Cheers,
Lain

Posted March 15, 2010 by Lain Robertson in Windows Server 2008 R2


IIS7 and UNC-based virtual directories.

Today threw up another one of those wonderful “if you say so” type errors. This time, it was with IIS7 on Windows Server 2008 SP2 (not R2, in this case). If you want to skip the story, head to the end for some tips on troubleshooting the issue (keep in mind though that this is for a specific issue, mainly revolving around apparent access denied and HTTP 500.19 server errors).
 
We house our intranet photos on the server that also houses related data, and one of our team is in the process of drumming up a new web application that he wanted to use the photos in. This led to the fairly common scenario of wanting to create a virtual folder on his web server that pointed to the JPEGs on the external server.
 
Now, in the days of IIS6, this was both simple and intuitive, but woe unto me for trying to get this going in a hurry on IIS7! Strictly speaking, the resolution is very few steps, but as per usual, I was trying to jump into something feet first without having kept myself up to date with yet another component’s changes. As a refresher, under IIS6, all one had to do was create the new virtual directory and utilise “connect as”, and voila, you were done. The “tricksy” (just watched Lord of the Rings yesterday, and I had to find a reason to use that word!) nature of this task in IIS7 is that the UI looks pretty much the same. However, I ran into half a day’s worth of dramas while I tried to work through what the issue was.
 
As part of the diagnostic process, I quickly established via the Security event log that for whatever reason, IIS7 was trying to verify this “connect as” account existed locally before using it to connect to the foreign system. This was borne out by the fact that there were no entries in that server’s event log – not even anonymous ones. That problem was taken care of inside the first five minutes of looking at this issue by simply creating a local account with the same username and credentials. Okay, but we still didn’t have liftoff, Captain. Why not?!
 
At this point, I was still getting a lot of errors, and they all seemed to come back to the good old “Access Denied” root cause, which of course I didn’t buy, because I had verified with a net use command that access wasn’t in fact the issue. At least, it wasn’t an issue with the “connect as” account, and a quick scan of the foreign Security event log confirmed that the computer account – for whatever reason, was trying to authenticate with the foreign machine. That was never going to happen, since the foreign machine is a workgroup machine, unlike the IIS7 server, which is domain joined. The error indicated was HTTP error 500.19.
 
Naturally, I decided that was an oversight on my part and headed straight to the Authentication applet to enable Windows authentication, except I couldn’t get into the applet. After double-clicking it, the status column remained stuck on something to the effect of “refreshing”, much to my annoyance, and trying to open any option brought up a new error that the web.config file couldn’t be found. Suffice to say I was a little bewildered. A web.config for a virtual directory? I don’t think so, Tim.
 
So, jumping to the end of this chain of events, I ended up deleting the virtual directory, going back to the site root and enabling Windows Authentication at that level, then recreating the virtual directory, and lo and behold, it worked exactly as it should have from the outset!
 
 
So, in point form, here’s what I found to be different from IIS6 (an appcmd sketch of the working configuration follows the list):
  • Connect As account needs to exist locally on the IIS7 server if both the IIS7 server and the foreign host do not live in the same domain (or forest, if you want to expand into larger topologies).
  • You may need to enable one of the additional authentication mechanisms before you create the virtual directory to avoid the 500.19 errors the server will throw if it’s trying to track down the web.config file. Personally, I used the Windows Authentication option, but I suspect the others would have their uses depending on what you’re hosting your content on.
  • When filling out the Connect As window, do NOT use the local hostname in front of the account name, as that creates a disassociation between the local version of the Connect As account and the foreign system’s version. This is one point you should pay extra attention to, as I read a number of articles suggesting you should in fact use the hostname. As I said already, do NOT do this under IIS7. So, to use an example, you want to use “JohnDoe” as the username, and not “hostname\JohnDoe”.
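
For what it’s worth, the same configuration can also be scripted with appcmd, which makes the order of operations explicit. A rough sketch only; the site name, UNC path, account and password are all placeholders, and it’s worth verifying the attribute names against your own IIS7 box before relying on it:

    # Enable Windows Authentication at the site root first (avoids the 500.19 described above).
    C:\Windows\System32\inetsrv\appcmd.exe set config "Default Web Site" /section:system.webServer/security/authentication/windowsAuthentication /enabled:true /commit:apphost

    # Create the UNC-backed virtual directory, then set the Connect As account - no hostname prefix.
    C:\Windows\System32\inetsrv\appcmd.exe add vdir /app.name:"Default Web Site/" /path:/photos /physicalPath:\\fileserver\photos
    C:\Windows\System32\inetsrv\appcmd.exe set vdir "Default Web Site/photos" /userName:JohnDoe /password:P@ssw0rd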

Anyway, as I say, the change itself is just as quick to make as it ever was, there’s just a few things to watch out for in terms of new configuration traps.

Cheers,
Lain

Posted February 9, 2010 by Lain Robertson in Windows Server 2008
