
Windows Server 2008 R2, among other changes, brings a new interface for accessing directory services – the Active Directory Web Service (ADWS). It is also available for older systems – Windows Server 2003 and 2008 – as the Active Directory Management Gateway (available as a separate download).


ADWS is used so far by a few Windows Server 2008 R2 components, such as the new AD interface – the Active Directory Administrative Center – and the PowerShell module for AD (yes, this PowerShell module uses the Web Service, not LDAP). This PowerShell module was the cause of an e-mail I got from one of my friends (who is also a customer).

When he tried to use the PowerShell module from a workstation to connect to ADWS on a newly deployed Windows Server 2008 R2 box, he got the following message:

Windows PowerShell
Copyright (C) 2009 Microsoft Corporation. All rights reserved.
WARNING: Error initializing default drive: 'Unable to find a default server
with Active Directory Web Services running.'.
PS C:\Windows>

Now, here comes the ultimate question: how does the client – in this case the PowerShell module – locate an ADWS instance?

The ultimate answer to this question is, as always ... the DC locator. The DC locator is a process which allows clients to locate an optimal domain controller. Optimal in the AD meaning of the word: closest to the client from a network perspective, where the network is represented through sites and subnets in the AD configuration.

A client can also pass additional requirements to the DC locator process, which are used to choose a DC with specific roles required by the client at that moment. This might be a request for a writable DC or for a DC acting as a GC. The domain controller passes such information in the DS_FLAGS structure.

To allow clients to locate DCs with ADWS instances, an additional flag was added to the DS_FLAGS structure. The description of this new flag states:

DS_WS_FLAG, The Active Directory Web Service, as specified in [MS-ADDM], is present on the server.

This information can be used to locate a DC with an ADWS instance when a client specifies the additional DS_WEB_SERVICE_REQUIRED flag in the DC locator request. The same goes for DCs with ADMG installed.
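
If the AD PowerShell module is at hand, you can see this mechanism in action by asking the locator explicitly for a DC running ADWS – a minimal sketch (the domain name is just an example):

Import-Module ActiveDirectory
# Ask the DC locator for a domain controller which advertises the ADWS service
Get-ADDomainController -Discover -Service ADWS -DomainName w2k.pl -ForceDiscover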

This might be the end of this post, but life isn't perfect, and we will often have to deal with mixed environments with W2008R2 and older DCs in the same network. The problem is that 2003/2008 DCs don't understand this new flag. To correct this, an additional hotfix has to be installed: KB969249 (2003) or KB967574 (2008).

If you plan to deploy W2008R2 and use the PowerShell module or other software which relies on ADWS, remember – especially in environments with a large number of DCs – to deploy enough ADWS instances to handle client traffic and to allow the DC locator to find the DCs which host the service. This way you won't be surprised when your newly created PowerShell script fails to locate an ADWS instance.

Enough for today ... but we will get back to ADWS soon...

This post is the beginning of a new troubleshooting saga – Everybody lies ... except network traffic. Chapter 1: the case of the mysterious DHCP server.

In my daily job as an IT consultant I sometimes have to deal with problems. Different kinds of problems – easy, hard ... it depends on the case. A few days before my current week off (SPRING!!!) I got a problem which was described as follows:

  • The workstation is working in a network segment with a DHCP server. The workstation was able to acquire a correct IP address, but ...
  • Later, after acquiring its network configuration from the DHCP server, the workstation configuration was altered with a different set of DNS servers, DNS name suffix etc. This configuration suggested another network segment, connected to this one through some routers.

Everything worked, except for some problems with accessing network resources with different addresses in the two network segments (but similar names – similar if you eliminate the DNS suffix from the equation).

After some speculation – well ... it is Windows 7, so maybe it is DirectAccess, or IPv6 in general, or maybe it is just a little gnome altering the configuration – it was time for the ultimate question and answer: gathering network traffic from the affected workstation.

Network traffic never lies

The main reason DHCP clients exist on a host is to acquire IP configuration. To fulfill this goal DHCP clients use a series of broadcast packets to request IP configuration. And the workstation which was the subject of this problem was doing exactly what we expected. In the network traffic, a chat between the client and the DHCP server was visible in the form of the following request exchange (a quick way to reproduce it yourself is shown after the list):

  • DHCP Discover –> DHCP Offer
  • DHCP Request –> DHCP Ack.
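
If you want to reproduce this exchange on a test workstation while a capture is running, forcing a full lease renewal is enough (standard commands, shown here from an elevated prompt):

# Drop the current lease, then request a new one – this triggers the full
# Discover/Offer/Request/Ack sequence visible in the capture
ipconfig /release
ipconfig /renew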

Our client, which was exchanging packets with a DHCP server in the local network segment (Server A), as a result acquired an IP address and some configuration.


But after the DHCP client has acquired an IP address, it has the right to request some additional information about the network (if such information is available from the server). To do this the client sends another packet, called DHCP Inform. As a response the client gets another DHCP Ack message, which contains the requested information. And this is exactly what our client did.


However, instead of getting the DHCP Ack from the DHCP server on the local network segment, the answer came from Server B, which was located in another network segment and was configured with slightly different options. Because the network configuration (router) allowed such messages to cross network segment boundaries, the exchange completed successfully – and this was the mystery to be solved.

Conclusions ...

We have at least two conclusions from this story:

  1. If you have two DHCP servers serving clients from one network segment for the same IP scope, make sure they are configured in the same way.
  2. (This will be common to the entire troubleshooting saga – I hope the next stories will follow shortly.) If there is a problem in a network environment, in most cases the easiest and quickest way to solve it is to take a look at the network traffic.

P.S.#1 If anyone is interested in the DHCP protocol flow – KB 169289 and RFC 2131 are there for you.

P.S. #2 Just BTW – in another case I was working on lately, I learned about a small change in the behavior of the DHCP server in Windows Server 2008 R2 – KB 2005980 describes it.

P.S. #3 A question was asked on my Polish blog, where I published the same post – what about the ipconfig output in this case? Well ... it showed Server A as the one which served the client.

I'm on a train taking me to Poznań for a meeting with a customer, which gives me an opportunity to finally write something. For a start I'll share a quick tip about ADU&C and the delegation tab, as an introduction to further posts.

If you want to grant an account the right to delegate user credentials in the Kerberos authentication scheme, one way of doing this is through ADU&C. In the object properties there is a Delegation tab which allows you to configure the appropriate delegation settings. But what if it is not there? (the tab, not the object)

 

If the Delegation tab is not visible, it simply means that no SPNs have been configured for the given account. And the deal is simple ... no SPNs, no delegation scenarios. So how does one configure the delegation settings in this case? Just add an SPN value to the object and the Delegation tab will appear in the object properties!
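
For example, registering an SPN with setspn is enough to make the tab appear (the account and service names below are made up for illustration):

# Register an HTTP SPN on a service account – once at least one SPN
# exists, ADU&C starts showing the Delegation tab for that account
setspn -A HTTP/app01.w2k.pl W2K\svc-webapp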


Magic is done. Careful readers already know that a Kerberos post is on the way ... to Poznań ;).

P.S. The train was ~45 minutes late on arrival.


... on my Polish blog a question was asked on Sunday evening, whether I could provide a description of the SYSVOL location process and the pitfalls which might wait there. I said 'Why not' ... and then you have to keep your promise. So today it will be about the SYSVOL volume. Recently it has been a common topic for me, as I gave a talk for local communities in Warsaw about GPO mechanics, which also touches on it. If you can read Polish and you are interested, the slide deck is available on my Polish blog.

So ... regarding SYSVOL: everyone can see that it is there and it does its job ... until something bad happens. That's the short version. Its primary goal is to serve files to domain clients from a DC, in particular GPO templates, which are the file-based part of a GPO. Remember: a GPO consists of two parts – the GP container (GPC) in the directory and the GP template (GPT) in SYSVOL. Plus some extras like logon scripts etc. If there is no SYSVOL, or it is not up to date because of FRS problems (sounds familiar?), then no GPOs – or outdated ones – are processed on the client side (actually, if there is no SYSVOL share, a DC will not do its job at all).


From a technical point of view, SYSVOL is just a domain-based DFS namespace whose content is replicated with FRS on pre-Windows 2008 operating systems, and with DFS-R on Windows 2008 and higher if the migration was done (if not ... what are you waiting for????). In fact, SYSVOL content can be replicated in any way, as long as you know how to keep it in sync (don't tell the PSS guys I wrote this ;) ).

SYSVOL is present on every DC and it is a DFS namespace, so ... how can you tell which replica our client is using at a given time??? And here is the problem we will be talking about today.

Theory …

As I have written a few times on this blog (and Jorge has also written about it + he will give a talk on this at the upcoming TEC 2010 – if you will be there, don't miss it; I will miss it ;) ), a client locates DCs using DNS records and information about sites and subnets, in what is called the DC location process. This way, a DS client can (or at least should) locate the closest (in terms of AD configuration) DC which can handle its requests. The problem is that this is not the case with SYSVOL, as the SYSVOL location process does not follow the same path as the DC location process. Many AD administrators have learned this the painful way, while trying to figure out why clients were using SYSVOL replicas in some small village in the north of whatever country it was.

A directory service client receives a list of SYSVOL replicas, divided into two lists:

  • SYSVOL replicas in the same site
  • SYSVOL replicas outside of the client's site.

By default, both lists are in random order and do not reflect things like costs or the locations of the DCs, beyond the obvious information about local DCs. This behavior does not ensure that clients will use the same DC for logon and for SYSVOL within the same site, when there are multiple DCs in this site (the word random is key).

To ensure that the DC which handles the logon request will also be the one used for SYSVOL, some tweaks have to be performed. These tweaks (and an update) are described in KB831201. After applying them, the DC which handles the request will return its own name as the first DC on the list of SYSVOL replicas returned to the client.

However, the problem remains if a client, for whatever reason, is using a SYSVOL replica outside of its site. The second list – replicas located outside of the client's site – is not ordered with the cost of reaching those sites taken into consideration; it is random. So it might happen that the first DC on the list is in some place in the far north (or south, if you prefer) of the globe, with slow WAN links in between, affecting clients in terms of performance. It is also a common case I observe in customer networks that clients are not able to access such a replica at all, because of firewall policies which prohibit network traffic between branches.

How to deal with this? It can easily be resolved with additional configuration on the DCs, which enables calculation of the SYSVOL replica list with the cost of the connection between client and replica taken into consideration. This option is available for Windows Server 2003-based DCs by default (there is also a fix described in KB823362 for Windows 2000 Server – remember, support for 2K ends in July this year) and it is called SiteCostedReferrals. To enable this option, configure this registry value:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Dfs\Parameters
Value Name: SiteCostedReferrals
Data Type: REG_DWORD
Value: 1
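
The same change can be scripted – a minimal sketch from an elevated PowerShell prompt on the DC (from memory, the DFS service has to be restarted to pick the change up):

# Enable cost-based ordering of DFS (and thus SYSVOL) referrals on this DC
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Dfs\Parameters' -Name SiteCostedReferrals -Type DWord -Value 1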

However, to make this work you have to provide additional information through configuration at the directory level. This information – which sites can be reached by the client – is required for the directory service to calculate possible routes. To provide it we can enable the Bridge all site links (BASL) option; however, this might not be the preferred way. Why? Because it will also affect the replication topology calculation process from the KCC standpoint. If we want to enable SYSVOL cost-based replica list calculation without disturbing the KCC with BASL information, we can configure the KCC to ignore the site bridging information during its calculations, while the bridging remains visible for the SYSVOL replica cost calculation.

In theory you can think of this as maintaining site link bridges manually as an alternative to BASL; however, I don't know if it would work in a real-world scenario (though with the right people following the right process ... it might).

With the information provided above, I hope life with SYSVOL is now much simpler for those who were not aware of it so far.

Toolkit …

Here is some short information about tools that can be used in the information-gathering or troubleshooting process. The basic tool to start with is dfsutil. Dfsutil allows you to see the list of replicas from the client's point of view and to see which one is active at a given point in time. Two switches to remember:

DFSUTIL /SPCINFO
DFSUTIL /PKTINFO

In Windows Server 2008, these switches have changed and have become:

DFSUTIL CACHE DOMAIN
DFSUTIL CACHE REFERRAL

To have access to DFSUTIL in Windows Server 2008 and later, you have to install the DFS management tools through Features.
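
A quick way to see which SYSVOL replica a client has been referred to – a sketch using the pre-2008 switches (substitute the CACHE variants on newer systems):

# Trigger SYSVOL access (Group Policy processing), then dump the client's referral cache
gpupdate /target:computer
dfsutil /pktinfo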

And that's all for now ... at least about SYSVOL.

... Kim Cameron has linked to and quoted my previous post on browser identification based on characteristics which are publicly available. There is an EFF project which focuses on checking how unique your browser is compared to others, based on public information.

As it turned out, Kim's browser has an even higher score (19.29) in this test than my original score (18.73). The higher the score, the more unique the browser is, and thus the easier it is for web sites to identify it as unique without my consent.

As Kim said about himself – and the same can be said about me – '(...) It's not that I really think of myself as super competitive (...)' – but I couldn't resist ;) ...

 

It appears that what makes my browser so unique is the set of plug-ins installed in it.

 

It looks like there are not a lot of people with the QuickTime, iTunes and Windows Live plug-ins installed together on the same machine.

As Kim summarized his post:

I have to disagree.  It is already a problem.  A big problem.  These outcomes weren’t at all obvious in the early days of the browser.  But today the writing is on the wall and needs to be addressed.  It’s a matter right at the core of delivering on a trustworthy computing infrastructure.    We need to evolve the world’s browsers to employ minimal disclosure, releasing only what is necessary, and never providing a fingerprint without the user’s consent.

And now I agree with that completely ... especially since my browser is so unique.

Just to calm down my friends reading this blog … no, I haven't developed a personal relationship with my browser. However, like many of us, I've personalized it and I feel comfortable with it now. All the plugins, configuration etc. It is our daily tool now, so probably all of us have done something to customize it.

But is our browser attached only to us, or does it flirt (however strange that may sound) with others on the network???


Through Kim Cameron’s blog I’ve found the Panopticlick project page, started by the Electronic Frontier Foundation (EFF). This project tries to determine how easy it is to identify a person on the Internet based on the characteristics of their main tool … the web browser. The question is how easy it is to distinguish you from other Internet users based on elements like your browser user agent, fonts, screen resolution and other data which any web page can access from the browser.

Let's see – this is an example of the check performed on my browser:

So my browser has a unique footprint among almost 400k other browsers tested. In other words – yes, my browser is cheating on me, and it allows web sites to track me without my knowledge … definitely not nice.

Another example which shows that this approach might work comes from information about OpenOffice market share. The method used to identify OO users was based on checking, through the browser, the fonts installed on the system. OO installs unique fonts – which can be used as an indicator that OO is present on a system – without any user interaction at all. Scary … ???

Kim Cameron also posted another example:

(…) The authors claim the groups in all major social networks are represented through URLs, so history stealing can be translated into “group membership stealing”.  This brings us to the core of this new work.  The authors have developed a model for the identification characteristics of group memberships – a model that will outlast this particular attack, as dramatic as it is. (…)

So the browser can be used to identify a user on the Internet, or to harvest information without the user's consent. Will it really become a problem, and will it be addressed in some way in browsers in the future? This question has to be answered by the people responsible for browser development. We will see …

FIM 2010 is still being cooked in the Redmond area, but in the meantime we got a brand new ILM 2007 Service Pack 1 package, which has just been published on the Downloads web site. ILM 2007 SP1 is a cumulative hotfix package, but it also brings support for provisioning objects to Exchange 2010.

 

This is nice progress if you remember how long we had to wait for Exchange 2007 to be supported with ILM … way to go, ILM team.

Information on how to use the ILM AD MA to provision objects to Exchange 2010 is published on TechNet in the Deploy Exchange 2010 in a Cross-Forest Topology article.

With Exchange 2010 support we are also getting a new code example and description, "Prepare for Online Mailbox Move". A quote from the download description:

Microsoft Exchange Server 2010 supports online mailbox migration from a remote Exchange Server 2010, Exchange Server 2007, or Exchange Server 2003 forest to your Exchange 2010 forest. Prior to performing the online mailbox migration, mail-enabled users with a predefined list of attributes must be present in the target Exchange 2010 forest where the mailbox will be moved to. You can use either the sample code or the sample script to help with your online mailbox migration:

Enjoy your reading …

Where there is a question, there is an answer (at least in most cases). This time the question was "How to check what schema extensions were introduced to a forest?" and it was asked on ActiveDir.org. There was even more than one answer … apparently some consultants are watching this list :).

So how can we capture what has changed in the schema since it was established together with our forest?


One option is the Schema Analyzer tool which comes with AD LDS (ADAM), as described on the Ask DS Team blog. If we have an AD LDS instance and an LDIF file with the schema we want to analyze, it will allow us to get the difference between the target and base schema. Easy, but …

  • it requires access to an AD LDS instance and an LDIF file with the schema
  • sometimes it is a bit of an overhead to get an LDIF file with the difference, and we need something easier.

So, the next approach – also not perfect, but a bit simpler and in some cases good enough. Just take any LDAP query tool (adfind.exe, for example) and query the whole schema, including whenCreated in the output. This attribute is replicated among all DCs and allows us to track the creation date of an object. A simple example:

adfind -schema -f "(|(objectClass=attributeSchema)(objectClass=classSchema))" ldapDisplayName whenCreated -adcsv

Now redirect the output to a file … open it in Excel, sort it on the whenCreated column, and voilà …
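
As a side note – on a machine with the Windows Server 2008 R2 AD PowerShell module, a similar list can be produced and sorted without Excel. Just a sketch, not what was used at the time:

Import-Module ActiveDirectory
# Dump all attribute and class definitions with their creation dates, oldest first
$schemaNC = (Get-ADRootDSE).schemaNamingContext
Get-ADObject -SearchBase $schemaNC -LDAPFilter "(|(objectClass=attributeSchema)(objectClass=classSchema))" -Properties whenCreated |
    Sort-Object whenCreated | Select-Object Name, whenCreated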

Of course it is not perfect. It still requires a tool like Excel, and it gives you only an overview of when attributes were created. And what about modifications?

In cases where we need such information, the SchemaDiff.cmd script created by Dean Wells (included in the archive) comes in handy. This tool is based on querying replication metadata, and it will give you information about new and updated schema objects. Let's see how it works:

C:\Temp>SchemaDiff.cmd w2k.pl

SchemaDiff 1.1 / Dean Wells (dwells@msetechnology.com) - March 2006

STATUS - Working [review title bar for progression] ...

       - Forest/schema creation timestamp: 2009-08-23 @ 22:51:06
       - base-schema has been MODIFIED since Forest creation
       - counting classSchema and attributeSchema instances: 1438
       - querying schema ...

*MOD: CN=Schema,CN=Configuration,DC=w2k,DC=pl
       - schemaInfo........................ {modified post-instantiation}

*MOD: CN=User,CN=Schema,CN=Configuration,DC=w2k,DC=pl
       - auxiliaryClass.................... {modified post-instantiation}

+NEW: CN=AstContext,CN=Schema,CN=Configuration,DC=w2k,DC=pl
+NEW: CN=AstExtension,CN=Schema,CN=Configuration,DC=w2k,DC=pl

(…)

Done - 57 schema object(s) added, 4 schema object(s) modified
       in Forest "DC=w2k,DC=pl"

Quick, nice and easy … and no additional tools required (I don't count repadmin.exe as an additional tool in an AD environment).

In general, the best way to answer such questions is to have a schema governance process implemented in your environment. It doesn't have to be anything complicated; sometimes a simple file with some procedures is enough … or a WSS site in a more advanced case. The key is to stick to it and follow it. Think about it …

It is common knowledge that in an AD environment a client (like a workstation) will always (or at least should) try to connect to the most optimal domain controller – optimal from the standpoint of the network and the AD infrastructure configuration. This process is based on DNS queries and information stored in the AD configuration, and in the perfect case it should lead to a situation where the client has contacted the most optimal DC at a given moment.

So we have all subnets defined and connected to appropriate sites, with DCs placed in these sites or covering them in some other way. And suddenly some clients from some small location start to use random DCs instead of the one we designated for them in our bright and shiny configuration. At this point the sysadmin enters his favorite mode … troubleshooting.

 


The AD configuration has been extensively reviewed and checked, the network checked … the event logs are not giving us a clue … what next (besides calling in the cavalry of some sort :) )?

In such a case we have at least one additional troubleshooting mechanism which might be extremely useful: enabling debug logging for the DC locator process. In every Windows version the netlogon service comes with the ability to log debug information. All that has to be done is to enable this mechanism through a registry change and set some flags … these flags are described in KB 109626, Enabling debug logging for the Net Logon service.
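
The flags can also be set without editing the registry by hand – nltest will do it for you (0x2080ffff is the commonly recommended verbose mask; setting it back to 0x0 turns logging off):

# Enable verbose netlogon debug logging for DC locator troubleshooting
nltest /dbflag:0x2080ffff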

When this is done, the netlogon service will start to log diagnostic data to %windir%\debug\netlogon.log. This information can be very useful in the troubleshooting process, or at least should give us an idea of what is going on. A sample netlogon.log fragment (slightly modified for better readability) from my lab environment is presented below.

[SITE] Setting site name to '(null)'
[SESSION] \Device\NetBT_Tcpip_{33941FFA-DFED-4744-BF9A-972228BC6FF0}: Transport Added (192.168.1.10)
[SESSION] Winsock Addrs: 192.168.1.10 (1) List used to be empty.
[SESSION] V6 Winsock Addrs: (0)
[CRITICAL] Address list changed since last boot. (Forget DynamicSiteName.)
[SITE] Setting site name to '(null)'
[DNS] Set DnsForestName to: w2k.pl
[DOMAIN] W2K: Adding new domain
[DOMAIN] Setting our computer name to wss wss
[DOMAIN] Setting Netbios domain name to W2K
[DOMAIN] Setting DNS domain name to w2k.pl.
[DOMAIN] Setting Domain GUID to ce28b6f7-a26a-4e0f-9f39-0e63e525493e
[MISC] Eventlog: 5516 (1) "wss" "W2K"
[INIT] Replacing trusted domain list with one for newly joined W2K domain.
[SITE] Setting site name to '(null)'
[LOGON] NlSetForestTrustList: New trusted domain list:
[LOGON]     0: W2K w2k.pl (NT 5) (Forest Tree Root) (Primary Domain) (Native)
[LOGON]        Dom Guid: ce28b6f7-a26a-4e0f-9f39-0e63e525493e
[LOGON]        Dom Sid: S-1-5-21-1855823386-3643518527-1754427229
[INIT] Starting RPC server.
[SESSION] W2K: NlSessionSetup: Try Session setup
[SESSION] W2K: NlDiscoverDc: Start Synchronous Discovery
[MISC] NetpDcInitializeContext: DSGETDC_VALID_FLAGS is c00ffff1
[INIT] Join DC: \\resfs.w2k.pl, Flags: 0xe00013fd
[MISC] NetpDcInitializeContext: DSGETDC_VALID_FLAGS is c00ffff1
[MAILSLOT] NetpDcPingListIp: w2k.pl.: Sent UDP ping to 192.168.1.1
[MISC] NlPingDcNameWithContext: Sent 1/1 ldap pings to resfs.w2k.pl
[MISC] NlPingDcNameWithContext: resfs.w2k.pl responded over IP.
[MISC] W2K: NlPingDcName: W2K: w2k.pl.: Caching pinged DC info for resfs.w2k.pl
[INIT] Join DC cached successfully
[SITE] Setting site name to 'Default-First-Site-Name'
[MISC] NetpDcGetName: w2k.pl. using cached information
[PERF] NlAllocateClientSession: New Perf Instance (001E6688): "\\resfs.w2k.pl"
    ClientSession: 00237D58
[SESSION] W2K: NlDiscoverDc: Found DC \\resfs.w2k.pl
[SESSION] W2K: NlSetStatusClientSession: Set connection status to 0
[DOMAIN] Setting LSA NetbiosDomain: W2K DnsDomain: w2k.pl. DnsTree: w2k.pl. DomainGuid:ce28b6f7-a26a-4e0f-9f39-0e63e525493e
[LOGON] NlSetForestTrustList: New trusted domain list:
[LOGON]     0: W2K w2k.pl (NT 5) (Forest Tree Root) (Primary Domain) (Native)
[LOGON]        Dom Guid: ce28b6f7-a26a-4e0f-9f39-0e63e525493e
[LOGON]        Dom Sid: S-1-5-21-1855823386-3643518527-1754427229
[SESSION] W2K: NlSetStatusClientSession: Set connection status to 0
[SESSION] W2K: NlSessionSetup: Session setup Succeeded
[INIT] Started successfully

Does it look useful??? I think so … happy troubleshooting, and don't forget that Network Monitor or Wireshark will tell you the truth about what's going on on the wire. That is the ultimate troubleshooting tool.

On the topic of ADFS, Laura once said: "If your ADFS is broken, it's PKI. If it's not PKI, you've got a typo. If it's not a typo, it's PKI." Very true … in different aspects of PKI.

Because of the Christmas break I have a bit more free time than usual – or at least the free time available after I put my son to sleep. I decided to take a look at the just-released ADFS v2 RC bits. And I know it is probably my own fault, but I managed to produce a little problem during the setup procedure which I think might affect others as well, so as always … time for a blog post.

 

ADFSv2 and PKI requirements …

For those who have not gone through the ADFS v2 setup procedure, a quick outline of its PKI requirements.

After ADFS v2 is installed as a service on a machine, the next step is to configure it as a federation server, either standalone or as part of a farm. Part of this setup is to provide information about the certificate which will be used as the token signing certificate and the CardSpace signing certificate. This certificate has to be present in the local computer store for ADFS setup to be able to pick it up.
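
A quick way to verify what actually sits in that store is the PowerShell certificate provider:

# List certificates in the Local Computer \ Personal store, with thumbprints
Get-ChildItem Cert:\LocalMachine\My | Format-List Subject, Thumbprint, NotAfter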

 

My setup …

To set up ADFS I created a single machine on which I loaded AD, a certification authority and an IIS server (not best practice, but in my lab I have to take care of available RAM and spindles, so … fewer VMs is better).

To get a certificate for my IIS server, and later for the ADFS service, I created a cert request using the IIS console; based on this request I issued a certificate from the CA, installed it on IIS and – *this is important* – set this certificate to be used in the HTTPS binding on my IIS machine.

The ADFS service setup and the subsequent procedure of configuring it as a federation server went smoothly, and everything worked as expected. When I was asked which certificate to use, I just chose the certificate I had created earlier and configured for my IIS machine.

 

So where does the problem begin?

Later I decided that I wanted to set up my ADFS server using a FQDN, to avoid problems with SPN configuration etc. (BTW – I believe that after PKI and typos, SPNs will be the next common issue with ADFS v2 setups … I don't know why … maybe it is called experience).

So I added a DNS record for the new name, did all the IIS work, and among other things revoked the previous certificate and removed it from the IIS configuration (just deleted it using the IIS console); then I issued a new request, got a new cert ... installed it … done. Almost.

The next step was to change the ADFS configuration to:

  • use the new FQDN (easy)
  • use a different certificate (should be easy).

The first step was OK, but when I then wanted to change the token signing certificate in ADFS I got an error message which said something similar to:

The SSL certificate with thumbprint 42161585196B80292A675BA95D54429D1E1CF7CE is configured in IIS but could not be found in the Local Computer Personal certificate store.  SSL Certificates configured in IIS must also be present in the Local Computer Personal certificate store in order for AD FS 2.0 to use them.

The thumbprint referred to the certificate which I had previously revoked and removed. I checked things a few times … even though I was asked to select a new certificate for ADFS to use, and I was able to choose the new certificate, every attempt ended in a way similar to that described above.

 

Cause and solution …

After thinking about it for a while I checked which certificate was assigned to the HTTPS binding in IIS. And it turned out that there was no certificate … at least none was shown in the UI. But apparently a reference to the previously configured certificate was still held somewhere in the IIS configuration, and this was causing the problem with the ADFS configuration.
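
A quick way to see what HTTP.SYS itself still has bound – which is where such stale references tend to live – is netsh (a sketch; this is not necessarily the exact step I used back then):

# Show the SSL certificate bindings held by HTTP.SYS; entries which no longer
# appear in the IIS console can show up here – compare the thumbprints
netsh http show sslcert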

Once I selected the new certificate to be used in the HTTPS binding in IIS, I was able to change the signing certificate for ADFS and finish my setup.

So, as it turned out:

  • deleting a certificate in the IIS setup and replacing it with a new one is not enough. Remember about the *BINDINGS*.
  • error messages are right, but they do not always point you directly to the right place.
  • "If your ADFS is broken, it's PKI. If it's not PKI, you've got a typo. If it's not a typo, it's PKI" … with the addition of SPNs :).

 

Hope this will help at least one person in the future ;).

 

Update: through Laura's blog I found Juan Pablo's entry on this problem, with a slightly different solution. Take a look at it here. I don't know if the cause was the same, but it will probably help even if it is not (I don't have the same lab right now to check it). Worth knowing.


Kerberos has been around in the Windows operating system for about 10 years, and it still causes problems; for many people it is like black magic voodoo. In most cases organizations and the people in them are not even aware that it is working, until a problem surfaces with some application not working, or reports not being displayed on a MOSS web page …

… and when a problem occurs, troubleshooting starts. To make this process a bit easier, here is a short explanation of the issue with Kerberos, IE and services running on non-standard ports.

 


This post is sponsored by the letter A, as in Architect, because one of our Architects inspired me to write it with his ranting about this problem.

The issue which is the subject of this post is not related to the Kerberos protocol itself, but to Internet Explorer and how IE handles such requests by default.

 

Never ending story,  SPNs …

A short reminder of what an SPN is … when a client application tries to get access to a resource using Kerberos authentication, at some point it requests a ticket from the Ticket Granting Service (TGS). To specify the service to which it is requesting access, the client includes a Service Principal Name (SPN) in the TGS request. The SPN is then used by the KDC to find the account which is related to this service and to prepare a ticket for it. This, in short, is how it works …
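
You can see the SPNs of the tickets a client has actually obtained with klist (built into Windows 7 / 2008 R2; a separate resource kit download on older systems):

# Show the Kerberos tickets cached in the current logon session,
# including the SPN each ticket was issued for
klist tickets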

SPNs are just string values of the servicePrincipalName attribute, in a form which consists of a service prefix, a host name and, optionally, a port number.

For example, for a standard HTTP service running on the host www.w2k.pl, the SPN would be HTTP/www.w2k.pl.

As I mentioned above, there is also an optional element of the SPN which can be used to specify the port on which the service is running. For our HTTP service running on port 8080, the SPN containing the port number will look like this: HTTP/www.w2k.pl:8080.

Simple … and it is helpful if we have services running on different ports under different accounts – like application pools running under separate accounts, associated with web sites on two different ports.

 

And here comes Internet Explorer …

The problem with Internet Explorer is that when it is used as the client application to request access to a Kerberos-enabled service on a non-standard port, by default it will not include the port number in the SPN sent in the TGS request. In such a case the network traffic capture will look somewhat like this:

As we can see in this traffic, IE is trying to request access to a web site running on port 8080, but in the TGS request it does not expose this information: instead of HTTP/lhr2dc01.w2k.pl:8080 it sends the request with HTTP/lhr2dc01.w2k.pl as the SPN value.

This behavior was first fixed for IE 6 with KB 908209. For IE6 it required the fix to be installed and an additional registry entry to be made.

The article does not mention this, but the same behavior is present in IE7 and IE8. There, no fix has to be installed, but the new behavior still has to be enabled through the same registry entry specified in the KB mentioned above.
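
If I remember the KB correctly, the entry is a feature control key along the following lines – verify the exact names against KB 908209 before relying on this sketch:

# Create the feature control key if it does not exist yet ...
New-Item -Path 'HKLM:\SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_INCLUDE_PORT_IN_SPN_KB908209' -Force | Out-Null
# ... and opt iexplore.exe in, so the port number is included in the SPN
Set-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_INCLUDE_PORT_IN_SPN_KB908209' -Name 'iexplore.exe' -Type DWord -Value 1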

With this done, the same exchange in the network traffic looks as presented below:

As can be seen in this traffic analysis, IE requests access to the web site with the port specified in the SPN, and this allows authentication to complete in this scenario.

 

When this is useful  …

"Why bother???" This is required in scenarios where we have multiple services running on a single host, on different ports and under different security accounts. A good example is multiple application pools on a single IIS machine.

Probably anyone who deploys MOSS sites with multiple accounts will come across this scenario and will have to deal with it.

 

Why not make it default …

The question is … why is this not enabled by default in IE 7 and 8? The problem was fixed for IE6, so for the later versions it could have been included in the default configuration.

I don't know the official answer, but the first thing which crosses my mind is backward compatibility (you can call it the IE6 curse if you want :) ). Because IE6 worked this way, many applications were configured to rely on it; turning the new behavior on by default in later versions would break all these applications.

IE6 did not specify a port in the SPN request, so if there was a suitable account with only one SPN, registered without a port, and another service was running on the same host on a different port but under the same service account, it just worked.

If you enable this behavior, applications running on different ports may break … registering an additional SPN will fix it, of course, but this requires some planning up front or quick troubleshooting (a basic level of network traffic analysis required).

What I would like to see is a configuration option which would enable this behavior through GPO … feedback given :).

One of my friends, a PFE, asked me a question regarding the userPassword attribute in the directory, related to some behavior he was observing in a customer environment. We had a little chat about it, and then I thought that others may have the same questions as well, so … here's a topic for a blog post.

The behavior my friend observed was that, after some operations performed in the environment, the customer noticed that on some of the affected objects this attribute contained the user's password in clear text … now I can hear the screams of all the security guys :) … Yes, clear text and password have certain connotations ... in most cases negative ones.


Of course, the fact that this password was there didn't mean it was available for anyone willing to read it … ACLs still apply in the directory … but the fact remained: IT WAS THERE.

Matched DNs:
Getting 1 entries:
>> Dn: CN=jan Kowalski,OU=DRLab,DC=w2k,DC=pl
    4> objectClass: top; person; organizationalPerson; user;
    1> cn: jan Kowalski;
    1> sn: Kowalski;
    1> userPassword: P@ssw0rd!;

Whatever you think at this point, this is not a bug, and there is no point in calling the MSFT 112 number (if such a thing exists at all :) ). It is expected, and it is a result of the dual behavior of the userPassword attribute in AD.

 

 

 

userPassword …

userPassword is an attribute which can act differently when written or read, depending on the directory configuration. Depending on the directory settings it can be treated as:

  • an ordinary Unicode attribute, which can be written and read like any other Unicode attribute in the directory
  • a shortcut to the user's password, which allows a password change operation to be performed over LDAP.

In the first case – when the domain is below the Windows 2003 functional level, or is at this level but the specific value in dsHeuristics is not set – this attribute is just a Unicode attribute. We can write it and read it … let's try:

admod -b "CN=jan Kowalski,OU=DRLab,DC=w2k,DC=pl" userPassword::P@ssword!!1

AdMod V01.10.00cpp Joe Richards (joe@joeware.net) February 2007

DN Count: 1
Using server: w2003r2base.w2k.pl:389
Directory: Windows Server 2003

Modifying specified objects...
   DN: CN=jan Kowalski,OU=DRLab,DC=w2k,DC=pl...

The command completed successfully

So we could modify this attribute … now let try to read it:

adfind -b "CN=jan Kowalski,OU=DRLab,DC=w2k,DC=pl" -s base userPassword

AdFind V01.40.00cpp Joe Richards (joe@joeware.net) February 2009

Using server: w2003r2base.w2k.pl:389
Directory: Windows Server 2003

dn:CN=jan Kowalski,OU=DRLab,DC=w2k,DC=pl
>userPassword: 5040 7373 776F 7264 2121 31

1 Objects returned

Success! So apparently we can write and read this attribute, and if you conduct this test on your own you will see that it does not affect the user's password in any way. We have just altered some text in a directory attribute.

However, the rules of the game change if we set the 9th character of dsHeuristics to 1 (in fact, according to the documentation, any character other than 0 or 2 should work): writes to this attribute will behave differently. After this modification the userPassword attribute is write-only and we can't read it anymore. But it will allow us to modify the user's password. Let's see …

First, the dsHeuristics modification:

admod -b "CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuratio
n,DC=w2k,DC=pl" dsHeuristics::000000001

AdMod V01.10.00cpp Joe Richards (joe@joeware.net) February 2007

DN Count: 1
Using server: w2003r2base.w2k.pl:389
Directory: Windows Server 2003

Modifying specified objects...
   DN: CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=w2k,DC
=pl...

The command completed successfully

Done … now let's try the same modification as before:

admod -b "CN=jan Kowalski,OU=DRLab,DC=w2k,DC=pl" userPassword::P@ssword!!1

AdMod V01.10.00cpp Joe Richards (joe@joeware.net) February 2007

DN Count: 1
Using server: w2003r2base.w2k.pl:389
Directory: Windows Server 2003

Modifying specified objects...
   DN: CN=jan Kowalski,OU=DRLab,DC=w2k,DC=pl...: [w2003r2base.w2k.pl] Error 0x35
(53) - Unwilling To Perform

Wow … an error ... but why? We've just tried to modify the user's password over LDAP, and in AD this is only allowed over an SSL connection, which was not used in this case. So one more try, this time using LDAPS:

admod -b "CN=jan Kowalski,OU=DRLab,DC=w2k,DC=pl" userPassword::P@ssword!!1 -ssl -h w2003r2base.w2k.pl:636

AdMod V01.10.00cpp Joe Richards (joe@joeware.net) February 2007

DN Count: 1
Using server: w2003r2base.w2k.pl:636
Directory: Windows Server 2003

Modifying specified objects...
   DN: CN=jan Kowalski,OU=DRLab,DC=w2k,DC=pl...

The command completed successfully

Now it succeeded; time for the read test:

adfind -b "CN=jan Kowalski,OU=DRLab,DC=w2k,DC=pl" -s base userPassword

AdFind V01.40.00cpp Joe Richards (joe@joeware.net) February 2009

Using server: w2003r2base.w2k.pl:389
Directory: Windows Server 2003

dn:CN=jan Kowalski,OU=DRLab,DC=w2k,DC=pl

1 Objects returned

Nothing. This means that we can use the userPassword attribute to modify the user's password, but of course we can't read it afterwards … which is rather expected.

The problem …

The actual topic which started this conversation was the behavior of the KTPASS tool observed in the customer environment (KTPASS is a tool which creates keytab files –> keytabs are used on Unix boxes to allow Kerberos authentication against AD … in short).

So … in cases where KTPASS was used for an account, and no modification to dsHeuristics had been made, the password set for the account with KTPASS was readable over LDAP from the corresponding directory object. Apparently KTPASS tries to set the password using LDAP, which leaves it in this attribute. A quick test shows that this is the case. Let's generate a new keytab for an account and specify a password:

ktpass -princ HOST/ubuntu.w2k.pl@W2K.PL -mapuser ubuntu$@W2K.PL -ptype KRB5_NT_SRV_HST -mapop set -pass P@ssw0rd1 -out ubuntu.keytab

(…)

Reset UBUNTU$'s password [y/n]?  y
Key created.
Output keytab to ubuntu.keytab:
Keytab version: 0x502
keysize 60 HOST/ubuntu.w2k.pl@W2K.PL ptype 3 (KRB5_NT_SRV_HST) vno 2 etype 0x17
(RC4-HMAC) keylength 16 (0xae974876d974abd805a989ebead86846)

 

and then use ADFIND:

adfind -b "CN=ubuntu,OU=DRLab,DC=w2k,DC=pl" -s base userPassword

AdFind V01.40.00cpp Joe Richards (joe@joeware.net) February 2009

Using server: w2003r2base.w2k.pl:389
Directory: Windows Server 2003

dn:CN=ubuntu,OU=DRLab,DC=w2k,DC=pl
>userPassword: 5040 7373 7730 7264 31

We can see that userPassword got populated, and if you check its value, it is the password specified with KTPASS. The same will happen with any other tool which tries to use LDAP to change or reset a user's password in such a setup.

If we modify this behavior by setting the value in dsHeuristics, it will change the directory behavior, and userPassword will contain no trace of password data in readable form.

Solution …

I think there is no need for a special solution, as we don't really have a problem. The best way is to know how it works and, if we are concerned about it, to use this knowledge to enforce the correct behavior … either by establishing a policy around the usage of tools which use LDAP to modify passwords, or by altering the directory settings to allow password changes through LDAP, thus stopping userPassword from holding the current user password just "by accident".

Of course ACLs still apply, but one might end up in the difficult position of explaining to some AUDITOR why THE PASSWORD IS THERE. In such a case … you can redirect them to my blog, or better … to the MSDN pages.

Some time ago, when Windows 2008 was released, I had some spare time (where are those days?) and wanted to sharpen my .NET coding skills. And what is better than finding an idea to use them on … that's how the 1Identity Snapshot Recovery Tool was created.

The Snapshot Recovery Tool is a command line tool which can be used to undelete an existing tombstone and then populate all or some of its attributes with data from a directory services snapshot.

Snapshots are a nice feature introduced in Windows 2008 which allows you to inspect Active Directory content as it was at the point in time when the snapshot was taken. My opinion is that this was a half-baked attempt to introduce something like the Recycle Bin which is present nowadays in W2008R2. But hey ... it was there, so I decided to use it.
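
For reference, snapshots are created and mounted with ntdsutil; a typical sequence looks roughly like this (from memory – check the ntdsutil documentation for the exact syntax):

# Take a snapshot of the directory ...
ntdsutil snapshot "activate instance ntds" create quit quit
# ... and later list and mount one, so it can be exposed with dsamain and browsed over LDAP
ntdsutil snapshot "activate instance ntds" "list all" "mount 1" quit quit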

Using this tool and snapshot data, one can recover all attributes, including links – such as memberOf for a user or member for a group.

It can recover a single object or multiple objects, based on a GUID list or an LDAP query.

A few words about the original place where this tool was published – 1Identity. 1Identity was an initiative of mine to build an independent network of directory services and identity experts who would build tools and documentation … it didn't work out this time, mostly because of lack of time on my side. Maybe one day I will get back to this idea.

In the meantime … the Snapshot Recovery Tool is back here on DirTeam.org, and if you want it and can find a use for it … feel free to use it.

Comments, bug reports and suggestions are welcome here or at t.onyszko@w2k.pl.

 

P.S.#1 I want to say a big THANK YOU here to Jorge, who tested this tool and provided very useful feedback regarding functionality and bugs – and showed this tool a few times on some (DEC\TEC) occasions.

P.S.#2 I also want to say thank you to all of you who have tried this tool and liked it. I read some blog posts and comments about it, and it was nice to read that my work has actually helped somebody.

This post is probably the first of a TEC 2009 follow-up series, at least partially, as I had thought about covering this topic just before going to TEC. But Brian Desmond touched on it during his session, so it is a good reason to follow up on it.

This will be about the usage of catch-all subnets in AD topology design. What does a catch-all subnet mean?? Let's start with a definition.

 


What’s this about …

When a client computer is trying to locate a domain controller, it performs a location process during which it tries to discover its site, based on the network subnet information it sends to a DC (Jorge has posted a nice description of the DC location process in three parts – I, II, III). If the client determines its site, it then tries to locate a DC in this site using DNS queries. The site location process, from the Active Directory perspective, is based on the sites and subnets defined in AD. If the client's network subnet matches a subnet object defined in AD, the client is assigned to the site to which this subnet was assigned at the directory level.

But there might be a situation in which a client is not able to determine its site, because a subnet object corresponding to the client's network subnet was not defined in Active Directory. In such a situation the client will pick one of the available DCs (I will cover what "available" means in this context later) it can reach, and will use it for its operations. The problem is that this might be far from the most optimal DC for this client to use – for example, it might be a DC in some distant, poorly connected branch site.

So what if we create one subnet object (or a few of them) which spans multiple sites and covers all the subnets used in our network? If these super-subnet objects are connected to some site, our client will always be able to determine its site and, in the end, the corresponding DCs. In the worst case the client will use a non-optimal DC, but one in the site for which the catch-all subnet was configured. Done. Some explanation of this topic can be found in an article in TechNet Magazine.

So what’s the catch …

Looks promising, but – can we do this another way? Yes, we can, and I wrote about it earlier in my post How to cover the un-covered – the case of the missing subnet. In short, we can use DNS registration to control which site will be chosen by a client which cannot determine the exact site it belongs to. This can be achieved through proper registration of the site-specific and domain-specific SRV records. If the client is not able to locate its own site, it will pick one of the DCs which registered the domain-specific records.
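
These domain-wide records are easy to inspect – every DC which registers them is a candidate for clients that cannot determine their site. A quick check (the domain name is just an example):

# List the DCs which registered the domain-wide LDAP SRV records –
# the candidates a site-less client may end up using
nslookup -type=SRV _ldap._tcp.dc._msdcs.w2k.pl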

And that's it … is this a better approach than a catch-all subnet?? It is probably just personal preference, but I like using DNS records over such subnets. It is a more elegant solution for me, and I think it is easier to manage and troubleshoot in case of problems. The choice is yours …

 

When a catch-all subnet can be of benefit …

However, I can see scenarios in which a catch-all subnet has some benefit. Let's take a look at a topology which is not exactly hub-and-spoke, but something which is sometimes called a snowflake. In such a topology we have a central site (hub) and two or more tiers of satellite sites.


If we wanted to gather traffic from all 3rd-tier clients at the 2nd-tier level – and, even when they can't find their site, not redirect them to one of the DCs in the hub – we couldn't do this with DNS records. In such a case we can use a catch-all subnet for each region, configured at the 2nd-tier site level, to control the behavior of clients and keep them attached to the correct 2nd-tier site of our topology, as in this picture.

 

 

 

Of course, the DNS record registration should also be correctly planned and configured for such a design.

 

And that's basically it – this just came up in Brian's session at TEC, and maybe it would not have caught my ear\eye if I had not read that TechNet article just before TEC.

 

So what do you think about using catch-all subnets? Are you using them? Any other ideas or comments? The comments are open … and so is the contact form :).

During preparation for my TEC sessions, and during TEC itself, I noted some topics to blog about in the future, and I hope I will find the time to do so soon. I also noted some URLs of tools which are out there, so today's post is a kind of web press release.

Patch management. If you have ever wondered how to deploy updates, you might be interested in a script posted by Brian Desmond on his blog. Pretty interesting, if you ask me. Worth checking out.

Group nesting. If you manage an AD environment and have nothing against using the W2008R2 PowerShell, take a look at a script posted on the AD PowerShell team blog. It allows you to select a group and analyze how it is nested in other groups, and even presents it in (sic!) tree form. A nice example of how to utilize the R2 PowerShell capabilities.

Speaking of the R2 PowerShell: as some of you may know, these cmdlets do not use LDAP but the brand new AD Web Service, which also ships with R2. For down-level DCs (look how quickly W2008 has become down-level :) ) there is a web download which delivers this service for Windows 2003 and 2008 DCs and ADAM \ AD LDS. It is called the Active Directory Management Gateway and will allow you to manage these DCs with PowerShell.

Finally, something from another area – file servers. A new tool has hit the Downloads web site: the File Server Capacity Tool, which comes in 32-bit and 64-bit flavors. I think the name of this tool is self-explanatory.

So that's all from the web review for today …