Dirteam.com / ActiveDir.org Blogs

That was probably the longest quiet period in this blog's history. Changes happen, and the silence was a result of those changes. But that's the past. Now it's time to move on.

I've decided to consolidate my blogging activities in a single place - from now on that place is HTTP://ONYSZKO.COM. Update your links in Feedly, RSS readers, personal notes ... time to get back to blogging again :)

All content which is present on DirTeam will stay here. Maybe one day I will move the top 10 of these posts to my new site - we will see. For now, enjoy it and stay tuned to my new place :).

I spend a lot of time working with VMs, as probably most of us in the tech world do nowadays. One thing I noticed about VMs is that virtual disks have a tendency to grow … and grow … and grow. That’s why I often follow this procedure:

  • Run SDelete with the –c switch within the VM and shut it down
  • Run diskpart, select the VHD (select vdisk) and compact it (compact vdisk).
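The host-side half of this procedure can be scripted; here is just a sketch (the VHD path is hypothetical, and it assumes SDelete -c has already been run inside the guest):

```powershell
# Compact a VHD with diskpart after SDelete -c has zeroed free space in the guest.
# The path below is hypothetical - point it at your own VHD.
$vhd = 'D:\VMs\FIMLab.vhd'
$script = @"
select vdisk file="$vhd"
compact vdisk
"@
$tmp = Join-Path $env:TEMP 'compact-vdisk.txt'
Set-Content -Path $tmp -Value $script
diskpart /s $tmp          # requires an elevated prompt
Remove-Item $tmp
```

diskpart reads its commands from the script file passed with /s, so the whole shrink step becomes a one-liner you can run after every lab session.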

This keeps my VHD sizes under control (to some degree). However, I noticed that VMs with FIM grow constantly and there is no free space to be compacted. My friend Borys from MCS (kudos go to Borys) did a little digging on this and found this KB article: KB 941789.

As it turned out, SharePoint Services, on which FIM builds its application, logs a lot of information, which can easily be deleted (past logs) and prevented from growing further with the instructions from this KB. You might be surprised how much free space this gives you if your FIM instance has been running for some time.

Just a quick tip for all the FIMsters.

It has been rather a long time since the last blog post. Changes in my professional life, TEC US (it was a blast – you can watch one of my sessions on YouTube – sorry for the sound quality), projects … now I'm trying to get back on track with my community work. So hopefully more entries will be posted here soon.

Today, just a quick tip about something I was struggling with recently – there is a pretty good chance that someone else will have the same problem.

Well – in Europe we have many languages, and with this cultural legacy we also have something more – diacritic characters in our alphabets. Of course this isn’t specific to Europe – diacritic characters are present in many alphabets.

If you are working with FIM, you have to live with RCDCs (RCDCs are definitions of how the FIM UI is rendered for particular object types). If you start editing an RCDC downloaded from FIM (editing a default one or creating a new one for a new object type), you may want to put some captions in your local language – like “Członkostwo w grupach” (which translates to “Group membership”). When such an RCDC gets uploaded to FIM (iisreset goodness), you may see that your carefully crafted RCDC is broken and all diacritic characters are replaced with “something”.

If you want to make sure this doesn’t happen, ensure that your editing tool of choice saves the RCDC XML as Unicode UTF-8 with signature (BOM).

In Visual Studio: File -> Advanced Save Options, then select the proper encoding option.

Unfortunately, the next time you download your RCDC from FIM for editing, you will have to remember this step as well. Another reason to keep RCDCs as part of your solution, outside of FIM.
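If you don't want to rely on remembering an editor setting, the re-save can also be scripted; a minimal PowerShell sketch (the RCDC file path is hypothetical):

```powershell
# Re-save an RCDC XML file as UTF-8 with a signature (byte-order mark),
# so non-ASCII captions survive the upload back into FIM.
$path = 'C:\RCDC\GroupEdit.xml'                             # hypothetical RCDC file
$text = [System.IO.File]::ReadAllText($path)
$utf8WithBom = New-Object System.Text.UTF8Encoding($true)   # $true = write the BOM
[System.IO.File]::WriteAllText($path, $text, $utf8WithBom)
```

Run it over your RCDC files as the last step before uploading, and the encoding is always right regardless of which editor touched them.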

BTW – it looks like I’m officially listed as a speaker for TEC Europe, which takes place this October in Frankfurt, Germany. If you are interested in AD and IdAM, or in hearing me speak, and you still don’t have plans for October, maybe attending TEC is a good idea!


I'm breaking the silence on the blog with just a quick post on a tool for FIM.

It looks like the FIM 2010 Management Pack for SCOM 2007 has hit the web and is available for download. If you are using FIM and SCOM, you now have a proper tool to monitor its state.

If you are wondering (which I doubt is the case) why it has been so silent here recently: Connected Dots and preparation for TEC. But I'm getting back on track with blogging … serious posts, not just news items like this one.

It was more than 6 years ago that I decided to join the other side as a Microsoft employee with Microsoft Consulting in Poland. And it was a good move. As with every such move, it brought some changes in my private and professional life. One of those changes was that my MVP award (Directory Services at that time) was retired. Well – those are the rules and you play by them. This is how it is.

Well, my adventure with MCS has come to an end (never say never), as I have decided to enter the unknown territory of running a consulting practice with Connected Dots.

And now history has come full circle: after 4 months outside of MS, I was again given the Microsoft Most Valuable Professional (MVP) award, this time in the Enterprise Security area (hmm…).


It is a really good start to the new year, and I want to say thank you for the award, and also a big THANK YOU to those who thought I had done enough to earn it and submitted my candidacy to Microsoft.

Now it's time to earn the renewal next year.


Back from TechEd, time for another troubleshooting case for ILM.

I have a customer who uses ILM (the FIM sync service, actually) to manage objects in a Sun ONE directory (or Enterprise Directory, as Oracle calls it now). After some changes in the infrastructure, which resulted in one of the Sun LDAP servers being taken out of service, delta import operations stopped working. When ILM was configured against the new LDAP replica, it did not recognize it as a replica that supports delta import operations. So there was a difference between the old and the new server somewhere.

For test purposes we used another Sun ONE LDAP replica to check whether ILM would handle deltas against it correctly, and it did. So apparently there was a configuration difference between those two servers (we confirmed using LDP.EXE that the ILM agent account was allowed to read this information on both servers).

So what was the difference? It became immediately visible when we analyzed the network traffic between the ILM box and the Sun ONE server: first the traffic going to the server which was not recognized as a change log source, and then, for comparison, the traffic going to the server which operated normally.

In the first case, among others, we could see the following LDAP queries:

And response

In the second case we could see similar information and requests, but the answer was slightly different. Query:

And response:

So when ILM sent a query for the rootDSE object (NULL as the query base) looking for an attribute called changelog, the attribute was found and returned. For ILM this is an indication that it can use this particular LDAP replica as a source of change information.

If you spend your daily work life in the AD world, this might surprise you a bit, as in AD all servers can be used as a source of delta information. This is a consequence of AD's multi-master replication model and how it works. In this case, however, even though change log information exists on a given LDAP replica, if the replica is not marked with an additional attribute in its rootDSE it won't be used as a source of such information.
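You can run the same check yourself without a network capture; a quick sketch using plain ADSI from PowerShell (the server name below is hypothetical):

```powershell
# Ask the server's rootDSE whether it advertises a change log.
$rootDse = [ADSI]"LDAP://ldap01.example.com/RootDSE"   # hypothetical LDAP host
$changelog = $rootDse.Properties["changelog"]
if ($changelog.Count -gt 0) {
    "Change log base: $($changelog[0]) - usable as a delta source"
} else {
    "No changelog attribute in rootDSE - this replica won't be used for deltas"
}
```

If the attribute comes back empty on one replica and populated on another, you have found the same configuration difference we saw in the traces, without opening a network analyzer.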

Another problem solved quickly once the network traffic was analyzed. Take it as a rule of thumb: if you are troubleshooting something that happens over the network, start by looking at what is actually being sent over the network. It makes life simpler.

I just got back home from TechEd in Berlin. It was kind of a special TechEd for me, for at least three reasons:

  1. It was my first TechEd in the last 5 years – kind of a side effect of my departure from Microsoft
  2. I was speaking at TechEd for the first time!!!
  3. Last but by no means least – I finally managed to meet our good soul here, Sander … this was something.

So yes, I spoke this year at TechEd together with Paula Januszkiewicz – a great speaker and, personally, a close friend. The result … you can judge for yourself, as our session is already posted on the TechEd web site.

Speaking to a big room full of people was a good experience – feedback to be taken and lessons to be learned. But it was FUN!!! First of all, thanks go to PAULA!!! (You rock.) I hope it was fun for the people in the audience as well.

Personally, TechEd for me is more about networking with peers and meeting new people than attending sessions, but I managed to see a few of them. Now I will have to watch the rest online. I have a few thoughts, too, which I will share here soon. Short summary – it was all about the cloud (I think some people even got tired of it), and we will see a huge federation push from Microsoft. So stay tuned.

A while ago I wrote a short entry about adding a new claim mapping to an existing identity token provider definition. After this post I got the following comment from one of my readers (good to know I still have some of them here :) ):

When I run the powershell command it fails with the following error: Add-SPClaimTypeMapping : Incoming claim types do not include claim type 'http://schemas.microsoft.com/ws/2008/06/identity/claims/role'

I had no time to dig into this issue back then, but as often happens, I eventually had to do it myself – so here is part deux of this tip: what to do if you have a new claim definition and you have to add it to an SPS 2010 identity provider definition.



So let's assume that we have a new claim with the following type:

http://schemas.microsoft.com/ws/2010/07/identity/claims/company

which is being issued by our ADFS 2 server for an SPS 2010 application. Earlier we defined an identity token issuer in our SPS 2010 configuration (Jorge has gathered together some articles which describe in detail how to do this) – in our case called ADFS20Server.

So how do we add this new claim definition to the identity token issuer in SPS 2010? Here comes a recipe:

Get IdentityTokenIssuer object:

$tokenIssuer = Get-SPTrustedIdentityTokenIssuer -Identity "ADFS20Server"

Add the new claim type to the issuer's list of accepted claim types (this is what the error message quoted above complains about):

$tokenIssuer.ClaimTypes.Add("http://schemas.microsoft.com/ws/2010/07/identity/claims/company")
$tokenIssuer.Update()
Create new claim mapping:

$companyClaim = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.microsoft.com/ws/2010/07/identity/claims/company" -IncomingClaimTypeDisplayName "Company" -SameAsIncoming
And add it to our token issuer configuration:

$companyClaim | Add-SPClaimTypeMapping -TrustedIdentityTokenIssuer $tokenIssuer

And voilà – a quick check confirms the new mapping:

PS C:\> (Get-SPTrustedIdentityTokenIssuer -Identity "ADFS20Server").ClaimTypes
PS. Thanks go to Bryan, who pointed me in the right direction when I was struggling to figure this one out based on the SPS 2010 PowerShell help :).

I'm done with an intensive month of sessions, delivered for different user groups and other communities online. If you managed to attend my session about Kerberos, I hope you liked it ;). Now it's time for some blogging activities.

A friend asked on his blog (PL only, sorry) how to quickly determine the groups a computer account belongs to. The question was asked, so it's time for an answer – or at least one of the possible answers :). Actually, I was sure I had written about it here before, but a quick search proved me wrong (I'm sure I talked about it at the last TEC in Berlin). If not ... time to do it now. Starting with the basics.

Constructed attributes

First, let me introduce the concept of constructed attributes in Active Directory. Active Directory (among its other capabilities) can handle dynamically constructed attributes, which are calculated on the fly when a query is issued to get them. If one looks at an object using a standard LDAP client (like LDP.EXE) or another tool, these attributes will not be present on the object. However, when a query is issued to the directory to return them – magic happens and the value (if it exists) is calculated and returned.


The first example, which everybody is familiar with, is back-link attributes. Back-link attributes are paired with forward-link attributes and are used to store information about references among objects – think member –> memberOf.

If we take a look at a user object's properties using the new fancy attribute editor feature in Windows Server 2008 R2 Active Directory Users & Computers (ADUC), we can't see the memberOf attribute.

However if we issue a query for this attribute using ADFIND.EXE, we find:

C:\ >adfind -b CN=tom.tom,ou=Accounting,DC=w2k,DC=pl -s base memberOF

AdFind V01.42.00cpp Joe Richards (joe@joeware.net) April 2010

Using server: FIMDC01.w2k.pl:389
Directory: Windows Server 2008 R2

>memberOf: CN=Ksiegowosc,OU=FIMGroups,DC=w2k,DC=pl

1 Objects returned

We get a response ... magic ;)

All the magic is done by the directory service, which calculates the requested attribute value on the fly. There are more attributes which can be constructed by AD, and they all fall into one of three categories (at least based on the available documentation):

  1. The attribute is marked as constructed in the schema, using the ATTR_IS_CONSTRUCTED bit in its systemFlags attribute value.
  2. The attribute is a back link (as shown above).
  3. The attribute is a rootDSE attribute.

A list of constructed attributes is available on MSDN for anyone who is interested.


And here is an answer (one of the possible ones) to the question of how to determine group membership for a workstation: one way is to query the tokenGroups attribute of the computer object. The attribute description is quoted below:

These two computed attributes return the set of SIDs from a transitive group membership expansion operation on a given object

So if we query AD for a security principal and ask for the tokenGroups attribute, we will get the list of SIDs of the groups to which this object belongs when it logs on. A computer object in a domain is a security principal like any other, so the same query can be issued to retrieve its attribute values.

Once again using ADFIND.EXE:

C:\ >adfind -b CN=STS,CN=Computers,DC=w2k,DC=pl -s base tokenGroups
AdFind V01.42.00cpp Joe Richards (joe@joeware.net) April 2010

Using server: FIMDC01.w2k.pl:389
Directory: Windows Server 2008 R2

>tokenGroups: S-1-5-21-2045789631-2668715847-4178987103-1162
>tokenGroups: S-1-5-21-2045789631-2668715847-4178987103-515

As you can see, we've got a list of SIDs corresponding to the groups. How do we translate these SIDs to names? Use ADFIND.EXE with the SID as a query parameter:

C:\ >adfind -b dc=w2k,dc=pl -s subtree -f "(&(objectSid=S-1-5-21-2045789631-2668715847-4178987103-1162))" name

AdFind V01.42.00cpp Joe Richards (joe@joeware.net) April 2010

Using server: FIMDC01.w2k.pl:389
Directory: Windows Server 2008 R2

dn:CN=ADFS Servers,OU=FIMGroups,DC=w2k,DC=pl
>name: ADFS Servers

1 Objects returned
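If you prefer to stay in PowerShell rather than ADFIND.EXE, both steps – reading tokenGroups and translating the SIDs – can be sketched like this (it has to run against a live domain; the DN matches the example above):

```powershell
# Read the constructed tokenGroups attribute and translate each SID to a name.
$computer = [ADSI]"LDAP://CN=STS,CN=Computers,DC=w2k,DC=pl"
$computer.RefreshCache(@("tokenGroups"))    # constructed attributes must be requested explicitly
foreach ($sidBytes in $computer.Properties["tokenGroups"]) {
    $sid = New-Object System.Security.Principal.SecurityIdentifier($sidBytes, 0)
    try   { $sid.Translate([System.Security.Principal.NTAccount]).Value }
    catch { $sid.Value }   # fall back to the raw SID if it can't be resolved
}
```

Note the RefreshCache call: because tokenGroups is constructed, it is not returned unless you ask for it by name, which mirrors what we saw with memberOf earlier.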

And that's all of the trickery for today ...

A recent post from Brad Turner reminded me of something I've wanted to blog about since I set up my Forefront Identity Manager (FIM) lab for self-service password reset for users. So here it is – WMI permissions …

… If you want to enable the self-service password reset scenario for users (which is one of the scenarios you definitely want to enable when you deploy FIM), there are a number of things to do – enabling MPRs, setting permissions in AD, configuring sync engine settings and also configuring WMI permissions. All the required steps are outlined in this TechNet document. One of them is setting up permissions on the WMI space on the FIM synchronization service machine.

The instructions tell you to set some permissions on the ROOT\CIMV2 namespace and all its child namespaces for the FIM 2010 service account. The reason behind these changes is that the actual password reset on the object is performed through WMI calls to the FIM synchronization service, which enables lookups of MV objects and CS objects and the password reset call itself. Actually, the same scenario is possible with ILM 2007 or even MIIS 2003 with some not-so-magical code, and I’ve used it in the past to deploy similar solutions for customers … but that's just a side note.

There is nothing wrong with these instructions, but well … I tend to see a problem when instructions tell me to delegate “all rights” on an entire tree of objects when it is not completely necessary.

What follows comes from my own experience; if you want to stick to the fully supported way, follow the TechNet instructions or ask a Microsoft representative about support. However, I don’t think this change breaks the scenario – in my case it works great, and it worked for ILM 2007 as well … but well, I got a reason to write a disclaimer ;).

If you take a look at the WMI tree on the FIM synchronization service machine, you will notice a specific namespace: ROOT\MicrosoftIdentityIntegrationServer. When it comes to WMI permissions, it is enough to set the permissions outlined in the TechNet document only on this namespace to make the FIM self-service password reset scenario work.
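Before changing any ACLs you can verify that the namespace is actually there on your sync server; a quick inspection sketch (this is just a check, not part of the official TechNet steps):

```powershell
# List the WMI classes exposed by the FIM synchronization service namespace.
Get-WmiObject -Namespace "root\MicrosoftIdentityIntegrationServer" -List |
    Select-Object -ExpandProperty Name
```

If the namespace is missing, you are probably not on the synchronization service machine at all, which is a cheaper discovery than debugging failed password resets later.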

It should be also easy to fix it in Brad Turner's script (great work Brad).

May and June this year are speaking-engagement months for me. Most of these engagements are local events in Poland – if you're interested in them, please visit my Polish blog.

One event will be in English and is organized online through Live Meeting by my friend, Polish MVP Robert.

If you have a spare 75 minutes on June 19th, please join me through Live Meeting for the *free* VirtualStudy Conference 2010. I have the honor and pleasure of being part of this event as a speaker among MVPs and other well-recognized members of the community.


I will deliver an English version of my session from the Microsoft Technology Summit 2009 conference about the Kerberos protocol – how it works and how to troubleshoot it: Catching one tail with three heads – Kerberos explained.

DirTeam.com will have a strong presence at this event, as Sander will also speak. Just sit comfortably, grab a drink and have fun listening about tech.


I’m playing a little bit with SharePoint 2010 and the claims model (more posts on this topic will probably follow), where ADFS v2 (yes, it has shipped, in case you missed it) acts as a trusted claims provider for SPS 2010. It is a great scenario which I think will find its use in many organizations; however, re-thinking all the access and role models for SharePoint applications might be tough work at the start. More on this approach soon. Right now, a quick configuration tip …



… if you have defined a trusted claims provider in SharePoint – an ADFS 2 server, for example – part of its configuration is the set of claims it can provide to SPS and the mapping of these claims. In the claims provider properties it looks like this:

PS C:\> (Get-SPTrustedIdentityTokenIssuer -Identity "ADFS20Server").ClaimTypes

What if you want to add another one, for example “Role”? Nothing simpler – run PowerShell and you will find the Add-SPClaimTypeMapping cmdlet, which should allow you to do exactly what is needed. The problem is that when you take a look at the examples provided in the TechNet documentation or the cmdlet help, you will get examples which do not necessarily fit the cmdlet syntax, like this one:

Get-SPTrustedIdentityProvider –Name "LiveIDSTS" | Add-SPClaimTypeMapping -IncomingClaimType "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier" -IncomingClaimTypeDisplayName "PUID" -LocalClaimType http://schemas.xmlsoap.org/ws/2005/05/identity/claims/thumbprint

What to do then? A simple example of how to add a new claim mapping to a trusted provider is presented below:

$map2 = New-SPClaimTypeMapping -IncomingClaimType "http://schemas.microsoft.com/ws/2008/06/identity/claims/role" -IncomingClaimTypeDisplayName "Role" -SameAsIncoming
$ti = Get-SPTrustedIdentityTokenIssuer -Identity "ADFS20Server"
Add-SPClaimTypeMapping -Identity $map2 -TrustedIdentityTokenIssuer $ti

Quick check:

PS C:\> (Get-SPTrustedIdentityTokenIssuer -Identity "ADFS20Server").ClaimTypes

Done !

A few weeks ago I wrote about FIM 2010 support for Windows 2008 R2 Active Directory environments with the Recycle Bin enabled. Basically, it wasn't a supported configuration at that time. But the world is changing ... and FIM with it.

A few days ago FIM 2010 Update 1 was released to Windows Update and also to the Windows Catalog. You can use the Windows Catalog to download this update in case your FIM machine can't access the Internet or is not configured to update automatically.


This update also brings support for AD with the Recycle Bin enabled to the FIM Active Directory MA and synchronization engine. A prerequisite for this update is the hotfix described in KB 979214, which changes the DirSync control behavior. This hotfix might also be useful if you have an application which uses the DirSync control against a Windows 2008 R2 directory.

Unfortunately, there is still no update for this issue for ILM 2007 FP1, which will continue to have problems importing some changes when the Recycle Bin is enabled. Hopefully a solution will be released soon.

I promised to get back to the AD WS topic, so here I am. My last post was about how a client locates an Active Directory Web Services (AD WS) instance. When a client locates the service, in most cases it does so in order to do something with it – query, update, and so on. But what if something goes wrong and we want to troubleshoot it? Of course there is always network traffic analysis, but there is also an AD WS debug logging mechanism which can be used for this. All you need to do is turn it on. How?


AD WS is a web service written in WCF and installed on every Windows Server 2008 R2-based DC. It is also available as the AD Management Gateway option for Windows Server 2003 and Windows Server 2008. The service has its own configuration stored in a file named Microsoft.ActiveDirectory.WebServices.exe.config, placed in the AD WS installation folder (%WINDIR%\ADWS by default).

Configuration parameters are described on these TechNet pages; however, information about the diagnostic logging option is missing there. To configure this mechanism, alter the configuration file and add an <appSettings> section with the following entries:

<add key="DebugLevel" value="<log_level>" />

where log_level might be one of the following values: None, Error, Warn or Info. Info is the highest debug level, which logs full debug information and also the communication exchange between clients and the service. To configure where the debug information is stored, add the following key to the config file:

<add key="DebugLogFile" value="<path to log file>" />

I think the options in this case are self-explanatory. The final configuration might look something like this:

<appSettings>
  <add key="DebugLevel" value="Info" />
  <add key="DebugLogFile" value="C:\ADWSLog\Adws_trace_log.txt" />
</appSettings>

After making these changes in the configuration file, restart the service to make them take effect.

This change has to be made in each instance's configuration separately, but it might be just a file copy operation – it depends on your environment.

One thing to remember – there is no such thing as a free debug operation; it always has some performance cost attached. I don't know what this cost is in the AD WS case, but always consider it when you decide to use it – especially in Info mode ...

Some time ago I wrote about issues with the ILM 2007 FP1 Active Directory MA connecting to Windows 2008 R2 forests. In short: it is supported as long as the Recycle Bin is not enabled.

Someone asked a question on ActiveDir.org about whether the same is supported with FIM 2010. I've asked a few people (thanks, Andreas) and it looks the same: the FIM 2010 AD MA is supported against Windows 2008 R2 Active Directory if the Recycle Bin is not enabled. However, there is light at the end of the tunnel ...


... the problem with the ILM \ FIM AD MA is related to the use of the DirSync control, which can be used in conjunction with LDAP queries to retrieve the changes made in AD since the last query. Because links to deleted objects in AD with the Recycle Bin enabled are treated in a different way (links are disabled \ enabled instead of being deleted), when a user is restored, group membership is not correctly imported in delta cycles. However, the AD team has released a hotfix, described in KB 979214, which corrects the behavior of the DirSync control in this scenario.
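For the curious, the DirSync control the MA relies on is also available to your own code through System.DirectoryServices.Protocols; a rough sketch (the server and base DN are taken from the lab examples above, and error handling is omitted):

```powershell
# Sketch: pull changes from AD using the DirSync LDAP control.
Add-Type -AssemblyName System.DirectoryServices.Protocols
$conn = New-Object System.DirectoryServices.Protocols.LdapConnection "FIMDC01.w2k.pl"
$request = New-Object System.DirectoryServices.Protocols.SearchRequest(
    "DC=w2k,DC=pl", "(objectClass=*)",
    [System.DirectoryServices.Protocols.SearchScope]::Subtree, $null)
$dirSync = New-Object System.DirectoryServices.Protocols.DirSyncRequestControl   # empty cookie = full pass
$request.Controls.Add($dirSync) | Out-Null
$response = $conn.SendRequest($request)
# Persist the cookie from the DirSyncResponseControl and pass it in the next
# DirSyncRequestControl to receive only the changes made since this query.
foreach ($control in $response.Controls) {
    if ($control -is [System.DirectoryServices.Protocols.DirSyncResponseControl]) {
        $cookie = $control.Cookie
    }
}
```

This is exactly the pattern affected by the Recycle Bin behavior described above, which is why applications built this way need the KB 979214 hotfix on the DC side.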

This of course won't fix the problem with the FIM AD MA itself. This will be fixed when an update to FIM will be released (sorry, no date known to me at this moment).

This fix is also important for application developers using DirSync to pull changes from AD. If such an application is (not) working in your environment, maybe it is worth deploying this fix.
