
So there is a single line in the release notes for FIM 2010 R2 that says:

Synchronization Service: The value of boolean type outbound scoping filter in Filter based Sync Rule is case sensitive.
It must be all lowercase, i.e. "true" or "false". If the casing is different, it will crash.

They mean it. It took me a minute to track down, but I had created a temporary sync rule with UPPERCASE true and false…IT KILLED THE ENTIRE SERVER PROCESS. That’s right. It took down the entire synchronization service.

I’m astonished. I don’t know what to say…I’m just speechless.

http://blog.ioshints.info/2013/04/netconfyangnetmod-versus-smi-s.html

Ivan’s blog is a “must read” for all things networking. He’s right on the money here about the networking industry’s inability to come up with standard management technologies.

“A decade after NETCONF started, we don’t have a single usable RFC that two vendors could take to implement a common way to configure something as simple as an IP address and a mask on an interface.”

These are also the same people that brought you ISDN after all.

I’ve been using and developing against the product that has become known as FIM 2010 R2 since 2003. This particular piece of software started life at a company called Zoomit and was originally developed by a guy named James Booth. (Please correct me if I have the backstory wrong.) Zoomit was acquired by MS, which renamed the product Microsoft Metadirectory Services. I encountered it in the form of MMS 2.2, and in 2003 I helped migrate a large IT department to MIIS 2003, the first version of what would become the FIM Synchronization Service.

The next revision came in 2007 with another name change: Identity Lifecycle Manager 2007. The product was essentially unchanged: a synchronization engine that could be extended with C# or VB.Net to sync data between LDAP, SQL, flat files, Lotus Notes…you name it. I found this extensibility really empowering. It’s the main reason I ended up focusing on development instead of system administration. Despite the sort of “can do” attitude it fostered in administrators and developers, when polled, end users kept telling MS to “get the coding” out of the product. The next release would add “declarative provisioning” capabilities to accommodate this desire. I think the results are mixed. It would also bring another name change. The product had been moved into the Forefront brand group, alongside such “winning” pieces of software as ISA. (Yes, that ISA…the “proxy server of doom.”)

FIM 2010 provided some very compelling use cases. Secure Self Service Password Reset is a feature that can provide a single, unified password reset experience whether it’s performed from the GINA/Credential Provider (that’s the CTRL+ALT+DEL prompt to the rest of the world) or from a web-based portal. The Synchronization Service could now be controlled via declarative rules in a SharePoint-based portal that also included the ability to trigger workflow tasks when an object changed state. It also allowed delegated user and group administration. This was improved in the R2 release, and again with the SP1 that came out in early 2013. There are still some rough edges, however.

MIIS 2003 could be customized by providing implementations of .Net interfaces packaged in a .Net DLL extension. The sync service scans a directory for extensions and loads them when it starts. This provided extensibility for every part of the sync cycle. When mapping an attribute in AD to an internal object, the service would call into your DLL and you could process it any way you liked, with the full capabilities of the .Net Framework. There were guidelines about the kinds of things one shouldn’t do while the engine was doing its work, but it was the definition of flexible. From the sync manager UI you could even have it create a Visual Studio project for you, with skeleton code ready to customize. All of that functionality remains, but there are some problems with this approach. Don’t get me wrong, I loved working on this stuff, but its requirements are becoming a “non-starter” in lots of IT shops. Namely, to work with the sync management service you have to log on to the console of the server via RDP. Not a big deal, but as of Server 2012 the official default deployment model is Server Core. You can’t use the sync service on Server Core. You need the full UI. In addition to this, if you want to debug any errors you run into, you need Visual Studio installed on the server. Back in 2003 that was actually recommended for production servers…I still do it to trap errors in the debugger that only arise with the full production data set. This really isn’t a good model going forward.

But what about declarative provisioning, you ask? Wouldn’t that eliminate the need for custom code in the sync service? I’m sure there is an 80th percentile case where, yes, it would do just that. The problem, though, is that the real utility of this software has been its flexibility in an area of IT that is rife with idiosyncrasies. The process of discovering, processing, and then syncing changes to and from the different identity “silos” around an enterprise can be incredibly complicated. This is often for reasons that are more political than technical, but because of that you might need to take steps that you otherwise wouldn’t. So, for instance, you may have an attribute in a database view exposed by PeopleSoft that needs to be modified ever so slightly before it’s considered “gold” and ready to be synced to other systems, like AD. (And maybe the PeopleSoft guys hear “Microsoft” and tell you that they are afraid it will “break” their database. Seriously.) Currently, FIM 2010 R2 provides some limited means of manipulating it with functions like “TRIM()” or “LEFT()”, but the experience is akin to writing JavaScript in Notepad after you have become used to Visual Studio. Did it work? Who knows, until you actually get the new “sync rule” into the engine via an import and start synchronizing…then it’s entirely possible that the “expected rule entry” that links every object to its particular matching sync rule could just say “unapplied”. Without going into the guts of how sync works in FIM 2010 (I’m saving that for another post), let’s just say that the current declarative system is easier for most things, but when you want to do something it can’t do, you are in the weeds. Just how far in the weeds is kind of amazing…like I said, I’m saving that for another post, but wow. Just wow. I earn a living working with this software, and every now and then it occurs to me just how much weird behavior I have grown accustomed to over the years.

Anyway, as of a few months ago the Forefront brand is no more. Most of the products in its lineup have been killed. “Endpoint Protection” has been moved to the System Center line of products. I expect the next major release will continue the pattern of name changes that have accompanied every release so far. I’ve been giving some thought to what the future of this thing should look like…at least if someone cared what I thought about the subject. 🙂

First off, it needs to run on Server Core. There needs to be a “development version” that you can run on your workstation where you can model the desired behavior for your environment and then upload it to the actual sync server. The current SharePoint portal just isn’t a good way to configure this incredibly complex software. It’s fine for delegating access to users and groups (as long as you don’t need to do much customization), but when you are doing admin or development work it is really death by a thousand cuts. They use AJAX wherever possible in the UI, but it’s always loading something…and even though it uses AJAX it still blocks the UI! This is due to the main thing that needs to be taken out and shot: the “FIM Service”.

FIM 2010 introduced a new component called the “FIM Service”. This provided a new database, separate from the metadirectory database that the sync manager operates on. It also provides a hosted workflow engine via Windows Communication Foundation and Windows Workflow Foundation. This interacts with the SharePoint-based portal via a SOAP-based WCF web service. The portal surfaces the functionality in the web service, so while you do your administration and configuration through the portal, you are actually using the web service. Awesome, right? That must mean that you can fire up your favorite development tools and customize it to fit perfectly into your environment, right? No. Well, you could. If you were really a hardcore WCF developer, I’m sure this would be easy. No one, let me repeat, no one who works with this software is a hardcore WCF developer. No one. For that matter, even hardcore WCF developers don’t want to be WCF developers. WCF made writing web services incredibly easy compared to DCOM. Open heart surgery is easy and agile compared to DCOM, so that’s kind of a low bar.

It seems like the FIM 2010 specs were designed by well-meaning people. There are lots of things about it that one would have thought were a good idea. Its web service is the most standards-compliant I have used. You could use it from Java if you were so inclined. It uses SharePoint, which lots of people use, right? It uses Kerberos constrained delegation. It has its own Security Token Service to issue claims for the web portal. Lots of buzz in those words there. The problem is that while this was going on, the world shifted to lightweight REST-based web services that humans can use and understand. Microsoft has another Security Token Service called ADFS that works pretty well from what I hear. No one uses the web service in a way that warrants the added complexity. Because of SharePoint and the web service together, the portal always feels slow. It has its own little AJAX spinner to let you know it’s loading data from the service…learn to love that little spinner. That’s all I can say. The workflow stuff is genuinely useful, but they have another workflow engine that is getting a great deal more development energy…it’s called System Center Orchestrator, and it’s awesome.

System Center has its own self-service platform called Service Manager that can be coupled with SC Orchestrator to do everything from provisioning servers to running arbitrary PowerShell code. You can define service offerings, expose them to your enterprise customers, and tie it all together with approvals and notifications. Service Manager can integrate with SC Operations Manager to open a ticket if there’s something wrong with a service. FIM can do approvals. FIM can do custom workflows. (Like I said, there’s a future post coming on the pains of developing for FIM.) FIM can run PowerShell via an unsupported workflow module developed in the community. There are a couple, in fact. The problem here is that the System Center products are getting the major energy and focus at MS. That’s really what I see happening in the future. It’s something I will be investigating in my own dev lab very soon, as well. The sync engine could be paired with Service Manager and Orchestrator to do everything that the current FIM solution does…and more. All without that painful SharePoint portal. Did I mention that you get a Service Manager license with FIM 2010 R2 for logging? Hmmm…The Forefront brand is dead. Other Forefront products have been transferred to System Center. Service Manager is already integrated with FIM…it doesn’t take a crystal ball to see where that’s going. In fact, I’m pretty sure that I could build a passable replacement for the “FIM Service” components myself in the meantime.

I’ve been working with this software since it came out and recently I’ve gotten as deep into this thing as you can get without the source code. It’s been really, really painful. The next post will just be about what I would like to see if I were building some future version.

AD LDS has a great feature called “bindable proxy objects”. These are objects that refer to an AD DS object by its ‘objectSid’ attribute. For all intents and purposes these can be treated as plain user objects by any consuming application. The real benefit is that the password for the account is stored in AD DS. This can give you a lot of flexibility in the way that you store user data for applications. It provides a layer of indirection between applications and your domain, allowing you to keep information that might not be a good fit for AD DS out of the domain while still making use of the user’s centralized domain credentials.

One place this might come in handy would be in an extranet scenario where you have external user accounts that could be created as normal ‘user’ objects and internal users who can be created as ‘userProxy’ or ‘userProxyFull’ objects. Any applications that service both user communities can use one directory to authenticate either type of account. It’s pretty handy.

They are a little difficult to work with, however. Since the proxy account refers to the domain account via ‘objectSid’, the domain account needs to exist prior to creating the proxy in AD LDS. Once this is done, if the domain account is deleted or disabled, the proxy account will no longer authenticate the user.

The following PowerShell function will create a ‘userProxyFull’ object with a provided email address.

function Create-ADLDSProxyObject([string] $emailAddr)
{
    #this uses the .Net Framework wrapper for ADSI
    Add-Type -AssemblyName System.DirectoryServices

    #Get-ADUser comes from the ActiveDirectory (RSAT) module
    Import-Module ActiveDirectory

    #this is the container in LDS where the account will be created
    $userContainerPath = 'LDAP://DC:50000/CN=Users,CN=APPDIR,DC=DEV,DC=TEST'

    #get the user's AD DS account by mail attribute
    #(a string filter avoids the variable-scoping pitfalls of script block filters)
    $dsUser = Get-ADUser -Filter "mail -eq '$emailAddr'"

    if ($dsUser)
    {
        #this does the work.
        #first create the object under the parent, then set the sid;
        #objectSid has to be written in its binary form
        #userPrincipalName and mail are set, but are both optional
        $sid = $dsUser.SID
        $sidBytes = New-Object 'byte[]' ($sid.BinaryLength)
        $sid.GetBinaryForm($sidBytes, 0)

        $userContainer = New-Object System.DirectoryServices.DirectoryEntry -ArgumentList $userContainerPath
        $proxyObject = $userContainer.Children.Add('CN=' + $emailAddr, 'userProxyFull')
        $proxyObject.Properties['objectSid'].Value = $sidBytes
        $proxyObject.Properties['userPrincipalName'].Value = $emailAddr
        $proxyObject.Properties['mail'].Value = $emailAddr
        $proxyObject.CommitChanges()
    }
    else
    {
        Write-Warning "Sorry, pal. No AD DS account with mail '$emailAddr' was found."
    }
}
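
A quick usage sketch (the address is hypothetical, and the matching AD DS account has to exist first):

#hypothetical usage: an AD DS account with mail = 'jdoe@dev.test' must already exist
Create-ADLDSProxyObject 'jdoe@dev.test'

One note for testing: proxy objects authenticate via simple bind only, and out of the box AD LDS refuses a proxy bind over an unencrypted connection, so point your test binds at the SSL port.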

 

There are quite a few scenarios that I run into routinely where I need a random value. A unique file name or password, perhaps. PowerShell makes this incredibly easy. This is one of those features that are so useful it makes you wonder how you got along without it.

Examples

 

Get ten random characters:

Get-Random -Count 10 -InputObject (65..90) | %{ [char]$_ }

Get fifteen random computer accounts for XP Workstations:

Get-ADComputer -LDAPFilter "(operatingsystem=*XP*)" | Get-Random -Count 15

Get some random records from your DNS cache:

Get-DnsClientCache | Get-Random

 

And that’s really the beauty of it…it works with any kind of collection.
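
For instance, the random password case from the intro joins right up into a string. A quick sketch (the character ranges and length here are arbitrary; note that -Count picks each element at most once):

#build a 16-character random string from upper, lower, and digit ranges
-join ((65..90) + (97..122) + (48..57) | Get-Random -Count 16 | ForEach-Object { [char]$_ })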

I made it through the MCSE Server Infrastructure boot-camp and got certified! In other news, I also got permission to re-post the PowerShell training blog posts that I did for work here as long as I don’t mention them at all…so expect a ton of those sometime soon.

Haven’t posted in a long while. At work I’ve been using an internal blog platform to teach PowerShell, which didn’t leave a lot of time for my own blog. So far that experience has actually been pretty rewarding. Most of the people I have interacted with aren’t developers; they are IT people with varying levels of experience and skill. People get really excited when they discover that they can command a computer to do something. It’s somehow more satisfying, more assertive, than using a GUI. When they realize that if you can do it to ONE machine you could do it to A THOUSAND machines, people start to get that “Pinky and the Brain” look in their eyes. The genuinely strange thing is that this organization is incredibly wary of automation. I’ve never seen anything like it. These days almost every element of a datacenter supports some type of automation. With just a little development, provisioning and deployment become effortless. I don’t get it. Anyway, Windows Server 2012 has incredibly broad PowerShell support and I have been knee deep in most of it for months. I’ll be posting a lot of PowerShell goodness in the coming weeks.

I’ve been doing a lot of internals work with FIM 2010 R2. I’ve written a functional Salesforce.com Management Agent/Connector. I’ve also built some custom workflows for simple tasks that I needed to accomplish. Hopefully all of that code will end up on Github really soon.

It’s actually the weird hesitance to use automation that has made me turn back to public blogging. Every organization has political environments that have to be navigated to introduce a new technology. Maybe I’m just not the right guy to do that here. I can certainly say that I’ve grown really tired of hearing “NO!” for anything that I propose. Maybe it should take a month to spin up a new server and I’m just a jerk for thinking it should be quicker. Who knows…So, I’m going to start working on more open source stuff and writing about identity management. Why not just leave? I work from home three days a week. It’s hard to argue with. I’m also still working on some things that I really enjoy. There’s just plenty of stupid peppered into an otherwise very tasty soup.

I’m also going to be trying to get a specific certification this year…so I’ll probably be writing about that as well. This weekend I’m off to California for an MCSE: Server Infrastructure boot-camp.

msiexec /a mysql-installation.msi

This command will create a directory in the root of %SYSTEMDRIVE% and extract all of the individual products. This took me some time to find out. From there you can silently install individual products with msiexec.

For instance:

msiexec /i mysql-workbench-gpl-5.2.37-win32.msi /quiet /log log-wk.txt
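
To script the whole thing, a loop like this should work. (A minimal sketch: the extraction folder name below is an assumption — use whatever msiexec /a actually created in the root of your system drive.)

#silently install every MSI from the administrative extract
#'C:\MySQL' is a placeholder for the folder the extract created
$extracted = 'C:\MySQL'
Get-ChildItem -Path $extracted -Filter *.msi -Recurse | ForEach-Object {
    Start-Process -FilePath msiexec.exe -Wait `
        -ArgumentList "/i `"$($_.FullName)`" /quiet /log `"$($_.BaseName).log`""
}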

What a pain. This isn’t really documented anywhere.

<SARCASM> But hey! You can build the whole thing from source!  </SARCASM>

One reason why people (normal people) don’t like hardcore, Asperger’s-syndrome-type techies…hard and fixed stances on things like INDENTATION.

There can be only one!…and now I’m going to make it a point to never use it.🙂

Indentation in Source Code

There was an article this morning that mentioned that Metro was replacing Aero as the overall theme in the next version of Windows. I’ve been using both Windows 8 and Windows Server 8 since the BUILD conference, and I just don’t see how that is the case. Some of the recent posts by Steven Sinofsky have included screenshots of Task Manager and other desktop apps using a very basic theme.

This one illustrates the new flat basic theme.

[screenshot: the new flat basic theme]

I haven’t tried the client OS on a non-3D-accelerated system yet, but this is the default for the Server OS, accelerated or not.

In fact, on the Server OS it’s a feature, “Server Graphical Shell”:

[screenshot: the “Server Graphical Shell” feature]

 

Aero came enabled by default on the client OS.

[screenshot: Aero enabled by default on the client OS]

On the Server OS you can get Aero by installing the “Desktop Experience” feature with the PowerShell command “Add-WindowsFeature Desktop-Experience”…This same feature is also available on Server 2008 R2 via the Server Manager interface.
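
Scripted end to end, that looks something like this (on Server 2008 R2 the ServerManager module has to be imported explicitly; Server 2012 loads modules on demand):

#Server 2008 R2 needs the explicit import; Server 2012 auto-loads it
Import-Module ServerManager
Add-WindowsFeature Desktop-Experience

#the feature isn't active until after a reboot
Restart-Computer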

While it may not be the most attractive interface ever, it is very consistent when using Remote Desktop Services. Using the default Server OS install gives you an experience that is the same locally and remotely. Since a large number of virtual desktop deployments end up not enabling Aero, this might be an effort to start managing expectations about remote UI in general. VDI is a MASSIVE push by the industry, and MS in particular, since it really does away with a lot of the headache of managing desktop systems. At BUILD they were really hyping the potential for device makers to build cheap RDP terminals that serve as thin VDI clients. There was at least one session on it, though I didn’t see it personally.

You can enable Aero in VDI sessions with RemoteFX, starting with Server 2008 R2 SP1, but that actually requires installing some fairly expensive co-processors that allow virtualized 3D acceleration.

@the_gadgeteur asked me to post some screen shots and build numbers…so, cheers.
