I am not aware that anyone has yet found a way to automate full testing of a FIM solution. I know some people unit test their extension code, but that doesn’t tell you anything beyond the inputs and outputs of the code. Full testing may need to encompass data entry in the Portal or from connected systems, workflows, synchronization, and the resultant modifications in target systems.
We can achieve a reasonable level of testing in the Sync Service using the built-in functionality of sync preview and “drop a log file and stop the run”. However, what if we also need to test FIM’s response to particular ways that data is entered and modified in connected systems, and what about the Portal?
The only way that I know to end-to-end test a FIM solution is with Test Cases. These are documented procedures with particular inputs that should lead to demonstrable outcomes. They have to be conducted manually, which is, unfortunately, a thoroughly boring job for someone.
As a solution grows you may end up with many hundreds of test cases. Is it really necessary to run through all of them at each change? We have to be practical here so perhaps a subset of the most common use cases can be tested, plus any that are particularly linked to the change.
I am also very glad when someone other than me does the testing. If I made the change then of course I know it’s fine, and so may completely miss where it actually isn’t. For complex environments with a lot of test cases to run through, having people other than the FIM administrators/designers do the testing is definitely a best practice.
You might look at the wrappers in the FIM PowerShell Module on Codeplex. The commands in there wrap much of the out-of-box functionality for FIM Sync and Service that you would use to do automated testing of a deployment. I’ve done this on a small scale for a couple of projects recently, and I’ve seen large-scale automated testing done with it too.
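As a minimal illustration of the kind of out-of-box functionality those wrappers sit on top of (the MA name and run profile name below are just placeholders for your own configuration), a run profile can be started from PowerShell through the Sync Service’s documented WMI provider:

    # Run a named run profile on a management agent via the Sync Service WMI provider.
    # Must run on the sync server with appropriate rights; "HR MA" and the run
    # profile name are placeholders.
    $ma = Get-WmiObject -Namespace root/MicrosoftIdentityIntegrationServer `
                        -Class MIIS_ManagementAgent `
                        -Filter "Name='HR MA'"
    $result = $ma.Execute("Full Import (Stage Only)")
    if ($result.ReturnValue -ne "success") {
        # other "completed-*" statuses may also be acceptable depending on the test
        throw "Run profile did not succeed: $($result.ReturnValue)"
    }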
Are you saying that by using PowerShell you don’t need any documented test cases? I use PowerShell scripts as a regular part of all my implementations to report on various things and check data consistency – but if you’re saying that you could replace all manual testing with PowerShell automation, I suspect the effort of maintaining those scripts would actually be far greater (and require more highly skilled people) than running through the test cases.
No, the first step is documenting the test cases. Once you do that, you can look at automating them. Certainly doing them by hand works, but what happens when you have 50–100 test cases (or more)? Testing those manually would take a full-time resource, if not more. By automating it you can practice TDD – write the test case before you implement the feature it tests – and you can check that you haven’t regressed anything with your latest changes. If you’re especially organized, put those test cases in TFS and feed the results of your automated runs back into them; then you can track progress, automatically file bugs when things break, and so on.
If you spend a couple of days writing a framework that does most of the automation within the scope of your project’s requirements, it’s pretty easy to glue together more test scripts as you go: you just set up the scenario (in terms of data) and check that the outcome is what you expect.
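As a rough sketch of that glue – the function and parameter names here are made up for illustration, not commands from the Codeplex module – each test script boils down to an arrange/act/assert pattern driven by a tiny helper:

    # Hypothetical test-runner glue: run one scenario and report pass/fail.
    function Invoke-FimTestCase {
        param(
            [string]$Name,
            [scriptblock]$Arrange,   # seed test data in the connected systems
            [scriptblock]$Act,       # run the relevant run profiles / workflows
            [scriptblock]$Assert     # return $true if the outcome is as expected
        )
        & $Arrange
        & $Act
        if (& $Assert) { Write-Host "PASS: $Name" }
        else           { Write-Warning "FAIL: $Name" }
    }

A new test case then becomes one short call to the helper with those three blocks filled in.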
I look forward to you posting some examples 😉
I do something similar to Brian in our development environment, and have found it to be an incredible time-saver to be able to run a suite of automated integration tests – and a great way to learn about/debug ILM/FIM.
I have also written a suite of standard ILM levers to pull from code (C#, not PowerShell).
A test case could be (sketched as a script after this list):
– Run a cleanup procedure (empties the metaverse)
– Insert a user in the SAP import table (SQL Server)
– Run a SAP full import full sync
– Assert the user exists in ILM with key attributes set to expected values
– Assert user does not exist in AD
– Run an Active Directory Export Delta Import Delta Sync
– Assert the user exists in AD and has expected values for key attributes
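A rough sketch of that test case as a PowerShell script – Clear-Metaverse, Start-RunProfile and Test-MetaverseUser are illustrative helper names (my own levers are actually C#), and the server, table, MA and run profile names are placeholders:

    Import-Module SqlServer          # for Invoke-Sqlcmd
    Import-Module ActiveDirectory    # for Get-ADUser

    # Cleanup: empty the metaverse so the environment starts from a known state
    Clear-Metaverse

    # Arrange: seed one user row in the SAP staging table
    $insert = "INSERT INTO dbo.SapUsers (EmployeeId, GivenName, Surname) VALUES ('10042', 'Test', 'User')"
    Invoke-Sqlcmd -ServerInstance "SQL01" -Database "Staging" -Query $insert

    # Act: stage and synchronize the SAP data
    Start-RunProfile -MA "SAP MA" -Profile "Full Import"
    Start-RunProfile -MA "SAP MA" -Profile "Full Sync"

    # Assert: user is in the metaverse with expected values, but not yet in AD
    if (-not (Test-MetaverseUser -EmployeeId '10042' -GivenName 'Test')) { throw "User not found in metaverse" }
    if (Get-ADUser -Filter "employeeID -eq '10042'") { throw "User should not exist in AD yet" }

    # Act: export to AD, then confirm the export with a delta import / delta sync
    Start-RunProfile -MA "AD MA" -Profile "Export"
    Start-RunProfile -MA "AD MA" -Profile "Delta Import Delta Sync"

    # Assert: user now exists in AD with the expected attributes
    $adUser = Get-ADUser -Filter "employeeID -eq '10042'" -Properties GivenName, Surname
    if (-not $adUser -or $adUser.GivenName -ne 'Test') { throw "AD user missing or has wrong attributes" }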
My current suite of about 100 automated tests takes 30 minutes for a full run (mostly because of the cleanup between each test). Running these has many times uncovered unexpected consequences that would otherwise have made it into production, because it is simply not cost-effective to run every single test case by hand every time we deploy a feature (we run a two-week release cycle).
Interesting to hear about someone doing this in reality. However – is this just Sync, or the Portal too? With all the data being cleared out first, are the tests mostly focused on provisioning? Also, I’m interested to know the size of the environment – the effort required to automate testing is probably more worthwhile in a larger environment.
We are just in the process of migrating to FIM from ILM, so it is just sync. Clearing out the data is to make sure that the environment is as clean as it can be before running the tests. There are some tests of provisioning, but it is mostly tests of custom functionality related to archiving users, sending notifications to other services (such as email reminders to managers), and so on.
The approach has been to write tests for the most basic provisioning, and then to write tests for the new functionality we develop. So the tests we have cover the functionality developed since we adopted this strategy.
The size of the environment is in the tens of thousands.
It is not simple, but as Klaus says, you do learn a LOT more about the product AND your deployment through automation.
Automated testing is the best way to measure quality, and it is possible with FIM. I’ve done it with FIM Sync, FIM Service, and also FIM CM.
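On the FIM Service side, for instance, an assertion can be as small as an XPath query through the FIMAutomation snap-in (the account name below is only an example):

    # Check that a Person object with a given AccountName exists in the FIM Service.
    Add-PSSnapin FIMAutomation -ErrorAction SilentlyContinue
    $person = Export-FIMConfig -OnlyBaseResources `
                               -CustomConfig "/Person[AccountName='testuser1']"
    if (-not $person) { throw "Expected Person 'testuser1' not found in the FIM Service" }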
I have a TONNE of respect for SDETs – those sneaky buggers can automate anything!
I actually used it to understand the flows: I could write tests, work out the order in which things happen and verify it in tests – and when something happened that I did not understand, I could attach the debugger in Visual Studio and step through the extension code.
However, without a solid background in development and some experience with automated testing, it is not easy, as ILM/FIM were certainly not developed with automated testing in mind.