Tuesday, June 29, 2010

SQL Server – Grouping By Date

If you have a datetime column in your database and you need to group by it, here is a quick way to do it:

Hint: Use the CONVERT function with an appropriate style number so that you can get rid of the time value (in my case I used style 111, which outputs yyyy/mm/dd).

SELECT convert(varchar(10),[dateColumn],111) as date, count(1) as count
FROM [tableName]
where [dateColumn] >= '20100601'
group by convert(varchar(10),[dateColumn],111)
order by date

Sunday, June 27, 2010

FTC goes after Twitter for security lapses

via http://www.ftc.gov/opa/2010/06/twitter.shtm:

The FTC went after Twitter for security lapses that resulted in user information being leaked and accounts being hijacked. Of importance is the list of items that the FTC said Twitter didn’t do to safeguard its users’ accounts. It’s a good set to learn from:

Lessons to learn from the FTC case against Twitter that you can use in your organization:

  • require employees to use hard-to-guess administrative passwords that they did not use for other programs, websites, or networks;
  • prohibit employees from storing administrative passwords in plain text within their personal e-mail accounts;
  • suspend or disable administrative passwords after a reasonable number of unsuccessful login attempts;
  • provide an administrative login webpage that is made known only to authorized persons and is separate from the login page for users;
  • enforce periodic changes of administrative passwords, for example, by setting them to expire every 90 days;
  • restrict access to administrative controls to employees whose jobs required it; and
  • impose other reasonable restrictions on administrative access, such as by restricting access to specified IP addresses.

Excerpts from the release:

Twitter Settles Charges that it Failed to Protect Consumers’
Personal Information; Company Will Establish Independently Audited Information Security Program

The FTC’s complaint against Twitter charges that serious lapses in the company’s data security allowed hackers to obtain unauthorized administrative control of Twitter, including access to non-public user information, tweets that consumers had designated private, and the ability to send out phony tweets from any account including those belonging to then-President-elect Barack Obama and Fox News, among others.

The FTC’s complaint alleged that between January and May of 2009, hackers were able to gain administrative control of Twitter on two occasions. In January 2009, a hacker used an automated password-guessing tool to gain administrative control of Twitter, after submitting thousands of guesses into Twitter’s login webpage. The administrative password was a weak, lowercase, common dictionary word. Using the password, the hacker reset several passwords, and posted some of them on a website, where other people could access them. Using these fraudulently reset passwords, other intruders sent phony tweets from approximately nine user accounts. One tweet was sent from the account of then-President-elect Barack Obama, offering his more than 150,000 followers a chance to win $500 in free gasoline. At least one phony tweet was sent from the account of Fox News.

During a second security breach, in April 2009, a hacker was able to guess the administrative password of a Twitter employee after compromising the employee’s personal email account where two similar passwords were stored in plain text. The hacker reset at least one Twitter user’s password, and could access nonpublic user information and tweets for any Twitter user.

Under the terms of the settlement, Twitter will be barred for 20 years from misleading consumers about the extent to which it protects the security, privacy, and confidentiality of nonpublic consumer information, including the measures it takes to prevent unauthorized access to nonpublic information and honor the privacy choices made by consumers. The company also must establish and maintain a comprehensive information security program, which will be assessed by an independent auditor every other year for 10 years.

Friday, June 25, 2010



The following information about the ObservableCollection<T> class is excerpted from the MSDN documentation:

Namespace: System.Collections.ObjectModel
Assembly: System (in System.dll)

Represents a dynamic data collection that provides notifications when items get added, removed, or when the whole list is refreshed.

The ObservableCollection<T> class is an implementation of a data collection that implements the INotifyCollectionChanged interface. (The INotifyCollectionChanged interface exposes the CollectionChanged event, an event that should be raised whenever the underlying collection changes.)


If you ever need to implement your own collection, consider using IList, which provides a non-generic collection of objects that can be individually accessed by index. Implementing IList provides the best performance with the data binding engine.
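As a quick sketch of how the CollectionChanged notifications surface (the collection contents here are made up for illustration):

```csharp
using System;
using System.Collections.ObjectModel;

class Program
{
    static void Main()
    {
        ObservableCollection<string> names = new ObservableCollection<string>();

        //subscribe to the CollectionChanged event exposed via INotifyCollectionChanged
        names.CollectionChanged += (sender, e) =>
            Console.WriteLine("Collection changed: {0}", e.Action);

        names.Add("Raj");   //raises CollectionChanged with Action = Add
        names.RemoveAt(0);  //raises CollectionChanged with Action = Remove
    }
}
```

This is exactly what WPF’s data binding engine hooks into when you bind an ItemsControl to an ObservableCollection<T>.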

More info:

How to: Create and Bind to an ObservableCollection

Thursday, June 24, 2010

Tuesday, June 22, 2010

Layered Architecture Sample – VS 2010


Project Description
Layered Architecture Sample is designed to demonstrate how to apply various .NET technologies such as Windows Presentation Foundation (WPF), Windows Communication Foundation (WCF), Windows Workflow Foundation (WF), Windows Forms, ASP.NET and ADO.NET Entity Framework to the Layered Architecture design pattern. It is aimed at illustrating how code of similar responsibilities can be factored into multiple logical layers, which is applicable to most of today's enterprise applications.


Atomized Strings - .Net technique for memory optimization

Via: Atomize your strings to improve memory usage

Atomization is the technique of reusing string references as a way of optimizing memory usage. It can also lead to speed optimizations, as reference comparisons are much faster than actual string comparisons.

As the post points out, the NameTable class used for processing XML in .Net uses this technique for storing element names. You can use the NameTable class to get the same kind of optimization in your own code.

From the post:

The process of taking a string and checking whether you already had one with the same value to reuse it is called atomizing a string. This has two nice properties.

  1. You end up using less memory to hold all the strings.
  2. You can compare the strings faster. The value is the same if and only if they are the same reference, so you can do a by-reference comparison, which is much faster. Even if you eventually mix non-atomized strings, you'll still have cases where the by-reference gives you a "quick yes" on equality comparison.
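The atomizing process described above can be sketched with a plain dictionary (a toy illustration of the idea, not how NameTable is actually implemented):

```csharp
using System.Collections.Generic;
using System.Text;

static class Atomizer
{
    static readonly Dictionary<string, string> _pool = new Dictionary<string, string>();

    //returns the pooled reference if we have seen this value before,
    //otherwise pools the new string and returns it
    public static string Atomize(string s)
    {
        string existing;
        if (_pool.TryGetValue(s, out existing))
            return existing;
        _pool[s] = s;
        return s;
    }
}

//two distinct string instances with the same value...
string a = Atomizer.Atomize(new StringBuilder("hello").ToString());
string b = Atomizer.Atomize(new StringBuilder("hello").ToString());
//...now share one reference, so equality is a cheap reference check
bool same = object.ReferenceEquals(a, b); //true
```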

Sample code for using the NameTable class for optimizing your code by using atomized strings:

System.Xml.NameTable nt = new System.Xml.NameTable();
foreach (var detail in details)
{
    //Add returns the atomized instance: the existing reference if the string
    //is already in the table, otherwise the string is added and then returned
    string value = (detail == null) ? null : nt.Add(detail);
}

NameTable: http://msdn.microsoft.com/en-us/library/system.xml.nametable(v=VS.80).aspx

Hanselman: http://www.hanselman.com/blog/XmlAndTheNametable.aspx


It is important to remember that the CLR also attempts to optimize memory used by string literals by using the string intern pool. The string intern pool will use the same reference for s1 and s2 in the following example, but s3 will have a totally separate reference:

string s1 = "hello";
string s2 = "hello";
object.ReferenceEquals(s1,s2); //true
StringBuilder sb = new StringBuilder();
string s3 = sb.ToString();
object.ReferenceEquals(s1,s3); //false
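If you do need s3 to share the pooled reference, you can atomize it explicitly with String.Intern, which checks the intern pool and returns the pooled instance:

```csharp
string s1 = "hello";
StringBuilder sb = new StringBuilder("hello");
string s3 = sb.ToString();
object.ReferenceEquals(s1, s3);  //false - s3 is a separate instance
string s4 = string.Intern(s3);
object.ReferenceEquals(s1, s4);  //true - s4 is the pooled "hello"
```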

Opting out of iAds on your Apple device

To opt-out of the iAds program on your Apple device (iPhone, iPod Touch, iPad), just visit http://oo.apple.com from your device.

For more information: How to opt out of interest-based ads from the iAd network (Apple.com)

Upgrading my iPhone 3G to iOS4

And here is what I have to look forward to…. (it will be interesting to see how many of the following features will be available on my 2nd-gen 3G phone)

From the dialog shown just before I began updating my phone….

This update contains over 100 new features, including the following:

• Multitasking support for third-party apps*
  - Multitasking user interface to quickly move between
     apps
  - Support for audio apps to play in the background
  - VoIP apps can receive and maintain calls in the
     background or when device is asleep
  - Apps can monitor location and take action while
     running in the background
  - Alerts and messages can be pushed to apps using
     push and local notifications
  - Apps can complete tasks in the background
• Folders to better organize and access apps
• Home screen Wallpaper*
• Mail improvements
  - Unified inbox to view emails from all accounts in one
     place
  - Fast inbox switching to quickly switch between
     different email accounts
  - Threaded messages to view multiple emails from the
     same conversation
  - Attachments can be opened with compatible third-
     party apps
  - Search results can now be filed or deleted
  - Option to select size of photo attachments
  - Messages in the Outbox can be edited or deleted
• Support for iBooks and iBookstore (available from the
   App Store)
• Photo and Camera improvements
  - 5x digital zoom when taking a photo**
  - Tap to focus during video recording**
  - Ability to sync Faces from iPhoto
  - Geo-tagged photos appear on a map in Photos
• Ability to create and edit playlists on device
• Calendar invitations can be sent and accepted wirelessly
   with supported CalDAV servers
• Support for MobileMe calendar sharing
• Suggestions and recent searches appear during a web
   search
• Searchable SMS/MMS messages**
• Spotlight search can be continued on web and Wikipedia
• Enhanced location privacy
  - New Location Services icon in the status bar
  - Indication of which apps have requested your location
     in the last 24 hours
  - Location Services can be toggled on or off for
     individual apps
• Automatic spellcheck
• Support for Bluetooth keyboards*
• iPod out to navigate music, podcasts and audiobooks
   through an iPod interface with compatible cars
• Support for iTunes gifting of apps
• Wireless notes syncing with IMAP-based mail accounts
• Persistent WiFi connection to receive push notifications*
• New setting for turning on/off cellular data only**
• Option to display the character count while composing
   new SMS/MMS**
• Visual Voicemail messages can be kept locally even if
   they have been deleted from the server**
• Control to lock portrait orientation*
• Audio playback controls for iPod and third-party audio
   apps
• New languages, dictionaries and keyboards
• Accessibility enhancements*
• Bluetooth improvements
• Better data protection using the device passcode as an
   encryption key* (Requires full restore.)
• Support for third-party Mobile Device Management
• Enables wireless distribution of enterprise applications
• Exchange Server 2010 compatibility
• Support for multiple Exchange ActiveSync accounts
• Support for Juniper Junos Pulse and Cisco AnyConnect
   SSL VPN apps (available from the App Store)
• More than 1,500 new developer APIs
• Bug fixes

Products compatible with this software update:
• iPhone 3G
• iPhone 3GS
• iPhone 4
• iPod touch 2nd generation
• iPod touch 3rd generation (late 2009 models with 32GB
   or 64GB)

* Requires iPhone 3GS, iPhone 4, and iPod touch 3rd generation.
** Requires iPhone 3G, iPhone 3GS, and iPhone 4. SMS/MMS messaging and Visual Voicemail require support from your wireless carrier.

Friday, June 18, 2010

Why you can’t make cross-domain XMLHttpRequest(s)

The XMLHttpRequest object is an important part of any AJAX-enabled website. But if you ever try to get data from a web-service that is hosted on a different server using XMLHttpRequest, you will quickly learn that this will not work.

For the longest time, I just knew that this would not work, but I could not give a specific reason as to why this was the case. Finally, I decided to spend a little time and learn the why behind this restriction and here is the answer:

Basically, it is because of what is known as the “Same Origin Policy”. It is a policy enforced by the browser, in accordance with the W3C specification for XMLHttpRequest (“The XMLHttpRequest object can be used by scripts to programmatically connect to their originating server via HTTP”).

There is a lot more in-depth information available on this issue from the Mozilla developer site: https://developer.mozilla.org/En/Same_origin_policy_for_JavaScript. This page also talks about what constitutes a cross domain request.

So are there ways around this? The answer is yes. But some are more hacky than others, and the best solution depends on your specific circumstances. (example: http://www.nathanm.com/ajax-bypassing-xmlhttprequest-cross-domain-restriction/)

The solutions that I lean towards are: if the web-service can return JSONP (JSON wrapped in a callback function call, i.e. “JSON with padding”), then this is probably the easiest solution. A lot of publicly consumable web-services today expose this format. But what if the returned data is XML or cannot be converted to JSONP? In this case the next best solution is to write server-side code that acts as a proxy to the original web-service. Your JavaScript code calls a web-service on your server, which in turn calls the original web-service. Obviously, for this to work you need to be able to write server-side code (PHP or WCF, etc…)
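As a rough sketch of the proxy approach in ASP.NET (the handler name and target URL are made up, and a real proxy should whitelist the remote URLs it is willing to fetch):

```csharp
using System.Net;
using System.Web;

//ProxyHandler: the browser calls this handler on our own origin,
//and we fetch the data from the remote web-service on its behalf
public class ProxyHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        //hypothetical remote service - in a real app, validate/whitelist this
        string remoteUrl = "http://api.example.com/service?q=" +
            HttpUtility.UrlEncode(context.Request.QueryString["q"]);

        using (WebClient client = new WebClient())
        {
            string data = client.DownloadString(remoteUrl);
            context.Response.ContentType = "text/xml";
            context.Response.Write(data);
        }
    }

    public bool IsReusable
    {
        get { return true; }
    }
}
```

The browser-side XMLHttpRequest then targets ProxyHandler on your own domain, which keeps it within the Same Origin Policy.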

Now you know!

More info:

Same Origin Policy: http://en.wikipedia.org/wiki/Same_origin_policy

JSONP: http://en.wikipedia.org/wiki/JSON#JSONP

Ajaxian: JSONP: JSON With Padding

Inside RIA - What in the heck is JSONP and why would you use it?

Thursday, June 17, 2010

Viewing Entity Framework generated SQL statements

Sometimes you might need to peer into the innards of Entity Framework and determine if the SQL statements it’s generating are valid and efficient. Here are a couple of ways one can do this:

Method 1: in code

Use the following code to view the statements being generated.

var sql = ((System.Data.Objects.ObjectQuery)query).ToTraceString();
(where query is an object of type IQueryable<>)
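For example (the context and entity names here are hypothetical):

```csharp
using (MyEntities context = new MyEntities()) //hypothetical EF ObjectContext
{
    IQueryable<Customer> query = context.Customers
        .Where(c => c.Name == "Raj Rao");

    //cast the IQueryable<> down to ObjectQuery to get at ToTraceString
    string sql = ((System.Data.Objects.ObjectQuery)query).ToTraceString();
    Console.WriteLine(sql);
}
```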

The above code displays statements that look like the following:

SELECT
[Extent1].[Id] AS [Id],
[Extent1].[Name] AS [Name]
FROM [dbo].[Customers] AS [Extent1]
WHERE [Extent1].[Name] = @p__linq__0

But what if you need to determine what the value of @p__linq__0 is? (look at Method 2)

Note: another problem with Method 1 is that it cannot be used to view the insert and update statements being generated – just queries.

Method 2: Use a profiler

The profiler can be used to get more detailed information about the SQL statements being run against the database. The Profiler tool comes with SQL Server. Unfortunately, the one problem with the Profiler tool is that it’s not available for Express editions of SQL Server.

Solution: AnjLab has a free tool that provides similar profiling capabilities and works pretty well. It is called the AnjLab SqlProfiler. Unfortunately, it does not come with much documentation on how to set it up to display the SQL statements being executed against the database.

Here is how I had to setup the events to begin capturing the sql statements that were being generated by Entity Framework:


I am sure there will be other events that I might need later – but for now, the Events list shown above displays almost all the information needed to determine what SQL statements EF is generating. (I found that the RpcStarting event typically contained the parameters being passed in, and the SpStmtStarting and SqlStmtStarting events contained the queries being executed.)

Using the SqlProfiler, here is the output I got for the same statement that I showed in Method 1 (which was generated using the ToTraceString method):

exec sp_executesql N'SELECT
[Extent1].[Id] AS [Id],
[Extent1].[Name] AS [Name]
FROM [dbo].[Customers] AS [Extent1]
WHERE [Extent1].[Name] = @p__linq__0',
N'@p__linq__0 nvarchar(4000)',
@p__linq__0=N'Raj Rao'

It is interesting to note that a stored procedure was used to execute the query and that the name used in the where clause was passed in as a parameter (parameterized queries make me happy)

The future belongs to geeks

GapingVoid: The future belongs to geeks. Nobody else wants it.


Wednesday, June 16, 2010

Do you need an external connector license?

We had this question for a Sql Server licensing scenario recently.

The Sql Server licensing model for most organizations is derived from the Microsoft volume licensing model.

External connector licenses are also called EC licenses or Internet Connector licenses.

An EC license is required when you have users external to your organization that need access to any Microsoft application server (such as Sql Server) and the application server was originally licensed using a (Server + Client Access License) model. An EC license is NOT required when the application server is licensed using a per-processor based license.

An External Connector license is required when you license the server software using a CAL model. External Connector licenses are not required when you license using a per-processor licensing model.

More Info:

CAL licenses (from Microsoft Volume Licensing): http://www.microsoft.com/licensing/about-licensing/client-access-license.aspx

SQL Server 2005 licensing and Internet Connector License: http://www.microsoft.com/sqlserver/2005/en/us/pricing-licensing-faq.aspx

Q. What exactly is a processor license and how does it work?

A. ……..
….. In addition to the installation rights to the actual server software, processor licenses also grant any number of devices or users the right to access and use the server software running on those processors. These access rights are available to all devices or users, regardless of whether they are inside the organization (intranet scenarios) or outside the organization (Internet or extranet scenarios). The processor license contains all that you need. With a processor license, there is no need to purchase separate server licenses, CALs, or Internet connector licenses.

(this information is for the most part also valid for Sql Server 2008)

SQL Server 2008 Licensing Information: http://www.microsoft.com/sqlserver/2008/en/us/licensing-faq.aspx

Wednesday, June 09, 2010

ASP.Net - Performing Asynchronous Work, or Tasks

An in-depth post on Performing Asynchronous Work, or Tasks, in ASP.NET Applications by Thomas Marquardt.

Also see:

PageAsyncTask: http://msdn.microsoft.com/en-us/library/system.web.ui.pageasynctask.aspx

IHttpAsyncHandler: http://msdn.microsoft.com/en-us/library/system.web.ihttpasynchandler.aspx

Async Pages: http://msdn.microsoft.com/en-us/library/ydy4x04a.aspx

Scalable Apps with Asynchronous Programming in ASP.NET: http://msdn.microsoft.com/en-us/magazine/cc163463.aspx with information on Asynchronous Pages, Asynchronous HTTP Handlers
and Asynchronous HTTP Modules.

WCF, SSL Certificates and Certificate Validation Errors

When you use secure transport (such as SSL over Http), WCF by default requires that you have a valid certificate for the server hosting your service.

For the certificate to be valid the CN value needs to match the server name and the chain has to be valid (i.e., the root or one of the children of the root need to be in the trusted root authority on the machine from where you are running the WCF client).

But oftentimes (especially in dev), you might be using a certificate that you created yourself, and it hence does not have a valid root authority. In these cases, WCF will throw an exception of type “System.ServiceModel.Security.SecurityNegotiationException” with the following message:

Could not establish trust relationship for the SSL/TLS secure channel with authority '<name>'.

In these cases, WCF provides you with an easy mechanism to bypass the validation of the certificate, by assigning your own delegate to ServicePointManager.ServerCertificateValidationCallback:

ServicePointManager.ServerCertificateValidationCallback
                    += new RemoteCertificateValidationCallback(RemoteCertificateCallBack);

Typically, most examples on the Internet tell you to set up RemoteCertificateCallBack to return true. This will be OK for small, simple apps where you are working against only one web-service. But in scenarios where you might have multiple web-services running on different servers with different certificates, this can be a security issue. The reason is that the ServicePointManager.ServerCertificateValidationCallback delegate is invoked for all web-services running over HTTPS within the same app-domain as your application. In such circumstances, what you need to do is be as specific as possible when returning true for a certificate on which the ServerCertificateValidationCallback is being called. Here is some sample code that does just that using the CN name.

//call when app starts up
static void Setup()
{
    //only hook up the callback if we have CN names to validate against
    if (!string.IsNullOrEmpty("comma separated list of CN names that should pass - use App.config to store this"))
    {
        ServicePointManager.ServerCertificateValidationCallback
            += new RemoteCertificateValidationCallback(RemoteCertificateCallBack);
    }
}

//callback method
static bool RemoteCertificateCallBack(object sender, X509Certificate certificate, X509Chain chain, SslPolicyErrors sslPolicyErrors)
{
    bool certIsValid = sslPolicyErrors == SslPolicyErrors.None;
    //only check if the cert is not valid.
    if (!certIsValid)
    {
        //make sure the issuer has been set
        if (certificate != null && !string.IsNullOrEmpty(certificate.Issuer))
        {
            string listOfCertNames = "comma separated list of CN names that should pass - use App.config to store this";
            if (!string.IsNullOrEmpty(listOfCertNames))
            {
                string[] certNames = listOfCertNames.Split(',');
                for (int index = 0; index < certNames.Length; index++)
                {
                    certIsValid = ValidateIfKnownCertificateName(certificate, certNames[index]);
                    if (certIsValid)
                        break; //cert has been determined to be valid - break!
                }
            }
        }
    }
    return certIsValid;
}

/// <summary>
/// returns true if the certificate's issuer has a CN matching one of the
/// names in the IgnoreCertErrorsList appsetting.
/// </summary>
/// <param name="certificate"></param>
/// <param name="certName"></param>
/// <returns></returns>
private static bool ValidateIfKnownCertificateName(X509Certificate certificate, string certName)
{
    bool certIsValid = false;
    try
    {
        int cnIndex = certificate.Issuer.IndexOf("CN=");
        if (cnIndex >= 0)
        {
            cnIndex = cnIndex + 3; //skip past "CN="
            int cnEndIndex = certificate.Issuer.IndexOf(",", cnIndex);
            if (cnEndIndex > cnIndex)
            {
                //get the CN value
                string CNinIssuer = certificate.Issuer.Substring(cnIndex, cnEndIndex - cnIndex);
                if (string.Compare(CNinIssuer, certName, StringComparison.OrdinalIgnoreCase) == 0)
                {
                    //value was the same as one of the ones we know of.
                    certIsValid = true;
                }
            }
        }
    }
    catch
    {
        certIsValid = false;
    }
    return certIsValid;
}

Tuesday, June 08, 2010

WCF, TransportWithMessageCredential and MustUnderstands

I had to use WCF to consume a SOAP 1.1 web-service that was being hosted by Oracle’s Enterprise Service Bus (ESB). The web-service was set up with transport security (HTTP over SSL) and message-level security using a UsernameToken.

The above configuration should map to a basicHttpBinding (Soap11) and a security mode of “TransportWithMessageCredential” in WCF. This is what that configuration would look like:

                <binding name="basicBinding">
                    <security mode="TransportWithMessageCredential">
                        <transport clientCredentialType="None" proxyCredentialType="None"
                            realm="" />
                        <message clientCredentialType="UserName" algorithmSuite="Default" />
                    </security>
                </binding>

But when I attempted to use the web-service using this configuration – I got an error with the following message:

“SOAP must understand error:{http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd}Security, {http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd}Security.”

When I looked at the request message in Fiddler, I found that the mustUnderstand header was being set to true (or 1).


Obviously, the first thing for me to try was to set the mustUnderstand value to 0 (false). Re-issuing the request with that change in Fiddler allowed me to confirm that this was indeed the fix. Unfortunately, WCF will not directly allow you to change the mustUnderstand value of the Security header. WCF adheres a lot more strictly to the W3C recommendations and hence will always set mustUnderstand to true on the Security header (as it wants the server to throw a fault if it does not understand the Security header’s contents).

The only way that I could find around the “mustUnderstand” issue is to intercept the message just before the client sends it off to the server and modify the Security header manually. I hate this solution as it looks very hacky, and I would love to hear from anybody else who has figured out a more graceful solution.

Solution: WCF messages can be intercepted by any object that implements the IClientMessageInspector interface. In turn, this inspector is registered with an end-point using a class that implements the IEndpointBehavior interface. (The inspector is registered in the ApplyClientBehavior method of that interface.)

Here is the code for the IEndpointBehavior and IClientMessageInspector classes that allows you to add a Security header without the mustUnderstand attribute set to true.

using System;
using System.ServiceModel.Channels;
using System.ServiceModel.Description;
using System.ServiceModel.Dispatcher;
using System.Xml;

public class MessageBehavior : IEndpointBehavior
{
    string _username;
    string _password;

    public MessageBehavior(string username, string password)
    {
        _username = username;
        _password = password;
    }

    void IEndpointBehavior.AddBindingParameters(ServiceEndpoint endpoint, BindingParameterCollection bindingParameters)
    { }

    void IEndpointBehavior.ApplyClientBehavior(ServiceEndpoint endpoint, ClientRuntime clientRuntime)
    {
        clientRuntime.MessageInspectors.Add(new MessageInspector(_username, _password));
    }

    void IEndpointBehavior.ApplyDispatchBehavior(ServiceEndpoint endpoint, EndpointDispatcher endpointDispatcher)
    { }

    void IEndpointBehavior.Validate(ServiceEndpoint endpoint)
    { }
}

public class MessageInspector : IClientMessageInspector
{
    string _username;
    string _password;

    public MessageInspector(string username, string password)
    {
        _username = username;
        _password = password;
    }

    void IClientMessageInspector.AfterReceiveReply(ref Message reply, object correlationState)
    { }

    object IClientMessageInspector.BeforeSendRequest(ref Message request, System.ServiceModel.IClientChannel channel)
    {
        //the UsernameToken formatted exactly as we want it sent
        string headerText = "<wsse:UsernameToken xmlns:wsse=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd\">" +
                                "<wsse:Username>{0}</wsse:Username>" +
                                "<wsse:Password Type=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-username-token-profile-1.0#PasswordText\">" +
                                "{1}</wsse:Password>" +
                                "</wsse:UsernameToken>";
        headerText = string.Format(headerText, _username, _password);

        //load the header text so we can hand the XML element to CreateHeader
        XmlDocument myDoc = new XmlDocument();
        myDoc.LoadXml(headerText);
        XmlElement myElement = myDoc.DocumentElement;

        //mustUnderstand is the last parameter - false keeps WCF from requiring
        //the server to understand (and fault on) the header
        MessageHeader myHeader = MessageHeader.CreateHeader("Security",
            "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd",
            myElement, false);
        request.Headers.Add(myHeader);

        return Convert.DBNull;
    }
}

The things to notice are:

  • In the MessageBehavior class, I register the MessageInspector in the “ApplyClientBehavior” method.
  • The message headers are transformed in the “BeforeSendRequest” method of the MessageInspector class. The headerText contains the header just as I would like it to be formatted (without a mustUnderstands value in it). The rest of the code is for passing along the userName and password so that it can be placed into the Security header.

The MessageBehavior needs to be assigned to the WCF client. This is done using the following code:

MessageBehavior messageBehavior = new MessageBehavior("username", "password");
client.Endpoint.Behaviors.Add(messageBehavior);

Where client is an instance of the WCF client.

Finally (and this ended up being a key factor that I had to learn after a lot of trial and error), I needed to modify the bindings used by WCF to use basicHttpBinding with just transport-level security (https). The reason is that if you use TransportWithMessageCredential, then even though you might be adding the Security headers in the BeforeSendRequest method, WCF will continue adding its own version of the Security headers (and these contain mustUnderstand set to true and will cause the server to fault). With the correct bindings and the MessageInspector in place…. everything began working.

Here is what the updated bindings looked like:

      <binding name="basicBinding">
            <security mode="Transport">
                 <transport clientCredentialType="None" proxyCredentialType="None"
                            realm="" />
                 <message clientCredentialType="UserName" algorithmSuite="Default" />
            </security>
      </binding>

Note 1:

Instead of using the pre-defined basicHttpBinding you can use the following custom binding.

     <binding name="basicHttpTransportSecurityUserNameMessage">
          <textMessageEncoding messageVersion="Soap11"/>
          <httpsTransport />
     </binding>

Note 2:

As I mentioned before, even though this is a solution to the mustUnderstand issue when using a UsernameToken over https with Soap11 (basicHttpBinding), I don’t like the fact that I am inserting my own header XML into the message. The main reason is that this solution might need a lot of testing and tweaking to get it working for your scenario. If you find that this solution does not work for you, the first thing I would suggest is to use SoapUI to determine what the headers being sent look like when you perform a successful request to the web-service. Compare that header with the one in my BeforeSendRequest method, and try replacing the header text to see if this makes your client work correctly. Another extremely useful tool was Fiddler, which allowed me to monitor the data being sent by the WCF client.

Note 3:

Custom message encoders might be another solution for suppressing the mustUnderstand attribute. I haven’t tried this yet, but it looks more flexible, although it is also more complex.

CustomMessageEncoder sample: http://msdn.microsoft.com/en-us/library/ms751486.aspx

Note 4:

Another issue that you might face is that the certificate being used by the server for the HTTPS connection is not valid for some reason (the most common being that the name of the server does not match the CN name in the certificate). In such cases, you need to run the following code before any call to WCF:

ServicePointManager.ServerCertificateValidationCallback = new RemoteCertificateValidationCallback(delegate{return true;});

More Information:

Message Inspectors: http://msdn.microsoft.com/en-us/library/aa717047.aspx

WCF Security and Silverlight 2: http://www.netfxharmonics.com/2008/11/Understanding-WCF-Services-in-Silverlight-2#WCFSilverlightIntroduction

ESRI – MapObjects – End of Life

MapObjects will be end-of-lifed this summer. My initiation into the GIS world came through MapObjects, while I worked at VLS, many moons ago (summer 2003).

So it is kinda sad to see that it is being retired.


My first project with MapObjects was an overview window that I created as a proof of concept: it showed a thumbnail of where the user was digitizing when they were zoomed in close to a feature. Ah, the good times! (Actually, not quite – MapObjects gave me much heartburn with its sparse documentation.)

MapObjects end-of-life announcement

Debugging .Net source code in VS2010

If you did not know: Visual Studio allows you to step into .NET Framework code, which helps you track down those intractable bugs. (This feature has been around since VS 2008.)

To enable this, go to Tools –> Options –> Debugging, then uncheck "Enable Just My Code" and check "Enable .NET Framework source stepping".


Next, make sure that you have a symbol server defined (in VS 2010, this is already provided).


That's it – hit F5 and, if everything is working, you should be prompted to accept an EULA, after which you should be able to step into .NET Framework code.

Note: Remember to turn this option off when you don't need to step into .NET Framework code, as loading symbols from the server can be slow.

More Info: http://referencesource.microsoft.com/Default.aspx

Old but useful FAQ: http://blogs.msdn.com/b/sburke/archive/2008/01/16/configuring-visual-studio-to-debug-net-framework-source-code.aspx#faq

Monday, June 07, 2010

WCF Tool: Convert binding to a custom binding

An extremely useful tool that converts any WCF binding to a custom binding: WCF Binding Box. Why? Sometimes it is easiest to start from a simple pre-defined binding and then modify it to do your thing. (This is also a good way of learning what goes into the pre-defined bindings – for example, it's clear below that BasicHttpBinding implements SOAP version 1.1.)

Here is an example of converting a BasicHttpBinding to a custom binding.

        <binding name="AddressBinding">
          <security mode="Transport" />
        </binding>

<!-- generated via Yaron Naveh's http://webservices20.blogspot.com/ -->
<customBinding>
  <binding name="NewBinding0">
    <textMessageEncoding messageVersion="Soap11" />
    <httpsTransport />
  </binding>
</customBinding>
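If you prefer to build the binding in code rather than in config, the equivalent custom binding can be constructed like this (a sketch; it requires a reference to System.ServiceModel, and the class name is mine, not from the tool):

```csharp
using System;
using System.ServiceModel.Channels;

class BindingBoxDemo
{
    static void Main()
    {
        // Programmatic equivalent of the generated <customBinding> above:
        // SOAP 1.1 text encoding over an HTTPS transport.
        var binding = new CustomBinding(
            new TextMessageEncodingBindingElement
            {
                MessageVersion = MessageVersion.Soap11
            },
            new HttpsTransportBindingElement());

        Console.WriteLine(binding.Scheme); // the transport element determines the scheme: "https"
    }
}
```

The order of binding elements matters: encoding elements go before the transport element, which must come last.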

Sunday, June 06, 2010

C# – Asynchronous Programming using BeginInvoke

In C#, delegates automatically get BeginInvoke and EndInvoke methods defined for them. BeginInvoke runs the method assigned to the delegate on a thread-pool thread (i.e., asynchronously).

The BeginInvoke method takes an AsyncCallback delegate parameter as a callback method. A common misconception is that if you do not care about when the invoked method completes, or about its return value, you can pass null for this parameter and skip EndInvoke. THIS IS WRONG, and is partly based on old documentation. If you look at this MSDN document, you can clearly see that calling EndInvoke is not optional but required.

CAUTION   Always call EndInvoke after your asynchronous call completes

Here is more information from IanG: EndInvoke is Required.

So what do you do in fire-and-forget scenarios, where you don't care about when the method completes or about its return value? Simple – use the callback parameter to define a callback method that calls EndInvoke on the delegate instance. Important: note that I pass the delegate itself as the last parameter to BeginInvoke; this lets me access the delegate via the AsyncState property of the IAsyncResult object.

Sample Code:

Begin Async processing:

//using Action as a shortcut, you could use any delegate that you define instead
Action<int> t = new Action<int>(LongRunningMethod);
IAsyncResult r = t.BeginInvoke(0, new AsyncCallback(CallBack), t);

CallBack method is defined as follows:

private void CallBack(IAsyncResult ar)
{
    Action<int> t = ar.AsyncState as Action<int>; //AsyncState is set by passing the delegate as the last argument to BeginInvoke
    t.EndInvoke(ar); //calling EndInvoke - REQUIRED
}
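Putting the pieces together, here is a self-contained fire-and-forget sketch. It targets the .NET Framework (delegate BeginInvoke is not supported on .NET Core / .NET 5+), and LongRunningMethod just sleeps to simulate work:

```csharp
using System;
using System.Threading;

class FireAndForgetDemo
{
    static void LongRunningMethod(int id)
    {
        Thread.Sleep(100); // simulate some work
        Console.WriteLine("work item {0} done", id);
    }

    // Called on a thread-pool thread when the async call completes.
    static void CallBack(IAsyncResult ar)
    {
        // BeginInvoke's last argument (the delegate) comes back via AsyncState.
        Action<int> action = (Action<int>)ar.AsyncState;
        action.EndInvoke(ar); // REQUIRED: releases resources and rethrows any exception
    }

    static void Main()
    {
        Action<int> action = LongRunningMethod;
        IAsyncResult r = action.BeginInvoke(0, CallBack, action);

        // In a real fire-and-forget scenario you would not wait; we wait here
        // only so the console app does not exit before the work finishes.
        r.AsyncWaitHandle.WaitOne();
    }
}
```

Note that any exception thrown by the long-running method surfaces inside EndInvoke, so the callback is also where you would catch and log it.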

Friday, June 04, 2010

VS 2010 and TFS 2008 and Office 2003

If you are stuck on TFS 2008 and Office 2003, then after you install VS 2010 you will find that the Office integration with TFS is broken. This is by design: the VS team decided to support only Office 2007 and up with VS 2010.

If you are truly stuck on Office 2003, here is what you can do to get the integration back:

1. If you don't already have VS 2008 and Team Explorer 2008, install just Team Explorer 2008.

2. Install VS 2008 SP1.

3. Install the VS 2008 Forward Compatibility update.

The Office integration functionality is provided by a COM DLL called "TFSOfficeAdd-in.dll". To get integration with Office 2003 back, you need to unregister the VS 2010 version of this DLL and then register the VS 2008 version. (The DLL is found in the "Common7\IDE\PrivateAssemblies" folder of the version-specific Visual Studio installation folder.)

Here are the commands:

Unregister VS2010 version of TFSOfficeAdd-in.dll
regsvr32 /u "c:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\PrivateAssemblies\TFSOfficeAdd-in.dll"

Register VS2008 version of TFSOfficeAdd-in.dll
regsvr32 "c:\Program Files\Microsoft Visual Studio 9.0\Common7\IDE\PrivateAssemblies\TFSOfficeAdd-in.dll"

When you are ready to upgrade to Office 2007, just reverse the above two commands (i.e., run regsvr32 /u on the VS 2008 DLL and regsvr32 on the VS 2010 DLL).

Wednesday, June 02, 2010

Sample Log4Net Configuration

This sample configures two appenders – a rolling log file appender and an SMTP-based appender.

    <appender name="RollingLogFileAppender" type="log4net.Appender.RollingFileAppender">
      <file value="logs/appLog.log"/>
      <lockingModel type="log4net.Appender.FileAppender+MinimalLock"/>
      <appendToFile value="true"/>
      <datePattern value="yyyyMMdd"/>
      <rollingStyle value="Date"/>
      <maxSizeRollBackups value="180"/>
      <filter type="log4net.Filter.LevelRangeFilter">
        <acceptOnMatch value="true"/>
        <levelMin value="DEBUG"/>
        <levelMax value="FATAL"/>
      </filter>
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%-5p %-25d thr:%-5t ${COMPUTERNAME}-[%property{SessionID}] %9rms %c{1},%M: '%m'%n"/>
      </layout>
    </appender>
    <appender name="SmtpAppender" type="log4net.Appender.SmtpAppender">
      <subject value="App Name - Error Log" />
      <smtpHost value="Relay.gov.dnvr" />
      <from value="fromEmail" />
      <to value="toEmail" />
      <bufferSize value="512" />
      <lossy value="true" />
      <threshold value="ERROR" />
      <evaluator type="log4net.Core.LevelEvaluator,log4net">
        <threshold value="ERROR" />
      </evaluator>
      <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="Application Error%newline${COMPUTERNAME} %date %-5level %logger - %message%newline%newline%newline" />
      </layout>
    </appender>
    <root>
      <appender-ref ref="SmtpAppender" />
      <appender-ref ref="RollingLogFileAppender" />
    </root>


The rolling log file appender is set up with a minimal lock. This is required if you are running on a cluster (multiple processes writing to the same file); otherwise it may be overkill.

To use the loggers in an app, you first need to initialize log4net. In ASP.NET apps I do this in the Application_Start method, using code similar to this:


void Application_Start(object sender, EventArgs e)
{
    //log4Net.ConfigurationFile needs to be set as an appSetting in your config file;
    //the following code configures log4net using that file's settings
    string logFileName = ConfigurationManager.AppSettings["log4Net.ConfigurationFile"];
    if (string.IsNullOrEmpty(logFileName) == false)
    {
        System.IO.FileInfo fi = new System.IO.FileInfo(
            System.IO.Path.Combine(System.AppDomain.CurrentDomain.BaseDirectory, logFileName));
        if (fi.Exists)
        {
            log4net.Config.XmlConfigurator.ConfigureAndWatch(fi);
        }
    }
}

where log4Net.ConfigurationFile is an appSetting in your web.config file pointing to the location of the log4net config file.

To use the logger, you first need to get a logger object using the following code:

private static readonly ILog Log = LogManager.GetLogger(MethodBase.GetCurrentMethod().DeclaringType);

Logging is performed using the following code:

Log.Debug("message") or Log.Error("message"), etc.

Tuesday, June 01, 2010

SQL Server – Clean up log file

Command to reduce the size of the log file used by a database.

backup log [databaseName] with truncate_only

Another useful setting is to change the recovery model from full to simple, if you don't require full recovery (simple recovery lets you restore only to the last backup, whereas full recovery lets you restore to the last point of failure). Note that "with truncate_only" was removed in SQL Server 2008; on 2008 and later, switch the database to the simple recovery model and then shrink the log file instead.

Loading IIS logs into a SQL Server Database

Tools required:
Microsoft Log Parser : http://www.microsoft.com/downloads/details.aspx?FamilyID=890cd06b-abf8-4c25-91b2-f8d975cf8c07&displaylang=en
SQL Server Instance


LogParser "SELECT * INTO webLog FROM S:\ex100515.log" -i:W3C -o:SQL -server:sqlServerName -database:IISLogs -driver:"SQL Server" -createTable:ON -username:yourSQLUsername -password:yourSQLPassword

note: if you wish to use integrated security (your Windows account) to access the database, simply omit the -username and -password arguments.

If you need to import multiple log files, you can specify a wildcard instead of a specific file name (e.g., s:\*.log).

List of fields that are imported: with the W3C input format, LogParser creates one column for each field declared in the #Fields directive of the log file header.


note 2: Depending on the speed of your machine, the size of the log files, etc., this can be a long operation, and you won't get any indication of whether it is running – just be patient. (It took me about 4 minutes to import a file with 300k lines.)