VS Extension of the day: CodeMaid

Working with poorly written, long code files can be a pain. It’s especially painful when you’re trying to track down a bug or introduce a new feature. In comes CodeMaid!

Let’s be practical. CodeMaid will not automatically fix bugs for you, nor will it write features for you. It will, however, help you navigate your code much faster. Frankly, that’s all I needed.

CodeMaid is very well documented, so I won’t duplicate everything on their website. However, I do want to highlight one feature I use a lot to get a general sense of what is happening in a class.

Code Digging

CodeMaid gives you a visual tree of the file you’re looking at and gives you the ability to quickly switch between sorting methods to get a better overview.

[Screenshots: the Digging pane and its sort-order options]

If you like to work through code visually, then I would definitely recommend this Visual Studio extension. You can download it from the Visual Studio Gallery or from their site.

Bulk upload into SQL Server using SQLBulkCopy – Part 2

In Part 1 of this series, we looked at the basics of how the SQLBulkCopy class works. It’s time to set up our situation drill and dig into some real code.

Situation Drill: You’ve just been hired as the lead developer on a short-term project for a multinational retail corporation. Your manager wants a reporting application that can import a large XML file each day and extract only certain fields into a SQL Server database. Develop a prototype app that uses SQL Bulk Copy to import this XML file into the database.

Download source files here.

Our goal is to populate a table called TopOrders with the contents of an XML file. So let’s begin by taking a look at the XML file that we want to import. The file comprises a list of orders. For the purposes of this drill, we will extract three fields: OrderID, CustomerID, and ShipAddress.
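Here is a minimal sketch of the shape such a file takes; the element names and sample values below are assumed from the three fields above rather than taken from the actual file, so use the download link that follows for the real thing.

<Orders>
  <Order>
    <OrderID>10248</OrderID>
    <CustomerID>VINET</CustomerID>
    <ShipAddress>59 rue de l'Abbaye</ShipAddress>
  </Order>
  <Order>
    <OrderID>10249</OrderID>
    <CustomerID>TOMSP</CustomerID>
    <ShipAddress>Luisenstr. 48</ShipAddress>
  </Order>
</Orders>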


Download XML file here

TopOrders schema

Run the following script in a database of your choice to setup your target table.

CREATE TABLE [dbo].[TopOrders] (
    [NewOrderID]         INT           NOT NULL,
    [NewCustomerId]      NVARCHAR (50) NULL,
    [NewShippingAddress] NVARCHAR (50) NULL,
    PRIMARY KEY CLUSTERED ([NewOrderID] ASC)
);

The next step is to set up the projects in Visual Studio. Add two console applications and one class library project so that the solution looks something like this.

Importer.Console.DataTable – Implementation of SQLBulkCopy using a DataTable

Importer.Console.DataReader – Implementation of SQLBulkCopy with IDataReader

Importer.Console.Core – Common files

Configuring SQLBulkCopy

Next, create strongly typed classes to represent the SqlBulkCopy settings and the column mappings.

public class ColumnMapping
{
    public string SourceColumn { get; set; }
    public string DestinationColumn { get; set; }
}

public class SqlBulkCopySettings
{
    public int BatchSize { get { return 10; } }
    public int NotifySize { get { return BatchSize * 5; } }
    public int NumberOfMergeAttemptsBeforeGivingUp { get { return 2; } }
    public string DestinationTableName { get { return "TopOrders"; } }
    public List<ColumnMapping> ColumnMappings { get; private set; }

    public static SqlBulkCopySettings GetSettings()
    {
        return new SqlBulkCopySettings
        {
            ColumnMappings = new List<ColumnMapping>
            {
                new ColumnMapping
                {
                    SourceColumn = "OrderID",
                    DestinationColumn = "NewOrderID"
                },
                new ColumnMapping
                {
                    SourceColumn = "CustomerID",
                    DestinationColumn = "NewCustomerId"
                },
                new ColumnMapping
                {
                    SourceColumn = "ShipAddress",
                    DestinationColumn = "NewShippingAddress"
                }
            }
        };
    }
}

In our main program (Importer.Console.DataTable), we’ll begin by setting up the connection string to our target database and reading the XML data into a DataTable.

// Include the database that holds TopOrders (bracketed values are placeholders).
string connectionString = @"Data Source=[Server];Initial Catalog=[Database];Integrated Security=True";
var sourceDataTable = GetSourceDataTable();
SqlBulkCopySettings settings = SqlBulkCopySettings.GetSettings();

private static DataTable GetSourceDataTable()
{
    // DataSet.ReadXml infers a table per repeating element; Tables[0] is the orders table.
    DataSet dataSet = new DataSet();
    dataSet.ReadXml(@"c:\bulkuploadorders.xml");
    var sourceDataTable = dataSet.Tables[0];
    return sourceDataTable;
}

The code below is straightforward. The main method opens a connection to the target database, creates a new instance of SqlBulkCopy, and assigns the properties needed for the operation (for more details about these properties, refer to Part 1 of this series). The method WriteToServer is called to complete the operation.

try
{
    using (SqlConnection destinationConnection = new SqlConnection(connectionString))
    {
        destinationConnection.Open();

        Logger.Info("Database Opened");

        // Pass the open connection itself rather than its connection string,
        // so the bulk copy reuses the connection we just opened.
        using (SqlBulkCopy bulkCopy = new SqlBulkCopy(destinationConnection))
        {
            bulkCopy.SqlRowsCopied += bulkCopy_SqlRowsCopied;
            bulkCopy.NotifyAfter = settings.NotifySize;
            bulkCopy.BatchSize = settings.BatchSize;
            bulkCopy.DestinationTableName = settings.DestinationTableName;
            foreach (var columnMapping in settings.ColumnMappings)
            {
                bulkCopy.ColumnMappings.Add(columnMapping.SourceColumn, columnMapping.DestinationColumn);
            }
            bulkCopy.WriteToServer(sourceDataTable);

            Logger.Info("Bulk copy setup success");
        }
    }
}
catch (Exception exception)
{
    Logger.Error("Bulk copy was unable to complete the operation with the following error {0}", exception.Message);
}
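The bulkCopy_SqlRowsCopied handler wired up above isn’t shown in this part; a minimal version, mirroring the Part 1 snippet, would be:

private static void bulkCopy_SqlRowsCopied(object sender, SqlRowsCopiedEventArgs e)
{
    // Raised every NotifyAfter (NotifySize) rows; useful as a progress indicator on large imports.
    Console.WriteLine("Copied {0} so far...", e.RowsCopied);
}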

 

 

Result

[Screenshot: the TopOrders table populated with the imported rows]

Using a DataTable works well until you load millions of records at once and start running out of memory. An alternative to the DataTable is to use an implementation of IDataReader.

Watch out for part 3 in this series to learn how to implement IDataReader to stream custom objects in bulk to your database.

Bulk upload into SQL Server using SQLBulkCopy – Part 1

Uploading large files can be made easier with the help of SQLBulkCopy, a simple and easy-to-use class for transferring data from one data store to another. This article looks primarily at the two main ways (via DataTable and IDataReader) of uploading a large data source into SQL Server.


How it works

SQLBulkCopy essentially works by configuring properties on the class to determine how the transfer to SQL Server will be done. After the settings are complete, you call the method WriteToServer(), which accepts either a DataTable or an IDataReader. Here is a list of the most important properties:


BatchSize

BatchSize is an integer property that determines how many records are sent to the server in each batch. If you don’t set this property, all the records are sent in a single batch.

 bulkCopy.BatchSize = 50;
ColumnMappings

The ColumnMappings property allows you to map columns from your source to the destination (SQL Server). You only need to set this property if the column names differ. The code below maps ShipAddress in the source to the destination column NewShippingAddress.

bulkCopy.ColumnMappings.Add("ShipAddress", "NewShippingAddress");  
DestinationTableName

The DestinationTableName property determines which table you want to copy your data to.

bulkCopy.DestinationTableName = "TopOrders"; 
SqlRowsCopied & NotifyAfter

One cool thing about the SqlBulkCopy object is that it exposes an event, SqlRowsCopied, which is raised each time the number of records specified by NotifyAfter has been uploaded. You can set the NotifyAfter property to any integer value; it has a default value of 0.

bulkCopy.SqlRowsCopied += new SqlRowsCopiedEventHandler(OnSqlRowsTransfer);
bulkCopy.NotifyAfter = 100;

private static void OnSqlRowsTransfer(object sender,SqlRowsCopiedEventArgs e)
{
        Console.WriteLine("Copied {0} so far...", e.RowsCopied);
}   
WriteToServer

The WriteToServer method copies your source data to a destination table. It accepts an array of DataRows, a DataTable, or an IDataReader. With a DataTable you can also specify the state of the rows that need to be processed.

The following code will copy only those rows from the sourceData DataTable whose RowState is Added to the destination table.

bulkCopy.WriteToServer(sourceData, DataRowState.Added);

In part 2 of this article, we’ll look at a fully working program that demonstrates how to upload a big dataset using DataTables and SQLBulkCopy.

How to fix “The VPN Client Driver has Encountered an Error” in Windows 8

Disclaimer: Do this at your own risk. Be sure to back up your registry before attempting this on your system.

Due to compatibility problems, users may experience errors when trying to establish a connection with the Cisco AnyConnect client on Windows 8. Upon trying to connect, they may get an error such as “The VPN Client driver has encountered an error” or “Cannot initiate VPN.”

Fortunately, there is an easy registry edit that has been found to frequently resolve this error:

  1. Go to the Windows 8 Start Screen by pressing the Windows key.
  2. Type regedit to initiate an application search, then press Enter to launch regedit.
  3. Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\vpnva
  4. Locate the value named DisplayName.
  5. Right-click it and select Modify.
  6. In the value data field, erase everything and replace it with Cisco AnyConnect VPN Virtual Miniport Adapter for Windows x64
    • If you are running a 32-bit system, perform the same steps, but instead replace the entry with Cisco AnyConnect VPN Virtual Miniport Adapter for Windows x86
  7. Restart the Cisco AnyConnect client. The issue should now be resolved.
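If you prefer to script the change, the same edit can be made from an elevated command prompt with reg.exe (64-bit value shown; swap in the x86 string from the note above on 32-bit systems):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\vpnva" /v DisplayName /t REG_SZ /d "Cisco AnyConnect VPN Virtual Miniport Adapter for Windows x64" /f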

IIS 7 taking over 400 error message from Web API

Recently I ran into a problem on one of our staging servers, where the custom error result from an API endpoint returns IIS error pages instead of the custom error content generated in code (in this case a JSON object).

Overview
So essentially our front-end application accepts an SMS number and POSTs the accepted input to the ASP.NET Web API for validation and persistence in the backend.
[DataContract]
public class Sms
{
    [DataMember(Name = "model")]
    public SmsDTO Model { get; set; }

    [DataMember(Name = "mobile")]
    [Required]
    [SmsNumber]
    public string Mobile { get; set; }
}
The validation of the model is done via the built-in model validation features in MVC/Web API. A custom attribute, [SmsNumber], inherits from ValidationAttribute and is responsible for this functionality.
public class SmsNumberAttribute : ValidationAttribute
{
    protected override ValidationResult IsValid(object value, ValidationContext validationContext)
    {
        string number = (string)value;
        if (string.IsNullOrEmpty(number))
        {
            return base.IsValid(value, validationContext);
        }
        if (SmsNumberPatternManager.Instance.IsNumberValid(number))
        {
            return ValidationResult.Success;
        }
        return new ValidationResult(string.Format("{0} is not a valid SMS Number. Please enter a valid number.", value));
    }
}
Expected
Ultimately, what we want is that when a user enters an invalid number, we give them a descriptive message that explains the error.
Actual
What is actually happening on the staging server is this.
The idea is that the request should generate a 400 error, but still provide appropriately formatted error information – in this case JSON – to the client. This works fine on my development machine on Windows 7 with IIS 7.5, but fails in the staging environment on IIS 7 Windows Server 2008.
On the staging server, the response that the server generates is not my JSON object, but rather an IIS HTML error page.
Response on Staging Server
Cache-Control:public, no-store, max-age=0, s-maxage=0
Content-Length:11
Content-Type:text/html
Server:Microsoft-IIS/7.0
Vary:*
X-AspNet-Version:4.0.30319
X-AspNetMvc-Version:4.0
X-Powered-By:ASP.NET
Response on Local Development Box
Cache-Control:public, no-store, max-age=0, s-maxage=0
Content-Encoding:gzip
Content-Length:108
Content-Type:application/json; charset=utf-8
Server:Microsoft-IIS/7.5
Vary:*
X-AspNet-Version:4.0.30319
X-AspNetMvc-Version:4.0
X-Powered-By:ASP.NET
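For reference, the JSON body I expect back is Web API’s standard model-state error payload, shaped roughly like this (illustrative only; the exact keys and the sample number depend on how the error response is built):

{
  "Message": "The request is invalid.",
  "ModelState": {
    "sms.Mobile": ["12345 is not a valid SMS Number. Please enter a valid number."]
  }
}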
Why is this happening?
This occurs because the error is trapped by ASP.NET, but then ultimately still handled by IIS which looks at the 400 status code and returns the stock IIS error page.
The Solution

Enter Response.TrySkipIisCustomErrors

There’s a solution to this problem in the deftly named TrySkipIisCustomErrors property on the Response object, which is tailor-made for this particular scenario. In a nutshell, when this property is set to true at any point in the request, it prevents IIS from injecting its custom error pages.
The end of my SmsNumberAttribute will now look like this:
public class SmsNumberAttribute : ValidationAttribute
{
    protected override ValidationResult IsValid(object value, ValidationContext validationContext)
    {
        // ... number checks as before ...
        HttpContext.Current.Response.TrySkipIisCustomErrors = true;
        return new ValidationResult(string.Format("{0} is not a valid SMS Number. Please enter a valid number.", value));
    }
}
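As an aside, there’s also a purely configuration-based alternative I didn’t use here: telling IIS 7+ to leave any existing response body alone via web.config. Note this applies to every response from the application, not just this endpoint:

<system.webServer>
  <httpErrors existingResponse="PassThrough" />
</system.webServer>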

Reading web.config key settings in HTML markup

In order for HTML pages to recognize settings from our web.config, we’ll need to tell IIS to treat .html files as dynamic pages.

This can be done by the following:
....<system.web>
    <compilation ...>
        <buildProviders>
            <add extension=".html" 
                 type="System.Web.Compilation.PageBuildProvider" />
        </buildProviders>
    ....

and

....<system.webServer>
    <handlers>
        <remove name="WebServiceHandlerFactory-Integrated" />
        <add name="PageHandlerFactory-Integrated-HTML" path="*.html" 
             verb="GET,HEAD,POST,DEBUG" type="System.Web.UI.PageHandlerFactory" 
             resourceType="Unspecified" preCondition="integratedMode" />
    </handlers>....
You can then reference any setting from your web.config in your .html page like this:
<%=ConfigurationManager.AppSettings["MyAttribute"]%>
or
<%$ AppSettings: MyAttribute %>

This uses the expression builder syntax, which means you can declaratively assign appSettings values.
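One caveat worth noting: the <%$ %> expression builder syntax only works when assigned to a server control property, so a declarative usage would look something like this (the control ID and key name are just for illustration):

<asp:Label runat="server" ID="SiteName" Text="<%$ AppSettings: MyAttribute %>" />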

 

Using access tokens in Swagger with Swashbuckle

Securing access to your API using access tokens is common practice. In this post, we’ll learn how to call secure API endpoints using the Swagger specification, specifically with Swashbuckle (an implementation of Swagger for .NET).


Understanding the Swagger Schema:
This outline shows the basic structure of a Swagger specification document. The document is represented in JSON, which is in turn used by Swagger-UI to display the interactive API documentation.
{
  "swagger": "2.0",
  "info": {
    "version": "v1",
    "title": ".NET Latest API",
    "description": ".NET Latest API",
    "termsOfService": "Some terms",
    "contact": {
      "name": "donetlatest Team",
      "email": "team@dotnetlatest.com"
    }
  },
  "host": "local.api.donetlatest.com:80",
  "schemes": [
    "http"
  ],
  "paths": {
    "/V1/api/Authentication": {},
    "/V1/api/Countries": {},
    "/V1/api/Clients": {}
  },
  "definitions": {
    "CountryDTO": {},
    "StateDTO": {},
    "ClientDTO": {}
  }
}
Parameters
The path item object describes the operations on a single path. Each operation has a parameters object, which is a list of inputs for a given endpoint.
"/V1/api/LitmusClients": {
"post": {
"tags": [
"LitmusClients"
],
"summary": "GET /api/clients\r\n Gets an array of all clients",
"operationId": "Clients_Index",
"consumes": [
],
"produces": [
"application/json",
"text/json"
],
"parameters": [
{
"name": "Authorization",
"in": "header",
"description": "access token",
"required": true,
"type": "string"
}
],
"responses": {
"200": {
"description": "OK",
"schema": {
"type": "array",
"items": {
"$ref": "#/definitions/ClientDTO"
}
}
}
},
"deprecated": false
}
}
}
Types of Parameters
  • Path – used together with path templating
  • Query – parameters that are appended to the URL
  • Header – custom headers that are expected as part of the request
  • Body – the payload that is appended to the HTTP request
  • Form – used to describe the form-encoded payload of an HTTP request
The Swagger specification describes parameter types and how to configure them in detail: https://github.com/swagger-api/swagger-spec/blob/master/versions/2.0.md
Extending Swagger to add a new parameter:
Swashbuckle’s implementation of Swagger reads XML code comments to generate the required Swagger specification. Unfortunately, if you require an authorization header (access token) to make requests, the XML code comments cannot provide this info to Swashbuckle. You’ll have to manually inject this new parameter during Swagger specification generation.
Swashbuckle provides an interface called IOperationFilter for applying new parameters. Implementing this interface will look something like this.
public class AddAuthorizationHeaderParameterOperationFilter : IOperationFilter
{
    public void Apply(Operation operation, SchemaRegistry schemaRegistry, ApiDescription apiDescription)
    {
        // Only add the header to actions that actually require authorization.
        var filterPipeline = apiDescription.ActionDescriptor.GetFilterPipeline();
        var isAuthorized = filterPipeline
            .Select(filterInfo => filterInfo.Instance)
            .Any(filter => filter is IAuthorizationFilter);

        var allowAnonymous = apiDescription.ActionDescriptor.GetCustomAttributes<AllowAnonymousAttribute>().Any();

        if (isAuthorized && !allowAnonymous)
        {
            // parameters can be null when the action takes no inputs.
            if (operation.parameters == null)
            {
                operation.parameters = new List<Parameter>();
            }

            operation.parameters.Add(new Parameter
            {
                name = "Authorization",
                @in = "header",
                description = "access token",
                required = true,
                type = "string"
            });
        }
    }
}
public class SwaggerConfig
{
    public static void Register()
    {
        var thisAssembly = typeof(SwaggerConfig).Assembly;

        GlobalConfiguration.Configuration
            .EnableSwagger(c =>
            {
                c.SingleApiVersion("v1", "Wordfly API")
                    .Description("An API for the wordfly messaging platform")
                    .TermsOfService("Some terms")
                    .Contact(cc => cc.Name("Wordfly Team")
                                     .Email("team@wordfly.com"));

                c.OperationFilter(() => new AddAuthorizationHeaderParameterOperationFilter());

                c.IncludeXmlComments(GetXmlCommentsPath());
            });
    }
}
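GetXmlCommentsPath isn’t shown above; a typical implementation is a one-liner like the following, where the file name is an assumption and must match the XML documentation file your Web API project is configured to emit:

private static string GetXmlCommentsPath()
{
    // Assumes the project's "XML documentation file" build setting writes to bin\WordflyApi.xml.
    return string.Format(@"{0}bin\WordflyApi.xml", AppDomain.CurrentDomain.BaseDirectory);
}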


Converting Enums to User Friendly strings

Enums are useful C# language constructs. There are times, however, when their functionality seems somewhat limited in regard to generating custom descriptions for enum values.

Here is a simple technique to give enum values a customized, user-friendly string.

  • First, decorate your enum value with the friendly string ("Microsoft SQL Server") using the Description attribute from the System.ComponentModel namespace.
public enum DatabaseType
{
    Mysql = 0,
    Oracle = 1,
    [Description("Microsoft SQL Server")]
    SqlServer = 2,
    DynamoDb
}

  • Second, create a static class and add the extension method GetDescription() shown below.

The GetDescription method uses reflection to retrieve the field of that enum and then calls the GetCustomAttribute method to pull the friendly description we gave it.

public static class EnumExtensions // the class name is arbitrary; it just needs to be static
{
    public static string GetDescription(this Enum value)
    {
        Type type = value.GetType();
        string name = Enum.GetName(type, value);
        if (name != null)
        {
            FieldInfo field = type.GetField(name);
            if (field != null)
            {
                DescriptionAttribute attr = Attribute.GetCustomAttribute(field, typeof(DescriptionAttribute)) as DescriptionAttribute;
                if (attr != null)
                {
                    return attr.Description;
                }
            }
        }
        // If we couldn't find a custom description, then return the default name
        return value.ToString();
    }
}
Usage
DatabaseType db = DatabaseType.SqlServer;
string description = db.GetDescription(); // "Microsoft SQL Server"
Download the code here: http://share.linqpad.net/6sfxvq.linq
[In order to view the source file you’ll need to download LINQPad.]

Null dereferencing in C#

The C# language comes built in with the ?? operator (the null-coalescing operator), which is useful if you want very basic null checking.

For example

string nobody = null;
string anybody = nobody ?? "Bob Saget"; // returns "Bob Saget"

However, it’s not very useful if you want to drill down into properties or methods of a sometimes-null reference. What you need is a “null-dereferencing” operator that would allow you to chain long property and/or method chains together without having to test each intermediate step for nullness.

Ugly Code

Example :

bossName = Employee != null
           && Employee.Supervisor != null
           && Employee.Supervisor.Manager != null
           && Employee.Supervisor.Manager.Boss != null
    ? Employee.Supervisor.Manager.Boss.Name
    : null;

NullSafe() Dereference Operator

Luckily there are a couple of ways to avoid this ugly code and one of which has already been written.

How it works

  1. This extension method receives two arguments: the object it operates on and a generic Func<T, TResult> delegate (the lambda expression).
  2. The return type of the extension method is automatically inferred from the return type of the lambda that is passed in.
  3. The nice thing about extension methods is that, besides operating on any existing class without changing it, they can also be called on null references without causing a NullReferenceException!
  4. If the object it is called on is not null, it evaluates the lambda expression and returns the result. Otherwise it returns the default value of the expected return type; for reference types this is null.
public static class ObjectExtensions
{
    /// <summary>
    /// Acts as a null dereference operator
    /// </summary>
    /// <typeparam name="T">Type of object for null check</typeparam>
    /// <typeparam name="TResult">Resulting type of the expression</typeparam>
    /// <param name="item">Object to check for null on</param>
    /// <param name="deriver">Delegate to return result</param>
    /// <returns></returns>
    public static TResult NullSafe<T, TResult>(this T item, Func<T, TResult> deriver)
    {
        // Delegate to the overload below, falling back to the type's default value.
        return NullSafe(item, deriver, default(TResult));
    }

    public static TResult NullSafe<T, TResult>(this T item, Func<T, TResult> deriver, TResult defaultValue)
    {
        if (item == null)
            return defaultValue;

        return deriver(item);
    }
}
Usage:
string bossName = javascriptDeveloper.NullSafe(e => e.Supervisor)
                                     .NullSafe(s => s.Manager)
                                     .NullSafe(m => m.Boss)
                                     .NullSafe(b => b.Name);
if (bossName != null)
{
    // Do something with the name
    Console.WriteLine(bossName);
}

The idea is that all parts of the path are expressed in separate lambda expressions which are chained together. If any link in the chain returns null, the rest of the path is never evaluated and the result of the whole expression will also be null.

Microsoft embraces Open Source

Microsoft has made a dramatic shift, open sourcing a number of its core technologies with a central focus on community development. Here’s a brief summary of everything going on in the .NET world.

Visual Studio 2015 Preview, C# 6 & ASP.NET 5, .NET Core 5

.NET Core 5 is the new name for the cloud-optimized version of .NET. You can use it on Windows, Linux, or Mac.

.NET Core has two major components. It includes a small runtime that is built from the same codebase as the .NET Framework CLR. The .NET Core runtime includes the same GC and JIT (RyuJIT), but doesn’t include features like Application Domains or Code Access Security. The runtime is delivered via NuGet, as part of the ASP.NET 5 core package.

.NET Core also includes the base class libraries. These libraries are largely the same code as the .NET Framework class libraries, but have been factored (through the removal of dependencies) to enable them to ship as a smaller set of libraries.

The focus and value of .NET Core is three-part:

  1. Deployment
  2. Open source
  3. Cross-platform

.NET Framework 4.6 is the next version of the .NET Framework. Some new features include:

  • WPF Improvements and Roadmap
  • Windows Forms High DPI
  • Next Generation JIT Compiler — RyuJIT
  • CLR Performance Improvements
  • Support for converting DateTime to or from Unix time (see the sketch after this list)
  • ASP.NET Model Binding supports Task returning methods
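The Unix-time conversions, for example, hang off DateTimeOffset; a quick sketch:

// New in .NET 4.6: round-trip between Unix time and DateTimeOffset
DateTimeOffset date = DateTimeOffset.FromUnixTimeSeconds(1420070400); // 2015-01-01T00:00:00Z
long unixSeconds = date.ToUnixTimeSeconds();                          // back to 1420070400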

Visual Studio 2015 Preview – Visual Studio Community

There is now a new Visual Studio edition that is very similar to Pro and free for students, open source developers, and many individual developers. It supports Visual Studio plugins like Xamarin or ReSharper.

Performance Tips

The Visual Studio team has built something truly great for determining the performance characteristics of your code and helping discover performance bottlenecks. PerfTips allow you to quickly and easily see performance bottlenecks as you debug your application.

 

Other debugger and editor improvements include:

  • Intuitive Breakpoint Settings
  • Setting breakpoints on auto-implemented properties
  • Lambdas in the debugger windows
  • Core IDE and Editing Improvements

C# 6

Here’s the direct link to the C# changes:

https://t.co/7nU9UOjJLC

ASP.NET 5

ASP.NET 5 is the new Web stack for .NET. It unifies MVC, Web API and Web Pages into a single API called MVC 6. You can create ASP.NET 5 apps in Visual Studio 2015 Preview.

ASP.NET 5 has the following overall characteristics.

  • ASP.NET MVC and Web API, which have been unified into a single programming model.
  • A no-compile developer experience.
  • Environment-based configuration for a seamless transition to the cloud.
  • Dependency injection out-of-the-box.
  • NuGet everything, even the runtime itself.
  • Run in IIS, or self-hosted in your own process.
  • All open source through the .NET Foundation, and takes contributions in GitHub.
  • ASP.NET 5 runs on Windows with the .NET Framework or .NET Core.
  • .NET Core is a new cloud optimized runtime that supports true side-by-side versioning.
  • ASP.NET 5 runs on OS X and Linux with the Mono runtime.

Dependencies node for Bower and NPM dependencies

ASP.NET projects now integrate Bower and NPM into Solution Explorer, under a new Dependencies node. You can uninstall a package through the context menu command, which will automatically remove the package from the corresponding JSON file.