Cyclomatic Complexity

Cyclomatic Complexity is a software metric used to measure the complexity of a program.

It is a measure of the number of independent execution paths through a given program's source code and is directly dependent upon the number of control flow statements (such as if/else statements and switch cases) within the code.

For example, a method with no control flow statements has a Cyclomatic Complexity value of 1.

Visual Studio's code metrics report a Cyclomatic Complexity of 1 for such a method.

And if we add one control flow statement, the Cyclomatic Complexity value will increase by 1.


private static void Divide(int x, int y)
{
    if (x > y)
    {

    }
    else
    {

    }
}

The code above has Cyclomatic Complexity of 2.

Basically, Cyclomatic Complexity tells us how complex our code is.

The more complex the code, the more difficult it becomes to write test cases and validate it, so a higher Cyclomatic Complexity value is usually associated with more error-prone code.

Hence, it is generally considered good practice for software developers to write programs with low Cyclomatic Complexity.
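As a rough illustration (a hypothetical example, not taken from the original post), each case label in a switch statement adds one more decision point, so a method like the following would typically be reported with a Cyclomatic Complexity of 4 (1 for the method plus one per case):

// Hypothetical example: 1 (method) + 3 case labels = Cyclomatic Complexity of 4.
private static string GetDayType(int dayOfWeek)
{
    switch (dayOfWeek)
    {
        case 0:
            return "Sunday";
        case 6:
            return "Saturday";
        case 3:
            return "Midweek";
        default:
            return "Weekday";
    }
}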

Uploading Unknown Format Excel File To A SQL Database

Introduction

Usually, the client side is expected to upload only structured Excel sheets (sheets with defined column names and data types for rows) to be loaded to the server. But sometimes you might also need to dump whatever is in an Excel file directly into your database. While SQL Server Management Studio provides a tool for importing data from Excel files, this tutorial covers how you can automate such a task using C# and the .NET Framework.

Steps

The two important pieces for uploading an Excel file of unknown column format to a SQL database are:

  1. Reading Data from Excel into a Data Table
  2. Creating Table in SQL Server based on the Data Table

Once these two steps are done, the final step is simply to bulk load the data table into the table in SQL server.

Reading Data from Excel into a Data Table

I am using the Excel Data Reader NuGet package for this purpose.


/// <summary>
/// Gets a data table based on the provided Excel file
/// </summary>
/// <param name="pathToExcelFile"></param>
/// <returns></returns>
public static DataTable GetTableFromExcel(string pathToExcelFile)
{
    FileStream stream = File.Open(pathToExcelFile, FileMode.Open, FileAccess.Read);

    IExcelDataReader excelReader;
    if (Path.GetExtension(pathToExcelFile) == ".xls")
    {
        //1. Reading from a binary Excel file ('97-2003 format; *.xls)
        excelReader = ExcelReaderFactory.CreateBinaryReader(stream);
    }
    else
    {
        //2. Reading from an OpenXml Excel file (2007 format; *.xlsx)
        excelReader = ExcelReaderFactory.CreateOpenXmlReader(stream);
    }

    excelReader.IsFirstRowAsColumnNames = true;

    //3. DataSet - the result of each spreadsheet will be created in result.Tables
    DataSet result = excelReader.AsDataSet();
    DataTable dt = null;
    if (result != null && result.Tables.Count > 0)
    {
        dt = result.Tables[0]; //get first sheet only
    }

    //release the file handle
    excelReader.Close();
    stream.Close();

    return dt;
}

The Excel Data Reader package supports reading multiple Excel sheets into a DataSet at once, and also allows other configuration options such as supporting multiple file formats and treating the first row as column names.

Creating Table in SQL Server based on the Data Table

Now, to generate a “Create Table” query, I found some very useful code on Stack Overflow for this purpose.


/// <summary>
/// Generates sql create table query for given data table
/// </summary>
/// <param name="tableName">name of the table to be created</param>
/// <param name="table">data table</param>
/// <returns></returns>
public static string GenerateCreateTableQuery(string tableName, DataTable table)
{
    var sqlsc = "CREATE TABLE " + tableName + "(";
    for (int i = 0; i < table.Columns.Count; i++)
    {
        sqlsc += "\n [" + table.Columns[i].ColumnName + "] ";
        string columnType = table.Columns[i].DataType.ToString();
        switch (columnType)
        {
            case "System.Int32":
                sqlsc += " int ";
                break;
            case "System.Int64":
                sqlsc += " bigint ";
                break;
            case "System.Int16":
                sqlsc += " smallint ";
                break;
            case "System.Byte":
                sqlsc += " tinyint ";
                break;
            case "System.Decimal":
                sqlsc += " decimal ";
                break;
            case "System.DateTime":
                sqlsc += " datetime ";
                break;
            case "System.String":
            default:
                sqlsc += string.Format(" nvarchar({0}) ", table.Columns[i].MaxLength == -1 ? "max" : table.Columns[i].MaxLength.ToString());
                break;
        }
        if (table.Columns[i].AutoIncrement)
            sqlsc += " IDENTITY(" + table.Columns[i].AutoIncrementSeed.ToString() + "," + table.Columns[i].AutoIncrementStep.ToString() + ") ";
        if (!table.Columns[i].AllowDBNull)
            sqlsc += " NOT NULL ";
        sqlsc += ",";
    }
    return sqlsc.Substring(0, sqlsc.Length - 1) + "\n)";
}

Once we have the data table and the create table script, uploading to the database is quite simple and can be achieved with the following steps:

  1. Decide upon a table name
  2. Check if it exists or not in the server
  3. Create table if it doesn’t exist
  4. Bulk load the data table into the newly created table (a sketch of these steps follows below).
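
As a minimal sketch of these final steps (not the exact code from my project; the connection string, table name and the existence check are assumptions), the check-create-bulk-load sequence could look something like this using SqlBulkCopy:

//requires System.Data and System.Data.SqlClient
public static void UploadDataTable(string connectionString, string tableName, DataTable table)
{
    using (var connection = new SqlConnection(connectionString))
    {
        connection.Open();

        //1 & 2. Decide on a table name and check whether it already exists on the server
        var checkSql = "SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = @tableName";
        bool exists;
        using (var checkCmd = new SqlCommand(checkSql, connection))
        {
            checkCmd.Parameters.AddWithValue("@tableName", tableName);
            exists = (int)checkCmd.ExecuteScalar() > 0;
        }

        //3. Create the table if it doesn't exist, using the script generated above
        if (!exists)
        {
            using (var createCmd = new SqlCommand(GenerateCreateTableQuery(tableName, table), connection))
            {
                createCmd.ExecuteNonQuery();
            }
        }

        //4. Bulk load the data table into the (newly created) table
        using (var bulkCopy = new SqlBulkCopy(connection))
        {
            bulkCopy.DestinationTableName = tableName;
            bulkCopy.WriteToServer(table);
        }
    }
}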


A Knight’s Watch

Recently I came across this interesting problem through Toptal (on Codility), which kept me thinking hard for a few days. I finally have a solution to it and would like to share it here.

Problem:

Basically, the problem deals with a knight piece on an infinite chess board. Assuming the knight is positioned at (0,0), you are asked to calculate and print the shortest path that the knight can take to get to a given target location.


My Approach:

So what is given here?

  1. The Knight's movement is well defined, i.e. it can only move in an 'L' shape.
  2. The Knight has the option to move to any of 8 different locations from its current position.



Now with these key points in mind, we can calculate which move will take the Knight closest to the given target, i.e. move the Knight in the direction of the shortest distance.

For example: Suppose our target is located at (6,7).


Now, from (0,0) the Knight can move to the following points: (1,2), (2,1), (-1,2), (-2,1), (-1,-2), (-2,-1), (1,-2) and (2,-1).

But out of these 8 points, the one closest to the target is (1,2), so we move the Knight there in the first move. For each subsequent step we can apply the same logic.

Hence, the Knight moves to (1,2) in the first step, (3,3) in the second, (4,5) in the third and (6,6) in the fourth move.

However, once the Knight reaches one of the squares immediately surrounding the target, we need a separate rule to actually hit it.

This is because if we stick to the logic of always moving towards the shortest distance, then once the Knight reaches close proximity it will keep circling the target but never actually land on it.

From (6,6) the Knight could jump to (7,8), from there to (5,7), and so on, circling the target but never actually landing on (6,7).

So, we have to create a separate rule for this scenario for our Knight.

Close Proximity Rule:

There are again 8 different close-proximity points from the target location's viewpoint: (7,7), (7,8), (6,8), (5,8), (5,7), (5,6), (6,6) and (7,6).

These eight points can be categorized into 2 types: either they lie on the axes through the target, or they lie on its diagonals.

Points on Axes: (7,7), (6,8), (5,7) and (6,6)


Points on Diagonals: (7,8), (5,8), (5,6) and (7,6)


Based on these two types, the Knight can hit the target in either 2 or 3 moves.

If the Knight is at a point on a diagonal, say (5,6), it can jump to (7,5) and then to (6,7) in two moves. All diagonal points can reach the target in the center in two steps.

Similarly, all points on the axes can hit the target in 3 steps. For example, if the Knight is at (6,6), it can jump to (8,7), then to (7,9) and finally to (6,7).

Now with this much knowledge we can create a program that calculates the shortest path our Knight has to take to hit any target on an infinite chess board!
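As an illustration of the greedy step described above, here is a simplified sketch (not the exact code in my GitHub project, and without the close-proximity rule) of how the next move could be chosen:

//Simplified sketch: from the current position, pick the knight move that
//lands closest to the target. The close-proximity rule described above
//would still need to be applied once the Knight is next to the target.
private static readonly int[,] Moves =
{
    { 1, 2 }, { 2, 1 }, { -1, 2 }, { -2, 1 }, { -1, -2 }, { -2, -1 }, { 1, -2 }, { 2, -1 }
};

private static int[] NextMove(int x, int y, int targetX, int targetY)
{
    int bestX = x, bestY = y;
    double bestDistance = double.MaxValue;
    for (int i = 0; i < Moves.GetLength(0); i++)
    {
        int candidateX = x + Moves[i, 0];
        int candidateY = y + Moves[i, 1];
        double distance = Math.Sqrt(
            Math.Pow(targetX - candidateX, 2) + Math.Pow(targetY - candidateY, 2));
        if (distance < bestDistance)
        {
            bestDistance = distance;
            bestX = candidateX;
            bestY = candidateY;
        }
    }
    return new[] { bestX, bestY };
}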

My Implementation:

I have created a console application in C# that calculates the shortest path for the Knight to reach any given target point using the above-mentioned logic.

The application/project is available on GitHub at https://github.com/psovit/knightswatch

Please feel free to like, share and comment.

Thanks!


Image File as Response from .NET Web API

When developing a web API, it is often required to send the response as a PDF or an image file.
The following lines of code show how to create a method that overrides the default media type of the response determined by content negotiation.

public HttpResponseMessage GetImage()
{
    byte[] bytes = System.IO.File.ReadAllBytes(
        HttpContext.Current.Server.MapPath("~/Content/myImage.png"));
    var result = new HttpResponseMessage(HttpStatusCode.OK);
    result.Content = new ByteArrayContent(bytes);
    result.Content.Headers.ContentType = new MediaTypeHeaderValue("image/png");
    return result;
}

The key to this implementation is the use of the HttpResponseMessage class, which
helps to create a raw HTTP response in the controller itself.

We then set the HTTP 200 status code using the enum HttpStatusCode.OK and set the image bytes as response content.

The last step is to set the media type header to image/png, and the response is returned as an HttpResponseMessage.
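
The same approach works for other file types. For example, a hypothetical PDF endpoint (the file path and action name here are assumptions, not from the original post) only needs a different content type and, optionally, a content-disposition header from System.Net.Http.Headers:

public HttpResponseMessage GetReport()
{
    //hypothetical file path, for illustration only
    byte[] bytes = System.IO.File.ReadAllBytes(
        HttpContext.Current.Server.MapPath("~/Content/myReport.pdf"));
    var result = new HttpResponseMessage(HttpStatusCode.OK);
    result.Content = new ByteArrayContent(bytes);
    result.Content.Headers.ContentType = new MediaTypeHeaderValue("application/pdf");
    //suggest a file name so the browser downloads it as an attachment
    result.Content.Headers.ContentDisposition =
        new ContentDispositionHeaderValue("attachment") { FileName = "myReport.pdf" };
    return result;
}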

Reference:
Building Mobile Applications Using Kendo UI Mobile and ASP.NET Web API

Automating Development in ASP.NET MVC

Inspired by Artisan, the rapid development command-line tool that ships with Laravel (PHP), I am thinking that maybe we (.NET developers) can do something similar in our environment as well.

http://laravel.com/docs/5.0/artisan

Basically, I am thinking of automating the whole process of creating Repository classes (including the required entries for properties and fields), Model classes, Service layer (the layer where you put your business logic) interfaces and classes, and maybe even Controllers and Views! Wouldn't it be a real treat if all of this code writing could be automated? Unless you have already seen this automation implemented somewhere else, I am sure you are excited by this idea.

So this is the big idea. How do I plan to achieve this?

Well, I do all my projects in the MVC pattern, implementing Entity Framework and the Unit of Work pattern for the Data Layer and Service Layer. Basically, I keep one project for the Data Access Layer, where the .edmx file generated by EF lives along with a repository for each database object. I then expose the repositories for each entity through a central repository using a Unit of Work pattern.

Then I keep another project for Model classes. For every entity generated by EF, I have a model in this project.

Then I have a third project for the Service (or Business Logic) Layer, where I have interfaces and their implementations for all CRUD activities.

Finally, for displaying the View to the user, I use an ASP.NET MVC project where I have a controller for each entity. These controllers receive requests from views and process data as per the request, using the interfaces exposed by the Service Layer. The response sent from controllers to the View is usually in JSON format.

The big realization is that, no matter what type of entity we are dealing with, the whole process of creating Repositories, Models, Services, Controllers and even Views stays the same. The only changes are in the members (properties) of our entities.

If we are dealing with a User entity, we will have properties like Username, Password and Email; if we are dealing with a Product entity, we will have properties like ProductName, ProductPrice, Category and so on.

Hence, if we just swap out the properties and entity names, everything else stays the same. This opens up the possibility of automating the whole process, like I mentioned at the beginning of the post. Heck, even if I am only able to automate 70% of the process and make manual adjustments for the remaining 30%, I would be more than happy.
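
To make the idea a little more concrete, here is a purely illustrative sketch (all names and the template are hypothetical, not from an actual implementation) of how a model class could be generated from just an entity name and its properties:

//Purely illustrative sketch: generate a model class from an entity name
//and a map of property names to type names. All names are hypothetical.
using System.Collections.Generic;
using System.Text;

public static class ModelGenerator
{
    public static string GenerateModelClass(string entityName, IDictionary<string, string> properties)
    {
        var sb = new StringBuilder();
        sb.AppendLine(string.Format("public class {0}", entityName));
        sb.AppendLine("{");
        foreach (var property in properties)
        {
            //property.Key is the property name, property.Value is the type name
            sb.AppendLine(string.Format("    public {0} {1} {{ get; set; }}", property.Value, property.Key));
        }
        sb.AppendLine("}");
        return sb.ToString();
    }
}

//Usage (hypothetical):
//ModelGenerator.GenerateModelClass("Product", new Dictionary<string, string>
//{
//    { "ProductName", "string" },
//    { "ProductPrice", "decimal" },
//    { "Category", "string" }
//});

The same template idea could then be repeated for repositories, services and controllers, with only the entity name and property list changing.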

I will be giving this a try and will write about it once it's done. What are your thoughts on this?

Flat File Splitter

At work the other day, I needed to write a program to split a text file (.txt) into smaller pieces. The file we received from our client was a flat file of about 2.5 GB, and I needed to import its contents into our SQL Server database. Since the file was in a pipe-delimited format with 33 columns of data elements, simply using the Import feature of SQL Server Management Studio should have solved the case for me. But the import turned out unsuccessful: there were some faulty rows that didn't match the required format of 33 columns. So I needed to open the file and check where the issue was; however, because of the huge size of the file, a text editor (Notepad) couldn't open it. Thus, the following code was developed.
using System;
using System.Configuration;
using System.IO;

namespace FileSplitter
{
    class Program
    {
        static void Main(string[] args)
        {
            FileSplitter();
            // Suspend the screen.
            Console.WriteLine("Program completed");
            Console.ReadLine();
        }

        /// <summary>
        /// Reads the specified input file one line at a time and creates output file(s)
        /// based upon the required number of lines to break the file with.
        /// </summary>
        public static void FileSplitter()
        {
            int counter = 0;
            int fileLengthLimit = int.Parse(ConfigurationManager.AppSettings["lineLimit"]);
            int fileCursor = 0;

            string filePath = ConfigurationManager.AppSettings["input"];
            string outputDir = ConfigurationManager.AppSettings["outputDir"];

            //StreamWriter allows us to create/write new text files
            var output = new StreamWriter(Path.Combine(outputDir, string.Format("1-{0}", fileLengthLimit))); //output file
            //StreamReader allows us to read from text files
            var file = new StreamReader(filePath); //input file
            string line = string.Empty;
            while ((line = file.ReadLine()) != null)
            {
                output.WriteLine(line);
                counter++;
                if (counter == fileLengthLimit)
                {
                    output.Close();
                    fileCursor += counter;
                    Console.WriteLine("finished processing line: " + fileCursor);
                    output = new StreamWriter(Path.Combine(outputDir, string.Format("{0}-{1}", fileCursor, fileCursor + fileLengthLimit)));
                    counter = 0;
                }
            }
            file.Close();
            output.Close();
        }
    }
}
App.Config
<?xml version="1.0"?>
<configuration>
  <appSettings>
    <add key="input" value=""/>
    <add key="outputDir" value=""/>
    <add key="lineLimit" value=""/>
  </appSettings>
</configuration>
The code is pretty straightforward. The static method FileSplitter() opens the specified text file and reads one line of text at a time. It then writes output files with the desired number of lines (the file length limit) per file. The input file path, output directory and line limit need to be provided in the config file.
I think I used 10000 lines per text file and got about 1200 output files; I forget the exact numbers. Thankfully, the error was in the last record, i.e. the last line of the file: it had only six elements instead of 33. Since I looked at the last output file first, I didn't need to dig into more than one file.
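
In hindsight, since the underlying problem was finding rows that didn't have the expected 33 pipe-delimited columns, a small scan could also have located the faulty lines directly, without splitting the file at all. Here is a minimal sketch of that idea (the expected column count and delimiter come from the scenario above; the file path is an assumption):

//Minimal sketch: report line numbers whose pipe-delimited column count
//differs from the expected value. The path is hypothetical; the expected
//count of 33 comes from the scenario described above.
using System;
using System.IO;

class FaultyRowFinder
{
    static void Main(string[] args)
    {
        const int expectedColumns = 33;
        string filePath = @"C:\temp\input.txt"; //hypothetical path

        int lineNumber = 0;
        using (var reader = new StreamReader(filePath))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                lineNumber++;
                int columns = line.Split('|').Length;
                if (columns != expectedColumns)
                {
                    Console.WriteLine("Line {0} has {1} columns instead of {2}", lineNumber, columns, expectedColumns);
                }
            }
        }
    }
}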