App mediator for mobile photo upload with ASP.NET MVC 4

The emergence of social networking sites and the explosion in mobile phone usage have created a need for photo and video uploads from mobile devices. Until recently this was very difficult, because Safari did not allow uploading photos from a mobile device to a website. However, things are opening up, with Safari now allowing photo uploads to a website and HTML5 media capture facilities maturing to the point where it's possible to pick a photo or video from a native directory and upload it to the website. Then there is PhoneGap, which allows tapping the native API via JavaScript, another option that can be used to upload photos to a website.

However, with both of these options there is still a learning curve, as well as code to write for cropping images and arranging them on the canvas or the UI before and after upload. If you have an immediate need to put something together with the least amount of code, without having to worry about cropping the images programmatically or building a layout for tiling them, an in-between approach is available. The best user experience is always recommended, but what counts as best is subjective and depends on the context of usage.

Here the user installs an app only the first time there is a need to upload; on subsequent visits, a single tap takes care of uploads to the website for a seamless experience. There are pros and cons to this approach, however, and I will discuss both.

First let me demonstrate what the code looks like when we use ASP.NET MVC 4 for the web side.

As a first step, you will need to go to the AurigmaUp website and get a free license key for a domain you have registered.

Then, in your HTML, here is what a link to AurigmaUp looks like (Razor and jQuery Mobile syntax):

@if (!String.IsNullOrEmpty(Model.AuthCookie))
{            
    <a data-role="button" data-theme="a" data-iconpos="right" id="aurLink" href="aurup:?uploadUrl=http://yourdomain/upload/photo&redirectUrl=http://yourdomain/upload/gallery&redirectDelay=60&returnUrl=http://yourdomain/upload/gallery&uploadTimeOut=10000&licenseKey=licensekey&minimumVersion=1.4&multiSelection=true&imageMode=mode=thumbnail,autorotate=true,size=600,resizeQuality=high,jpegQuality=100;mode=thumbnail,size=50,resizeQuality=high,jpegQuality=100;mode=thumbnail,size=150,resizeQuality=high,jpegQuality=100&debugMode=true&minimumVersion=1.4&cookies=AuthCookie=@Model.AuthCookie"> Upload Photos</a>     
}
else
{
    <div>Authentication error: cannot upload photos, please contact the administrator</div>
}

As you can see above, the code is simple: we declare just a hyperlink, which sends a request to the AurigmaUp app. Everything is configured through this link: the upload URL, the image sizes, the URL to redirect to after upload, the license and authentication. AurigmaUp uses cookies to authenticate the request, so in this case I am passing my Forms Authentication cookie back via my model. This works fine, error free. A piece of JavaScript provides a mechanism to download the iPhone/Android app (it assumes a hidden link element with the id downloadAppLink on the page):

<script type="text/javascript">

(function() {
    var ua = navigator.userAgent;
    var link = document.getElementById("downloadAppLink");
    if (ua.indexOf("Android") > -1) {
        link.setAttribute("href", "https://play.google.com/store/apps/details?id=com.aurigma.aurigmauploader");
        link.style.display = "";
    } else if (ua.indexOf("iPhone") > -1 || ua.indexOf("iPad") > -1) {
        link.setAttribute("href", "http://itunes.apple.com/us/app/aurigma-up/id432611633");
        link.style.display = "";
    }
})();

</script>

Here is the server-side controller code that demonstrates how the cookie is passed via the model:

public ActionResult Photo()
{
    _logMessages.Write("Into Get Aurigma Upload");

    UploadAlbum ua = new UploadAlbum();
    ua = SetUpAuthCookieForAurigma(ua);
    return View(ua);
}

private UploadAlbum SetUpAuthCookieForAurigma(UploadAlbum ua)
{
    HttpCookie authCookie = HttpContext.Request.Cookies.Get(FormsAuthentication.FormsCookieName);

    if (authCookie != null)
    {
        _logMessages.Write(authCookie.Value);
        ua.AuthCookie = authCookie.Value;
    }
    else
    {
        _logMessages.Write("cookie in post does not exist");
        // Redirect to error page
    }

    return ua;
}
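The UploadAlbum model itself is not shown in the post; as a minimal sketch, it would hold at least the AuthCookie property that the view and controller reference (everything else about the class is an assumption, and a real model would likely carry additional album data):

```csharp
using System;

// Minimal sketch of the view model; only AuthCookie is known from the post.
public class UploadAlbum
{
    public string AuthCookie { get; set; }
}
```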

The uploads work error free, without any issues no matter how many photos you upload, and they are reasonably fast.

For those of you who are 100% bent on providing an integrated, non-clunky user experience, this is not the best solution. Although it is very simple to configure with HTML, the server-side code will present some debugging challenges: once the request is submitted to the AurigmaUp app, you lose control, because there is no way to debug the app in development mode. You are going to have to survive by logging to persistent storage. However, once you have a good logging mechanism in place, it works pretty smoothly.

In the end I would have to conclude with mixed feelings. If you have a site where user activity is predominantly media uploads, then for the ultimate integrated experience it is better to go with an HTML5 or PhoneGap type of API. However, if you want to quickly give the user the capability to upload photos from mobile devices as an enhancement to the user experience, this is certainly worth considering, because it is extremely easy to code and handles uploads correctly and quickly.

I have not provided the full source code; if you have any questions, feel free to contact me directly or through the comments.



Program to an interface, not an implementation

I have lately been discussing SOLID in my previous posts, covering some of the basic principles of OOD. Recently, during project work, we had to decide between Entity Framework and NHibernate, two popular ORM layers in the .NET framework. The decision was not easy; we got stuck on second-level caching support and could not reach a decision right away. I started thinking that maybe for now we should write our layer in such a way that we choose one ORM, and then switch to a different one in the worst case if need be. Ideally a situation like this should not arise, but if it does happen, you need to be able to handle it. Once again I felt how a very fundamental OOD principle could come in handy:

Program to an interface, not an implementation.

These words can sound very ambiguous and abstract when you just read them.

However, seen in context, you understand how powerful this is and how it can increase flexibility and the ability to switch. Starting from my own situation, where we needed the flexibility to switch the data access mechanism if need be, let's understand this by constructing a scenario. A common problem could be implementing a subscription mechanism for a data service that your system provides. Users can subscribe to the data service and get notified when a change occurs. We need to persist the subscription information to the database, which means using some database access mechanism.

As usual, there is a presentation layer that gathers some subscription information and passes it along to the data service, which persists the information to the database. For simplicity's sake, let's say we have a SubscriptionService, which is our data service for persisting subscriptions. We can discuss this further in the context of an ASP.NET MVC application, where a view is our presentation layer through which subscription details are collected. A controller action method is typically invoked to pass the information along to the server.

We can assume for our purposes that the application provides notifications on food recalls. A member user of the website subscribes to recalls for a certain type of food issued by different organizations such as the FDA. The subscription service, at its simplest, could be:

Please note: all code is pseudocode only; this is not a working solution that can be downloaded and built.

public class RecallsSubscriptionService
{
    public Boolean Subscribe(int userId, string food_type, string organization)
    {
        // data access code to insert subscriptions
        return false;
    }
}

Most of the time, the controller action method would look something like this:

public class SubscriptionController : Controller
{
    [HttpPost]
    public ActionResult Subscribe(int userId, string food_type, string organization)
    {
        try
        {
            RecallsSubscriptionService myservice = new RecallsSubscriptionService();
            myservice.Subscribe(userId, food_type, organization);

            return RedirectToAction("Index");
        }
        catch
        {
            return View();
        }
    }
}

So, we make a decision: we will use Entity Framework 6 to implement our data access for storing subscription information. The code above will work fine, and it goes into production. Later, however, we decide that due to certain project requirements we have to switch to NHibernate. This is a tough situation: our controller, which is a client of the RecallsSubscriptionService, depends heavily on instantiating this specific service, which accesses the database with Entity Framework 6. Although we expected the problem could arise, we didn't design our system for this flexibility. Even though we encapsulated the data access code in a service, we did not free the client from the data access implementation, because the client program is indirectly tied to the EF implementation.

We need to make some changes so our controller is independent of future changes to the database access mechanism. To bring transparency into how data access happens from a client's perspective, we introduce interfaces. We write an interface for the SubscriptionService:

interface IRecallsSubscriptionService
{
    Boolean Subscribe(int userId, string food_type, string organization);
}

We could write two implementations of the above interface, or we could just take the implementation we had earlier and replace its internals with NHibernate.

public class RecallsSubscriptionServiceEF : IRecallsSubscriptionService
{
    public Boolean Subscribe(int userId, string food_type, string organization)
    {
        // implement using EF
        return false;
    }
}

public class RecallsSubscriptionServiceNH : IRecallsSubscriptionService
{
    public Boolean Subscribe(int userId, string food_type, string organization)
    {
        // implement using NHibernate
        return false;
    }
}
/* (Not that the above approach, keeping two implementations available
for the same requirement, is recommended.
This is for understanding purposes only.) */

The above mechanism allows us to drive the Subscribe method call through the IRecallsSubscriptionService interface. To use it effectively, we pass IRecallsSubscriptionService as a parameter to the controller constructor and use dependency injection to have the right concrete class instantiated at run-time. I will not dive deeper into that here, because it is outside the scope of this topic.
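As a hedged sketch of that idea outside of the MVC plumbing (the client class below stands in for the controller, and the service bodies are stubs, not real EF or NHibernate code), constructor injection looks like this:

```csharp
using System;

public interface IRecallsSubscriptionService
{
    bool Subscribe(int userId, string food_type, string organization);
}

// Stand-ins for the EF and NHibernate implementations
public class RecallsSubscriptionServiceEF : IRecallsSubscriptionService
{
    public bool Subscribe(int userId, string food_type, string organization)
    {
        // Entity Framework insert would go here
        return true;
    }
}

public class RecallsSubscriptionServiceNH : IRecallsSubscriptionService
{
    public bool Subscribe(int userId, string food_type, string organization)
    {
        // NHibernate insert would go here
        return true;
    }
}

// Plays the role of the controller: it depends only on the interface,
// so the concrete ORM can be swapped without touching this class.
public class SubscriptionClient
{
    private readonly IRecallsSubscriptionService _service;

    public SubscriptionClient(IRecallsSubscriptionService service)
    {
        _service = service;
    }

    public bool Subscribe(int userId, string food_type, string organization)
    {
        return _service.Subscribe(userId, food_type, organization);
    }
}
```

In the real controller, a DI container would supply the concrete service to the constructor at run-time.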

This is the whole basis of Dependency Inversion and Injection (the 'D' in SOLID): here we program to an interface, not an implementation. This gives us the flexibility to change our implementation completely without changing the client code. The basic idea is that programming to an interface decouples the client from the internal implementation, freeing calling programs from knowing the internals and allowing the implementation to change as the need arises.

Even in scenarios where you are consuming published APIs, go up the hierarchy and program to the interface as opposed to the concrete class. So, if you have a List<string> called myList and need to compare individual strings, program to IComparer<string> rather than to a concrete comparer:

instead of writing:

StringComparer comparer = StringComparer.Ordinal;
int result = comparer.Compare(myList[0], myList[1]);

// Write as:

IComparer<string> comparer = StringComparer.Ordinal;
int result = comparer.Compare(myList[0], myList[1]);

// This gives you the freedom to use a different flavor of the Compare method if you need to:

IComparer<string> comparer = StringComparer.OrdinalIgnoreCase;
int result = comparer.Compare(myList[0], myList[1]);

The advantage of the above is the ability to switch between different comparison behaviors when they are available. A test program can also easily swap implementations and see what works best.

We also need to understand that "program to an interface" should not be taken literally. Abstract classes can act as interfaces in this context and give us the same flexibility.

Interfaces should be used wisely and only when needed. If there is no potential for an implementation to ever change, there is no need to bother with an interface: for example, a utility class that converts a string to a number. Interfaces should also be used in projects where testability is a big concern and mocking may be required.
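For instance, a string-to-number utility of that sort might be nothing more than the following; wrapping it in an interface would add ceremony without any plug-in benefit (the class name here is made up):

```csharp
using System;

// A stable, single-behavior utility: no realistic second implementation,
// so no interface is warranted.
public static class NumberConverter
{
    public static int ToNumber(string value)
    {
        return int.Parse(value);
    }
}
```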



SOLID conclusions with ISP and DIP

In my last post we went over LSP (the Liskov Substitution Principle) and how it helps achieve subtype polymorphism. We learnt to think in terms of client programs and how clients' calls can influence the quality and design of applications. SRP and OCP are the foundational principles on which LSP and the next one we discuss today, ISP, manifest themselves. It is not enough to understand Single Responsibility and Open/Closed on their own; the remaining three principles are where you see their applications and extensions. LSP, ISP and DIP all teach us how to design from a client's point of view. Uncle Bob calls it "clients exerting forces".

The Interface Segregation Principle (ISP) states that no client should be forced to depend on methods it does not use.

ISP helps formulate mechanisms for how interfaces should be segregated so the same interface can be useful to different clients. It's a very powerful thought that can bring strong results. Single Responsibility talks about writing interfaces that have very cohesive functions. ISP takes it one step further and gives concrete ideas that give rise to reusable interfaces. This is where both the relationship and the difference between SRP and ISP lie.

So how does 'segregating' interfaces help clients? Interfaces are meant to represent a certain type of behavior to the client and, by virtue of their contractual nature, promote pluggability. When an interface represents more than one type of behavior, it provides more than any one client needs. This makes the interface incapable of being used for plug-in purposes; it becomes 'fat', with unnecessary behaviors that are not useful in a given client's context. Basically, this builds on SRP and OCP and stresses much more strongly that interface contracts should represent one type of behavior. Almost all patterns have a basis in SRP, OCP and ISP. Let's say we are writing a class to represent a persistence medium. One of the most popular persistence mediums is the database; so is XML, and a lot of applications use simple JSON files to store simple content. Say we start out by writing a PersistenceMedium class. A simple IPersistenceMedium looks like this:

The code is pseudocode only, to describe the principle; it is not a working, compiled example:

// IResult is an imaginary interface 

interface IPersistenceMedium
{
    string fileName { get; set; }
    string connectionString { get; set; }

    void Open();
    IResult ExecuteQuery(string query);
    IResult ReadFile();
    void Close();
}

// So our database class can be written like this:

public class Database : IPersistenceMedium
{
    private string _connectionString;

    public string connectionString
    {
        get { return _connectionString; }
        set { _connectionString = value; }
    }

    public void Open() { /* database open connection code */ }
    public IResult ExecuteQuery(string query) { /* run the query */ return null; }
    public void Close() { /* database close connection code */ }

    // fileName and ReadFile() must still be implemented,
    // even though a database has no use for them
}

// The JSONStore can be written like this:

public class JSONStore : IPersistenceMedium
{
    private string _fileName;

    public string fileName
    {
        get { return _fileName; }
        set { _fileName = value; }
    }

    public void Open() { /* open a JSON file code */ }
    public IResult ReadFile() { /* read the file */ return null; }
    public void Close() { /* close file code */ }

    // connectionString and ExecuteQuery(string) must still be implemented,
    // even though a JSON store has no use for them
}

See what happened above? The Database class has no use for ReadFile() and fileName, and JSONStore has no use for ExecuteQuery() and connectionString. Although both are variants of persistence-medium behavior, they are unnecessarily clubbed together from the perspective of Database and JSONStore, which are the clients of IPersistenceMedium.

A better way would be:

interface IFileConnection 
{ 
     string fileName { get; set; }
     void Open();
     void Close();
}

interface IDatabaseConnection 
{   
      string connectionString {  get ; set; }   
      void Open();
      void Close();
}

interface IDatabaseOperation
{
     IResult ExecuteQuery(string query);
}

interface IFileOperation
{
      IResult ReadFile();
}

Now we have interfaces that can be used specifically for Database or JSONStore, because we segregated them keeping different clients and client groups in mind. The interfaces used for JSONStore can be reused for XML or any other type of file store. Similarly, the interfaces written for the Database can be used for any type of database: SQL Server, Oracle or NoSQL. Each interface is less fat, 'segregated' strictly based on different client usages. This is SRP in effect, by way of ISP. What are the different ways of using these with clients? One obvious way is multiple interface inheritance.

public class JSONStore : IFileConnection, IFileOperation
{
    private string _fileName;
    public string fileName
    {
        get { return _fileName; }
        set { _fileName = value; }
    }

    public void Open() { /* open file */ }
    public IResult ReadFile() { /* read file */ return null; }
    public void Close() { /* close file */ }
}

public class Database : IDatabaseConnection, IDatabaseOperation
{
    private string _connectionString;
    public string connectionString
    {
        get { return _connectionString; }
        set { _connectionString = value; }
    }

    public void Open() { /* open connection */ }
    public IResult ExecuteQuery(string query) { /* execute query to obtain results */ return null; }
    public void Close() { /* close connection */ }
}

The JSONStore above could very well be written as a generic FileStore; in that case different file stores can be used, with IResult representing the different result types from different types of files.

There are different ways to implement the above without multiple interface inheritance. One way is to use something like the Strategy pattern, where IFileOperation has concrete classes JsonFileOperation and XmlFileOperation, because each may have subtle read differences: XML requires special parsing, while JSON is more of a string representation.
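A minimal sketch of that Strategy-style arrangement might look as follows (IResult stays imaginary, so a trivial TextResult stand-in is used here, and the operation class names are assumptions):

```csharp
using System;

public interface IResult { string Content { get; } }

// Trivial stand-in for the imaginary IResult
public class TextResult : IResult
{
    public string Content { get; private set; }
    public TextResult(string content) { Content = content; }
}

public interface IFileOperation
{
    IResult ReadFile();
}

// Each strategy encapsulates its format's read differences
public class JsonFileOperation : IFileOperation
{
    public IResult ReadFile() { /* JSON is close to a string representation */ return new TextResult("json"); }
}

public class XmlFileOperation : IFileOperation
{
    public IResult ReadFile() { /* XML needs special parsing */ return new TextResult("xml"); }
}

// The file store stays unchanged when a new format strategy is added
public class FileStore
{
    private readonly IFileOperation _operation;
    public FileStore(IFileOperation operation) { _operation = operation; }
    public IResult Read() { return _operation.ReadFile(); }
}
```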

We have established in many different ways the value of SOLID: how it leads to good design practices and an understanding of patterns.

I will conclude without diving into Dependency Inversion; there are fantastic blogs on it, including Uncle Bob's report. Dependency Inversion distinguishes itself by establishing how layers should interact without depending on each other. Again, a powerful concept that has led to great programming practices, including Test Driven Development, and several patterns that facilitate the principle. SOLID is definitely worth your time!



Way to Patterns: even more SOLID

In the last post we saw how OCP facilitates extensibility and reusability. OCP is the foundation on which several patterns have been written; the Strategy pattern is a great example, where subclasses are written around different algorithms. The manifestation of OCP happens in the third SOLID principle, the 'L' as we know: the Liskov Substitution Principle, perhaps the most involved and least understood principle in SOLID.

We saw how SRP can lead to OCP; LSP takes OCP and establishes clear rules that ensure polymorphism is accomplished correctly. LSP attempts to achieve what we call subtype polymorphism through its rules. In short, we can represent this in pseudocode: a client call to a subtype method, through a base class interface:

public class Supertype { public virtual outparam SomeMethod(inparam); }
public class Subtype : Supertype { public override outparam SomeMethod(inparam); }

// Client call:
Supertype baseType = new Subtype();
outparam = baseType.SomeMethod(inparam);

The Liskov Substitution statement translates to: subtypes (derived types) must be behaviorally equal to their base types. They must be usable through the base type interface without the user needing to know the difference.

LSP needs to be understood from a client's perspective. Client here means calling programs: users of your interfaces and abstract classes, within your organization or outside it. A client perceives the behavior of a class through its methods: the arguments passed, the value returned and any state changes after the method executes. So, to be able to use a subclass/subtype in place of a superclass/supertype, the subtype needs to preserve the argument requirements and return-value expectations of the client. Client code written against the supertype's method should not change or break when it is handed subclass method calls instead. Compilers do enforce signature compliance when methods are overridden from abstract classes. However, if within the implementation arguments or return values are treated in a manner that could break the client code, the code violates LSP, because the contract with the client was essentially broken. Compilers typically will not catch these violations.

It is the programmer's responsibility to ensure LSP compliance for the most part. These rules were formed to keep the internal subtype behavior consistent with the supertype behavior.

There is a very subtle and interesting nuance to understand here. You could very well write a superclass, then write a subclass that overrides a superclass method to give a specific implementation, and use the subclass in your programs to execute the subclass-specific method. You may never see anything wrong until you expect the subclass to be a subtype of the superclass, which is a supertype. It's when a 'subclass' is expected to become a 'subtype' that LSP comes into play.

A lot of blogs have been written to explain these rules. I have cited some good ones at the end for understanding them with code examples; in this post, let's instead try to understand the confusing rules I have most often seen people asking questions about. Let's just list all of them first:

  • Contravariance of method arguments in the subtype.
  • Covariance of return types in the subtype.
  • No new exceptions should be thrown, unless the exceptions are subtypes of exceptions thrown by the parent.
  • Preconditions cannot be strengthened in the subtype.
  • Postconditions cannot be weakened in the subtype.
  • Invariants must be preserved in the subtype.
  • History Constraint – the subtype must not be mutable in a way the supertype wasn’t.

Now, let's take just the two that seem to confuse people the most:

  • Preconditions cannot be strengthened in the subtype.
  • Postconditions cannot be weakened in the subtype.

Preconditions apply to arguments that will be used as part of the implementation; postconditions mostly relate to return values or the state after the implementation has executed. Preconditions are requirements on the callers of a function, while postconditions are requirements on the function itself. Precondition checks run before the actual implementation executes, whereas postcondition checks run after.

Preconditions cannot be strengthened in the subtype.

Wikipedia explains this as:

In the presence of inheritance, the routines inherited by descendant classes (subclasses) do so with their preconditions in force. This means that any implementations or redefinitions of inherited routines also have to be written to comply with their inherited contract. Preconditions can be modified in redefined routines, but they may only be weakened. That is, the redefined routine may lessen the obligation of the client, but not increase it.

What is the obligation of the client? The arguments that need to be passed are the obligation of the client. If the preconditions in the subclass method are set in such a way that the range of arguments the client may pass is smaller or more restricted, then you have strengthened the precondition. This increases the obligation of the client.

Let's understand this with an example. We will modify the IFormatter interface from the last post into an abstract base class with some implementation.

abstract class Formatter
{
    public virtual string Format(string message)
    {
        if (String.IsNullOrEmpty(message))
            throw new Exception();
        // do formatting
        return message;
    }
}

// strengthened precondition
public class MobileFormatter : Formatter
{
    public override string Format(string message)
    {
        if (String.IsNullOrEmpty(message) || message.Length > 250)
            throw new Exception();
        // do formatting
        return message;
    }
}

// weakened precondition (an alternative version; only one MobileFormatter would exist)
public class MobileFormatter : Formatter
{
    public override string Format(string message)
    {
        if (message == null)
            throw new Exception();
        // do formatting
        return message;
    }
}

As we see above, the strengthened precondition in MobileFormatter placed more restrictions on the arguments; this forces clients to change their code to accommodate it if they want to avoid getting an exception. So, behaviorally, the base type and the subtype are different.

With the weakened precondition, the client does not need to accommodate MobileFormatter at all: any argument that works for the base Formatter also works for MobileFormatter, because the validation in Formatter is stronger, or equivalently, the validation in MobileFormatter is weaker.
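To see the broken contract from the client's side, here is a self-contained sketch (ArgumentException replaces the bare Exception used above, and the client class is made up):

```csharp
using System;

public abstract class Formatter
{
    // Base precondition: the message must be non-empty
    public virtual string Format(string message)
    {
        if (string.IsNullOrEmpty(message)) throw new ArgumentException("message");
        return message.Trim();
    }
}

public class MobileFormatter : Formatter
{
    // Strengthened precondition: messages over 250 characters are now rejected too
    public override string Format(string message)
    {
        if (string.IsNullOrEmpty(message) || message.Length > 250)
            throw new ArgumentException("message");
        return message.Trim();
    }
}

public static class Client
{
    // Written against the base contract only: any non-empty message should succeed
    public static bool TryLog(Formatter formatter, string message)
    {
        try { formatter.Format(message); return true; }
        catch (ArgumentException) { return false; }
    }
}
```

Handing this client a MobileFormatter makes TryLog fail for a 300-character message that the base contract accepts, which is exactly the behavioral difference LSP forbids.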

Postconditions cannot be weakened in the subtype.

Wikipedia explains this as:

In the presence of inheritance, the routines inherited by descendant classes (subclasses) do so with their contracts, that is their preconditions and postconditions, in force. This means that any implementations or redefinitions of inherited routines also have to be written to comply with their inherited contract. Postconditions can be modified in redefined routines, but they may only be strengthened. That is, the redefined routine may increase the benefits it provides to the client, but may not decrease those benefits.

Let’s understand this with a code example:

abstract class Formatter
{
    public virtual string Format(string message)
    {
        // do formatting
        return message.Trim();
    }
}

// weakened postcondition
public class MobileFormatter : Formatter
{
    public override string Format(string message)
    {
        // do formatting
        return message;
    }
}

// strengthened postcondition (again, only one MobileFormatter would exist)
public class MobileFormatter : Formatter
{
    public override string Format(string message)
    {
        // do formatting
        return message.Trim().PadLeft(5);
    }
}

What we see in the first MobileFormatter above is that the postcondition got weakened by removing the Trim call. This provides the client with less than what was promised in terms of the result.

To correct it, we strengthened the postcondition by adding left padding. This does not require the client to change any code; the client simply gets the extra benefit of padding. The example is rather crude, but it serves the purpose of explanation.

There is a very interesting pattern, called the Template Method pattern, that accomplishes LSP via template methods written in a base class and overridden in derived classes. For now, this is enough to contemplate; more in the next blog.

Here are some really good blogs on Liskov that discuss the other rules as well:

http://www.ckode.dk/programming/solid-principles-part-3-liskovs-substitution-principle/#contravariance

http://msdn.microsoft.com/en-us/magazine/hh288081.aspx

Until then, happy programming!



Finding the way to Design Patterns : more SOLID

Continuing the process of finding our way to design patterns from my last post, we will try to unfold SOLID a bit more. Some of the SOLID principle interpretations and applications can be very subjective and a matter of debate. How do you know the classes you wrote adhere to the Single Responsibility Principle? Is there a way to determine it? What does Single Responsibility exactly mean? How far do you take these things, and so on.

There needs to be a balance in everything, of course. One needs to find a middle way between over-engineering and under-engineering. We need to create small classes with a few cohesive, related functions, without getting carried away and creating so many classes that we cannot manage those either. At the same time, understanding exactly how to create classes with just one relevant behavior or a few cohesive functions can get tricky.

It's important to ask, while writing a class: what is it that will potentially change in this class later that can be refactored into a separate class, so that minimal or no changes need to be made to the existing implementation when the change comes?

If we take the logging example from the previous post, we discussed three functions a logger performs: initializing a medium to log to, formatting for the medium, and writing to the medium. They all look like related functions, don't they? So they could all go in one class, which would be a violation of SRP. If we take one step further into refactoring, it is easy to recognize right away that there are different mediums to log to, each requiring a different method to write to it. So we create separate classes for the mediums. Yes, we applied SRP here.

What about formatting? Can formatting change? Yes, it can. Say we need to add a new logging medium, a mobile device, to which we want to send text messages whenever severe conditions occur. Doesn't the format of what goes into a database differ from what goes to a mobile device? Also, later down the line, potential users of our logger may want the message format to be a little different. Now, if we did not separate the format function into its own class, we have it as part of the class that writes to the medium, or worse, as part of the Logger class itself. The Logger class is our client-facing class that clients use to log messages. Just to change formatting, we would have to go and change this Logger class or the medium classes.

At this point we need to consider  whether it’s worth writing something like this:


interface IFormatter
{
    string Format(string message);
}

and the logger could do something like this :


public class Logger
{
    public void Write(IFormatter formatter) { }
}

Or let’s go couple of steps further along these lines,


public interface ILogMedium
{
    void Write(string message);
}

public class LogToDatabase : ILogMedium
{
    public void Write(string message)
    {
        // medium-specific logging
    }
}

public class Logger
{
    ILogMedium _logMedium;
    IFormatter _formatter;

    public Logger(ILogMedium logToMedium, IFormatter formatter)
    {
        // Creation classes (for example, a DBMediumProvider or a
        // DBFormatProvider) would typically build these and pass them in.
        _logMedium = logToMedium;
        _formatter = formatter;
    }

    public void Write(string message)
    {
        _logMedium.Write(_formatter.Format(message));
    }
}

Please note: the code above is not a complete working solution; it is only meant to demonstrate the thought process.

The advantage of the above is that we kept the formatter completely separate from the medium, as well as from the Logger class. This allows us to plug in new mediums and formatters and configure them based on our needs. Also, if there are bugs, it's easier to fix a specific small class than one big class where a fix could potentially break other functions.

This brings us to the Open-Closed Principle, which is the next one: the 'O' in SOLID, as we all know.

A consistent use of SRP can lead to OCP, which simply states: a class should be open for extension but closed for modification. This does not mean classes are completely sealed against change; it means a class should reach a state where only bug fixes are made to it, and new functions are added via new classes, minimizing implementation changes to existing classes.

Words should not be taken literally here: writing a class once does not mean it can never be changed. Rather, we should get to a point where, when behavior changes in predictable ways, you do not have to make several changes to a single class, or to several classes across the system. The ideal situation is one where you achieve the change by adding new code rather than changing existing code. For anyone who has spent a few years in programming supporting production systems, this will make a lot of logical sense.

Whatever we discussed above in the Logger example with reference to SRP applies to OCP as well, because OCP can be achieved by applying SRP consistently. Had we written the formatting inside the Logger or the individual medium classes, we would not be able to add new formatters, change existing formatting, or mix and match formatters with mediums without changing the Logger or the medium classes. The way we achieved OCP in the Logger example is by giving the Logger the single responsibility of writing to a medium, and giving each medium class the single responsibility of initializing its medium and writing to it. OCP then came into effect when we made it possible to add new formatters to the system simply by implementing the IFormatter interface and adding that class as a plugin. When a user wants to use the new formatter, she can select it through configuration and it gets picked up automatically. That wiring is implemented through creation classes, which is a separate topic; the Factory Method pattern is one example.
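To make the plugin idea concrete, here is a minimal sketch (the class names PlainFormatter and XmlFormatter are invented for illustration, not from the post's codebase): a new format arrives as a new class implementing IFormatter, and nothing in the Logger or the medium classes has to change.

```csharp
using System;

public interface IFormatter
{
    string Format(string message);
}

// The formatter that shipped originally -- untouched from here on.
public class PlainFormatter : IFormatter
{
    public string Format(string message)
    {
        return message;
    }
}

// A later requirement is met by adding a class, not by editing existing ones:
// this is "open for extension, closed for modification" in practice.
public class XmlFormatter : IFormatter
{
    public string Format(string message)
    {
        return "<log>" + message + "</log>";
    }
}
```

Which formatter gets instantiated can then be driven by configuration through a creation class, as mentioned above.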

Several design patterns use SRP and OCP, and following these two will put you in the mode of writing clean and efficient code. Popular patterns like Strategy and Factory are based on OCP. We will continue this discussion in the next post. Happy programming!

A few references:

Wikipedia definition Open-Closed

Interesting read on OCP by Jon Skeet


Finding the way to Design Patterns : SOLID

In my previous post on design patterns, I discussed why it's a challenge for developers and organizations to adopt patterns as part of their development practices. I also suggested that a dedicated, continuous effort to re-factor code should be made throughout the software life cycle, especially in the early to mid stages of development. Having said that, how do you go about re-factoring? How do object-oriented principles and design patterns help with it? If re-factoring is important, then the object-oriented mechanisms that facilitate it are important as well. The learning needs to happen slowly and steadily; as a result, incorporating these pattern-related principles will start becoming a habit.

I guess the challenge is where to begin. There is a ton of information on the web; it's all overwhelming, scattered and fragmented. You need a structure, and if you just pick up the Gang of Four book (our bible for design patterns), it can seem rather academic, abstract and intimidating to a first-timer. The book of course is great; however, it's hard to jump into patterns pragmatically just by reading it.

In the concluding paragraph of my previous post, I said that getting familiar with the SOLID principles developed by Robert C. Martin is a great way to get started with the principles that lead toward patterns programming later. Before expanding on what SOLID is technically, I would like to discuss its importance in the overall OO programming space. If you come from an OO language background like Java, C++ or C#, you are already familiar with encapsulation, inheritance and polymorphism as foundational principles of OO, and they are a daily part of your programming life. SOLID takes this one step further and lays out five principles you can apply to re-factor and improve code, making it maintainable, reusable and efficient.

So when you start applying SOLID, you are applying some of the fundamental principles on which design patterns are built. SOLID stands for Single Responsibility, Open-Closed principle, Liskov Substitution principle, Interface Segregation and Dependency Inversion.

If you take just the first two, Single Responsibility and Open-Closed, you will already improve the structure and quality of your classes.

Single Responsibility states: a class should have only a single responsibility.

Say you are writing a Log class whose job is simply to write messages to different logging mediums. However, you also gave it the responsibility of formatting the messages for these mediums, because that's part of the logging function. In addition, you gave it the responsibility of initializing and choosing the medium to log into, for example event logs, a database or log files. Now the Logger class has multiple reasons to change: writing to the log medium, formatting the message (different mediums may require different formatting), and on top of that initializing the medium. Your class has grown into one giant monolithic program. If you need to change only one aspect, say the formatting of messages going into the database, you have to change the whole class, and the rest of the code could break because of that one change. We come across situations like this all the time in production code, where we change one thing and something else breaks without us intending it. This sort of programming makes the code not only fragile, but totally un-pluggable.

Ideally, the Logger class should take on only the responsibility of writing to the medium, staying free of which medium it is writing to and what formatting that medium needs. Although these all seem like related functions, they merit becoming individual classes based on their specific behavior. Better still, you should write an interface that can be implemented to write to different log mediums.

public interface ILog
{
    void Write(string message);
}

As you see, when you take just this one principle and follow it while writing code, you end up with classes that are lightweight, each meant to do one particular job. This makes the design pluggable, reusable and efficient. You can create classes specific to a medium, so adding logging mediums becomes easy later. And if a change is required, say in how a message is formatted for a medium, you change only the corresponding medium or formatting class, avoiding the risk of the rest of the code breaking because of one change made in one big class.
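As a sketch of where this leads (ConsoleLog and FileLog are hypothetical names, and the list standing in for a real file keeps the example self-contained), each medium becomes its own small class implementing ILog:

```csharp
using System;
using System.Collections.Generic;

public interface ILog
{
    void Write(string message);
}

// One medium, one class, one reason to change.
public class ConsoleLog : ILog
{
    public void Write(string message)
    {
        Console.WriteLine(message);
    }
}

// A fake "file" backed by a list, so the sketch runs without touching disk.
public class FileLog : ILog
{
    public readonly List<string> Lines = new List<string>();

    public void Write(string message)
    {
        Lines.Add(message);
    }
}
```

Code that logs can depend on ILog alone, so swapping or adding mediums never touches the callers.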

As you go further and start applying each principle one by one, you will see certain patterns shaping up that can be applied to common scenarios. I will stop here for now and conclude that the first step is definitely to understand SOLID and start applying it seriously in your programming life.

Below is a great video on the Single Responsibility Principle by Robert C. Martin himself, to learn more and get started:

We will discuss the rest of SOLID in an upcoming post. Until then, happy programming.


SignalR: know your transport

I recently started using SignalR. I had time to explore this new framework over the last couple of months and used it in a recent work engagement for a proof of concept. There has been wide acceptance of SignalR, and not without reason: it's very easy to set up and code with, and it supports WebSockets when both client and server support the transport. This has made real-time programming easy and fun, and in this new age of rich internet applications it can apply in almost any context.

Secondly, the SignalR support is tremendous: it's open source and supported by Microsoft, and the product's developers do a great job of answering questions on premier .NET Q&A forums like Stack Overflow. There are several examples on the web on how to set up SignalR and get it working with your project, including on GitHub where SignalR is hosted. However, I would like to bring up some observations about areas where I lacked clarity in the beginning. SignalR supports four transports and employs graceful degradation based on the negotiation between client and server. The four transports are:

  1. WebSockets
  2. Server Sent Events
  3. Forever Frame
  4. Long polling

The WebSockets transport is the most sought-after transport for real-time programming because it offers bi-directional communication, unlike the other transports, giving you maximum performance. Although SignalR can be made to work with .NET framework 4.0, WebSockets will not be used as a transport if you are running a web server version lower than IIS 8.0. This is important to realize, given that browsers like Chrome support WebSockets. It's good to run Fiddler and watch what happens, to see whether SignalR negotiated the WebSockets transport or not.

GET http://localhost:2592/signalr/hubs 200 OK (application/x-javascript)

GET http://localhost:2592/signalr/negotiate?_=1359546909252 200 OK (application/json)

GET http://localhost:2592/signalr/connect?transport=serverSentEvents&connectionId=d66299e6-196f-4f17-a11f-191c8dfd84d1&connectionData=%5B%7B%22name%22%3A%22stocktickerhub%22%7D%5D&tid=9

200 OK (text/event-stream)

POST http://localhost:2592/signalr/send?transport=serverSentEvents&connectionId=d66299e6-196f-4f17-a11f-191c8dfd84d1

200 OK (application/json)

Here I am running this application from Visual Studio 2010 on .NET framework 4.0, with Chrome release 24.0.1312.56 m as the browser, which does support the WebSockets transport. So the point is, it is not sufficient for the browser to support WebSockets; the server needs to support it as well. As of now, to run SignalR over WebSockets in production with IIS, you need at minimum IIS 8.0 and .NET framework 4.5: ASP.NET 4.5 and IIS 8 include low-level WebSockets support, and previous versions do not.

In a second experiment, a request goes to IIS 6.0 on the local network, where SignalR is hosted with .NET framework 4.0 and Chrome as the browser:

GET http://192.168.32.11/myservices/signalr/negotiate?_=1359551953460 200 OK (application/json)

GET http://192.168.32.11/myservices/signalr/ping?_=1359551967785 200 OK (application/json)

GET http://192.168.32.11/myservices/signalr/connect?transport=longPolling&connectionId=f40f4a8c-e4b5-4eb8-ad09-5332077baf3b&connectionData=%5B%7B%22name%22%3A%22marketdatahub%22%7D%5D&tid=0&_=1359551968351
200 OK (application/json)

POST http://192.168.32.11/myservices/signalr/send?transport=longPolling&connectionId=f40f4a8c-e4b5-4eb8-ad09-5332077baf3b
200 OK (application/json)

This time long polling got negotiated, because the request is considered cross-domain, and as of now neither WebSockets nor Server Sent Events is supported cross-domain.

Any of the above four transports could work depending on the context of your application. However, it's important to run Fiddler or a similar tool to learn which transport was negotiated and whether both client and server support it.
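The transport SignalR settled on is visible right in the connect URL's transport query parameter, as the traces above show. As a small self-contained sketch (the helper name TransportSniffer is mine, not part of SignalR), you can pull it out of a captured URL:

```csharp
using System;

public static class TransportSniffer
{
    // Extracts the value of the "transport" query parameter from a
    // captured /signalr/connect URL, or returns null if it is absent.
    public static string GetTransport(string url)
    {
        var queryStart = url.IndexOf('?');
        if (queryStart < 0) return null;

        foreach (var pair in url.Substring(queryStart + 1).Split('&'))
        {
            var parts = pair.Split('=');
            if (parts.Length == 2 && parts[0] == "transport")
                return Uri.UnescapeDataString(parts[1]);
        }
        return null;
    }
}
```

Feeding it the second trace's connect URL yields "longPolling", confirming at a glance that WebSockets was not negotiated.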

Thanks to this question by Hemang Dave on Stack Overflow, which helped me arrive at some of these conclusions:

http://stackoverflow.com/questions/14503260/signalr-perfroms-long-polling-instead-of-websocket-in-case-of-cross-domain

Happy SignalR programming!

