Building a Blog Redux - Setting Up Lucene.Net For Search (Part 7)

Thursday, September 27, 2012

This is the seventh post in a series of posts about how I went about building my blogging application.

  1. Building a Blog Redux - Why Torture Myself (Part 1)
  2. Building a Blog Redux - The tools for the trade (Part 2)
  3. Building a Blog Redux - Entity Framework Code First (Part 3)
  4. Building a Blog Redux - Web fonts with @font-face and CSS3 (Part 4)
  5. Building a Blog Redux - Goodreads feed using Backbone.js (Part 5)
  6. Building a Blog Redux - Mapping View Models to Entities Using AutoMapper (Part 6)

Search is a significant part of website usability. When someone comes to my site looking for something and doesn't see the tag they want in the tag list, I want to give them the ability to search my posts for matching words. I would rather not have them leave my site for Google to search for a term that may well lead them to another website. Users should be able to enter a term or phrase on my website and get the most relevant posts back to review.

That being said, the latest functionality added to my blog site is a search capability. I chose Lucene.Net mainly because it is the big open source player for this kind of capability in ASP.NET applications. The core features of the framework that I am using are the indexing of content from my blog posts and the querying of that index to return relevant results quickly. It is a direct, function-for-function port of the original Java version of Lucene.

Lucene.Net is an indexing and search framework that can run on many different platforms, but most commonly it is used to provide search capabilities on websites. A few years ago it was essentially a dead project; however, some dedicated programmers revived it and brought it into the Apache Foundation, where it remained in incubator status until this past August 15th, when it was voted out of incubation.

Be aware that now that the project has graduated from incubator status, its project site has moved, and it took me some time to actually find it. At least at the time of this writing, it didn't even come up in a Google search. It was on Twitter that I found the current project link.

What I would like to do in this post is show you how I have implemented Lucene.Net in an ASP.NET MVC web application. This article mainly focuses on setting up the components in an MVC web application using StructureMap for Inversion of Control. Later on, I will have a post on building the index and another on querying it.

Getting Started

First off, I should say that the documentation on the project site for Lucene.Net is a bit lacking, but you can get a pretty good introduction from the CodeClimber website starting with this article. Furthermore, since that article talks about how Lucene.Net was implemented in the Subtext blog engine project, I went to their project site and downloaded the Subtext source code. I tried not to copy any code directly from the Subtext source, but if you compare my code to theirs you will see a lot of similarities. I think the way the Subtext code is implemented is pretty good, and it is pretty much the most common way to set up the framework. For the most part, all the samples I have seen either refer to the same articles I am referring to, or give examples very similar to my code. The point is that I am standing on other developers' shoulders here, mainly taking their ideas and modifying them to fit my needs.

Set Up

Lucene.Net is up on NuGet, so it's easy enough to download and install as a package. Just install the package via NuGet, and you are ready to start writing code.
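If you use the Package Manager Console, the install is a one-liner (the package id is Lucene.Net; the exact version you get depends on when you run it):

PM> Install-Package Lucene.Net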

Since I am using StructureMap to manage all of my dependencies, I need to explicitly add the Lucene.Net components to the registry, because I want to specify a singleton instance of each component so that only one of each runs while the application is running. Typically, I have StructureMap scan all my objects and then use StructureMap's "Convention over Configuration" pattern to register them all; however, this is a unique situation where that pattern does not work. I need to explicitly tell StructureMap that these components must run as singletons; that is, there can only be one instance of each object at any given time.

Regarding how I have set up StructureMap in general, I mentioned this in an earlier post, but if you want to see a good example of how I have StructureMap set up, see Elijah Manner's post.

Here is my StructureMap registry code:

    public class AviBlogRegistry : Registry
    {
        #region Constructors and Destructors
 
        public AviBlogRegistry()
        {
            Scan(
                x =>
                    {
                        x.TheCallingAssembly();
                        x.Assembly("AviBlog.Core");
                        x.WithDefaultConventions();
                    });
 
            //Register Search and Indexing Services
            For<ISearchEngineService>().Singleton().Use(
                () => new SearchEngineService(SingletonDirectory.Instance, SingletonAnalyzer.Instance));
            For<ISearchIndexService>().Singleton().Use<SearchIndexService>();
        }

        #endregion
    }

As I stated, I have the scan feature, which registers all my objects, but right below that I am explicitly registering the two search services so that a single instance of the SearchEngineService and a single instance of the SearchIndexService are used. These are my custom classes that wrap the Lucene.Net objects, which also need to be (and are) set up as singletons. I am also telling StructureMap that when the SearchEngineService is instantiated, it should use the singleton versions of the Lucene.Net Directory and Analyzer objects.

I am using the "Multithreaded Singleton Pattern" to instantiate these objects as Singletons. The Singleton pattern is used when you want to accomplish the following:

  • You want an object to have only one instance
  • You want an object to have one global entry point; in my case I am loading the objects up on the Application Start event.
  • You want the single instance to be thread-safe. That is, in a multi-threaded environment, the object should be created safely, without the chance that another thread could accidentally create a second instance at the same time.

I would have liked to have StructureMap create the singletons for the Directory and Analyzer as well, but I think because I am still inside the AviBlogRegistry class, those objects are not registered at this point and so cannot be resolved; I get null objects back from StructureMap. Perhaps if I create a separate StructureMap registry class for these two objects that executes before this registry, I can once again have every object in the application instantiated by StructureMap. I might try that later and update this post.
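For the curious, here is a rough, untested sketch of what that separate registry might look like. It reuses the same Use(() => ...) style as the registry above; the LUCENE_30 version constant and the index path are assumptions, so adjust them to whatever your Lucene.Net install and folder structure require:

    public class LuceneRegistry : Registry
    {
        public LuceneRegistry()
        {
            // One Directory instance pointing at the folder that holds the index files
            For<Directory>().Singleton().Use(
                () => FSDirectory.Open(new DirectoryInfo(HttpContext.Current.Server.MapPath("~/folder/subfolder/"))));

            // One Analyzer instance shared by indexing and searching
            For<Analyzer>().Singleton().Use(
                () => new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_30));
        }
    }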

Therefore, to create singleton instances of the Directory and Analyzer Lucene.Net objects, I have created some static wrapper classes. Here is the Lucene.Net Directory singleton:

public sealed class SingletonDirectory
    {
        private static volatile Directory instance;
 
        private static readonly object syncRoot = new Object();
 
        private SingletonDirectory()
        {
        }
 
        public static Directory Instance
        {
            get
            {
                if (instance == null)
                {
                    lock (syncRoot)
                    {
                        if (instance == null && HttpContext.Current != null)
                            instance =
                                FSDirectory.Open(
                                    new DirectoryInfo(HttpContext.Current.Server.MapPath("~/folder/subfolder/")));
                    }
                }
                return instance;
            }
        }
    }

The Directory object represents the index location that the application writes to and reads search queries from. In this case I am using FSDirectory, the concrete implementation of Directory that writes the index out to the file system. There is also MMapDirectory, which uses memory-mapped files for reading, and RAMDirectory, which reads and writes the index in memory and, from what I can see, is mainly used for unit testing. FSDirectory.Open takes a DirectoryInfo object that specifies where you want the index files to be saved.
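The SingletonAnalyzer referenced in the registry above follows the same pattern. Here is a minimal sketch of what that class can look like; the StandardAnalyzer and the LUCENE_30 version constant are assumptions on my part, so use whatever analyzer and version constant match your install:

public sealed class SingletonAnalyzer
    {
        private static volatile Analyzer instance;

        private static readonly object syncRoot = new Object();

        private SingletonAnalyzer()
        {
        }

        public static Analyzer Instance
        {
            get
            {
                if (instance == null)
                {
                    lock (syncRoot)
                    {
                        // StandardAnalyzer takes care of tokenizing, lower-casing, and stop words
                        if (instance == null)
                            instance = new StandardAnalyzer(Lucene.Net.Util.Version.LUCENE_30);
                    }
                }
                return instance;
            }
        }
    }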

Conclusion

I'll stop here for now. In my next post in the series I'll talk about how to read all the blog records from the database and create an index for searching. As always, you can check out the code on my GitHub account.


Using Node.js and Socket.IO to Stream Tweets From the Twitter API

Wednesday, June 20, 2012

I've discussed the Twitter API in a previous blog post. Specifically, in that post I talked about making a specific call and then getting a response back in the traditional request/response manner. However, Twitter also has a streaming service you can connect to that will push messages to you in real time over a persistent, open HTTP connection.

In general, this type of service can be very powerful, because it makes it very easy to stream information to clients without the client having to request the updates. In the past, streaming data to a browser was not easy. In actuality, websites were just making the browser appear to stream data when they were really polling for it with multiple requests. However, today's browsers support the WebSocket protocol, which makes this type of streaming rather easy to implement. It essentially gives the browser the ability to open a connection to a host service and retrieve data through that open connection. Here is what Wikipedia says about this functionality.

"The WebSocket protocol makes possible more interaction between a browser and a web site, facilitating live content and the creation of real-time games. This is made possible by providing a standardized way for the server to send content to the browser without being solicited by the client, and allowing for messages to be passed back and forth while keeping the connection open. In this way a two-way (bi-direction) ongoing conversation can "take place between a browser and the server."

Enter Node.js

Since this type of service is asynchronous, it's the type of functionality that really fits into the Node.js sweet spot, because JavaScript is by its very nature asynchronous.

To set this Node.js application up, I am going to need a few packages. Like Ruby on Rails' Gems and Microsoft .NET's NuGet, Node.js has a package manager, called npm. I'm not looking to reinvent the wheel here; I just want an easy way to get access to the Twitter stream. So to accomplish this, I am using the nTwitter package. There are tons of Twitter packages for Node.js, but I found this one the easiest to get up and running.

I am also going to set up the Express npm package, which gives Node.js a more MVC-like approach to building a website.

So the first thing to do is to reference all the needed packages and also reference my custom nTwitter module.

var express = require('express')
  , routes = require('./routes')
  , socketIo = require('socket.io')
  , twitter = require('ntwitter')
  , util = require('util')
  , twitterModule = require('./modules/twitterModule');

 

My twitter module looks like this. Here I am initializing my nTwitter reference and passing all the Twitter credentials needed to make the streaming call. As an aside, you need to go to the Twitter Development Site and set up an application to get your credentials.

var twitter = require('ntwitter'),
	util = require('util');
 
// Twitter Setup
module.exports.twit = new twitter({
  consumer_key: 'consumer key',
  consumer_secret: 'consumer secret',
  access_token_key: 'access token',
  access_token_secret: 'access token secret'
});

 

Enter Socket.IO

In my example, I am going to stream tweets that are in my general location. To do this, the "statuses/filter" streaming API needs to be referenced. This is the streaming API provided by Twitter.

To stream the tweets from my server to the client I am going to use Socket.IO. Socket.IO was built mainly for Node.js (although it can be used with other languages) to be the broker that sits on both the server and the client and handles all the heavy lifting of passing the data back and forth. The nice thing is that Socket.IO checks the browser for compatibility, so if the current browser does not support WebSockets, Socket.IO will fall back to something like polling instead.

Thus the only server code that is needed to stream the tweets to the client is this:

io.sockets.on('connection', function(socket) {
  var twit = twitterModule.twit;
  twit.stream('statuses/filter', {'locations':'-80.10,26.10,-80.05,26.15'},
    function(stream) {
      stream.on('data',function(data){
        socket.emit('twitter',data);
      });
    });
});

Socket.IO has an "on" function, where a callback function is passed in. Inside this callback function the Twitter API stream is called, and as the stream "emits" new tweets, those entries get sent to the client. The locations parameters specifies the general longitude and latitude I want the tweets to originate from.

Socket.IO and JSRender on the Client

On the client there is the corresponding Socket.IO code that is configured to listen to the URL that was set up on the server for emitting the tweets. When each tweet is emitted to the client, an event is fired to render a new item to the list.

To render the tweets on the browser, I am using jsRender. If you have ever used jQuery Templates, jsRender is very similar and is probably going to end up replacing jQuery Templates going forward; although, at the time of this writing jsRender is still in Beta.

(function($) {
	$(document).ready(function() {
		var $container = $('ul.tweets'),
			socket = io.connect('http://localhost:3000'),
			template = $('#tweetTemplate');
			
 
	    socket.on('twitter', function(data) {
	        $container.append(template.render(data));
	    });
	});
})(jQuery);

For my Node.js views, I am using Jade. Very similar to HAML in Ruby on Rails, it is a very terse and clean way of creating markup, with no angle brackets to deal with, which is nice. Jade uses the indentation of each line to determine how elements are nested. By the way, you can get this syntactical goodness in ASP.NET MVC as well by using the Spark View Engine.

This view also contains the jsRender template that is appended for each tweet.

h1= title
p #{title}
#local-tweet-container
	ul.tweets
 
script(id="tweetTemplate", type="text/x-jsrender")
	{{:#index+1}}: <li><img src='{{:user.profile_image_url}}' /> {{:text}}</li>

Running the App

When I start the application by running node app.js and then browse to http://localhost:3000, the Socket.IO code makes the following request.

 

GET http://localhost:3000/socket.io/1/websocket/21369566461136251045 HTTP/1.1
Upgrade: websocket
Connection: Upgrade
Host: localhost:3000
Origin: http://localhost:3000
Sec-WebSocket-Key: Y65qAJoayH795Vbhn3Bj4w==
Sec-WebSocket-Version: 13
Sec-WebSocket-Extensions: x-webkit-deflate-frame
 
The response that is sent back is:
 
HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: vCWC+o5mWP5mnD2enVI3/PJOhMk=
 
The 101 status means that the requester has asked the host to change its protocol, and in this case the change was to WebSockets.
 
If I look at my browser, I see the tweets getting added in real time.
Socket.Io Tweets
It's kind of hard to demonstrate what is happening in a screenshot, but the tweets are being added to the page while the page itself is not making any requests. Looking in Fiddler, no POST or GET requests are being made from my localhost domain.
 
Pretty cool stuff!
 


 

 

Learning Ruby on Rails

Wednesday, June 13, 2012

Learning other Languages


It's often said that a good developer should set a goal to learn at least one language a year. I am not sure how important that is as a goal, but I have to admit that at this point of my career, this is where I have maybe been a bit lacking. I have managed to stay relevant with the latest features of .NET, and I am currently working a lot with the new MVC 4 features, like Web API, which is currently a release candidate. Still, I haven't spent much time learning other languages. I have actually not had the need to, because the places where I have worked have, for the most part, been .NET shops. However, not having the need is a bad excuse, so this year especially, I have set out to rectify this imbalance and learn other languages and frameworks.

Furthermore, .NET has become pretty mature now, and I don't see anything coming up in the near term that is going to take a lot of time to wrap my head around. Maybe async and the new mobile features coming out might take a little time, but other than that, there is not much out there that I couldn't get up to speed on in a few days if I needed to. What I mean to say is that Microsoft seems to be spending its efforts on making the current features better, rather than introducing a lot of new features, as it has done in the past.

Why Ruby on Rails

I suppose I could have chosen Java or PHP, and I have actually been playing around with Node.js for a bit (does Node.js count? It's JavaScript after all). I actually set out to learn Spring in Java, but I had a hard time just getting a simple "Hello World" app running on my Windows machine. Plus, I couldn't find any decent free tutorials to get a newbie like myself started.

I have been fascinated by the great things I keep hearing from the development community about Ruby on Rails, and I knew there were a lot of really good resources out there on the language, so I chose to take a look at Rails and see what it had to offer.

My Approach

So to get started on this endeavor, I began by watching a video series. I find that videos help me get off the ground quickly, and from there I can read a book or two to supplement what I saw in the series.

A couple of years back, when I was first learning ASP.NET MVC, Rob Conery published a bunch of videos on the asp.net website where he built an MVC eCommerce site from scratch. I liked his approach and easygoing manner in the videos and knew that he had created his own learning site called TekPub. I also knew that he had a series on Ruby on Rails, so I put down the money (pretty nominal for the number of videos I got) and began watching the series.

In the series, Rob first goes through a lot of the basic concepts of Ruby on Rails, and then later he takes those concepts and builds a project time-tracking application. While he goes through and builds the application, I have been following along and building the same application, trying my best not to copy his code but to write it by hand in a TDD/BDD approach.

The videos are really well done, so if you are looking for a place to start, as I did, you will not be disappointed with this series. Beyond that, I watched a lot of Railscasts.com videos for specific Rails functionality. For example, I went there first when I wasn't sure how to go about adding functionality to my models and found the right video pretty quickly. The videos there are quick 10-to-15-minute nuggets that show off a specific piece of Rails functionality. These videos are also really well done.

And as always, I have been spending a lot of time Googling. I haven't come across a question yet that I didn't find the answer to by simply searching for it.

My Current Status

Although the video series starts building the time tracker web application, it doesn't finish it, and since I think I could probably use something like this for my side projects, I am pushing forward to see if I can finish the application, at least to the point where it is presentable for production. When I am finished, I am going to put the code on GitHub and see if I can find a free Rails hosting site to host the application.

I have also bought the book Eloquent Ruby to help me with the Ruby language in general. I probably should have gone through this book first, as I am finding while building the application that there are things I want to do for which I have to spend a lot of time on Google looking for the right syntax and approach.

What I Like So Far

After spending a lot of time in Ruby on Rails recently, I can really appreciate where a lot of the inspiration for ASP.NET MVC came from. Rails does MVC beautifully.

Since Ruby is not a statically typed language, Rails has to depend a lot more on writing tests than ASP.NET does, and to that end, I think the test tooling you get with Rails is pretty awesome. I have been using RSpec for unit testing and Cucumber for acceptance testing. Both frameworks are really expressive in a BDD style of writing tests and not very difficult at all to learn. FactoryGirl, a test stubbing tool that can generate a lot of test records for you, is the bomb. I wish there were something equivalent to it in the .NET space. All of these frameworks have really good documentation and videos.

Ruby on Rails takes "Convention over Configuration" to the next level beyond any thing I have seen in .Net. From setting up a database, to setting up routes, etc, Rails uses conventions all over the place, and if you can take advantage of them, they can save you a lot of time in writing extra code.

Code generation in Rails is really good, and although I am not a fan of scaffolding per se (ASP.NET MVC does scaffolding too), I really like the simple code generators you can use, and the ability to create your own if you want.

There is a ton of documentation, videos, and other resources for the newbie developer. There is a lot of functionality you can add to your application simply by adding a reference to it in your Gemfile. This also has its drawbacks, which I'll discuss later, but its usefulness far outweighs the negative impacts.

Security is more straightforward and easier to implement than it is in an ASP.NET MVC application. The out-of-the-box implementation does what I need, so there is not much code I need to write for authentication and authorization.

What Could Be Better

I've got to admit there is not much I don't like, and none of the things I am about to mention are really big issues to me, just things I would put on my wish list.

One thing I have found is that Rails is so dependent on its development community, and its community is so diverse, that I am having a hard time keeping up with all the Gems I have downloaded for my application. Between watching the TekPub videos and now, there have been a lot of breaking changes that I have spent a lot of time fixing. There has been a longer than expected learning curve in learning all these different Gems and all the magic they perform on my behalf. Also, the multitude of Gems makes it hard to go out and buy a book to learn Rails, because inevitably the book is going to use Gems that might not be what you need for the application you want to write.

I haven't really been impressed with any of the Ruby on Rails IDEs. I love and can't live without JetBrains' ReSharper plugin for Visual Studio, so I figured JetBrains' RubyMine IDE would be a good choice; however, I have found that although it's light years better than any other Rails IDE I have looked at, it's still kind of clunky and not as polished as ReSharper.

I know pure OO programmers poo-poo Rails and its unwieldy inheritance, and I can see what they mean to a certain extent. I can also see how a larger enterprise application written in Ruby could become a bit unmaintainable, but I think that's just a product of Rails' ability to quickly build an application and stand it up in a production environment. Actually, I think if you're a thoughtful programmer, you can still write Ruby code that avoids most of these issues; as is the case in any language, it just might take a little more thoughtfulness in Ruby.

Now, having both Rails and ASP.NET MVC in my tool belt, I can see how, depending on my budget and time constraints versus the complexity of the requirements and the resources working on the project, I might choose one framework over the other. My view could change, but I could see using Rails for sites I want to get up and running quickly at a low cost, whereas for a more enterprise-level application, I would probably go with ASP.NET MVC. However, I'll concede that this view could just be based on the fact that I am more comfortable with what I know, and is not necessarily a reflection on Ruby.

Conclusion

So far, I am glad that I spent the time learning this language. I think, in general, it has made me a better programmer, because my eyes have been opened to new approaches to building quality software that I hadn't really seen firsthand before. I like Ruby on Rails and look forward to writing more applications in it.

When I finish my application and publish it to GitHub, I'll also write another post with an update on my experience.

 

 

Building a Blog Redux - Mapping View Models to Entities Using AutoMapper (Part 6)

Monday, June 4, 2012

This is the sixth post in a series of posts about how I went about building my blogging application.

  1. Building a Blog Redux - Why Torture Myself (Part 1)
  2. Building a Blog Redux - The tools for the trade (Part 2)
  3. Building a Blog Redux - Entity Framework Code First (Part 3)
  4. Building a Blog Redux - Web fonts with @font-face and CSS3 (Part 4)
  5. Building a Blog Redux - Goodreads feed using Backbone.js (Part 5)

The basic building blocks of an MVC web application are the models, the views, and the controllers. The models represent the data that will be placed in the view. The controller is the guy that goes and gets the data (the model) and puts it in the view. Finally, the view is responsible for presenting the data to the user. If the data schema is simple, you could use an ORM like Entity Framework, NHibernate, or LINQ to SQL, or go old-school with a data access layer, and pass the entity that represents a SQL table directly to the view to display it on the page.

However, as views get more complicated, the data in the entities needs to be manipulated and reshaped before you can display it in a view. Oftentimes, the views have fields that are not needed in an entity, or vice versa. You could end up with bloated entities that have a lot of properties that are not mapped to any field in the database. Back in the days of yore, we might have resorted to creating stored procedures to manipulate the data and return the properties we needed, but the next person to come along and make changes to that application quickly learned that this was a very bad idea.

My Approach

My MVC applications are really MVCS applications. That is, I typically add a service layer to my applications between the controllers and the models, where I try to keep the lion's share of my business (domain) logic. Also, instead of having entities display on the page, I create classes that represent view models, and then I map the entities to the view models, which get sent to the views. This mapping takes place in that service layer.

Doing this keeps my entities clean and provides a better separation of concerns between my database and my front-end views. However, mapping entities to view models and vice versa can be a challenge, especially as your application grows in both size and complexity. It can also be quite tedious and make for a laborious day of coding.

Enter AutoMapper

So to handle all my mappings, I use AutoMapper. Here is what AutoMapper is in their own words.

"AutoMapper is an object-object mapper. Object-object mapping works by transforming an input object of one type into an output object of a different type. What makes AutoMapper interesting is that it provides some interesting conventions to take the dirty work out of figuring out how to map type A to type B. As long as type B follows AutoMapper's established convention, almost zero configuration is needed to map two types."

Configuring AutoMapper

Before mapping an entity object to a view model object, I need to tell AutoMapper the details of these mappings. Most of the time the defaults were enough; by convention, if object A has a property with the same name as a property on object B, those two fields will be mapped automatically. If object A has a property that object B doesn't have, I can ignore it or do something custom to account for that field. In an MVC web application, these configuration steps are done when the application starts.
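As a trivial illustration of that convention (these classes and property names are made up for the example, and I am calling CreateMap directly rather than through a profile), two types whose properties share names need nothing more than this:

public class Setting
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class SettingViewModel
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// Matching property names are picked up automatically by convention
Mapper.CreateMap<Setting, SettingViewModel>();

var entity = new Setting { Id = 1, Name = "Theme" };
SettingViewModel viewModel = Mapper.Map<Setting, SettingViewModel>(entity); // Id and Name copied over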

In my Application_Start event in the global.asax.cs file, I have the following code.

Bootstrapper.RegisterMappings();

Inside this function, I register a profile for each of my mappings. These profile classes are what tell AutoMapper the details of each mapping.

 public static void RegisterMappings()
        {
            Mapper.Initialize(x =>
                                  {
                                      x.AddProfile(new UserMapperProfile());
                                      x.AddProfile(new UserRoleMapperProfile());
                                      x.AddProfile(new BlogSiteMapperProfile());
                                      x.AddProfile(new PostMapperProfile());
                                      x.AddProfile(new SettingMappingProfile());
                                      x.AddProfile(new PingServiceMappingProfile());
                                  });
        }

Here is an example of a simple configuration.

public class SettingMappingProfile : Profile
    {
         public const string ViewModel = "SettingProfile";
 
        public override string ProfileName
        {
            get { return ViewModel; }
        }
 
        protected override void Configure()
        {
            CreateMap<Setting, SettingViewModel>();
            CreateMap<SettingViewModel, Setting>();
        }
    }

In this case, I have a mapping from the entity to the view model and also from the view model back to the entity. Also, notice that my mapping class inherits from the Profile base class, which is doing all of the magic and taking care of the mappings.

Where a property has a different name on the view model than on the entity, I can tell AutoMapper where to pull it from with MapFrom; and where I need to skip a field altogether, I can use the Ignore option.

CreateMap<BlogSiteViewModel, Blog>()
                .ForMember(dest => dest.Id, opt => opt.MapFrom(x => x.BlogId));

If I have a property that is itself an object and I want to map an Id value onto that object's Id property, instantiating the object in the process, I can do the following.

CreateMap<HtmlFragmentViewModel, HtmlFragment>()
                .ForMember(dest => dest.Blogs, opt => opt.Ignore())
                .ForMember(dest => dest.Location,
                           opt => opt.MapFrom(x => new HtmlFragmentLocation {Id = Convert.ToInt32(x.SelectedLocationId)}));

In some cases, I have a property that I want to do something special with. In those cases, I can use what's called a value resolver. For example, I want to store my tags in the database as a collection; however, I want to display them on the page as a comma-delimited string. Here's how I can do this with AutoMapper.

First specify in the configuration to use a custom ValueResolver.

.ForMember(dest => dest.TagListCommaDelimited, opt => opt.ResolveUsing<TagListToDelimiterResolver>())

Then in the derived value resolver class, add the code to handle the list to delimiter functionality.

    public class TagListToDelimiterResolver : ValueResolver<Post, string>
    {
        protected override string ResolveCore(Post source)
        {
            if (source == null) return string.Empty;
            if (source.Tags == null) return string.Empty;
            string tagDelimited = source.Tags.Aggregate(string.Empty,
                                                        (current, tag) => current + string.Format("{0},", tag.TagName));
 
            return !string.IsNullOrEmpty(tagDelimited)
                       ? tagDelimited.Substring(0, tagDelimited.Length - 1)
                       : tagDelimited;
        }
    }

Testing

One of the things I like about AutoMapper is that it has an assert routine you can use in tests to verify that your configuration works. When a test fails, the assert function does a good job of letting you know where the problem lies and which property is having an issue. Leaning on these mapping unit tests saves me lots of time configuring my mappings, and I pretty much knew that if my tests were passing, my web application would handle the mappings without any issues.

        [TestMethod]
        public void Should_be_able_to_configure_user_profile_to_view()
        {
            Bootstrapper.RegisterMappings();
            Mapper.AssertConfigurationIsValid();
        }

Mapping

Once the configurations are in place, the Mapper.Map function can be called and view models will be mapped to entities and entities to view models. Typically, I like to add some further abstraction and place these tasks in their own classes. This way, I can mock the results in tests and add additional customization if needed. It also makes the mappings a little more maintainable.

public class PostMappingService : IPostMappingService
    {
        public Post MapToEntity(PostViewModel viewModel)
        {
            return Mapper.Map<PostViewModel, Post>(viewModel);
        }
 
        public PostViewModel MapToView(Post entity)
        {
            return Mapper.Map<Post, PostViewModel>(entity);
        }
    }

Conclusion

Hope this helps. As usual, you can see all of the code at GitHub.

 

 

 

Making a Twitter OAuth API Call Using C#

Wednesday, May 23, 2012

I had an idea to create a Twitter management web application to help me manage my Twitter account. I know there are NuGet packages (TweetSharp is the one I use) that would do this for me, but being the geek that I am, I just wanted to see how this all works for myself, and see if I could get my brain around what it actually takes to make an OAuth call.

Admittedly, if I needed to make a Twitter API call for a real project, I would just use TweetSharp. Thus, this post is more conceptual than practical in nature in that you would probably never need to do this, but it's good to know how it's done all the same.

OAuth Authorization

I previously posted about how easy it was to make an API call to Goodreads, but doing the same thing for Twitter was a different story. Unlike Goodreads, Twitter requires you to pass all the OAuth parameters in the HTTP header, and they have to be encoded in a very specific manner. Per the Twitter Development Site, you need to create an Authorization header, and this header entry should look like this:

Authorization:

        OAuth oauth_consumer_key="xvz1evFS4wEEPTGEFPHBog", 
              oauth_nonce="kYjzVBB8Y0ZFabxSWbWovY3uYSQ2pTgmZeNu2VS4cg", 
              oauth_signature="tnnArxj06cWHq44gCs1OSKk%2FjLY%3D", 
              oauth_signature_method="HMAC-SHA1", 
              oauth_timestamp="1318622958", 
              oauth_token="370773112-GmHxMAgYyLbNEtIKZeRNFsMKPR9EyMZeS9weJAEb", 
              oauth_version="1.0"

 

The Authorization Parameters

oauth_nonce: Twitter uses this parameter to determine whether the same call has been sent multiple times; therefore, it needs to be unique every time a call is made. According to the specs, the value for this parameter needs to be the base64 encoding of 32 bytes of random data. A timestamp in ticks works for this.

_oauthNonce = Convert.ToBase64String(new ASCIIEncoding().GetBytes(
                DateTime.Now.Ticks.ToString(CultureInfo.InvariantCulture)));

oauth_token: Represents the user's permission to access their account with your application. For example, a Twitter application will redirect you to a Twitter permission page, and when you log in there, Twitter will redirect the user back to your app and pass the token back in the response. In my case, I am using my own token that I got from the Twitter development site.

oauth_version: This parameter should always be 1.0.

oauth_consumer_key: Tells Twitter which application is making the request. When you add an application to the Twitter development site, they give you this value.

oauth_signature_method: The signature method used by Twitter is HMAC-SHA1. 

oauth_timestamp: This parameter is the number of seconds since the UNIX epoch at the time the request is generated.

_timeSpan = DateTime.UtcNow - new DateTime(1970, 1, 1, 0, 0, 0);
_oathTimestamp = Convert.ToInt64(_timeSpan.TotalSeconds).ToString(CultureInfo.InvariantCulture);

The Signature (What the...?)

Twitter has a whole page dedicated to showing you how to create the signature, so I won't go into great detail, but there are some things I want to elaborate on here. First off, here is what the specs have to say:

 

These values need to be encoded into a single string which will be used later on. The process to build the string is very specific:
   1. Percent encode every key and value that will be signed.
   2. Sort the list of parameters alphabetically[1] by encoded key[2].
   3. For each key/value pair:
   4. Append the encoded key to the output string.
   5. Append the '=' character to the output string.
   6. Append the encoded value to the output string.
   7. If there are more key/value pairs remaining, append a '&' character to the output string.
[1] Note: The OAuth spec says to sort lexicographically, which is the default alphabetical sort for many libraries.
[2] Note: In case of two parameters with the same encoded key, the OAuth spec says to continue sorting based on value. However, Twitter does not accept duplicate keys in API requests.

Here is the code for this process:

public string CreateSignature(string url)
        {
            //string builder will be used to append all the key value pairs
            var stringBuilder = new StringBuilder();
            stringBuilder.Append("POST&");
            stringBuilder.Append(Uri.EscapeDataString(url));
            stringBuilder.Append("&");
 
            //the key value pairs have to be sorted by encoded key
            var dictionary = new SortedDictionary<string, string>
                                 {
                                     {"oauth_version", OathVersion},
                                     {"oauth_consumer_key", OauthConsumerKey},
                                     {"oauth_nonce", _oauthNonce},
                                     {"oauth_signature_method", OauthSignatureMethod},
                                     {"oauth_timestamp", _oathTimestamp},
                                     {"oauth_token", OauthToken}
                                 };
            
            foreach (var keyValuePair in dictionary)
            {
                //append a = between the key and the value and a & after the value
                stringBuilder.Append(Uri.EscapeDataString(string.Format("{0}={1}&", keyValuePair.Key, keyValuePair.Value)));
            }
            string signatureBaseString = stringBuilder.ToString().Substring(0, stringBuilder.Length - 3);
 
            //generate the signature key the hash will use
            string signatureKey =
                Uri.EscapeDataString(OauthConsumerKey) + "&" +
                Uri.EscapeDataString(OauthToken);
 
            var hmacsha1 = new HMACSHA1(
                new ASCIIEncoding().GetBytes(signatureKey));
 
            //hash the values
            string signatureString = Convert.ToBase64String(
                hmacsha1.ComputeHash(
                    new ASCIIEncoding().GetBytes(signatureBaseString)));
            
            return signatureString;
        }

Here is how the code corresponds to the steps above.

  • For #1 I am using Uri.EscapeDataString to percent encode every key and value.
  • For #2 I am putting the key values in a sorted dictionary.
  • For #3, #4, #5, #6, and #7 I am iterating through each key value pair and encoding them and appending the "=" and "&" as needed.

From the MSDN site, here is the definition of what Uri.EscapeDataString does:

By default, the EscapeDataString method converts all characters except for RFC 2396 unreserved characters to their hexadecimal representation. If International Resource Identifiers (IRIs) or Internationalized Domain Name (IDN) parsing is enabled, the EscapeDataString method converts all characters, except for RFC 3986 unreserved characters, to their hexadecimal representation. All Unicode characters are converted to UTF-8 format before being escaped.

The string generated from these steps should look something like this.

POST&http%3A%2F%2Fsearch.twitter.com%2Fsearch.json&oauth_consumer_key%3Dxxxxxxxxxxxxxxxxxxxxxx%26oauth_nonce%3DNjM0NzIzMTkzMzAzMzMyOTM0%26oauth_signature_method%3DHMAC-SHA1%26oauth_timestamp%3D1336736930%26oauth_token%3Dxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx%26oauth_version%3D1.0

Once the signature base string is generated, you need to generate a signing key from your OAuth consumer key and OAuth token. Note that these entries are essentially your password to Twitter, so keep them hush-hush.

Finally, having the base signature and the signature key, I can then create a hash using HMACSHA1.

HMACSHA1 is a type of keyed hash algorithm that is constructed from the SHA1 hash function and used as an HMAC, or hash-based message authentication code. The HMAC process mixes a secret key with the message data, hashes the result with the hash function, mixes that hash value with the secret key again, then applies the hash function a second time. The output hash is 160 bits in length.

Building the Header

Once I have all my header values, including my signature, I can then create the header.

 

        public string CreateAuthorizationHeaderParameter(string signature, string timeStamp)
        {
            string authorizationHeaderParams = String.Empty;
            authorizationHeaderParams += "OAuth ";
            authorizationHeaderParams += "oauth_nonce=" + "\"" +
                                         Uri.EscapeDataString(OAuthNonce) + "\",";
 
            authorizationHeaderParams +=
                "oauth_signature_method=" + "\"" +
                Uri.EscapeDataString(OauthSignatureMethod) +
                "\",";
 
            authorizationHeaderParams += "oauth_timestamp=" + "\"" +
                                         Uri.EscapeDataString(timeStamp) + "\",";
 
            authorizationHeaderParams += "oauth_consumer_key="
                                         + "\"" + Uri.EscapeDataString(OauthConsumerKey) + "\",";
 
            authorizationHeaderParams += "oauth_token=" + "\"" +
                                         Uri.EscapeDataString(OauthToken) + "\",";
 
            authorizationHeaderParams += "oauth_signature=" + "\""
                                         + Uri.EscapeDataString(signature) + "\",";
 
            authorizationHeaderParams += "oauth_version=" + "\"" +
                                         Uri.EscapeDataString(OathVersion) + "\"";
            return authorizationHeaderParams;
        }

Once again, I need to percent escape each of the entries in the header, and when the function completes, the result should look something like this:

Authorization: OAuth oauth_nonce="NjM0NzIyNjg0NjAwMDI1MzMx",oauth_signature_method="HMAC-SHA1",oauth_timestamp="1336686060",oauth_consumer_key="xxxxxxxxxxxxxxx",oauth_token="xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",oauth_signature="xxxxxxxxxxxxxxxxxxxxxx",oauth_version="1.0"

Putting It All Together. Let's Search Twitter

Okay, now that I have an Authorization header, I can use it to make a call to Twitter. If I wanted to search on "programming" in the South Florida area, I could build a URL like this (note that for this particular type of call I don't actually need the header entry, but I would for more secure calls like viewing my direct messages or posting a new tweet):

http://search.twitter.com/search.json?q=programming&rpp=25&include_entities=true&result_type=recent&geocode=26.225947999999995,-80.18809300000001,100mi

In this case, I am passing the search term "programming", the geocode (latitude, longitude, and radius) I want the results to come from, and a maximum of twenty-five matches per page. Here is the code that makes that call:

 

        public IList<SearchResponse> Search(SearchRequest search)
        {
            if (search == null || string.IsNullOrEmpty(search.Q)) return new List<SearchResponse>();

            string query = _httpServer.UrlEncode(search.Q);
            
            string url = "http://search.twitter.com/search.json";
            string fullUrl = string.Format("{0}?q={1}&rpp=25&include_entities=true", url, query);
 
            if (!string.IsNullOrEmpty(search.ResultType))
            {
                fullUrl = string.Format("{0}&result_type={1}", fullUrl, search.ResultType);
            }
 
            if (search.GeoInfoCode != null && !string.IsNullOrEmpty(search.GeoInfoCode.Latitude))
            {
                fullUrl = string.Format("{0}&geocode={1}", fullUrl, search.GeoInfoCode.Code);
            }
 
            string signatureString = _oAuthCreationService.CreateSignature(url);
            string authorizationHeaderParams = _oAuthCreationService.CreateAuthorizationHeaderParameter(
                signatureString, _oAuthCreationService.OAuthTimeStamp);
 
            const string method = "GET";
            HttpWebRequest request = _httpRequestResponse.GetRequest(fullUrl, authorizationHeaderParams, method);
            var responseText = _httpRequestResponse.GetResponse(request);
            var searchResponse = _mapSearch.Map(responseText);
            return searchResponse;
        }

You can see my HttpWebRequest and HttpWebResponse code in my Goodreads API blog post. The response is returned as JSON, which, if I wanted, I could return right back to the client.
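If you do want to pull the tweets out of that JSON on the server, a library like Json.NET makes it simple. This is only a sketch; the results, from_user, and text field names are assumptions based on the shape of the old v1 search API response:

using Newtonsoft.Json.Linq;

// responseText is the raw JSON string returned from the Search method above
JObject json = JObject.Parse(responseText);
foreach (JToken tweet in json["results"])
{
    // Pull the author and the tweet text out of each result
    string user = (string) tweet["from_user"];
    string text = (string) tweet["text"];
    Console.WriteLine("{0}: {1}", user, text);
}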

Conclusion

Well, I hope this helps in understanding OAuth a little bit more. Again, if you want to make calls to the Twitter API and don't want to go through all this yourself, then get yourself a package off of NuGet like TweetSharp. It's easy to integrate into your application, and the project site has some pretty good documentation.

You can see the full code for this API on Github.


Debugging The Web - Using Chrome To Show CSS Element State - Part 1

Tuesday, May 15, 2012

Introduction

As a developer working on ecommerce and corporate websites, I have had to become somewhat proficient at debugging styles. I'm no designer, but because I have often not had the luxury of working with a designer on projects, I have had to go it alone and make sure the web pages met all the style guidelines that were given to me as part of the project. In so doing, I have come to rely on the debugging tools of the trade to help me make sure I get it right.

Over the years, I have gravitated to using the Chrome and Firefox/Firebug tools, and even on some occasions Internet Explorer, to assist me in debugging. These tools have really evolved and have made it so that producing nice-looking web pages is not out of reach for the classic, old-school back-end developer. It takes some practice and learning, and it's not easy, but it's definitely not out of reach either.

So with this in mind, I thought it would be useful to share some of my favorite development features in these browsers that I use when I want to debug styles on a web page. As I stated, most browsers nowadays have a lot of good debugging features, but some browsers add a nice little nugget that the others don't have, and those really become lifesavers. I'll blog more about other tools at a later time, but this week I would like to focus on how to debug the different states of an element: specifically, the different states of an anchor tag.

The Different States of an Element

In some cases, I have had to work on web pages that specify styles for the links on a page. These styles will sometimes define what the different states of a link look like. For example, a link could be in the following states:

  • link: the normal state of a link.
  • hover: the state of the link when a user mouses over it.
  • visited: the state of the link when the user has visited the referenced URL.
  • visited:hover: the combination of a visited link and the user hovering over it at the same time.

Using Chrome to Debug

Here is a quick example I put together with some links. If a link on a page is supposed to be a different color based on what state it is in, I would put rules like these in a style sheet:

 

a { color: blue; }
a:hover { color: red; }
a:visited { color: purple; }
a:visited:hover { color: orange; }

If I want to know what a link will look like in a certain state, and write a style for that state, I use Chrome's element state feature. As far as I know, Chrome is the only browser that has this feature, or at least it's the most obvious to use among the browsers, and even Chrome keeps it in a not-so-obvious place.

Here is where the button is located

Chrome State Button

When I click on that button I get four states that I can turn on and off for an element. Doing this, I can then see which style is being used for the link.

So if I didn't have any buttons checked, I would see something like this.

link in normal state

If I wanted to see what style is being used when a particular link has been visited, I could click the visited checkbox.

chrome state visited link

Notice that I now have a different style: the one where I specified the visited state.

The tricky one in all this is the hover state. In other browsers, I can see the hover state by, obviously, hovering over the link on the page. However, if the hover appearance is not as expected and is being overridden by a rule that is far down in the list of rules, I can't scroll down to see what that rule is while still hovering over the link. That's where this tool is a beauty! I can just click the hover checkbox, and now my link is permanently in the hover state.

chrome state link hover

The style for hover is now being used.

Another area where this feature becomes useful is when your link is in multiple states and it is not using the style you specified. For example, what if you did not want the hover style to take effect when the link was visited? In this case you would have to specify both states in the style.

chrome state visited hover

I checked both visited and hover, and now I can see that the style being used is specifically for that combination.

Hope that helps.

Building a Blog Redux - Goodreads feed using Backbone.js (Part 5)

Tuesday, May 8, 2012

This is the fifth post in a series of posts about how I went about building my blogging application.

  1. Building a Blog Redux - Why Torture Myself (Part 1)
  2. Building a Blog Redux - The tools for the trade (Part 2)
  3. Building a Blog Redux - Entity Framework Code First (Part 3)
  4. Building a Blog Redux - Web fonts with @font-face and CSS3 (Part 4)

So I put a right rail column on my blog site thinking I would probably put some content over there to engage my users. I have my tag cloud, and eventually I am going to put a search box there as a future feature, but I wasn't really excited about what else to put there. I thought about a dated archive list, but I have a dedicated page for that, and besides, who ever really clicks on those links?

I could put some ads there, but admittedly, I don't have enough traffic to consider that, and even if I did, I still probably would not do it right away. Although, I did put a placeholder on the right side in case I change my mind.

I could put my Twitter feed there, but I don't know; I don't think that is very interesting to a user reading my blog, plus it's sort of been done by everybody already.

Goodreads

Enter Goodreads. If you read books and are social, Goodreads is the perfect site for you. You can keep track of all the books you read, categorizing them as "Currently Reading", "Read", and "To Read". You can also categorize books by genre or whatever classification you want. The great thing about categorizing the books is that Goodreads will give you recommendations based on how you have your books categorized.

Goodreads is a social site, so just like Twitter, Facebook, etc., you make friends and then you can see what books they are reading and what they have rated and reviewed.

And just like most other social sites, Goodreads has a developer section with an API and other tools for you to use. You just go to their website, request an application key, agree to their terms of use and then you are pretty much good to go.

The API

The API call I am using is the one called "user.show". It pretty much contains the basic information about me, the user, such as what my bookshelves are and (what I am most interested in) my book status updates.

The particular node I want is action_text. Every time I update my status with a book I have read or started reading, I get a new one of these nodes. The node value is already HTML encoded, so that task is done for me by Goodreads.

<action_text>
  <![CDATA[
  gave 4 stars to: <a href="http://www.goodreads.com/book/show/12371896-the-node-beginner-book">
  The Node Beginner Book (ebook)</a> by <a href="http://www.goodreads.com/author/show/5132009.Manuel_Kiessling">
  Manuel Kiessling
  </a>
]]>
</action_text>

To get the XML for this particular message, I just need to make a GET request to their server. Unlike Twitter, I didn't have to add a whole bunch of entries to the request header (I am going to write a post about how to do that later); I only needed the Goodreads API URL, which looks like this.

http://www.goodreads.com/user/show/{user_id_number}.xml?key={your_key}

The call is a basic HttpWebRequest and HttpWebResponse.

public class HttpRequestHelper : IHttpRequestHelper
    {
        public string GetResponse(HttpWebRequest request)
        {
            ServicePointManager.Expect100Continue = false;
 
            if (request != null)
                using (var response = request.GetResponse() as HttpWebResponse)
                {
 
                    try
                    {
                        if (response != null && response.StatusCode != HttpStatusCode.OK)
                        {
                            throw new ApplicationException(
                                string.Format("The request did not compplete successfully and returned status code:{0}",
                                              response.StatusCode));
                        }
                        if (response != null)
                            using (var reader = new StreamReader(response.GetResponseStream()))
                            {
                                return reader.ReadToEnd();
                            }
                    }
                    catch (WebException exception)
                    {
                        return exception.Message;
                    }
                }
            return "The request is null";
        }
 
 
        public HttpWebRequest GetRequest(string fullUrl, string authorizationHeaderParams, string method)
        {
            var hwr = (HttpWebRequest)WebRequest.Create(fullUrl);
            if (! string.IsNullOrEmpty(authorizationHeaderParams)) hwr.Headers.Add("Authorization", authorizationHeaderParams);
            hwr.Method = method;
            hwr.Timeout = 3 * 60 * 1000;
            return hwr;
        }
    }

I have a function to build the HttpWebRequest, and then I pass that request to a function which makes the request and gets the HttpWebResponse.

Mapping the XML Response

I could have deserialized the XML to an object using the XmlSerializer, but since I only needed a few of the fields from a rather large XML response, I decided to just map the XML to a CLR object manually using LINQ to XML.

var view = new GoodReadsUserShowViewModel {Updates = new List<GoodReadsUpdateViewModel>()};
XDocument doc = XDocument.Parse(xml);
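The rest of the mapping is omitted here, but the idea is just to walk the document for the handful of nodes I care about. A rough sketch, assuming the status updates show up as action_text elements in the response and that the view model has an ActionText property to hold each one (that property name is my assumption):

// Copy each (already HTML-encoded) action_text value onto the view model
foreach (XElement actionText in doc.Descendants("action_text"))
{
    view.Updates.Add(new GoodReadsUpdateViewModel
    {
        ActionText = actionText.Value
    });
}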

 

Creating the MVC Partial View

Once I have my object built, I pass it back to the MVC controller, which will in turn pass it to a partial view.

        [OutputCache(Duration = 60000, VaryByParam = "*")]
        public ActionResult Index(string id)
        {
            try
            {
                GoodReadsUserShowViewModel result = GetGoodReadsUserShowViewModel(id);
                return PartialView("_Goodreads", result);
            }
            catch(Exception ex)
            {
                ErrorSignal.FromCurrentContext().Raise(ex);
                return PartialView("_Goodreads"new GoodReadsUserShowViewModel {ErrorMessage = ex.Message});
            }
        }

A few things to note up to this point. Since I know this feed is not going to change much (I can only read books so fast), I am caching the response for a fairly long time. In this case, performance is more important than timeliness, because I don't care whether you see the latest status the second after I update it.

Also, I could have passed back a JsonResult, but in this case, Backbone.js actually recommends that you bootstrap your data into the page on the initial request. Therefore, on my Goodreads partial view, I am taking the view model and passing it to a helper that serializes my model to JSON and puts it in a JavaScript variable.

<script src="@Url.Content("~/js/goodreads.js")" type="text/javascript"> </script>
<script type="text/javascript">
    var grApp = new GoodreadsApp(@Model.ToJson());
    grApp.start();
</script>

 

The ToJson extension method in my helper class looks like this.

using System.Web.Mvc;
using Newtonsoft.Json;
 
namespace AviBlog.Core.Helpers
{
    public static class JavaScriptSerializerHelper
    {
 
        public static MvcHtmlString ToJson(this object model)
        {
            string json = JsonConvert.SerializeObject(model);
            return new MvcHtmlString(json);
        }
         
    }
}

So when the page is actually rendered, the JavaScript model will look something like this.

<script type="text/javascript">

    var grApp = new GoodreadsApp({
        "UserId":"3425042",
        "UserName":"avington",
        "Name":"Steve Moseley",
        . . . });

    grApp.start();

</script>

 

Backbone.js Implementation

Here is what the Backbone.js website has to say about what Backbone.js is.

"Backbone.js gives structure to web applications by providing models with key-value binding and custom events, collections with a rich API of enumerable functions, views with declarative event handling, and connects it all to your existing API over a RESTful JSON interface."

It essentially provides a clean way to bind your front-end code to different events to provide rich user interaction. In this case, I am using it in a very simple fashion, binding the initial Goodreads data to a template using jQuery Templates. Later on, I will probably switch out jQuery Templates for jsRender, but for now jQuery Templates it is.

Derick Bailey, who probably has the best blog articles on Backbone, posted a response on Stack Overflow about the best way to render data on a page on the initial load, and as you can see from his response and my code, I pretty much did the same thing. Here is my Backbone.js code.

GoodreadsApp = (function (Backbone, _, $) {
    var GoodreadsModel = Backbone.Model.extend({});
 
    var GoodreadsCollection = Backbone.Collection.extend({
        model: GoodreadsModel
    });
 
    var GoodreadsView = Backbone.View.extend({
        el: $('.goodreads-widget'),
 
        initialize: function () {
            this.collection.bind("reset"this.render, this);
        },
 
        render: function () {
            var data = this.collection.models[0].attributes;
            if (data) {
                var $template = $('#goodreads-template');
                var goodreadsHtml = $template.tmpl(data);
                $(this.el).html(goodreadsHtml);
            }
        }
    });
 
    var goodreadsApp = function (initialModels) {
        this.start = function () {
            this.models = new GoodreadsCollection();
            this.myView = new GoodreadsView({ collection: this.models });
            this.models.reset(initialModels);
        };
    };
 
    return goodreadsApp;
})(Backbone, _, jQuery);

In this example, taken from that Stack Overflow answer, when the start function is called, an empty collection is initialized. The Backbone reset event is then triggered, which clears out the collection of models and fills it with a new set (in this case taken from the serialized JSON data). The render function is bound to the reset event, so when the event fires the template is rendered.

The jQuery template looks like this.

<script id="goodreads-template" type="text/x-jquery-tmpl">
    <h4>Goodreads Status</h4>
    <ul>
        {{each Updates}}
        <li class="clear">
            <img class="goodreads-book-img" src="${BookImageUrl}"/>
            <div><a href="${UserLink}">${Name}</a> {{html ActionText}}</div>
        </li>
        {{/each}}
        <li class="last-goodreads-item">Follow me on <a href="${UserLink}">Goodreads</a></li>
    </ul>
    
 
</script>

Conclusion

I would have put a picture of the result here, but you can just take a look at the right-hand panel and see it for yourself. I think it's a pretty cool section for only a couple hours of work. As always, you can see all the code for the application at GitHub.


Creating a Simple Publishing-Subscribe Messaging App Using Node.js, Express, and Faye

Tuesday, May 1, 2012

Introduction

I have been playing around with Node.js for a bit. You know--the hot new kid on the programming block that everyone is talking about. I'm getting old and maybe this is just a mid-life crisis, but I like learning the cool, new stuff. JavaScript on the server! Who knew?

Why Node

Node is popular for a couple of reasons. For one thing, it is JavaScript, so pretty much every web developer already knows the language. The engine that executes the scripts server side is called V8. This is the very same engine that executes JavaScript in your Chrome browser, and it is fast. The thing that makes Node.js stand out from other platforms is that, by its nature, it is non-blocking for I/O. There is some good information about it on O'Reilly, but essentially what this means is that Node.js handles its I/O asynchronously, and this makes it quite performant. Node.js does not tie up threads waiting on long-running call-outs; instead it uses what is called an event loop to handle all of the call-outs at the same time. That's not to say that Node.js doesn't have its issues, it does. It's still a bit immature (especially on a Windows machine), and you have to be careful with the whole asynchronous thing, as it can cause some unusual results. Still, I like it. I am having fun learning it.

Setting It Up

First off, a quick shoutout to the guys at http://nodecasts.org/ for getting me started. The screencast was very helpful. Because of the age of the screencast, the code was a little bit deprecated (Express was the culprit), but it didn't take much to find the new code syntax and correct the problems. I am going a little bit further in my example and making the message call via AJAX instead of on the command line via cURL.

To get Node.js up on your machine, go to http://nodejs.org and follow their installation instructions.

Once Node is up and running, go to http://npmjs.org/ and install the Node.js package manager. If you're a .Net guy, it is the equivalent of NuGet, or of Ruby's Gems.

On my Windows machine, I've set up a directory called Node (i.e. c:\Node\ ). I then created a project folder ( C:\node\faye_sample ).

From the command line, navigate to your project folder and run the following commands.

  • npm install express
  • npm install faye

These commands will install the Express and Faye packages on your machine that you can then reference in your code.

Express is basically a Node framework that gives you an MVC-style architecture, inspired by Sinatra in the Ruby world. For this example, I am only going to create a one-file application and use Express for some shortcuts. There are some really good Express screencasts at NodeTuts, so if you want to go further with Node, I recommend you give those a look.

Faye is a publish/subscribe messaging framework that allows any HTTP client to talk to a server in real time, as well as many clients to talk to each other in real time via the server.

The Server Code

Okay, let's take a look at the server-side code. The first thing I need to do is get a reference to Express and Faye.

var express = require('express'),
    faye = require('faye');

 

The next thing I need to do is initialize Faye on the server.

var bayeux = new faye.NodeAdapter({
    mount: '/faye',
    timeout: 45
});

In case you are wondering about that funny variable name: Faye uses the Bayeux protocol as its mechanism to transport the messages back and forth.

Next step, initialize the Node server using the Express wrappers.

var app = express.createServer();
app.configure(function() {
    app.use(express.bodyParser());
    app.use(express.static(__dirname + '/public'));
});

The express.bodyParser middleware will parse the incoming request and place its data into the req.body object. As you will see later, that object will contain my incoming message, which I can then get to rather easily.

The variable __dirname is the current directory my application is running in. The function express.static is basically telling Node that all of my static content is located in the public folder inside the current application folder.

To capture the posts from the different browsers, I need to set up a route handler that will accept a POST request and then broadcast it to all of the client listeners.

app.post('/message', function(req, res) {
    bayeux.getClient().publish('/channel', { text: req.body.message });
    console.log('broadcast message:' + req.body.message);
    res.send(200);
});

The first line of code captures the POST request to "/message" and passes the request and response objects to a callback function. The callback function then gets the message from the request and publishes it to the clients. I am also logging the message so I know that the server got it. Because I called bodyParser when I configured my server, I now have access to req.body.message, which contains my message from the client.

The last thing I need to do is attach the Faye adapter to the Express app and start up my Node server. In this case, I am listening on port 8123.

bayeux.attach(app);
app.listen(8123);

The Client

Okay, for simplicity's sake, I am going to create two HTML files with the exact same code in them and then hard-code Client #1 and Client #2 in the respective files. So let's look at the client code.

In my HTML markup, I have the following elements.

                <h1>Chat Client #1</h1>
		<div id="messages"></div>
		<textarea rows="2" cols="35" id="chat"></textarea>
		<input type='button' value='Chat' id='fire' />

I need to grab a script reference to the latest jQuery, and also the Faye script on the client side.

<script src='/faye/browser/faye-browser.js'></script>
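
(The jQuery reference itself is just a standard script tag. The exact version and source are up to you, so treat the line below as an example only.)

<!-- Example only: any reasonably recent jQuery build will work -->
<script src='http://code.jquery.com/jquery-1.7.2.min.js'></script>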

 

With the Faye reference, I can then set up a listener that will update my HTML whenever someone sends out a message.

                        var client = new Faye.Client('/faye',{
				timeout: 20
			});
			
			client.subscribe('/channel', function(message) {
				$('#messages').append('<p>' + message.text + '</p>');
			});

 

Next, I am going to use jQuery to post a message whenever a user clicks the chat button.

    		    var $chat = $('#chat');
    		    $('#fire').on('click', null, function() {
    		        var url = 'http://localhost:8123/message';
				
    		        var message = {message: 'Client 1: ' + $chat.val()};
    		        var dataType = 'json';
    		        $.ajax({
    		            type: 'POST',
    		            url: url,
    		            data: message,
    		            dataType: dataType,
    		        });
    		        $chat.val('');
    		    });
    		    

That's it!

I can now start up my server from the command line.

  • node server.js

Next, I open two browsers and start chatting.

[Screenshot: two browsers chatting via the Node server]

Now, if I look at my command-line server log, I should see the messages, because I put a line in the code to log them.

[Screenshot: the server log of Faye messages]

Conclusion

I set up a GitHub repository with this code in it that you can take a look at. As I explore Node some more, I'll probably write some more posts on Node.js, so stay tuned.


Building a Blog Redux - Web fonts with @font-face and CSS3 (Part 4)

Tuesday, April 24, 2012

This is the fourth post in a series of posts about how I went about building my blogging application.

  1. Building a Blog Redux - Why Torture Myself (Part 1)
  2. Building a Blog Redux - The tools for the trade (Part 2)
  3. Building a Blog Redux - Entity Framework Code First (Part 3)

On previous blog sites that I maintained, there wasn't much thought about what fonts I should be using. The fonts I would use were either Verdana or Tahoma, because they looked nice and were easy to read. However, in today's modern web, things have changed. People and companies want fonts that distinguish their sites from the crowd. They want a font style that is pleasing to the eye, that complements their site theme, and, when necessary, is easy to read.

I wanted the same thing for my site, and I also wanted to learn a little bit about how to implement a fancy font, so I went about choosing a nice, unique font for my blog site too.

Here are the requirements I had for my font selection:

  • It had to be free.
  • It had to be fairly simple to implement.
  • It had to be clear and modern looking.
  • It had to also look good on my iPhone.

It's Free

Obviously, when someone makes a font family, they would like to make money off their work, so when selecting fonts you have to consider the licensing that goes along with them. Fortunately, there are those out there who created their fonts with the intention of making them free to use. There is a very large selection, and a lot of great fonts to choose from.

Here are some of the free font sites I looked at.

  • Google Web Fonts: They have a good selection of fonts, and even a way to reference their fonts remotely.
  • The League of Movable Type: They have some really nice fonts, but not a large selection.
  • Fontex: Good selection here.
  • Font Squirrel: Another site with a good selection of free fonts and the site where I ultimately chose my font.

Easy to Implement (Using @font-face)...Well, Sort Of

@font-face is a new rule (well, actually it is an old rule that was dropped in CSS 2.1 and then added back in CSS 3) that allows you to apply a custom font face to any text on a page simply by adding a CSS style. Unlike some other approaches, no JavaScript is needed to get the font to display.

Here's where it gets a bit hairy. Different browsers support different font types, and not all browsers support the same ones. Currently the different types are WOFF, OTF, TTF, SVG, and EOT. Thankfully, sites like Font Squirrel have a @font-face kit generator that will generate all the needed font types for you, as well as generate the styles needed to reference those font types.

You simply choose a (legally obtained) font and then upload it to their site.

[Screenshot: the Font Squirrel @font-face kit generator]

The site will churn for a few seconds, and then you will receive a link to a font package to download. At that point, you can take the entire package, unzip it, and place it somewhere on your website. The picture below shows something like the file package you will receive.

[Screenshot: the font files included in the generated kit]

The CSS style that is generated looks like this:

@font-face {
    font-family: 'CantarellRegular';
    src: url('cantarell-regular-webfont.eot');
    src: url('cantarell-regular-webfont.eot?#iefix') format('embedded-opentype'),
         url('cantarell-regular-webfont.woff') format('woff'),
         url('cantarell-regular-webfont.ttf') format('truetype'),
         url('cantarell-regular-webfont.svg#CantarellRegular') format('svg');
    font-weight: normal;
    font-style: normal;
}

With the @font-face style reference in my CSS, I then use that font name like any other font on my page.

	font-family: 'CantarellRegular', Tahoma;

For older browsers that do not support the CSS3 @font-face rule, I am falling back to Tahoma, which is available on virtually all systems.

Looks Nice

So what do you think? I think it looks pretty nice. It's clear, has no jagged edges, and is easy to read, especially on a mobile device. It's quite handsome.

Looks good on my iPhone

I had to play around with my CSS to get the font sizes to look right, but that was more of a problem with my CSS styles than with the font itself.

To be honest, I think the font looks nicer on my iPhone than it does on my laptop browser.

[Screenshot: the font rendering on my iPhone]

Take Note, IIS Users:

When I deployed the font packages, one of the things I noticed is that although the font was being served, I was getting a 404 response for my WOFF file extension. Here is what I saw when I loaded up Fiddler.

HTTP/1.1 404 Not Found

After doing some searching around, it turns out that by default IIS 6 and higher (currently I am on IIS 7.5) does not have the MIME types for the .WOFF and .SVG fonts registered, so I had to open up IIS and add them. Below is a screenshot of where in IIS you go to make the change.

[Screenshot: the MIME Types console in IIS]

The MIME types are:

  • .WOFF     application/x-woff
  • .SVG       image/svg+xml

Once I made those entries, I did not see the 404 errors anymore.
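
If you would rather not click through the IIS console, the same MIME types can also be registered in the site's web.config. Below is a minimal sketch, assuming the extensions are not already registered at the server level (registering them twice causes a duplicate entry error).

<!-- Sketch: register the font MIME types for this site only -->
<configuration>
  <system.webServer>
    <staticContent>
      <mimeMap fileExtension=".woff" mimeType="application/x-woff" />
      <mimeMap fileExtension=".svg" mimeType="image/svg+xml" />
    </staticContent>
  </system.webServer>
</configuration>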

Conclusion

I hope you found this post useful, and as always, you can see the code for this website on GitHub.