By Ugo Lattanzi on June 3rd, 2014 in NodeJs

Lately I've been working a lot with Node.js and, as a developer who loves C# and .NET, I'm still not sure I really love Node :confused:

The first impression is positive, but I don't know if that's just because I'm playing with something new or if I really like the approach; for the moment, though, I'm happy with it.

This Friday I was describing my experience with Node to a colleague, discussing the differences between Node and .NET.

From my point of view the biggest difference between .NET and Node is the async implementation. The former is MTA (a multi-threaded application) and the latter is STA (a single-threaded application).

On paper, STA means you can't spawn a thread to execute something, so your application can handle only one request at a time.

Ok, that's absolutely false, because Node is smart :smile:

It's true that Node is STA, but it is async by default: whenever you do something that goes outside of your application (a database query, I/O, a web request and so on), Node uses the thread to do other work, like answering another request or executing other code in your application.

That means Node is really well optimised; in fact the performance of .NET and Node.js is very similar and, in several cases, Node is faster than .NET.
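To see this non-blocking behaviour in action, here's a minimal sketch (the file path is just an example): the read is handed off to the operating system, and the single thread is free to keep running while it waits for the result.

var fs = require('fs');

// The callback runs later, once the OS has finished reading the file.
fs.readFile('/etc/hosts', 'utf8', function (err, data) {
  if (err) throw err;
  console.log('2) file read completed (' + data.length + ' chars)');
});

// This line runs first: the thread was never blocked by the I/O.
console.log('1) still free to do other work');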

But how do you increase the number of processes with Node so that your system scales well?

In the .NET world you can find something similar in ASP.NET (hosted on IIS), where it's called a "web garden"; in Node it's called a cluster. Basically there is more than one active process, plus a "manager".

In that scenario you can use one process for each core of your computer, so your Node application scales better with your hardware.

Basically it's like running 'node app.js' once for each core you have, plus another process to manage them all.

First step: load the cluster module. It's a Node core module, so there is nothing to install via npm; a simple require is enough.

The goal of this example is to create one process for each core, so the first thing to do is to read the number of cores installed on your laptop:

var cluster = require('cluster');

if (cluster.isMaster) {
  var numCPUs = require('os').cpus().length;
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  Object.keys(cluster.workers).forEach(function(id) {
    console.log(cluster.workers[id].process.pid);
  });
}

The cluster.isMaster check is necessary to be sure you fork only once: workers run the same script as the master, so without the check each new worker would fork again.

Now if you run the app you should have one process for each core, plus the master:

[Screenshot: the process list showing one Node process per core, plus the master]

In my case I have 8 cores, so 9 processes counting the master.

The next step is to add a web server. The http module is also part of Node core, so again there is nothing to install: just require it and put your logic in the worker branch of each fork:

var cluster = require('cluster');
var http = require('http');

if (cluster.isMaster) {
  var numCPUs = require('os').cpus().length;
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  Object.keys(cluster.workers).forEach(function(id) {
    console.log(cluster.workers[id].process.pid);
  });
} else {

  // Create HTTP server.
  http.Server(function(req, res) {
    res.writeHead(200);
    res.end("This answer comes from the process " + process.pid);

  }).listen(8080);
}

Now, calling the web server, you can see which process is answering your request:

[Screenshot: the browser showing the pid of the worker process that answered the request]

Because the code is so simple, you'll probably get the same pid for every request from your browser. The easiest way to test the cluster is to block the thread (yes, I said that) so that the "balancer" has to switch the next request to another process, demonstrating the cluster.

In Node there isn't something like Thread.Sleep, so the best way to block the thread is to create something that keeps it busy, like an infinite loop :smirk:

var cluster = require('cluster');
var http = require('http');

if (cluster.isMaster) {
  var numCPUs = require('os').cpus().length;
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  Object.keys(cluster.workers).forEach(function(id) {
    console.log(cluster.workers[id].process.pid);
  });
} else {

  // Create HTTP server.
  http.Server(function(req, res) {
    res.writeHead(200);
    res.end("This answer comes from the process " + process.pid);

    //that's just for example
    while(true){

    }

  }).listen(8080);
}

If you want to manage all the processes and log some events, it can be helpful to track a few events for each worker, to send a message from the "worker" to the "master", or to check when a process dies.

To do that you can subscribe to the events each worker exposes (message, online, listening, exit); here's the code:

var cluster = require('cluster');
var http = require('http');

if (cluster.isMaster) {

  console.log("Master pid: " + process.pid);

  var numberOfRequests = 0;

  var numCPUs = require('os').cpus().length;
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  Object.keys(cluster.workers).forEach(function(id) {
    console.log('creating process with id = ' + cluster.workers[id].process.pid);

    //getting message
    cluster.workers[id].on('message', function messageHandler(msg) {
      if (msg.cmd && msg.cmd == 'notifyRequest') {
        numberOfRequests += 1;
      }

      console.log("Getting message from process : ", msg.procId);
    });

    //Getting worker online
    cluster.workers[id].on('online', function online() {
      console.log("Worker pid: " + cluster.workers[id].process.pid + " is online");
    });

    //printing the listening port
    cluster.workers[id].on('listening', function listening(address) {
      console.log("Listening on port ", address.port);
    });

    //Catching errors
    cluster.workers[id].on('exit', function(code, signal) {
      if( signal ) {
        console.log("worker was killed by signal: "+signal);
      } else if( code !== 0 ) {
        console.log("worker exited with error code: "+code);
      } else {
        console.log("worker success!");
      }
    });
  });

  //Printing number of requests
  setInterval(function(){
    console.log("Handled " + numberOfRequests + " requests");
  }, 3000);

} else {

  // Create HTTP server.
  http.Server(function(req, res) {
    res.writeHead(200);
    res.end("This answer comes from the process " + process.pid);

    console.log("Message sent from http server");

    // Notify master about the request
    process.send({ cmd: 'notifyRequest', procId : process.pid });


  }).listen(8080);
}
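One thing the demo doesn't cover is what happens when a worker dies: the cluster just shrinks. A common pattern (here a minimal sketch, not part of the original demo) is to let the master fork a replacement from its exit handler:

var cluster = require('cluster');
var http = require('http');

if (cluster.isMaster) {
  var numCPUs = require('os').cpus().length;
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  // Respawn a replacement whenever a worker dies, so the
  // cluster always keeps one process per core.
  cluster.on('exit', function(worker, code, signal) {
    console.log('Worker ' + worker.process.pid + ' died, forking a new one');
    cluster.fork();
  });
} else {
  http.Server(function(req, res) {
    res.writeHead(200);
    res.end("This answer comes from the process " + process.pid);
  }).listen(8080);
}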

I've created a repository on GitHub with some demos about Node.js (including this one). You can find the repository here.

Have fun.

By Ugo Lattanzi on May 21st, 2014 in Various

During the last year I used WordPress for my blog, but I never really liked it. I'm not in love with WordPress, and there are several reasons, which I'll try to explain here:

  • Maintenance;
  • Hosting;
  • Updates;
  • Backup;
  • Test environment;
  • Performance;

I'm not saying that WordPress is not good, I'm saying that it doesn't match my requirements. From my point of view (now) a blog engine is something that lets me write a post in an easy way.

Moreover, in past years I created a blog engine (never completed) based on .NET technologies; its name is Dexter and it's available on GitHub here. IMHO it is/was better than WordPress, but it had many of the problems mentioned above (my mistake).

Some weeks ago, David Ebbo synthesised my idea of a blog engine (for a nerd, of course) in this post.

So, everything started from that post: I migrated my Italian blog first, then this blog.

Why Jekyll?

It offers important advantages; the most important is that it doesn't require any server-side code :thumbsup:

Finally, it's easy to use for a developer. The setup guide is available here (if you are running Windows I suggest following this guide).

Basically it's built on Ruby and it generates static files (simple .html files) with the correct folder structure for pagination, custom pages, permalinks and so on.
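A post, for example, is just a Markdown file in the _posts folder whose name starts with the publication date (something like _posts/2014-05-21-hello-jekyll.md; the name and fields below are made up for illustration). The YAML block at the top tells Jekyll how to render it:

---
layout: post
title: "Hello Jekyll"
date: 2014-05-21
---

The rest of the file is plain Markdown that Jekyll turns
into a static .html page at build time.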

[Screenshot: the static site structure generated by Jekyll]

Of course you can't do some things like comments and search, but this is not a problem, because there are several external services that offer search and comments for free (Facebook, Google and Disqus in my case).

Moreover, there is another cool advantage to moving your blog to Jekyll: GitHub Pages.

GitHub offers all its users free hosting for static files: just create a repository named yourgithubusername.github.io.

Once you have created the repository, you just need to push your static files into it and navigate to http://yourgithubusername.github.io

That's all!

One problem could be that Jekyll generates the static files only after a build, so you would have to build and then push. Fortunately there is a solution for this too: GitHub Pages runs the Jekyll build for you, so you don't have to (here's a complete guide).

So, now you have free hosting, amazing performance (your HTML is served from GitHub's CDN), everything is managed by a Git repository (so you have the history of all your posts; here's an example) and you can use your own custom domain.

In short:

  • Create a new Markdown file in the _posts folder;
  • Write the post;
  • Push to the GitHub repo.

GitHub will build the static files for you. Do you want to know how fast and reliable GitHub is with Jekyll? Take a look at this report (remember that my skin is not optimised: lots of requests):

[Screenshot: performance report for this blog on GitHub Pages]

If I've convinced you to use GitHub, here's some advice:

  • To migrate your posts, use this (it supports several sources like WordPress, XML, RSS and so on);
  • If you add the license on GitHub, you can also create/modify/delete posts directly from the GitHub website, so you don't have to set up your local environment. Read this post;
  • Enable some cool gems like I did here;
  • To render the emoji (the point above is mandatory), remember the right syntax;
  • Be careful with redirects if you change the URL permalink (take a look here);
  • If Atom is your favorite Markdown editor, install this package;
  • A good free editor for Windows is available here;
  • If you want to create your own custom skin, I suggest starting with Jekyll Bootstrap, available here.

If you want something with more features, but you like the speed and the idea of static files, take a look at Octopress.

Enjoy it!

By Ugo Lattanzi on May 16th, 2014 in Events

After coming back from my long day in Paris, I finally uploaded the slides and demos of my talk with Simone about OWIN and Katana.

The code used during the talk is available on GitHub here.

You can find 5 demos:

  • 01.OwinIIS: Demo of running Katana on IIS Host
  • 02.OwinHost: Demo of running Katana on OwinHost.exe
  • 03.OwinSelfHost: Demo of running Katana on self host and with custom error page
  • 04.OwinWebAPI: Running WebAPI on top of Katana
  • 05.OwinMiddleware: Example of using three different middlewares in the OWIN pipeline, each built using a different approach.

As usual the slides are available on SlideShare here, but you can also read them from this post.

Finally, I have to say thanks to all the people who came to the conference, and a special thanks to Rui: the NCrafts conference was amazing!

Hope to see you next year!

By Ugo Lattanzi on April 22nd, 2014 in Events


Recently I've been writing and reading lots of articles/tweets about OWIN and its implementation Katana.

The reason is pretty simple: I'm writing a book with Simone Chiaretta about OWIN for Syncfusion. The book is part of their “Succinctly” e-book series, so nothing big and complicated: just all you need to know about OWIN/Katana and how to use them in your application.

For this reason we are really focused on OWIN. In the meantime, our friend Rui Carvalho organized a super cool conference in Paris named NCrafts.

It is "dangerous" combo because the conference looks really amazing and I'll speak with Simone about OWIN :-)

All jokes aside, the conference will be really awesome (Web, Cloud, Data and so on) so, if you don't have anything planned for May 16th, hurry up and come to Paris (all the conference info is here).

I'd like to finish the post with a quote from the NCrafts web site:

In other words, we love building software with art, passion and technology and share with everyone.

By Ugo Lattanzi on March 4th, 2014 in webapi

With the latest version of ASP.NET Web API, Microsoft introduced support for cross-domain requests, usually called CORS (Cross-Origin Resource Sharing).

By default it's not possible to make HTTP requests using JavaScript from a source domain that is different from the called endpoint's. For example, this means that it's not possible to call the URL http://mysite.com/api/myrestendpoint from the domain http://yoursite.com.

This limitation was introduced for security reasons: without this protection, malicious JavaScript code could get info from another site without the user noticing.

However, even if the reason for this limitation is clear, sometimes we still need to call something that is not hosted on our site. The first solution is to use JSONP. This approach is easy to use and supported by all browsers; the only problem is that the only HTTP verb supported is GET, which has a limitation on the length of the string that can be passed as a query parameter.
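As a quick reminder of how JSONP works, here's a minimal sketch (the endpoint URL and the callback name are made up): the page injects a script tag pointing at the other domain, and the server wraps its JSON response in a call to the function named in the callback query parameter.

<script>
  // The server's response must be JavaScript that invokes this
  // function, e.g.: handleData({"value": 42});
  function handleData(data) {
    console.log(data);
  }
</script>
<!-- The "request" is just a script tag, which is allowed cross-domain. -->
<script src="http://othersite.com/api/data?callback=handleData"></script>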

On the other hand, if you need to send a lot of information, this approach won't work, so the solution could be to "proxy" the request locally, forwarding the data server side, or to use CORS.

Basically, CORS allows you to overcome the problem by defining some rules that make the request more "secure". Of course, the first thing we need is a browser that supports CORS: fortunately all the latest browsers do. Anyway, we have to consider that, in the real world, there are several clients still using Internet Explorer 8 which, among other things, doesn't support CORS.

The following table (http://caniuse.com/cors) shows which browsers offer CORS support.

[Table: CORS browser support, from caniuse.com]

There are several workarounds that allow you to use CORS with IE8/9, but there are some limitations on the verbs (more info here).

Now that it's clear what CORS is, it's time to configure it using one of the following browsers:

  • Internet Explorer 10/11
  • Chrome (all versions)
  • Firefox 3.5+
  • Safari 4.x

Now we need two different projects, one for the client application and another one for the server, hosted on different domains (in my example I used Azure Web Sites, so I have http://imperdemo.azurewebsites.net for the server and http://imperclient.azurewebsites.net for the client).


Server Application

Once the project has been created, it's important to enable CORS for our "trusted" domains; in my sample, imperclient.azurewebsites.net.

If you used the default Visual Studio 2013 template, your Global.asax.cs should look like this:

public class WebApiApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        AreaRegistration.RegisterAllAreas();
        GlobalConfiguration.Configure(WebApiConfig.Register);
        FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
        RouteConfig.RegisterRoutes(RouteTable.Routes);
        BundleConfig.RegisterBundles(BundleTable.Bundles);
    }
}

Next, it's time to edit the file with the API configuration, "WebApiConfig.cs" in "App_Start".

N.B.: Before editing the file it's important to install the right NuGet package; the default template included with Visual Studio doesn't include the CORS package, so you have to install it manually.

PM> Install-Package Microsoft.AspNet.WebApi.Cors

Once all the "ingredients" are ready, it's time to enable CORS:

using System.Web.Http;

namespace imperugo.webapi.cors.server
{
    public static class WebApiConfig
    {
        public static void Register(HttpConfiguration config)
        {
            // Web API configuration and services
            config.EnableCors();

            // Web API routes
            config.MapHttpAttributeRoutes();

            config.Routes.MapHttpRoute(
                name: "DefaultApi",
                routeTemplate: "api/{controller}/{id}",
                defaults: new { id = RouteParameter.Optional }
            );
        }
    }
}

Our API Controller looks like this:

using System.Collections.Generic;
using System.Web.Http;
using System.Web.Http.Cors;

namespace imperugo.webapi.cors.server.Controllers
{
    [EnableCors(origins: "http://imperclient.azurewebsites.net", headers: "*", methods: "*")]
    public class ValuesController : ApiController
    {
        // GET api/values/5
        public string Get()
        {
            return "This is my controller response";
        }
    }
}

The most important parts of this code are the EnableCors method and the attribute of the same name (which specifies the allowed origins, headers and verbs).

In case you don't want to completely "open" the controller to CORS requests, you can apply the attribute to a single action, or leave the attribute on the controller and apply the DisableCors attribute to the actions you want to "close".
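Here's a minimal sketch of that second option (the controller and its actions are made up for illustration):

using System.Web.Http;
using System.Web.Http.Cors;

namespace imperugo.webapi.cors.server.Controllers
{
    // CORS is enabled for the whole controller...
    [EnableCors(origins: "http://imperclient.azurewebsites.net", headers: "*", methods: "*")]
    public class SampleController : ApiController
    {
        // ...so this action accepts cross-origin requests...
        public string Get()
        {
            return "Open to CORS requests";
        }

        // ...while this action opts out and stays same-origin only.
        [DisableCors]
        public string Post()
        {
            return "Closed to CORS requests";
        }
    }
}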

Client Application

Now that the server is ready, it's time to work on the client side.

The client code is just plain JavaScript (with jQuery for convenience), so you can use a simple .html page without any server-side code.

The HTML Code:

<div class="jumbotron">
    <h2>Test CORS (Cross-origin resource sharing)</h2>
    <p class="lead">
        <a href="#" class="btn btn-primary btn-large" id="testButton">Test it now&raquo;</a></p>
    <p>
    <p id="response">
        NoResponse
    </p>
</div>

JavaScript Code:

<script type="text/javascript">
    var feedbackArea = $('#response');
    $('#testButton').click(function () {
        $.ajax({
            type: 'GET',
            url: 'http://imperdemo.azurewebsites.net/api/values'
        }).done(function (data) {
            feedbackArea.html(data);
        }).fail(function (jqXHR, textStatus, errorThrown) {
            feedbackArea.html(jqXHR.responseText || textStatus);
        });
    });
</script>

If you did everything right, you can now deploy both apps (server and client) and test them.

[Screenshot: the client page with the "Test it now" button and the initial "NoResponse" text]

When you click on the "Test it now" button, the result should look like this:

[Screenshot: the server's response displayed in the client page]

If something goes wrong instead, check the steps above.


How does it work?

CORS is a simple "check", based on headers, between the caller and the server.

The browser (client) adds the current domain to the request headers using the Origin key.

The server checks that this value matches one of the allowed domains specified in the attribute, and answers with another header named Access-Control-Allow-Origin.

If the two values match, you get the data; otherwise you'll get an error.
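Trimmed down to the relevant headers, the exchange looks roughly like this (a sketch based on the demo URLs above):

GET /api/values HTTP/1.1
Host: imperdemo.azurewebsites.net
Origin: http://imperclient.azurewebsites.net

HTTP/1.1 200 OK
Access-Control-Allow-Origin: http://imperclient.azurewebsites.net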

The screenshot below shows the headers:

[Screenshot: the request and response headers, showing Origin and Access-Control-Allow-Origin]

Here, instead, is the classic error you get when the headers don't match:

[Screenshot: the browser console error for a blocked cross-origin request]


Conclusions

For someone like me who loves to separate the application using an API layer, CORS is absolutely cool. The only downside is that it's not supported by all browsers. Let's just hope that all the IE8/9 installations will be replaced soon :-)

The demo is available here.