By Ugo Lattanzi on Mar. 5th, 2015 in azure

I know, the title is a bit provocative, or presumptuous if you prefer, but I think this post could be useful if you want to approach Redis as a cache server using .NET. For all the people who don't know what Redis is, let me quote its definition:

Redis is an open source, BSD licensed, advanced key-value cache and store.

And why is it so cool? This is probably the best answer you can find on the internet (source here):

Redis running on an entry level laptop can scan a 1 million key database in 40 milliseconds.

Installation

Now that it's clear why Redis is so cool and why lots of enterprise applications use it, we can see how to use it. First of all, we have to download Redis from here, unzip the file and run it locally

> redis-server.exe redis.conf

and the console output should be something like this:

RedisConsole
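
To quickly check that the server is reachable, you can ping it with the bundled command-line client (assuming you are using the Windows distribution, which ships redis-cli.exe next to the server):

> redis-cli.exe ping
PONG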

If you want to use Redis on Microsoft Azure, you can do it by creating your instance here:

Azure1

Azure2

Choose the best plan for you and the location, add your name in the proper field and create it.

Creation could take a while and sometimes you can get errors. The reason is that the portal is still in beta, but don't worry: keep trying until you get the Redis cache server up & running.

Azure3

Azure4

Configuration

Here we go, Redis is up & running on your dev machine and/or on Azure if you chose it. Before starting to write code, it's important to choose the client library to use. The most used libraries are:

  • StackExchange.Redis;
  • ServiceStack.Redis;

The first one is free and open source so, if you want to use it, you can do it easily. The other one has an AGPL license (from $149 to $249).

If you prefer ServiceStack.Redis, you can downgrade to version 3.9.71, which was the last truly free version.

In this article I'm going to use StackExchange.Redis, so let's start by installing it using NuGet:

PM> Install-Package StackExchange.Redis

There is also a strong-named package (StackExchange.Redis.StrongName) if you need to use it in a signed assembly.

Now, it's time to write some good code:

using StackExchange.Redis;

namespace imperugo.blog.redis
{
    class Program
    {
        private static ConnectionMultiplexer connectionMultiplexer;
        private static IDatabase database;

        static void Main(string[] args)
        {
            Configure();
        }

        private static void Configure()
        {
            //use the local Redis installation
            var connectionString = string.Format("{0}:{1}", "127.0.0.1", 6379);

            //use the Azure Redis installation
            var azureConnectionString = string.Format("{0}:{1},ssl=true,password={2}",
                                    "imperugo-test.redis.cache.windows.net",
                                    6380,
                                    "Azure Primary Key");

            connectionMultiplexer = ConnectionMultiplexer.Connect(connectionString);
            database = connectionMultiplexer.GetDatabase();
        }
    }
}

For some plans, Redis on Azure uses SSL by default. If you prefer a non-secure connection you can enable it via the Azure Portal; in that case use port 6379 and remove ssl=true from the connection string.
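
If you prefer not to build the connection string by hand, the same settings can be expressed with the ConfigurationOptions class. Here is a minimal sketch; the host name and password are placeholders you need to replace with your own values:

var options = new ConfigurationOptions
{
    EndPoints = { { "imperugo-test.redis.cache.windows.net", 6380 } },
    Ssl = true,
    Password = "Azure Primary Key" // replace with your Azure access key
};

connectionMultiplexer = ConnectionMultiplexer.Connect(options);
database = connectionMultiplexer.GetDatabase();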

Add and Retrieve cache objects

StackExchange.Redis stores data in Redis as a byte[], so whatever you store in Redis must be converted into a byte[] (strings are converted automatically by the StackExchange.Redis implementation, so we don't have to do it).

Let's start with a simple object like a string:

private static bool StoreData(string key, string value)
{
    return database.StringSet(key, value);
}

private static string GetData(string key)
{
    return database.StringGet(key);
}

private static void DeleteData(string key)
{
    database.KeyDelete(key);
}

and now we can use these methods:

static void Main(string[] args)
{
    Configure();

    bool stored = StoreData("MyKey","my first cache string");

    if (stored)
    {
        var cachedData = GetData("MyKey");

        bool isIt = cachedData == "my first cache string";
    }
}

That's pretty simple, but what about storing complex objects? As I wrote above, StackExchange.Redis stores only byte[] data, so we have to serialize our complex object and convert it into a byte[] (there is an implicit conversion for strings, which is why we didn't convert the string to a byte[]).

The easiest (and probably the best) way to store complex objects is to serialize the object into a string before storing the data in Redis.

Choose your favorite serializer (Newtonsoft.Json in my case) and create some helpers like these:

public bool Add<T>(string key, T value, DateTimeOffset expiresAt) where T : class
{
    var serializedObject = JsonConvert.SerializeObject(value);
    var expiration = expiresAt.Subtract(DateTimeOffset.Now);

    return database.StringSet(key, serializedObject, expiration);
}

public T Get<T>(string key) where T : class
{
    var serializedObject = database.StringGet(key);

    return JsonConvert.DeserializeObject<T>(serializedObject);
}

Now we are able to put and retrieve complex objects into Redis; the next step is to remove them and check whether a value exists:

public bool Remove(string key)
{
    return database.KeyDelete(key);
}

public bool Exists(string key)
{
    return database.KeyExists(key);
}
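
Putting the helpers together, a typical usage looks like the following sketch (Person is just a hypothetical class used for the example):

var person = new Person { Name = "Ugo", Surname = "Lattanzi" };

// store the object for 10 minutes
bool added = Add("person:1", person, DateTimeOffset.Now.AddMinutes(10));

if (Exists("person:1"))
{
    var cachedPerson = Get<Person>("person:1");
}

// remove the object when it's no longer needed
bool removed = Remove("person:1");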

If you need async methods, don't worry: StackExchange.Redis has an async overload for almost every method.
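
For example, StringSet and StringGet have StringSetAsync and StringGetAsync counterparts; here is a minimal sketch of the string helpers rewritten in their async form (it requires System.Threading.Tasks):

private static async Task<bool> StoreDataAsync(string key, string value)
{
    return await database.StringSetAsync(key, value);
}

private static async Task<string> GetDataAsync(string key)
{
    return await database.StringGetAsync(key);
}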

Resources

Redis Commands: absolutely the best reference to understand how Redis works and what StackExchange.Redis does under the hood.

The StackExchange.Redis documentation is absolutely helpful if you choose this library as your wrapper.

StackExchange.Redis.Extensions is a great library (and I suggest it to you) that wraps the common operations needed with StackExchange.Redis (basically you don't need to serialize objects or create helpers like I explained above):

  • Add complex objects to Redis;
  • Remove an object from Redis;
  • Search Keys into Redis;
  • Retrieve multiple objects with a single roundtrip;
  • Store multiple objects with a single roundtrip;
  • Async methods;
  • Retrieve Redis Server status;
  • Much more;

It uses Json.NET (Newtonsoft), Jil or MsgPack CLI to serialize objects into a byte[]. Anyway, we'll look at it in the next blog post.
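
Just to give an idea of how it looks, here is a rough sketch based on the library's cache client abstraction; the exact type and method names may differ between versions, so treat this as an assumption and check the project's documentation:

// assumption: constructor and method names follow the library's early API
var serializer = new NewtonsoftSerializer();
var cacheClient = new StackExchangeRedisCacheClient(serializer, connectionString);

// serialization to byte[] is handled by the library
cacheClient.Add("person:1", new Person { Name = "Ugo" }, DateTimeOffset.Now.AddMinutes(10));
var cachedPerson = cacheClient.Get<Person>("person:1");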

Investigating Timeout Exceptions in StackExchange.Redis for Azure Redis Cache: a great article about possible timeout exception problems with Redis and Azure.

Dashboard

Azure Dashboard

AzureRedis-Dashboard

It offers basic stats, but it's free when you use Redis with Microsoft Azure.

Redsmin

Redsmin-Dashboard

Probably the most complete dashboard for Redis: it offers a set of stats about your Redis servers, supports Azure and has a good prompt that allows you to run Redis commands directly on the server without using C# or any other programming language. Unfortunately it is not free; here are the plans and pricing.

Redis Desktop Manager

Redismin-Dashboard

An open source tool for Windows, Mac and Linux hosted on GitHub here (right now it's at version 0.7.6). It lets you run Redis commands like Redsmin does, but unfortunately it doesn't support Azure yet (there is an open issue about that here).

Redis Live

redis-live-Dashboard

It's a real-time dashboard for Redis written in Python.

Conclusions

Redis is absolutely one of the best in-memory databases available right now. There is a wrapper for every language, it's got good documentation and it's free. If I were you I'd give it a look!

Stay tuned!

By Ugo Lattanzi on Feb. 17th, 2015 in aspnet

In the previous post, I wrote about HTTP security, particularly about "special" headers. This post is partially related to the previous one, meaning I am again writing about security in a common scenario for web applications.

How many times have you added a redirect from an HTTP request to HTTPS? I think you have done it more than once and, looking on the internet, there are several simple solutions.

If you are using OWIN, it's enough to create a custom middleware like this:

using System.Threading.Tasks;
using Microsoft.Owin;

public class ForceHttpsMiddleware : OwinMiddleware
{
    private readonly int port;

    public ForceHttpsMiddleware(OwinMiddleware next, int port) : base(next)
    {
        this.port = port;
    }

    public override Task Invoke(IOwinContext context)
    {
        if (context.Request.Uri.Scheme == "http")
        {
            var httpsUrl = string.Format("https://{0}:{1}{2}", context.Request.Uri.Host,
                port,
                context.Request.Uri.PathAndQuery);

            // redirect and stop processing the pipeline for this request
            context.Response.Redirect(httpsUrl);
            return Task.FromResult(0);
        }

        return Next.Invoke(context);
    }
}
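
To plug the middleware into the pipeline, register it in your OWIN Startup class (using Owin); a minimal sketch, assuming 443 as the HTTPS port:

public void Configuration(IAppBuilder app)
{
    // register the HTTPS enforcement as early as possible in the pipeline
    app.Use<ForceHttpsMiddleware>(443);

    // ... the rest of the pipeline (authentication, WebAPI, etc.)
}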

Nothing complex here, but the question is: "Is it correct to redirect a user from an unsecure connection to a secure connection?" Basically the answer should be yes, but you must be careful about a particular scenario.

An unsecure request (HTTP) that includes a cookie and/or session id is subject to hijacking attacks, and we don't want that to happen on our website. The easiest way to prevent this is to release the cookies only in secure mode, which means the cookies are not available from an unsecure request but only over HTTPS, preventing a MITM (man-in-the-middle) attack.

Using ASP.NET and OWIN you can do it easily:

app.UseCookieAuthentication(new CookieAuthenticationOptions
{
    AuthenticationType = "Cookies",
    CookieSecure = CookieSecureOption.Always,
    CookieHttpOnly = true
});

Here the most important part is the CookieSecure property: it specifies that the cookie can only be accessed over HTTPS requests. To complete the security scenario, you could also add HTTP Strict Transport Security (HSTS), explained here.
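
If you don't want to pull in a dedicated package for that, a bare-bones way to emit the HSTS header from OWIN is an inline middleware like this sketch:

app.Use(async (context, next) =>
{
    // the header is only meaningful on secure responses
    if (context.Request.IsSecure)
    {
        // instruct browsers to use HTTPS for one year, including sub-domains
        context.Response.Headers.Set("Strict-Transport-Security", "max-age=31536000; includeSubDomains");
    }

    await next();
});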

Enjoy.

By Ugo Lattanzi on Feb. 9th, 2015 in OWIN

There are several ways to add security to our web applications; sometimes it can be difficult and require several hours but, with a good architecture, it can be very easy.

What some developers don't know is that there are some very useful HTTP headers available that help your web application be more secure, with the support of modern browsers.

The site OWASP has a list of the common security-related HTTP headers that every web application must have.


Strict-Transport-Security

Also known as HSTS, it is an opt-in security enhancement that is specified by a web application to enforce secure (HTTP over SSL/TLS) connections to the server, preventing downgrade attacks like man-in-the-middle.

Browser support:

Browser             Version
Internet Explorer   not supported
Firefox             from version 4
Opera               from version 12
Safari              from Mavericks (Mac OS X 10.9)
Chrome              from version 4.0.211.0

Options

Options Description
max-age=31536000 Tells the user-agent to cache the domain in the STS list (which is a list that contains known sites supporting HSTS) for one year
max-age=31536000; includeSubDomains Tells the user-agent to cache the domain in the STS list for one year and include any sub-domains.
max-age=0 Tells the user-agent to remove, or not cache the host in the STS cache

Example:

Strict-Transport-Security: max-age=16070400; includeSubDomains


X-Frame-Options

Provides Clickjacking protection.

Browser support:

Browser             DENY/SAMEORIGIN                 ALLOW-FROM
Internet Explorer   from 8.0                        from 9.0
Firefox             from version 3.6.9 (1.9.2.9)    from version 18.0
Opera               from version 10.50              not supported
Safari              from version 4.0                not supported
Chrome              from version 4.1.249.1042       not supported

Options

Options Description
DENY The page cannot be displayed in a frame, regardless of the site attempting to do so
SAMEORIGIN The page can only be displayed in a frame on the same origin as the page itself
ALLOW-FROM http://www.tostring.it The page can only be displayed in a frame on the specified origin.

Example:

X-Frame-Options: deny


X-XSS-Protection

This HTTP Header prevents cross-site scripting (XSS) by enabling the filters available in the most recent browsers.

Browser             Version
Internet Explorer   supported
Firefox             not supported
Opera               not supported
Safari              not supported
Chrome              supported

Options

Options Description
0 Disables the XSS Protections.
1 Enables the XSS Protections.
1; mode=block Enables XSS protections and prevents browser rendering if a potential XSS attack is detected
1; report=http://site.com/report Available only for Chrome and WebKit; allows reporting the possible attack to a specific URL, sending the data as JSON via POST

Example:

X-XSS-Protection: 1; mode=block


X-Content-Type-Options

This HTTP Header prevents the browsers from MIME-sniffing a response away from the declared content-type.

Options

The only option available here is nosniff

Example:

X-Content-Type-Options: nosniff


Content-Security-Policy

This HTTP Header (aka CSP) is very powerful and requires precise tuning because we need to specify all the trusted sources for our pages, like images, scripts, fonts and so on.

With the correct configuration the browser doesn't load untrusted sources, preventing the execution of dangerous code.

Browser             Version
Internet Explorer   partial support starting from 9.0
Firefox             from version 4
Opera               from version 15
Safari              partial support starting from 5.1, total support from 6
Chrome              from version 14

Options

Options Description
default-src Specifies the loading policy for all resource types in case one of the following directives is not defined (fallback)
script-src The script-src directive specifies valid sources for JavaScript
object-src The object-src directive specifies valid sources for the <object>, <embed>, and <applet> elements.
style-src The style-src directive specifies valid sources for stylesheets.
img-src The img-src directive specifies valid sources for images and favicons.
media-src The media-src directive specifies valid sources for loading media using the <audio> and <video> elements.
frame-src The frame-src directive specifies valid sources for web workers and nested browsing contexts loading using elements such as <frame> and <iframe>
font-src The font-src directive specifies valid sources for fonts loaded using @font-face
connect-src The connect-src directive defines valid sources for XMLHttpRequest, WebSocket, and EventSource connections
form-action The form-action directive specifies valid endpoints for <form> submissions
plugin-types The plugin-types directive specifies the valid plugins that the user agent may invoke.
reflected-xss Instructs a user agent to activate or deactivate any heuristics used to filter or block reflected cross-site scripting attacks, equivalent to the effects of the non-standard X-XSS-Protection header
report-uri The report-uri directive instructs the user agent to report attempts to violate the Content Security Policy (send json using post)

Example:

Content-Security-Policy: default-src 'self'

Now that we know all these headers, let's see how to implement them in our applications. As usual there are several ways to configure the HTTP headers: we can do it using the web server configuration (IIS and Apache support that) or, if we use OWIN, we can do it with a simple middleware without configuring the web server.

The last one is absolutely my favorite implementation because I can switch the web server without configuring anything (that's one of the reasons why OWIN was created).
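
Before looking at a ready-made package, note that the simplest headers can also be set with a few lines of inline OWIN middleware; a minimal sketch:

app.Use(async (context, next) =>
{
    // opt out of MIME sniffing and deny framing for every response
    context.Response.Headers.Set("X-Content-Type-Options", "nosniff");
    context.Response.Headers.Set("X-Frame-Options", "DENY");

    await next();
});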

Anyway, let's start by adding SecurityHeadersMiddleware:

PM> Install-Package SecurityHeadersMiddleware

and now configuring it is very easy:


var contentSecurityPolicy = new ContentSecurityPolicyConfiguration();

//Content-Security-Policy header 

//Configuring trusted Javascript
contentSecurityPolicy.ScriptSrc.AddScheme("https");
contentSecurityPolicy.ScriptSrc.AddKeyword(SourceListKeyword.Self);
contentSecurityPolicy.ScriptSrc.AddKeyword(SourceListKeyword.UnsafeEval);
contentSecurityPolicy.ScriptSrc.AddKeyword(SourceListKeyword.UnsafeInline);
contentSecurityPolicy.ScriptSrc.AddHost("cdnjs.cloudflare.com");

//Configuring trusted connections
contentSecurityPolicy.ConnectSrc.AddScheme("wss");
contentSecurityPolicy.ConnectSrc.AddScheme("https");
contentSecurityPolicy.ConnectSrc.AddKeyword(SourceListKeyword.Self);

//Configuring trusted style
contentSecurityPolicy.StyleSrc.AddKeyword(SourceListKeyword.Self);
contentSecurityPolicy.StyleSrc.AddKeyword(SourceListKeyword.UnsafeInline);
contentSecurityPolicy.StyleSrc.AddHost("fonts.googleapis.com");

//Configuring fallback
contentSecurityPolicy.DefaultSrc.AddKeyword(SourceListKeyword.Self);

//Configuring trusted image source
contentSecurityPolicy.ImgSrc.AddHost("*");

//Configuring trusted fonts
contentSecurityPolicy.FontSrc.AddKeyword(SourceListKeyword.Self);
contentSecurityPolicy.FontSrc.AddHost("fonts.googleapis.com");
contentSecurityPolicy.FontSrc.AddHost("fonts.gstatic.com");

app.ContentSecurityPolicy(contentSecurityPolicy);

//X-Frame-Options
app.AntiClickjackingHeader(XFrameOption.Deny);

//X-XSS-Protection
app.XssProtectionHeader();
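
With that configuration in place, the responses should carry headers roughly along these lines (the exact values and ordering depend on the library version, so take this only as an illustration):

Content-Security-Policy: default-src 'self'; script-src https 'self' 'unsafe-eval' 'unsafe-inline' cdnjs.cloudflare.com; connect-src wss https 'self'; style-src 'self' 'unsafe-inline' fonts.googleapis.com; img-src *; font-src 'self' fonts.googleapis.com fonts.gstatic.com
X-Frame-Options: DENY
X-XSS-Protection: 1; mode=block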

Here is the GitHub repository.

Have fun and make your application secure.

By Ugo Lattanzi on Jan. 20th, 2015 in WebAPI

One of my favorite features of ASP.NET WebAPI is the opportunity to run your code outside Internet Information Services (IIS). I don't have anything against IIS; in fact my thought matches this tweet:

But System.Web is really a problem and, in some cases, the IIS pipeline is too complicated for a simple REST call.

we fix one bug and open seven new one (unnamed Microsoft employee on System.Web)

Another important thing I like is cloud computing, and Microsoft Azure in this case. In fact, if you want to run your APIs outside IIS and you have to scale on Microsoft Azure, maybe this article could be helpful.

Azure offers different ways to host your APIs and scale them. The most common solutions are WebSites or Cloud Services.

Unfortunately we can't use Azure WebSites because everything there runs on IIS (more info here), so we have to use Cloud Services, but the question here is: Web Role or Worker Role?

The main difference between a Web Role and a Worker Role is that the first one runs on IIS, the domain is configured on the web server and port 80 is open by default; the second one is a process (an .exe file, to be clear) that runs in a "closed" environment.

To remain consistent with what is written above, we have to use the Worker Role instead of the Web Role, so let's create it following the steps below:

Now that the Azure project and the Worker Role project are ready, it's important to open port 80 on the worker role (remember that by default the worker role is a closed environment).

Finally the environment is ready; it's time to install a few WebAPI packages and write some code.

PM> Install-Package Microsoft.AspNet.WebApi.OwinSelfHost

Now add the OWIN Startup class

and finally configure the WebAPI routing and its OWIN middleware:

using System.Web.Http;
using DemoWorkerRole;
using Microsoft.Owin;
using Owin;

[assembly: OwinStartup(typeof (Startup))]

namespace DemoWorkerRole
{
    public class Startup
    {
        public void Configuration(IAppBuilder app)
        {
            var config = new HttpConfiguration();

            // Routing
            config.Routes.MapHttpRoute(
                "Default",
                "api/{controller}/{id}",
                new {id = RouteParameter.Optional});

            //Configure WebAPI
            app.UseWebApi(config);
        }
    }
}

and create a demo controller

using System.Web.Http;

namespace DemoWorkerRole.APIs
{
    public class DemoController : ApiController
    {
        public string Get(string id)
        {
            return string.Format("The parameter value is {0}", id);
        }
    }
}

Till now nothing special: the app is ready and we just have to configure the worker role, that is, the WorkerRole.cs file created by Visual Studio.

What we have to do here is read the configuration from Azure (we may want to map a custom domain, for example) and start the web server.

To do that, first add the domain on the cloud service configuration following the steps below:

Finally, the worker role:

using System;
using System.Diagnostics;
using System.Net;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Owin.Hosting;
using Microsoft.WindowsAzure.ServiceRuntime;

namespace DemoWorkerRole
{
    public class WorkerRole : RoleEntryPoint
    {
        private readonly CancellationTokenSource cancellationTokenSource = new CancellationTokenSource();
        private readonly ManualResetEvent runCompleteEvent = new ManualResetEvent(false);

        private IDisposable app;

        public override void Run()
        {
            Trace.TraceInformation("WorkerRole is running");

            try
            {
                RunAsync(cancellationTokenSource.Token).Wait();
            }
            finally
            {
                runCompleteEvent.Set();
            }
        }

        public override bool OnStart()
        {
            // Set the maximum number of concurrent connections
            ServicePointManager.DefaultConnectionLimit = 12;

            string baseUri = String.Format("{0}://{1}:{2}", RoleEnvironment.GetConfigurationSettingValue("protocol"),
                RoleEnvironment.GetConfigurationSettingValue("domain"),
                RoleEnvironment.GetConfigurationSettingValue("port"));

            Trace.TraceInformation(String.Format("Starting OWIN at {0}", baseUri), "Information");

            try
            {
                app = WebApp.Start<Startup>(new StartOptions(url: baseUri));
            }
            catch (Exception e)
            {
                Trace.TraceError(e.ToString());
                throw;
            }

            bool result = base.OnStart();

            Trace.TraceInformation("WorkerRole has been started");

            return result;
        }

        public override void OnStop()
        {
            Trace.TraceInformation("WorkerRole is stopping");

            cancellationTokenSource.Cancel();
            runCompleteEvent.WaitOne();

            if (app != null)
            {
                app.Dispose();
            }

            base.OnStop();

            Trace.TraceInformation("WorkerRole has stopped");
        }

        private async Task RunAsync(CancellationToken cancellationToken)
        {
            // TODO: Replace the following with your own logic.
            while (!cancellationToken.IsCancellationRequested)
            {
                //Trace.TraceInformation("Working");
                await Task.Delay(1000);
            }
        }
    }
}

We are almost done; the last step is to configure the right execution context in the ServiceDefinition.csdef:

<?xml version="1.0" encoding="utf-8"?>
<ServiceDefinition name="imperugo.demo.azure.webapi" xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition" schemaVersion="2014-06.2.4">
    <WorkerRole name="DemoWorkerRole" vmsize="Small">
        <Runtime executionContext="elevated" />
        <Imports>
            <Import moduleName="Diagnostics" />
        </Imports>
        <Endpoints>
            <InputEndpoint name="Http" protocol="http" port="80" localPort="80" />
        </Endpoints>
        <ConfigurationSettings>
            <Setting name="protocol" />
            <Setting name="domain" />
            <Setting name="port" />
        </ConfigurationSettings>
    </WorkerRole>
</ServiceDefinition>
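
The three settings declared above must then be given a value in the ServiceConfiguration (.cscfg) file; here is a sketch with placeholder values (the domain is hypothetical):

<Role name="DemoWorkerRole">
    <Instances count="1" />
    <ConfigurationSettings>
        <Setting name="protocol" value="http" />
        <Setting name="domain" value="www.mycustomdomain.com" />
        <Setting name="port" value="80" />
    </ConfigurationSettings>
</Role>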

Here the important part is the Runtime node: we are using HttpListener to read the incoming messages from the web, and that requires elevated privileges.

Now we are up & running with WebAPI hosted on a Cloud Service without using IIS.
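
As a quick smoke test, you can call the demo endpoint once the role is running; the host name below is a placeholder for your cloud service address or custom domain:

using System;
using System.Net.Http;

// GET api/demo/123 should return "The parameter value is 123"
using (var client = new HttpClient())
{
    var response = client.GetStringAsync("http://your-cloudservice.cloudapp.net/api/demo/123").Result;
    Console.WriteLine(response);
}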

The demo code is available here.

Have fun.

By Ugo Lattanzi on Dec. 2nd, 2014 in Events

It's been a while since my last blog post; unfortunately I've been very busy in the last few months due to some important deliveries for the company I'm working for.

During the past two days one of my favorite conferences, Codemotion, took place in Milan and fortunately I gave a talk about NodeJS for .NET Web Developers.

The code used for the demo is available on GitHub here (the most important demo is Sample 004) and the slides are available on SlideShare and below:

I had a good feeling about the talk: the room was full of people, with lots of questions and feedback. It means there is interest in Node from .NET people.