Wednesday, April 29, 2015

The wonderful pipe "|" character

Today I want to talk about the pipe character (a.k.a. the vertical bar), or more specifically about the special role it plays in probing the quality of web applications. The pipe character is classified as unsafe in RFC 1738, which puts it in a "grey area" and makes it a good candidate for testing how various frameworks and applications handle it. When URL-encoded, this character is represented by the %7C sequence.
I love this character when it comes to testing error handling and the custom errors setup in .Net applications.

A standard ASP.Net web application uses the *.aspx file extension by default. As a test, I like inserting a pipe character into the page name (preserving the .aspx extension to make sure the request is still routed to the .Net handler). It's worth noting that a similar approach quite often works for "extensionless" ASP.Net MVC URLs too.

I will demonstrate a few possible scenarios.

Let's take a login page and add our pipe character:
http://<mysite>/login|.aspx
A well-behaved application correctly sends me to the error page:


But notice that the error page is also a .Net page. What if we try to trigger the same error again on the error page itself? This is actually a very common scenario: the initial exception is caught and handled properly, but the error page is unable to defend itself.

http://<mysite>/Error|.aspx?aspxerrorpath=/login|.aspx

Boom!


We have generated an unhandled exception, but because customErrors was turned on in web.config we got the page above. Not ideal, but it could be worse.

Let's try the same approach against a deliberately vulnerable test application, courtesy of Acunetix:
http://testaspnet.vulnweb.com/login%7C.aspx



We get an "Illegal characters in path" exception. We see that this is an ASP.Net v2 application. Notice that we only have the System namespaces - user code hasn't been invoked yet!

For completeness, here is the same problem in an ASP.Net v4 application (with a slightly different stack trace):

And this is why I love the pipe character. It helps uncover various interesting scenarios - when customErrors is not set up correctly, when the global exception handler (Application_Error) is missing, or when the error pages themselves can't handle exceptions properly.

This is a very simple test. So go on - give it a go, see how your application handles the pipe, and leave a comment if you find anything interesting or if it helps you make your application more robust.
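If you want to automate the check, here is a minimal sketch (my own illustration, not a polished tool) that writes the raw request bytes over a socket, so nothing gets a chance to rewrite or reject the URL-encoded pipe along the way. The host name and path are placeholders.

// Minimal probe sketch - sends a raw HTTP request with an encoded pipe in the page name.
using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

class PipeProbe
{
    static void Main()
    {
        const string host = "mysite.example.com"; // hypothetical target
        const string path = "/login%7C.aspx";     // pipe character, URL-encoded

        using (var client = new TcpClient(host, 80))
        using (var stream = client.GetStream())
        {
            var request = "GET " + path + " HTTP/1.1\r\n" +
                          "Host: " + host + "\r\n" +
                          "Connection: close\r\n\r\n";
            var bytes = Encoding.ASCII.GetBytes(request);
            stream.Write(bytes, 0, bytes.Length);

            // Dump the response - check the status line and whether you get a custom
            // error page or a raw "Illegal characters in path" stack trace back.
            using (var reader = new StreamReader(stream, Encoding.ASCII))
            {
                Console.WriteLine(reader.ReadToEnd());
            }
        }
    }
}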

See this article for more information on how to configure exception handling properly.
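Speaking of the missing global exception handler mentioned above, here is a minimal Application_Error sketch (my own sketch, not taken from that article). The "~/Error.html" page is a hypothetical static file - a static error page can't throw the same exception that the pipe triggers in an .aspx error page.

// Global.asax.cs - a minimal Application_Error sketch, not a complete error-handling strategy.
using System;
using System.Web;

public class Global : HttpApplication
{
    protected void Application_Error(object sender, EventArgs e)
    {
        Exception ex = Server.GetLastError();
        // Log the exception here with whatever logging framework you use.

        Server.ClearError();                      // suppress the default "yellow screen"
        Response.Redirect("~/Error.html", false); // hypothetical static page - it can't blow up on a pipe
        Context.ApplicationInstance.CompleteRequest();
    }
}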

P.S. Another good character to play with is the tilde "~".

Monday, April 27, 2015

Engineer for resilience

I actually didn't plan to write this blog post. I was reading my Twitter feed when I spotted a discussion about the web site of TRAI (the Indian telecom regulator), which publicly exposed several thousand e-mail addresses (net neutrality related e-mails). Another comment mentioned that the web site was down because of the number of people trying to check if their addresses were in those published lists. And then I noticed this screenshot:


Makes sense - with a massive (and unexpected) spike in workload, it is understandable that the backend database server couldn't handle the load. I am obviously speculating here, but it doesn't really matter for the purpose of this discussion.

Before we move on - displaying detailed error messages like this one (containing a stack trace, the .Net version, etc.) is bad from a security perspective. Troy Hunt has a great (and very detailed) explanation of how to set up custom error pages properly.

But what really caught my attention was this line:
   banner.proc_Display_banner(String param) +26
A "display banner" function tries to perform an action against the database. This request fails and the whole page "explodes" and displays a detailed error page. I will speculate again - a banner may mean 2 things in my view. It is either a topbar of the web site or an embedded advertisement of some sort.

In both cases I would argue that this banner is most likely not essential to the core functionality of the web site. And that brings me to a topic I am very passionate about - engineering for resilience. I've seen it many times - it is way too easy to just throw an exception and give up at that point (hoping that the exception will bubble up and be handled somewhere - or maybe not).

Think about it - a function call to display a banner fails and takes down the whole page. Or, we might even argue, takes down the whole web site, as the error happens on the homepage. What would you prefer - an operational web site that lacks a top bar (or doesn't display an advertisement), or a web site that is broken?

For some reason I see this time and time again - developers focus on the "success" story, i.e. the code does what it's supposed to do under ideal conditions, when all databases/services/endpoints are available. They don't consider scenarios where some of the building blocks they rely on become unavailable. It's convenient to just assume that the database will always be there, right? And it will be - 99.9% of the time. An SLA of "three nines" is not ideal but not unheard of (especially for single-instance setups). 99.9% also means roughly 43 minutes of downtime each month (0.1% of a 30-day month). How will your code behave during those 43 minutes? The key point I'd like to make is that we need to expect failures and engineer for resilience. As we move from monolithic systems to distributed/microservices-based architectures, we will inevitably rely more and more on APIs, endpoints and databases that are external to our code. And usually we have no information about the availability of a particular endpoint at the moment we are about to call it.

There is nothing wrong with failures per se. You pick up the phone, call your colleague and get a busy signal. In a way it is a failure - you failed to connect and invite your colleague for a coffee. But is it a problem? No, because most likely you will "retry" a few minutes later.

Retry is a great (and simple) way to recover from failure. Just retry the last failed call and see if it finishes successfully this time. Limit the number of retries to make sure you don't create an endless retry loop and don't overload/DoS a system that might already be struggling.
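A minimal bounded-retry sketch (my own illustration, not from any particular library - the method names are made up):

// Bounded retry - illustrative only, not a production-grade policy.
// Real code would usually add exponential backoff and jitter so that a
// struggling dependency isn't hammered by synchronised retries.
using System;
using System.Threading;

static class Retry
{
    public static T Execute<T>(Func<T> action, int maxAttempts = 3, int delayMs = 500)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return action(); // success - return the result
            }
            catch (Exception)
            {
                if (attempt >= maxAttempts)
                    throw;              // out of attempts - let the caller decide what to do
                Thread.Sleep(delayMs);  // brief pause before trying again
            }
        }
    }
}

// Usage (LoadBannerFromDatabase is a hypothetical data access call):
// string banner = Retry.Execute(() => LoadBannerFromDatabase());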

So what can we do if we have retried but still couldn't achieve the desired outcome?
Consider a graceful degradation approach (think going for a coffee alone, without your colleague). In many cases it is possible to reduce the functionality of a system - simplify it somehow - without losing it completely. E.g. in the case of the banner we could:
  • Replace the banner we wanted to display (and couldn't retrieve from the database) with a hardcoded version that doesn't require database access.
  • Replace the banner with a blank image of the right size.
  • Not display the failed module at all. This might break the HTML layout, but the main page will still be alive.

Simply wrapping the database call in a try-catch block would have allowed a developer to catch this failure condition and decide what to do next - how to handle the failure scenario.
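Something along these lines - a hypothetical sketch, since the real proc_Display_banner code is unknown and the method names here are made up:

// Hypothetical sketch of a degraded "display banner" path.
using System;

public class BannerRenderer
{
    private const string FallbackBannerHtml =
        "<div class=\"banner\"><!-- hardcoded banner, no database needed --></div>";

    public string RenderBanner()
    {
        try
        {
            // The call that failed in the stack trace above - any database or
            // service outage surfaces here as an exception.
            return LoadBannerFromDatabase();
        }
        catch (Exception)
        {
            // Degrade instead of exploding: log the failure and fall back to a
            // hardcoded banner so the rest of the page still renders.
            return FallbackBannerHtml;
        }
    }

    private string LoadBannerFromDatabase()
    {
        // Placeholder for the real data access call.
        throw new NotImplementedException();
    }
}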

I will write a complementary blog post about how "engineering for resilience" is done in space technology - the approach, ideas and principles there can teach us IT people a few things.

Please make sure you consider all possible outcomes, no matter how improbable they might be. Databases go down, processes run out of threads and memory, network failures and packet drops happen. We need to be prepared for all of these scenarios. Code that can survive failures is the mark of an experienced, high-calibre developer.



Sunday, April 19, 2015

MS15-034 (http.sys)

There has been a lot written about this (quite nasty) vulnerability since the latest "Patch Tuesday". My understanding is that so far only DoS (a simple crash) has been reliably confirmed, although I've seen some reports that an RCE (remote code execution) exploit is being sold on the black market for slightly over $100.
I can also see scanning requests in the wild. I think it's just a matter of days before we start seeing a worldwide spike in those probing requests. That spike will last for a couple of weeks (attackers going after the low-hanging fruit). We saw the same behaviour with CVE-2014-6271 (Bash command injection). This is all fine and kind of expected.

What I find surprising is that bugs of this calibre are still being uncovered in 2015. Let me explain what I mean:

First of all - this bug can be triggered by a request like this:

curl -v http(s)://hostname/ -H "Host: hostname" -H "Range: bytes=0-18446744073709551615" -k

So all we need to do is send an HTTP (or HTTPS - doesn't matter) request to a server with a specific Range header. What is so special about 18446744073709551615? It's 2^64-1. So MS15-034 is essentially an integer overflow bug.
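A quick sanity check of that number (just arithmetic, nothing specific to the vulnerability):

// 2^64 - 1 is the largest value an unsigned 64-bit integer can hold.
using System;

class MaxValueCheck
{
    static void Main()
    {
        Console.WriteLine(ulong.MaxValue);                            // 18446744073709551615
        Console.WriteLine(ulong.MaxValue == 18446744073709551615UL);  // True
    }
}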
HTTP is one of the most popular protocols, and the Range header was added in HTTP/1.1, which has been around for quite some time now (RFC 2068, January 1997). People have been fuzzing protocols (obviously including HTTP) for ages. In fact, I will quote the book "Fuzzing: Brute Force Vulnerability Discovery" by Michael Sutton, Adam Greene and Pedram Amini:



"Any and all request headers can and should be fuzzed" indeed. A value of 2^64-1 (or MAXINT-1 on some platforms) should be in the Top 10 of any integer fuzzer. Microsoft lists Windows 7 as affected meaning this vulnerability has been around for quite some time. This is why I am surprised - how come that this vulnerability has only been found now? It should've probably been even picked up by the internal QA team. And certainly with so many researchers running fuzzers (which kind of makes security research less exciting - topic for another blog post) how did we not find it earlier?! And more importantly - how many other bugs of this calibre are still out there?
Gentlemen, start your fuzzers!

Update:
Using a WAF to block the MS15-034 attack pattern is a great (and simple) way to protect your environment while systems administrators continue their assessment and roll out the patch to all servers. Many major vendors (Incapsula, Cloudflare, ModSecurity, Akamai, etc.) have already created custom rules for their customers.
I've seen some people suggesting a WAF rule that blocks requests matching "0-18446744073709551615" in the Range header. Be careful - the "0-" lower bound was used in the harmless probing requests; the actual DoS requests contain a non-zero value (I've seen 18 and 20 so far). So it would be better to match and block requests using a broader pattern: [\d]+-18446744073709551615
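This is not a drop-in rule for any particular WAF product - rule syntax differs between vendors - but here is a minimal sketch of the broader match expressed as a plain .Net regular expression:

// Sketch only: the same pattern expressed as a .Net regex, to be translated
// into your WAF's own rule language.
using System;
using System.Text.RegularExpressions;

class RangeHeaderCheck
{
    // Any lower bound followed by the 2^64-1 upper bound, e.g. "bytes=18-18446744073709551615".
    private static readonly Regex Ms15034Pattern =
        new Regex(@"\d+-18446744073709551615", RegexOptions.Compiled);

    static void Main()
    {
        Console.WriteLine(Ms15034Pattern.IsMatch("bytes=0-18446744073709551615"));   // True  (probe)
        Console.WriteLine(Ms15034Pattern.IsMatch("bytes=18-18446744073709551615"));  // True  (DoS attempt)
        Console.WriteLine(Ms15034Pattern.IsMatch("bytes=0-1023"));                   // False (legitimate range)
    }
}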

Saturday, April 18, 2015

First post

Hello World!

I had been thinking about starting a blog for some time, but I wasn't sure if I would be able to produce original and (hopefully) interesting content on a regular basis. I've seen many blogs start well and then slowly be abandoned by their owners. What triggered my decision to actually give it a go was Troy Hunt's milestone, which essentially completed his transformation into an Internet celebrity.
I have been reading Troy's blog since around 2011. In his very first blog post (from 2009) he highlights the importance of building an active online profile. To be fair, I've been on Twitter for a few years now, but sometimes 140 characters is not enough to elaborate on more complex issues. I have also written posts for the corporate blog of the company I work for. I will continue to do so, but I believe my personal blog will give me more freedom, plus it will allow me to deviate every now and then and cover other areas outside of IT/DevOps/Security. After all, this online identity doesn't have to be one-dimensional. I read another science/astronomy blog, "Starts with a bang", and I really like how Ethan manages to inject "weekend diversion" posts that are not directly related to the main topic of his blog but complement it nicely and show what else is important to the author on a given day. So don't be surprised if you see the occasional "offtopic" post.

My identity

My uni degree was in applied mathematics (essentially half maths, half IT-related subjects, with a sprinkle of special courses on space technology). Since 2003 I've been working for leading Australian online businesses. Currently I wear two hats, managing a team of engineers looking after the DevOps and Security parts of IT. DevOps and Security together (SecDevOps?) is a very powerful combination, and I see these areas as the main direction for this blog.
This is what I will try to achieve. Let's get the ball rolling!