Posted by Jason Bloomberg on October 1, 2013

The Cloud storage marketplace got quite a scare last week when storage provider Nirvanix abruptly closed its doors. Fortunately, Nirvanix gave its customers a few days to get their data off its servers, but those customers are still left scrambling to move their bits to another provider without incurring business downtime.

The lesson for everyone who wasn’t a Nirvanix customer, however, wasn’t that you just dodged a bullet. On the contrary, the lesson is that the Nirvanix death spiral is but a hint of turbulence to come. We’re all so enamored of Cloud that we forget the entire space is still an emerging market, and emerging markets are inherently chaotic. Expect to see many other spectacular flameouts before the dust settles.

In fact, the demise of Nirvanix could have been worse. The company might have shuttered its doors without giving customers a time window (and the necessary connectivity) to move their data off the doomed drives. And what if it had declared a liquidation bankruptcy? Those drives might have ended up auctioned to the highest bidder – customer data included.

Does that mean that you should avoid the Cloud entirely until it’s matured? Possibly, but probably not – especially if you understand how the Cloud deals with failure. Remember, instead of trying to avoid failure, the Cloud provides automated recovery from failure. Furthermore, this principle is more than simply a configuration difference. It’s a fundamental architectural principle – a principle that should apply to all aspects of your Cloud usage, even if a service provider goes out of business.

Which Cloud company ended up on the positive side of this news? Oxygen Cloud – a Cloud storage broker I wrote about over two years ago. Oxygen Cloud abstracts the underlying storage provider, allowing you to move data off of one provider and onto another, in a way that is at least partially automated. And as you would expect, the entire Cloud brokerage marketplace is now forming, and the Nirvanix debacle will only serve to accelerate its adoption.
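To make the brokerage idea concrete, here’s a minimal sketch of the provider-abstraction pattern in Python. The class and function names are hypothetical illustrations of the pattern, not Oxygen Cloud’s actual implementation; a real backend would wrap S3, Azure Blob Storage, or whichever provider you choose.

from abc import ABC, abstractmethod

class StorageBackend(ABC):
    """Interface every underlying storage provider must satisfy."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...

    @abstractmethod
    def keys(self) -> list[str]: ...

class InMemoryBackend(StorageBackend):
    """Stand-in provider used here purely for illustration."""

    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._blobs[key] = data

    def get(self, key: str) -> bytes:
        return self._blobs[key]

    def keys(self) -> list[str]:
        return list(self._blobs)

def migrate(source: StorageBackend, target: StorageBackend) -> None:
    """Copy every object off a failing provider onto its replacement."""
    for key in source.keys():
        target.put(key, source.get(key))

Because the application talks only to the StorageBackend interface, swapping a doomed provider for a new one becomes a configuration change plus a bulk copy rather than a rewrite.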

The silver lining to the Nirvanix Cloud story, therefore, is that following Cloud architecture best practices should help insulate you from the turbulent nature of the Cloud marketplace itself. But only, of course, if you get the architecture right.


Posted by Jason Bloomberg on September 25, 2013

Remember the fascinating story of the Mechanical Turk? This chess-playing automaton was a sensation in the courts of eighteenth-century Europe. Who ever heard of a machine that could play chess? Turns out it was a dwarf in a box, playing chess by manipulating levers.

It is fair to ask whether your Cloud provider keeps its own dwarves in boxes. In other words, is its Cloud provisioning fully automated, or does a slick interface merely suggest a fully automated back end, while people behind the scenes do some or all of the work manually whenever you ask for a machine to be provisioned?

We can trust that the core provisioning at Amazon Web Services (AWS) is fully automated – there aren’t enough dwarves in the world to handle manually all the tasks necessary to support AWS customers. Even in AWS’s case, though, certain tasks must be handled manually – AWS Import/Export, for example.

What about other Cloud providers? Let’s take a closer look at IBM SoftLayer. The reason I’m curious about this provider is that it touts a bare metal offering: you can provision physical servers in various configurations. To be sure, its Bare Metal Instance order interface is every bit as user-friendly as Amazon’s virtual machine provisioning interface. But with all the configuration options, it strains credulity to assume that SoftLayer can fully automate the bare metal provisioning process for every possible order configuration. The bottom line: my guess is that a dwarf in a box handles at least some of the provisioning tasks.

Fair enough. Perhaps SoftLayer or other Cloud providers do not fully automate the provisioning process. As a result, provisioning might take hours or even days instead of minutes. But this whole discussion may be missing the point: does it really matter whether Cloud provisioning is fully automated?

The answer: in many cases, probably not. Yes, if manual provisioning takes weeks, nobody will be happy. But if setting up a physical server precisely the way you want it takes a few hours because people are handling tasks behind the scenes, that’s a small price to pay for the automated ordering interface and the pay-as-you-go financial model. And as SoftLayer is quick to point out, virtual machines aren’t for everyone. Sometimes only bare metal will do.

Bottom line: be sure to understand your needs, and then verify that the Cloud provider can meet them. It doesn’t really matter whether there’s a dwarf in the box, as long as it plays good chess.


Posted by Jason Bloomberg on September 23, 2013

You have to admit, the United States National Security Agency (NSA) has had a really hard time lately. Edward Snowden’s revelations of the heretofore top secret PRISM program keep on coming. Not only is the NSA spying on everyone outside the country, they’re spying on Americans as well. They’ve been undermining the cryptography industry to place back doors in our security technology. Who knows what we’ll find out they’ve been up to next?

Such tawdry revelations of the inner workings of perhaps the most secretive of government agencies suggest the NSA is populated with nefarious spooks, hiding behind trench coats and dark glasses as they surreptitiously poke away at their keyboards in the dark. However, nothing could be further from the truth.

In reality, the intelligence analysts at the NSA are techies much like techies in any other organization—except for the addition of rigorous controls and processes that would put your average architecture review board meeting to shame. First, these analysts must pass the appropriate security clearance checks, which at the top secret level are extensive, onerous, and at times very personal. Then, once they begin work, they only have access to the systems and data specific to their assigned task—the “need to know” partitioning principle behind any secret government effort. Finally, there are extensive checks and controls, so if a mischievous analyst got it into his or her head to, say, listen in on a spouse’s phone conversations or poke around in some movie star’s email account, they would be caught, punished, and fired.

These analysts are dedicated professionals working under stringent conditions necessary to do their jobs. Yes, there is always the chance of an insider attack – the one bad apple like Edward Snowden who violates his oath for his own purposes. But for every Snowden, there are thousands of professionals who are simply doing their jobs as best they can.

Sympathizing with NSA employees, however, doesn’t mean we have to excuse the NSA’s actions. And if someone at the NSA were actually breaking the law, then we would all be justified in expecting that the malefactor would be caught and appropriate justice meted out. But in the case of the PRISM scandal, instances of true lawbreaking are few and far between – if it turns out that any laws were being violated at all.

Our real concern with the NSA, therefore, should focus primarily on the laws that drive – and constrain – the NSA’s activities. After all, you can’t blame someone for doing their job and following the law. So who should we blame if we don’t like what the NSA has been up to?

Congress.


Posted by Jason Bloomberg on September 20, 2013

In my role at ZapThink I get to speak with people at all levels of many organizations, from the most technical of developers to the most senior executives in the C-suite. And I have found that everyone in the enterprise – from one extreme to the other and everyone in between – can suffer from tunnel vision.

Tunnel vision is a problem most familiar among the techies. “I could finally get some work done,” goes the common lament, “if it weren’t for all the damn users!” Because of their deep technical focus, these professionals – developers, sysadmins, and others – tend to immerse themselves in the technical minutiae. Their work conversations drip with jargon, and asking them to interact with people who aren’t similarly focused pushes them outside their comfort zone. Hence the phrase “tunnel vision.”

People on the business side of the fence aren’t immune from their own form of tunnel vision. I’ve met many an executive who sees their technical illiteracy as a point of pride. “Don’t confuse me with that technical mumbo jumbo,” the exec opines, “that’s what my IT folks are for!” Whenever they must interact with some piece of technology, they end up with arbitrary requirements that serve to hide their discomfort. Hence so many requests to move this button over there, or to change a ringtone or graph color.

Such tunnel vision results from the common organizational approach that builds and reinforces silos within the enterprise. Divide people up by expertise or specialty so that each team or committee consists of like-minded people with similar perspectives on a common set of problems. And thus the product managers meet with other product managers, the executives meet with other executives, the Java developers meet with other Java developers, etc.

However, not every organization suffers from such paralyzing silos – or at least, not everywhere in the org chart. Several movements are afoot in companies across the globe that purport to cut across silos in order to achieve greater organizational agility and, as a result, deliver more effectively on true business needs. Three in particular come to mind:

· Agile methodologies. Instead of separating the stakeholders from the developers and throwing supposedly fixed requirements over the wall from one silo to another, include the stakeholders in the development process. Iterate frequently, and involve stakeholders along the way. Get the process right and the result? Better software.

· Dev/Ops. As Cloud leads to increasingly automated operational environments, the role of ops personnel shifts to working with developers and the rest of the software team to roll out better solutions across their full lifecycle. Don’t simply iterate development; iterate deployment as well. The result? More agile software.

· Next-Gen Enterprise Architecture (or what I call Agile Architecture in my book, The Agile Architecture Revolution). Put all those static EA artifacts in a drawer and instead focus on supporting consistent, comprehensive governance across all levels of the organization. Establish the policies and related processes and tools at the organizational, process, technology, and information levels across the company so that all such policies are consistent with one another and have built-in mechanisms for evolving as needs change. Such EA touches everyone in the enterprise. The result? An agile organization that leverages continuous business transformation to achieve long-term strategic goals.

Do you have what it takes to succeed with Agile software methodologies, Dev/Ops, and Next-Gen, Agile Enterprise Architecture? Every organization has the innate ability, given that people are always able to learn new skills. The real question is: do you have the courage?


Posted by Jason Bloomberg on September 10, 2013

The Public Key Infrastructure (PKI) has been the backbone of IT security for several years. It lies at the heart of SSL, Cloud encryption, digital signatures, and the vast majority of IT security tools and techniques around the world.

But now we know that it has been compromised. In fact, we now know that it has always been compromised.

But not by any run-of-the-mill hacker. Not by some nefarious foreign government or terrorist organization.

No, PKI has been compromised by our very own National Security Agency (NSA).

A chilling article in the New York Times recently revealed that the NSA has long been influencing the policies that drive PKI standards – perhaps from the very beginning – with the goal of introducing back doors that only the agency knows about.

But the very nature of PKI suggests that this vulnerability is far more than a back door. PKI is nothing more than a house of cards.

At the center of PKI is the notion of a digital certificate. Every browser, every security tool, every server has at least one. Each certificate inherits its authority from the certificate of the authority that issued it, known as a Certificate Authority (CA). Each CA, in turn, inherits its authority from the CA above it in the Certificate Chain – all the way up to the self-signed root CA.

Root CAs, therefore, are the keys to the PKI kingdom. The root CAs for public Certificate Chains belong to security companies like VeriSign and Entrust, who must keep them secure.
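To see the chain of trust in action, here’s a minimal sketch using Python’s standard ssl module; the hostname is an arbitrary example. The connection verifies only because the server’s leaf certificate chains, issuer by issuer, up to a self-signed root CA already sitting in the local trust store.

import socket
import ssl

HOSTNAME = "www.example.com"  # arbitrary example host

# The default context loads the local trust store -- the root CAs that
# anchor every public Certificate Chain.
context = ssl.create_default_context()

with socket.create_connection((HOSTNAME, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOSTNAME) as tls:
        cert = tls.getpeercert()  # the leaf certificate, already verified

# The leaf's issuer is the CA one link up the chain; that CA's certificate
# is signed by the one above it, terminating at the root.
print("subject:", dict(field[0] for field in cert["subject"]))
print("issuer: ", dict(field[0] for field in cert["issuer"]))

Whoever holds a root CA’s private key can mint a certificate for any subject at all, and this verification will happily pass.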

Except now we know the NSA has their fingers all over PKI. Including, we presume, the root CAs. And if the root CAs are compromised, then all of PKI is compromised.

I wonder whether the whole idea for root CAs was the NSA’s to begin with. Perhaps, perhaps not. Either way, we can assume the US government can crack any PKI-secured technology without lifting a finger.

As a result, there’s no place to hide. There are no more secrets – assuming, of course, that there ever were.


Posted by Jason Bloomberg on September 9, 2013

A recent blog post by fellow Cloud curmudgeon Ben Kepes shone a light on a few of the unsightly ingredients in the Amazon Web Services (AWS) Cloud sausage. Apparently, many AWS services depend upon Elastic Block Store (EBS) infrastructure that lives solely in Amazon’s first Cloud data center region, US-East-1. And while AWS generally touts a horizontally distributed, Cloud-friendly architecture, this unfortunate dependence on EBS is a single point of failure. Oops.

Amazon is unlikely to provide a full explanation of this architectural faux pas, but Kepes surmises that the problem is that US-East-1 is their oldest data center, and thus Amazon hadn’t really worked out how best to architect their Cloud when they set it up. In other words, US-East-1 has a serious legacy problem.

If the problem simply centered on the technology, then this issue would be minimal. After all, tech refreshes are commonplace in today’s IT environments, and Amazon surely instituted a plan to replace and update aging technology long before AWS was a twinkle in the ecommerce bookseller’s eye. But the EBS single point of failure problem isn’t a legacy technology problem at all. It’s a legacy architecture problem.

Unfortunately for Amazon (and for its thousands of customers), legacy architecture challenges are extraordinarily difficult to resolve, even in the enterprise IT context. But Cloud Computing raises the stakes. The Cloud provider context layers complexity on this already intractable issue, because customers’ Cloud architectures leverage Amazon’s internal architecture. Changing AWS’s inner workings might have a cascading ripple effect across its customers’ architectures, which would be a catastrophe eclipsing the bad publicity from any downtime that might result from the EBS single point of failure.

Amazon has a serious problem on their hands. They must proceed with extreme caution. Will they be able to fix their architecture without bringing down the AWS house of cards? Perhaps. My prediction is that they will eventually fix this legacy architecture issue with minimal customer downtime – but not without any downtime. The question for you is: will you be one of the unlucky customers to get caught by Amazon’s legacy architecture?


Posted by Jason Bloomberg on September 3, 2013

It never ceases to amaze me how many people consider Public and Private Clouds to be somehow comparable. We want a Cloud, they say, but we’re afraid of Public, so we’ll do Private instead. That way we can get the benefits of Cloud without the risks. Sound familiar?

Too bad that almost nothing about that argument holds water. Not only are Public Clouds actually more secure than Private ones, but they are also far more elastic and, for most situations, far less expensive. So…why is it you wanted a Private Cloud again?

Whatever your reasoning, let’s say that Private Cloud is still on your shopping list. Sure, Public has advantages, but Private Cloud is almost the same thing, right?

No way. In fact, Public and Private Clouds are very different kettles of fish. To see why, let’s reset the starting line. Where do you start with a Cloud initiative?

Public Cloud: you’re on your laptop, with your credit card. You log into the Cloud provider’s Web site, click a few buttons, and in a half hour or so, you’ve provisioned your Cloud environment.

Done. OK, let’s compare with Private.

Private Cloud: it’s you in a meeting. With your CIO or VP of Infrastructure or other IT bigwigs. You discuss Cloud. They scratch their heads and ask you for more information.

You go back to your desk, meet with your team, do research, talk to hardware vendors, hire consultants, review product literature, write reports. More meetings. Weeks pass.

You meet with data center providers. Or maybe you talk to your own data center team about all that empty floor space they have. Discussions about power, cooling, racks, physical security, and staffing ensue. More weeks pass.

Finally you’re ready to buy some gear. You go through your organization’s existing procurement process. You got it, more weeks – or months – pass.

The gear arrives in your data center. You set it all up in an afternoon. Ha! Gotcha. More weeks pass.

Everything is all set up, fully configured, and fully tested (as if the testing alone didn’t add more weeks to the process). Now users can come to your Private Cloud’s internal portal and automatically provision their own Cloud instances, right?

Maybe, but probably not. Most Private Cloud deployments don’t offer automated self-service provisioning. Oops.

Remind me, why did you want a Private Cloud again?


Posted by Jason Bloomberg on August 30, 2013

Perhaps the most fascinating benefit of attending conferences outside the US is getting the non-US perspective on American issues. I was therefore interested in the audience’s perspective on the recent NSA spying scandal when I attended Cloud World Forum Latin America in Brazil earlier this week.

The general consensus among this Brazilian audience was that it wasn’t safe to store information in US-based data centers, because American authorities had the ability and willingness to access such information without the permission of the owner.

However, this predictable perspective didn’t sit well with everyone in the room. A few people understood that every country spies on its neighbors, and that there was no reason to believe Brazilian authorities weren’t conducting intelligence operations of their own, similar to the NSA’s.

But perhaps the most critical issue for this audience was the regulatory impact of geolocation. In other words, would putting your data in the US – or any other country, for that matter – violate regulations? In Brazil, apparently, there is a patchwork of national and local regulations governing where data may be stored.

As one of the few Americans in the room, I was somewhat surprised by what this audience didn’t discuss: the fact that the central controversy in the NSA spying scandal from the US perspective is whether the NSA was spying on American citizens. After all, the Fourth Amendment to the US Constitution protects American citizens from unreasonable searches, but we’re all perfectly happy to have our government snooping around in foreigners’ emails.

The subtext behind the conversation in Brazil, therefore, was that there had been an expectation of privacy when storing information on US-based servers, presumably because US law protects the confidentiality of information generally. As if putting your information on the same box as an American’s somehow conferred constitutional rights by virtue of proximity. Perhaps the most significant impact of the NSA scandal for this Brazilian audience, therefore, was to poke holes in that misperception.


Posted by Jason Bloomberg on August 27, 2013

Millions of Netizens were forced to go outside and get some fresh air for an hour Sunday when Amazon Web Services (AWS) experienced a brief outage, taking down sites such as Instagram and Vine, among others. The downtime affected only the Northern Virginia US-East-1 region, meaning that any Cloud-based company that actually followed Amazon’s own recommendations (as well as the recommendations of any Cloud consultant worth his or her salt, including yours truly) was unaffected.

Why? Because the Cloud is not built to avoid failure. It’s built to work around failure and recover automatically from failure. If you followed the recommendations and geographically distributed your instances, along with implementing a proper Cloud architecture for delivering basic availability, then your service would have remained standing.

Netflix, for one, kept perking along, because Netflix follows this recommendation. In fact, Netflix tests their deployment on a regular basis via their Simian Army – a collection of processes and applications that routinely wreak havoc on their production environment in order to test whether they’ve done things properly.

In this instance, the simian in question is the Chaos Gorilla – an application that takes down an entire AWS Availability Zone supporting the Netflix deployment. What Netflix runs on purpose, Amazon delivered by accident – or at least, we can presume it was accidental. But maybe Amazon should take down a data center on purpose every so often, essentially running its own Chaos Gorilla. How else will AWS customers know they’ve properly architected their Cloud-based apps?
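For a sense of what such a drill involves, here’s a minimal sketch in the Chaos Gorilla spirit – emphatically not Netflix’s actual tool – that finds the running instances in one hypothetical Availability Zone and asks AWS to terminate them, dry-run only, assuming the boto3 SDK and suitable credentials.

import boto3
from botocore.exceptions import ClientError

TARGET_AZ = "us-east-1a"  # hypothetical zone to "take down"

ec2 = boto3.client("ec2", region_name="us-east-1")

# Find every running instance in the target Availability Zone.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "availability-zone", "Values": [TARGET_AZ]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [
    inst["InstanceId"] for res in reservations for inst in res["Instances"]
]

if instance_ids:
    try:
        # DryRun=True checks permissions without killing anything;
        # remove it only when you really mean to run the drill.
        ec2.terminate_instances(InstanceIds=instance_ids, DryRun=True)
    except ClientError as err:
        if err.response["Error"]["Code"] != "DryRunOperation":
            raise
        print("Would terminate:", instance_ids)

If your service stays up while an entire zone’s worth of instances disappears, you’ve architected for recovery. If not, better to find out on your own schedule than on Amazon’s.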


Posted by Jason Bloomberg on August 22, 2013

Perhaps the best part of speaking at Dataversity’s NoSQL Now conference is the opportunity to channel my inner geek. This show brings out the most hardcore data geeks Silicon Valley has to offer, after all, and that’s triple-X hardcore when it comes to data geekdom. My inner geek, however, never goes anywhere without wearing his architect’s hat. Good thing, too, because the challenge with such heavily technical shows is understanding how all this great new gear fits into the big picture of helping enterprises achieve broad-based agility goals.

My architect’s hat perked up during a talk by Nathan Marz, creator of the real-time Big Data analytics platform Storm. In his talk he called upon the audience to embrace immutability: forgo all DELETEs and UPDATEs on your data. All you get are INSERTs and SELECTs. Furthermore, keep track of which queries generated which data. As a result, you’ve protected yourself from data corruption, because you can always go back to the permanent record and recompute any given result properly.
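A minimal sketch of the idea, using an ordinary SQLite table rather than Storm (the schema and helper names are illustrative, not Marz’s): every change to a customer record is a fresh INSERT, the current state is just a SELECT of the latest version, and nothing is ever UPDATEd or DELETEd.

import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE customer_facts (
           customer_id TEXT NOT NULL,
           email       TEXT NOT NULL,
           recorded_at REAL NOT NULL   -- when this fact was asserted
       )"""
)

def record_email(customer_id: str, email: str) -> None:
    """An 'update' is just another INSERT; history is never destroyed."""
    conn.execute(
        "INSERT INTO customer_facts VALUES (?, ?, ?)",
        (customer_id, email, time.time()),
    )

def current_email(customer_id: str) -> str:
    """The current value is simply the most recently recorded fact."""
    row = conn.execute(
        """SELECT email FROM customer_facts
           WHERE customer_id = ?
           ORDER BY recorded_at DESC LIMIT 1""",
        (customer_id,),
    ).fetchone()
    return row[0]

record_email("c42", "old@example.com")
record_email("c42", "new@example.com")   # a correction, not an overwrite
print(current_email("c42"))              # -> new@example.com

Corrupt or mistaken data never destroys what came before it; you can always recompute any view from the full history.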

Immutability is one of those topics that fires up my inner geek, as it crops up in different places. Functional programming requires immutable data, and functional programming is experiencing a resurgence due to its applicability in the Cloud. But that’s not the whole story by any means.

Immutable data have been with us for years, as any application that must keep track of multiple versions of given information can make use of immutability. In fact, it’s no accident that Marz’s link is to his GitHub profile, as Git (the technology behind GitHub) is based on immutability. The trick with Git, of course, is efficiency: since changes to checked-in code are incremental, Git has a sophisticated system of deltas and snapshots that balances efficient use of storage against rapid queries and rollbacks.

The question at this point: why wouldn’t all your enterprise data benefit from the same immutability Git offers? Shouldn’t you always maintain previous versions of every record, along with a complete trail of everything that happened to it? After all, any number of human errors can lead to corrupted data. We should always be able to recompute a result based on accurate historical information.

While Marz focused on real-time Big Data analytics, taking advantage of the performance benefits of properly used immutable data, there’s a bigger story here. Treating everyday enterprise data – customer records, say – as immutable can turn such information into Big Data. After all, the reason we weren’t collecting all the deltas and snapshots for all our customer data in the past was that we didn’t have the storage or processing power to deal with all that information. But now such capabilities are within our reach.

The conventional approach to Big Data analytics crunches massive data sets of mixed data types to produce a “small data” result we can make sense of. The principle of immutability runs in the other direction: it takes existing small data sets, turns them into Big Data sets, and then applies Big Data processing techniques to them.

In other words, a whole new way of looking at enterprise data. What would your world look like if you never did an UPDATE or DELETE ever again?

