Posted by Jason Bloomberg on October 20, 2013

Eric Raymond coined the phrase The Cathedral and the Bazaar in his 1997 essay of the same name (later expanded into a book), the Cathedral referring to the traditional, top-down approach to building software and the Bazaar to the bottom-up approach at the center of open source projects. Today, throngs of developers live and breathe the Bazaar approach, including the OpenStack team I discussed in my previous blog post. It occurred to me, however, that Cathedral-centric thinking still crops up, even in the most Bazaar places (I know, groan!).

Let’s step back a minute to clarify the core concepts. Raymond was essentially describing a set of organizational patterns for how to set up, run, and participate in software development teams. However, it doesn’t take much of a leap to consider the Cathedral and the Bazaar as architectural patterns as well, especially in the context of Enterprise Architecture, where organizational patterns are an essential tool in the tool belt of the Enterprise Architect.

The Cathedral vs. Bazaar architectural patterns crop up whenever there is a choice between solving a problem yourself, because it’s your job to figure it out, and letting many people propose or contribute possible solutions and then selecting the one that fits your needs. For example, if you’re building a RESTful API, you should provide hyperlinks to all the appropriate media types for a given interaction, and if a standard Internet media type doesn’t meet the requirements, then turn to a marketplace of custom media types instead of simply creating one of your own.
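To make that concrete, here is a minimal sketch – my own illustration, not drawn from Raymond or from any particular API – of a hypermedia response whose links advertise their media types; the application/vnd.example.report+json custom type is a hypothetical placeholder for something you would source from a shared registry (the Bazaar) rather than mint yourself (the Cathedral):

```python
# Sketch of a Bazaar-friendly hypermedia response: every link advertises its
# media type, so clients negotiate on shared, registered types wherever possible.
# All names and types here are hypothetical illustrations.
order_representation = {
    "id": "order-42",
    "status": "shipped",
    "links": [
        {"rel": "self", "href": "/orders/42", "type": "application/json"},
        {"rel": "invoice", "href": "/orders/42/invoice", "type": "application/pdf"},
        # A custom type like this should come from a shared marketplace or registry,
        # not be minted privately for a single API.
        {"rel": "report", "href": "/orders/42/report",
         "type": "application/vnd.example.report+json"},
    ],
}

def link_for(representation, media_type):
    """Return the first hyperlink whose media type the client can consume."""
    return next((link for link in representation["links"] if link["type"] == media_type), None)

print(link_for(order_representation, "application/pdf"))
```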

Another example, from the world of Cloud Computing: if an existing machine image doesn’t meet your needs for provisioning your Cloud instances, you may find yourself creating your own custom machine images. But this approach takes you down the same Cathedral rat hole: how to deal with continual updates to your custom machine image, and how to get everybody else on the same page. Instead, access a marketplace for machine images. It’s no mistake that Raymond used the Bazaar metaphor, since a bazaar is nothing more than a decentralized marketplace.
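As a rough sketch of what shopping that Bazaar might look like – assuming AWS EC2 as the marketplace and the Python boto library, with credentials already configured in the environment – you could browse for an existing image rather than bake and maintain your own:

```python
# Sketch: browse a marketplace of machine images instead of maintaining a custom one.
# Assumes AWS EC2 and the boto library, with credentials available in the environment.
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")

# Ask the marketplace for images matching our requirements.
images = conn.get_all_images(
    owners=["aws-marketplace"],
    filters={"architecture": "x86_64", "name": "*ubuntu*"},
)

# Pick from the Bazaar rather than building in the Cathedral.
for image in sorted(images, key=lambda i: i.name or "")[:10]:
    print(image.id, image.name)
```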

Following the Cathedral pattern gives you a bespoke solution that meets a short-term need, but doesn’t deal well with change or with diverse needs. By contrast, the Bazaar pattern frees you from having to do all the work yourself, and allows you to automate your interactions with the marketplace. And perhaps most importantly, the Bazaar supports change better than the Cathedral, because participants are encouraged to update their contributions, and consumers can always decide whether or not to select the new version.

It’s no surprise, of course, that the nascent Cloud Service Broker market is focusing on automating a marketplace. How Bazaar is that?


Posted by Jason Bloomberg on October 15, 2013

I had an informative call today with Jonathan Bryce, the Executive Director of the OpenStack Foundation. He went over the features of OpenStack’s new Havana release. Good stuff – but not the topic of this blog post. Of more interest, at least for now: his response to my question about how OpenStack handles the “too many chefs” problem.

There are dozens of companies that are members of OpenStack, some large and many small, and most if not all of them contribute code to the OpenStack codebase. So how do they keep the big vendors from pushing around the small ones? And how do they keep the whole effort from devolving into a massive food fight?

The answer: the community is contribution-driven. It doesn’t matter how big your company is or how much money you’ve donated to the OpenStack Foundation: the value of a contribution is based on the quality of the code, end of story. True, larger contributors tend to assign more people to OpenStack, but those people are treated as individual contributors within OpenStack’s consensus-based community review system. So if a vendor tried to bring an agenda to the table and sway the codebase in its favor, there are plenty of controls in place to counteract that pressure.

So, what’s a vendor to do if they want to drive competitive advantage with OpenStack? A few have tried branching the code to meet their own goals, essentially grabbing the OpenStack football and running to their own end zone with it. But the few vendors who have tried this ploy haven’t met with much success.

Why? Because the OpenStack community is so well-established, or as Bryce describes it, “mass has gravity.” In other words, instead of too many chefs spoiling the broth, the size of the community reinforces its strengths. There’s always the chance that OpenStack will go off the rails at some point, given its size and the immaturity of the IaaS marketplace, but not anytime soon.


Posted by Jason Bloomberg on October 14, 2013

Just when you thought you finally had Cloud Computing figured out, the next great meteorological buzzword rears its ugly head: Fog Computing. Basically an extension of the data center-focused Cloud to the Internet of Things, Fog Computing promises a fully decentralized, peer-to-peer model for elastic, virtualized computing.

The idea for Fog Computing apparently came from some academically-oriented techies at Cisco, who wrote a paper on the topic in August 2012. But such technical papers rarely if ever generate much interest in the marketplace, and thus aren’t very good fodder for blogs like this one. Rather, what makes Fog Computing worth a second glance is what the startup vendor Symform is up to.

Symform offers a peer-to-peer, crowdsourced Cloud backup solution that doesn’t require data centers. Instead, your files are chopped up and distributed to extra storage space belonging to hundreds of other Symform customers. If you remember how the SETI@home alien-hunting screensaver from the 1990s worked, you have the general idea.
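Conceptually, the distribution step looks something like the sketch below – a simplified illustration of chunking and round-robin placement, not Symform’s actual algorithm, which layers encryption and redundancy on top:

```python
# Toy sketch of peer-to-peer backup: split a file into fixed-size chunks and
# assign each chunk to several peers. Purely illustrative; a real service also
# encrypts the chunks and adds parity for recovery.
import hashlib

CHUNK_SIZE = 64 * 1024  # 64 KB chunks (arbitrary for illustration)
REPLICAS = 3            # store each chunk on several peers for redundancy

def distribute(path, peers):
    """Yield (peer, chunk_id, chunk_bytes) placement decisions for a local file."""
    with open(path, "rb") as f:
        index = 0
        while True:
            chunk = f.read(CHUNK_SIZE)
            if not chunk:
                break
            chunk_id = hashlib.sha256(chunk).hexdigest()
            for r in range(REPLICAS):
                peer = peers[(index + r) % len(peers)]  # simple round-robin placement
                yield peer, chunk_id, chunk
            index += 1

# Usage (assumes a local file named backup.tar; the peers would really be other
# customers' spare storage endpoints):
for peer, chunk_id, chunk in distribute("backup.tar", ["peer-a", "peer-b", "peer-c", "peer-d"]):
    print(f"send chunk {chunk_id[:12]}... ({len(chunk)} bytes) to {peer}")
```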

Symform, however, is just scratching the surface of Fog Computing. Once we free the Cloud from the data center, and consider any device – computer, smartphone, or even a remote sensor – as part of the Cloud (excuse me, Fog), then all manner of new possibilities pop up. For example, my idea of remotely provisioning Cloud instances on smartphones would be an example of Fog Computing. I’m sure there are others. What could you do in the Fog?


Posted by Jason Bloomberg on October 9, 2013

Two stories of government-tech-gone-wrong hit the wire this week. Each one in and of itself promised to be embarrassing to the US federal government, as well as to any contractors who contributed to the respective projects. But taken together, one might wonder if something more sinister is afoot.

First, we have the various bugs and gotchas with the healthcare.gov Web site, the centerpiece of this month’s Affordable Care Act rollout. Bugs galore. Error messages. Scalability issues. Downtime.

The Obama administration is doing its best to put a positive spin on the news: too much traffic means that ObamaCare is more popular than we ever expected! Perhaps so, but wouldn’t it have been nice if the site had actually worked properly nonetheless?

And then there’s the latest scandal out of the NSA: their new data center is prone to fires. Lots of them. And all you need to do to start one is, well, turn stuff on. Needless to say, fire and high tech equipment don’t mix very well.

Of course, as any techie will tell you, technical problems are routine in any hardware or software rollout. You can expect some bumps along the road regardless of how competent your people are. The problem here is that both snafus resulted from rather elementary mistakes: healthcare.gov wasn’t tested enough, and the equipment in the NSA data center was packed together too tightly for the power it draws. Any college kid with a single class in the respective discipline could have avoided these problems.

However, while it might be amusing to theorize some vast anti-government conspiracy, chances are no such thing ever happened. Simple incompetence is a much more likely scenario. After all, conspiracies require coordination, sophistication, and careful planning. Simply mucking things up is far, far easier.


Posted by Jason Bloomberg on October 7, 2013

Last week’s announcement that large US telco Verizon was rolling out their brand spanking new Verizon Cloud created a modest buzz in the Cloud community. Instead of building on the Terremark infrastructure or brand, Verizon decided to build this Enterprise Public Cloud from the ground up leveraging Xen and CloudStack technology.

Not that there’s anything wrong with any of that, mind you. Building a Public Cloud from the ground up is a great way to ensure you’re using the latest and greatest gear, and by all accounts, Verizon is doing just that. But in spite of the positive spin Verizon is putting on this news, they still have plenty of challenges ahead.

The most important question is whether they have any hope of catching up to the Amazon AWS juggernaut. True, they will avoid Amazon’s legacy problem, but it’s only a matter of time until today’s shiny new tech is tomorrow’s crusty old legacy. Will they be able to build enough momentum in the meantime to carve a chunk out of AWS’s market share?

Second, even if they have the technical chops to gain on Amazon, will their telco culture get in the way? As I discussed before, the telco industry has substantial cultural baggage. Can Verizon surmount theirs?

Third, what’s the deal with CloudStack? And why are they saying they are leveraging CloudStack technology without actually running their Cloud on CloudStack? That prevarication suggests that they have branched the CloudStack code to suit their purposes. True, they may be able to build proprietary capabilities this way, but will this strategy backfire over time, as the core CloudStack code base matures?

Fourth, where is Citrix in this story? Citrix is driving both Xen and CloudStack to further their “turnkey” Private Cloud strategy. It’s not clear, however, whether these efforts are up to the task of supporting a vast Enterprise Public Cloud. Can Verizon fill in the gaps quickly enough?

Finally, what about the customers? Will customers buy into Verizon’s offering, even though it’s still relatively untested? Amazon has had years to work out the kinks in their offering. True, Verizon gets to take advantage of lessons that other providers have learned the hard way, but will it be enough?

My guess: the Verizon Cloud will be successful, if only because the rising tide of demand for IaaS is raising all boats. There will be bumps along the way to be sure, but Verizon is unlikely to go the way of Nirvanix. But if there were ever a time for the buyer to beware, this is it.


Posted by Jason Bloomberg on October 1, 2013

The Cloud storage marketplace got quite a scare last week when storage provider Nirvanix abruptly closed its doors. Fortunately, the company gave its customers a few days to get their data off of the Nirvanix servers, but those customers are still left scrambling to move their bits to another provider without incurring business downtime.

The lesson for everyone who wasn’t a Nirvanix customer, however, wasn’t that you just dodged a bullet. On the contrary, the lesson is that the Nirvanix death spiral is but a hint of turbulence to come. We’re all so enamored of Cloud that we forget the entire space is still an emerging market, and emerging markets are inherently chaotic. Expect to see many other spectacular flameouts before the dust has settled.

In fact, the demise of Nirvanix could have been worse. The company could have shuttered its doors without giving customers a time window (and the necessary connectivity) to move their data off of the doomed drives. And what if it had declared a liquidation bankruptcy? Those drives might have ended up auctioned to the highest bidder – customer data included.

Does that mean that you should avoid the Cloud entirely until it’s matured? Possibly, but probably not – especially if you understand how the Cloud deals with failure. Remember, instead of trying to avoid failure, the Cloud provides automated recovery from failure. Furthermore, this principle is more than simply a configuration difference. It’s a fundamental architectural principle – a principle that should apply to all aspects of your Cloud usage, even if a service provider goes out of business.

Which Cloud company ended up on the positive side of this news? Oxygen Cloud – a Cloud storage broker I wrote about over two years ago. Oxygen Cloud abstracts the underlying storage provider, allowing you to move data off of one provider and onto another, in a way that is at least partially automated. And as you would expect, the entire Cloud brokerage marketplace is now forming, and the Nirvanix debacle will only serve to accelerate its adoption.
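To illustrate the broker principle – a generic sketch with hypothetical provider classes, not Oxygen Cloud’s actual API – the trick is to code against an abstract storage interface, so that switching providers becomes a migration step rather than a rewrite:

```python
# Sketch of the storage-broker idea: depend on an abstract interface so that
# moving off a failing provider is a configuration change, not a rewrite.
# The provider classes are hypothetical in-memory stand-ins.
from abc import ABC, abstractmethod

class CloudStorage(ABC):
    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...
    @abstractmethod
    def keys(self) -> list: ...

class ProviderA(CloudStorage):
    def __init__(self):
        self._blobs = {}
    def put(self, key, data):
        self._blobs[key] = data
    def get(self, key):
        return self._blobs[key]
    def keys(self):
        return list(self._blobs)

class ProviderB(ProviderA):  # same stand-in behavior, different "vendor"
    pass

def migrate(source: CloudStorage, target: CloudStorage) -> int:
    """Copy every object from one provider to another; returns the count moved."""
    count = 0
    for key in source.keys():
        target.put(key, source.get(key))
        count += 1
    return count

# Usage: when a provider folds, point the broker at a new one and migrate.
old, new = ProviderA(), ProviderB()
old.put("report.pdf", b"...bytes...")
print(migrate(old, new), "objects migrated")
```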

The silver lining to the Nirvanix Cloud story, therefore, is that following Cloud architecture best practices should help insulate you from the turbulent nature of the Cloud marketplace itself. But only, of course, if you get the architecture right.


Posted by Jason Bloomberg on September 25, 2013

Remember the fascinating story of the Mechanical Turk? This chess-playing automaton was a sensation in the courts of eighteenth-century Europe. Who ever heard of a machine that could play chess? It turned out there was a human chess master hidden inside the cabinet – the proverbial dwarf in a box – playing via the manipulation of levers.

It is fair to ask whether your Cloud provider uses their own dwarves in boxes as well. In other words, is their Cloud provisioning fully automated, or does the interface merely suggest a fully automated back end, while people behind the scenes do some or all of the work manually whenever you ask for a machine to be provisioned?

We can trust that the core provisioning at Amazon Web Services (AWS) is fully automated – there aren’t enough dwarves in the world to manually handle all the tasks that would be necessary to support AWS customers. But even in AWS’s case, certain tasks must be handled manually, for example, AWS Import/Export.

What about other Cloud providers? Let’s take a closer look at IBM SoftLayer. The reason I’m curious about this provider is that they tout their bare metal offering: you can provision physical servers in various configurations. To be sure, their Bare Metal Instance order interface is every bit as user-friendly as Amazon’s virtual machine provisioning interface. But with all the configuration options, it strains credulity to assume that SoftLayer is able to fully automate the bare metal provisioning process for all possible order configurations. The bottom line: my guess is that they have a dwarf in a box handling at least some of the provisioning tasks.

Fair enough. Perhaps SoftLayer or other Cloud providers do not fully automate the provisioning process. As a result, provisioning might take hours or even days instead of minutes. But this whole discussion may be missing the point: does it really matter whether Cloud provisioning is fully automated?

The answer: in many cases, probably not. Yes, manual provisioning shouldn’t take weeks or nobody will be happy. But if setting up a physical server precisely the way you want it takes a few hours because people are handling the tasks behind the scenes, that’s a small price to pay for the automated ordering interface and the pay-as-you-go financial model. And as SoftLayer is quick to point out, virtual machines aren’t for everyone. Sometimes only bare metal will do.

Bottom line: be sure to understand your needs, and then verify the Cloud provider can meet them. It doesn’t really matter if there’s a dwarf in the box as long as they play good chess.


Posted by Jason Bloomberg on September 23, 2013

You have to admit, the United States National Security Agency (NSA) has had a really hard time lately. Edward Snowden’s revelations of the heretofore top secret PRISM program keep on coming. Not only is the NSA spying on everyone outside the country, they’re spying on Americans as well. They’ve been undermining the cryptography industry to place back doors in our security technology. Who knows what we’ll find out they’ve been up to next?

Such tawdry revelations of the inner workings of perhaps the most secretive of government agencies suggest the NSA is populated with nefarious spooks, hiding behind trench coats and dark glasses as they surreptitiously poke away at their keyboards in the dark. However, nothing could be further from the truth.

In reality, the intelligence analysts at the NSA are techies much like techies in any other organization—except for the addition of rigorous controls and processes that would put your average architecture review board meeting to shame. First, these analysts must pass the appropriate security clearance checks, which at the top secret level are extensive, onerous, and at times very personal. Then, once they begin work, they only have access to the systems and data that are specific to their assigned task at hand—the “need to know” partitioning principle behind any secret government effort. Finally, there are extensive checks and controls, so if a mischievous analyst got it into his or her head to, say, listen in on a spouse’s phone conversation or poke around in some movie star’s email account, they would be caught, punished, and fired.
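As a purely hypothetical illustration of that “need to know” partitioning – a toy model of my own, not any real NSA system – access requires both sufficient clearance and membership in the specific compartment that owns the data:

```python
# Toy model of clearance plus need-to-know compartmentalization (hypothetical,
# not a real system): a high clearance alone is not enough; the analyst must
# also be read into the compartment that owns the resource.
CLEARANCE_LEVELS = {"confidential": 1, "secret": 2, "top secret": 3}

def may_access(analyst, resource):
    """Allow access only with sufficient clearance AND the right compartment."""
    cleared = (CLEARANCE_LEVELS[analyst["clearance"]]
               >= CLEARANCE_LEVELS[resource["classification"]])
    need_to_know = resource["compartment"] in analyst["compartments"]
    return cleared and need_to_know

analyst = {"clearance": "top secret", "compartments": {"project-a"}}
resource = {"classification": "top secret", "compartment": "project-b"}
print(may_access(analyst, resource))  # False: cleared, but no need to know
```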

These analysts are dedicated professionals working under stringent conditions necessary to do their jobs. Yes, there is always the chance of an insider attack – the one bad apple like Edward Snowden who violates his oath for his own purposes. But for every Snowden, there are thousands of professionals who are simply doing their jobs as best they can.

Sympathizing with NSA employees, however, doesn’t mean we have to excuse the NSA’s actions. And if someone at the NSA were actually breaking the law, then we would all be justified in expecting that the malefactor would be caught and appropriate justice would be meted out. But in the case of the PRISM scandal, instances of true lawbreaking are few and far between – if it turns out that any laws were being violated at all.

Our real concern with the NSA, therefore, should focus primarily on the laws that drive, as well as constrain, the NSA’s activities. After all, you can’t blame someone for doing their job and following the law. So who should we blame if we don’t like what the NSA has been up to?

Congress.


Posted by Jason Bloomberg on September 20, 2013

In my role at ZapThink I get to speak with people at all levels of many organizations, from the most technical of developers to the most senior executives in the C suite. And I have found that everyone in the enterprise – from one extreme to the other and everyone in between – can suffer from tunnel vision.

Tunnel vision is a problem most familiar among the techies. “I could finally get some work done,” goes the common lament, “if it weren’t for all the damn users!” Because of their deep technical strength, these professionals – whether developers, sysadmins, or people in other roles – tend to immerse themselves in the technical minutiae. Their work conversations drip with jargon, and asking them to interact with people who are not so similarly focused pushes them outside their comfort zone. Hence the phrase “tunnel vision.”

People on the business side of the fence aren’t immune from their own form of tunnel vision. I’ve met many an executive who sees their technical illiteracy as a point of pride. “Don’t confuse me with that technical mumbo jumbo,” the exec opines, “that’s what my IT folks are for!” Whenever they must interact with some piece of technology, they end up with arbitrary requirements that serve to hide their discomfort. Hence so many requests to move this button over there, or to change a ringtone or graph color.

Such tunnel vision results from the common organizational approach that builds and reinforces silos within the enterprise. Divide people up by expertise or specialty so that each team or committee consists of like-minded people with similar perspectives on a common set of problems. And thus the product managers meet with other product managers, the executives meet with other executives, the Java developers meet with other Java developers, etc.

However, not every organization suffers from such paralyzing silos – or at least, not everywhere in the org chart. There are in fact several movements afoot in companies across the globe that purport to cut across silos in order to achieve greater levels of agility in the organization, and as a result, deliver more effectively on the true business needs. In fact, three in particular come to mind:

· Agile methodologies. Instead of separating the stakeholders from the developers and throwing supposedly fixed requirements over the wall from one silo to another, include the stakeholders in the development process. Iterate frequently, and involve stakeholders along the way. Get the process right, and the result? Better software.

· Dev/Ops. As Cloud leads to increasingly automated operational environments, the role of ops personnel shifts to working with developers and the rest of the software team to roll out better solutions across their full lifecycle. Don’t simply iterate development; iterate deployment as well. The result? More agile software.

· Next-Gen Enterprise Architecture (or what I call Agile Architecture in my book, The Agile Architecture Revolution). Put all those static EA artifacts in a drawer and instead focus on supporting consistent, comprehensive governance across all levels of the organization. Establish the policies and related processes and tools at the organizational, process, technology, and information levels across the company in such a way that all the policies are consistent with each other and have built-in mechanisms for evolving as needs change. Such EA touches everyone in the enterprise. The result? An agile organization that leverages continuous business transformation to achieve long-term strategic goals.

Do you have what it takes to succeed with Agile software methodologies, Dev/Ops, and Next-Gen, Agile Enterprise Architecture? Every organization has the innate ability, given that people are always able to learn new skills. The real question is: do you have the courage?


Posted by Jason Bloomberg on September 10, 2013

Public Key Infrastructure (PKI) has been the backbone of IT security for well over a decade. It lies at the heart of SSL, Cloud encryption, digital signatures, and the vast majority of IT security tools and techniques around the world.

But now we know that it has been compromised. In fact, we now know that it has always been compromised.

But not by any run-of-the-mill hacker. Not by some nefarious foreign government or terrorist organization.

No, PKI has been compromised by our very own National Security Agency (NSA).

In a chilling article, the New York Times recently revealed that the NSA has long been influencing the policies that have driven PKI standards – perhaps from the very beginning – with the goal of introducing back doors that only it knows about.

But the very nature of PKI suggests that this vulnerability is far more than a back door. PKI is nothing more than a house of cards.

At the center of PKI is the notion of a digital certificate. Every browser, every security tool, every server has at least one. Each certificate inherits its authority from the certificate of its issuer, known as a Certificate Authority (CA). Each CA, in turn, inherits its authority from the CA above it in the certificate chain – all the way up to the self-signed root CA.
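To see how that chain of trust works mechanically, here is a minimal sketch using Python’s cryptography package – my own illustration, assuming RSA-signed certificates and ignoring validity dates, extensions, and revocation, so it is far from a complete validator:

```python
# Sketch of chain-of-trust checking: each certificate's issuer must match the
# next certificate's subject, and each signature must verify against the
# issuer's public key, terminating at a self-signed root CA.
# Assumes RSA-signed certificates and the 'cryptography' package.
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import padding

def chain_is_consistent(chain):
    """chain: list of x509.Certificate objects, leaf first, root CA last."""
    for cert, issuer in zip(chain, chain[1:] + [chain[-1]]):  # the root signs itself
        if cert.issuer != issuer.subject:
            return False
        try:
            issuer.public_key().verify(
                cert.signature,
                cert.tbs_certificate_bytes,
                padding.PKCS1v15(),
                cert.signature_hash_algorithm,
            )
        except InvalidSignature:
            return False
    return True

# Usage: load the PEM certificates (leaf, intermediates, root) and check the chain.
# certs = [x509.load_pem_x509_certificate(open(p, "rb").read()) for p in pem_paths]
# print(chain_is_consistent(certs))
```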

Root CAs, therefore, are the keys to the PKI kingdom. The root CAs for public certificate chains belong to the most trusted of security companies, like Verisign and Entrust, whose job it is to keep them secure.

Except now we know the NSA has their fingers all over PKI. Including, we presume, the root CAs. And if the root CAs are compromised, then all of PKI is compromised.

I wonder whether the whole idea for root CAs was the NSA’s to begin with. Perhaps, perhaps not. Either way, we can assume the US government can crack any PKI-secured technology without lifting a finger.

As a result, there’s no place to hide. There are no more secrets – assuming, of course, that there ever were.

