Posted by Jason Bloomberg on November 26, 2013

A recent blog post by Gartner Analyst Alessandro Perilli created a bit of a dustup in the OpenStack world. He wondered why vendors can’t sell OpenStack to enterprises, and came up with four basic answers: lack of clarity about what OpenStack actually does, lack of transparency about OpenStack’s business model, lack of vision, and lack of pragmatism.

All four points are quite valid, and Perilli’s argument holds water as far as it goes. However, his argument, along with the contrasting opinions of OpenStack aficionados, misses a larger point.

Perilli provides some interesting insight into what Gartner customers are looking for when they ask for OpenStack advice: he points out “how many large enterprises keep calling Gartner asking clarifications about what OpenStack is and how they could leverage it to reduce their dependency from VMware.” This statement reveals two interesting truths: first, many Gartner enterprise customers don’t really understand OpenStack, and second, those same customers have VMware products but have problems with VMware the company, with its products, or perhaps with both.

Fair enough. However, the first question out of enterprise customers’ mouths isn’t “how can we use OpenStack to build a Private Cloud?” Why not? Given that OpenStack’s core value proposition to enterprise customers is to serve as the orchestration platform for building Private Clouds, the obvious question is why those customers aren’t thinking in terms of Private Clouds.

The answer illustrates the larger point here: nobody really wants a Private Cloud. In fact, there are two types of enterprise Cloud customers: those who think they want a Private Cloud but don’t really want one, and those who don’t think they want a Private Cloud and, in truth, don’t actually want one.

There are many reasons why people might think they want a Private Cloud: better security, greater control, increased flexibility, or better regulatory compliance, to name the most popular reasons. But in fact, Public Clouds either offer these qualities in much greater abundance than Private Clouds, or they soon will, once Public Clouds have matured a bit more.

Furthermore, there are many reasons why companies don’t really want Private Clouds: they’re far more difficult to build than anybody thinks, they’re surprisingly expensive and difficult to secure, and they don’t offer the elasticity benefit so important to Cloud Computing. In fact, many purported Private Clouds currently up and running aren’t really Clouds at all, because they lack the automated provisioning, configuration, management, and monitoring that differentiate true Clouds from nothing more than virtualized data centers.
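
To make “automated provisioning” concrete: in a true Cloud, a new server is something a script requests from an API, not something an administrator builds by hand. Here is a minimal, illustrative sketch in Java against OpenStack’s Nova compute API; the endpoint, tenant ID, token, image, and flavor values are placeholders rather than anything from a real deployment.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class ProvisionServer {
        public static void main(String[] args) throws Exception {
            // Hypothetical endpoint, tenant, token, image, and flavor values
            URL url = new URL("http://cloud.example.com:8774/v2/TENANT_ID/servers");
            String body = "{ \"server\": {"
                    + " \"name\": \"app-node-01\","
                    + " \"imageRef\": \"IMAGE_UUID\","
                    + " \"flavorRef\": \"2\" } }";

            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setRequestProperty("X-Auth-Token", "KEYSTONE_TOKEN"); // from a prior Keystone login
            conn.setDoOutput(true);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(body.getBytes(StandardCharsets.UTF_8));
            }
            // 202 Accepted means Nova has queued the build; polling the new server's
            // status afterward is what makes the provisioning genuinely automated
            System.out.println("HTTP " + conn.getResponseCode());
        }
    }

If getting a new server means filing a trouble ticket instead of making a call like this one, you have a virtualized data center, not a Cloud.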

What do all these provocative points mean for OpenStack? Bottom line: OpenStack is stuck in a HAL 9000 moment. Remember HAL 9000, the sentient computer from 2001: A Space Odyssey that tried to kill the crew? As we learned in the sequel, the reason HAL went bonkers was that it had been given contradictory instructions.

Just so with OpenStack: while it supports both Public and Private Clouds, the OpenStack enterprise story is a Private Cloud story. Only nobody really wants a Private Cloud. So what should OpenStack build for the enterprise, and what should it say about what it has to offer?

“You can’t do that, Dave…”


Posted by Bradley Jones on November 26, 2013

We've added a new blog series to DevX entitled Enterprise Issues for Developers. Look for regular entries to start appearing on the site in the very near future. These articles and blog entries will focus on some of the core topics that impact developers working in a corporate setting or on large-scale projects.

We hope you like the new blog. Remember to comment and give us your feedback!


Posted by Jason Bloomberg on November 20, 2013

My recent article on why the IoT may be DoA (dead on arrival) seems to have shaken something loose at large telecommunications firm Verizon. Today the company announced that it is rolling out digital certificate services for the Internet of Things (IoT), so now we can all implement secure Machine-to-Machine (M2M) communication with nary a care in the world.

Not so fast. Yes, putting digital certificates on a wider range of devices is a step in the right direction. But first, there’s no new technology here; the announcement is more about pricing than anything else. Second, many M2M devices can’t support digital certificates, as I explain in my article.

Even so, any device with storage, a processor, and a power supply should have no problem storing and using certificates, which covers a large swath of the IoT to be sure. So can we at least say that said swath is secure?

Not really. When a device is communicating with a server, then yes, a digital certificate on the device may help to secure that device’s interactions with the server. But M2M suggests interactions between devices on a peer-to-peer basis with no server involved. Such interactions have complications that traditional identity management scenarios don’t cover.
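
To make the device-to-server case concrete, here is a minimal sketch in Java of a device presenting its own certificate over mutually authenticated TLS. It illustrates the general pattern only, not Verizon’s service; the keystore, truststore, and hostname are placeholders.

    import java.io.FileInputStream;
    import java.security.KeyStore;
    import javax.net.ssl.KeyManagerFactory;
    import javax.net.ssl.SSLContext;
    import javax.net.ssl.SSLSocket;
    import javax.net.ssl.TrustManagerFactory;

    public class DeviceClient {
        public static void main(String[] args) throws Exception {
            char[] password = "changeit".toCharArray();

            // Device identity: a keystore holding the device's certificate and private key
            KeyStore deviceKeys = KeyStore.getInstance("PKCS12");
            deviceKeys.load(new FileInputStream("device.p12"), password);
            KeyManagerFactory kmf =
                KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
            kmf.init(deviceKeys, password);

            // Trust anchor: the CA that issued the server's certificate
            KeyStore trusted = KeyStore.getInstance("JKS");
            trusted.load(new FileInputStream("truststore.jks"), password);
            TrustManagerFactory tmf =
                TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
            tmf.init(trusted);

            // Mutually authenticated TLS: the server verifies the device, the device verifies the server
            SSLContext ctx = SSLContext.getInstance("TLS");
            ctx.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
            try (SSLSocket socket =
                     (SSLSocket) ctx.getSocketFactory().createSocket("m2m.example.com", 8443)) {
                socket.startHandshake(); // fails if either certificate cannot be validated
                System.out.println("Negotiated cipher: " + socket.getSession().getCipherSuite());
            }
        }
    }

Notice what the sketch assumes: a server on the other end that requests the device’s certificate and decides whether to trust it. In true peer-to-peer M2M, each device has to play both roles and make that trust decision itself, which is exactly where traditional identity management runs out of road.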

Verizon’s Managed Certificate Services are a good starting point, but there’s much more work to be done. And we still need to figure out how to secure RFID tags and all other sensors that don’t have the internal capacity to deal with certificates.


Posted by Jason Bloomberg on November 19, 2013

I thought the 9000-person crowd at Amazon re:Invent last week was large, but by all accounts, Salesforce.com’s Dreamforce this week in San Francisco puts re:Invent to shame. And all those throngs of aficionados heard one main announcement from Salesforce today: Salesforce 1.

Billed as a platform for the Internet of Things (IoT) – excuse me, the Internet of Customers – Salesforce 1 has new APIs, mobile tools, and more for connecting apps, devices and customer data.

Slick. Sexy. And unquestionably Cloudy. What’s not to like?

The problem is, I doubt the Salesforce 1 team read my recent article on why the IoT may be DoA (as in dead on arrival). The reason? Security. As in, the IoT doesn’t have any to speak of. All those RFID tags, smart refrigerators, cyber-aware automobiles, and intelligent traffic lights blink away, ripe for the hacking.

Now here’s Salesforce, building an Internet of Customers, with nary a word about security in the marketing verbiage. Sure, they discuss security somewhere in the technical details, but the bottom line is that neither Salesforce nor anyone else really has a clue how to secure the IoT.


Posted by Jason Bloomberg on November 15, 2013

Not a moment after the virtual ink dried on my recent blog post pointing out that the more mature Hadoop becomes, the less of a Big Data tool it is, I received news that cemented this counterintuitive statement. I had a conversation at re:Invent today with SyncSort, an old-guard mainframe ETL vendor that has leveraged its deep expertise in sorting algorithms to revamp the inner workings of MapReduce for the new Hadoop 2 release. During the conversation, however, they mentioned a research project they’re working on: Hadoop on zLinux.

zLinux, of course, is Linux running on IBM zSeries mainframes. Linux on the mainframe has been around for a few years now, but the notion of running Hadoop on it is a glorious and unexpected mixing of old and new. The advantages, however, are very real.

Mainframes are still the workhorse of the enterprise data center. They are blisteringly fast and remarkably reliable. Those qualities, however, are not the whole story.

Perhaps the greatest bottleneck in any large-scale Hadoop deployment is the local network. The Hadoop Distributed File System (HDFS) spans dozens or even hundreds of nodes, and all your Big Data must get onto them somehow.
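
To see why, consider what loading data into HDFS actually involves. Here is a minimal sketch using the standard Hadoop FileSystem API (the NameNode address and paths are made up): each block of the file is pipelined across the network to multiple DataNodes for replication, and that replication traffic is exactly where the local network becomes the bottleneck.

    import java.io.BufferedInputStream;
    import java.io.FileInputStream;
    import java.io.InputStream;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class LoadIntoHdfs {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020"); // hypothetical NameNode

            try (FileSystem fs = FileSystem.get(conf);
                 InputStream in = new BufferedInputStream(new FileInputStream("/data/clicks.log"))) {
                // Each block (64-128 MB by default, depending on version) is copied,
                // by default, to three DataNodes over the cluster network
                FSDataOutputStream out = fs.create(new Path("/bigdata/clicks.log"));
                IOUtils.copyBytes(in, out, conf, true); // true = close the streams when done
            }
        }
    }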

Put HDFS on zLinux, however, and all those nodes sit on the same physical server, no larger than a refrigerator. The mainframe’s internal backplane handles the traffic to and between the nodes, lightening the load on the network. Hadoop on steroids.

Hadoop on zLinux isn’t ready for prime time yet, but once it is, expect to see Hadoop on mainframes, no matter how strange that sounds. Use the right tool for the job, after all.


Posted by Jason Bloomberg on November 14, 2013

Some random thoughts from Amazon’s second Cloud conference, re:Invent, in Las Vegas.

There are 9,000 attendees here, easily making it one of the largest Cloud conferences ever. But the crowd is about 98% male. Where are the women of the Cloud?

Amazon is rolling out WorkSpaces, their new Cloud-based desktop virtualization service. It will clearly shake up the desktop virtualization marketplace, but it’s unlikely to expand the market by much. While admins love desktop virtualization, users hate it, because a blip in the network connection can hose your entire desktop experience.

The key to successful DevOps is beer. DevOps requires technological, process, and cultural changes, with the latter being the most difficult. Work through the cultural changes by helping developers and ops people get to know and like each other. That’s where beer comes in.

IBM is going after Amazon, but Amazon isn’t worried. After all, Amazon’s cloud both works and makes them money. At the same time. Imagine that!

Workload migration and Cloudbursting have largely been vaporware up to this point, because workloads are typically complex and distributed, thus difficult to migrate. But no longer – CliQr has cracked this nut in spectacular fashion.

Amazon has a hard time opening the kimono. The press weren’t invited to the analyst briefings, and neither press nor analysts were invited to the enterprise summit.

I’m going to miss the Pub Crawl tonight. My wife joined me on this trip, so I’d rather have a quiet evening with her. That means my evening will be only 50% male, while the pub crawl is bound to be in the 98% vicinity. Have fun, guys!


Posted by Jason Bloomberg on November 12, 2013

The burgeoning Cloud marketplace generates so much noise and smoke that it’s unusual for a significant event not to create a hullabaloo. But that’s just what happened when IBM abruptly announced the shuttering of SmartCloud Enterprise. Why aren’t customers up in arms? Where’s the consternation? Why aren’t pundits forecasting Armageddon?

Clearly, the lack of noise from SmartCloud Enterprise customers has some significance. Precisely what that significance is, however, is open to guesswork.

The party line from IBM is that they’re migrating SmartCloud customers to SoftLayer. They’ll even help with the migrations. So, are SmartCloud customers OK with that?

Perhaps, but workload migration is easier said than done. Sure, if your workloads are simple storage instances or lightweight Web sites, then migration is a breeze. But what about distributed Cloud-based apps running on multiple instances? Or what about complex hybrid apps with many integrations between Cloud-based and on-premises systems? Such migrations can be complicated and difficult.

However, nary a peep from customers with such problems. My guess? Because IBM didn’t have any such customers. In fact, it’s not clear that SmartCloud Enterprise was really a Cloud at all.

After all, IBM lost an enormous Private Cloud buildout deal for the CIA to Amazon. No amount of protesting could convince the government that it had made an improper decision. Why? Because the technology in the IBM proposal didn’t offer autoscaling that met the CIA’s requirements. No autoscaling, no elasticity. No elasticity, no Cloud. Cloudwashing at its finest.

IBM’s move is not without its own irony, either. It was only a few weeks ago that the punditocracy dinged Google for not “futureproofing” their Cloud offerings sufficiently to assuage enterprise concerns. And now here’s IBM, the enterprise’s vendor of choice, proving they’re only too happy to flush enterprise customers down the toilet if it suits them.

Assuming SmartCloud Enterprise had any enterprise customers, that is.


Posted by Jason Bloomberg on November 8, 2013

Among the big news in the world of Big Data is the impending release of Hadoop 2, a major refactoring of the popular Big Data processing tool. This release is notable not because it offers plenty of new bells and whistles, but rather because the Hadoop team has cleaned up many of the limitations and inconsistencies in the original Hadoop code.

At the core of the new release is YARN, a new cluster resource manager that both supports and displaces MapReduce: MapReduce lives on as a data processing engine, but YARN takes over the cluster resource management duties that MapReduce used to handle itself. Hadoop 2 is also even more scalable than the previous version, and it supports multitenancy – a feature that makes it better suited to running enterprise data warehouses.
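
To see what that division of labor means in practice, here is a bare-bones sketch of the classic WordCount job submitted the Hadoop 2 way. The only Hadoop 2-specific line is the mapreduce.framework.name setting, which hands the job to YARN’s ResourceManager rather than the old JobTracker; the mapper and reducer are simply the stock example, included so the sketch is self-contained.

    import java.io.IOException;
    import java.util.StringTokenizer;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCountOnYarn {

        public static class TokenizerMapper
                extends Mapper<LongWritable, Text, Text, IntWritable> {
            private static final IntWritable ONE = new IntWritable(1);
            private final Text word = new Text();

            @Override
            protected void map(LongWritable key, Text value, Context context)
                    throws IOException, InterruptedException {
                StringTokenizer itr = new StringTokenizer(value.toString());
                while (itr.hasMoreTokens()) {
                    word.set(itr.nextToken());
                    context.write(word, ONE);
                }
            }
        }

        public static class IntSumReducer
                extends Reducer<Text, IntWritable, Text, IntWritable> {
            @Override
            protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                    throws IOException, InterruptedException {
                int sum = 0;
                for (IntWritable v : values) sum += v.get();
                context.write(key, new IntWritable(sum));
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Hadoop 2: the job runs as one YARN application among many;
            // resource management now belongs to YARN, not to MapReduce
            conf.set("mapreduce.framework.name", "yarn");

            Job job = Job.getInstance(conf, "word count on yarn");
            job.setJarByClass(WordCountOnYarn.class);
            job.setMapperClass(TokenizerMapper.class);
            job.setReducerClass(IntSumReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);
            FileInputFormat.addInputPath(job, new Path(args[0]));
            FileOutputFormat.setOutputPath(job, new Path(args[1]));
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }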

And therein lies the irony. Hadoop 2 promises to become the engine that supports data warehouses in enterprises around the world, a better mousetrap for catching traditional, familiar mice. In other words, the better Hadoop gets, the less of a Big Data tool it becomes.

Remember that Big Data are data sets that traditional tools are unable to deal with adequately, necessitating cutting-edge technology that takes unconventional approaches. Hadoop version 1 clearly qualified. But now that Hadoop 2 is positioned to dominate the staid, traditional enterprise data warehouse market, it will pass the Big Data moniker to newer, less mature technologies that are emerging to deal with challenges that traditional tools – like Hadoop – are poorly suited to tackle.

Oh, the irony!


Posted by Jason Bloomberg on November 5, 2013

As the market gradually settles on a clearer definition of Big Data, and as the tools that target Big Data mature, more and more organizations are seeing the value in Big Data. As a result, enterprises far and wide are putting together Big Data strategies.

Nonsense, I say! Simply writing the phrase Big Data strategy indicates that you either don’t understand Big Data, or even worse, you don’t understand the word strategy.

Your strategy delineates how your organization is going to achieve its long-term goals. What markets will you be in? How will you differentiate yourself? How will you establish and maintain a barrier to entry? How will you gain market share? What innovation is important to your organization, and what are the goals of that innovation?

How you achieve your strategy is what we call tactics. The plans, the technologies, the assets we bring to bear and how we’re going to use them are all tactics. And those assets include your data – even your Big Data.

The phrase Big Data refers to data sets at the bleeding edge of our ability to deal with them, as well as the tools, technologies, and approaches that are emerging to tackle such challenges. But either way, Big Data are tools in your tool belt. They can help with your tactics. But don’t confuse tools with strategy.

Of course, this argument applies to any tools, technologies, or approaches. Cloud strategy, SOA strategy, etc. – all are nonsense. Do carpenters have hammer strategies? Do artists have paintbrush strategies? Perish the thought.

So why are so many organizations putting together Big Data strategies? Because Big Data is a hot buzzword, of course. And everybody knows, buzzword strategies are a whole lot easier and sexier than real strategies. After all, executing on real strategies is, like, real work.


Posted by Jason Bloomberg on October 31, 2013

Back in the good (bad?) old days, developers wrote code and chucked it over the wall to QA for testing. The QA folks in turn chucked it over the wall to ops.

Too many failed projects to count later, along came Agile. We brought QA into the development process, and in the Extreme Programming case, developers wrote the test plans themselves (with the help of stakeholders), and wouldn’t go home until every test passed. But they still tossed finished code over the wall to ops.

Today, the Cloud is pushing all software teams to automate the operational environment – even if you aren’t actually running production code in a Cloud. Automated ops is the wave of the future.

Automated monitoring, automated management, automated provisioning are the watchwords of the day. But none of these newly minted best practices remove the need for monitoring, management, or provisioning. Instead, they enable a broader range of people to handle the tasks using increasingly lightweight, sophisticated tools. Even developers.

Just as developers added automated testing to their toolbox, today they must add automated monitoring as well. Only we’re not talking about simply watching red lights and green lights. Instead, developers must write their own monitors and every time they roll out some code, they must make sure all the monitors pass. Sound familiar?
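
Here is a minimal sketch of what such a developer-written monitor might look like; the endpoint and latency budget are invented for illustration. The point is the shape of the thing: the monitor passes or fails, and a failure blocks the rollout just as a failing test would.

    import java.net.HttpURLConnection;
    import java.net.URL;

    public class CheckoutServiceMonitor {
        // Hypothetical endpoint and latency budget; in practice these live in configuration
        private static final String HEALTH_URL = "https://checkout.example.com/health";
        private static final int MAX_LATENCY_MS = 500;

        public static void main(String[] args) throws Exception {
            long start = System.currentTimeMillis();
            HttpURLConnection conn = (HttpURLConnection) new URL(HEALTH_URL).openConnection();
            conn.setConnectTimeout(MAX_LATENCY_MS);
            conn.setReadTimeout(MAX_LATENCY_MS);
            int status = conn.getResponseCode();
            long elapsed = System.currentTimeMillis() - start;

            boolean pass = (status == 200) && (elapsed <= MAX_LATENCY_MS);
            System.out.printf("health=%d latency=%dms -> %s%n",
                    status, elapsed, pass ? "PASS" : "FAIL");
            // A non-zero exit code fails the deployment pipeline, just like a failing test
            System.exit(pass ? 0 : 1);
        }
    }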

DevOps is only a burden on developers if they have inadequate automation tools to handle ops from the perspective of working software, just as Agile was a burden without the appropriate automated testing tools. With the right tools and the right practices – using the tools properly is every bit as important as the tools themselves – continuous build, continuous deployment, and continuous integration are well within reach, or even easy. But if you still find that these activities are too difficult, you’re not doing them properly.

