August 30, 2003
SchoolForge and GovernmentForge: Open Source Projects for Government
Everyone knows about SourceForge, which bills itself as "the world's largest Open Source software development website, with the largest repository of Open Source code and applications available on the Internet." SourceForge.net provides free services to Open Source developers including project tracking and collaborative development tools. Two relatively new sites have sprung up, taking their inspiration from SourceForge, to support open source software development and sharing for schools and governments.
SchoolForge provides a place where proponents of open source software in schools can get together to share information and collaborate. From their mission statement:
Schoolforge's mission is to unify independent organizations that advocate, use, and develop open resources for primary and secondary education. Schoolforge is intended to empower member organizations to make open educational resources more effective, efficient, and ubiquitous by enhancing communication, sharing resources, and increasing the transparency of development. Schoolforge members advocate the use of open source and free software, open texts and lessons, and open curricula for the advancement of education and the betterment of humankind.
At present, there's not much there in terms of actual projects building actual open source software, but perhaps that will come in time. There is a nice collection of information and a purpose. GovernmentForge is a site dedicated to the promotion and use of open source software in state and local governments. They have an initial project called Leopard, which stands for LAMP eGovernment OpenSource Project Augmented Relational Database. That's an awfully long way to go for an acronym, but whatever. The goal of Leopard is to create a pre-packaged, easily installable LAMP (Linux, Apache, MySQL and PHP) system for use in eGovernment projects. I've thought for a long time that someone needs to make setting up a LAMP system easier and maybe this is the answer.
So far, SchoolForge and GovernmentForge are just ideas with a website. What's going to make them work is fostering the sense of community spirit that has to exist in open source projects. If everyone's coming to see what they can get rather than how they can participate, these two sites will never bloom. That would be a shame---there's much that could be done through cooperation, and open source software is a wildly successful paradigm for code reuse and sharing. Schools and governments need to learn how to play the OSS game to reap the benefits.
9:43 AM | Comments () | Recommend This | Print This
August 29, 2003
Extreme Mobility
If you haven't yet read Ray Ozzie's essay on Extreme Mobility, you should. In his usual fashion, it's well written and well thought out. Ray starts off recalling a recent conversation with the CIO of a company with over 1,000 employees who has stopped buying laptops. I reported last month that laptop sales outpaced desktop sales for the first time in May. Ray has some thoughtful analysis of what mobility means in various contexts, defining "usage mobility," "infrastructure mobility," and "participant mobility."
3:33 PM | Comments () | Recommend This | Print This
I'm Way Past Caring About That
Tom Yager says that he's way past caring about things like legitimate anonymous contact in email as his friends and family get buried under a storm of emails and web sites that scam the technically unsavvy. Sifting the good from the bad is too subtle for most folks---it's akin to debugging. As Tom says:
As much as the tech elite likes to make fun of average Internet users -- including nontechnical corporate users -- average users don't live in straw huts and communicate with drums. Most have flush toilets, cell phones, satellite TV, and caller ID, and use them appropriately. They're buried in technology, most of which is -- as it all should be -- invisible. But computers need constant care to keep their users safe.
The answer? Tom calls for a single public standard for verifiable digital identity where folks can voluntarily register their ID. Then my Mom could simply not accept mail from folks who didn't identify themselves. Of course, there are plenty of issues around this, but that's what makes it fun!
I fear, however, that like DNSSEC, we techies aren't capable of building this system without way over-engineering it. We naturally want to think of, and protect against, every contingency and that means that it will be 10 years just getting through the standards process, let alone the implementation.
Here's my answer to the problem, much as I hate to say it. Microsoft and the Post Office have already teamed up on electronic postmarks for Word docs. They ought to team up to create an open, easy way to put an inexpensive, identity-proofed certificate into Outlook and Outlook Express and then add the cost into the cost of Office and Windows. A big campaign could drive users to protect themselves from scams and spam by using the "free certificates" they get with every copy of Windows and Office to create an ID and only accepting email from those who have one.
I know, you're going to hate this idea. You're going to say "Windley's gone off the deep end!" and "I'll never use Microsoft's ID program!" and "Microsoft will turn it to commercial advantage!" All of those things may be true, but the alternative, in my opinion, is much worse. At some point, legislatures, spurred on by their constituents, will undertake to solve this problem, and no matter how well intentioned their solution is, you'll like it even less---I promise.
8:05 AM | Comments () | Recommend This | Print This
August 28, 2003
RSS As an Alternative to Email
An article worth reading on the use of RSS as an alternative to email for newsletter publishers. The author, Steve Outing, talks about how Chris Pirillo's Lockergnome is actively moving people to RSS to avoid the hassles of email. Even though Chris' email newsletters are opt-in, they frequently get caught in spam filters. I know that I have to explicitly add each of the newsletters I subscribe to to my whitelist or SpamAssassin will kill them as spam.
Of course, one of the reasons people publish email newsletters is to sell ads in them, so this brings up the whole issue of ads in RSS feeds. If you subscribe to any of InfoWorld's or Lockergnome's feeds, you'll have seen them. I prefer InfoWorld's style a little since it's clearer to me what's an ad and what's content. I think it adds more credibility to the real story when the ads are clearly identified.
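As a sketch of what a clearly identified ad might look like as an RSS item (the markup here is purely illustrative, not InfoWorld's or Lockergnome's actual format):

<item>
  <title>[Sponsor] Acme Hosting: managed LAMP servers</title>
  <link>http://ads.example.com/acme</link>
  <description>Advertisement. Labeling the item as a sponsor message
  up front keeps it from masquerading as editorial content.</description>
</item>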
I was thinking about this today, before I saw Outing's article, in the context of my desk. My desk is much cleaner since I left the state, primarily because I don't get nearly the volume of magazines. I read most of what I see now online. That's nice, but I miss the friendly "event notification" that a magazine arriving at my desk signifies. This is, of course, what publish and subscribe is all about, in the physical world as well as the virtual one. Email provides this same sort of event notification, but as Outing and others point out, it's being drowned in a sea of viruses.
RSS offers an alternative that can provide all the benefits of an email newsletter or even a magazine subscription. There's still some work to be done, but I look forward to the day when I can subscribe to InfoWorld or CIO Magazine and get the same product delivered online that I get offline. This won't happen if the publishers can't insert ads.
8:58 AM | Comments () | Recommend This | Print This
August 27, 2003
A Better Apache Than Apache
An article on ServerWatch entitled "Making an Open Source Server Enterprise Ready" reviews the Covalent Enterprise Ready Server, which claims to offer increased reliability and security over the stock Apache server. One of the features that interests me is the Covalent Management Portal, a hardened Web-based server configuration tool that can manage from one to "hundreds" of Covalent ERS servers. This includes support for SNMP v1, SNMP v2c, and secure SNMP v3. At $1500/CPU, it's not cheap.
10:46 AM | Comments () | Recommend This | Print This
August 26, 2003
Western CIO Summit: Data Exchange
Des Vincent, the CIO of Northern Ireland, is the first speaker on the data exchange panel. I'm enjoying listening to him very much. He's discussing the COINS system that linked vehicle information from both public and private sources in NI. Not surprisingly, the politics was the most difficult part. He mentioned that there were over 200 databases in NI that contained names and addresses of citizens. I'm intrigued that, coincidentally, it's almost exactly the same number as we found in Utah.
Mark Blatchford, from the Social Security Administration, is presenting information about the eVital project and its predecessor pilot, EVVE. EVVE was a pilot project with eight states that provided SSA with an electronic way to match vital record data. Since SSA didn't want to be a repository of vital record data, all they got back from a request to a state was a green or red light. The project was based on XML and worked through a hub. One of the big challenges to this program was financial. States have become dependent on the revenue driven by the SSA's requirement that citizens present a certified copy of their vital records when they talk to the SSA. Now that the SSA is going direct, it's not likely to pay the same amount for a data query that states could charge each citizen for a certified copy of their records.
2:16 PM | Comments () | Recommend This | Print This
Western CIO Summit: Enterprise Architectures
One of the panels is on Enterprise Architectures. The panel consists of:
- Curtis Wolf, CIO, North Dakota
- Val Oveson, CIO, Utah
- Robb Stoddard, CIO, Alberta
- Moira Gerety, CIO, New Mexico
- Bob Haycock, Manager, FEAPMO
Curtis is talking about North Dakota's enterprise architecture program. They have made a lot of progress, although Curtis says it's been sidetracked a little by agency angst over a legislatively mandated centralization of many IT functions, including email, database, and server administration. Curtis believes that the EA process would have eventually led to the same conclusions and done so in a way that wasn't so upsetting to the business. I think he's right. I've always believed that more centralized administration of IT functions is a fact of life that will happen, and it's much better for an IT organization to decide on its own how that should happen than to wait for someone else to decide for it.
Val is describing Utah's governance structure, put in place by the Governor last August, which uses the Cabinet as the IT project portfolio manager and creates a dotted-line relationship between the CIO's office and the CIOs in each agency. Still, he says uncertainty around governance is the hardest question in putting an EA into place (every other CIO on the panel shakes their heads). The vision in Utah is clear. Application development is moving forward (witness eRep, for example). Infrastructure consolidation has not happened because the required political capital isn't available. Val also mentioned the new strategic plan and the hard work that agencies put into it. The strategic plan does an excellent job of outlining seven great goals for eGovernment in Utah and listing objectives for each one. As always, the proof will be in the implementation, but getting the governance done is a great start.
Moira is one of the new CIOs who came into office with last year's election cycle. Her background is in the private sector. It's clear as you hear her speak that she's got an aggressive new Governor who's ready to make some changes. In this case, that translates into a desire to move money from IT into programs, which leads to less emphasis on technical architectures. She makes a case for open source and open systems, and notes that architecture is impacted by procurement. Being new, she's concentrated on the governance issue and is working toward a sub-cabinet group of agency CIOs. They are focused on three main areas of state functions: client services, resource management, and government operations.
Robb started off talking about the creation of Alberta SuperNet, a network that connects every school, library, health facility, and government office. This gave Robb a wide area network that allowed him to ask the agencies: how would you do business if bandwidth and storage were not an issue? The natural outcome in many cases was more consolidation. Robb views his role as defining the rules, creating the rulebook (the EA), and playing the part of referee as the agencies "play the game." Interestingly, after getting a governance model in place, Alberta started with a data architecture. This is unusual, but also a good way to promote data sharing. It's hard to do because executives want to fund "programs," not data.
Bob, as director of the Federal EA Program Management Office, is in much demand, given OMB's mandate that new money won't go to programs without an EA. He's responsible for directing the development and implementation of the Federal EA. There are four primary objectives:
- Identify opportunities to leverage technology and alleviate redundancy, or to highlight where agency overlap limits the value of IT investments.
- Establish "line of sight" contribution of IT to mission and program performance.
- Facilitate horizontal (cross-Federal) and vertical (Federal, State, and Local) integration of IT resources.
- Support a more citizen-centered, customer-focused government that maximizes IT investments to better achieve mission outcomes.
11:31 AM | Comments () | Recommend This | Print This
Western CIO Summit: eAuthentication Panel
I'm at the Western CIO Summit in Park City. My panel on eAuthentication was the first one this morning. Also on the panel were Glenn Miller of the University of North Dakota's NDGRO program, Steve Timchak, the eAuthentication program manager at the GSA, and Chuck Chamberlain, who does business development for the US Postal Service. Steve talked about the eAuthentication initiative and provided some clarifying information about what it is and what it isn't. Essentially, eAuthentication is a policy decision point (PDP) for the federal government.
Chuck talked about the US Postal Service's In-Person Proofing and Electronic Postmark initiatives. Both of these are quite interesting. In-person proofing allows a private company to create an electronic form that can be printed and taken to the local post office for in-person authentication against a physical ID. This is perfect for certificate authorities and others who need strong proof that a certificate is being issued to the right person. Electronic postmark is exactly what you'd think it is. Chuck mentioned that this will be part of Microsoft Word soon, so that you can get an electronic postmark on a Word document before you email it off to someone.
My role was to be controversial (a role I had no trouble fitting myself for). My talk was about the proper role for government in digital identity. I've put my slides online along with a paper that discusses my points in more detail. In short, my primary thesis is that government has played an important foundational role in identity in the physical world, but has abdicated its role in the digital world, hoping that private interests will somehow fill the gap. I generated plenty of comments, both pro and con. That was the point: raise some awareness.
10:30 AM | Comments () | Recommend This | Print This
Managing BlackBerrys and Other PDAs
In a tale that reminds us that IT organizations still haven't come to grips with the management of PDAs and other palm-sized computers, this Wired magazine article reports that the BlackBerry of a former Morgan Stanley VP, chock full of all sorts of corporate information, was recently purchased on eBay for $16. The VP had left the company several months earlier and the IT department failed to wipe it clean. Naturally, they want to make it his fault. Quoting from the article:
"We trust employees with a lot of sensitive information; that's why we have these procedures in place. Someone who is in mergers and acquisitions and is a vice president should be very aware of his responsibilities," [said Morgan Stanley's Quintero]. But Korn/Ferry's Steinbock said, "If they were vigorously wanting to protect their intellectual property, I would hardly think that's enough. "Since it's information that would harm them, not him, it's perplexing that they wouldn't be more aggressive about retrieving that information and follow up with him. The company obviously doesn't have controls in place to take care of its own intellectual property, and that's really their fault," she said. In fact, the VP said that when the company closed his e-mail account on his last day of work, he thought any data on the BlackBerry would be deleted remotely by the server. "I just assumed it was all taken care of," he said.
The BlackBerry belonged to the executive because Morgan Stanley has them buy their own. This policy seems shortsighted. Sure, the company saves a few bucks, but it makes it much harder to control the information. Furthermore, the IT department will never be able to get its arms around a collection of incompatible devices. Companies need to manage their IT and the data, not the employees.
I've heard companies brag about things like "zero-day start," where an employee is up and running with all the accounts, permissions, gear, etc. that they need to do their job the first day they show up. How many companies are good at turning everything off? I still had an email address over a month after I left the State of Utah (which gave the conspiracy theorists something to worry about). I'm sure if I'd checked, I'd have found I still had access to all kinds of data as well. Not that I'm picking on Utah---they just happen to be my most recent experience. As I've said, this sort of thing is common.
6:07 AM | Comments () | Recommend This | Print This
August 25, 2003
Loosely Coupled Conversations
My July column from Connect Magazine has now been published on their web site. It's called Loosely Coupled Conversations. The article starts off:
I heard science fiction author Greg Bear say recently that sci-fi authors have conversations in slow motion. One author writes something in his book to which another author responds in his own book, and a conversation develops over a period of years. Weblog authors are doing the same thing with two interesting differences:
- The cycle times for Weblogs are much shorter because Weblogs are easier to publish.
- The technology used in Weblogs allows you to discover who's responding more easily than in print.
7:59 PM | Comments () | Recommend This | Print This
Sharing Code and Data
States sometimes share applications and code. Utah, for example, was very aggressive in developing an offender tracking system (called OTRACK) and in getting several other states to sign up to use the system and contribute to its development. An article in the August issue of Governing talks about this practice. Part of the problem with all this talk is that most of it focuses on 1990s terms and technology. It's all about components and system compatibility. Witness this passage, which closes with a quote from former Pennsylvania CIO Charlie Gerhards:
The biggest problem, though, is that as much as agencies may want to share components, their computer systems aren't designed to do that very well. Governments typically run on a hodgepodge of different systems, and programs may not transfer from one agency to another---let alone between states. Oftentimes, sharing requires so much customization work that agencies actually are better off starting from scratch. Before any kind of formal component exchange will work, the states must adopt systems that are more compatible with each other. "The way you optimize this thing," says former Pennsylvania CIO Charlie Gerhards, "is for everyone to have strict standards when they're developing applications and adhering to that."
What states need to focus on, instead of components, is the two areas that have been shown to promote sharing and reuse: open source software and the Web. I'm not just talking about service-oriented architectures and Web services, although those are certainly a part of it. We'd be far ahead to just move toward more effective data sharing.
Toward the end, there's a section on open source software that highlights Jim Willis' OSS efforts in Rhode Island. But this ends with some off-putting language stating that "many skeptics wonder whether such do-it-yourselfism might lead to more trouble than it's worth." Somehow we're not getting the message across that OSS isn't some odd idea that we might want to try out, but a proven development model that should be employed by government.
The article itself doesn't talk about data sharing, but a Tom Davies column in the same issue calls information sharing "The Missing Link." Tom cites a book by Donald Marchand that
...concludes that the exchange of information - between individuals on a team and across functional and organizational boundaries - is one of the critical information values that senior executives need to instill in their organizations. Values and behaviors such as information sharing are, the book suggests, just as important to increasing organizational performance as is having the latest and greatest technology. Organizations that demonstrate a high IO do so after years of promoting the right information-sharing values.
Tom asks when we're going to put the "I" back in "CIO."
There are some successful instances of governments sharing code and data, but CIOs need to make it a key focus area. Of course, a lot of the talk about Enterprise Architectures is rooted in a desire for more interoperability and sharing. This is good: an EA will enable a lot more than just sharing. But OSS and service-oriented architectures are relatively simple things that can be done, even in the absence of a comprehensive EA, that will promote sharing.
10:13 AM | Comments () | Recommend This | Print This
August 23, 2003
Distributed Computing Course Topics
I teach a course on distributed computing. In the past, the course has focused heavily on N-tiered architectures. This time, I'm taking a different approach. I'm going to divide the course into three parts: one on 2-tier architectures, one on n-tier architectures, and one on service oriented architectures. I wrote earlier about the texts for the course. Today while I was sitting at a meeting wasting time, I decided to put together a topic outline on the way to creating a syllabus. Here it is:
- Introduction
- servers
- Linux and lab information
- Part I: 2-Tier Distributed Architectures
- Architecture: presentation and data layer
- Transport: HTTP
- Methods: GET/POST/PUT/DELETE
- WebDAV
- MIME
- CGI
- Presentation layer: JSP
- Data layer: JDBC
- Case Study
- Project: Build a system for registering for courses. Allow self-service for students. Courses should be searchable and there should be provisions for retrieving and reviewing course information. Administrative interface allows courses and students to be managed.
- Part II: N-Tier Distributed Architectures
- Architecture: properties and benefits
- Component Transaction Monitors
- Enterprise Java Beans
- Entity Beans: container- and bean-managed persistence
- Session Beans
- Patterns: DTOs
- Transactions
- Case study
- Project: Build a reservation system for signing up for courses. The courses have a price and require a credit card for reservation. Uses transactions for ensuring atomicity. Save student information for doing multiple courses. Has administrative interface for adding courses, deleting reservations, managing students.
- Part III: Service Oriented Architectures
- Architecture properties and benefits -- decentralized vs. distributed computing. Programming the Internet
- XML and Schemas
- XSL
- Web services and SOAP
- UDDI
- Synchronous and Asynchronous transport
- Common patterns for SOA design
- Use with EJB
- Identity and Security Issues
- The role of intermediaries
- Case Study
- Project: Add service layer to Project II. Use credit card authorization as a WS
- Conclusion: Making architectural decisions or when to use what
This is still a little rough, but it covers the main points. I try not to make it a class about specific technologies, but to use those technologies as examples; obviously, the students do their projects using specific technologies. I'd appreciate comments on things I've left out that absolutely have to be there or things I've included that seem out of place. One note on the language choice: the students already know Java and should be proficient in it, so my intention is to use JSP and Java servlets rather than Perl, PHP, or Python, even though I'm aware of their widespread use in 2-tier architectures. That will be a point made in class.
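To give a flavor of the Part I material, here's a minimal sketch of the 2-tier pattern: a servlet acting as the presentation layer and talking to the data layer over JDBC. The class name, table, and connection string are my own placeholders, not actual course code.

// A minimal 2-tier sketch: servlet as presentation layer, JDBC as data layer.
import java.io.IOException;
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CourseSearchServlet extends HttpServlet {
    public void doGet(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        res.setContentType("text/html");
        PrintWriter out = res.getWriter();
        out.println("<html><body><h1>Matching Courses</h1><ul>");
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:mysql://localhost/registrar", "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                 "SELECT id, title FROM courses WHERE title LIKE ?")) {
            // A parameterized query keeps user input out of the SQL itself;
            // real code would also HTML-escape the values it prints.
            ps.setString(1, "%" + req.getParameter("q") + "%");
            ResultSet rs = ps.executeQuery();
            while (rs.next()) {
                out.println("<li>" + rs.getInt("id") + ": "
                        + rs.getString("title") + "</li>");
            }
        } catch (Exception e) {
            out.println("<li>Query failed: " + e.getMessage() + "</li>");
        }
        out.println("</ul></body></html>");
    }
}

The nice thing about an example like this is that it carries through all three parts: in Part II the JDBC calls migrate behind entity beans, and in Part III behind a Web service.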
4:31 PM | Comments () | Recommend This | Print This
August 22, 2003
Who's Afraid of Web Services?
Web services can be confusing, maybe even scary. Sometimes it seems that every time you pick up a magazine, there's another Web services protocol to understand. Given the uncertainty in the standards space, perceived security issues, and the complexity of deploying high-reliability Web services, it's no wonder that many enterprises are taking a wait-and-see attitude.
Doug Kaye, in his book Loosely Coupled, defines complex Web services projects as those that are based on asynchronous messaging, require high availability, or involve providing service to external partners. Web services projects that don't share these attributes are usually easy to get approved. But complex projects give seasoned managers pause.
One way to mitigate issues surrounding changing standards, security, and complex deployments is to hire one of the large service companies, like IBM or Accenture, to deliver and manage your company's Web services. But what if your budget doesn't have room for a top-drawer services company? Should you just give up on Web services until all the issues get sorted out?
Another route is to take advantage of a Web services intermediary such as Grand Central Communications or Confluent CORE. Web services intermediaries offer configurable services such as logging, auditing, monitoring, alerting, authentication, and authorization. Grand Central and Confluent CORE differ significantly in how they're deployed: Grand Central is a value-added network that you subscribe to for a monthly fee, while Confluent CORE is a software server that you buy, install, and operate. Both, however, can be used to connect external partners and customers to your Web services flexibly, securely, and reliably.
Web Services Networks
Confluent CORE provides a scaffolding for building a reliable Web services-based application. Because CORE is software you buy and operate, it appeals to industries that want an increased level of control over the communications between them and their partners. CORE works through a set of active intermediaries called "gateways" or "agents" depending on how they are deployed.
Gateways are proxy servers that intercept requests, enforce policies, and then forward requests on to registered services. Clients must be directed at a gateway for it to serve as the proxy for a service. Agents, on the other hand, are policy enforcement plug-ins that are deployed inside a SOAP container. Consequently, the client of the service is unaware of the message interception that is necessary to enforce policies. Multiple distributed gateways and agents can be configured and managed by means of a single policy manager and monitoring engine, which is made up of three integrated but distinct tools: CORE Manager, CORE Monitor, and CORE Analyzer.
CORE is organized around the idea of policy pipelines---centralized repositories of policies that are used to control the behavior of managed Web services. CORE also provides tools for monitoring and managing your Web services network, giving detailed information about the Web services under management. Pipelines are linear series of steps, and each step can be individually created and edited. As new services are added to gateways and agents, default pipelines are created that perform authentication, access control, and logging for the request and the response. These default pipelines can be modified from a catalog of steps. For example, the administrator might choose to swap out basic HTTP authentication for LDAP authentication, or add a completely new step that does message transformation based on XSLT.
Brain Dead Easy
CTO John McDowall of Grand Central Communications wants to make "connecting brain dead easy." In much the same way that the Postal Service or Federal Express creates value by providing a trustworthy, auditable, and convenient way to deliver paper messages to other parties around the world, Web services networks like Grand Central provide a trustworthy, auditable, and convenient way to connect multiple parties in a Web services network. Grand Central solves the problem of matching multiple permutations of technologies and procedures with a set of configurable policies that can be defined by each party.
Trust is one of the greatest values that networks like Grand Central provide to their customers. Trust that you're connecting to the intended partners. Trust that messages will be delivered once and only once. Trust that messages are confidential. Trust that the network will be there when it is needed.
The end result, according to McDowall, is that connecting to the outside world will become a commodity, where linking up services with different implementations of standards or transport will be "point and click." In this kind of self-service environment, the role of consultants changes from supplying armies of programmers who lash together and manage complex systems to providing business process expertise and showing how service-oriented architectures add business value.
Outsourcing your entire Web services development effort can be a costly way to gain a little comfort. Fortunately, there are other strategies you can pursue to get around these problems and move forward with complex Web services projects now.
4:27 PM | Comments () | Recommend This | Print This
Modern Day Screen Scraping
Bill Humphries is writing about creating RSS feeds by screen scraping. He's using curl to get the page, tidy to clean up the HTML, and an XSL program to convert the result into RSS. Because the example he's using makes good use of CSS, he can use XPath to easily grab the right nodes in the HTML doc. Very different from the Perl screen scrapers we were writing four years ago.
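I haven't seen Bill's actual stylesheet, but the XSLT half of the pipeline looks roughly like this sketch, which assumes tidy has produced XHTML and that headlines are marked with a (hypothetical) class="headline":

<!-- Rough sketch: pull headline links out of tidied XHTML, emit RSS 2.0.
     The class name and channel metadata are placeholders of mine. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:xhtml="http://www.w3.org/1999/xhtml">
  <xsl:output method="xml" indent="yes"/>
  <xsl:template match="/">
    <rss version="2.0">
      <channel>
        <title>Scraped Headlines</title>
        <link>http://www.example.com/</link>
        <description>RSS generated by screen scraping</description>
        <!-- XPath does the real work: grab each headline div -->
        <xsl:for-each select="//xhtml:div[@class='headline']">
          <item>
            <title><xsl:value-of select="xhtml:a"/></title>
            <link><xsl:value-of select="xhtml:a/@href"/></link>
          </item>
        </xsl:for-each>
      </channel>
    </rss>
  </xsl:template>
</xsl:stylesheet>

The whole thing is then just curl piped through tidy (with -asxml to get XHTML) and into xsltproc.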
8:36 AM | Comments () | Recommend This | Print This
August 21, 2003
Business Process Outsourcing
There are a number of companies whose IT shops are providing market-competitive, rock-solid IT as a shared service within the company. Some of those companies are wondering how they can turn that into a profit center. To quote a recent analyst report from CIO magazine:
This prospect is attracting a great deal of enthusiastic market interest. Diverse investors---including venture capitalists, investment banks and systems integrators---are placing big bets on the future of business process outsourcing (BPO). The range of strategies they are funding is dizzying, ranging from simple outsourcing to joint ventures to spin-offs, and even the outright sale of shared services operations to third parties. As any corporate development executive who has met with these would-be dealmakers will attest, the shared services value proposition has moved beyond cost reduction; these operations are now viewed as a vehicle for generating substantial shareholder value.
When I first read this article, I thought about Loudcloud. If you remember, they were formed at the height of the ASP hype by Marc Andreessen and some other Netscape refugees to package the knowledge they'd gained about running large-scale Internet services and resell it. They're still around, but as Opsware, and they sell system management software.
The article gives three steps to doing something like this. They're not surprising; they're exactly what you'd expect:
- Stage 1: Become Operationally Efficient
- Stage 2: Become Commercially Capable
- Stage 3: Become Market Competitive
Even though they're, in some respects, self-evident, not enough shared services organizations see anything beyond stage 1. They don't understand that becoming commercially viable ought to be their goal, even if a spin-out or JV isn't in the cards. Becoming commercially capable includes things like having a billing system that works, having pricing that's competitive, knowing your costs, having a product line and knowing what products you offer, and offering good customer service. You can be operationally efficient without being stellar at those, but you can never be a first-class shared services organization until you get at least to stage 2. Every IT shop ought to have a roadmap that leads it, as an organization, to these goals, even if it doesn't plan on going commercial, because that's how it will add value to the larger organization and serve its customers.
4:37 PM | Comments () | Recommend This | Print This
August 20, 2003
Scaling Web Services: The Role of Web Service Intermediaries
I'm going to be speaking at the Enterprise IT Week conference that is part of CDXPO. I'll be in the Web service track. I've proposed the following abstract for the talk:
Many Web services projects never make it out of the "pilot" stage. While free tools and direct connections work fine in small implementations, they fail to scale, suffer from reliability problems, and are difficult to secure. Web service intermediaries provide answers to these problems. While Web service intermediaries may not show up in the standard Service Oriented Architecture (SOA) discussions, they're crucial to successfully implementing large-scale SOA-based systems. This talk will focus on the function of Web service intermediaries and how they can turn a pilot project into a secure, reliable, scalable, production-ready implementation.
The talk will be based on the experience I've gained reviewing intermediary products, including the XML firewall reviews I'm working on right now. As time gets closer to the talk, I'll be posting some of my ideas here.
9:45 PM | Comments () | Recommend This | Print This
Windows Security Exploits
The W32.Blaster worm that struck last week infected millions of computers and caused a lot of IT shops to drop everything to repair the damage. I've talked to the heads of several large IT shops and most of them were affected in significant ways. The ones who weren't had installed the patch from Microsoft before the worm struck. I've written about the problems with too many patches in the past and this just highlights it. An article in ComputerWorld gives a slightly different twist to the problem.
One thing companies ought to pay particular attention to in this last episode is the short amount of time between when the vulnerability was announced and when the worm appeared. Most companies assume they have some time to apply patches and many are afraid to do so automatically. There was less than one month between the announcement of the vulnerability and the appearance of the worm exploiting it. As this time shortens, companies will have to be better and better at applying patches and be willing to do automatic updates. Even the best run IT shops find this to be a challenge. Organizations with a disorganized desktop management strategy will find that they are spending all their time managing desktops---reacting to problems rather than solving them.
6:04 PM | Comments () | Recommend This | Print This
August 19, 2003
DNSSEC and Identity
DNS Security Extensions, or DNSSEC, is a set of extensions to DNS that provides end-to-end authenticity and integrity. In an article in the Business Standard, Paul Mockapetris, the inventor of DNS, talks about DNSSEC and why he thinks it's the answer to many of the identity problems on the Internet. Quoting from the article:
Mockapetris argues that a work-in-progress extension to the DNS specification called DNSSEC is what makes the DNS up to the task of solving most of the identity related issues on the Internet. Unfortunately, since DNSSEC isn't bulletproof (and, according to some, could result in other vulnerabilities), the specification has been a work-in-progress since November 1993, when the DNS working group of the Internet Engineering Task Force (IETF) held its first DNSSEC design meeting. Despite the imperfections of DNSSEC, Mockapetris says that it's time to go for it. "The DNS has been growing for twenty years, but during that time, no progress has been made on securing it.
Paul claims that the problem is that the committee is trying to solve the problem perfectly rather than doing what can be done now. He's got a point. I think part of the problem with UDDI is that it's tried to solve too many problems when 90% of what we want is a DNS-like mechanism for Web services.
The basic idea behind DNSSEC is simple: provide an authentication mechanism for DNS lookups so that it's harder (not impossible) to forge DNS information. That means that you can be relatively certain that the email that claims to have come from windley.org actually did, or that the HTTP request you're processing is actually from your partner at myco.com and not an imposter.
In theory, the implementation is fairly straightforward. Again quoting from the article:
The technology behind these confidence checks uses digital signatures and public key cryptography. For starters, DNSSEC involves the use of secure hash algorithms for the digital signing of the records - called RRSets - that appear in the DNS database. Using its private key, the site could digitally sign the domain mapping information that appears in the DNS and any application that depends on that information could retrieve the matching public key from a special key record (part of the DNSSEC specification) under the DNS entry. Using the public key, the application can verify that the domain mapping information was signed with the private key, which presumably only the website has.
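As a toy illustration of that sign-and-verify flow, here's a sketch using the JDK's generic signature API. Real DNSSEC defines specific record formats (RRSIG, DNSKEY) and algorithms, all of which this ignores; it just shows the principle of signing record data with a private key and verifying with the published public key.

// Toy sketch of the DNSSEC idea using generic JDK public key signatures.
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class DnssecToy {
    public static void main(String[] args) throws Exception {
        // The zone's key pair; the public half would live in a DNS key record.
        KeyPair zoneKey = KeyPairGenerator.getInstance("RSA").generateKeyPair();

        // The "RRset": a domain-to-address mapping, here just as text.
        byte[] rrset = "windley.org. A 192.0.2.1".getBytes(StandardCharsets.UTF_8);

        // The zone signs its records with the private key.
        Signature signer = Signature.getInstance("SHA256withRSA");
        signer.initSign(zoneKey.getPrivate());
        signer.update(rrset);
        byte[] sig = signer.sign();

        // A resolver fetches the public key from DNS and checks the signature.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(zoneKey.getPublic());
        verifier.update(rrset);
        System.out.println("RRset authentic: " + verifier.verify(sig));
    }
}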
One of the hard problems of DNSSEC is political, not technical (no surprise there). DNS is hierarchical, which means that the security will have a trust pyramid. Someone has to sit on top of the trust pyramid and whoever does is in the catbird seat. Naturally, companies like Verisign are lining up to fill that role.
Even with the holes and problems, DNSSEC would fill a need and solve many problems that we grapple with in the area of security and identity. I agree with Paul. It's time to move forward.
5:00 PM | Comments () | Recommend This | Print This
Linux Networx's 11 Teraflop Cluster
Linux Networx, which I wrote about last February, has closed two deals with Los Alamos to build Linux supercomputer clusters for use in nuclear weapons simulations. The larger cluster will have a theoretical peak performance of 11.26 teraflops and use 2,816 Opteron processors. I toured their facility before they moved and thought that it looked like a hardware guru's dream. According to the article in TechNewsWorld, the cluster will be delivered in two months---a record delivery time compared to the typical two-year delivery time for a supercomputer. Linux Networx makes news selling these multi-thousand-processor clusters, but their bread and butter is smaller clusters for more typical business needs.
4:42 PM | Comments () | Recommend This | Print This
August 18, 2003
A New Utah Blog
Mike Jones was one of my Master's students and then got his PhD at Utah. Now he's back at BYU teaching. Makes me feel old. Mike has started a blog to keep track of his research interests and thoughts about papers he reads. That's a great way to use a blog. I suspect his students will also find it useful. Maybe he'll get his students to write blogs as well.
5:04 PM | Comments () | Recommend This | Print This
Wal-Mart Moving to Internet-based EDI
Via Frank Scavo comes this news about Wal-Mart requiring all of its suppliers to move from older, VAN-based EDI connections to Internet-based EDI by October. Wal-Mart, of course, is an 800-pound gorilla, and suppliers are falling all over themselves to meet the deadline. From the article:
Black & Decker and drugmaker Abbott Laboratories are among a handful of major companies that this week said they have purchased and installed special software to help meet new requirements for doing business with Wal-Mart, the world's largest retail chain, with more than 4,700 stores around the globe. ... Although Wal-Mart declined to specify the dollar value of the various projects, one analyst estimated Wal-Mart suppliers are collectively spending tens of millions, if not hundreds of millions, of dollars on software to comply
The big winners are EDI software system companies like webMethods and Tibco. The losers are the operators of the value added networks like GE and Sterling Software, although GE is making the move to provide Internet-based services as well and retain the business.
9:46 AM | Comments () | Recommend This | Print This
IT Governance
IT governance is one of the foundations upon which a good enterprise architecture is built. There's no point trying to build interoperability for systems in an IT environment where you can't even make good decisions about projects and policy. This month's CIO magazine has a good article on IT governance called "Deciding Factors." The article makes the counterintuitive point that strong IT governance leads to resourcefulness. Most of the time we think of governance as something that constrains, and indeed, that's the case in the points made by this article: the enterprise constrains certain activities so that the overall organization can be more resourceful. Here are the eight key practices listed in the article:
- Scrutinize For Value - treat each IT project like an investment.
- Require Formal Project Requests - formal written IT project requests are a staple of good governance.
- Reuse Whenever Possible - reuse of hardware and software will be a standard part of a good IT shop's standard operating procedure.
- Mandate Speed With Short Deadlines - mandating short deadlines builds a culture of speed.
- Adjust Budgets To Reflect Benefits - remove the claimed savings from the requester's budget.
- Manage Projects As A Portfolio - never review one project at a time. Review them in groups with a scorecard approach to ensure they compete.
- Close The Loop With Postmortem Audits - check to see whether projects produced the expected value and learn from your mistakes.
- Avoid Draconian Measures - keep bureaucracy to a minimum.
Number five was interesting to me. The Utah legislature approved a program to improve IT infrastructure and gave the money to the CIO's office in 2001. The money had to be spent on projects that had an ROI. The departments had to give some percentage (not even the whole amount) back to the fund to be used by future projects. The idea was that we'd fund continuing improvement out of the savings. We couldn't get anyone to take the money because no one wanted to commit to the savings. Eventually the legislature took the fund away when the budget crunch hit because it wasn't being used.
If you think about these points and read through the CIO article, one of the things you'll see is that IT governance, like all governance, concerns restricting individual unit freedom for the good of the whole. Project planning assumes that there will be projects that get funded and projects that don't. This can be especially hard in the public sector, where agencies have considerable autonomy and guard it jealously. Layer on top of that a legislative system that's set up to fund each department as a separate line item, and you can see how problematic portfolio-based project management can be in a government setting.
While the article is mostly about making project decisions, there are other aspects to governance that are equally important. Chief among them is policy, both the policy internal to the IT shop and the policy that affects the rest of the enterprise.
9:32 AM | Comments () | Recommend This | Print This
August 15, 2003
Western CIO Summit
I'll be attending the Western CIO Summit on August 25th and 26th. I've been asked to participate on the eAuthentication panel. I attended last year and blogged all three days. This is a fairly small, close gathering and I enjoyed the interactive nature of the presentations last year. I'm looking forward to seeing some old friends and talking about identity management in the government space.
4:29 PM | Comments () | Recommend This | Print This
The Essence of XML
Phil Wadler is one of computer science's deepest thinkers in the area of programming language theory. I've been a longtime fan of his work. He presented a paper at this year's POPL (Principles of Programming Languages) conference entitled The Essence of XML. Phil says some controversial things, among them:
The essence of XML is this: the problem it solves is not hard, and it does not solve the problem well.
Don't let that stop you from reading the article. Phil and his team have developed a formalization of XML Schema which is quite elegant. This formal semantics is part of the official XQuery and XPath specification, one of the first uses of formal methods by a standards body.
Wadler makes his judgment about XML not being particularly good at what it does because of its shortcomings with respect to two properties that a data representation language ought to provide:
- Self-describing: From the external representation one should be able to derive the corresponding internal representation.
- Round-tripping: If one converts from an internal representation to the external representation and back again, the new internal representation should equal the old.
With respect to these properties, Phil says:
XML has neither property. It is not always self-describing, since the internal format corresponding to an external XML description depends crucially on the XML Schema that is used for validation (for instance, to tell whether data is an integer or a string). And it is not always round-tripping, since some pathological Schemas lack this property (for instance, if there is a type union of integers and strings).
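To make the round-tripping failure concrete, here's a minimal schema sketch of the union case the paper mentions (the element name is mine, not the paper's):

<!-- "42" validates as an integer, but once the internal value is erased
     back to text there's no way to tell the integer 42 from the string "42". -->
<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <xs:element name="value">
    <xs:simpleType>
      <xs:union memberTypes="xs:integer xs:string"/>
    </xs:simpleType>
  </xs:element>
</xs:schema>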
Of course, as Phil points out, LISP s-expressions have both of these properties. This doesn't necessarily imply that s-expressions would be a good substitute for XML. One of XML's great features is that its parsers work as interpreters rather than being compiled. That is, they update their syntax on the fly as they work rather than having a syntax compiled in, as is the case with s-expressions or other representations.
The paper introduces, very early, a theorem characterizing validation in terms of erasure and matching. The theorem is easy to understand, showing how validation takes an external value to an internal value and how erasure takes an internal value to an external value. With this model, a lot of thorny issues regarding XML Schema are more readily described and defined. As the introduction to the paper says:
XML Schema is a large and complex standard---more than 300 pages in printed form. The main difficulty was to understand that the essence of XML Schema lies in the way type names and structures interact. Our first surprise has been to realize that once we captured named typing and validation, most of the myriad other features of XML Schema fit neatly into the simple framework presented here. ... Our second surprise has been to realize that, despite XML Schema's complexity, the resulting theorem turns out to be simple.
If you've been following the discussion on RDF and grounding, then this is a paper I recommend you read. The mathematics is quite accessible and you'll have a deeper understanding of XML Schemas when you're done.
11:49 AM | Comments () | Recommend This | Print This
August 14, 2003
Symbol Grounding and Namespaces
Jon's recent discussions of RSS, RDF, XML, and symbol grounding remind me of a story.
When I was a grad student, I took a model theory class. Model theory is a branch of mathematical logic that deals (in part) with the meaning of symbols. There were about a dozen of us in the class; half were CS PhD students and the other half were Math PhD students. The first part of the class was filled with pretty heavy set theory, and the CS students were struggling. The next part, however, was much easier for the CS students than for the math majors. I remember one class where the professor was introducing the idea that symbols and their meanings are separate. He made the point that the + symbol doesn't mean addition. One math major with a very perplexed look on his face said, "That doesn't make any sense!" He'd never considered that the symbols and their meaning were separate.
Computer scientists have long dealt with the issues surrounding the meanings of symbols. We're very comfortable with syntax and semantics. Every time we learn a new programming language, the task is building a mental model of what the syntax means. The symbol grounding problem that Jon mentions is the same problem. Computer science does syntax very well and XML is a great example of that. Semantics is tougher. Unfortunately, most of the work in semantics is not accessible to an undergraduate computer science major because of the complex mathematics involved. Also, some of the best books on the subject, like this one by Mike Gordon are out of print.
When students ask me why they ought to get a CS degree when they already know how to program, XML is one of my favorite examples. The concepts behind XML are pretty well-understood CS theory. There's been a lot of work turning that theory into a practical system, but the theory is years old. Having a solid grounding in CS theory is very helpful in understanding new things. Twenty years ago it was object-oriented programming. Now, it's XML. I'm sure it will be useful when the next new thing comes along.
My understanding of namespaces, which is admittedly not based on extensive study, is that they serve three purposes:
- They eliminate symbol clashes.
- They potentially ensure that when we see an element we can tell whether it's the same element as the one with the same name in another document.
- They potentially give us more information about what the author of the XML in a particular namespace expects a tag to mean.
The first purpose is important from a practicality standpoint. For example, if I write:
<dc:creator>Phil Windley</dc:creator>
you can distinguish it from other <creator/> elements used in that particular document. These other elements with the same name might be distinguished by their own namespace, or they might be in the "null" namespace, which I prefer to "empty" (after all, it's not empty if it's got elements in it). :-)
The second purpose is about symbol equality across documents. When we use <dc:creator/>, we use it in a context where dc has been grounded by referencing a URL. That URL has to be unique, but it doesn't have to actually point to anything. If it's a null URL, then we've achieved the purpose of making the elements in the namespace unique. We've also achieved a second important goal: we've ensured that when you say <creator/> and I say <creator/>, we're talking about the same tag if we ground our namespaces in the same URL.
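For example, grounding dc in the (real) Dublin Core URI makes the two creator elements below distinct symbols, and makes my <dc:creator/> the same element as yours whenever you ground dc in the same URL:

<entry xmlns:dc="http://purl.org/dc/elements/1.1/">
  <!-- the prefixed element lives in the Dublin Core namespace -->
  <dc:creator>Phil Windley</dc:creator>
  <!-- the unprefixed element is in the null namespace: same name, different symbol -->
  <creator>someone else entirely</creator>
</entry>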
The third purpose is really what Jon's been wrestling with. When I see <creator/>, how do I know what it means? As Jon has pointed out, this is where things get tricky. When we use the word "means," we usually think of some rigorous, complete definition. It's fairly easy to see how namespaces might provide us with more metadata and thus increase the information we have available about any given XML document. It's much harder to imagine that machines will be able to divine the meaning of the document no matter how much metadata you include.
Yet it's precisely this latter concept I hear when I listen to people talk about the semantic web. I'm with Jon: "If the RDF folks have really solved the symbol grounding problem, I'm all ears." I'll be satisfied, however, with better representations for metadata and good tools for processing it.
2:07 PM | Comments () | Recommend This | Print This
August 13, 2003
LavaRnd: Truly Random Numbers
Truly random numbers are crucial to good encryption. Most people have heard of Silicon Graphics' use of lava lamps to generate random numbers. There were some problems: it required special SGI hardware and software along with six lava lamps, and the solution wasn't portable. (SGI did develop one of the best FAQs on lava lamps around as a result.) But the biggest drawback was that SGI patented the idea, so it wasn't freely available. Now, some of the scientists behind the SGI random number system have created LavaRnd, an open source project for creating truly random numbers using inexpensive cameras, open source code, and inexpensive hardware.
The system uses a saturated CCD in a light-tight can as a chaotic source to produce the seed. Software processes the result into truly random numbers in a variety of formats. The result is a random number generator that is cryptographically sound, ranking at the top of its class in the NIST 800-22 billion-bit test. It's even portable, so the truly paranoid can take it with them when they travel. I've got an old Logitech camera hanging around; maybe I'll try building one.
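Here's a simplified sketch of the general idea: hash raw sensor noise into a seed and use it to drive a generator. To be clear, this is my own illustration, not LavaRnd's actual algorithm, and the device path is a hypothetical stand-in for a darkened camera.

// Sketch: turn chaotic sensor noise into a seed for a pseudorandom generator.
import java.io.FileInputStream;
import java.security.MessageDigest;
import java.security.SecureRandom;

public class NoiseSeed {
    public static void main(String[] args) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] frame = new byte[64 * 1024];
        // "/dev/video0" stands in for a CCD in a light-tight can.
        try (FileInputStream camera = new FileInputStream("/dev/video0")) {
            int n = camera.read(frame);
            md.update(frame, 0, Math.max(n, 0));
        }
        // Hash the noise down to a seed and mix it into a generator.
        SecureRandom rng = SecureRandom.getInstance("SHA1PRNG");
        rng.setSeed(md.digest());
        byte[] out = new byte[16];
        rng.nextBytes(out);
        for (byte b : out) System.out.printf("%02x", b);
        System.out.println();
    }
}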
5:29 PM | Comments () | Recommend This | Print This
Identity Management in a Business Context
Related to my post on business context security yesterday is this excellent whitepaper from PricewaterhouseCoopers and Gartner on identity management. They list the following components of an IM solution:
- Enterprise information architecture
- Permission and policy management
- Enterprise directory services
- User authentication
- User provisioning and workflow
I'd add a hearty amen. You can't manage the security of your enterprise in a business context without an enterprise architecture, good policies, global namespaces, the ability to authenticate users systematically, and a good way to manage account provisioning and deprovisioning.
5:17 PM | Comments () | Recommend This | Print This
August 12, 2003
Business Context Security
In a discussion today with Wes Swensen of Forum Systems
about XML security appliances, the concept of "business context
security" came up. The idea is relatively simple: in the past
people have mostly thought of security as an edge game. Given a
firewall and access control to the network and publicly viewable
machines, I can do a lot to secure my business. Sure, security
experts have been telling us for years that this isn't enough, but for
the most part it has been good enough.
One of the unmistakable trends in IT is the need to integrate
systems, not only internally, but with trading partners and
customers. This has been fueled by XML and the creation of
standards for exchanging data as well as Web services which give us the
ability to decentralize our computing. This trend has huge
ramifications for the security folks: they can't treat the edges of the
network as a secure perimeter and call it good. Intrusion
detection is a lot harder when you're allowing people and software
agents access to your internal data and systems.
Consider the common firewall. Almost every corporation has a
firewall in front of their network. Almost every corporate
firewall is configured to pass port 80 traffic (HTTP traffic)
unhindered and unnoticed because no one can live without the web.
The problem is obvious: if port 80 can carry Web service traffic and
XML data, then everything in your network is potentially exposed.
Your firewall is filtering out one kind of attack only to allow another
kind right in. Firewalls are designed to filter packets,
not the content in the packet.
Integration is being driven by business needs. This means that
security policies need to talk about documents, data, actions, people,
and corporations instead of machines and networks. This security model
is infinitely more complex than the old "secure perimeter" model. Even
if you do define your policy, how do you ensure its properly
implemented across dozens or even hundreds of systems? How do you
manage access control to field of the database or paragraphs of the
document?
XML security products aim to fill this hole. I suspect that if you don't have one now, you will in the near future. Even if you're not actively pursuing Web services, you'll need to protect yourself from rogue SOAP insertions into your network, since Murphy will ensure that there's a machine somewhere in your network with a SOAP server running. I'm working on a review of XML security appliances right now and will have more to say about these products later.
What everyone needs to realize is that tools like XML and Web services are making some jobs, like legacy system integration, easier, but they're making other jobs, like security, much more difficult. The answer isn't just more technology, it's also more discipline. Well-run IT organizations will manage this job the way they manage other tough jobs: using well-documented processes with well-thought-out metrics and reviews to ensure that the policies are implemented correctly and doing the job. Enterprises that continue to treat security in an ad hoc manner will get burned.
5:42 PM | Comments () | Recommend This | Print This
WYSIWYG Editing in Radio
Jake has created a WYSIWYG editor for Radio that runs in Mozilla. One of the things I missed when I switched to OS X was the IE WYSIWYG editor, bad as it was. I'm using it to type this entry. Installation was simple and it seems to work just great in Mozilla 1.3. There's no support for Safari; Firebird seems to try to work but then doesn't. I'll be playing around with it in the next few days, but so far, I'm impressed.
7:53 AM | Comments () | Recommend This | Print This
August 11, 2003
Leavitt to Head EPA
The White House will announce soon that Governor Mike Leavitt has been picked to replace Christie Todd Whitman as the head of the EPA. Leavitt's moderate brand of environmentalism, known as Enlibra, has made both sides of the issue nervous in Utah. I suspect it will do the same nationally. His approach is a practical, middle-ground kind of environmentalism that eschews the extremism of both sides of the environmental battles that erupt with some frequency. Now he gets a chance to push this process at the national level. This is a lightning-rod job that won't be a walk in the park. Still, I'm happy for him. I've said before that I believe that Mike Leavitt is one of the most capable public sector executives in the country. I'm anxious to see him take on a new set of challenges.
2:32 PM | Comments () | Recommend This | Print This
Event-based eGovernment: One Stop Goes Live
While I was gone last week, Utah's One Stop Business Registration service went live. This was one of the projects started last summer as part of the Governor's IT plan. The idea is simple: rather than go to seven different state agencies, the IRS, and a city to start a business, create a single place where people can fill out one set of forms, pay one fee, and take care of it all at once. This kind of service integration is one of the great possibilities of eGovernment. There are dozens of these kinds of "life event" services that could be developed. Some examples: "I'm moving to Utah," "My child is starting school," "I've been arrested," "I'm getting married," "I'm getting divorced," and so on. I'm glad to see one off the blocks.
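The architectural pattern behind a one-stop service is orchestration: one front door fanning a single submission out to many back ends. Here's a hypothetical sketch in Python with invented agency endpoints and payload; this is not how Utah's service is actually built:

    # Sketch: one filing fanned out to multiple agency endpoints.
    # The endpoints and form fields are invented for illustration.
    import json
    import urllib.request

    AGENCIES = [
        "https://tax.example.gov/register",
        "https://commerce.example.gov/register",
        "https://workforce.example.gov/register",
    ]

    def register_business(form):
        data = json.dumps(form).encode()
        results = []
        for url in AGENCIES:
            req = urllib.request.Request(
                url, data=data,
                headers={"Content-Type": "application/json"})
            # Every agency sees the same single submission.
            with urllib.request.urlopen(req) as resp:
                results.append((url, resp.status))
        return results

The hard part, of course, isn't the fan-out; it's getting seven agencies to agree on one set of forms and one fee.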
2:01 PM | Comments () | Recommend This | Print This
Novell and Open Source
While I was out last week, Novell announced it was buying Ximian. I'm always skeptical when big companies start buying companies based on an open source philosophy like Ximian. Mostly I worry about the innovative products that these companies are working on getting quashed. In this related article, Chris Stone, Vice Chair at Novell, talks about this deal and the SCO lawsuit. Chris says all the right things and I'm sure his heart's in the right place, but other forces will come into play and affect whatever strategy Novell has now. In another article, my friend Matt Asay, Novell's chief OSS strategist, writes about how OSS ideas leak into companies regardless of their persuasion. One interesting quote from Matt:
Open source has given Novell a way to build upon its past, without relinquishing it. NetWare remains, but becomes even better, offering customers more choice.
Of course, in the case of SCO, an open source company bought a closed source company (Caldera bought SCO) and the ideas seem to have leaked the other direction. What kills me about SCO is that when I talk to the people who work there, they have all these great ideas that I think OS companies ought to be talking about. I'm afraid that their legal strategy is going to drown out any other conversation SCO will want to have.
1:42 PM | Comments () | Recommend This | Print This
More on CIO Certification
I wrote earlier about some Federal CIO certification programs. Today I found out about the Federal CIO Council's CIO University program, which includes a consortium of several universities that are offering a wide variety of coursework in this area. These are all concentrated in the DC area.
11:55 AM | Comments () | Recommend This | Print This
Temporary Flight Restrictions
From time to time, the government issues temporary flight restrictions, or TFRs, which pilots are responsible for knowing and following. In the west we get a lot of them in the summer months because of firefighting operations. When a tanker is coming in to drop fire retardant, they don't really want to worry about what other planes in the area might be doing. Other TFRs deal with sensitive national security areas, stadiums during games, and even the President's ranch. In the past, when you wanted to know what current TFRs were in effect, you had to call the regional flight service center (run by the FAA) and get a briefing. Needless to say, that often doesn't get done.
Now, the BLM has a web site that lists TFRs by state and provides maps. This is a huge improvement and makes it much easier to visualize where not to fly. I wish it were designed a little differently to encourage use of the data by other programs and not just people. There are numerous web sites that pilots use to gather information and it would make things safer if all of the relevant data could be more closely integrated. Making data easy to use in multiple formats isn't much more costly than creating single-use data sources. Governments have a duty to make data as widely available as possible. I wish more eGovernment applications took that responsibility seriously.
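If the site published the same data as a simple XML feed, integration would be trivial. Here's a hypothetical sketch in Python showing how little code a flight-planning site would need; the feed URL and element names are made up, since no such feed exists:

    # Sketch: consuming a machine-readable TFR feed. The URL and XML
    # schema are hypothetical; the point is how little code reuse takes.
    import urllib.request
    import xml.etree.ElementTree as ET

    FEED = "https://airspace.example.gov/tfr/UT.xml"  # hypothetical

    with urllib.request.urlopen(FEED) as resp:
        doc = ET.parse(resp)

    for tfr in doc.iterfind("tfr"):
        print(tfr.findtext("location"), tfr.findtext("effective"),
              tfr.findtext("reason"))

A few lines like that is all it would take for every flight-planning site to overlay TFRs on the information pilots already use.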
9:06 AM | Comments () | Recommend This | Print This
August 4, 2003
Out for Four Days
I'll be gone for the next four days to Island Park, ID for a little vacation. I'll be back next week.
10:08 AM | Comments () | Recommend This | Print This
John Gotze Visit
I had a great time visiting with John Gotze from the Danish Government last week. We flew up to Driggs, ID for breakfast one morning and had a great flight. I'm going to be working with John and the Danish Government on enterprise architecture and service-oriented architectures. I'm looking forward to it.
10:07 AM | Comments () | Recommend This | Print This
The Zachman Framework for Enterprise Architecture
In the 1980s, an IBM researcher named John Zachman wrote a paper entitled "A Framework for Information Systems Architecture" and gave birth to the ideas around Enterprise Architecture. Zachman's framework is a table with columns that relate to the what (data), how (function), where (network), who (people), when (time or schedule), and why (motivation or strategy) aspects of the architecture, and rows that walk down the scope continuum: Context (planner), Business Model (owner), System Model (designer), Technology Model (builder), and Detailed Representations (subcontractor).
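Since the framework is just a six-by-five grid, you can even treat it as data. Here's a toy sketch in Python that records what an organization has for a couple of cells and queries another; the cell contents are my own illustrative guesses, not Zachman's:

    # Sketch: the Zachman framework as a matrix keyed by (row, column).
    # Example cell contents are illustrative, not canonical.
    COLUMNS = ["what", "how", "where", "who", "when", "why"]
    ROWS = ["context", "business model", "system model",
            "technology model", "detailed representations"]

    framework = {
        ("business model", "what"): "semantic model of business entities",
        ("system model", "how"): "application architecture",
    }

    def cell(row, col):
        assert row in ROWS and col in COLUMNS
        return framework.get((row, col), "no plan on file -- start here")

    print(cell("technology model", "where"))  # a square to grab onto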
Zachman, along with Samuel Holcman, created the Zachman Institute for Framework Advancement. The institute's web site contains some free white papers (you'll need to register) and offers to help you figure all this out for a fee.
I like the matrix because it gives you a field of play, so to speak. I imagine that if you study it, you'll find two, three, five, or ten of these squares you think you have a pretty good handle on. That tells you where to concentrate your focus and gives you some context. One of the tough parts of enterprise architecture as defined by the federal EA program management office or the state CIOs at NASCIO is that there's so little context that people have a tough time finding a handhold and figuring it out. There are lots of places in the Zachman framework to grab on and get started.
That said, FEAPMO and NASCIO are trying to define processes for large organizations to create an enterprise architecture. An EA is nothing more than a detailed plan for how IT will be managed in the enterprise, detailed enough to ensure interoperability and backed up by reasonable policies. As you look at the Zachman framework, pick a square at random and ask yourself: "what plans, policies, and procedures does my organization have in place to make sure we do the things covered by this square consistently and in accordance with best practice?" If you'd like some help figuring this all out or just talking about how to get started, give me a shout.
10:02 AM | Comments () | Recommend This | Print This
August 2, 2003
Losing Data
I was watching the news last night and they broadcast a story about an elementary school burglary where a couple of file servers were stolen. The principal, Kim Roper, said:
We lost two file servers and it was really more damaging to the school that we lost the data than the computers; all of the school information is gone.
All of the work completed by the school staff this summer, including which classes students are going to be in, is gone. The last back-up of the data on the school file servers was done in May.
Of course, anyone who's worked around computers much knows that bad things happen to disks all the time and I'm sure this principal and his staff knew that they should be backing data up more frequently. The fact of the matter is, most people ignore these warnings until it's too late. Well-managed IT departments don't rely on users to safeguard sensitive or valuable enterprise data---they put policies and systems into place that make sure data is safe.
Utah's schools do not have a CIO. The State CIO is explicitly barred from acting in public and higher education. School districts love their independence and each manages their own IT systems. The elementary school in question is part of the Alpine School District, the third largest in the state. Do they have the resources to hire competent IT staff and put systems in place that would prevent this loss? Sure, but other priorities compete for those dollars.
What boggles my mind is that school officials and the legislature will just shake their heads at this and say "that's too bad" instead of being outraged that sensitive, private, valuable data is now lost and in unknown hands. We're running a multi-billion dollar enterprise like a mom-and-pop shop and no one sees the problem with that.
I'll bet there's at least one elementary school that will be more conscientious about back-ups this fall. But they'll be doing it on their own, with little expertise, and no systematic help. Consequently, they'll soon tire of it and be back to business as usual. What's needed are consistent IT policies across public education, an infrastructure that supports their use, and accountability for protecting sensitive, valuable data.
It's only too hard or too expensive if everyone insists on doing it on their own. UEN already has a network connection in every school district and most schools in the state. For very little additional money, they could operate a centrally managed storage area network (SAN) and sensitive and valuable files could be automatically backed up to this repository. Of course, to make this happen, someone would have to be able to set IT policy and enforce compliance for public education and right now the only group that can do that is the legislature.
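The mechanics are the easy part. Here's a minimal sketch in Python of the kind of nightly job each school could run against a central repository; the paths and server name are placeholders, not UEN infrastructure:

    # Sketch: nightly backup of a school's file share to a central
    # repository. Paths and host below are placeholders.
    import datetime
    import subprocess

    SOURCE = "/srv/school-files/"
    DEST = "backup@central.uen.example:/backups/alpine/elementary42/"

    stamp = datetime.date.today().isoformat()
    # rsync copies only changed files; --backup-dir keeps prior versions
    # in a dated archive directory on the central server.
    subprocess.run(
        ["rsync", "-a", "--delete",
         "--backup", "--backup-dir=../archive/" + stamp,
         SOURCE, DEST],
        check=True)

Run nightly from cron, that makes the "last backup was in May" problem disappear. The missing ingredient isn't technology; it's someone with the authority to require it.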
8:12 AM | Comments () | Recommend This | Print This
August 1, 2003
Homeland Security Meets ARPA
Taking a page from the Dept. of Defense, the Dept. of Homeland Security has announced its own version of DARPA: HSARPA. DARPA is a major source of high-dollar research grants in universities and private companies. HSARPA will focus on funding projects related to bio-terrorism, but says any idea related to homeland security is eligible.
5:03 PM | Comments () | Recommend This | Print This
SOA is Not a Silver Bullet. SOA is a Discipline
A recent CBDi commentary says "SOA is ... not the silver bullet that many are suggesting; it's plain hard work." The article is obviously intended to make people think they need to pay for CBDi research reports before entering the scary world of service-oriented architectures, but once you get past that, there's some pretty good information in there regarding what an SOA is and what that definition means to architectural choices about coupling and components. If you're an SOA expert, you can skip it. If you're trying to figure this out and build a mental model of what an SOA is, I recommend it.