February 13, 2009
Just in Time for Valentine's Day: Bob Blakley on Relationships

When Bob Blakley presented his ideas on relationships at IIW a while back, I blogged it and so did others (like Drummond). After Bob released his paper on the subject to Burton Group subscribers, I blogged about relationship providers (with pictures, even). Then Scott and I interviewed Bob on the subject for IT Conversations. Needless to say, I think this is an interesting idea. Now, I’m happy to report that Bob and Burton Group have made the paper publicly available. Go get it and read it. There are some great ideas in there.

Posted on 8:32 AM | Related: relationships, identity, burton group, itconversations
February 12, 2009
SpokenWord Has Launched
Doug Kaye has launched SpokenWord.org, a “new free on-line service that helps you find, manage and share audio and video spoken-word recordings, regardless of who produced them or where they’re published.” SpokenWord lets you build collections of programs or follow someone else’s collection. There are a number of things you can do with a collection (from the FAQ):
- Add Tags to help others find your Collections. Go to My SpokenWord and click on the collection’s [edit/tag] link.
- Add Comments.
- Click on the Share This link to send via email or post to services such as Facebook, del.icio.us and Digg.
- Subscribe to any collection using the RSS/feed icons for iTunes, Google Reader, My Yahoo!, etc.
Very nice stuff. I think SpokenWord.org is going to set a new standard for helping people find and manage their podcasts.
Posted on 9:35 AM | Related: itconversations, podcasting
February 10, 2009
Is SOA Dead?
At the first of the year, Anne Thomas Manes wrote a provocative blog entry stating that SOA is dead. This week’s Technometria podcast is a discussion with Anne about her thesis and what it means for practitioners and technologists. I think you’ll enjoy it whether you’re a fan of SOA or not.
From the description at IT Conversations:
Service-Oriented Architecture (SOA) provides ways to group functions around business processes, packaging them as services. This allows for better coordination between services. Anne Thomas Manes of the Burton Group joins Phil and Scott to discuss whether SOA is dead.
Many of her arguments are built around the observation that people are bad at architecture. She reviews examples of concrete architectural practices, including architecture and process normalization. As part of her review of SOA, she shows why spectacular gains come from spectacular efforts.
You ought to also read Anne’s post on the responses to her original article for a few laughs.

Posted on 2:33 PM | Related: soa, web services, itconversations, technometria, podcasts
February 9, 2009
More on Context Automation, Privacy, and Kynetx Business Models
Joe Andrieu posted a response to the white paper I released last week. I’m grateful that Joe would take the time necessary to read the paper in depth and offer a long, well-thought-out, and helpful set of questions and critique. From his article, it’s clear to me that Joe understands the problem space well and has a firm grasp on what Kynetx is doing there.
Joe raises a number of questions and points that I’d like to respond to.
First, Joe asks who the target developers are: Web sites, third-party services, or both? Our primary offering is aimed at third-party services, but we also recognize the value Web sites can add by responding to people more appropriately. For example, we have an OEM arrangement with Parity for their RemindMe service. Their customers (i.e., people with RemindMe cards) benefit when Web sites work well within this larger context, and, incidentally, those Web sites benefit as well. We believe this dual-use strategy is the right way to go: we want to see the kind of silo-spanning, context-aware services that third parties are likely to build, and yet we know there has to be a way for the silos to play the game too.
Joe’s second question foresees a user-experience nightmare, with users managing A cards on B sites and getting caught in a crush of A x B “identity ceremonies.” Something that isn’t clear from the white paper is how this user experience is managed. In most cases (certainly the kind of casual-browsing, context-management activities Joe mentions), Kynetx is the relying party—not the individual Web sites. Before data from cards is sent to a site, of course, a separate ceremony establishing an identity session between the site and the person would have to be performed. But that happens only when the user intends to deal with that site, not when just cruising around.
This might raise some questions about privacy and security, so it’s important to understand that in these scenarios the user data never leaves the browser. Some session data is sent back to Kynetx servers, but not the data from the cards themselves.
Joe also brings up the idea of user data stores. Yeah. That’s one reason I’m interested in first-class namespaces in programming languages. We’d love to collaborate with Joe on the ideas behind SearchMap and user-driven search, since user-controlled data stores are important to us as well. I foresee the day when a KRL rule can use and respond to the data in a person’s SearchMap. KNS has the ability to link to data stores on the Web and on the user’s machine (permissioned, of course), but accessing those in a coherent way within KRL requires more advanced linguistic leverage than we now have. It’s on my list…
Joe says that users do want to manage their context but haven’t been given the right tools. Fair enough—in fact, I don’t think we’re actually saying different things. We don’t anticipate that people would have no part to play in managing context; we see KNS as a tool for managing context and using it effectively. Right now, Web users mostly manage context in their heads—there is a dearth of tools for helping with that task.
Joe also raises questions about privacy and the rights management of data inside information cards (or OpenID attributes, for that matter). That’s a bigger issue than Kynetx alone can solve, but I do think we’re in good shape in that regard. As I mentioned above, most of the data stays in the browser, and Web sites never see it until the person is ready to take action. Joe uses the AAA and Hertz example: when you use your AAA card at Hertz, Hertz knows you’re a member and can tell others. We don’t solve that problem, but we don’t make it any worse. In fact, Kynetx decreases how often you have to reveal that you’re a AAA member while still letting you know what your membership will get you as you cruise the Web. We allow merchants to respond to you without you having to reveal data to those merchants.
Perhaps Joe’s most important discussion, from my perspective, is of the business model. He’s right: CPM charges for ruleset evaluations increase friction at the point of adoption, especially for smaller players. That’s a problem, and we’re open to fixing it. I’m not opposed to more open models—in fact I see great value there.
That said, Kynetx also has to survive, and right now that means getting funding. The “because of” model is great as far as it goes, but I don’t find the idea of selling consulting and IDEs very compelling. I frankly can’t imagine sitting with a VC and pitching it. Maybe I’m gun-shy or lack vision, but I’m unsure how it would play.
At any rate, I’m anxious to collaborate with Joe and others on this. Our vision is similar, and our methods aren’t that far apart either. This is a fun time to be working on the Web.
Update: I forgot to comment on Joe’s point about centralization. His idea of using a reputation network with strong identity in place of centralized certification is brilliant. We’re definitely not looking for ways to make this more centralized. Quite the opposite.
Posted on 1:22 PM | Related: kynetx, vrm, structured browsing, krl, white papers
February 6, 2009
First-Class Namespaces in Programming Languages
Over the last few years, I’ve written plenty of programs—in various languages—that used an HTTP library to fetch an XML document pointed at by a URL and then used XPath to grab parts and pieces of that document. The problem with this is that I’m using two different namespaces (the URL’s and the XML document’s), neither of which is directly supported by my programming language.
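To make the problem concrete, here’s the kind of Perl I’m describing (the feed URL is made up). One part of the program works in the URL namespace, another in the document’s XPath namespace, and the language itself knows nothing about either:
#!/usr/bin/perl -w
use strict;
use LWP::UserAgent;    # namespace one: the Web, addressed by URLs
use XML::LibXML;       # namespace two: the document, addressed by XPath
# fetch the document out of the URL namespace
my $ua  = LWP::UserAgent->new;
my $res = $ua->get('http://example.com/feed.xml');
die "GET failed: " . $res->status_line . "\n" unless $res->is_success;
# now switch namespaces and use XPath to grab the pieces we want
my $doc = XML::LibXML->new->parse_string($res->decoded_content);
print $_->textContent, "\n" for $doc->findnodes('//item/title');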
Programs that use relational databases suffer similarly: the datastore is a namespace that is extralingual. One of the great selling points of JSON is that it reduces the cognitive load on programmers, getting rid of at least one namespace by reading the data directly into the program’s own namespace.
Historically, programmers dealt in local, in-memory data structures that they usually built themselves programmatically. Modern programmers face the challenge of multiple namespaces at every turn and yet our programming languages haven’t grown up to help us meet those challenges. They provide nice linguistic abstractions for dealing with local data, but force the programmer to make all the mental leaps necessary to translate between the local store and any remote ones.
This week Drummond Reed came to visit for a day, and as we talked, I realized that you could resolve this situation by giving the programming language a namespace vocabulary rich enough to handle most remote and local cases. Further, with what we’ve learned about namespace resolution over the last few decades, we can resolve namespace references in ways that free programmers from worrying about whether data is local or remote. Programmers could just use a reference anywhere a variable would do and let the underlying system worry about the details.
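For instance, Perl’s tie mechanism hints at what this could feel like. This is pure sketch; the Namespace::Tie module is hypothetical and doesn’t exist:
use strict;
# Hypothetical: bind a hash to a remote namespace so that a lookup reads
# like an ordinary variable access, with fetching, caching, and reference
# resolution handled by the underlying system.
tie my %feed, 'Namespace::Tie', 'http://example.com/feed.xml';
# the programmer no longer cares that this data is remote XML
print $feed{'channel/item[1]/title'}, "\n";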
Of course, since I was talking to Drummond, XRI and XDI were the primary focal points of our discussion. XRI is a well-thought-out system for creating URL-like namespace references. XDI is a way to resolve those references into data. Using XRIs you can create abstract references to data regardless of whether it’s local, comes from a URL/XML combo, lives in a session hash, or even resides in an information card on the user’s machine.
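For a flavor of the syntax (these particular names are illustrative, not real registered i-names):
=jsmith            a personal i-name
@example           an organizational i-name
@example*jsmith    a name delegated within the organization's namespace
+phone             a generic, dictionary-style identifier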
I believe that incorporating XRI references into a programming language as first-class objects would offer a powerful abstraction on data. I’m anxious to give it a shot. The trick, of course, is to continue to make easy things easy to do without complicating every variable reference while at the same time allowing programmers to leverage the power of other namespaces when needed.
I’m sure there’s a programming language out there—maybe even a dozen—that provides support for complex namespaces. I’d love to know what they are and how they’ve worked. In the meantime, this is what I think about when I’m driving.

Posted on 8:48 AM | Related: programming languages, xml, xri, xpath
February 4, 2009
The Advent of Next Generation Browsing
Today I’m releasing the first of a series of white papers on Kynetx and what it’s trying to accomplish: The Advent of Next Generation Browsing (PDF). I introduce the problem in the first few paragraphs like so:
We are mired in a tangle of architectural legacies that make today’s Web browsing experience uncomfortable, confusing, and tiresome for many users. In particular, the lack of Web site independent identity has hampered the ability of the browser to effectively intermediate the Web on the user’s behalf. But change is coming and we are about to witness a significant improvement in the nature of Web browsing—indeed, the nature of the browsing experience is about to change forever.
In the early days of the Internet, companies sought to give users the benefit of a consistent experience by building portals that integrated multiple activities. Portals are now mostly a thing of the past; a few large examples such as Yahoo! and MSN still exist, but by and large they have fallen victim to what must be the most important law of the Web: a different site is only a click away. As users sought out the best sites for any given purpose, the browsing experience fractured and became ad hoc. As a result, disaggregation of Web sites and services is now the norm.
An example from the world of ecommerce illustrates this. Shoppers use search engines like Google or Yahoo! to find a product and choose an online retailer from the search results. Before buying they might research products at independent review sites like Epinions.com and Viewpoint.com. They might discuss those same products on myriad blogs, Twitter, and social networking sites like Facebook.
As we’ll see, disaggregation causes users to manage too much of the experience themselves. This situation is untenable and must change. Fortunately, three major technology trends are creating the needed opening for improving the browsing experience.
Certainly I can’t hope, in a short blog post, to lay out the vision that the white paper develops over many pages. You’ll have to read it to get the full story, but the short version is this: we structure the browsing experience using explicit episode context, to the benefit of users and Web-site operators alike. The paper explains why that’s desirable and describes the system we’ve developed to make it happen. I hope you’ll take a few minutes to review it and let me know your thoughts.
I should mention that we owe a great debt to Craig Burton for patiently working with us to clarify and strengthen our vision. He worked magic and has our gratitude. I’d recommend him without reservation to anyone developing a product vision and strategy.

Posted on 7:44 PM | Related: kynetx, structured browsing, context, information cards
February 3, 2009
This Week on Technometria: Aaron Iba on EtherPad
This week Scott and I speak with Aaron Iba about EtherPad and the AppJet platform it’s built on. There are plenty of interesting problems involved in creating a real-time collaborative editing environment with JavaScript in the browser. I loved the discussion and came away with a few ideas about designing collaborative services.

Posted on 7:15 AM | Related: technometria, podcast, itconversations
January 29, 2009
Kynetx Demo Day
I’ve had a handful of people ask if they could stop by Kynetx and see what we do. Steve has had similar requests. So that we don’t miss anyone who would like to visit Kynetx and get a demo of our fledgling product, we’re hosting a Kynetx Demo Lunch on Friday, Feb 6th at 11:30 at Kynetx World Headquarters at Thanksgiving Point. This map will give you directions; we’re in Suite 275 (metal doors). We’ll supply the pizza, you bring your curiosity. Please RSVP so we know how much pizza to buy.
Posted on 7:18 PM | Related: kynetx, utah, events
January 27, 2009
CTO Breakfast this Friday

We’ll be holding January’s CTO Breakfast this Friday in the Novell cafeteria in Provo (Building G). Come prepared for an awesome discussion of technology and companies—especially startups. Feel free to bring topics for discussion. Anyone interested in building high-tech products and services is welcome to attend—not just CTOs.
Please put future CTO breakfasts on your calendar so you can be sure to be there. Here are the scheduled dates so far:
- Jan 30, 2009 (Friday)
- Feb 26, 2009 (Thursday)
- Mar 27, 2009 (Friday)
- Apr 24, 2009 (Friday)
I have created a Google Calendar with dates for the CTO breakfast that you can subscribe to.
Or if you’d rather subscribe from iCal or Outlook, here’s the iCalendar link.

Posted on 9:22 PM | Related: utah, events, cto, breakfast
Interactive Map of Utah Legislators
Back in 2003, I lamented the fact that there was no interactive map for finding your legislator in Utah. Indeed, the process involved a lot of steps that introduced considerable friction.
Now, thanks to the power of mash-ups and open data, Scott Riding has created an interactive map of Utah legislative districts and the legislators representing them. I typed in my address and was presented with pictures and contact information for my legislators, along with a pin in the map showing my house so I could verify everything was right. Thanks, Scott!
Posted on 8:54 AM | Related: utah, politics, egovernment, geoweb
January 26, 2009
Real-Time Regular Expressions
Grant Skinner has a nifty little regular expression tool, written in Flex, that lets you put text in one window, type regular expressions in the other, and see the matches. There’s a “replace” tab as well for seeing how a regular-expression replace would modify the text. The right side lists regexp components and describes what they do. I love it.

Posted on 8:58 PM | Related: regexp, regular expressions
I'm Going to Gluecon
I’m going to be speaking at GlueCon in Denver on May 12-13. The overall theme of the conference is that there is a lot of interesting stuff happening in what we might have thought of as “glue” before—all the code that holds things together. Turns out that there’s plenty of value you can add in the glue that makes the resulting mash-up better.
Glue is a new conference (the best kind) and is being organized by Eric Norlin, Seth Levine, and Phil Becker. These guys do good conferences. Eric and Phil were the founders of DIDW. More recently Eric’s been doing Defrag. Follow the Gluecon blog for more info.
My session is titled “Building Context-aware Applications using Identity as a Foundation.” The gist of the talk is that Web-site-independent identity allows us to create context that spans Web sites. Of course, the most obvious such context is an authentication context, what we refer to as “single sign-on,” but it goes well beyond that. This is exactly the problem we’re tackling at Kynetx. More on these ideas in this blog between now and then.

Posted on 2:11 PM | Related: gluecon, travel, events, kynetx
Mounting Remote Filesystems Using SSH and Fuse

Paul Figgiani, the Senior Audio Engineer at IT Conversations, sent me a link to a program called ExpanDrive that allows you to mount any remote directory to which you have SSH access on your Mac. The cost: $39.
ExpanDrive is based on MacFUSE, which extends OS X’s native file-handling capabilities to programs in user space (that is, outside of the kernel). I first heard about this when Scott and I interviewed Amit Singh on IT Conversations. Amit is probably the world’s leading expert on OS X internals and the creator of MacFUSE.
Because MacFUSE allows filesystem extensions to live in user space, you can write a regular program that looks like a file system. That’s what ExpanDrive is. Another example is Cryptomfs, which creates an encrypted directory that can be mounted and then read in plain text. I’ve used the ntfs-3g filesystem to read and write NTFS files on a USB drive. It also works great for accessing Boot Camp partitions.
If you’re geeky, there’s a free FUSE filesystem called sshfs that does essentially the same thing, but without all the bells and whistles. You can use OS X’s automounting capability to automount sshfs volumes.
Having something like this—where a remote directory just shows up in your file system—is nice for backups. Mount a directory on a remote machine and then use rsync and a cron job to perform automated backups over the ’Net. All for free.
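For example, with a hypothetical hostname and paths, and assuming you have sshfs installed:
# create a mount point and mount the remote directory over SSH
mkdir -p ~/mnt/server
sshfs alice@server.example.com:backups ~/mnt/server
# crontab entry: sync documents to the mounted volume nightly at 2:00 AM
# (assumes the volume stays mounted)
0 2 * * * rsync -a $HOME/Documents/ $HOME/mnt/server/Documents/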
Update: CIO has a feature on dynamic languages where ExpanDrive (written in Python) was the major factor in turning someone from C to Python for a project involving ZFS.

Posted on 1:46 PM | Related: ssh, osx, fuse, macfuse, expandrive, itconversations
January 22, 2009
Seeing the Dynamic in the Static
Jeff Atwood opines on why many programmers are also musicians of some kind—or at least appreciate music. I’ve heard this said of mathematicians as well. I have long held that these disciplines share the ability to relate the dynamic and the static. Good programmers can see how a program will operate by looking at a static, lexically scoped program listing. Musicians do the same thing with music (I’m not just talking about music notation, but that’s part of it).

Posted on 7:41 PM | Related: music, computers, programming
January 21, 2009
A Retweeting Twitterbot in Perl

I’m trying an experiment with this year’s Utah legislative session: I’ve created a Twitter account (@utahpolitics) and set up an autofollower on it (hat tip to @jesse). I also wanted a retweeting twitterbot, so that people following the account would see what anyone else following the account said when it contained certain keywords.
The world probably doesn’t need yet another retweeter, but I couldn’t find exactly what I was looking for and decided to build one for a few reasons:
- I like to program
- I want to understand the Twitter API more deeply
- I didn’t want to modify someone’s PHP or Python code
- Oh, and I like to program
Armed with those few requirements and Chris Thompson’s Net::Twitter library, I wrote the following program in an hour or so:
#!/usr/bin/perl -w
use strict;

use Net::Twitter;
use DateTime;
use DateTime::Format::HTTP;
use Fcntl;
use SDBM_File;
use File::Copy;

# only consider tweets from the last hour
my $dt = DateTime->now;
$dt->subtract( minutes => 60 );
my $class = 'DateTime::Format::HTTP';
my $since = $class->format_datetime($dt);

# date stamp for the daily backup file
my $today_string = $dt->strftime("%Y-%m-%d");
my $seen_dir     = "./seen";
my $seen_file    = "$seen_dir/latest.db";
my $backup_file  = "$seen_dir/$today_string.db";

# make a backup of the seen-tweets hash and open it
unless ( -e "$backup_file.pag" ) {
    copy( "$seen_file.pag", "$backup_file.pag" );
    copy( "$seen_file.dir", "$backup_file.dir" );
}
my %seen;
tie( %seen, "SDBM_File", $seen_file, O_CREAT | O_RDWR, 0644 )
    or die "Can't tie to $seen_file: $!\n";

# set your own username and password here
my $user     = 'put_your_twitter_screenname_here';
my $password = 'put_your_password_here';

my $twit = Net::Twitter->new(
    username   => $user,
    password   => $password,
    source     => "Utah Politics Retweeter",
    clientname => "UtahPolitics ReTweeter"
);

# collect recent tweets from followed accounts that match the keywords
my $retweets = [];
my $twit_replies =
    $twit->friends_timeline( { since => $since, count => 100 } );
foreach my $reply ( @{$twit_replies} ) {
    my $text = $reply->{'text'};
    my $id   = $reply->{'id'};
    my $name = $reply->{'user'}->{'screen_name'};
    print ".";
    if (   $text =~ m/utahpolitics|#utpolitics/
        && !$seen{$id} )
    {
        # queue the tweet unless we posted it ourselves
        unshift @{$retweets},
            {
            'name' => $name,
            'id'   => $id,
            'text' => $text
            }
            unless $name eq $user;
    }
}
print "\n";

# post each queued tweet with attribution and mark it seen
foreach my $retweet ( @{$retweets} ) {
    print ".";
    my $status = "(@" . $retweet->{'name'} . ") " . $retweet->{'text'};
    my $code   = $twit->update($status);
    $seen{ $retweet->{'id'} } = 1 if $code;
}
print "\n";

1;
I made heavy use of Data::Dumper to dump the data structures I got back from the library during development. The program could be generalized in lots of ways; for example, passing the username and password, along with the keywords to look for, as arguments would allow it to be used for more than one ID. I run this as a cron job every five minutes, and so far it seems to be working fine.
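If you haven’t used Data::Dumper, the development pattern is as simple as:
use Data::Dumper;
# print the structure of whatever the API handed back so you can
# see which keys to reach for
print Dumper($twit_replies);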

Posted on 2:16 PM | Related: perl, twitter, programming
Technometria Podcast: A New Year and New Projects
In this week’s Technometria podcast, we talk about the new year and some new projects. The beginning of a new year is always a good time to look ahead to upcoming activities and products.
In this podcast, Dion, Ben, Scott, and I talk about what we’re expecting in 2009. We discuss the problems of raising funds for a business startup, a necessary but often difficult process, and talk about some of the new products announced at CES and Macworld. Scott describes his download of the Windows 7 Beta, and the discussion ends with the upcoming transition to digital TV.

Posted on 2:01 PM | Related: technometria, itconversations, podcast
January 19, 2009
jQuery, Monads, and Functional Programming

I tweeted this over the weekend, but it deserves a bigger mention than that. Patrick Thomson has written a wonderful description of why jQuery is a monad, including a discussion of cautious computation and state transformations. No need to know monads or jQuery going in (although an interest in one will help you appreciate the other).
As Patrick explains nicely, jQuery is monadic in that it meets all three requirements of a monad. A monad is a concept from category theory; a type is a monad if it meets three requirements:
- Monads wrap themselves around other data types
- Monads have an operator that performs this wrapping
- Monads can feed the wrapped value to a function as long as that function also returns a monad
Anyone familiar with jQuery will immediately recognize this as a good, abstract description of jQuery’s operation. The beauty of this, from jQuery’s perspective, is a nice programming style called “chaining,” in which wrapped chunks of the DOM are passed from operator to operator in pipeline fashion. Used effectively, this style results in compact yet readable code with little need for intermediate variables.
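A typical chain looks something like this (the selector and class names are made up):
// each call returns the wrapped jQuery object, so DOM operations
// compose like a pipeline
$('#news li')
    .filter('.breaking')
    .addClass('highlight')
    .fadeIn('slow');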
You may find it ironic to think of DOM manipulation happening in a functional style, but that’s just what jQuery allows. So, if you’re a jQuery programmer you may be moving more and more toward a functional style of programming without even knowing it.

Posted on 10:31 AM | Related: monad, jquery, javascript, haskell






