April 21, 2008 in Ideas | 2 comments
I just made a few changes to Understanding Infrastructure, which I trust will improve it. See what you think.
Mike Warot on April 22, 2008 at 3:17 am
The internet was first started as a way to allow access to programs that happened to be on computers scattered around the USA. It was not about sharing data, which could be done by shipping paper tape, cards, etc… interacting with cool new software was the main motivator for the genesis of the internet.
We seem to have forgotten this point along the way. The desire to run code is what it’s all about. Now that we’ve got insecure endpoints, everyone is afraid to use the internet for its main purpose: sharing code.
We now all have a watered-down, Disneyland version of the original Arpanet, where the code was king. The Java sandbox, the Flash animations, and the Ajax client-server stuff all pale in comparison to just being able to offer access to something that works, on the system it was tweaked to run on.
I hope I’ve made my point: we’re missing something big because we don’t have a way of sharing code like we used to. There’s a big hole where the main generative nature of the internet used to be… it just isn’t safe to run random stuff any more.
It doesn’t have to be this way. We can make sandboxes that work in all circumstances, but it requires a heavy-duty paradigm shift that can best be summarized like this:
Never trust code
It worked in the beginning because the code came with the system it ran on. We can build systems that let the code run locally with our stuff as input, in a secure manner. All we have to do is teach our computers how not to trust.
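To make the "never trust code" idea concrete, here is a toy sketch in Python. It is only an illustration of deny-by-default, not a real sandbox (CPython's `exec` can be escaped through object introspection; genuine isolation needs OS-level confinement). The `run_untrusted` helper and the `ALLOWED` whitelist are names I made up for this example:

```python
# Toy deny-by-default sketch: untrusted code gets an explicit,
# tiny whitelist of functions and only the data we hand it.
# Everything else -- open(), __import__(), the network -- is absent.
ALLOWED = {"sum": sum, "len": len, "min": min, "max": max}

def run_untrusted(snippet, inputs):
    # The snippet sees only ALLOWED as its builtins, plus `inputs`.
    namespace = {"__builtins__": ALLOWED, "inputs": inputs, "result": None}
    exec(snippet, namespace)
    return namespace["result"]

# A benign snippet can compute over the data it was given...
print(run_untrusted("result = sum(inputs)", [1, 2, 3]))  # prints 6

# ...but anything outside the whitelist simply does not exist.
try:
    run_untrusted("result = open('/etc/passwd').read()", [])
except NameError as e:
    print("blocked:", e)
```

The point of the sketch is the default: the code proves nothing and is trusted with nothing; capability has to be handed in explicitly.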
In this manner we can vastly increase the generative power of the ends of the Internet.
I hope that makes sense, and expands the conversation.
Time for bed now…
Keith Dick on April 24, 2008 at 4:01 am
I guess I’m not catching something you are saying here.
The web server at the far end of an HTTP connection can run any program on the server that it is set up to invoke. And it is often the case that web requests result in a lot of computation on the server. So, at least to a first approximation, it seems that the internet is still being used to provide access to remote code.
Of course you know how web servers work, so there must be something about that approach that doesn’t equate with giving access to remote code that you are talking about. Could you say a few more words about this to point out the difference I’m overlooking?
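To make that concrete, here is a minimal sketch of what I mean by "computation invoked by a web request," using Python's standard `http.server`. The endpoint and the factorial computation are just illustrative choices of mine:

```python
# Minimal sketch: an ordinary GET request triggers computation on
# the server. The client supplies only a parameter (n); the code
# that runs is whatever the server operator installed.
import math
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

class ComputeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        qs = parse_qs(urlparse(self.path).query)
        n = int(qs.get("n", ["0"])[0])
        body = str(math.factorial(n)).encode()  # server-side computation
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

# Run the server on an ephemeral port and hit it with one request.
server = HTTPServer(("127.0.0.1", 0), ComputeHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/?n=5") as r:
    print(r.read().decode())  # prints 120 -- the server did the arithmetic
server.shutdown()
```

The contrast with what Mike describes is that here the client only parameterizes code the server already trusts; it never ships code of its own to be run.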