April 21, 2008 in Ideas | 2 comments
I just made a few changes to Understanding Infrastructure, which I trust will improve it. See what you think.
Mike Warot on April 22, 2008 at 3:17 am
The internet was started as a way to allow access to programs that happened to be on computers scattered around the USA. It was not about sharing data, which could be done by shipping paper tape, cards, etc… interacting with cool new software was the main motivator for the genesis of the internet.
We seem to have forgotten this point along the way. The desire to run code is what it’s all about. Now that we’ve got insecure endpoints, everyone is afraid to use the internet for its main purpose: sharing code.
We now all have a watered-down, Disneyland version of the original Arpanet, where the code was king. The Java sandbox, the Flash animations, and the Ajax client-server stuff all pale in comparison to just being able to offer access to something that works, on the system it was tweaked to run on.
I hope I’ve made my point… that we’re missing something big because we don’t have a way of sharing code like we used to. There’s a big hole where the main generative nature of the internet used to be… it’s just not safe to run random stuff any more.
It doesn’t have to be this way. We can make sandboxes that work, in all circumstances, but it requires a heavy-duty paradigm shift that can best be summarized like this:
Never trust code
It worked in the beginning because the code came with the system it ran on. We can build systems that let the code run locally with our stuff as input, in a secure manner. All we have to do is teach our computers how not to trust.
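To make that idea concrete, here is a minimal sketch of one way "never trust code" can look in practice: run the untrusted program in a separate OS process, hand it our data only through stdin, and cap its CPU time so a hostile or runaway script is cut off. This is only an illustration under my own assumptions (the function names, the 2-second limit, and the choice of Python are all mine, not the commenter's); a real sandbox would also isolate the filesystem and network.

```python
# Sketch: run untrusted code in a resource-limited child process.
# POSIX only (uses preexec_fn and resource limits).
import resource
import subprocess
import sys
import tempfile

def _limit_cpu():
    # Hard-cap CPU time at 2 seconds inside the child process.
    resource.setrlimit(resource.RLIMIT_CPU, (2, 2))

def run_untrusted(code: str, stdin_data: str) -> str:
    """Run untrusted Python source in a separate, CPU-limited process.

    The untrusted code never touches our data directly; it only
    reads what we choose to pipe in on stdin.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py") as f:
        f.write(code)
        f.flush()
        proc = subprocess.run(
            [sys.executable, f.name],
            input=stdin_data,
            capture_output=True,
            text=True,
            timeout=5,             # wall-clock backstop
            preexec_fn=_limit_cpu, # CPU-time cap in the child
        )
    return proc.stdout

# Example: the "foreign" code uppercases whatever we feed it.
print(run_untrusted("import sys; print(sys.stdin.read().upper())", "hello"))
```

The design choice mirrors the comment's point: trust lives in the boundary (process isolation, limits, an explicit input channel), never in the code itself.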
In this manner we can vastly increase the generative power of the ends of the Internet.
I hope that makes sense, and expands the conversation.
Time for bed now…
Keith Dick on April 24, 2008 at 4:01 am
I guess I’m not catching something you are saying here.
The web server at the far end of an http connection can run any program on the server it is set up to access. And it is often the case that web requests result in a lot of computation being invoked on the server. So, at least to a first approximation, it seems that the internet is still being used to provide access to remote code.
Of course you know how web servers work, so there must be something about that approach that doesn’t equate with giving access to remote code that you are talking about. Could you say a few more words about this to point out the difference I’m overlooking?
Comments are now closed.