Simplicity and Scale

Something that always surprises me (well, more accurately, boggles my mind) is how successfully the Internet has scaled.

The original ARPAnet was designed to connect a small number of research locations so that they could share resources. Starting with just four sites, it scaled up to 10, then 100, over a long period. Some of the protocols changed, until finally TCP/IP came along. While there have been minor changes in those protocols since they were first introduced, they have remained pretty much recognizable as the Internet has gone from hundreds of connections to billions. Which is nothing short of a miracle.

If I had to pick a reason for this, it would be the simplicity of the network’s design. All the Internet does is deliver packets, using best effort, relying on the endpoints to do as much of the work as possible. This means no part of the Internet needs to know about all of the Internet. It also means that new applications can be added without changing the underlying network. All sorts of things are possible when you keep the network as simple as possible.
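
To make that concrete, here is a minimal sketch of what best-effort delivery looks like from an endpoint, using UDP. The port number and message are invented for the example; the point is that the network is handed a packet and makes no promises about what happens to it.

```java
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.nio.charset.StandardCharsets;

// Best-effort delivery in miniature: UDP hands a datagram to the network and makes
// no promises about arrival, ordering, or duplication. The port and message are
// arbitrary choices for the example; nothing needs to be listening.
public class BestEffort {
    public static void main(String[] args) throws Exception {
        byte[] data = "just a packet".getBytes(StandardCharsets.UTF_8);
        try (DatagramSocket socket = new DatagramSocket()) {
            DatagramPacket packet = new DatagramPacket(
                    data, data.length, InetAddress.getLoopbackAddress(), 9999);
            socket.send(packet); // fire and forget: no ack, no retry, no guarantee
        }
    }
}
```

Everything beyond that fire-and-forget send (acknowledgment, retransmission, ordering) is the endpoints’ business, which is exactly the point.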

What is often not appreciated is how hard it is to design something that is this simple. The temptation to add a feature here or something that will make life at the endpoints easier there can be overwhelming. I have great admiration for the original designers of the Internet protocols for their ability to resist this. As an engineer, one of the most difficult things to do is know when to say “no”. We can thank those early designers for the taste and courage to do just that.

Back on the air…

Hard to believe that I haven’t written since last November, but there it is. Now the second version of our Freshman Seminar is up and running, so I have a forcing function to write…

I’m always fascinated by reading the early history of the ARPAnet/Internet. The whole effort was an amazing leap of faith by a group that didn’t know if what they were trying to do would ever work. They certainly had no conception of how it would scale and the importance it would have in the current world. They were just trying to get something that would let them do their work more easily.

Which leads me back to a question I often consider: how can standards best be set? The standards that enable the Internet (TCP/IP, DNS, and the like) were, for the most part, built to solve problems people were actually having. They weren’t built by an inclusive, multi-stakeholder, democratic standards organization. The IETF, which is the organization that has the most to do with Internet standards, is run by a process that causes governance people to faint. It’s a bunch of geeks figuring out how to get the job done. But the results are pretty good.

I’ve argued elsewhere that the best technology standards are the ones that describe an existing tool or technology that everyone uses because it is useful. The C programming language, for example, was standardized well after everyone who was going to use it was already using it. The purpose of the standard was to write down what we all knew. The early days of Java(tm) were the same, as was Unix (which had, in its early days, two standards: one from AT&T and the other from Berkeley). I think of these standards efforts as writing down what are already de facto standards. This is valuable, and the standards seem to work.

I contrast these standards with those that begin with a standards process. These are technologies like Ada, the object-oriented language mandated by the Department of Defense and built by committee, or the ISO networking standards, produced by a full multi-stakeholder process. While they may have their good points, they tend to be clumsy, ugly, and not very useful (and therefore not much used). These are attempts to impose standards, not describe them. I used to claim that the reason managers liked these standards is that they turned technical questions (which the managers didn’t understand) into political questions (which they did). But perhaps I’ve mellowed with age; I now think an even more basic problem with these committee-invented standards is that they are created without a particular problem to solve, and so solve no problems at all.

One of the real blessings of the early Internet was that no one other than the engineers working on it thought it was important enough to be worth putting under some form of governance. The geeks got to solve their problems, and the result is a system that has scaled over many orders of magnitude and is part of the infrastructure of all technology. But one of the reasons that it works is that those who started it had no idea what they were doing, or what it would become.

Technology and government

Our discussion in our Freshman Seminar this last week concerned how technology in general and the Internet in particular could be used to help governments (at all levels) be more responsive and deliver services better. We were fortunate to have David Eaves join us; he has been an advocate for open data and using technology in government for some time, so getting advice from someone who has been there is really helpful.

What I keep thinking about, after the discussion, is the number of large technical projects within various parts of government that fail. There are the really visible failures, like healthcare.gov (on its initial rollout), the attempts to implement a case management system for the FBI, or the attempts to replace the air traffic control system (which have failed multiple times). Depending on the statistics you want to cite, 72% of government IT projects were in trouble in 2009, 70% of government IT projects fail, or only 6.4% are successful. David claimed that things are not that different outside of the government, and you can certainly find studies that agree with this. In fact, reading these studies, it is surprising that anything ever works at all.

My problem with all these studies is that they fly in the face of my own experience. I spent about 30 years developing software in industry. I was on lots of projects. There were some that weren’t successful in the market for one reason or another. There were a couple that were stopped when the company I was working for got bought and the new owners decided that those projects weren’t the sorts of things they were interested in. But I was never on one that failed because we just couldn’t get the system to work.

David did distinguish between projects done in “technology” companies versus those done by everyone else, and I certainly worked in technology companies. But over the past 6 years I’ve been working in one part or another of Harvard Information Technology. Harvard is hardly a technology company (don’t get me started…), but in that time we have successfully rolled out a new course management system, a new student information system, re-vamped the Identity and Access Management system, moved most of email from local servers to the cloud, and done a ton of other projects as well. Not all of them have done exactly what everyone hoped they would do, but they have all pretty much worked. None had to be abandoned, or re-written from scratch, or got deployed and then turned into a disaster.

So what is the difference between the facts and figures that we see about project failure and my own experience? Maybe I have some sort of magic about me, so that projects I join or observe are somehow saved from the fate of all of these others. That would be really useful, but I don’t think it is the right explanation. I think I’m good, but I don’t think I’m magic.

I’m more inclined to think that the difference has to do with what the managers of the projects care about. In most of the government projects I’ve heard about, and in lots of the non-governmental projects that have failed, managers have been more concerned about how things get done than anything else. That is, the worry is what kind of process gets followed. Is the project being run using the waterfall model (which was first discussed in a paper saying that it was the wrong way to manage a software project) or various forms of agile development (which is a new cult), or some other method? These are approaches that managers really hope will make the development of software predictable, manageable, and most importantly, independent of the people who are on the project. All of these models try to make the developers interchangeable parts who just do their job in the same way. Doesn’t matter who is on the project, as long as the right process is followed.

This is in contrast to what I saw through my career, and what I see in companies that might be thought of as “tech” companies now. In these projects, the worry was all about who was on the project. There was a time when I gave talks about what I called the Magnificent Seven approach to software projects. The process was straightforward: hire a small group of experienced professionals, let them deal with the problem as they see fit, and if you find a kid who can catch fish barehanded, ask him or her along. This was hardly an idea that I came up with by myself; you can see it in The Mythical Man-Month and other things written by Fred Brooks.

A process-based approach seems a lot more egalitarian, and in some ways a lot more fair. It means that you never have to tell someone that they aren’t good enough to do the job. It is good for the company, because you don’t have to pay outrageous salaries to engineers who are probably a pain in the tail to manage because they think (often rightly) that the company needs them more than they need the job (since, if they really are that good, they can find another job easily). So I certainly understand why managers, and large bureaucracies like various levels of government, want to focus on process rather than individual talent.

But then you have to explain the difference in success rates. If focusing on process gives a success rate somewhere between 40% and 5%, and focusing on talent does a lot better (I don’t have numbers, but my anecdotal experience would put the success rate of really high-performance teams in the 85%+ range), then maybe making quality distinctions isn’t a bad idea. I’m not sure how you get the various levels of government to accept this, but I think if we are going to have governments that are dependent on good technology, we need to figure out a way.

The Singularity, and Confessional Language

In our seminar this last week, we talked about the Singularity, that point at which machines become smarter than people, and start designing machines that are even smarter so that the gap between humans and computers running AI programs just gets larger and larger. Depending on who you listen to, this could happen in 2045 (when the computing power of a CPU will, if current trends continue, be greater than that of the human brain), or sooner, or later. There are people who worry about this a lot, and in the past couple of weeks there have even been a couple of Presidential studies that address the issue.

I always find these discussions fascinating, as much for what is presupposed in the various positions as for the content of the discussion. The claim that machines will be as “smart” as humans when the complexity of the chips equals the complexity of the human brain assumes a completely reductionist view of human intelligence, where it is just a function of the number of connections. This may be true, but whether it is or not is a philosophical question that has been under discussion at least since Descartes. Consciousness is not something that we understand well, and while it might be a simple function of the number of connections, it might be something else again. In which case, the creation of a computer that has the same level of complexity as the human brain would not be the same as creating a conscious computer, although it might be a step in that direction.

Then there is the assumption that when we have a conscious computer, we will be able to recognize it. I’m not at all sure what a conscious computer would think about, or even how it would think. It doesn’t have the kinds of inputs that we have, nor the millions of years of evolution built into the hardware. We have trouble really understanding other humans who don’t live like we do (that is the study of anthropology), and this goes back to Wittgenstein’s dictum that “to understand a language is to understand a way of life.” How could we understand the way of life of a computer, and how would it understand ours? For all we know, computers are in some way conscious now, but in a way so different that we can’t recognize it as consciousness. Perhaps the whole question is irrelevant; Dijkstra’s aphorism that “The question of whether machines can think is about as relevant as the question of whether submarines can swim” seems apt here.

Beyond the question of whether machines will become more intelligent than humans, I find the assumptions of what the result of such a development would be to tell us something about the person doing the speculation. There are some (like Bill Joy) who think that the machines won’t need us, and so will become a threat to our existence. Others, like Ray Kurzweil, believe we will merge with the machines and become incredibly more intelligent (and immortal). Some think the intelligent machines will become benevolent masters, others that we are implementing Skynet.

I do wonder if all of these speculations aren’t more in the line of what John L. Austin talked about as confessional language. While it appears that they are talking about the Singularity, in fact each of these authors is telling us about himself– how he would react to being a “superior” being, or his fears or hopes of how such a being would be. These things are difficult to gauge, but the discussion was terrific…

Money, bits, and the network

We had an interesting discussion in the freshman seminar on the impact of the Internet on the economy. We all see some of the disruptions– Uber and Lyft are causing major changes to the taxi companies, Amazon has done away with most brick-and-mortar bookstores (and many other kinds as well), and the notion that you need to actually go somewhere in the physical world to buy something (that you will see, touch, and perhaps try out before you buy) is getting restricted more and more to items that seem special precisely because you need to see them before you buy. Even for the paradigm case, the car, buying habits are being changed by companies like Tesla.

We only touched on a more fundamental change in the economy that the computer and networking world is bringing about– the very notion of money. It wasn’t that long ago that what stood behind our currencies was a hot political topic– there were those who worried about currency that was only backed by silver, rather than by gold. The idea was that the value of money needed to be directly traced to some precious metal that the money represented. A $100 bill got its worth by being capable (at least in theory) of being exchanged for $100 in gold (or at least gold coins).

This notion was abandoned by most countries in the early part of the 20th century. What made $100 worth something was that someone else was willing to part with some set of goods and services in exchange for the bill. The final arbiter was that you could pay your taxes with such bills; the government always needs to be paid, so that kind of worth is a form of guarantee.

Now, the real money is just bits in a computer. Banks exchange the bits with each other, and we all hold our money in those banks, where it is represented as bits and where we can transfer it via computers, or credit cards, or by direct electronic means. On occasion we exchange some bits for bits of paper (bills) that we can carry around with us and use to pay, but that is becoming less and less necessary as we all become used to the notion of money as bits.

This can be very convenient. I was recently in England for most of a week. My ATM card could be used to get English currency because bits are easy to transfer internationally. But I didn’t need the currency all that often; mostly I paid with credit cards that were tokens allowing me to directly move bits from one account to another. I remember many years ago when I first went to England, and needed to worry about getting traveller’s checks beforehand so I would have money while I was there. Just not needed any more.

All of which leads to the question of what money is now. It isn’t a representation of a precious metal. It’s more a consensual hallucination that we all believe will continue to be exchangeable for goods, services, or (as a last resort) taxes. But it isn’t so much a thing as a representation on the network.

All of which brings me to one of my favorite characters in this space, J.S.G. Boggs. Boggs became (in)famous for drawing complex pictures of obviously fake U.S. bills (he often put his own face on the bill), and then passing the art at the value shown on the bill. So if the bill showed $100, he would ask for $100 worth of goods (or change). Merchants were happy to do this, since the bills were worth much more than their face value (as art). But it drove the U.S. Secret Service (which has as part of its job the enforcement of anti-counterfeiting laws) somewhat bananas. The courts finally sided with Boggs, who claimed that the work was performance art. But it was also something that had value of a sort that, at least in the initial transaction, was not like other forms of art.

Things are getting more complex today. Bitcoin is hardly performance art, but whether or not it is currency is under debate. It appears to have value (although it fluctuates considerably), it can be used for payments, and is certainly causing some authorities consternation. How much it changes the economy is yet to be seen, but it will certainly have an impact no matter what happens.

Design principles

As part of the seminar on the Internet that I help to teach, we read End-to-end arguments in system design, one of my favorite papers in computer science. What makes this such a great paper is that it takes the notion of design seriously. The reason to read the paper isn’t to learn about a nice implementation, or the proof of some theorem, or even how to make an algorithm that runs faster than others when doing some task. Instead, this paper talks about a general principle that informs any work that might be done that involves a network.

Simply put, the end-to-end argument tells us to keep the network simple, and do the work at the endpoints, since those endpoints will need to do most of the work themselves anyway. Worried about who you are talking to? Well, you need to authenticate at the endpoints, so there is no need to do that in the network. Need to check that the message hasn’t been corrupted in transit? That has to be done at the endpoint, so there is no reason to do so in the network, as well. It is an outgrowth of the idea that you don’t want to do work twice. So find the right place to do the work, and ignore it everyplace else. The result of following this principle is that the network we now use is simple, has scaled remarkably, and can be used for all sorts of things it was never intended to be used for. To introduce a new bit of functionality, all that needs to be changed are the end points that need that functionality, not the network itself.
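
As a concrete (if toy) illustration, here is a sketch of an end-to-end integrity check: the sending endpoint computes a checksum over the payload and the receiving endpoint recomputes it, without caring what the network did or didn’t check along the way. The class name and payload are made up for the example.

```java
import java.nio.charset.StandardCharsets;
import java.util.zip.CRC32;

// The endpoints, not the network, verify that the payload arrived intact.
// The payload and class name are made up for the example.
public class EndToEndCheck {
    // Compute a checksum over the payload at an endpoint.
    private static long checksum(byte[] payload) {
        CRC32 crc = new CRC32();
        crc.update(payload);
        return crc.getValue();
    }

    public static void main(String[] args) {
        byte[] payload = "hello, network".getBytes(StandardCharsets.UTF_8);
        long sentChecksum = checksum(payload);      // computed by the sender

        // ... the payload crosses an arbitrary, possibly lossy network ...

        long receivedChecksum = checksum(payload);  // recomputed by the receiver
        System.out.println(receivedChecksum == sentChecksum
                ? "payload intact"
                : "corruption detected; ask the sender to retransmit");
    }
}
```

Whatever checking the network happens to do underneath is, at best, a performance optimization; the check that actually matters is the one the endpoints perform.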

It seems obvious now, but this principle was pretty radical at the time it was proposed. A lot of people thought it would never really work, never really scale, never really perform. It was fine for experimentation and toy applications, but not for real work (that needed token rings, or something far more reliable and guaranteed).

Articles that enunciate general design principles are few and far between, and should be treasured when they are found. My other favorite is Butler Lampson’s Hints on Computer System Design, a paper written in 1983 but still relevant today. The examples may be somewhat outdated, but the hints are still important to understand. The details of the work may change, but the underlying design principles are much closer to being timeless.

In the seminar, we also talked about the design notion that you can solve a problem by introducing a level of indirection. When the first ARPAnet was built, the networking was taken care of by simple, small computers called Interface Message Processors (IMPs), which were responsible for talking to each other, with each IMP connected to a host system. This isolated the differences in the host systems to a local phenomenon; if I wanted to connect, I had to deal with local characteristics locally, but not with the special characteristics of the remote hosts I wanted to talk with or connect to. The IMPs offered a level of indirection. When it was realized that there were local networks that wanted to be connected, another level of indirection was introduced. This level looked to an IMP like a host, and to the local network like part of that network. Thus was the gateway born, allowing the idiosyncrasies of the local networks to be dealt with locally, not globally.

Each of these levels of indirection can also be seen as adding a layer of software into the system that translates from the local environment to the more global one. Each local environment may have a different gateway, but that is masked from the global environment. The power of software is that it allows a common external interface to be implemented in different ways, hidden from those who don’t need to know about them.
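
In software terms, the idea looks something like the sketch below; the interface and class names are invented for illustration rather than taken from the historical systems.

```java
// The interface and class names here are invented for illustration, not historical.
// Callers see one common interface; each adapter hides the quirks of its own
// local network, so local idiosyncrasies stay local.
interface Gateway {
    void deliver(String packet);
}

class EthernetGateway implements Gateway {
    @Override
    public void deliver(String packet) {
        System.out.println("framing for the local Ethernet: " + packet);
    }
}

class RadioGateway implements Gateway {
    @Override
    public void deliver(String packet) {
        System.out.println("scheduling on the packet radio net: " + packet);
    }
}

public class Indirection {
    // The caller never learns which kind of local network it is talking to.
    private static void send(Gateway gateway, String packet) {
        gateway.deliver(packet);
    }

    public static void main(String[] args) {
        send(new EthernetGateway(), "hello");
        send(new RadioGateway(), "hello");
    }
}
```

Adding a new kind of local network means writing one more implementation of the interface; nothing the caller does has to change.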

Discussions of software design have recently centered around various sorts of patterns. While these may be interesting, I do wish we as a community would talk more about the general principles that can inform a wide variety of designs. They are hard to find, often look trivial once they are known, but are important ways to think about how we build our systems.

Back again

It has been way too long…amazing how life intrudes on the best of intentions. But time to get back.

To help force myself, I’m teaching a freshman seminar (along with Mike Smith) in which we are requiring that the students keep a blog of their thoughts about the content of the class. And since it seems unfair to ask others to do what you are unwilling to do yourself, we committed to do the same. It’s one way to get back to writing.

The seminar’s topic is What is the Internet, and What Will It Become? One of the pleasures of teaching a freshman seminar is that the topic can be wide open, pretty much unconstrained, and far more interesting than tractable. This topic fits the bill pretty well. It reminds me of my past as a philosopher– the more I think about the topic, the less sense I can make of it. Is the Internet just TCP/IP? Is it a suite of protocols, or a consensual hallucination?

Beyond the topic, we get to discuss all of this with (and in the process get to know) a small group of what appear to be spectacular students. I always learn more from them than they learn from me, and I’m looking forward to being taught by them.

We are starting by looking at the history of the development of the Internet. We have been reading Hafner and Lyon’s Where Wizards Stay Up Late, as accurate a single-volume history as we could find. History is a funny thing, especially when there are still those around who were involved in the events. It is hard to get everyone to agree who did what when, and even more difficult to get everyone to agree on the impact and import of much of what went on. It’s so much easier when no one is around who can say “well, I was there, and it didn’t really happen that way.”

There are lots of interesting lessons to learn from the way the early Internet was constructed. There seemed to be some ideas that permeated the air but were completely counter to the existing orthodoxy, such as packet switching. It was clear that there was no real agreement on what the end state of the experiment that was ARPAnet was going to be. And reading the history it becomes apparent that then, as now, much of the real work was done by graduate students, who seemed to have a better idea of what it was all about than the people who were supposedly running the project.

What I find most interesting, though, is the contrast in notions of how to build a reliable network. The packet network advocates started with the assumption that the network could never be made reliable, and that was just the way the world was. So they spent a lot of time figuring out how to build reliable transmission on top of an unreliable network, thinking through things like retries, congestion control, and dynamic routing. Errors, in this design philosophy, are a given, and so the users of the network need to acknowledge that and build reliability on top of the unreliable pieces.
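
A minimal sketch of that philosophy, assuming a made-up channel that drops traffic at random: the sender simply keeps resending until it hears an acknowledgment, or gives up.

```java
import java.util.Random;

// A toy stop-and-wait sender over a channel that silently drops traffic at random.
// The 40% loss rate is made up for illustration; real protocols add sequence
// numbers, adaptive timeouts, and congestion control on top of this basic loop.
public class StopAndWait {
    private static final Random RNG = new Random();

    // Simulated unreliable channel: true means the packet (and its ack) made it through.
    private static boolean unreliableSend(String packet) {
        return RNG.nextDouble() > 0.4;
    }

    // Reliability built at the endpoint: keep resending until acknowledged or out of tries.
    private static void reliableSend(String packet, int maxRetries) {
        for (int attempt = 1; attempt <= maxRetries; attempt++) {
            if (unreliableSend(packet)) {
                System.out.println(packet + " delivered on attempt " + attempt);
                return;
            }
            System.out.println("no ack for " + packet + ", retrying...");
        }
        System.out.println("giving up on " + packet + " after " + maxRetries + " attempts");
    }

    public static void main(String[] args) {
        reliableSend("segment 1", 5);
    }
}
```

Reliability, in this view, is something the endpoints construct for themselves, not something the network promises.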

This is a huge contrast to the network engineers of the time at, say, the Bell System. The phone company (and there was only one in the U.S. back then) was all about building a reliable network. They did a pretty good job of this; I remember when not getting a dial tone on your (AT&T-owned) phone was a sign of the Zombie Apocalypse (or, given the times, the beginnings of nuclear war). But making the system reliable was difficult and expensive, and it limited what could be done on the network (since lots of assumptions about use got built in). It is hard to remember, now that the Internet is the backbone of most everything, that it wasn’t clear for about 20 years which of these approaches would win. Big companies backed “reliable” networks well into the 90s. But in the end, simplicity at the network level won out, giving us the networks we have today.

I suppose my interest in this evolution is not surprising, given that I have spent most of my life working in distributed systems, where the same argument went on for a long time (and may still be going on). Building a reliable computing platform can be done by trying to ensure that the individual components never fail. When you build like this, you worry about how many 9s your system has, which is a reflection of what percentage of the time your system is guaranteed to be up. Four 9s is good (the system is guaranteed to be up 99.99% of the time), five 9s is better (now you have a guarantee of 99.999% uptime). But moving from four 9s to five 9s is expensive.
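
The arithmetic behind the 9s is worth seeing once; this little calculation (illustrative only) turns each availability level into the downtime it allows per year.

```java
// The arithmetic behind the 9s: each availability level translates into an
// allowance of downtime per year. The numbers follow directly from the definition.
public class Nines {
    public static void main(String[] args) {
        double minutesPerYear = 365.25 * 24 * 60;
        double[] availability = {0.99, 0.999, 0.9999, 0.99999};
        for (double a : availability) {
            System.out.printf("%.3f%% uptime allows about %.1f minutes of downtime per year%n",
                    a * 100, minutesPerYear * (1 - a));
        }
    }
}
```

Four 9s allows roughly 53 minutes of downtime a year; five 9s allows about 5. Chasing that last three-quarters of an hour is where the expense comes in.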

The alternative, best exemplified by cloud computing, or the approach taken by Amazon, Google, or Facebook, is to build a reliable system out of lots of unreliable components. You assume that any of the servers in your server farm is going to fail at any time, but build your system around redundancy so that the failure of one doesn’t mean that the system is no longer available. It is a more challenging design, since you have to worry about failure all the time. But it is one that works well.

Just like the Internet.
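
To make the idea concrete, here is a minimal sketch of building a reliable service out of unreliable parts; the replica names and failure rate are invented, and real systems add health checks, load balancing, and replication of state on top of this.

```java
import java.util.List;
import java.util.Random;

// Sketch of building a reliable service out of unreliable parts: replica names
// and the 30% failure rate are invented; the point is that the request succeeds
// as long as any one replica answers.
public class Failover {
    private static final Random RNG = new Random();

    // Each replica fails independently some fraction of the time.
    private static String tryReplica(String name) {
        if (RNG.nextDouble() < 0.3) {
            throw new RuntimeException(name + " is down");
        }
        return "response from " + name;
    }

    // Walk the redundant replicas until one of them answers.
    private static String request(List<String> replicas) {
        for (String replica : replicas) {
            try {
                return tryReplica(replica);
            } catch (RuntimeException e) {
                System.out.println(e.getMessage() + ", trying the next one");
            }
        }
        throw new RuntimeException("all replicas failed");
    }

    public static void main(String[] args) {
        System.out.println(request(List.of("replica-a", "replica-b", "replica-c")));
    }
}
```

Any single replica may be down at any moment, but the service as a whole keeps answering.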

Furthering Human Knowledge

A recent graduate with whom I did considerable work wrote and asked me the following question:

What are your thoughts about how academia versus industry contribute to the expansion of the world’s knowledge? My thoughts from speaking to people from both sides are that it seems that in academia, knowledge is much more open to sharing, but progress is much slower (because of multiple things: the need to apply for grant money, the multiple roles that professors must juggle, the difficulty of the problems being tackled, the lack of readily accessible consumer data, etc.), and that in industry, there are probably tens or hundreds of companies trying to do the same thing (like spam detection, for example) but who do not want to share their knowledge, but that the competition and that the money incentive makes development faster. Do you think that working in one versus the other is more effective or efficient if the furthering of knowledge is the ultimate goal? This is one of the big questions I’m thinking about as I’m trying to figure out where I best see myself in 10 years.

I’ve been thinking about this problem, and some variants of it, for a considerable period of time, so the response was fairly lengthy. On the off chance that others might be interested, I post it here:

And this is why I love working with students— they always ask the simple questions :-).

There may have been a time when the answer to this was actually simple. The idea was that academics worked on long-term, highly speculative research topics, generally funded by the government, and published papers for all the world to see. Then industry would pick up some of these ideas, figure out the commercial application, apply the pure research, and take the product that resulted from applying that research to market. Academics worked on the long-term (5-10 year) problems, and industry worked on the short term (.5-2 years). A simple and rational pipeline, with each group knowing what it was doing and how it interacted with the other. If you wanted to work on the high-risk, high-impact (but likely to fail) stuff, you became an academic. If you wanted to have immediate impact on the real world, with some better guarantee of success, you worked in industry.

This is a nice picture, but if it has ever actually been true in the computer/software sector, it was before my time. As long as I’ve been watching and taking part, real research was done across a combination of academia, industrial research labs (Bell Labs, BBN, Xerox PARC, Digital’s SRC, Sun Labs, MSR), and product groups at various companies, both large and small. I remember a time when it seemed that everyone in the industry worked in Research and Development; the difference between being an academic and working for a company was whether you wrote the paper before or after releasing the product.

But there are some changes that have occurred over the past 10 or more years that have made the picture even more muddled than it was, and thus make the question harder to answer.

One of these changes is that the amount of research funding going to academics in the CS field has been going down, and the risk profile of the funding agencies has been going down with it. There was a time when NSF and DARPA would give out lots of money, and be willing to take chances on things (like the internet) that didn’t have any obvious payback and might not have worked at all (there were plenty of research projects that didn’t). As the amount of money has decreased, the willingness for risk has decreased, as well— while there are some projects that seem pretty risky, for the most part the perception is that for a project to get funded, you need to show that it will succeed. This leads to people getting startup funding to do a year or so of research, applying to funding agencies for that work (or a small follow-on to the work), and then using the funding to do something new that they can then ask for the next round of funding to support. Again, it isn’t always like this, but it is like this often enough that it can be problematic. By the way, it may not be that the actual amount of money has gone down (I believe that it has), but the amount hasn’t gone up in proportion to the number of academics who are trying to get it. So that makes things strange, as well.

At the same time, the number of industrial research labs seems to be decreasing, along with the funding available for such labs. Big places are willing to do some really adventurous stuff (look at Google X, or at least the rumors from Google X), but the work is not done in the open, may not be shared, and when it is shared it is often covered by patents. Which is natural; these companies want a return on their investment. But it does limit the spread of the innovation or knowledge. In a lot of cases, companies are now willing to take more of a chance on research than academics, because they realize that the payoff to being the first to figure something out is so huge. So some of the really speculative, long-range work is being done in industry, but you hardly hear about it (think of the self-driving car).

And then there is a third trend. What seems to be happening more and more is that innovation and real research is being outsourced to startup companies. If you have an innovative idea, you start a company and work on the idea. Then if the idea works out, you either take over the world or (more likely) get bought by a larger company. This is a really attractive model for those who fund innovation; they have essentially out-sourced the problem to the VC community. The government doesn’t have to spend any money at all. The existing companies only have to buy when they know that the idea works. And the VCs are willing to put up the initial money, because the payback for the companies that get bought or get to go public is large enough to make the initial investment profitable. This certainly seems to be the way Oracle does research these days (they don’t even do much hiring; most of the people they add to the company come in through company acquisition). Ryan Adams recently had his little company bought by Yahoo, so sometimes the line between academic, startup, and established company can be even more blurred.

Of course, such outsourcing also means that the time depth of start-up company research is dictated by the patience of the VC community, which seems to be limited to a couple of years (at best). And the research better have a clear commercial application. 

All of this has to do only with how the initial research gets funded. The real question centers on how you could most effectively add to human knowledge. Which is a lot harder than just getting funding, because once you get funding, you then need some way to get people to recognize what you have done.

Academics do this by writing papers and giving talks, which sometimes works. But there are a lot of papers that get written and never really get read, a lot of talks that are heard and immediately forgotten. By the same token, there are lots of products that are really great pieces of technology that, for one reason or another, never get adopted. Before inventing Java, James Gosling invented NeWS, an elegant and fully functional window system. But NeWS went nowhere; instead the industry of the time adopted X-windows, which a lot of us thought was not technically as nice. Dick Gabriel and I have been arguing over why LISP lost out to C or Multics lost out to Unix for a long time, but whatever the reason was it was not purely technical. I remember being told by Ivan Sutherland, who has done more great technology than just about anyone I know, that all you can do as a technologist is make sure that the work is good; adoption is a question outside of your control. A hard lesson, but true.

After all of this evasion, let me try to answer the real question, which is what should you do if you want to push forward the boundaries of knowledge? And the answer will depend on how you want to live your life, not on which is more likely to push those boundaries successfully. As an academic, you have much more freedom, deciding on your own research agenda and working with students who you will get to advise and direct. In industry, you probably won’t have that sort of freedom for 5 to 10 years (and that’s if you enter with a Ph.D.); you will be part of someone else’s group for that time, working on their problems. But while in industry you will not have the worries over funding (my funding cycle when at Sun Labs was a couple of weeks), and the people you will be working with will have a higher level of skill and discipline than is generally found in students. But the problems you will work on will be constrained, to some extent, by the market and you may not be able to share all you know. The environment of a startup gives you some of the freedoms of an academic, but also brings the pressures of the market. 

And, of course, a final consideration is just what is meant by “furthering human knowledge.” One way of doing this is to come up with something that no one has ever thought of before, and to get everyone to understand it. This might be a new theorem, a new system, or a better way of doing things. Java, for all its flaws, certainly contributed to human knowledge in some way; the IP protocols did the same. But these sorts of contributions are few and far between. When they happen, it is a combination of insight, perspiration, and luck; no one knows how it really happens, but when it does it is pretty amazing.

But the other way to further human knowledge is to train the next generation in how to further knowledge. This can also be done in all of the contexts spoken about above— I mentored a lot of people when I was in industry, start-ups teach in their own kind of way, and being a professor is (supposedly) explicitly about that. As my career has moved along, I’ve grown to appreciate this way of furthering knowledge more and more; it may not be as romantic or immediately satisfying, but you end up playing the long game, knowing that your contributions will outlast you and not be subject to quite so many whims. Which is the other reason that I love working with students— it is part of my attempted contribution to the expansion of human knowledge.

It was 20 years ago today…

A lot of Beatles songs have been running through my head this last little while, but Sgt. Pepper is getting most of the mental traffic. In part this is because of a recent personal anniversary (will you still need me?) but mostly because it was 20 years ago that Java(tm) was first made public. My, how time flies.

There was actually a bit of a debate on the JavaSoft (that was the part of Sun that was responsible for Java) mailing lists; the first public release of the alpha technology preview of Java went out March 27, 1995, while the first announcement of the language and environment was during the Sun Developers Conference held May 20-something in that year. To give a bit of context, that was the same Sun Developers Conference when the Memo of Understanding between Sun and Netscape that placed Java in the browser was announced. For those of you who don’t know what Netscape was, go look it up. At the time, Netscape was a much bigger deal than Java, but time has a way of changing those things.

Java had actually been wandering around Sun internally for a couple of years before that; around the 10-year anniversary I found that I had the electronic copy of the oldest known (then) spec for the language, which you can see here. First note that the language was not then called Java. It was originally named Oak, after the tree outside of James Gosling’s office. But the trademark folks said that name was already taken, so the commercial name was changed to Java. The rest, as they say, is history.

The Oak spec is recognizably Java, but is also different. The exception mechanism at the time did not require that exceptions be declared or caught; this was changed before the first official release. The language was a lot smaller (full description in under 30 pages), and the mass of libraries, frameworks, and imported features from other languages weren’t there. This is a snapshot of Java in its purer form, before it became important enough to be fought over by different companies, before the community process, and before all those who wanted it to be something else had their way with it.

I still like to program in Java (the Standard Edition; don’t get me started on the Enterprise Edition), since at its core it is still the simple, object-oriented, strongly typed language that it started out being. I do use generics, even though I hate the way they were implemented and argued against them (and still think they aren’t needed). I’m glad to see lambdas introduced, although I could have lived without them. And I do sometimes wish that we had had a few more years to develop the language before it became the global phenomenon it became. But it ruled much of my life in technology, and it was fun.

So happy birthday, Java. And thanks for all the good times. The bad ones weren’t your fault…

Architecture at HUIT

It’s been quite a while since my last post; lots has happened and it is past time to start talking about some of it.

Information technology at Harvard is changing a lot, as all of the planning that started when Anne Margulies came in as CIO is beginning to be implemented. We are well into the implementation of a new Student Information System, and the move of our classroom support technology from iSites to Canvas is happening (sometimes faster than we thought). All of these are good, and visible. But this is hardly all of it, or even the most (technically) interesting.

In some important ways, the biggest change is the move to the cloud. If done right, this will be pretty much transparent to the users of our services, but it is a huge change for the organization itself. It makes sense in lots of ways, but it takes new sets of skills and new ways of looking at the problems we are trying to solve. But we have committed to moving 75% of our services to the cloud in the next three years, which is a lot more than just testing the waters.

As we do this, it is also an opportunity to start putting some architectural principles in place. HUIT has traditionally treated most applications as “one-of-a-kind” entities, with machinery and underlying software stacks selected, optimized, and maintained for each system. When we were trying to squeeze all we could from each application, this made sense. But as computing power grows, the complexity of such an approach is overwhelming any advantage in performance we might be able to gain.

To help bring some regularity to this, I’ve convened a new group, the Architecture Decision Group. As the name implies, this is a group that comes together to make decisions about what the architecture is and will be going forward, at least for HUIT. If the wider Enterprise Architecture work were complete, this group would spend most of its time making sure that architecture was being followed. But since that wider effort is just getting started, the group is deciding the issues that need answers now, so we can avoid the lack of regularity that is our current state.

The group is intentionally designed to be small and technical. Permanent members are the CTO (natch), the deputy CIO, the Managing Directors of Engineering and Architecture and Strategy and Planning, the Chief Information Security Officer, and the Director of Networking. Depending on the subject being discussed, we will ask other (technical) people to attend.

An important part of the work that we are doing is writing it down. We have a backlog list, and then a set of decisions and rationales for those decisions. All of this is kept on a publicly viewable wiki.

While the deliberations of the group are invitation-only, we are looking for ways that the more general engineering community can contribute. For any of the topics in the backlog, we invite opinions to be written up (on the wiki) and submitted. The group will read these, and those that seem particularly relevant may cause us to invite the writer to join for a session or two. We also invite comments on our decisions. The assumption is that nothing we decide is set in stone, but unless there is good reason to follow some other design, everything that HUIT does should follow the decisions made by the group.

We have already made a number of decisions around the cloud and the network architecture that impact our move to the cloud; take a look and file a comment if you think we have not understood something important. We will next be looking at some of the patterns for deployment in the cloud; opinions on those topics are being sought. So take a look and get involved…this is the technical future of HUIT that is being worked out here, so we would love to hear from you.