Cyber war, cyber crime, and jurisdiction

It’s an odd thing about ‘cyber’ as a prefix– with the exception of cyberspace, it almost always means something bad. We have cyber-crime, cyber-war, cyber-bullying, but never cyber-puppies or cyber-joy. And most of the people working in technology don’t use the term at all. But it is a big thing in government and policy circles.

We had a great discussion in the seminar this week with Michael Sulmeyer about cyber war. The subject is complicated by the difficulty of distinguishing between cyber war, cyber crime, and cyber espionage. There are rules about war, but they were developed for the kind of conflict that occurs in physical space. The rules for conflict in the digital world are not well understood. And the notion that the two spheres of conflict will remain distinct is something that few believe. We have already seen some attacks that move from the digital world to the physical world, but there is little understanding of how an escalation from one to the other would work. What are the rules, and what can be expected from adversaries? Without having some notion of reasonable escalation, it is hard to tell where any attack will end.

One worry that I have is that the pace of change in technology is so much faster than the pace of change in the policy and legal worlds. Getting countries to talk to each other about the rules of cyber engagement takes years, and reaching an agreement takes even longer. By the time treaties can be written and agreed upon about some aspect of technology, the technology has changed so much that the agreements are irrelevant. How to get these time scales more in synch is a difficult problem.

But I think a larger problem is getting the right set of players into the discussion. Most countries think that discussions about trans-national conflict need to take place between countries, which is reasonable in the physical world. But when we talk about the cyber world, just having the various countries at the table misses a major set of actors– the technology companies that are building and shipping the technology that makes up the cyber world. As was pointed out in our reading by Egloff, we now live in a world where major players include the corporations, much as was the case during the age of exploration. Keeping these players out of the discussion means that major forces are not represented. Companies like Google or Apple may be based in a single country, but their interests cannot be fully represented by their home government. They are powers themselves, and need to be represented as such.

It may seem strange to think of the tech giants in this way, but no more so than seeing the influence of the East India Company or the Hudson’s Bay Company during the age of exploration. It took a couple of hundred years to work out the law of the sea; I hope that we can do better with cyberspace.

WeCode and Visceral Education

Last weekend I had the great pleasure of attending the WeCode conference run by the Harvard Undergraduate Women in Computing group. It was a great event; well-organized, well-attended, and far more interesting than the “OMG, Goldman Sachs was giving out nail files with their logo, how insensitive” meme that seems to have gone rampant on news sites that should know better. I was there to moderate a set of panels, but decided to attend most of the Saturday event to see what it was like.

The first keynote was in one of the large lecture theaters in the Science Center. When I walked in, there were probably 200 conference goers in their seats, and more were streaming in. I took three or four steps into the hall, and it suddenly hit me. I was one of maybe two or three men in the hall. I’ve never been accused of being shy, but I felt completely out of place. Completely other. All of the voices in my head were saying “get out of here… go to your office and get some work done…”. All the flight responses were active.

And at the same time, I was realizing that this is the feeling everyone else in the room must have at every other tech conference in the world, or in most computer science classes, or tech gatherings in general. It was a Zen experience. I suddenly felt that I had a better understanding of what women in computer science (and the STEM fields more generally) are up against.

I’ve tried to be a supporter of women in software positions all my life. My groups at Sun always had women software engineers, and my closest collaborator over most of my career was a woman. I’ve tried to encourage women in my classes. The last edition of my privacy course was 2/3rds female (a fact that one of the male students complained about; his complaint was an opportunity for a discussion of these issues which I hope had some impact). But I’ve never felt the problem the way I did last Saturday.

I’ll admit I’m not sure what to do about this. But it is a problem, not just of fairness and justice, but for the field. We need good people in software engineering, computer science, and related fields. The supply of any kind of people can’t keep up with the demand, and the supply of good people isn’t even close. Artificially limiting the supply of talent to half the population is insane, destructive, and wrong. Changing this will be hard, because not everyone understands. I thought I understood, but I didn’t really. I don’t fully understand now, but I’ve had a good lesson. It’s amazing how much more effective a lesson is when it arrives through the emotions instead of the brain.

I’m still thinking about the experience. But I know I won’t think about women’s issues in the STEM field in the same way. For that reason alone, the WeCode conference may have been the most educational I’ve ever attended.

Privacy and Anonymity

It has been an interesting summer on the privacy front. Following the spring revelations at Harvard about email searches, we have watched Edward Snowden subject the intelligence agencies of the U.S. to a version of the classic Chinese water torture (except he has replaced the drops of water with bowling balls) by releasing details about all of the information those agencies have been gathering. I’ve been a bit distressed by how little discussion there has been about all of this in public, although an interesting alliance of the far left and the far right in the House of Representatives (the self-proclaimed “Wing Nuts”) seems to be paying some attention.

There are also a host of interesting questions that aren’t being addressed, but which the different sides seem to assume have already been answered (often in different ways). One of these questions is whether gathering data is a privacy issue, or if the issue only arises if and when the data is accessed. Those defending the gathering of all of the data seem to think that it is access that needs to be monitored and watched, telling us that we shouldn’t be worried because while they have all that data, actual access to the data is far more controlled. Those who are worried about the gathering seem to believe that the act of gathering the data is the problem, often pointing out that once the data is collected, someone will do something with it. Another question has to do with whether or not privacy is violated when data is viewed algorithmically, rather than when a human being looks at it. Again, those defending the various data gathering programs seem to hold that computers looking at the data has no privacy implications, while those objecting to the programs think that even algorithms can violate privacy.

I think these are both interesting questions, and I’m not sure I know the right answer to either of them. I have been able to construct some cases that make me lean one way, while others make me lean the other.

Another issue I don’t see being raised has to do with the difference between privacy and anonymity, and how the two relate. In fact, what I see in a lot of the technical discussions around data aggregation is an implicit equation of privacy and anonymity. This is an equivalence that I think does both sides a disservice, but especially the side wanting to argue for privacy.

Anonymity, roughly put, is the inability to identify the actor of a particular action or the individual with whom some set of properties is associated. The inability to identify may be because you can’t see the individual (as is done for symphony auditions, where the players are placed behind a screen, a practice that has increased the number of female members of major orchestras), or because there is no identifier associated with some document, or because a database has been scrubbed so that only some data is associated with each record (although this can be more difficult than most think).
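To make that last point concrete, here is a minimal sketch of why scrubbing is harder than it looks. The datasets, field names, and the voter-list join are all hypothetical, but the pattern is the classic one: dropping the obvious identifier while leaving quasi-identifiers behind.

```python
# Hypothetical illustration: removing the name column is not enough, because
# quasi-identifiers (zip code, birth date, sex) can be joined against a public,
# identified dataset (such as a voter list) to re-identify the scrubbed rows.

medical = [
    {"name": "Alice", "zip": "02138", "dob": "1960-07-31", "sex": "F", "dx": "flu"},
    {"name": "Bob",   "zip": "02139", "dob": "1955-02-02", "sex": "M", "dx": "asthma"},
]

# The "scrubbed" release drops only the explicit identifier.
scrubbed = [{k: v for k, v in row.items() if k != "name"} for row in medical]

# A public dataset that carries names along with the same quasi-identifiers.
voters = [
    {"name": "Alice", "zip": "02138", "dob": "1960-07-31", "sex": "F"},
]

# Re-identification is just a join on the quasi-identifiers.
for row in scrubbed:
    for voter in voters:
        if all(row[k] == voter[k] for k in ("zip", "dob", "sex")):
            print(voter["name"], "->", row["dx"])  # prints: Alice -> flu
```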

Privacy is more difficult to characterize (take my course in the fall if you want lots more discussion of that), but has more to do with not knowing something about someone. My medical records are private not because you don’t know who I am, but because you don’t have access (or have the good taste not to access) those facts about me. What happens in Vegas stays in Vegas not because everyone there is anonymous (that would make hotel registration interesting), but because those who are there don’t tell.

I often think that voting is the best example that can illustrate this distinction. You don’t want voting to be anonymous; it is a good thing to need to identify yourself at the polls and make sure that you are on the voter lists (how you do this, and how much trouble it should be, is a very different issue). But voting is a very private thing; you want to make sure that the vote I cast is private both to protect me from any blowback (I grew up blue in a very red state) but also to protect the integrity of the voting process itself (as long as voting is private, it is hard for someone trying to buy votes to determine if the money spent led to the right result in any individual case).

One problem with this slushy notion of how to define privacy is that it is hard to build a technology that will ensure it if you don’t know what it is. So a lot of work in the technology space that appears to preserve privacy actually centers around preserving anonymity. Tor is one of my favorite examples; it is often seen as privacy preserving, but in fact is designed to ensure anonymity.

The argument over the collection of metadata rather than data is all about this distinction. If (and it is a big if) the metadata on phone calls and internet communications only reveals the identity of the parties, it violates the anonymity of those who are communicating. The analogy here is following someone and noting all of the people the person being followed talks to, without actually hearing what the conversations are about. Such a thing would be creepy, but it isn’t clear (especially if you are following the person in public areas) that it violates anyone’s privacy.
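To make the distinction concrete, here is a small, entirely made-up sketch of what a call-metadata record might contain. The fields and numbers are invented; the point is that the record identifies who talked to whom and when, and that aggregating many such records reveals a social graph (an anonymity question) without anyone ever hearing a word of the conversations.

```python
# Hypothetical call-detail record: identifies the parties, the time, and the
# duration, but carries no content at all.
from dataclasses import dataclass
from collections import Counter

@dataclass
class CallRecord:
    caller: str    # identifying
    callee: str    # identifying
    start: str     # ISO timestamp
    seconds: int   # duration
    # note: no transcript or audio field; the content is simply absent

records = [
    CallRecord("617-555-0101", "617-555-0199", "2013-06-01T09:12:00", 340),
    CallRecord("617-555-0101", "617-555-0142", "2013-06-02T20:01:00", 65),
    CallRecord("617-555-0101", "617-555-0199", "2013-06-03T08:45:00", 410),
]

# Aggregating the records reveals who talks to whom most often, even though
# no conversation was ever overheard.
contacts = Counter((r.caller, r.callee) for r in records)
print(contacts.most_common(1))  # [(('617-555-0101', '617-555-0199'), 2)]
```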

Confusing privacy and anonymity also allows those who may be violating privacy to point out that ensuring anonymity helps bad people to cover their bad actions (the standard “terrorists and child pornographers” argument, which reduces to some variation of “if we ensure anonymity, we help the terrorists and child pornographers”). No one wants to enable the bad actors to act in those ways, so it appears that we have to give something up (although, if you really believe in privacy as a right, perhaps you are willing to give some of this up– just as free speech has to include those who say things that you really don’t like).

I’d really like to see some deeper thinking here, although I expect that it won’t happen, at least in public. These are important issues, and they should be thought about calmly and not in the heat of some shocking revelation (like the current discussion) or in reaction to some horrific event (like the 9/11 terrorist attacks, that gave us the current legal frameworks). One of the problems with privacy law in the U.S. is that it tends to be reactive rather than contemplative.

Maybe we can do better at Harvard. I hope so.

The Bozo Event Horizon

I’m on a Harvard mailing list for some folks interested in startups and innovation. A recent thread of discussion was around hiring, and in a posting to the group I talked about making sure that you did your hiring so that you avoided the bozo effect. I was asked by a number of people what I meant by that, which led to a long post that generated some interest. So I thought it might be of interest to a wider audience, as well. So I’m posting it here…

On hiring, bozos, and some (admittedly biased) history

Some time ago on this list I sent out a message concerning hiring, and mentioned that you need to avoid bozos if you want your company to survive. I said in that post

It is a truism that good people want to work with other good people; a corollary to this is that bozos attract other bozos. Once the bozo count reaches a certain percentage, the company is doomed (I saw this happen from the outside to Digital Equipment Corp. and from the inside to Sun; I’m worried that Google may have hit the bozo event horizon).

A number of you asked, either privately or publicly, if I would expand on this, and perhaps talk about what happened at Sun and DEC, and what I’m seeing happening at Google (and what I mean by a bozo). These are difficult topics, some intellectually so and others emotionally so. But I’ve been thinking about this for a bit, and I’ll give it a try.

Let’s start with the notion of a bozo. All of the great companies I have worked for (Apollo and Sun in various incarnations) or heard about (DEC, PARC, Bell Labs and the like) started around a core of incredible people. These were people who are or were legends in the field. They were the ones who were 10 or 100 times as productive as the average engineer. Some, like Bill Joy, are idea gerbils who can spout out hundreds of original ideas a week. Only some of them are actually workable, but if there is someone around to catch the good ones and edit the losers, these people change the world. Others, like James Gosling, quietly change the world by building things (the core Java language and libraries) that make so much sense and are so elegant that you just smile when you use them.

Good tech companies find a way to reward these people without making them go into management or otherwise change what they are doing. DEC had the title of consulting engineer and senior consulting engineer; at Sun there were the distinguished engineers and fellows. These were levels above the rank and file engineers; no one could expect to be promoted to that level, but you always hoped to become one of the elect. I remember being told that the requirement for becoming a Sun Fellow was that you had invented one or more major branches of computer science; the original fellows (Bob Sproull, Ivan Sutherland, and Peter Deutsch) all qualified on that metric.

One aspect of these positions is that they generally required peer review. You couldn’t become a Sun DE or a DEC consulting engineer just because the managers said you should. You became one because the other DEs or CEs had looked at your technical chops, and said that you were one of the elect. It was often compared to getting tenure, except that it was often more difficult; professors with tenure who shifted to these companies often weren’t passed into this level. And these people were the keepers of the corporate technical flame, making sure that the company stayed on the right (technical) footing.

The core of this decision procedure was the ability of the top-level technical talent to make technical judgements about other technical contributors. But at some point in the history of the companies, there arose worries that the selection criteria weren’t, in some sense, fair. People who, from the manager’s point of view, did great work weren’t being selected by the technical leaders to join the top group. People who did other kinds of important work were seen as being de-valued because they weren’t being allowed into the upper ranks. And at some point, in the name of “fairness” or “diversity of skills” or the like, contributors who would not otherwise have been let in were added to the group.

And these are the bozos. Not necessarily bad people, or even unintelligent, but those who have been promoted to a level where they are given technical weight that they don’t deserve. The “A” team now has some “B” members, but those outside of the team (and maybe some inside of the team) can’t tell the difference. The upper levels of the technical parts of the company now have some people who are driven more by politics, or quick business wins, or self-promotion (all of which may have been the skills that got them support from the non-technical to be promoted to the technical elite). Without a clear technical voice, management does the best it can. But the ship is somewhat rudderless.

Worse still, the bozos will push to promote others like themselves. Which dilutes the technical thinking even more. At some point, what used to be technical discussions devolve into discussions about politics, or business models, or market share. All of which may be important, but they aren’t the technical discussions that had made the company a leader. This is when you have reached the bozo event horizon. I’ve never seen a company recover.

All of this is about the technical bozos, because that is what I’ve experienced. But it wouldn’t surprise me to find that the same sort of phenomenon goes on in marketing, or management, or any other field. The indicator is when process and fairness become more important than judgement, and when it isn’t ok to say that some people have reached their limit. Or maybe this is something that happens more in the technical parts of an organization than in the others. I wouldn’t know.

I don’t know that Google has hit the bozo event horizon, but I’m worried that they might have. Part of the worry is just because of their size; it is really hard to grow the way Google has without letting some lightweights rise to the top. The other is their hiring process (full disclosure: I’ve looked at Google a couple of times and it never worked), which has gotten pretty process-bound and odd. The last time I went through it, the site manager admitted that I was plenty smart, but they didn’t know what they would do with me. Given what they were obviously looking for, I wasn’t sure what I would do with them, either. But the whole process seems to indicate that they are looking for people to fit a pre-defined mold, which the top performers generally don’t do all that well. In fact, the Google process reminded me of the time, more than 20 years ago, when I interviewed at Microsoft. And we saw how well that worked…

Residential Education

A fairly consistent reaction to the advancement of on-line educational materials (like edX or its west-coast counterparts) is that this is the beginning of the end for residential higher education. If you can take a course over the internet, the reasoning goes, why spend the time and the money to actually go to some place for college? It is far more efficient to do your travel virtually. If the end result is the same (or even close to it) there is no need for the overhead of the residential education.

Back when I was in the commercial world of hi-tech, I used to refer to thinking like this as being an example of the Highlander Fallacy. This is the fallacy based on the assumption that there can be only one: one programming language, one database, one web-server, one network transport. The new will always win over the old, and we will unify around a single standard that everyone will use. The real world doesn’t work that way; while there may be a best programming language, database, or transport for any (particular) problem, there isn’t a best of any of these for all problems.

Saying that on-line education will replace residential education is another example of the Highlander Fallacy. But it also misses the point of residential education in so many ways that it is hard to know just where to begin. A residential education is a way to spend four years in a community that is all about learning, allowing students to experiment in ways that they won’t be able to when they are out of school. At a place like Harvard, the interaction with other students is probably more educational than any courses that you could take. And heading off to college is the first chance many get to re-invent themselves; going to a new community frees us of the history that has been built up around us in our old community.

But the real reason residential education (or at least co-located education) will never go away has to do with the different kinds of things that we learn when mastering a subject. There are multiple things that need to be learned to attain mastery in a particular subject. One set of things is the content of that subject matter. But the other, more subtle and I think more important, is a set of techniques around problem solving that are used in that subject. Back in my days as a philosopher, there was an important distinction between knowing that and knowing how. Knowing that has to do with the content of a field. Knowing how is a set of skills that allow you to think like a practitioner in that field.

Consider the example of computer science. The content of computer science includes, among other things, learning about a lot of algorithms, different programming languages, the principles of operating systems and databases, and the math needed to understand cryptography. But the techniques of computer science are none of those– they have to do with learning to decompose a problem into a set of (hopefully simpler) problems, of knowing how to build a set of interfaces between different components, or how to design a user interface so that it is intuitive and easy to learn. The notion of computational thinking is all the rage at the moment, but the real core of that kind of thinking is learning how to approach problems the way a computer scientist would.
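As a toy illustration of that habit of mind (the problem and the function names here are invented, not taken from any course), the content is any single algorithm, while the technique is splitting a job into small pieces with narrow interfaces that can be understood, tested, or replaced independently.

```python
# Illustrative only: find the most frequent words in a piece of text,
# decomposed into small functions with narrow interfaces.
import re
from collections import Counter
from typing import Iterable, List, Tuple

def tokenize(text: str) -> List[str]:
    """Turn raw text into lowercase word tokens."""
    return re.findall(r"[a-z']+", text.lower())

def count(tokens: Iterable[str]) -> Counter:
    """Count occurrences of each token."""
    return Counter(tokens)

def top_words(text: str, n: int = 3) -> List[Tuple[str, int]]:
    """Compose the pieces; each one can be tested or swapped out on its own."""
    return count(tokenize(text)).most_common(n)

print(top_words("To be or not to be, that is the question"))
```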

Other fields have other ways of approaching problems, which are the techniques of that field. You need to learn the content of the field to become a practitioner, but it is far more important to learn the ways of thinking. When I studied philosophy, it seemed that most of the content of the field was uninteresting (which may be why I’m no longer a philosopher), but the techniques of analytic philosophy were very interesting (and have served me well as a computer engineer and scientist).

Bringing this back to on-line education– I think that the real promise of on-line education is the ability to teach the content of a field. But it is going to be much harder to teach the techniques of thinking in an on-line fashion. The best ways to teach technique tend to look like the apprenticeship model. I’ve talked about this for system design elsewhere, but I believe it is true for lots of other fields as well. That is where the residential (or at least face-to-face) form of teaching will still be needed.

In fact, I think the proper use of on-line learning materials will enhance the residential experience. If we can get most of the content taught on-line, we will have more time to mentor students in the techniques of a field. I’d love not to have to do lectures again, and just work on problems and code and designs with students. That sort of work needs the content to be known, but is much more rewarding for both the student and the teacher.

So don’t think of edX as replacing the residential experience. The real goal is to enhance that experience.

Back to Normal, or Something Like It

Now that edX has been launched, there is a chance that life can get back to something like normal (what is that at Harvard?). It was odd spending a significant portion of my time on a project that I couldn’t, until yesterday, talk about. It was exciting, it was strange, but it also made me appear more flakey and irresponsible than I would like. Also more tired.

It will come as no surprise to learn that the partnership with MIT was not the only choice that had been offered to Harvard. The usual suspects had all come by. It’s nice to have the Harvard brand, in that educational ventures are all anxious to have us join in.

I was also heartened (and impressed) by the considerations that made Harvard decide to go the edX route. One of the core reasons that we are taking the direction that we are taking is that we can approach on-line education (and how it impacts the on-campus experience) as a research project. The folks who are guiding the educational ship have the Socratic wisdom to know that we don’t know how to approach the use of technology to teach. The folks at MIT who are doing this have the same wisdom. So this isn’t a venture-funded concern where we are shipping a product. Instead, edX is a not-for-profit run by two universities with the stated goal of finding out, through real research, how best to use on-line technologies to enhance our core missions of learning and research.

This is not a simple research endeavor. I’ve been known to characterize this as something on the order of the Manhattan project or the Apollo program. It is going to take time. It is going to take money (fortunately, we are already being approached by foundations and other donors who are excited about this). It will take cooperation and the suppression of some standard primate behaviors. Most importantly, we don’t know what the end result will be. But we do know that it will be transformational in the areas of teaching and research. Which are the areas that a university like Harvard should be transforming.

I think the whole thing is pretty exciting…

My Life as a Technology Canary

A gentle nudge from a reader made me realize how long it has been since I’ve posted. Time to get back into the habit.

This has been a particularly busy semester, both from the point of view of my academic life and as CTO. The academic side has been great– I’ve been teaching CS 105, Privacy and Technology, which is always more fun than I should be allowed to have. This is a class that looks at technologies that are seen as privacy-invasive (things like surveillance cameras, wire tapping, and Facebook), dives into the technology and policy, and tries to figure out what can be done. I co-teach with Latanya Sweeney, who really knows this stuff, is a great lecturer, and a better friend. But what made this semester fantastic was the best group of students I’ve ever had in a class–smart, engaged, funny, and fun. On days (and they happen) when I wondered why I was doing all of this, I just had to go to this class to be reminded what fun it is to be at Harvard.

The CTO work has been a lot more scattered, but has also been interesting. Probably the biggest change in my life as I moved to the CTO position was finding that I have very few concentrated, extended periods of time to think about things and get things done. The life of a CTO is one of constantly swapping context, trying to help others (who I hope have concentrated periods of time for their work) to move forward or course correct.

There is also another, odder, part of my job which I characterize as being a technology canary. Canaries were used as early warning systems in mines, organic sensors for dangerous gases. My role of technology canary is to be an early warning system for HUIT on technology trends that are going to change the way we do our jobs. There are lots of these changes coming around, like the changes in client devices (moving from desktops to laptops to tablets and phones, a change that had a pretty disastrous impact on the University of California’s email system). But the most interesting whiff of the future that I’ve seen had to do with a bill from Amazon.

First, some context. All colleges and universities are supposed to offer a net price calculator, a tool that will allow prospective students and their parents to estimate what their college educations will really cost at a particular school (anyone who has to worry about this doesn’t pay list price, at least at Harvard). The financial aid folks here have done a very nice web tool, which they decided to host on Amazon.

Recently, I got a copy of their bill for a month. They had had about 300,000 hits, most from the U.S. but others from all over the world. And the total bill for running this site? $0.63. That’s right, sixty-three cents.

Now, not everything we do at Harvard in IT can be farmed out in this way. This is a simple site, and doesn’t have to be up all the time. It doesn’t have a lot of computation associated with it, and there isn’t a lot of data being moved around. More important, there is no confidential or protected data. But there is a lot of computing at Harvard which has similar characteristics. And at this price, we need to figure out what we can host elsewhere. It may cost more than this example, but even if it is one or two orders of magnitude more it will be less expensive than setting up our own servers and running them here.
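For the back-of-the-envelope version of that argument, the only real number below is the sixty-three cents; the in-house figures are hypothetical placeholders, made up purely for comparison.

```python
# The only real figure is the $0.63/month Amazon bill; everything else is an
# assumed, illustrative number.
cloud_monthly = 0.63

for factor in (1, 10, 100):  # one or two orders of magnitude more expensive
    print(f"cloud at {factor}x: ${cloud_monthly * factor * 12:,.2f} per year")

# Hypothetical in-house estimate: a server amortized over three years plus a
# small slice of an administrator's time.
server_monthly = 3000 / 36           # assumed $3,000 server, 3-year life
admin_monthly = 0.02 * 80000 / 12    # assumed 2% of an $80,000/year admin
print(f"in-house (assumed): ${(server_monthly + admin_monthly) * 12:,.2f} per year")
```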

This will change a lot of things. We need to figure out what will be changing rather than having it done to us. As the canary, I get to think about these things early on. Which makes life more, um, exciting. But also a lot of fun.

Separated by a Common Language

I was reminded once again this morning of how the language of programmers is not the language of everyone else (“doh”, I hear you say). For most of my adult life I lived in a society of software developers, and the linguistic patterns I developed there are hard to shake (even if, contrary to fact, I were trying to shake them). There are some phrases that just elicit blank stares from my non-programming associates. I’ve come to realize that “paging in the context” is not something most people rightly understand, nor do I expect them to understand where /dev/null is (or, more importantly, isn’t).

What is more distressing is the terms that are understood, but in a different way, by the general community. In particular, I find that I often get worried looks from my colleagues when I talk about doing some hacking, or being a hacker. Those not in the programming community understand hacking to be the act of breaking into a computer system. But the older meaning, still in use by many in the programming world, is much different. On that meaning, “hacking” is the activity of writing code that is particularly clever, elegant, or that solves a particularly difficult problem, and hackers are those who have shown a consistent ability to write such code.

In the programming culture, being called a hacker is an honorific. This is the sense of the term that Steven Levy wrote about in his book Hackers (subtitled Heroes of the Computer Revolution). To be called a hacker in this sense is a considerable honor, and (at least in the old days) was not a title that you could bestow upon yourself, but one that had to be bestowed on you by someone who was himself (or herself) a hacker. A hacker of this kind writes good, clean, understandable code that runs well and does the job at hand. It is a term denoting craftsmanship and art.

But now the term “hacker” is being used to denote those who break into networked systems. There is a connection between the old meaning and the new– some of the early (code) hackers believed that they should be able to look at any code to try to make it better, and would use their (considerable) skills to break into computers that those who were not hackers would try to close off to them. But the goal, in those cases, was always to make the underlying code better, not to steal information or shut a system down. That is a new phenomenon, or at least newer than the term. I’m afraid that the press has made this meaning of the term the dominant one, and trying to change it is a battle already lost.

But it does lead to confusion. When I tell people that I am going to do some hacking, I get odd looks, and have to explain to them that I just mean writing some code. Even worse, when people ask me if I am a hacker, I have to ask what they mean by that. On one sense of the term, I am proud to say that I am (or at least once was). But in the more popular sense of the term, I am not (by choice, rather than because I can’t).

RIP Steve Jobs

I, like many in the tech world, was saddened at the news of the death of Steve Jobs. I had met Jobs a couple of times, but he hardly counted as a friend or even acquaintance. I’d experienced the reality distortion field around him, but his main impact on my life is on the computing environment that I use, and the gadgets that are part of my daily life. Even my choice of a phone (Android based) was made as a conscious decision not to buy another Apple product rather than a decision to buy something else.

But I’ve also found it a bit odd that everyone talks about what an innovator he was. He wasn’t, really. Apple didn’t do things first. Xerox PARC did the windows/mouse/icon interface well before Apple. Sony did personal and portable music before the iPod. Smart phones existed before the iPhone, and tablets were around for a couple of years before the iPad.

What Apple under Jobs did so well was to design products that were beautiful and a joy to use. Much of this had to do with the design aesthetic that Jobs brought. But just as important was that Jobs trusted his customers. He felt, in the face of all the business advice to the contrary, that building a beautiful product that was easy to use would be appreciated, and that his customers would be willing to pay extra for the beauty and ease of use. And he built a company around that, which was successful. You can see the same sort of faith in the customer in the products of his other company, Pixar– the animated movies didn’t condescend to the audience, but expected a level of intelligence and sophistication that differentiated those movies from the usual animated stuff.

I hope those now running Apple are allowed to continue on the assumption that their customers care about more than cost, and that design is important. We won’t know until the product pipeline that is currently filled has emptied out. I sincerely hope that they will, and that I’m given the choice to fill my computing world with objects that make me smile (mostly). Otherwise the world will have lost much more than just another innovator.

I still think the best exemplar of the Jobs attitude is the famous 1984 ad. This is how I will remember what Steve Jobs did to the technology industry, and I will always be grateful.

Open office hours

Time to try another open office hour. Fridays seem to work best, and this time I think I’ll try the suggested location of the Fisher Family Commons in the Knafel Building of the CGIS complex (1737 Cambridge Street). I’ll show up around 3:30 (the lunch crowd should have left by then), and stick around until 4 or no one is around, whichever comes last.

All topics are open. If nothing else, come with the feature of Java that you hate the most, and I’ll tell you why it happened that way (working on the principle that history clarifies stupidity).