This is the story of how the internet began. Naturally, since it is a story, there is a certain subjectivity involved in the telling of it, but the basic facts remain the same regardless of who is telling the story, so if you know the tale already, I would ask you to accept and be patient with any minor deviations from your own version. It is worthwhile pointing out at this juncture that no one person “invented” the internet. As with so many other innovations throughout human history, the internet’s timeline features many different people who all added dollops of their own creativity and resourcefulness to the melting-pot of technological advances which eventually became the internet.
Once upon a time there was a man named Tim Berners-Lee. No, wait. Hang on a moment. I’ve skipped ahead a few pages. Let’s rewind and start again. Are you still sitting in a reasonably comfortable position? Then I shall carry on.
A long time ago, in a country far far away, an organisation was set up in 1958 in response to the Soviet triumph of Sputnik’s launch the previous year. It was called ARPA (Advanced Research Projects Agency) and its aim was (as far as I can make out) to prevent the USSR from ever again overtaking the US in their technological arms race. Three years after the organisation was first set up, an extremely important paper was published, but to understand why it was so important, we must jump forward a year to 1962, when a man with the fabulous name of J.C.R. Licklider wrote a series of notes detailing his thoughts on an idea he termed the “Galactic Network”. Licklider envisaged a “globally interconnected set of computers through which everyone could quickly access data and programs from any site”. This is almost exactly the shape of the internet today and shows that his initial idea was a winning combination of simplicity and ingenuity. Licklider became the first head of computer research at ARPA, and during his time there managed to demonstrate the value and importance of his idea to several colleagues.
We now return to 1961, when a man named Leonard Kleinrock, who worked at MIT, published the first ever paper on packet-switching theory. He convinced fellow MIT researcher Lawrence G. Roberts that communication using packet-switching rather than circuits was theoretically possible. In 1965, Roberts set up the first wide-area computer network with Thomas Merrill, comprising the TX-2 computer in Massachusetts and the Q-32 computer in California. The network used a low-speed dial-up telephone line and proved that programs and data retrieval could be carried out between two computers. However, the experiment also showed that the circuit-switched telephone system was totally inadequate for the job, and Kleinrock’s packet-switching theory was confirmed as the most viable option for future computer networks. Licklider left ARPA in 1964, but his ideas were carried forward by Roberts, who joined ARPA in 1966 and published his plan for the ARPANET at a conference in 1967. There he met some British researchers who were presenting a paper on packet-switching theory of their own. It turned out that three groups of researchers – Roberts’ team at MIT, Davies and Scantlebury at NPL, and Paul Baran at RAND – had all been working on packet-switching without being aware that other groups were pursuing the same line of research. The term “packet” was actually coined by the NPL team.
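The core idea those three groups converged on is simple enough to sketch in a few lines of code: chop a message into small, numbered pieces, let each piece travel independently (possibly arriving out of order), and use the numbers to put it back together at the far end. This is just an illustrative toy, not any of the historical implementations:

```python
import random

def to_packets(message, size):
    """Split a message into (sequence_number, chunk) packets of at most `size` characters."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Rebuild the original message, whatever order the packets arrived in."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = to_packets("Packets may take different routes.", 8)
random.shuffle(packets)      # simulate out-of-order arrival over the network
print(reassemble(packets))   # the sequence numbers restore the original order
```

The contrast with circuit switching is that no dedicated end-to-end line is ever reserved: each packet can share whatever links happen to be free, which is exactly why the dial-up experiment above pointed so clearly towards packets.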
I feel at this point that the narrative is becoming a bit weighed down with technical terminology and random name-dropping, so I shall try to condense the last portion of the timeline into something more digestible. This may be somewhat difficult owing to the excessive geekiness of the material! Geeks rule, by the way, but the language barrier can sometimes be an issue 😉
By the end of 1969, four host computers were connected, forming the initial ARPANET. In 1972 Bob Kahn organised a very successful demonstration of ARPANET at the ICCC (International Computer Communication Conference), the first time the network had been shown to the general public. By “general public”, I assume they just mean people not in the military/research industries, because there can’t have been many non-nerdy average-joe types attending the event, judging by the title of it.
In 1972, another important development occurred when electronic mail was introduced. Ray Tomlinson of BBN wrote the basic email software, and a few months later Roberts extended it into a fuller email application – it then became the biggest network application for over a decade. Meanwhile, other networks, such as SATNET and Hawaii’s ALOHANET (no really, that’s what it was called!), had sprung up, but the networks could not connect to each other because they used different methods of data transmission. This all changed in 1974, when Vint Cerf and Bob Kahn published the Transmission Control Protocol, which became the accepted standard and allowed the individual networks to merge together.
By the 1980s, most universities and research institutes had computers that were connected to the internet, but the innovation that perhaps did most to open up the internet to the wider global community was yet to come in 1990. The term “hypertext” had been coined by Ted Nelson back in 1965 to describe the non-linear linking of documents. And so we come to the end of the story – or at least the end of this story, for considering how young the internet is, we are still relatively near the beginning. Interestingly, when I first set out to write this post, I was under the erroneous impression that the internet was invented in 1990. Clearly this is far from being the case. The true story is far less simple, but the collaboration and co-operation (plus a fair amount of bitching, no doubt – they are scientists, after all!) involved along the way makes it a far more inspiring one.
Anyway. 1990. Enter stage right a man named Tim Berners-Lee. Whilst working at CERN (where Prof Brian Cox lives! ;)), he became frustrated by the difficulties involved in sharing documents with other researchers. Because CERN colleagues were based all round the world, they all had different data formats which had to be converted to be compatible with the main CERN computing system. Berners-Lee decided it would be much easier if researchers could just “jump” into each other’s databases, rather than having to go through all the hassle of converting data – something which many of the scientists refused to do. He sent his proposal to the bigwigs at CERN, but got no response, so decided to carry on with his idea anyway.
Now we have more geeky stuff again, but it’s too important to miss out, so keep reading. In 1990 Berners-Lee wrote the Hypertext Transfer Protocol (HTTP), which is what allows computers to exchange hypertext documents over the internet. Then he devised a way of giving documents “addresses” on the internet, rather like the way each house has a slightly different address on your street. He originally called this a Universal Resource Identifier (URI); the standard terms today are Uniform Resource Identifier and, for web addresses specifically, Uniform Resource Locator (URL). He also wrote a program, or “browser”, which allowed the user to retrieve and view hypertext documents, naming it the WorldWideWeb. The hypertext pages themselves were formatted using the Hypertext Markup Language (HTML). It’s incredible to think that all of these familiar terms (HTTP, URL, HTML) were invented by one man, an interesting contrast to the collaborative efforts I mentioned earlier. The first web server was info.cern.ch at CERN. Web servers are used to store webpages on a computer and allow those pages to be accessed by others. CERN still did not seem particularly enthused by his project, so he introduced the World Wide Web to other user groups on the internet, who quickly realised its potential.
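To make the “house address” analogy concrete: every web address still breaks down into the parts Berners-Lee defined – which protocol to speak, which server to ask, and which document on that server you want. A quick sketch using Python’s standard library (the example URL is the one commonly cited as the address of the first web page):

```python
from urllib.parse import urlparse

# The address commonly cited for the first web page, served from info.cern.ch.
url = "http://info.cern.ch/hypertext/WWW/TheProject.html"
parts = urlparse(url)

print(parts.scheme)  # "http" – the protocol to speak
print(parts.netloc)  # "info.cern.ch" – which "street", i.e. which server
print(parts.path)    # "/hypertext/WWW/TheProject.html" – which "house", i.e. which document
```

Scheme, host and path were all Berners-Lee’s design; everything a modern browser does still starts by splitting an address up this way.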
Whilst he was obviously pleased at the success of his invention, Berners-Lee was worried that it might lead to “destructive competition”, which would have a negative impact on the open nature of the Web. He knew that to keep it running smoothly, the Web needed to have some sort of supervision, but also did not want the overseers to have the power to change the structure of it. In May 1994, the first WWW conference was held at CERN (who presumably had by now started to sit up a bit and take notice). Berners-Lee said “the conference was the way to tell everyone that no one should control it (the WorldWideWeb), and that a consortium could help parties agree on how to work together while also actually withstanding any effort by any institution or company to ‘control’ things.”
I think this was an excellent idea on his part and shows his vision and understanding of the possible consequences of his own invention. That he took swift and ultimately successful steps to maintain the free, open and uncompromised nature of the Web should be eternally to his credit. Membership of the consortium is open to any organisation, whether it be institutional, governmental or educational. The best explanation of the W3C’s modus operandi is probably this, hence me quoting it verbatim rather than trying to change it into my own words:
“The W3C develops open technical specifications that can be used for free by anyone. These specifications are reached by a very democratic process. Any member can suggest a new project. If there is sufficient support within the consortium the project proceeds. When it is finished it is released by the consortium as a ‘recommendation’. The W3C does not enforce its recommendations. It simply encourages everyone to adopt them.”
The internet is such an integral part of most people’s lives now that it seems impossible that we should ever have managed without it. Simply for that reason alone I think it is important to understand where it originated and how it came to be such a ubiquitous presence in our society. If someone came up to me and asked what the internet was for, the answer “well, everything, really” would be difficult to resist. But I think the true reason for the internet’s popularity, indeed, its very existence, is that it is about sharing. Sharing of information, sharing thoughts, ideas, feelings and experiences. It is one of the fundamental things that make us human – communicating and sharing with others to enhance and improve our collective knowledge.
Here are some wise words from a man talking about his vision of the future in terms of communication and knowledge – spookily accurate, as it turns out:
“Once we have computer outlets in every home, each of them hooked up to enormous libraries where anyone can ask any question and be given answers, be given reference materials, be something you’re interested in knowing, from an early age, however silly it might seem to someone else… that’s what YOU are interested in, and you can ask, and you can find out, and you can do it in your own home, at your own speed, in your own direction, in your own time… Then, everyone would enjoy learning. Nowadays, what people call learning is forced on you, and everyone is forced to learn the same thing on the same day at the same speed in class, and everyone is different.” ~ Isaac Asimov
Interviewer: “But what about the argument that machines, computers, dehumanize learning?”
Asimov: “As a matter of fact, it’s just the reverse. It seems to me that, through this machine, for the first time we’ll be able to have a one-to-one relationship between information source and information consumer.”
Here is the full video clip, for those who are interested: http://www.brainpickings.org/index.php/2011/01/28/isaac-asimov-creativity-education-science/
It has just occurred to me that all that babble I have just blurted out about sharing etc may have been a bit too nauseatingly reminiscent of an episode of “Barney and Friends” (oh you know Barney, the bloated mentally-deficient purple dinosaur, curse him). For you cynically-minded and much more discerning folk, I give you the REAL reason for the creation of the Internet! 😉 http://www.youtube.com/watch?v=NiFD6EFVsTg