
Computers, the internet and the World Wide Web: an introduction for the e-therapist

Published online by Cambridge University Press:  02 January 2018


Abstract

The purpose of this, the first of four articles addressing electronic approaches to psychotherapy (e-therapy), is to introduce the equipment (computers) and systems (the World Wide Web and the internet) involved. I describe some of their many elements (e.g. bits and bytes), uses (such as search engines, email, web mail) and a few abuses (e.g. spam, spyware).

Copyright © The Royal College of Psychiatrists 2006 

We take for granted the cameras, televisions and video recorders, voice recorders, event recorders, polygraphs, scanners and, of course, computers that have become features of our everyday life. All of these were first made possible by ‘electronic’ valves, which opened or closed circuits by deflecting a stream of electrons onto or away from a charged plate. Valves quickly gave way to, and have been largely replaced by, transistors and other semiconducting devices, but the term ‘electronic’, first applied to valves because they worked electrically rather than mechanically, has stuck: it now attaches to any device that involves valves or semiconductors. That is a great many devices, and psychotherapists, and psychotherapy, have been affected by all of them.

When I began training, the telephone and the Dictaphone were ubiquitous, and their impact on working practices unquestioned. That was not so for a previous generation. Cassette-recording devices had been available for some time, and audio cassettes were being used for teaching and also for monitoring psychotherapy. After a year or two, cheap and convenient video recorders became available. They were rapidly accepted and put to use by family therapists, but many psychoanalytic psychotherapists considered them inimical to one-to-one psychotherapy. Family therapists used video not only to augment existing methods of teaching and therapy, but also to create new therapeutic techniques, for example having families view videos of their interactions and then comment on them. Since then, many psychoanalytic psychotherapists have found a place for video recordings, particularly in training and demonstration, whereas family therapy has become less high-tech.

The same trajectory has, in my experience, been followed by subsequent technological innovations. There has been an initial polarisation of response to each innovation, with those ‘for’ making large claims for the technology and those ‘against’ preoccupied by its potential for ethical and clinical harm. The limitations, and often the inconvenience, of the technology – what one might call the hassle factor – have then emerged, while some of the polarisation between colleagues has subsided. Finally, the technology has found a place, often a limited one, in the teaching and clinical repertoire.

The use of cameras to enable viewers to participate in real-time, geographically remote teaching events (teleconferencing) or clinical examinations (telemedicine*) is another example of a technological innovation that has followed this trajectory. But the one that has had the most impact, partly because it has subsumed many of these earlier technologies as well as adding new possibilities all of its own, is the programmable electronic device, principally the digital computer.

Computers

Digital computers have been available for well over 60 years (http://www.cs.iastate.edu/jva/jva-archive.shtml); analogue computers date back to the difference engine (http://ed-thelen.org/bab/bab_philosopher.html) designed in 1822 by Charles Babbage (http://ei.cs.vt.edu/~history/Babbage.html), a mathematician with many characteristics that we would nowadays associate with Asperger syndrome. Babbage's engine, although made entirely of gears and moving parts, could be set or ‘programmed’ to perform different kinds of calculation. The difference between analogue computers such as the difference engine and digital devices is that the analogue computer processes a mechanical movement or signal that is proportional in some salient characteristic to the information being put in (the ‘input’), whereas the digital computer turns input into a number, which it processes. Typically the number is a binary string, or sequence of binary digits or ‘bits’. Each bit has a simple dual state: on or off, there or not there. The speed of networks or downloads, and the speed at which video or audio files can be processed, are often given as the rate at which bits are transferred, the ‘bit rate’ (this approximates to the baud rate often quoted for types of internet connection).

The most important information for computers to process is based on characters: numbers, letters, punctuation marks, operators and so on. It turns out that there are rather more than 128 useful characters once upper- and lower-case letters are included. So for every character to have a unique code one needs a binary string that can encode numbers larger than 128. One hundred and twenty-eight is 2⁷, which corresponds to a binary number 7 bits long. Since more characters are needed than this, an 8-bit number, an ‘octet’, has become the standard processing unit, or ‘byte’. Memory storage is measured in bytes and so, very roughly, a modern hard disk of 40 Gbyte stores 40 000 000 000 bytes, or 4 × 10¹⁰ characters: quite a few books (although nowhere near as many pictures).
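To make this concrete, here is a minimal sketch (in Python; the sample text is an arbitrary example, not from the article) showing characters being turned into numeric codes and 8-bit binary strings:

```python
# Each character a computer processes is stored as a number;
# one byte (8 bits) is enough for any of the common characters.
text = "e-therapy"
for ch in text:
    code = ord(ch)                         # the character's numeric code
    print(ch, code, format(code, '08b'))   # and its 8-bit binary string

# 7 bits give 2**7 = 128 distinct codes; a full byte gives 2**8 = 256.
print(2**7, 2**8)

# So, very roughly, a 40-Gbyte disk holds 4 x 10**10 one-byte characters.
print(40 * 10**9)
```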

The ubiquity and ease of use, and therefore the impact, of computers can probably be traced to the advancement of desktop devices and to the operating system (MS-DOS) sold by a young Bill Gates, which could fit into the 65 536 bytes of memory (‘64K RAM’) that was all that could then be supported by the semiconductors and their connecting circuits etched onto the silicon scaffolding (the ‘chip’) of the earliest microprocessors or central processing units (CPUs). Greater and greater miniaturisation has allowed more and more powerful chips, that is, more and more transistors and circuits in the same space (the current Pentium 4 has 55 000 000 transistors, whereas Intel's first 4004 chip had just 2300). More powerful chips mean more powerful computers in the same portable and convenient boxes that can be located on office desks rather than in the large computer rooms of yesteryear.

Uses in psychotherapy

Much psychotherapy research has exploited this power, for example in conducting the complex multivariate statistical analyses that these computers make practicable. Another application of computing power has been the use of computers to analyse narrative. One example is the Code-a-Text system (http://www.code-a-text.co.uk/index.htm) developed by a psychotherapist, Alan Cartwright. It was published commercially for a time but lost out to a heavyweight alternative – a common occurrence in the computer field, in which new programs are constantly being developed but just as regularly squeezed out of existence by competitors. My own referencing program, Memory, which I presented to a meeting of the Royal College of Psychiatrists in 1992, has been abandoned in the face of competition from Reference Manager and its stablemates, which have become near essentials for psychotherapists and others who write scientific papers regularly.

Reference Manager is an example of a program that assists in information-handling. We rely on computers, and increasingly on hand-held computers or personal digital assistants (PDAs), instead of books for information, but they provide such a flood of it that we suffer from an embarras de richesses and so need new ways of using computers, new programs, to deal with it. This will be a theme in the remaining articles in this series.

The internet

Although we now speak loosely about the impact of computers, we subsume in that two further, distinct technical innovations. The first is computer networking, which enables information to be transferred between two computers connected by a wire or, nowadays, by a radio signal. Networking required the development of ‘protocols’: a universal language that enables different designs of computer or other digital device to turn the information they contain into ‘packets’ that any other digital device running the same protocol can translate and use. The first Transmission Control Protocol (now combined with the Internet Protocol and known as TCP/IP) was developed by Robert Kahn and Vinton Cerf at the US Defense Advanced Research Projects Agency to connect military computers. Networking is what enables telesales, telephone helpdesks, airport security, police checks and remote computing, among many other applications. It means that data entered into one computer is potentially available to all the other computers with which it is networked.
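As an illustration of two programs exchanging packets over TCP/IP, here is a minimal sketch (Python; the port number and message are invented for the example):

```python
import socket

# Server: listen on a port and echo back whatever packet arrives.
# Run this in one process...
def serve(port=50007):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(('localhost', port))
        s.listen(1)
        conn, _ = s.accept()
        with conn:
            data = conn.recv(1024)      # receive a packet of bytes
            conn.sendall(data)          # and send it straight back

# Client: encode a text message as bytes, send it, read the reply.
# ...and this in another process.
def send(message, port=50007):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect(('localhost', port))
        s.sendall(message.encode('utf-8'))
        return s.recv(1024).decode('utf-8')
```

Both machines must speak the same protocol (here TCP, chosen by SOCK_STREAM) for the bytes to arrive intact and in order.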

The network that Kahn and Cerf worked on, ARPANET, is often hailed as the precursor of the internet. So it was, in the sense that the internet relies on networks and, as time has gone by, on a network of networks, or inter-network, which can link any computer anywhere. This link requires access over a telephone line (‘dial-up’) or some kind of dedicated network service designed to carry much more information per second (and therefore ‘broadband’ rather than the narrow-band dial-up service). These networks of networks are mainly provided by telecommunications companies, but some are provided by national governments or universities (e.g. the Joint Academic Network, or JANET, in the UK). These lines are rented to users via internet service providers (ISPs), who lease the lines and the routers that the telecommunications companies maintain.

The World Wide Web

The second technological revolution was the transformation of the internet into the ‘World Wide Web’. The English physicist and computer scientist Tim Berners-Lee is usually credited with creating it. Before 1990, when Berners-Lee developed his proposal for, and then a prototype of, the World Wide Web (http://en.wikipedia.org/wiki/World_Wide_Web), it was possible to send a message to someone else by encoding a text file and sending it to the address of the other computer (the IP address); provided the other user had the appropriate means, and the inclination, to translate it back, it would be received. People (they were called ‘sysops’, or system operators) soon began adapting their own computers so that these messages could be stored and then looked up by someone else. Such ‘bulletin boards’ have largely been replaced by ‘discussion forums’, but they were the basis for the development of commercial packages such as AOL and CompuServe, which essentially do the same thing, and also for the development of ‘user groups’ where, in a tradition that soon extended to the whole of the World Wide Web, computers, often university computers, were used to store files (‘host’) for free. These files could be accessed anonymously by anyone, so that a user with the appropriate software, or ‘newsreader’, would see a series of messages, sometimes with attached files that would translate on the guest computer into pictures or sounds. These groups, usually known now as usenet groups, still exist, although the computer that hosts a group is usually owned by an ISP which, behind the scenes, makes sure that its version of the usenet group is up to date by sharing it with the versions maintained on all the other hosts of that group.

Hypertext

User groups and bulletin boards were not always easy to use. What Berners-Lee did was to imagine that the internet could be the medium in which a giant book was written and read. Unlike print books, this one would be as easily written to as read from. The terminology of the World Wide Web was closely based on text. So the basic element that a user created or read was called a ‘page’, and the page contained text (and later other media such as pictures) that the user saw, together with information telling the web browser how to make the page look. These instructions are like the ‘mark-up’ used by printers and newspaper editors, so Berners-Lee named the language HyperText Mark-up Language, or html. The language would instruct a computer that was continuously connected to the Web, a ‘web server’, to store the text file written by the user and to give it an address: a universal (now more commonly ‘uniform’) resource locator, or URL. Unlike the linear medium of a book, each page could be connected to any other by embedding URL addresses within the page. These connections were conceived as a kind of link laid over the text and were named hypertext links.
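To illustrate, here is a minimal ‘page’ (invented for this example) held in a Python string, together with a few lines using the standard-library html.parser to pull out its hypertext link, much as a browser or indexing program would:

```python
from html.parser import HTMLParser

# A minimal web page: ordinary text plus mark-up tags, one of which
# (<a href=...>) embeds a URL -- a hypertext link to another page.
PAGE = """
<html><body>
  <h1>E-therapy</h1>
  <p>See the <a href="http://example.org/next-article">next article</a>.</p>
</body></html>
"""

class LinkFinder(HTMLParser):
    def handle_starttag(self, tag, attrs):
        if tag == 'a':                        # an anchor tag...
            print(dict(attrs).get('href'))    # ...holds the embedded URL

LinkFinder().feed(PAGE)    # prints http://example.org/next-article
```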

My text for this article is written under the influence of web page design. I have regularly put web addresses after keywords and ideas, and these lead to web pages or websites where further information can be found about that word or the concept behind it.

A reader might just as often make this click as read the next word. So text becomes hypertext: multiply connected to allow each reader to go off in many different directions and therefore make something different of what they are reading. Some would say that this is how the dry rot of post-modernism has spread, that the author has no control over a text that, if Derrida is to be believed, is annexed by the reader (see http://www.chass.utoronto.ca/pcu/noesis/issue_vi/noesis_vi_6.html). We shall see in my next few articles on e-therapy that this unreliability of the e-text, its susceptibility to being changed or taken over by the user despite the expressed intention of the author, makes it a source of suspicion for the professional.

None of this was anticipated in the early days of the internet. Tim Berners-Lee simply wanted to access the library of the then European Council for Nuclear Research (CERN) without difficulty. Having created the means to do this, he decided to make his work freely available, without charge or licence. This grand tradition has continued to influence the development of the World Wide Web, and is perhaps the single most important determinant of its universality. Berners-Lee had already designed a program into which one could type the URL of a web page and, if the page was hosted by an internet server and your own computer had an internet connection, this ‘browser’ would download the file at that address and ‘serve’ its contents, i.e. show it as a page on your computer. Page design rapidly advanced and browsers became massive programs to keep up with the new technical innovations of sound files, music, still pictures, video and the like, and design features such as colour, boxes, tables, animations and a host of others. Amazingly, these browsers have continued to be freely available even though they are some of the most complex programs that most of us normally use.
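At its core, fetching the file at a URL is a short operation; a minimal sketch (Python; example.org is a placeholder standing in for any web page):

```python
import urllib.request

# What a browser does first: download the file stored at the URL...
with urllib.request.urlopen('http://example.org/') as response:
    page = response.read().decode('utf-8')

# ...and then 'serve' it, i.e. render the mark-up as a visible page.
# Here we simply show the start of the raw html instead.
print(page[:200])
```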

Email and webmail

An early development of the internet, predating the World Wide Web, was the facility for sending text messages to a specified recipient. These electronic mails, or emails, needed special programs that allowed text to be input, encoded using a standard character set (such as ASCII or, later, Unicode), and sent via the internet to the host computer specified in the email address. And, of course, the programs had to reverse this process so that received emails could be read. From an early stage, free mailers came with (were ‘bundled with’) browsers. These bundled mailers were an enormous improvement in usability over the original mailers, and email has become the dominant method of exchanging mail in many industry sectors, particularly the academic world.
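The sending half can be sketched in a few lines with Python's standard smtplib (the addresses and mail server below are invented placeholders):

```python
import smtplib
from email.message import EmailMessage

# Compose the message; the library encodes the text to the mail standards.
msg = EmailMessage()
msg['From'] = 'therapist@example.org'       # placeholder addresses
msg['To'] = 'colleague@example.org'
msg['Subject'] = 'Supervision times'
msg.set_content('Could we move Thursday to 3pm?')

# Hand the message to a mail server, which relays it across the internet
# using the Simple Mail Transfer Protocol (SMTP).
with smtplib.SMTP('mail.example.org') as server:   # placeholder host
    server.send_message(msg)
```

A mail-reading program simply reverses the process, decoding the received bytes back into readable text.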

Email is gradually replacing post (‘snail mail’) for many professionals, but its advantages are also its major drawbacks: it is instantaneous, and it is as easy to send an email to many people as to one. Sending an email creates the expectation that there will be an immediate response. Not only is an immediate response expected, but it is expected from many people. This new phenomenon of interactivity has become a major problem for professionals, who spend more and more time dealing with emails and purging junk emails, or ‘spam’. Medical secretaries, who used to type letters, organise replies and even draft letters, are having to adapt to the redundancy of these tasks in the email era.

Where is the World Wide Web going?

There is a growing assumption that information of all kinds will be made available on the World Wide Web. With this has come an increasing expectation of the sophistication with which this information is presented. Furthermore, it is now a requirement in many countries that information be accessible to people with disabilities, and Berners-Lee among others has developed standards of compliance (e.g. see the Web Accessibility Initiative at http://www.w3.org/WAI/) to ensure that this is so. Thus, ensuring accessibility joins the long list of web design features that the modern designer has to consider, including aesthetics, usability, download speed, intuitive use of navigation, use of ‘plug-ins’ (programs that can be called up to play music, show videos or perform other functions) and search-engine optimisation, as well as more complex features of mark-up language itself, such as pull-down forms, buttons and animations (Wyatt, 1997).

This sophistication has subverted Berners-Lee's concept of a World Wide Web that would be easy for everybody to use. Creating web pages has become increasingly difficult. Expensive programs have been developed to edit html (Microsoft's FrontPage and Macromedia's Dreamweaver are examples), but using even these requires a knowledge of image editors, audio editors, video play-back programs such as Flash or Shockwave and, increasingly, of embedded scripts that tell the server how to handle information: Java, Microsoft's ASP, PHP and Macromedia's ColdFusion are all examples (it is easy to see whether a web page uses one of these scripts, since its address will not end in the suffix .htm or .html but in one corresponding to the server-side language, for example .asp or .php).

The original HTML has now been extended to XHTML (Extensible HyperText Mark-up Language), itself a dialect of XML (Extensible Mark-up Language), which in turn derives from SGML (Standard Generalised Mark-up Language). Other XML dialects include RSS (usually said to stand for ‘really simple syndication’), which enables simplified web page contents to be sent on to other web pages, mobile phones and other readers. A related standard, CSS (‘cascading style sheets’), determines the look of web pages by providing browsers with instructions on how to execute, or ‘parse’, mark-up instructions or ‘tags’. XML is set to become a world-wide standard for all digital devices. It will be the basis of the next version of Microsoft Windows and Office. It enables computers to link to televisions, DVD players and the growing number of household devices that are becoming programmable. The long-term goal is the wired home. Web designers will have to contend with the demands of all these different devices. Web writing is therefore likely to become even more specialised as web reading becomes easier and more widely available.
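For illustration, here is a minimal RSS feed (the content is invented) parsed with Python's standard library, showing that an RSS document is simply structured XML text:

```python
import xml.etree.ElementTree as ET

# A minimal RSS feed: an XML dialect summarising a web page's contents.
FEED = """<rss version="2.0">
  <channel>
    <title>E-therapy updates</title>
    <item>
      <title>New article on computers and the Web</title>
      <link>http://example.org/article1</link>
    </item>
  </channel>
</rss>"""

root = ET.fromstring(FEED)
for item in root.iter('item'):          # each <item> is one syndicated entry
    print(item.findtext('title'), '->', item.findtext('link'))
```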

One consequence of the increased technical difficulty of web editing has been the rapid development of a web design industry. Another is the increasing divergence between those with the ability to read from the Web and those who can write to it. This is also how print, radio and television developed. There are many, Berners-Lee included, who would like to reverse this trend, and make writing as easy as reading. It is possible to download pre-made ‘templates’, many for free, that simplify creating web pages. Content managers are also available that make creating web pages little more complex than word-processing. More and more institutions are buying them to enable staff to create and maintain their own web pages: the Sheffield University site (http://www.shef.ac.uk) is one example.

App. on the Web

Some web developers argue that there is currently a revolution in web design, dubbed Web 2.0, that reflects the greater use of scripts that exchange data with the server in the background. One consequence of this technology, sometimes called AJAX (Asynchronous JavaScript and XML), is that pages can ‘refresh’, that is change how they look, without loading all the page information again. This makes pages much more responsive to data entered by the user and therefore makes usable interactivity possible. Another consequence is that pages can be personalised, so one gets pages that reflect preferences or even previous browsing history. But the most important long-term consequence is that we will increasingly shift from having programs on our computers to using ‘applications’ (app.) on the Web. Application service providers (ASPs) such as Google (http://www.google.com) are already providing new ways of using the internet. Google's mapping service is currently one of the best examples. This article was reviewed and revised using an app., and this is an increasingly common way for journals to manage submitted articles. Essays too can be submitted using an app.: students of the MSc in Psychotherapy Studies at the University of Sheffield, for example, submit essays via a version of Turnitin (http://www.turnitin.com), which checks for plagiarism.
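The division of labour can be sketched with Python's built-in web server (everything here – the port, paths and data – is invented for illustration): the server offers the whole page at one address and a small data fragment at another, so that a script in the page can fetch just the fragment and update the page in place instead of reloading it.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/':
            # The full page: sent once, when the user first arrives.
            body = b'<html><body><p id="status">waiting...</p></body></html>'
            ctype = 'text/html'
        else:
            # A small fragment: a page script would request this in the
            # background and update the page without a full reload.
            body = json.dumps({'status': 'connected'}).encode('utf-8')
            ctype = 'application/json'
        self.send_response(200)
        self.send_header('Content-Type', ctype)
        self.end_headers()
        self.wfile.write(body)

HTTPServer(('localhost', 8000), Handler).serve_forever()
```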

Alternatives to web pages

Blogs

Another development, fuelled as so often by free programs and even free web space, is the web log, or ‘blog’. Blogs use special content management systems, which are widely available and make it simple to create a web diary, readable by anyone, with minimal training. Blogs by psychotherapists and counsellors are just beginning to appear (see http://www.technorati.com/tags/psychotherapy for some examples). Video blogs, in which home movies rather than text are published, are also growing in popularity.

Wikis

The wiki (named after the shuttle buses at Honolulu airport, the Wiki Wiki, from the Hawaiian word for quick) is the closest the Web comes to Berners-Lee's original, anarchic vision. Wikis are pages created using an open-source, open-access content manager that needs no password. So anyone visiting a page can also change it. The most remarkable example of a wiki is the encyclopaedia that I have most frequently cited here, Wikipedia (http://en.wikipedia.org/wiki/Main_Page). All of the articles in Wikipedia have been created by people who have read an article and then added a little to it or taken something away.

A wiki is exciting to a group therapist like myself. Like a group, its content is greater than the individual contribution of any of its members. Of course, it can have a destructive side. Wikis could be destroyed by a single user, or could be subverted to convey disinformation. Wikipedia has editors who are responsible, like group conductors, for ensuring that this does not happen.

Wikipedia itself currently dominates the list of wiki sites. The English Wikipedia is the largest, with about 734 100 articles, many of them relevant to psychotherapy. Wikipedia has articles in 188 languages other than English, is developing a ‘wikiversity’ and also publishes ‘wikibooks’. The ‘book’ on introductory psychology (http://en.wikibooks.org/wiki/Psychology:Introduction), which is very short, begins:

‘The stereotypical situation where a psychologist is involved plays out like this: a bearded man, perhaps middle-aged and balding, sitting at his desk behind a long leather couch. He is jotting down notes about the mental condition of a patient, the very one reclined on that couch busily babbling and confessing his deepest troubles, secrets, and fears. A loud clock is ticking in the background, and the psychologist is asking probing, uncomfortable questions, but he seems to know the answers before his patient speaks. He betrays no emotion, and says nothing to suggest his own point of view, yet he conveys a sense of moral and intellectual disdain at the thoughts his patient is thinking.’

Clearly, we psychotherapists have a lot of work to do to overcome our stereotypes.

There seem to be few psychotherapy-related wikis. One exception is a wiki that has generated a consensus definition of the standards of care for people with gender identity disorders (http://wiki.susans.org/index.php/Standards_of_Care_for_Gender_Identity_Disorders). The educational website for staff and students at the University of Sheffield (http://www.septimus.info) includes a wiki seeded with the kernel of much psychotherapy debate, ‘Psychotherapy is …’. Psychotherapy terms and interventions are difficult to standardise. The European Association of Behavioural and Cognitive Psychotherapy is attempting to do so in its own field by the exchange of documents. A wiki would be an ideal method of undertaking this or similar projects, and will, no doubt, be so used in the future.

Peer-to-peer programs

File transfer over the internet predates the development of the Web. Telnet was one of the very early internet functions that enabled files, and commands, to be sent to a remote computer. Telnet required that the user be recognised by that computer. The development of an anonymous file transfer protocol (FTP) enabled users to access and download files from publicly accessible directories without an account.
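An anonymous FTP session can be sketched with Python's standard ftplib (the server and file names are placeholders):

```python
from ftplib import FTP

# Connect to a public FTP server and log in anonymously --
# no personal account is needed, which is the point of anonymous FTP.
with FTP('ftp.example.org') as ftp:       # placeholder host
    ftp.login()                           # anonymous login by default
    ftp.cwd('/pub')                       # a publicly accessible directory
    with open('paper.txt', 'wb') as f:
        ftp.retrbinary('RETR paper.txt', f.write)   # download one file
```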

Recently, a new generation of file transfer programs has enabled users to swap files and, rather than holding them on one central computer, to open their own computers to being downloaded from as well as downloaded to. These ‘peer-to-peer networks’ enable music files, videos and text files to be exchanged (often in breach of copyright). This method of file distribution has become extremely popular, despite threats of prosecution. At the time of writing, four of the ten most popular downloads logged at http://www.cnet.com, the website of the magazine Computer Shopper, were peer-to-peer file-sharing programs. Many of these programs install hidden programs that show advertising (‘adware’) or log computer usage and send it to advertisers (‘spyware’). Not surprisingly, therefore, CNET's most commonly downloaded programs were an adware blocker and a spyware eliminator.

Peer-to-peer protocols are also used to support instant-messaging services such as MSN Messenger and ICQ, and, more recently, telephony that uses the internet rather than a telephone line as the carrier medium (voice over internet protocol, or VoIP). They also enable videoconferencing if the computers are linked to suitable webcams (video cameras whose output is compatible with the computer and the videoconferencing program). A step up from exchanging files is to exchange and play files. This becomes a means of sound broadcasting similar to radio (radio has been broadcast over the internet for years) but easily accessible. Podcasting (named after the iPod, since both use the MP3 audio compression format) is, like blogging, a method for people with a minimum of equipment to reach a large audience.

Open-source programs

Browsers, newsreaders, blogging tools, peer-to-peer file transfer programs, MP3 players and video players are just some of the many programs that are available free. They reflect an important value of the World Wide Web: making programs and functionality available for free. Most of these free programs now carry advertising, and make their money that way. But there are many programs that are simply made freely available. Programs like this are often termed ‘open source’ (their source code is published for anyone to use and modify), and files are sometimes explicitly distributed as ‘copyleft’ (licensed so that they, and anything derived from them, remain free to copy). Operating systems (e.g. Linux), databases (e.g. MySQL), server languages (e.g. PHP, originally ‘personal home page’), scripts, templates and many, many web pages are all available for free.

This tradition of freely available information has extended to medicine, and to psychotherapy and counselling. Scientific information is increasingly being made available to all via the Web. Online journals are already widespread, and many now have a full-text facility, either freely available or on subscription. Electronic-only journals have been championed by the London-based company BioMed Central and have proved extremely successful. Project Gutenberg has been putting classic books (including a few relevant to psychotherapists) online for decades, and this will be accelerated by Google's project to scan the collections of four US libraries and the back catalogue of Oxford University Press, and by the matching European Union project to do the same for books in languages other than English.

The rise and rise of the search engine

Finding the right information is both easier and harder because of the internet. In a survey of web server software usage on the internet in April 2006, Netcraft received responses from 80 655 992 websites (http://www.serverwatch.com/stats/article.php/3596491). In September 2005 Google claimed to search ‘over 8 billion pages’ (Markoff, 2005). It is increasingly likely that the information one needs is there, somewhere, but it is becoming ever harder to find it among all the other information. Just having so much information available is sometimes seen as a stressor: it is assumed that all of us have a limited capacity to attend to information and that exceeding this results in a state called information overload.

Web rings and portals

To some extent web pages themselves try to reduce information overload by displaying links to related information. Some web pages are also in ‘web rings’, linking to pages with similar or related content. ‘Directories’ are websites whose main purpose is to list hyperlinks to pages on a particular topic. Web portals provide more signposts, containing text, news or highlights, to help users navigate to the pages they want. Good examples of portals are http://www.direct.gov.uk and http://www.nhs.uk. Large sites often develop into portals or incorporate portal facilities. For example, the Royal College of Psychiatrists’ site includes some portal-style pages (e.g. http://www.rcpsych.ac.uk/mentalhealthinformation/weblinks.aspx). Box 1 lists examples of portals, including some excellent ones for academic disciplines related to psychotherapy. Psychotherapy itself is not so well served, although there are commercial examples and a few relating to particular therapeutic modalities. PSY-LOG, for example, was created under the auspices of the European Psycho-Analytical Federation.

Box 1 Some commonly used portals

General and medical

  1. Yahoo (http://www.yahoo.com)

  2. UK government pages (http://www.open.gov.uk)

  3. NHS pages for patients about services (http://www.nhs.uk) and about their own health (NHS Direct online: http://www.nhsdirect.nhs.uk/)

  4. the National Electronic Library of Health (http://www.nelh.nhs.uk) for professionals

Psychotherapy and related disciplines

  5. The Social Psychology Network (http://www.socialpsychology.org)

  6. The philosophy of mind portal (http://www.philosophyofmind.net)

  7. Autism (http://www.nas.org.uk or http://www.udel.edu/bkirby/asperger)

  8. PSY-LOG (http://www.psy-log.com)

Directories

In the early days of the Web, people published printed web directories, imagining that books would continue to be the dominant sources of information. Although magazines and newspapers continue to publish directories of websites, the only medium volatile enough to keep such directories up to date is the Web itself. Directories were created by people reading and classifying web pages, and then publishing their results on the Web. Yahoo! began as a directory. The Open Directory Project (ODP; http://www.dmoz.org), also known as DMOZ, uses volunteers to read and classify pages. At the time of writing this article, the ODP reported that it had 69 629 editors, who had reviewed 5 148 836 sites and placed them in over 590 000 categories. The ODP has 908 psychotherapy-related sites classified in five categories.

Indexes and search engines

Directories have gradually succumbed to indexes as the dominant method of finding general content, although they have an increasingly important place for specialised searches such as checking whether a medical practitioner is registered with the General Medical Council.

Originally these indexes were built from the words that web authors used to describe their own pages (meta-tags and keywords), but with increasing processor speed and increasingly efficient search algorithms, companies can now read whole pages looking for content and use the results to build their indexes. They do this using specially written programs (‘search bots’, or simply ‘bots’), which browse the Web from page to page, storing each page (‘caching’, to use the Google jargon) or its words as they go, indexing them, and then using one of the hyperlinks in the page to move to the next page and do the same thing. This process is known as ‘crawling’. The user searches the index that the company – generally, and metonymically, called a search engine – maintains, entering keywords into a form served by the browser or into a plug-in that appears as a toolbar. The results of the search are returned as a list of page titles, with embedded URLs so that the user can click on them, and a description taken from the site itself.
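A toy version of this crawl-and-index cycle, in Python (the seed address is a placeholder, and a real crawler would add politeness rules, error handling and much else):

```python
import re
import urllib.request
from collections import defaultdict

index = defaultdict(set)               # word -> pages containing that word
to_visit = ['http://example.org/']     # placeholder seed page
seen = set()

while to_visit and len(seen) < 10:     # visit at most 10 pages in this toy run
    url = to_visit.pop()
    if url in seen:
        continue
    seen.add(url)
    with urllib.request.urlopen(url) as r:
        page = r.read().decode('utf-8', errors='ignore')
    for word in re.findall(r'[a-z]+', page.lower()):
        index[word].add(url)           # index the page's words
    # follow the page's embedded hyperlinks to the next pages: 'crawling'
    to_visit += re.findall(r'href="(http[^"]+)"', page)

print(sorted(index['example']))        # a one-word 'search' of the index
```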

In July 2005, according to Nielsen's internet ratings, Google, Yahoo! and the Microsoft search engine MSN Search were the vehicles for 80% of all searches. Search engines can now search usenet groups and blogs, as well as the Web. The top three can also search one's own computer, can search specifically for images or for text, and can search using Boolean combinations of terms (Box 2).

Box 2 Boolean searches

A Boolean search uses the words (‘operators’) AND, OR and NOT (in upper case, as shown) to refine a search (a short code sketch of these operators follows the box):

  1. ‘psychotherapy AND counselling’ will find pages that mention both psychotherapy and counselling

  2. ‘psychotherapy OR counselling’ will find pages that mention either

  3. ‘psychotherapy NOT counselling’ will find pages that mention psychotherapy but not counselling
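As a sketch of what these operators do, in Python over a toy collection of ‘pages’ (the page texts are invented):

```python
pages = {
    'page1': 'psychotherapy and counselling services',
    'page2': 'counselling for students',
    'page3': 'group psychotherapy research',
}

def matching(term):
    """The set of pages whose text mentions the term."""
    return {p for p, text in pages.items() if term in text}

print(matching('psychotherapy') & matching('counselling'))   # AND: {'page1'}
print(matching('psychotherapy') | matching('counselling'))   # OR: all three
print(matching('psychotherapy') - matching('counselling'))   # NOT: {'page3'}
```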

‘Meta-search’ engines submit a search to several search engines at once and collate their results. The first of these was Metacrawler (http://www.metacrawler.com); later popular engines include Copernic (http://www.copernic.com) and Dogpile (http://www.dogpile.com). Arguments for and against the use of meta-search engines are considered at http://news.zdnet.com/2100-9588_22-5647280.html.

Search engines are currently introducing many new features, for example searchable maps (e.g. Google Earth at http://earth.google.com/), searches of university web pages, local business searches (similar to a service already provided by http://www.yell.com), searches for free computer code, language-sensitive searches, searches of online shops and a pay-per-enquiry service.

Google also provides for specialist searches, but other dedicated specialist search engines have evolved to access information of particular relevance to the scientist. The publisher Elsevier funds Scirus (http://www.scirus.com), which searches selected web pages but also scientific publications, and provides a citation and abstract that can be downloaded to a bibliographic program. Scirus often provides a link to a full-text version of the paper, which can also be downloaded, or to the relevant entry in Medline. Google Scholar (http://www.scholar.google.com) searches for full-text publications that have been transferred to the Web.

Search engines can only search what is accessible. Many web pages are password protected, for example Royal College of Psychiatrists’ members’ pages, or are maintained on a network of computers (an intranet) not connected to the internet. Intranets may have their own search engines, but they are restricted to those with intranet access.

Spam spam spam spam¹

‘Cyberspace’ contains more and more emails, either in transmission or, increasingly, kept on a server and accessed via an appropriately designed web page (‘web mail’). Email did not present a problem in the past: as with snail mail, it was merely a matter of throwing out the junk mail (spam) and filing the remainder. However, the massive increase in the volume of email and, even more, of spam has created email overload for many people. Spam can be offensive and it clogs mail boxes. Dan Evett estimates that 12.4 billion spam emails are sent daily, amounting to 40% of all emails (http://www.spam-filter-review.toptenreviews.com/spam-statistics.html). Of these, about 1 in 250 are ‘phishing’ attacks, i.e. fake messages designed to obtain confidential details, often relating to bank accounts, but which could in future be targeted at confidential medical or other personal information.

The opportunities and risks for psychotherapists and their clients

This first article has been a rapid introduction to the rise of the World Wide Web. Perhaps readers will have experienced through its very rapidity some of the helter-skelter flavour of current computer development. This is often expressed in terms of Moore's law: the processing power of computers doubles every 24 months (although this is usually misquoted as every 18 months) (Moore, 1965). Despite the apparent impossibility of such rapid evolution, Moore's law has held true for over 40 years and is expected to continue to do so for some time to come, even though this requires the constant development of new technologies.

In contrast to this increase in processing power, computer disk access times have not become faster. Flash storage and other new random-access storage devices are being developed, but the electromagnetic hard disk is unlikely to be replaced in the near future as the computer's main bulk storage device. So less and less of the burgeoning information on the net is going to be downloaded for storage on local machines. Instead, information and even programs are going to be maintained on remote computers, ‘servers’, which are going to proliferate on ‘server farms’. Search and highly selective retrieval – and the search engines that support this – are going to become even more important. It will no longer be a case of getting information or programs to do the job, but of selecting which tools to use from an overflowing toolbox.

In the next article in the series (pp. 368–374, this issue) I begin to consider what kind of tools psychotherapists might want.

Declaration of interest

D.T. is the author of a hypermedia distance-learning course Septimus.

MCQs

  1. RSS:

    a. stands for ‘rapid standard service’
    b. stands for ‘really simple syndication’
    c. stands for ‘readable simply sent’
    d. is a variant of XML.

  2. Wikis are:

    a. a kind of tepee
    b. a type of malicious junk mail
    c. an airport taxi
    d. a web page created by the people who visit the page.

  3. XML stands for:

    a. extensible mark-up language
    b. extra messy listings
    c. extreme multimedia levelling
    d. extra modifiability and its limitations.

  4. Browsers are:

    a. simple programs
    b. originally the invention of Tim Berners-Lee
    c. what makes the World Wide Web work
    d. open source.

MCQ answers

1: a F, b T, c F, d T
2: a F, b F, c T, d T
3: a T, b F, c F, d F
4: a F, b T, c T, d T

Footnotes

* For a review of the development of telepsychiatry and key research findings see McLaren, P. (2003) Telemedicine and telecare: what can it offer mental health services? Advances in Psychiatric Treatment, 9, 54–61. Ed.

1 With due acknowledgement to Monty Python's Flying Circus.

References

Markoff, J. (2005) How many pages in Google? Take a guess. The New York Times, 27 September.
Moore, G. E. (1965) Cramming more components onto integrated circuits. Electronics, 19 April.
Tantam, D. (2006a) The machine as therapist: impersonal communication with a machine. Advances in Psychiatric Treatment, 12, in press.
Tantam, D. (2006b) The machine as intermediary: communication with a therapist via a machine. Advances in Psychiatric Treatment, 12, in press.
Wyatt, J. C. (1997) Commentary: measuring quality and impact of the World Wide Web. BMJ, 314, 1879.