Tuesday, 30 August 2011

Life and Learning, Part 1: Poetry, Pebbles and Poo

by Emma-Kate Prout

Emma-Kate Prout will soon begin her final year as an undergraduate in Earth Science and Geography at Durham University. When she asked if she could write and create some illustrations for the blog, she insisted she could do it sensibly if she really tried. Her offer to contribute was welcomed with open arms. No bribes with biscuits were involved.

I was initially unsure where to begin with the blog. My brain was buzzing with half-formed ideas but I didn’t want to simply regurgitate what I’d read about science communication, or copy too closely the good stuff already posted on this page. So I mooched for a while, scribbling any passing words, phrases or thoughts of food on the walls inside my mind. Then I had some biscuits.

After a while longer of me thinking too hard, a thought nudged me in the side of the head: I realised that when I write, or indeed do many other things, I’m half-aware of all sorts of scientific processes - both physical and mental - in operation. And not only do I write about science, and read others’ writing about science, but the whole process of writing, from head (and heart?) to pen and paper, seems to be as much a science as an art - or at least the outcome of a whole splurge of scientific activity. Once you make the links, isn’t everything inseparable from some aspect or another of science, whether or not we understand it yet? It occurred to me that even a few years ago, I didn’t generally think so ‘scientifically’ whilst engaged in writing or anything else except Science as a School Subject. Writing was writing. Science was science. And I decided that ‘proper’ science writing wasn’t proper writing; yes, it was necessary, but a necessary, white-coated vacuum for lyricism and wit. Something has changed since then. I decided that the enlightenment I seemed to have undergone was something to celebrate and so began some more blog-shaped ramblings…

Consciousness of science in everyday life, even just of its presence and importance if not its details, opens the mind to a whole new way of thinking (to be blogged about later). The ‘Whys’ are a quality (not a disease) in young children, even if they don’t see them as ‘science’, but are unfortunately often lost later. My early childhood was a whirl of curiosity. There was no specialisation, no segregation of subjects: it was just life and learning. A lot of time was spent in nature, with a semi-permanent coating of water, sand and/or mud, and an innate scruffiness that seems to have survived the years since. Most of the rest of the time, I was attached to a pen, paintbrush, book or musical instrument, and swapped the dirtiness for more refined paint, glue and glitter.

Collecting pebbles was always an addiction. I loved the ink-splashes of colour when I wet them, the stripy, sparkly or speckled ones and the small, round ones that looked like toffee. No sooner was one stone stuffed into a fast-wearing-out pocket than another appeared in a tantalising new shade or an even more perfect heart shape than before. I was also the proud owner of a delightful, orange-flecked lump of alleged dinosaur poo (‘Jurassic Krap’, I announced to anyone and everyone: ‘Don’t blame me; it’s printed on the poo’). I travelled my world in all weathers, in my own mobile library: my coat carried the legends of Earth in a myriad of minerals. The stones are still scattered among vast collections of childhood paintings, poetry, sketches and stories - volumes on a world with everything from road works to rainbows and spiders to sphinxes.

Gradually, I developed more of a mental distinction between subjects. I don’t remember ever actively disliking science, despite what I said at the time. I went through school quietly, an eco-geek interested in most things except computers, cosmetics and conformity, yet seemingly lacking my now-unconcealed scienciness. Until relatively recently, science was simply ok. I studied it and worked hard, but I certainly wasn’t a ‘scientist’ and never would be, so I said. I think it was a mentality maintained by habit and fear rather than lack of interest. Science had become, in my mind, synonymous with ‘not creative’ and had to be put into mental solitary confinement so as not to eat away at the bigger, juicier right side of my brain: I didn’t want to lose my way with words or skills with a sketchbook among too much logic and lab work.

For a while, then, I didn’t see the planet that I’d hitched a ride on as a spinning ball of science. But at some point, something clicked. It wasn’t in a single moment; more a series of Rubik’s Cube clicks that led to logic making chaos less…chaotic, yet no less colourful. Constantly, though, the little science-squares mix and multiply with more discoveries. There will always be grey blocks of the unknown: just as we scribble in one with a new theory, we notice another shaded in. In Life and Learning, Part 2: Outside Box Town I will share with you a small journey of scientific revelation...

Tuesday, 23 August 2011

Adventures in Time and Space

(Lynne Hardy)

On Saturday 23rd November 1963, at around 5.15pm, a new television series aired for the very first time. Its remit was to entertain and educate Britain’s children. It was to become the longest-running and most successful science fiction television series of all time: Doctor Who.

One of the Doctor’s first assistants was Ian Chesterton, a science teacher. Intrigued by the advanced scientific knowledge displayed by one of his pupils, a strange girl called Susan, Ian and his colleague Barbara (a history teacher) follow the girl to a junkyard. They wander into a blue police box in their search for her, only to be whisked away on a series of adventures by Susan’s grandfather, the Doctor (played by William Hartnell). Throughout their travels, Ian uses his scientific knowledge to solve many of the problems they encounter. In fact, due to William Hartnell’s age, Ian is very much the heroic man of action on the show - a role scientists rarely played on TV until the advent of the glossy police-procedural scientists of the 00s.

Not all scientists are portrayed in such a favourable light, though, even by fellow scientists! In the late 60s, one producer, Innes Lloyd, wanted to introduce more hard science to the stories and brought Dr Christopher (Kit) Pedler on board as an unofficial scientific advisor. An expert in ophthalmology and electron microscopy, Dr Pedler went on to become a writer for the series, co-creating the Cybermen with Gerry Davis. These emotionless, cybernetically enhanced beings from the planet Mondas made their first appearance on our screens in October 1966. They re-appeared, with a slightly different origin story, during the tenure of the Tenth Doctor (David Tennant) in 2006.

The Doctor’s greatest arch-nemeses are the Daleks, created by writer Terry Nation and designer Raymond Cusick; they first appear in the Doctor’s second ever adventure. Although their origin story has also undergone many different iterations over the course of the series’ history (that’s time travel for you), their creator Davros is probably the ultimate mad, bad and dangerous-to-know scientist on the show. He was first introduced in 1975 during the Fourth Doctor (Tom Baker) story “Genesis of the Daleks”, where his ruthless experimentation leads to the creation of the brutal Dalek race. Like the Cybermen before him, he was reintroduced to the new series in 2008.

No one could really argue with the effect that Doctor Who has had on the imaginations of generations of children; you just have to look at the popularity of the new series and the numerous Blue Peter competitions over the years for evidence of that. In his article, Mike Wiggett talks about children not having their curiosity engaged in science classes. However dodgy the science might be, the Doctor is always curious and always trying to use that science (and his imagination) to solve his dilemmas. He sees beauty in everything, like Feynman’s artist, but also understands fundamentally how things work, so marvels even more. That’s a powerful role-model for science if ever there was one.

Between 1973 and 1991, Target books published a series of Doctor Who novelisations. These books were cheap; they gave children access to the back catalogue of the Doctor’s travels in the days before video recorders and endless repeat showings, and they included those stories that the BBC (in its infinite wisdom) had wiped from its archives. Nev Fountain, co-writer of Dead Ringers, was accused by one of his schoolteachers of reading “too much Doctor Who” as a child. He found the books exciting, but more importantly, he found them accessible, leading to a love of reading in general. In Nev’s case, it also inspired his love of telling stories. And he’s not the only one; many of the writers involved with the show’s revival were inspired as children by the Doctor’s adventures, people such as Russell T Davies, Steven Moffat, Mark Gatiss and Robert Shearman (an award-winning playwright and short-story author who trained under Alan Ayckbourn).

At a school in Teesside, one teacher often uses the Doctor to introduce historical topics, thereby taking the character all the way back to his very roots. This gives the children someone familiar to identify with and an excuse as to why someone in costume claiming to be from a different time period can show up in their classroom (the Doctor brought them). A friend is using his collection of Target books to help improve his son’s reading. His son is a big fan of the current show, so his interest was already piqued. They sit together and read a chapter, then watch the old television episode it’s based on and discuss the differences.

So if the Doctor can be used to inspire reading, writing and an interest in history, why aren’t we using him more to inspire budding scientists? Yes, the science in the show can be a bit far-fetched - this is science fiction, after all - but so what? It’s the sense of wonder and enthusiasm, the sense of exploration and discovery, the joy of solving a problem based on what you’ve observed that we should be exploiting. Because, let’s face it, everything is more exciting when the Doctor comes to town…

Tuesday, 16 August 2011


By Mike Wiggett

On 7 August I wrote about the gulf between the prodigious achievements of science and often negative public attitudes towards it.  Modern prosperity owes everything to science, but we don’t celebrate it: we don’t instil in our children an enthusiasm for the wonder of scientific inquiry or an understanding of the Scientific Method.  

Science still wanders at the margins of popular culture. WW2 sowed the seeds of this alienation.  War is always a major spur to scientific research and development. It leads to great technological advances – computers, rocketry, jet engines – and also, regrettably, to nuclear and biological weapons.  And in wartime, research is conducted in conditions of utmost secrecy. 

Thus the main elements of public perception of science in this era were twofold: fear (of its colossal power) and mystery. In the war movies of the 50s and 60s, the heroes were soldiers and fighter pilots; scientists were mere “boffins”, “back-room boys”, and “men in white coats”.  In science, geekdom reigned, and the less the public knew about it, the better.

Not much has changed.  Ordinary people remain alienated from science, no matter how familiar and cherished its products. How many of us can diagnose the faults of a car engine, or feel confident of not getting ripped off by the repair man? Who can understand the electronic circuit boards of a computer, or even change a light fitting?

People not only see science as difficult and daunting, they often take pride in their scientific incompetence. An arts graduate might feel embarrassed to expose ignorance of T S Eliot’s poetry in a pub quiz, but that same person will proudly aver utter hopelessness at maths. The implication is that science is only for nerds, and may be safely discounted. How desperately stupid and wrong! 

Of course, most occupations require no familiarity at all with interferometric microscopy or the Higgs boson, but a basic science education offers everybody one of life’s most precious tools: the habit of evidence-based thinking, an earnest desire to see reality as it is, not merely how one would like it to be. Absence of evidence-based thought in human affairs is seriously corrosive.

An example: consider the former doctor (“former” as in “struck off”) Andrew Wakefield. In 1998 he gained wide publicity with an article in The Lancet claiming to have established a causal link between the MMR vaccine and autism in children. However, studies failed to corroborate Wakefield’s “evidence”, which later turned out to be fraudulent. Still, the damage was already done: vaccination rates fell sharply, and a number of children have suffered harm or even death in consequence.

A vivid memory from this period is the BBC News headline on the General Medical Council’s conclusions: “Doctors find the vaccine safe, but we talk to a mother who disagrees.” Appalling!  One can sympathize with a mother who thinks, “My son had the vaccine, my son is autistic, ergo...” but the BBC, as a responsible public broadcaster, should know that the plural of anecdote is not data.

Rejection of science in favour of mere whim is commonplace.  Prince Charles has championed the cause of homeopathy, a quack “alternative” medicine proven in countless clinical trials over 175 years to be utterly ineffectual.  But some people still swear by it.  Fine, swear away!  Only don’t lie about the curative benefits of homeopathic nostrums. The closure of the Royal Homeopathic Hospital in September 2010 was a triumph for reason and the NHS budget.

A hilarious aspect of homeopathy is that, even by its own lights, it cannot work! The pseudo-scientific methodology set in place by its founder, Samuel Hahnemann, ordained that the supposedly active ingredient of a homeopathic remedy be repeatedly diluted – to the point where it is completely absent!  
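
The arithmetic behind that objection is easy to check. As a rough illustration (my own, using standard chemistry rather than anything from Hahnemann), the snippet below counts the molecules expected to survive a “30C” remedy - thirty successive 1-in-100 dilutions - starting from a generous full mole of active ingredient:

```python
# Rough illustration: how much "active ingredient" survives a 30C remedy?
# A "C" step dilutes the solution 1 part in 100; 30C means thirty such steps.
AVOGADRO = 6.022e23          # molecules in one mole

molecules = 1.0 * AVOGADRO   # start with a whole mole of active ingredient
for _ in range(30):          # thirty serial 1:100 dilutions
    molecules /= 100

print(f"Expected molecules remaining: {molecules:.3e}")
# -> Expected molecules remaining: 6.022e-37
```

With an expectation of about 6e-37 molecules per dose, the chance that even a single molecule of the original substance remains is effectively nil - which is exactly the author’s point.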

In the 1990s, homeopaths tried to counter this objection with the time-honoured ploy of the snake oil merchant: make it up as you go along. They enthusiastically embraced a Pythonesque notion from the French immunologist Jacques Benveniste: water memory. OK, there may not be any active ingredient, but the water in the homeopathic solution “remembers” it! OK? Benveniste even claimed laboratory evidence for his hypothesis, but nobody else could reproduce his results. Besides, if water can remember the “cure”, why doesn’t it also remember all the sewage it has ever been part of?

[Image: Uri Geller with bent spoon]
In the 1970s, a young Israeli illusionist, Uri Geller, achieved worldwide fame with his spoon-bending antics on TV.  The mood of that era was credulous and almost everyone lapped it up.  The difference between Geller and a regular stage magician was that Geller attributed his skill to “psychic” powers. They lapped that up too!

But Geller was called out by the great illusionist and skeptic, James Randi, who said: “Hey, I can do all that stuff too, and it’s just trickery.” Geller repeatedly tried to sue Randi to shut him up, without success. Randi has since offered a $1 million prize to anyone who can demonstrate paranormal powers. The offer has stood for over 20 years, with no takers.

[Image: James Randi]
It is perplexing that so many people are distrustful of science, and yet those same people readily believe all kinds of monkeyshines for which there is no supporting evidence whatsoever! Homeopathy, faith healing, crystal therapy, astrological prediction, palm reading, paranormal and “psychic” phenomena, communication with the dead, reincarnation, Roswell aliens, the Loch Ness Monster...the list goes on and on.

The Loch Ness Monster?...

I have not forgotten our invisible friend in the sky, but religion will keep for another day! Let me close by warmly recommending the animated movie version of Tim Minchin’s beat poem, Storm, much in tune with the theme of this blog and available on YouTube. 

Mike Wiggett retired in 2005 from the mundane business of international trade credit. The foreign business travel was OK, but he now enjoys travelling for pleasure, reading, scribbling and Thinking About What To Do Next.

Sunday, 7 August 2011


by Mike Wiggett

Once on a trip to St Petersburg I found myself suddenly alone in the study of Peter the Great.  For a few brief, dizzy moments, I saw the room as he saw it, and wondered how I would have liked being Tsar of All the Russias. The modest refinement of Peter’s study, with its morocco-bound books, orrery and other astronomical instruments, was enviable. And perhaps it would have been fun to come home from Versailles and say, “Make me one of those”, and see it done. On the other hand... 

Even the most powerful 17th Century autocrat was in many ways less privileged than ordinary people today.  I’m much more travelled than Peter was, and he never googled, watched repeats of Columbo, or warmed his chocolate in the microwave. Moreover, Peter the Great lived in a world plagued - literally - with deadly pathogens, a world with no germ theory of disease, no antibiotics, no anaesthesia, in fact no medicine to speak of at all. He died at 52 from a chronic bladder infection, possibly exacerbated by non-sterile surgery.

Just 300 years on, science and technology have transformed our world into one Peter would barely recognize.  Hundreds of millions enjoy longevity and material prosperity unimaginable in his day. We owe it all to science, every bit.

And yet our culture has such an ungrateful, fractious, churlish attitude to science - science is seldom invited to the ball!  Britain has no easy familiarity with its many heroes of science, people who have shaped world history. We all know the dubious tale of Newton’s Apple, but how many know what he did when he wasn’t sitting under a tree? How many school-leavers have even heard of James Clerk Maxwell or Rosalind Franklin? [1] Sir Alec Jeffreys, perhaps?

[Image: Rosalind Franklin]
Not long ago, an arts-educated Brit asked me who Alan Turing was. I said Turing had invented the computer and that, thanks to his brilliant work in decrypting Hitler’s radio communications, he had made a crucial contribution to the defeat of the Nazis in WW2. Apart from that, he didn’t really do much. Turing has now received belated recognition, but a blue plaque in Wilmslow is not a monument in Westminster Abbey. To get one of those, writing poetry is a much better bet. 

[Image: Alan Turing]
Public attitudes to science range from cool and warily respectful to distrustful, suspicious or downright hostile. The great Nobel physicist, Richard Feynman, was irked by the cliché that science is cold, mechanical, uncreative, unpoetic – a scientist cannot appreciate the beauty of a flower, but has to dissect it. In reclaiming normal human aesthetic sensibilities for scientists, Feynman observed that scientific awareness is not inferior to that of the artist: it is always more. A poet may extol the beauty of a flower, and the scientist can see that too, but (s)he also has a deeper appreciation of the flower through knowledge of its cell structure, its role in reproduction. The scientist asks fascinating questions: why is the flower red?  Bright colours attract bees to spread the pollen. So now you also know bees have colour vision. How did that come about?  And how did the flower make the shrewd choice of red in the first place? The wonder of Nature is endless. [2] 

Sadly, Feynman was not at my school back in the late 50s. My science teachers did not induce his “Pleasure of Finding Things Out.” Sure, we did “experiments” in the lab, but the teacher often told us in advance what was supposed to happen.  This wasn’t a process of discovery, it was cookery to a recipe. We heated the solution and weighed the precipitate and wrote it all down. We even cooked our results, knowing how much the precipitate was meant to weigh, because Sir had told us.

[Image: Richard Feynman]
At least we learned how to describe an experiment in a manner useful in an exam, but we were not engaging our curiosity, our eyes and minds, to solve problems as the pioneers of physics and chemistry had to, with no text books to guide them. For many students in my class, this lab learning experience was alienating; science seemed to be an occult art, the province of a few geeks – nearly always male, with pens in their top pocket – who would later wear white coats in labs of their own. 

Alas, we were not being coached in the true glory of science, the very secret of its success: the Scientific Method - it teaches you how to think. Feynman again:

"In general we look for a new law by the following process. First we guess it. Then we compute the consequences of the guess to see what would be implied if this law that we guessed is right. Then we compare the result of the computation to Nature, with experiment or experience, compare it directly with observation, to see if it works. If it disagrees with experiment it is wrong. In that simple statement is the key to science. It does not make any difference how beautiful your guess is. It does not make any difference how smart you are, who made the guess, or what his name is - if it disagrees with experiment it is wrong. That's all there is to it." 
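
The loop Feynman describes is concrete enough to sketch in code. The numbers below are invented for illustration (free-fall “observations” rounded to one decimal place); the point is only the shape of the process - guess, compute the consequences, compare with experiment, and call wrong whatever disagrees:

```python
# Toy version of Feynman's guess -> compute -> compare loop.
g = 9.81  # m/s^2, acceleration due to gravity

# Pretend "experimental" data: (time in seconds, distance fallen in metres)
observations = [(1.0, 4.9), (2.0, 19.6), (3.0, 44.1)]

def linear_guess(t):      # one guess: distance grows linearly with time
    return g * t

def quadratic_guess(t):   # another guess: d = g * t**2 / 2
    return 0.5 * g * t**2

for name, law in [("linear", linear_guess), ("quadratic", quadratic_guess)]:
    # "If it disagrees with experiment it is wrong."
    agrees = all(abs(law(t) - d) < 0.1 for t, d in observations)
    print(f"{name} guess: {'survives (for now)' if agrees else 'wrong'}")
```

Note the asymmetry Feynman insists on: the experiment can prove a guess wrong, but agreement only means the guess survives for now.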

First trust the evidence, then your instincts. If only more of us did! I cringe to think I once accepted the claims of the I Ching uncritically. But I was young, and got over it. Still, this naivety was part of a general defect in our society: inadequate understanding of the importance and the power of evidence-based thinking – scientific thinking – and the great harm done by its opposites: credulity, superstition and prejudice. More on this next time...  

[1] Brenda Maddox's biography, Rosalind Franklin: The Dark Lady of DNA, is spellbinding.

[2] The full Feynman conversation was the subject of a 1981 Horizon special, The Pleasure of Finding Things Out. This is available on YouTube. Feynman also wrote a wonderful little book of the same title. See also Richard Dawkins’ book on the same theme, Unweaving the Rainbow: Science, Delusion and the Appetite for Wonder.

Wednesday, 3 August 2011

The Tales We Tell, The Games We Play: Part 2 - The Games We Play

(Lynne Hardy)

Not only is man a story-teller, he is also a player of games. Dice developed independently in many different cultures, with the oldest known examples being over 5,000 years old. Perhaps the most familiar is the six-sided die (or d6), where opposite faces traditionally add up to 7. However, there are many different types, often based on the regular polyhedra (the Platonic solids), giving 4-, 8-, 12- and 20-sided dice alongside the cubic d6. Varying the number of sides on a die is not a modern invention, either: icosahedral dice have been found on Roman archaeological sites.

So why am I talking about dice when I said last time that we’d be looking at interactive storytelling? Because dice are fundamentally important to one particular branch of it: table-top roleplaying games (RPGs). But what is an RPG? Anyone who’s ever daydreamed or told stories, placing themselves in the hero’s shoes, has roleplayed, just a little bit. All those games you played as a child, tearing round the school playground pretending to be someone else? That was roleplaying, too. But when you say roleplaying to a lot of people, the first thing they think of (okay, maybe the second, after corporate team-building exercises) is Dungeons and Dragons (D&D). The game, written by Gary Gygax and Dave Arneson, was first published in 1974. It had developed from table-top wargames, where players use a set of rules, dice and miniatures to replay historical battles.

In table-top RPGs, players take on the roles of various characters in order to achieve some sort of goal. Although one person will usually be responsible for creating the over-arching plot of the story (the games master or GM), all of the players collaborate to determine the exact details, making the whole process very interactive indeed. In that respect, table-top RPGs fall somewhere between improvisational theatre and a murder-mystery party. The rules are there, as in any other game, to ensure balance, fair play and a common framework for the story-telling that takes place.

The rules tend to fall into two distinct categories: those systems which attempt to provide a near-perfect simulation of real world events, and those which are there purely to facilitate the dramatic proceedings. And as you might expect, science fiction has proven a popular inspiration for RPGs. For the most part, simulation systems tend to underpin hard sci-fi settings, with dramatic systems appearing most often with space opera type backgrounds.

But what about the dice? When there is a conflict or challenge, the players will roll dice to see if they succeed or fail, with the exact details determined by the system they are using. Simulation games often use “percentile” dice in order to model actual probability distributions; you roll two non-cuboidal dice (either 10- or 20-sided), with one die acting as the “tens” and the other acting as the “units”, to obtain a percentage. Some games will only use one specific type of die (for example, WEG’s dramatic d6 system, used in the original Star Wars RPG), whilst others will use pretty much every size and shape available. Some, such as Erick Wujcik’s 1991 Amber game (based on Roger Zelazny’s novels), even do away with dice altogether, although they still retain some sort of conflict resolution mechanism (often involving that other great gaming device, playing cards).
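
The percentile mechanic is simple enough to sketch in code. The snippet below is a generic illustration rather than any particular published system (conventions vary - here a double zero reads as 100, and a roll at or under the character’s skill rating counts as a success, as in many simulation games):

```python
import random

def roll_percentile(rng=random):
    """Roll two d10s: one supplies the tens, the other the units."""
    tens = rng.randint(0, 9) * 10
    units = rng.randint(0, 9)
    result = tens + units
    return 100 if result == 0 else result  # "00" conventionally reads as 100

def skill_check(skill, rng=random):
    """Roll-under check: succeed on a roll at or below the skill rating."""
    return roll_percentile(rng) <= skill

roll = roll_percentile()
print(f"Rolled {roll} against skill 65:", "success" if roll <= 65 else "failure")
```

Because each of the hundred outcomes is equally likely, a skill of 65 succeeds exactly 65% of the time - which is why percentile dice suit simulation-minded systems so well.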

The very first official science fiction table-top RPG, Metamorphosis Alpha, was produced by TSR in 1976. This particular game, set on a spaceship inhabited by the descendants of those who survived an onboard disaster, is in essence a science fantasy game, borrowing heavily from its predecessor D&D. It was closely followed by GDW’s Traveller in 1977; very much a hard sci-fi system, it used vectors to calculate the movement of spacecraft, but could also be played as space opera if all the complicated maths wasn’t your thing.

Both Star Wars and Star Trek have had their own RPGs (in the case of Star Trek, quite a few different ones). Larry Niven’s Ringworld, a hard sci-fi classic, got its own RPG in 1984 and the scientific romances were the inspiration for GDW’s Space: 1889 game in (confusingly) 1988, making it technically the first steampunk game. The same year, the first cyberpunk RPG appeared: R. Talsorian Games’ Cyberpunk (its better-known second edition, Cyberpunk 2020, followed in 1990). This was quickly followed by FASA’s Shadowrun in 1989 which, like its predecessor Metamorphosis Alpha, blended science fiction with fantasy elements. Scientific romances were the subject of Marcus Rowland’s 1993 shareware game Forgotten Futures, which lovingly plundered the sci-fi stories of Rudyard Kipling, Conan Doyle and George Griffith (to name but a few). A year later in 1994, R. Talsorian’s Castle Falkenstein became the steampunk equivalent of Shadowrun with its mix of fantasy and technology.

Although many gamers see the 1990s as the golden age of table-top RPGs, such games are still in production, often as re-releases of earlier games or as new rules systems for popular settings (such as Star Trek). Some, however, are original, like Cubicle 7’s Victoriana (steampunk) and Robin D. Laws’ upcoming Ashen Stars for Pelgrane Press.

Monday, 1 August 2011

The Tales We Tell, The Games We Play: Part 1 - The Tales We Tell

(Lynne Hardy)

The Oxford English Dictionary defines science as “a branch of knowledge involving the systematized observation of and experiment with phenomena.” By definition, therefore, early man must have been a scientist, carefully observing and experimenting with the world around him merely to survive. But man, by nature, is also a story-teller. We may never know which story was the first to be told, but it’s fairly safe to assume that as long as there has been fiction, there has also been science fiction of one sort or another.

Across the globe, many myths have science fiction elements, including flying machines, fantastical devices and time travel, although the line between fantasy and technology often becomes blurred. This is by no means limited to ancient stories; more modern creations such as L. Frank Baum’s Oz books and Roger Zelazny’s Chronicles of Amber also blend elements of outright fancy with scientific elements to great effect.

Some trace the emergence of true science fiction to the Enlightenment of the 17th and 18th Centuries, when the Western world’s understanding of science blossomed. Others identify Mary Shelley’s Frankenstein, published in 1818 as the Industrial Revolution gathered pace, as the first true science fiction novel. Today it tends to be seen very much as gothic horror, but it relies heavily on extrapolating the scientific understanding of its day to extreme, fantastical ends.

There are many sub-genres of science fiction; it may even be classified as “hard” or “soft”. Soft science fiction is that which emphasises character and story over technology (or focuses on the so-called “softer” social sciences), whilst hard focuses very much on the detail and intricacies of that technology (and therefore has a strong physics and engineering bent).

In terms of sub-genres, probably the first to develop clearly in the mid- to late-19th Century was that of the scientific romance, typified by the works of Jules Verne and H.G. Wells. These tended to be sweeping adventure stories or commentaries on society with science acting very much as a supporting character or means to an end. The term is occasionally used to refer to a much newer sub-genre, that of steampunk, usually set in the Victorian or Edwardian era and having at its core steam-powered equivalents of modern technological devices (making them, effectively, alternative history stories).

Space opera, like its predecessor scientific romance, also tends to the sweeping and melodramatic but with a definite view to the stars. It became massively popular through the pulp magazines of the 1920s and 30s, although to a modern audience its most famous examples would have to be George Lucas’ Star Wars saga and the Star Trek franchise (although the latter does sometimes stray into the occasional bit of hard science fiction with its techno-babble).

Hard science fiction stories are underpinned by accurate science or that which can be logically extrapolated from our current understanding of the world around us. Often these stories are criticised for their lack of characterisation and some can be a little dry. The term itself came into use in the 1950s as our technological expansion once again gathered pace and for the first time it looked as if we truly would be able to reach out and touch the stars. Arthur C. Clarke is one of the most famous proponents of the field.

Just as scientific romance has steampunk, modern science fiction has cyberpunk, a branch that emerged in response to the technological advances in computing in the 1980s. Invariably the settings are bleak, dystopian futures where hi-tech rules the roost, even if the methods by which that technology works in some cases are far removed from science fact. One of the most famous cyberpunk authors, William Gibson, could also be credited as the father of steampunk, at least in its earliest modern form.

In some cases, science fiction has itself been the inspiration for science fact: submarines, satellites, mobile phones, e-books and robots to name but a few have all taken their inspiration from story-telling. Or have been predicted by it, the line between prediction and inspiration being as hazy as that between science fiction and fantasy in some places. In fact, the word robot comes to us from science fiction, first appearing in the 1921 play Rossum’s Universal Robots by Karel Čapek.

However, for all their ability to engage us as readers or viewers, science fiction stories as discussed here require us to be passive participants in the narrative. There is another medium, that of interactive story-telling, that has been just as heavily influenced by science fiction and we will look at that in the second part of this article.