Participants
***
Since broadband began its inexorable spread at the start of this millennium, internet use has expanded at a cosmic rate. Last year, the number of internet users topped 2.4 billion—that’s more than a third of all humans on the planet. Time spent in front of a screen averaged 16 hours per week globally, double that in high-use countries, and much of it went to social media.
We have most certainly changed how we interact. But are we also changing what we are?
We put that question to three people who have written extensively on the subject, and brought them together for a discussion with Serge Schmemann, editorial page editor of The International Herald Tribune.
SERGE SCHMEMANN: The question we are asking is: Are human beings being turned into cyborgs? Are advances in digital technologies changing us in a more profound and, perhaps even troubling, way than any previous technological breakthrough?
Let me start with Baroness Greenfield. Susan, you have said some very scary things about the impact of the internet not only on how we think, but on our brains. You have said that new technologies are invasive in a way that the printing press, say, or the electric light or the television were not. What is so different?
SUSAN GREENFIELD: Can I first qualify this issue of ‘scary’? What I am really trying to achieve here is to stimulate the debate and to keep extreme black-or-white value judgements out of it. Whether people find it scary or not is a separate question.
Very broadly, I would like to suggest that technologies up until now have been a means to an end. The printing press enabled you to read both fiction and fact that gave you insight into the real world. A fridge enabled you to keep your food fresh longer. A car or a plane enabled you to travel farther and faster.
What concerns me is that the current technologies have been converted from means into ends. Instead of complementing or supplementing or enriching life in three dimensions, an alternative life in just two dimensions, stimulating only our hearing and vision, seems to have become an end in and of itself. That is the first difference.
The second is the sheer pervasiveness of these technologies compared with the others. It is one thing for someone like my mum, who is 85 and a widow, to go onto Facebook for the first time (not that she has done this, but I would love for her to, to actually widen her circle and stimulate her brain). It is quite another that, according to some statistics now coming out, more than 50 per cent of kids aged between 13 and 17 spend 30-plus hours a week recreationally in front of a screen.
So what concerns me is not the technology in itself, but the degree to which it has become a lifestyle in and of itself rather than a means to improving your life.
SCHMEMANN: Maria, I have seen some amazing statistics on the time you spend online, on your tablet, and also on reading books and exercising. You seem to have about 30 hours to your day. Yet you have argued that the information diet works like any good diet: you should not think about denying yourself information, but rather about consuming more of the right stuff and developing healthy habits. Has this method worked for you? How do you filter what is good for you?
MARIA POPOVA: Well, I don’t claim to have developed any sort of universal litmus test for what is valuable for culture at large; I can only speak for myself. It is sort of odd to me that this personal journey of learning, which is what my site and my writing have been, has attracted quite a number of people who have tagged along for the ride. And a little caveat to those statistics: a large portion of that time is spent with analog stuff, mostly books, and a lot of them old, out-of-print books.
Which brings me to the cyborg question. My concern, in keeping with Baroness Greenfield’s point, is really not the degree to which technology is being used, but the way in which we use it.
The World Wide Web, by and large, is really well-designed in that it helps people find more of what they already know they are looking for, and really poorly designed in terms of helping us discover that which we don’t yet know will be of interest to us and hopefully even change the way we understand the world.
One reason for this is the enormous chronology bias in how the Web is organised. Think of any content management system or blogging platform (many of which, by the way, the mainstream media use as their online presence), be it WordPress or Tumblr, or even the Twitter and Facebook timelines: they are wired for chronology, so that the latest floats to the top, and we infer that this means the latest is the most meaningful, most relevant, most significant. Older things that could be both timeless and timely get buried.
So a lot of what I do is to try to resurface these old things. Actually, in thinking about our conversation today, I came across a beautiful 1945 essay that was published in The Atlantic by a man named Vannevar Bush, who was the director of the Office of Scientific Research and Development. He talks about information overload and all these issues that, by the way, are not at all unique to our time. He envisions a device called the Memex, a portmanteau of memory and index; he talks about the compression of knowledge, how all of the Encyclopaedia Britannica can be put into the Memex, and we would use what we now call metadata and hyperlinks to retrieve different bits of information.
His point was that at the end of the day, all of these associative relations between different pieces of information, how they link to one another, are really in the mind of the Memex’s user, and can never be automated. While we can compress the information, that alone is not enough, because you need to be able to consult it.
And that is something I think about a lot, this tendency to conflate information and knowledge. Ultimately, knowledge is an understanding of how the different bits of information we collect fit together; there is an element of correlation and interpretation. While we can automate the retrieval of information, I don’t think we can ever automate the moral end of making sense of it and making sense of ourselves.
SCHMEMANN: Evgeny, in your book, The Net Delusion: The Dark Side of Internet Freedom, you paint a fairly ominous picture of the internet as almost a brave new world: a breeding ground, you say, not of activists but of slacktivists, the sort of people who think that clicking on or liking a Facebook petition, for example, counts as a political act. Do you think that technology has taken a dangerous turn?
EVGENY MOROZOV: I don’t think that any of the trends I have been writing about are the products of some inherent logic of technology, or really of the internet itself. To a large extent, they are the products of a political economy and the various market conditions that these platforms operate in.
It just happens that sites like Facebook do want to have you clicking on new headlines and new photos and new news from your friends, in part because the more you click, the more they get to learn about you; and the more they get to learn about you, the better tailored the advertising they can sell.
In that sense, the internet could be arranged very differently. It doesn’t have to be arranged this way. The combination of public and private funding and platforms we have at the moment makes it more likely that we will simply click and move on, rather than, say, read deeply and get beneath the surface of one particular link.
As for the political aspect of the internet, I didn’t mean to paint a picture that is as dark as you suggest. As a platform, as a combination of various technologies, the internet does hold huge promise. Even Facebook can be used by activists for smart and strategic action.
The question is whether this will displace other forms of activism, and whether people will think they are campaigning or signing up for something very important when they are, in fact, simply joining online groups that have very little relevance in the political world—a state of affairs their governments are actually very happy with. Many authoritarian governments that I document in the book are perfectly OK with young people expressing discontent online, so long as it doesn’t spill over into the streets.
What I am campaigning against is the idea that social media and internet platforms can somehow replace the whole process of creating, making and adjusting strategy. They cannot. We have to be realistic about what these platforms can deliver, and once we are, I think we can use them to our advantage.
SCHMEMANN: You have all spoken of the risk of misusing the new technology. But is not such apprehension about new technology as old as technology itself?
POPOVA: I think one of the most human tendencies is to want a concrete answer and a quantifiable measure of everything. And when we deal with degrees of abstraction, which is what any new technology in essence compels us to do, it can be very uncomfortable. Not to cite historical materials too much, but it reminds me of another old essay, this one by Abraham Flexner, written in 1939 and called The Usefulness of Useless Knowledge. In it, he argues, basically, that curiosity is what has driven the most important discoveries of science and inventions of technology. That is something very different from the notion of practical or useful knowledge, which is what we crave. We want a concrete answer to the question, but at the same time it is this sort of boundless curiosity that has driven most of the great scientists and inventors.
MOROZOV: It is true that virtually all new technologies trigger what sociologists would call moral panics. It is also true that there are a lot of people concerned with the possible political and social consequences, and that this has been the case throughout the ages. So in that sense we are not living through unique or exceptional times.
That said, I don’t think you should take this too far. Surrounded by all of this advanced technology now, we tend to romanticise the past; we tend to say, ‘Well, a century ago or even 50 years ago, our life was completely technologically unmediated; we didn’t use technology to get things done and we were living in this nice environment where we had to do everything by ourselves.’
This is not true. If you trace the history of mankind, our evolution has been mediated by technology, and without technology it isn’t really obvious where we would have been. So I think we have always been cyborgs in this sense.
You know, anyone who wears glasses, in one sense or another, is a cyborg. And anyone who is reliant on technology in daily life to extend their human capacity is a cyborg as well. So I don’t think that there is anything to be feared from the very category of cyborg. We have always been cyborgs and always will be.
The question is, what are some of the areas of our life and of our existence that should not be technologically mediated? Our friendships and our sense of connectedness to other people can perhaps be mediated, but if so they have to be mediated in a very thoughtful and careful manner, because inter-human relations are at stake. Perhaps we do have to be more critical of Facebook, but we have to be very careful not to criticise the whole idea of technological mediation. We only have to set limits on how far this mediation should go, and how exactly it should proceed.
GREENFIELD: I don’t fear the power of the technology and all the wonderful things it is capable of doing; these are irrefutable. I worry more about how it is being used by people. The human mind (and this is where I do part company with Evgeny) is not one that we could say has always been a cyborg. There is no evidence for this statement. Niels Bohr, the famous physicist and a father of atomic theory, once admonished a student: “You are not thinking; you are just being logical.” I think it actually demeans human cognition to reduce it to computational approaches and mechanistic operations.
I am worried about how that mind might be sidetracked, corrupted, underdeveloped—and whatever other word you want to use—by technology.
Human brains are exquisitely evolved to adapt to the environment in which they are placed. It follows that if the environment is changing in an unprecedented way, then the changes in the brain’s processes too will be unprecedented. Every hour you spend sitting in front of a screen is an hour you are not talking to someone, not giving someone a hug, not having the sun on your face. So the fear I have is not with the technology per se, but the way it is used by the native mind.
MOROZOV: There are a great many things I could say in response. The choice to view everything through the perspective of the human brain is a normative choice that we could debate.
I’m not sure the brain is the right unit of analysis; there is a cultural tendency to reduce everything to neuroscience. Why, for example, should we be thinking about these technologies from the perspective of the user and not of the designer?
GREENFIELD: The user constitutes the bulk of our society. That’s why. They are the consumers, they are the ones who....
MOROZOV: I know, but, for example, perhaps I want to spend more time thinking about how we should inspire designers to build better technologies. I do not want us to end up with ugly and dysfunctional technologies and then shift the responsibility for them onto the user.
GREENFIELD: But Evgeny, the current situation is constituted by the current users....
MOROZOV: ....but it shouldn’t be left up to the individuals to hide from all the ugly designs and dysfunctional links that Facebook and other platforms are throwing at them, right? It is not just a matter of not visiting certain websites. It is also trying to alert people in Silicon Valley and designers and....
GREENFIELD: Yes, they have got minds as well, so I would not disenfranchise them. Everything starts with the people. It is about people, and how we are living together and how we are using the technology.
POPOVA: To return to the point about cyborgs, I think both of you touch on something really important here, which is this notion of what the human mind is supposed to do, or what it does. A cyborg is essentially an enhanced human, and I think a large portion of the cyborgism of today is algorithms.
So much of the fear is that rather than enhancing human cognition, algorithms are beginning to displace or replace meaningful human interactions.
With Google Street View’s new neural-network, artificial-intelligence technology, for example, it is possible to discern whether an object is a house or a number—something a human would previously have had to sort through the data to do.
That is an enormous gain in efficiency over what we used to have. But the thing to remember is that these are concrete criteria, binary decisions: is this a house, is this a number? As soon as it begins to bleed into the abstract (is this a beautiful house, is this a beautiful number?), we can no longer trust an algorithm, or even conceivably hope that an algorithm would be able to decide.
So the fear that certain portions of the human mind will be replaced or displaced is very misguided. You have both talked a lot about this notion of choice. The future is choice, both for us as individuals—what we choose to engage with, what careers we take, whether we want to hire the designers in Silicon Valley to build better algorithms—and at a governmental and state level, where the choice is what kind of research to fund.
My concern is that the biases in the way knowledge and information are organised on the Web aren’t necessarily in humanity’s best interest. Think of the so-called social curation algorithms that recommend what to read based on what your friends are reading: there is an obvious danger there. Eli Pariser called it the ‘filter bubble’ of information, and it is not broadening your horizons.
I think the role of whatever we want to call these people, information filters, curators, editors or something else, is to broaden the horizons of the human mind. The algorithmic Web can’t do that, because an algorithm can only work with existing data. It can only tell you what you might like based on what you have liked.
GREENFIELD: Maria, you mentioned differentiating information from knowledge. Whilst we can easily define information, knowledge is a little more elusive. My own definition of knowledge, or true understanding, is seeing one thing in terms of other things. Take Shakespeare’s “Out, out, brief candle”: you can only really understand it if you see the extinction of a candle in terms of the extinction of life.
In order to have knowledge, you need some kind of conceptual framework. You need a means for joining up the dots with the information or the facts that you have encountered throughout your life, not someone else’s life. Only when you can embed a fact or a person or an event within an ever wider framework do you understand it more deeply.
Speaking of Google, there’s a wonderful quote from Eric Schmidt, its chairman, “I still believe that sitting down and reading a book is the best way to really learn something. And I worry that we’re losing that.” So whilst we shouldn’t be too awed by the power of information, we should never, never confuse it with insight.
POPOVA: I completely agree. This conflation of information and insight is something I constantly worry about. Algorithms can help access information, but the insight we extract from it is really the fabric of our individual, lived experience. This can never be replaced or automated.
SCHMEMANN: Let me relate what you say to my own craft: journalism. We in what is now condescendingly called ‘the legacy media’ live in terror of the internet, and the sense that it is creating a kind of information anarchy. Our purpose has always been to apply what you have called experience, knowledge, judgement and order to what we call news.
Now the internet and Facebook have not only assumed this function; they also create communities of people who share the same prejudices, the same ideology. This may be a greater danger than the shift of newspapers to a different platform.
MOROZOV: If it’s really happening, it is a danger. But I’m not convinced that it’s actually happening. The groups that are hanging out in bubbles—whether it’s the liberals in their bubble or the conservatives in their bubble—tend to venture out into sources that are the exact opposite of their ideological positions.
You actually see liberals checking Fox News, if only to know what the conservatives are thinking. And you are seeing conservatives who venture into liberal sources, just to know what The New York Times is thinking. I think there is a danger in trying to imagine that those platforms—the Net, television, newspapers—all exist in their own little worlds and don’t overlap.
GREENFIELD: I think a related issue, if you compare conventional print and broadcast media with the internet, is speed. When you read a paper or a book, you have time for reflection. You think about it, you put it down to stare at the wall. Now what concerns me is the way people tweet instantly. As soon as they have some experience, some input, they are tweeting, for fear that they may lose their identity if they don’t make some kind of instant response.
This is a concern for me, quite apart from the obvious lack of regulation and the slander and unsubstantiated lies that people spread around: that people no longer have time for reflection.
POPOVA: If I may slightly counter that, I would argue that there’s actually an enormous surge in interest in a sort of time-shifted reading—delayed and immersive reading that leaves room for deeper processing. We have seen this with the rise of apps like Instapaper and Read It Later and long-form ventures like The Atavist and Byliner, which are essentially the opposite of the typical experience of the Web, which is an experience of constant stimulation and flux.
These tools allow you to save content and engage with it later in an environment that is controlled, that is ad-free, that is essentially stimulus-free, other than the actual stimulus in front of you.
GREENFIELD: I’ll just add one more thing, and that is the alarming increase in prescriptions for drugs used for attentional disorders in most western countries over the last decade or two. Of course, it could be doctors are prescribing more liberally or that attentional illnesses are now becoming medicalised in a way they were not before. But my own view, especially for the younger brain, is that if you take a brain with the evolutionary mandate, which the human brain has, to adapt to the environment; if you place such a brain in an environment that is fast-paced, loud and sensory-laden, then the brain will adapt to that. Why would it compete with the other, three-dimensional world?
And whilst the apps that Maria raises are fine for the more mature person, younger kids could be handling them in a very different way. My concern is that we are heading toward shorter attention spans and a premium on sensationalism over abstract thought and reflection.
SCHMEMANN: Susan, having described all these dangers you perceive, do you think this is something that we as people or we as governments or we as institutions need to work on? Does this require regulations, or do you think the human spirit will sort it out?
GREENFIELD: My emphasis would be away from regulation, to education. You can regulate till you are blue in the face; it doesn’t make it any better. I think that, although I sit in the House of Lords and although we debate on all the various regulations on how we might ensure a more benign and beneficial society, what we really should be doing is thinking proactively about how, for the first time, we can shape an environment that stretches individuals to their true potential.
SCHMEMANN: Picking up a bit where Susan was, Evgeny, in your book you talk a lot about the political uses and misuses of the internet. You talk about cyber-utopianism, internet-centrism, and you call for cyber-realism. What does that mean?
MOROZOV: For me, internet-centrism is a very negative term. By that I mean that many of our debates about important issues essentially start revolving around the question of the internet, and we lose sight of the analytical depths that we need to be plumbing.
The problem in our cultural debate in the last decade or so is that a lot of people think the internet has an answer to the problems that it generates. People use phrases like “This won’t work on the internet”, or “This will break the internet”, or “This is not how the internet works”. I think this is a very dangerous attitude because it tends to oversimplify things. Regulation is great when it comes to protecting our liberties and our freedoms—things like privacy or freedom of expression—or curbing hate speech. No one is going to cancel those values just because we are transitioning online.
But when it comes to things like curation, or whether we should have e-readers distributed in schools, this is not something that regulation can handle. This is where we will have to make normative choices and decisions about how we want to live.
POPOVA: I think for the most part I agree with Evgeny. I think much, if not all of it, comes down to how we choose to engage with these technologies.
Immanuel Kant had three criteria for defining a human being. The first was the technical predisposition: manipulating things. The second was the pragmatic predisposition: using other human beings and objects for one’s own purposes. I think these two can, to some degree, be automated, and we can use the tools of the so-called Digital Age to maximise and optimise them.
Kant’s third criterion was what he called moral predisposition, the idea of man treating himself and others according to principles of freedom and justice. I think that is where a lot of fear comes with the Digital Age. We begin to worry that perhaps we are losing the moral predisposition or that it’s mutating or that it’s becoming outsourced to things outside of ourselves.
I don’t actually think this is a reasonable fear, because you can program an algorithm to give you news and information, and to analyse data in ways that are much more efficient than a human could.
But I don’t believe you could ever program an algorithm for morality.