Inaugural Lecture: Professor Shannon Vallor The School of Philosophy, Psychology and Language Sciences is very proud to announce the inaugural lecture of our colleague, Prof Shannon Vallor: "In a Mirror, Dimly: Why AI Can’t Tell Our Stories, and Why We Must". The event is open to the public and all colleagues, friends and students are welcome to attend. Recording of Professor Shannon Vallor's Inaugural Lecture (transcript): My new book, 'The AI Mirror' (this is that moment where you see if the clicker actually works; there we go) uses the mirror as a metaphor to help us better understand how today's powerful AI tools work: what they are and what they are not, what they can do for us and what they can't, and what they can do but probably shouldn't. Throughout the book, I'm going to talk about AI in the context of our humanity, that is, those capabilities that we most treasure in the human animal and that are most essential to our shared flourishing together on this planet. In this lecture, I'm going to explore the rapidly accelerating impact of AI on our humanity by talking about four things: mirrors, space, time, and stories. Let's start with mirrors. Understanding AI and its implications for humanity is a daunting and ever-expanding task. We need to grasp AI's accelerating impacts on democracy, education, the sciences, media, the arts, and our climate. We have to consider how AI is changing how we acquire information, how we carry out our daily work, how we make life and death decisions, how we find companionship, and how we write books. How do you understand a new social force so vast and encompassing that it is, all at the same time, changing how we make food, music, love, laws, and wars? When we need to understand a new thing, we seek a metaphor, a way to link the new thing with something that we already understand. No metaphor is ever perfect. But the mirror metaphor for AI is far more apt and illuminating, much safer and less deceptive, than the familiar metaphor that we're encouraged by the AI industry to use for understanding their tools, namely the metaphor of a mind. Now you might wonder: how can a mirror possibly be a more fitting and useful metaphor than a mind for something that contains the term intelligence, a word we only associate with minds? Well, let me explain. In some sense, all technologies are mirrors of humanity. Technology is an extension of human values and imagination into the built world. Every technology, from the wheel to the book to the engine to the computer, reflects what humans of certain times and places have thought was worth doing, worth making, worth enabling or worth trying. But today's AI tools are mirrors in a much more powerful sense. At a book talk earlier this year, a physicist in the audience approached me afterward to tell me that she'd been convinced by my talk that the mirror is not just a metaphor for AI, that AI tools like large language models functionally just are mirrors, only of a mathematical rather than physical construction. Before I explain how this could be true, it's important to point out that AI is not one technology, but many. AI is much more than technology, too. It's a distinct field of scientific research, one that's been carried out here at Edinburgh for over 60 years. But today, most people in the world know AI as a suite of tools and techniques for solving human problems with computers. The mirror metaphor is a stronger match for some kinds of AI than others. 
It's best suited to describe machine learning tools trained on large volumes of human-generated data, which fortunately happens to be the kind of AI that you've heard the most about lately, namely large language models and their cousins, the foundational technology behind consumer-facing generative AI applications like ChatGPT, Gemini, and Claude. I'm not going to get too deep into the weeds of how large language models are constructed, but a simplified understanding of how mirrors work can help anyone grasp the basics. When you see an image in a glass mirror, it's a result of three interacting components. The first is a polished and coated glass surface that has the required physical properties to receive and reflect light. The second is the light itself. The third is the reflected image that you see when you look at the glass. Now, when you talk to something like a large language model, a very similar set of components is interacting with you. First, corresponding to the polished glass surface is the machine learning algorithm. That's a set of encoded software instructions for training something called a machine learning model. A machine learning model is a large, complex mathematical structure extracted from an initial data set. The learning algorithm first structures and then further trains the model by specifying how that model will change as it is exposed to incoming data, much as the physics of a polished mirror determine how the glass will receive and reflect incoming light. Imagine if a mirror's glass surface could be physically altered in subtle ways by the light that fell on it, such that the more light hit the mirror, the better the glass performed in reflecting it. Now you've got a basic intuitive grasp of how machine learning models work. We create a reflective mathematical surface using a type of algorithm that can polish the model surface further by being exposed to a large volume of training data. When the training process is complete, the mirror is polished to its final or near final state, and we have a mathematical structure, what we call a trained model, that can receive and successfully reflect the patterns in any new relevant data given to it. Light is the second component of the metaphor. This is an easy one, right? Data is the light. Instead of visible light falling on physically polished glass surfaces, AI mirrors work when stored digital data falls on mathematically polished algorithmic surfaces. The model gets its initial properties, what we call weights or parameters, by means of exposure to a torrent of digital light: our data, all the hoarded data that large AI companies have grabbed and scraped and stolen from us, so that they can beam our collected light, the light of our minds, into their own tools, training them to reflect new light, new data inputs, back to us in ways that will look intelligent. That's the third and final component of an AI mirror structure. The reflected images they show back to us are the outputs we get when we use the model, whenever we enter the new light of our prompts and queries, whether those are typed by hand into a ChatGPT window, spoken into your phone, or by a gesture like pressing an icon on your screen. The answers that we get back from the model, the intelligent-seeming replies in text, audio or video form, are just composite reflections of all the data taken from us that now resides in the model as a mathematical structure. 
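A minimal sketch of that "polishing," assuming Python with numpy (this is an illustrative toy, not anything shown in the lecture): a learning algorithm nudging a model's weights, its reflective surface, each time training data, the light, falls on it.

```python
import numpy as np

rng = np.random.default_rng(0)

# The "light": training data drawn from a simple underlying pattern.
x = rng.uniform(-1, 1, size=(200, 1))
y = 3.0 * x + 0.5 + rng.normal(scale=0.05, size=(200, 1))

# The "surface": a tiny model with one weight and one bias, initialised at random.
w = rng.normal(size=(1, 1))
b = np.zeros((1, 1))

# The "polishing": gradient descent, which specifies exactly how the surface
# changes as each pass of data falls on it.
learning_rate = 0.1
for step in range(500):
    pred = x @ w + b                  # reflect the incoming data
    error = pred - y
    grad_w = (x.T @ error) / len(x)   # how the surface should shift
    grad_b = error.mean(keepdims=True)
    w -= learning_rate * grad_w       # polish the weights
    b -= learning_rate * grad_b

# After training, the model reflects the pattern that was in its data.
print(w.round(2), b.round(2))         # roughly [[3.]] and [[0.5]]
```

The toy here has two parameters rather than billions, but the loop is the same in spirit: exposure to data adjusts the surface until it reflects the patterns in that data, and nothing more.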
So, if AI tools like large language models are just mirrors made of math, why do the responses of a large language model sound so intelligent or conscious, even alive? It's because they're reflections of data taken from intelligent, conscious, living beings--from us. But if you now understand that this is a mirror image of intelligence, you realise that what looks intelligent is not. Think about when you walk into your bathroom in the morning and you look in the mirror. You know very well that there's not a second face with you in that room, right? No second body. If you're alone, there's only one face, one body in the room, along with a mirror image of it. You know that the image of the face that you're looking at has no depth, no warmth, no smell, no life. It's a reflection, not a duplicate. A reflection of a face is not a face. A reflection of a flower is not a flower. It needs no water. A reflection of a flame is not a flame. It does not burn. A reflection of a mind is not a mind. It does not think. Reflections are their own kind of thing. And as psychologists have demonstrated, only an infant or creature who hasn't passed the cognitive stage known as the mirror test will confuse the thing with its reflection. AI companies very much want you to fail the mirror test when it comes to their products. They want you to believe that you're seeing a new kind of intellect emerging on the other side of that algorithmic mirror, something even smarter, more powerful and more capable than you are. Something that can not only help you, but teach you, comfort you, even speak and write for you, and one day, a day that can't come soon enough for those who control the image, replace you. That's the power of the conjurer's mirror illusion, and the danger. Much like the Roman poet Ovid's tragic figure of Narcissus, the boy who lost his life to his captivation by his own reflection, we today are at risk of losing ourselves in our AI mirrors, slowly becoming convinced that the thing on the other side is just as real, just as much of a mind as we are. Maybe one day, even more so. In my new book, I describe more fully the power of these algorithmic mirrors not only to usefully reflect and magnify valuable patterns in our data, a power that can be lifesaving when used responsibly, but, like all mirrors, also to distort, confound and occlude our reality. For example, AI mirrors not only reproduce and magnify the patterns of injustice in our data, they routinely produce even more distorted versions of these, bouncing back to us funhouse mirror images of our histories of racial abuse and exclusion, or reflecting past worlds where women have never been CEOs (these are, of course, images returned by a generative AI model, in this case Stable Diffusion, from the prompt CEO), or worlds where women have never been artists. Even showing us nonexistent worlds: worlds where all attractive people are light-skinned and young, worlds where white people never use social services, and where productive people are only young men with too much hair product. [Audience laughter] Now, these unpredictable distortions can be really absurd. Consider Google's fine-tuned AI that was designed to remedy some of these distortions and instead wildly overcompensated for its underlying biases by portraying Nazis as racially diverse. Or consider its AI Overview tool telling us that nutritionists want us to eat one rock a day, or that you should glue your cheese onto your pizza to secure it. 
Or we have X's tiresome Grok bot, making up a news story in April about New York City Mayor Adams sending the NYPD into the subway to shoot the April earthquake before it could strike again. Now, we should be grateful, actually, for these more ridiculous distortions, not just because they're really entertaining, but because they quickly puncture for us the mirror illusion of a machine mind and show the utter absence of thinking and understanding behind these hollow reflections. We need that perspective when falsehoods sound dangerously real. Like the same Grok newsbot falsely claiming, also in April, that Iran had struck Israel with missiles. Or when New York City's AI bot from Microsoft confidently advised local businesses to break the law, telling them that it was legal to discriminate against disabled people or the elderly, or to steal their workers' tips. But it's important to know that there's no more thinking behind the reflection when these AI mirror images get things right. AI tools can often get the reflection right. When they do, they serve the primary purpose of a mirror: to reveal things that can be impossible for us to see otherwise. While the reliability of large language model outputs remains a major problem, the power of AI mirrors is well demonstrated by machine learning models trained on narrower but higher quality data sets than large language models. These are the types of AI models that can reveal early signs of cancer or sepsis in our medical data, or show evidence of insider trading in our email and banking history. Like all AI mirrors, however, they are also powerful magnifiers of the patterns found in our data, from patterns of subtle truths, which are quite helpful to magnify for our sight, to the harmful stereotypes and unjust biases that they reproduce in nearly every context. But mirrors do far more than reveal and magnify our reality. Because, as I've shown, they also distort reality, they can be instruments of deception. There's a reason that we write warnings on glass mirrors like this, when they distort our vision of what is true. AI mirror images are now being widely used to confound us, as mirrors have always been by entertainers and charlatans alike. They can send us into surprised laughter at Balenciaga Pope-- truly delightful. I enjoy that a lot. But they can also deceive us with fraudulent, deep fake images, audio and video designed to gin up your political rage over a nonexistent campaign speech, to make you break up with your involuntarily pornified girlfriend, or to get you to wire cash to the fake face of your pleading parent or child, a face that you would swear on your life is begging you over that phone, in that video, for help, crying in their own distinctive voice, moving their own injured body, and shedding their own desperate tears. In my book, I ask us to consider the heavy social, psychological, and increasingly environmental cost to us of this new mirror world, and how we might refuse the offer and rewrite its terms in our favour. But in this talk, I want to take a different turn. I want to focus on how these AI mirrors, which can be used as tools to enlarge our possibilities, enhance our freedom, and secure our futures, are instead increasingly being used to steal these from us. To snatch away from us the space and time that we use to think, the power we hold to know and craft our own narratives, and the confidence we need to choose and build more humane futures. 
This talk is a call to reclaim our humanity, to retake our space, our time and our stories. So, now that the mirror metaphor has been explained, we're going to slow down the visuals just a little bit while I lay out some deeper ideas. In my book, I talk about the space of reasons, especially the space of moral reasons. That's another metaphor. Sorry about that. The philosopher Wilfrid Sellars spoke last century of the space of reasons as the field of shared human thinking, where we explore and exchange thoughts about what might justify our beliefs and claims. Knowledge is social, and it's made up not of singular truths or data points, but instead of the web of justifying reasons that thinking weaves to hold those truths and our minds together in a shared grasp of reality. The spatial metaphor is important. It reflects the fact that thinking is action: like physical limbs, thoughts need space to move in. When we think, and especially when we reason, we need an understanding of the possibilities that still lie open to us. We need to be able to see ahead of us the contours of known reality that both constrain and enable our ambitions, whether we're navigating our shared physical reality, our social, economic, and political reality, or our moral reality. We need to explore what the space of the real world allows us and what it forbids us. And so, to reason, we need to use thought as a way to stretch into that open space and imagine the different paths that we might travel through it. The space of reasons has to be held open in the mind for this to happen. That's why we speak in spatial terms of the virtues of an open mind and the vices of a closed mind. An open mind has room to think and reason with others, room to navigate objections and contradictions, space to chart the new routes that will justify unfamiliar or contested ideas. A closed mind is boxed in, immobilised and helpless. This is why, of course, philosophers following the model of Socrates are trained to ask many questions before they attempt to answer one, and to follow every answer with even more questions. It's also why we're super irritating. Well, not the only reason, but it's up there. The questions are what hold open the space around the answers and allow for better ones to come along. AI tools can absolutely be designed to help us hold open the space of reasons, but for the most part, they aren't. They're designed to instantaneously and tirelessly fill our mental space with answers. The one intelligent thing you'll not likely hear a large language model say to you, unless it's been specifically instructed by you to respond in this way, is 'I don't really know,' or 'I'm not sure,' or 'why don't you tell me?' Instead, the version of AI mirrors now being aggressively pushed by platform AI companies like Microsoft, Google and Meta into the hands of educators, novelists, journalists, artists, doctors, judges, policymakers, scientists, and business leaders everywhere is a mental space-filling machine. You want to know what 'Moby-Dick' is really about, besides a whale? Are you curious about the strongest and weakest arguments against imposing term-limits on judges? Do you want to know what you should do to repair a damaged friendship? Do you want to know how to find a better ending to your novel? An AI mirror has the answers, or at least some kind of answers, and will deliver them to you in milliseconds with a tone of unflagging confidence, so that you need not walk one more step through the endless space of reasons. 
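A minimal sketch of that point, assuming the OpenAI Python SDK and an illustrative model name (any comparable chat API would behave similarly; this is not an example from the lecture): left to its defaults, the model fills the space with a confident answer, and only an explicit instruction makes room for 'I don't know.'

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

question = "What will be the dominant artistic movement of the 2040s?"

# Default behaviour: the model will almost always produce a confident guess.
default = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": question}],
)

# Explicitly instructed behaviour: the model is told it may decline to answer.
hedged = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "If you cannot actually know the answer, say 'I don't know' "
                    "and explain why, rather than guessing."},
        {"role": "user", "content": question},
    ],
)

print(default.choices[0].message.content)
print(hedged.choices[0].message.content)
```

The uncertainty has to be asked for; it is not the default posture of the mirror.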
The pesky business of thinking can be done away with as soon as possible. Efficiency at last. Because of course, efficiency is the ultimate goal behind today's AI gold rush. We could try to use AI mirrors as aids to our own learning, thinking, deliberating and deciding together in the space of reasons, but that would be less efficient. It's not clear just how much help they can be right now because unlike the high dimensional vector space occupied by an AI model, the human space of reasons is not mathematically closed but open to the future. Unlike our minds, which can invent new patterns of thought, AI mirrors point backward in time. They know only the patterns in the light of the data we've already created. They can only mirror the shapes of the thoughts we've already had. This is not what it's like to be in the space of reasons where our thoughts can move forward, not just trawl backward. In the space of reasons, our thoughts are not restricted to the shape of old reasons. They can also take on new shapes forming new reasons. But like the light cones that constrain the travel of information in physics, AI mirrors cannot see or think into the future. They predict only what was already there in the training data. When I ask large language models about styles of art that have not yet been invented, they suggest extremely well established artistic approaches, like bio-sculpture. Yeah. We've tried that. Multi-sensory art. Yeah. No one's ever thought of that before. Using technology to visualise emotional states. We've been there and done that. AI systems, however, are rear-view mirrors. Try asking the latest GPT model, 'what kind of stories have not yet been written?' Here's what it suggests. It took me 10 minutes in an airport hotel bar. Yeah. We've written all this stuff. These are variations of themes that we have already invented. But we managed to invent these without having GPT draw us the map. Now, of course, you might fine-tune a model not to answer your questions, but instead to ask you new ones or to point out new paths through the problem space that others have seen, but that you didn't even know were there to explore. Some people do have the skills, time, motivation, and permission to tweak these tools for that purpose. But for most users, especially for the companies and governments who are betting on these tools as magic beans that will spoon--soon spark hots-- Why did I write 'soon spark sky high' and not test that out and know I was going to trip over that 12 times? Magic beans that will soon spark sky high productivity and efficiency gains. That's the whole point of these tools, and the only thing that justifies their immense cost. That is their ability to save us all the effort of thinking. Now, right now, generative AI mirrors like GPT and Gemini can't really do this. They're still too distorted and unreliable to generate massive profits. By design, they don't find facts. They make up outputs that look and sound like facts, which is a real problem when the truth actually matters to you or your organisation. Because they can't actually think their way out of a paper bag if the shape of that bag wasn't already seen in the training data. The algorithmic golden goose has yet to start laying profitable eggs for most companies, as Goldman Sachs observed in a summer report that spooked the whole AI investment community. But despite the scare, most investors are doubling down. Convinced that the next generation of AI mirrors will work out the kinks and deliver the dream. 
A thing that can do all the work of a mind without the inconvenient luxury and unpredictable freedom of thinking. And we can't take comfort in that prospect. It's actually the worst case scenario. The philosopher Alfred North Whitehead said in 1911 that civilization advances in proportion to the number of important mental operations that we can perform without thinking about them. He was speaking of mathematical techniques to solve complex problems with less effort, a kind of low-level cognitive automation, so that we could save more of our mental energy for higher order thinking and deciding. It was a pretty good point. But now that we have machine mirrors that will mimic the most plausible outcome of any thought process at all, I think we need to ask Whitehead's ghost and ourselves: what thoughts do the civilised keep? In a 2021 magazine essay of that title, I compared the seemingly astonishing ability of AI mirrors to generate plausible sounding answers to any complex problem to a robot being flung to the summit of a towering mountain by an immensely powerful catapult. It's kind of amazing when the robot lands on the peak, but it's not mountain climbing. And when the robot, or the AI mirror, misses the right answer, it often misses not by feet but by a mile. And God help the person who hires the robot as a mountain climbing guide or instructor. Of course, here's another metaphor: the mountain climbing is thinking. Because if you ask an AI mirror to describe the reasoning that it used to get an answer to a difficult problem, the cognitive mountain, you'll get a completely fictional, made up path. The mathematical vector space it traversed to get the answer doesn't actually look anything like our mountain, our mental space of reasons. When asked to give its reasons, an AI model does the same thing it did to get the original answer. It makes up the best sounding, most plausible guess at a valid chain of reasoning, which will often land in the ballpark, but sometimes 100 miles away. The point of the mountain climbing analogy is that thinking, like mountain climbing, is a skilled activity and an open ended process, not an event. A person who gets to either summit on the back of a flying robot learns squat about the mountain and acquires no skill. Just like mountain climbing, thinking requires repeated practice of scaling daunting peaks. Thinking skills must be gradually cultivated and mastered in the shared space of reasons. It's these skills that allow a thinker to get up the next mountain, a new cognitive challenge they haven't seen before. Maybe one that no one has seen before. Now, are we going to use AI mirrors to build those skills in ourselves and, most importantly, in the next generation? Right now it's not looking that way. Because to do that makes nobody a quick buck. That's a long term investment in human cognitive development, and that's not what the AI market wants. What the AI market wants is productivity with fewer steps. They want to shrink the human space of thinking as much as possible. Not only because it saves time, we can go from data set to decision summit in 3 seconds, not three days, three weeks or three years, but also because it cuts out the middleman or woman, you. You're not only slow, you're unpredictable, and so are your colleagues. You might question the whole task. An AI mirror is not going to do that. You might get together in the space of reasons and decide amongst yourselves that the original problem is misframed, or not a problem to be solved at all. 
You might decide that the most efficient and profitable short term solution to the problem is an unethical solution or an unsustainable one. You might turn left in the space of reasons when the market wants you to walk a straight line. What better solution to that eternal problem of business and politics than offloading the task to a machine that doesn't need to walk with us in the space of reasons at all? I've been talking about space, and now I'm going to talk about time before we finish by talking about our stories. But these are all connected, of course. We know that space and time aren't separate. From the standpoint of the universe, there's just spacetime, and conscious beings are the only ones who can perceive them apart. Stories are just how conscious beings make sense of our perceived space and our experienced time, how we give them meaning and how we come to understand what might come next from each. As narrative theorists of identity, such as Paul Ricoeur and others, have argued, narrative storytelling is not only the activity through which stable, reflectively aware selves come into being. It's how we grasp the full arc of our lives and our communities. Without the ability to tell and understand stories, there is no history, only disconnected events. No expanse of time, no growth, only fleeting moments. Now, AI mirrors have a peculiar relationship with time that I've already mentioned. Like the massive optical mirrors in orbit that produced this image from NASA by collecting light that travelled to the mirror from the early universe, AI mirrors are also confined to the light cone of the past. As I've said, the future is utterly closed to them, for their predictions can only extend the patterns found in the data, which of course is only the data that we already have. As the philosopher Abeba Birhane has observed, AI is an inherently conservative technology. Generative AI tools like ChatGPT, DALL-E and Gemini do have an unpredictability built into them, of course, which is why they can often surprise and amuse us with their unexpected answers, images, or predictions. That generative part of generative AI comes from a mechanism built in to introduce some artificial randomness in the selected output. Generative AI models even have control settings, called things like temperature, that allow us to ramp up that statistical variance and inject more surprise into their results. But this too is just a reflection of the past. The surprise isn't actually a departure from those learned patterns. It just uses a wider spread of our historical data, a wider statistical distribution of the likely answers within the past data cone, as the basis for the new prediction. The guess in this case is no less based on the past. It's just a slightly wilder guess. Just as our AI mirrors don't know the space of reasons that we walk in, they don't know its time either. That is, they don't have what philosophers like Edmund Husserl called our temporal horizon: the human experience of an open and indeterminate world, held open by the flow of conscious time, where something new is always just over the horizon of our present experience and about to come into view. What comes into view is often familiar, recognisable. The sun rises every single day, just as it has in all of human history. Taxes are always due, and my cat always wants breakfast way too early. But things and events of startling new shapes do arrive from time to time, on an utterly unpredictable schedule. 
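A minimal sketch, assuming Python with numpy (an illustration of the idea, not material from the lecture), of what a temperature setting does: the model's learned scores over candidate outputs stay fixed, and temperature only widens or narrows the probability spread used to sample from them.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical next-word candidates and the model's learned scores for them.
# These scores are fixed once training ends; they are the "past data cone."
tokens = ["the", "a", "mirror", "mind", "flamingo"]
logits = np.array([3.0, 2.5, 1.0, 0.2, -2.0])

def probabilities(logits, temperature):
    """Temperature-scaled softmax: higher temperature flattens the distribution."""
    scaled = logits / temperature
    exp = np.exp(scaled - scaled.max())  # subtract max for numerical stability
    return exp / exp.sum()

for t in (0.2, 1.0, 2.0):
    probs = probabilities(logits, t)
    sampled = rng.choice(tokens, size=8, p=probs)
    print(f"temperature={t}: {dict(zip(tokens, probs.round(2)))}")
    print("  sampled:", " ".join(sampled))
```

Even at the highest temperature, every candidate the sampler can pick was already in the learned distribution: the guess gets wilder, not newer.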
As the Scottish philosopher David Hume pointed out centuries ago, the unremovable wart on human reasoning is that we've got absolutely zero evidence that the future won't suddenly go all pear-shaped on us at any given instant. Life is utterly predictable until it isn't. But then, that's a wart I'm eternally grateful for. Our ability to experience the future as utterly open is what makes us human, because the future being a giant question mark is the heart of our cognitive, moral, artistic, and political freedom. It's what gives us the existential task of deciding each and every day and each and every moment just how much of our past we want to bring with us into the future, how much of it we want to leave behind, and what we want to create in the open space that is new. The Spanish philosopher José Ortega y Gasset called this the task of auto-fabrication, literally self-making. He said, to be human is to be called to be an engineer: an engineer of yourself, by choosing from your future's open possibilities which of those to make real. But also, with others, you're called to be an engineer of the future that we will share together. We are engineers of human futures, all of us. We might try as engineers to keep things looking much the same, but that too is an engineering choice, to carry the past forward. We can always choose otherwise, and sometimes do. We abandon a commitment that we promised to hold for life, or we make a commitment that we swore we'd never take on. We can extend our trademark artistic style into endless satisfying variations, or we can drop it on a dime and leave our audiences utterly baffled. We can stick with the dire politics we know, or we can say, 'to hell with this -- we're changing the game.' We can keep burning the world for the temporary benefit of a handful of billionaires, or we can say, 'no more.' But here's the thing. AI mirrors can't make these kinds of radically new choices. They're stuck pointing backward, reflecting what has been seen in the past. That's why their reflections keep regenerating the most common social biases and most widespread errors of digital humanity's history. That's why the speeches and essays and songs they generate are so weirdly generic and somehow familiar, even though we've never seen them before. This regressive power of AI mirrors has been described by philosopher Mark Coeckelbergh as a kind of time machine. But an AI mirror is a perverse kind of time machine, one that can't take us into the future, but instead lets the patterns of the past leak ever more into the present. That's a real problem now that everyone from government leaders to screenwriters, from university educators to criminal court judges and journalists, is being told by AI companies that they need to use AI to write the future: to predict from the past data cone what decision or word or idea is most likely to come next, and then automate that statistical ghost of the past into our lived reality. We live at a time when the most dominant economic, political, and environmental patterns of human activity on the planet are all under growing strain, because these patterns, for reasons that are tightly interwoven, are jointly unsustainable. They not only serve increasingly few of us at the growing expense of everyone else. They endanger the future of the entire human family and all other intelligent life on this planet. Now, more than at any other point in human history, we need to exercise our collective power of auto-fabrication. 
The power that only human beings have to engineer another future, to turn the wheel in a new direction, to give ourselves more road to travel before we reach a cliff. AI mirrors, when used to optimise the efficient reproduction of human work, won't invent that future. They'll steal it from us, or rather, they'll be the thieves' tools. What we need in order to reclaim the future for humanity is new tools, and new systems of value that build them, designed to hold open and enlarge human space and human time. Time to write new stories for ourselves, and time to give our descendants to do the same. Yet to do that, we need to change the relationship between technology and time. Today, we use algorithmic technologies, from smartphones to social media platforms to video games to AI mirrors, to generate content to fill all open time and all open mental space. The tendency to fill our time and space with any available distractions long predates the digital age. But only the digital age could deliver today's 24-7 torrent of on-demand content of endless variety and deceptively low cost, algorithmically fitted to your strongest existing preferences. This qualitative shift in the media age has been widely lamented in countless diagnoses of the ills of the attention economy, where the demands of platform business models and digital media politics mean that your eyeballs and your children's eyeballs are now the most prized object of economic and political control. Sure, those of us captured by the attention economy walked into the trap. But we had a good push. And once the doors closed behind us, and all our friends and colleagues were in there with us, it got harder to see a way out that wasn't exile. And because of this, we increasingly do not know what it is to have unfilled time or unfilled mental space. And a growing number of us don't know what to do with it on those rare occasions that time and space do arrive, other than to fill it as quickly as possible with more algorithmically generated content. AI mirrors are the latest delivery system, not just for finding that content, but for producing it. That's the moment when the digital revolution challenges our most vital human capability: the capacity and the will to tell and write our own stories. This is the final part of the talk. As many of you will know, and more than a few lament, AI mirrors can be used to literally generate stories. ITV just ended up in the news this week for planning to replace some of their writers with AI experts who can use the tools to generate stories for TV, and certainly they're not alone. And the goal is to generate any kind of story, as long as it's somehow like the stories we've already told, with maybe just a bit of random statistical noise thrown in for extra flavour. AI mirrors are being used to quickly and cheaply generate news stories, blog posts, and op-eds, turning the vocation of the human journalist at many cash-strapped outlets from a crafter of human narrative, something that takes time and the luxury of mental space, into a polisher, fact checker, and enlivener of algorithmically generated instantaneous prose. The writer still has a vital role to play in most of these cases. AI mirrors are still too incompetent at storytelling to avoid needless repetition, stale cliches, and illogical contradictions. But AI developers are working night and day to reduce those flaws. And at the same time, some audiences are learning to grudgingly tolerate them. 
Fiction writers are being encouraged to use AI to generate new story outlines, early drafts, possible plot twists, character motivations and background sketches, and possible endings. We're reassured that AI can't yet write a whole novel that's worth reading. But from these fairly sorry, ragged pieces, a former writer might assemble and polish a passable Frankenstein. Even those who aren't storytellers by profession are being marketed AI tools that will record the story of your workweek or the story of your family vacation, summarise its highlights for your perusal later, prepare a diary record of it for your safekeeping, remind you of any follow-up actions, suggest a plan for executing those follow-up actions, and, in the near future, take care of those actions for you. AI mirrors are being pushed on educators at schools and universities around the world, where we're told that they can help us more efficiently develop our lesson plans and structure our lectures, summarise for our quick review the key points of the readings we've assigned to our students, and even help generate feedback for students on the bland insights about those readings that they increasingly use their AI mirrors to write for them. Many educators are vigorously resisting this campaign, but the platform and EdTech companies are relentlessly pushing AI into virtually every tool we use. And administrators see it as the way to manage ballooning academic workloads without actually having to reform the system that produces workload bloat. More than a few academics, having already lost to administrative bloat the precious time and mental space they need to conduct meaningful research, will be tempted to claw some of it back however they can. And if this trend continues, the result will be a classroom environment where the stories of our history, our literature, our politics, our scientific discoveries will increasingly be told, read, interpreted, and critiqued by our machines. Some of you, like me, are old enough to remember this image from the 1985 movie 'Real Genius,' where the professor, fully disenchanted with students tape-recording his lectures and skipping class, leaving their tape recorders on their seats, starts playing his lectures from the podium by boom box and skips the whole farce himself. Today, the farce is becoming reality. Only instead of farce, we call it innovation. We can demand better. But first, we have to want better, and we have to believe we deserve it. At the moment, that's a challenge, perhaps for young people most of all. A recent Atlantic piece by Rose Horowitch titled 'The Elite College Students Who Can't Read Books' describes the growing resistance of university students to reading any extended narrative, from a novella down to a sonnet. In the article, a psychologist notes that digital media has altered young people's sense of what is worthy of attention. But in a shift from the age-old, pre-digital phenomenon of students opportunistically skipping the reading, which I can confirm has always been a reality, the article highlights professors' worries that, unlike previous generations, many of today's students seem to be skipping the reading not because they won't do it or don't want to, but because they can't. The account of a growing number of university students instantly shutting down at a single encounter with more than a few pages, experiencing the tracking of a story as an impossible cognitive task, isn't new to me. It's been a worry of many of us for a while. 
But AI mirrors are now promising to close that gap for students, launching them to the top of the exam or essay mountain in a single prompt. Even preparing seemingly thoughtful and perceptive discussion questions to ask in class or tutorials, or coming up with likely objections they can make to the core text. Pretty much any learning task or assessment that an instructor can imagine can be ginned up by an AI tool with the right prompting, short of a direct one-to-one oral examination, which most educators are blocked from giving by reasonable concerns about accessibility and student anxiety. The important question, though, isn't whether today's students can make their way through 'The Iliad' or 'War and Peace.' The question is whether our students will retain the will, confidence, and capacity to participate in the vital human task of reading and writing their own stories, which requires the mental allowance of time and space. One of the things that's widely reported as uncomfortable for many digital natives and older digitally habituated readers alike is the experience of empty mental space. To allow mental space and quiet to arise and dwell in it, to not respond to the powerful impulse to fill it with something, anything, is a challenge for many of us, and increasingly impossible for more than a few. It's what the novelist knows as the torture of writer's block that pushes us to close the laptop and clean another part of the house. It's what a child knows as the torture of boredom that quickly takes on the tinge of anxiety until he grabs for the phone to scroll. It's what the journalist staring at a sheaf of notes or a file of audio recordings feels when she can't begin to see the shape these fragments must take. In the Atlantic piece, the psychologist states that boredom has come to feel unnatural to many young people. But that yawning empty space that boredom opens, when we realise we can bear it and live through it and walk through it, is a portal to a future on the other side. Empty mental space is where time opens and where values are chosen. When the story could be anything, and yet the story as nothing feels unbearable, I choose something. And then I realise that I've made that chosen new thing matter in some way. I've brought it into reality, and it's part of my world now, part of myself now. The new plot point I chose was chosen not because it was already interesting and important to me. Otherwise, it would have already been within my grasp. Instead, it becomes interesting and important because I chose it, yanked it into reality from all those undefined possibilities on the other side of time's horizon. The innovative artist does something similar. When, as a child, I chose in desperate boredom (I was an only child, so I had a lot of that) to try to learn how to play my mother's guitar, or to climb a big rock, or to invent a new computer adventure game, or pick up my father's dusty, neglected Bible and find out what that was all about, I was choosing to make things important to me that had not been important before. Even if I later abandoned some of them, my choice at that moment proved to me that I'm free to make things matter, that my life is not a passive waiting game, but an action, a construction of meaning and value that never ends. I didn't understand that framing as a child, of course, but even then I knew implicitly, non-verbally, as a feeling, that it was a kind of power. 
A power to be alone with my thoughts and yet not captive to their present shape or their present formlessness. These choices, more than all the choices that came easily because they followed a familiar pattern, became part of my story in the way that all selves are forged and shaped. We learn that we have power over the future, that we are not helpless victims of fate, by reaching across the void of mental space into time's open and formless horizon and bringing something over from that undefined future into our present world, giving it a place in our story, and by that choice, making it real. We do that when we innovate in art and media and design. We do it when we come together to invent new politics, and we do it when we press new moral claims on the world. But to learn that power, we have to be given space and time to think and feel without someone else filling it for us. Empty space and quiet is the field of the future; that's the opposite of what digital culture gives us. Modern digital life is much like a 90s Tetris game. Every moment is a challenge to fill the open space of existence as efficiently and seamlessly as possible. The blocks are falling faster and faster, and they must be quickly slotted into their predictable and rational places. See what's coming next. What meeting, what exam, what tweet, what photo, what reel? React and process, like, or save, fit it into the most likely slot, or let AI do it for you. But most importantly, leave no space unfilled. We need instead to hold open our space and our time so that we can write the next chapter of the human story and let it go somewhere new. And that's the opposite of what AI currently offers us. In the words of Mark Coeckelbergh, instead of using AI to save time by filling up the future with automated projections from the past, quote, 'We need to figure out how to make time in a different way, how to let in the future, and make time for social change, how to make time for interpretation and judgement, and how to make time for people and their stories.' AI mirrors can't tell our stories for us because they don't know why those stories matter. There's nothing at stake for an AI storyteller, because an AI bot has no future to make for itself. From the standpoint of a trained AI model, the shape of the mirror image, the arc of the data story, has always already been made. Unlike the work of being human, its work has been done, and all that is left is to generate its most probable echoes until it is trained again. The image of our data that appears in the AI mirror can be dimmed or blurred or fogged or distorted, but it can't be unknown, unmade. For an AI mirror, there is no story still untold. Unlike an AI model, we each and together sit on the edge of a future that is open, one that need not reflect the past. Not unless we surrender to machines the power of telling our own stories. This narrative power is the self's crucible. But even more importantly, it's the crucible of our collective humanity, which will have a story that only we have the power and the duty to write. Thank you. [Applause] So, this is the fun part where we have questions for ten or fifteen minutes. And I get to point the fingers. So, who has one? Yes. Yes, right there. Hi. Hi. My name is Sidra. And through this lecture, I learned how AI, actually, like you said, is the reflection of our light. 
But I was thinking more in terms of that, I don't know, instead of us resisting that narrative, that of course it is a reflection of our light, but instead of resisting the narrative that we cannot work with AI, we have to work with it as an assemblage. In terms of that, I was thinking a lot, because my PhD is on decolonising Western epistemologies, and my experiment, like yours, told me a lot of things about AI and its capability. I was thinking what role AI can play in helping us decolonise Western epistemologies, because the results we get from it are based on mass narratives, and those narratives are again more Eurocentric. So I just wanted to understand if there is a way, because what I understood today is that it's more of a reflection rather than an extension. What would you -- I hope I make sense? Yes. Yes. Absolutely. So my answer is yes, we can use AI to help us with those kinds of missions, with 'help' being the operative word. That is, we have to understand what its limited role is there, right? For example, you mentioned the efforts that we might have to decolonise the curriculum. We might say, 'Okay,' to a large language model, 'here's my syllabus. I would like to decolonise this syllabus.' Does this model understand why you want to do that? Does it understand why it's important? By the way, if you ask why it's important, it will give you an answer, and it'll sound kind of plausible. That doesn't mean it understands the answer, right? What's going to ensure that you understand it, as opposed to getting into that kind of mindless mode where we basically just sort of transform our syllabi, our curricula, without thought, without decision, without value being part of the process? There are good and bad ways to do this sort of thing, right? And the values of learning and what we're there to do with our students have to guide that process, and that's our responsibility as educators. But it is so easy for that all to slip away and be lost when we have the ability to just say, 'just find for me the sources, or better yet, you construct them into a learning architecture that will be decolonised, whatever that means to you, ChatGPT,' right? Again, it's about understanding the limitations of these tools and making sure that it's our values and our responsibility and our sense of confidence in our ability to actually make important decisions that is retained. But the other thing is, we should be asking ourselves at each moment, why am I using AI to do this in the first place? If I can do it myself, why am I not doing it myself, particularly given the immense environmental cost of using these tools? As well as the fact that they have been, in almost every case, trained on copyrighted data that was taken without permission or compensation. So, I'm not suggesting, though some people have made this argument, but I don't share it. It's not my view that we must resist all uses of these technologies. But I do think actually there's a default argument for saying, what's your argument for using this tool? Is there a reason that will justify this expenditure of energy? Is there a reason that will justify you letting go of this work and handing it over to a machine? And there are cases where we do have a cogent argument, especially for something that's important to be done that we literally cannot do ourselves. Because of the mathematical power of these systems, there are kinds of problems they can solve that might take us centuries. 
The whole point about DeepMind's AlphaFold winning the Nobel Prize was centred around the fact that that model, which is not a generative AI tool like ChatGPT but is a very large machine learning model, was worth the expenditure because it can solve a problem that will save lives, one that we could not have solved on our own without perhaps decades or centuries of time lost. Those are the kinds of decisions that I think it's incumbent on us to reflect on, rather than mindlessly saying, I have a task, let me let AI fill it. Yeah. Great question. Back here. Are we going 'til 6:30? Is that the stopping point? Yes. Okay. Thank-- Thanks, Shannon. Mirrors, and even Tetris, for example, are very flat and visual. Humans are very multi-modal and multi-sensory. So I was wondering, rather than a mirror, what metaphor should we be aiming for? Um, well, I think all metaphors have their limits. We have to recognise the limitations of the metaphor, but I don't think it's unable to cope with these particular differences. We can reflect images, but we can also reflect sound. We can reflect text. We can reflect code. These models can reflect these different modalities of human expression. But I don't think that makes them something other than mirrors. Does that answer your question, or have I misunderstood your question? Yes, I was wondering if there is some way of having them all combined rather than separate? We already have that, actually. So, multi-modal models are already being released by these large AI companies. And again, I suppose that puts a little bit of pressure on the mirror, because we have mirrors that generally operate only in the visual modality, or at least in our experience. But we can imagine a mirror that can reflect to us different kinds of content, different forms of our cultural output. I think it just takes stretching the imagination just a little bit. Again, I don't think a multi-modal AI model is any less of a mirror than a purely text-based or image-based one. It's just a mirror of a more complex construction. I think there are qualitative differences, though, in what multi-modal models will do for us on the positive side, but also in terms of their dangers, right? It's multi-modal models that allow us to produce a deep fake of you that not only looks like you, but moves like you and sounds like you, right? So we have to be able to recognise that multi-modal models will increase the ability of these tools to entertain and serve our interests, but also to deceive and confound us. Thanks. Got a question here in the centre? Or is there one in the back? Wherever you were going is fine. We probably have two more questions, so we'll take the second one here in the front. So with your mirror analogy, you were talking about all the intellectual property that's being processed through these systems and being reflected, and the people who created it are not, you know, receiving due compensation. But imagine a future in which some music piece that is concocted by these systems has, through some combination of mirroring the past, ended up becoming very popular and makes lots of money for somebody. How do we disentangle what, I wonder if you have thoughts about the issues associated with that kind of intellectual property and how, you know, other than preventing these systems from producing that music in the first place, how do we disentangle the rights of those? Yeah. Sure. 
So there are all different kinds of models for doing this, and I'm going to bracket for now the question of whether AI generated music is something that we ought to be investing in, as opposed to funding human musicians who currently are suffering from an inability, right, to make a livelihood out of their talents. And that, I think, ought to be the first item on the agenda. But that being said, let's say there's some future world where we can have it all, right? We have plenty of funding for the arts, lots of space and time for human artists to do their work, and we have the ability to, in an environmentally responsible way, use AI tools to generate additional cultural outputs for us. In that scenario, there are a number of models. There would be artists who would be happy to have their work used to train AI models if they were asked. They might not request compensation in some cases, or they might request an initial licensing fee. There are plenty of models in the music industry for saying, pay me once, and then it's yours, and anything that comes from it later, you own the profit for, right? So we can negotiate various forms of fees that do not require us to later disentangle what was your contribution to the thing that made, you know, a million dollars. There are other models where we could imagine better systems for data ownership, where we have shared governance and collective ownership of data that is, let's say, publicly funded, as much of the arts is, as well as the sciences. I mean, one of the things that bothers me is that all of these models can suck up all of the scientific data that has been largely funded on the backs of citizens for the public good, right? By research councils and universities. And all that knowledge then gets to be taken by an AI company and turned into a proprietary product that they sell back to us at whatever price they choose, right? So I think there are broader issues here of who owns data, and particularly for data that is created for public benefit, how that public benefit can be retained. But I think there's plenty of paths open there, to answer the question you've posed, which is an excellent one. And one last question up here. Sorry. You can ask me later. Hi. Yeah. Thanks for the great talk. I'm just wondering what your view is on, in a world where universities are selling AI courses to students, and students view those courses as stepping stones to jobs in tech companies that then sell products to universities, for example, what's the role of universities in this area? That's a great question, and I think one that universities need to be asking right now. But it goes back, I think, to my previous answer, in that there are tonnes of important human problems that we need AI for. I don't know how much we need generative AI, but we definitely need other forms and techniques in AI to use to advance our goals when it comes to health, when it comes to the environment itself, when it comes to fighting corruption, when it comes to identifying victims of human trafficking, right? I can give you a laundry list of really vital use cases, such that if universities aren't investing in the research that can advance our ability to solve those problems, there's a clear argument that universities aren't fulfilling their mission. 
However, what's really happening right now is that universities are being used as a tool by AI platform companies to fund research that will eventually be walled off and sold off for profit, to make commercial products that aren't designed, very often, to solve those kinds of problems, but are designed to simply grab more of our attention or control more of our behaviour for the benefit of a few. I think universities need to be thinking about what kind of AI research we are funding. But also, how do we maintain our independence? Particularly public universities: how do we maintain our independence and our ability to serve our mission to a wider set of publics than six large AI companies, who are increasingly buying the expertise, hoarding the chips that universities need in order to do this kind of work at the edge, right, and controlling the terms on which this work is funded? That needs to be the battle fought right now. Thank you. [Applause] Thank you, Shannon. [Applause continues] Thank you very much, Shannon. Thank you to everyone for being here, and for that extremely profound endpoint that we ended up with there, in front of yourself as well, Peter. [Laughter] Perfectly. So a huge thank you for all of you being here. Also, lots of children here as well from schools and families. Shannon is in the Edinburgh Futures Institute; come back and see Shannon, ask more provocative questions. So we're going to now ease out of the building. There's a cafe on one side, called Canopy. Please find your way there, and Peter will want me to tell you all it's the University of Edinburgh's Canopy cafe. So please just find yourselves there. And thank you again for coming to our first inaugural lecture. That was amazing, Shannon--thank you very much. Thank you all. Oct 23 2024, 17.15-19.00. Inaugural Lecture: Professor Shannon Vallor, "In a Mirror, Dimly: Why AI Can’t Tell Our Stories, and Why We Must".
Inaugural Lecture: Professor Shannon Vallor The School of Philosophy, Psychology and Language Sciences is very proud to announce the inaugural lecture of our colleague, Prof Shannon Vallor: "In a Mirror, Dimly: Why AI Can’t Tell Our Stories, and Why We Must". The event is open to the public and all colleagues, friends and students are welcome to attend. Recording of Professor Shannon Vallor Inaugural Lecture View media transcript My new book, 'The AI Mirror,' This is that moment where you see if the clicker actually works. There we go. Uses the mirror as a metaphor to help us better understand how today's powerful AI tools work, what they are and what they are not, what they can do for us and what they can't. What they can do, but probably shouldn't. Throughout the book, I'm going to talk about AI in the context of our humanity. That is those capabilities that we most treasure in the human animal and that are most essential to our shared flourishing together on this planet. In this lecture, I'm going to explore the rapidly accelerating impact of AI on our humanity by talking about four things. Mirrors, space, time, and stories. Let's start with mirrors. Understanding AI, and its implications for humanity is a daunting and ever expanding task. We need to grasp AI's accelerating impacts on democracy, education, the sciences, media, the arts, and our climate. We have to consider how AI is changing, how we acquire information, how we carry out our daily work, how we make life and death decisions, how we find companionship, and how we write books. How do you understand a new social force so vast and encompassing, that it is all at the same time, changing how we make food, music, love, laws, and wars. When we need to understand a new thing, we seek a metaphor, a way to link the new thing with something that we already understand. No metaphor is ever perfect. But the mirror metaphor for AI is far more apt and illuminating, much safer and less deceptive than the familiar metaphor that we're encouraged by the AI industry to use for understanding their tools, namely the metaphor of a mind. Now you might wonder, how can a mirror possibly be a more fitting and useful metaphor than a mind for something that contains the term intelligence? A word we only associate with minds. Well, let me explain. In some sense, all technologies are mirrors of humanity. Technology is an extension of human values and imagination into the built world. Every technology from the wheel to the book to the engine to the computer reflects what humans of certain times and places have thought was worth doing, worth making, worth enabling or worth trying. But today's AI tools are mirrors in a much more powerful sense. At a book talk earlier this year, a physicist in the audience approached me afterward to tell me that she'd been convinced by my talk that the mirror is not just a metaphor for AI, that AI tools like large language models, functionally just are mirrors, only of a mathematical rather than physical construction. Before I explain how this could be true, it's important to point out that AI is not one technology, but many. AI is much more than technology, too. It's a distinct field of scientific research, one that's been carried out here at Edinburgh for over 60 years. But today, most people in the world know AI as a suite of tools and techniques for solving human problems with computers. The mirror metaphor is a stronger match for some kinds of AI than others. 
It's best suited to describe machine learning tools trained on large volumes of human generated data, which fortunately happens to be the kind of AI that you've heard the most about lately, namely large language models and their cousins. The foundational technology behind consumer facing generative AI applications like Chat GPT, Gemini, and Claude. I'm not going to get too deep into the weeds of how large language models are constructed, but a simplified understanding of how mirrors work can help anyone grasp the basics. When you see an image in a glass mirror, it's a result of three interacting components. The first is a polished and coated glass surface that has the required physical properties to receive and reflect light. The second is the light itself. The third is the reflected image that you see when you look at the glass. Now, when you talk to something like a large language model, a very similar set of components is interacting with you. First, corresponding to the polished glass surface is the machine learning algorithm. That's a set of encoded software instructions for training something called a machine learning model. A machine learning model is a large, complex mathematical structure extracted from an initial data set. The learning algorithm first structures and then further trains the model by specifying how that model will change as it is exposed to incoming data. Much as the physics of a polished mirror determine how the glass will receive and reflect incoming liight. Imagine if a mirror's glass surface could be physically altered in subtle ways by the light that fell on it, such that the more light hit the mirror, the better the glass performed in reflecting it. Now you've got a basic intuitive grasp of how machine learning models work. We create a reflective mathematical surface using a type of algorithm that can polish the model surface further by being exposed to a large volume of training data. When the training process is complete, the mirror is polished to its final or near final state, and we have a mathematical structure, what we call a trained model that can receive and successfully reflect the patterns in any new relevant data given to it. Light is the second component of the metaphor. This is an easy one, right? Data is the light. Instead of visible light falling on physically polished glass surfaces, AI mirrors work when stored digital data falls on mathematically polished algorithmic surfaces. The model gets its initial properties, what we call weights or parameters by means of exposure to a torrent of digital light. Our data, all the hoarded data that large AI companies have grabbed and scraped and stolen from us, so that they can beam our collected light, the light of our minds, into their own tools, training them to reflect new light, new data inputs, back to us in ways that will look intelligent. That's the third and final component of an AI mirror structure. The reflected images they show back to us are the outputs we get when we use the model, whenever we enter the new light of our prompts and queries, whether those are typed by hand into a chatGPT window, spoken into your phone, or by a gesture like pressing an icon on your screen. The answers that we get back from the model. The intelligent seeming replies, and text, audio or video form are just composite reflections of all the data taken from us that now resides in the model as a mathematical structure. 
So, if AI tools like large language models are just mirrors made of math, why do the responses of a large language model sound so intelligent or conscious, even alive? It's because they're reflections of data taken from intelligent, conscious, living beings--from us. But if you now understand that this is a mirror image of intelligence, you realise that what looks intelligent is not. Think about when you walk into your bathroom in the morning and you look in the mirror. You know very well that there's not a second face with you in that room, right? No second body. If you're alone, there's only one face, one body in the room, along with a mirror image of it. You know that the image of the face that you're looking at has no depth, no warmth, no smell, no life. It's a reflection, not a duplicate. A reflection of a face is not a face. A reflection of a flower is not a flower. It needs no water. A reflection of a flame is not a flame. It does not burn. A reflection of a mind is not a mind. It does not think. Reflections are their own kind of thing. And as psychologists have demonstrated, only an infant or creature who hasn't passed the cognitive stage known as the mirror test will confuse the thing with its reflection. AI companies very much want you to fail the mirror test when it comes to their products. They want you to believe that you're seeing a new kind of intellect emerging on the other side of that algorithmic mirror, something even smarter, more powerful and more capable than you are. Something that can not only help you, but teach you, comfort you, even speak and write for you. And, one day that can't come soon enough for those who control the image, replace you. That's the power of the conjurer's mirror illusion, and the danger. Much like the Roman poet Ovid's tragic figure of Narcissus, the boy who lost his life to his captivation by his own reflection, we today are at risk of losing ourselves in our AI mirrors, slowly becoming convinced that the thing on the other side is just as real, just as much of a mind as we are. Maybe one day, even more so. In my new book, I describe more fully the power of these algorithmic mirrors not only to usefully reflect and magnify valuable patterns in our data, a power that can be lifesaving when used responsibly, but, like all mirrors, also to distort, confound and occlude our reality. For example, AI mirrors not only reproduce and magnify the patterns of injustice in our data; they routinely produce even more distorted versions of these, bouncing back to us funhouse-mirror images of our histories of racial abuse and exclusion, or reflecting past worlds where women have never been CEOs. These are, of course, images returned by a generative AI model, in this case Stable Diffusion, from the prompt 'CEO', or worlds where women have never been artists. Even showing us nonexistent worlds. Worlds where all attractive people are light-skinned and young. Worlds where white people never use social services, and where productive people are only young men with too much hair product. [Audience laughter] Now, these unpredictable distortions can be really absurd. Consider Google's fine-tuned AI that was designed to remedy some of these distortions and instead wildly overcompensated for its underlying biases by portraying Nazis as racially diverse. Or consider its AI Overview tool telling us that nutritionists want us to eat one rock a day, or that you should glue your cheese onto your pizza to secure it.
Or we have X's tiresome Grok bot, making up a news story in April about New York City Mayor Adams sending the NYPD into the subway to shoot the April earthquake before it could strike again. Now, we should be grateful actually for these more ridiculous distortions, not just because they're really entertaining, but because they quickly puncture for us the mirror illusion of a machine mind and show the utter absence of thinking and understanding behind these hollow reflections. We need that perspective when falsehoods sound dangerously real, like the same Grok newsbot falsely claiming, also in April, that Iran had struck Israel with missiles. Or when New York City's AI bot from Microsoft confidently advised local businesses to break the law, telling them that it was legal to discriminate against disabled people or the elderly, or to steal their workers' tips. But it's important to know that there's no more thinking behind the reflection when these AI mirror images get things right. AI tools can often get the reflection right. When they do, they serve the primary purpose of a mirror: to reveal things that can be impossible for us to see otherwise. While the reliability of large language model outputs remains a major problem, the power of AI mirrors is well demonstrated by machine learning models trained on narrower but higher-quality data sets than large language models. These are the types of AI models that can reveal early signs of cancer or sepsis in our medical data, or show evidence of insider trading in our email and banking history. Like all AI mirrors, however, they are also powerful magnifiers of the patterns found in our data, from patterns of subtle truths, which are quite helpful to magnify for our sight, to the harmful stereotypes and unjust biases that they reproduce in nearly every context. But mirrors do far more than reveal and magnify our reality. Because, as I've shown, they also distort reality, they can be instruments of deception. There's a reason that we write warnings on glass mirrors like this, when they distort our vision of what is true. AI mirror images are now being widely used to confound us, as mirrors have always been by entertainers and charlatans alike. They can send us into surprise laughter at Balenciaga Pope-- truly delightful. I enjoy that a lot. But they can also deceive us with fraudulent, deepfake images, audio and video designed to gin up your political rage over a non-existent campaign speech, to make you break up with your involuntarily pornified girlfriend, or to get you to wire cash to the fake face of your pleading parent or child, who you would swear on your life is begging you over that phone, in that video, for help, crying in their own distinctive voice, moving their own injured body, and shedding their own desperate tears. In my book, I ask us to consider the heavy social, psychological, and increasingly environmental cost to us of this new mirror world, and how we might refuse the offer and rewrite its terms in our favour. But in this talk, I want to take a different turn. I want to focus on how these AI mirrors, which can be used as tools to enlarge our possibilities, enhance our freedom, and secure our futures, are instead increasingly being used to steal these from us. To snatch away from us the space and time that we use to think, the power we hold to know and craft our own narratives, and the confidence we need to choose and build more humane futures.
This talk is a call to reclaim our humanity, to retake our space, our time and our stories. So, now that the mirror metaphor has been explained, we're going to slow down the visuals just a little bit while I lay out some deeper ideas. In my book, I talk about the space of reasons, especially the space of moral reasons. That's another metaphor. Sorry about that. The philosopher Wilfrid Sellars spoke last century of the space of reasons as the field of shared human thinking, where we explore and exchange thoughts about what might justify our beliefs and claims. Knowledge is social, and it's made up not of singular truths or data points, but instead of the web of justifying reasons that thinking weaves to hold those truths and our minds together in a shared grasp of reality. The spatial metaphor is important. It reflects the fact that thinking is action; like physical limbs, thoughts need space to move in. When we think, and especially when we reason, we need an understanding of the possibilities that still lie open to us. We need to be able to see ahead of us the contours of known reality that both constrain and enable our ambitions, whether we're navigating our shared physical reality, our social, economic, and political reality, or our moral reality. We need to explore what the space of the real world allows us and what it forbids us. And so, to reason, we need to use thought as a way to stretch into that open space, and imagine the different paths that we might travel through it. The space of reasons has to be held open in the mind for this to happen. That's why we speak in spatial terms of the virtues of an open mind and the vices of a closed mind. An open mind has room to think and reason with others, room to navigate objections and contradictions, space to chart the new routes that will justify unfamiliar or contested ideas. A closed mind is boxed in, immobilised and helpless. This is why, of course, philosophers following the model of Socrates are trained to ask many questions before they attempt to answer one, and to follow every answer with even more questions. It's also why we're super irritating. Well, not the only reason, but it's up there. The questions are what hold open the space around the answers and allow for better ones to come along. AI tools can absolutely be designed to help us hold open the space of reasons, but for the most part, they aren't. They're designed to instantaneously and tirelessly fill our mental space with answers. The one intelligent thing you'll not likely hear a large language model say to you, unless it's been specifically instructed by you to respond in this way, is 'I don't really know,' or 'I'm not sure,' or 'why don't you tell me?' Instead, the version of AI mirrors now being aggressively pushed by platform AI companies like Microsoft, Google and Meta into the hands of educators, novelists, journalists, artists, doctors, judges, policymakers, scientists, and business leaders everywhere is a mental-space-filling machine. Do you want to know what 'Moby-Dick' is really about, besides a whale? Are you curious about the strongest and weakest arguments against imposing term-limits on judges? Do you want to know what you should do to repair a damaged friendship? Do you want to know how to find a better ending to your novel? An AI mirror has the answers, or at least some kind of answers, and will deliver them to you in milliseconds with a tone of unflagging confidence, so that you need not walk one more step through the endless space of reasons.
The pesky business of thinking can be done away with as soon as possible. Efficiency at last. Because, of course, efficiency is the ultimate goal behind today's AI gold rush. We could try to use AI mirrors as aids to our own learning, thinking, deliberating and deciding together in the space of reasons, but that would be less efficient. It's not clear just how much help they can be right now, because unlike the high-dimensional vector space occupied by an AI model, the human space of reasons is not mathematically closed but open to the future. Unlike our minds, which can invent new patterns of thought, AI mirrors point backward in time. They know only the patterns in the light of the data we've already created. They can only mirror the shapes of the thoughts we've already had. This is not what it's like to be in the space of reasons, where our thoughts can move forward, not just trawl backward. In the space of reasons, our thoughts are not restricted to the shape of old reasons. They can also take on new shapes, forming new reasons. But like the light cones that constrain the travel of information in physics, AI mirrors cannot see or think into the future. They predict only what was already there in the training data. When I ask large language models about styles of art that have not yet been invented, they suggest extremely well-established artistic approaches, like bio-sculpture. Yeah. We've tried that. Multi-sensory art. Yeah. No one's ever thought of that before. Using technology to visualise emotional states. We've been there and done that. AI systems, however, are rear-view mirrors. Try asking the latest GPT model, 'what kind of stories have not yet been written?' Here's what it suggests. It took me 10 minutes in an airport hotel bar. Yeah. We've written all this stuff. These are variations of themes that we have already invented. But we managed to invent these without having GPT draw us the map. Now, of course, you might fine-tune a model not to answer your questions, but instead to ask you new ones, or to point out new paths through the problem space that others have seen, but that you didn't even know were there to explore. Some people do have the skills, time, motivation, and permission to tweak these tools for that purpose. But for most users, especially for the companies and governments who are betting on these tools as magic beans that will spoon-- soon spark-- why did I write 'soon spark sky high' and not test that out and know I was going to trip over that 12 times? Magic beans that will soon spark sky-high productivity and efficiency gains. That's the whole point of these tools, and the only thing that justifies their immense cost. That is, their ability to save us all the effort of thinking. Now, right now, generative AI mirrors like GPT and Gemini can't really do this. They're still too distorted and unreliable to generate massive profits. By design, they don't find facts. They make up outputs that look and sound like facts, which is a real problem when the truth actually matters to you or your organisation. Because they can't actually think their way out of a paper bag if the shape of that bag wasn't already seen in the training data. The algorithmic golden goose has yet to start laying profitable eggs for most companies, as Goldman Sachs observed in a summer report that spooked the whole AI investment community. But despite the scare, most investors are doubling down, convinced that the next generation of AI mirrors will work out the kinks and deliver the dream.
A thing that can do all the work of a mind without the inconvenient luxury and unpredictable freedom of thinking, and we can't take comfort in that prospect. It's actually the worst-case scenario. The philosopher Alfred North Whitehead said in 1911 that civilisation advances in proportion to the number of important mental operations that we can perform without thinking about them. He was speaking of mathematical techniques to solve complex problems with less effort, a kind of low-level cognitive automation, so that we could save more of our mental energy for higher-order thinking and deciding. It was a pretty good point. But now that we have machine mirrors that will mimic the most plausible outcome of any thought process at all, I think we need to ask Whitehead's ghost and ourselves: what thoughts do the civilised keep? In a 2021 magazine essay of that title, I compared the seemingly astonishing ability of AI mirrors to generate plausible-sounding answers to any complex problem to a robot being flung to the summit of a towering mountain by an immensely powerful catapult. It's kind of amazing when the robot lands on the peak, but it's not mountain climbing. And when the robot or the AI mirror misses the right answer, it often misses not by feet but by a mile. And God help the person who hires the robot as a mountain-climbing guide or instructor. Of course, here's another metaphor. The mountain climbing is thinking. Because if you ask an AI mirror to describe the reasoning that it used to get an answer to a difficult problem, the cognitive mountain, you'll get a completely fictional, made-up path. The mathematical vector space it traversed to get the answer doesn't actually look anything like our mountain, our mental space of reasons. When asked to give its reasons, an AI model does the same thing it did to get the original answer. It makes up the best-sounding, most plausible guess at a valid chain of reasoning, which will often land in the ballpark, but sometimes 100 miles away. The point of the mountain climbing analogy is that thinking, like mountain climbing, is a skilled activity and an open-ended process, not an event. A person who gets to either summit on the back of a flying robot learns squat about the mountain and acquires no skill. Just like mountain climbing, thinking requires repeated practice of scaling daunting peaks. Thinking skills must be gradually cultivated and mastered in the shared space of reasons. It's these skills that allow a thinker to get up the next mountain, a new cognitive challenge they haven't seen before. Maybe one that no one has seen before. Now, are we going to use AI mirrors to build those skills in ourselves and, most importantly, in the next generation? Right now, it's not looking that way, because to do that makes nobody a quick buck. That's a long-term investment in human cognitive development, and that's not what the AI market wants. What the AI market wants is productivity with fewer steps. They want to shrink the human space of thinking as much as possible. Not only because it saves time, we can go from data set to decision summit in three seconds, not three days, three weeks or three years, but also because it cuts out the middleman or woman, you. You're not only slow, you're unpredictable, and so are your colleagues. You might question the whole task. An AI mirror is not going to do that. You might get together in the space of reasons and decide amongst yourselves that the original problem is misframed, or not a problem to be solved at all.
You might decide that the most efficient and profitable short-term solution to the problem is an unethical solution, or an unsustainable one. You might turn left in the space of reasons when the market wants you to walk a straight line. What better solution to that eternal problem of business and politics than offloading the task to a machine that doesn't need to walk with us in the space of reasons at all? I've been talking about space, and now I'm going to talk about time before we finish by talking about our stories. But these are all connected, of course. We know that space and time aren't separate. From the standpoint of the universe, there's just spacetime, and conscious beings are the only ones who can perceive them apart. Stories are just how conscious beings make sense of our perceived space and our experienced time, how we give them meaning, and how we come to understand what might come next from each. As narrative theorists of identity such as Paul Ricoeur and others have argued, narrative storytelling is not only the activity through which stable, reflectively aware selves come into being. It's how we grasp the full arc of our lives and our communities. Without the ability to tell and understand stories, there is no history, only disconnected events. No expanse of time, no growth, only fleeting moments. Now, AI mirrors have a peculiar relationship with time that I've already mentioned. Like the massive optical mirrors in orbit that produced this image from NASA by collecting light that travelled to the mirror from the early universe, AI mirrors are also confined to the light cone of the past. As I've said, the future is utterly closed to them, for their predictions can only extend the patterns found in the data, which of course is only the data that we already have. As the philosopher Abeba Birhane has observed, AI is an inherently conservative technology. Generative AI tools like ChatGPT, DALL-E and Gemini do have an unpredictability built into them, of course, which is why they can often surprise and amuse us with their unexpected answers, images, or predictions. That generative part of generative AI comes from a mechanism built in to introduce some artificial randomness in the selected output. Generative AI models even have control settings called things like temperature that allow us to ramp up that statistical variance and inject more surprise into their results. But this too is just a reflection of the past. The surprise isn't actually a departure from those learned patterns. It just uses a wider spread of our historical data, a wider statistical distribution of the likely answers within the past data cone, as the basis for the new prediction. The guess in this case is no less based on the past. It's just a slightly wilder guess. Just as our AI mirrors don't know the space of reasons that we walk in, they don't know its time either. That is, they don't have what philosophers like Edmund Husserl called our temporal horizon: the human experience of an open and indeterminate world, held open by the flow of conscious time, where something new is always just over the horizon of our present experience and about to come into view. What comes into view is often familiar, recognisable. The sun rises every single day, just as it has in all of human history. Taxes are always due, and my cat always wants breakfast way too early. But things and events of startling new shapes do arrive from time to time on an utterly unpredictable schedule.
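To make that temperature setting concrete, here is a minimal sketch in Python of how temperature-scaled sampling is commonly implemented; the candidate words and scores below are invented purely for illustration, not taken from any real model. The model's raw score for each candidate next word is divided by the temperature and converted into a probability, so a higher temperature flattens the distribution and lets less likely continuations through, while every candidate still comes from the distribution learned from past data.

import math
import random

def sample_next_token(scores, temperature=1.0):
    """Draw one next token from raw model scores using temperature sampling.

    Scores are divided by the temperature and passed through a softmax;
    a higher temperature flattens the resulting probabilities, so rarer
    continuations are chosen more often, but only ever from the tokens
    the model already holds from its training data.
    """
    t = max(temperature, 1e-6)  # guard against a zero temperature
    scaled = [s / t for s in scores.values()]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return random.choices(list(scores.keys()), weights=probs, k=1)[0]

# Hypothetical scores a model might assign to candidate next words.
scores = {"the": 4.0, "a": 2.5, "purple": 0.3, "sonnet": -1.0}
print(sample_next_token(scores, temperature=0.2))  # almost always "the"
print(sample_next_token(scores, temperature=1.5))  # wilder guesses, same vocabulary

Turning the temperature up widens the spread of likely outputs, but, as the lecture stresses, the vocabulary of possibilities never leaves what the model already holds from the past.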
As the Scottish philosopher David Hume pointed out centuries ago, the unremovable wart on human reasoning is that we've got absolutely zero evidence that the future won't suddenly go all pear-shaped on us at any given instant. Life is utterly predictable until it isn't. But then, that's a wart I'm eternally grateful for. Our ability to experience the future as utterly open is what makes us human, because the future being a giant question mark is the heart of our cognitive, moral, artistic, and political freedom. It's what gives us the existential task of deciding, each and every day and each and every moment, just how much of our past we want to bring with us into the future, how much of it we want to leave behind, and what we want to create in the open space that is new. The Spanish philosopher José Ortega y Gasset called this the task of auto-fabrication, literally self-making. He said that to be human is to be called to be an engineer: an engineer of yourself, by choosing from your future's open possibilities which of those to make real; but also, with others, you're called to be an engineer of the future that we will share together. We are engineers of human futures, all of us. We might try as engineers to keep things looking much the same, but that too is an engineering choice, to carry the past forward. We can always choose otherwise, and sometimes do. We abandon a commitment that we promised to hold for life, or we make a commitment that we swore we'd never take on. We can extend our trademark artistic style into endless satisfying variations, or we can drop it on a dime and leave our audiences utterly baffled. We can stick with the dire politics we know, or we can say, 'to hell with this -- we're changing the game.' We can keep burning the world for the temporary benefit of a handful of billionaires, or we can say, 'no more.' But here's the thing. AI mirrors can't make these kinds of radically new choices. They're stuck pointing backward, reflecting what has been seen in the past. That's why their reflections keep regenerating the most common social biases and most widespread errors of humanity's digital history. That's why the speeches and essays and songs they generate are so weirdly generic and somehow familiar, even though we've never seen them before. This regressive power of AI mirrors has been described by philosopher Mark Coeckelbergh as a kind of time machine. But an AI mirror is a perverse kind of time machine, one that can't take us into the future, but instead lets the patterns of the past leak ever more into the present. That's a real problem now that everyone from government leaders to screenwriters, from university educators to criminal court judges and journalists, are being told by AI companies that they need to use AI to write the future: to predict from the past data cone what decision or word or idea is most likely to come next, and then automate that statistical ghost of the past into our lived reality. We live at a time when the most dominant economic, political, and environmental patterns of human activity on the planet are all under growing strain, because these patterns, for reasons that are tightly interwoven, are jointly unsustainable. They not only serve increasingly few of us at the growing expense of everyone else; they endanger the future of the entire human family and all other intelligent life on this planet. Now, more than at any other point in human history, we need to exercise our collective power of auto-fabrication.
The power that only human beings have to engineer another future, to turn the wheel in a new direction, to give ourselves more road to travel before we reach a cliff. AI mirrors, when used to optimise the efficient reproduction of human work, won't invent that future. They'll steal it from us, or rather, they'll be the thieves' tools. What we need in order to reclaim the future for humanity is new tools, and new systems of value to build them, designed to hold open and enlarge human space and human time. Time to write new stories for ourselves, and time to give our descendants to do the same. Yet to do that, we need to change the relationship between technology and time. Today, we use algorithmic technologies, from smartphones to social media platforms to video games to AI mirrors, to generate content to fill all open time and all open mental space. The tendency to fill our time and space with any available distractions long predates the digital age. But only the digital age could deliver today's 24-7 torrent of on-demand content of endless variety and deceptively low cost, algorithmically fitted to your strongest existing preferences. This qualitative shift in the media age has been widely lamented in countless diagnoses of the ills of the attention economy, where the demands of platform business models and digital media politics mean that your eyeballs and your children's eyeballs are now the most prized object of economic and political control. Sure, those of us captured by the attention economy walked into the trap. But we had a good push. And once the doors closed behind us and all our friends and colleagues were in there with us, it got harder to see a way out that wasn't exile. And because of this, we increasingly do not know what it is to have unfilled time or unfilled mental space. And a growing number of us don't know what to do with it on those rare occasions when time and space do arrive, other than to fill it as quickly as possible with more algorithmically generated content. AI mirrors are the latest delivery system, not just for finding that content, but for producing it. That's the moment when the digital revolution challenges our most vital human capability, the capacity and the will to tell and write our own stories. This is the end of the talk. As many of you will know, and more than a few lament, AI mirrors can be used to literally generate stories. ITV just ended up in the news this week for planning to replace some of their writers with AI experts who can use the tools to generate stories for TV, and certainly they're not alone. And the goal is to generate any kind of story, as long as it's somehow like the stories we've already told, with maybe just a bit of random statistical noise thrown in for extra flavour. AI mirrors are being used to quickly and cheaply generate news stories, blog posts, and op-eds, turning the vocation of the human journalist at many cash-strapped outlets from a crafter of human narrative, something that takes time and the luxury of mental space, into a polisher, fact-checker, and enlivener of algorithmically generated instantaneous prose. The writer still has a vital role to play in most of these cases. AI mirrors are still too incompetent at storytelling to avoid needless repetition, stale clichés, and illogical contradictions. But AI developers are working night and day to reduce those flaws. And at the same time, some audiences are learning to grudgingly tolerate them.
Fiction writers are being encouraged to use AI to generate new story outlines, early drafts, possible plot twists, character motivations, background sketches, and possible endings. We're reassured that AI can't yet write a whole novel that's worth reading. But from these fairly sorry, ragged pieces, a former writer might assemble and polish a passable Frankenstein. Even those who aren't storytellers by profession are being marketed AI tools that will record the story of your workweek or the story of your family vacation. Summarise its highlights for your perusal later. Prepare a diary record of it for your safekeeping. Remind you of any follow-up actions. Suggest a plan for executing those follow-up actions. And in the near future, they'll take care of those actions for you. AI mirrors are being pushed on educators at schools and universities around the world, where we're told that they can help us more efficiently develop our lesson plans and structure our lectures. Summarise for our quick review the key points of the readings we've assigned to our students, and even help generate feedback for students on the bland insights about those readings that they increasingly use their AI mirrors to write for them. Many educators are vigorously resisting this campaign, but the platform and EdTech companies are relentlessly pushing AI into virtually every tool we use. And administrators see it as the way to manage ballooning academic workloads without actually having to reform the system that produces workload bloat. More than a few academics, having already lost to administrative bloat the precious time and mental space they need to conduct meaningful research, will be tempted to claw some of it back however they can. And if this trend continues, the result will be a classroom environment where the stories of our history, our literature, our politics, our scientific discoveries will increasingly be told, read, interpreted, and critiqued by our machines. Some of you, like me, are old enough to remember this image from the 1985 movie 'Real Genius', where the professor, fully disenchanted with students tape-recording his lectures and skipping class, leaving their tape recorders on their seats, starts playing his lectures from the podium by boom box and skips the whole farce himself. Today, the farce is becoming reality. Only instead of farce, we call it innovation. We can demand better. But first, we have to want better, and we have to believe we deserve it. At the moment, that's a challenge, perhaps for young people most of all. A recent Atlantic piece by Rose Horowitch, titled 'The Elite College Students Who Can't Read Books', describes the growing resistance of university students to reading any extended narrative, from a novella down to a sonnet. In the article, a psychologist notes that digital media has altered young people's sense of what is worthy of attention. But in a shift from the age-old, pre-digital phenomenon of students opportunistically skipping the reading, which I can confirm has always been a reality, the article highlights professors' worries that, unlike previous generations, many of today's students seem to be skipping the reading not because they won't do it or don't want to, but because they can't. The account of a growing number of university students instantly shutting down at a single encounter with more than a few pages, experiencing the tracking of a story as an impossible cognitive task, isn't new to me. It's been a worry of many of us for a while.
But AI mirrors are now promising to close that gap for students, launching them to the top of the exam or essay mountain in a single prompt. Even preparing seemingly thoughtful and perceptive discussion questions to ask in class or tutorials, or coming up with likely objections they can make to the core text. Pretty much any learning task or assessment that an instructor can imagine can be ginned up by an AI tool with the right prompting, short of a direct one-to-one oral examination, which most educators are blocked from giving by reasonable concerns about accessibility and student anxiety. The important question, though, isn't whether today's students can make their way through 'The Iliad' or 'War and Peace'. The question is whether our students will retain the will, confidence, and capacity to participate in the vital human task of reading and writing their own stories, which requires the mental allowance of time and space. One of the things that's widely reported as uncomfortable for many digital natives and older digitally habituated readers alike is the experience of empty mental space. To allow mental space and quiet to arise and dwell in it, to not respond to the powerful impulse to fill it with something, anything, is a challenge for many of us, and increasingly impossible for more than a few. It's what the novelist knows as the torture of writer's block that pushes us to close the laptop and clean another part of the house. It's what a child knows as the torture of boredom that quickly takes on the tinge of anxiety until he grabs for the phone to scroll. It's what the journalist staring at a sheaf of notes or file of audio recordings feels when she can't begin to see the shape these fragments must take. In the Atlantic piece, the psychologist states that boredom has come to feel unnatural to many young people. But that yawning empty space that boredom opens, when we realise we can bear it and live through it and walk through it, is a portal to a future on the other side. Empty mental space is where time opens and where values are chosen. When the story could be anything, and yet the story as nothing feels unbearable, I choose something. And then I realise that I've made that chosen new thing matter in some way. I've brought it into reality, and it's part of my world now, part of myself now. The new plot point I chose was chosen not because it was already interesting and important to me; otherwise, it would have already been within my grasp. Instead, it becomes interesting and important because I chose it, yanked it into reality from all those undefined possibilities on the other side of time's horizon. The innovative artist does something similar. When, as a child, I chose in desperate boredom (I was an only child, so I had a lot of that) to try to learn how to play my mother's guitar, or to climb a big rock, or to invent a new computer adventure game, or to pick up my father's dusty, neglected Bible and find out what that was all about, I was choosing to make things important to me that had not been important before. Even if I later abandoned some of them, my choice at that moment proved to me that I'm free to make things matter, that my life is not a passive waiting game, but an action, a construction of meaning and value that never ends. I didn't understand that framing as a child, of course, but even then I knew implicitly, non-verbally, as a feeling, that it was a kind of power.
A power to be alone with my thoughts and yet not captive to their present shape or their present formlessness. These choices, more than all the choices that came easily because they followed a familiar pattern, became part of my story in the way that all selves are forged and shaped. We learn that we have power over the future, that we are not helpless victims of fate, by reaching across the void of mental space into time's open and formless horizon and bringing something over from that undefined future into our present world, giving it a place in our story, and by that choice, making it real. We do that when we innovate in art and media and design. We do it when we come together to invent new politics, and we do it when we press new moral claims on the world. But to learn that power, we have to be given space and time to think and feel without someone else filling it for us. Empty space and quiet is the field of the future, and that's the opposite of what digital culture gives us. Modern digital life is much like a 90s Tetris game. Every moment is a challenge to fill the open space of existing as efficiently and seamlessly as possible. The blocks are falling faster and faster, and they must be quickly slotted into their predictable and rational places. See what's coming next. What meeting, what exam, what tweet, what photo, what reel? React and process, like, or save, fit it into the most likely slot, or let AI do it for you. But most importantly, leave no space unfilled. We need instead to hold open our space and our time so that we can write the next chapter of the human story and let it go somewhere new. And that's the opposite of what AI currently offers us. In the words of Mark Coeckelbergh, instead of using AI to save time by filling up the future with automated projections from the past, quote, 'We need to figure out how to make time in a different way, how to let in the future, and make time for social change, how to make time for interpretation and judgement, and how to make time for people and their stories.' AI mirrors can't tell our stories for us because they don't know why those stories matter. There's nothing at stake for an AI storyteller, because an AI bot has no future to make for itself. From the standpoint of a trained AI model, the shape of the mirror image, the arc of the data story, has always already been made. Unlike the work of being human, its work has been done, and all that is left is to generate its most probable echoes until it is trained again. The image of our data that appears in the AI mirror can be dimmed or blurred or fogged or distorted, but it can't be unknown, unmade. For an AI mirror, there is no story still untold. Unlike an AI model, we each and together sit on the edge of a future that is open, one that need not reflect the past, not unless we surrender to machines the power of telling our own stories. This narrative power is the self's crucible. But even more importantly, it's the crucible of our collective humanity, which will have a story that only we have the power and the duty to write. Thank you. [Applause] So, this is the fun part where we have questions for ten or fifteen minutes. And I get to point the fingers. So, who has one? Yes. Yes, right there. Hi. Hi. My name is Sidra. Through this lecture, I learned how AI, like you said, is actually the reflection of our light.
But I was thinking more in terms of, I don't know, instead of us resisting that narrative (of course, it is a reflection of our light), instead of resisting the narrative that we cannot work with AI, we have to work with it as an assemblage. In terms of that, I was thinking a lot, because my PhD is on decolonising Western epistemologies, and my experiment, like yours, told me a lot of things about AI and its capability. I was thinking what role AI can play in helping us decolonise Western epistemologies, because the results we get from it are based on mass narratives, and those narratives are again more Eurocentric. So I just wanted to understand if there is a way, because what I understood today is that it's more of a reflection rather than an extension. What would you -- I hope I make sense? Yes. Yes. Absolutely. So my answer is yes, we can use AI to help us with those kinds of missions, with 'help' being the operative word. That is, we have to understand what its limited role is there, right? For example, you mentioned the efforts that we might make to decolonise the curriculum. We might say to a large language model, 'Okay, here's my syllabus. I would like to decolonise this syllabus.' Does this model understand why you want to do that? Does it understand why it's important? By the way, if you ask why it's important, it will give you an answer, and it'll sound kind of plausible. That doesn't mean it understands the answer, right? What's going to ensure that you understand it, as opposed to getting into that kind of mindless mode where we basically just sort of transform our syllabi, our curricula, without thought, without decision, without value being part of the process? There are good and bad ways to do this sort of thing, right? And the values of learning and what we're there to do with our students have to guide that process, and that's our responsibility as educators. But it is so easy for that all to slip away and be lost when we have the ability to just say, 'Just find the sources for me, or better yet, you construct them into a learning architecture that will be decolonised, whatever that means to you, ChatGPT,' right? Again, it's about understanding the limitations of these tools and making sure that it's our values and our responsibility and our sense of confidence in our ability to actually make important decisions that is retained. But the other thing is, we should be asking ourselves at each moment, why am I using AI to do this in the first place? If I can do it myself, why am I not doing it myself, particularly given the immense environmental cost of using these tools? As well as the fact that they have been, in almost every case, trained on copyrighted data that was taken without permission or compensation. So, I'm not suggesting that we must resist all uses of these technologies; some people have made that argument, but I don't share it. But I do think actually there's a default argument for saying, what's your argument for using this tool? Is there a reason that will justify this expenditure of energy? Is there a reason that will justify you letting go of this work and handing it over to a machine? And there are cases where we do have a cogent argument, especially for something that's important to be done that we literally cannot do ourselves. Because of the mathematical power of these systems, there are kinds of problems they can solve that might take us centuries.
The whole point about DeepMind's AlphaFold winning the Nobel Prize was that that model, which is not a generative AI tool like ChatGPT but is a very large machine learning model, was worth the expenditure because it can solve a problem that will save lives, one that we could not have solved on our own without perhaps decades or centuries of time lost. Those are the kinds of decisions that I think it's incumbent on us to reflect on, rather than mindlessly saying, 'I have a task. Let me let AI fill it.' Yeah. Great question. Back here. Are we going 'til 6:30? Is that the stopping point? Yes. Okay. Thanks, Shannon. Mirrors and even Tetris, for example, are very flat and visual. Humans are very multi-modal and multi-sensory. So I was wondering, rather than a mirror, what metaphor should we be aiming for? Um, well, I think all metaphors have their limits. We have to recognise the limitations of the metaphor, but I don't think it's unable to cope with these particular differences. We can reflect images, but we can also reflect sound. We can reflect text. We can reflect code. These models can reflect these different modalities of human expression. But I don't think that makes them something other than mirrors. Does that answer your question, or have I misunderstood your question? Yes, I was wondering if there is some way of having them all combined rather than separate? We already have that, actually. So, multi-modal models are already being released by these large AI companies. And again, I suppose that puts a little bit of pressure on the mirror, because we have mirrors that generally operate only in the visual modality, or at least in our experience. But we can imagine a mirror that can reflect to us different kinds of content, different forms of our cultural output. I think it just takes stretching the imagination just a little bit. Again, I don't think a multi-modal AI model is any less of a mirror than a purely text-based or image-based one. It's just a mirror of a more complex construction. I think there are qualitative differences, though, in what multi-modal models will do for us on the positive side, but also in terms of their dangers, right? It's multi-modal models that allow us to produce a deepfake of you that not only looks like you, but moves like you and sounds like you, right? So we have to be able to recognise that multi-modal models will increase the ability of these tools to entertain and serve our interests, but also to deceive and confound us. Thanks. Got a question here in the centre? Or is there one in the back? Wherever you were going is fine. We probably have two more questions, so we'll take the second one here in the front. So with your mirror analogy, you were talking about all the intellectual property that's being processed through these systems and being reflected. And the people who created it are not, you know, receiving due compensation. But imagine a future in which some music piece that is concocted by these systems has, through some combination of mirroring the past, ended up becoming very popular and makes lots of money for somebody. How do we disentangle... I wonder if you have thoughts about the issues associated with that kind of intellectual property, and how, you know, other than preventing these systems from producing that music in the first place, how do we disentangle the rights of those? Yeah. Sure.
So there are all different kinds of models for doing this, and I'm going to bracket for now the question of whether AI-generated music is something that we ought to be investing in, as opposed to funding human musicians who currently are suffering from an inability, right, to make a livelihood out of their talents. And that, I think, ought to be the first item on the agenda. But that being said, let's say there's some future world where we can have it all, right? We have plenty of funding for the arts, lots of space and time for human artists to do their work, and we have the ability to, in an environmentally responsible way, use AI tools to generate additional cultural outputs for us. In that scenario, there are a number of models. There would be artists who would be happy to have their work used to train AI models if they were asked. They might not request compensation in some cases, or they might request an initial licensing fee. There are plenty of models in the music industry for saying, 'Pay me once, and then it's yours, and anything that comes from it later, you own the profit for,' right? So we can negotiate various forms of fees that do not require us to later disentangle what was your contribution to the thing that made, you know, a million dollars. There are other models where we could imagine better systems for data ownership, where we have shared governance and collective ownership of data that is, let's say, publicly funded, as much of the arts is, as well as the sciences. I mean, one of the things that bothers me is that all of these models can suck up all of the scientific data that has been largely funded on the backs of citizens for the public good, right, by research councils and universities. And all that knowledge then gets to be taken by an AI company and turned into a proprietary product that they sell back to us at whatever price they choose, right? So I think there are broader issues here of who owns data, and particularly, for data that is created for public benefit, how that public benefit can be retained. But I think there are plenty of paths open there, to answer the question you've posed, which is an excellent one. And one last question up here. Sorry. You can ask me later. Hi. Yeah. Thanks for the great talk. I'm just wondering what your view is in a world where universities are selling AI courses to students, and students view those courses as stepping stones to jobs in tech companies that then sell products to universities, for example. What's the role of universities in this area? That's a great question, and I think one that universities need to be asking right now. But it goes back, I think, to my previous answer, in that there are tonnes of important human problems for which we need AI. I don't know how much we need generative AI, but we definitely need other forms and techniques of AI to advance our goals when it comes to health, when it comes to the environment itself, when it comes to fighting corruption, when it comes to identifying victims of human trafficking, right? I can give you a laundry list of really vital use cases such that, if universities aren't investing in the research that can advance our ability to solve those problems, there's a clear argument that universities aren't fulfilling their mission.
However, what's really happening right now is that universities are being used as a tool by AI platform companies to fund research that will eventually be walled off and sold off for profit, to make commercial products that very often aren't designed to solve those kinds of problems, but are designed to simply grab more of our attention or control more of our behaviour for the benefit of a few. I think universities need to be thinking about what kind of AI research we are funding, but also, how do we maintain our independence? Particularly public universities: how do we maintain our independence and our ability to serve our mission to a wider set of publics than the six large AI companies who are increasingly buying the expertise, hoarding the chips that universities need in order to do this kind of work at the edge, right, and controlling the terms on which this work is funded? That needs to be the battle fought right now. Thank you. [Applause] Thank you, Shannon. [Applause continues] Thank you very much, Shannon. Thank you to everyone for being here, and for that extremely profound endpoint that we ended on there, in front of yourself as well, Peter. [Laughter] Perfectly. So a huge thank you for all of you being here. Also, lots of children here as well from schools and families. Shannon is in the Edinburgh Futures Institute, so come back and see Shannon and ask more provocative questions. So we're going to now ease out of the building. There's a cafe on one side, called Canopy. Please find your way there, and Peter will want me to tell you all it's the University of Edinburgh's Canopy cafe. So please just find yourselves there. And thank you again for coming to our first inaugural lecture. That was amazing, Shannon--thank you very much. Thank you all. Oct 23 2024, 17.15 - 19.00. Inaugural Lecture: Professor Shannon Vallor. In a Mirror, Dimly: Why AI Can’t Tell Our Stories, and Why We Must