It’s in the name, sure... But is AI really ‘intelligent’? Does it ‘think’? What do you know about how these tools were made, who owns them now, and who determines the way they work? How can you turn AI to best advantage in business and personal spheres?
Now that AI’s on the tip of everyone’s tongues and offered as a service by almost every company you encounter, it’s time to examine your own relationship with the technology.
Dr Sandra Peter wants to make technology work for people. Through her research at the intersection of business and cutting-edge technology at the University of Sydney Business School, Dr Peter explores where AI is at this moment and where it came from, and invites you to think about how to harness AI in your own life – and where to consider placing limits.
For more on the rise of AI, listen to Dr Sandra Peter in this forum held at the University of Sydney in 2023.
Mark Scott 00:01
This podcast is recorded at the University of Sydney's Camperdown campus on the land of the Gadigal people of the Eora nation. They've been discovering and sharing knowledge here for tens of thousands of years. I pay my respects to elders past and present and extend that respect to all Aboriginal and Torres Strait Islander people.
Ross 00:28
Yes, when I talk to it, it is a little bit like I'm talking to it as a person. As I said, I'm using it as an assistant, an AI assistant, and it's a little bit like having a colleague. I don't see it as a parent-child sort of relationship in either direction. I'd see it as an equal, and so I'm interacting with it, or I'm choosing to interact with it, in that way, as a colleague or an equal.
Mark Scott 01:06
Before tools like ChatGPT were commonplace, the major concern about AI was that it would come for our jobs, all of our jobs. But in recent years the conversation has changed a bit. Analysts seem to agree that AI isn't necessarily coming for your job; rather, most of us will have to learn how to work with AI, and do that quickly. Meanwhile, even as the applications and benefits of AI grow, so too do the concerns. How do large language models like ChatGPT and Claude continue to develop? What does it mean that almost all the tools come out of the same American city? Are the rapid gains of the last 12 months about to plateau? And finally, will we get that digital assistant we've been promised?
Mark Scott 01:58
Dr Sandra Peter is Associate Professor at the University of Sydney Business School and Co-Director of Sydney Executive Plus, an experimental space for the application of cutting-edge technology in business. Sandra, you've said that everyone will have a relationship with AI in the future. What kind of relationship are you thinking about?
Sandra Peter 02:18
Thanks for having me, Mark. Yes, I do think all of us will have a relationship with AI, whether that will be building it, working alongside it, working for it, thinking about how to govern it, even befriending it, maybe marrying it.
Mark Scott 02:33
How to marry it? We might get to that down the track. Look, if we're all going to have a relationship with AI, we need to get a better understanding about where it all started, to understand our partner better. Now this all goes back to a conference at Dartmouth. Tell us about that.
Sandra Peter 02:48
The famous 1956 conference. Everyone talks about it as a conference, but it was really more like a two-month-long workshop, and to be fair, it was more like a two-month-long summer camp where 45 blokes got together. Eleven of them actually showed up to the meetings and spent a summer together trying to figure out how to build intelligent machines. This is 1956, almost 69 years ago. Elvis's 'Heartbreak Hotel' is playing and people are figuring out what James Dean's rebellious legacy is like, and they're together on a campus much like the one at the University of Sydney, and they think that if we could just teach machines everything that we know, they could be like us. And they were very, very optimistic about how long this would take. The idea was that in about a year, and some people were more pessimistic, it would take maybe two to five years, we would teach machines to be like us by encoding in them every aspect of learning that we know. And to do that, we basically started teaching computers rules: the rules to chess, the rules to decision making. We would describe objects. And that actually worked really, really well, except you can't teach everything to a machine. So lots of excitement in the beginning, lots of interest in the applications, lots of defence interest, lots of money pouring into this. But very soon we find the limits of this kind of technology.
Mark Scott 04:17
Was it an underestimation of the complexity of the task in practice?
Sandra Peter 04:21
Very much an underestimation, and a conviction that our brains kind of work like computers, and if we could just download everything into a machine, encode it into the machine, then we could make this work. And these expert systems, as we call them now, still exist. Rule-based algorithms still exist. Still today, you know, you call up a call centre: press one, press two. Triage still works that way. But the excitement in that very, very quickly dried up, and the money dried up.
Mark Scott 04:49
You know, there's lots of technology that we use every day, from the motor car to turning on the television set, and so many other things, that we don't really understand, or need to understand or appreciate the origin story, the background, how it all came together. But there are some who argue, look, you know, we really need to understand how AI came together and how AI works, because it's fundamental to appreciating the strengths but also the weaknesses.
Sandra Peter 05:15
I believe that's very true in the case of AI, regardless, you know, of whether you're a student in school or you're running a multinational corporation, because this technology works very differently to how we normally understand computers to be, or even how we intuitively think technology works. And it's helpful to think about this first wave of AI, the Dartmouth wave of AI, because that's how we intuitively think computers work. And we abandoned that kind of research pretty much in the 60s, and up until the 80s we couldn't really figure out how to make this new wave of tech work. And other than nerds in universities like us, no one in business or government really had an interest. But nerds in universities back in the late 70s and early 80s figured out a completely new way of doing tech, which is what gets us to ChatGPT today. The idea was: don't explain to the computer everything, all the rules to the world that we know, but rather show examples to the computer and let it figure out patterns from that data. Let's talk about how we encode images, for instance. Say I'm trying to recognise a handwritten letter or a handwritten number, say the number eight. The way I would recognise that is, I would show the computer lots of number eights. I would say, this is a number eight, and it would encode an average eight-ness, right? Based simply on just how dark each pixel is. If I show it pictures of you, it would do the same thing: figure out how dark each pixel is, to figure out whether that's you or not. Which means that it would always have a probability of, let's say, whatever percent that that is a number eight versus a number seven, which is also just other dark grey pixels. We do the same with things like ChatGPT, with language: figure out what words go with what other words in context. We do very, very complex mathematics, but at the base layer it's just figuring out and encoding probabilities into large artificial neural networks.
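To make the "average eight-ness" idea concrete, here is a minimal sketch in Python, assuming NumPy and toy 8x8 patterns as stand-ins for real handwriting data: average the pixel darkness of many example eights and sevens, then score a new image against each average and turn the scores into probabilities. It illustrates the approach Sandra describes, not any production recognition system.

```python
# A minimal sketch of the "show it lots of eights, encode average eight-ness" idea.
# Toy 8x8 patterns stand in for real handwritten digits; the answer is always a
# probability per digit, never a certainty.
import numpy as np

rng = np.random.default_rng(0)

def make_examples(base, n=100, noise=0.2):
    """Noisy copies of a base pattern (stand-ins for real handwriting scans)."""
    return np.clip(base + noise * rng.standard_normal((n, 8, 8)), 0.0, 1.0)

# Two crude 8x8 "base" digits drawn as strokes of dark (1.0) pixels.
eight = np.zeros((8, 8))
eight[1:7, 2] = 1.0                       # left stroke
eight[1:7, 5] = 1.0                       # right stroke
eight[[1, 3, 6], 2:6] = 1.0               # top, middle and bottom bars

seven = np.zeros((8, 8))
seven[1, 1:6] = 1.0                       # top bar
seven[range(2, 7), range(5, 0, -1)] = 1.0 # diagonal stroke

# "Training": encode the average darkness of each pixel for each digit.
templates = {
    8: make_examples(eight).mean(axis=0),
    7: make_examples(seven).mean(axis=0),
}

def classify(image):
    """Compare the image to each average and return a probability per digit."""
    scores = {digit: -np.sum((image - avg) ** 2) for digit, avg in templates.items()}
    exp_scores = {digit: np.exp(s) for digit, s in scores.items()}
    total = sum(exp_scores.values())
    return {digit: float(e / total) for digit, e in exp_scores.items()}

# A never-seen, slightly messy eight.
test_image = np.clip(eight + 0.2 * rng.standard_normal((8, 8)), 0.0, 1.0)
print(classify(test_image))   # e.g. {8: 0.99, 7: 0.01}
```

Real systems use large neural networks rather than a single averaged template, but the underlying move is the same: learn from examples and output probabilities.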
Mark Scott 07:13
And when we talk about intelligent machines, is that what we're really saying: rather than processing a solution we've given it, it's trying to work out the answer itself?
Sandra Peter 07:22
When we talk about intelligent machines, we're simply talking about machines that have learned patterns from data and then can work with those patterns. We also now have machines that can generate those patterns, because kind of before COVID we figured out that if these machines could recognise patterns, and we flipped those algorithms around, we could get them to generate, whether it's faces or, now with ChatGPT and other large language models, text. But it's simply recognising the patterns and then generating those patterns. I think we got into a lot of trouble back at Dartmouth by calling them intelligent machines. I think if we had called them anything else, we wouldn't be trying to associate them with human intelligence so much, but rather focusing on the capabilities that they have, rather than comparing them to our intelligence.
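The "recognise the pattern, then flip it around to generate" idea can be sketched in a few lines: count which word tends to follow which word in some example text (the pattern-learning step), then sample from those counts to produce new text (the generation step). This is a toy bigram model with a made-up corpus, nothing like the scale or mathematics of a real large language model.

```python
# Learn "what words go with what other words", then generate from those patterns.
import random
from collections import defaultdict

corpus = ("the cat sat on the mat the dog sat on the rug "
          "the cat chased the dog").split()

# Pattern learning: record which words follow which word in the example text.
follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)

# Generation: repeatedly sample a plausible next word from the learned pattern.
def generate(start="the", length=8, seed=0):
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

print(generate())  # e.g. "the cat sat on the rug the dog"
```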
Mark Scott 08:14
So is it intelligence?
Sandra Peter 08:17
Not in any way that a human is intelligent. We've built something very, very different to how we think. These are language models; they work with language. All of our intelligence as humans is pre-language. We don't need language to recognise things. We don't need language to recognise faces or to learn things. And I think the conversation around intelligence really takes us away from thinking: what have we built that works in very different ways to us? And how can we lean into the strengths that this technology has, rather than always trying to bring it back to accuracy and to things that are much more human?
Mark Scott 08:54
If we want to understand what AI can do, you know, where do we really start?
Sandra Peter 09:00
I think the most useful thing is, yes, understand a little bit about how it works, but I think more useful is to think about what they can do. My colleague, Professor Kai Riemer and I developed something that we've been using for a couple of years now, called the capability stack, which really allows you to think about how these machines work in different capabilities. The first thing being recognition, right? Learning those patterns and then being able to recognise those patterns. And that's what happens when you enter the country and there's a machine at customs that recognises your face.
Mark Scott 09:34
As long as it hasn't been too tough a flight, or no machine can recognise you. But yes, that's a starting point.
Sandra Peter 09:40
I will use that as the reason why it always happens to me. Or your licence plate when you go to the mall, right? We've been doing that since the 1980s, when we were able to recognise numbers, but cancer as well, right? Recognising tumours, or recognising potholes. In Sydney, for instance, we have cameras on, say, garbage trucks that will recognise potholes in streets and so on. So you've got recognition, then you can think about classification. So if you can recognise patterns, you can detect subtle variations in those patterns, and you can classify those patterns. So the albums on your phone, right, that tell you, well, these are pictures of your family, and these are pictures of your pets and so on and so forth, but also skin lesions or fraud calls, anything where past data can allow me to detect those subtle variations and put things in buckets. So if I can recognise things, and then I can classify them, I can then predict things. So based on past data, I can predict what future outcomes might be. And if you're thinking about things like predictions, estimating flight arrival times, we're getting so good at this that the airport can do it, but also Google Flights can do it, right, and it will tell you, hey, your flight is likely delayed, you'll end up at this gate and so on, because we have really, really good past data. But also delivery routes, so you can make sure that your packages arrive on time and they've gone to the warehouse in the shortest possible way and so on, or optimising energy networks. If you can recognise, classify and predict, you can recommend what should happen next. And there are really, really cool ways in which you can bring this together. If you're thinking about things like recommendations, right, we have facial recognition on pig farms, and we can recognise pigs, and we can track them. We can also listen to them, so recognise sound: you can hear when they're coughing, and it can make a recommendation to the farmer to say, take that animal out and attend to it, it hasn't been eating enough, or it's been walking a bit lame. Or, you know, when Taylor Swift was here, we had AI that looked at the 80,000 people at the Sydney arena and saw how they moved, and we had various sources of data that could tell us, hey, there might be a reason to attend to this particular area, or we might be able to improve flows of people in this particular area. So making recommendations is a really good one. And if we can make those recommendations, we can automate things. Large wind turbines, right? You can automate decision making: if you see an eagle come by, please slow down and let the bird pass. Or if you've taken a bus in Sydney, if you're from Western Sydney, you might have been on an AI-enabled bus that will green-light the bus if it's running late. Not where I live, mine run on a dynamic schedule. But this can happen. And then you get to the last two layers, which are generation and interaction. So generating things, anything: synthetic voices, code, video. Or interaction, things like ChatGPT, where you can interact, increasingly, just by speaking to these systems.
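As an illustration of how a couple of these layers connect, here is a minimal sketch, with entirely made-up numbers, of going from prediction to recommendation: fit a simple model on past flight data, predict the delay for a new flight, and turn that prediction into a suggested action. It is a toy, not how any airport or airline system actually works.

```python
# Prediction layer: learn from past data; recommendation layer: suggest an action.
import numpy as np

# Made-up past data: [scheduled_hour, rain_mm] -> observed delay in minutes.
features = np.array([[6, 0], [9, 2], [12, 0], [15, 8], [18, 5], [21, 12]], float)
delays = np.array([2, 10, 5, 35, 20, 50], float)

# Predict: fit delay ~ w0 + w1*hour + w2*rain with ordinary least squares.
X = np.column_stack([np.ones(len(features)), features])
weights, *_ = np.linalg.lstsq(X, delays, rcond=None)

def predict_delay(hour, rain_mm):
    """Predicted delay in minutes for a flight at this hour and rainfall."""
    return float(np.array([1.0, hour, rain_mm]) @ weights)

def recommend(hour, rain_mm):
    """Turn the prediction into an action for the operator or traveller."""
    delay = predict_delay(hour, rain_mm)
    if delay > 30:
        return f"Expected delay ~{delay:.0f} min: notify passengers, rebook tight connections."
    return f"Expected delay ~{delay:.0f} min: no action needed."

print(recommend(hour=17, rain_mm=10))
```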
Mark Scott 12:51
So it sounds like even though I might not think I'm actively using AI, I'm surrounded by it, and the technology and the experiences I'm having every day are already making dramatic use of that technology.
Sandra Peter 13:02
Absolutely. So for most people, their experience of AI is thinking, well, have I used ChatGPT today? But you know, recommendations around what to read, what to watch on Netflix, the way your bank accounts are monitored, the way
Mark Scott 13:15
Google Maps
Sandra Peter 13:16
Google Maps
Mark Scott 13:17
the way we're living and moving is surrounded by it. What about, then, when I think through, yeah, but am I personally taking advantage of what it offers, and this sense that there is going to be a device or a tool that helps me do my work each day, or helps me deal with the complexity that comes at me each day? What about in terms of personal productivity?
Sandra Peter 13:40
In terms of personal productivity, I think there are huge gains to be had from AI. And the thing that you'll hear me say again and again is that I want people to experiment with AI, right, and to think quite creatively about how they can use it.
Ross 13:57
Hi, I'm Ross, and I'm 64 years of age. I work in the public sector, and I'm playing around with AI at the moment. ChatGPT seemed to be the one that was suitable for me. It was readily accessible, it was relatively cheap. Well, I started in the first few weeks just on the free thing, which limited the number of inquiries or the searches or questions that I could ask it. Since then, I have opted for the monthly subscription, and I'm finding that to be of really good value. I'm tending to use it all day, every day, and it just makes me more productive. I'm using it to do my travel for an upcoming European trip. One of the ways that I've used it was I heard that there were AC/DC concerts on in Europe, and I thought, well, that'd be pretty cool to go to. And then I asked it, well, how much were the tickets, and not only did it tell me how much the tickets were, but it gave me who the vendors were, what the secondary marketplaces were, and the prices that you could expect to pay there, depending on the seats. And then I thought, before I pull the trigger and buy the ticket, what are the entry requirements for me as an Australian to get into Warsaw, Poland? No point buying the ticket and then not being able to enter the country. It came back with quite a comprehensive response, which is more than what you get in the search engines. And then it came back with the health and safety considerations. And then it also gave you the recommendation of staying informed through the Smartraveller website from the Australian Government, preparing my documentation, monitoring the developments in Europe around entry and visas and so forth, along those lines. So that was quite a comprehensive response, not just 'tickets around $175 US', which I'd normally get from one of the search engines. Whenever it gives me a recommendation, I ask it to give me the sources. The other thing that's important is that I've asked it to maintain memory of our conversation. I've had the occasion to ask it something, or to repeat something back that I knew we discussed two weeks ago, and it's been able to go back and pull that information and bring it back and re-present it to me as I've wanted it to. So it was responding to me fairly clinically, and then I found that I could ask it to respond to me in a friendly manner, in an informal manner. And sometimes I get a 'hey, Ross', it'll respond to me like that. So that's just nice. It's just a softer way of using the technology personally that I find I like. Yes, when I talk to it, it is a little bit like I'm talking to it as a person. As I said, I'm using it as an assistant, an AI assistant, and it's a little bit like having a colleague. But I don't see it as a parent-child sort of relationship in either direction. I'd see it as an equal, and so I'm interacting with it, or I'm choosing to interact with it, in that way, as a colleague or an equal. One of the drivers behind me getting into it now, at this particular stage, is that I am getting older, and I think it's beneficial for us all to keep across technologies. It's beneficial for us to keep up to date with what is happening in the AI space, or any emerging technologies like AI, so that we can take advantage of it as we age.
Sandra Peter 17:51
We've all used it to summarise documents or to give us an outline for something to start us thinking. And I think, fundamentally, if you're doing any kind of knowledge work, your relationship to text will fundamentally change. You'll never start with a blank slate. But I always encourage people to kind of think a step forward. Yes, you might be able to use it to help you write a post on LinkedIn. I can't even turn it off: you turn it off, and at the end it says, do you want to make it better with AI? Right? But why not ask it to tell you how people are going to troll you based on the post that you've written? So asking it to critique your work, asking it for feedback, are good ways to experiment with what it can do personally. I also encourage people not to think about it just as text. What about your relationship to images? You are now able to basically edit pictures on your phone with just a swipe of a finger; you can get people to smile in your pictures when they looked very miserable at your Christmas party. Or your relationship to audio or to video: I have a clone of my voice now. I'm able to type in sentences and it will say them, while my team uses it to try to get raises or days off.
Mark Scott 19:01
And I can just assure the audience that you really are here. This isn't the AI version of your voice that's answering these questions.
Sandra Peter 19:08
You will need to show proof. I think a picture is in order.
Mark Scott 19:12
So are you saying that you've used AI to kind of create a clone of yourself that can do things that you can't do?
Sandra Peter 19:19
It can definitely do things that I can't do, which is speak in perfect English, and also speak Mandarin, for instance. But it can also do this without, you know, spending an hour getting ready and having sound editors and hair and make-up.
Mark Scott 19:34
So do you have that on your machine here?
Sandra Peter 19:35
Yes, I do.
Mark Scott 19:36
So just kind of booting this up here.
Sandra Peter 19:39
So just to give you an idea of what I sound like in real life, this is real me, not the clone.
Mark Scott 19:44
I was going to say
Sandra Peter’s Avatar 19:45
The future is already here. This is not actually Sandra. I am an AI avatar of her using only a couple of minutes of video of the real Sandra for training, all you need to do is type a script and voila, you have a digital version of you ready for anything. Don't worry about real Sandra being replaced. I'm not here to take over her job. I'm here to make her job easier. But I can't say the same for Phil from accounting. Sorry, Phil.
Mark Scott 20:12
But it looks and sounds… I mean, you would never pick that.
Sandra Peter 20:16
My mum can't pick it, yeah.
Sandra Peter’s Avatar 20:19
Hola! Bienvenidos a esta presentación. Soy el Avatar de Sandra y hablo español. [Hello! Welcome to this presentation. I am Sandra's avatar and I speak Spanish.]
你好！我是 Sandra 的化身，我能说很多种语言。[Hello! I am Sandra's avatar, and I can speak many languages.]
Mark Scott 20:27
Wow. So how do you feel when you look at that? Do you feel that that kind of just expands your capacity to do the things that you want to be able to do?
Sandra Peter 20:38
On the one hand, it does make me quite excited, because there are things that I can do now that I wasn't able to do before, or I can do them faster, or I can do them cheaper. On the other hand, we've accelerated this technology so much. This is over-the-counter software that costs 50 bucks a month, using a couple of minutes of training footage of me in the public domain. The fact that this particular software has all sorts of guardrails in place to make sure that people can't create a fake you, for instance, Mark, without your approval, that's comforting. But there are many, many pieces of software out there where we can do this, and the bar to detecting it is, you know...
Mark Scott 21:19
If your mother can't pick it.
Sandra Peter 21:20
My mother can't pick it. But we were over at Berkeley last year, and we ran it through quite a few different types of software that detect whether this is a fake video or not. And the face, the movements, passed I think it was four of the five systems, and the voice was about 50:50, but it passed many of the systems. And again, this is over-the-counter software on untouched footage. So the fact that we can do this now at scale, and for very, very little money, does scare me a little bit.
Mark Scott 21:47
One of the things that you've really been focused on is educating leaders about AI. Why is that so important?
Sandra Peter 21:55
Leaders will decide how this comes into our lives, into our economies, into our societies, into our democracies. So I think educating them in and around the technology is the first step. I always encourage organisations and individuals to experiment with the technology and then socialise the technology. But in order to be able to both experiment and socialise, first you need upskilling. You need to understand how it works and what it can do, think about use cases, and then experiment with those safely. To be able to do that in your organisation, it's not just you that needs to understand the tech, but everybody. You need to share a common language around the organisation to be able to socialise those experiments and have people experiment themselves. And we will need to change the language that we have in many organisations around things like return on investment. We've always been used to a particular type of economics when it comes to technology: the more we use it, the cheaper it gets; they're systems that are not probabilistic systems, and so on. How do we change that conversation, and how do we rethink things like return on investment, away from just augmentation, where it can make me or our organisations more productive, to transformation: what if I designed my job for a world with AI? How could I redesign my organisation so that it's AI-ready, rather than just bringing AI in to solve problems for my organisation? So I think the first step for all of those things is upskilling.
Mark Scott 23:27
And the leaders that you're engaging with, I mean, to a degree they'll be self-selecting. You know, if they're talking with you and meeting with you, they're interested. But do you think many leaders are open to the inevitability of the revolution that's about to land, and the opportunity that comes on the back of that, or is there still a little bit of a desire to say, well, I'll wait till it gets to me?
Sandra Peter 23:51
I'll have to say that in Australia we're quite cautious when it comes to technology in general. And if you look at studies that research how excited we are about artificial intelligence, in Australia we're really not that excited. I think the number for us is around 39%, and in countries like China or Singapore it would be in the 80s, right? If you ask people if they're nervous about the technology, boy, are we nervous in Australia. We are one of the most nervous countries when it comes to bringing that into our organisations. So we have that challenge already in Australian businesses. But speaking to leaders, and you said self-selecting, well, some of them self-select, and we've trained up, I think it's almost 2,000 leaders now, across business and government and boards of directors and executives. But some of them have benefited from leadership who said, hang on, we're going to upskill, say, 250 of our top executives in this tech so that they can lead the change that we want to see. So I think that's the right combination: having a few people who are very excited about the technology and who have to experiment, and then I think we also have to recognise that, for better or worse, we're all in R&D now. You and I have access to the same technology here at the University of Sydney as our teenagers do, as our students do, or as our suppliers do. So it's incumbent on all of us to move a little bit faster.
Mark Scott 25:20
You talked a moment ago about the innovation we've seen out of China, and around the time of the Trump inauguration there was that big announcement and revelation about DeepSeek. Why did that shock everyone, and what's the implication of what DeepSeek has done? And I'm really wondering whether, for the West, or particularly the US, which has had so much leadership in this, that's a real Sputnik moment, where it feels like the world is charging ahead with this technology.
Sandra Peter 25:48
I've heard it called a Sputnik moment. It's the right kind of analogy. Or, someone said, it's like the Chinese have built a Ferrari for the price of a Daihatsu. It's an important moment, because apparently this took only about 60 days to build and less than $6 million to train. Now, I'll put a big asterisk on that number, because there's also the money you spend beforehand, researching the thing and so on. So at, you know, $6 million, the estimate is that that's about 10% of what US firms currently spend on these things. But the interesting thing is that apparently it costs about 98% less to run. Oh, and the entire code is now...
Mark Scott 26:30
Available
Sandra Peter 26:30
Available for free
Mark Scott 26:31
Open source. Yeah.
Sandra Peter 26:32
It is a very big moment, because investment in infrastructure and energy costs have plagued the AI conversation in the West for a very long time. So I think this is an important moment, because the public has also paid attention. DeepSeek is now the most downloaded app in the US, so people are using it, and it's actually not too bad. Quite a few people from the West have praised these advances, again with a pinch of salt, because we're still looking into what those claims mean. But I think there are two interesting questions raised by this. The first is: does the race to build better AI have to mean more data centres and more power? I think what this proves is that that is a much more nuanced conversation, and that there's a lot of room for optimisation; sometimes having access to a lot of resources means you're not forced to fundamentally rethink energy use and so on as you're in this race. But the second thing is, it has shown that the West is not the only player in this game, and it doesn't require the latest generation of chips to make these advances.
Mark Scott 27:37
You're a force of nature around here at the University of Sydney, Sandra, but it strikes me, whenever we talk about these things, that you're determined to try and find a positive and optimistic opportunity that comes with this technology. Now, why is that important, and why is that the mindset that you bring?
Sandra Peter 27:56
Thank you for calling me an optimist. I think of myself as a cheerful pessimist. I do think this technology, for the first time, is coming into our lives much faster than we would like it to, and it is incumbent on all of us to do something to make sure it lands well. At the University of Sydney, we do have the best AI micro-credential, and we do try to upskill people to actually make a difference in this space. I think the technology has tremendous opportunities. As we said, AI is not one thing, it's a whole range of technologies. So it's absolutely fantastic; I use large language models every day to help me in my work and so on. But the downsides and the dark sides of it are real, and all of us need to be involved in that conversation, and the way to make a difference is to learn about it and participate in those conversations.
Mark Scott 28:48
And this has been a great conversation today. Thank you for your time, and perhaps we'll have the opportunity to talk more about this wonderful technology as it emerges and changes and grows in the time ahead. That's Associate Professor Sandra Peter from the University of Sydney.
Sandra Peter 29:01
Thanks, Mark, thanks for having me.
Mark Scott 29:05
And if you're enjoying The Solutionists, we'd love to hear from you. So leave us a rating and review in your podcast app.
Mark Scott 29:17
The Solutionists is a podcast from the University of Sydney produced by Deadset Studios. This episode was recorded at the Faculty of Arts and Social Sciences media room, and our thanks to the technical staff here.
The Solutionists is a podcast from the University of Sydney, produced by Deadset Studios. Keep up to date with The Solutionists by following @sydney_uni on Twitter, Facebook, and Instagram.
This episode was produced by Liam Riordan with sound design by Jeremy Wilmot and sound recording by Harry Hughes. Executive producer is Madeleine Hawcroft. Executive editors are Kellie Riordan, Jen Peterson-Ward, and Mark Scott. Strategist is Ann Chesterman. Thanks to the technical staff at the Faculty of Arts and Social Sciences Media Room.
This podcast was recorded on the land of the Gadigal people of the Eora nation. For thousands of years, across innumerable generations, knowledge has been taught, shared and exchanged here. We pay respect to Elders past and present and extend that respect to all Aboriginal and Torres Strait Islander people.