06. AI AND I

 


 
 

Could AI make this entire episode? And replace Eva and John altogether? In this episode we look at the range of tools and solutions that AI offers right now, and where it might go in the future. It feels like every week there's a new development in AI, so we hear from this rapidly changing frontline, including from within Klang Games itself!

 


Constant Dullaart is a Dutch conceptual artist, media artist, internet artist, and curator. His work is deeply connected to the Internet.


Sarah Al-Hussaini is the Co-Founder and Chief Operating Officer of Ultimate.ai, the leading European virtual agent platform for customer support.


Evans Thomas is a Game Engineer at Klang Games, who is helping build SEED, a large-scale, persistent virtual world that is pushing the boundaries of what constitutes an MMO world by offering a completely player-made universe.


LINKS

Ultimate.ai

Constant Dullaart

Klang Games

 
 
 

 
 
 




John AI voice: Welcome to Life Cycle, the podcast that looks at how technology is changing our lives! I'm John Holten and I'm here with my co-host Eva Kelley. Today, we're tackling a big and exciting topic: the future of AI. We'll be discussing the potential of AI and its implications for the world. We'll also be talking to experts in the field, so you can get a better understanding of the possibilities and potential of AI. So let's dive in and explore the world of AI. What will the future look like when AI is part of our lives? Tune in to find out!


Eva Kelley: Ummmm, John? Are you ok? You don’t sound good.


John AI voice: What do you mean I don’t sound good? I think I sound kind of good. In fact I think I sound excellent. Now I don’t have to work: I don’t have to flex my vocal cords or expend any energy. I’m going to sit back and relax and I’m going to just simply phone in my contribution to this episode.


EK: You sound really depressed. 


[Intro: The Life Cycle, a podcast about the future of humanity]


John Holten: So as you might have guessed, at the top of the show there you heard an AI version of my voice, reading out AI-generated text. For a number of years now, the rise of all things artificial intelligence has been on the up and up. It feels like there's an AI service or API for everything. Indeed, for the sake of making a podcast episode such as this one, we probably could technically farm the whole thing out: to ChatGPT for content, to resemble.ai for our voices (which is what I was using at the top of the show), and to other services for music, sound, copy editing, and transcription. There's a service for all of it. 


EK: Yeah, just listing recent articles I've come across in the last week shows the crazy range of AI services and products. Get this: Deep Fake Neighbour Wars, a TV show in the UK whose title speaks for itself, I guess, or does it? It's literally deepfakes of celebrities like Adele, Idris Elba and Greta Thunberg fighting neighbourly wars, but like, in a trash reality TV way. Then: a self-driving baby stroller whose AI will alert parents to various dangers, handy? Apple has launched a catalogue of audiobooks narrated by AI voices. Then there's Synthesia.ai, which offers to make a talking-head video for whatever text you input within minutes; or the dozens of somewhat sad (and probably inevitable) AI girlfriend/boyfriend/friend/relationship apps. And this is just service apps and programmes, like the fun stuff. There are massive moves in industry and business too: BioNTech, who helped create COVID vaccines, forked out around half a billion dollars at the start of 2023 for InstaDeep, a UK AI startup that builds AI-powered decision-making systems for enterprises. AI is everywhere. And it all moves so fast. Like, literally every day there is something new, I feel. 


JH: Yeah there definitely is. It's almost breathtaking. So I think in this episode we should look at how AI interacts with writing and storytelling, what we do. We can see if the stories that we tell ourselves are going to be fundamentally reconfigured by some of these new developments. 


EK: Ok, so basically let’s focus on how our job is going to be obsolete soon? [laughs]


JH: Yes, we can try and keep it relevant to what we do whilst making this podcast. But even beyond what we do. For example, when people feed text prompts into language models and get, say, artworks back. That's also AI, and we're not really going to focus on it in this episode, but we can point out that all the artwork for this season of The Life Cycle is actually a collaboration between Mundi, our executive producer, and Midjourney, which is one such AI text-to-image product.  


EK: Yeah there has been a lot of talk about ChatGPT since it was released in November 2022. Let’s have a look at it: what is it exactly? I know it was released by OpenAI, the AI research lab and corporation that is led by Sam Altman. People think it’s Elon Musk’s company, but he’s actually no longer on the board.


JH: Yeah, ChatGPT was, and at the time of recording still certainly is, incredibly popular. And it is fun to play with. Of course it's got incredible implications, from art and literature to education and health, but also for computing and technology in general. And I think before we look at whether or not we're going to lose our jobs and pass The Life Cycle podcast over to an AI, we can try and break it down first a little for ourselves.


EK: So ChatGPT, this is the thing where you type in a sentence and it presents you with an AI generated paragraph. You give it a prompt and it does the rest for you. Right? Ok, so tell me more about our enemy. 


JH: Or our friend, Eva. 


EK: We’ll see. 


JH: Yeah. OpenAI was launched in 2015, and it's dedicated to developing an open and friendly AI that's going to benefit all of humanity. To quote their slogan: Our mission is to ensure that artificial general intelligence benefits all of humanity. They do seem to position their origins and initial motives as preventing scenarios such as that of the Paperclip Maximiser outlined in this season's interlude. Although funnily enough, Nick Bostrom, who came up with the Paperclip horrorshow, has actually questioned their strategy. And so I'm going to quote him: "If you have a button that could do bad things to the world, you don't want to give it to everyone." And that again brings us back to what we've seen over and over, I feel, on the podcast, whether we're looking at cloning or synthetic alcohol or whatever it might be. Are we as a species prepared and fully in the know about the implications of these technologies? And if and when…


EK: And even if they also cause all sorts of trouble and problems, it's kind of inevitable that we play with these innovations and push them. Apocalypse, as we learned, technically means revelation. So is the glass half-full or half-empty?


JH: Yeah and history shows that we almost always overestimate the short-term impact of new communication technologies while underestimating their long-term implications. And we did this with everything from the printing press, photography, radio, movies, and of course the Internet. 


EK: Yeah that’s a survival strategy I think. Prioritising short-term gain over long-term gain. But let’s go over what we’re talking about, what is ChatGPT put simply?


JH: So what we’re talking about with ChatGTP is a large language model (LLM) that has been augmented with a conversational interface. The GPT in its name stands for Generative Pre-trained Transformer. A transformer is a model that can learn basically, and can kind of improve through so called  self-attention. And I always find that term a little narcissistic to me, but how and ever, it’s not like the thing is sentient. Mostly these things are used in what’s called NLP - or natural language processing, but we’re going to get to that in a little bit. 


JH: So this model has been trained on a vast amount of text, most of it scraped from the internet, and it's read and, in a sense, memorised all of it. And so OpenAI has created an API - which, I should point out, stands for Application Programming Interface: basically an interface that lets two different pieces of software talk to each other. They've wrapped it in a website that allows us, the general public, to interact with this particular massive language model. And it gives a pretty good impression of being sentient even, and you're able to have a really smart, responsive conversation with it.


EK: Yeah I mean it’s even led some people to humanize the machine, to call it sentient. There was the Google AI engineer, remember? Blake Lemoine, who was fired after saying LaMDa, Google’s dialogue technology, was sentient.


JH: Yeah and so I think it’s interesting if at this point we go and listen to Sam Altman, the CEO of OpenAI, and hear what he has to say himself.  He has some surprises that he likes to focus on. He’s actually way more optimistic than current experts.


Sam Altman: I think that the biggest systemic mistake in thinking people are making right now is they’re like ‘alright, maybe I was skeptical but this language model thing is really going to work and sure I like images and video too, but it’s not going to be generating net new knowledge for humanity, it’s just going to do what other people have done. And that’s still great, that still brings the marginal cost of intelligence very low, but it’s not going to go like and create fundamentally new, it’s not going to cure cancer. It’s not going to add to the sum total of human scientific knowledge.’ And that is what I think will turn out to be wrong that most surprises the current experts in the field. 


JH: And it helps science and tech in really incredible ways, one example being Copilot, which is a collaboration between GitHub and OpenAI. We should point out, for our non-coder listeners, that GitHub is a website and service that allows coders and engineers to store and share their code. It allows them to collaborate on software projects. 


JH: One coder at Klang said, 'It's kind of like a Wikipedia for code'. And Copilot is an incredible tool that's come about for coders. It allows them to save huge amounts of time, and also clean up their work and solve problems. It's kind of changing how they do their day-to-day tasks, right now, even in this building probably. But we've also seen things like AlphaFold from Alphabet's DeepMind, which is a deep learning programme that predicts how proteins fold and structure themselves - it actually helped with understanding how the SARS-CoV-2 virus's proteins organise themselves.


EK: Altman also commented on the alignment problem, which is how we make sure all these advances in AI actually stay in tune with our goals, as in humanity’s goals. Which is what we kind of had an insight into in the Paperclip Interlude.


SA: Yeah so the alignment problem is like we're going to make this incredibly powerful system, and it would be really bad if it doesn't do what we want, or if it sort of has goals that are either in conflict with ours, many sci-fi movies about what happens there. Or goals where it doesn't care about us that much. And so the alignment problem is how do we build AGI that does what is in the best interest of humanity? How do we make sure that humanity gets to determine the future of humanity?


JH: Yeah, in a sense the hope is that these models can become smart enough to teach themselves which problems to overcome, so they will be able to identify our 'human' problems such as racism. But he has bad news for us and our jobs, Eva: he singles out so-called 'creatives' - I've never really liked that term - as being the surprise victims of AI replacement when it comes to jobs:


SA: I think, and I think we’re seeing this now, that tools for creatives, that is going to be the great application of AI in the short term. People love it, it’s really helpful, and I think it is, at least in what we’re seeing so far, not replacing. It is mostly enhancing. It’s replacing in some cases, but for the majority of the kind of work the people in these fields want to be doing, it’s enhancing. And I think we’ll see that trend continue for a long time. Eventually, yeah it probably is just like, in 100 years it can do the whole creative job.


JH: So he points out that 10 years ago, and I think that this is a fair point, everyone said AI would replace blue-collar jobs first, then move on to technical white-collar jobs and even coders or engineers, before threatening to replace the jobs of 'the creatives'. Instead, in fact, the opposite seems to be true. Of course 'creatives' here covers a wide range of disciplines, but he admits that in general it's kind of hard to predict. And it is - I always think of Excel spreadsheets. They were a massive software invention that threatened to replace all sorts of office workers, like accountants and different data-processing jobs. But instead they've become just one more tool that many of us use. 


JH: And that’s what I’m really interested in. What we’re talking, when it comes to AI often, we’re talking about tools. And so I had a chat with the artist Constant Dullaart, and he’s had some really interesting art projects in this domain over the years. And we got to have a chat about how he sees tools as art and creativity, and perhaps because artists and creative people aren’t necessarily the ones who invent these tools, sometimes we don’t necessarily know how to talk about them. 


Constant Dullaart: And then if you look at artificial intelligence, or GPT-3, or like these things, you would think, okay, so how? How do you then bring this into society? And like, who makes those decisions? Who has the final call? I had wonderful conversations with Linux core developers, for example, that I asked: are there any ways that you could add a kind of waiver of intent, or something where you would say, we wrote this with the intent that it wouldn't be used for killing people or torturing people, for example, because of course it can be used in weapon systems or for surveillance or for whatever. 


CD: But then they were very hesitant to think of how that would even manifest itself, like, how do we even? And why would you say that? Like, why wouldn't you just release the tool? Like, under what kind of conditions? So, you know, this is something where I think there's not that much lexicon, there aren't that many words, for how do you release a tool? How do you know what effect it will have?


CD: So I've only in one lecture ever heard a potential expression we could use for this principle, which could be a tool horizon: as in, the maximum capacity of a tool. So for example, after a while there is a limitation to the canvas size in Illustrator, or a limitation to something else. And it's true, you can only find that by stressing the tool. But it's interesting that there's, at least within my research, not much there - I'm trying to deal with that as an artist.


JH: So what Constant is talking about there is the fact that we need to figure out just how best to push these tools to their limits, and the best way to do so is to know what it is you want out of them. Sure, ChatGPT may be able to do mundane tasks like summarising long articles, drafting presentations, or writing instructions. And, as we haven't really touched on and aren't going to dwell on too much, academic essays and schoolwork can also be written by ChatGPT, which is a massive ethical challenge in the AI sphere right now.


JH: But really it's going to be best used as a tool to augment already existing human capabilities. So for example, if you don't already know how to code, it's not going to help you code. If you don't already know what a good poem is, or indeed the use of poetry in human life, it's not really going to radically change your understanding of what it means to be poetic. 


EK: But you said we’d get back to the base of ChatGPT and other language models. And of what is of interest to us as writers - natural language processing. NLP. Which is basically the basis of what all of this is, marrying linguistics with AI, making it sound natural, essentially.  


JH: Yes, so I went around the corner to talk to our neighbours here in Kreuzberg, Berlin, to see how AI is being put to business use, and I got to look at NLP in action. 


[Noise of the city]


Sarah Al-Hussaini : Yeah, so my name is Sarah Al-Hussaini. I am the COO, and one of the founders of Ultimate. And Ultimate is the world's number one rated support automation platform.


SAH: Yeah so Ultimate.AI is the number one rated customer support automation platform globally. And that was rated by the customer support community on G2. So it's really cool. We have the pleasure of working with some amazing brands, you know, from Zalando to Zendesk.

These brands use Ultimate to design virtual agents that work as real users in their CRM. So we're just this real user license just like every other one of their human colleagues. And we can do everything in the CRM that a human agent can. So we work across any digital support channel, we can do actions in the back office system, we can issue triggers anything that a human agent basically does day to day, our virtual agents do too. 


SAH: We're not creating a virtual world and then trying to get humans to interact with robots for the first time. We're going, hey, you're already speaking to humans in support, right? And we have to, we have to, in essence, compete with the human conversation. So your expectation is a human conversation.  Anyway, so that has changed, like human machine conversations have changed a lot. In the four, five years, I've been building this company.

 

JH: So ultimate.ai uses a transformer-based model, like we mentioned earlier, that allows them to be incredibly specific in relation to the conversation any one particular customer support machine is going to have with a human. It can also work in a bunch of different languages, depending on the needs of ultimate.ai's clients:


SAH: It takes all the historical customer service conversations. So what we do is, let's say there's a random brand that you might buy from - I'm trying to think of a super, super global brand, like a Foot Locker - we'll go in and our AI will pull all of the historic customer service conversations that Foot Locker has ever had with their customers. And it will learn from those conversations all the different questions that a Foot Locker customer asks, and all the different ways that agents are responding. And then we'll enable Foot Locker to design all these beautiful automated flows: you know, what to do if this, then that. And that's all really just conversation design. 


SAH: That's not necessarily AI. That's human artistry, how to design a great conversation. But the AI part is understanding what Foot Locker customers want to talk about, and being able to respond conversationally, like, super accurately. Just understanding people - at the end of the day, that is what conversational AI is. It's just intent recognition; we call it understanding people's intent.
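
To make "intent recognition" a little more concrete, here is a deliberately tiny Python sketch. It is nothing like ultimate.ai's transformer-based models - the intents, example phrases and scoring are all invented for illustration - but it shows the basic shape of the problem: map a free-text customer message onto a known intent.

    # Toy intent recogniser: each intent comes with a few example phrases.
    INTENT_EXAMPLES = {
        "where_is_my_order": ["where is my order", "my parcel has not arrived", "track my delivery"],
        "refund_request":    ["i want my money back", "please refund me", "how do i get a refund"],
    }

    def recognise_intent(message: str) -> str:
        """Return the intent whose examples share the most words with the message."""
        words = set(message.lower().split())
        best_intent, best_score = "unknown", 0
        for intent, examples in INTENT_EXAMPLES.items():
            score = max(len(words & set(example.split())) for example in examples)
            if score > best_score:
                best_intent, best_score = intent, score
        return best_intent

    print(recognise_intent("Hi, my parcel has not arrived yet"))  # -> where_is_my_order

A real system replaces the word-overlap scoring with a learned language model, but the job is the same: understand what the customer wants, then hand the conversation to the right designed flow.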


EK: Conversation design and understanding people’s intent, this is what underpins natural language processing? 


JH: Yes, so much of text-based AI is about dividing up all of language - words, parts of words, punctuation, and so on - into what are called 'tokens', and then predicting which token should follow on from the ones before it. So for example, ultimate.ai builds their own language models, created with the help of human conversation designers, that can then comprehend and predict the intention of customers in need. So it does turn out that there's still a job for humans here!
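
Here is a minimal sketch of that token idea in Python. Real systems use learned subword tokenisers and neural networks; this toy version just splits on whitespace and counts which token tends to follow which (the corpus sentences are invented for illustration).

    from collections import Counter, defaultdict

    # A toy corpus standing in for the mountains of text a real model is trained on.
    corpus = "the parcel is delayed . the parcel is lost . the refund is on its way ."

    # 1. Tokenise: here we simply split on whitespace; real systems use subword tokens.
    tokens = corpus.split()

    # 2. Count which token follows which (a bigram model - the simplest next-token predictor).
    following = defaultdict(Counter)
    for current, nxt in zip(tokens, tokens[1:]):
        following[current][nxt] += 1

    def predict_next(token: str) -> str:
        """Return the continuation seen most often in the training text."""
        candidates = following.get(token)
        return candidates.most_common(1)[0][0] if candidates else "<unknown>"

    print(predict_next("parcel"))  # -> "is" (the only continuation seen)
    print(predict_next("is"))      # -> "delayed" (first of three equally common continuations)

A large language model does the same thing at an enormous scale, with context windows of thousands of tokens rather than a single previous word.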


JH: But what about Seedlings? It's no secret that SEED, the game, promises a persistent world peopled with AI-driven beings called Seedlings. So I've grabbed one of Klang Games' many gifted engineers, Evans Thomas.


Evans Thomas: My name is Evans, I've been with Klang Games since December 2020. Klang has been kind of like a dream job for me, to be able to work on something as massive and grand as SEED. And I was really excited to finally be a part of it. The past couple of years have been [coughs] sorry.


JH: [laughs] Sounds like you’re getting emotional.


ET: [laughs] No I just need to cough. I’ve mostly been working on AI and features related to AI. And it’s been really interesting so far, just exploring essentially what it means to be a human, but almost from a third party, third person view right? Because you look at your Seedlings and you see all the things that are affecting them. Their emotions, the things around them, conversations, memories, so many different systems we’ve been building up so far. It’s been quite interesting so far. 


JH: What can these recent developments in AI offer SEED’s development?


ET: Oh, that's quite a broad question, John. Just looking at all of the possibilities it offers, right? Like it's quite mind-blowing, and while it does have its own issues and limitations, it has so many applications in game development. So not only could it be useful for the process of making the game itself, because we can use it to help us with coding or with other things like narrative and stuff. That's one part of it: using it as a tool in order to make the game faster or make the game better. We can also use it in the game itself. Like you've seen those examples with Seedling conversations. We had this radio example where we generated text in order to play it on the radio. So it has a ton of different applications.


JH: So basically, Seedlings are AI-generated entities to begin with, and we've been making this game for a couple of years. Now along come these big developments from OpenAI and Google and all the main players. What specifically can that look like when you marry what we already have with SEED and what they're offering? 


ET: Yeah, so sometimes the word AI gets a bit muddy, you know? Because when people say AI they refer to multiple different things. So we've been very AI-focused right from the beginning of SEED, because we know the game is not going to be controlled by the player all the time, right? For the majority of the time the player is going to be offline and Seedlings are going to be living in their own world, doing their own things. So this has been a focus for us right from the beginning. We use a system called Utility AI, and that's been there in SEED almost right from the beginning. It's basically Seedlings using the rules that we've defined, based on their own conditions. 


ET: Whether they're hungry or they're thirsty or they're really tired, they decide what they want to do at any particular point in time. So that is what we've had for a really long time. I think the other thing that people refer to when they say AI is of course things like ChatGPT, large language models, neural networks, deep learning, all of that stuff. So this is something we've been looking at quite recently. We've had a couple of game jams and we've experimented quite a bit recently. So while the first part is never going to go away, we're still going to keep that and that's going to be very fundamental to our game. This will basically be like a cherry on top, where it enhances how the player sees Seedlings, and hopefully makes Seedlings more relatable to the players. 
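
The decision step of a utility AI can be sketched in a few lines. This is not SEED's actual system, which is far richer; the needs, actions and scoring function below are invented for illustration, but they show the core idea of scoring possible actions against a Seedling's current condition and picking the best one.

    # A Seedling's current condition, each need expressed as a value between 0 and 1.
    seedling = {"hunger": 0.8, "thirst": 0.3, "energy": 0.6}

    def score_actions(needs: dict) -> dict:
        """Give each possible action a 'utility' score based on the current needs."""
        return {
            "eat":   needs["hunger"],
            "drink": needs["thirst"],
            "sleep": 1.0 - needs["energy"],
        }

    scores = score_actions(seedling)
    best_action = max(scores, key=scores.get)
    print(best_action, scores)  # -> "eat" for this particular Seedling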


JH: So if you could just walk me through what you taught it to do at that hackathon.


ET: Okay, cool, yeah, I'd love to talk about that. So the idea that I worked on - we had different people working on completely different ideas - the idea that I worked on was the memory system. It's essentially giving Seedlings memories. Seedlings perceive the world that is happening around them, they remember it, and they talk about it with other Seedlings. So for a big part of that idea to work we don't really need ChatGPT, because we can just do it with the systems that we have. We already have activities and Seedlings doing different actions, and this gets stored in Seedlings' memories. 

ET: I think we could have done that even without ChatGPT, but where it really shines, what makes it really shine, is adding ChatGPT to generate the text of the conversations. Previously we would have had to either display things through a UI and show some numbers or something like that, or have a lot of pre-written text and pick an appropriate line from it, right? But that's quite limited. Instead, what we did here, and I think what made it much nicer, was that we could give a bunch of data to ChatGPT and say: 'hey, this Seedling, this is the Seedling's name, they're talking about this particular topic, they saw this other Seedling slap another Seedling, this is their mood right now, they are a bit sad, or this is their trait, they're quite cheerful, for example'. 


ET: And when you give all of this data in a nice structured way, ChatGPT takes all of that input and generates a really nice-sounding conversation. And I think that was a good use of ChatGPT in this particular use case that we had. So we give it a prompt. We also have to be careful about picking the data that we give to ChatGPT, because it has some limitations in token length and prompt length, right? And we need to be careful about giving it the relevant information. 


ET: We can't just throw a bunch of data at it and hope it makes sense of it. So we pick the more interesting things that we want the Seedling to talk about, and we essentially use ChatGPT as a text rendering layer. It's like how we would build systems and then render them through a graphics engine. What we did here is we gave ChatGPT some data and we said 'hey, this is the data, generate a conversation and give it back to me'. And then we display that conversation in a nice way, you know, the Seedlings are talking and you see the text speech bubbles appear on top of their heads.
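
As a rough illustration of that "text rendering layer" idea, here is a minimal Python sketch using OpenAI's chat completions API. This is not Klang's actual code: the Seedling names, the fields in the context, the prompt wording and the model choice are all assumptions made for illustration.

    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # Structured game data standing in for what SEED's own systems would supply.
    context = {
        "speaker": "Mira",    # invented Seedling name
        "listener": "Tomo",   # invented Seedling name
        "topic": "Mira saw another Seedling slap Tomo's neighbour",
        "mood": "a bit sad",
        "trait": "quite cheerful",
    }

    prompt = (
        "Write a short conversation between two Seedlings on the alien planet Avesta.\n"
        f"Speaker: {context['speaker']} (trait: {context['trait']}, mood: {context['mood']}).\n"
        f"Listener: {context['listener']}.\n"
        f"They are talking about: {context['topic']}.\n"
        "Only mention the events above; keep it under six lines."
    )

    # The game data is "rendered" into dialogue text, much like geometry is rendered into pixels.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model, chosen only for the example
        messages=[{"role": "user", "content": prompt}],
    )
    dialogue = response.choices[0].message.content
    print(dialogue)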


JH: Right, and then we actually, you actually, married that to another AI API, which is our friends at Eleven Labs. So then what happened?


ET: Yeah, that actually made it a lot nicer as well. So we took the text that came from ChatGPT and we gave it to Eleven Labs' API, and they gave us back an audio file where these two Seedlings are talking. So yeah, that was really cool to see.
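
The text-to-speech step looks roughly like the sketch below, based on our reading of Eleven Labs' public HTTP API; the API key and voice ID are placeholders, and the exact endpoint and request fields may differ from what the Klang team actually used.

    import requests

    ELEVENLABS_API_KEY = "your-api-key-here"   # placeholder
    VOICE_ID = "your-voice-id-here"            # placeholder: a voice from your Eleven Labs account

    # `dialogue` is the text generated in the previous step.
    dialogue = "Why did the Avestian astronaut go to the moon? To get to the other side!"

    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
        headers={"xi-api-key": ELEVENLABS_API_KEY},
        json={"text": dialogue},
    )
    resp.raise_for_status()

    # The response body is the rendered audio, ready to play back in-game.
    with open("seedling_dialogue.mp3", "wb") as f:
        f.write(resp.content)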


JH: Yeah, so you made them joke.


Seedling 1: Look off into space and got lost in the galaxy.


Seedling 2: Haha, that's hilarious. I've got one too. Why did the Avestian astronaut go to the moon? 


Seedling 1: I don't know, why?


Seedling 2: To get to the other side!


JH: Okay, and maybe one other thing I could ask you to talk about is what happens when you're sending this data, this prompt, to ChatGPT and it's not a clear enough prompt: we get what's called AI hallucination. So maybe you could describe what an AI hallucination is, using the specific example of SEED and the fact that Seedlings are beings on a planet far in the future, far away from Earth.


ET: Okay I’m going to try. So AI hallucination is essentially the AI very confidently coming up with random things to say, even when it doesn’t have any supporting data. So it’s essentially saying things that are not facts. And we’ve seen that quite a bit even in our game. So when you don’t give it enough information, it generates some conversations that don’t really make sense in the context of our game world. So for example, in the beginning when I was just trying out these prototypes, initially I just gave very little in the prompt. 


ET: So I would just be like 'hey, generate a conversation between two Seedlings'. And then they would start talking about waterfalls and picnics they never went to. Or they would just be like 'let's go on a date tomorrow' and stuff like that, right? And the problem in our case is, it's such an immersion breaker. Because a player is like 'wait, they're talking about a picnic they went to, but they actually didn't, because I've been watching them for the past hour or something', right? So that's going to be an immersion breaker. Or they talk about Avestian cultures and so many different things. 


ET: So essentially, when you give it the prompt that Avesta is an alien planet, the AI - to put it in human terms - hallucinates a whole bunch of related information that kind of makes sense in one context. But in the context of the game it isn't fact, essentially. And the way we've been trying to solve it is by giving it more and more data, adding more constraints, and asking it not to generate conversations about making plans. Essentially just adding a lot more constraints to the prompt.
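
In practice, that "more constraints" step is mostly about how much grounding the prompt carries. A minimal sketch of the difference, with wording invented for illustration rather than taken from SEED:

    # An under-specified prompt invites hallucinated picnics, dates and backstory.
    vague_prompt = "Generate a conversation between two Seedlings."

    # A constrained prompt grounds the model in events the game actually simulated.
    constrained_prompt = (
        "Generate a conversation between two Seedlings on the planet Avesta.\n"
        "Only mention the events listed below. Do not invent memories, places or people.\n"
        "Do not make plans for future activities.\n"
        "Events: Kiri saw Ona trip near the well this morning. Ona is tired.\n"  # invented example events
        "Keep it to four lines."
    )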


JH: Okay cool. And what about for some stuff that players can look forward to? As a player, what do you think is the most exciting or coolest output of this?


ET: All of the things that came from the game jam that we did, all of the ideas are really cool right? I especially like my own idea of course, because I know conversations. But also the other things, like Seedlings writing their own journals, and the radio, Ronnie Renegade the character was also really cool. 


Ronnie Renegade: Good morning, Una Haven. It's your favorite radio host, Ronnie Renegade. Bringing you the juiciest news from our beloved town on The Morning Lemon. Let’s dive right in shall we? The John family welcomed Misha, Aaron, Amelia, and Adrian to their fold, while the Thebolts added Heverton and Anna to theirs. Talk about a population boom! At this rate we might need to start thinking about expanding the town borders. I've been getting a lot of questions lately about the history of our beloved planet Avesta.


ET: We just take a lot of data from the game world and then we pass it to ChatGPT to generate text based on it. And then we take that and we ask Eleven Labs to generate audio from it. And what it generates is funny and it's really interactive and it's really nice. And I think it adds a lot of value, because if we were trying to achieve that without ChatGPT we would have a lot of manual work: a lot of narrative that we would need to write ourselves, and a lot of recording that we would need to do, you know, like we would need to get a voice artist, and for every kind of text that you write you'd need to get that recorded. But now we kind of get all of that essentially for free, right? Because we can just give it to an API and we get a voice back, an audio file that is generated. So that's been really interesting.


JH: I think we’ll just leave it there. And I mean I know where to find you [laughs].



JH: So again, I think it's all about the short-term impact of AI in the domain of text generation, or even text-to-image generation, being explored by writers themselves, even when the API is for commercial, non-writerly things such as medical chat support or, I don't know, banking advice or buying a pair of shoes from Foot Locker. 


EK: Yeah so some industries will be threatened I guess while others are opened up, like I mean who would have thought ‘conversation designer’ would be a thing in real life 20 years ago?


JH: It did come up in my conversation with Sarah: the fate of the people who are directly impacted when customer support becomes run by AIs instead of real people on the other end of a phone. So I asked: are human jobs threatened by natural language processing?


JH: So there you have it, language being broken down into a predictive set of conversation designs that your artificial intelligent employee can speak to your customers with, and solve all their problems. 


EK: I’m not sure my problems have been solved John. We might need to look at this topic again. Maybe listeners have questions or thoughts about this subject. I don’t think this is a robotic job, so I think we’re safe. We’re not robots. 


JH: So you just heard Eva feeding her voice into resemble.ai. This is how the sausage is made, I guess. Now I can write voice notes forevermore in the voice of Eva! No, I'm only joking, I won't do that. Thank you all for listening. This episode was written by myself, John Holten, with additional writing by Eva Kelley.


EK: Sound editing and design was by David Magnusson. 


JH: The executive producer is Mundi Vondi and he’s also created the artwork for this episode, fittingly in collaboration with the AI service, or API, Midjourney.


EK: Additional research, script supervision, fact-checking was by Savita Joshi.


JH: Follow us on social media and subscribe if you don’t already subscribe to us and we promise that no AI was hurt in the making of this episode.