Exploring Generative AI

How does generative AI blur the lines between human and machine creativity?


In this episode, J.R. and Lucas Flatter delve into a discussion about generative AI - what it is, how it works, its current capabilities and limitations, and the philosophical implications of advanced AI replicating human cognition, emotions, and decision-making. They analyze the potential of AI to understand and emulate human experiences, relationships, and creativity. The conversation touches on topics like the role of bias, the unpredictability of AI outputs, the autonomy of AI systems, and the value society places on human-created works versus AI-generated content.


Key topics covered include:

  • What generative AI is
  • Practical applications of generative AI
  • AI and human language
  • Philosophical aspects of AI


Building a Coaching Culture is presented by Two Roads Leadership

Produced, edited, and published by Make More Media

Building a Coaching Culture - #116: Exploring Generative AI

J.R. Flatter [00:00:00]: I think we're generations, if ever, away from replicating the human brain and the joys of human life, the emotions.

Voice Over [00:00:08]: You're listening to the Building a Coaching Culture podcast. If you need to compete and win in the 21st century labor market as an employer of choice, this podcast is for you. Each week, we share leadership development, coaching, and culture development insights from leading experts who are developing world-class cultures in their own organizations. And now, here's your host, J.R. Flatter.

J.R. Flatter [00:00:44]: Hey. Welcome back, everybody. It's J.R. Flatter with Lucas. How are you doing?

Lucas Flatter [00:00:45]: Pretty good. Recovering from a cold, but, yeah, I'm just rounding the corner. I feel better today.

J.R. Flatter [00:00:51]: Alright. So today, we wanted to kick off a conversation about generative AI. I'll tell you right up front, I could barely spell AI. So I'm gonna rely heavily on you, my computer scientist friend, to talk about AI. So just in a nutshell, let's define what generative AI is. AI, artificial intelligence, is exactly that. It's a computer or a series of computers or a database, interrelated computers. I can only envision in my mind where they're drawing their data from.

J.R. Flatter [00:01:25]: There are several tools out there. Probably the most famous right now, as you and I are talking, is ChatGPT. I think it's up to version 5.0 right now. I'm using version 4o. But through some complex algorithm, I can query artificial intelligence and it'll give me a response back. And, you know, wherever it might be drawing data from, you would hope it's unbiased and coming back to you as factual as possible. When I use AI, and I'm at the very early stages of even learning how to use AI.
But I've been using it to give me thoughts and maybe articulate something a little clearer than I might have written it. I tend to speak in PhDs.

J.R. Flatter [00:02:12]: I know the rest of the world doesn't. I guess where I was using the right-click synonyms, you know, give me a synonym for that word, I can now just put the sentence into gen AI, and it'll give me back what it thinks might be a clearer way to say that. You can either take it or not. But the artificial intelligence, and stop me if I wander astray here, but the artificial intelligence, in addition to responding to my query, is also learning my style of questioning and my style of thinking and writing and all of that. And that's the generative piece of it. It's getting smarter about me. And there are unintended consequences to that and also very positive consequences to having a partner in writing.

J.R. Flatter [00:03:05]: And I know you're an artist, so there are images you can create. You know, I'll just give you a quick anecdote of how new I am. I tried to use AI to help me create a logo this morning. And after 20 minutes of trying, it said, you're out of tries. Check back after 2 o'clock. And I don't know.

J.R. Flatter [00:03:26]: I pay a subscription, so I don't know what I did to anger the AI gods, but I guess I could jump back in now. It's after 2 o'clock in the afternoon where you and I are. But I'll pause there to see what your thoughts are as we jump into this very complex topic.

Lucas Flatter [00:03:44]: I guess the way that I think about it is it's almost like the monkeys typing Shakespeare kind of problem. Like, if you had an unlimited amount of time and, you know, unlimited resources, would random text eventually have some sort of meaning?
And so what these algorithms are doing is taking, you know, the entirety of human language and, based on your prompt, trying to create meaning. You know, so let's take all the words and all the grammar that are possible and try to come back with, you know, a coherent meaning. And a lot of times, it might not be your exact intention. You know, 99% of the time, it'll sound like actual English language, which is pretty successful. But then sometimes, you'll say, oh, you know, you didn't really take into account, you know, the bullets that I provided, or, you know, things like that. So I think, in a practical sense, it's, yeah, like you're saying, it's a tool that you learn to use over time. What does it do well? You know, and what do I need to verify? You know, sometimes you don't necessarily want it to, like, publicize things before you have the chance to check it, you know?

J.R. Flatter [00:05:07]: No. You're right. And you said something a minute ago, that it draws on all human language. So I would suppose my doctoral dissertation, all the articles I've ever written, my first book, they're all in there?

Lucas Flatter [00:05:25]: So I think that, you know, GPT, it's definitely using a lot of, like, Internet sources. Like, our CEO, Britney, had mentioned that she was asking it about Flatter Inc, our company, and it came back with things from the website. So sometimes you can say, like, oh, this is the source that it was trained on. But what I meant was just that, like, it takes all of the possible, you know, combinations of words and grammar and things that are acceptable and unacceptable and kinda uses that as its starting point. And then for, like, subject matter expertise or, you know, particular topics, it's definitely trained on, like, Internet sources. And I think sometimes even, you know, verified by humans that are kind of data checking.
Like, oh, is this the correct information that we want the AI model to pick up and learn from?

J.R. Flatter [00:06:20]: Yeah. It's interesting you mention that, because I'm writing a new book, and I've already decided I need to say up front, no AI was used in the authorship of this book. And it's kind of interesting that, you know, AI is so brand new, but it's already become so interwoven into so much that people now suspect, well, that's really well written. I wonder if he actually wrote it himself. And then the second part, and I go back to my experiment from this morning. I kept putting in the title of an organization that I wanted to make a logo for. And I would say, keep everything the same except change this one thing.

J.R. Flatter [00:07:03]: And it would come back with a completely different design, and still misspelled. It never spelled it correctly in the 20-odd times I tried. What's going on there?

Lucas Flatter [00:07:14]: So pretty much, like, these algorithms, like, the prompting, it'll have a bit of, I think they refer to it as, like, salt, which is like, okay, how are we seasoning what you're saying? So if I said, like, give me a blue circle, every single time it might have, like, a slightly different result. And it can be tuned where it's like, okay, don't vary the result at all. And that kind of thing, yeah, it's hard to pin down. I've seen things like, make me a hamburger without cheese, and it always has a slice of cheese. Remove the, like, the square that's yellow that looks like cheese. Nope. Another cheeseburger.

Lucas Flatter [00:07:57]: You know? So...

J.R. Flatter [00:07:58]: That sounds like my morning this morning.

Lucas Flatter [00:08:00]: Yeah. It's like, which parts do I need to pin down and which parts are, like, being regenerated? And I think, yeah, that's some of the frustration now. I wouldn't say it's, like, random output every time, but it's somewhat unpredictable, you know, whether it's text or an image.
J.R. Flatter [00:08:18]: Yeah. What I thought was gonna be a very simple query ended up consuming much more time than I thought, and I was locked out. You know, I kept thinking, how could I say this more clearly so that it'll understand? It'll lock down what I like and change what I don't. I wasn't able to figure that out. So it's not plug and play, as we might say. There is a learning curve associated with even the simplest task. And then I heard somebody say a few days ago, if two AI tools begin to talk to one another, they'll create their own language. What's that all about?

Lucas Flatter [00:08:55]: I've seen this too. And it's kind of, like, some people refer to it as, like, a doomsday kind of scenario where, you know, the AIs begin being autonomous, and we can't understand, you know, how they're interacting with each other. I think the idea there is that, you know, why would they use human language to, like, convey meaning to another artificial intelligence when it could be, you know, more efficient or quicker? Or, you know, maybe they're doing it secretly on purpose in this doomsday scenario. But, like, yeah, for example, can you and I communicate by blinking so that nobody else knows what we're saying?

J.R. Flatter [00:09:35]: Yeah. So is it theoretical, or has it actually happened?

Lucas Flatter [00:09:40]: I think that it's happened in, like, a lab scenario, but I think the theoretical part is that there are AI agents that are kind of autonomously posting and reading and editing data and things that nobody's necessarily monitoring. And I don't think that that's happening quite yet.

J.R. Flatter [00:10:01]: Yeah. I guess if you extrapolate this doomsday scenario out, and I'm just thinking off the top of my head, I'm not a doomsdayer, I suppose they could gain access to 3D printers and create weapons and figure out how to use them. Yeah. But that's a long way away.
I mean, so, I've heard the Shakespeare analogy dozens and dozens of times over my life. And the likelihood of that happening is about the same likelihood as AI taking over the world. You know, to get a monkey to type Shakespeare, you know, as a statistician, the monkey won't live long enough to even get close.

J.R. Flatter [00:10:41]: I mean, the joke is, if you buy two lottery tickets, you're twice as likely to win. Yeah. Instead of 1 in 18,000,000, you now have 2 in 18,000,000. You're still not likely to win. Yeah. AI taking over the world. But I guess there is this risk of becoming over-reliant on AI, and what they call EMPs, electromagnetic pulses, shutting down all of our GPS, shutting down all of our computers, shutting down all of our AI. What are your thoughts on all that as a computer scientist?

Lucas Flatter [00:11:18]: I think part of it, and part of what we were kinda butting against earlier, is, like, what is the model optimizing for? So, you know, on the base level, it's optimizing for human language. So it, you know, needs to be grammatically correct. But then it's like, you mentioned, you know, bias and political bias. It's like, okay, once you have a coherent sentence, what does that meaning convey? You know, is it meant to persuade somebody? Is it meant to, you know, be factual? And I guess, yeah, possibly, if we have AI agents, you know, performing autonomously, what are they optimizing for? And you hear that example of, like, oh, they're gonna say, you know, humans are a detriment to the planet and try to...

J.R. Flatter [00:12:08]: They're too slow. They have emotion. They're not rational. Yeah. And that's one of the reasons I think, and again, I'm not an expert, I think we're generations, if ever, away from replicating the human brain and the joys of human life, the emotions of human life. What's the test where we test the computer to ask if it's a bot?

Lucas Flatter [00:12:33]: The Turing test?
J.R. Flatter [00:12:34]: Yeah. Yeah. Yeah. The Turing test. You know, a simple test. And AI is getting smart and figuring out how to get around that simple test. But, you know, the ability to replicate yours and my relationship, your mother's and your relationship, my relationship with her, I think we're a long way away, if ever. What are you thinking about that? You know, best friends, AI as a best friend.

J.R. Flatter [00:12:59]: Right? There's some of that going on, that kids are staying in their rooms and talking to bots rather than another human being.

Lucas Flatter [00:13:09]: I think, you know, there's one end of the extreme, which is, like, what you're describing: can an AI, whether it's by emulating a human brain or just by, like, its own means, can it, you know, experience things or, you know, have that level of consciousness where it doesn't just know factually, but it might have, like, an intuition and, you know, a sense of self even? But then you think about, like, the way that we emote onto stuffed animals and cartoon characters and pets. Like, some people consider pets, you know, members of their family. I mean, everybody that has a pet has that to some level, but, like, even more so where it's like, oh, this pet is my child. And nobody would argue, you know, that pet has the same level of consciousness as a child. But on an emotional level, you know, I could see that. You know, there's plenty of people that would agree with that. So it's like, how much do you need to be able to emulate to get, you know, at least close to that emotional connection to, like, a, you know, live creature, even if it's not another human?

Lucas Flatter [00:14:26]: And I think we're probably closer to that. Like, you know, our love and relationship is nuanced, but the interactions themselves, you know, it might be, like, a million interactions. All of those could probably be, you know, held by an AI and understood by an AI.
And it's like, maybe they could trick me into thinking an AI is, like, somebody that I know. So, yeah, I guess it's kind of: how much do humans really need to make that leap, to say, like, this is a real entity? And will we ever get, you know, on the technological side, will we get close to the human brain? It's kinda two different questions to me.

J.R. Flatter [00:15:08]: Yeah. I think there are a lot of tasks that AI can easily replicate. And then one of the themes that is emerging is human-checked. So an AI bot could easily take your temperature and your pulse and several other vital statistics, and, you know, have a quick conversation with you and say, well, the likelihood is you have a cold, as accurately as a human being. I wouldn't say more accurately, because there are so many, you know, you said a moment ago, the millions of interactions. But, you know, you can walk into a room, the person never says a word to you, and you know something's going on. Right? That is part of communication that we're a long way away from replicating, or that an AI bot is even able to absorb. Because the only way you really have to talk to AI is through typing on a keyboard.

J.R. Flatter [00:16:10]: I would imagine they could look at your face eventually. What are they gonna notice? Your posture or your... I don't know. Maybe. But the other thing that comes to mind is human beings are the only creature, or even plant, that has volition, choice. You know, one of my social science words, we have choice. And as a statistician, one of the biggest mistakes we make predicting for human beings is predicting from a group to an individual. You've heard me say that several times. And so how does AI replicate choice? It comes out with an answer.

J.R. Flatter [00:16:48]: I guess you could ask it, given these symptoms and these vital statistics, what are the three top likelihoods of what's wrong with me today? But still, that doesn't account for the infinite variables, family history.
And even then, not to get too macabre, but even this far into the 21st century, medicine is as much an art as it is a science. And how do you replicate art? I know AI is drawing art. Can it replicate your cartoons? Probably pretty accurately, but, you know, the love and emotion that you put into them, is it gonna do that? And, you know, going back to the human being, because we talk about this a lot in our coaching and coaching education: we are human beings, and we might not know why we're behaving the way we're behaving or loving what we love. This far into the 21st century, with medicine as much an art as it is a science, we're finding that getting a catastrophic disease, or even avoiding catastrophic disease when you live an unhealthy lifestyle, is largely a factor of luck. You might have genetic predispositions to certain things, which would, I guess, be another way of saying family history, or you might be born with a heart murmur, but the vast majority of occurrences are good or bad luck. I talk about, you know, staying in shape and all of that and having a resting heart rate in the forties. I can't point to any one thing that I do that's any different than anybody else around me.

J.R. Flatter [00:18:26]: You and I are both runners at very different ages, but with very different personalities and very different heart rates and blood pressures and all of that that's, you know, part of the human experience. So we're a long way away from any of that.

Lucas Flatter [00:18:43]: Yeah. I think, you know, the way we think about AI now, you wanna know, you know, why did it come up with this answer? Like, what sources did it draw from? Or, you know, if it is reading into a level of intuition about, you know, your posture and your body language and all these things, maybe more factors than we could notice about a person, I would wanna know, like, all of those data points.
Like, even if I can't understand them, like, tell me where you're getting all this information so that I can kinda track your decision making. With a person, it's like you mentioned: why do I love X person? Why do I enjoy X experience? There's almost a level of, like, your right as a human being is to not have to justify those things, you know. You don't have to tell the government why you wanna marry somebody. There's this level of respect and dignity that comes with being a human, where it's like, maybe nobody can explain their own choices, or I'm not gonna have you explain your choices, and we all kinda get along. But with, like, an agent or a robot or a non-sentient thing, I wanna know those things, and, you know, it doesn't have the same freedom to have the volition that you and I do.

Lucas Flatter [00:20:08]: So I don't know if it's, like, a philosophical conversation.

J.R. Flatter [00:20:12]: It is very much so. I mean, you and I were just joking about this yesterday. You drew me a cartoon from my history as a dairy farmer, and you kept the original. I want the original, but it looks exactly the same as the reproduction. Why do I care? Why do you care? Because there's something inherent in the fact that it's the original. And the same is said for any artwork, any book. That's the reason that I'm inclined to, and will, say on the, you know, very first page inside the cover, this was not written with any AI. Because I want the reader to know it's my original thought and not created by some bot.

J.R. Flatter [00:21:00]: And I don't know if we'll ever get away from that. If you watch any Antiques Roadshow or Shark Tank or any business encounter, people wanna know the story. What's your story? What's the story of this piece? And the story will increase its value. Now, it goes back to the fundamentals of economics. And one of the fundamentals of economics is the value that you as an individual place on something.
And it might be a guitar, a child, a spouse, and no one other than you can define that value. And, you know, they say the heart loves who the heart wants. And if the other person isn't reciprocating the value you're placing on that person, then the relationship's probably not gonna work out.

J.R. Flatter [00:21:54]: That's why a lot of us, when we're young, run around all heartsick, because we had aspirations for a particular partner that they didn't have for us. Never happened to me. I dated one woman in my entire life. Been married to her for 40 years now. Well, that concludes this episode of Building a Coaching Culture.

Voice Over [00:22:15]: I truly hope that this episode was helpful to you. If it was, be sure to follow us wherever you listen to podcasts. Maybe stop and give us a rating or a review, and share this podcast with someone who might find it helpful as well. Thanks again, and we'll see you next time.

© 2024. All Rights Reserved.
