A.I. Uses In Learning & Development

Could generative A.I. be the key to truly personalized and scalable learning experiences? Or will the human element remain irreplaceable when it comes to coaching and development?


In today's episode, hosts J.R. and Lucas Flatter discuss the potential impact of generative AI on learning and development, specifically in enhancing learning materials, customizing learning experiences, and providing feedback on coaching sessions.


If you're keen to understand how generative AI could revolutionize the learning landscape while considering the timeless fundamentals of human touch, this episode is for you.


Key topics covered include:

  • The fundamentals of human interaction
  • How generative AI relates to learning and development
  • The importance of human interaction and emotional intelligence in the learning process


Building a Coaching Culture is presented by Two Roads Leadership

Produced, edited, and published by Make More Media

Building a Coaching Culture - #118: A.I. Uses In Learning & Development
===

J.R. Flatter [00:00:00]: There's a lot of human stuff still going on, a lot of old school methods and technologies that are still working. Those fundamentals are never gonna change because we're dealing with human beings.

Voice Over [00:00:12]: You're listening to the Building a Coaching Culture podcast. If you need to compete and win in the 21st century labor market as an employer of choice, this podcast is for you. Each week, we share leadership development, coaching, and culture development insights from leading experts who are developing world-class cultures in their own organizations. And now, here's your host, J.R. Flatter.

J.R. Flatter [00:00:43]: Hey. Welcome back, everybody. J.R. Flatter here with my shotgun rider, Lucas. Hello. How are you doing?

Lucas Flatter [00:00:50]: Pretty good. I'm just furiously typing notes because the AI topic is so thought provoking.

J.R. Flatter [00:00:58]: Oh, that's good. That's why I'm asking most of the questions and you're giving most of the answers. So, you and I are largely in a learning and development space, and this discussion is about generative AI with regard to learning and development, specifically focusing on return on investment. It doesn't matter whether you're government or private industry, you want a return on your investment. For every dollar spent, what are you gonna get in return? That's what this discussion's about.

J.R. Flatter [00:01:32]: See if we can shed some light on a few things for our listeners.

Lucas Flatter [00:01:36]: I think AI is one of the areas, and we mentioned this in our first discussion, where gaining new knowledge can be a strength. If I want to summarize a bunch of different information, summarize it with different wording, or say "give me the next thing to learn based on these five things," I think that can all be aided by AI. Mhmm.

J.R. Flatter [00:02:03]: So I see this phrase, "enhanced learning materials." You've created curriculum and delivered curriculum. How do you see generative AI giving me enhanced learning materials?

Lucas Flatter [00:02:16]: I think the dream, past the initial stages, would be that I understand how this particular student needs or wants information to be presented, and it's tailored to each student. So based on these five curriculum points that every student needs, generate visuals, audio, video, whatever they might need that would be most effective for that individual.

J.R. Flatter [00:02:50]: You know, I think about this in regard to our own content. You and I teach large cohorts globally, usually to a single audience with a single deck. So the customization comes when the individual comes into the conversation and asks a specific question related to their case study or their personality, whatever it might be. How could we take our content to the next level using generative AI and this enhanced style of learning?

Lucas Flatter [00:03:25]: You kinda prompted a thought there: maybe the deck itself isn't personalized, but we synthesize how we might answer questions, and maybe the AI has learned our answering style to the point where a student can ask a question individually, "oh, can you explain this?", and it sounds like you or me, but it's a language model.
J.R. Flatter [00:03:57]: And is this gonna become like the peripherals world, where I don't try to create this content on my own; I use an existing peripheral to help me augment that capability?

Lucas Flatter [00:04:10]: You mean on, like, the creating-presentations side?

J.R. Flatter [00:04:13]: Yeah. Let's say we have 200-plus decks in our catalog, and we began using generative AI to customize the experience for an individual. Would I need an intermediary, or is that something I create in house?

Lucas Flatter [00:04:30]: Yeah. That's a good question. It's like, where does the AI lie, in a sense? Do I, as a student, have a model that I'm using to interpret? Or do you have a model that you're sharing that you would like me to use to interpret? Or is the slide generated by AI on our side, or generated on their side based on the content we provide? It's almost like, what is the source, and how does it get manipulated, and where? I guess we would want control over a certain amount of it, just to have that verification and, like, a stamp of approval in a way.

J.R. Flatter [00:05:11]: I could give you an example. Psychometric testing, which you and I use a lot. I tell people I would never attempt to create a psychometric tool. The complexities associated with that are staggering. The likelihood I'm gonna get validity and reliability, meaning the outcome is valid and you can get the same result again and again, it's just fraught with risk. I don't know yet, but I kinda suspect the same thing here: if I wanted to use generative AI to create an individualized learning experience, I'm gonna need an expert in their tool rather than try to create my own. Maybe I would put my deck into the generative AI tool, and it can be customized per the individual.

J.R. Flatter [00:06:06]: So I guess you could create a bot for me. Call that an avatar. I don't know the right word.

Lucas Flatter [00:06:13]: I used to say agent most of the time.

J.R. Flatter [00:06:16]: Okay. Agent. That would answer the way I would answer if I were there with them. Because Q&A is important. Mentoring is important. Coaching is important. We do all of those in our classes.

Lucas Flatter [00:06:31]: I feel like the way the tools are being pitched currently, they're so generalized and adaptable that, whether you're using them to generate learning presentations or something in a completely different domain that has nothing to do with learning, we should be able to tailor a tool for our use case. If you think about software, there might be, like, the learning company, and then all these different companies that specialize in different things. But if we're thinking a couple of years into the future, assuming the AI is specialized or generalized enough to be useful, we wouldn't have to have some third-party, specialized education model.
J.R. Flatter [00:07:25]: Yeah. I mean, to get a little more specific and less generalized: we give three performance evaluations in our program, two of them practice and one the actual final performance evaluation. From what I've learned, it sounds like I could put that performance evaluation into AI, and AI could give me feedback on my coaching that would be similar to what I would give if I were there listening. Usually, the person coaches for 30 minutes, and then I, or another mentor, provide 30 minutes of feedback, both written and verbal. It sounds like with a generative AI tool, I could do that without an actual mentor present. And I guess that's where the gap in my understanding is.

J.R. Flatter [00:08:13]: How do you close that gap, taking the implicit knowledge I have, or any other mentor has for that matter, and making it explicit?

Lucas Flatter [00:08:23]: Part of it would be providing an ideal answer: "this is what a good marker looks like; this is what a bad marker looks like." Usually, just a little bit of that information in the prompt, whether it's built into the prompt or learned over time, goes a long way. For example, if I wanna create an Excel spreadsheet and I say, "the headers go across like this, and here are five rows that I made up myself; now generate the rest," it does a pretty good job of matching the format of the data that's provided.

J.R. Flatter [00:09:07]: Yeah. So you've touched on an interesting point. The ethics, core values, competencies, and markers of coaching are very finite. What's not finite is the interpretation of those 8 competencies, 4 core values, and 40-plus markers that the International Coaching Federation uses. You and I were talking about this in a previous session: the subjectivity associated with human interaction. And "irrational" is probably going a step too far, but I ask people all the time, if they don't believe in emotional intelligence or the irrational behavior of the human species, rationally explain to me why you chose your life partner. It's just not possible. "Oh, I liked the way they thought," etcetera, etcetera.

J.R. Flatter [00:10:03]: But at the end of the day, it's a completely irrational decision. Doesn't mean it was wrong. So I guess it would be rather constructed feedback: the AI would just look at the markers, look at the core values, and look at the competencies, and say, "I noticed this competency. I didn't notice this one. I noticed you interrupting, and this competency says you shouldn't." Do you think that's relatively simple, if I took a transcript of a coaching session and plugged in the competencies, core values, and ethics and said, "compare these two and give me feedback"?

J.R. Flatter [00:10:44]: Is that something AI could do right now?

Lucas Flatter [00:10:47]: I feel like, yeah, we could probably come up with something. And maybe you would want some sort of impact or confidence rating on each of them. Like, "this person interrupted, but it didn't have any impact, and the conversation kept going." Or, "I noticed this, but it was subtle, so I'm gonna say it's a 30 out of 100." And, like we said in our previous AI discussion, to give a little more context: "give me your reasoning for why you said this was a marker, positive or negative."
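[Editor's note: for readers who want to experiment with the transcript-review idea Lucas and J.R. sketch above, here is a minimal, illustrative Python example of how such a prompt might be structured. It is not from the episode: the abbreviated rubric entries, the file name, and the send_to_llm placeholder are editorial assumptions standing in for the full ICF rubric and whatever chat-completion API you actually use.]

```python
# Illustrative sketch only: compare a coaching-session transcript against a
# rubric and ask the model for per-item feedback, an impact rating, and its
# reasoning, as discussed in the episode.

# Abbreviated stand-ins for the ICF competencies/markers mentioned in the
# conversation; a real rubric would list all 8 competencies and 40+ markers.
RUBRIC = [
    "Demonstrates Ethical Practice",
    "Maintains Presence",
    "Listens Actively",
    "Evokes Awareness",
]

def build_review_prompt(transcript: str) -> str:
    """Pair the session transcript with the rubric in a single prompt."""
    rubric_lines = "\n".join(f"- {item}" for item in RUBRIC)
    return (
        "Review the coaching session below against this rubric:\n"
        f"{rubric_lines}\n\n"
        f"Transcript:\n{transcript}\n\n"
        "For each rubric item, state whether you observed it, quote the "
        "supporting moment, rate its impact from 0 to 100, and explain "
        "your reasoning. Flag negative markers, such as interrupting "
        "the client, with the same rating and reasoning."
    )

def send_to_llm(prompt: str) -> str:
    """Hypothetical placeholder: wire this to the chat-completion API
    of your choice."""
    raise NotImplementedError

if __name__ == "__main__":
    # "session_transcript.txt" is a hypothetical file name for the recorded
    # coaching session's transcript.
    with open("session_transcript.txt", encoding="utf-8") as f:
        print(send_to_llm(build_review_prompt(f.read())))
```

The 0-to-100 impact rating and the explicit request for reasoning mirror Lucas's suggestion that each flagged marker carry both a score and an explanation rather than a bare verdict.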
J.R. Flatter [00:11:27]: So I guess the student could do that interaction, right? Independent of you and me. Well, another aspect of this customization of learning: in our world, we do a lot of virtual, but we're always interested in seats, right? Like, how many seats can our customer afford, or how many seats have they budgeted for? But if you're using AI customization, there might be a small licensing fee associated with the interaction, but beyond that the scalability is infinite, and this need to talk about seats kind of goes away. And I guess it goes back to some of the same concerns as online learning.

J.R. Flatter [00:12:12]: Is the student actually learning it? Are they doing their required reading and practice? But at the end of the day, there's the performance evaluation and the mentoring. Part of the journey to coaching accreditation is an interaction with a mentor over time, being observed in a very intimate environment. So one would hope that ferrets out the people who are perhaps not as engaged as they might be from those who are getting it. And then, you know, we teach cohorts, and each cohort has its own personality. Each person within the cohort learns at their own pace, yet we're teaching the cohort. A customized solution would be able to teach at their pace. Or, if you subscribe to Malcolm Gladwell's idea that it takes 10,000 reps to become an expert, they could get reps independent of us. Right now, we try to give them reps in session. They get reps in their mentor coaching. They get reps in the live coach training in the evening.

J.R. Flatter [00:13:22]: But if there was a customized solution, they could get infinite reps.

Lucas Flatter [00:13:27]: Yeah. I could see that being useful. Like, I wanna do two or three mentor coaching sessions and just record them, without having to schedule with you, and then, when I'm ready, do my three in front of a real person.

J.R. Flatter [00:13:43]: Yeah. Record them, send me the transcripts, and run them through AI to get the first look. And then there's what you and I were talking about last session, human interaction, because we're generations of AI away from not needing a human. You know, and this is cultural as much as it is practical: it's one thing to have an autonomous drone. As soon as you put a human being in there as a passenger, they're not gonna want AI flying that airplane, even though the weather and the known functioning of an airplane are all pretty standardized. There are still way too many variables. So we're decades away, maybe.

J.R. Flatter [00:14:27]: I won't put a time frame on it, because Moore's law is dead. We're generations away from having that kind of cultural acceptance, and even practical acceptance.

Lucas Flatter [00:14:37]: Yeah. You touched on the whole "I'm in the plane and the AI is responsible for my safety, but what investment does the AI have in my safety versus what investment I have?" I think when you talk about education, and the difference between self-directed learning and learning in a school context or a class context or in a cohort, there's somebody who's responsible for my learning outcomes in a sense. Their reputation, their integrity, etcetera. And if I'm not learning it, there's an expectation that there's gonna be either more resources provided or some kind of discussion. Whereas if there are no humans involved at all, it's like, who cares, ultimately, at the end of the day?
J.R. Flatter [00:15:36]: Yeah. Now you sound like my lawyer when we were setting up my estate plan. "You're dead. Why do you care?" Well, I love the human beings who are gonna still be alive. You know, you've just reminded me of something I say all the time, and I always learn from you. This time I've learned that not only are bureaucracies incapable of loving you, AI is incapable of loving you. AI doesn't know you as a person. I don't even know what metaphor you might use.

J.R. Flatter [00:16:06]: So when I was talking to the AI this morning, who am I talking to, and who do they think they're talking to? There's no expectation of human-to-human interaction there. It's my brain and that data. So you take a step back, and you talk about emotional intelligence as the willingness and ability to make a connection with a human being. Yeah, that's a long way away, that I could consider AI a member of my team, that I'm concerned with their welfare. "How are you doing today? How was your weekend?" There's none of that going on there, in either direction. So that opens an interesting Pandora's box, because here we are, well into the 21st century. You and I talk about emotional intelligence all the time.

J.R. Flatter [00:16:56]: And to become a 21st century employer of choice, how important it is to have a culture that attracts and retains the world's best talent. And we're talking about AI, which is incapable of any of that. I'm reading more and more articles about the unsociability of the youngest generations, that they prefer to be alone. I'm not so sure that's true. Maybe time will tell, but humans want human-to-human interaction. Not to be too scientific, but we're a herd animal, like buffalo and cows and other animals that gather together because they like their interaction with each other. So whenever you violate the herd mentality of the human being, you're on a journey fraught with risk. You probably won't have the outcomes you're expecting.

J.R. Flatter [00:17:48]: So rather than succumbing to this idea that, you know, I'm looking at data-driven insights... There was a time in my life, thankfully a very short time, when I believed that if you gave me enough data, I could tell you anything. And the wiser and older I got, the more I realized that wasn't true. But now we're in another cycle where the pendulum is swinging toward data. Give me more data. Give me more data. And I think that's part of the draw of AI: you can have data-driven decisions.

J.R. Flatter [00:18:21]: But again, going back to the fundamentals of analysis: first, you're never gonna have all the data you want. And even when you do, let's say you ran out of time or you ran out of money and you need to make a decision, you're still gonna take a gigantic leap of faith. Whether your faith is in science, or in your instincts, or wherever you might apply your faith, it's gonna be a faith-driven decision. So I think, again, going back to the idea of modeling: AI is a next-generation modeling tool that we can dump millions of data points into. In our last session, we even talked about synthetic data. Dump real and synthetic data in there, or one or the other, and the answer is gonna get spit out of the algorithm.

J.R. Flatter [00:19:08]: But how much confidence do we have in that outcome?
Lucas Flatter [00:19:11]: Yeah. I mean, we talk about building an environment, whether it's a classroom or a coaching conversation environment. You try to build an environment like a zoo: you put the trees in the right place, and you put in the water and the noises. But there are billions of variables out in the forest versus what you might have in a contained space. And maybe we're only aware of the hundred that we put into the zoo. So, yeah, it's like, "give me enough data and I can tell you anything." We'll never know if we have enough data, you know.

J.R. Flatter [00:19:51]: You don't know what you don't know, right? Yeah. Just think about it for a second. If you take a lion out of the wild, put it in a zoo, and replicate its habitat as closely as you can, there are still so many unknowns. And the first is, nothing's trying to kill it every day. And so what happens to that lion's brain when nothing's trying to kill it every day? You don't know. Or when it's not out trying to kill its dinner.

J.R. Flatter [00:20:25]: What does that change in its brain and behavior? Yeah. We don't know what we don't know.

Lucas Flatter [00:20:30]: And thinking about teacher and student, or student and other students: you're not thinking about this consciously, but if you take a step back, it's like, I don't wanna disappoint the professor. I don't wanna disappoint the other students. I don't wanna embarrass myself. There are all these feelings involved. And if they're coming from me and projected onto another person, that makes sense. But if they're being projected onto an AI, is the AI taking that into account? You know? Mhmm.

J.R. Flatter [00:21:05]: And, again, I'm back at the same place I was in our last conversation about this. We were talking about AI in general, but now we're focused on AI in learning and development. Going back to the analogy of the birth of the Internet: that was long before the nineties, but it really came into my consciousness in the nineties, maybe the mid-to-late nineties. Now here we are, 30, 35 years later, and it's an invaluable resource. But there's a lot of human stuff still going on. A lot of old school methods and technologies that are still working. I just started my sales journey a couple of years ago, and your brother had told me a hundred times, create old school processes before you try to digitize them. Oh, yeah.

J.R. Flatter [00:21:56]: Yeah. Yeah. But then I realized, you know, old school sales works. Calling up a human being or emailing them, the human-to-human interaction, warming the relationship before you reach out, and give, give, give before you ask to receive. Those fundamentals are never gonna change because we're dealing with human beings. So I suspect years from now, as I'm retiring and you're in the prime of your career, we're gonna look back at this time, and it's gonna be very analogous to the late nineties. What do you think as you look 30 years into the future?
Lucas Flatter [00:22:46]: Maybe I do see myself embarrassed at how I'm speaking with an AI, or wanting to please them in a way, or not wanting to disappoint them, and having those feelings because the barrier has kind of been crossed. But I'm seeing all these barriers the more we talk about it. And it's not that it's not useful at all for education, but maybe it's more analogous to self-directed learning, where it's like, this textbook can teach you everything you need to know about x, y, and z. It helps if I have the Internet to gain additional context. And it helps even more if I have an AI assistant that can explain the book. But it's still self-directed, you know. There's no emotion involved.

J.R. Flatter [00:23:43]: You just brought up two vivid memories. One is the day you were born. I can look back at my journal from the day you were born and see how ignorant I was as a 29-year-old father of four. So the fact that 30 years from now you're gonna look back and say, "what an idiot I was," I mean, we're all gonna do that. I can look back at last month and say, you know, what an idiot I was. But then also, gosh, 1986, I'm learning calculus for the first time, and I have a calculus book on my desk. And I know everything I need is in there.

J.R. Flatter [00:24:20]: I just don't know how to get it out. Then we went through the Internet stand-up, and now we have access to tons more information. And now I have AI with even more information at my disposal, now interpretive information. And I still don't know if I could open that calculus book and figure it out. So we need to be thoughtful about how we go down this path. And if you remember, well, you might have been too young, but in the dotcom bust, it all came crashing down. They were trying to sell everything. And as it came crashing down, people were spending $1,000,000 for a 30-second ad and bragging about it, that they were a dotcom company and they'd just spent $1,000,000 for a 30-second ad.

J.R. Flatter [00:25:06]: That's all they talked about, and that company was bankrupt within days. And many other companies that had relied so heavily on the Internetization of their corporation just crashed and burned. There's gonna be a lot of venture capital burned going down this AI journey. I've actually written a proposal for an investment in generative AI as it relates to learning and development, which I hope somebody funds. But with all venture capital, you hope 1 in 5 makes a dollar eventually, unless you're really, really doing well. Yeah. So I look at the future with great optimism, but also a bit of trepidation.

J.R. Flatter [00:25:45]: We need to be thoughtful. I think we're generations away, if ever, from AI taking over the world. But, you know, if you wanna be macabre, scientists are predicting that in several billion years our planet's going to implode as two galaxies crash together. So we're not on an infinite journey anyway.

Lucas Flatter [00:26:06]: So, you know...

J.R. Flatter [00:26:07]: You know, every time I run... I live near a quarry. You can just look at the layers of rock and see all the violence that created this world over many millennia and more, and know we're living in a violent galaxy. Just look at the moon at night and all the pockmarks in the moon. We've got to remain optimistic while we're here, raise our families, run our businesses, and do our jobs. Alright, my friend. Thanks for everything. Well, that concludes this episode of Building a Coaching Culture. I truly hope this episode was helpful to you.

J.R. Flatter [00:26:41]: If it was, be sure to follow us wherever you listen to podcasts. Maybe stop and give us a rating or review, and share this podcast with someone who might find it helpful as well. Thanks again, and we'll see you next time.

