The eLearn Podcast
The eLearn Podcast is your go-to resource for creating successful learner outcomes. Whether you’re an instructor, an eLearner, a parent supporting an eLearner, a content creator or just an eLearning enthusiast, our episodes provide the latest strategies, practices and technologies for supporting, engaging and retaining students... both online and offline! Learn more at https://www.elearnmagazine.com/elp
Moving Learning From The Classroom To The Real World With Josh Kamrath
Hello everyone! My name is Ladek and my guest for today is Josh Kamrath, the CEO of Bongo, a start-up whose mission and vision is to change the way people are evaluated in the workplace, the classroom, and beyond.
In this ‘very personalized’ conversation we talk about
00:00 › Start
01:24 › Enter Josh—The unusual way Josh entered the eLearning space
06:07 › Bridging The Gap—How Josh defines the gap between Higher Education and corporate learning in terms of real-world applications
10:18 › AI-driven—Josh discusses the common challenges he faces when introducing Bongo and AI-driven learning, or when discussing the gap between education and corporate learning
16:44 › AI-overtaken?—Josh considers the increased use of AI-driven learning and whether or not it is due to its availability and on-demand nature, or because it's perceived as a better method
20:23 › On Real-time Coaching, Feedback—Josh elaborates on how Bongo focuses on assessment and offers tools for coaching and feedback in learning in real time
27:19 › Success Tales—Josh shares specific success stories and future plans and developments for Bongo in the next 18 to 24 months.
Subscribe for the latest news, practice and thought leadership at eLearnMagazine.com
This is the eLearn podcast. If you're passionate about the future of learning, you're in the right place. The expert guests on this show provide insights into the latest strategies, practices,
and technologies for creating killer online learning outcomes. My name's Ladek, and I'm your host from OpenLMS. The eLearn Podcast is sponsored by eLearn Magazine,
your go-to resource for all things online learning. Click-by-click how-to articles, the latest in edtech, spotlights on successful outcomes and trends in the marketplace. Subscribe today and never miss a post at elearnmagazine.com. And OpenLMS,
a company leveraging open-source software to deliver effective, customized and engaging learning experiences for schools, universities, companies and governments around the world since 2005.
Learn more at openlms.net. Hello, my name's Ladek, and my guest for today is Josh Kamrath, the CEO of Bongo, a venture-backed startup whose mission and vision is to change the way people are evaluated in the workplace,
the classroom, and beyond. In this very personalized conversation, Josh and I talk about how he entered the eLearning space, because it's a bit unusual and I think it's worth telling.
Then we talk about how Josh defines the gap between higher education and corporate learning in terms of real-world applications. Josh then discusses the common challenges he faces when introducing Bongo and AI-driven learning, or when discussing the gap between education and corporate learning.
Josh then considers the increased use of AI-driven learning and whether or not it's due to its availability and on-demand nature, or because it's perceived as a better method. Josh then elaborates on how Bongo focuses on assessment and offers tools for coaching and feedback in learning in real time.
Josh then ends our conversation by sharing specific success stories and future plans and developments for Bongo in the next 18 to 24 months. Now remember, we record this podcast live so that we can interact with you,
our listeners, in real time. So don't be surprised when you hear Josh and I answer questions and react to comments as they come in. If you'd like to join the fun every week on LinkedIn, on Facebook, or YouTube,
just come over to elearnmagazine.com and subscribe. Now, I give you Josh Kamrath. Hello, everyone. Welcome to the eLearn Podcast. My name is Ladek, as you will hear ad nauseam,
or at least you maybe have already heard ad nauseam. I'm with a company called OpenLMS, but as I always say, the show is not about me. I'm very excited to have my guest here today, Josh Kamrath.
How are you today, Josh? I'm doing great. Excellent. You know, we have guests from all around the world on this show. Where do we find you sitting? So I am about an hour north of Denver in a town called Loveland,
Colorado. Nice, Loveland. And I mean, I'm not gonna beat around the bush, I'm a Coloradan as well. Love Loveland. I actually went to school in Fort Collins.
I think we had this conversation, you know, a couple of weeks ago. Fantastic. You know, we're live right now, but we're recording this October 19th.
I think the slopes are open, aren't they? Like, the ski resort called Loveland, I'm pretty sure it's open, isn't it? - Um, yeah, I haven't been following it. My wife and I were avid snowboarders. I'd still consider myself that.
But we've been having little babies the last three years, so haven't gotten up there for a few years now. Babies love the snow too, my friend. For sure.
So we're hoping this year... our older son is three and a half right now, so we're hoping to get him up there this winter. That is awesome,
man. Wow. I remember those days. That was a long time ago, don't worry. You are the CEO of Bongo. But why don't you give us, you know, the 60 seconds on who you are,
how you came into this eLearning space or, you know, essentially this learning space. And just kind of set the stage for us. Sure. Yeah. So, yeah, I work for Bongo. So I actually,
also went to Colorado State University up in Fort Collins. And, you know, basically, when I graduated, I worked for a middleware company. My girlfriend at the time,
now wife, wanted to move to San Diego, and she graduated a year behind me. So I quit that job and kind of put my feelers out. And one of my professors, Dr. Jeff Lewis, is the founder of Bongo.
And I was, you know, just asking, "Hey, do you know anyone in San Diego that I could work for?" And he said, "Why don't you try to go sell the first iteration of Bongo?" So we ended up moving into my,
at the time, girlfriend's parents' basement, and I was effectively the first employee of Bongo. And we started in education, so I just went around knocking on doors at different colleges and universities,
talking about, or trying to uncover, problems or hardships in assessment and having learners give presentations. USC was my first sale over a decade ago now.
They're still a customer of ours. And yeah, we just kind of grew organically for several years. And then about five and a half years ago,
that professor wanted to go back to teaching. And so he did that, and then I became the CEO. We were otherwise organically grown,
but two years ago, we raised institutional money from a fund called Peakspan out of the Bay Area. And, you know, in terms of our footprint, probably about 50 to 60% of our user base is in education.
The rest is in corporate L&D. But without a question, corporate L&D is growing significantly fast right now. - Yeah, sure.
Yeah, I think a lot of people are saying that. I gotta ask, what is the name Bongo? Where did that come from? Like, what's the genesis? - Yeah, so, you know, we basically were trying to find a word or phrase that we could own and that didn't already have another company with an affiliation or ownership,
if you will. And it was actually quite difficult looking across both the education market and the corporate L&D market. And we ultimately landed on Bongo, and we thought there were a number of cute marketing things we could do,
like, you know, beat to your own drum. And a big part of what our company does, or our thesis as a company, is, you know, help people be better communicators and be better at demonstrating their knowledge and applying what they learned in the classroom in a real-world setting.
And, you know, having there be some instrument affiliation was, we thought, just a nice marketing angle,
if you will. So yeah, we landed on Bongo. Our URL is still bongolearn.com because we still haven't been able to get bongo.com. It's actually owned by a denim company in Russia.
So, of course, yeah, you know, maybe someday we'll get that URL. But yeah, Bongo is, you know, where we live. I think there are a lot of incredible stories.
I'm going to go off on just a little tiny tangent for a second. You know, now that we're looking at 2024, the interwebs have been around for, you know, obviously a very long time. But let's just take it back to, like, the dot-com boom at the beginning of the century.
It's really interesting at some of the conferences, like the detritus of what is out there, that I feel like there's going to be all these stories. For example, there was one company that was brokering IP addresses,
right? Because universities have all these IP addresses that are not being used. And so they're running out, or there's not enough of them. But then again, it's like domain names, there's so many of them that are out there that somebody bought a long time ago,
but then it's locked up now in some aggregator in Tanzania, and just getting a hold of somebody and then negotiating a deal that's realistic is a pain in the butt.
Anyway, I find that super fascinating. So we're here to talk about transferring learning from the classroom to the real world. I'm going to offer you the opportunity again to set the stage on this as well because obviously,
you know, if you look through Bongo's site, you guys have been focusing on assessment, but there's a lot of tools that you have out there around coaching, around, you know, different ways to give feedback to a learner and those kinds of things.
What does that mean to you? Like, define that for me. There seems to be a never-ending gap between the higher education space and the corporate learning space,
the corporate space of, hey, I want to hire you, but how do I know that you can do this? You know, show me you've got the experience. But then the student's like, wait, I can't get the experience because... etc. Over to you.
Yeah, for sure. So what Bongo does is we scale observational assessment. So you can think of, you know, any area where you're trying to acquire knowledge,
you know, there's traditional forms of assessment, like multiple choice and free response. Those do a great job validating that, you know,
you can regurgitate or remember what you just read. What Bongo does is, you know, similar to having a manager, or being called out in a classroom setting by the professor,
asking an open-ended question and seeing how you respond. Bongo captures that. So we call it an authentic assessment. We're having a learner record a video of themselves demonstrating some skill or competency.
And then historically we've had workflows where human evaluators can come in and provide coaching or evaluation. And again, validate that that person can apply the knowledge that they just learned.
Now, really the last probably three years we've had a big focus on adding AI elements. So there's a handful of machine learning capabilities,
but especially this year, we've added a lot of AI capability where it will, similar to a human evaluator, give coaching and feedback on, like, the delivery.
So was that person being empathetic? What was their tone like? Were they confident or nervous? As well as pro tips on how to improve that delivery of knowledge.
And then actually, literally today, there's a pretty major release that our engineers are launching where it can basically evaluate in the contextual sense,
right? So a professor, a coach, an expert, somebody authoring a video assignment that's, again, powered by Bongo, can benchmark the assessment
to specific content, and the tool will give contextualized coaching and evaluate whether, for instance, the learner represented three of the four core concepts that were delivered in this,
again, ILT lesson or throughout this textbook. Here's the fourth one they missed and why it's important the next time that learner articulates this.
And then it'll also evaluate people, again, similar to a human that goes in and evaluates or grades off of, like, a rubric, per se. And it is, like, incredibly accurate,
right? Like, more often than not, less than 5% variance from how a human would score someone. So, again, the thesis of Bongo is validating that knowledge can be applied in a real-world application and helping scale that observational assessment element.
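To make that rubric idea concrete, here is a minimal sketch of scoring a transcribed learner response against weighted rubric criteria and comparing the result to a human grader's score. Bongo's real models and API are not public, so the names and the keyword heuristic below are hypothetical, purely illustrative stand-ins.

```python
# Purely illustrative sketch -- Bongo's real scoring models and API are not
# public. It shows the general shape of rubric-based scoring of a transcribed
# response, plus the AI-vs-human variance check mentioned above.

from dataclasses import dataclass

@dataclass
class RubricCriterion:
    name: str
    keywords: list[str]  # crude stand-in for real concept detection
    weight: float

def score_response(transcript: str, rubric: list[RubricCriterion]) -> float:
    """Return a 0-100 score: the weighted share of criteria the response covers."""
    text = transcript.lower()
    covered = sum(
        c.weight for c in rubric
        if any(k.lower() in text for k in c.keywords)
    )
    total = sum(c.weight for c in rubric)
    return 100.0 * covered / total if total else 0.0

def gap_vs_human(ai_score: float, human_score: float) -> float:
    """Percentage-point gap between the AI score and a human evaluator's score."""
    return abs(ai_score - human_score)

if __name__ == "__main__":
    rubric = [
        RubricCriterion("names the core benefit", ["saves time"], 2.0),
        RubricCriterion("handles the pricing objection", ["per seat", "discount"], 1.0),
    ]
    transcript = "Our tool saves time for your team, and we can discuss a discount."
    ai = score_response(transcript, rubric)
    print(f"AI score: {ai:.0f}/100, gap vs. a human grade of 90: {gap_vs_human(ai, 90.0):.0f} points")
```

In production the concept detection would come from a language model rather than keyword matching; the point here is only the workflow: score against a rubric, then sanity-check the spread against human graders.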
Hi there. I'm sorry to break into the show right now, but if you're enjoying this show, if you are challenged, if you're inspired, if you're learning something, if you think that you're going to be able to get something out of this to put into your practice, do me a quick favor.
Pause right now and just hit subscribe on your podcast player. It doesn't matter which one, just hit subscribe, because that way it'll make sure that you never miss an episode in the future. Thanks. Now,
back to the show. So I ask this question of a lot of guests here. What you've just described, it sounds like a no-brainer, right? It sounds like, of course,
if I'm an employer, if I'm hiring, or if I'm upskilling or re-skilling people in my organization, or if I'm onboarding or bringing in,
attempting to bring in, new people into my company, this is like, yeah, this is a no-brainer. What are the challenges that you usually hear from individuals when you're either introducing Bongo or when you talk about bridging that gap?
What are the sticking points? I would say the biggest challenge is, to be honest, status quo. It's easy to do it through a multiple choice exam, the same way education has done things for 150 years. What we're offering or bringing to market is a better mousetrap, like a better way to evaluate if skills can actually be applied.
Right? So, you know, a classic example would be having an organization publish or distribute, to customer-facing teams, a white paper,
you know, and the thought is, okay, this is gonna help our sellers sell, or help our customer success people communicate value to our end customers. They should read it because it'll better their knowledge of our product.
But what often happens is, you know, people skim it or they skip it and don't actually absorb the knowledge that's in that white paper, or any piece of content. And you may be able to check the box through a multiple choice exam of, like, does that person remember, or can they skim it quickly and find the answer?
What Bongo works towards, though, is a different way to evaluate. It evaluates people in a capacity of, "Can they actually take these concepts and apply them in an actual customer conversation?" And then Bongo will give them AI-driven coaching on how to improve communicating that, you know,
white paper, the knowledge that was hopefully just captured. So, yes, I would say the biggest objection is just status quo.
And especially this year, you know, of course, the things that are fun to talk about, at least for me, are how AI is bringing more value and holding learners more accountable.
But we've also added a lot of automated capability to make authoring a lot easier, right? So making it a lot less cumbersome for the instructor,
for the manager, to author this form of assessment. And again, that's why I kind of go back to, like, scaling observational assessment, because traditionally it's one-on-one, or it's the manager looking over the shoulder of an employee.
It's extremely hard to scale even if it is the most authentic way or the most rigorous way to evaluate folks. Bongo can do that now at unlimited scale with this new smart scoring capability.
How much fear have you found around validating or bringing AI into this process? Again, it's just sort of a sub-question of that status quo, where it's: I'm either the hiring manager, or I am the HR professional,
or I'm the professor, whatever, right, where it's like, you know, hey, I'm not sure I can quite trust this totally. So, two-part question:
what do you say to those folks, and then what kind of aha moments have you seen from individuals who are like, "Oh wait, this actually does work." Yeah, so what we say, and, you know,
what is reality, is that what we're building, we're building it with the vision of scaling the evaluators or scaling the managers,
right? So we're not trying to replace them. And for instance, some of the workflows ask the instructor or the assignment author for human validation.
You can input the source material, and our tool will generate: here are the learning objectives that are likely being represented in the source material. Is that correct?
Then the human reviews that. It's an important step, because, you know, it could be 100 people, it could be 100,000 people that are evaluated based off that,
you know, checklist, and having a human validate it, that those are the right questions to be looking at, or those are the right learning objectives to be seeking.
We feel like it's a very important step, and it adds, I guess, believability that this tool is doing a good job,
not just, you know, hallucinating. So, I guess the answer to the first part of the question is, you know,
we're building products to help scale people. And we're using AI to basically make their life easier. You know, nobody wants to sit through a 10-minute video and count how many times filler words are used.
But it's a pretty important component of helping somebody articulate their knowledge more fluidly. And the AI can do that instantly, at unlimited scale, and at a lot cheaper of a rate than having to pay a human evaluator to count those mundane things.
So yeah, that would be, I guess, how I would overcome that objection: we're not trying to replace the human evaluator, we're trying to scale them and reduce the boring parts of evaluation.
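The "count the filler words" chore is a nice example of a delivery check that's trivially automatable once you have a transcript. A tiny sketch, assuming the learner's video has already been transcribed to text (illustrative only, not Bongo's code):

```python
# Illustrative only: a toy version of the mechanical delivery check Josh
# mentions (counting filler words in a transcribed response) that automation
# can take off a human reviewer's plate. Not Bongo's actual code.

import re
from collections import Counter

FILLERS = ["um", "uh", "like", "you know", "kind of", "sort of"]

def filler_report(transcript: str) -> Counter:
    """Count occurrences of common filler words/phrases in a transcript."""
    text = transcript.lower()
    counts = Counter()
    for filler in FILLERS:
        # \b keeps "like" from matching inside "unlike", "alike", etc.
        counts[filler] = len(re.findall(rf"\b{re.escape(filler)}\b", text))
    return counts

if __name__ == "__main__":
    sample = "So, um, our product, like, you know, saves your team, uh, hours a week."
    for filler, n in filler_report(sample).most_common():
        if n:
            print(f"{filler!r}: {n}")
```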
But then the other thing, of just how have we seen it being used in the real world? Basically, and this is a really cool fact, I think, we have millions of users using our products
in, like, most countries in the world. So we have a huge global footprint. Across that user base, when AI is enabled, learners end up practicing three and a half times more compared to pure human evaluation.
Is that just because of availability? You know, they're able to kind of do it on demand and whatnot rather than scheduling with somebody, et cetera? Or is it really just like,
hey, this is cool and it's better? - Sure, so we've looked into that pretty deeply too. And yeah, when you double-click into why people, when AI is enabled,
are practicing a lot more, it ends up that they feel, or their kind of reality is, when it's feedback being provided by AI,
they feel like it's a judgment-free zone, right? Isn't that cool? Like, they don't have those subconscious thoughts in their head of, like,
shoot, my manager is going to think I look silly, or I'm going to screw this up. And ultimately, when people practice how they're articulating their knowledge, one,
it reinforces it quite a bit. So they end up learning it more, and it combats the forgetting curve. But then also, ultimately, that workflow is practice a number of times and then submit for final human evaluation.
So it basically just leads to way better learning outcomes if they're practicing what they're being assessed on more. But yeah, it is that judgment-free, I'm-not-going-to-look-like-a-fool-in-front-of-my-peers element.
And the reason why I find that interesting: we've had other guests on the show here and lots of conversations around AI tools and whatnot. And I think one of the funny ones is people apologize to ChatGPT.
They thank it. And it's like, oh, hey, thanks for that answer. So there is a personification there that's very robust. And so to hear that we're accepting of this feedback and this sort of interaction in your scenario,
but it does not come with the same stigma as having, like, a supervisor with you. I think that's a really interesting slice or nuance in the findings, yeah.
Yeah, and the other cool thing is that isn't just one specific use case, that's pretty consistent across the board. So for frontline workers,
like all the managers at Chick-fil-A, for instance, they're learning how to give constructive criticism a little more fluidly to their employees or colleagues. So that persona practices a ton. Same with other frontline workers,
like firefighters. We have a huge population in Canada where they're validated that they can put their fire suits on quickly, but then also having to talk about what they're doing and why they're doing it.
So they're thinking about it and having to have cognition take place. It's not just muscle memory. So those are two totally different use cases:
leadership development and just a vocational skill. But it also goes to a ton of technology companies, where they're just articulating product knowledge or overcoming sales objections or practicing building rapport.
For all of those use cases, they all end up practicing more with AI coaching as compared to just pure human evaluation.
So again, just a cool finding. - Yeah, for sure, fantastic. So if I'm, again, I'm gonna just take the role of the hiring manager.
- Yeah. - What's my uptake in putting something like this together, excuse me, where I'm bringing this virtual assistant in to help me with,
you know, assessment, et cetera? Am I responsible for figuring out what the dataset is? Am I connecting to other datasets? I guess, take me first to: what dataset am I using in order to train this thing?
'Cause, for instance, you just gave Chick-fil-A managers and firefighters in Canada as two different ones. Those are incredibly different datasets that you've got to work around.
So how does that work? And is that something that's in the company, that you are ring-fencing and saying let's look at this, or what? - Yeah, so, you know, a two-part answer.
One is, there's two aspects of AI coaching and AI feedback. One is the delivery. So, you know, were you confident or nervous, informative or persuasive? How empathetic were you?
What's your tone? Things like that. It's different per language, for sure. There are nuances between English and Spanish for your speaking rate,
for instance. There are some differences in that capacity, but generally, the delivery is very standardized. There's no customization.
It just gives feedback automatically, and it'll automatically adjust for the language as well. So if you just start speaking in Spanish, the tool will readjust. You have to speak for, like, 10 or 12 seconds,
and it'll readjust itself to that language. So that's standardized, the delivery. Then there's the actual contextualized feedback. Did they get the concepts correct?
Are they, you know, saying the right thing? And that, in the AI world, it's called a zero-shot approach that we're taking. So it's not sucking up the knowledge base out of the company and trying to train a giant LLM.
It's a philosophically different model that we're using, again, called zero-shot, where you submit a piece of source material. So it can be anything, right? You know,
so, like, for the firefighters, it's the quick-start guide on putting your fire suit on. The tool ingests that.
So it's literally a copy-paste in as you're authoring assignments. It'll ingest that, and it's built as a microservice, so it's not necessarily one model, and it's not, like, just GPT or just,
you know, Bard or something. It's using the underlying algorithms of generative AI, but it's benchmarking itself off the knowledge or the materials in the source material.
So I guess the more layman's way of saying that is it can be incredibly dynamic and span, like, any use case, effectively.
Of course, there's deeper utility with, like, power skills or customer-facing roles, or in education, departments that are, like,
you know, math... a math major is not going to use our products as deeply as, like, public speaking or language learning or an MBA program. But if you wanted to,
you could literally put in, from the math textbook, as the source material, a description of, you know, the difference between a derivative and an antiderivative in calculus,
and then ask the students an open-ended question of, like, describe that difference. And, you know, the tool will evaluate them based off of the material that was submitted.
Does that make sense? Yeah, it does. Obviously, without sort of being able to visualize and see how the parts are. But I guess take me back to maybe a more simple answer for that hiring manager or that person who's in company XYZ.
Is this a... I can set up this tool in a day, a week? Or is this going to be, you know, a month that I'm going to have to kind of bring this together? Outside of going through the status quo,
right, where, you know, I can see that change management, convincing leaders, putting this in my workflow, that could take longer. But just actually getting the tool up and running and, like, feeling confident,
like, I'm going to run it. Yeah. Yeah, so, you know, you can set it up in a long coffee break, right? You can start authoring assignments with, like-- - You just reduced your million-dollar idea to a coffee break,
my friend. - No, it's not... yeah, well, it can be set up in seven or 10 minutes. You can author assignments. It's really: what are you wanting to evaluate people on? And again,
now we've been able to come full circle. Like, in the past, we've required a human to do the evaluation. So it's not only deciding we're gonna evaluate people with more rigor and hold them more accountable to the learning, which in my opinion are really important things; in the past it's also been a huge amount of effort and labor. You know, it's not just setting the assignments up,
but then you have to watch 20 ten-minute videos and provide critical critique or fill out a rubric. You can still do that, for sure. But now all that's required is to set the assignment up, and then the AI can hold them accountable, and the AI can give them coaching and feedback and pro tips on how to improve.
And that human evaluator can come in and triage, or look for outliers of people who scored low. Maybe that human, or the manager, goes in and,
instead of 20 videos, they watch the two that scored the lowest, and they can give more white-glove treatment or attention to the students that need more attention.
And so setting it up is extremely fast. And like I mentioned earlier, the fun things to talk about are the AI coaching and, like,
you know, how we're improving practice and improving learning outcomes. Some of the more practical or tactical benefits, to be honest, from the building-a-company standpoint, probably have more association or correlation with money,
you know, selling the new deals. It's really that we've put a bunch of automation into making it really easy to author this kind of assessment.
hopefully a better way to answer the question. - I don't understand. So take me to, take me to the success stories. I mean, I assume that you get feedback from customers on a regular basis.
- Oh yeah, absolutely. - And I'd love to hear, if you can... again, it's always easy to put the positives out there. I'd love to hear, like,
"Hey, you know what? It didn't really work out in this instance," or, "We discovered something that wasn't there and we had to adjust," or, "Hey, this particular sector isn't so good for this." Like, any kind of challenges like that?
Yeah, I would say, of course, there's lots of success stories, and it spans, like, tons of different industries, tons of different applications or use cases.
And it's, you know, enterprise customers. We have, like, ServiceNow or Microsoft, obviously monsters. You know, we have huge universities and colleges using us in an enterprise or kind of across-the-board capacity. But also a lot of small,
you know, what would be considered SMB, so, like, you know, where it's 50 employees or 30 employees, and having the manager, who also wears a bunch of different hats, go in and evaluate somebody's sales presentation.
It's a massive distraction, and it doesn't scale to do that without a tool like Bongo. So, a long-winded way of saying lots and lots of use cases,
lots of success stories. We have several hundred colleges and universities around the world that use our product. And again, about the same number of corporate organizations,
too. But yeah, some of the applications, or maybe, like, hardships... I guess I was just monologuing a little bit around authoring,
like, you know, in the past, getting organizations to, again, think about it differently, of, like, well, you know, it's just easier if we
buy some content from any content vendor and, you know, push it out on how to give constructive criticism. You know,
that's the status quo: like, oh, we'll just buy the content and kind of force everybody to watch it, and we'll measure them on, you know, whether they watched the 20-minute module.
Right. You know, I would argue that there's a very high chance that that's going to be a huge waste of money, because people push play and then they go back to their other screen and do emails, and maybe a multiple choice question comes up, and a lot of times those are designed,
you know, not really to hold the learner accountable to learning that thing, but just to get them to, like, complete it. That's why we call it compliance, right, I think. Yeah, for sure, for sure. But, you know, it could be compliance in the traditional sense, but it could also be, like... another cool use case is, like, doing evaluations and assessments around, like, forklift training, right?
So people use our product where they, you know, whip out their phone and have somebody go through the 15-step checklist of how to drive a forklift safely. Now,
if they're just watching a video and answering a multiple choice question, like, there can be some bad, hard consequences if they're not truly learning that. And that's what Bongo's working towards changing, basically, in the industry: holding people more accountable
to learning something and being able to apply it in a real-world application. But yeah, so I guess, you know, to get back to the question of, like, where we've had some hardship:
making it easy. You know, we've had hardship in terms of just having people who are authoring assignments come up with, like, how do I use this tool,
right? Like, it's a new kind of paradigm. And, you know, that's, I think, again, in this release, being able to identify learning objectives, you know,
even if an educator or a manager is not wanting to use the contextualized feedback, you know, the smart scoring. If they're just wanting to, like,
take a module and figure out, you know, hey, our product marketing team just created this really nice sales enablement asset, but I'm not a subject matter expert,
you can just punch that into the tool and it'll come up with, you know, a recommendation of: here are the learning objectives that the AI is identifying in this asset. You know,
so that adds value in terms of just thinking through, you know, what are the important components of this thing? I wonder how often, as well, you come across,
you know, especially something, maybe a topic that's a little more complex, you know, some technical skills or something, where the manager or whoever believes that, okay, look, here's the three points that we need to pull out of this.
And then the AI will come back and be like, actually, here's point number four and five that are actually kind of nuanced. You know, again, I don't know how that would happen, because it's ring-fenced, because it's,
as you said, a zero-shot dataset that you couldn't go and compare. It's not comparative, it wouldn't be able to say, "Oh, I can contrast it to another thing." But I still think there would be discovery in there of, like,
"Oh, wow." - Yeah, well, you can-- - In this technical manual, actually, yeah, we haven't been training towards point number four, 'cause we just haven't for some reason. - Yeah, I fully anticipate that's what's going to happen. It'll kind of help approach things in a more open-minded sense.
And that's why, at least for right now, we're requiring the human verification. They're like, yep, these learning objectives are what's important,
and you just check a box. You can select it or deselect it, and then it's included or not included. But another thing to mention is authors can submit multiple assets.
You can submit some knowledge base articles, that white paper, and then maybe the head of sales doing their own demo, so, like, a video of that. And the tool
sifts through all of that and combines that knowledge. So it doesn't have to be, like, a singular point, like a one-pager, right? It can be a handful of different elements that it's being benchmarked off of.
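To pin down the authoring flow described here, a deliberately small sketch of its shape: pool several source assets, propose learning objectives, let a human keep or drop each proposal, then check a learner's transcribed response against what was kept. The function names and the keyword heuristic standing in for the generative step are hypothetical, not Bongo's implementation.

```python
# Hypothetical sketch of the authoring flow described above, not Bongo's
# implementation: pool several source assets, propose learning objectives
# (a naive keyword heuristic stands in for the generative step), let a human
# keep or drop each proposal, then check a learner's transcribed response
# against the objectives that were kept.

from dataclasses import dataclass

@dataclass
class Objective:
    text: str
    selected: bool = True  # the author can deselect a proposal before it counts

def propose_objectives(assets: list[str]) -> list[Objective]:
    """Stand-in for AI objective extraction: one objective per source sentence
    that looks instructional (contains 'should' or 'must')."""
    pooled = " ".join(assets)
    return [
        Objective(sentence.strip())
        for sentence in pooled.split(".")
        if "should" in sentence.lower() or "must" in sentence.lower()
    ]

def coverage(response: str, objectives: list[Objective]) -> list[tuple[str, bool]]:
    """Mark which selected objectives the learner's response appears to touch."""
    text = response.lower()
    return [
        (o.text, any(word in text for word in o.text.lower().split() if len(word) > 4))
        for o in objectives
        if o.selected
    ]

if __name__ == "__main__":
    assets = [
        "Crews must check the air supply before entry.",   # e.g. a quick-start guide
        "You should narrate each step while suiting up.",  # e.g. an ILT lesson
    ]
    objectives = propose_objectives(assets)
    objectives[1].selected = False  # the human author drops one proposal
    response = "First I check the air supply, then I fasten the suit and start the timer."
    for text, hit in coverage(response, objectives):
        print(f"{'covered' if hit else 'missed'} | {text}")
```

The select/deselect step mirrors the verification Josh describes: the model proposes, a person confirms, and only confirmed objectives feed the scoring.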
- Super cool. I'm always surprised at how fast 35 minutes goes by here. It's gone, oh my gosh. My final question for you today is: so what's next in the universe of assessment for you?
I mean, you're launching a new iteration of your product today. Take me, you know, 18 months out, 24 months out. How does this continue to scale, grow, expand, evolve?
What are you looking at? What are you most excited about, I guess? Yeah, you know, just the level of accuracy.
So what I'm most excited about right now is just the current release, because, and I'll answer it in a larger capacity, but, like, how closely it can mimic a human evaluator is, like, remarkable,
right? You know, we actually had to back it down in refinement cycles because the AI would grade harsher than a human evaluator. So, you know,
in some cases, it's less than 1% variance, right? So it's, like, indistinguishable from a human evaluation. So just being able to put that power into organizations and allow them to scale and,
you know, basically hold learners more accountable to what they're supposed to be learning. Yeah, I love geeking out about that. But then, to answer your question more directly in terms of, like,
you know, what the future holds. I think, you know, continuing to go through refinement cycles and making it easier or even more accurate. The biggest thing for the next couple quarters is working towards allowing access in the easiest way.
So again, we've done a lot of work in terms of making it really easy to author assessment. But, I guess I don't wanna be spilling all our secrets,
if you will, but, like, you know, automatically coming up with questions, for instance. So right now, we were looking at: should we create a tool where you can submit source material, like a module,
and it'll come up with questions for the author? We ultimately went down the path of, well, let's identify the learning objectives first, and then the questions that are generated are gonna be a lot more meaningful.
So that's, you know, I think a project that's going to be tackled in the next few months. But also just making it really easy and slick to be integrated into existing platforms or tools.
So, you know, making it so it doesn't have to be its own disjointed product, like a standalone solution. You know, we want to, you know, meet learners where they're actually learning.
So, you know, making it invisible inside of LMSs or inside of content organizations, marrying it to existing content. So there's, you know,
some sophisticated ways that we're, you know, working right now to be able to do that or facilitate it. And like I said, I basically want Bongo to be like the measuring stick of skill validation.
And, you know, especially in corporate, the corporate world, skills is, like, a really popular, hot topic right now. You know, and there's lots of, you know, really cool skills ontologies or content that's,
you know, trying to teach skills in a more meaningful way. You know, from my perspective, like what gets measured gets done. And if Bongo can be the measuring stick of does that person truly have that skill,
not just, like, a self-proclamation on LinkedIn that, yeah, I'm really good at Excel, and my buddy and my wife and my friend from college say that I am too. Like, that doesn't mean anything.
And so what we want to do is add more accountability to learning and skill validation. And yeah, I'm just excited to, you know, rattle some cages in the industry.
I think it's, you know, putting this power into organizations' hands is, I think, going to change things. Josh Kamrath, you are the CEO of Bongo.
This is, we've been talking about how you bring learning out of the classroom into the real world. I love this observational assessment. I'm going to carry that with me from this conversation. I absolutely love it. Thanks so much for taking time out of your busy day to speak with us.
I really appreciate it. Hope we get to talk again soon. Awesome. Yeah, thank you. Maybe on the slopes with those babies. Right on. Yeah, totally, I'd love that, for sure. - Thank you again for listening to the eLearn Podcast here from OpenLMS.
I just wanted to ask one more time, if you enjoyed this show, if you learned something, if you were inspired, if you were challenged, if you feel like this is something you can take into your practice, please do me a favor.
And right now on your podcast player, hit subscribe. That way you're never going to miss a future episode. Also, come over to elearnmagazine.com and subscribe there as well, because we have tons of great information about how to create killer online learning outcomes.
Thanks.