GC: As a philosopher [Heraclitus] once said, "The only thing that is constant is change".
And it's happening more rapidly now than ever. To use the examples you've already given, electricity changed the world but wasn't widely available to people for 50 years or so. It took a few decades before computers were everywhere. It took less time again to get smartphones into people's hands. The adoption curve with AI is something else entirely, and it's already blowing everything else out of the water. There are three primary conditions driving this: 1) consumer comfort, 2) network effects, and 3) exponential tech advancements (also known as Moore's law; see graph below).

In short, each new tech advance paves the way for others. In my opinion, whole new economies of innovation will emerge from AI, and they'll arise more quickly than at any time previously.

GC: Like most people, I knew that AI was coming down the tracks, but as the CEO of a tech company, I guess you could call me an early adopter.
My first time using generative AI, I remember I prompted it to write a scene from Friends, only set in 20 years' time. It churned that out easily. So I went more abstract and asked it to write a scene from Friends crossed with Breaking Bad. And it gave me a detailed script with stage directions, dialogue, the lot. Walt and Jesse were talking and Joey was there in the background doing Joey stuff. Now maybe the crossover wasn't perfect, but it was still an OMG moment for me: this kind of instant imagining that could bridge two totally disparate things, which I could then control, change, and improve. I knew right away it would be a game-changer across all industries, and definitely for education.
GC: To restate it, our mission as a company is to advance education and learning, worldwide, with best-in-class technology.
That's the lens we put on all our work. AI is no different in that respect. Trends come and go, so we always need to take a big-picture view: does this feature or product add real value for our users and for their users in the long term? Does it make their lives easier? We need to ask those kinds of questions.
Back in April I attended the ASU+GSV conference in San Diego. There was a livestream interview with Sam Altman, the co-founder and CEO of OpenAI. On a side note: he didn't make it to the conference so actually dialled in from his car, which was pretty weird. Still, he spoke about how better tools make us more ambitious. I agree with that point. AI raises the bar in allowing us to do a lot of new, transformative things. This is most obvious with generative AI, which has mass-market appeal because its use isn't restricted to data scientists or engineers. It has massive implications for everyone in education, from product owners and publishers to teachers and learners.
But to go back to how AI fits into our mission: we're an API company. That means we can abstract complexity into an API. That's what we're doing with AI. That's what OpenAI and other companies are doing with AI. So while companies in the "before times" had to do everything themselves, now AI and all its potential is just ready and waiting for them.
Just think about training an AI to do voice-based search. It'd take endless resources to do well. But now it's easy to access thanks to the voice APIs developed by the likes of Google or Apple.
The game now is multiple companies building on top of these AI APIs. The AI piece is "solved", and it's the job of entrepreneurs to build applications on top of this: essentially workflows on top of an API. Our ambition is to make AI an easily accessible tool within assessment.
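To make the "workflows on top of an API" idea concrete, here is a minimal sketch of what an application-level workflow wrapping a model API might look like. All names here (`llm_complete`, `generate_question`) are illustrative assumptions, not Learnosity's or any provider's actual API; the model call is stubbed so the example runs offline.

```python
# Sketch: an app-level workflow built on top of an LLM API.
# `llm_complete` stands in for a real hosted-model call (normally an
# HTTP request to a provider); here it is stubbed for illustration.

def llm_complete(prompt: str) -> str:
    # A real implementation would call a model endpoint and return its text.
    return f"Q: What does the passage say about {prompt.split()[-1]}?"

def generate_question(passage: str, topic: str) -> dict:
    """Wrap the raw model call in a domain-specific workflow step."""
    prompt = f"Write one comprehension question about {topic}"
    question = llm_complete(prompt)
    # The workflow, not the model, owns the output structure.
    return {"passage": passage, "topic": topic, "question": question}

item = generate_question("Electricity changed the world...", "electricity")
print(item["question"])
```

The point of the pattern is that the hard AI piece sits behind one function call, while the entrepreneur's value lives in the workflow and data structures around it.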
GC: Our initial AI product, Author Aide, is mainly focused on assessment content authoring. It'll be a huge productivity booster for test creators (we're talking a 10x increase in author output) while also dramatically improving content quality and making question types themselves more interactive. There are so many ways AI can be used to deliver better, more timely learner feedback or increase engagement too.
Down the line, the possibilities are endless. It's really just about what people want to create with AI. Our job is to streamline the process for them as much as possible: to optimize it, make it readily available, and keep it easy to use.
GC: There are valid concerns, for sure.
It's tempting to dismiss them as simply resistance to change; there are always concerns in the face of major change. Oral cultures were hostile to writing because they thought it would weaken their memory. The printing press freaked so many people out that the Pope threatened excommunication for anyone who printed a book, and guilds went around destroying the printing presses themselves.
In an educational context, even calculators caused a furore when they were brought into schools, because people feared they'd lose the ability to perform mental arithmetic.
Existing publishers have legit concerns that these large language models are just trained on their copyrighted material that's already on the web. Some I've spoken to have put it less politely.
My opinion is that the conversation around AI really shouldn't be binary: it's neither all good nor all bad.
What's a fact, though, is that AI technology is in the public realm right now. The balloon has popped, the horse has bolted, the genie is out of the bottle; choose whatever metaphor you like. There is no going back now. So how do we use it responsibly? How do you weigh short-term use against long-term implications?
The concerns you raise there are things we have to find ways to overcome. And we can. We can train language models to work within the parameters of our customers' content. We can use plagiarism checkers. There'll always be ways of dealing with misuse. That's just another part of our job.
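One common way to keep a model "within the parameters" of a customer's content is to ground the prompt in supplied source passages and then sanity-check the output against them. The sketch below is a generic illustration under that assumption; the function names and the crude word-overlap check are mine, not a description of any shipped product.

```python
# Sketch: constrain generation to customer-supplied passages, then run a
# crude grounding check on the output. Illustrative only.

def build_grounded_prompt(task: str, sources: list[str]) -> str:
    """Assemble a prompt that tells the model to use only the sources."""
    context = "\n".join(sources)
    return (
        "Use ONLY the material below when responding.\n"
        f"--- SOURCES ---\n{context}\n--- END SOURCES ---\n"
        f"Task: {task}"
    )

def overlap_score(text: str, sources: list[str]) -> float:
    """Fraction of output words that also appear in the sources."""
    source_words = set(" ".join(sources).lower().split())
    words = text.lower().split()
    if not words:
        return 0.0
    return sum(w in source_words for w in words) / len(words)

sources = ["The printing press spread rapidly across Europe."]
prompt = build_grounded_prompt("Summarize the passage.", sources)
score = overlap_score("The printing press spread across Europe.", sources)
print(round(score, 2))
```

In practice a real pipeline would use something stronger than word overlap (retrieval, citation checks, or a plagiarism detector), but the shape is the same: constrain the input, then verify the output.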
GC: In truth, it's impossible to predict where and how things will change. It'd be like trying to predict stock prices or currency exchange rates. When search became the big thing, you had companies like Yahoo and AltaVista leading the way. You couldn't have known that a little search engine called Google would come to totally dominate and shape the market.
Things will change, that much we know. I think that how we interact with learning material and assessment will look a lot different in ten years' time. What costs millions to develop in AI now will cost a fraction of that in future. The trick is to stay informed so you know what's worth pursuing and investing your resources in.
GC: Ah, nice question! Are humans served in the future or do they get served? That's the gist of it, right?
The way I see it, Terminator 2 was a cautionary tale. What happens when there's no regulation or oversight? What happens if scientists make all the decisions and get carried away by what they've made possible? Regulation is essential to prevent things getting away from us. Creators and owners shouldn't be given free rein. We need guardrails to protect against outgrowths of god-knows-what: extremism, mis- and dis-information. As I mentioned earlier, AI is neither all good nor all bad, but it does require some kind of democratic process that allows us to develop a clear understanding of what could change and how it'll impact the future. If we manage that, I'd strongly lean toward the future being more like the one in Back to the Future 2.
To receive regular updates on Learnosity’s work in AI, sign up here.
Note: AI-generated feature image of a DeLorean car created with craiyon.com.