Search This Blog

Wednesday 10 July 2024

AI and the school holidays

This school holidays, you should spend some quality time with your children.

Ask your children to show you how to use ChatGPT to do your work for you. They should know, because the data suggests that school children are likely to be using it a lot during school term -- during NSW school holidays the number of google searches for chatgpt goes down. (Mann-Whitney u-test: p-value < 0.001).

If you want to show them something they don't know, they might find Anthropic's Claude Sonnet 3.5 pretty good. Or if you want speed.


Monday 1 July 2024

Social immobility and GDP and some predictions on the possible effect of AI

Something a little bit different today, but with a twist at the end. Let’s talk about social mobility — in a country with high social mobility, you might be born into the lowest level of society but still become wealthy in your life (and vice versa); in a country with low social mobility, you will live your life in the same level of society that you were born in.

Social mobility is a key measure of how “fair” your society is: if by accident of birth you are guaranteed a privileged life regardless of your life decisions (or guaranteed a life of suffering), that’s a problem.

How does social mobility change with GDP? If a country is terribly poor, then there may be very few opportunities to move out of the lowest social classes. If a country is extremely wealthy, then the chances of someone stumbling upon an opportunity may be much greater. Let's see if that's true.

Wikipedia has the data we need… when I was younger this would have been a project in itself to collate it:

I didn’t even write any code to make this chart — I just asked ChatGPT to plot it based on the data in those two pages.

There’s a trend there, but it isn’t easy to see. Let’s ask ChatGPT to make the x-axis logarithmic and plot “immobility” (100 - mobility). And draw a line of best fit through it.

Just to make that picture make sense:
  • The x-axis is how wealthy the country is. Doubling the country’s wealth per capita moves you along one unit across.
  • The y-axis is how “stuck” you are. High on the y axis means that your social status always the same as your parents. Low on the y axis means that your life isn’t dictated by your parents.
For the stats nerds, that’s an R^2 of 0.771 — that’s a very strong effect. Wealthier countries are fairer (in this sense) and give more opportunities across all levels of society. Nice!

I couldn’t label every point without it being a mess, so I extracted the residuals out as numbers.

If you are below the line (negative residuals), that means your country has more social mobility than you would expect. Looking at the numbers… most below the line:

  • Denmark -0.48
  • Finland -0.47
  • Sweden -0.42
  • Iceland -0.35
  • Netherlands -0.33
  • Ukraine -0.24
  • Belgium -0.23
  • Austria -0.21
Yay for Nordic social policy. If your ancestors were Vikings who conquered and pillaged and enslaved other people, then you live society that’s fairer than you would expect from your wealth. Ukraine surprises me. Austria doesn’t. But #10 on the list is Senegal — not a wealthy country, but doing much better for social mobility than you would expect.

If you are above the line (positive residuals), that means that your country has less social mobility than you would expect from its wealth:

  • Saudi Arabia 0.49
  • Panama 0.45
  • Turkey 0.43
  • Singapore 0.29
  • Ireland 0.27
  • South Africa 0.26
  • Greece 0.23
  • USA 0.23
I think that breaks down into two or three groups:

  • Countries that have by accident of geography natural resources far in excess of what would be expected makes them wealthier than they really “should” be. Saudi Arabia and Panama seem obvious candidates here; maybe htis applies to Turkey and Singapore?
  • Countries with inflated GDP figures from tax-haven status (Ireland). Maybe this is also part of the Singapore story?
  • Political failure to share wealth equitably (South Africa, Greece, USA).
Australia, New Zealand, India and China are pretty much exactly on trend. That’s a bit surprising, you would expect Australia to have the Saudi Arabian curse.

Tuesday 18 June 2024

Celebrating cute ChatGPT calendar creation capability causes calm contentment

A cute thing that ChatGPT can do: if you have a PDF (or similar) of a schedule, it can turn it into a .ics file that you can import into your calendar.

When you buy a ticket with NSW Trainlink you just get a PDF -- I'm surprised that it isn't still hand-written paper tickets to be honest -- so this just solved one of the petty annoyances in my life.

Saturday 8 June 2024

Generative AI and cybersecurity: governance, risk, compliance and management

This is a quick summary of a much longer substack: and a ~40 minute video:

The world of AI is rapidly evolving. AI researchers are divided on whether superhuman intelligence can be achieved by scaling up current technology or if fundamental breakthroughs are still needed. 

Key AI Concepts for Cybersecurity Professionals:

  1. AI Alignment: Controlling AI systems that may become smarter than humans.
  2. Explainable AI: Understanding why AI programs make certain decisions.
  3. Agentic AI: AI systems that can perform actions autonomously.
  4. Prompt Injection: A major security concern in AI systems.

Impact on Management:

  1. Distinguishing between genuine understanding and AI-generated artifacts.
  2. Using AI for performance improvement plans and automation.
  3. Dealing with potential fraud and impersonation using AI.
  4. Monitoring AI-driven automation by employees.
  5. Adapting to increased use of speech recognition and document management.

Corporate Governance and Responsible AI Use:

Pre-2022, AI governance focused on controlling the training process. Post-2022, governance must shift to maximizing benefits while managing risks. Key challenges include prompt injection, hallucination, and lack of moral sense in AI systems. Blocking AI usage may lead to data leaks to less reputable companies. Organizations must embrace AI while implementing proper guardrails.

Required Capabilities for AI Governance:

  1. Observatory: Monitoring AI usage and its impacts.
  2. Reward Giving: Incentivizing staff to automate tasks responsibly.
  3. Expansion: Propagating new ideas and methods for AI use.
  4. Financial: Balancing AI costs with staffing savings.
  5. Cybersecurity: Defending against new attack vectors.
  6. HR Responsiveness: Managing job role changes due to AI automation.

Key Processes:

  1. Audit/Discovery/Inventory Management: Identifying new AI activities.
  2. Incident Response: Handling prompt injection attacks and rogue employee actions.
  3. Rapid Iteration on Education and Training: Keeping staff up-to-date on AI capabilities.


Few regulations currently exist for generative AI usage. China requires AI systems to act in the benefit of social harmony. Japan allows training models on copyrighted works, potentially leading to faster AI adoption. India requires registration for AI model training, hindering the development of language models. The USA has proposed limitations on large-scale AI training.

Monday 27 May 2024

Reflections on student research projects

I was supervising 7 Masters of AI / Masters of Data Science / Masters of Cybersecurity students this semester.  Reach out to me if you are looking for people with these skills and I'll introduce you; also, if you have small industry projects that need doing, there are cohorts from Macquarie and ANU next semester.

On Friday we had the final presentations from my students and the rest of the cohort. Observations:

  • Very few students trained their own computer vision models. Viet (one of my supervision students) was one of the few that did, and was quite successful. Instead, research in computer vision is now often asking "how can I prompt Claude/ChatGPT/Gemini able to get the right answer?" As Marco found, this is often not a trivial exercise, is vastly more inefficient than a trained model, and not necessarily all that accurate. Yet.
  • Explainability is a big deal. Both Viet and Himanshu looked at explaining the results of their models. Shout out to Himanshu for delivering his presentation in the dark (I don't know why the lights weren't working...) after racing to get there after his car broke down.
  • We don't know whether blockchains can coexist with quantum computers. The problem is obvious: RSA or ECC public keys are too easy for quantum computers to break. Lots of solutions have been proposed: Suyash, Proshanta and Pradyumn all found problems in the different solutions they looked at.
  • In research, ChatGPT wasn't all that popular: about a third of the cohort that used an LLM in their research used ChatGPT. Open source models like llama and mixtral were far more popular in academia than in business (that I have seen).
  • About 90% of projects didn't use LLMs at all in their research. They might have used it to clean up their writing or other "mundane" tasks, but it didn't play a part in the research process itself. I am going to try to track that over the next few years as I expect it will drop.
Reflecting on the reflections:
  • For $20/month, you can have access to all the necessary tools to do world-leading computer vision tasks. It has been a long time since the forefront of technology has been that accessible and that cheap anywhere in anything. Truly we live in wondrous times.
  • As we move into a (possibly) AGI+quantum world, being confident that you can trust complex systems will become much more important than building the thing itself.

Monday 20 May 2024

10 years of this blog

I started this blog 10 years ago.

The most popular post was at the start of the pandemic, where I wrote a detailed post How I teach Remotely I had been doing that for many years before the pandemic; I could have written it before any lockdowns started.

The next most popular post was Something funny is about to happen to prices which talked about some of the weird things that I predicted would happen as we started getting negative electricity prices. I assumed that Australia would develop industries that made use of nature's subsidy; but alas, Australian industry is too uncompetitive even if electricity is free-er than free.

After that, there were some posts about how to fix things in HP DataProtector. I think I was expecting the blog to be mostly about data protection: the first post was about HP's confusing line-up of overlapping backup products, and identifying when it was cheaper to go with one choice over the other. HP no longer have overlapping backup products -- they sold them all to Hewlett-Packard Enterprise Entco Microfocus OpenText who now has a confusing line-up of overlapping backup products (different ones). Those a

Anyway, it looks like what I should be writing about is extrapolations into the future of what I'm seeing happening now. That's good. I have started writing a book on what we should expect in the next 5 years in AI, so I'll post excerpts as I write it.

Saturday 27 April 2024

Do Australian banks care about AI?

Both the 2022 and 2023 Expert Surveys on Progress in AI have identified that acting as a telephone banking operator (including card services) should be fully achievable by AI by 2026. So a forward looking bank would be reporting their AI expenditure as a way of signalling to the market that they aren't going to be left behind. Will any of them do that this year?

If you want to bet one way or the other on this question, here's a prediction market on it:

Sunday 21 April 2024

Mysterious latitude and longitude coordinates

There's the coincidence of the Pyramid of Giza's latitude being a very similar-looking number to the speed of light. It's nonsense of course, because when the Pyramid of Giza was being built (roughly 2600BCE):

  • No-one knew that the earth was round (so the concept of latitude didn't make any sense). It was more than 2000 years later Eratosthenes calculated the circumference of the earth (around 200BCE). He also made a map that connected the Atlantic Ocean with the Gulf of Arabia, and had the Equator substantially south of all of Africa.
  • No-one described angles in degrees, only ratios of lengths or ratios of quadrature (so describing a latitude in degrees wouldn't have been possible). Angles were probably Babylonian (2000 years later); Aristarchus of Samos (2300 years later) is the first writer we know who used them.
  • They didn't have decimal numbers; they didn't even have fractions with numerators. It's only in the Middle Kingdom period of Egypt (2000BCE, about 600 years later) that we see fractions of any kind.  Even then they couldn't represent 2/3 other than saying (1/3 + 1/3, or 1/3+1/5+1/12+1/20). It was the height of mathematics six centuries later than the Pyramid of Giza to be able to say that 1/5 + 1/12 + 1/20 = 1/3. So they wouldn't have been able to represent 29.979.
  • The length of the metre was defined as being the distance travelled by light in 1/299792458 seconds. That is, we chose that the speed of light would be that in 1983. If we had chosen 1/299792459 instead, presumably the great pyramid would have shifted north a bit. (I suppose some physicist might have been playing a prank on the world by making the speed of light match the latitude of the Great Pyramid of Giza. But that's not a mystery then, and it doesn't involve ancient aliens. It just involves a physicist with a sense of humour.)
    • Actually, the metre was defined before that. One of the complaints of the French revolution peasants was the inconsistent measures imposed by their pre-revolution feudal lords, who would change the size of measures when they were buying vs selling. So they defined the metre by saying "a metre is one ten-millionth of the distance from the equator to the north pole via Paris." They knew that the Earth wasn't completely round, and it solved the problem that they had, but they still got it a bit wrong anyway.
    • Before that, a metre was the length of a seconds pendulum -- a pendulum that swings every two seconds is roughly a metre long, as long is it only swings a little bit and the flexibility of the pendulum arm doesn't count.  Anyway, yes, the ancient Egyptians had pendulums, but they didn't have the concept of seconds. They probably had some way of dividing up time into units smaller than an hour using water clocks, but nothing as small as minutes. They didn't know that pendulums swung at a constant rate, that wasn't until later, so even if they had had the concept, they couldn't measure it.
The bigger problem is that the exact coordinates of the Great Pyramid of Giza are 29.979167N, 31.134167E and the speed of light in a vacuum is 299792458m/s. So it's close, but it's out by about 8 metres. (And why didn't they put it about 28km further east, at 31.415926E ? That might have been weirdly convincing of the existence of time travel.)

Which brings me to the point of this post. Here are some important things in the world that have either their latitude or longitude (measured in degrees) be a number that's close to a multiple of a 10th power of the speed of light. All of these are closer to this ideal than the Great Pyramid of Giza.

  • Port Neches Elementary School in Texas, Latitude=29.97925, Longitude=-93.95905, according to Geonames. Presumably this is the address of the entrance, if you just move 47cm south (18.5 inches), you'll be spot on. Fun fact for Port Neches Elementary School students: if you travel east or west (and don't deviate even a little bit), you'll eventually hit the Great Pyramid of Giza. Google maps puts Port Neches Elementary School well away from this, and says that the Subway opposite the Neches Federal Credit Union / Magnolia Church is the speed-of-light latitude.
  • Crowne Plaza Astor New Orleans Latitude=29.97922, Longitude=-90.1137. If you are the events manager for the Crowne Plaza Astor, I hope you have locked up the lucrative market for Physics Conferences at important latitudes. Just 2.87 metres north and you're on the light speed location. (Google Maps is again out-of-sync with geonames, and suggests the pedestrian crossing on North Tonti Street where it meets Aubry Street; it also suggests the Jewish Cemetry near the Hurrican Katrina Memorial... and the Mortuary Haunted House.) I don't know whether to believe Google maps or Geonames. I was on the Google Maps SRE team briefly in 2006--2007, but I spent the majority of my time on the maps team nowhere near any other maps team members, so I don't have an insider opinion on Google Maps accuracy.
  • The Omnipotent Missionary Baptist Church. This seemed to have disappeared from Google Maps, if it was ever there, which I guess is a thing you can do if you are an Omnipotent Missionary. (But what do Omnipotent Missionaries evangelise about? Themselves?) It was at Latitude=29.9792, Longitude=-90.03.
  • I tried to find a few places in China, but between inaccurate maps and other problems it's hard to be sure. The Wuhan Institute for Virology is just a little bit too far north to be on the light-speed latitude.
  • According to Google Maps, the Benedictine Monastery of Toumliline is very close. But Wikipedia puts it several degrees of latitude in a different direction. For Moroccoa, Geonames suggests the town of of Aguerd Issegane, which seems like cheating because it's a town... but it's surprisingly small.
  • Geonames also suggests New Orleans' East Bank Sewage Treatment Plant.
But why latitude? Why not longitude? (Well, we couldn't calculate longitude accurately until well after Galileo, because we had no accurate time keeping mechanism that could survive a journey. For a long time, the only option was to observe when eclipses of Jupiter's moons happened; compare that to an almanac of when they were supposed to happen, then look at the angle of the sun or a fixed star and work out what longitude you were at. Way beyond what any ancient Egyptian could have done.)

So for longitude 29.97925E:
  • It's quite close to the Chernobyl nuclear power plant (51°23′21″N 30°05′58″E) -- but the powerplant is about 8km too far east. 
  • Suri Suri Dam in Zimbabwe is very close to the light-speed longitude (-18.08597,29.97944)
  • Playing around with Google Maps, I found that the southern most shed of the University of Alexandria's Faculty of Agriculture Poultry Farm was is on the speed-of-light longitude.

But why latitude 29.97925? Surely 2.997925 would be better? And here we hit the jackpot:
  • Taman LEP 7 Stop (which I think is a bus stop in Kuala Lumpur). Latitude=2.99793, and longitude=101.64858. The KL bus transit authority has encoded the speed of light in the bus stop!

Here's how I did it. Download the Geonames database, unzip it, create a postgresql database:



 createdb geonames

Then in the postgresql session:

CREATE TABLE geoname (

        geonameid int,

        name varchar(200),

        asciiname varchar(200),

        alternatenames varchar,

        latitude float,

        longitude float,

        fclass char(1),

        fcode varchar(10),

        country varchar(2),

        cc2 varchar(60),

        admin1 varchar(20),

        admin2 varchar(80),

        admin3 varchar(20),

        admin4 varchar(20),

        population bigint,

        elevation int,

        gtopo30 int,

        timezone varchar(40),

        moddate date


\copy  geoname (geonameid,name,asciiname,alternatenames,latitude,longitude,fclass,fcode,country,cc2, admin1,admin2,admin3,admin4,population,elevation,gtopo30,timezone,moddate) from 'allCountries.txt'  null as '';

create index on geoname(latitude);

create index on geoname(longitude);

select * from geoname where latitude < 29.980 and latitude > 29.979 order by abs(latitude - 29.9792458);

And then likewise for different latitudes and longitudes.


  • Gungele is at Pi latitude. Latitude=3.14159, Longitude=28.14137. (So close to being 10 times e!)
  • VilledonnĂ© is at e longitude (to 6 decimal places) Latitude=47.27323, Longitude=2.71828
  • A beach in Tiomann Island (Pasir Gerengganin Malaysia is at e latitude. Latitude=2.718,  longitude=104.1724 

Monday 15 April 2024

Interview with the Australian Writers' Centre

I was interviewed about ChatGPT, Anthropic and other GenAI tools and their impact on jobs for writers. We ended up talking about speech recognition, why SEO might be becoming irrelevant and the computer gaming industry.

Tuesday 2 April 2024

Setting up a custom GPT

 If you are running a class, you might want to have a bot that students can ask questions of -- questions about the content, possible exam questions, dates and times for assessments, and so on.

I walked Dr Emily Don through this process -- it's not long or complicated. We also talked about transcribing lectures with many technical terms.

There is an equity problem, though: students without access to ChatGPT Pro (or Microsoft Copilot Pro, or equivalent) can't access it. There are various workarounds for this.

Monday 4 March 2024

Why isn't there a degree called a Bachelor of Contact Centre Adminstration?

 It would cover IP networking, VoIP and telephony. It would teach students about AI and its applications in all aspects of compliance, optimisation, training, CSAT improvement, translation; and enough maths to understand queue behaviour under pressure. Contact centres often have high turnover / transient staff, unique training needs and large numbers of staff, so there should be a strong HR component. Perhaps there should be units on contract negotiation, sales techniques and other parts from a business degree.

I was thinking about this yesterday when I was creating an assignment for my students where they have to write a telephony speech recognition and intent detection system for a contact centre. I realised I would need to explain a few background concepts (e.g. like call routing and queueing)... and it quickly snowballed into "there's actually quite a lot of stuff to know in order to understand a modern contact centre".
Everyone I know had to learn everything about contact centre management on-the-job. Every call centre I know of has people in it studying part-time to get a degree in something other than their current occupation.

Why is a Bachelor of Call Centre Administration not a thing?

Tuesday 13 February 2024

Multi-lingual versions of natural language processing lectures

Last year I taught an introduction to Natural Language Processing at Macquarie University. I grabbed all the recordings, and started editing them so that I could publish them. I’m not even half way through the first two hour lecture, but at least I’ve made some progress.
Here’s what’s working so far, and what isn’t.
I have a “intro theme music” for the first few seconds of the videos that I publish.

That theme music consists of several phrases in the musical language Solresol, an artificial language that lasted almost as long as Esperanto has. The alto flute opens with a call out (in solresol) “today; tomorrow”, then there’s a gong because everyone needs a gong and it says something. The timpani then taps out (in morse code) the name of my company. Meanwhile, the harp is saying (in solresol) “wisely useful” then “wisdom - create a model”. The bells sing out (in solresol) “behold: yours! Behold!”. The orchestral strings just fit all the pieces in the background to make it sound complete.

This all seems very appropriate theme music for a series of lectures on natural language processing, particularly on the first few that focus on encoding text in different languages around the world.

I signed up to ElevenLabs (affiliate link ) so that I could translate my lectures.
They seem to be using Whisper to do speech-to-text into English. This interacts strangely with the start of my videos, because it seems to think that I’m saying something (which indeed, I am! But in Solresol) and often hallucinates greetings or other comments like that. So it tries to modify the Solresol music with words, which just warps the sound and makes it slightly out of tune.

Other issues:

  • I’m finding that its algorithm for identifying the number of speakers is unreliable. It often thinks that there are two of me.
  • Translation into Indonesian and Malay (which are close to the same thing) was not recognisable as being in those languages. It’s not just me thinking “that doesn’t sound like Bahasa”; fluent speakers weren’t even sure if they were listening to Bahasa or random babble.
But overall, I’m impressed. It would be an enormous amount of effort for me to re-record these videos in each of these languages. I’m not sure I even could do it in Japanese or Arabic (which I have studied) let alone Hindi (which I’ve never studied).








If you or your colleagues or friends speak one of these languages and want to hear a bit about the history of text encoding, forward them on and let me know if they are useful.

Friday 2 February 2024

Programming language theology

What does the Bible have to say about different programming languages?

  • Perl (Matthew 13:45-46)
  • Haskell (we are called to a life of purity)
  • Ocaml (it’s easier for OCaml to pass through the eye of a needle and all that, Matthew 19:24)
  • Forth It's part of the great commission in Mark 16:15)
  • Go Again, it's part of the great commission, but also Isaiah 18:2 seems to promote message passing between Golang and Swift)
  • Java (covered in He-brews)
  • Lazarus (which is cheating a bit, because it’s a framework for FreePascal, but John 11 covers it in some detail)
  • SQL isn't specifically mentioned, but Proverbs 2:3-5 seems to describe it
  • C/C++/C# in Genesis 1;10 God gathered the waters together and called them the seas. But in Revelation 21:1 it says that there will be no more C, so presumably C++ and C# take over in a new heaven and new earth.
  • Ada according to Genesis 36:4 Eliphas was bored by Ada, which does indeed speak holy truth about programming in Ada
  • Python This is one of those difficult topics. My best interpretation of John 3:14 says that Jesus should be lifted up like a snake was. I guess that means we should kill Python and see if it comes back to life: presumably we did that with the Python 2 -> 3 transition.
  • Swift I'm unsure of what the Bible says about this generally. Isaiah 5:26 applies to programmers at the ends of the earth, and Romans 3:15 presumably refers to programming with one's feet so hard that they bleed.
  • Rust It seems we should avoid Rust based on Matthew 6:19-20.

Tuesday 9 January 2024

Using AI in Education Part 4

I don't know whether this is a very late advent post or a very early 2024 post.

One of the key themes of 2024 is going to be personalized chatbots or Tutebots in education.

They aren’t very difficult to create. If your students have access to GPT Pro ($20 per month), then this is a trivial task. Simply take the transcripts of the recordings of your lectures and also take the readings ... create a custom GPT from them. If that cost is too great, then things get a little more complicated. Any vendors who want to shill their solutions, please do so in the comment area below. I've been working on an email gateway bot for this kind of task. 

Ethan Mollick reports here on his experiment to see how much of a productivity improvement GPT-4 gives professional workers. If you look at the charts, you'll see that much of the benefit goes to the least able workers. This makes sense that a large language model which produces an average, most predictable output is going to produce a result that's kind of average. If you are below average, then average is an improvement!

This also applies to students, it seems. My friend Gordon Freer teaches International Relations at the University of Witwatersrand in Johannesburg. He did a little experiment last semester. He got more data than I did in my experiment last semester with tutebots since I was only able to create a custom GPT a week before the final exams.

Gordon and I have been analyzing his data in slightly different ways. I looked at the comments made by Gordon’s students and analysed their use of present tense versus past tense, and also their linguistic diversity. This may seem a little odd, but we have good evidence that usage of present tense versus past tense corresponds to an introversion/extroversion divide (highly extroverted and sociable people will talk about all the things that they did with other people, whereas less sociable people will talk about what they're doing right now). Linguistic diversity is a measure of how many different words you use in your writing of a given length and provides a measure of verbal flexibility which is a proxy for verbal IQ. The results were interesting.

Students in Gordon's class whose comments suggested weaker English language skills ended up being far more likely to recommend the use of chatbots in the future. 

It's premature to say that they gained a greater advantage, but it seems likely that being able to ask a chatbot to give you individualised tuition is going to benefit learners who face extra challenges (such as not being a native English speaker).

But other than providing a tutebot, what can we as educators do?

For this I analysed what things Gordon’s students did with the chatbot that were predictive of recommending chatbots. In other words: different students used the chatbots in different ways; which of those gave students the most positive experience?

Cross-validated Ridge regression models found that the two strongeset factors were:

  • Did the student use the chatbot to help with their tutorials? (0.44)
  • Did the student use the chatbot to help with the readings? (0.74)

Nothing in the student’s backgrounds predicted any behaviour here. It is up to us in our teaching practices to encourage students to interact in different ways, and it’s not driven by the background knowledge, familiarity or existing skills. That suggests we need to have exercises in our courses (perhaps graded exercises) to encourage students to make the most of the chatbots we provide.

Here’s my checklist that I think we should make sure students do (feel free to suggest more).

  • Get the chatbot to explain a concept that you aren’t familiar with
  • Handle a text in a language that you aren’t familiar with by translating it into something you are familiar with.
  • Given an arcane and difficult-to-read text, get the chatbot to simplify the vocabulary that's used
  • Ask the chatbot to make analogies with another field with which you are more familiar
  • Here’s what ELI5, ELI13 or ELIUG mean (“explain it like I’m 5/13/an undergraduate”)
  • What happens if you ask the chatbot how you could improve your essay / program?
  • Explain some important concept and get the chatbot to respond with any important ideas that you have missed in explaining it
  • Role-play different people, things or participants from a reading.
  • Generate exam questions for their own self-study.
That way, they won’t think of generative AI merely as a way of cheating on essay writing.

Ping me if you want to run a study on this!