Search This Blog

Wednesday 10 July 2024

AI and the school holidays

This school holidays, you should spend some quality time with your children.


Ask your children to show you how to use ChatGPT to do your work for you. They should know, because the data suggests that school children are likely to be using it a lot during school term -- during NSW school holidays the number of google searches for chatgpt goes down. (Mann-Whitney u-test: p-value < 0.001).


If you want to show them something they don't know, they might find Anthropic's Claude Sonnet 3.5 pretty good. Or groq.com if you want speed.


 

Monday 1 July 2024

Social immobility and GDP and some predictions on the possible effect of AI

Something a little bit different today, but with a twist at the end. Let’s talk about social mobility — in a country with high social mobility, you might be born into the lowest level of society but still become wealthy in your life (and vice versa); in a country with low social mobility, you will live your life in the same level of society that you were born in.

Social mobility is a key measure of how “fair” your society is: if by accident of birth you are guaranteed a privileged life regardless of your life decisions (or guaranteed a life of suffering), that’s a problem.

How does social mobility change with GDP? If a country is terribly poor, then there may be very few opportunities to move out of the lowest social classes. If a country is extremely wealthy, then the chances of someone stumbling upon an opportunity may be much greater. Let's see if that's true.

Wikipedia has the data we need… when I was younger this would have been a project in itself to collate it:


I didn’t even write any code to make this chart — I just asked ChatGPT to plot it based on the data in those two pages.

There’s a trend there, but it isn’t easy to see. Let’s ask ChatGPT to make the x-axis logarithmic and plot “immobility” (100 - mobility). And draw a line of best fit through it.




Just to make that picture make sense:
  • The x-axis is how wealthy the country is. Doubling the country’s wealth per capita moves you along one unit across.
  • The y-axis is how “stuck” you are. High on the y axis means that your social status always the same as your parents. Low on the y axis means that your life isn’t dictated by your parents.
For the stats nerds, that’s an R^2 of 0.771 — that’s a very strong effect. Wealthier countries are fairer (in this sense) and give more opportunities across all levels of society. Nice!

I couldn’t label every point without it being a mess, so I extracted the residuals out as numbers.

If you are below the line (negative residuals), that means your country has more social mobility than you would expect. Looking at the numbers… most below the line:

  • Denmark -0.48
  • Finland -0.47
  • Sweden -0.42
  • Iceland -0.35
  • Netherlands -0.33
  • Ukraine -0.24
  • Belgium -0.23
  • Austria -0.21
Yay for Nordic social policy. If your ancestors were Vikings who conquered and pillaged and enslaved other people, then you live society that’s fairer than you would expect from your wealth. Ukraine surprises me. Austria doesn’t. But #10 on the list is Senegal — not a wealthy country, but doing much better for social mobility than you would expect.

If you are above the line (positive residuals), that means that your country has less social mobility than you would expect from its wealth:

  • Saudi Arabia 0.49
  • Panama 0.45
  • Turkey 0.43
  • Singapore 0.29
  • Ireland 0.27
  • South Africa 0.26
  • Greece 0.23
  • USA 0.23
I think that breaks down into two or three groups:

  • Countries that have by accident of geography natural resources far in excess of what would be expected makes them wealthier than they really “should” be. Saudi Arabia and Panama seem obvious candidates here; maybe htis applies to Turkey and Singapore?
  • Countries with inflated GDP figures from tax-haven status (Ireland). Maybe this is also part of the Singapore story?
  • Political failure to share wealth equitably (South Africa, Greece, USA).
Australia, New Zealand, India and China are pretty much exactly on trend. That’s a bit surprising, you would expect Australia to have the Saudi Arabian curse.


Tuesday 18 June 2024

Celebrating cute ChatGPT calendar creation capability causes calm contentment

A cute thing that ChatGPT can do: if you have a PDF (or similar) of a schedule, it can turn it into a .ics file that you can import into your calendar.

When you buy a ticket with NSW Trainlink you just get a PDF -- I'm surprised that it isn't still hand-written paper tickets to be honest -- so this just solved one of the petty annoyances in my life.

Saturday 8 June 2024

Generative AI and cybersecurity: governance, risk, compliance and management

This is a quick summary of a much longer substack: https://solresol.substack.com/p/generative-ai-and-cybersecurity-governance and a ~40 minute video: https://youtu.be/jHFa_08_Y9E

The world of AI is rapidly evolving. AI researchers are divided on whether superhuman intelligence can be achieved by scaling up current technology or if fundamental breakthroughs are still needed. 

Key AI Concepts for Cybersecurity Professionals:

  1. AI Alignment: Controlling AI systems that may become smarter than humans.
  2. Explainable AI: Understanding why AI programs make certain decisions.
  3. Agentic AI: AI systems that can perform actions autonomously.
  4. Prompt Injection: A major security concern in AI systems.

Impact on Management:

  1. Distinguishing between genuine understanding and AI-generated artifacts.
  2. Using AI for performance improvement plans and automation.
  3. Dealing with potential fraud and impersonation using AI.
  4. Monitoring AI-driven automation by employees.
  5. Adapting to increased use of speech recognition and document management.

Corporate Governance and Responsible AI Use:

Pre-2022, AI governance focused on controlling the training process. Post-2022, governance must shift to maximizing benefits while managing risks. Key challenges include prompt injection, hallucination, and lack of moral sense in AI systems. Blocking AI usage may lead to data leaks to less reputable companies. Organizations must embrace AI while implementing proper guardrails.

Required Capabilities for AI Governance:

  1. Observatory: Monitoring AI usage and its impacts.
  2. Reward Giving: Incentivizing staff to automate tasks responsibly.
  3. Expansion: Propagating new ideas and methods for AI use.
  4. Financial: Balancing AI costs with staffing savings.
  5. Cybersecurity: Defending against new attack vectors.
  6. HR Responsiveness: Managing job role changes due to AI automation.

Key Processes:

  1. Audit/Discovery/Inventory Management: Identifying new AI activities.
  2. Incident Response: Handling prompt injection attacks and rogue employee actions.
  3. Rapid Iteration on Education and Training: Keeping staff up-to-date on AI capabilities.

Regulation:

Few regulations currently exist for generative AI usage. China requires AI systems to act in the benefit of social harmony. Japan allows training models on copyrighted works, potentially leading to faster AI adoption. India requires registration for AI model training, hindering the development of language models. The USA has proposed limitations on large-scale AI training.

Monday 27 May 2024

Reflections on student research projects

I was supervising 7 Masters of AI / Masters of Data Science / Masters of Cybersecurity students this semester.  Reach out to me if you are looking for people with these skills and I'll introduce you; also, if you have small industry projects that need doing, there are cohorts from Macquarie and ANU next semester.

On Friday we had the final presentations from my students and the rest of the cohort. Observations:

  • Very few students trained their own computer vision models. Viet (one of my supervision students) was one of the few that did, and was quite successful. Instead, research in computer vision is now often asking "how can I prompt Claude/ChatGPT/Gemini able to get the right answer?" As Marco found, this is often not a trivial exercise, is vastly more inefficient than a trained model, and not necessarily all that accurate. Yet.
  • Explainability is a big deal. Both Viet and Himanshu looked at explaining the results of their models. Shout out to Himanshu for delivering his presentation in the dark (I don't know why the lights weren't working...) after racing to get there after his car broke down.
  • We don't know whether blockchains can coexist with quantum computers. The problem is obvious: RSA or ECC public keys are too easy for quantum computers to break. Lots of solutions have been proposed: Suyash, Proshanta and Pradyumn all found problems in the different solutions they looked at.
  • In research, ChatGPT wasn't all that popular: about a third of the cohort that used an LLM in their research used ChatGPT. Open source models like llama and mixtral were far more popular in academia than in business (that I have seen).
  • About 90% of projects didn't use LLMs at all in their research. They might have used it to clean up their writing or other "mundane" tasks, but it didn't play a part in the research process itself. I am going to try to track that over the next few years as I expect it will drop.
Reflecting on the reflections:
  • For $20/month, you can have access to all the necessary tools to do world-leading computer vision tasks. It has been a long time since the forefront of technology has been that accessible and that cheap anywhere in anything. Truly we live in wondrous times.
  • As we move into a (possibly) AGI+quantum world, being confident that you can trust complex systems will become much more important than building the thing itself.

Monday 20 May 2024

10 years of this blog

I started this blog 10 years ago.

The most popular post was at the start of the pandemic, where I wrote a detailed post How I teach Remotely I had been doing that for many years before the pandemic; I could have written it before any lockdowns started.

The next most popular post was Something funny is about to happen to prices which talked about some of the weird things that I predicted would happen as we started getting negative electricity prices. I assumed that Australia would develop industries that made use of nature's subsidy; but alas, Australian industry is too uncompetitive even if electricity is free-er than free.

After that, there were some posts about how to fix things in HP DataProtector. I think I was expecting the blog to be mostly about data protection: the first post was about HP's confusing line-up of overlapping backup products, and identifying when it was cheaper to go with one choice over the other. HP no longer have overlapping backup products -- they sold them all to Hewlett-Packard Enterprise Entco Microfocus OpenText who now has a confusing line-up of overlapping backup products (different ones). Those a

Anyway, it looks like what I should be writing about is extrapolations into the future of what I'm seeing happening now. That's good. I have started writing a book on what we should expect in the next 5 years in AI, so I'll post excerpts as I write it.

Saturday 27 April 2024

Do Australian banks care about AI?

Both the 2022 and 2023 Expert Surveys on Progress in AI have identified that acting as a telephone banking operator (including card services) should be fully achievable by AI by 2026. So a forward looking bank would be reporting their AI expenditure as a way of signalling to the market that they aren't going to be left behind. Will any of them do that this year?

If you want to bet one way or the other on this question, here's a prediction market on it: https://manifold.markets/GregBaker/will-a-major-australian-bank-report