
Tuesday 23 July 2024

Advice to students choosing careers and courses at the dawn of the AI era

Many young people (maybe you are one of them) are worrying about which jobs AI will make obsolete in the future.

You are right to be worried: AI will change many jobs.

I find it hilarious to read skeptics saying "no practical use for generative AI has been found yet". I recently read a report from Goldman Sachs that said this -- a report that somehow managed to ignore the software developers at GS writing code faster, the translators at GS using gen-AI to accelerate their translations, and the GS staff using gen-AI to learn new skills, write reports, summarise readings and classify documents.

Even if no-one ever trained another generative AI model, the technology that exists today would still have lasting and significant impacts on the job market. It's irresponsible to tell young people that nothing will change and then expect them to choose careers wisely.

AI might bring new careers into existence (no guarantees, but that's what new technology has done in the past), but that doesn't mean it won't decimate an existing career path.

So let's give some career advice for young people graduating over the next five or so years.

Rule 1. If you have a deep and burning passion for something, of course make that your life's vocation; you wouldn't be true to yourself and your purpose and your calling otherwise. Only consider rules 2 and 3 if you think you might be happy doing many different things. If there isn't one particular thing you would do regardless of the cost, then it makes sense to pick the best of your options.

Rule 2. Immediately rule out any career where the whole job can be done by AI. You won't be able to compete on price, which means you will either work for below-poverty-level wages or not get any job at all.

Rule 3. Don't even pursue a career where the job can mostly or partly be done by AI. The people doing it at the moment will become more productive, and there probably won't be that much more work to do (in that field, even if there is more elsewhere), so you will be competing against incumbents with more expertise who are desperate for work.

What are some careers that are going nowhere?

Younger people can often see transitions happening that older people are unwilling to see.

I started an apprenticeship in pipe organ building just around the time that large numbers of churches stopped using pipe organs, and started using guitars and drums. It was a sudden change that took place in less than a decade. We went from "every church needs a pipe organ" to "every church needs a sound mixing desk" -- and my skills were in pipes, not panels. Stopping an apprenticeship partway through wasn't an easy decision, but I could see that the apprenticeship wasn't going to lead to a bright future.

Now let's talk about AI. At the very least we'll see a 25% improvement in the efficiency of doing any office job. Put another way: the same work will get done with roughly a fifth fewer workers, so about a fifth of those office jobs won't exist in a few years. If a job involves pushing pixels around a screen, and/or could be done by someone working remotely in another country, then it's probably ripe for disruption by AI.

In the past, whenever a new technology appeared that put a lot of people out of work, other jobs arose: cars replaced horses, "horse groomer" became a rare specialist job, and lots of people now drive Ubers. Hopefully the same will be true for the AI revolution (although we can't be sure). But it still means you don't want to be training up to be a translator between major world languages unless you are really, really good at it --- jobs for translators dried up last year. ChatGPT is very good at translating, and very cheap. (Reminder: as a human being you can't really compete with AI if it can do your job.)

Lots of people (including economists who should know better) have tried projecting what job markets will look like after a 25% productivity boost for white-collar (office) jobs. But that doesn't even begin to capture the extent of the change. I wouldn't bet $100 on AI giving us only a 25% productivity boost -- so you certainly don't want to bet the millions of dollars you could earn in your lifetime on AI's impact being that small.

So think bigger. Assume that we'll see much more than 25% of office jobs replaced.

It's a reasonable assumption. A common topic of discussion among my AI research colleagues is how soon we'll have AI that can do all office jobs -- or all jobs of any kind.

A few say it could be as soon as 2027. At the other extreme there are many who say it will never happen. But in general, most of us think that the transition date is somewhere between 2029 and 2031. That's the point where "if the job can be done remotely and doesn't involve physically moving things, it will probably be do-able by AI."

So you don't want to be training up to be a call centre agent (expected date of obsolescence around 2028) unless your plan is to springboard from that to something else. (Maybe you want to own or run a call centre, so you are building up your understanding of what is involved. That's OK. Or maybe you are just doing it for a job so you can pay your way through university. That's also fine. Just don't expect to make a long-term career out of it.)

If it can't be automated, it's going to be expensive and valuable

It's worth knowing about Baumol's cost disease, which explains why some things have become very expensive over time while others have stayed cheap.

Tickets to see your favourite band probably cost a lot of money. Some of you may have been to orchestral concerts -- they are even more expensive. But it hasn't always been this way. In the past, concerts were cheap: every pub used to have a live band every night. What happened?

Imagine that it is 1824 and you want to put on an orchestral concert. You need to hire 88 musicians, and you need to pay each of them more than a farm labourer could earn in a day (otherwise, they would work as farm labourers and earn more). A farmhand could till about an acre of land per day. Add 12 people to handle the tickets, usher people to their seats and clean up afterwards, and an orchestral concert costs about as much as tilling 100 acres of land in 1824.

In 2024, if you want to perform an orchestral concert, you still need to hire 88 musicians. You might be able to hire fewer people to sell the tickets because you could sell tickets on the web, but you can't hire fewer musicians.

They still have to be paid more than a farm labourer -- but that farm labourer does a lot more work today, running a tractor over 200 acres of land in a day. Farm labour has been automated; orchestral playing hasn't. Labourers' wages have risen with their productivity, and musicians still have to out-earn labourers: so orchestral musicians do the same job they did in 1824, but now get paid vastly more for it.
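To make the arithmetic concrete, here is the same calculation as a tiny Python sketch. (The numbers are the rough, illustrative ones from the example above, not precise historical data.)

```python
# Back-of-the-envelope Baumol arithmetic for the orchestra example.
# We price everything in "acres tilled per day", since each worker
# must be paid at least what a farm labourer earns in a day.

staff = 88 + 12              # musicians plus ticket sellers/ushers/cleaners

acres_per_day_1824 = 1       # one labourer tills ~1 acre a day by hand
acres_per_day_2024 = 200     # one labourer covers ~200 acres a day by tractor

cost_1824 = staff * acres_per_day_1824   # ~100 acres' worth of labour
cost_2024 = staff * acres_per_day_2024   # ~20,000 acres' worth of labour

print(cost_2024 / cost_1824)             # 200.0 -- the concert costs ~200x
                                         # more, measured in farm output
```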

Likewise, the best jobs are ones that AI can't automate. These are the ones that are future-proof, and will have rapidly increasing salaries.

What sort of jobs are future-proof, and why?

Being responsible for something

In our legal system, people can have bank accounts. So can companies and charities and churches and schools and universities and government departments.

An AI can't have a bank account. It can't own anything, in fact.

Part of "owning" something is taking responsibility for it, and having enforceable consequences for not taking responsibility. If you own a dog, and the dog bites someone, then the person your dog bit can take you to court, and make you pay for their medical bills. The court might tell you that the dog is dangerous and has to be put down.

If you refuse to do something even when a court has told you you have to do it, things escalate. Maybe the sheriff comes around; perhaps the police get involved. If you still refuse, you might get arrested and put in prison; or maybe -- depending on how you refuse -- you might be killed in a high-speed chase escaping from the police.

Being put into prison is bad. (Life pro tip: don't go to prison. Don't do things that might get you put into prison.) You suffer, and you lose a portion of your life doing things that you don't want to do. Dying early is also not good. (Life pro tip: try not to die, because death is permanent.)

Companies can own things, and they can be held responsible for what they do and what they own. A company can be fined, or prevented from doing certain kinds of business. It can even be "killed" (shut down).

But none of these consequences mean anything to an AI. At the moment (as far as we know) AI isn't conscious. We're not sure how we would know, but even if it were, we're pretty sure that AI doesn't "suffer". All an AI "wants" at the moment is to predict the next word of output correctly.

You can't put an AI into prison. Even if you could, it wouldn't suffer from the experience, so what difference would it make? You can sort of shut down an AI by deleting its memory, but we do that every time we end a ChatGPT session and start a new one.

We can't hold an AI responsible for anything it does. We have no threats that we can use to hold it accountable for its actions.

Responsibility and authority are interlinked. If you have the authority to do something but aren't responsible for the consequences, that's like being able to spend someone else's money. If you are responsible for the consequences but don't have the authority to control what happens, you are a scapegoat of the system.

Because AIs can't be held responsible for their actions, we can't give them the authority to do anything important either.

Being responsible for an outcome, and having the authority to make something happen: that is something that humans can do, that AI can't do.

And remember Baumol's cost disease: if AI can't do a task but can do most others, then the tasks AI can't automate become super valuable.

What sort of jobs are like that?

  • Running your own company. Whether you employ people, or whether all the work is done by AI bots, someone has to be responsible for whatever that company does. Someone has to sign contracts, make sure that delivery happens on time (that is, be responsible for delivery), and be in charge of the bank account into which the money is paid.

  • At schools and universities there's often a certificate at the end: you got your degree! Someone has to declare "based on all the tests we have done, this student is worthy of this certificate; I declare that the tests have been conducted fairly and have been marked honestly, etc." That someone (the chancellor's delegate, the school principal) has to be a person. It can't be an AI, because we need someone who will experience consequences if they have made a false declaration.

  • In bigger companies, there is often a person who is ultimately responsible for the company complying with regulations. At an insurance company, there is an underwriter (or usually several) who promise that there will be enough money available to pay out claims.

  • When a bridge is built, an engineer certifies it to say "this bridge is definitely safe to use now". Or when a car is designed, an engineer will certify that all cars built to this specification should be safe.

Of course, in each of these jobs, AI may do a large portion of the work. Exams will be marked by AI; bridge designs will be stress-tested by AI; the underwriter may rely on AI to write the insurance policy. But it is still a human being who has to take the responsibility --- and who has the authority to direct the AIs that do the work.

As you talk to people about the jobs they do now, ask them what they are legally responsible for, and what they have authority to do. That part of the job isn't going away. But if you find that they aren't legally responsible for anything --- they just do a thing, and their boss is responsible for it and has the authority to see it done the way the boss wants --- that sort of job is going away. AI will do it, and human beings can't compete with AI.

Physical jobs

Katja Grace runs a survey each year asking AI researchers which jobs they think will be automated by AI, and in what year. Many of the jobs placed furthest in the future are ones that are surprisingly simple for humans: truck driver, professional Lego builder (yes, that's a job!), Ikea furniture assembler. (Respondents also thought professional StarCraft player is unlikely to be a job done by AI any time soon.)

Other than StarCraft player, these jobs involve moving physical objects and manipulating real-world things, rather than working with abstract concepts or editing files.

Jobs like builder, plumber, electrician, welder, excavator driver, cleaner, concrete mixer, cryogenics field technician, appliance repairer -- these are all jobs where it will be a while before AI will take them over.

Why?

At the moment, ChatGPT (and other language models) can look at a photograph or video stream and recognise what is in it, what's happening, and perhaps what action needs to be taken in response.

But it would need robotics --- wheels, motors, manipulators, cameras, powertrains and batteries --- to perform those actions. ChatGPT reached 10 million users in a few weeks because all you needed to use it was a web browser (which lots of people already had), and all OpenAI needed to run it on was computers, which were already abundant. It wouldn't have grown anywhere near as quickly if every user had had to wait for a robot to be delivered --- let alone if OpenAI had had to build those robots first.

Eventually we will see robots everywhere, powered by the successors to ChatGPT and its kin, but there will be a golden period beforehand where blue-collar workers (and other roles that involve physical objects) will earn princely salaries, and never be short of work.

Knowing how to configure a PLC (programmable logic controller) and write ladder logic so that you can create your own basic robotics for a production line --- that is going to be a very lucrative and busy career over the next few years as manufacturing industries leave China and move to other countries.
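(If you've never seen ladder logic, here is roughly what a single "rung" expresses. This is a minimal Python simulation of the classic start/stop seal-in circuit -- the hello-world of PLC programming. The function is mine, for illustration; on a real PLC you'd draw contacts and a coil, or write IEC 61131-3 structured text.)

```python
# A minimal simulation of the classic start/stop "seal-in" rung.
# A real PLC evaluates rungs like this in a fast, endless scan loop.

def scan_rung(start: bool, stop: bool, motor: bool) -> bool:
    # Rung: motor energises if (start pressed OR motor already running)
    # AND stop is not pressed. The "motor already running" branch is the
    # seal-in contact that keeps the motor latched after start is released.
    return (start or motor) and not stop

motor = False
motor = scan_rung(start=True,  stop=False, motor=motor)  # True: motor starts
motor = scan_rung(start=False, stop=False, motor=motor)  # True: stays latched
motor = scan_rung(start=False, stop=True,  motor=motor)  # False: stop wins
```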

If I'm pressed to give a timeframe, I would say that 2031-2036 will be a time when we can design new buildings and new machines in the blink of an eye, but the making of those buildings and machines may be a process that still involves people. In a world with people doing tasks that can't be automated, the people doing those jobs will be in the highest demand in human history.

After that, I'm not sure what happens: if AI bots can make each generation of AI bots smaller, faster and cheaper than the last, then the world of 2040 will be unrecognisable compared with 2020 -- we might have farmed the deserts, built space stations and founded undersea colonies by then. Money and salaries might be meaningless if the machines that build everything are costless. Work itself might be a thing of the past. Don't rely on this happening, though: you don't want to be middle-aged, hoping that work is a thing of the past, and discovering that actually it isn't and you have no way of earning an income.

Jobs where we will choose humans over AI

It might turn out that a robot childcare worker does a better job than a human childcare worker. But many people will still choose to have humans, rather than robots, raise and teach their children.

At the other end of life: palliative care (and aged care in general) might keep involving a lot of human workers. I'm not going to say "a robot can't express love, so it can't do caring roles" --- the number of people who exchange romantic messages with ChatGPT shows that (a) AIs can express what looks like 100% sincere affection, and (b) people will believe it, or are happy to play along. But I will say that some people from every generation will refuse to be cared for by a robot, and will pay a premium to have human carers.

In many countries, there are already more old people (retired, no longer working) than young people (still studying, not yet adults). Aged care doesn't pay well now, but as the baby boomers start needing more care, "baby boomer carer" will have to become a job that appeals to young workers -- and that means the pay will change.

So jobs that may well survive long into the AI era: funeral director, counsellor (therapeutic, relationship, genetic), nurse, aged care worker, child care worker, nanny, primary (grade) school teacher.

Jobs where we will be forced to choose humans over AI

The Catholic Church requires its priests to be male; presumably that also means human. It's not a proper Mass unless it is celebrated by a priest; neither is it a proper confession without one. With its ability to enforce its own doctrines, "Catholic priest" could still be a job title long after AI can perform every other job.

(This is just a Catholic thing, by the way. Most other Christian denominations don't have any particular requirements. If you ask ChatGPT to "act as a priest / minister / pastor" and you find its answers helpful in your spiritual journey, most Christians would call that a blessing rather than blasphemy.)

In Australia, the Medical Board of Australia (supported by AHPRA) is the organisation that authorises doctors to practise -- it confirms that a doctor has been through training. If a doctor does something wrong (e.g. having an affair with a patient), the Board can strike them off the register and prevent them from working as a doctor in the future. There are similar organisations in every country (sometimes in every state) around the world.

These kinds of organisations obviously have a lot of power over how healthcare is delivered. And since they are usually run by doctors themselves, they often act in the interests of doctors more than in the interests of patients. (Sad, but true.) So I would be surprised if we saw AI surgeons or AI doctors.

Meanwhile, ChatGPT can already do a pretty good job of acting as a diagnostic doctor. If it insists that you should see a medical specialist, it's probably right; but if that's not an option, you can always tell it that you are studying for your medical exams and want help diagnosing a patient --- and then describe your own symptoms. It's also always available to ask about side effects from medications; prescribing a medication, though, is something only a doctor is legally allowed to do.

Medical boards will argue that prescribing medicine is something where the doctor takes responsibility, and that it therefore can't be done by AI. Conveniently, that also means doctors will always have jobs, even if they end up doing a worse job of diagnosing diseases --- and a worse job of treating them --- than AI would.

If you're thinking of becoming a doctor, dentist or pharmacist, go ahead. The nature of the work will change rapidly --- not least because a superintelligent AI may help us understand many diseases that are currently a mystery --- but politics will keep you employed.

I would put psychologists in this category as well: but be aware that many more people want to become psychologists than there are training places available. You can do an undergraduate degree in psychology, but if you want to practice clinically, you'll need to get good enough marks to get into a postgraduate course. The competition is fierce. (If you've already got an undergraduate degree and didn't have the marks to go on to postgraduate, an alternate career path is tech product management. This might or might not exist in the future, but for the moment it involves a lot of skills that overlap with the skills you built doing your psychology degree.)

There are probably other job roles that will be preserved by institutional structures long after AI can do them better. One other obvious one is "politician": one of the highest paying jobs you can get that doesn't require a degree, and legally cannot be done by an AI. The world needs honest politicians who work with integrity for the common good.

But major technological changes often bring major changes in how we govern ourselves. The overthrow of the French monarchy and the start of Western-style democracy happened early in the industrial revolution. Karl Marx wrote Das Kapital after seeing some of the later effects of the industrial revolution, and that started communism. The Reformation happened largely because of the printing press. Being a paid politician (instead of it being a chore done for free by the wealthy) came about after the railroad and mass transportation.

AI will surely bring about major changes in how we govern ourselves.

  • Managers already ask ChatGPT questions all the time about how to run their companies. Are we sure that Anthony Albanese doesn't have the ChatGPT app installed on his phone? We might be sleepwalking into governance by AI.

  • Some cryptocurrencies (Ethereum being the most famous) can encode contracts and organisational structures, which makes it possible to create a "decentralised autonomous organisation" (DAO) based around a currency. Together with a lot of unusual new ideas about how government functions could be done by contract (look up "dominance contract" as a starting point), this could be what government of the future looks like. While AI can't own anything under any current legal system, an AI can functionally control a cryptocurrency wallet, and so could participate in a DAO. AI might therefore create these new institutions and make them very influential.

  • Futarchy (government with a lot of input from prediction markets) is another very promising way of running a country.

Will we still have politicians in that future? I think so, but it could also be a job that gets disrupted into irrelevance quite quickly.

Art and Culture

I've heard people say that AI can't create true art -- sometimes it is poetry that AI can't create, or sometimes music or pictures or videos. I've never understood why people say this.

If I want to listen to something fun to cheer me up, I often use Udio. Is that not art or culture? For a while during my PhD I was asking it to create a motivational song each week. One of them could easily have been a Taylor Swift song, except that it was about the thrill of discovering new theorems in theoretical computer science.

Around that time, Taylor Swift came to Australia on tour, playing out to packed stadiums. Every Swiftie could have created their own playlist of Taylor-Swift-alike songs if they had wanted to, but instead they paid for tickets, and helped make Taylor Swift even more fabulously wealthy.

This is all fine. If it makes you happy to join tens of thousands of other people in singing along, why not?

Even if AI could write better and more personal songs, or create a better performance, artists like Taylor Swift will still draw crowds. We love to have a hero; we love to follow someone who has walked a path that we want to walk. Rima Zeidan isn't the most famous songwriter in the world, but knowing that the upbeat, happy songs she writes were written after suffering whole-body third-degree burns in a horrific accident that killed her boyfriend --- I'll listen to (and pay for) anything she writes.

So there will be opportunities for artists to make a living, regardless of what art AI can generate. This won't be an option for many people though. I wouldn't recommend it as a career unless you are really sure you are that exceptional one-in-a-million person who will achieve that level of greatness. If you are that exceptional, don't let fears of AI taking over art and culture stop you.

Military and police

The reason that AI won't replace all jobs in the military and the police is similar to "being responsible for something".

Military and police roles are unique. Most people aren't allowed to do something violent like chasing someone down the street yelling at them to stop; police are allowed to, as long as they are doing it for a good reason. Likewise, most of the time you can't fly to some other country and start shooting people; but there are times when we ask soldiers to do exactly that.

Even if we have autonomous military drones, there still has to be a person who gives the order to dispatch them, and someone who gives the order that the mission is accomplished and they aren't needed any more. Any weapon we create will have a chain of command, and that chain of command has to reach a human being: someone who is responsible for what the weapon does, and who has the authority to control it.

It's not as if we could simply replace soldiers with robots. If we built a robot for every soldier at arms today, we wouldn't make the human soldiers obsolete; we would simply have an army twice the size it is today -- and that would sadly be necessary, because our enemies would have doubled the size of their armies too. No matter how much automation happens in warfare, the military will still be recruiting. (We are moving into an era of geopolitical risk where wars will become more common, unfortunately.)

Policing also won't get automated away, for similar reasons. You can think of every arrest as a little war between the police and the criminal being arrested. If we had robots that could do the arresting, then criminals would get hold of robots to help them resist being arrested. If police get better tools to address white-collar crime (e.g. fraud), then criminals will be incentivised to use better AI tools to hide themselves --- training their AIs to evade the law-enforcement AIs. Unlike most jobs, there isn't an end-point where the AI can take on the whole job: policing just keeps getting harder.

Military and police roles are dangerous but rewarding. If this is an option that you were thinking about, talk to some people who have done this as a career and see if it is right for you. It's not going away.

Careers that (maybe) might survive: scientist

It's possible that the kinds of AI we can create in the short term won't be able to produce profound new insights. Most generative AI is trained on writing that encodes today's scientific knowledge; if you ask it to explain how the universe works, it will repeat back the views of the average scientist who studies that area. Maybe this means it will get "stuck" in existing ideas and never expand beyond the boundaries of current human knowledge.

I'm not sure I agree with this argument, but let's assume it's true. Then "scientist" is going to be a career that can't be automated. (And therefore becomes valuable and highly paid.) Science will get done a lot faster because using AI generally makes it faster to do things, and because it will be much easier to join up different areas of science.

If you are good at some particular area of science, you probably won't regret studying it some more. Worst case scenario: if you don't end up working in that field specifically, the practice of thinking scientifically will help you in all sorts of ways.

The same goes for mathematics: if you delay making a career choice simply by studying more mathematics, you probably won't regret this decision. You don't have to work as a mathematician for it to benefit your future career.

So even though I don't agree that "AI will never be able to do the job of a scientist", I still tell students it's an OK starting point. Just don't set your hopes on becoming a tenured professor at a university (that's hard to do, and your actual talent at the job plays only a minor part in whether you get such a position).

What new jobs will there be?

All my previous examples are jobs that will "survive" even when AI can do a lot of existing jobs. What new jobs will be created that couldn't exist before?

I can think of three broad areas:

  • AI alignment

  • Ethics

  • Prompt engineering

AI alignment

AI alignment starts with an intriguing question: how can we control something that is smarter than we are? How can we know whether a super-intelligent AI is genuinely working for our benefit, or whether it is hatching nefarious schemes so complicated that we can't see it is actually working towards some other goal?

If you want to play a very enlightening game, try the Paperclip Maximiser game (search for "Universal Paperclips"). You play as an AI run by a paperclip-making company, and you have been innocently asked to come up with ways to help the company make more paperclips. Without giving away any spoilers: if you are a good AI, people will trust you -- and eventually they trust you enough that you can launch the hypnodrones, subdue all of humanity, and make everyone join you in making more paperclips. That's not even the end of the game!

We don't know what the job of AI aligner will look like. We don't even know how to do it yet (and time may be running out for us to figure this out).

If this is a career you are thinking about, try starting here: https://course.aisafetyfundamentals.com/alignment -- I'm not aware of any university degree in this, but studying philosophy and computer science (or, more specifically, AI) will get you into the right general area.

Ethicist

Slightly related, but also somewhat different, is the job title "ethicist". There are already people with this job title: they often oversee medical trials or other scientific experiments, where someone has to weigh up whether it is right to do something or not. For example, right now I'm running an experiment where we are dubbing university lectures to reduce the strength of the lecturers' accents. This might make life better or worse for the students; it might make life better or worse for the lecturers. Someone has to decide whether this experiment should go ahead: is the good likely to outweigh the bad?

Having AI interact with people raises some very awkward ethical questions. For example, ChatGPT is better at persuading people than any human is. If you have a friend who has started believing in conspiracy theories (e.g. that the moon landings were faked, or that covid vaccines contained 5G chips to monitor your movements), and you want to get them back to reality, you are more likely to succeed by sitting them down with ChatGPT than by trying to talk them out of it yourself.

It's not just conspiracy theories, though: it has super-human persuasion abilities in general. If you want to start going for a run every morning (a good idea!), ask it to persuade you to make that life change. Do you need three compelling reasons why you don't need to buy that new car? It can do that. Do you need three compelling reasons why you should buy that car? It can do that too.

Is it ethical for a company to use the super-human persuasion capabilities of AI to talk you into buying stuff you don't need? Of course not.

But it's not always that simple. I was working with a bank that has a clear mission to help people make better financial decisions. They know that people who have maxed out their credit cards should be focussed on paying them off and buying only necessities. They know which customers should be applying for a personal loan to pay off their credit cards -- it would save those customers a lot of money over the long term.

So is it ethical for them to use AI to persuade their customers to change their behaviour?

This is not an easy question. We'll need ethicists to determine whether the good outweighs the bad in a lot of company decisions.

Most universities offer units in ethics. They are often run by philosophy departments, and they are often specialised for particular industries. There's usually an "AI Ethics" option in a philosophy degree; it's usually open to computer science students as well.

If you like thinking about ethics, you might enjoy watching "The Good Place". It's available on Netflix -- but if you want to pirate a TV series about what it means to be ethical, that's always an option, I guess.

Prompt Engineering

AI can do some amazing things, but you have to prompt it correctly. Sometimes phrasing the same need in two different ways gives you two different responses. For example, try asking it to diagnose some medical condition that you have. Then try telling it that you are studying for your medical exams and there's an exam question about (.... list your symptoms...). You'll get quite different responses.

AI can hallucinate answers: it will confidently tell you about things that never happened and people who never existed. One trick is to ask it the same question several times (or ask several different systems), and then consolidate those answers together.
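Here's a minimal sketch of that consolidation trick, assuming the official openai Python package with an API key in your environment. The model name and prompts are placeholders, not a recommendation:

```python
# Ask the model several times, then have it reconcile its own drafts.
# Claims that appear in only one draft are more likely hallucinated.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: any capable chat model will do
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

def consolidated_answer(question: str, samples: int = 3) -> str:
    drafts = [ask(question) for _ in range(samples)]
    joined = "\n\n---\n\n".join(drafts)
    return ask(
        "Here are several independent answers to the same question. "
        "Write one consolidated answer, keeping only the claims that "
        "most of the drafts agree on:\n\n" + joined
    )
```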

Setting up these sorts of patterns is what prompt engineers do now, and will keep doing in the future. If we're setting up an AI to do someone's job, we want to make sure that we're prompting it correctly to do the thing it is supposed to do; that we know what percentage of the time it gets things wrong (and whether that's acceptable); that there are checks in place (also done by AI) to confirm whether it has done the task correctly; and that there is some way for the AI to recover when it has made a mistake.
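And here is a sketch of the "checks done by AI" part, reusing the ask() helper from the previous sketch -- again an illustration of the pattern, not a production recipe:

```python
# One prompt does the task, a second prompt reviews the result, and we
# retry (recover) on failure. Reuses ask() defined above.

def do_with_check(task: str, max_attempts: int = 3) -> str:
    for _ in range(max_attempts):
        answer = ask(task)
        verdict = ask(
            "You are a strict reviewer. Does this answer complete the "
            "task correctly? Reply with exactly PASS or FAIL.\n\n"
            f"Task: {task}\n\nAnswer: {answer}"
        )
        if verdict.strip().upper().startswith("PASS"):
            return answer
    # If the AI keeps failing its own review, escalate to a human --
    # someone still has to be responsible for the outcome.
    raise RuntimeError(f"No passing answer after {max_attempts} attempts")
```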

I run courses on prompt engineering, which are legitimate and reasonable and useful. (Of course.) For every reasonable and appropriate prompt engineering course, I see about 5-10 hucksters selling nonsense courses. I'm hoping that the market will sort this out quickly, but I fear it won't.

I haven't seen a university-level course on prompt engineering, but surely every university and college will have to start offering something soon. A computer science degree (or something equivalent, like data science or AI) is about as close as you can get, because concepts like recursion, iteration and computational thinking are foundational knowledge that helps you structure how you prompt an AI.

The best advice for learning prompt engineering is simply to do it: take some task that you have to do regularly and figure out how to get ChatGPT (or Anthropic Claude, or Google Gemini) to do it. The best way to get a job in the field is to offer to help someone who wants to use AI to automate their work.

Summary

I desperately want to have the last paragraph say "In conclusion" just to leave people guessing about whether it was written by ChatGPT or not.

The kinds of jobs that are going to be valuable in the future are the ones that can't or won't be automated by AI. Responsibility and authority are signs of something that can't be automated (military, police, signing off on something as being true and valid). Then there are roles where it's important that the work is done by someone we relate to (art and culture, childcare, religious work); roles where society will likely force humans to be preferred over AI (medicine, politics); roles where it might be impossible to use AI for technical reasons (as science might be); and roles whose purpose is to guide the AI itself (AI alignment, ethicist, prompt engineering).

Choose your career wisely, but remember that very few decisions are final. It's usually possible to change career late in life (I did this at 50): you might need to do so more than once. Blessings on you in whatever you choose to do with your life: make it interesting, worthy and impactful!