Prompt engineering could be the next job replaced by AI

Prompt engineering might not be the path to AI riches it has been made out to be.

Catch-up: Even though LLMs accept everyday language, they don’t always do exactly what a user wants, which has made “prompt engineer” AI’s buzzy new job. Prompt engineers use strategies based on an understanding of how AI models reason to write more effective commands and avoid unintended outputs.

  • The strategies can range from breaking a prompt down into simpler tasks, to making an AI work through an internal monologue before answering, to flattering the chatbot.
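The strategies above amount to rewriting the same request in different shapes. A minimal sketch of all three, where the exact template wording is an illustrative assumption rather than a tested best practice:

```python
# Illustrative sketch of three common prompt-engineering strategies.
# The template wording here is hypothetical, not a proven formula.

def decompose(task: str, steps: list[str]) -> str:
    """Break one request into simpler subtasks to be answered in order."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return f"Complete this task by working through each step:\n{numbered}\nTask: {task}"

def chain_of_thought(task: str) -> str:
    """Ask the model to work through an internal monologue before answering."""
    return f"{task}\nThink through your reasoning step by step before giving a final answer."

def flatter(task: str) -> str:
    """Prepend flattery -- one of the odder tactics that sometimes helps."""
    return f"You are a brilliant expert assistant. {task}"

print(chain_of_thought("What is 17 * 24?"))
```

Each function only builds the prompt string; the point is that the same underlying task can be framed several ways, and which framing works best varies by model.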

What happened: Engineers at cloud computing company VMware recently tested different prompt engineering strategies on three LLMs and found that what works on one model might not work on another, throwing the usefulness of the role into question.

Yes, but: The one consistently effective tactic was simply asking the chatbot what the best prompt would be. That may be because LLMs process language as math, which doesn’t line up with how humans think about words, so human intuition isn’t necessarily the best guide to what a model will respond to.

  • Models are being created to automate finding the best prompt based on feedback from an LLM.
     
  • An Intel team built a tool to come up with the best prompts for Stable Diffusion’s image generator, and similarly found that automated prompts performed better than ones written by human experts.
     
  • A prompt builder and prompt-tuning capabilities were among the AI tools Salesforce announced this week.
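The meta-prompting tactic described above (have the model write its own prompt, then answer it) can be sketched in a few lines. The `ask` callable here stands in for any chat-completion call; its name and signature are assumptions, not a real library API:

```python
# Sketch of the "ask the model for the best prompt" tactic.
# `ask` is a placeholder for any function that sends a string to an
# LLM and returns its reply -- an assumption, not a real API.

def meta_prompt(task: str, ask) -> str:
    """Have the LLM draft its own prompt for the task, then run it."""
    request = (
        "Write the most effective prompt you can for the following task. "
        f"Reply with only the prompt itself.\nTask: {task}"
    )
    best_prompt = ask(request)  # first call: the model writes the prompt
    return ask(best_prompt)     # second call: the model answers that prompt

# Usage with a stand-in model that just echoes its input:
answer = meta_prompt("Summarize quarterly sales.", ask=lambda p: p)
```

The automated prompt-search tools mentioned in the bullets work on the same loop, but run it many times and score each candidate prompt by the quality of the model's response.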

Bottom line: Prompt engineering isn’t likely to remain a distinct job with a six-figure salary for much longer, but it could stick around, whether as one of several tasks for the person managing a company’s AI or as a skill listed on a resume.