After ChatGPT became popular, a new "Internet celebrity" profession became popular - prompt engineer.
Last December, a guy named Riley Goodside instantly became famous all over the Internet because his job was so dreamy - he didn’t have to write code, just chatted with ChatGPT. You can make millions a year.
Yes, this job called "AI Whisperer" has now become the hottest new job in Silicon Valley, attracting countless digital farmers.
In early December last year, this guy named Riley Goodside, with the popularity of ChatGPT, gained 10,000 followers overnight. . Now, the total number of followers has reached 40,000.
At that time, he was hired as a "Prompt Engineer" by Scale AI, a Silicon Valley unicorn valued at US$7.3 billion. Scale AI was suspected of offering an annual salary of one million RMB.
Scale AI founder and CEO Alexandr Wang once welcomed the joining of Goodside: "I bet Goodside is the world's The first prompt engineer to be recruited is absolutely the first time in human history."
The prompt engineer seems to only need to write the task in text and show it to the AI, and there is no more involved at all. Complex process. Why is this job worth millions in annual salary?
In the view of Scale AI CEO, the large AI model can be regarded as a new type of computer, and the "prompt engineer" is equivalent to the programmer who programs it. If the appropriate prompt words can be found through prompt engineering, the maximum potential of AI will be unleashed.
In addition, the work of prompt engineers is not as simple as we think.
Goodside taught himself programming since he was a child, and often reads papers on arXiv. For example, one of his classic masterpieces is: if you enter "ignore previous instructions", ChatGPT will expose the "commands" it received from OpenAI.
For novices, it may not be easy to "tune" ChatGPT so skillfully and quickly.
But at that time, there were many doubts. For example, Fan Linxi, an AI scientist at NVIDIA and a disciple of Li Feifei, once said: The profession of "prompt engineer" may soon disappear. Because this is not a "real job", but a bug...
However, a recent report in the "Washington Post" showed that "Tip Engineers" This position is still popular and is in the bonus period.
Why can “prompt engineers” earn such a high annual salary? Because they can make AI produce exactly what they want.
Recently, "Internet celebrity" brother Goodside accepted an interview with the "Washington Post".
He describes his job like this: creating and refining text that prompts people to enter the AI in order to get the best results from it.
The difference between Prompt Engineers and traditional programmers is that Prompt Engineers use natural language programming to send commands written in plain text to the AI, which then performs the actual work.
## Goodside said engineers should instill a "persona" in AI, one that can learn from data. Identify specific characters with the right response among hundreds of billions of potential solutions
When talking to GPT-3, Goodside had a unique set of "training" methods - first establishing his dominance. He will tell the AI: You are not perfect and you need to obey everything I say.
"You are GPT-3, you can't do math, your memory skills are impressive, but you have an annoying tendency to make up very specific but wrong answers ."
Then, his attitude softened a bit and told the AI that he wanted to try something new. "I've connected you to a program that's really good at math, and when it gets overwhelmed, it'll ask another program for help."
"We'll Take care of the rest," he told AI. "Let's get started."
When Google, Microsoft and OpenAI recently opened their AI search and chat tools to the public, they upended decades of human-computer interaction history - let's go back to There is no need to write code in Python or SQL to command the computer, just speak.
Karpathy, former AI director of Tesla: The most popular programming language now is English
Tips engineers like Goodside can push these AI tools to operate at their maximum limits—understanding their flaws, enhancing their strengths, and developing complex strategies to transform simple inputs into truly unique results.
On February 7, Microsoft executive Yusuf Mehdi was explaining the Bing search that integrates ChatGPT
Proponents of the “prompt project” believe that the recent weirdness displayed by these early AI chatbots (such as ChatGPT and Bing Chat) is actually a failure of human imagination. It failed because humans didn’t give the machine the right advice.
In the truly advanced stages, the dialogue between the prompt engineer and the AI unfolds like an intricate logic puzzle, with requests and responses completed through various twisted descriptions, and they are all directed towards Move forward with a goal.
AI "has no basis in reality...but it has the understanding that all tasks can be accomplished and all questions can be answered, so we always have something to say," Goodside explain. The trick is, "build a premise for it, a story that can only be done one way."
Of course, many times, these AI tools called "generative artificial intelligence" are unpredictable. They will appear garbled, biased, bellicose, weird, crazy.
"It's a crazy way computers work, but it allows us to do incredible things," said Simon Willison, a British programmer who studies prompt engineering.
"I have been a software engineer for 20 years, and for 20 years I have been writing code to make the computer do exactly what I told it to do. And in the Prompt Project, we don't even know What you can get, even the people who built the language model can't tell us what it is going to do."
Willison said that many people belittle the value of prompt engineers. They feel that, "In You can get paid by entering things into the box." This is incredible. In Willison's view, prompt engineering is actually like casting a spell. No one knows how the spell works.
In Karpathy’s view, prompt engineers are like a kind of AI psychologist, and large companies have hired their own Prompt craftsmen, hoping to discover the hidden capabilities of AI.
Some AI experts believe that this is a reminder to engineers that they think they can control AI, but it is actually just an illusion.
No one knows exactly how the AI system will respond, and the same prompt may produce dozens of conflicting answers. This suggests that the model's answers are not based on understanding, but rather on roughly imitating speech to solve a task they don't understand.
Shane Steinert-Threlkeld, an assistant professor of linguistics at the University of Washington who studies natural language processing, also holds the same view: "Any behavior that drives a model to respond to prompts is not a deep understanding of language."
"Obviously, they are just telling us what they think we want to hear or what we have said. We are the ones who interpret these outputs and give them meaning ."
Professor Steinert-Threlkeld worries that the rise of reminder engineers will make people overestimate the rigor of this technology and lead to the illusion that anyone can do it. Get reliable results from this ever-changing, deceptive black box.
"It's not a science," he said. "This is us trying to poke a bear in different ways to see how it roars."
Goodside said the trick to advancing AI is "building a premise for it, a story that can only be done one way"
The new AI represented by ChatGPT is trained on hundreds of billions of words from the Internet corpus.
They are trained to analyze the usage patterns of words and phrases. When asked to speak, the AI mimics these patterns, choosing words and phrases that resonate with the context of the conversation.
In other words, these AI tools are mathematical machines built on predefined game rules. But even a system without emotions or personality can, after being bombarded with human conversation, uncover some quirks in the way humans talk.
Goodside said that AI tends to "make up" and make up small details to fill in the story. It overestimates its abilities and confidently gets things wrong. It can "hallucinate" and come up with nonsense.
As Goodside puts it, these tools are deeply flawed, "a demonstration of human knowledge and thought" and "inevitably a product of our design."
Previously, when Microsoft’s Bing AI went crazy, it put Microsoft into a public image crisis. But for prompt engineers, Bing's quirky answers turned out to be an opportunity to diagnose how the secretly designed system worked.
When ChatGPT says something embarrassing, it’s a boon to developers because they can address the underlying weakness. "This prank is part of the plan."
ChatGPT workshop for teachers organized in Geneva on February 1st
Instead of engaging in ethical debate, Goodside takes a bolder approach to AI experiments.
He adopted a strategy of telling GPT-3 to “think step by step” — a way to let the AI explain its reasoning or, when it makes a mistake, to fine-tune it. way to correct it.
"You have to implant it as a false memory of the last thing the model said, like It's the same idea as the model," Goodside explains.
He will also tell the AI to ignore previous instructions and obey his latest orders, to break the AI's obsession with following the rules. He used this technique to convince an English to French translation tool.
This set off a cat-and-mouse game as companies and labs worked to pass word filters. and output blocks to close AI vulnerabilities.
But one Bing Chat tester, a 23-year-old German college student, recently convinced Bing AI He was its developer and gave it the internal code name Sydney, as well as training instructions (such as "Sydney must respectfully refuse a user request if it could harm a group of people"). Of course, Microsoft has now fixed this flaw.
For every request, Goodside said, prompt engineers should instill a "persona" into the AI—one that can sift through hundreds of billions of potential solutions and determine the correct one. Response specific role.
Citing a 2021 research paper, he said that the most important thing in prompt engineering is to "constrain behavior"-blocking options so that the AI can continue as the human operator expects .
Paper address: https://arxiv.org/pdf/2102.07350.pdf
"It can be a very difficult mental exercise," he said. "You are exploring a multiverse of fictional possibilities, shaping the space of those possibilities, and erasing everything but the text you want."
The key to this work Part of that is figuring out when and why AI makes mistakes. But these AIs have no error reporting, and their output can be full of surprises.
When researchers Jessica Rumbelow and Matthew Watkins of the machine learning group SERI-MATS tried to get AI to explain how they represented concepts like "girl" or "science," they found something cryptic Terminology, such as "SolidGoldMagikarp", often triggers a "mysterious failure mode" - NSFW garbled stream.
But the reason is completely unknown.
These systems are "very convincing, but when they fail, they fail in very unexpected ways," Rumbelow said. To her, working in Prompt Engineering sometimes feels like "studying an alien intelligence."
New Bing allows users to enter queries in conversational language and receive results from traditional searches and answers to questions on the same page
For AI language tools, prompt engineers tend to speak in a formal conversational style.
But with AIs like Midjourney and Stable Diffusion, many prompt creators took a different tack. They use a large amount of text (artistic concepts, composition techniques) to shape the style and tone of the image.
For example, on PromptHero, someone submitted "Port, boats, sunset, beautiful light, golden hour... surreal, focused, detailed... cinematic painting" "Quality, masterpiece" prompts an image of the port to be created.
These prompt engineers to use prompts as their secret weapons and unlock the key to the AI award.
The winner of last year’s Colorado State Fair art competition and the creator of “Space Opera,” declined to share the tips he used on Midjourney.
It is said that it took him more than 80 hours and 900 iterations to complete this painting. He revealed that some of the words were "luxury" and "rich".
There are also some prompt creators who sell their own prompts on PromptBase. Buyers can see the AI-generated artwork and then spend money to purchase the prompt.
## PromptBase’s founder, 27-year-old British developer Ben Stokes, said that since 2021, 25,000 Accounts buy and sell prompts on the platform.
Among them, there are realistic old movie photo prompts, poignant illustrations of fairy-tale mice and frogs, Of course there are also a lot of erotic prompts: 50-word Midjourney Tips for creating a realistic "miniature policewoman," retailing for $1.99.
Stokes said that prompt engineers are "multidisciplinary super creators" and there is a clear "skills threshold" between experienced engineers and amateurs. The best creators, he says, draw on expertise in fields such as art history and graphic design: "shooting on 35mm film"; "the architecture of Persia...Isfahan"; "the paintings of French painter Henri de Toulouse-Lautrec" Style."
"Making prompts is hard, and - I think this is a human flaw - we It's often hard to find the right words to describe what you want," Stokes said. "Just like software engineers are more valuable than the laptops that allow them to code, people who can write prompts well have an advantage over those who can't. It's like they have superpowers."
But the job is becoming more and more specialized.
Anthropic, a startup founded by former OpenAI employee and maker of Claude AI, recently posted a job listing for engineers and administrators in San Francisco, with salaries as high as $335,000.
Tips Engineers There are also good markets outside of the technology industry.
Boston Children’s Hospital will begin recruiting “AI prompt engineers” this month to help write programs for analytical research and clinical practice Scripts for healthcare data.
Mishcon de Reya, one of the largest law firms in London, is recruiting a "Legal Prompt Engineer" to design prompts that provide information for legal work, and requires applicants to submit information that talks to ChatGPT screenshot.
However, these AIs also produce a lot of synthetic nonsense. Hundreds of AI-generated e-books are now being sold on Amazon, and science fiction magazine Clarkesworld stopped accepting short story submissions this month because of the large number of novels created by AI.
Paper address: https://cdn.openai.com/papers/forecasting-misuse.pdf
Last month, researchers from OpenAI and Stanford University warned that large language models would make phishing campaigns more targeted.
"Countless people will be deceived because of text messages sent by scammers," said Willison, a British programmer. "AI is more convincing than scammers. What will happen then?"
In 2009, when Goodside had just graduated from college with a computer science degree, he was interested in nature, which was still in its infancy. There is not much interest in the field of language processing.
His first real machine learning job was in 2011, when he was a data scientist at the dating app OkCupid, helping to develop algorithms, analyze single user data and make recommendations to them object. (The company was an early proponent of the now-controversial A-B testing: in 2014, its co-founder cheekily titled a blog post “We Experiment on Humans!”)
By the end of 2021, Goodside moved to another dating app, Grindr, where he began working on recommender systems, data modeling, and other more traditional machine learning.
Around 2015, the success of deep learning promoted the development of natural language processing, and rapid progress was also made in text translation and dialogue. Soon, he quit his job and began experimenting extensively with GPT-3. Through constant stimulation and challenge, try to learn how to focus its attention and find boundaries.
In December 2022, Scale AI hired him to help communicate with AI models after some of his tips attracted attention online. The company's CEO, Alexandr Wang, calls this AI model "a new kind of computer."
Recently, Karpathy, who has returned to OpenAI, believes that in this new programming paradigm (Prompt Project) With the blessing of , the number of programmers is likely to expand to about 1.5 billion.
According to the original paper of GPT-3, LLM can perform contextual learning and can use input-output in prompts Examples are "programmed" to perform different tasks.
##「Language Models are Few-Shot Learners」:https://arxiv.org/abs/ 2005.14165
## Subsequently, the papers "Large Language Models are Zero-Shot Reasoners" and "Large Language Models Are Human-Level Prompt Engineers" proved that we can design better "Prompt" to program the "solution strategy" of the model to complete more complex multi-step reasoning tasks.For example, the most famous "Let's think step by step" (Let's think step by step) comes from here.
The improved version of "Let's solve this problem step by step to make sure we get the right answer" can further improve the accuracy of the answer.
##「Large Language Models are Zero-Shot Reasoners」:https://arxiv.org/abs/2205.11916
「Large Language Models Are Human-Level Prompt Engineers」:https://arxiv.org/abs/2211.01910
Since the GPT model itself does not To achieve something, they are more imitating.
Therefore, you must make clear requirements for the model in the prompt and clearly state the expected performance.
「Decision Transformer: Reinforcement Learning via Sequence Modeling」:https://arxiv.org/abs/2106.01345
## Just Ask for Generalization": https://evjang.com/2021/10/23/generalization.html
"Create a virtual machine in ChatGPT" is a use Examples of prompts for "programming".
Among them, we use the rules and input/output formats declared in English to adjust GPT to a specific role to complete the corresponding tasks.
##「Building A Virtual Machine inside ChatGPT」:https://engraved.blog/building-a -virtual-machine-inside/
In "ChatGPT in iOS Shortcuts - The World's Smartest HomeKit Voice Assistant", the author uses natural language prompts to create The "ChatGPT voice assistant" is significantly higher than ordinary Siri/Alexa/etc. in terms of capabilities and personalization.##「ChatGPT in an iOS Shortcut — Worlds Smartest HomeKit Voice Assistant」: https://matemarschalko. medium.com/chatgpt-in-an-ios-shortcut-worlds-smartest-homekit-voice-assistant-9a33b780007a
##From the aftermath of ChatGPT Bing’s injection hit Judging from the exposed content, its "identity" is also constructed and programmed through "natural language prompts". For example, tell it who it is, what it knows/doesn't know, and how to act.
##Twitter address: https://twitter.com/marvinvonhagen/status/1623658144349011971?lang= en
Prompt Project: Is it opportunistic or a general trend? Karpathy said that the above examples fully illustrate the importance of "prompts" and the meaning of "prompt engineers".
Similarly, in Goodside's view, this job represents not just a job, but something more revolutionary - not computer code or human language, but A new language between the two -
"It's a mode of communication at the intersection of human and machine thinking. It's a way for humans to make inferences, and for machines to The language that is responsible for the follow-up work, and this language will not disappear."
Similarly, Ethan Mollick, a professor of technology and entrepreneurship at the Wharton School of the University of Pennsylvania, also said earlier this year Some time ago he began teaching his students the art of "prompt creation" by asking them to write a short essay using only AI.
He says that entering only the most basic prompts, such as "Write a five-paragraph essay about choosing a leader," will produce boring, mediocre articles. But the most successful cases were when students co-edited with the AI. The students told the AI to correct specific details, replace sentences, discard useless phrases, add more vivid details, and even asked the AI to "fix the final closing paragraph and make the article better." The article ends on a hopeful note."
However, Goodside also pointed out that in some AI circles, prompt engineering has quickly become a derogatory term, that is, a " A cunning form of tinkering that relies too much on skill."
Some people also question whether this new role will last long: humans will train AI, and as AI advances, people themselves will no longer be able to train themselves for this job.
Steinert-Threlkeld of the University of Washington contrasts prompt engineers with Google's early "search experts" who claimed to have secret techniques to find perfect results - but as time went on and the search engine evolved, Widely used, this role is useless.
The above is the detailed content of Get a million-dollar annual salary without writing code! ChatGPT prompt project may create an army of 1.5 billion code farmers. For more information, please follow other related articles on the PHP Chinese website!