October 12 news: Bloomberg reported last night that Google and Discord jointly invited loyal users of Google's AI chatbot Bard to a dedicated chat room. There, Bard's product managers, designers, and engineers have been discussing the AI's effectiveness and practicality, and some participants have begun to question it.
Two participants in the Bard community on Discord shared with Bloomberg details of discussions held in the chat room between July and October. Dominik Rabiej, a senior product manager for Bard, said in one discussion that he "doesn't trust" the responses generated by large language models and recommended that people use Bard only for "creative" and "brainstorming" purposes. Rabiej added that using Bard for programming "is also a good choice" - because people inevitably have to verify that the code actually works. Also this month, Rabiej emphasized the importance of Bard's new "Double-Check Replies" button, a feature said to highlight "potentially incorrect" content in orange. He reiterated that Bard does not truly understand the text it receives; it simply responds with more text based on the user's prompt. "Please remember that Bard, like any large model, is generative - it is not looking up information or summarizing for you, but generating text."
The same product manager said bluntly in a July chat: don't trust the output of a large language model unless you (the user) can independently verify it. "We want it to meet your expectations too, but it's not there yet."
Another product manager, Warkentin, wrote in the discussion: "Humanized improvements are crucial - only then can Bard become a product that works for everyone. Otherwise, users won't be able to judge the product's capabilities, and I think that would be a huge mistake." "We don't need products built in ivory towers, but products everyone can actually use!"
This site previously reported that when Google expanded the deployment of the Bard AI tool in July this year, it added a "share conversation link" feature: users can export a conversation with Bard as a web link and share it with others, who can then continue that conversation with Bard. Analytics consultant Gagan Ghotra discovered a problem with this feature and accordingly warned users not to share any "private content" in conversations with Bard, as the relevant information could otherwise be leaked.