Mistral-Medium accidentally leaked? Previously available only through an API, its performance approaches GPT-4.
The latest statement from the CEO: it is true, and it was leaked by an employee of an early customer. But he also said: stay tuned.
In other words, the leaked version is an old one, and the official release will perform even better.
Over the past two days, this mysterious model, named "Miqu," has become a hot topic in the large-model community, and many suspect it is a fine-tuned version of Llama.
Mistral's CEO explained that this early version of Mistral Medium was retrained from Llama 2 in order to deliver an API with close-to-GPT-4 performance to early customers as quickly as possible; its pre-training finished on the day Mistral 7B was released.
Now that the truth is out, the CEO is still keeping secrets, and many netizens are eagerly awaiting what comes next.
Let's review the whole incident. On January 28, a mysterious user named "Miqu Dev" posted a set of files called "miqu-1-70b" on HuggingFace.
The model card accompanying the files states that the new LLM uses the same prompt format and interaction style as Mistral.
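For context, Mistral's instruction-tuned models wrap user turns in [INST] tags. A minimal sketch of that format, assuming the leaked model follows the same convention as Mistral 7B Instruct:

```python
# Mistral-style instruct prompt: the user turn is wrapped in [INST] ... [/INST],
# with a beginning-of-sequence token at the start. The miqu model card
# reportedly described the same convention.
def build_mistral_prompt(user_message: str) -> str:
    return f"<s>[INST] {user_message} [/INST]"

print(build_mistral_prompt("Explain mixture-of-experts in one sentence."))
# <s>[INST] Explain mixture-of-experts in one sentence. [/INST]
```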
On the same day, an anonymous user on 4chan posted a link to the miqu-1-70b files, and netizens soon noticed the mysterious model and began running benchmark tests on it.
The results were surprising: it scored 83.5 on EQ-Bench (evaluated locally), surpassing every other large model in the world except GPT-4.
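Running such an evaluation locally amounts to loading the quantized GGUF files and scoring the model's completions. A minimal sketch using llama-cpp-python, with a hypothetical local filename (the leak was distributed at several quantization levels), and a single EQ-Bench-style prompt rather than the actual benchmark harness:

```python
from llama_cpp import Llama

# Hypothetical path to one of the leaked quantized GGUF files.
llm = Llama(
    model_path="./miqu-1-70b.q4_k_m.gguf",  # assumed filename
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to GPU if available
)

# EQ-Bench asks the model to rate the intensity of emotions felt by
# characters in a scenario and parses the numeric ratings; this is just
# one illustrative completion in that style.
out = llm(
    "<s>[INST] Rate the intensity of 'relief' (0-10) felt by someone "
    "whose lost dog just came home. Answer with a number only. [/INST]",
    max_tokens=16,
    temperature=0.0,
)
print(out["choices"][0]["text"].strip())
```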
For a time, netizens called loudly for the model to be added to the leaderboards and for its true identity to be uncovered.
There were three main directions of suspicion:
Some netizens posted side-by-side comparisons: knowing the standard answer would make sense on its own, but an unrelated model's Russian wording could not match Mistral-Medium's word for word.
But other netizens discovered that it is not an MoE model: it has the same architecture, the same parameter count, and the same number of layers as Llama 2.
However, this was immediately questioned by others: Mistral 7B also has the same parameter count and number of layers as Llama 7B.
Instead, it looks more like an early, non-MoE version of a Mistral model.
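The comparison those netizens ran boils down to reading the model's configuration metadata. A minimal sketch using transformers' AutoConfig (the leak itself shipped GGUF files, whose metadata carries the same fields; the Llama repo below is gated and needs HuggingFace access):

```python
from transformers import AutoConfig

# Llama-2-70B reference values (gated repo; requires an accepted license / HF token).
llama = AutoConfig.from_pretrained("meta-llama/Llama-2-70b-hf")
print(llama.num_hidden_layers, llama.hidden_size, llama.num_attention_heads)
# -> 80 8192 64: the layer count, width, and head count miqu was reported to share

# A MoE model such as Mixtral declares its expert count in the config;
# a dense Llama-style model has no such field.
mixtral = AutoConfig.from_pretrained("mistralai/Mixtral-8x7B-v0.1")
print(getattr(mixtral, "num_local_experts", None))  # -> 8 (MoE)
print(getattr(llama, "num_local_experts", None))    # -> None (dense)
```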
However the discussions went, it is undeniable that in many people's minds this is the model closest to GPT-4.
Now, Mistral co-founder and CEO Arthur Mensch has admitted that the leak came from an over-enthusiastic employee of one of their early-access customers, who publicly released a quantized version of an old model the company had trained.
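"Quantized" here means the weights were released at reduced numerical precision, which shrinks a 70B model enough to run on consumer hardware. The leak itself was a set of GGUF files; the sketch below uses bitsandbytes 4-bit loading merely to illustrate the idea, with Llama-2-70B as a stand-in model id:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# 4-bit quantization config: weights stored as NF4, computation in fp16.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# Stand-in model id; the actual leak was distributed as GGUF files instead.
model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-70b-hf",
    quantization_config=bnb,
    device_map="auto",
)
```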
As for Perplexity, its CEO also clarified that they never received the weights of Mistral Medium.
Netizens worried that this version might be taken down.
Interestingly, Mensch did not ask for the post on HuggingFace to be removed.
Instead, he left a comment saying that attribution might be considered.
Reference links:
[1] https://www.reddit.com/r/LocalLLaMA/comments/1af4fbg/llm_comparisontest_miqu170b/
[2] https://twitter.com/teortaxesTex/status/1752427812466593975
[3] https://twitter.com/N8Programs/status/1752441060133892503
[4] https://twitter.com/AravSrinivas/status/1752803571035504858