


Haizhi Technology Releases the First Platform Fusing Knowledge Graphs with Large Models, Joining China's "War of a Hundred Models"
On the afternoon of September 8, Zheng Weimin, academician of the Chinese Academy of Engineering, professor in the Department of Computer Science at Tsinghua University, and chief scientist of Haizhi Technology, demonstrated and released the "Atlas LLM" knowledge graph and large model fusion application platform. Aimed at B-end (enterprise) users, the platform applies the knowledge graph, a foundational artificial intelligence technology, to help large models overcome "hallucinations" in enterprise-level and industrial applications and perform accurate reasoning grounded in specific industries and scenarios.
The product has already been deployed in energy, finance, government affairs, and other fields, aiming to open up the "last mile" that brings general artificial intelligence into B-end and industrial applications.
The China Electronics Standardization Institute, the National Information Technology Standardization Technical Committee, and other bodies launched an initiative titled "Knowledge Graph and Large Model Integration Practice Report", in which Haizhi Technology participated throughout. The report has now been officially released; it shares the results above along with several application cases, encouraging more market players and technical teams to join the multi-technology integration of general artificial intelligence.
Large Models Hallucinate; Knowledge Graphs Prescribe the Remedy
"When a large model moves from the C side to the B side, it goes from being a toy to being a tool, and a tool's accuracy is crucial. It hardly matters if a model misdates Einstein's theory of relativity while you are writing an article, but if a large model proposes the wrong option for repairing a power grid failure, the result could be a disaster," Academician Zheng Weimin, chief scientist of Haizhi Technology, said in an interview with reporters. "In the short term, it is difficult to solve the 'hallucination' problem by relying solely on iteration of the large model itself. Here the knowledge graph, a more brain-like artificial intelligence tool, steps in: its precise knowledge derivation complements the large model very well. In turn, the large model's rapid learning ability greatly accelerates knowledge generation for the knowledge graph."
The generality, rapid autonomous learning, and self-improvement capabilities of large language models (LLMs) are widely recognized as revolutionary. However, because an LLM works fundamentally by analyzing the vocabulary, syntactic structure, and semantic information in text and capturing the patterns and probability distributions among them, it tends to generate answers from statistical regularities rather than by performing deep logical reasoning or exercising higher cognitive abilities. Moreover, when generating text, an LLM can be constrained by biases and misleading information present in its training data and may produce inaccurate or unreasonable answers. This defect, rooted in the technology's basic mechanism, is vividly described as the "large model hallucination". Such unexpected hallucinations are the last and biggest challenge that general artificial intelligence, especially the kind represented by large models, faces when entering rigorous B-end applications.
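To make the point about statistical generation concrete, here is a toy sketch, not any production LLM, of a hypothetical bigram model over a made-up corpus: it produces fluent-looking text purely from word-pair statistics, with no logical reasoning involved.

```python
# Toy bigram "language model": generates fluent-looking text purely from
# word-pair statistics, with no logical reasoning. Corpus is made up.
from collections import Counter, defaultdict

corpus = "the grid fault was cleared . the grid fault was logged .".split()
bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

# Continue a prompt by always taking the statistically most common next word.
word, output = "the", ["the"]
for _ in range(4):
    word = bigrams[word].most_common(1)[0][0]
    output.append(word)
print(" ".join(output))  # e.g. "the grid fault was cleared"
```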
Against this backdrop, the knowledge graph, another widely used foundational artificial intelligence technology, has begun to show its natural complementarity with large models. As a recognized "brain-like" way of representing knowledge, knowledge graphs model semantic networks to describe entities and relationships in the objective world in structured form, and are widely used in knowledge reasoning. Knowledge reasoning over a knowledge graph explains its inference process through auxiliary means such as reasoning paths and logical rules built on discrete symbolic representations, providing an important route to "explainable artificial intelligence".
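As a minimal illustration of this symbolic, path-explaining style of reasoning, consider the sketch below; the entities, relations, and single hand-written rule are made-up examples, not Haizhi's implementation.

```python
# Minimal knowledge-graph reasoning with an explainable path.
# Entities, relations, and the single rule are illustrative only.
facts = {
    ("transformer_T1", "part_of", "substation_S3"),
    ("substation_S3", "feeds", "grid_section_G7"),
}

def infer_affects(facts):
    """Rule: if X part_of Y and Y feeds Z, then a fault in X affects Z."""
    results = []
    for (x, r1, y) in facts:
        if r1 != "part_of":
            continue
        for (y2, r2, z) in facts:
            if y2 == y and r2 == "feeds":
                path = [(x, "part_of", y), (y, "feeds", z)]
                results.append(((x, "affects", z), path))
    return results

for triple, path in infer_affects(facts):
    # The reasoning path makes the conclusion auditable, unlike a bare
    # statistical answer from a language model.
    print(f"inferred {triple} via {path}")
```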
Haizhi Technology, with Academician Zheng Weimin as its chief scientist, has been in business for ten years and is currently the largest knowledge graph and graph computing company in China, with the widest range of application customers. It has deep and broad knowledge graph application experience in finance, government affairs, energy, transportation, and other fields, and has launched Atlas Graph, a leading domestically developed distributed cloud-native graph database. As a representative Chinese database vendor, it was selected for Gartner's "Market Guide for Graph Database Management Systems", filling the domestic gap in distributed graph databases.
In October 2022, Academician Zheng Weimin led a team of young scientists to set up a "High-Performance Graph Computing Academician Workstation" at Haizhi Technology and began tracking the research and development of large models around the world. The team is committed to deeply integrating knowledge graphs with large model technology, and has deployed and trialed the combination in finance, energy, and government enterprises and institutions. Targeting the huge structured data systems and computational analysis applications that B-end industry customers have accumulated over many years, Academician Zheng and Haizhi innovatively use the knowledge graph as an intermediary bridge connecting these existing data systems with large models, comprehensively improving the interpretability, interactivity, and verifiability of large models deployed in industry.
"One measure of the development of artificial intelligence is the learning of human brain intelligence. According to our observation, the rigorous reasoning of knowledge graphs is similar to the human left brain, while the rapid learning of large models is similar to the flexibility of the right brain." Zheng Weimin said: "Our product aims to achieve interoperability between the left and right brains through a set of knowledge mapping, verification and optimization architecture, and promote the in-depth application of general artificial intelligence in enterprise-level scenarios."
Balancing Quality and Efficiency in Large Model Applications
Haizhi Technology CTO Yang Juan presents the company's knowledge graph and large model application products.
"We do not produce large models, we are committed to applying large models to production." Dr. Yang Juan, CTO of Haizhi Technology, said that Haizhi Atlas LLM large model fusion application platform has three very unique positioning : First, it realizes the interaction between the knowledge graph and the large model in the whole process, effectively overcoming the interference of the illusion of large models on industrial applications; second, it better manages the customers' existing rich data assets and unifies them with the results of the large model. , avoid reinventing the wheel, making calculations more efficient and applications more accurate; third, it can help customers switch and flexibly apply different open source large models to achieve more cost-effective scenario applications.
Qu Ke, Senior Vice President of Haizhi Technology, described an industrial scenario the platform has already verified. In the operation and inspection of industrial manufacturing equipment, fault identification in complex production systems has long been a field where people place high hopes on artificial intelligence, owing to its complex combinations of fault types, heterogeneous data, and demand for fast response. "In the past, we used knowledge graph technology to construct the relationships between devices and their associated measurement signals into fault knowledge feature subgraphs, helping machines identify faults automatically. But that process required business experts to work with engineers on a great deal of entity construction and configuration before knowledge generation could happen. Today, large models greatly improve the efficiency of this knowledge extraction and fusion. On one hand, the large model quickly extracts faulty equipment and its associated measurements, helping the knowledge graph rapidly build feature subgraphs and improving efficiency; on the other hand, business experts can more efficiently verify the feature subgraphs automatically generated by the large model, solidifying and calibrating empirical knowledge of fault characteristics and ensuring quality."
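The workflow Qu Ke describes can be sketched roughly as follows; the maintenance log, entity names, and the `llm_extract` and `expert_verified` stubs are hypothetical illustrations, not Haizhi's actual interfaces.

```python
# Sketch: an LLM proposes (head, relation, tail) triples from a free-text
# maintenance log; experts verify them before they are solidified into
# the fault feature subgraph. All names and stubs are hypothetical.

def llm_extract(log_text: str) -> list[tuple[str, str, str]]:
    # Stand-in for a real LLM extraction call; returns a canned result.
    return [
        ("pump_P2", "has_fault", "bearing_wear"),
        ("pump_P2", "monitored_by", "vibration_sensor_V9"),
        ("vibration_sensor_V9", "reads_above", "threshold_8_mm_s"),
    ]

def expert_verified(triple: tuple[str, str, str]) -> bool:
    # Placeholder for the human-in-the-loop check described above.
    return True

feature_subgraph: set[tuple[str, str, str]] = set()
log = "Pump P2 shows bearing wear; vibration sensor V9 reads above 8 mm/s."

for triple in llm_extract(log):
    if expert_verified(triple):
        feature_subgraph.add(triple)

print(f"{len(feature_subgraph)} verified triples in the fault feature subgraph")
```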
Enterprise "Big Model" "Three Steps" to Get Started
In the era of large models, another question industry customers focus on is whether future development should completely overturn the old computation and analysis systems or upgrade on top of them. Building on the huge computational analysis applications and small business models customers have already established, Haizhi Technology implements large model applications in "three steps", following the logic of basic scene recognition, comprehensive scene orchestration, and scene solidification and release; a sketch of how the steps fit together follows the list below.
Step 1: Use the large model to fine-tune, annotate, and identify the scenario semantics of the customer's existing computational analysis and small business model services, forming a library of basic service scenarios.
Step 2: Comprehensively apply higher-order scenarios and their corresponding prompt semantics, using the large model's reasoning capabilities to intelligently orchestrate computation calls and computation logic.
Step 3: Generate a scene orchestration knowledge graph from the large model's orchestration. By exploiting the observability, interpretability, and interoperability of the knowledge graph, the large model's orchestration results for complex scenes can be inspected, manually verified, and optimized, stably solidifying the scene knowledge corresponding to each semantic intent and publishing it as an external capability.
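A minimal sketch of how the three steps might fit together follows; all names and functions here are hypothetical illustrations, not the platform's real API.

```python
# Sketch of the three-step flow: a basic scenario library (step 1), LLM
# orchestration of those scenarios (step 2), and graph-based verification
# before the plan is solidified and published (step 3). Hypothetical names.

# Step 1: annotated scenario semantics mapped to existing analysis services.
scenario_library = {
    "query_load": "load_analysis_service",
    "detect_anomaly": "anomaly_detection_service",
}

def llm_orchestrate(request: str) -> list[str]:
    # Step 2 stand-in: an LLM would map a high-order request plus prompt
    # semantics to an ordered plan of basic-scenario calls.
    return ["query_load", "detect_anomaly"]

def verify_plan(plan: list[str]) -> bool:
    # Step 3 stand-in: the plan becomes an orchestration knowledge graph
    # that can be inspected; here we only check every step is known.
    return all(step in scenario_library for step in plan)

plan = llm_orchestrate("Check today's grid load and flag any anomalies")
if verify_plan(plan):
    # Solidify and publish the verified plan as a reusable capability.
    print("published:", [scenario_library[s] for s in plan])
```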
At present, building on industry customers' existing computation and analysis capabilities, Haizhi has realized basic scene recognition, complex scene orchestration, and the knowledge graph's capabilities for knowledge observation, solidification verification, and publishing. Under the dual "accuracy controls" of existing computational analysis knowledge and graph-solidified scenarios, the platform delivers accurate computational question answering with large model inference and generation at its core.