


Tencent releases open source data component Fast-Causal-Inference to support distributed vectorized statistical analysis and causal inference
Tencent announced on its WeChat public account "Tencent Open Source" that its open-source distributed data science component, Fast-Causal-Inference, has been publicly released on GitHub.
(Image source: the "Tencent Open Source" WeChat public account)
According to the announcement, the project was developed by Tencent WeChat. It uses SQL for interaction and is a statistical analysis and causal inference library built on distributed vectorized execution. It is said to "solve the performance bottleneck of existing statistical model libraries (R/Python) on big data, provide causal inference that runs over tens of billions of rows in seconds, and at the same time lower the barrier to statistical models through the SQL language, making them easy to use in production environments. It has already been applied in multiple internal WeChat businesses such as WeChat Channels and WeChat Search."
Official introduction:
Provides second-level causal inference over massive data
By building on the vectorized OLAP execution engines ClickHouse and StarRocks, execution speed can be pushed further to deliver a highly responsive user experience.
Minimalist SQL usage
The SQLGateway web server lowers the barrier to statistical models through the SQL language, providing a minimalist SQL interface on top while transparently handling engine-specific SQL expansion and optimization.
Provides causal inference capabilities ranging from basic operators and higher-order operators to upper-layer application encapsulation
Supports ttest, OLS, Lasso, tree-based models, matching, bootstrap, DML, and more.
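As a rough illustration of what a `ttest` operator like the one listed above computes (this is a plain-Python sketch for readers; it is not Fast-Causal-Inference's actual API or SQL syntax), a two-sample Welch t-test can be written as:

```python
import math
from statistics import mean, variance

def welch_ttest(a, b):
    """Two-sample Welch t-test: returns (t statistic, degrees of freedom).

    Illustrative only -- not the library's implementation.
    """
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)   # sample variances (ddof=1)
    se2 = va / na + vb / nb             # squared SE of the mean difference
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    # Welch-Satterthwaite degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical treatment vs. control metric values
treat = [5.1, 4.9, 5.6, 5.3, 5.8, 5.0]
ctrl = [4.2, 4.5, 4.1, 4.8, 4.3, 4.4]
t, df = welch_ttest(treat, ctrl)
```

The library's value proposition is running this kind of aggregation over billions of rows inside the OLAP engine rather than in a single-machine Python process.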
This site also learned that, according to the official announcement, the first version already supports the following features:
Basic causal inference tools
- Ttest based on the delta method, with CUPED support
- OLS: billions of rows of data at sub-second level
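CUPED (Controlled-experiment Using Pre-Experiment Data), mentioned above, reduces metric variance by regressing out a pre-experiment covariate before the t-test. A minimal plain-Python sketch of the idea (illustrative only, assuming a synthetic dataset; not the library's implementation):

```python
import random
from statistics import mean, variance

def cuped_adjust(y, x):
    """CUPED: y_cv = y - theta * (x - mean(x)), theta = cov(y, x) / var(x).

    x is a pre-experiment covariate correlated with the metric y.
    The adjusted metric keeps the same mean but has lower variance,
    so the subsequent t-test gains statistical power.
    """
    mx, my = mean(x), mean(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / (len(x) - 1)
    theta = cov / variance(x)
    return [yi - theta * (xi - mx) for xi, yi in zip(x, y)]

random.seed(0)
pre = [random.gauss(10, 2) for _ in range(1000)]      # pre-experiment metric
post = [p + random.gauss(1, 1) for p in pre]          # correlated in-experiment metric
adj = cuped_adjust(post, pre)
```

With a strongly correlated covariate, the adjusted series keeps the same mean while its variance drops toward the variance of the unexplained noise.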
Advanced causal inference tools
- OLS-based IV, WLS, and other GLS variants; DID, synthetic control, CUPED, and mediation are incubating
- uplift: minute-level runs on tens of millions of rows
- bootstrap / permutation and other data simulation frameworks, solving variance estimation when no closed-form solution exists
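The bootstrap estimates the sampling variance of a statistic by resampling with replacement, which is precisely the use case when no closed-form variance formula exists. A minimal generic sketch (not the component's actual operator):

```python
import random
from statistics import mean, stdev

def bootstrap_se(data, stat=mean, n_boot=2000, seed=42):
    """Estimate the standard error of `stat` via bootstrap resampling.

    Draws n_boot resamples (with replacement) of the same size as the
    data, evaluates the statistic on each, and returns the standard
    deviation of those replicate values.
    """
    rng = random.Random(seed)
    n = len(data)
    reps = [stat([data[rng.randrange(n)] for _ in range(n)]) for _ in range(n_boot)]
    return stdev(reps)

random.seed(1)
sample = [random.gauss(0, 1) for _ in range(200)]
se = bootstrap_se(sample)
# For the mean, this should land near the analytic sigma / sqrt(n)
```

A distributed engine parallelizes the resample-and-aggregate loop across shards, which is what makes simulation-based variance estimation feasible at the data scales the announcement describes.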
- Open Source Announcement | Tencent Distributed Data Science Component
- Tencent / fast-causal-inference — GitHub