What are the top-level projects of Apache?
Since its establishment in 1999, the Apache Software Foundation has built a strong open source ecosystem. Many excellent open source projects have emerged from its community, and a growing number of projects, both from China and abroad, come to this international open source community for incubation.
All Apache projects must currently pass through the Apache Incubator and meet a series of quality requirements before they can graduate. Projects that graduate from the Incubator either become independent top-level projects or become sub-projects of existing top-level projects.
To help readers understand Apache's incubation standards, this article surveys the projects that successfully completed incubation and became independently managed Apache top-level projects between January 1, 2016 and January 19, 2017.
1. Apache Beam
Apache Beam is an incubation project contributed by Google to the Apache Software Foundation on February 1, 2016. On January 10, 2017, it officially graduated and was promoted to an Apache top-level project.
The main goal of Apache Beam is to unify the programming models of batch and stream processing, providing a simple, flexible, feature-rich, and expressive SDK for processing unbounded, out-of-order, web-scale data sets. The project focuses on the programming model and interface definitions for data processing and does not implement a specific execution engine; the intent is that a data processing program developed with Beam can run on any supported distributed computing engine.
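To illustrate the engine-agnostic model, here is a minimal word-count sketch using the Beam Java SDK (2.x API); it is not from the original article, and the input/output file names are placeholders:

```java
import java.util.Arrays;

import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.Count;
import org.apache.beam.sdk.transforms.FlatMapElements;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.TypeDescriptors;

public class MinimalWordCount {
    public static void main(String[] args) {
        // The runner (Direct, Flink, Spark, Dataflow, ...) is chosen via pipeline
        // options; the pipeline itself stays engine-agnostic.
        Pipeline p = Pipeline.create(PipelineOptionsFactory.fromArgs(args).create());

        p.apply("ReadLines", TextIO.read().from("input.txt"))        // hypothetical input file
         .apply("SplitWords", FlatMapElements
                 .into(TypeDescriptors.strings())
                 .via((String line) -> Arrays.asList(line.split("\\s+"))))
         .apply("CountWords", Count.perElement())
         .apply("Format", MapElements
                 .into(TypeDescriptors.strings())
                 .via((KV<String, Long> kv) -> kv.getKey() + ": " + kv.getValue()))
         .apply("WriteCounts", TextIO.write().to("word-counts"));    // hypothetical output prefix

        p.run().waitUntilFinish();
    }
}
```

The same pipeline can be handed to a different execution engine simply by passing different runner options, which is the separation of concerns the project is built around.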
2. Apache Eagle
Apache Eagle originated at eBay, where it was first used to monitor large-scale Hadoop clusters. It entered the Apache Incubator on October 26, 2015 and officially graduated as an Apache top-level project on January 10, 2017.
Apache Eagle is an open source monitoring and alerting solution for intelligently identifying security and performance issues on big data platforms such as Apache Hadoop and Apache Spark in real time. Its main features include high scalability, extensibility, low latency, and dynamic collaboration. It supports real-time monitoring of data activity, can immediately detect access to sensitive data or malicious operations, and can take countermeasures right away.
3. Apache Geode
Apache Geode was originally developed by Gemstone Systems as a commercial product. In its early days it was widely used in the financial sector as a transactional, low-latency data engine for Wall Street trading platforms. The code was submitted to the Apache Incubator on April 27, 2015, and the project graduated as an Apache top-level project on November 21, 2016.
Apache Geode is a data management platform that provides real-time, consistent access for data-critical applications throughout distributed cloud architectures. It uses dynamic data replication and partitioning to deliver high availability, high performance, high scalability, and fault tolerance. In addition to being a distributed data container, Apache Geode is an in-memory data management system that provides reliable asynchronous event notification and guaranteed message delivery.
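As a rough illustration (not from the original article), the sketch below uses the Geode Java client API to connect to a cluster through a locator and read and write a region; the locator address and region name are assumptions:

```java
import org.apache.geode.cache.Region;
import org.apache.geode.cache.client.ClientCache;
import org.apache.geode.cache.client.ClientCacheFactory;
import org.apache.geode.cache.client.ClientRegionShortcut;

public class GeodeClientExample {
    public static void main(String[] args) {
        // Connect to a Geode cluster through a locator (host and port are assumed).
        ClientCache cache = new ClientCacheFactory()
                .addPoolLocator("localhost", 10334)
                .create();

        // PROXY regions keep no local state; every operation goes to the servers,
        // where the data is replicated or partitioned for availability.
        Region<String, String> quotes = cache
                .<String, String>createClientRegionFactory(ClientRegionShortcut.PROXY)
                .create("quotes"); // hypothetical region name

        quotes.put("AAPL", "172.50");           // write goes to the distributed system
        System.out.println(quotes.get("AAPL")); // consistent read back

        cache.close();
    }
}
```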
4. Apache Twill
Apache Twill submitted its code to the Apache Incubator on November 14, 2013 and announced its graduation as an Apache top-level project on July 27, 2016.
Apache Twill provides rich built-in features for developing, deploying, and managing common distributed applications, greatly simplifying operating applications on Hadoop clusters. Using YARN containers and Java threads as its abstractions, it has become a key component behind the Cask Data Application Platform (CDAP). CDAP is an open source integration and application platform that enables developers and organizations to easily build, deploy, and manage data applications on Hadoop and Spark.
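As a hedged sketch (not from the original article), the code below shows Twill's core idea: you write an ordinary thread-like class and Twill runs it inside YARN containers. It assumes a reachable YARN cluster, and the ZooKeeper connection string is a placeholder:

```java
import org.apache.hadoop.yarn.conf.YarnConfiguration;
import org.apache.twill.api.AbstractTwillRunnable;
import org.apache.twill.api.TwillController;
import org.apache.twill.yarn.YarnTwillRunnerService;

public class HelloTwill {
    // A TwillRunnable looks like a thread body but executes in a YARN container.
    public static class HelloRunnable extends AbstractTwillRunnable {
        @Override
        public void run() {
            System.out.println("Hello from a YARN container!");
        }
    }

    public static void main(String[] args) throws Exception {
        // ZooKeeper address is an assumption for this sketch.
        YarnTwillRunnerService runner =
                new YarnTwillRunnerService(new YarnConfiguration(), "zkhost:2181");
        runner.start();

        // Launch the runnable in the cluster and wait for it to finish.
        TwillController controller = runner.prepare(new HelloRunnable()).start();
        controller.awaitTerminated();
    }
}
```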
5. Apache Kudu
Apache Kudu is a data storage system developed by Cloudera. It entered the Apache Incubator on December 3, 2015 and officially graduated as an Apache top-level project on July 25, 2016.
Apache Kudu is an open source columnar storage engine built for the Hadoop ecosystem, designed to enable flexible, high-performance analytics pipelines. It supports many operations found in traditional databases, including real-time inserts, updates, and deletes. It is used by companies and organizations across many industries, including retail, online service delivery, risk management, and digital advertising; one of the better-known users is Xiaomi.
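For illustration (not from the original article), here is a minimal sketch using the Kudu Java client to insert a row in real time; the master address, table name, and column names are assumptions, and the table is assumed to already exist:

```java
import org.apache.kudu.client.Insert;
import org.apache.kudu.client.KuduClient;
import org.apache.kudu.client.KuduSession;
import org.apache.kudu.client.KuduTable;
import org.apache.kudu.client.PartialRow;

public class KuduInsertExample {
    public static void main(String[] args) throws Exception {
        // Master address is an assumption for this sketch.
        KuduClient client = new KuduClient.KuduClientBuilder("kudu-master:7051").build();
        try {
            KuduTable table = client.openTable("metrics"); // hypothetical table
            KuduSession session = client.newSession();

            // Kudu supports real-time inserts/updates/deletes on columnar storage.
            Insert insert = table.newInsert();
            PartialRow row = insert.getRow();
            row.addString("host", "web-01");
            row.addLong("ts", System.currentTimeMillis());
            row.addDouble("cpu", 0.42);
            session.apply(insert);

            session.close(); // flushes any pending operations
        } finally {
            client.shutdown();
        }
    }
}
```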
6. Apache Bahir
The code of Apache Bahir was originally part of the Apache Spark project and was later split out as an independent project; it was announced as an Apache top-level project on June 29, 2016.
Apache Bahir expands the coverage of analytics platforms by providing a variety of streaming connectors and SQL data sources. Initially it only provided extensions for Apache Spark; it now also provides extensions for Apache Flink, and may offer extensions for Apache Beam and other platforms in the future.
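As a hedged sketch (not from the original article), the code below wires Bahir's MQTT connector for Spark Streaming into a Java application; the broker URL and topic are assumptions, and Bahir's spark-streaming-mqtt artifact is assumed to be on the classpath:

```java
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.mqtt.MQTTUtils;

public class BahirMqttExample {
    public static void main(String[] args) throws InterruptedException {
        SparkConf conf = new SparkConf().setAppName("BahirMqttExample").setMaster("local[2]");
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(5));

        // Bahir's MQTT connector turns an MQTT topic into a DStream of messages.
        // Broker URL and topic below are hypothetical.
        JavaReceiverInputDStream<String> messages =
                MQTTUtils.createStream(jssc, "tcp://broker:1883", "sensors/temp");

        messages.print(); // dump incoming messages for the demo

        jssc.start();
        jssc.awaitTermination();
    }
}
```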