The top ten programming algorithms that help programmers on the road to becoming masters

WBOY
Release: 2016-08-08 09:27:43
Algorithm 1: Quick Sort

Quick sort is a sorting algorithm developed by Tony Hoare. On average, sorting n items requires O(n log n) comparisons; in the worst case O(n²) comparisons are required, but this situation is uncommon. In fact, quick sort is often significantly faster than other O(n log n) algorithms, because its inner loop can be implemented efficiently on most architectures. Quick sort uses the divide-and-conquer strategy to divide a sequence (list) into two sub-sequences (sub-lists).

Algorithm steps:

1. Pick an element from the sequence, called the "pivot".

2. Reorder the sequence: all elements smaller than the pivot are placed before it, and all elements larger than the pivot are placed after it (equal elements can go to either side). After this partition, the pivot is in its final position. This is called the partition operation.

3. Recursively sort the sub-sequence of elements smaller than the pivot and the sub-sequence of elements larger than the pivot.

The base case of the recursion is a sequence of size zero or one, which is by definition already sorted. Although it keeps recursing, the algorithm always terminates, because each iteration moves at least one element to its final position.

Algorithm 2: Heap Sort

Heapsort is a sorting algorithm designed around the heap data structure. A heap is a structure that approximates a complete binary tree while satisfying the heap property: the key of a child node is always smaller than (or always larger than) that of its parent node. The average time complexity of heap sort is O(n log n).

Algorithm steps:

1. Create a heap H[0..n-1].

2. Swap the head of the heap (the maximum value) with its tail.

3. Reduce the size of the heap by 1 and call shift_down(0), which moves the new top element down to its correct position.

4. Repeat step 2 until the size of the heap is 1.

Algorithm 3: Merge Sort

Merge sort is an efficient sorting algorithm built on the merge operation, and a very typical application of the divide-and-conquer method.

Algorithm steps:

1. Allocate space whose size is the sum of the two sorted sequences; this space will hold the merged sequence.

2. Set two pointers, initially at the starting positions of the two already-sorted sequences.

3. Compare the elements the two pointers point to, put the smaller element into the merge space, and advance that pointer.

4. Repeat step 3 until one of the pointers reaches the end of its sequence.

5. Copy all remaining elements of the other sequence directly to the end of the merged sequence.

Algorithm 4: Binary Search

Binary search is an algorithm for finding a specific element in a sorted array. The search starts with the middle element of the array. If the middle element happens to be the target, the search ends. If the target is greater than or less than the middle element, the search continues in the half of the array that is greater than or less than the middle element, again starting from its middle element. If at some step the remaining array is empty, the target is not present. Each comparison halves the search range, so the time complexity is O(log n).

Minimal Python sketches of algorithms 1 through 4 follow below.
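To make the partition step concrete, here is a minimal Python sketch of the quick sort in Algorithm 1 (it copies sub-lists for clarity; an in-place partition is what makes the inner loop fast in practice):

```python
def quick_sort(seq):
    """Sort a list using the divide-and-conquer strategy of Algorithm 1."""
    if len(seq) <= 1:          # base case: size zero or one is already sorted
        return seq
    pivot = seq[0]             # step 1: pick a pivot
    smaller = [x for x in seq[1:] if x < pivot]    # step 2: partition
    larger  = [x for x in seq[1:] if x >= pivot]   # equal values go right here
    # step 3: recursively sort both partitions around the pivot
    return quick_sort(smaller) + [pivot] + quick_sort(larger)

print(quick_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```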
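Next, a minimal sketch of heap sort following the steps of Algorithm 2; shift_down is the sift-down adjustment named in step 3:

```python
def shift_down(h, start, end):
    """Move h[start] down until the max-heap property holds in h[start..end]."""
    root = start
    while 2 * root + 1 <= end:
        child = 2 * root + 1
        if child + 1 <= end and h[child] < h[child + 1]:
            child += 1                        # pick the larger child
        if h[root] < h[child]:
            h[root], h[child] = h[child], h[root]
            root = child
        else:
            return

def heap_sort(h):
    n = len(h)
    for start in range(n // 2 - 1, -1, -1):   # step 1: build the heap
        shift_down(h, start, n - 1)
    for end in range(n - 1, 0, -1):           # steps 2-4: swap head and tail,
        h[0], h[end] = h[end], h[0]           # shrink the heap, sift down
        shift_down(h, 0, end - 1)

data = [3, 1, 4, 1, 5, 9, 2, 6]
heap_sort(data)
print(data)  # [1, 1, 2, 3, 4, 5, 6, 9]
```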
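A minimal sketch of merge sort (Algorithm 3); the merge loop follows steps 1 through 5 directly:

```python
def merge_sort(seq):
    """Recursively split, then merge two sorted halves (Algorithm 3)."""
    if len(seq) <= 1:
        return seq
    mid = len(seq) // 2
    left, right = merge_sort(seq[:mid]), merge_sort(seq[mid:])
    merged, i, j = [], 0, 0                  # step 1: space for the result
    while i < len(left) and j < len(right):  # steps 2-4: two pointers
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])                  # step 5: copy the remainder
    merged.extend(right[j:])
    return merged

print(merge_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```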
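And a minimal sketch of binary search (Algorithm 4) on a sorted list:

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:                 # the search range is non-empty
        mid = (lo + hi) // 2
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1            # continue in the upper half
        else:
            hi = mid - 1            # continue in the lower half
    return -1                       # range became empty: not found

print(binary_search([1, 2, 3, 4, 5, 6, 9], 5))  # 4
```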
Algorithm 5: BFPRT (Linear-Time Selection)

The problem solved by the BFPRT algorithm is a classic one: selecting the k-th largest (or k-th smallest) element from a sequence of n elements. Through clever analysis, BFPRT guarantees linear time complexity even in the worst case. The idea of the algorithm is similar to that of quick sort; of course, to make the algorithm achieve O(n) even in the worst case, its five authors did some exquisite processing.

Algorithm steps:

1. Divide the n elements into ⌈n/5⌉ groups of 5.

2. Find the median of each group, using any sorting method such as insertion sort.

3. Recursively call the selection algorithm to find the median x of all the medians from the previous step (with an even number of medians, take the smaller of the two middle values).

4. Partition the array around x, and let k be the number of elements less than or equal to x, so that n - k elements are greater than x.

5. If i == k, return x; if i < k, recursively find the i-th smallest element among the elements less than x; if i > k, recursively find the (i - k)-th smallest element among the elements greater than x.

Termination condition: when n = 1, the returned element is the i-th smallest.
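Below is a minimal Python sketch of the median-of-medians selection just described (i is 1-based; it favors clarity over the in-place partitioning a production implementation would use):

```python
def select(seq, i):
    """Return the i-th smallest element of seq (1-based), worst-case O(n)."""
    if len(seq) == 1:                      # termination condition: n == 1
        return seq[0]
    # steps 1-2: groups of 5, median of each group by sorting
    groups = [sorted(seq[j:j + 5]) for j in range(0, len(seq), 5)]
    medians = [g[len(g) // 2] for g in groups]
    # step 3: median of medians, found recursively (smaller middle if even)
    x = select(medians, (len(medians) + 1) // 2)
    # step 4: partition around x
    less    = [e for e in seq if e < x]
    equal   = [e for e in seq if e == x]
    greater = [e for e in seq if e > x]
    k = len(less)
    if i <= k:                             # step 5
        return select(less, i)
    if i <= k + len(equal):
        return x
    return select(greater, i - k - len(equal))

print(select([3, 1, 4, 1, 5, 9, 2, 6], 3))  # 2
```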

Algorithm 6: DFS (Depth-First Search)

Depth-first search (DFS) is a kind of search algorithm. It traverses the nodes of a tree along the tree's depth, searching the tree's branches as deep as possible. When all edges of a node v have been explored, the search backtracks to the starting node of the edge by which v was discovered. This process continues until all nodes reachable from the source node have been discovered. If any undiscovered nodes remain, one of them is selected as a new source node and the process is repeated, until all nodes have been visited. DFS is a blind search.

Depth-first search is a classic algorithm in graph theory. It can be used to generate a topological ordering of the target graph, and with that ordering many related graph theory problems can be conveniently solved, such as the maximum path problem. A stack data structure is generally used to assist in implementing the DFS algorithm.

Depth-first traversal graph algorithm steps:

1. Visit vertex v;

2. Starting from the unvisited adjacent vertices of v, perform a depth-first traversal of the graph, until every vertex in the graph that has a path to v has been visited;

3. If there are still vertices in the graph that have not been visited at this time, start from an unvisited vertex and perform depth-first traversal again until all vertices in the graph have been visited.

The above description may be relatively abstract. For example:

After visiting a starting vertex v in the graph, DFS starts from v and visits any one of its adjacent vertices, w1; then, starting from w1, it visits a vertex w2 that is adjacent to w1 but has not yet been visited; then it starts from w2 and performs a similar visit, and so on, until it reaches a vertex u all of whose adjacent vertices have been visited.

Then it steps back to the vertex visited just before, and checks whether that vertex has other adjacent vertices that have not been visited. If so, it visits one of them and proceeds from it with a similar traversal; if not, it steps back again and tries once more. This process repeats until all vertices in the connected graph have been visited.
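As a concrete illustration of the traversal just described, here is a minimal recursive Python sketch (the adjacency-list graph is a made-up example); the implicit call stack plays the role of the explicit stack mentioned above:

```python
def dfs(graph, start, visited=None):
    """Recursive depth-first traversal; graph maps a vertex to its neighbors."""
    if visited is None:
        visited = set()
    visited.add(start)
    print(start)                          # "visit" the vertex
    for neighbor in graph[start]:         # go as deep as possible first,
        if neighbor not in visited:       # backtracking happens on return
            dfs(graph, neighbor, visited)
    return visited

graph = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
dfs(graph, 'a')   # visits a, b, d, then backtracks and visits c
```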

Algorithm 7: BFS (Breadth-First Search)

Breadth-first search (BFS) is a graph search algorithm. Simply put, BFS starts from the root node and traverses the nodes of the tree (graph) level by level along its breadth. If all nodes have been visited, the algorithm terminates. BFS is also a blind search. A queue data structure is generally used to assist in implementing the BFS algorithm.

Algorithm steps:

1. First put the root node into the queue.

2. Take the first node from the queue and check whether it is the target.

If the target is found, the search ends and the results are returned.

Otherwise, add all its direct child nodes that have not yet been checked to the queue.

3. If the queue is empty, it means the entire graph has been checked, i.e., the target is not present in the graph. End the search and return "Target not found".

4. Repeat step 2.
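A minimal Python sketch of these four steps, using a queue as described (the example graph and target are illustrative):

```python
from collections import deque

def bfs(graph, root, target):
    """Breadth-first search for target; returns True if it is reachable."""
    queue = deque([root])                 # step 1: enqueue the root
    seen = {root}
    while queue:                          # step 3: empty queue means not found
        node = queue.popleft()            # step 2: take the first node
        if node == target:
            return True
        for neighbor in graph[node]:      # enqueue unchecked neighbors
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return False

graph = {'a': ['b', 'c'], 'b': ['d'], 'c': ['d'], 'd': []}
print(bfs(graph, 'a', 'd'))  # True
```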

Algorithm 8: Dijkstra’s algorithm

Dijkstra's algorithm was proposed by the Dutch computer scientist Edsger Dijkstra. It solves the single-source shortest path problem on a directed graph with non-negative edge weights, using a breadth-first-like strategy that always expands the closest unfinished vertex, and finally produces a shortest-path tree. This algorithm is often used in routing, or as a sub-module of other graph algorithms.

The input of this algorithm is a weighted directed graph G and a source vertex S in G. Let V denote the set of all vertices in G. Every edge of the graph is an ordered pair of vertices; (u, v) means there is an edge from vertex u to vertex v. Let E denote the set of all edges in G, with edge weights given by a weight function w: E → [0, ∞]. Thus w(u, v) is the non-negative weight of the edge from vertex u to vertex v. The weight of an edge can be thought of as the distance between the two vertices, and the weight of a path between any two vertices is the sum of the weights of all edges on that path. Given vertices s and t in V, Dijkstra's algorithm finds the lowest-weight path (i.e., the shortest path) from s to t. The algorithm can also find the shortest paths from a vertex s to every other vertex in the graph. For directed graphs without negative weights, Dijkstra's algorithm is the fastest known single-source shortest path algorithm.

Algorithm steps:

1. Initially, let S = {V0} and T = {the remaining vertices}, and assign a distance value to each vertex Vi in T: if the arc <V0, Vi> exists, d(V0, Vi) is the weight on that arc; if it does not exist, d(V0, Vi) is ∞.

2. Select from T a vertex W that is not in S and whose distance value is smallest, and add it to S.

3. Modify the distance values of the remaining vertices in T: if adding W as an intermediate vertex shortens the distance value from V0 to some Vi, update that distance value.

Repeat steps 2 and 3 until all vertices are included in S.
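The steps above describe the classic array-scan formulation. The following minimal Python sketch implements the same idea, assuming an adjacency-list graph and using a binary heap as the priority queue (the heap replaces the linear scan for the closest vertex in step 2):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest distances on a non-negatively weighted digraph.

    graph maps each vertex to a list of (neighbor, weight) pairs.
    """
    dist = {source: 0}
    heap = [(0, source)]                  # priority queue keyed by distance
    while heap:
        d, u = heapq.heappop(heap)        # closest unfinished vertex (step 2)
        if d > dist.get(u, float('inf')):
            continue                      # stale entry, already improved
        for v, w in graph[u]:             # step 3: relax edges through u
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

graph = {'a': [('b', 1), ('c', 4)], 'b': [('c', 2)], 'c': []}
print(dijkstra(graph, 'a'))  # {'a': 0, 'b': 1, 'c': 3}
```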

Algorithm 9: Dynamic Programming

Dynamic programming is a method used in mathematics, computer science, and economics to solve complex problems by decomposing the original problem into relatively simple sub-problems. It is suited to problems that have overlapping sub-problems and the optimal substructure property, and a dynamic programming method often takes far less time than a naive solution.

The basic idea behind dynamic programming is very simple. Roughly speaking, to solve a given problem, we solve its different parts (the sub-problems) and then combine the sub-problem solutions into a solution to the original problem. Many of the sub-problems are often very similar, so dynamic programming attempts to solve each sub-problem only once, reducing the amount of computation: once the solution to a given sub-problem has been computed, it is memoized, so that the next time the same sub-problem is needed its result is simply looked up in the table. This approach is particularly useful when the number of repeated sub-problems grows exponentially with the size of the input. The most classic dynamic programming problem is undoubtedly the knapsack problem.

The two key properties:

1. Optimal substructure. If the optimal solution to a problem contains optimal solutions to its sub-problems, we say the problem has the optimal substructure property (i.e., it satisfies the principle of optimality). Optimal substructure provides the important clue that a dynamic programming algorithm can solve the problem.

2. Overlapping sub-problems. When a recursive algorithm solves a problem top-down, the sub-problems it generates are not always new; some sub-problems are computed repeatedly many times. A dynamic programming algorithm exploits this overlap: it computes each sub-problem only once, saves the result in a table, and simply looks it up when the same sub-problem is needed again, gaining great efficiency.
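Since the knapsack problem is named as the classic example, here is a minimal 0/1-knapsack sketch in Python (the item weights, values, and capacity are illustrative); the table dp embodies the memoization described above:

```python
def knapsack(weights, values, capacity):
    """0/1 knapsack: best total value achievable within the weight capacity.

    dp[c] holds the best value for capacity c using the items seen so far;
    each sub-problem is solved once and reused (overlapping sub-problems).
    """
    dp = [0] * (capacity + 1)
    for w, v in zip(weights, values):
        for c in range(capacity, w - 1, -1):   # descending: each item used once
            dp[c] = max(dp[c], dp[c - w] + v)  # skip the item, or take it
    return dp[capacity]

print(knapsack([2, 3, 4], [3, 4, 6], 5))  # 7: take the items of weight 2 and 3
```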
Algorithm 10: Naive Bayes Classification

The Naive Bayes classification algorithm is a simple probabilistic classification algorithm based on Bayes' theorem. The basis of Bayesian classification is probabilistic reasoning: how to complete reasoning and decision-making tasks when the existence of various conditions is uncertain and only their probabilities of occurrence are known. Probabilistic reasoning is the counterpart of deterministic reasoning. The Naive Bayes classifier rests on an independence assumption: each feature of a sample is assumed to be unrelated to its other features. Built on an accurate natural probability model, the Naive Bayes classifier can achieve very good classification results on supervised learning sample sets.

In many practical applications, Naive Bayes model parameters are estimated by maximum likelihood; in other words, a Naive Bayes model can work without adopting Bayesian probability or any Bayesian method.
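A minimal Python sketch of a Naive Bayes classifier over discrete features, with maximum-likelihood counting estimates plus add-one smoothing (the toy data set and feature meanings are invented for illustration):

```python
import math
from collections import Counter, defaultdict

def train_nb(samples):
    """samples: list of (feature_tuple, label) pairs.

    Parameters are maximum-likelihood counts; features are assumed
    conditionally independent given the label (the "naive" assumption).
    """
    labels = Counter(label for _, label in samples)
    # feature_counts[label][i][v]: how often feature i took value v under label
    feature_counts = defaultdict(lambda: defaultdict(Counter))
    for features, label in samples:
        for i, v in enumerate(features):
            feature_counts[label][i][v] += 1

    def classify(features):
        best, best_score = None, float('-inf')
        for label, n in labels.items():
            # log P(label) + sum_i log P(feature_i | label)
            score = math.log(n / len(samples))
            for i, v in enumerate(features):
                # +1/+2 is add-one smoothing, assuming binary feature values
                score += math.log((feature_counts[label][i][v] + 1) / (n + 2))
            if score > best_score:
                best, best_score = label, score
        return best

    return classify

# Toy data: (has_fever, has_cough) -> diagnosis
data = [((1, 1), 'flu'), ((1, 0), 'flu'), ((0, 1), 'cold'), ((0, 0), 'cold')]
classify = train_nb(data)
print(classify((1, 1)))  # 'flu'
```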

The above has introduced the top ten programming algorithms that help programmers on the road to becoming masters. I hope it will be helpful to interested readers.
