
How to design an algorithm? Introduction to common algorithm paradigms

青灯夜游
Release: 2020-10-22 19:23:11

How do you design an algorithm? The following article analyzes some common algorithm paradigms, with examples in JavaScript. It should serve as a useful reference; I hope you find it helpful.

First clarify three concepts:

Algorithm: The process of solving problems step by step.

Paradigm: A mode of thinking about a problem.

Algorithmic Paradigm: A general approach to building efficient solutions to problems.

This article discusses some commonly used algorithm paradigms, such as

  • Divide and Conquer Algorithm
  • Dynamic Programming
  • Greedy Algorithm
  • Backtracking Algorithm

Divide and Conquer

Among the sorting algorithms, merge sort and quick sort have one thing in common: both are divide-and-conquer algorithms.

Divide and conquer is a common algorithm design approach. The idea is to decompose the problem into smaller sub-problems that are similar to the original problem, solve the sub-problems (usually recursively), and then combine their solutions to solve the original problem.

The logic of the divide-and-conquer method can be divided into three steps (a merge sort sketch illustrating them follows this list):

  1. Divide the original problem into smaller sub-problems.
  2. Solve each sub-problem recursively; once a sub-problem is small enough, solve it directly and return its solution.
  3. Merge the solutions to the subproblems into the solution to the original problem.
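
To make the three steps concrete, here is a minimal merge sort sketch (merge sort is mentioned above but not implemented in the original article, so treat this as an illustrative assumption): the divide step splits the array in half, the recursive step sorts each half, and the merge step combines the two sorted halves.

function mergeSort(array) {
    // Base case: an array of 0 or 1 elements is already sorted.
    if (array.length <= 1) {
        return array;
    }
    // 1. Divide the problem into two smaller sub-problems.
    const mid = Math.floor(array.length / 2);
    // 2. Solve each sub-problem recursively.
    const left = mergeSort(array.slice(0, mid));
    const right = mergeSort(array.slice(mid));
    // 3. Merge the two sorted halves into the solution of the original problem.
    const merged = [];
    let i = 0;
    let j = 0;
    while (i < left.length && j < right.length) {
        merged.push(left[i] <= right[j] ? left[i++] : right[j++]);
    }
    return merged.concat(left.slice(i)).concat(right.slice(j));
}

console.log(mergeSort([5, 2, 9, 1])); // => [1, 2, 5, 9]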

Example of divide-and-conquer method: binary search

The following is a binary search implemented using divide-and-conquer.

function binarySearchRecursive(array, value, low, high) {
    if (low <= high) {
        const mid = Math.floor((low + high) / 2);
        const element = array[mid];

        if (element < value) {
            // The target can only be in the upper half.
            return binarySearchRecursive(array, value, mid + 1, high);
        } else if (element > value) {
            // The target can only be in the lower half.
            return binarySearchRecursive(array, value, low, mid - 1);
        } else {
            // Found it: return the index.
            return mid;
        }
    }
    // The search range is empty: the value is not in the array.
    return null;
}

export function binarySearch(array, value) {
    // quickSort is the quick sort referenced above; any ascending sort
    // (e.g. [...array].sort((a, b) => a - b)) works just as well.
    const sortedArray = quickSort(array);
    const low = 0;
    const high = sortedArray.length - 1;

    // Search the sorted copy, not the original (possibly unsorted) array.
    return binarySearchRecursive(sortedArray, value, low, high);
}

Please note that the binarySearch function above is the entry point intended for callers, while binarySearchRecursive is where the divide-and-conquer logic is actually implemented.
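
Assuming a quickSort function (or any ascending sort) is in scope, a quick usage check might look like this (the sample values are illustrative):

const numbers = [8, 3, 5, 1, 42, 13];
// The returned index refers to the sorted array [1, 3, 5, 8, 13, 42].
console.log(binarySearch(numbers, 5));  // => 2
console.log(binarySearch(numbers, 7));  // => null (not found)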

Dynamic Programming

Dynamic Programming is an optimization technique used to solve complex problems by breaking them into smaller sub-problems. It looks a lot like divide and conquer, but where divide and conquer decomposes a problem into independent sub-problems and then combines their solutions, dynamic programming applies when the sub-problems overlap: each sub-problem is solved once, and its result is cached and reused.

The algorithm logic is divided into three steps (a small memoized Fibonacci sketch follows this list):

  1. Define the sub-problems.
  2. Solve the sub-problems repeatedly, caching and reusing results wherever they overlap.
  3. Identify and solve the base cases.
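
As a minimal illustration of the idea (this example is not in the original article), consider Fibonacci numbers: the sub-problems fib(n - 1) and fib(n - 2) overlap heavily, so caching their results turns an exponential recursion into a linear one.

function fibonacci(n, cache = {}) {
    // Base cases: fib(0) = 0, fib(1) = 1.
    if (n <= 1) {
        return n;
    }
    // Reuse a previously solved sub-problem instead of recomputing it.
    if (cache[n] !== undefined) {
        return cache[n];
    }
    cache[n] = fibonacci(n - 1, cache) + fibonacci(n - 2, cache);
    return cache[n];
}

console.log(fibonacci(10)); // => 55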

Dynamic Programming Case: Minimum Coin Change Problem

This is a common interview question known as the coin change problem: given a set of coin denominations, find combinations of coins that add up to a given amount of change. The minimum coin change problem asks for the minimum number of coins needed to make up that amount. For example, to make change for 37 cents you can use one 2, one 5, one 10, and one 20 (2 + 5 + 10 + 20 = 37).

function minCoinChange(coins, amount) {
    // cache[value] stores the best combination already found for that value.
    const cache = [];
    const makeChange = (value) => {
        if (!value) {
            // Base case: no coins are needed to make change for 0.
            return [];
        }
        if (cache[value]) {
            // Reuse a previously solved sub-problem.
            return cache[value];
        }
        let min = [];
        let newMin;
        let newAmount;
        for (let i = 0; i < coins.length; i++) {
            const coin = coins[i];
            newAmount = value - coin;
            if (newAmount >= 0) {
                // Solve the smaller sub-problem for the remaining amount.
                newMin = makeChange(newAmount);
            }
            // Keep this combination if it is valid and uses fewer coins than the current best.
            if (newAmount >= 0 &&
                (newMin.length < min.length - 1 || !min.length) &&
                (newMin.length || !newAmount)) {
                min = [coin].concat(newMin);
            }
        }
        return (cache[value] = min);
    }
    return makeChange(amount);
}

In the code above, the coins parameter represents the available denominations ([1, 2, 5, 10, 20, 50] for RMB). To avoid recomputing the same sub-problems, results are memoized in cache. The makeChange function is implemented recursively as an inner function with access to cache.

console.log(minCoinChange([1, 2, 5, 10, 20], 37)); // => [2, 5, 10, 20]
console.log(minCoinChange([1, 3, 4], 6)) // => [3, 3]

Greedy algorithm

A greedy algorithm makes the locally optimal choice at each step in the hope of reaching a globally optimal solution. Unlike dynamic programming, it never reconsiders its earlier choices or looks at the problem as a whole. Greedy algorithms tend to be simple and intuitive, but the solution they produce may not be globally optimal.

Greedy Algorithm Case: Minimum Coin Change Problem

The coin change problem solved with dynamic programming above can also be solved with a greedy algorithm. Whether the greedy solution is optimal depends on the denominations used.

function minCoinChange(coins, amount) {
    const change = [];
    let total = 0;
    // Assumes coins is sorted in ascending order; start from the largest denomination.
    for (let i = coins.length - 1; i >= 0; i--) {
        const coin = coins[i];
        // Take as many of this coin as possible without exceeding the amount.
        while (total + coin <= amount) {
            change.push(coin);
            total += coin;
        }
    }
    return change;
}

As you can see, the greedy algorithm is much simpler than the dynamic programming solution. Let's run the same cases to see the difference between the two:

console.log(minCoinChange([1, 2, 5, 10, 20], 37)); // => [20, 10, 5, 2]
console.log(minCoinChange([1, 3, 4], 6)) // => [4, 1, 1]

The greedy algorithm gives the optimal solution to the first problem, but not to the second (the optimal solution is [3, 3]).

The greedy algorithm is simpler and faster than the dynamic programming algorithm, but the solution obtained may not be the optimal solution.

Backtracking Algorithm

The backtracking algorithm is great for finding and building a solution incrementally, step by step (a small sketch of the pattern follows these steps):

  1. Try to solve the problem one way.
  2. If it doesn't work, backtrack (undo the last choice) and repeat step 1 until you find a suitable solution.
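
To preview the pattern before the promised Sudoku article, here is a minimal backtracking sketch for a toy problem (the problem and the findSubsetSum name are illustrative assumptions, not from the original article): pick numbers from a list so that they add up to a target, undoing a choice whenever it cannot lead to a solution.

function findSubsetSum(numbers, target, start = 0, chosen = []) {
    if (target === 0) {
        // Success: the current choices form a valid solution.
        return [...chosen];
    }
    if (target < 0 || start === numbers.length) {
        // Dead end: signal the caller to backtrack.
        return null;
    }
    for (let i = start; i < numbers.length; i++) {
        // 1. Try to solve the problem one way: choose numbers[i].
        chosen.push(numbers[i]);
        const result = findSubsetSum(numbers, target - numbers[i], i + 1, chosen);
        if (result) {
            return result;
        }
        // 2. It didn't work: backtrack (undo the choice) and try the next option.
        chosen.pop();
    }
    return null;
}

console.log(findSubsetSum([3, 7, 1, 8, 4], 12)); // => [3, 1, 8]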

As for the backtracking algorithm, I will write a separate article introducing a more complex example. I haven't decided what to write yet; perhaps a Sudoku solver. If you are interested, please follow my official account!

Algorithms are never-ending. I hope this article can help you understand some common algorithm paradigms.


