John Paul Mueller - Algorithms For Dummies

Representing the problem as a space

A problem space is an environment in which a search for a solution takes place. A set of states and the operators used to change those states represent the problem space. For example, consider a tile game that has eight tiles in a 3 x 3 frame. Each tile shows one part of a picture, and the tiles start in some random order so that the picture is scrambled. The goal is to move one tile at a time to place all the tiles in the right order and reveal the picture. You can see an example of this sort of puzzle at https://www.proprofsgames.com/puzzle/sliding/.

The combination of the start state, the randomized tiles, and the goal state (the tiles in a particular order) is the problem instance. You could represent the puzzle graphically using a problem space graph. Each node of the problem space graph represents a state (the eight tiles in a particular position). The edges represent operations, such as moving tile number eight up. When you move tile eight up, the picture changes; it moves to another state.

Winning the game by moving from the start state to the goal state isn’t the only consideration. To solve the game efficiently, you need to perform the task in the fewest moves possible, which means using the smallest number of operations. The minimum number of moves used to solve the puzzle is the problem depth.
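To make the representation concrete, the following Python sketch encodes one possible state representation for the eight-tile puzzle and an operator that generates every state reachable in a single move. The tuple encoding, the use of 0 for the empty square, and names such as successors are illustrative assumptions, not taken from the book.

# A minimal sketch of the 8-puzzle problem space described above.
# The state encoding (a tuple of nine values, 0 for the blank) and the
# helper names are illustrative choices, not taken from the book.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)  # goal state: tiles in order, blank last

def successors(state):
    """Apply every legal operator (slide a tile into the blank) to a state."""
    blank = state.index(0)
    row, col = divmod(blank, 3)
    children = []
    for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # up, down, left, right
        r, c = row + dr, col + dc
        if 0 <= r < 3 and 0 <= c < 3:
            swap = r * 3 + c
            child = list(state)
            child[blank], child[swap] = child[swap], child[blank]
            children.append(tuple(child))
    return children

start = (1, 2, 3, 4, 5, 6, 0, 7, 8)  # a scrambled start state
print(successors(start))             # the states one edge away in the graph

Every tuple the function returns is a node in the problem space graph, and each swap it performs corresponds to one edge.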

You must consider several factors when representing a problem as a space. For example, you must consider the maximum number of nodes that will fit in memory and whether the number of nodes that memory can support matches the expected number of nodes necessary to solve the problem, which represents the space complexity. When you can’t fit all the nodes in memory at one time, the computer must generate them only when necessary and then discard the previous nodes to free memory, or store some nodes in other locations. To determine whether the nodes will fit in memory, you must also consider time complexity, because longer algorithm runs determine the maximum number of nodes created to solve the problem. In addition, it’s important to consider the branching factor, which is the average number of nodes created at each step in the problem space graph to solve a problem. For the same solution, an algorithm with a higher branching factor generates more nodes than one with a lower branching factor.
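As a rough illustration of how branching factor and depth interact with memory, the short sketch below counts the nodes in a complete search tree and converts that count into an approximate memory footprint. The branching factor of 3, depth of 10, and 100 bytes per node are assumptions chosen only for the example.

# Rough estimate of how branching factor and depth drive memory use.
# The per-node size of 100 bytes is an illustrative assumption.

def nodes_generated(branching_factor, depth):
    """Total nodes in a complete search tree of the given depth."""
    return sum(branching_factor ** level for level in range(depth + 1))

b, d = 3, 10                      # branching factor 3, problem depth 10
total = nodes_generated(b, d)
print(total)                      # 88573 nodes
print(total * 100 / 1e6, "MB")    # about 8.9 MB at ~100 bytes per node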

Going random and being blessed by luck

Solving a search problem using brute-force techniques (described in “Avoiding brute-force solutions,” earlier in this chapter) is possible. The advantage of this approach is that you don’t need any domain-specific knowledge to use one of these algorithms. A brute-force algorithm tends to use the simplest possible approach to solving the problem. The disadvantage is that a brute-force approach works well only for a small number of nodes. Here are some of the common brute-force search algorithms (a breadth-first search sketch appears after the list):

Breadth-first search
Description: Begins at the root node, explores each of the child nodes first, and then moves down to the next level. It progresses level by level until it finds a solution.
Cons: Must store every node in memory, which means that it uses a considerable amount of memory for a large number of nodes.
Pros: Can check for duplicate nodes to save time and always comes up with a solution.

Depth-first search
Description: Begins at the root node and explores a set of connected child nodes until it reaches a leaf node. It progresses branch by branch until it finds a solution.
Cons: Can’t check for duplicate nodes, which means that it might traverse the same node paths more than once.
Pros: It’s memory efficient.

Bidirectional search
Description: Searches simultaneously from the root node and the goal node until the two search paths meet in the middle.
Cons: Complexity of implementation, translating into a longer development cycle.
Pros: It’s time efficient, uses memory more efficiently than other approaches, and always finds a solution.
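The following sketch shows breadth-first search in its simplest form, working over a small hand-built graph. The graph, the node names, and the path-based queue are illustrative choices rather than anything prescribed by the book.

# A minimal breadth-first search sketch over an explicit graph.
# The graph, node names, and goal are illustrative, not from the book.
from collections import deque

def breadth_first_search(graph, start, goal):
    """Explore level by level, remembering visited nodes to skip duplicates."""
    frontier = deque([[start]])       # queue of paths, not just nodes
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path               # first path found has the fewest edges
        for child in graph.get(node, []):
            if child not in visited:  # duplicate check saves repeated work
                visited.add(child)
                frontier.append(path + [child])
    return None                       # no path exists

graph = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D', 'E'], 'D': ['F'], 'E': ['F']}
print(breadth_first_search(graph, 'A', 'F'))   # ['A', 'B', 'D', 'F']

Because the search explores one level completely before moving to the next and skips nodes it has already visited, the first path it returns to the goal is also one with the fewest edges.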

Using a heuristic and a cost function

For some people, the word heuristic just sounds complicated. It would be just as easy to say that the algorithm makes an educated guess and then tries again when it fails. Unlike brute-force methods, heuristic algorithms learn by iteratively trying to improve the solution over time. They also use cost functions to make better choices. Consequently, heuristic algorithms are more complex, but they have a distinct advantage in solving complex problems. As with brute-force algorithms, there are many heuristic algorithms, and each comes with its own set of advantages, disadvantages, and special requirements. The following list describes a few of the most common heuristic algorithms:

Pure heuristic search: Expands nodes in order of their cost. It maintains two lists. The closed list contains the nodes it has already explored; the open list contains the nodes it must yet explore. In each iteration, the algorithm expands the node with the lowest possible cost. All its child nodes are placed in the closed list and the individual child node costs are calculated. The algorithm sends the child nodes with a low cost back to the open list and deletes the child nodes with a high cost.

A* search: Tracks the cost of nodes as it explores them (choosing the least expensive ones) using this equation: f(n) = g(n) + h(n), where n is the node identifier, g(n) is the cost of reaching the node so far, h(n) is the estimated cost to reach the goal from the node, and f(n) is the estimated total cost of the path through n to the goal. (See the sketch that follows this list.)

Greedy best-first search: Chooses the path that is closest to the goal using the equation f(n) = h(n). It can find solutions quite quickly, but it can also get stuck in loops, so many people don't consider it an optimal approach to finding a solution.
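To show how the f(n) = g(n) + h(n) bookkeeping works in practice, here is a minimal A* sketch over a small weighted graph. The graph, the heuristic values, and helper names such as best_g are assumptions made for this example, not material from the book.

# A minimal A* sketch: expand the node with the lowest f(n) = g(n) + h(n).
# The weighted graph and the heuristic estimates are illustrative.
import heapq

def a_star(graph, heuristic, start, goal):
    """Expand the open-list node with the lowest f(n) until the goal is reached."""
    open_list = [(heuristic[start], 0, start, [start])]   # (f, g, node, path)
    best_g = {start: 0}                                   # cheapest g(n) seen so far
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path, g
        for child, step_cost in graph.get(node, {}).items():
            new_g = g + step_cost
            if new_g < best_g.get(child, float('inf')):
                best_g[child] = new_g
                new_f = new_g + heuristic[child]          # f(n) = g(n) + h(n)
                heapq.heappush(open_list, (new_f, new_g, child, path + [child]))
    return None, float('inf')

graph = {'A': {'B': 1, 'C': 4}, 'B': {'C': 1, 'D': 5}, 'C': {'D': 3}, 'D': {}}
heuristic = {'A': 4, 'B': 3, 'C': 2, 'D': 0}              # admissible estimates
print(a_star(graph, heuristic, 'A', 'D'))                 # (['A', 'B', 'C', 'D'], 5)

The open list acts as a priority queue ordered by f(n), so the node expanded next is always the one that currently looks cheapest overall.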

Evaluating Algorithms

Gaining insights into precisely how algorithms work is important because otherwise you can’t determine whether an algorithm actually performs in the way you need it to. Also, without good measurements, you can’t perform accurate comparisons to know whether you really do need to discover a new method of solving a problem when an older solution works too slowly or uses too many resources. Knowing which basis to use when comparing different solutions, and deciding between them, is an essential skill when dealing with algorithms.

The issue of efficiency has been part of discovering and designing new algorithms since the concept of algorithms first came into being, which is why you see so many different algorithms competing to solve the same problem. The concept of measuring the size of the functions within an algorithm and analyzing how the algorithm works isn’t new; both Ada Lovelace and Charles Babbage considered the problems of algorithm efficiency in reference to computers as early as 1843 (see https://www.computerhistory.org/babbage/adalovelace/).

Donald Knuth (https://www-cs-faculty.stanford.edu/~knuth/), computer scientist, mathematician, professor emeritus at Stanford University, and author of the milestone, multivolume book The Art of Computer Programming (Addison-Wesley), devoted much of his research and studies to comparing algorithms. He strove to formalize how to estimate the resource needs of algorithms in a mathematical way and to allow a correct comparison between alternative solutions. He coined the term analysis of algorithms, which is the branch of computer science devoted to understanding how algorithms work in a formal way. The analysis measures resources required in terms of the number of operations an algorithm requires to reach a solution or by its occupied space (such as the storage an algorithm requires in computer memory).
