Alex Marcham - Understanding Infrastructure Edge Computing

A comprehensive review of the key emerging technologies that will directly impact areas of computer technology over the next five years

3.5 IPv4 and IPv6

Both IPv4 and IPv6 are examples of layer 3 protocols, and they are the most commonly encountered protocols at this layer. They provide a method for the end-to-end addressing of endpoints across the network using a globally unique address space. When each endpoint has a globally unique identifier, data can be addressed to a specific endpoint without ambiguity; this allows data to be transmitted between endpoints which reside on different networks, even at a worldwide scale.
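
As a minimal illustration of this point, the short Python sketch below uses the standard ipaddress module to show how an IPv4 and an IPv6 address each identify an endpoint unambiguously and how an address can be matched against a network; the specific addresses used are documentation/example ranges chosen for illustration, not real endpoints.

```python
# A minimal sketch using Python's standard ipaddress module to illustrate
# IPv4 and IPv6 as globally unique layer 3 addresses. The addresses below
# are documentation/example ranges, not real endpoints.
import ipaddress

# An IPv4 address (32 bits) and an IPv6 address (128 bits) for two endpoints.
endpoint_v4 = ipaddress.ip_address("192.0.2.10")    # TEST-NET-1 example range
endpoint_v6 = ipaddress.ip_address("2001:db8::10")  # IPv6 documentation range

print(endpoint_v4.version, endpoint_v4.max_prefixlen)  # 4 32
print(endpoint_v6.version, endpoint_v6.max_prefixlen)  # 6 128

# Because each address is unambiguous, data can be directed to a specific
# endpoint even when source and destination sit on different networks.
network_a = ipaddress.ip_network("192.0.2.0/24")
print(endpoint_v4 in network_a)  # True: this endpoint belongs to network_a
```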

In the context of both the internet and infrastructure edge computing, both versions of the Internet Protocol (IP), IPv4 and, to a growing extent, IPv6, are ubiquitous. Any application, endpoint, or piece of infrastructure must support these protocols; no real competitor currently exists, and none is likely to emerge for some time, because the ubiquity of IPv4 and IPv6 has driven their integration into billions of devices and applications across the world. In addition, many of the issues with these protocols have been tempered by the industry using various means, so few see a pressing need to replace them.

IPv6 adoption, although still behind its earlier cousin IPv4 today, is growing across the world; the volume of global internet traffic carried over IPv6, measured on a daily basis, is expected to reach parity with and then exceed that carried over IPv4. One of the growth areas for IPv6 is expected to be the widespread deployment of city-scale IoT, where potentially millions of devices must be able to connect with remote applications operating in other networks, requiring each of these devices to have a globally unique IP address. This need, combined with the global exhaustion of the IPv4 address space, looks set to drive the future adoption of IPv6, although IPv4 address conservation mechanisms such as network address translation (NAT) remain in use and will continue to be for many years ahead.
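
To make the scale argument concrete, the sketch below compares the raw sizes of the two address spaces and hints at how NAT conserves IPv4 addresses by sharing them. The figures are simple powers of two rather than counts of usable addresses, and the devices-per-NAT figure is a hypothetical assumption for illustration only.

```python
# A minimal sketch of the arithmetic behind IPv4 exhaustion. The address
# space sizes are raw powers of two, not counts of usable addresses.
ipv4_space = 2 ** 32    # roughly 4.3 billion addresses in total
ipv6_space = 2 ** 128   # roughly 3.4 * 10**38 addresses

print(f"IPv4 address space: {ipv4_space:,}")
print(f"IPv6 address space: {ipv6_space:.3e}")

# With millions of city-scale IoT devices each needing a globally unique
# address, IPv4 alone cannot scale. NAT lets many devices share one public
# IPv4 address, conserving space at the cost of true end-to-end addressing.
devices_per_public_ipv4 = 1000  # hypothetical figure, for illustration only
print(f"Devices sharing one NATed IPv4 address: {devices_per_public_ipv4:,} "
      "(address sharing, not unique global addressing)")
```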

3.6 Routing and Switching

Both routing and switching are vast topics, each with a significant history and many unique intricacies. The focus of this book is not on either of these fields, but both are closely related to any discussion of network design and operation. This section therefore describes some of the key points of routing and switching that are relevant to the design and operation of modern networks, so that it can be referred to in later chapters; many of the same core principles apply to the new networks being designed, deployed, and operated to support infrastructure edge computing as well.

3.6.1 Routing

Routing is the process by which a series of network endpoints use layer 3 information, as well as other characteristics of the data in transit and of the network itself, to deliver data from its source to its destination. There are two primary approaches to performing this process.

One approach is referred to as hop-by-hop routing. Using this methodology, the onus for directing data in transit onto the optimal path towards its destination is placed on each router in the path; a router is an endpoint which makes a routing decision, based on layer 3 data and other knowledge of the network, in order to determine where to send data in transit. Each of these routers uses its own local knowledge of the state of the network to make its routing decisions.
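
A minimal sketch of a single router's hop-by-hop decision might look like the following, where the local knowledge is modelled as a routing table and the most specific matching prefix wins; the prefixes and next-hop names are illustrative assumptions rather than anything prescribed by a particular protocol.

```python
# A minimal sketch of one router's local, hop-by-hop forwarding decision
# using longest-prefix match over its own routing table. Prefixes and
# next-hop names are illustrative assumptions.
import ipaddress

# This router's local view of the network: prefix -> next hop.
routing_table = {
    ipaddress.ip_network("10.0.0.0/8"):   "router-b",
    ipaddress.ip_network("10.1.0.0/16"):  "router-c",
    ipaddress.ip_network("192.0.2.0/24"): "router-d",
}

def next_hop(destination):
    """Pick the most specific (longest) matching prefix this router knows."""
    dest = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if dest in net]
    if not matches:
        return None  # no specific route known; see the default route discussion below
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(next_hop("10.1.2.3"))     # router-c: the more specific /16 wins over the /8
print(next_hop("203.0.113.9"))  # None: this router knows no route to that prefix
```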

Another approach is resource reservation, which aims to reserve a specific path through the network for data in transit. Although this approach may seem preferable, and is in some cases, historically it has been challenging to implement: resource reservation across a network requires additional state to be maintained for each traffic flow at each network endpoint in the path from source to destination, to ensure that the resource allocation is operating as expected. In cases where the entire network path between the source and destination is under the control of a single network operator, this methodology is more likely to be successful; a resource reservation scheme is easier to implement here because the network operator can be aware of all of the resources available on the path, compared to a path which involves multiple network operators who may not provide that level of transparency or may not wish to allocate available resources to the traffic.
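
To show why resource reservation implies extra per-flow state at every endpoint on the path, the following sketch reserves capacity hop by hop and rolls back if any node cannot accommodate the flow; node names, capacities, and flow identifiers are illustrative assumptions, not a model of any specific reservation protocol.

```python
# A minimal sketch of resource reservation along a fixed path: every node on
# the path must hold per-flow state, which is part of what makes this approach
# harder to operate across networks run by different operators.

class PathNode:
    def __init__(self, name, capacity_mbps):
        self.name = name
        self.capacity_mbps = capacity_mbps
        self.reservations = {}  # flow_id -> reserved Mbps (per-flow state)

    def reserve(self, flow_id, mbps):
        used = sum(self.reservations.values())
        if used + mbps > self.capacity_mbps:
            return False
        self.reservations[flow_id] = mbps
        return True

def reserve_path(nodes, flow_id, mbps):
    """Reserve capacity on every node of the path, or roll back on failure."""
    committed = []
    for node in nodes:
        if not node.reserve(flow_id, mbps):
            for done in committed:  # release any partially committed state
                del done.reservations[flow_id]
            return False
        committed.append(node)
    return True

path = [PathNode("edge-a", 100), PathNode("core-1", 1000), PathNode("edge-b", 100)]
print(reserve_path(path, flow_id="flow-42", mbps=80))  # True: all nodes accept
print(reserve_path(path, flow_id="flow-43", mbps=50))  # False: edge nodes are full
```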

Both of these approaches seek to achieve best path routing, where traffic is sent from its source to its destination using the combination of network endpoints and links that results in the optimal balance of resource usage, cost, and performance. If both approaches have the same aim, why are there two approaches to begin with? First, the definition of what makes a particular path from source to destination the best path is not always as simple as the lowest number of hops or the lowest latency links in the network; once factors such as cost are introduced, business logic and related considerations begin to influence the routing process, which is where resource reservation becomes more favourable in many cases. Second, there is a trade-off between a single system which oversees the network and can identify and reserve specific paths, providing enhanced functionality or performance, and the hop-by-hop routing approach, which requires no such system. Across a large network such as the internet, it is not uncommon for traffic to pass through a number of networks, many of which use hop-by-hop routing alongside others which use resource reservation internally.
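
The dependence of "best path" on the chosen metric can be seen in a few lines: the same set of candidate paths produces three different winners depending on whether hops, latency, or cost is optimised. The paths, latencies, and per-gigabyte costs below are invented purely for illustration.

```python
# A minimal sketch showing why "best path" depends on the chosen metric.
# The candidate paths and their figures are illustrative assumptions.
candidate_paths = [
    {"name": "path-1", "hops": 3, "latency_ms": 40, "cost_per_gb": 0.05},
    {"name": "path-2", "hops": 5, "latency_ms": 12, "cost_per_gb": 0.09},
    {"name": "path-3", "hops": 4, "latency_ms": 25, "cost_per_gb": 0.02},
]

best_by_hops    = min(candidate_paths, key=lambda p: p["hops"])
best_by_latency = min(candidate_paths, key=lambda p: p["latency_ms"])
best_by_cost    = min(candidate_paths, key=lambda p: p["cost_per_gb"])

print(best_by_hops["name"])     # path-1
print(best_by_latency["name"])  # path-2
print(best_by_cost["name"])     # path-3: three metrics, three different "best" paths
```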

One consideration is what happens when a router receives traffic for which it does not know a specific route towards the destination. In this case, a router will typically have a default route configured. This is a catch-all route for destination networks that the router is unaware of, which often directs the traffic back to another router within the network in the hope that a route will be found for the traffic. The alternative is for the traffic to be dropped and a message sent back to the source of the traffic indicating this. Should a situation arise where two routers are each other's default route, traffic will not bounce between them forever; both IPv4 and IPv6 feature a time to live (TTL) field in their packet headers (called the hop limit in IPv6), which results in the traffic being discarded if it becomes stuck in such a routing loop, protecting the network against unnecessary congestion caused by routing misconfiguration or any temporary conditions.
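
A minimal sketch of how the TTL (hop limit in IPv6) breaks such a loop is shown below: two routers that are each other's default route bounce the packet back and forth until the counter reaches zero and the packet is discarded. The starting TTL value and router names are arbitrary assumptions.

```python
# A minimal sketch of TTL-based loop protection: each router decrements the
# TTL by one, and a packet whose TTL reaches zero is discarded rather than
# circulating forever.
def forward(packet, router_name):
    packet["ttl"] -= 1
    if packet["ttl"] <= 0:
        print(f"{router_name}: TTL expired, packet discarded")
        return False
    return True

packet = {"dst": "203.0.113.9", "ttl": 8}  # illustrative starting TTL
routers = ["router-a", "router-b"]         # each is the other's default route

hop = 0
while forward(packet, routers[hop % 2]):
    hop += 1
print(f"Loop broken after {hop + 1} hops")
```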

3.6.2 Routing Protocols

The majority of network and internet routing is performed using the hop-by-hop approach today. As this approach relies on each router using its own knowledge of the network to make routing decisions, it stands to reason that each router must have a means by which to generate its own map of the network so that it can make the optimal routing decision for a given piece of data. Routing protocols are used to allow a router to generate this network map. Using a routing protocol, routers exchange information between themselves across the network, including the state of their local links and the locations of any IP address ranges that they are aware of. These pieces of data, combined with the cost metrics and best path calculations that the particular routing protocol in use provides, are then used by each router to generate its own picture of what the network looks like from its perspective. Once each router in the network has generated this picture or map, hop-by-hop routing can be performed, with each router using its map to route data to its destination along the best route of which it is aware.
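
As a sketch of what this map enables, the following code runs a Dijkstra-style best path calculation, of the kind used by link-state routing protocols, over a small invented topology as seen from one router; the router names and link costs are illustrative assumptions, and the exchange of link-state information itself is not modelled.

```python
# A minimal sketch of a best path calculation over one router's map of the
# network, built from exchanged link-state information. Topology and link
# costs are illustrative assumptions.
import heapq

# Link-state map as seen from one router: node -> {neighbour: link cost}.
network_map = {
    "r1": {"r2": 1, "r3": 4},
    "r2": {"r1": 1, "r3": 2, "r4": 7},
    "r3": {"r1": 4, "r2": 2, "r4": 3},
    "r4": {"r2": 7, "r3": 3},
}

def best_paths(source):
    """Compute the lowest total cost from source to every other router."""
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        cost, node = heapq.heappop(queue)
        if cost > dist.get(node, float("inf")):
            continue  # stale queue entry; a cheaper path was already found
        for neighbour, link_cost in network_map[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbour, float("inf")):
                dist[neighbour] = new_cost
                heapq.heappush(queue, (new_cost, neighbour))
    return dist

print(best_paths("r1"))  # {'r1': 0, 'r2': 1, 'r3': 3, 'r4': 6}
```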

Routing protocols can be organised into two categories: Interior Gateway Protocols (IGPs) and Exterior Gateway Protocols (EGPs). The former are concerned with routing data in transit using layer 3 information within a single network. In this context, a network is defined as the administrative domain of a single network operator, even if the networks within that domain consist of multiple segments of layer 3 devices. In comparison, EGPs provide the means to route data between the networks of different network operators. Whether a network is internal or external is typically not a major technical distinction; rather it is one of administration, as the network operators agree to establish what is referred to as a peering between their networks by means of their EGP of choice to route data between them. A peering is the combination of an agreement at the business level between two network operators to accept traffic from and send traffic to each other's networks as peers, and the establishment of a peering session using their agreed EGP.
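
A highly simplified sketch of this administrative split is shown below: each operator knows its own prefixes internally (the IGP's job), and a peering simply exchanges reachability information for those prefixes between the two operators (the EGP's job). The operator names and prefixes are illustrative assumptions, and the sketch does not model any real EGP.

```python
# A minimal sketch of the IGP/EGP split: routing inside one operator's domain
# versus exchanging reachability across a peering between two operators.
# Operator names and prefixes are illustrative assumptions.

class OperatorNetwork:
    def __init__(self, name, own_prefixes):
        self.name = name
        self.own_prefixes = set(own_prefixes)  # reachable internally via the IGP
        self.external_routes = {}              # prefix -> peer, learned via the EGP

    def peer_with(self, other):
        """Establish a peering: each side advertises its own prefixes to the other."""
        for prefix in other.own_prefixes:
            self.external_routes[prefix] = other.name
        for prefix in self.own_prefixes:
            other.external_routes[prefix] = self.name

operator_a = OperatorNetwork("operator-a", ["192.0.2.0/24"])
operator_b = OperatorNetwork("operator-b", ["198.51.100.0/24"])

operator_a.peer_with(operator_b)
print(operator_a.external_routes)  # {'198.51.100.0/24': 'operator-b'}
print(operator_b.external_routes)  # {'192.0.2.0/24': 'operator-a'}
```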
