Mike Bursell - Trust in Computer Systems and the Cloud

Learn to analyze and measure risk by exploring the nature of trust and its application to cybersecurity 
Trust in Computer Systems and the Cloud demonstrates the importance of understanding and quantifying risk, and draws on the social and computer sciences to explain hardware and software security, complex systems, and open source communities. It takes a detailed look at the impact of Confidential Computing on security, trust, and risk, and also describes the emerging concept of trust domains, which provide an alternative to standard layered security.

The book covers:
- Foundational definitions of trust from sociology and other social sciences, how they evolved, and what modern concepts of trust mean to computer professionals
- A comprehensive examination of the importance of systems, from open-source communities to HSMs, TPMs, and Confidential Computing with TEEs
- A thorough exploration of trust domains, including explorations of communities of practice, the centralization of control and policies, and monitoring

Perfect for security architects at the CISSP level or higher, Trust in Computer Systems and the Cloud is also an indispensable addition to the libraries of system architects, security system engineers, and master's students in software architecture and security.

Trust in social interactions is “the willingness to be vulnerable based on positive expectation about the behaviour of others”. 2 Cheshire notes that Baier's definition 3 “depends on the possibility of betrayal by another person”.

For Hardin, when considering interpersonal trust, “my trust in you is encapsulated in your interest in fulfilling the trust”. 4 Cheshire distinguishes trustworthiness from trust and discusses how risk-taking can act as a signal that one party considers another trustworthy. 5 Dasgupta 6 has seven starting points for establishing trust, of which three are related directly to punishment, one to choice, one to perspective, one to context, and one to monitoring.

All of these examples may be helpful when considering human-to-human trust relationships—though even there, they generally seem a little vague in terms of definition—but if we are to consider trust relationships involving computers and system-based entities, they are all insufficient, basically because all of them relate to human emotions, intentions, or objectives. Applying questions around emotions to, say, a mobile phone's connection to a social media site is clearly not a sensible endeavour, though we will examine later how intention and objectives may have some relevance in discussions about trust within the computer-to-computer realm.

One further definition that deserves examination is offered by Diego Gambetta, the source of our original trust definition. We will spend a little time on this as it will set up some interesting issues to which we will return at some length later in the book. Gambetta proposes the following definition:

trust (or, symmetrically, distrust) is a particular level of the subjective probability with which an agent assesses that another agent or group of agents will perform a particular action, both before he [sic] can monitor such action (or independently of his capacity ever to be able to monitor it) and in a context in which it affects his own action.7

There are some interesting points here. First, Gambetta discusses agents, though the usage is somewhat different to that which we employed in Chapter 1. We used agents to describe an entity acting for another entity, whereas he is using a different definition, where an agent is an actor that takes an active role in an interaction. Confusingly, the usage within computing sometimes falls between these two definitions. A software agent is considered to have the ability to act autonomously in a particular situation—the term autonomous agent is sometimes used equivalently—but that is not necessarily the same as acting as a person or an organisation. However, in the absence of artificial general intelligence (AGI), it would seem that software agents must be acting on behalf of humans or human organisations even if the intention is to “set them free” to act autonomously or even learn behaviour on their own.

The second important point that Gambetta makes is that a trust relationship—he is specifically discussing human trust relationships—is partly defined by expectations before any actions are performed. This resonates closely with the points we made earlier about the importance of collecting information to allow us to form assurances. His third point is related to the second, in that he discusses the possible inability of the trustor to monitor the actions in which they are interested. Given such a lack of assuring information, the ability to evaluate the likelihood of trust is based on the same data: that presented beforehand.

For his fourth point, however, Gambetta also identifies that there are contexts in which actions can be monitored, though he seems to tie such actions to actions the trustor will take. This seems too restrictive on the trustor, as there may be actions taken by the trustee that do not lead to corresponding actions by the trustor—unless the very lack of such actions is considered action in itself. More important, however, is the implicit assumption (from the negative explicit in the previous statement) that monitoring should take place.

The Role of Monitoring and Reporting in Creating Trust

This assumption about monitoring should not be glossed over. Monitoring is important because without it, there is no way for us to check or update a trust relationship. Without some sort of feedback mechanism to allow us to monitor the actions being taken by the trustee, any trust relationship that we have created to the trustee can only be based on our original expectations. It is difficult to feel that we have modelled a trust relationship well if there is no way to verify or validate the assurances we have, so monitoring definitely has a role to play.
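The feedback loop described here, in which monitored actions allow us to check or update a trust relationship rather than relying solely on our original expectations, can be illustrated with a small sketch. The Beta-distribution update below is a common way of modelling trust as a subjective probability revised by observed outcomes; the class, the prior, and the numbers are my own illustration, not a mechanism the book prescribes.

```python
# Illustrative sketch: trust as a subjective probability updated by monitoring.
# A Beta(alpha, beta) model: alpha counts observed fulfilments of the trust,
# beta counts observed betrayals. All names and values are hypothetical.

class TrustRelationship:
    def __init__(self, prior_fulfilments: int = 1, prior_betrayals: int = 1):
        # Beta(1, 1) is a uniform prior: no assuring information either way yet.
        self.alpha = prior_fulfilments
        self.beta = prior_betrayals

    def observe(self, fulfilled: bool) -> None:
        # Each monitored action feeds back into the relationship.
        if fulfilled:
            self.alpha += 1
        else:
            self.beta += 1

    @property
    def expected_trust(self) -> float:
        # Expected probability that the trustee will perform the action.
        return self.alpha / (self.alpha + self.beta)

bank = TrustRelationship()
for outcome in [True, True, True, False, True]:
    bank.observe(outcome)
print(round(bank.expected_trust, 2))  # 5/7, rounds to 0.71
```

Without the `observe` calls, the relationship never moves from its prior: exactly the situation described above, where a trust relationship with no monitoring channel can only be based on the expectations formed beforehand.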

One difference that we will encounter when we start examining trust relationships to computer systems, however, is that the opportunities for direct sensory monitoring of actions are likely to be more limited than in human-to-human trust relationships. When monitoring human actions, they are often readily apparent, but the same is not true for many computer-performed actions. If I request via a web browser that a banking application transfer funds between one account and another, the only visible effect I am likely to see is an acknowledgement on the screen. Until I get to the point of trying to spend or withdraw that money, 8 I realistically have no way to be assured that the transaction has taken place. It is as if I have a trust relationship with somebody around the corner of a street, out of view, that they will raise a red flag at the stroke of noon; and I have a friend standing on the corner who will watch the person and tell me when and if they raise the flag. I may be happy with this arrangement, but only because I have a trust relationship to the friend: that friend is acting as a trusted channel for information.

The word friend was chosen carefully because a trust relationship is already implicit in the set of interactions that we usually associate with someone described as a friend. The same is not true for the word somebody, which I used to denote the person who was to raise the flag. The situation as described is likely to make our minds presume that there is a fairly high probability that the trust relationship I have to the friend is sufficient to assure me that they will pass the information correctly. But what if my friend standing on the corner is actually a business partner of the flag-waver? Given our human understanding of the trust relationships typically involved with business partnerships, we may immediately begin to assume that my friend's motivations in respect to correct reporting are not neutral.

The example of the flag was chosen as a simple one: it is binary (either performed or not performed), and the activity was not imbued with any further significance. If, however, we give or associate our example with some value—say, a signal that money should be transferred into my bank account from the flag-waver and their business partner's bank account—it is clear that a lot more is at stake. If, for example, my friend chooses to favour the business relationship over our friendship, my friend might collude with the flag-waver and tell me that they raised the red flag even when they did not. Or, my friend might tell me the flag-waver has not raised the flag when they have, colluding with a third party with the intention of defrauding both me and the third party by somehow accessing the funds in my bank account that I do not believe to have been transferred.

The channels for reporting on actions—i.e., monitoring them—are vitally important within trust relationships. It is both easy and dangerous to fall into the trap of assuming they are neutral, with the only important relationship being between me and the acting party. In reality, the trust relationship that I have to a set of channels is key to maintaining the trust relationships that I have to the main actor that I am monitoring—who or what we could call the primary trustee. In trust relationships involving computer systems, there are often multiple entities or components involved in actions, and these form a chain of trust where each link depends on the other: the chain is typically only as strong as the weakest of its links.
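If each link in such a chain is modelled as an independently assessed probability that the component acts or reports correctly, the figure for the chain as a whole can never exceed that of its weakest link. The sketch below is purely illustrative—the independence assumption, the component names, and the numbers are mine, not the book's:

```python
from math import prod

# Illustrative sketch: a chain of trust, one probability per link that the
# component behaves as expected. Names and values are hypothetical.
links = {
    "browser": 0.99,
    "banking_application": 0.95,
    "reporting_channel": 0.90,  # the "friend on the corner"
}

# If failures are independent, the chain holds only when every link holds,
# so the overall figure is the product of the link probabilities...
chain_trust = prod(links.values())

# ...which is always bounded above by the weakest link.
weakest = min(links.values())
assert chain_trust <= weakest

print(f"chain: {chain_trust:.3f}, weakest link: {weakest:.2f}")
# prints: chain: 0.846, weakest link: 0.90
```

The assertion makes the "weakest link" point concrete: since every probability is at most 1, multiplying in further links can only lower the overall figure, never raise it above the least-trusted component.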
