The causal chain leading to a smoking-related disease in this scenario would look like the following: social interaction among peers leads to initial smoking; continued over time, this results in regular smoking and addiction to cigarettes, which in turn carries a high probability of eventually producing health problems. While perhaps not all smokers begin smoking with someone else’s assistance, it appears that almost all do. Moreover, even when smoking is self-taught, the novice smoker confirms the practice in the company of other smokers (Haines et al. 2009). Growing up in a household where one or both parents smoke, having a spouse who smokes, and regularly socializing with smokers are other social situations promoting smoking. In practically all cases, smoking is behavior initially acquired in the company of other people (de la Haye et al. 2019; Thomeer et al. 2019). The origin of this causal chain is social. Removing the social element breaks the chain and prevents the disease process from occurring.
Since smoking typically begins in social networks, it is logical that such networks can also curtail its use. This possibility was considered by Nicholas Christakis and James Fowler (2008), who investigated smoking patterns in densely interconnected social networks in the Framingham, Massachusetts, Heart Study. They found that whole clusters of closely connected people had stopped smoking more or less together. This was due to collective pressures from their network, coming mainly from spouses, siblings, other family members, and co-workers who were close friends. Because smoking is a social and therefore shared behavior, Christakis and Fowler determined that smokers were more likely to quit when they ran out of people with whom they could easily smoke. They concluded “that decisions to quit smoking are not made solely by isolated individuals, but rather they reflect choices made by groups of people connected to each other both directly and indirectly” (Christakis and Fowler 2008: 2256). Those who remained smokers were pushed to the periphery of the networks as the networks themselves became increasingly separated into smokers and non-smokers. Other research subsequently confirmed the decisiveness of friendship networks in the smoking decisions of a diverse, nationwide sample of adolescents (de la Haye et al. 2019). When it comes to smoking, the social is clearly causal.
A new crisis involving another form of smoking emerged in 2019 in the US as hospitalizations from lung injury and deaths from vaping, or smoking e-cigarettes, were increasingly reported. E-cigarettes work by heating a liquid that produces an aerosol, which is inhaled into the lungs. As this book goes to press, 2,807 people had been hospitalized in all 50 states and the District of Columbia; 68 had died. The common factor was vitamin E acetate in e-cigarettes containing tetrahydrocannabinol (THC), a derivative of marijuana that produces a “high.” Vitamin E acetate is used to dilute THC oil, thereby requiring less of it and increasing profits. However, when heated, the vitamin burns the lining of the lungs. E-cigarettes were originally developed to help people quit smoking tobacco products but attracted large numbers of adolescents and young adults when “fruit” flavors were introduced. It is this segment of the population in which lung damage from vaping is most prevalent.
Smokers typically have less healthy lifestyles across many related behaviors, such as poorer diets, less regular exercise, and more problem drinking (Burdette et al. 2017; Cockerham 2005; Edwards et al. 2006; Lawrence 2017; Lawrence, Mollborn, and Hummer 2017). This is in addition to the powerful influence of other social variables like class and gender that shape health-related behavioral practices such as smoking either positively or negatively. To minimize or deny the causal role of social processes in the onset and continuation of health problems stemming from smoking renders any explanation that omits them far from complete.
The relegation of social factors to a distant supporting role in studies of health and disease causation reflects the pervasiveness of the biomedical model in conceptualizing sickness. The biomedical model is based on the premise that every disease has a specific pathogenic origin whose treatment can best be accomplished by removing or controlling its cause using medical procedures. Often this means administering a drug to alleviate or cure the symptoms. According to Kevin White (2006), this view has become the taken-for-granted way of thinking about sickness in Western society. The result is that sickness has come to be regarded as a straightforward physical event, usually a consequence of a germ, virus, cancer, or genetic affliction causing the body to malfunction. “So for most of us,” states White (2006: 142), “being sick is [thought to be] a biochemical process that is natural and not anything to do with our social life.” This view perseveres, White notes, despite the fact that it now applies to only a very limited range of medical conditions.
The persistence of the biomedical model is undoubtedly due to its great success in treating infectious diseases. Research in microbiology, biochemistry, and related fields resulted in the discovery and development of a large variety of drugs and drug-based techniques for effectively treating many diseases. This approach became medicine’s primary method for dealing with many of the problems it is called upon to treat, as its thinking became dominated by the use of drugs as “magic bullets or projectiles” that can be shot into the body to cure or control afflictions. As British historian Roy Porter (1997: 595) once explained: “Basic research, clinical science and technology working with one another have characterized the cutting edge of modern medicine. Progress has been made. For almost all diseases something can be done; some can be prevented or fully cured.” Also, improvements in living conditions, especially diet, housing, public sanitation, and personal hygiene, were important in helping eliminate much of the threat from infectious diseases. Epidemiologist Thomas McKeown (1988) found these measures more effective than medical interventions in reducing mortality from water- and food-borne illnesses in the second half of the nineteenth century.
However, as a challenge to the biomedical model, McKeown’s thesis is considered rather tame, since a rise in living standards would be expected to improve health and reduce mortality. Moreover, McKeown has been criticized for his focus on the individual when an analysis of various social structural factors, such as changes in health policies and reforms, would have been insightful (Nettleton 2020). Nevertheless, general improvements in living standards and work conditions combined with health policies and the biomedical approach to make significant inroads in curbing infectious disease. By the late 1960s, with the near eradication of polio and smallpox, infectious diseases had been largely curtailed in most regions of the world. The limiting of infectious diseases led to longer life spans, with chronic illnesses, which by definition are long-term and incurable, replacing infectious diseases as the major threats to health. This epidemiological transition occurred initially in industrialized nations and then spread throughout the world. It is characterized by the movement of chronic diseases such as cancer, heart disease, and stroke to the forefront of health afflictions as the leading causes of death. As Porter (1997) observed, cancer was familiar to physicians as far back as ancient Greece and Rome, but it has become far more prevalent as life spans increase.
Epidemiologic transition theory offers an explanation of this progression as it finds that some diseases are more prevalent in particular historical periods than others (Omran 1971). The theory divides the major causes of mortality into three distinct stages or “ages”: (1) the “Age of Pestilence and Famine” in which infectious and parasitic diseases are the major causes of death from the earliest times until the 1800s; (2) the “Age of Receding Epidemics,” a transitional stage during which infectious and parasitic diseases are brought under control by improved hygiene, sanitation, nutrition, public health measures, higher standards of living, and medical advances featuring mass immunizations, antibiotics, more advanced surgical techniques, and other innovations from the early 1800s to about 1960; and (3) the “Age of Degenerative and Man-Made Diseases” in which chronic diseases, such as cardiovascular disease and cancer, emerge as the dominant causes of mortality beginning around 1960, thereby making infectious diseases even less important. In the third stage, social factors become more prominent because of their connections to heart disease (Cockerham, Hamby, and Oates 2017c) and cancer (Hiatt and Breen 2008) by way of health lifestyles, stress, and environmental hazards.