In a sane world, serious and growing threats to the survival of human civilization would take top priority in the press, government, and business. Such threats would prompt political and business leaders to band together urgently to overcome nationalism, with its juvenile racist thinking and its urge to denigrate others for the color of their skin or the poverty of their country of origin.

Besides, not so very long ago, the “huddled masses yearning to breathe free, The wretched refuse of your teeming shore” were fleeing the grinding poverty and feudal misery of places like Ireland, Scotland, Wales, Italy, and Germany.

And when those “homeless, tempest-tost” souls passed through the golden door by the light of Lady Liberty’s lamp, they often found grinding poverty and feudal misery in America as well: in her brutal mines and logging camps, in sweatshops, and in spinning mills that killed and maimed child workers relentlessly. Throughout the Jim Crow South, Slavery by Another Name ensured that the systematic and savage theft of the lives and bodies of African-Americans to produce wealth for white-owned corporate masters would continue, crippling unions throughout the region. With the rule of the Dixiecrats — the Congressional barons with essentially lifetime tenure so long as they supported segregation — the South’s poisonous politics infected all of America, as they do to this day.

But moral reasons are not the only grounds for an accounting, and for confronting and changing our racist structures. Another reason is that, until we do, we seem entirely unable to treat most of the world’s billions of people as full human beings, with a shared stake in the global commons and a livable future.

In 1984, the malevolent O’Brien tells Winston Smith: “If you want a picture of the future, imagine a boot stamping on a human face — forever.” But as it turns out, thanks to our monkey-brain inability to defer immediate gratification even when we know the suffering that gratification is causing others, the new picture of our future could be a young brown child struggling to keep from drowning while floodwaters rise ever higher.

So, because those rising waters and the forces driving them would be the real top story in a sane world, OregonPEN presents the first part of the Royal Society’s 2017 Climate Update Report, with the second part to follow.

Climate Update

What have we learned since the IPCC’s 5th Assessment Report?

Adapted from the Royal Society’s November 27, 2017 climate change update report
Abbreviations used throughout:

  • IPCC – Intergovernmental Panel on Climate Change
  • AR4/AR5/AR6 – Fourth/Fifth/Sixth Assessment Report of the IPCC
  • RCP – Representative Concentration Pathway

Contents

  • Question 1 – How sensitive is global temperature to increasing greenhouse gases?
  • Question 2 – How are methane concentrations changing and what does this mean for the climate?
  • Question 3 – Was there a ‘pause’ in global warming?
  • Question 4 – How high could sea level rise because of anthropogenic climate change?
  • Question 5 – Decreasing Arctic sea ice – is there any influence on the weather in middle latitudes?
  • Question 6 – Have temperature and rainfall extremes changed, and how will they change in the future?
  • Question 7 – Are there thresholds beyond which particularly dangerous or irreversible changes may occur?

Introduction

“Climate change is one of the defining issues of our time.” – Dr. Ralph J Cicerone and Sir Paul Nurse, in the foreword to ‘Climate Change: Evidence and Causes. An overview from the Royal Society and the US National Academy of Sciences’ 2014

Climate has a huge influence on the way we live. For example, it affects the crops we can grow and the diseases we might encounter in particular locations. It also determines the physical infrastructure we need to build to survive comfortably in the face of extremes of heat, cold, drought and flood.

Human emissions of carbon dioxide and other greenhouse gases have changed the composition of the atmosphere over the last two centuries. This is expected to take Earth’s climate out of the relatively stable range that has characterized the last few thousand years, during which human society has emerged.

Measurements of ice cores and sea-floor sediments show that the current concentration of carbon dioxide, at just over 400 parts per million, has not been experienced for at least three million years. This elevated concentration causes more of the heat from the Sun to be retained on Earth, warming the atmosphere and ocean.

The global average of atmospheric temperature has so far risen by about 1°C compared to the late 19th century, with further increases expected, depending on the trajectory of carbon dioxide emissions in the next few decades.

In 2013 and 2014 the Intergovernmental Panel on Climate Change (IPCC) published its fifth assessment report (AR5) assessing the evidence about climate change and its impacts. This assessment considered data from observations and records of the past. It then assessed future changes and impacts based on various scenarios for emissions of greenhouse gases and other anthropogenic factors. In 2015, almost every nation in the world agreed (in the so-called Paris Agreement) to the challenging goal of keeping global average warming to well below 2°C above pre-industrial temperatures while pursuing efforts to limit it to 1.5°C.

With the next assessment report (AR6) not due until 2022, it is timely to consider how evidence presented since the publication of AR5 affects the assessments made then.

The Earth’s climate is a complex system. To understand it, and the impact that climate change will have, requires many different kinds of study. Climate science consists of theory, observation and modelling.

Theory begins with well-established scientific principles, seeks to understand processes occurring over a range of spatial and temporal scales and provides the basis for models. Observation includes long time series of careful measurements, recent data from satellites, and studies of past climate using archives such as tree rings, ice cores and marine sediments. It also encompasses laboratory and field experiments designed to test and enhance understanding of processes. Computer models of the Earth’s climate system use theory, calibrated and validated by observations, to calculate the results of future changes.

There are nevertheless uncertainties in estimating future climate. Firstly, the course of climate change depends on what socioeconomic, political and energy paths society takes. Secondly, there remain inevitable uncertainties induced, for example, by variability in the interactions between different parts of the Earth system and by processes, such as cloud formation, that occur at too small a scale to incorporate precisely in global models.

Assessments such as those of the IPCC describe the state of knowledge at a particular time, and also highlight areas where more research is needed. We are still exploring and improving our understanding of many of the processes within the climate system, but, on the whole, new research confirms the main ideas underpinning climate research, while refining knowledge, so as to reduce the uncertainty in the magnitude and extent of crucial impacts.

 

Figure 1 – Historic atmospheric carbon dioxide levels (NOAA)

 

This report considers a number of topics that have been a focus of recent attention or where there is significant new evidence. This is by no means a comprehensive review such as that being carried out for the AR6 or in IPCC special reports that are underway. It instead tries to answer, in an authoritative but accessible way, some of the questions that are asked of climate scientists by policymakers and the public. The answers start from the evidence in AR5, updated by expert knowledge and by a necessarily limited assessment of work published since then.

A full description of the process used is discussed in the appendix. The information here is supported by supplementary evidence available on the Royal Society webpages (royalsociety.org/climatechange) that describes the evidence base and literature sources used. This report does not attempt to cover every topic, and does not address more distant socioeconomic impacts of climate change such as its possible impact on migration and conflict. In particular, it does not discuss policy questions about how the aims of the Paris climate agreement might be achieved.

Each section of this report is designed to be read on its own, but the document as a whole follows a broad thematic progression, starting with aspects relating to the physical basis of climate change, and progressing through physical impacts towards those related to ecosystems and human wellbeing. The report shows where new studies are starting to fill identified gaps in knowledge. In some cases, new work suggests changes in the probability of certain outcomes occurring, but in most cases the broad statements made by IPCC still appear valid.

QUESTION ONE – How sensitive is global temperature to increasing greenhouse gases?

Summary

In 2013, the IPCC stated that a doubling of pre-industrial carbon dioxide concentrations would likely produce a long-term warming effect of 1.5 to 4.5°C; the lowest end of that range now seems less likely.

In AR5 IPCC said:

Equilibrium climate sensitivity is likely in the range 1.5°C to 4.5°C.

Transient climate response is likely in the range 1.0°C to 2.5°C.

What is this about?

Climate sensitivity is a measure of how global surface temperature rises in response to increasing atmospheric concentrations of greenhouse gases. Understanding this measure provides insight into the amount of carbon that can be emitted for a given amount of future warming.

A higher value of sensitivity implies a lower remaining budget of greenhouse gas emissions to stay below a given warming threshold, and vice versa.

Equilibrium climate sensitivity is the increase in global surface temperature that would arise from the Earth fully adjusting to a doubling of atmospheric carbon dioxide (generally calculated from its preindustrial level). Temperature adjustment is slow, and surface temperatures will continue to rise well after the date of the doubling (even if the concentration of carbon dioxide has then stabilised).

In contrast, transient climate response is the increase in global surface temperature at the time when doubling of carbon dioxide occurs and relates more directly to the temperature increases we might expect to see in the coming century. The transient response represents a situation in which the climate has not yet fully adjusted and so is smaller than the equilibrium sensitivity.

The heat-trapping properties of carbon dioxide have been known since the 1860s and, if the only thing to change was the carbon dioxide level, it would be straightforward to calculate the warming resulting from a given concentration.
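To make that direct, no-feedback calculation concrete, here is a minimal sketch in Python. It is our illustration, not part of the Royal Society report: it assumes the standard logarithmic approximation for CO2 radiative forcing (coefficient 5.35 W/m²) and a textbook no-feedback (Planck) response of about 3.2 W/m² per °C.

```python
import math

# Hedged illustration: direct warming from CO2 alone, feedbacks excluded.
# Both constants below are widely used textbook values, assumed here.

def co2_forcing_wm2(c_ppm, c0_ppm=280.0):
    """Radiative forcing (W/m^2) from raising CO2 from c0_ppm to c_ppm,
    using the standard logarithmic approximation."""
    return 5.35 * math.log(c_ppm / c0_ppm)

PLANCK_RESPONSE = 3.2  # W/m^2 of extra emission per degree C of warming

delta_f = co2_forcing_wm2(560.0)   # doubling from ~280 ppm pre-industrial
print(delta_f)                     # ~3.7 W/m^2
print(delta_f / PLANCK_RESPONSE)   # ~1.2 degrees C with no feedbacks
```

The feedbacks discussed next are what raise this bare figure of roughly 1.2°C into the 1.5 to 4.5°C range assessed by the IPCC.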

However, physical processes, known as climate change feedbacks (due, for example, to changes in humidity, cloud or ice cover) modify the direct impact of carbon dioxide substantially.

Climate sensitivity can be estimated by several different methods. Direct measurements of temperature have been made since 1850 and, prior to that, temperature records can be deduced indirectly from, for example, ice cores formed over hundreds of thousands of years.

One method uses this record together with energy-balance models and estimations of the effect of natural and anthropogenic processes to relate historical changes in carbon dioxide concentration to records of surface temperature change. Energy-balance models estimate the global average climate based solely on considerations of heat transfer (to the Earth from the Sun, and from the Earth via infrared radiation). These models make a number of assumptions, including how much heat is taken up by the oceans, and generally do not consider the geographical distribution of warming.
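For readers who want to see the shape of such a model, here is a minimal zero-dimensional sketch, our own illustration rather than any model used in AR5. The heat capacity and feedback parameter are assumed round numbers, with ocean heat uptake folded into the single heat-capacity term.

```python
# Hedged sketch of a zero-dimensional energy-balance model:
# C * dT/dt = F - lambda * T, where T is the global-mean temperature
# anomaly, F the radiative forcing and lambda the net feedback parameter.
# Parameter values are assumptions chosen only for illustration.

C_HEAT = 8.0    # effective heat capacity, W yr m^-2 K^-1 (assumed)
LAMBDA = 1.2    # net feedback parameter, W m^-2 K^-1 (assumed)

def step(temp_c, forcing_wm2, dt_years=1.0):
    """Advance the temperature anomaly one time step."""
    return temp_c + dt_years * (forcing_wm2 - LAMBDA * temp_c) / C_HEAT

temp = 0.0
for year in range(200):
    temp = step(temp, 3.7)   # hold forcing at roughly the doubled-CO2 level

print(temp)  # approaches the equilibrium F / lambda = 3.7 / 1.2, about 3.1 C
```

Fitting the feedback parameter (and the implied ocean heat uptake) to the historical record is what lets such models turn observed warming into an estimate of sensitivity.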

Another method to estimate equilibrium climate sensitivity uses computer simulations with complex global climate models. These models attempt to represent detailed physical processes, such as ocean heat uptake and climate feedbacks, and calculate a resulting sensitivity value. Each method is subject to its own approximations and uncertainties resulting in a range of estimates of sensitivity.

What was the basis for the statement in AR5?

Studies using different data sources and methodologies had produced a range of estimates of equilibrium climate sensitivity. In 2007 AR4 concluded that doubling of carbon dioxide concentration would lead to an equilibrium sensitivity in the range 2.0 to 4.5°C. In 2013, AR5 expanded the range to 1.5 to 4.5°C, to reflect some more recent studies based on past observations, but with no best estimate given. The range of transient climate response given in AR5 was 1.0 to 2.5°C.

 

Figure 2 – Global mean surface temperature projections (CMIP5, Coupled Model Intercomparison Project)

 

What do we know now?

Publications since AR5 continue to show equilibrium sensitivity estimates across the IPCC range. Those based on past observations and energy-balance models generally produce lower values than those derived from the more complex global climate models, including some suggesting ranges extending to values lower than those of AR5. There have, however, been advances in understanding of the reasons for this disparity.

One important advance is that it is now known that, as the climate warms, it becomes less effective at emitting heat to space, mainly as a result of regional variations in surface warming. This means that climate sensitivity derived from historical data (which typically fails to fully represent regions that may be warmer or cooler than the average) underestimates the value for high-carbon-dioxide atmospheres. It is also now clear that the very slow changes in patterns of ocean surface warming are inadequately represented in time-varying global climate models, resulting in an underestimate of climate sensitivity.

Insight has been evolving into the impact of localized processes on warming, for example volcanic eruptions or emission of industrial sulphate particles. The individual impact of these varies from type to type, but models ignoring such regional variations tend to give lower values for sensitivity. In another approach, global climate models are assessed on their ability to reproduce observed changes in cloud cover and cloud properties, such as ice content and reflectivity; the best performers generally have higher sensitivities.

Surface temperatures continue to be imperfectly observed. Gaps in the observation network and differences between measurement techniques for land and ocean mean that blending procedures are required to produce a global dataset. It has been demonstrated that incomplete geographical sampling of temperature can impact estimates of sensitivity. For example, the use of data with less coverage over the Arctic, where warming has been larger, has biased some climate sensitivity estimates to be too low.

How might this affect the IPCC statement?

Growing understanding of the complex, non-linear factors determining climate sensitivity is leading to improvements in methodologies for estimating it. A value below 2°C for the lower end of the likely range of equilibrium climate sensitivity now seems less plausible.

QUESTION TWO – How are methane concentrations changing and what does this mean for the climate?

Summary

After an apparent slow-down between 1999 and 2006, atmospheric methane concentrations have entered a period of sustained growth, increasing their contribution to surface warming.

In AR5, IPCC said:

Methane [concentrations] began increasing in 2007 after remaining nearly constant from 1999 to 2006

The exact drivers of this renewed growth are still debated.

What is this about?

Human activity results in a number of drivers of climate change. Carbon dioxide emissions have the largest overall effect, but, for example, increased concentrations of greenhouse gases such as methane and nitrous oxide add to carbon dioxide’s warming effect. The non-carbon-dioxide drivers of climate change are a continuing research priority, in part because many influence local air quality as well as climate. Methane is the major greenhouse-gas driver of climate change after carbon dioxide, and there have been notable increases in its atmospheric concentration in recent years (and since AR5) that are not yet understood.

What was the basis for the statement in AR5?

Methane concentrations had increased markedly since the beginning of the industrial era, more than doubling from 770 parts per billion (ppb) to approaching 1800 ppb in 2011. This increase was mostly attributed to human activity, including agriculture, waste, landfills, biomass burning and fossil fuel extraction. As for carbon dioxide, evidence from air enclosed in polar ice cores demonstrated that present-day methane concentrations exceed any seen over the past 800,000 years. The growth rate of methane concentrations had not been steady; there was a slow-down in growth from 1990, which was particularly marked between 1999 and 2006.

At the time of writing of AR5, there was an indication that this period of slowdown had ended.

The total warming effect of methane emissions for the period 1750 – 2011 was assessed to be about 55% of the size of the warming effect of carbon dioxide emissions over the same period. This value includes methane’s direct warming effect and the impact of a number of indirect effects, notably the increase in ozone concentrations that results, via a sequence of atmospheric chemical reactions, from methane emissions.

 

Figure 3 – Global monthly mean methane (NOAA)

What do we know now?

The end of the slowdown in the growth of methane concentrations has been confirmed by continued global measurements. Annual-average concentrations increased from 1800 ppb in 2011, exceeded 1840 ppb in 2016 and may exceed 1850 ppb in 2017. Average growth rates now approach those seen in the 1980s prior to the slow-down.

Methane concentration is impacted by the rates of both emission and destruction, and the contributors to the recent changes remain debated. Evidence from the geographical distribution of changes, and from isotopic measurements, indicates that increased emissions have been strongest from biological sources, most likely associated with tropical agriculture and tropical wetlands, but increased emissions from fossil fuels, due to their extraction and use, may also play a role.

There is little evidence of a significant increase in emissions from the Arctic. There is also further evidence that the rate of atmospheric destruction through chemical processes has slowed compared to what it was during the 1999 to 2006 period; the destruction rate is affected by human activity (including emissions of pollutants and concentrations of ozone), but the exact drivers of variations are not yet known.

How might this affect the IPCC statement?

There is no doubt that a period of renewed and sustained growth rate in methane concentrations has occurred since AR5. As a result, estimates of methane’s contribution to climate change have increased above those in AR5. Significant debate surrounds the factors that influence these trends, and projections of future emissions will need to focus on both emissions of methane and the rate at which chemical reactions destroy it.

QUESTION THREE – Was there a ‘pause’ in global warming?

Summary

In the 2000s the rate of surface warming was slower than in some previous decades, but the ocean continued to accumulate heat. Globally, 2015 and 2016 were the warmest years on record, and seen in this context the multi-decadal warming trend overwhelms shorter-term variability.

In AR5 IPCC said:

In addition to robust multi-decadal warming, global mean surface temperature exhibits substantial decadal and interannual variability. Due to natural variability, trends based on short records are very sensitive to the beginning and end dates and do not in general reflect long-term climate trends.

What is this about?

Earth’s surface temperature, averaged globally over ocean and land areas, is one important measure of climate change. Since pre-industrial times, it has increased by around 1°C. However, the rate of increase has not been constant, and observational data assessed by the IPCC in AR5 suggested only a small increase between 1998 and 2012.

This period was referred to as a ‘hiatus’ or ‘pause’ in global warming, and raised questions in the media and elsewhere about whether it was evidence of problems with the models used to project future climate. Since then (and since AR5) global temperature has significantly increased.

What was the basis for the statement in AR5?

More than 90% of the heat energy associated with global warming accumulates in the ocean rather than in the atmosphere. Observations of ocean heat content and sea level rise suggested that over the period of slow surface temperature rise Earth’s climate system had continued to accumulate heat, particularly in the ocean beneath the surface.

It was understood that natural processes cause variability in surface temperatures from year to year and decade to decade, and hence in the rate of surface warming. Interactions within and between different parts of the climate system (known as ‘internal variability’), volcanic eruptions and fluctuations in the Sun’s energy output all contribute to the overall variability.

There were unresolved questions about the specific processes that had contributed to the slower surface warming seen between 1998 and 2012. The IPCC concluded that both internal variability and reduced heating of the Earth “due to volcanic eruptions and the timing of the downward phase of the 11-year solar cycle” were important factors. With regard to the comparison between models and observations, the IPCC again highlighted the importance of internal variability but acknowledged that weaknesses in some of the models and inaccurate estimates of some forcing agents (such as volcanic eruptions) might be an additional factor.

What do we know now?

Globally, 2015 and 2016 were the warmest years in the surface temperature record, even allowing for the effects of the strong El Niño that affected both years. Seen in the context of the most recent years, the multi-decadal warming trend overwhelms shorter-term variability.

The ‘pause’ apparent in the data used in AR5 can be attributed to two main factors: observational biases and the variability caused by natural processes. There is some evidence that changes in atmospheric aerosols (small particles in the atmosphere) caused by human activities may have been an additional factor.

Figure 4 – Global temperatures relative to 1850 – 1900 (Met Office/NASA/NOAA)

 

Improved understanding of observational biases has shown that the rate of surface warming between 1998 and 2012 was greater than the evidence available at the time of AR5 suggested.

There is now more evidence that the handling of observational gaps over the Arctic, a region of rapid warming, is important. When these biases are taken into account, a temporary slowdown in the rate of surface warming can still be seen in the data, albeit less prominently. Research since AR5 has strengthened the conclusion that this slowdown was primarily caused by natural variability, associated partly with variations in the surface temperatures of the Pacific Ocean.

The apparent differences in the rate of global surface temperature rise between models and observations have now been largely reconciled by taking proper account of internal variability, volcanic eruptions, and solar variability, in addition to the biases in the observational records. There are outstanding questions about the mechanisms that shaped the regional pattern of surface temperature change during the ‘pause’ – this is an area of ongoing research.

How might this affect the IPCC statement?

New evidence since AR5 supports the IPCC assessment that the period of slower surface warming that was observed between 1998 and 2012 was a short-term phenomenon not representative of long-term climate change. Despite the ‘pause’ in surface temperature rise, climate change carried on: the Earth continued to accumulate energy, particularly in the ocean, at a rate consistent with warming caused by human activities. In future the rate of surface warming is expected to continue to exhibit year-to-year and decade-to-decade variability in addition to the longer-term trend.

QUESTION FOUR – How high could sea level rise because of anthropogenic climate change?

Summary

Global mean sea level will likely rise by no more than a metre by 2100, but if warming is not limited, then its effects on the ocean and ice sheets could make a rise of several metres inevitable over centuries to millennia.

In AR5 IPCC said:

Global mean sea level rise for 2081 – 2100 relative to 1986 – 2005 will likely be in the ranges of 0.26 to 0.55 m for RCP2.6 … and 0.45 to 0.82 m for RCP8.5. Only the collapse of marine-based sectors of the Antarctic ice sheet, if initiated, could cause global mean sea level to rise substantially above the likely range during the 21st century.

What is this about?

The majority of large cities and 10% of the global population are located in low-lying coastal areas. Coastal floods are, generally, most likely to occur when storms drive the sea onto the land, but their increasing incidence during the 20th century was caused mainly by the rise in sea level (global mean of about 0.2 m since 1901), rather than greater storminess. Assessing the amount and rate of sea level rise into the future is therefore essential for assessing the risks and frequency of such flooding.

What was the basis for the statement in AR5?

Global mean sea level rise is caused by both expansion of the ocean as it gets warmer and addition of water to the ocean due to loss of ice from glaciers and the ice sheets of Greenland and Antarctica. During the 21st century, the largest projected contribution was from thermal expansion. However, the greatest uncertainty related to the contribution from ice sheets, which could become significantly greater after 2100. Surface temperature warming passing an estimated threshold in the range 2 to 4°C above pre-industrial temperatures could lead to the complete loss of the Greenland ice sheet over a millennium or more, with a 7 m rise in global mean sea level.

Warming of sea water which is in contact with those parts of the West Antarctic ice sheet resting on land below sea-level could cause partial disintegration of the ice sheet, through a process called ‘marine ice sheet instability’, and lead eventually to several additional metres of global mean sea level rise.

What do we know now?

Recent work has confirmed that the observed warming of the ocean, contraction of glaciers and sea level change in the last few decades are due mainly to anthropogenic climate warming. An acceleration in the rate of sea level rise since the 1990s is consistent with increasing ice mass loss, particularly from the Greenland Ice Sheet. There has recently been more attention paid to the West Antarctic Ice Sheet. Some glaciers there are currently retreating, and this has been suggested to be a sign that marine ice sheet instability is underway.

For 2100, under high emissions scenarios, most recently published estimates for the Antarctic contribution (mainly West Antarctica) to sea level rise do not exceed 0.4m. Global sea level rise from ice loss in both Greenland and Antarctica could however increase in rate beyond 2100, and will continue for centuries under all scenarios.

Concern about the likely long-term sea level rise is heightened by evidence that sea level was 6 – 9 m higher than today during the last interglacial period (125,000 years ago) when new climate reconstructions confirm that polar temperatures were comparable to those expected in 2100.

How might this affect the IPCC statement?

With the exception of one prominent study that projects the loss of most West Antarctic ice by 2500 under even moderate warming scenarios, other recent research is still broadly consistent with the AR5 assessment that marine ice sheet instability contribution to sea level rise will “not exceed several tenths of a meter” by 2100. Thus the AR5 projections still represent current understanding, although suggestions that the contribution could be greater than was previously assessed need further evaluation.

Quantitative uncertainties, reflected in the spread of results from recent studies, reinforce the need for better understanding of the processes leading to ice shelf and ice sheet retreat. It is moreover virtually certain that sea level rise will continue for many centuries. In a climate as warm as those projected in many models for 2100 and beyond under high emissions scenarios, large parts of both ice sheets would be lost over millennia, leaving sea level many metres higher than present.

QUESTION FIVE – Decreasing Arctic sea ice – is there any influence on the weather in middle latitudes?

Summary

The long-term decrease in Arctic sea ice extent continues and the effect of ice loss on weather at mid-latitudes has become a subject of active scientific research and debate.

In AR5 IPCC said:

The annual mean Arctic sea ice extent decreased over the period 1979 to 2012 with a rate that was very likely in the range 3.5 to 4.1% per decade (range of 0.45 to 0.51 million km2 per decade), and very likely in the range 9.4 to 13.6% per decade (range of 0.73 to 1.07 million km2 per decade) for the summer sea ice minimum (perennial sea ice).

What is this about?

The Arctic has warmed more rapidly than elsewhere. There are a number of reasons for this. Warming leads to a reduction in Arctic sea ice area, which leads to less of the Sun’s energy being reflected from the surface, and therefore additional warming during the summer, which is mainly absorbed by the ocean. During the winter the reduced Arctic sea ice area allows heat to escape from the ocean to the atmosphere above it.

Since 1979, when satellites first enabled a complete picture to be obtained, the reduction of sea ice has been striking, particularly in the late summer minimum ice period, when the decrease is at a rate of more than 10% per decade.

Despite the long-term average increase in surface temperature at high-latitudes, there has been a wintertime cooling trend both in eastern North America and in central Eurasia over the last 25 years including a number of extremely cold winters (e.g. 2009/10 in northern Eurasia and 2014 in eastern North America). This period coincides with the period of pronounced Arctic sea ice decline. Some research has suggested that warming in regions of reduced sea ice leads to a weakening westerly polar jet stream that is more likely to meander. In such meanders very cold air may reach deep into middle latitudes.

What was the basis for the statement in AR5?

Increased levels of warming in the Arctic and the associated decrease in sea ice had been observed and were in general understood. However, at the time there was no indication of any particular link with changed patterns in extremes of mid-latitude weather and the lack of comment by IPCC reflected this.

What do we know now?

In the last five years, changes in the extent of Arctic sea ice have been consistent with a general decline overlaid by large natural variability from year to year. 2012 had a record September minimum, some 40% below typical values seen in the early 1980s. 2016 and 2017 have seen the smallest March maxima in sea ice area. There is no particular basis for making significant changes to the IPCC projections for future amounts of sea ice.

It is challenging to attribute observed changes in mid-latitude weather to Arctic sea ice loss, but there are indications from observations that sea ice loss may be causally linked to changes in wintertime atmospheric circulation over Eurasia that are consistent with the cooling seen there.

 

Figure 6 – Arctic sea ice area in September from 1979 to 2017 (National Snow and Ice Data Center)

 

There has been considerable use of computer models to investigate possible influences of Arctic warming on regional mid-latitude weather, and some theoretical, but conflicting, mechanisms have been proposed. If the weather systems stayed the same, enhanced Arctic warming would mean that the cold air blowing into middle latitudes from Arctic regions would be less cold.

However, there is some evidence from models that regional decreases in sea ice, such as in the Barents-Kara Sea (north of Finland and western Russia), can interact with the regional weather systems to increase the likelihood of very cold winter weather in Central Asia, as has been more prevalent since 1990. The nature and strength of linkages between Arctic sea ice loss and mid-latitude weather is a focus of considerable current research.

How might this affect the IPCC statement?

Arctic sea ice extent observed in the past five years is consistent with the statements made in AR5 on its general rate of reduction. It is likely that the next IPCC report will include more discussion on linkages between Arctic sea ice loss and mid-latitude weather, particularly in Central Asia.

QUESTION SIX – Have temperature and rainfall extremes changed and how will they change in the future?

Summary

Climate change has increased the frequency of heatwaves. The effect on rainfall and tropical storms is more complex and harder to detect, but there is strengthening evidence that warming may increase the intensity of the strongest tropical storms.

In AR5 IPCC said:

It is now very likely that human influence has contributed to observed global scale changes in the frequency and intensity of daily temperature extremes since the mid-20th century, and likely that human influence has more than doubled the probability of occurrence of heat waves in some locations.

There are likely more land regions where the number of heavy precipitation events has increased than where it has decreased.

It is very likely that heat waves will occur with a higher frequency and duration.

Extreme precipitation events over most of the mid-latitude land masses and over wet tropical regions will very likely become more intense and more frequent by the end of this century, as global mean surface temperature increases.

What is this about?

Extreme events such as unusual heat, heavy rainfall, month-long droughts, or very intense hourly rainfall can have important impacts, for example on health, food production and infrastructure, especially if they happen infrequently, which makes it difficult to adapt. As climate warms, some events that used to be rare, or even unprecedented in the context of today’s climate, will become more common, such as summer heat waves, while others will become less common, such as winter cold spells.

The warmer atmosphere increases the potential for heavy rainfall in general, even while some regions will receive less rainfall due to changes in atmospheric circulation. As temperature rises, evaporation increases and will add to the potential for drought in some regions.

As well as these more direct effects, extreme events can also be affected indirectly by the impacts of changes in vegetation or ecosystems.

What was the basis for the statement in AR5?

The statements in AR5 were based on research considering observed trends in extremes on a globally widespread scale. Observed large-scale changes were compared with changes simulated over the 20th century in climate models, and with changes that are expected from natural climate variability only, attributing them to human influences. Confidence was higher for daily temperature extremes than rainfall extremes. There was also an emerging scientific literature determining to what extent climate change has influenced the likelihood of individual events, such as a particular observed heat wave, for example the European heatwave of 2003.

What do we know now?

Observations show that many extremes have continued to become more frequent and intense. Heat waves continued to increase in frequency even between 1998 and 2012, and research indicates an important interaction between dry conditions and heat waves.

Since AR5, analysis of specific extreme events has continued to indicate that human influences have made many individual heat waves much more likely, and cold spells less likely. Methods to quantify this change have improved, and different methods and approaches tend to lead to the same conclusions. Nevertheless some uncertainty remains as changes in atmospheric weather patterns can locally have a strong impact.

It is much more difficult to determine if humans have influenced other types of events, such as drought, or heavy rainfall events. Generally a warmer atmosphere is more conducive to heavy rainfall just because it can hold more water. However, natural climate variability in precipitation is very large, and changes in atmospheric circulation patterns have a substantial influence.

Therefore, results of attribution studies for precipitation-related events tend to depend on the type of event that is considered, and what assumptions are used. For example, results will often differ depending on whether a study considers how extreme the rainfall would have been without greenhouse gas increases for the exact same atmospheric conditions, or if it considers how extreme rainfall overall has changed in a region.

2017 saw (at least until early October) a very active tropical cyclone season in which severe damage was caused. IPCC AR5 indicated low confidence in observed long-term changes of intense tropical cyclone activity, and low confidence in the causes of those changes, but predicted more-likely-than-not increases in intensity by the end of the century in the Western North Pacific and North Atlantic.

There is evidence from physical understanding and modelling that warming may increase the intensity of the strongest tropical cyclones. Also, analysis of model simulations and physical understanding suggest that heavy rainfall associated with tropical cyclones and other extreme storms should increase in a warmer atmosphere, all else being equal. Sea level rise exacerbates the impact of storm surges.

How might this affect the IPCC statement?

Further evidence supports the existing IPCC statements. Temperature extremes have become more frequent globally, rainfall extremes have increased in some regions, and these trends are likely to continue in the future. More specific statements about the role that human influence has played in changing the frequency of specific types of events, particularly heat waves, are becoming possible.

Improved model simulations and physical understanding may strengthen confidence in projected changes in extreme daily and sub-daily rainfall, and in tropical cyclones and the heavy rainfall and the coastal inundation associated with them.

QUESTION SEVEN – Are there thresholds beyond which particularly dangerous or irreversible changes may occur?

Summary

There are a number of possible thresholds, but unless warming significantly exceeds expectations it is not expected that the most dangerous ones discussed here will be crossed this century.

In AR5 IPCC said:

It is unlikely that the AMOC [Atlantic Meridional Overturning Circulation] will collapse beyond the end of the 21st century for the scenarios considered but a collapse… for large sustained warming cannot be excluded.

It is very unlikely that methane from clathrates will undergo catastrophic release during the 21st century.

There is low confidence in projections of the collapse of large areas of tropical and/or boreal forests.

What is this about?

Several components of the Earth system might have thresholds or “tipping points”. If climate change passes certain levels, abrupt transitions could occur and parts of the climate system could be significantly altered. In some cases, these changes may be irreversible and in others it may take much longer to return to the original state even when the underlying drivers of climate change have ceased. Among the phenomena of concern are:

  • Collapse of the Atlantic Meridional Overturning Circulation, which transports ocean heat to North Atlantic surface waters, with widespread consequences for the climate.
  • Rapid release of methane from organic carbon in permafrost on land, or from methane hydrates (clathrates) below the ocean floor, causing significant further warming.
  • Large-scale dieback of the Amazon forest and consequential loss of ecosystem and carbon sink.

Potential thresholds for loss of large ice sheets leading to sea level rise are discussed under the topic of sea level.

What was the basis for the statement in AR5?

AR5 concluded that collapse of the overturning circulation would cause significant global-scale climate disruption, including abrupt cooling around the North Atlantic. Weakening was expected in the 21st century, but an abrupt collapse was not, unless models seriously underestimate sensitivity to heat or freshwater, or the input of meltwater from Greenland is much faster than expected.

Warming at high latitudes will reduce the area of permafrost, and this will cause carbon dioxide and methane to be released to the atmosphere. However there was a wide range of estimates for the magnitude of these emissions. Ocean warming can destabilize clathrates below the sea floor, releasing methane to the ocean.

If large volumes reached the atmosphere, this would have a massive warming effect. However, AR5 concluded that oxidation would convert most of the methane to carbon dioxide before it reached the ocean surface, and the slow rate of heat penetration through the sediment meant that the destabilization of hydrates would be small on century scales.

AR5 recognized that the Amazon rainforest might have a critical threshold, particularly in relation to a rainfall volume below which large-scale dieback might be expected. However, considering likely scenarios and the combined effects of carbon fertilization, warming, and changes in rainfall, fire and land use, the IPCC gave the cautious statement above.

What do we know now?

New palaeoclimatic measurements have strengthened the evidence linking changes in overturning circulation in the last glacial period to abrupt climate change, indicating that destabilization of overturning circulation can occur and is associated with climate disruption. However, these occurrences are not direct analogues for today’s interglacial period, because they were associated with inputs of meltwater from ice sheets much larger than the one that remains in Greenland.

Modern measurements confirm the variability of the Atlantic Meridional Overturning Circulation on daily, seasonal and interannual timescales, which makes detecting current trends challenging. Recent work suggests that climate models have biases favouring stability. This could imply that the likelihood of circulation collapse has been underestimated, but much more research is needed to reach firm conclusions.

Many new measurements have led to revised estimates of the amounts of carbon stored in permafrost, and the amounts of greenhouse gases released when permafrost thaws. These show that release of permafrost carbon will be a significant positive feedback for climate change; however, release is still expected to be prolonged and gradual rather than abrupt on decadal scales.

Several new measurements have suggested a limited influence of current clathrate releases (and indeed from permafrost on land) on the atmosphere. Assuming that the whole ocean does warm significantly, heat will reach larger volumes of clathrates, but this is expected to be gradual, implying a commitment to slow rather than catastrophic release to the ocean.

Many of the factors that influence the nature and health of forest ecosystems have been reported on, but recent modelling studies considering all the interactions and the ecosystem complexity show that there remains much uncertainty about the possibility of substantial spatially coherent forest loss.

How might this affect the IPCC statement?

Based on current models, significant but gradual reductions in strength of the overturning circulation are expected if warming continues. However, sudden ocean circulation collapse remains unlikely, while still not being excluded, especially beyond 2100. Ocean warming implies a long-term commitment to some clathrate destabilisation with timescales up to millennia, but not necessarily to significant methane release into the atmosphere. The cautious IPCC statement about the Amazon as a whole is still valid.

In summary, gradual climate change could trigger abrupt changes – with large regional and potentially global impacts – associated with thresholds in the Earth system. The possibility of crossing any of these thresholds increases with each increment of warming. However, although surprises cannot be excluded, there is no compelling evidence that the thresholds discussed here will be crossed this century, or that the IPCC statements need significant amendment.

The text of this work is licensed under the terms of the Creative Commons Attribution License. License is available at creativecommons.org/licenses/by/4.0

To be continued: the final five questions of the Royal Society Climate Update and a further look at rising ocean acidity and its effect on mussels.

Greenhouse gas emissions don’t just disrupt the climate. The same emissions also pump billions of tons of acid into the oceans, year after year.

Result: Greater ocean acidity.

And disruption of the entire planetary food web, starting at the base. 

Oh, and … a massive economic toll.

CO2 dissolved in water forms “carbonic acid” — which you probably recognize as that sharp bite when you taste soft drinks. When oceans absorb massive amounts of the excess CO2 humans have been emitting for 250 years, the ocean pH drops.

We use pH to report how far from neutral (pH = 7.0) — that is, how acid (pH below 7) or how basic (pH above 7) — a mixture has become.

The pH scale is based on powers of 10. So acidic coffee with a pH of 5.0 is 100 times (two factors of 10) more acid than neutral water. In the other direction, the highly alkaline Great Salt Lake at pH = 10.0 is 1,000 times (three factors of 10) less acid (called “basic”) than neutral water with pH = 7.0.

Since the Industrial Revolution began, the pH of surface ocean waters has fallen by 0.1 pH units – which represents about a 30% increase in acidity.  
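Those figures can be checked directly from the powers-of-ten rule. Here is a minimal back-of-envelope sketch in Python (our arithmetic, not NOAA's or the Royal Society's):

```python
import math

# pH is -log10 of hydrogen-ion concentration, so a pH drop of d
# multiplies acidity by 10**d. Back-of-envelope check only.

def acidity_increase_pct(ph_drop):
    """Percent increase in hydrogen-ion concentration for a given pH drop."""
    return (10 ** ph_drop - 1.0) * 100.0

print(acidity_increase_pct(0.1))   # ~26%; the widely quoted "30%" matches
                                   # a measured drop closer to 0.11 units
print(acidity_increase_pct(0.4))   # ~150%, the rise projected for 2100 below
print(math.log10(2.5))             # ~0.4: the pH drop implied by "150% more acidic"
```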

The greater acidity of the sea surface has wide-ranging implications for life on the planet: risk of the loss of marine species such as oysters, clams, shallow-water corals, deep-sea corals and plankton. Jeopardize these organisms, and the entire planetary food web that arises from them is in jeopardy as well.

As we emit ever more CO2 each year, the rise in ocean acidity speeds up. NOAA – the National Oceanic and Atmospheric Administration, housed in the US Department of Commerce in tacit recognition that our economy rests on our ability to understand our environment – estimates that by the year 2100, ocean surface waters could be nearly 150 percent more acidic, a pH last present more than 20 million years ago.

While capturing acidic CO2, the oceans are also the ultimate heat sink, absorbing more than 90% of the excess heat we are trapping on Earth with our greenhouse gas emissions. So oceans are growing more acid and warmer too. Greater ocean warmth means increasing tropical storm severity, rising sea levels, “dead zones,” and the foreseeable loss of marine life such as large tuna and keystone species like sharks and whales.

Oregon has 363 miles of coastline, connecting us to the great World Sea that covers 71% of the planetary surface and that formed the cradle for all life around the globe. 

The scale of these slow-moving (to human eyes; they are abrupt in geologic time) sibling catastrophes, rising ocean acidity and temperatures, both children of our addiction to fossil fuels, is massive. So massive that people struggle to accept that they are even possible; it is difficult to accept that a colorless, odorless gas can force changes in something as vast as the oceans.

And it seems impossible to quantify the costs these changed conditions will impose on us.

Luckily, the Center for Sustainable Economy (CSE), a 25-year-old environmental think tank located in Oregon, has begun the effort to put into dollars and cents what our thoughtlessness will cost us.

For decades CSE has investigated the obstacles to a truly sustainable economy by rigorously examining both science and economics in relation to issues as diverse as fossil fuels, timber and biodiversity. The Center utilizes the work of distinguished fellows with expertise in the fields of ecological economics, conservation biology, sustainability analysis and public interest law to analyze public policy and current practices.

Today’s OregonPEN is devoted to presenting the work of CSE President and Senior Economist John Talberth who, jointly with Ernie Niemi of Natural Resource Economics of Eugene, explores the economic costs of rising ocean acidity and warming. Talberth and Niemi outline ways we might quantify (assign a dollar cost to) what happens — including to fisheries, to coral reefs, to other species, and to us — if we so saturate the oceans with CO2 that they will not accept more.

This is denser than the usual OregonPEN but, with the rate of ocean warming and acidification predicted to accelerate, we think all Oregonians need to know.

The punch line is that even as the 2018 Oregon Legislature discusses putting a price on carbon emissions in Salem, the best science suggests that the true cost of carbon emissions — the amount it would be worth paying to avoid them, once you recognize the costs of ocean acidity and warming (OAW) — is much, much greater than we currently accept. The sooner we understand the high “social cost of carbon” (SCC), the more likely we are to respond wisely to it by raising what we’re “willing to pay” (WTP) to avoid emissions.

Ocean Acidification and Warming: The Economic Toll

by Center for Sustainable Economy (sustainable-economy.org/)
Used with permission.

In a new study authored by Dr. John Talberth and Ernie Niemi of Natural Resource Economics, CSE reviewed the economic consequences of ocean acidification and warming – the two most prominent effects of climate change on our oceans – and estimated what increment to the existing social cost of carbon (SCC) needs to be made to account for these damages.

Preliminary results suggest that proper accounting of an economic risk that could approach $20 trillion per year by 2100 would raise SCC 1.5 to 4.7 times higher than the current federal rate, to $60–$200 per metric ton CO2-e. The study has been published online by Elsevier as part of their Reference Module in Earth Systems and Environmental Sciences.

Climate change has the potential to disrupt ocean and coastal ecosystems on a scale that is difficult to grasp. There are two interrelated processes at work: ocean acidification and ocean warming (OAW). Oceans have absorbed roughly half of all anthropogenic emissions of carbon dioxide. Acidification occurs as the absorption of CO2 triggers a series of chemical reactions that increase the acidity and decrease the concentration of carbonate ions in the water. So far, absorption of CO2 has increased acidity of surface waters by about 30% and, if current trends in atmospheric CO2 continue, by 2100 these waters could be nearly 150 percent more acidic, resulting in a pH that the oceans haven’t experienced for more than 20 million years.

Among the dire predictions associated with acidification are dramatic reductions in populations of some calcifying species, including oysters, clams, sea urchins, shallow water corals, deep sea corals, and calcareous plankton – the latter effect putting the entire marine food chain at risk. Some models suggest that ocean carbonate saturation levels could drop below those required to sustain coral reef accretion by 2050.

The second process is ocean warming. The mechanisms of ocean warming are complex, and include heat transfer from the atmosphere, downwelling infrared radiation, stratification, reductions in mixing, changes in ocean currents, and changes in cloud cover patterns. Already, the global average sea surface temperature (SST) has risen by over 2.0 °F since the post-industrial revolution low point in 1909. Sea level rise is one of the most conspicuous effects with potentially catastrophic consequences.

Models that account for collapse of Antarctic ice sheets from processes driven by both atmospheric and ocean warming indicate sea level rise may top one meter by 2100 and put vast areas of coastal infrastructure at risk.

Obviously, all these physical effects have enormous economic consequences, yet relatively little research has been completed to date on their expected magnitude, timing, and distribution. Indeed, as late as 2012, several prominent climate researchers concluded that economic assessments of the effects of ocean acidification “are currently almost absent.” To help fill in this information gap, we combed through all published research on OAW economic consequences, updated figures where needed, and made some original calculations of our own to estimate some plausible worst-case scenarios. These scenarios appear in Table 4, below.

Alarmingly, they suggest that OAW costs could near $20 trillion per year by 2100 in association with a variety of dramatic impacts, such as loss of all charismatic marine species.

Table 4: Plausible worst-case scenarios and values at risk from OAW

Resource or service at risk | Scenario | Values at risk ($billions/yr, 2016 dollars)
Net primary productivity | Ocean net primary productivity reduced by 16% | $9,232.00
Coral reefs | Loss of at least 50% of current coral reef area | $5,661.70
Coastal infrastructure | Additional SLR of 3 meters via WAIS collapse | $3,561.69
Charismatic species | 25% of charismatic marine species go extinct | $1,104.08
Carbon sequestration | 50% loss of ocean CO2 uptake | $641.16
Mangroves | Loss of at least 15% of current mangrove area | $287.42
Fisheries | 400 million at significantly increased risk of hunger | $245.74
Coastal ecosystems | Marine dead zones expand in area by 50% | $126.82

The relative lack of understanding about economic consequences has, in turn, translated into a lack of policy mechanisms and research focused on OAW. One of the policy mechanisms where OAW costs are notably absent is the social cost of carbon (SCC) – an increasingly popular regulatory tool for assessing both the costs of greenhouse gas emissions and the benefits of actions to limit emissions.

Ostensibly, the SCC includes all known market and non-market costs, yet there are many categories missing or incomplete.  One of the bigger holes is OAW and one of the justifications for its absence is the relative dearth of methods or data to quantify economic consequences and the assumption that such impacts are minor enough that society will be able to adapt.

In the paper, we argue that such barriers need not restrain the government agencies participating in the SCC’s development and application from incorporating estimates for OAW based on the best available information and inclusive of high-impact but low probability scenarios – two factors that are baked into the regulatory framework for the SCC.

We do so by demonstrating three basic approaches rooted in standard microeconomic models of externalities, capital investment, and risk aversion. The first is based on federal agencies’ current approach for quantifying externalities from GHG emissions using the Dynamic Integrated Climate-Economy (DICE) integrated assessment model and economic damage functions suggested by existing literature. The second is a replacement or adaptation cost approach, which views SCC as a current capital investment liability that can be amortized over the adaptation time horizon. The third is an averted-risk approach based on willingness to pay to eliminate the risk of catastrophic changes, an approach that seems most compatible with worst-case scenario requirements under existing law.
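To make the second (replacement or adaptation cost) approach concrete, here is a purely hypothetical sketch of the annuity arithmetic it implies. Every number below is a placeholder chosen for illustration, not a figure from the study:

```python
# Hedged sketch: treat an adaptation liability as capital to be amortized
# over the adaptation horizon, then spread the annual cost across the
# emissions responsible. All inputs are invented placeholders.

def amortized_annual_cost(capital_cost, rate, years):
    """Standard annuity formula: the annual payment that retires
    capital_cost over `years` at discount rate `rate`."""
    return capital_cost * rate / (1.0 - (1.0 + rate) ** -years)

adaptation_cost = 3.56e12    # e.g. coastal infrastructure at risk, $ (placeholder)
annual = amortized_annual_cost(adaptation_cost, rate=0.03, years=80)

emissions_per_year = 40e9    # rough global tonnes CO2-e per year (placeholder)
print(annual / emissions_per_year)   # implied $/tonne increment to the SCC
```

Each of the three approaches yields a per-tonne dollar figure that can be added to, or compared against, the existing federal SCC.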

In the next phase of this work, the study will be presented to the Interagency Working Group on the Social Cost of Carbon and the National Academy of Sciences, which is conducting a review of SCC methods and accepting recommendations for changes in approaches and sources of information. If the SCC is to be an effective regulatory tool and send the right market signal to polluters it must be as complete as possible. By engaging with the IWG on how best to incorporate the enormous toll associated with ocean acidification and warming, we hope to help fill one of the SCC’s most serious omissions. The authors’ manuscript follows:

Ocean Acidification and Warming

The economic toll and implications for the social cost of carbon

by John Talberth
President and Senior Economist
Center for Sustainable Economy
16869 SW 65th Avenue, Suite 493
Lake Oswego, Oregon 97035-7865
jtalberth@sustainable-economy.org
(Corresponding author)

and Ernie Niemi
Natural Resource Economics
1430 Willamette St., Suite 553
Eugene, Oregon 97401-4049
ernie.niemi@nreconomics.com

Mounting evidence indicates ocean acidification and warming (OAW) pose significant risks of systemic collapse of many critical ocean and coastal ecosystem services. Attention has focused on drastic reductions, if not extinction, of coral reefs, inundation of coastlines, massive ocean dead zones, collapse of both capture and subsistence fisheries in highly dependent regions and significant disruption of the ocean’s carbon sequestration capacity.

The economic costs of OAW have yet to be adequately researched or included in estimates of the social cost of carbon (SCC). This paper summarizes current knowledge about the economic costs of OAW and suggests alternative approaches for incorporating these costs into the federal government’s SCC. Preliminary results suggest that accounting for OAW would raise the SCC to 1.5 to 4.7 times the current federal rate, or $60–$200 per metric ton CO2-e.

Keywords

Ocean acidification, Ocean warming, Sea level rise, Social cost of carbon, Risk aversion

Introduction

Among the most startling manifestations of the Anthropocene is the widespread degradation and collapse of ocean and coastal ecosystems already underway as a result of synergistic interactions between climate change, pollution, habitat destruction and overexploitation of fisheries. Over 90% of the biomass of large predatory fish has disappeared as a result of factory trawling and other industrial fishing methods (Myers and Worm 2003). Roughly 20–25% of all marine species are at risk of extinction (Webb and Mindel 2015). One fifth of all mangrove forests have been destroyed since 1980, primarily from aquaculture, agriculture and urban land uses (Spalding et al. 2010). Marine dead zones caused by nutrient runoff have spread exponentially since the 1960s and now encompass over 245,000 km2 (Diaz and Rosenberg 2008). Enormous quantities of marine debris, mostly plastic, are found floating in all the world’s oceans and litter both the seabed and coastlines. At least 267 different species are known to have suffered from entanglement or ingestion of this debris (Allsopp et al. 2006). Alarming as these effects are, they are likely to be eclipsed by climate change.

Climate change has the potential to disrupt ocean and coastal ecosystems on a scale that is difficult to grasp. There are two interrelated processes at work: ocean acidification and ocean warming (OAW). Oceans have absorbed roughly half of all anthropogenic emissions of carbon dioxide (Sabine et al. 2004). Acidification occurs as the absorption of CO2 triggers a series of chemical reactions that increase the acidity and decrease the concentration of carbonate ions in the water. So far, absorption of CO2 has increased acidity of surface waters by about 30% and, if current trends in atmospheric CO2 continue, by 2100 these waters could be “nearly 150 percent more acidic, resulting in a pH that the oceans haven’t experienced for more than 20 million years” (PMEL). The dire predictions associated with acidification include dramatic reductions in populations of some calcifying species, including oysters, clams, sea urchins, shallow water corals, deep sea corals, and calcareous plankton – the latter effect putting the entire marine food chain at risk. Some models suggest that ocean carbonate saturation levels could drop below those required to sustain coral reef accretion by 2050 (Hoegh-Guldberg et al. 2007).

The second process is ocean warming. The mechanisms of ocean warming are complex, and include heat transfer from the atmosphere, downwelling infrared radiation, stratification, reductions in mixing, changes in ocean currents, and changes in cloud cover patterns (Hoegh-Guldberg 2014). Already, the global average sea surface temperature (SST) has risen by over 2.0 °F since the post-industrial-revolution low point in 1909 (EPA). Sea level rise is one of the most conspicuous effects, with potentially catastrophic consequences. Models that account for collapse of Antarctic ice sheets from processes driven by both atmospheric and ocean warming indicate sea level rise may top one meter by 2100 and put vast areas of coastal infrastructure at risk (DeConto and Pollard 2016).

Obviously, all these physical effects have enormous economic consequences, yet relatively little research has been completed to date on their expected magnitude, timing, and distribution. Indeed, as late as 2012, several prominent climate researchers concluded that economic assessments of the effects of ocean acidification “are currently almost absent” (Narita et al. 2012). This relative lack of understanding has, in turn, translated into a lack of policy mechanisms and research focused on OAW (Billé et al. 2013). One of the policy mechanisms where OAW costs are notably absent is the social cost of carbon (SCC) – an increasingly popular regulatory tool for assessing both the costs of greenhouse gas emissions and the benefits of actions to limit emissions.

Ostensibly, the SCC includes all known market and non-market costs, yet many categories are missing or incomplete (Howard 2014). One of the bigger holes is OAW; the two justifications for its absence are the relative dearth of methods or data to quantify economic consequences and the assumption that such impacts are minor enough that society will be able to adapt (Howard 2014).

Here, we argue that such barriers need not prevent the government agencies participating in the SCC’s development and application from incorporating estimates for OAW based on the best available information and inclusive of high-impact but low-probability scenarios – two factors that are baked into the regulatory framework for the SCC.

We do so by demonstrating three basic approaches rooted in standard microeconomic models of externalities, capital investment, and risk aversion. The first is based on federal agencies’ current approach for quantifying externalities from GHG emissions using the Dynamic Integrated Climate-Economy (DICE) integrated assessment model and economic damage functions suggested by existing literature. The second is a replacement or adaptation cost approach, which views SCC as a current capital investment liability that can be amortized over the adaptation time horizon. The third is an averted-risk approach based on willingness to pay to eliminate the risk of catastrophic changes, an approach that seems most compatible with worst-case scenario requirements under existing law.

In Section 2, we review the recent literature on the valuation of ocean and coastal ecosystems. In Section 3, we discuss what portion of this value is at risk from OAW including a set of plausible high-impact scenarios. In Section 4, we discuss the current regulatory approach and methods for estimating the SCC, and demonstrate three alternative models for incorporating the effects of OAW. In Section 5, we offer concluding thoughts and recommendations for further research and data gathering.

2.0 The value of ocean and coastal ecosystem services

Ocean and coastal ecosystems provide goods and services worth many trillions of dollars each year to the global economy. The concept of ecosystem services provides a comprehensive framework for valuation that incorporates both market and non-market benefits. Table 1 provides a partial list of important ecosystem services using the standard four-tier typology for these services including provisioning, regulating, cultural and supporting.

Table 1: Typology of ocean and coastal ecosystem services

Provisioning goods and services
•  Human food (calories, protein, essential micronutrients)
•  Livestock food
•  Pharmaceutical and cosmetic compounds
•  Fertilizer
•  Water for desalination and industrial cooling
•  Construction materials
•  Commercial products (jewelry, curios, ornamental fish)
•  Energy storage

Regulating goods and services
•  Carbon sequestration and storage
•  Oxygen production
•  Filtration of runoff by sea grasses
•  Bioremediation of waste
•  Biological control of harmful algal blooms
•  Shoreline protection

Cultural goods and services
•  Subsistence
•  Cultural and scientific education
•  Recreation opportunities
•  Tourism opportunities
•  Intrinsic values for threatened and endangered species
•  Sense of place for coastal communities
•  Cultural identity for coastal communities
•  Research opportunities

Supporting goods and services
•  Biological primary and secondary production
•  Biological diversity
•  Habitat/refugia
•  Nutrient cycling

Many of these services generate multidimensional economic benefits. Fish and shellfish for human consumption, for example, typically provide high-value protein with essential micronutrients (vitamins, minerals, polyunsaturated omega-3 fatty acids) but low levels of saturated fats, carbohydrates, and cholesterol (World Bank 2013). The oceans play key roles in limiting the multiple costs of climate change by absorbing more than 90% of the thermal energy accumulated because of GHGs in the atmosphere, and about 30% of the emitted anthropogenic CO2 (IPCC 2014). Subsistence fish can embody many benefits besides nutrition: aesthetic, place/heritage, activity, spiritual, inspiration, knowledge, existence/bequest, option, social capital/cohesion, identity, and employment (Chan et al. 2012).

Costanza et al. (2014) updated their groundbreaking 1997 study on the value of the world’s natural capital and ecosystem services to account for changes in both the area of marine and terrestrial ecosystems and their unit values. The total estimated value for marine ecosystems was found to be over $57.4 trillion per year in 2016 dollars. This stream of benefits was further subdivided into those provided by open oceans ($25.3 trillion/yr), estuaries ($6.0 trillion/yr), seagrass and algae beds ($7.9 trillion/yr), coral reefs ($11.4 trillion/yr) and continental shelves ($6.8 trillion/yr). The total value of all marine and terrestrial ecosystem services was estimated to exceed $144.2 trillion/yr. Of particular note is that this aggregate global value is roughly twice that of gross world product ($75 trillion in 2015), and encompasses valuable functions – like maintenance of the atmospheric gas balance that enables us to breathe – that cannot be captured in market-based transactions.

3.0 Values at risk from OAW and plausible scenarios

OAW presents a significant threat to ocean and coastal ecosystem services. The literature paints an alarming portrait of large-scale adverse changes to ocean processes and marine habitats and organisms (Table 2). Key processes at risk include carbon sequestration and storage, production of atmospheric oxygen, nutrient cycling, heat transfer, regulation of acidity, and regulation of weather patterns. Among the most disconcerting is the risk to the ocean’s capacity to produce atmospheric oxygen. If the oceans were to warm by more than 6 °C, disruption of oxygen production by phytoplankton could cause the atmospheric oxygen concentration to fall below the level most organisms require for respiration (Sekerci and Petrovskii 2015).

Table 2: Risks associated with ecological and biogeochemical systems

Key processes at risk
•  Increase in acidity of sea water
•  Increase in sea temperature down to 1 km
•  Changes in ocean currents
•  Release of seafloor methane to atmosphere
•  Intensification of extremes in El Nino/Southern Oscillation and weather events
•  Poleward movement of storm tracks and changes in monsoons
•  Decline in phytoplankton’s production of atmospheric oxygen
•  Changes in nutrient cycling
•  Slowdown of the Biological Pump (transfer of atmospheric CO2 to the ocean floor)
•  Discharge into the atmosphere of heat and CO2 previously absorbed by the oceans
•  Intensification of global hydrological cycle
•  Rising sea levels from heat expansion of sea water
•  Melting of Arctic summer sea ice

Key risks to marine habitats and organisms
•  Increased incidence of harmful species and toxic compounds
•  Negative effects on growth, survival, fitness, calcification, and development of marine organisms
•  Changes in metabolic pathways and biological processes
•  Global redistribution of marine biodiversity
•  Evolution of some organisms towards smaller size
•  Reduction in primary production of some marine ecosystems
•  Expanding deoxygenation, with shift away from species not adapted to hypoxia
•  Spreading anoxic dead zones and toxic blooms
•  Changes in food-web dynamics
•  Contraction of metabolically viable habitats of marine animals
•  Synergistic interactions with other stressors (pollution, etc.) of marine ecosystems

Key biological impacts include loss of habitat, increase in marine hypoxic dead zones, reduced primary production, extinction of sea-ice dependent species, and declining abundance and distribution of species with thresholds for acidity or temperature. The disappearance of all the world’s coral reefs is one particularly worrisome scenario that may already be manifesting in places. A somewhat sensational article declared that the Great Barrier Reef was dead for all practical purposes from warming-related bleaching and acidification after a 25-million-year reign as one of the world’s most concentrated hotspots of biological diversity (Jacobsen 2016).

A few studies predict economic losses from OAW, but mostly for just one ecosystem good or service and for either warming or acidification rather than the two effects together (Table 3). Most of these studies concentrate on impacts to one or more regions, with a focus on commercial seafood production. Notable exceptions, though, address widespread global ecosystem service costs. For example, Brander et al. (2012) show that the global costs from lost recreational opportunities associated with coral reef loss could top $1.2 trillion/yr by 2100. By 2200, costs associated with the warming-induced release of methane (CH4) now stored as clathrates, or hydrates, trapped in ice under the East Siberian Arctic Sea could reach $60 trillion as flooding, drought, severe heat stress and other climate disasters worsen (Whiteman et al. 2013). Most global losses of ecosystem services remain unaddressed, however, largely because the economic valuation literature has not yet caught up with the relatively fast proliferation of research on the physical dimension of OAW.

Table 3: Potential economic cost of lost ecosystem services due to ocean warming (OW) and/or acidification (OA)

Lost ecosystem service | Source | Location, year of estimate | Estimated cost
Coral reef recreational value (OA) | Brander et al. (2012) | Global, 2100 | $1.2T/yr
Shellfish landings (OA) | Turley et al. (2009) | UK, 2006 | $52–131M/yr
Mollusk catch and aquaculture (OA) | Narita et al. (2012) | Global, 2100 | $7–101B/yr
Mollusk catch and aquaculture (OA) | Narita et al. (2012) | USA, 2100 | $436M/yr
Fish, mollusks/bivalves, crustaceans, aquaculture (OA) | Armstrong et al. (2012) | Norway, 2010–2110 | $360M
Carbon sequestration (OA) | Armstrong et al. (2012) | Norway, 2010–2110 | $114B
Shellfish production (OA) | Hilmi et al. (2015) | Global, 2100 | $2.3B/yr
Sardine catch (OW) | Garza-Gil et al. (2015) | Spain, 2036 | $17M/yr
Fish catch (OW) | Jones et al. (2014) | UK, 2005–2050 | $0.44B
Methane storage, East Siberian Sea (OW) | Whiteman et al. (2013) | Global, through 2200 | $60T

Practically all of the physical effects can nonetheless be quantified, at least in a preliminary sense, with standard valuation methods applicable to both market and nonmarket dimensions of economic welfare. Here, we demonstrate by discussing eight distinct high-impact/low-probability outcomes of OAW by 2100 or earlier and making preliminary estimates of the economic values at risk suggested by existing research and relevant methods (Table 4). These values at risk do not represent the cost of losing a key good or service in the year of the loss, but only what is at risk in today’s terms.

Table 4: Plausible worst-case scenarios and values at risk from OAW
(values in $billions/yr, 2016 dollars)

Resource or service at risk | Scenario | Values at risk
Net primary productivity | Ocean net primary productivity reduced by 16% | $9,232.00
Coral reefs | Loss of at least 50% of current coral reef area | $5,661.70
Coastal infrastructure | Additional SLR of 3 meters via WAIS collapse | $3,561.69
Charismatic species | 25% of charismatic marine species go extinct | $1,104.08
Carbon sequestration | 50% loss of ocean CO2 uptake | $641.16
Mangroves | Loss of at least 15% of current mangrove area | $287.42
Fisheries | 400 million at significantly increased risk of hunger | $245.74
Coastal ecosystems | Marine dead zones expand in area by 50% | $126.82
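As an arithmetic cross-check on the roughly $20 trillion per year figure invoked later in the paper, the rows of Table 4 can simply be summed; a minimal sketch in Python, with the values copied from the table:

```python
# Values at risk from Table 4, in $billions/yr (2016 dollars).
values_at_risk = {
    "Net primary productivity": 9_232.00,
    "Coral reefs": 5_661.70,
    "Coastal infrastructure": 3_561.69,
    "Charismatic species": 1_104.08,
    "Carbon sequestration": 641.16,
    "Mangroves": 287.42,
    "Fisheries": 245.74,
    "Coastal ecosystems": 126.82,
}

total = sum(values_at_risk.values())
print(f"Total values at risk: ${total:,.2f} billion/yr "
      f"(~${total / 1000:.1f} trillion/yr)")
# -> Total values at risk: $20,860.61 billion/yr (~$20.9 trillion/yr)
```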

3.1 Decrease in net primary production by 16%

Primary production is the production of chemical energy in organic compounds by living organisms, or more simply the rate of accumulation of biomass. Some of this biomass is used in respiration, and net primary production measures what is left over. Photosynthesis by oceanic phytoplankton contributes roughly half of the biosphere’s net primary production (NPP) and, as such, is a vital link in the cycling of carbon between living and inorganic stocks.

Many climate models project that NPP will fall dramatically because of the effects of OAW on phytoplankton productivity. Worst-case scenarios predict a global average decline in NPP of 41% by 2100, although a range of 2% to 16% is regarded as more plausible (Randerson and Moore 2015). A preliminary valuation of the top of this range (16%) is relatively straightforward, since NPP is a widely accepted proxy for the total ecosystem service value of marine ecosystems – something valued by Costanza et al. (2014) at $57.4 trillion/yr through calibration of 14 separate studies. A 16% decline in ocean NPP thus translates into a values-at-risk estimate of over $9.2 trillion/yr.
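A minimal sketch of that arithmetic, using the aggregate value quoted above (the small gap versus Table 4’s $9,232 billion reflects rounding in the underlying unit values):

```python
# Total ecosystem service value of marine ecosystems (Costanza et al. 2014).
marine_esv_trillions_per_yr = 57.4   # $trillions/yr, 2016 dollars

npp_decline = 0.16                   # plausible upper-bound NPP decline by 2100

values_at_risk = marine_esv_trillions_per_yr * npp_decline
print(f"NPP values at risk: ~${values_at_risk:.1f} trillion/yr")
# -> ~$9.2 trillion/yr (Table 4 reports $9,232.00 billion/yr)
```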

3.2 Loss of half of all coral reefs

The bleaching and death of coral reef ecosystems from OAW is already underway. As previously noted, the Great Barrier Reef has lost extensive areas due to the combined effects of warming and acidity and some models predict that the process of coral reef accretion may entirely halt by 2050 for many reefs.

In particular, models show that increases in atmospheric CO2 above 500 parts per million and a sea surface temperature rise of over 2 °C relative to today will push carbonate-ion concentrations well below levels needed to sustain the accretion process and “reduce coral reef ecosystems to crumbling frameworks with few calcareous corals” (Hoegh-Guldberg et al. 2007). Less pessimistically, but addressing only the acidification effect, Brander et al. (2012) predict losses in 2100 to range between 16% and 27%. Given this, we split the difference and adopt a plausible scenario of a 50% loss of current coral reef ecosystem extent (14 million hectares) by 2100. Applying the mean value of ecosystem services from coral reefs, $404,407 per hectare (Costanza et al. 2014), yields a current values-at-risk estimate of roughly $5.7 trillion/yr.
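The same single-multiplication arithmetic, sketched with the area and per-hectare value cited above:

```python
coral_area_lost_ha = 14_000_000      # hectares of coral reef extent lost
value_per_ha = 404_407               # mean ecosystem service value, $/ha/yr

values_at_risk = coral_area_lost_ha * value_per_ha
print(f"Coral reef values at risk: ~${values_at_risk / 1e12:.2f} trillion/yr")
# -> ~$5.66 trillion/yr, matching Table 4's $5,661.70 billion/yr
```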

3.3 Additional sea level rise of one meter due to Antarctic ice sheet collapse

Current climate models used in calculating the SCC depict a sea level rise of roughly 0.55 meters by 2100. But new research suggests a much more dire situation due to the effects of ocean warming on Antarctic ice sheets. Through basal melting and the collapse of marine-terminating ice cliffs, Antarctica could contribute more than an additional meter to sea level rise by 2100 (DeConto and Pollard 2016).

To translate this into an economic loss estimate, we first calculated the additional land area inundated by a sea level rise of 1.55 meters (vs. 0.55 meters) for various regions, including the US, southeast Asia and north Australia, the Mediterranean, northwest Europe, the Amazon Delta, east Asia, and south Asia, primarily using figures published by Rowley et al. (2007). That research also reported the population affected in these newly inundated areas.

We use gross domestic product (GDP) per capita to develop an initial estimate of potential economic losses without adaptation from these areas – at least for market-based transactions. (Below, we show an alternative approach, based on adaptation cost.) Using region-specific GDP per capita figures, we estimate a global values-at-risk from newly inundated areas of about $3.6 trillion/yr should the additional meter of sea level rise occur.

3.4 At least 25% of all charismatic marine species go extinct

People of all nationalities and income groups place a value on sustaining the existence of whales, dolphins, polar bears, salmon and other charismatic marine species. The loss of this “existence value” is thus an important category of OAW costs to consider. OAW is likely to cause many treasured species – like the polar bear – to slip into extinction as sea ice, coral reefs, and mangroves are reduced and food chains disrupted. One model predicts that 37% of all marine mammals are at risk of extinction from climate change and other synergistic effects (Davidson et al. 2012). Others predict that the extinction risk is in the 20% to 25% range.

We can derive a ballpark estimate of worst-case global costs by making different assumptions about the share of global income people are willing to pay (WTP) to prevent these outcomes. The range of WTP reported in the literature generally varies from under 1% to about 5% of income for conservation and humanitarian causes. Using the upper-bound figure suggests a values-at-risk of over $1.1 trillion/yr as marine species that people value for their existence decline or go extinct from OAW.

3.5 Carbon sequestration capacity of the oceans declines by 50%

Currently the oceans absorb 25–30% of anthropogenic carbon dioxide emissions, and they have taken up almost half of accumulated emissions since the industrial revolution. Basic physics and standard climate models suggest this capacity will increase in the future simply because the partial pressure of CO2 in the atmosphere (higher) relative to the ocean surface (lower) drives diffusion of CO2 into the water.

But OAW will compromise the oceans’ future ability to capture and store emissions through a complex set of factors, including warming sea surface temperatures, changing wind patterns, changes in ocean currents, and reduction of ventilation or mixing of surface and deep ocean layers. In the North Atlantic, researchers have noted an absolute 50% reduction of CO2 uptake from the mid-1990s to 2002–2005, at least partially in response to these climate change dynamics (Schuster and Watson 2007). Other research has predicted a reduction in cumulative CO2 uptake of 38% and 49% for a doubling and quadrupling of atmospheric CO2 concentrations relative to 1996 levels, respectively (Sarmiento and Le Quéré 1996).

Society’s WTP for carbon sequestration provides the basis for valuing this loss. Kotchen et al. (2013) found that households are, on average, willing to pay between $79 and $89 per year in support of reducing domestic greenhouse gas (GHG) emissions 17% by 2020 – the current US target. This translates into a mean WTP of $134.56 per metric ton CO2, and we use this amount to represent the global value of sequestration. We then apply this amount within a plausible scenario that assumes the ocean’s annual sequestration will decline by 4.8 billion metric tons CO2 (about half of current annual sequestration) by 2100 to arrive at a values-at-risk of roughly $641 billion/yr.
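A minimal sketch of that calculation (the slight difference from Table 4’s $641.16 billion reflects rounding in the stated sequestration loss):

```python
wtp_per_tonne = 134.56               # mean WTP for sequestration, $/metric ton CO2
sequestration_loss_t = 4.8e9         # tCO2/yr lost, ~half of current ocean uptake

values_at_risk = wtp_per_tonne * sequestration_loss_t
print(f"Sequestration values at risk: ~${values_at_risk / 1e9:.0f} billion/yr")
# -> ~$646 billion/yr (Table 4 reports $641.16 billion/yr)
```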

3.6 Loss of at least 15% of current mangrove area

The World Bank has recently modeled the expected loss of mangrove habitat as climate change unfolds. Inundation from sea level rise and an increase in storm intensity are the key drivers. Modeled losses include 100% of coastal mangroves in Mexico, 85% in the Philippines, 59% in Venezuela, 31% in Papua New Guinea and 27% in Myanmar (Blankespoor et al. 2016). These and other regional estimates support a global loss range of 10%–15%, the upper bound being equivalent to a loss of 2.2 million hectares. The mean value of lost ecosystem services, $130,736 per hectare (Costanza et al. 2014), indicates a global values-at-risk of about $287 billion per year.

3.7 400 million people suffer increased risk of food insecurity

Observations and forecasts suggest that OAW will disrupt the supply of food from the sea in many regions and increase the number of people facing food insecurity. The combination of surface water warming, the spread of low-oxygen zones and increasing acidity due to decreasing pH values is altering the body size of individual animals, shifting the habitat ranges of whole stocks, and influencing species abundance and composition, food chain linkages and the dynamics of interactions between individuals within and among species. Potential losses in the oceans’ yield of shellfish, mollusks, and fish for both commercial and subsistence uses have been relatively well studied in the literature (Table 3).

According to the IPCC, climate change puts the 400 million people who depend heavily on fish for food at risk, especially small-scale fishermen in the tropics, because yields there are expected to fall by 40% to 60% (Holmyard 2014). Widespread increases in starvation and malnutrition will materialize unless food distribution systems are expanded to bring replacement food to affected communities without delay when seafood catches decline. And while seafood yields may increase in the high latitudes, this will not solve the food security problem unless fishing infrastructure and associated distribution systems can migrate to those areas as well, and unless the subsistence catch in seafood-dependent regions is replaced with other sources of nutrition.

The welfare loss associated with putting 400 million people at increased risk of food insecurity can also be evaluated from a WTP standpoint. People care about starvation, and regularly donate to organizations feeding the hungry. Studies have consistently documented willingness to pay of 1% or more of income to cut global hunger in half. With about 800 million people affected by hunger worldwide, the 1% figure is a good proxy for the welfare loss associated with putting 400 million more people at risk from OAW. This translates into a global values-at-risk of about $246 billion/yr should the scenario unfold.

3.8 Marine dead zones expand in area by 50%

“Dead zone” is a common term for hypoxic (low-oxygen) areas in the world’s oceans and lakes, caused mainly by nitrogen and phosphorus pollution from agricultural lands, human settlements, and the burning of fossil fuels. Within these dead zones, the oxygen consumed by algae that thrive in polluted waters depletes the oxygen required to sustain most other forms of marine life. Diaz and Rosenberg (2008) estimated the global extent at 245,000 square kilometers.

Continued growth of these marine dead zones undermines global biodiversity conservation goals and poses a significant challenge to meeting the world’s increasing demands for capture fisheries and aquaculture.

CO2 emissions have the potential to increase the extent of oxygen-depleted water by 50%, or 12,250,000 ha, by 2100 (Oschlies et al. 2008). This depletion would occur independently of, but be compounded by, the impacts of other pollutants, so the 50% figure seems a reasonable basis for assessing the risk. The mean value of services derived from marine ecosystems is $10,271/ha/yr (Costanza et al. 2014). Assuming that this value would fall to zero in the new dead zones, the resulting values-at-risk would be about $127 billion per year.
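The two remaining area-based scenarios, mangroves (Section 3.6) and dead zones (this section), follow the same unit-value pattern; a minimal sketch, with small differences from Table 4 attributable to rounding:

```python
# Mangroves: 15% loss of current area at the mean mangrove service value.
mangrove_loss_ha = 2_200_000
mangrove_value_per_ha = 130_736      # $/ha/yr (Costanza et al. 2014)
print(f"Mangroves: ~${mangrove_loss_ha * mangrove_value_per_ha / 1e9:.0f} B/yr")
# -> ~$288 B/yr (Table 4: $287.42 B/yr)

# Dead zones: 50% expansion at the mean marine ecosystem service value.
dead_zone_expansion_ha = 12_250_000
marine_value_per_ha = 10_271         # $/ha/yr (Costanza et al. 2014)
print(f"Dead zones: ~${dead_zone_expansion_ha * marine_value_per_ha / 1e9:.0f} B/yr")
# -> ~$126 B/yr (Table 4: $126.82 B/yr)
```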

4.0 Alternative approaches for incorporating values at risk into the SCC

The social cost of carbon (SCC) represents the increase in net global economic damage expected to result from an increase in atmospheric greenhouse gases (GHGs) equivalent to one metric ton of carbon dioxide (tCO2-e). A reliable monetary estimate of the SCC is essential for measuring, in economic terms, the potential harm from actions that would increase emissions of greenhouse gases or slow their sequestration, and the benefit of actions that would have the opposite effect. It also can broaden public understanding of the risks associated with greenhouse gas emissions by translating scientific descriptions of these risks, such as decreases in arctic ice or reductions in biodiversity, into more familiar, economic terms.

An Interagency Working Group (IWG 2016) of U.S. federal agencies has developed partial estimates of the SCC, focusing on potential costs arising from the effects of climate change on terrestrial portions of the globe: changes in agricultural production, flooding, wildfire, human health, water supply, drought, and the like. With various assumptions about discount rates and other modeling factors, IWG (2016) estimates that emissions over the next few years will have an SCC of about $42 per tCO2-e. This and other efforts to quantify the SCC have not incorporated the social costs of OAW (Howard 2014). As noted in Section 3, these changes in ocean conditions are likely to have profound economic consequences for billions of people, especially the world’s poorest. As such, efforts to integrate OAW costs into the SCC will provide a much better signal of the benefits of climate action and the costs of business as usual.

4.1 Regulatory mandate

Incorporating the economic costs of OAW into the SCC is of interest not just from the perspective of improving the SCC’s rigor. It also is strongly suggested by the regulatory framework governing federal agencies’ use of the SCC in decision-making. There are seven cabinet-level agencies or departments participating in the IWG that are already using or planning to incorporate the SCC into regulatory-impact analysis, including the Environmental Protection Agency and the departments of Energy, Agriculture, and Interior.

All of these agencies are bound by statutes, regulations, and rules governing economic and environmental analysis that require use of best available science, attention to all known benefits and costs of agency actions including non-market effects, treatment of uncertainty, and worst-case scenarios.

For example, Circular A-94, which provides guidance for all federal agencies conducting economic analysis, requires consideration of externalities, monetization of all benefits and costs to the extent practicable, and treatment of uncertainty through the use of expected values (OMB 1992). Executive Order (EO) 12866, as amended by EO 13563, directs agencies conducting benefit-cost analysis “to use the best available techniques to quantify anticipated present and future benefits and costs as accurately as possible.” Regulations for implementing the National Environmental Policy Act, an often-used venue for the SCC, require consideration of worst-case scenarios that have “catastrophic consequences, even if their probability is low” (40 CFR §1502.22).

In the following sections, we offer three possible paths forward for meeting these mandates and present the results of some preliminary estimates of what they imply for the SCC.

4.2 Damage function approach

The IWG’s current approach to calculating the SCC relies on three integrated assessment models (IAMs) known as DICE, Policy Analysis of the Greenhouse Effect (PAGE), and Framework for Uncertainty, Negotiation and Distribution (FUND) (IWG 2016). These models calculate the SCC in five-year increments through 2300 based on functions that express economic costs as a fraction of the gross world product that would be enjoyed in each year in the absence of climate change. In other words, the models compare gross world product with and without climate change. The IWG then divides the present (discounted) value of this difference as it unfolds in five-year increments through 2300 by the increase in cumulative emissions in the prior period to arrive at the marginal SCC estimate. The damage function itself is based on the following relationship, as reported by Ackerman and Stanton (2012):

[1]  R_t = [1 + (T_t / 18.8)^2]^(-1)

In this quadratic equation, the term R_t represents the share of gross world product remaining at year t after accounting for damages D (so that R_t = 1 - D_t) and is solely a function of temperature T, expressed as the increase in degrees Celsius over pre-industrial levels. The basic function has often been criticized not only for excluding major categories of damage but also because it leads to absurd results in the long run. In particular, at an increase of 12 °C the model suggests that economic damages would amount to only about 30% of gross world product, when in fact at this temperature most life on Earth, much less the human economy, may not exist. For this reason, several alternative damage functions have been proposed to account for catastrophic outcomes.
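The long-run behavior the critics object to is easy to reproduce by evaluating equation [1] directly; a minimal sketch:

```python
def remaining_output_share(temp_c: float) -> float:
    """DICE-style damage function: share of gross world product
    remaining at a warming of temp_c degrees C (equation [1])."""
    return 1.0 / (1.0 + (temp_c / 18.8) ** 2)

for t in (2, 4, 6, 12):
    damages = 1.0 - remaining_output_share(t)
    print(f"{t:>2} degrees C -> damages = {damages:.1%} of gross world product")
# At 12 degrees C, damages come to only ~29% of output -- the "absurd
# result" noted in the text.
```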

These alternatives suggest that the SCC could be almost $900 per tCO2-e for emissions in 2010, rising to $1,500 per tCO2-e by 2050 (Ackerman and Stanton 2012).

Regardless of the relevant form of the SCC damage function, incorporating OAW costs into the framework requires recalibrating damages at each point in time (D_t), re-estimating equation [1], and then running the IAMs to produce new SCC results. The full impacts on the SCC can be determined when the IWG updates its estimates. For the purposes of this paper, we use a shortcut to illustrate what the effects on the SCC likely would be. The shortcut involves fitting a simple linear regression to IAM model outputs, with D_t as the independent variable and SCC as the dependent variable, and then using the resulting equations to solve for SCC at OAW-adjusted levels of D_t. Using 2013 public-access versions of DICE, we first estimated two equations based on separate runs of the model (the ‘Copenhagen Accords’ and ‘Limit of 2 °C’ scenarios) and then used the resulting equations – both of which fit well (R^2 > 0.80) as linear models through 2100 – to suggest what the SCC would be in various years if OAW costs were included. We based the OAW-adjusted level of damages (D_t) at 5-year increments through 2100 on the assumption that damages by 2100 would amount to $20 trillion per year, but with a relatively low probability (25%) of occurring. The $20 trillion figure is within the range suggested by Table 4. The expected value – $5 trillion/yr by 2100 on top of the IWG’s baseline estimates – was assumed to increase from zero in 2015 at a constant rate until 2100. We then plugged the resulting baseline-plus-OAW damage figures into the regression equations to translate them into increments to the IWG’s published SCC estimates.
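In outline, the shortcut can be sketched as follows; note that the damage and SCC arrays below are illustrative placeholders standing in for actual DICE output, which is not reproduced here:

```python
import numpy as np

# Placeholder DICE outputs for one scenario run: global damages D_t
# ($trillions/yr) and the corresponding SCC ($/tCO2-e) at 5-year steps.
# These are illustrative stand-ins, NOT actual model output.
damages = np.array([2.0, 3.5, 5.0, 7.0, 9.5, 12.0])
scc = np.array([36.0, 42.0, 46.0, 50.0, 55.0, 60.0])

# Step 1: fit SCC as a linear function of damages (the paper reports
# R^2 > 0.80 for such fits through 2100).
slope, intercept = np.polyfit(damages, scc, 1)

# Step 2: add expected OAW damages -- $20 trillion/yr by 2100 with
# probability 0.25, i.e. an expected value of $5 trillion/yr, phased in
# linearly from zero in 2015.
oaw_expected_2100 = 20.0 * 0.25
damages_adjusted = damages + np.linspace(0.0, oaw_expected_2100, len(damages))

# Step 3: use the fitted equation to solve for the OAW-adjusted SCC.
scc_adjusted = slope * damages_adjusted + intercept
print(np.round(scc_adjusted, 1))
```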

The results of this simplified approach are reported in Table 5. Column one reports the IWG’s baseline SCC figures at a 3% discount rate in 2007 dollars. Columns two and three add modeled increments to the SCC to account for OAW using, respectively, the Limit2 and Copenhagen scenarios of DICE. The latter suggests an SCC rising from $60 per tCO2-e in 2015 to $101 per tCO2-e in 2050; the former suggests a range of $96 to $281 per tCO2-e.

Together, these columns suggest that adding OAW costs would yield an SCC 1.5 to 4.0 times the existing federal baseline.

Table 5: Social cost of carbon – modified damage function approach

($2007, 3% discount rate, OAW damages at $20 trillion in 2100 with probability=0.25)

 

Year | IWG baseline ($/mt CO2) | OAW/DICE Limit2 ($/mt CO2) | OAW/DICE Copen ($/mt CO2)
2015 | $36 | $96 | $60
2020 | $42 | $161 | $75
2025 | $46 | $205 | $84
2030 | $50 | $235 | $91
2035 | $55 | $256 | $96
2040 | $60 | $269 | $98
2045 | $64 | $277 | $100
2050 | $69 | $281 | $101

4.3 Replacement or adaptation cost approach

An entirely different approach, and one perhaps better suited to the damages associated with OAW, is based on the replacement or adaptation cost of losing key ecosystem goods and services and of replacing infrastructure. Thus, as food from the sea declines, there will be a replacement cost associated with providing alternative nutrition sources from the land. The tally of costs should include both the financial outlays needed and any additional external damages associated with the substitutes.

Increasing agricultural output to make up for declining seafood consumption, for instance, may come at a steep cost to remaining native terrestrial ecosystems and the goods and services they provide if additional land needs to be put into production.

Current replacement or adaptation cost figures – and these may certainly change over time as new information permits more refined estimates – can then be used as date-certain investment targets achieved by a stream of annual investments that begins today. Dividing the necessary level of investment by emissions in a given year then represents what charge needs to be made on each ton of carbon dioxide released in order to eliminate the externalized cost burden. This approach may be better suited for costs of OAW because most of the costs are non-market in nature.

It is also often easier to estimate the cost of replacing these lost services than the economic damage their loss generates, given the inherent uncertainty of non-market valuation techniques.

As an example, consider coastal infrastructure that will need to be abandoned and replaced if sea level were to rise 1.55 meters by 2100 (See Section 3.3). As previously noted, this scenario entails a risk of losses of $3.6 trillion/yr – a figure that reflects the current value of GDP in areas that would be newly inundated above and beyond a sea level rise of 0.55 meters, the baseline IWG assumption. As a general rule of thumb, economists assume that the value of the underlying capital stock is roughly ten times the annual GDP produced by a given area. In this case, the challenge would be replacing roughly $36 trillion in infrastructure.

If we select 2100 as the date certain by which these investments need to be completed, it implies an annualized investment stream from now until that date of $2.5 trillion/yr, taking into account an opportunity cost of capital (OCC) of 7% – the standard now used by many public agencies when making large-scale infrastructure investment decisions. The OCC reflects the opportunity cost of taking capital out of more productive investments elsewhere. Dividing this annual investment need by current global emissions suggests an increment of about $70 to the current SCC to account for the externalized debt obligation associated with replacing coastal infrastructure at a sea level rise of 1.55 meters rather than 0.55 meters by 2100.
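The annualized figure follows from standard amortization of a present liability at the 7% OCC; a minimal sketch, in which the current-emissions divisor of roughly 36 billion tCO2/yr is our own illustrative assumption:

```python
def annual_payment(liability: float, rate: float, years: int) -> float:
    """Amortize a present capital liability into equal annual payments."""
    return liability * rate / (1.0 - (1.0 + rate) ** -years)

liability = 36e12        # $36 trillion of coastal infrastructure to replace
occ = 0.07               # opportunity cost of capital
years = 84               # roughly the mid-2010s through 2100

stream = annual_payment(liability, occ, years)
print(f"Annual investment stream: ~${stream / 1e12:.1f} trillion/yr")

emissions = 36e9         # assumed current global emissions, tCO2/yr
print(f"SCC increment: ~${stream / emissions:.0f} per tCO2")
# -> ~$2.5 trillion/yr and ~$70 per tCO2, as in the text.
```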

Additional replacement cost increments to the SCC can be made for dwindling supplies of food from the sea, lost carbon sequestration capacity (the alternative here may be reforestation), and perhaps other ecosystem services that have functional replacements that are relatively easy to identify and cost out. Adding in these other replacement cost figures would likely justify increasing the SCC by a factor of two or more.

4.4 Averted-risk approach

People pay to reduce risk. Of course, this is the bread and butter of the insurance industry. But it is also one of the most basic themes in welfare economics, in particular, the branch of economics related to risk and uncertainty. Models of decision making under risk and uncertainty, including the payments of premiums to avoid or reduce risks, may be an extremely fruitful approach to the SCC since so many of the damages expected are potentially catastrophic but highly uncertain (Botzen 2013).

An averted risk approach would peg the SCC to what society is willing to pay today (WTP) to reduce the risk of future economic damages. Stated as a cost, it represents the welfare loss associated with having a large share of economic activity at risk from climate change.

Basing the SCC on WTP to avert or reduce risk has advantages over damage-function-based approaches. For example, current damage function models are based on certainty equivalents, when in reality uncertainty over whether a specific damage (e.g., catastrophic sea level rise associated with the collapse of West Antarctic ice sheets) will occur, as well as over the magnitude of such damages, is the norm. Of course, this trades one complex task for another – modeling probabilities rather than damages – but it is nonetheless more tractable, especially if the probabilities are based on subjective expert assessments. In this way, the averted-risk approach need not be nearly as sophisticated or complex as the existing IAMs.

The standard method for determining WTP to reduce risk is based on expected utility theory. Figure 1 illustrates calculation of the risk premium an individual is willing to pay, shown as the line connecting points c and d, or Y3–Y4. It involves three key steps. First, it requires an assumption regarding the shape of an individual’s (or, in our case, society’s) utility function. Utility is an economic concept that hypothetically measures the enjoyment or wellbeing associated with a given level of income, wealth, or quantity of a good or service. For our purposes, we adopt one of the standard forms for the utility function of a risk-averse person or population: U = ln(W), where W is wealth (x-axis) and U is the level of utility associated with that level of wealth (y-axis). The declining marginal utility of wealth is reflected in the concave shape of the curve, a graphical representation of the fact that, as wealth increases, a given increment to wealth has less of an impact on wellbeing.

The second step depicts the loss scenario, should it unfold. The person currently enjoys a level of wealth W and utility U1, but faces a 50/50 chance that a catastrophic event will reduce her wealth by L to the point W-L, with a utility of U2. Given this risk, the expected wealth and utility in the next time period are given by the points Y3 and U4 – a weighted average assigning equal probability to the two outcomes of next-period wealth. The third step calculates the risk premium, which reflects what society is willing to pay to have an intermediate level of wealth in the next period (Y4) for certain rather than an uncertain W. The calculations are relatively straightforward, and the results vary with the shape of the assumed utility function, the probability of loss, and the magnitude of loss.

Table 6: Risk premiums and SCC increments for three OAW loss scenarios

Parameters | Scenario 1 | Scenario 2 | Scenario 3
Existing wealth (GWP, $trillions) | $75.80 | $75.80 | $75.80
Nominal loss from OAW ($trillions/yr) | $20.00 | $20.00 | $20.00
Year of loss | 2050 | 2100 | 2100
Discount rate | 0% | 1% | 3%
Present value loss ($trillions/yr) | $20.00 | $8.58 | $1.62
Risk of loss | 0.25 | 0.50 | 0.75
Expected utility | 4.25 | 4.27 | 4.31
Certainty equivalent wealth ($trillions) | $70.21 | $71.38 | $74.58
Risk premium ($trillions) | $5.59 | $4.42 | $1.22
SCC increment ($/tCO2-e) | $155.66 | $123.15 | $33.96

Table 6 shows the results of this simplified analysis for three loss scenarios, each with OAW losses of $20 trillion (as suggested by Table 4) but with different assumptions about when the loss will occur, the social discount rate (converting future losses into present values), and the probability of the loss. The resulting risk premiums, in trillions of dollars per year, are then divided by current emissions to suggest the increment to today’s SCC needed to internalize the welfare loss associated with the risk of catastrophic OAW damages by 2050 or 2100. The results justify an increment of $33.96 to $155.66 to the SCC for emissions over the next few years. This translates into an SCC 1.8 to 4.7 times the current federal estimate.
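Under the log-utility assumption described above, the risk premiums in Table 6 can be reproduced in a few lines; a minimal sketch, in which the emissions divisor of roughly 36 billion tCO2/yr is again our own illustrative assumption:

```python
import math

def risk_premium(wealth: float, loss: float, prob: float) -> float:
    """WTP to avoid a gamble: current wealth minus the certainty
    equivalent of the lottery, under log utility U = ln(W)."""
    expected_utility = (1 - prob) * math.log(wealth) + prob * math.log(wealth - loss)
    certainty_equivalent = math.exp(expected_utility)
    return wealth - certainty_equivalent

gwp = 75.80        # gross world product, $trillions
emissions = 36e9   # assumed current global emissions, tCO2/yr

# (present-value loss in $trillions/yr, probability of loss), per Table 6
scenarios = [(20.00, 0.25), (8.58, 0.50), (1.62, 0.75)]

for pv_loss, prob in scenarios:
    premium = risk_premium(gwp, pv_loss, prob)
    increment = premium * 1e12 / emissions
    print(f"risk premium ${premium:.2f}T -> SCC increment ${increment:.2f}/tCO2")
# -> premiums of ~$5.59T, $4.42T, $1.22T; the SCC increments land near
# Table 6's values (the exact emissions divisor the authors used is not stated).
```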

5.0 Conclusions

Ocean acidification and warming (OAW) has the potential to put the livelihoods of billions of people at risk, accelerate the extinction of marine species, and damage critical life support systems of the planet, including the production of adequate levels of oxygen for life on Earth to exist. Literature on the economic toll of OAW is relatively sparse compared with other aspects of climate change. As a result, past efforts to estimate the SCC have excluded these costs by treating them as zero. Here, we argue that there now exists sufficient information to develop non-zero estimates of the OAW component of the SCC. Moreover, incorporating such estimates would be consistent with regulatory requirements to use best available science and take note of high-impact/low-probability scenarios.

There are at least three approaches for doing so. The first is simply to fit plausible scenarios of OAW and the likely magnitude of economic costs into integrated assessment models (IAMs) used by federal agencies. The IAMs model year-by-year net economic damages as a quadratic function of temperature and then translate the present value damage stream into an estimate of the SCC for emissions today and in future years. The key conclusion we offer here is that while OAW damages are highly uncertain, they can nonetheless be input into the IAM framework as expected (probability-weighted) values.

The second is an entirely different approach that requires maintaining an ongoing inventory of necessary capital investments needed to replace or adapt to ecosystem goods, services, and infrastructure likely to be lost to OAW. Under this approach, the SCC would reflect what amount ought to be charged to emissions, beginning now, to generate an annual investment stream needed to meet long term replacement or adaptation goals. If adaptation planning is begun in earnest today, there is no reason why this approach could not supplement the SCC’s damage function basis.

The third mimics the insurance industry to estimate society’s willingness to pay to reduce or eliminate future OAW risks. We find that this approach is, perhaps, the most suitable for OAW given the fact that economic costs are potentially catastrophic in value but highly uncertain. Standard expected utility theory provides the basis for current estimates of WTP and resulting increments to SCC needed to capture the welfare losses associated with having these economic risks on the books.

Taken together, our preliminary results suggest that SCC should be 1.5 to 4.7 times the current federal rate, or in the $60 to $200 per metric ton CO2-e range, just to account for the costs of OAW.

For references cited, see the authors’ manuscript, available here.

The “Sweetcakes by Melissa” opinion from the Oregon Court of Appeals upholds the Bureau of Labor and Industries (BOLI) sanctions against the bakery that refused to bake a wedding cake for a lesbian couple. In terms of a tweet, you might summarize the opinion as saying “Running a public accommodation means accommodating everyone the same.”

The heart of the opinion is the court wrestling with who gets the benefit of the doubt: is it the lesbian couple that just wanted a wedding cake like their straight friends and family could get? If so, then the business almost surely loses.

Or is it the bakery, which creates a custom product with undeniably artistic elements, rather than operating a Costco-style factory for standardized baked products? In this case, the court would look very hard at the law as a burden on the artist’s right to speak as an artist, which includes the right not to be compelled to speak as well.

The court ends up saying that, yeah, while there are artistic elements in this custom bakery business, there’s not enough to take it out of the world where it’s a business first, with artistry second. And, as a business that serves the general public, open to all, it’s subject to all the usual regulations unless it can show some extraordinary reason why it would be impossible to operate if it had to treat all customers the same, straight and LGBTQ alike.

That’s pretty much the ballgame right there. Below is that key section, lightly edited for easier reading, mostly to remove excessive citations to other decisions. (Taken from Oregon Appeals Reports, Vol. 289, starting at page 507; the selection below begins at page 517.)

  1. Meaning and scope of ORS 659A.403

In their first assignment of error, the Kleins argue that BOLI misinterpreted ORS 659A.403—specifically, what it means to deny equal service “on account of” sexual orientation. According to the Kleins, they did not decline service to the complainants “on account of” their sexual orientation; rather, “they declined to facilitate the celebration of a union that conveys messages about marriage to which they do not [subscribe] and that contravene their religious beliefs.” BOLI rejected that argument, reasoning that the Kleins’ “refusal to provide a wedding cake for Complainants because it was for their same-sex wedding was synonymous with refusing to provide a cake because of Complainants’ sexual orientation.” We, like BOLI, are not persuaded that the text, context, or history of ORS 659A.403 contemplates the distinction proposed by the Kleins. . . .

The text of ORS 659A.403(1) leaves little doubt as to its breadth and operation. It provides, in full:

“(1) Except as provided in subsection (2) of this section, all persons within the jurisdiction of this state are entitled to the full and equal accommodations, advantages, facilities and privileges of any place of public accommodation, without any distinction, discrimination or restriction on account of race, color, religion, sex, sexual orientation, national origin, marital status or age if the individual is of age, as described in this section, or older.” (Emphases added.)

The phrase “on account of” is unambiguous: In ordinary usage, it is synonymous with “by reason of” or “because of.” Webster’s Third New Int’l Dictionary 13 (unabridged ed 2002); id. at 194 (defining “because of” as “by reason of : on account of”).

And it has long been understood to carry that meaning in the context of antidiscrimination statutes. E.g., 18 USC § 242 (1948) (making it unlawful to deprive a person of “any rights, privileges, or immunities secured or protected by the Constitution or laws of the United States, or to different punishments, pains, or penalties, on account of such inhabitant being an alien, or by reason of his color, or race” (emphases added)).

Thus, by its plain terms, the statute requires only that the denial of full and equal accommodations be causally connected to the protected characteristic or status—in this case, “sexual orientation,” which is defined to mean “an individual’s actual or perceived heterosexuality, homosexuality, bisexuality or gender identity, regardless of whether the individual’s gender identity, appearance, expression or behavior differs from that traditionally associated with the individual’s sex at birth.” . . .

In this case, Sweetcakes provides a service—making wedding cakes—to heterosexual couples who intend to wed, but it denies the service to same-sex couples who likewise intend to wed. Under any plausible construction of the plain text of ORS 659A.403, that denial of equal service is “on account of,” or causally connected to, the sexual orientation of the couple seeking to purchase the Kleins’ wedding-cake service.

The Kleins do not point to any text in the statute or provide any context or legislative history suggesting that we should depart from the ordinary meaning of those words. What they argue instead is that the statute is silent as to whether it encompasses “gay conduct” as opposed to sexual orientation. The Kleins state that they are willing to serve homosexual customers, so long as those customers do not use the Kleins’ cakes in celebration of same-sex weddings. As such, according to the Kleins, they do not discriminate against same-sex couples “on account of” their status; rather, they simply refuse to provide certain services that those same-sex couples want. The Kleins contend that BOLI’s “broad equation of celebrations (weddings) of gay conduct (marriage) with gay status rewrites and expands Oregon’s public accommodations law.”

We see no evidence that the drafters of Oregon’s public accommodations laws intended that type of distinction between status and conduct. First, there is no reason to believe that the legislature intended a “status/conduct” distinction specifically with regard to the subject of “sexual orientation.” When the legislature in 2007 added “sexual orientation” to the list of protected characteristics in ORS 659A.403, Or Laws 2007, ch 100, § 5, it was unquestionably aware of the unequal treatment that gays and lesbians faced in securing the same rights and benefits as heterosexual couples in committed relationships. During the same session that the legislature amended ORS 659A.403 (and other antidiscrimination statutes) to include “sexual orientation,” it adopted the Oregon Family Fairness Act, which recognized the “numerous obstacles” that gay and lesbian couples faced and was intended to “extend[] benefits, protections and responsibilities to committed same-sex partners and their children that are comparable to those provided to married individuals and their children by the laws of this state.” Or Laws 2007, ch 99, §§ 2(3), (5). To that end, section 9 of that law provided:

“Any privilege, immunity, right or benefit granted by statute, administrative or court rule, policy, common law or any other law to an individual because the individual is or was married, or because the individual is or was an in-law in a specified way to another individual, is granted on equivalent terms, substantive and procedural, to an individual because the individual is or was in a domestic partnership or because the individual is or was, based on a domestic partnership, related in a specified way to another individual.”

Or Laws 2007, ch 99, § 9(1).

The Kleins have not provided us with any persuasive explanation for why the legislature would have intended to grant equal privileges and immunities to individuals in same-sex relationships while simultaneously excepting those committed relationships from the protections of ORS 659A.403. [fn 5]

  [fn 5] At the time that the Oregon Family Fairness Act was enacted, Article XV, section 5a, of the Oregon Constitution defined “marriage” to be limited to the union of one man and one woman, and the Oregon Family Fairness Act expressly states that it “cannot bestow the status of marriage on partners in a domestic partnership.” Or Laws 2007, ch 99, § 2(7). Nonetheless, the act contemplated, but did not require, the performance of “solemnization ceremony[ies]” and left it to the “dictates and conscience of partners entering into a domestic partnership to determine whether to seek a ceremony or blessing over the domestic partnership.” Or Laws 2007, ch 99, § 2(8). Thus, the legislature was aware that same-sex couples would be participating in wedding ceremonies, and when it simultaneously chose to extend the protections of ORS 659A.403 to cover sexual orientation, there is no reason to believe that it intended to exempt places of public accommodation— such as cake shops, dress shops, or flower shops—so as to permit them to discriminate with regard to services related to those anticipated ceremonies.

Nor does the Kleins’ proposed distinction find support in the context or history of ORS 659A.403 more generally. As originally enacted in 1953, the statute (then numbered ORS 30.670) prohibited “any distinction, discrimination or restriction on account of race, religion, color or national origin.” Or Laws 1953, ch 495, § 1. One of the purposes of the statute, the Supreme Court has observed, was “to prevent ‘operators and owners of businesses catering to the general public to subject Negroes to oppression and humiliation.’ ” Schwenk v. Boy Scouts of America, 275 Or 327, 332, 551 P2d 465 (1976) (quoting a statement by one of the principal sponsors of the statute (emphasis removed)).

Yet, under the distinction proposed by the Kleins, owners and operators of businesses could continue to oppress and humiliate black people simply by recasting their bias in terms of conduct rather than race. For instance, a restaurant could refuse to serve an interracial couple, not on account of the race of either customer, but on account of the conduct—interracial dating—to which the proprietor objected. In the absence of any textual or contextual support, or legislative history on that point, we decline to construe ORS 659A.403 in a way that would so fundamentally undermine its purpose. See King v. Greyhound Lines, Inc., 61 Or App 197, 203, 656 P2d 349 (1982) (adopting an interpretation of Oregon’s public accommodation laws that recognizes that “the chief harm resulting from the practice of discrimination by establishments serving the general public is not the monetary loss of a commercial transaction or the inconvenience of limited access but, rather, the greater evil of unequal treatment, which is the injury to an individual’s sense of self-worth and personal integrity”).

Tellingly, the Kleins’ argument for distinguishing between “gay conduct” and sexual orientation is rooted in principles that they derive from United States Supreme Court cases rather than anything in the text, context, or history of ORS 659A.403. Specifically, the Kleins draw heavily on the Supreme Court’s reasoning in Bray v. Alexandria Women’s Health Clinic, 506 US 263, 113 S Ct 753, 122 L Ed 2d 34 (1993), which concerned the viability of a federal cause of action under 42 USC section 1985(3) against persons obstructing access to abortion clinics. In that case, the Supreme Court addressed, among other things, whether the petitioners’ opposition to abortion reflected an animus against women in general—that is, whether, because abortion is “an activity engaged in only by women, to disfavor it is ipso facto to discriminate invidiously against women as a class.” Id. at 271 (footnote omitted).

In rejecting that theory of ipso facto discrimination, the Court observed:

“Some activities may be such an irrational object of disfavor that, if they are targeted, and if they also happen to be engaged in exclusively or predominantly by a particular class of people, an intent to disfavor that class can readily be presumed. A tax on wearing yarmulkes is a tax on Jews. But opposition to voluntary abortion cannot possibly be considered such an irrational surrogate for opposition to (or paternalism towards) women. Whatever one thinks of abortion, it cannot be denied that there are common and respectable reasons for opposing it, other than hatred of, or condescension toward (or indeed any view at all concerning), women as a class—as is evident from the fact that men and women are on both sides of the issue, just as men and women are on both sides of petitioners’ unlawful demonstrations.”

The Kleins argue that “[t]he same is true here. Whatever one thinks of same-sex weddings, there are respectable reasons for not wanting to facilitate them.” They contend that BOLI simply “ignores Bray” and that BOLI’s construction of ORS 659A.403 “fails the test for equating conduct with status” that the Supreme Court announced in that case.

Bray, which involved a federal statute, does not inform the question of what the Oregon legislature intended when it enacted ORS 659A.403. But beyond that, Bray does not articulate a relevant test for analyzing the issue presented in this case. Bray addressed the inferences that could be drawn from opposition to abortion as a “surrogate” for sex-based animus, and it was in that context that the Supreme Court described “irrational object[s] of disfavor” that “happen to be engaged in exclusively or predominantly by a particular class of people,” 506 US at 270, such that intent to discriminate against that class can be presumed.

Here, by contrast, there is no surrogate. The Kleins refused to make a wedding cake for the complainants precisely and expressly because of the relationship between sexual orientation and the conduct at issue (a wedding). And, where a close relationship between status and conduct exists, the Supreme Court has repeatedly rejected the type of distinction urged by the Kleins. . . . We therefore reject the Kleins’ proposed distinction between status and conduct, and we hold that their refusal to serve the complainants is the type of discrimination “on account of * * * sexual orientation” that falls within the plain meaning of ORS 659A.403. [fn 6]

[fn 6] In doing so, we join other courts that have declined to draw a “status/conduct” distinction similar to that urged by the Kleins. See, e.g., State v. Arlene’s Flowers, Inc., 187 Wash 2d 804, 823, 389 P3d 543, 552 (2017) (stating that “numerous courts—including our own—have rejected this kind of status/conduct distinction in cases involving statutory and constitutional claims of discrimination,” and citing cases to that effect).

The reasons for the Kleins’ discrimination on account of sexual orientation—regardless of whether they are “common and respectable” within the meaning of Bray— raise questions of constitutional law, not statutory interpretation. The Kleins, in the remainder of their argument concerning the construction of ORS 659A.403, urge us to consider those constitutional questions and to interpret the statute in a way that avoids running afoul of the “Speech and Religion Clauses of the Oregon and United States constitutions.” . . .  Here, the Kleins have not made that threshold showing of ambiguity. Accordingly, we affirm BOLI’s order with regard to its construction of ORS 659A.403, and we turn to the merits of the Kleins’ constitutional arguments.

  1. Constitutional challenges to ORS 659A.403

The Kleins invoke both the United States and the Oregon constitutions in arguing that the final order violates their rights to free expression and the free exercise of their religion. Oregon courts generally seek to resolve arguments under the state constitution before turning to the federal constitution. . . . In this case, however, the Kleins draw almost entirely on well-developed federal constitutional principles, and they do not meaningfully develop any independent state constitutional theories. Accordingly, in the discussion that follows, we address the Kleins’ federal constitutional arguments first and their state arguments second. . . .

  1. Free expression

The Kleins argue that BOLI’s final order violates their First Amendment right to freedom of speech. BOLI argues that the order simply enforces ORS 659A.403, a content-neutral regulation of conduct that does not implicate the First Amendment at all. And each side argues that United States Supreme Court precedent is decisively in its favor.

The issues before us arise at the intersection of two competing principles: the government’s interest in promoting full access to the state’s economic life for all of its citizens, which is expressed in public accommodations statutes like ORS 659A.403, and an individual’s First Amendment right not to be compelled to express or associate with ideas with which she disagrees. Although the Supreme Court has grappled with that intersection before, it has not yet decided a case in this particular context, where the public accommodation at issue is a retail business selling a service, like cake-making, that is asserted to involve artistic expression. [fn 7]

[fn 7] The issue is currently before the Supreme Court in a case involving a Colorado bakery that similarly refused to make a wedding cake for a same-sex couple. Craig v. Masterpiece Cakeshop, Inc.

It is that asserted artistic element that complicates the First Amendment analysis—and, ultimately, distinguishes this case from the precedents on which the parties rely. Generally speaking, the First Amendment does not prohibit government regulation of “commerce or conduct” whenever such regulation indirectly burdens speech. . . .

When, however, the government regulates activity that involves a “significant expressive element,” some degree of First Amendment scrutiny is warranted. Arcara v. Cloud Books, Inc., 478 US 697, 706, 106 S Ct 3172, 92 L Ed 2d 568 (1986); id. at 705 (reasoning that the “crucial distinction” between government actions that trigger First Amendment scrutiny and those that do not is whether the regulated activity “manifests” an “element of protected expression”).

In the discussion that follows, we conclude that the Kleins have not demonstrated that their wedding cakes invariably constitute fully protected speech, art, or other expression, and we therefore reject the Kleins’ position that we must subject BOLI’s order to strict scrutiny under the First Amendment. At most, the Kleins have shown that their cake-making business includes some arguably expressive elements as well as non-expressive elements, so as to trigger intermediate scrutiny. We assume (without deciding) that that is true, and then conclude that BOLI’s order nonetheless survives intermediate scrutiny because any burden on the Kleins’ expressive activities is no greater than is essential to further Oregon’s substantial interest in promoting the ability of its citizens to participate equally in the marketplace without regard to sexual orientation.

(1)     “Public accommodations” and the First Amendment

Oregon enacted its Public Accommodation Act in 1953. See Or Laws 1953, ch 495. The original act guaranteed the provision of “full and equal accommodations, advantages, facilities and privileges * * * without any distinction, discrimination or restriction on account of race, religion, color, or national origin.” Former ORS 30.670 (1953), renumbered as ORS 659A.403 (2001). It applied to “any hotel, motel or motor court, any place offering to the public food or drink for consumption on the premises, or any place offering to the public entertainment, recreation or amusement.” Former ORS 30.675 (1953), renumbered as ORS 659A.400 (2001).

Oregon’s statute was thus similar in scope to Title II of the federal Civil Rights Act of 1964, which prohibits discrimination “on the ground of race, color, religion, or national origin” in three broad categories of public accommodations: those that provide lodging to transient guests, those that sell food for consumption on the premises, and those that host “exhibition[s] or entertainment,” such as theaters and sports arenas. Pub L 88-352, Title II, § 201, 78 Stat 243 (1964), codified as 42 USC § 2000a(b). When the United States Supreme Court upheld the public accommodations provisions of Title II in 1964, it observed that the constitutionality of state public accommodations laws at that point had remained “unquestioned,” citing previous instances in which it had “rejected the claim that the prohibition of racial discrimination in public accommodations interferes with personal liberty.” Heart of Atlanta Motel, Inc. v. United States, 379 US 241, 260-61, 85 S Ct 348, 13 L Ed 2d 258 (1964).

Over two decades, the Oregon legislature incrementally expanded the definition of “place of public accommodation” to include “trailer park[s]” and “campground[s],” Or Laws 1957 ch 724, § 1, and then to places “offering to the public food or drink for consumption on or off the premises,” Or Laws 1961, ch 247, § 1 (emphasis added). Then, in 1973, the legislature significantly expanded the definition to include “any place or service offering to the public accommodations, advantages, facilities or privileges whether in the nature of goods, services, lodgings, amusements or otherwise,” subject to an exception for “any institution, bona fide club or place of accommodation which is in its nature distinctly private.” Or Laws 1973, ch 714, § 2 (emphasis added). Other states similarly enlarged the scope of their public-accommodations laws over time. See, e.g., Hurley v. Irish-American Gay, Lesbian and Bisexual Group of Boston, Inc., 515 US 557, 571-72, 115 S Ct 2338, 132 L Ed 2d 487 (1995) (describing the ways in which the Massachusetts legislature had “broaden[ed] the scope of” the state’s public accommodations law); Roberts v. United States Jaycees, 468 US 609, 624, 104 S Ct 3244, 82 L Ed 2d 462 (1984) (observing that Minnesota had “progressively broadened the scope of its public accommodations law in the years since it was first enacted, both with respect to the number and type of covered facilities and with respect to the groups against whom discrimination is forbidden”).

First Amendment challenges to the application of public-accommodations laws—and other forms of anti- discrimination laws—have been mostly unsuccessful. See, e.g., Roberts, 468 US at 625-29 (rejecting argument that a private, commercial association had a First Amendment right to exclude women from full membership); Hishon v. King & Spalding, 467 US 69, 78, 104 S Ct 2229, 81 L Ed 2d 59 (1984) (rejecting law firm’s claim that prohibiting the firm from discriminating on the basis of gender in making partnership decisions violated members’ First Amendment rights to free expression and association); Runyon v. McCrary, 427 US 160, 175-76, 96 S Ct 2586, 49 L Ed 2d 415 (1976) (rejecting private schools’ claim that they had a First Amendment associational right to discriminate on the basis of race in admitting students). The United States Supreme Court has repeatedly acknowledged that public accommodations statutes in particular are “well within the State’s usual power to enact when a legislature has reason to believe that a given group is the target of discrimination.” Hurley, 515 US at 572. The Court has further acknowledged that states enjoy “broad authority to create rights of public access on behalf of [their] citizens,” in order to ensure “wide participation in political, economic, and cultural life” and to prevent the “stigmatizing injury” and “the denial of equal opportunities” that accompanies invidious discrimination in public accommodations. Roberts, 468 US at 625. And the Court has recognized a state’s interest in preventing the “unique evils” that stem from “invidious discrimination in the distribution of publicly available goods, services, and other advantages.” Id. at 628.

However, as states adopted more expansive definitions of “places of public accommodation,” their anti-discrimination statutes began to reach entities that were different in kind from the commercial establishments that were the original target of public accommodations laws. As a result, on two occasions, the Court held that the application of such laws violated the First Amendment.

First, in Hurley, the Court held that Massachusetts’s public accommodations law could not be applied to require a St. Patrick’s Day parade organizer to include a gay-rights group in its parade. 515 US at 573. Observing that state public accommodations laws do not, “as a general matter, violate the First or Fourteenth Amendments,” the Court went on to conclude that the Massachusetts law had been “applied in a peculiar way” to a private parade, a result that “essentially requir[ed]” the parade organizers to “alter the expressive content of their parade” by accommodating a message (of support for gay rights) that they did not want to include. Id. at 572-73 (emphasis added). The Court further reasoned that such an application of the statute “had the effect of declaring the [parade] sponsors’ speech itself to be the public accommodation,” which violated “the fundamental rule of protection under the First Amendment, that a speaker has the autonomy to choose the content of his own message.” Id. at 573.

Following Hurley, the Court decided Boy Scouts of America v. Dale, 530 US 640, 120 S Ct 2446, 147 L Ed 2d 554 (2000) (Dale), in which it held that applying New Jersey’s public accommodations law to require the Boy Scouts to admit a gay scoutmaster violated the group’s First Amendment right to freedom of association. The Court observed that, over time, public accommodations laws had been expanded to cover more than just “traditional places of public accommodation—like inns and trains.” Id. at 656. According to the Court, New Jersey’s definition of a “place of public accommodation” was “extremely broad,” particularly because the state had “applied its public accommodations law to a private entity without even attempting to tie the term ‘place’ to a physical location.” Id. at 657. The Court distinguished Dale from prior cases in which it held that public accommodations laws posed no First Amendment problem, observing that, in those prior cases, the law’s enforcement did not “materially interfere with the ideas that the organization sought to express.” Id.

Thus, Hurley and Dale demonstrate that the First Amendment may stand as a barrier to the application of state public accommodations laws when such laws are applied to “peculiar” circumstances outside of the usual commercial context. See Dale, 530 US at 657 (“As the definition of ‘public accommodation’ has expanded from clearly commercial entities, such as restaurants, bars, and hotels, to membership organizations such as the Boy Scouts, the potential for conflict between state public accommodations laws and the First Amendment rights of organizations has increased.”).

In this case, the Kleins concede that Sweetcakes is a “place of public accommodation” under Oregon law because it is a retail bakery open to the public. But the Kleins contend that, as in Hurley and Dale, application of ORS 659A.403 in this case violates their First Amendment rights.

(2)     First Amendment precedent

BOLI and the Kleins offer competing United States Supreme Court precedent that, they argue, clearly requires a result in their respective favors. We begin our analysis by explaining why we do not regard the authorities cited by the parties as controlling.

The Kleins argue that the effect of BOLI’s final order is to compel them to express a message—a celebration of same-sex marriage—with which they disagree. They primarily draw on two interrelated lines of First Amendment cases that, they contend, preclude the application of ORS 659A.403 here.

First, the Kleins rely on cases holding that the government may not compel a person to speak or promote a government message with which the speaker does not agree. See, e.g., West Virginia State Board of Education v. Barnette, 319 US 624, 63 S Ct 1178, 87 L Ed 1628 (1943) (holding that a state may not sanction a public-school student or his parents for the student’s refusal to recite the Pledge of Allegiance or salute the flag of the United States); Wooley v. Maynard, 430 US 705, 97 S Ct 1428, 51 L Ed 2d 752 (1977) (holding that New Hampshire could not force a person to display the “Live Free or Die” state motto on his license plate).

We do not consider that line of cases to be helpful here. In “compelled speech” cases like Barnette and Wooley, the government prescribed a specific message that the individual was required to express. ORS 659A.403 does nothing of the sort; it is a content-neutral regulation that is not directed at expression at all. It does not even regulate cake-making; it simply prohibits the refusal of service based on membership in a protected class. The United States Supreme Court has repeatedly held that such content-neutral regulations—although they may have incidental effects on an individual’s expression—are an altogether different, and generally permissible, species of government action than a regulation of speech. See Rumsfeld v. Forum for Academic & Institutional Rights, Inc., 547 US 47, 62, 126 S Ct 1297, 164 L Ed 2d 156 (2006) (FAIR) (“[I]t has never been deemed an abridgement of freedom of speech or press to make a course of conduct illegal merely because the conduct was in part initiated, evidenced, or carried out by means of language, either spoken, written, or printed.” (Internal quotation marks omitted.)); R. A. V. v. St. Paul, 505 US 377, 385, 112 S Ct 2538, 120 L Ed 2d 305 (1992) (“We have long held * * * that nonverbal expressive activity can be banned because of the action it entails, but not because of the ideas it expresses * * *.”). In short, we reject the Kleins’ analogy of this case to Barnette and Wooley.

Second, the Kleins rely heavily on Hurley and Dale, which, as discussed above, invalidated the application of public accommodations statutes in “peculiar” circumstances outside of the usual commercial context. The difficulty with that analogy is that this case does involve the usual commercial context; Sweetcakes is not a private parade or membership organization, and it is hardly “peculiar,” as that term was used in Hurley, to apply ORS 659A.403 to a retail bakery like Sweetcakes that is open to the public and that exists for the purpose of engaging in commercial transactions. Indeed, the Kleins accept the premise that Sweetcakes is a place of public accommodation under Oregon law, and that, as such, it must generally open its doors to customers of all sexual orientations, regardless of the Kleins’ religious views about homosexuality. Thus, if the Kleins are to succeed in avoiding compliance with the statute, it cannot be because their activity occurs outside the ordinary commercial context that the government has wide latitude to regulate, as was the case in Hurley and Dale. The Kleins must find support elsewhere.

In BOLI’s view, on the other hand, the Kleins’ arguments are disposed of by the United States Supreme Court’s decision in FAIR. In that case, an association of law schools and law faculty (FAIR) sought to enjoin the enforcement of the Solomon Amendment, a federal law that requires higher-education institutions, as a condition for receiving federal funds, to provide military recruiters with the same access to their campuses as non-military recruiters. 547 US at 52-55. Because FAIR opposed the military’s policy at that time regarding homosexual service-members, FAIR argued that the equal-access requirement violated the schools’ First Amendment rights to freedom of speech and association. Id. at 52-53.

The Court rejected FAIR’s compelled-speech argument, reasoning that the Solomon Amendment “neither limits what law schools may say nor requires them to say anything,” and, therefore, the law was a “far cry” from the compulsions at issue in Barnette and Wooley. Id. at 60, 62. The Court acknowledged that compliance with the Solomon Amendment would indirectly require the schools to “speak” in a sense because it would require the schools to send emails and post notices on behalf of the military if they chose to do so for other recruiters. Nevertheless, the Court found it dispositive that the Solomon Amendment did not “dictate the content of the speech at all, which is only ‘compelled’ if, and to the extent [that,] the school provides such speech for other recruiters.” Id. The Court distinguished that situation from those where “the complaining speaker’s own message was affected by the speech it was forced to accommodate.” Id. at 63-64 (citing, inter alia, Hurley, 515 US at 568).

In BOLI’s view, this case is like FAIR because ORS 659A.403 does not directly compel any speech; even if one considers the Kleins’ cake-making to involve some element of expression, the law only compels the Kleins to engage in that expression for same-sex couples “if, and to the extent” that the Kleins do so for the general public.

This case is distinguishable from FAIR, however, in a significant way. Essential to the holding in FAIR was that the schools were not compelled to express a message with which they disagreed. The schools evidently did not assert, nor did the Supreme Court contemplate, that there was a meaningful ideological or expressive component to the emails or notices themselves, which merely conveyed factual information about the presence of recruiters on campus. The Court thus distinguished the case from Barnette and Wooley, cases that addressed the harm that results from true compelled speech—that is, depriving a person of autonomy as a speaker and “inva[ding]” that person’s “ ‘individual freedom of mind,’ ” Wooley, 430 US at 714 (quoting Barnette, 319 US at 637); see Hurley, 515 US at 576 (“[W]hen dissemination of a view contrary to one’s own is forced upon a speaker intimately connected with the communication advanced, the speaker’s right to autonomy over the message is compromised.”).

Here, unlike in FAIR, the Kleins very much do object to the substantive content of the expression that they believe would be compelled. They argue that their wedding cakes are works of art that express a celebratory message about the wedding for which they are intended, and that the Kleins cannot be compelled to create that art for a wedding that they do not believe should be celebrated. And there is evidentiary support for the Kleins’ view, at least insofar as every wedding cake that they create partially reflects their own creative and aesthetic judgment. Whether that is sufficient to make their cakes “art,” the creation of which the government may not compel, is a question to which we will turn below, but even the Kleins’ subjective belief that BOLI’s order compels them to express a specific message that they ideologically oppose makes this case different from FAIR.

That fact is also what makes this case difficult to compare to other public accommodations cases that the United States Supreme Court has decided. It appears that the Supreme Court has never decided a free-speech challenge to the application of a public accommodations law to a retail establishment selling highly customized, creative goods and services that arguably are in the nature of art or other expression.

To put the problem into sharper focus, we see no reason in principle why the services of a singer, composer, or painter could not fit the definition of a “place of public accommodation” under ORS 659A.400. One can imagine, for example, a person whose business is writing commissioned music or poetry for weddings, or producing a sculpture or portrait of the couple kissing at an altar. One can also imagine such a person who advertises and is willing to sell those services to the general public, but who holds strong religious convictions against same-sex marriage and would feel her “freedom of mind” violated if she were compelled to produce her art for such an occasion. Cf. Barnette, 319 US at 637. For the Kleins, this is that case. BOLI disagrees that a wedding cake is factually like those other examples, but the legal point that those examples illustrate is that existing public accommodations case law is awkwardly applied to a person whose “business” is artistic expression. The Court has not told us how to apply a requirement of nondiscrimination to an artist.

We believe, moreover, that it is plausible that the United States Supreme Court would hold the First Amendment to be implicated by applying a public accommodations law to require the creation of pure speech or art. If BOLI’s order can be understood to compel the Kleins to create pure “expression” that they would not otherwise create, it is possible that the Court would regard BOLI’s order as a regulation of content, thus subject to strict scrutiny, the test for regulating fully protected expression. See Hurley, 515 US at 573 (application of public accommodations statute violated the First Amendment where it “had the effect of declaring the sponsors’ speech itself to be the public accommodation,” thus infringing on parade organizers’ “autonomy to choose the content of [their] own message”); see also Riley v. National Federation of the Blind, 487 US 781, 795-98, 108 S Ct 2667, 101 L Ed 2d 669 (1988) (explaining that “[m]andating speech that a speaker would not otherwise make necessarily alters the content of the speech,” and subjecting such regulation to “exacting First Amendment scrutiny”).

Although the Court has not clearly articulated the extent to which the First Amendment protects visual art and its creation, it has held that the First Amendment covers various forms of artistic expression . . . . The Court has also made clear that a particularized, discernible message is not a prerequisite for First Amendment protection. [fn 8] . . .

[fn 8] The First Amendment’s protection of artwork is distinct from the protections that extend to so-called “expressive conduct.” Expressive conduct involves conduct that may be undertaken for any number of reasons but, in the relevant instance, is undertaken for the specific purpose of conveying a message. . . . For example, a person may camp in a public park for any number of reasons, only some of which are intended to express an idea. . . . In contrast (as we understand the Supreme Court to have held), because the creation of artwork and other inherently expressive acts are unquestionably undertaken for an expressive purpose, they need not express an articulable message to enjoy First Amendment protection.

In short, although ORS 659A.403 is a content-neutral regulation that is not directed at expression, the Kleins’ arguments cannot be dismissed on that ground alone. Rather, we must decide whether the Kleins’ cake-making activity is sufficiently expressive, communicative, or artistic so as to implicate the First Amendment, and, if it is, whether BOLI’s final order compelling the creation of such expression in a particular circumstance survives First Amendment scrutiny.

(3)     Whether these cakes implicate the First Amendment

If, as BOLI argues, the Kleins’ wedding cakes are just “food” with no meaningful artistic or communicative component, then, as the foregoing discussion illustrates, BOLI’s final order does not implicate the First Amendment; the Kleins’ objection to having to “speak” as a result of ORS 659A.403 is no more powerful than it would be coming from the seller of a ham sandwich. On the other hand, if and to the extent that the Kleins’ wedding cakes constitute artistic or communicative expression, then the First Amendment is implicated by BOLI’s final order. In short, we must decide whether the act that the Kleins refused to perform—to design and create a wedding cake—is “sufficiently imbued with elements of communication” so as to “fall within the scope” of the First Amendment. . . .

Consequently, the question is whether that customary practice, and its end product, are in the nature of “art.” As noted above, if the ultimate effect of BOLI’s order is to compel the Kleins to create something akin to pure speech, then BOLI’s order may be subject to strict scrutiny. If, on the other hand, the Kleins’ cake-making retail business involves, at most, both expressive and non-expressive components, and if Oregon’s interest in enforcing ORS 659A.403 is unrelated to the content of the expressive components of a wedding cake, then BOLI’s order need only survive intermediate scrutiny to comport with the First Amendment. . . .

The record reflects that the Kleins’ wedding cakes follow a collaborative design process through which Melissa uses her customers’ preferences to develop a custom design, including choices as to “color,” “style,” and “other decorative detail.” Melissa shows customers previous designs “as inspiration,” and she then draws “various designs on sheets of paper” as part of a dialogue with the customer. From that dialogue, Melissa “conceives” and customizes “a variety of decorating suggestions” as she ultimately finalizes the design. Thus, the process does not simply involve the Kleins executing precise instructions from their customers; instead, it is clear that Melissa uses her own design skills and aesthetic judgments.

Therefore, on this record, the Kleins’ argument that their products entail artistic expression is entitled to be taken seriously. That being said, we are not persuaded that the Kleins’ wedding cakes are entitled to the same level of constitutional protection as pure speech or traditional forms of artistic expression.

In order to establish that their wedding cakes are fundamentally pieces of art, it is not enough that the Kleins believe them to be pieces of art. See Nevada Comm’n on Ethics v. Carrigan . . . (“[T]he fact that a nonsymbolic act is the product of deeply held personal belief—even if the actor would like to convey his deeply held personal belief— does not transform action into First Amendment speech.” (Emphasis in original.)); see also Clark v. Community for Creative Non-Violence . . . (the burden of proving that an activity is protected expression is on the person asserting First Amendment protection for that activity).

For First Amendment purposes, the expressive character of a thing must turn not only on how it is subjectively perceived by its maker, but also on how it will be perceived and experienced by others. . . . Here, although we accept that the Kleins imbue each wedding cake with their own aesthetic choices, they have made no showing that other people will necessarily experience any wedding cake that the Kleins create predominantly as “expression” rather than as food.

Although the Kleins’ wedding cakes involve aesthetic judgments and have decorative elements, the Kleins have not demonstrated that their cakes are inherently “art,” like sculptures, paintings, musical compositions, and other works that are both intended to be and are experienced predominantly as expression. Rather, their cakes, even when custom-designed for a ceremonial occasion, are still cakes made to be eaten. Although the Kleins themselves may place more importance on the communicative aspect of one of their cakes, there is no information in this record that would permit an inference that the same is true in all cases for the Kleins’ customers and the people who attend the weddings for which the cakes are created. Moreover, to the extent that the cakes are expressive, they do not reflect only the Kleins’ expression. Rather, they are products of a collaborative process in which Melissa’s artistic execution is subservient to a customer’s wishes and preferences. For those reasons, we do not agree that the Kleins’ cakes can be understood to fundamentally and inherently embody the Kleins’ expression, for purposes of the First Amendment. [fn 9]

[fn 9] To be clear, we do not foreclose the possibility that, on a different factual record, a baker (or chef) could make a showing that a particular cake (or other food) would be objectively experienced predominantly as art—especially when created at the baker’s or chef’s own initiative and for her own purposes. But, as we have already explained, the Kleins never reached the point of discussing what a particular cake for Rachel and Laurel would look like; they refused to make any wedding cake for the couple. Therefore, in order to prevail, the Kleins (as they implicitly acknowledge) must demonstrate that any cake that they make through their customary practice constitutes their own speech or art. They have not done so.

We also reject the Kleins’ argument that, under the facts of this case, BOLI’s order compels them to “host or accommodate another speaker’s message” in a manner that the Supreme Court has deemed to be a violation of the First Amendment. . . .

In the only such case that involved the enforcement of a content-neutral public accommodations law, Hurley, the problem was that the speaker’s autonomy was affected by the forced intermingling of messages, with consequences for how others would perceive the content of the expression. 515 US at 576-77 (reasoning that parades, unlike cable operators, are not “understood to be so neutrally presented or selectively viewed,” and “the parade’s overall message is distilled from the individual presentations along the way, and each unit’s expression is perceived by spectators as part of the whole” (emphasis added)). Here, because the Kleins refused to provide their wedding-cake service to Rachel and Laurel altogether, this is not a situation where the Kleins were asked to articulate, host, or accommodate a specific message that they found offensive. It would be a different case if BOLI’s order had awarded damages against the Kleins for refusing to decorate a cake with a specific message requested by a customer (“God Bless This Marriage,” for example) that they found offensive or contrary to their beliefs. . . .

The Kleins’ additional concern, as we understand it, is that a wedding cake communicates a “celebratory message” about the wedding for which it is intended, and the Kleins do not wish to “host” the message that same-sex weddings should be celebrated. But, unlike in Hurley, the Kleins have not raised a nonspeculative possibility that anyone attending the wedding will impute that message to the Kleins. We think it more likely that wedding attendees understand that various commercial vendors involved with the event are there for commercial rather than ideological purposes. Moreover, to the extent that the Kleins subjectively feel that they are being “associated” with the idea that same-sex marriage is worthy of celebration, the Kleins are free to engage in their own speech that disclaims such support. Cf. FAIR, 547 US at 65 (rejecting argument that law schools would be perceived as supporting any speech by recruiters by simply complying with the Solomon Amendment; noting that nothing prevented the schools from expressing their views in other ways).

In short, we disagree that the Kleins’ wedding cakes are invariably in the nature of fully protected speech or artistic expression, and we further disagree that BOLI’s order forces the Kleins to host, accommodate, or associate with anyone else’s particular message. Thus, because we conclude that BOLI’s order does not have the effect of compelling fully protected expression, it does not trigger strict scrutiny under the First Amendment.

As noted above, however, BOLI’s order is still arguably subject to intermediate First Amendment scrutiny if the Kleins’ cake-making activity involves both expressive and non-expressive elements. O’Brien, 391 US at 376 (“[W]hen ‘speech’ and ‘nonspeech’ elements are combined in the same course of conduct, a sufficiently important governmental interest in regulating the nonspeech element can justify incidental limitations on First Amendment freedoms.”); see also Turner Broadcasting System, Inc., 512 US at 661-62.

Here, we acknowledge that the Kleins’ cake-making process is not a simple matter of combining ingredients and following a customer’s precise specifications. Instead, based on the Kleins’ customary practice, the ultimate effect of BOLI’s order is to compel them to engage in a collaborative process with a customer and to create a custom product that they would not otherwise make. The Kleins’ argument that that process involves individualized aesthetic judgments that are themselves within the realm of First Amendment protected expression is not implausible on its face.

Ultimately, however, we need not resolve whether that argument is correct. That is because, even assuming (without deciding) that the Kleins’ cake-making business involves aspects that may be deemed “expressive” for purposes of the First Amendment, BOLI’s order is subject, at most, to intermediate scrutiny, and it survives such scrutiny, as explained below. . .

by Kristin Eberhard, Sightline.org

Portlanders care about clean air, preventing climate change, preventing deaths, and relieving congestion. Unfortunately, expanding the I-5 Rose Quarter freeway will make pollution worse, won’t improve safety, and won’t relieve congestion. Here are answers to your questions.

1. Why won’t freeway expansion relieve congestion?

On “Free Cone Day,” Ben & Jerry’s ice cream shops have lines out the door because they are giving ice cream away for free. Freeways get congested because we give road space away for free. Expanding the freeway to get rid of congestion is like asking other ice cream shops to give away free ice cream to try to relieve the lines at Ben & Jerry’s. The new free ice cream shops will induce more people to show up now that a shop in their neighborhood is offering free ice cream, and new freeway lanes induce more people to drive now that the freeways are offering additional capacity.

If you don’t like my ice cream analogy, you can listen to the researchers who have studied the phenomenon of “induced demand.” A study in the American Economic Review examined evidence from US cities and concluded that a “Fundamental Law of Road Congestion” holds: adding road capacity does not relieve congestion, because new lanes fill with new trips. Local economist Joe Cortright wrote about how induced demand played out in Houston and Louisville, and here it is at work in Los Angeles.
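
For readers who want the quantitative version: the study’s approximate headline estimate (an assumption worth checking against the paper itself) is that vehicle-miles traveled rise roughly one-for-one with lane-miles. A minimal sketch of what that elasticity means:

    # The "Fundamental Law" in one calculation: with an elasticity of
    # vehicle-miles traveled (VMT) with respect to lane-miles of about 1
    # (the study's approximate headline estimate), driving grows in
    # proportion to capacity, so congestion returns.
    elasticity = 1.0          # approximate estimate from the literature
    lane_mile_growth = 0.10   # build 10 percent more lane-miles...
    vmt_growth = elasticity * lane_mile_growth
    print(f"...and driving grows about {vmt_growth:.0%}")  # -> about 10%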

2. Why will congestion pricing relieve congestion?

While ice cream lines are linear (each additional person makes the line only one person slower), traffic flow is nonlinear: just a few additional cars can cause traffic to suddenly grind to a crawl. Traffic is sort of like a game of Jenga: you can keep adding cars and traffic will keep chugging along, but one extra car can push the system past a critical tipping point and make everything fall apart for everyone on the road. Once the system tips into congestion, its capacity decreases, meaning fewer vehicles can flow than could pass before congestion set in.

The silver lining to traffic flow’s nonlinear nature is that taking just a few cars off the road can disproportionately free up space and shorten travel times for all the other cars. By dissuading a handful of drivers who don’t really need to drive during peak hours, congestion pricing can make the whole system work better and more predictably for everyone. Stockholm, London, Milan, and Singapore have all shown that congestion pricing works in the real world.
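
To see the tipping point in numbers, here is a minimal Python sketch of the standard “fundamental diagram” of traffic flow, the textbook model of this nonlinearity. All the parameters are illustrative assumptions, not measurements from I-5 or ODOT data:

    # Toy "fundamental diagram": throughput rises with density up to a
    # critical point, then falls. Parameters invented for illustration.
    FREE_FLOW_MPH = 60       # speed when the road is uncongested
    CRITICAL_DENSITY = 45    # vehicles per lane-mile at peak flow
    JAM_DENSITY = 200        # vehicles per lane-mile at a standstill

    def flow(density):
        """Vehicles per lane per hour passing a point at a given density."""
        if density <= CRITICAL_DENSITY:
            return FREE_FLOW_MPH * density  # uncongested branch
        # Congested branch: flow falls linearly to zero at jam density.
        peak = FREE_FLOW_MPH * CRITICAL_DENSITY
        return peak * (JAM_DENSITY - density) / (JAM_DENSITY - CRITICAL_DENSITY)

    for density in (40, 45, 50, 60):
        print(f"{density:3d} veh/lane-mile -> {flow(density):5.0f} veh/hour")

    # Prints 2400, 2700, 2613, 2439: past the critical density, adding
    # cars moves fewer cars per hour, which is why pricing off a few
    # peak-hour trips can raise throughput for everyone who remains.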

3. Will a freeway expansion increase pollution?

Some people—including elected officials like Oregon State Senator Lee Beyer and Portland City Commissioner Amanda Fritz—conclude that cars and trucks idling in traffic must be emitting more pollution than they would if they were moving, so expanding the freeway will reduce pollution by restoring free flow of traffic. Unfortunately, that conclusion is wrong.

As explained above, a new freeway lane will get traffic flowing in the near term, but the new lanes will quickly attract more freeway drivers, and soon the road will be right back to the same frustrating idle. Only now a whole extra lane of drivers will be idling, meaning more pollution, not less. City Observatory digs into the data here.

In addition, highway construction emits between 1,400 and 2,300 tons of CO2 per lane-mile of new highway, so the act of adding two new lanes and shoulders to I-5 will by itself increase climate change pollution.
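
The construction arithmetic is simple enough to sketch. The per-lane-mile range is the article’s; the project length below is a hypothetical placeholder, since no length is given here:

    # Back-of-envelope construction emissions from the figures above.
    LOW, HIGH = 1_400, 2_300   # tons of CO2 per new lane-mile
    NEW_LANES = 2
    PROJECT_MILES = 1.5        # hypothetical length, for illustration only

    lane_miles = NEW_LANES * PROJECT_MILES
    print(f"{lane_miles} lane-miles: {lane_miles * LOW:,.0f} to "
          f"{lane_miles * HIGH:,.0f} tons of CO2 before a single car drives")
    # -> 3.0 lane-miles: 4,200 to 6,900 tons of CO2 before a single car drives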

All in all, a 2007 Sightline analysis concluded that over the course of five decades, adding new highway lanes will lead to substantial increases in vehicle travel and CO2 emissions from cars and trucks.

Original Sightline Institute graphic, used under its free use policy.

4. Why is ODOT calling the mega freeway expansion a safety project?

Because they’re trying to pull one over on Portlanders.

Portlanders care about safety, as exemplified by the commitment to Vision Zero—a move toward zero traffic-related fatalities in the next ten years. Portlanders are not so keen on advantaging car drivers above other Portlanders, as exemplified by a history of killing big road projects.

Oregon Department of Transportation (ODOT) got the memo and is trying to sell a mega freeway expansion project as a safety project, claiming the purpose of the Rose Quarter expansion is to “improve safety and operations on I-5.” An ODOT spokesman recently told Willamette Week, “The primary purpose of this project is to address a critical safety need,” and another ODOT representative pointed out that there had been two fatalities on that stretch of freeway. But the proposed project has exactly zero relationship to those two fatalities. Both victims were homeless men—one may have had serious mental health issues and the other was intoxicated—who walked out onto the freeway. Adding more freeway lanes would not have prevented those deaths.

If ODOT wanted to prioritize safety, it could use $450 million to fund Portland’s entire Vision Zero action plan about ten times over. Portland has used a data-driven approach to identify high crash corridors and intersections where many people have been injured or killed in recent years, identify the causes of those crashes, and design solutions that would prevent injuries and deaths. However, people driving on Portland’s streets continue to injure and kill people walking and biking as Portland works to implement its vision. Or if ODOT wanted to protect people experiencing homelessness, $450 million could get hundreds of people off the street and into housing.

The I-5 Rose Quarter Expansion is not a project to keep Portlanders safe; it is a mega freeway project that futilely tries to make cars go faster.

5. Is the current freeway situation equitable?

Using taxpayers’ money to give out road space for free is not equitable. It gives a big handout to drivers, who, in Portland as elsewhere in the United States, are wealthier than people who take the bus, walk, or bike to work, or who don’t work. Even when roads are supposedly free, you have to pay an average of $8,558 per year to own and operate a car. For 20 percent of Portlanders, that would mean spending more than one third of their household income on one car. Given this, it is not surprising that people who pay to drive make more money than those who don’t. Freeways benefit (generally wealthier) drivers and don’t benefit (generally less wealthy) people who don’t drive.
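
The arithmetic behind that one-third claim is worth making explicit; the cost figure is the article’s, and the threshold is simple algebra:

    # At what household income does $8,558/year of car costs cross one
    # third of income? threshold = cost / (1/3) = cost * 3.
    CAR_COST = 8_558
    threshold = CAR_COST * 3
    print(f"One car exceeds a third of income below ${threshold:,}/year")
    # -> below $25,674/year; the article's claim is that roughly 20
    #    percent of Portland households fall under that line.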

Free road capacity also encourages sprawl, which can lock middle- and working-class families into expensive commutes. The “drive ’til you qualify” approach to homebuying pushes families farther from their jobs in order to afford a mortgage, into locations where they can’t get to work by the more affordable options of walking, biking, or transit.

Finally, highways are often located in lower-income neighborhoods, where their pollution and noise disproportionately burden lower-income people, while their free capacity disproportionately benefits higher-income people who drive.

No, the current freeway situation is not equitable.

Original Sightline Institute graphic, used under Sightline’s free use policy.

6. Is a freeway expansion equitable?

Spending nearly half a billion dollars on a freeway expansion in Portland will just double down on the inequity of the current system. ODOT’s generous estimates, which assume cars drive faster than the legal speed limit, suggest the expansion could save peak-hour commuters 6.5 minutes during the morning commute and 8 minutes in the evening. (ODOT’s time-savings estimates don’t acknowledge the well-documented effects of induced demand, which will generate more traffic and erase theoretical time savings.) We’ve already seen that drivers on average make more money than non-drivers, but drilling down further we see that peak-hour drivers make more money than non-peak-hour drivers. In fact, one study showed that just 3 percent of Portland’s peak-hour single-occupant car drivers are people with low incomes.

Peak-hour drivers impose the greatest cost on the transportation system. Charging all taxpayers to expand the freeway then letting peak drivers use it for free is not fair. Not only is it not equitable, it is not an efficient use of scarce dollars since investments in helping people move via transit, walking, and biking get more bang for the buck.

Those who claim or imply Portland should proceed with the freeway expansion because congestion pricing could be inequitable either haven’t thought through the equity implications of the options, or are disingenuously using the poor as an excuse to perpetuate a system which hurts the poor.

I can’t say it better than Michael Manville, Assistant Professor at UCLA, said it here:

It is appropriate to worry that priced roads might harm the poor while helping the rich. But we should also worry that free roads do the same, and think about which form of unfairness we are best able to mitigate. People who worry about harms to the poor when roads are priced, and not when roads are free, may be worried more about the prices than the poor.

7. Could congestion pricing be equitable?

It is true that tolls are regressive—the same toll presents a bigger burden for a lower-income driver than for a wealthier driver. But done right, congestion pricing could put more low-income people in a better position than they are in now.

Peak-hour pricing asks those who place the greatest burden on the transportation system (those who drive during peak hours), and who mostly also have the greatest ability to pay, to take more responsibility. That is fair. For the 3 percent or so of peak-hour drivers who have low incomes, an equitable program could exempt them from paying the peak-hour fees. They would be better off with peak pricing because they could get to work faster at no extra cost.

And Oregon could go even further to make congestion pricing more equitable by investing the congestion pricing revenue in helping additional low-income people who don’t drive during peak hours, for example by:

  • Investing in walking and biking infrastructure for non-drivers

  • Building affordable housing close to transit

  • Assisting low-income transit riders, building on the Low-Income Fare program TriMet is already developing

  • Exempting low-income residents, for example, anyone with an Oregon Trail card, from paying the tolls

  • Funding a low-income tax credit

8. Who wants the mega freeway expansion?

ODOT, the Port of Portland, the Oregon Trucking Association (OTA), Portland Mayor Ted Wheeler, and possibly the full Portland City Council.

9. Who wants congestion pricing?

A large coalition of individuals and organizations, including OPAL Environmental Justice Oregon, Oregon Walks, BerniePDX, the Portland Chapter of the NAACP, and others, has come together to express concerns about the Rose Quarter freeway expansion and to ask ODOT and local partners to implement congestion pricing before expanding the freeway. Representatives from Neighbors for Clean Air, the Audubon Society, and 350PDX wrote an excellent op-ed here.

Portland Mayor Ted Wheeler also supports congestion pricing, but as an addition to the freeway expansion project rather than an alternative to it.

10. Does the City of Portland have a say?

State law already requires the Oregon Transportation Commission to pursue value pricing, but it also authorizes ODOT to pursue the freeway expansion megaproject. The City of Portland does not have authority over ODOT, but it can exert pressure. The city’s Central City 2035 plan currently includes features that give a blessing to the I-5 expansion project; by removing those or making a statement in favor of implementing congestion pricing first, the city could pressure ODOT in that direction.

11. How can I have a say?

If you live in Portland, you can contact the Mayor and other members of the city council. You can also submit comments to the members of the Portland Region Value Pricing Policy Advisory Committee.

No matter what issue you begin with, no matter what problem takes center stage for you — environment, racial justice, inequality, workers’ rights, health care access, you name it — eventually a persistent activist realizes that there are two parts to the problem, like an iceberg. There’s the visible part above sea level, and then there’s the much more massive part below, hidden, that roots the visible problem in place and is much harder to deal with. The visible part is the problem that captured your attention. The bigger, invisible part is that American politics has succumbed to the very force that America was created to counter: aristocratic inherited wealth.

One of the things that makes the Sightline Institute an OregonPEN favorite is that Sightline, originally an environmentally focused group, has recognized this and has put a lot of thought into how we have to solve the democracy problem (the takeover of our politics by wealth) in order to solve any of the others.

Sightline’s Kristin Eberhard tirelessly promotes essential reforms, not as an academic exercise, but because dealing with issues like climate disruption means we must start systematically removing the barriers to doing what has to be done.

Below is an April 2017 memo by Eberhard presenting a concise shopping list of what we can do here in Oregon to restore democracy.

This memo is an articulation of Sightline’s internal strategy for voting systems reform. It is not a thoroughly vetted and reviewed report or article like most of our publications. Not all assertions are cited or otherwise supported; they instead reflect Sightline’s current judgment, which we may revise with further learning. Not all reforms mentioned are explained in this memo; they are or will be explained in Sightline’s other published work.

If you are an Oregon resident or advocate excited by the energy around democracy reform in the United States, you might be wondering what the easiest or most impactful reform opportunities are close to home.

Fortunately, Oregon is ripe ground for voting reform. The state constitution specifically allows alternative and proportional voting. Charter counties and charter cities have autonomy to make changes without first seeking a change in state law. And all levels of government make liberal use of the citizens’ initiative process.

As in other places, reformers must consider the lack of alternative-ready vote-counting machines and the possible resistance of county auditors. But one Oregon county has already approved alternative voting, with several others poised to follow suit, and momentum is building around implementing proportional voting in Oregon’s largest city, Portland.

An effective and comprehensive strategy may involve a mix of easier and harder reforms. Demonstrating reforms in low-stakes elections or in localities before attempting statewide reform, for example, might be a good progression. This strategy memo is not based on public opinion research; such research would help prioritize among the objectives outlined here.

Below are the voting reforms we at Sightline would make if we could wave a magic wand, as well as our rough estimate of:

▪ how quickly or easily they might be accomplished (five stars is quick and easy, and one star is a long hard slog) and

▪ how much impact we think it might have (five stars means a significant improvement in democracy for a large number of Oregonians, and one star means a small improvement for a small number of people).

This memo is about voting systems reform, and we do not include other types of reforms that we are also researching, such as democracy vouchers for campaign funding and automatic voter registration. (You can find a similar document for Washington here.)

Our categories of preferred voting systems reforms are:

▪ Implement proportional voting for multi-member (legislative) bodies

▪ Implement improved voting for single-member offices

▪ Eliminate primaries or advance more candidates to the general election

▪ Create a unicameral state legislature

Implement proportional voting for multi-member (legislative) bodies

Although legislative bodies like the state legislature and city councils are meant to be reflective of all constituents, most Oregon jurisdictions use single-winner elections, either through single-member districts or numbered seats, to elect legislators. A series of single-winner elections yields a legislative body consisting almost entirely of the same kind of people because the majority in each district elects the sole representative from that district. Put together a body of majority winners and the majority is over-represented while voters in the minority are under-represented.

For example, in Oregon, white men make up 38 percent of the population but 67 percent of elected officials, while women of color make up 11 percent of the population but just 3 percent of elected officials. Democrats and Republicans win 100 percent of the seats, even though one-third of voters don’t affiliate with either of those parties.

Proportional voting could correct that unfair skew. To achieve more representative results, multi-member bodies like legislatures, councils, and school boards generally must be elected via multi-winner elections, not by single-winner elections based on single-member districts or at-large numbered seats. However, a hybrid system called Mixed Member Proportional voting achieves proportional representation while retaining some single-member districts. Several forms of voting can be used to achieve proportional or semi-proportional results, including the methods below (minimal code sketches of several of them follow the list):

Single-Transferable Vote (STV): A proportional, multi-winner form of Ranked-Choice Voting (RCV). It is used in Cambridge, Massachusetts; Ireland; Australia; and for Academy Awards nominees. All candidates for the X-member district appear on the same ballot, and voters rank them in order of preference. Candidates who reach a quota of votes win seats; surplus votes, and the votes of eliminated last-place candidates, transfer to voters’ next choices until all X seats are filled.

Mixed Member Proportional (MMP): Used in Germany and New Zealand, MMP retains some single-winner districts for local representation while adding multi-winner seats from party lists. Voters cast two votes: one for a local representative from a single-member district and one for a party.

Reweighted Range Voting (RRV): A proportional, multi-winner form of Score Voting. It is now used to select the five Oscar nominees for “Best Visual Effects.” All candidates for the X-member district appear on the same ballot, and voters give each candidate a score, for example from 0 to 9. Winners are selected one at a time; after each selection, ballots that scored the winners highly are down-weighted, so later seats go to candidates favored by other blocs of voters.

Proportional Score Runoff Voting (SRV-PR): A new method that would use a score ballot to select candidates one by one, with voters who supported a winning candidate having less say in subsequent rounds to ensure minority voters have a chance to elect a representative.

Limited Voting: A semi-proportional form of voting used in jurisdictions across the United States. Voters can cast fewer votes than there are seats available. For example, in a five-member district, voters might be able to cast two votes, enabling minority voters making up about two-fifths of the population to elect two out of five seats.

Cumulative Voting: A semi-proportional form of voting used in jurisdictions across the United States. Voters can cast as many votes as there are seats available but they can choose to allocate more than one vote per candidate. For example, in a three-member district, minority voters can give all three votes to their favorite candidate, ensuring that favorite wins a seat. Or they can give two votes to their favorite and one vote to their second-favorite, who also has support from some majority voters.
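
To make these counting rules concrete, the sketches below show, in Python, how each method could tally ballots. They are rough illustrations with invented function names, not certified tabulation code; real election rules add tie-breaking, exhausted-ballot, and overvote details. First, a minimal STV count using the Droop quota and fractional surplus transfers:

    from fractions import Fraction

    def stv_count(ballots, seats):
        """Simplified STV: Droop quota with fractional surplus transfers.
        ballots: ranked preference lists, e.g. ["A", "C", "B"]."""
        votes = [[list(b), Fraction(1)] for b in ballots]   # [preferences, weight]
        quota = len(ballots) // (seats + 1) + 1             # Droop quota
        hopeful = {c for b in ballots for c in b}
        winners = []
        while len(winners) < seats and hopeful:
            if len(hopeful) <= seats - len(winners):
                winners.extend(sorted(hopeful))             # remaining seats fill by default
                break
            # Tally each still-live ballot for its top remaining choice.
            tally = {c: Fraction(0) for c in hopeful}
            for vote in votes:
                vote[0] = [c for c in vote[0] if c in hopeful]
                if vote[0]:
                    tally[vote[0][0]] += vote[1]
            reached = [c for c in tally if tally[c] >= quota]
            if reached:
                winner = max(reached, key=tally.get)
                winners.append(winner)
                hopeful.remove(winner)
                # Keep a quota's worth of votes; pass the surplus on at reduced weight.
                surplus_share = (tally[winner] - quota) / tally[winner]
                for vote in votes:
                    if vote[0] and vote[0][0] == winner:
                        vote[1] *= surplus_share
            else:
                # Nobody reached quota: eliminate the weakest candidate.
                hopeful.remove(min(tally, key=tally.get))
        return winners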
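
Next, the seat arithmetic behind MMP: a party’s overall entitlement comes from its party vote, and list seats top up whatever it did not win in districts. The allocation rule here (Sainte-Laguë divisors) is one common choice, not something this memo specifies, and the inputs are invented:

    def mmp_list_seats(party_votes, district_wins, total_seats):
        """party_votes: {'Party A': 50000, ...}; district_wins: {'Party A': 3, ...}.
        Returns the list ('top-up') seats each party receives."""
        # Work out each party's overall proportional entitlement.
        entitled = {p: 0 for p in party_votes}
        for _ in range(total_seats):
            # Sainte-Lague: the next seat goes to the party with the
            # highest votes / (2 * seats_already_awarded + 1).
            p = max(entitled, key=lambda q: party_votes[q] / (2 * entitled[q] + 1))
            entitled[p] += 1
        # List seats make up the gap between entitlement and district wins.
        # (Real MMP systems must also decide how to handle "overhang" wins.)
        return {p: max(0, entitled[p] - district_wins.get(p, 0)) for p in entitled}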
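
Third, Reweighted Range Voting, using the standard RRV reweighting formula, under which a ballot’s weight falls as the winners it scored accumulate:

    def rrv(ballots, seats, max_score=9):
        """ballots: list of {candidate: score} dicts, scores 0..max_score.
        Seats winners one at a time, down-weighting satisfied ballots."""
        candidates = {c for b in ballots for c in b}
        winners = []
        for _ in range(seats):
            totals = {c: 0.0 for c in candidates - set(winners)}
            for ballot in ballots:
                # The more a ballot has already scored the seated winners,
                # the less weight it carries for the remaining seats.
                spent = sum(ballot.get(w, 0) for w in winners)
                weight = 1.0 / (1.0 + spent / max_score)
                for c in totals:
                    totals[c] += weight * ballot.get(c, 0)
            winners.append(max(totals, key=totals.get))
        return winners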
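
Finally, Cumulative Voting and Limited Voting differ from ordinary bloc voting only in the per-ballot vote budget, so one tally sketch covers both. The usage example mirrors the three-seat scenario above, in which a cohesive minority stacks its votes to guarantee a seat:

    from collections import Counter

    def cumulative_tally(ballots, seats, votes_per_ballot=None):
        """ballots: list of {candidate: votes_given} dicts.
        Cumulative: budget == seats; Limited: budget < seats."""
        budget = votes_per_ballot or seats
        totals = Counter()
        for ballot in ballots:
            if sum(ballot.values()) > budget:
                raise ValueError("ballot spends more votes than allowed")
            totals.update(ballot)
        return [c for c, _ in totals.most_common(seats)]   # top `seats` win

    # A 40% minority stacks all three votes on one favorite, guaranteeing
    # that favorite one of the three seats.
    minority = [{"Favorite": 3}] * 40
    majority = [{"M1": 1, "M2": 1, "M3": 1}] * 60
    print(cumulative_tally(minority + majority, seats=3))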

Federal courts sometimes order jurisdictions in violation of Section 2 of the Voting Rights Act to switch from “choose one” voting to Limited or Cumulative Voting, because racial minorities who could not win representation under plurality voting can win seats under those systems. Experts consider Limited Voting and Cumulative Voting “semi-proportional” because they achieve more proportional results than single-winner elections but, depending on the strategies that parties and voters employ, often less proportional results than STV.

The national reform organization FairVote categorizes STV, MMP, limited, and cumulative systems under the moniker “Fair Representation Voting Systems.”

Multi-member bodies can also use party-based proportional representation systems such as list voting, in which the ballot lists candidates by party. Voters vote either for their favorite candidate within a party’s list (in Open List systems) or for their favorite party, which then fills its seats from its candidate list (in Closed List systems). But American voters tend to eschew strong party control, so these systems might be less popular in the near term.

A few cities in Oregon already hold multi-winner elections, electing multiple members in a single pool: voters may “Vote for Three,” and the top three vote-getters win, instead of the more common single-winner districts or numbered seats where voters can only “Vote for One.” These cities could switch to proportional voting more easily, because the city would only need to adopt cumulative, limited, or ranked ballots, without changing anything else.

One challenge to adopting improved voting systems is that some Oregon counties’ vote-counting machines cannot yet tally alternative ballots; to ensure smooth implementation of voting reforms, these counties will need to update their scanners or software. On the bright side, because Oregon votes by mail, it does not need to purchase expensive polling-place machines, only the scanners and software that count ballots once they arrive at the county.

Quick & Easy | Impact | Proportional voting in: | Explanation
*** | **** | State Legislature | Encourage Democratic legislators to head off the Republican redistricting effort by instead passing a redistricting law that adopts MMP, draws multi-member districts, or requires multi-member districts for any area of the state lacking adequate racial representation.
*** | **** | Portland | 2018 ballot initiative switching the city council from at-large numbered seats to multi-member districts with proportional voting.
*** | **** | Multnomah County | Ballot initiative switching the county council from single-member districts to multi-member districts with proportional voting.
*** | **** | Other charter cities and counties | By vote of the council or by ballot initiative, adopt proportional voting to elect the council.
* | ***** | State House | Change Oregon law to elect state representatives in multi-member districts with proportional voting. For example, 60 representatives from 20 three-member districts (and reduce the Senate to 20 members).
* | ***** | State House | Change the Oregon constitution and state law to elect state representatives via MMP. For example, 30 representatives from the existing Senate single-member districts, plus 5 from each of 5 regional party lists (each region encompassing six districts), for a total of 55.
* | ** | State Task Force | Encourage the Republican-led state Redistricting Task Force to recommend multi-member districts for the state legislature.
** | *** | Charter cities that use multi-member districts and bloc voting | Fifteen or more charter cities, including Lake Oswego and Maywood Park in Multnomah County, already use multi-member districts and bloc voting (e.g., “vote for 3”). Reformers could target these cities to switch to a ranked-choice ballot and achieve proportional representation with no other changes.
** | **** | Gresham | Urge the 2020 Charter Review commission to put proportional voting on the ballot to elect the six at-large city councilors in one or two multi-member districts.
* | ***** | Interstate Compact | Cascadian interstate compact for fair representation in Congress: get Washington, Oregon, and Idaho to agree to elect their congressional delegations by multi-member district.
** | *** | School Boards | Ballot measures, or board votes, to adopt proportional voting to elect the board and to move elections to even years with higher turnout.

Implement improved voting for single-winner races

Most elections in Oregon use single-winner plurality voting for both executive and legislative seats. Under single-winner plurality voting, voters may choose just one candidate on the ballot, and the candidate with the most votes—though not necessarily a majority of votes—wins.

The Oregon state legislature and all local councils use one of the following:

▪ single-winner districts, in which the city or state is carved into districts, with one representative per district;

▪ at-large numbered seats, in which several city councilors run for the city at-large, but instead of all running against each other, they each choose which of the numbered seats to run for;

▪ bloc voting, in which several city councilors run for, for example, three open city-wide seats on the council, and voters can vote for three candidates.

A primary narrows the field: in partisan races, each party nominates one candidate; in many nonpartisan local races, a candidate who wins a majority of votes in the primary wins outright, and otherwise the top two vote-getters advance to the general election, where the candidate with more votes wins. Even elections for multi-member bodies, such as the state legislature, city councils, and school boards, use single-winner elections, either in single-member districts or at-large numbered seats.

Under single-winner plurality voting, third-party candidates are discouraged from running for fear of “spoiling” the election for the major-party candidate they are most similar to. This cuts down on nuanced discussion of the issues and reduces voter choice. If a third-party candidate persists in running, it can throw the election to the less popular, opposition major-party candidate, ultimately meaning that a majority of voters dislike the one person elected to represent them.

Aside from the third-party spoiler problem, plurality voting also rewards candidates for scaring away voters as much as for winning them over. If a candidate can get enough of her opponent’s voters to just stay home, disgusted with the spectacle of politics, she can win with just the minority of voters making up her base. This structural flaw encourages negative campaigning.

Single-member offices, such as governor, treasurer, and mayor, could instead be elected by Instant Runoff Voting (IRV), a single-winner form of Ranked-Choice Voting (RCV). Under Instant Runoff Voting, voters rank the candidates in order of preference, and the ballots are counted in rounds: if a candidate wins more than half of the first-choice rankings, she wins. Otherwise, the candidate with the fewest first-choice rankings is eliminated, and each of that candidate’s ballots transfers to its next-ranked candidate who is still in the running. Counting continues until one candidate wins more than half of the active votes.
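
For readers who want to see the rounds spelled out, here is a minimal sketch of an IRV count (simplified: no tie-breaking or batch elimination, and the function name is ours):

    from collections import Counter

    def irv_winner(ballots):
        """ballots: ranked preference lists, e.g. ["A", "C", "B"]."""
        remaining = {c for b in ballots for c in b}
        while True:
            # Each active ballot counts for its top-ranked surviving candidate.
            tally = Counter({c: 0 for c in remaining})
            for b in ballots:
                live = [c for c in b if c in remaining]
                if live:
                    tally[live[0]] += 1
            leader, votes = tally.most_common(1)[0]
            if 2 * votes > sum(tally.values()):
                return leader                    # majority of active ballots
            remaining.remove(min(tally, key=tally.get))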

Score Runoff Voting (SRV) is a promising but as yet untested option for electing single-member offices. Under SRV, voters give each candidate a score from 0 (no support) to 5 or 9 (strong support). The scores are added up, and the two candidates with the top total scores go to an instant runoff. In the runoff, a voter’s vote goes to the runoff candidate he or she scored higher, and the candidate with the most votes wins.
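
Under the same caveats, an SRV count is just a scoring round followed by an automatic runoff between the two top-scoring candidates:

    def srv_winner(ballots):
        """ballots: list of {candidate: score} dicts, e.g. scores 0..5.
        Assumes at least two candidates appear on the ballots."""
        candidates = {c for b in ballots for c in b}
        totals = {c: sum(b.get(c, 0) for b in ballots) for c in candidates}
        # Scoring round: the two highest totals advance to the runoff.
        a, b = sorted(candidates, key=totals.get, reverse=True)[:2]
        # Runoff: each ballot goes to whichever finalist it scored higher.
        a_votes = sum(1 for bal in ballots if bal.get(a, 0) > bal.get(b, 0))
        b_votes = sum(1 for bal in ballots if bal.get(b, 0) > bal.get(a, 0))
        return a if a_votes >= b_votes else b    # ties go to the higher scorer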

Because they allow voters to give a rank or score to more than one candidate, both IRV and SRV would allow third-party candidates to run, enriching political dialogue and increasing options for voters. Because they reward candidates for winning additional support, these improved voting systems also encourage candidates to reach out to voters beyond their base, encouraging positive, policy-oriented campaigns.

Two other voting methods—Approval Voting and Score Voting—can, in theory, achieve excellent results. Under Approval Voting, voters vote for all the candidates they approve of, and the candidate with the most votes wins. Under Score Voting, voters give each candidate a score, and the candidate with the highest total score wins.
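
Both tallies are simple, which is part of their theoretical appeal; a hedged sketch of each (names are ours):

    from collections import Counter

    def approval_winner(ballots):
        """ballots: sets of approved candidates; most approvals wins."""
        return Counter(c for b in ballots for c in b).most_common(1)[0][0]

    def score_winner(ballots):
        """ballots: {candidate: score} dicts; highest total score wins."""
        totals = Counter()
        for b in ballots:
            totals.update(b)
        return totals.most_common(1)[0][0]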

In practice, though, experience indicates that Approval Voting devolves into “bullet voting,” where voters approve only their favorite candidate, out of (justified) fear that approving their second or third favorite will hurt their first choice.

Score Voting has not been used in a public election, so we can’t look to experience with it, but it suffers from the same structural flaw as Approval Voting: voting experts say it fails the “Later-No-Harm” criterion, because voters can be harmed by scoring a less-preferred candidate. Under Score Voting, voters would likely strategically give a top score to their favorite and no or very low scores to other candidates they actually like. (Note that Score Runoff Voting would likely overcome this flaw by encouraging voters to give scores to candidates other than their favorite, ensuring they still have a vote in the runoff if their favorite doesn’t make it.)

Multi-member bodies, such as the state legislature, city councils, and school boards, are often elected by district or by numbered (also called posted) seats via single-winner methods. In this case, Instant Runoff Voting and possibly Score Runoff Voting would be an improvement over single-winner plurality voting.

However, even with such improvement, legislatures, councils, and school boards elected in single-winner elections will not proportionally reflect their constituents, and legislative bodies will continue to be mired in partisan gridlock. To achieve proportional representation and improved legislative capabilities, jurisdictions must use one of the methods detailed in the section above.

Quick & Easy | Impact | Alternative voting in: | Explanation
***** | **** | Benton County | Ensure that Benton County’s recently adopted IRV is implemented well.
*** | ***** | Multnomah County | 2018 ballot initiative adopting alternative voting.
*** | ***** | Portland | 2018 ballot initiative adopting alternative voting.
**** | **** | State Leg. / Sec. of State | Require counties to acquire alternative-voting-ready machines whenever equipment turns over, or even accelerate the turnover.
**** | **** | Lane County | Urge the council to put SRV on the ballot in 2017.
*** | **** | Other charter cities and charter counties | By vote of the council or by ballot initiative, adopt alternative voting to elect single-member offices.

▪   Oregon has nine charter counties: Benton, Clatsop, Hood River, Jackson, Josephine, Lane, Multnomah, Umatilla, and Washington.

▪   Oregon has 111 charter cities.

▪   Oregon’s 241 general law cities also have the power of referendum and initiative, so it is possible they too could pass an alternative voting initiative, but it is not clear what the initiative would do since they don’t have a charter to amend.

***** | * | Independent Party of Oregon (IPO) | Use IRV or SRV in next online election. The IPO has flexibility to quickly try things in its online elections, allowing for a quick and easy test with real voters.
* | ***** | State Leg. | Adopt alternative voting for US Presidential primaries. Administratively difficult because all counties would need to be able to count alternative ballots.
** | *** | Clatsop County | Use 2017 Charter Review process to propose alternative voting for county commissioners (elected by district).

Eliminate primaries or advance more candidates to the general election

Primaries act as a modern poll tax.

Primary voters tend to be an extremely small (usually 10 to 20 percent) and non-representative (whiter, older, wealthier) share of the voting-eligible population. Primaries thus tend to nominate older, whiter, more conservative candidates. And primaries in single-winner districts that are “safe” for one of the two major parties tend to nominate more sharply partisan candidates, because candidates need only win over their party’s base in the primary, not the broader electorate in the general election. The primary thus narrows and skews the field, leaving general-election voters with few options.

All of the alternative and proportional methods above could be used without a primary, so a switch in voting systems could have the bonus of eliminating the 21st-century poll tax. Or, Oregon could mitigate the impact of the poll tax, and avoid the pitfalls of Washington’s “top two” system, by instead holding open primaries that advance three or four candidates to a general election in which voters use one of the alternative methods to select the winner. Either option would give general-election voters more say in who represents them.

Quick & Easy | Impact | Alternative voting in: | Explanation
* | *** | State Leg. | Switch to ranked-choice voting for presidential primaries.
** | *** | Charter cities and charter counties | Change charters to advance three or four people per seat to the general election and to use ranked-choice voting in the general.

Create a unicameral state legislature

Oregon’s bicameral state legislature consists of two elected bodies representing exactly the same people and charged with doing the same work twice, which makes it twice as hard as it should be to pass legislation. Nebraska has had a unicameral state legislature since the 1930s, cutting down on waste and streamlining government. Oregon could do the same.

Quick & Easy | Impact | Proportional voting in: | Explanation
* | ***** | Unicameral State Legislature | Ballot initiative to combine the state senate and state house into a single unicameral body elected through MMP voting or multi-member districts with proportional voting. For example, create one of the following:

▪   a single 60-member body elected from 20 three-member districts

▪   a single 75-member body elected from 15 five-member districts

▪   a 60-member MMP body with 30 representatives elected from single-member districts and 30 from six five-member party list regions.

Sightline Institute is a think tank that provides leading original analysis of energy, economic, and environmental policy in the Pacific Northwest.

Kristin Eberhard is a Senior Researcher at Sightline Institute, where she works on climate change policy and democracy reform. You can reach her at kristin@sightline.org.