Priority Topics Flashcards

(228 cards)

1
Q

Energy Services

A

Services provided by energy, like hot showers, cold beer, lit rooms, and spinning shafts

2
Q

Energy Intensity

A

(E/GDP) = Energy required to create each unit of economic output (falling worldwide for last few decades)

3
Q

Energy Productivity

A

(GDP/E) = Economic output per unit of energy. It reframes GDP as a function of energy, and is often used as a measure of comparative productivity across countries.
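The two ratios above are reciprocals of each other. A minimal sketch, using invented national figures purely for illustration:

```python
# Illustrative only: the energy and GDP numbers below are invented, not real data.
def energy_intensity(energy_mj: float, gdp_usd: float) -> float:
    """Energy intensity E/GDP: megajoules required per dollar of output."""
    return energy_mj / gdp_usd

def energy_productivity(energy_mj: float, gdp_usd: float) -> float:
    """Energy productivity GDP/E: dollars of output per megajoule."""
    return gdp_usd / energy_mj

# A hypothetical country using 5e12 MJ to produce $1e12 of GDP:
intensity = energy_intensity(5e12, 1e12)        # 5.0 MJ per dollar
productivity = energy_productivity(5e12, 1e12)  # 0.2 dollars per MJ
```

Since one is the reciprocal of the other, falling energy intensity and rising energy productivity describe the same trend.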

4
Q

Positive Analysis

A

Fact-based and objective

5
Q

Normative Analysis

A

Subjective and value-based

6
Q

Circular vs. Directional Systems

A

Circular systems, as the macroeconomy is often modeled, have many interrelated elements that can exhibit a balance with feedback keeping the various elements in check. It is often difficult to discern the beginning and the end of a circular system process, like the chicken and egg. In contrast, directional systems tend to have a distinct beginning and a distinct end - they start with some inputs and go through a series of transformations resulting in outputs, but the outputs don’t stay in the system or recycle in any significant way. The energy supply chain, like nearly all supply chains, is an open and directional system.

7
Q

Innovation

A

Constraints compel invention and creativity in trying to create additional advantages in the form of reduced costs or increased profits. This innovation occurs everywhere in the system - supply, efficiency, demand, cost, and benefit - and is a permanent fixture of the energy system. [CREATING OPPORTUNITIES / GROWTH]

8
Q

Depletion

A

Depletion of the relevant resources or capacity or value or market opportunity (i.e. procuring cheapest and easiest resources first, infrastructure investments deteriorating, competition) [INCREASING COSTS / SCARCITY]

9
Q

Kinetic Energy

A

Energy of motion - manifests at each of four levels: subatomic, atomic, molecular, particle

These correspond with five common energies:
- Electromagnetic / Radiant (subatomic) - radiant waves, e.g., ultraviolet light, visible light, microwaves, x-rays
- Electrical (subatomic) - movement of electrons
- Thermal / Heat (atomic/molecular) - addition of energy to an atom or molecule increases vibration, thereby increasing temperature
- Motion (molecular/particle) - energy resident in an object in motion
- Sound or wave (molecular/particle) - energy moving as compression or vibration in air or water

10
Q

Potential Energy

A

Stored energy; it resides persistently in sources such as fuels that can be combusted. Four common forms are:
- Nuclear energy (subatomic): energy extant in the bonds in every atom that hold subatomic particles together
- Gravitational energy (subatomic): e.g., a waterfall
- Chemical energy (atomic/molecular): found in bonds between atoms and molecules; can be harnessed through forming or breaking these bonds
- Elastic energy (atomic/molecular): springs and polymers hold energy in tension until they regain their natural shape

11
Q

Primary Energy Sources

A

Energy available in nature - cannot be produced and must exist within or be constantly delivered to the energy system from nature

Includes:
- Biomass (potential, chemical)
- Fossil fuel (potential, chemical)
- Nuclear (potential, nuclear)
- Hydropower (kinetic, motion)
- Tidal (kinetic, motion)
- Wind (kinetic, motion)
- Geothermal (kinetic, thermal)
- Solar (kinetic, electromagnetic)
- Animal (kinetic, motion)

12
Q

Prime Movers

A

Machines that are used to harness and transform primary kinetic and potential energy sources into directed and concentrated forms to produce mechanical work. They started out as very basic reciprocating steam engines.
- Have evolved into very sophisticated turbines and combustion devices used to perform industrial work in both stationary devices and transportation vehicles.
- These devices were intended to transform available energy - concentrate it, change its form to something easier to handle, and direct it to specific purposes
- In stationary applications, converting primary energy into electricity

13
Q

Secondary Energy

A

Forms of energy not available in a primary form in the environment, which includes electricity, refined fuels, hydrogen, and other synthetic fuels. (Sometimes referred to as energy carriers)

14
Q

Final Energy Service

A

Final products or services that are delivered by the use of energy (Toasted bread, chilled beer, spun shafts, or transported family members)

15
Q

Scenarios

A

A scenario is different from a forecast. It is a modeling exercise that asks a “what if” question. The modeler establishes an expectation of the relationships among different variables and the output and then constructs a range of scenarios for the inputs. For each scenario, outputs are calculated based on the model parameters. Scenario analysis assumes that the construction of the relationships between variables is sound, but it allows that the values the inputs will take are either subject to significant uncertainty or not known.

By contrast, calling something a forecast usually asserts that both the model and the input assumptions are expected to occur, and therefore the output is expected to approximate future reality.

16
Q

Energy

A

“Ability to do work”

Units: joules (J), watt-hours (Wh), tons of oil equivalent (toe), barrels of oil equivalent (boe), British thermal units (Btu), or calories (cal)

E = P * t

17
Q

Power

A

The rate at which energy is transformed. Power is a rate of flow within the system, corresponding to a rate of change of energy transformed or delivered.

Units:
- joules/second transformed = watt (W)
- a kilowatt-hour of energy delivered over one hour corresponds to a kilowatt (kW)
- barrels of oil per day (bpd)

P = E/t
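The E = P * t and P = E/t relationships can be sketched with a unit-conversion example (the 2 kW heater is an invented illustration; the kWh-to-joule factor is the standard one):

```python
# 1 kWh = 1000 W * 3600 s = 3.6e6 J (standard conversion factor).
KILOWATT_HOUR_J = 3.6e6

def energy_joules(power_w: float, seconds: float) -> float:
    return power_w * seconds   # E = P * t

def power_watts(energy_j: float, seconds: float) -> float:
    return energy_j / seconds  # P = E / t

# A 2 kW heater running for one hour delivers 2 kWh:
e = energy_joules(2000, 3600)  # 7.2e6 J
kwh = e / KILOWATT_HOUR_J      # 2.0 kWh
```

The same numbers run backwards through `power_watts` recover the 2,000 W rating, which is the sense in which power is a rate of energy flow.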

18
Q

First Law of Thermodynamics

A

Law of conservation of energy - all of the energy that enters a closed system must remain in that system as energy, heat, or work produced. Energy can be neither created nor destroyed.

19
Q

Second Law of Thermodynamics

A

In most transformations of one type of energy to another, some amount is wasted or rendered useless. The energy input either creates the desired output (useful energy) or is wasted (wasted energy). Most of the waste takes the form of heat; through entropy, this heat becomes more diffuse, disorganized, and difficult to recapture.

20
Q

Useful Energy

A

The portion of the energy input that creates the desired output or work

21
Q

Wasted Energy

A

The portion of the energy input that is wasted (most is lost as heat, though some can be lost as light, sound, or other vibration)

22
Q

Total Final Consumption / Final Energy Consumption

A

May only be a small fraction of the primary energy supply, but it has been transformed, purified, moved, directed, and distributed to exactly where the customer may find it desirable. Despite the losses, the value to the end customer has increased dramatically.

23
Q

4 Dimensions of Transformation/Fungibility Framework

A
  • What: Changing the form of energy is the purpose of many transformations in the energy system. This can be any type of purification, processing, refining, or straight conversion from one energy type to another. (Low-grade to high-grade, stepping voltage up and down, removal of impurities)
  • Where: Move energy from where it is to where people may find it more useful and valuable. (Firewood harvested, electricity transmitted, petroleum distributed to fueling stations)
  • When: Energy is not always needed at the exact time it is available. Sources of potential energy, including biomass and fossil fuels, have an inherent ability to store their energy over time under some conditions. Any time infrastructure is deployed to assist in the temporal transfer of energy from now until later is a “when” transformation (underground storage, batteries for electricity, tanks for petroleum)
  • How certain: Not all energy sources are available in the exact form, in the right time or place they might be desired. Buffer stocks are used to deal with uncertainties. Infrastructure designed to increase surety that energy will be available when desired.
24
Q

5 Forms of Industrial Capital

A
  • Physical capital
  • Financial capital
  • Intellectual capital: knowledge and technology
  • Political capital
  • Human capital
    (plus Natural capital)
25
Natural Capital
Many other endowments of natural capital are necessary for the complete functioning of the energy system. Examples include water availability, raw materials like metals and elements, and land.
26
Stock
(Boxes, Energy) Rule 1. Stocks are the foundation of any system; they can be seen, felt, counted, or measured at any given time.
27
Flow
(Arrows, Power) Rule 2. Stocks change over time through the actions of flows (mathematically, their rate of change is the first derivative of the amount of stock over time). Rule 3. A stock can be increased by decreasing its outflow rate as well as by increasing its inflow rate.
28
Feedback Loop
Feedback is the communication mechanism between stocks and flows, taking in data about the state of the system and communicating that data to other elements of the system, causing those elements to react by either maintaining or adjusting their behavior. Feedback loops describe a complete cycle of these types of feedback, stocks, and flows that continually update each other. Whether they contain just a few elements or are long and complex loops of interactions, feedback loops have general properties or behaviors that can help to explain system dynamics, including: Rule 4. Feedback loops can be sustaining or they can be reinforcing. Rule 5. Feedback loops only affect future behavior and not current behavior; that is, lags and delays happen.
29
Sustaining/Goal-seeking Loop
The first, a sustaining loop or goal-seeking loop, exhibits properties of stability or equilibrium. When a system that has a sustaining loop detects that stocks are too low, it causes increased inflows (or decreased outflows) and the stocks to rise. If a system detects that stocks are too high, it causes the opposite to occur. (Home thermostat)
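The home-thermostat example can be sketched as a tiny stock-and-flow simulation. This is a minimal sketch: the gain, loss, and temperature parameters are all invented.

```python
# Goal-seeking (sustaining) loop: a heater inflow responds to the gap between
# the temperature stock and a setpoint, while heat leaks to a 5-degree exterior.
def simulate_thermostat(temp, setpoint=20.0, gain=0.5, loss=0.1, steps=50):
    history = [temp]
    for _ in range(steps):
        # Feedback: inflow rises when the stock is below the goal.
        inflow = gain * (setpoint - temp) if temp < setpoint else 0.0
        outflow = loss * (temp - 5.0)   # heat loss to the exterior
        temp += inflow - outflow        # the stock changes only through flows
        history.append(temp)
    return history

hist = simulate_thermostat(10.0)
# The loop is goal-seeking: temperature rises monotonically toward the
# equilibrium where heater inflow balances heat loss (17.5 with these numbers).
```

With these invented parameters the system settles slightly below the setpoint because the proportional heater only balances, rather than eliminates, the steady heat loss; a reinforcing loop would instead drive the stock further from its starting balance each step.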
30
Runaway/Reinforcing Loop
Conversely, runaway loops or reinforcing loops cause a system that is out of balance to go further in that direction. Simple examples might include avalanches, where a small amount of ice and snow dislodges larger amounts, and these larger amounts in turn dislodge even larger amounts further down. Earning interest on a pool of capital causes that pool of capital to grow, increasing the amount of interest that can be earned on it, which causes it to rise more quickly, or compound.
31
System Purpose
Systems, when viewed dispassionately, follow a set of rules and behaviors that supersede the motivations of any one of the actors in the system. Systems are not designed; they emerge from a set of stocks, opportunities to transform those stocks, and behavioral elements that determine how the stocks and flows will change based on the conditions in the system. In aggregate, this system has an outcome, also thought of as the system purpose.
32
Market Power
The extent to which an individual actor has the ability to influence market outcomes
33
Natural Monopoly
Industries or service providers for which it is only economically efficient to have a single provider. In these industries, a single provider persistently achieves cost improvements as it gets bigger (decreasing cost industries). Adding a second provider would raise average cost and reduce overall system efficiency. (Many energy delivery architectures, including electricity grid distribution, exhibit this characteristic of decreasing costs)
34
Crowding Out
One of the main examples of government failure is that of crowding out. Crowding out occurs when a government behaves in a way that supplants the need for the development of market structures and participants to deliver the same good or service. This can happen when governments favor one technology over another through incentives. It can also happen when grants of aid or technology, even though well-intentioned, such as free or subsidized energy generation components in the developing world, alter the incentives of local providers to deliver the same under nascent market structures.
35
Sovereign Risk
The risk that all firms face from unexpected and uncompensated changes in laws or regulations that affect them. A form of regulatory risk, and an example of government failure.
36
Cost of Capital
The rate of return required by investors in a project. This determines which projects are feasible to undertake, since the expected return should equal or exceed the cost of capital.
37
What is a busbar?
A conductive material that connects multiple circuits in electrical systems. Used for distributing power within substations.
38
What does locational marginal price (LMP) represent?
The price of electricity at a specific location, considering supply and demand. Influences market trading and pricing strategies.
39
What is base load?
The minimum level of demand on an electrical grid over a period of time. Typically met by power plants that run continuously.
40
What is peak load?
The maximum level of demand on an electrical grid during a specific time. Often requires additional generation capacity to meet demand.
41
What is capacity factor?
The ratio of actual output of a power plant to its potential output over a period. Indicates how fully the plant's capacity is utilized.
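The ratio is straightforward to compute; the wind-farm figures below are invented but chosen to give a typical value:

```python
# Capacity factor = actual energy delivered / (nameplate capacity * hours).
def capacity_factor(mwh_generated: float, capacity_mw: float, hours: float) -> float:
    return mwh_generated / (capacity_mw * hours)

# A hypothetical 100 MW wind farm generating 306,600 MWh over a year (8,760 h):
cf = capacity_factor(306_600, 100, 8_760)  # 0.35
```

A baseload nuclear plant would score near 0.9 on this metric, while a peaking gas turbine might score below 0.1, even though all three plants may be operating exactly as intended.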
42
What are spinning reserves?
Backup power resources that can be activated quickly to meet sudden demand. Essential for grid reliability.
43
What are ancillary services?
Support services necessary for maintaining grid reliability and stability. Includes frequency control and voltage support.
44
What is cost-of-service recovery (COSR)?
The method by which utilities recover the costs of providing service to customers. Ensures financial stability for utility operations.
45
What is a rate base?
The value of property or assets used by a utility to provide service. Used to determine the allowable revenue for utilities.
46
What are stranded costs?
Costs incurred by utilities that cannot be recovered due to market changes. Often arise during deregulation.
47
What is weighted average cost of capital (WACC)?
The average rate of return a company is expected to pay to its security holders. Used in investment decisions and financial modeling.
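A sketch of the standard WACC formula; the capital structure and rates below are invented for illustration:

```python
# WACC = (E/V) * cost_of_equity + (D/V) * cost_of_debt * (1 - tax_rate),
# where V = E + D. Debt is tax-advantaged, hence the (1 - tax_rate) term.
def wacc(equity, debt, cost_equity, cost_debt, tax_rate):
    v = equity + debt
    return (equity / v) * cost_equity + (debt / v) * cost_debt * (1 - tax_rate)

# Hypothetical firm: 60% equity at 10%, 40% debt at 5%, 25% tax rate.
rate = wacc(equity=600, debt=400, cost_equity=0.10, cost_debt=0.05, tax_rate=0.25)
# rate = 0.6 * 0.10 + 0.4 * 0.05 * 0.75 = 0.075
```

The resulting 7.5% would then serve as a discount rate or hurdle rate for projects of similar risk to the firm as a whole.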
48
What is decoupling in utility regulation?
A mechanism that separates a utility's profits from the amount of energy sold. Encourages energy conservation.
49
What is reserve margin?
The difference between available capacity and peak demand. Ensures reliability during peak periods.
50
What is a hurdle rate?
The minimum rate of return required on an investment. Used in capital budgeting decisions.
51
What is net present value (NPV)?
The difference between the present value of cash inflows and outflows. Used to assess the profitability of an investment.
52
What does internal rate of return (IRR) measure?
The discount rate that makes the net present value of an investment zero. Helps compare the profitability of investments.
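NPV and IRR connect directly: IRR is the rate at which NPV crosses zero. A minimal sketch with an invented cash-flow stream, finding IRR by bisection (valid when NPV crosses zero exactly once):

```python
# NPV discounts each cash flow by (1 + r)^t; cash flow at t=0 is the outlay.
def npv(rate, cashflows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Bisection on the NPV function: NPV is positive below the IRR and
# negative above it for a conventional invest-then-receive stream.
def irr(cashflows, lo=0.0, hi=1.0):
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

flows = [-1000, 400, 400, 400]      # invest 1000, receive 400 for 3 years
value_at_8pct = npv(0.08, flows)    # positive, so the IRR exceeds 8%
rate = irr(flows)                   # roughly 0.097
```

Because the project has a positive NPV at an 8% discount rate, its IRR (about 9.7% here) exceeds that hurdle, which is exactly the comparison capital budgeting uses.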
53
What does LCOE stand for?
Levelized Cost of Electricity. Includes:
- Overnight costs
- Fixed O&M
- Variable O&M
- Fuel costs
LCOE is a measure of the average net present cost of electricity generation for a generating plant over its lifetime.
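A simplified LCOE sketch combining those cost components: lifetime discounted costs divided by lifetime discounted generation. All input figures are invented, and real LCOE models add detail (construction timing, degradation, taxes) omitted here.

```python
# LCOE = (overnight cost + discounted annual costs) / discounted lifetime energy.
def lcoe(overnight_cost, annual_fixed_om, annual_var_om, annual_fuel,
         annual_mwh, years, discount_rate):
    # Present-value factor for a level annual stream over the plant's life.
    disc = sum(1 / (1 + discount_rate) ** t for t in range(1, years + 1))
    costs = overnight_cost + (annual_fixed_om + annual_var_om + annual_fuel) * disc
    energy = annual_mwh * disc
    return costs / energy   # $/MWh

# Hypothetical small plant, 20-year life, 7% discount rate:
price = lcoe(overnight_cost=1_000_000, annual_fixed_om=20_000,
             annual_var_om=5_000, annual_fuel=30_000,
             annual_mwh=8_000, years=20, discount_rate=0.07)
```

Note that discounting the energy term is what makes the result "levelized": a dollar of cost and a megawatt-hour of output in year 15 are both weighted less than in year 1.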
54
What are overnight costs?
Costs incurred to build a generating plant before it starts operating. These costs include all expenses associated with construction, excluding interest during construction.
55
What do O&M costs refer to?
Operations and Maintenance costs. O&M costs are the expenses associated with the operation and upkeep of a power plant.
56
What are fuel costs?
Expenses related to the procurement of fuel for electricity generation. Fuel costs can vary significantly depending on the type of fuel used and market conditions.
57
What is a discount rate?
The interest rate used to determine the present value of future cash flows. The discount rate reflects the opportunity cost of capital.
58
What is heat rate?
A measure of the efficiency of a power plant, typically expressed in Btu/kWh. A lower heat rate indicates a more efficient plant.
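Heat rate converts directly to thermal efficiency using the standard 3,412 Btu thermal equivalent of one kWh of electricity:

```python
# Efficiency = (energy out) / (energy in) = 3,412 Btu/kWh / heat rate.
BTU_PER_KWH = 3412

def efficiency_from_heat_rate(heat_rate_btu_per_kwh: float) -> float:
    return BTU_PER_KWH / heat_rate_btu_per_kwh

# A plant burning 6,824 Btu of fuel per kWh delivered is 50% efficient:
eff = efficiency_from_heat_rate(6824)
```

This is why a lower heat rate means a better plant: fewer Btu of fuel in per kWh of electricity out.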
59
What does busbar cost mean?
The cost of electricity at the point where it is delivered to the grid. Busbar costs reflect the total costs of generating electricity, excluding transmission and distribution costs.
60
What is real LCOE?
The levelized cost of electricity adjusted for inflation. Real LCOE provides a more accurate representation of the cost over time.
61
What is the market-clearing price?
The price at which supply equals demand in a competitive market. This price determines the revenue for electricity generators.
62
What does merit order refer to?
The ranking of power plants based on their marginal costs of generation. Plants with lower costs are dispatched first to meet demand.
63
What is marginal cost?
The cost of producing one additional unit of electricity. Marginal cost influences the market-clearing price in competitive markets.
64
What is a supply stack?
A graphical representation of the available generation resources and their costs. The supply stack helps determine which resources will be used to meet demand.
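Merit order, the supply stack, and the market-clearing price fit together in a few lines. This is a sketch with an invented four-plant stack; real markets add transmission constraints, bids, and ancillary products.

```python
# Sort plants by marginal cost (merit order), dispatch up the supply stack
# until demand is met; the last dispatched (marginal) unit sets the price.
def clearing_price(plants, demand_mw):
    remaining = demand_mw
    for name, capacity_mw, marginal_cost in sorted(plants, key=lambda p: p[2]):
        remaining -= capacity_mw
        if remaining <= 0:
            return marginal_cost   # price set by the marginal plant
    raise ValueError("demand exceeds total capacity")

# Hypothetical stack: (name, capacity in MW, marginal cost in $/MWh)
stack = [("coal", 300, 30.0), ("wind", 200, 0.0),
         ("gas_ct", 100, 80.0), ("gas_cc", 250, 45.0)]
price = clearing_price(stack, demand_mw=600)   # gas_cc is marginal, price 45.0
```

At low demand the zero-marginal-cost wind sets a price of zero; as demand climbs the stack, progressively costlier plants become marginal, which is how the clearing price moves with load.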
65
What are feed-in tariffs (FITs)?
Payments to energy producers for the electricity they generate from renewable sources. FITs encourage the development of renewable energy by providing stable pricing.
66
What does project finance refer to?
Financing based on the cash flow generated by a specific project rather than the overall balance sheet. Project finance often involves special purpose vehicles to isolate risk.
67
What does the reserve-to-production ratio (R/P Ratio) indicate?
The relationship between proven reserves and annual production. A higher R/P ratio suggests a longer lifespan of resource availability.
68
What is a shadow price?
The estimated price for a good or service that is not normally priced in the market. Used in cost-benefit analysis and environmental economics.
69
1 Completion risk
The risk that a project, once construction commences, fails to be completed on time or on budget. Completion risk also covers projects that fail to deliver the expected level of output or require additional inputs at the time of construction.
70
2 Revenue risk
The risk that the expected receipts for the sale of the outputs of a project fail to meet expectations over time. This can happen because the price received for outputs is lower than expected for part or all of the period of operation or because some volume of output cannot be sold into the marketplace.
71
3 Supply risk
The problem of either unexpectedly costly inputs or supply chain disruptions. While some of these supply risks can be hedged, some cannot, at least not at a reasonable price.
72
4 Technology risk
The risk that the components used in a project, once commissioned, fail to meet technical performance expectations over the lifetime of the project. Project performance, efficiency, conversion ratio, and throughput tend to degrade over time, but if they degrade more than expected at the outset of the project, they can negatively impact the project economics and compensation of investors.
73
5 Operational risk
The risk that the project, once put into service, fails to meet expectations of output. Projects that fail to meet lifetime expectations also possess operational risk.
74
6 Policy and political risk
Policy risk is the risk that policies imposed after commencement might alter the economics of the project, such as restrictions on operation, additional cost for environmental controls or safety, or changing the method of compensation for project output. Political risk, though similar, deals with international risks facing projects largely sited in foreign locations, including threat of war, civil strife, asset expropriation by host governments, or restrictions on repatriating proceeds.
75
7 Environmental risk
The potential for a project to be delayed due to regulatory or environmental approvals, or to be shut down due to a failure to comply with existing (or even future) environmental rules. This represents a variation of both completion risk and operational risk, specifically tied to environmental concerns. As described in Chapter 12, not all technologies share equal exposure to these risks, but every technology is affected by some environmental impact that could present a risk.
76
7 Types of Project Risks
1. Completion risk
2. Revenue risk
3. Supply risk
4. Technology risk
5. Operational risk
6. Policy and political risk
7. Environmental risk
77
Define the product development lifecycle.
The process of bringing a new product to market, including stages like conception, design, and launch.
- Tech R&D (government R&D, VC, corporate VC)
- Manufacturing scale-up (public and private equity markets, tax breaks)
- Roll-out (asset finance, project finance)

■ Technology research and development (R&D): R&D spending typically involves existing corporations and technology solutions, and it covers devices, components, and installation mechanisms. This type of spending ranges from research spending on basic science and early concept testing to late-stage development of prototypes. Early-stage R&D spending can come from government R&D programs and agencies or universities, often subsequently taken over by some combination of third-party venture capital, private equity, or corporate investment (corporate venturing) sources willing to take substantial risks to obtain intellectual property and later monetize it. This higher risk must be compensated through higher expected returns.
■ Manufacturing scale-up: Once the product or process has passed initial testing and seems technically sound, a major uncertainty remaining is how quickly the production cost can be driven down and at what scale long-term competitive economics can be achieved. Facilities for manufacturing economic quantities of components or devices must be established, and capital-intensive technologies tend to have substantial manufacturing requirements and therefore need substantial financial capital. Larger factories help bring down unit costs for manufactured products (i.e., they enjoy economies of scale), but they are also more expensive and therefore exposed to more financial risk if the product fails to be commercialized. Companies that raise money in public and private equity markets (and occasionally corporate debt markets when their financial strength allows it) typically make balance sheet investments for these manufacturing scale-up investments. Sometimes state or local government economic planning agencies targeting local manufacturing of particular technologies or development of skills can help finance the costs through a variety of manufacturing incentives or tax breaks. This type of financing is still considered fairly risky, so it commands a higher required rate of return, though often lower than R&D funding.
■ Rollout: Finally, project developers that purchase components from these technology manufacturers must also have access to financing for project development. These components are used to build energy generation projects, which, like any energy asset, also must be financed to help amortize their cost over the projected lifetime of the project. This type of investment, called asset finance, is described throughout many of the appendices in this section and includes both debt and equity financing deployed from either corporate balance sheets or in separate project finance vehicles. As many of the technology risks have been mitigated and most of the supply and revenue risks have become well understood before commencement, the required rates of return for these types of projects can be much lower but still vary based on the remaining risks of the underlying project.

The lifecycle helps in managing the product from initial idea to market release.
78
What are production tax credits (PTCs)?
Tax incentives provided to producers of renewable energy based on the amount of electricity generated. PTCs are aimed at promoting investment in renewable energy technologies.
79
What is a renewable portfolio standard (RPS)?
A regulation that requires a certain percentage of energy to come from renewable sources. RPS policies are designed to increase the use of renewable energy in the energy mix.
80
What are renewable energy certificates (RECs)?
Tradable certificates that represent proof that energy was generated from renewable sources. RECs are used to track renewable energy generation and compliance with RPS.
81
What is the experience curve?
A phenomenon where the cost per unit of production decreases as cumulative production increases. The experience curve is often used to predict cost reductions in technology sectors.
82
Building envelope
HVAC—Space heating, cooling, and ventilation needs are driven by the thermal characteristics of the building, including the thermal efficiency of the insulation, windows, doors, and roof—collectively, the building envelope, or the collection of components that separate the building from the external environment. Tight building envelopes reduce the need for HVAC, thereby increasing building efficiency. Loose building envelopes increase the energy requirements.
83
Decoupling
However, utilities are not always enthusiastic suppliers of EE solutions. In fact, energy efficiency can run counter to the typical utility business model of selling more electrons and recovering larger revenues based on that strategy. It can also have the perverse effect of increasing the volumetric rates (rate per kilowatt-hour) that customers pay as utilities spread more fixed costs over fewer kilowatt-hours being sold. (See the Metrics Sidebar below.) Utility regulators in many jurisdictions have recognized this conflict and adjusted the revenue mechanisms to accommodate it. One of the primary ways of dealing with this has been the establishment of decoupling (or revenue decoupling), which allows utilities to separate the total amount of cost recovery (importantly, including the gross profit or contribution they would have otherwise received) from the actual amount of electricity sold in the period. This way, the utilities eliminate both the temporary and permanent disincentives to deploy EE programs. California is particularly known for its successful decoupling program, originally established in the 1970s, which has helped lead to a per capita electricity consumption about half of the rest of the United States. While decoupling does remove the disincentive to deploy EE programs, it does not create a particularly powerful incentive. Some jurisdictions have gone further by creating incentive regulation, which increases the revenue, profit, or both when utilities meet or exceed efficiency performance benchmarks.
84
Demand response (DR)
In contrast to the many EE solutions described above, demand response (DR) represents a set of solutions deployed with the goal to change the peak load of electricity consumption. DR solutions are not about substantively changing the actual energy consumption over the long term but about making sure that demand can be managed to optimize the peak capacity needs of the electricity system, particularly when the grid is most constrained and therefore vulnerable to failure. In this way, demand response represents more of a power application, in contrast to the energy applications of energy efficiency. When done correctly, reducing or shifting the load of individual devices or collections of devices on the system at peak times helps reduce the overall capacity needs. Doing so substantially changes the supply and demand dynamics in electricity markets by creating some demand elasticity that does not exist in more traditional models of utility dispatch.
85
pumped hydropower
Pumped hydropower (gravitational potential energy)—Pumped hydropower storage is the method of pumping (using electricity) either fresh or salt water to a higher elevation, and storing it in some reservoir for later use (see Figure 10.4). When the energy is required, the water can be run, using gravity, through a turbine to generate electricity. Where the geology allows this, substantial storage can be created cheaply, and the overall efficiency of the process is quite high.
86
compressed air energy storage (CAES)
Compressed air (elastic potential energy)—Compressed air energy storage (or CAES) uses an electric-powered compressor to force air into a closed container, and the electricity can be recaptured later by the release of that pressure driving an air engine or pneumatic motor. While this technology can be used for vehicles and other portable applications, it is currently used primarily for grid-connected energy applications, which utilize large underground storage (underground CAES) in the form of geological formations or potentially depleted natural gas reservoirs.
87
flywheels
■ Flywheels (rotational kinetic energy)—Flywheels are spinning shafts or discs that are accelerated using electricity. The momentum of the flywheel can be converted easily into electricity using a dynamo. Because of the physical momentum of the devices, the amount of energy that can be safely stored in flywheels is limited, but flywheels are increasingly recognized as a technical option of high-power grid-connected storage for ancillary services, with potentially very long device lifetimes and overall high efficiency when constructed properly.
88
capacitors
Capacitors are small electronic devices that can store electric charge using electric plates and a separating (dielectric) insulator. Capacitors are used throughout the electricity system to manage the flow of electrons in everything from computer chips to large generating stations. Capacitors do not store large amounts of energy very well, but the ability to charge or discharge their energy quickly gives them an extremely high power rating, or power density. Supercapacitors (also called ultracapacitors) are capacitors with technical features that allow them to be scaled up to a larger size. This larger size allows them to store meaningfully large amounts of energy and also discharge and recapture at very high power ratings. This makes them suitable for applications such as regenerative braking, as well as other electric vehicle applications, though they are primarily used for power management and not for the energy required to propel vehicles over long distances.
89
superconducting magnetic energy storage (SMES)
Superconducting magnetic energy storage (SMES) allows energy storage in a magnetic field. By using supercooled conductors, which have nearly no resistance, energy can be cycled through conduit loops for a long time with very little loss. Another advantage of the system is the nearly instantaneous response time to operational signals.
90
specific energy
Specific energy—Specific energy is the amount of energy that can be stored in the device or system per unit of mass (technically, gravimetric energy density, to differentiate it from volumetric energy density, which is measured per unit of volume). Clearly, being able to store energy in less mass is preferable, particularly when the cost of the device is heavily driven by the cost of the materials used in it.
91
specific power
Specific power and power density—Both of the concepts above can be measured as functions of power rather than energy if the application is intended to provide primarily power outputs. Specific power is the amount of power that can be delivered by the device or system per unit of mass, and power density is the analogous measure per unit of volume. Clearly, being able to deliver power from less mass is preferable, particularly when the cost of the device is heavily driven by the cost of the materials used in it.
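As a concrete illustration, the two mass-based metrics divide stored energy and deliverable power by device mass. All figures below are made-up illustrative values for a hypothetical battery pack, not data from the text:

```python
# Hypothetical battery pack (illustrative numbers only).
mass_kg = 3.0            # pack mass
energy_wh = 720.0        # total stored energy
peak_power_w = 1800.0    # maximum sustained discharge power

# Gravimetric metrics: divide by mass.
specific_energy_wh_per_kg = energy_wh / mass_kg    # Wh/kg
specific_power_w_per_kg = peak_power_w / mass_kg   # W/kg

print(specific_energy_wh_per_kg, specific_power_w_per_kg)  # 240.0 600.0
```

Dividing the same quantities by the pack's volume instead of its mass would give the corresponding volumetric densities.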
92
round-trip efficiency
Round-trip efficiency—This refers to the efficiency with which energy can be stored and then converted back into electricity in the device or system. High round-trip efficiency means that most of the input energy comes back as useful output, minimizing the losses in both the physical and the economic senses.
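A minimal sketch of the calculation; the function name and example figures are chosen here for illustration:

```python
def round_trip_efficiency(energy_in_kwh, energy_out_kwh):
    """Fraction of the energy put into storage that is later
    recovered as useful electrical output."""
    return energy_out_kwh / energy_in_kwh

# Example: 100 kWh used to charge a battery, 85 kWh discharged later.
eta = round_trip_efficiency(100.0, 85.0)
print(eta)  # 0.85
```

The same ratio can also be built up as the product of separate charging and discharging efficiencies when those are known individually.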
93
frequency regulation
Using electric storage for short-term applications has economic advantages compared with using it for longer-term applications. Storage typically has a large upfront capital cost, which needs to be recouped through as much value creation or use as possible. Frequency regulation is the short-term management of the supply and demand balance in the grid that keeps the system operating within acceptable parameters of voltage and current. It helps avoid tripping or curtailing assets, which can lead to expensive cascading failures. In restructured electricity markets, frequency regulation services are potentially needed at any given moment for standby power and execution in both up (adding) and down (reducing) markets, depending on the direction in which the system is imbalanced. Storage solutions are particularly effective at meeting this need due to their requirement to both take in and deliver electricity in balance over a given period of time. They can participate in up markets at some times and in down markets at others, often in rapid succession. These are sometimes referred to as balancing markets in Europe. Electricity storage, particularly from batteries or electronically controlled devices, has additional advantages in functioning as capacity for system regulation—it has a very fast response time (sometimes reacting in less than one second from receiving an instruction) compared with other types of storage and nearly all types of generators, which need much longer to ramp up their capacity. This makes energy storage a particularly effective spinning reserve (a reserve that can be called on to deliver electricity almost instantly), though adequate compensation from utility operators for this faster response time is not universally provided in regulation or through PUCs.
94
peak shaving
One common goal for electricity storage is to meet some of the same challenges that demand response targets discussed in Chapter 9—reducing the peak energy requirements of the grid, particularly at its most constrained times of the day or year. Very small amounts of energy shifting, or peak shaving, at these moments can have substantial impacts on the system's power requirements and overall stability. The value of reducing these peak power needs can be significant, since under current Dutch auction market structures, the reduction in the wholesale power price benefits all customers. Also, with storage, a grid operator may be able to deploy fewer generation (and potentially transmission and distribution) assets to prepare for these rare but inevitable peaks. For these reasons, there is value to this peak shaving with the potential for compensation.
95
system regulation
Ensuring the reliable and stable operation of the grid --- The overall flexibility of being able to dispatch electricity from energy storage, like for other forms of generation, helps improve the operator's system regulation. It provides capacity to the system, which should generally be compensated under whichever method and degree of regulation the utility operator is subject to.
96
firming renewable energy
Finally, an emerging power application involves using storage in firming renewable energy. Many emerging renewable energy options, such as wind and solar, rely on kinetic energy resources that are intermittent and not perfectly predictable. While the average amount can be reasonably well known, relying on this energy as being dispatchable is not possible unless it can be stored for minutes, hours, or even days, depending on the nature of the resource. Electric storage is ideal for these applications because of the matching ins and outs and high power needs of these applications. This kind of storage can be located anywhere—from the generator site to the grid—and can provide value for both smoothing and grid regulation at the same time.
97
time shifting
Energy applications for storage move chunks of energy from the time they are generated to another time when they are more valuable. This time shifting can be for hours or days, depending on the design of the system and the amount of energy available. Solutions of this type can take a couple of different forms. The first is load shifting, which allows an energy input from a few hours earlier to be delivered at the peak and is compensated by the differential value for the electricity between when it is stored and when it is withdrawn. This particular configuration is also very useful when generation may be closely coincident with the peak but not exactly, such as for solar generation technologies in places with a late afternoon or early evening load peak. Moving the supply of the generated electricity a few hours later greatly improves the economics and functioning of the overall system.
98
day-night arbitrage
If energy value drivers only occur once per day, then the ideal proposition is to buy power at the cheapest part of the day (often at night) and sell it back at the peak hours of the next day (i.e., in the middle of the day or late afternoon/early evening). This is termed day-night arbitrage, and Figure 10.10 shows how it works in theory. Because this process also reduces the top and fills in the bottoms of the load curve, it is sometimes referred to as load leveling.
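The economics of one buy-low/sell-high cycle can be sketched as follows. The prices, volumes, and function name here are hypothetical; note that round-trip losses shrink the energy available to resell at the peak:

```python
def daily_arbitrage_margin(buy_price, sell_price, energy_in_kwh, round_trip_eff):
    """Profit in dollars from one day-night arbitrage cycle.
    Only the round-trip-efficient fraction of the stored energy
    survives to be sold back at the peak."""
    cost = buy_price * energy_in_kwh
    revenue = sell_price * energy_in_kwh * round_trip_eff
    return revenue - cost

# 1 MWh bought at $0.04/kWh overnight, resold at $0.15/kWh at the
# next day's peak, with 85% round-trip efficiency.
margin = daily_arbitrage_margin(0.04, 0.15, 1000.0, 0.85)
print(margin)  # positive margin (dollars per cycle)
```

The cycle is only worthwhile when the price spread is wide enough to cover both the round-trip losses and the storage device's capital recovery.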
99
levelized cost of storage (LCOS)
Storage devices (just like generators) that are used with different capacity factors, incur different O&M costs, or consume different fuels may change the economics of operating these technologies in practice. Aggregating asset performance characteristics and capital costs into a unit for broader comparison is precisely what the LCOE calculation derived in Chapter 5 was designed to solve for generators. These concepts can be converted for use in electric storage applications with a very similar method, called levelized cost of storage (LCOS), which includes the overnight capital cost, fixed O&M, and fuel (charging) costs. Because the power rating of a storage device is much easier to measure and more persistent over time, standardizing the cost per watt of the device is also easier to calculate. However, for most of the time-shifting and energy applications for storage, evaluating their economics on a per kilowatt-hour ($/kWh) basis is a more appropriate and relevant standardizing cost metric. Combining these data allows a calculation of LCOS for different technologies in a specific application. A correctly constructed LCOS calculation allows not only comparison of the economics across different storage technologies but also the comparison of storage to other generation technologies that might meet the same market need. One major caveat in LCOS calculations, however, is that storage often provides many value streams simultaneously. Since storage is a basket of benefits that may include energy, power, risk mitigation, and time shifting—all of which derive from the same capital investment in the storage application—when assessing the economic competitiveness of a storage application, it may not be appropriate to attribute the entire cost of the storage device to just one of the value streams, such as energy.
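A simplified LCOS sketch following the same discounting logic as an LCOE calculation: all discounted lifetime costs divided by all discounted lifetime energy delivered. The function and all cost figures below are illustrative assumptions, not values from the text:

```python
def lcos(overnight_cost, fixed_om_per_yr, charging_cost_per_yr,
         energy_out_kwh_per_yr, life_yrs, discount_rate):
    """Levelized cost of storage in $/kWh discharged: discounted
    lifetime costs divided by discounted lifetime energy delivered."""
    total_cost = overnight_cost          # capital cost paid up front
    total_energy = 0.0
    for t in range(1, life_yrs + 1):
        df = (1.0 + discount_rate) ** -t
        total_cost += (fixed_om_per_yr + charging_cost_per_yr) * df
        total_energy += energy_out_kwh_per_yr * df
    return total_cost / total_energy

# Hypothetical grid battery: $300k capital, $5k/yr O&M, $10k/yr of
# charging energy, 150,000 kWh delivered per year, 10-year life, 6% rate.
print(round(lcos(300_000, 5_000, 10_000, 150_000, 10, 0.06), 3))
```

As the caveat above notes, attributing this entire cost to a single value stream (such as energy) can understate the competitiveness of a storage asset that also earns capacity or regulation revenue.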
100
grid hardening
Grid hardening—protection against natural or human-made loss of asset use --- Grid hardening initiatives, islanding capabilities (i.e., being able to operate when disconnected from the grid) for campus or other isolated grids, and military applications are all creating microgrid benefits and will use increasing amounts of storage to support their deployment.
101
Distributed generation (DG)
The options for producing energy at, or very near, the customer's load. ---- One of the most profound changes in the overall structure of the electricity system in the last decade has been the rapid emergence of distributed generation (DG) options for producing energy at, or very near, the customer's load. (See Figure 11.1.) This generation is increasingly on the customer side of the electricity meter, creating interesting implications for the traditional grid operation model. What is new, though, is the emergence of a set of technologies that is cost-effective at the very small scale and that may be located and sized to match the specific customer load. To date, this phenomenon has been driven by solar photovoltaic (PV) devices, but these devices are just one among a class of technology options for generating electricity at a home, a business, or an industrial site. Many such generation options are getting cheaper at an astounding rate and are approaching cost-effectiveness in their own right. This process has been helped by favorable subsidies and targeted policy in many places around the world over the last decade, but, increasingly, DG technologies are becoming viable even without government support.
102
Inverter
Integrating these off-grid systems with devices that run on AC power, instead of the DC power a PV module or battery produces, requires an inverter to convert DC to AC. As with all transformations, this device incurs some losses of useful energy in its conversion and requires additional capital investment, but the advantages may be worthwhile enough to pair AC components with DC power production. Once a system can manage AC power, it can also be supplemented with a generator to ensure that power is available even when sunlight is not and the batteries have been depleted. These hybrid PV systems give additional assurance of electricity under a wider range of needs and ambient conditions.
103
Rate design
Rate design—As discussed in Chapter 4, the allocation of the grid's costs to the various users of its services is done through the process of rate design. Today's rate design is predominantly driven by volumetric considerations, and it allocates the costs over the volume of energy used by the customers. However, the specific features of the rate design can dramatically affect the economics of the DG intervention. Some considerations for rate design include: -- Flat rate vs. time of use -- Connection charges -- Demand charges
104
Learning rate (LR)
The learning rate (LR) is defined as the percentage drop in the cost to produce the technology for each doubling of cumulative production.
105
Progress ratio (PR)
The progress ratio (PR) is defined as one minus the learning rate (PR = 1 – LR).
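The two definitions combine into a simple cost projection: with learning rate LR, each doubling of cumulative production multiplies cost by PR = 1 − LR. A sketch, with illustrative numbers and a hypothetical function name:

```python
import math

def projected_cost(initial_cost, initial_volume, new_volume, learning_rate):
    """Experience-curve projection: cost falls by the learning rate
    with every doubling of cumulative production (PR = 1 - LR)."""
    progress_ratio = 1.0 - learning_rate
    doublings = math.log2(new_volume / initial_volume)
    return initial_cost * progress_ratio ** doublings

# With a 20% learning rate, three doublings (1 -> 8 units) cut cost
# to 0.8 ** 3 = 51.2% of its starting value.
print(projected_cost(100.0, 1.0, 8.0, 0.20))
```

Because the formula works in doublings, the same learning rate implies ever-larger absolute production volumes for each successive cost reduction.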
106
Product innovation
Product innovation is generally the ability to improve the performance of a device—often through R&D or design innovation—and results in a productivity boost, or more of the desired output for a given device. This effectively reduces the input materials or costs required per unit of output, and it can be thought of as typically targeting improvements in the denominator, or output, of the standardized cost calculation.
107
Process innovation
However, process innovation does not normally affect the performance of the device but instead drives down the costs of manufacturing it. While this can sometimes include modest adjustments to material specifications or input costs, it often includes manufacturing process adjustments, reduction of the number of process steps, and scale economies in manufacturing. In this way, process engineering typically targets the cost elements of standardized costing; that is, the numerator.
108
Parity
The point at which a technology becomes competitive with the current competitive solution for a particular customer need in a market is called parity. As with all competitive analysis, clearly identifying the customer need that is being met in the market requires understanding the fungibility, or substitutability, of one solution vs. another. However, when one formerly expensive technology falls in cost to parity with the current best solution in the marketplace, market dynamics have the potential to shift dramatically. Figure 11.11 shows how the PV experience curve is moving toward a break-even point, or parity, with its direct alternatives. This point of parity can be established in any market with an emerging alternative and an industry incumbent, but for distributed PV it is specifically defined as grid parity. Grid parity is the point at which distributed PV falls to the same cost as the grid electricity it displaces.
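Combining parity with the experience-curve idea, one can estimate how much cumulative production would be needed before an emerging technology's cost falls to a target (e.g., grid) price. This is a hypothetical sketch, not a calculation from the text:

```python
import math

def volume_at_parity(current_cost, current_volume, target_cost, learning_rate):
    """Cumulative production at which an experience curve with the
    given learning rate falls to the target price."""
    progress_ratio = 1.0 - learning_rate
    # Number of doublings d so that current_cost * PR**d == target_cost.
    doublings = math.log(target_cost / current_cost) / math.log(progress_ratio)
    return current_volume * 2.0 ** doublings

# Hypothetical: $2.00/W today at 100 GW cumulative, 20% learning rate,
# $1.00/W parity target -> roughly 8.6x today's cumulative volume.
print(volume_at_parity(2.00, 100.0, 1.00, 0.20))
```

The gap between today's volume and this parity volume is what the learning investment triangle (next card) prices out.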
109
Learning investment triangle
the learning investment triangle defines the total excess costs above the market alternative that need to be incurred to see a technology reach the scale, and therefore the cost, to make that technology competitive in the marketplace. --- Another interesting question about parity arises from the realization that prior to parity, technologies tend to be uneconomic, and therefore not likely to be aggressively deployed or to easily achieve the resulting cost reductions that would enable them to reach parity. Looking again at Figure 11.11, the learning investment triangle quantifies this total excess spending above the market alternative that must be incurred before the technology reaches competitive scale and cost.
110
Disruptive technology
Emerging technologies that bypass the limiting features of the incumbent delivery architecture and offer wholly new and cost-effective solutions. These new offerings are referred to as disruptive technology and can cause substantial change to a system's overall performance and character. --- If the emerging technology is absorbed into the existing infrastructure and its deployment is limited by the incumbent delivery architecture, the new technology is referred to as a sustaining technology. Such innovations may improve or optimize the incumbent business model and its economics but do not fundamentally change the system architecture. Other times, however, the incumbent technology providers cannot change the price or respond effectively even as the emerging technology continues to get cheaper. Some emerging technologies can bypass the limiting features of the incumbent delivery architecture and offer wholly new and cost-effective solutions to the incumbent's customers. These new offerings are referred to as disruptive technology and can cause substantial change to a system's overall performance and character. Harvard Business School professor Clayton Christensen is one of the leading thinkers on disruptive innovations and describes the characteristics that these innovations typically share. Figure 11.12 demonstrates some of these characteristics, including: ■ Their technology is not new—Disruptive technologies rarely start out as a profound technological breakthrough. They are not usually some incredible product innovation discovered in a lab or garage; instead, they represent existing technologies that have some current use or verified performance, without which they would have trouble gaining the necessary confidence of customers or investors. ■ They are initially more expensive—Disruptive technologies typically start out more expensive due to their lower scale of deployment and earlier stage of development.
■ They start out niche oriented—Because they are more expensive, these disruptive technologies will find their initial application in niche applications where the customer value proposition is much higher or the incumbent solution is unavailable. ■ They build scale and drive trust as a mass solution—As customers get comfortable with these technologies in the initial applications, increased deployment can drive down costs (through experience curve effects) while increased visibility can improve the risk profile for customers in larger and more price-sensitive markets. ■ They reach parity—Lower costs through higher volumes can trigger a positive feedback loop until price thresholds are crossed and the formerly more expensive niche technology becomes the best solution for mass-market application.
111
Duck curve
Over time, however, the additional generation during the middle part of the day is changing the low profile observed by utility operators. Figure 11.17 shows a graph—colloquially called the duck curve due to its shape—that demonstrates how quickly these effects are occurring as midday solar generation is reducing the net load the grid needs to supply. Even as peak load growth increases in the early evening, midday power needs are falling dramatically. This creates a situation of much more dynamic generation requirements from the rest of the generation base as load ramps up and down quickly. It also exposes grid operators to rising risks from a growing share of intermittent generation.
112
Grid defection
severing their connection to the traditional grid altogether --- Various combinations of electricity generation and storage options through microgrids are expected to increasingly enable generation of electricity more cheaply at the load while retaining the high reliability that grid customers expect. As more people adopt microgrid solutions and gain comfort with their technical and economic features, they may begin to question the need to simultaneously maintain both their local electricity supply and management through a microgrid and their connection to the traditional grid. Under some circumstances, customers may choose grid defection, or severing their connection to the traditional grid altogether. The simplest technical mechanism of grid defection may include the combination of distributed solar generation and storage, as is already done in off-grid applications today. Under the distributed solar and storage model, there are different parity and break-even points, but the grid defection parity point has already been reached in sunny and remote areas and will occur in others as well. However, microgrids that include fuel-based generators may eventually be favored due to their ability to store substantially larger amounts of energy and thus provide more service reliability to customers. Customers with both distributed renewable generation options and a connection to a natural gas distribution network may get rid of their connection to the electric grid. The economic choice to do so is another matter and will be driven by local resource endowments, costs, and price volatility. However, falling costs of DG components vs. grid alternatives will likely make grid defection more economically competitive in the future.
Grid parity that triggers customer defection would strand costly utility assets that have economic lives planned well beyond the time it would take to reach parity, and the resulting rising electricity rates from underutilized grid assets would accelerate incentives for more customers to defect—referred to as an economic death spiral for utilities. As a multitude of microgrids begin allowing circumvention of the grid, the traditional centralized grid architecture will continue to be undermined, and utilities and their regulators may be forced to rethink the very nature of utilities’ relationships to their customers and to society.
113
Learning curve / experience curve
For many products (and services too, but for ease of exposition, this section will refer just to products), there is a clear negative relationship between the cost to produce something and the amount produced. Essentially, the more time and effort people and companies invest in producing certain things, the better they get at it and the cheaper those things become. When applied to a single person or a firm, this effect is sometimes referred to as a learning curve, which shows how much more efficiently an operation is executed the more times it is performed by a single operator. But the concept of learning can expand beyond the single operator or process and apply to whole industries. Aggregate learning (or “experience”) in industries occurs through many complex and diffuse mechanisms, and it tends to benefit not only the firm or person making the effort but also the whole industry as new methods are developed and copied and as competition rewards innovators with market share growth. The analytical tool for understanding this broader march of product and process innovation across many firms (and even nations) is an experience curve. It is derived by plotting the observed market price or cost data for a type of product on the y-axis against the cumulative volume produced for that product by all manufacturers on the x-axis (and using a logarithmic scale for both to compensate for the typically exponential nature of the growth). Figure 11.7 gives an example of this analysis.
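The slope of such a log-log plot can be estimated from observed price and cumulative-volume data with an ordinary least-squares fit; the progress ratio is then 2 raised to that slope. This sketch uses synthetic data generated from a known 20% learning rate, not the data behind Figure 11.7:

```python
import math

def estimate_learning_rate(volumes, prices):
    """Least-squares slope of log(price) vs log(cumulative volume);
    progress ratio PR = 2**slope, learning rate LR = 1 - PR."""
    xs = [math.log(v) for v in volumes]
    ys = [math.log(p) for p in prices]
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    progress_ratio = 2.0 ** slope
    return 1.0 - progress_ratio

# Synthetic data: price falls 20% with each doubling of volume.
vols = [1, 2, 4, 8, 16]
prices = [100.0 * 0.8 ** i for i in range(5)]
print(round(estimate_learning_rate(vols, prices), 3))  # 0.2
```

Real industry data are noisier, so the fitted learning rate depends on the period and the price proxy chosen.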
114
Wholesale electricity markets
Wholesale electricity markets, specifically for electricity sold in real-time and day-ahead markets, are among the easiest to understand. Very formal structures are set up by grid operators to procure energy from various generators. Once a generator is qualified to provide electricity to the market at a very specific place and time, deciding among the generation options is a matter of a structured auction system. Traditional generators, including fossil fuel, nuclear, hydropower, and other renewable generators, all bid into these short-term markets when and where they are available, and those with the lowest marginal cost are chosen first until the demand is met. Once they are winners in the market, they are obligated to dispatch their energy when and in the amount expected. Since these markets operate continually, they include the needs for all base, intermediate, and peak power times. Based on the relative cost structures and value drivers of the supply options, different technologies are more suitable to meet different energy needs.
115
Capacity markets
Capacity markets, where formalized, tend to be highly structured and very specific in the method of engagement for prospective suppliers, just like other wholesale power markets. Many of the generation technologies bid into these capacity markets, which allow them a supplemental revenue stream to their energy sales. In some of the more highly restructured US wholesale markets, the opportunity to supply capacity has been expanded to accommodate third-party demand response, but the method of supply and the appropriate compensation method are still being determined and sometimes litigated. Setting up these rules would help other solutions emerge to fill the needs as well, including more grid-scale storage as its costs fall and its value improves.
116
Ancillary service markets
Ancillary service markets are also highly structured, where they exist, but are not as fully developed as energy or capacity markets. In addition to the standby fossil generators and hydropower spinning reserves available to meet these system regulation needs, emerging storage technologies, particularly high-power applications like flywheels, are being explored. As mentioned in Chapter 4, some of the recent changes in rules for providing ancillary services have bifurcated the slow response and fast response markets, thereby adding more gradations of time and uncertainty, which can improve compensation for devices that can take advantage of them with fast response, such as most emerging storage technologies.
117
Distributed generation markets
Distributed generation markets are potentially huge, but uncertainty remains on how strong customer demand will be, if there are any confounding regulatory variables, and how intermittency will be managed. As mentioned above, the market structures here are virtually nonexistent. Customers usually decide whether they want to purchase distributed generation based on options available to them from third-party installers or developers, and they often decide based on comparing the cost of the DG solution against the incumbent electricity from grid operators. This can be further complicated if integrating the two has economic or technical ramifications or risks. Third-party providers have created innovative financing and payment structures that allow customers to mitigate some of this risk through a lease payment, though adding financial intermediaries can increase the cost of the delivered electricity. Given the relatively discrete nature of these customer and supplier interactions, suppliers of any of the DG technologies can offer their services to customers, subject to regulatory restrictions on interconnection when connecting them to the grid is necessary. Specific technologies that have been successful in establishing DG markets include grid-connected distributed solar on homes and businesses, and many off-grid applications with various generators (wind, water, and solar) along with some storage. These technology providers retain the option of going directly to customers to establish market opportunities, though doing so invariably incurs higher individual transaction costs than larger, established markets.
118
Energy savings markets
Energy savings markets, particularly the retrofit market, which targets improvement of the efficiency of an existing building or customer's electricity use, can take on many forms. One is through utility programs that are legislated to establish very formal methods of engagement between utility and customer. Technologies are made available to customers, incentives are established to assist them in the adoption, and monitoring for verification of savings can help manage some of the risks. However, efficiency can also be provided through third-party structures that look more like the distributed energy solutions above. Providers of these services need to find and engage customers and convince them of the benefits of adopting the solutions. - This type of market can be anything from selling the components for cash and having customers install them all the way to a complete outsourcing of the construction and monitoring, paid for on a shared energy savings basis.
119
Dispatch decisions
Short-term, use of existing capital, marginal cost --- The previous section differentiated between short-term markets and long-term markets, and it described many different markets in which existing competitive electricity alternatives compete. Short-term markets represent those that can be served immediately with existing capital in place, and so are limited by the availability of assets and infrastructure to serve them. When power (or savings, or transmission, for example) is needed tomorrow or in the next hour, it can only come through dispatch decisions, or use, of existing stock of capital and is limited by the ability to deliver the services where they are needed in sufficient volumes. For the capital assets to remain in service, they only need to produce income sufficient to cover their marginal costs, and as long as they do they will remain available to compete in the market.
120
Investment decisions
Long-term, "fair rate of return on the capital deployed above their average cost - and that the expected competitiveness of technologies will persist long enough to realize their required rate of return." --- Over time, however, the existing stock of assets in the electricity system wears down and needs to be replaced. In addition, any marginal growth in energy services demanded requires additional asset investment decisions, or commitment of new capital. But for investors to provide the capital to add new productive assets to the system, they have to be convinced that they will receive a fair rate of return on the capital deployed above their average cost—and that the expected competitiveness of technologies will persist long enough to realize their required rate of return. The existing installed capital base throughout the entire supply chain is an accumulation of all of the investment decisions people have made in the past, resulting in the current stock of capital available to call upon. Choices made about which assets to add on an annual basis will, over time, alter the mix and operation of the electricity system. In this way, the future of the electricity grid will be determined by the current endowments of capital and any marginal investment decisions. Understanding what drives marginal investment decisions, therefore, is one of the most fundamental tools in understanding the long arc of the electricity system—including its long-term cost, availability, reach, and environmental impact—and everything that depends on it. Ultimately, investment decisions require the cooperation of a number of stakeholders—the developer or operator, various investors, public sector regulators, and the customer. Among these, however, it is the providers of the financial capital, the investors in asset finance, that have the core responsibility of assessing their projects, as they retain a large part of the financial risk of getting that assessment wrong.
Understanding how they view these marginal investments is helpful in understanding changing investment patterns and the resulting future electricity architecture that will emerge from those choices.
121
Internal combustion engine (ICE)
While early versions of autonomously powered carriages (automobiles) had been developed over the nineteenth century, Karl Benz is considered the inventor of the modern car, which he patented in 1886 and commercialized over the next couple of decades. This vehicle used an internal combustion engine (ICE), which ignited a light distillate of oil—gasoline—in a sealed container, which turned a driveshaft and brought power to the wheels. In 1892, fellow German Rudolf Diesel developed a variation on the engine design to use heavier distillates of oil (paraffin oil, now known as diesel fuel) in a more efficient combustion process. Both gasoline and diesel engines are in common use today for automobiles.
122
Codependence
Where the physical capital of vehicles and that of infrastructure have each had to develop simultaneously and now rely on the existence of the other for continued efficient operation. Codependence creates the condition that changing aspects of one of these types of capital may only proceed as long as it takes into consideration the existence and capacity of the other. --- The development of this transportation network infrastructure is not independent of the development of the vehicles and fuels that use them. Precise choices of infrastructure elements have always been informed by the types of vehicles and fuels that use them and, conversely, additional vehicles have been deployed where infrastructure elements were available to allow their efficient use. This has created an architecture founded on codependence, where the physical capital of vehicles and that of infrastructure have each had to develop simultaneously and now rely on the existence of the other for continued efficient operation. Codependence creates the condition that changing aspects of one of these types of capital may only proceed as long as it takes into consideration the existence and capacity of the other. Vehicle innovations (for example, larger planes and ships or alternative-fuel engines) must ensure access to the necessary infrastructure to operate them (e.g., correctly sized airports or wide-enough canals or advanced fueling stations); conversely, new infrastructure must be developed with consideration for the projected fleet of vehicles that use them, which will by necessity include a large part of any existing fleet of vehicles. Given that the decision makers for investments in vehicles and infrastructure are not always the same people, the potential for poor investment exists without effective communication. In economics, two goods that have to be consumed at the same time are known as complementary goods.
Technically, this refers to the case in which the cross-price elasticity of demand between the two goods is negative. For example, a rising price of one good reduces not only the demand for that good but also the consumption of the other, and vice versa. Codependence of different parts of energy and capital in energy system supply chains arises from this complementary relationship. Rising prices for gasoline may reduce the demand not only for that fuel but also for vehicles and infrastructure that use gasoline. Conversely, categorically more expensive vehicles may reduce the purchase of those vehicles and reduce demand for fuel, as well as the need for fueling infrastructure to provide energy to those vehicles. The development of these supply chain components proceeds in tandem, often ensuring balanced growth among the components.
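The sign test for complements can be sketched numerically. This is an illustrative calculation with hypothetical percentage changes, not figures from the text:

```python
# Cross-price elasticity sketch: complements have a negative value.
# The -2%/10% figures below are hypothetical, for illustration only.

def cross_price_elasticity(pct_change_qty_a: float, pct_change_price_b: float) -> float:
    """Percent change in quantity of good A divided by percent change in price of good B."""
    return pct_change_qty_a / pct_change_price_b

# Gasoline price rises 10%; demand for gasoline vehicles falls 2%.
e = cross_price_elasticity(-2.0, 10.0)
print(e)  # -0.2, negative as expected for complementary goods
```

A positive value would instead indicate substitutes (e.g., a costlier fuel pushing buyers toward a rival fuel).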
123
Total cost of ownership (TCO)
The first step in understanding the economics of various fuel and transportation options is to standardize the units of comparison. Any such standardization must include not only the cost of purchasing the device (and related necessary fueling components) but also of operating it. Understanding the cost of operation must also include understanding the amount and type of use that it undergoes. One method of establishing the cost of transportation is to calculate the total cost of ownership (TCO). This method has many similarities to the levelized cost of energy (LCOE) calculation and can be used to understand the cost of many capital assets beyond transportation, including machinery or other devices. As applied to vehicles, TCO includes:
- Fixed costs (WACC, asset life, terminal value)
- Performance variables (fuel consumption, distance traveled, driving behavior)
- Operating costs (fuel costs, maintenance expense, overall cost of operation)
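A minimal TCO sketch, discounting each year's operating cost at the WACC as in an LCOE-style calculation. All input figures are hypothetical, and real TCO models break costs out in far more detail:

```python
# Total cost of ownership (TCO) sketch: purchase price plus the present
# value of operating costs, minus the present value of the terminal value.
# All numbers are hypothetical placeholders.

def tco(purchase_price, annual_fuel_cost, annual_maintenance,
        wacc, life_years, terminal_value):
    pv_costs = purchase_price
    for t in range(1, life_years + 1):
        pv_costs += (annual_fuel_cost + annual_maintenance) / (1 + wacc) ** t
    pv_costs -= terminal_value / (1 + wacc) ** life_years
    return pv_costs

total = tco(purchase_price=30_000, annual_fuel_cost=2_000,
            annual_maintenance=800, wacc=0.07, life_years=10,
            terminal_value=5_000)
print(round(total, 2))
```

Dividing the result by the (discounted) lifetime distance traveled would give a cost per mile, the closest analog to LCOE's cost per unit of energy.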
124
Fuel efficiency
Worldwide, there are two main methods of reporting the relationship between a vehicle's fuel usage and distance traveled. The standard approach in the United States, the United Kingdom, and a smattering of other countries like India, Saudi Arabia, Mexico, and Brazil is a fuel efficiency measure, or distance traveled per unit of fuel. In the United States and the United Kingdom, this unit is miles per gallon (mpg); elsewhere it is measured in kilometers per liter (km/L). The rest of the world uses a fuel consumption measure, which inverts the relationship and shows fuel used per standard unit of distance. Typically, this follows the metric system and is reported as liters per 100 kilometers (L/100 km), but it can also be seen on current US fuel economy labels such as those in Figure 13.4, reported as gallons per 100 miles.
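Converting between the two conventions is straightforward arithmetic. A sketch, assuming US gallons and statute miles:

```python
# Convert a fuel efficiency figure (mpg, US gallons) into a fuel
# consumption figure (L/100 km).

KM_PER_MILE = 1.609344
LITERS_PER_US_GALLON = 3.785411784

def mpg_to_l_per_100km(mpg: float) -> float:
    km_per_liter = mpg * KM_PER_MILE / LITERS_PER_US_GALLON
    return 100.0 / km_per_liter

print(round(mpg_to_l_per_100km(30), 1))  # a 30 mpg car uses about 7.8 L/100 km
```

Note that the two scales move in opposite directions: a higher mpg means a lower L/100 km.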
125
Design efficiency
Each vehicle has a relatively narrow range for the relationship between fuel used and distance covered. This measure is usually established at the time of design and engineering of the vehicle (design efficiency), independently tested through rigorous protocols, and generally expected to be similar across all vehicles of the same type and make. (See the following Metrics Sidebar comparing the metrics of miles per gallon versus gallons per mile for some insight into appropriate metric design.)
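The mpg-versus-gallons-per-mile comparison the sidebar raises can be sketched with illustrative numbers: over a fixed distance, fuel saved depends on the difference in gallons per mile, which the mpg scale obscures.

```python
# Illustrative arithmetic (hypothetical vehicles): a 10 -> 20 mpg upgrade
# saves far more fuel over the same distance than a 33 -> 50 mpg upgrade,
# even though the mpg gain looks larger in the second case.

def gallons_used(mpg: float, miles: float) -> float:
    return miles / mpg

miles = 10_000
saved_low = gallons_used(10, miles) - gallons_used(20, miles)   # 500.0 gallons
saved_high = gallons_used(33, miles) - gallons_used(50, miles)  # ~103.0 gallons
print(saved_low, round(saved_high, 1))
```

This nonlinearity is why gallons per 100 miles (or L/100 km) is often considered the better-designed metric for comparing fuel savings.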
126
Operating efficiency
Two vehicles that have the same design efficiency and travel the same distance could still have very different fuel use, operating costs, and wear and tear on the equipment. Most of these differences result from how the vehicle is operated and can collectively be thought of as operating efficiency factors. Common examples include city driving vs. highway driving, which can result in very different fuel consumption over the same distance. Operating the vehicle above or below the recommended speed limits can also dramatically change the operating efficiency.
127
Corporate average fuel economy (CAFE) standards
In transportation, setting appropriate levels of device performance is often handled through fuel economy standards. A fuel economy standard establishes a fleetwide average fuel economy level and relies on vehicle manufacturers to sell a mix of vehicles that together meet or exceed that target. Not doing so can result in penalties or restrictions on vehicle sales. The standards can be calculated using subtly different methods, and they may alternatively rely on a test of the fleet's average emission levels (emission standards) in some jurisdictions. Versions of these standards have been widely adopted around the world in the last four decades to gently force the fleet of vehicles to gradually increase in efficiency as new, more efficient vehicles are added. In the United States, this policy is known as the Corporate Average Fuel Economy, or CAFE, standards. The standards were established in response to the first oil crisis in the 1970s (discussed in Chapter 14), requiring new cars to nearly double their fleet efficiency by the early 1980s. The vehicle standards were extended to light trucks, albeit at a lower level of fuel efficiency, starting in the early 1980s. The oil price collapse in the 1980s, driven partly by global efficiency improvements in oil-based transportation (partly an unintended consequence of the policy—see the Economics Box on p. 638), led to a period of reduced political motivation to require additional increases in CAFE standards. A return to rising oil prices in the late 1990s led to the establishment of more stringent truck standards in 2005, followed by tougher passenger vehicle standards a few years later. Figure 13.6 shows these changing US standards for cars and trucks, along with the actual fleet performance observed over that time.
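A fleetwide average fuel economy is conventionally computed as a sales-weighted harmonic mean, because it is fuel consumed (gallons per mile), not mpg, that averages linearly across vehicles. A sketch with hypothetical sales figures:

```python
# Sales-weighted harmonic mean of fleet fuel economy.
# Fleet composition below is hypothetical.

def fleet_average_mpg(sales_and_mpg):
    """sales_and_mpg: list of (units sold, mpg) pairs."""
    total_sales = sum(s for s, _ in sales_and_mpg)
    total_consumption = sum(s / mpg for s, mpg in sales_and_mpg)  # gallons per mile, sales-weighted
    return total_sales / total_consumption

fleet = [(600_000, 45.0), (400_000, 25.0)]
print(round(fleet_average_mpg(fleet), 1))  # 34.1, below the simple mean of 37
```

The harmonic mean is always pulled toward the less efficient models, so a manufacturer cannot offset a gas-guzzler with a single high-mpg halo model as easily as a simple average would suggest.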
128
Rebound effect
Due to the rapid increase in average fuel economy between the late 1970s and early 1980s, petroleum demand dropped quickly and resulted in substantially lower oil prices (see Figure 13.6). By the mid-1980s, these lower prices incentivized increasing fuel consumption (along with an increasing willingness to purchase SUVs) due to a process known as the rebound effect. Resource efficiency by definition leads to lower prices than would otherwise be observed and, absent other restrictions, will directly and indirectly incentivize increased consumption of that good. The amount of rebound (or return of some of the efficiency gains) can vary widely depending on the technology or circumstance. This phenomenon was first described by William Stanley Jevons in 1865, and so it is sometimes called the Jevons paradox.
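A stylized rebound calculation, using a hypothetical demand elasticity: an efficiency gain cuts the fuel cost per mile, travel demand responds, and part of the expected fuel saving is taken back.

```python
# Rebound effect sketch. The 20% efficiency gain and -0.2 elasticity of
# travel demand with respect to cost per mile are hypothetical.

def rebound_fuel_use(base_fuel, efficiency_gain, demand_elasticity):
    """Fuel use after an efficiency gain, allowing travel demand to respond."""
    cost_change = -efficiency_gain                   # fractional change in cost per mile
    travel_change = demand_elasticity * cost_change  # fractional change in miles driven
    return base_fuel * (1 - efficiency_gain) * (1 + travel_change)

naive = 100 * (1 - 0.2)                    # 80.0 units if behavior were fixed
actual = rebound_fuel_use(100, 0.2, -0.2)  # 83.2 units: some savings are "rebounded"
print(naive, actual)
```

Here a fifth of the expected saving (3.2 of 20 units) is eroded; larger elasticities produce larger rebounds, up to the full "backfire" case Jevons described.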
129
Aftermarket
For vehicles already in service, many of the improvements described above are still available, though the cost-benefit ratio is typically smaller than installing them on a new vehicle. These aftermarket improvements can target powertrain improvements, add aerodynamic components, or use fuel additives to squeeze marginal efficiency out of the truck's operation. While often available, aftermarket improvements suffer from a few difficulties that limit truck owner and operator use, including the short lifespan (and even shorter ownership span) for most heavy-duty vehicles (HDVs). Vehicle owners also worry about downtime and performance risk for expensive vehicle assets, so the required return threshold for adopting aftermarket solutions is very high.
130
Crude oil
Crude oil is a liquid consisting of naturally formed hydrocarbons extracted from the earth, which is refined throughout the oil/petroleum supply chain.
131
Hydrocarbons
Hydrocarbons, collections of molecules consisting almost exclusively of hydrogen and carbon, are created under different circumstances and have modestly varying characteristics that affect their suitability for providing energy. Collectively, these hydrocarbons provide a widely available and very high-density source of combustible energy. They also have the advantage of being easily and cost-effectively transported, particularly when in their stable liquid state.
132
Petroleum
Petroleum (a word derived from the Latin for “rock-oil”) is a term that is slightly differentiated from “oil.” While petroleum can include both the natural crude oil and refined fuels and products that were introduced in the previous chapter, crude oil refers only to the hydrocarbons obtained from the underground reservoirs in which it formed.
133
source rock
The sediment underground or beneath the ocean where organic matter collects and matures to eventually become oil. From here, it migrates into a reservoir. -- After significant amounts were deposited, the organic material was slowly buried by layers of sediment and sometimes further shifted through tectonic activity, which increased the pressure and temperature under which these deposits matured. Once formed in this source rock and allowed to mature for a long time, other geologic conditions were necessary for the hydrocarbons to accumulate in easily accessible reservoirs. As organic matter in the source rock matures into oil, it tends to change in density and volume. This has the result of forcing the oil out of the source rock and upward into cracks and fissures as it escapes the containment of its original location, which is dependent on the permeability of the nearby geology.
134
reservoir
Natural formation with a top impermeable layer that creates a trap, collecting oil. -- As organic matter in the source rock matures into oil, it tends to change in density and volume. This has the result of forcing the oil out of the source rock and upward into cracks and fissures as it escapes the containment of its original location, which is dependent on the permeability of the nearby geology. This migration continues until it finds a reservoir to fill. Finally, containment occurs only when the reservoir has a top impermeable layer that creates a trap, arresting the upward mobility of the migrating oil with a correctly shaped impermeable top seal, usually made of shale rock or salt.
135
API gravity
Heavy / Light. API gravity is the measure developed by the American Petroleum Institute (API) to gauge how heavy or light a petroleum liquid is. The higher the API gravity, the less dense the liquid (lower gravity is "heavy"). Using water as a benchmark with an API gravity of 10, nearly all petroleum liquids have a higher API gravity value and, therefore, float on water. Crude oil from oil wells generally falls on a spectrum of API gravity from the 20s to nearly 50. API = 141.5/(specific gravity at 60°F) - 131.5. Crude oil with higher scores (38 or more) is usually referred to as light crude and generally has a mix of shorter hydrocarbon chains than other crude oils. Light crude tends to be easier to pump and transport, due to a lower concentration of wax in the crude oil. Crude oil with a slightly higher density and lower API score is classified as medium crude. Crude oil with the lowest scores (22 or less) tends to be called heavy crude and has higher density and higher viscosity due to the presence of longer and heavier hydrocarbon chains. This creates a product that is harder to pump and requires more processing to break down the oil into useful refined fuels, so it often sells at a discount compared to the lighter and easier-to-handle grades. The most extreme forms of heavy crude can have API gravity less than 10 and are called extra-heavy crude or bitumen, which is the type of oil found in tar sands. -- Crude oil can vary slightly in its chemical composition, depending on the characteristics of the geology in which it forms. These variations in crude oil characteristics determine optimal methods of drilling and extraction, as well as the processing and handling that it requires once produced. Crude oil is classified by a number of measured characteristics, but the two most important are API gravity and sulfur content.
In addition, acidity and volatility are important considerations for managing oil safely for both humans and equipment.
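The API formula and the classification cutoffs above translate directly into code. A sketch using the text's thresholds (38/22/10; exact cutoffs vary by source):

```python
# API gravity from specific gravity at 60 degrees F, per the formula in the text,
# with the light/medium/heavy cutoffs given above.

def api_gravity(specific_gravity: float) -> float:
    return 141.5 / specific_gravity - 131.5

def classify(api: float) -> str:
    if api >= 38:
        return "light"
    if api > 22:
        return "medium"
    if api >= 10:
        return "heavy"
    return "extra-heavy (bitumen)"

sg = 0.827  # a hypothetical light crude's specific gravity
api = api_gravity(sg)
print(round(api, 1), classify(api))
```

As a sanity check, water (specific gravity 1.0) gives exactly 10 on the API scale, matching the benchmark stated in the text.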
136
Sulfur content
Sweet/Sour (Low sulfur content is sweet) Sulfur content is another important consideration when looking at the quality of crude oil. The amount of sulfur is measured as a percentage of the weight of the crude oil and typically ranges from zero to about 3.5%. There is a negative correlation between API and sulfur content. Lighter oils tend to have less sulfur, and vice versa, though this relationship is not perfect. Crude oil that has very low sulfur content is referred to as sweet crude. Sweet crude generally has less than 0.5% sulfur by weight, which makes the crude easier to manage and process into fuels. It is referred to as sweet crude because of the lack of sour-smelling sulfur in the oil. Crude oil with higher sulfur content is referred to as sour crude due to its unpleasant odor. In addition to having an unpleasant odor, sour crudes are both more toxic and corrosive, requiring expensive processing and removal of the sulfur before transporting it on ships and through pipelines. Sulfur can also be a breathing hazard for workers if it is converted into hydrogen sulfide.
137
Associated gas
Gas coming out of the oil well ---- Hydrocarbons extracted as petroleum are not only liquid crude oil but also contain other hydrocarbons with various molecular weights and properties. Of these hydrocarbons, a substantial amount of natural gas (methane, or CH4) comes out of the oil well and is referred to as associated gas. While natural gas is described in great detail in Chapter 18, within this associated gas (as well as within unassociated gas wells that do not produce crude oil directly) are a number of other gaseous and liquid hydrocarbons with varying chain lengths of carbon and hydrogen.
138
Benchmarks
Regional hubs exist through which a lot of the oil travels, which helps standardize the location of crude oil with similar characteristics within a region. These benchmarks exist around the world to establish a standard price for standardized grades of fuel at the same hub locations. The largest producing areas tend to have the most active benchmark locations: ■ West Texas Intermediate (WTI) crude—A light sweet grade of crude, often priced at the transshipment point of Cushing, Oklahoma, in the United States. ■ Brent crude—Originally a benchmark set up from a field producing in the North Sea, Brent crude represents a light sweet crude (though neither as light nor as sweet as WTI) that comes from over 15 fields and can be delivered to one of four physical locations (Brent, Forties, Oseberg, and Ekofisk fields), collectively referred to as BFOE. ■ Dubai crude—A benchmark used to price the oil trade from the Middle East to Asia (with WTI and Brent being used primarily in the Atlantic trade). Dubai crude is a medium crude (API of 31) and is relatively sour (with 2% sulfur content). Each of these benchmarks can be used to establish a standardized price (benchmark price) that allows pricing and trading of other crudes with slightly different quality or geographic characteristics. Contracting, buying, and selling at a premium or discount to the benchmark price simplifies trading and reduces the inefficiency of trying to constantly set prices across many small markets.
139
Upstream
First part of the petroleum industry supply chain: production of crude oil. The upstream portion of the petroleum industry involves everything necessary to find and produce oil. This part of the supply chain represents a very risky and capital-intensive set of activities, so it tends to be the most constrained part of the oil delivery system. As the bottleneck for the system, this is typically where the bulk of the value added (i.e., profit) is captured in the oil supply chain, and therefore is of great interest to many players. The supply chain runs from production of crude oil (upstream), through its transport to refineries (midstream), to the refining of that crude into fuels and non-fuel products delivered through the various wholesale and retail channels (downstream).
140
Exploratory wells
Once a location is deemed to be of sufficient size and potential quality, initial drilling of exploratory wells needs to take place. The purpose of these initial wells is to increase confidence in the subsurface conditions and potentially to strike oil deposits that can be tested for their pressure, flow rates (the natural rate at which oil and gas emerge from the well), and product quality. These variables are essential to be able to determine the long-term production profile and economic value of additional drilling activity in that area, which can be confirmed through additional appraisal wells to test these conditions over a larger area before committing substantial capital to field development. Before this, sophisticated seismic surveys are used to map the various rock layers underground to see if the necessary density and topology exist to form oil traps. Seismic data can also be used to determine whether the necessary seal on top of the reservoir, which holds the oil in place, exists. Once the conditions are identified, understanding how large a geographic area shares those conditions is important in establishing initial estimates of the economic potential of a particular reservoir.
141
Flow rates
the natural rate at which oil and gas emerge from the well
142
Directional drilling
Historically, drilling rigs drilled vertically into the earth to tap conventional reservoirs of oil and gas, but technical advances in directional drilling have allowed turning the direction of the drill bit and casing for angled approaches to reservoirs and along the contours of underground formations. With some techniques, it is even possible to turn a full 90° and conduct horizontal drilling when the geology or circumstances require. Directional and horizontal drilling allows improved economic access to less productive rocks by increasing the contact area within the hydrocarbon-bearing strata of rock.
143
Recovery rate
The percentage of hydrocarbons in the reservoir recovered. -- Once a well is producing oil, it is important to ensure its long-term productivity through well maintenance and additional interventions to maintain reservoir pressure at the optimal level. The goal of this process is typically to maximize the recovery rate (the percentage of hydrocarbons in the reservoir recovered) for the field.
144
Production profile
The production profile for an oil well or field is constructed to compare the quantity of crude produced per unit time over the lifetime of the well. The size and length of the respective phases of a well's production are described as (1) ramp-up, (2) plateau or peak, and (3) post-plateau or decline. The rates of decline vary widely across well types, based on geography, viscosity of the oil, and temperature. (The profile plots share of peak production against time as an upside-down U: production increases quickly to its 100% peak, then decreases somewhat linearly.) As shown, the decline phase can be further subdivided. Decline phase 1 covers the period when the well produces at least 85% of its peak production. Together, peak and decline phase 1 are referred to as the production plateau. Decline phase 2 covers the period when the well produces at least 50% of its peak, and decline phase 3, the period when the well produces less than 50% of its peak. Decline phases 2 and 3 are collectively referred to as post-peak. A nuanced understanding of the decline phase by field or region is important for understanding world supply dynamics, corporate profits, and appropriate policy responses because many wells and fields are past peak. According to the EIA, the average rate of reduction in annual production (decline rate) per field for conventional reserves is 6%, though individual wells decline much faster. Initially, oil fields are established over larger areas, with wells spaced apart to increase initial production and minimize the loss of pressure in a given area. As the field matures, some infield drilling, or drilling activity among existing wells to extract remaining pockets of oil, will occur. This infield drilling helps keep field decline rates lower, but it is still significant over many years.
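The post-peak phase boundaries can be sketched with a simple constant-decline model using the text's 6% field-average decline rate (real decline curves are more complex, e.g., hyperbolic):

```python
# Exponential-decline sketch: production falls by a constant fraction
# (the text's 6% field average) each year past peak.

def production(peak_rate: float, years_past_peak: int, decline_rate: float = 0.06) -> float:
    return peak_rate * (1 - decline_rate) ** years_past_peak

# Years until the field drops below 50% of peak (entering decline phase 3):
peak = 100.0
year = 0
while production(peak, year) >= 50.0:
    year += 1
print(year)  # 12 years at a 6% annual decline
```

At a 6% annual decline, a field spends roughly a dozen years between peak and the 50%-of-peak threshold, which illustrates why post-peak fields still matter for world supply for a long time.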
145
Resources
Resources, by definition, are therefore all of the potential hydrocarbons within an area. Understanding the aggregate amount of oil available within a reservoir or field sets an upper boundary on the amount that could be extracted. This calculation begins by estimating the oil initially in place (OIIP)—or its equivalent, gas initially in place (GIIP) in natural gas fields—by understanding the size of the field, the volume of liquids within each reservoir, the saturation level, the permeability of the rock, and how movable the oil is within the reservoir.
146
Technically recoverable resources
Not all of the oil in a resource reservoir is available, as some of it is not technically or economically feasible to extract. The first filtering of the OIIP resources involves assessing the technically recoverable resources, which applies the test of being recoverable using current commercial technologies based on the existing geology of the field. This requires estimating the recovery rate, or percentage of total hydrocarbons that can be recovered, based on the use of primary, secondary, and enhanced recovery techniques.
147
Reserves
Reserves are defined as quantities of commercially recoverable oil in known accumulations under defined conditions. To assist with transparency and consistency in these calculations, operators and their oversight bodies standardize the measure of technically and economically feasible resources through the determination of a reserve calculation (see the Metrics Sidebar below). By convention, reserves must be: 1. Discovered—Using geologic, seismic, and other field data, operators must have a clear definition of the overall OIIP resources in a field (resources). 2. Recoverable using existing technology—Oil must be technically recoverable according to the definition above (intellectual capital). 3. Commercially viable—Oil must be economically recoverable, which includes a clear understanding of costs, prices, and required returns and also the necessary legal and contractual rights to produce (political capital) and infrastructure to deliver the oil to customers (physical capital). 4. Remaining in the ground—Reserves must still be in place and cannot have previously been produced.
148
Proven reserves (1P/P90)
Proven reserves (discussed in more detail below) have a slightly more restrictive definition that also requires the discoveries to be confirmed to a high likelihood (at least a 90% probability of recovery, hence P90) with acceptable technology, often using exploratory wells and other advanced equipment for site evaluation. Oil companies (IOCs and NOCs) tend to focus on proven reserves as the minimum asset base that they expect to monetize in the future, and those reserves are therefore of great importance to their ongoing acquisition of financial capital and optimal company valuation.
149
Reserve-to-production ratio
(Proven Reserves) / (Quantity Produced per Year) The reserve-to-production ratio for any country or region is calculated by dividing the current estimation of proven reserves in that location by the quantity of a resource (oil or gas, usually) produced per year. The resulting ratio measures the amount of the nonrenewable resource expressed in a unit of time, such as years. Properly read, it would be expressed as: at the current reserve estimate and the current production levels, the resource will last for a given time. However, this ratio should be used with extreme caution, as both the numerator and denominator are subject to change for many reasons. As explained throughout the chapter, estimates of reserves vary over time, depending on price, technology, and even the degree to which exploratory wells have been drilled. At the same time, resource production tends to be heavily correlated with economic growth, although that relationship has been weakening. Finally, as the production profile of both wells and fields indicates, it is difficult if not impossible to maintain the same level of production indefinitely.
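The ratio itself is a one-line calculation; the caution in the text is about interpreting it, not computing it. A sketch with hypothetical figures:

```python
# Reserve-to-production (R/P) ratio: proven reserves divided by annual
# production gives a horizon in years. Figures below are hypothetical.

def reserve_to_production(proven_reserves_bbl: float, annual_production_bbl: float) -> float:
    return proven_reserves_bbl / annual_production_bbl

# e.g., 48 billion barrels proven, 1.6 billion barrels produced per year:
print(reserve_to_production(48e9, 1.6e9))  # 30.0 years at current rates
```

The result reads "about 30 years at current reserves and current production," but both inputs move over time (reserve revisions, price, technology, demand growth), so the ratio is a snapshot rather than a countdown.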
150
Initial production (IP) rate
the rate of production by an oil or gas well after it is drilled and stabilized As described earlier, all hydrocarbon wells deplete. Technically, this means that the rate of production by an oil or gas well after it is drilled and stabilized (initial production, or IP, rate) diminishes as the well naturally loses pressure and the flow rate drops to the point where additional pumping or recovery techniques are necessary.
151
Spare oil capacity
The difference between the production at any given time and the production capacity is called spare oil capacity. Spare oil capacity is the quantity of crude oil that a country could produce but is not currently sending to market. Many conditions must be in place—including the existence of unused reserves, wells, and offtake infrastructure—to be considered spare capacity. To maximize oil revenues, few producers withhold potential production and sales from the market, but Saudi Arabia traditionally maintains some swing production to manage unexpected losses of output from other OPEC nations, and occasional circumstances such as embargoes, economic downturns reducing demand, or temporary production bottlenecks can create spare capacity as well. The International Energy Agency (IEA) differentiates between nominal spare oil capacity, broadly measured, and effective spare oil capacity, which is the capacity that can be brought to market nearly immediately. Previously, the idea was that spare capacity was the result of an active policy decision to withhold production, possibly to help keep supplies high or stable. However, political unrest and war have made significant quantities of the nominal spare capacity unlikely to contribute to global markets.
152
Undulating plateau
Daniel Yergin proposes an "undulating plateau" of activity for a long time as this stabilizing loop of higher prices drives innovation and penetration into previously uneconomic and unconventional resources. As described above, the oil industry is in tension between depletion (physical depletion of its wells and economic depletion of its fields and regions) and innovation (with producers getting more efficient and accessing new opportunities every year). Depletion causes oil prices to rise, which spurs efficiency, capital investment, and innovation, which then causes oil prices to fall (a form of rebound effect). Lower oil prices cause expansion of economic activity and reduction of capital investment, which drives oil prices higher (colloquially described by oil producers as “the best cure for low oil prices is low oil prices”). This pattern repeats in the powerful macroeconomic stabilizing loop driving oil system dynamics.
153
Oil dependence
While dependence can be a physical linkage, the term best describes the economic linkages that naturally arise in a commodity traded among countries, particularly one that is critical to a country's economic activity. When one country constantly supplies a vital resource to another, it creates a dependence for both countries on the continuation of that relationship. Oil dependence is one of the extreme versions of this type of relationship. Some of the unique properties of oil in this regard include: - A globally traded commodity - Highly inelastic demand - Highly inelastic supply - Substantial infrastructure in risky places in the world Consider import dependence vs. export dependence
154
Probable reserves (2P/P50)
These represent all of the proven plus any unproven reserves that are estimated to have at least a 50% likelihood of recovery. These are sometimes referred to as P50 reserves.
155
Possible reserves (3P/P10)
These represent all of the 2P reserves plus an additional estimation of reserves identified and possibly recoverable with at least a 10% probability. These are sometimes referred to as P10 reserves.
156
Food vs. fuel debate
First-generation feedstocks currently in use tend to use the same farmland and other forms of capital as traditional food production, setting up a dynamic where rising demand for the agricultural outputs causes tension between food production and fuel production—a food vs. fuel debate.
157
Lifecycle analysis
Analysis of the carbon emissions generated over the full life of a product, from production through use (in this context, biofuels). A final consideration in determining the desirability of biofuels is the role they play in improving the emissions profile of combustion vs. existing fuels. Combustion of biofuels has a similar impact on emissions of nitrogen oxides, sulfur oxides, particulates, and ozone as combustion of the fuels they replace, since the amount of these pollutants emitted from vehicles is driven as much by vehicle design as by the fuels. The carbon emission differentials are another story. Substantial scientific work has been conducted on the lifecycle analysis of biofuels to determine their carbon content, and the result is a range of estimates with mixed results, depending on which fuel is examined. Based on these studies, first-generation biofuels typically have a modest average reduction in lifecycle emissions of carbon but can sometimes be produced in conditions with longer supply chains and higher conventional fuel input to the conversion process, resulting in increased emissions (negative emission reductions in the figure) over the fuels they displace.
158
Drop-in fuels
Fuels that can be synthesized to identically match the types of fuels they are displacing - with identical hydrocarbon chains and mixes eliminating the need to change equipment or take on risks of failure or efficiency loss. -- Overcoming the technical limitations of ethanol and biodiesel fuels may be difficult until fuels can be synthesized to identically match the types of fuels they are displacing. Such fuels, with identical hydrocarbon chains and mixes eliminating the need to change equipment or take on risks of failure or efficiency loss, are also called drop-in fuels. Particularly in demanding applications like aviation fuel combustion, which has a wide range of temperature and pressure over which fuels must perform predictably, precisely synthesizing identical fuels may be more important than developing replacement new fuels that require costly or risky engine adaptations.
159
Blending mandates
In setting standards for biofuels, blending mandates establish a certain quantity or percentage of biofuels to be mixed into the refined fuel supply. The standard usually specifies the precise fuel and level required to be blended into the fuel supply, and typically wholesale producers of fuel are expected to meet this standard. Many countries have a blending mandate requirement for ethanol or biodiesel (or both), with blends ranging from a couple percentage points to more than 20% in the case of Brazilian ethanol.
160
Renewable Fuel Standard (RFS)
In the United States, the blending mandate program began in 2005 and is known as the Renewable Fuel Standard (RFS). The original version of the standard established a rising volume of ethanol required in the US fuel supply. Notably, the standard was not on a percentage basis but rather on a gross volume of ethanol required based on estimates of future transportation fuel supply and demand. Under this standard, refiners and importers, known as obligated parties, had to prove they met the blending requirements by accumulating blending certificates known as Renewable Identification Numbers (RINs). RINs could be generated from blending activity or purchased from other blenders who had accumulated excess RINs over their blending requirements. -- In 2010, the United States established a second RFS (RFS2) that tapered off the growth of the corn ethanol contribution to the fuel supply and established additional mandates for cellulosic ethanol and advanced biofuel contributions (see Figure 15.13). This way, total biofuel contribution could increase but with limited impact on corn and agricultural markets. Unfortunately, despite substantial investment in technology and scaling up biofuel conversion facilities by incumbent blenders and venture-capital based companies, the volume of cellulosic ethanol production by 2015 was substantially below the statutory requirements laid out in RFS2. In mid-2015, the EPA revised the rules for meeting these requirements, slashing the cellulosic ethanol production requirements by over 95% so that the industry could remain in compliance. The lessons that can be drawn here include the risks associated with mandating production of technologies that are not yet technically proven. While production mandates that target cost reductions through scale can work, technologies that are not yet fully developed present a technology risk to the implementation of the policy for exactly these reasons.
161
Renewable Identification Numbers (RINs)
Blending certificates for obligated parties under an RFS to prove they met the blending requirements. These could be generated from blending activities or purchased from other blenders who had accumulated excess RINs over their blending requirements.
162
S-curve
(Used to describe development lifecycle) S-curves show a stylized deployment, where the early stages require tremendous preparatory work, and may result in little adoption, but as the conditions emerge that enable broader adoption, rapid growth can occur. Eventually, market saturation begins and adoption levels off. These S-curve adoption stages can also be thought of in terms of system dynamics, where the early stages of product development occur with a stabilized loop of low or no adoption, followed by a change in the conditions that allow a reinforcing loop of rapid adoption to occur, and then a reversion to a stabilizing loop of market saturation when the innovation has reached its limit of reach or value creation for users.
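The S-curve adoption pattern described above is commonly modeled with a logistic function. The sketch below is a minimal illustration; the parameters (`limit`, `k`, `t_mid`) are hypothetical, not from the text.

```python
import math

# Logistic (S-curve) adoption sketch: slow early growth, rapid expansion
# once enabling conditions emerge, then saturation at the market limit.

def adoption(t, limit=1.0, k=1.0, t_mid=0.0):
    """Fraction of the market adopted at time t (logistic curve)."""
    return limit / (1 + math.exp(-k * (t - t_mid)))

# Early stage: little adoption; midpoint: half the market; late: saturation.
print(round(adoption(-5), 3))  # 0.007
print(round(adoption(0), 3))   # 0.5
print(round(adoption(5), 3))   # 0.993
```

The three regimes of the printout correspond to the stabilizing, reinforcing, and re-stabilizing loops described in the system-dynamics framing above.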
163
First-mover advantages
Benefits of having a product or service available in the marketplace early, including building interest and trust, controlling the narrative, establishing the dominant brand, and making fundraising and customer acquisition easier in a fast-growth industry. --- Having a product or service available in the marketplace early can help build interest, trust, and narrative about the product's features and suitability for use. It can establish the dominant brand and make fundraising and customer acquisition easier in a fast-growth industry.
164
Positive externality
A positive externality is a benefit that is obtained by a third party through the actions of someone else and is an analog to a negative externality (discussed in Chapter 3). Positive externalities exist all the time when individuals do things in their own interest that indirectly benefit others, including obtaining private health care that reduces contagion of disease to others, investing in education that diffuses to others, or painting a house to improve its property value, which makes nearby homes more valuable. The main economic issue with technology investment is the potential for that investment to create positive externalities (see the Economics Box below). Paradoxically, while creating a surplus of positive benefits to society is generally considered valuable, an innovator that cannot fully protect or capture those benefits fails to fully realize the fruits of those efforts, which leads to a situation in which innovators tend to invest less in technology innovation than if they were able to reap more of the benefits the innovations create.
165
Spillover effects
Indirect benefits (like positive externalities) are also referred to as spillover effects. The feature that allows a positive externality to exist is the inability of the person taking an action to exclude others (nonexcludability) from these spillover effects.
166
Regenerative braking
The process of recapturing energy from braking in an HEV, which can be used to power onboard systems and provide additional energy to the powertrain of the vehicle.
167
Plug-in hybrid EV (PHEV)
HEV with chargers and additional battery capacity. What traditional HEVs lack is a way to provide supplemental, or external, electrical energy to increase the range and contribution of electric propulsion to overall vehicle operation. Solving this problem requires adding chargers and additional battery capacity, converting these vehicles into plug-in hybrid EVs (PHEVs). Depending on the battery capacity, these vehicles can substantially extend the average daily commuting range (as compared to the BEVs discussed below), while still providing the flexibility for longer trips or overcoming the difficulty and uncertainty in accessing EV charging infrastructure through the onboard ICE components.
168
Battery EV (BEV)
Vehicles exclusively powered by electricity, stored in batteries. In contrast to the hybrid approach, the second (revolutionary) path for developing EVs has been to start with the simplest vehicle design platform and rely exclusively on electricity to power the vehicle, eliminating any ICE components. Developing these battery EVs (BEVs) allows manufacturers without experience in traditional ICEs and mechanical drivetrains to create a complete vehicle. It also requires a minimum number of parts to establish a working vehicle, needing only battery storage, motors, and charging components. Such simple configurations minimize complexity and cost in new vehicle design. Powering a vehicle using only electric motors has many technical advantages. They are extremely efficient, with conversion efficiencies of over 80%. They have high torque, which can provide power at low speeds and quick acceleration, providing very high power-to-weight ratios compared to combustion engines. Typically, electric motors will turn an axle for vehicle propulsion, but, increasingly, smaller motors are being applied to individual wheels and can even be distributed directly into the wheel hubs, further enhancing efficiency and operational control.
169
Flex-fuel vehicle
Vehicles that can accommodate multiple fuels (in this case, liquid fuel and electricity). PHEV proponents argue that the flexibility of their platform is more suitable to mass-market applications, providing a flex-fuel vehicle that can accommodate both liquid fuel and electricity.
170
Battery swapping
An alternate strategy to provide faster charging of EVs is through battery swapping. Battery swapping requires the design of an EV to allow quick removal of the depleted battery and the necessary equipment to replace it with a charged battery, almost certainly requiring the infrastructure of a commercial battery-swapping station.
171
Discharge
While charging the battery delivers potential energy to it, the discharge of the battery allows the vehicle to move and perform work.
172
Useful capacity
The fraction of a battery's absolute capacity that can be used, since it cannot be charged to 100% or discharged to 0% of technical capacity without sustaining impacts to long-term performance. --- First, a battery's useful capacity is very different from its absolute capacity, as a battery cannot be charged to 100% of its technical capacity (the percentage of the technical capacity being used is referred to as the state of charge) without risking damage to the molecular structure of the battery, shortening its useful lifetime. A battery also cannot be discharged to 0% without similar damage and risk to long-term battery performance. As such, there is a useful range of battery performance, the useful capacity, which is a fraction of the overall battery capacity. Calculations of the amount of energy that a battery can store, and the resulting range it can travel (an EV analogue to miles per gallon, as discussed in the Metrics Sidebar below), should be based on the useful capacity value of its battery. The cost of the battery pack, however, will be based on the total capacity.
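A minimal sketch of the distinction above, assuming an illustrative 10%-90% state-of-charge window; the actual usable window varies by battery chemistry and manufacturer.

```python
# Useful capacity sketch: only the state-of-charge window between a minimum
# and maximum SOC is usable without harming long-term battery performance.
# The 10%-90% window is an assumed example, not a universal specification.

def useful_capacity_kwh(total_kwh, soc_min=0.10, soc_max=0.90):
    return total_kwh * (soc_max - soc_min)

total = 60.0  # kWh of absolute (nameplate) capacity
usable = useful_capacity_kwh(total)
print(round(usable, 1))  # 48.0 kWh usable; cost is still paid on all 60 kWh
```

Range estimates should use the 48 kWh figure, while the pack cost in a TCO calculation reflects the full 60 kWh.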
173
Total cost of ownership (TCO)
Levelized costs of owning a vehicle, derived by breaking down the fixed cost of owning the vehicle and amortizing it over its useful lifetime, plus the variable cost of operating the vehicle (fuel and maintenance), standardized on a cost-per-mile basis. The basic economic analysis for vehicle ownership is similar to the levelized cost methodology established elsewhere throughout this book, beginning with LCOE in Chapter 5. Levelized costs involve breaking down the variable cost of operating a vehicle, including fuel and maintenance, and the fixed cost of owning the vehicle and amortizing it over its useful lifetime. In transportation, this combined levelized cost is sometimes referred to as the total cost of ownership (TCO), described for ICE vehicles in Chapter 13. Key inputs: (a) operating cost, (b) battery cost and cycle life, (c) cost per mile.
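The TCO logic can be sketched as a cost-per-mile calculation, amortized fixed cost plus variable operating cost; all input figures below are hypothetical.

```python
# Total cost of ownership (TCO) sketch on a cost-per-mile basis:
# amortized fixed cost of the vehicle plus variable operating costs.

def tco_per_mile(purchase_cost, lifetime_miles, fuel_cost_per_mile,
                 maintenance_per_mile):
    fixed_cpm = purchase_cost / lifetime_miles      # amortized ownership cost
    variable_cpm = fuel_cost_per_mile + maintenance_per_mile
    return fixed_cpm + variable_cpm

# e.g. a $30,000 vehicle over 150,000 miles, $0.07/mi energy, $0.04/mi upkeep
print(round(tco_per_mile(30_000, 150_000, 0.07, 0.04), 2))  # 0.31
```

The same formula applies to ICEs and EVs, which is what makes the cost-per-mile basis useful for comparing them.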
174
Cycle life
The number of cycles a battery can withstand before being depleted (or at least no longer useful for transportation purposes). A TCO calculation requires a similar approach. However, since the lifetime of the battery is typically measured in terms of cycles, rather than calendar life, estimating the number of cycles a battery can withstand before being depleted (or at least no longer useful for transportation purposes) is an ideal way to measure the battery lifetime. This cycle life needs to be averaged over the distance, such as number of miles that an average charge and discharge cycle provides to the vehicle owner, resulting in a fixed cost per mile for depleting the battery life.
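The cycle-life amortization described above can be sketched as follows; pack cost, cycle life, and miles per cycle are all hypothetical.

```python
# Cycle-life sketch: amortizing the battery pack over the total miles
# delivered across its cycle life yields a fixed cost per mile for
# depleting the battery.

def battery_cost_per_mile(pack_cost, cycle_life, miles_per_cycle):
    lifetime_miles = cycle_life * miles_per_cycle
    return pack_cost / lifetime_miles

# A $9,000 pack rated for 1,500 cycles at 200 miles per full cycle:
print(battery_cost_per_mile(9_000, 1_500, 200))  # 0.03 ($/mile)
```

This per-mile battery depletion cost then slots into the fixed-cost side of the TCO calculation.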
175
Cost per mile (CPM)
Used in TCO calculation to standardize all operating and fixed costs, and allow comparability among transportation options.
176
Total potential market
From the perspective of the manufacturer of an EV, the total potential market in which it could compete would include any vehicles that might be electrified. The total potential market should be the entire fleet of vehicles sold in a given year around the world, or the entire 80 million new passenger vehicles sold, plus any future organic growth.
177
Total addressable market (TAM)
The Total Potential Market after accounting for adoption constraints. Adoption constraints: ■ Economic constraints—As mentioned above, in some circumstances, EVs are economically inferior to ICEs on a TCO basis. High electricity prices, low fuel prices, and high-cost batteries or other utilization features will make ICEs an economically preferable solution. ■ Substitutability constraints—EVs are not perfect substitutes for ICEs for most users. Some LDV owners use their vehicles in ways that EVs have trouble accommodating, including long or unpredictable daily range requirements or duty-cycle requirements to handle agricultural work, construction, or operation in wide temperature extremes. While EVs could be configured to accommodate these demands, ICE vehicles have natural advantages that will be hard to overcome. ■ Access to pairing technology—EVs, like many other technologies, require access to other devices and technologies to be useful. Notably, EVs need access to charging infrastructure for providing the necessary energy to power the vehicle electrically. Customers who do not have ready access to this infrastructure at home, at work, or at third-party charging stations in some combination may find it impractical to adopt EVs. Conversely, ICE vehicles utilize an embedded network of fueling stations that make access to pairing technology less of a concern. ■ Investment constraints—Beyond the relative TCO of competing alternatives, higher upfront capital investment may be needed to adopt EVs vs. comparable ICEs. Not only are the vehicles more expensive because of the substantial battery components, but installing local charging equipment at a home or workplace may also substantially add to the capital required for EV adoption. While these upfront capital costs are potentially financeable, allowing them to be spread over the useful life of the vehicle, in practice these higher upfront capital costs can prove a deterrent to adoption. 
■ Market failures—Even when all of the economic and operational considerations are favorable, customers may fail to adopt an alternative due to classic market failures. These include normal market failures from myopia, costly information about economics or performance, or excessive risk aversion. Customers do very limited ex post economic analysis of their vehicle operation, and so may not know the real value of adoption, or may have an excessive and unjustified risk perception. ■ Behavioral and social constraints—Even when fully aware of the economic and operational benefits, consumers still may be uncomfortable with adoption for behavioral reasons. Behavioral issues may also occur because customers value other features more, including the social and peer effects of owning a particular type of car. Having many neighbors with similar vehicles may encourage adoption, while living in communities with social norms against certain vehicle choices can restrict it. After accounting for these constraints, the TAM is the realistic market that manufacturers can expect to penetrate with their product, still limited by their sales and marketing efforts vs. the competition.
178
Range anxiety
A consumer concern about how far they can go on a single charge and whether they can perform all of their necessary transportation tasks without running out of charge or going too far out of their way to recharge. It is a major constraint on BEV adoption and use.
179
Lifecycle emissions
The amount of GHG emissions created during vehicle manufacturing, fueling/charging, and other uses. Lifecycle emissions are calculated using the lifecycle analysis methodology discussed in detail in Chapter 20. Because EVs need to be charged using grid electricity, full lifecycle analysis of their emissions has to include upstream emissions generated by the mix of electricity being used to power the grid, including all of the efficiency losses incurred through the supply chain. In almost any comparison, EVs have lower GHG emissions than their ICE counterparts, but exactly how much lower depends on the generation mix in the particular country.
180
Thermal energy system
The thermal energy system is the third of the major subsystems in the overall energy system. (Cooking, space heating, industrial processes) Thermal energy use is the oldest and most basic form of human energy use, originating with the harnessing of fire to burn wood and other biomass for cooking and space heating. Improved furnaces and fuels that could achieve higher temperatures allowed the melting and processing of metals and evolution of human society from the Stone Age to the Bronze Age to the Iron Age. Today it is still the largest subsystem as measured by energy flows, representing 37% of final energy consumption in the OECD countries, and 47% worldwide due to a lower relative demand for electricity and transportation services in the developing world.
181
Final energy use for heat (FEH)
Breakdown of the fuels consumed in the production of heat as well as the end-use sectors to which the heat is applied. In the industrialized OECD countries, natural gas is the primary source of energy for heat, followed by oil and coal, which collectively make up about 85% of the final energy use for heat (FEH). In the developing world, a substantially higher reliance on biomass for residential heating and cooking applications reduces, but does not eliminate, the relative contribution of fossil fuels to FEH. As shown in the figure, end-use applications for this heat are primarily split between heat used for industrial purposes (such as smelting, process heat, etc.) vs. heat used in building applications (space conditioning, such as heating and air conditioning, cooking, and hot water).
182
Fuel switching
Using a different fuel for industrial thermal processes. Where available and technically feasible, coal-to-gas switching may reduce pollution and emissions from cogeneration. Also, fossil fuel to renewable switching can improve emissions characteristics and dampen fuel price volatility and dependence, but it must also be evaluated for reliability and cost.
183
Heat pumps
Heat pumps comprise a vast range of technologies that can move heat (through the use of a liquid or gaseous refrigerant carrier) from one location (a source) to another (a sink). Through various types of compression (electrically driven) or absorption (thermally driven), these technologies can even move heat from a colder location to a warmer one, or vice versa, allowing heating in cold climates and air conditioning in warm ones. The most commonly used systems are vapor-compression refrigeration units, which use a liquid refrigerant that expands into gas (a phase change) and back into liquid as it cycles through the unit.
184
Coefficient of performance (COP)
A common metric used to standardize the relative efficiency of heat pumps is the coefficient of performance (COP). Conceptually, the COP is the ratio of the heat supplied to or removed from the reservoir to the work consumed by the pump. Devices with a pure thermal in and thermal out cycle, such as stoves or boilers, naturally have a COP of less than 1 due to the laws of thermodynamics. Devices that tap into ambient heat sources, such as heat pumps, can have a COP of greater than 1.
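A minimal numeric illustration of the COP ratio; the example values are illustrative, not measured performance figures.

```python
# COP sketch: heat supplied to (or removed from) the reservoir divided by
# the work consumed by the device.

def cop(heat_moved_kwh, work_input_kwh):
    return heat_moved_kwh / work_input_kwh

print(cop(1.0, 1.0))  # 1.0 : e.g. a resistive heater (all work becomes heat)
print(cop(3.5, 1.0))  # 3.5 : a heat pump tapping ambient heat moves more
                      #       thermal energy than the work it consumes
```

A COP above 1 does not violate thermodynamics because the extra heat is moved from the ambient environment, not created.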
185
Air source heat pumps
These heat pumps use ambient air outside the building as a source or sink of heat in their operation. They are the simplest to set up but may suffer from efficiency losses in extremely hot or cold ambient environments.
186
Ground source heat pumps
Also called geothermal heat pumps, these heat pumps require equipment to circulate water (or another refrigerant) in a closed or open loop under the ground, taking advantage of the natural temperature differentials between the ground and the ambient air. The systems tend to be very efficient (see Metrics Sidebar), but their initial setup can be expensive due to the excavation or drilling necessary to install the loops.
187
Combined heat and power (CHP)
Combined heat and power (CHP) was discussed in detail in Chapter 6 and represents the process whereby the waste heat from electricity generation can be captured and put to work for productive purposes. Sometimes this productive use is to increase the efficiency of electricity generation itself, but more often the low-quality heat is targeted toward local thermal needs, such as space or water heating.
188
Passive design
Even without trying to capture and redirect the energy of the sun, many design features determine a building's relationship to the sun, seasons, and ambient conditions, collectively referred to as the passive design of the house, or just passive design. These passive design features include the orientation of the house, doors, and windows; building envelope material, color, and insulation choices; and awnings, airflow, and other engineering choices. The combination of these choices can have a dramatic impact on the overall heating, lighting, and cooling requirements of a building for a given environment. Due primarily to cost issues, many of these features are best embedded in the initial design and construction, though retrofits can help with some aspects of design as well.
189
District energy
District heating and cooling systems, collectively referred to as district energy, are not sources of energy themselves but instead are methods of capturing heat from a source and moving it to a sink using large integrated networks of delivery systems across multiple buildings and users. The usual justification for these systems is either having ample sources or sinks of excess heat or favorable economics for building a system large enough to capture and deliver heat with corresponding economies of scale. A particular type of district heating system is called a municipal steam system, one of the most famous of which is in some of the more densely populated parts of New York City. One of the nice features of a steam system is that the temperature of the delivered steam is high enough to be useful for industrial as well as residential and commercial users.
190
Repowering
When the performance/economics of existing electricity generators or thermal plants are enhanced through new financial and physical capital. --- Use existing assets or infrastructure and enhance their performance or economics through meaningful additions of new financial and physical capital. Often, these opportunities arise long before the end of the useful life of an asset, but overhauling the aging equipment using state-of-the-art components (retrofits) might still make economic sense, depending on the circumstances. When this is done using electricity generators, thermal plants, or even engines, it is often referred to as repowering an asset.
191
Methane
CH4, natural gas. Compared to oil, natural gas tends to form in higher pressure and temperature ranges, resulting in a complete conversion of the organic matter into lighter hydrocarbons, the lightest of which is methane, or CH4.
192
Associated gas
Natural gas that emerges during oil production. In the absence of readily available offtake infrastructure or local use for the energy, this associated gas has very little value and must be dealt with to prevent safety issues. Managing this associated gas was a particularly difficult challenge for early oil prospectors and had to be handled to avoid explosion or health consequences to oilfield workers.
193
Flaring
Burning the methane in a flare stack. This has been the predominant method of managing associated gas, but more recently, methods have been developed to reinject the natural gas into the well to provide additional pressure for the production of primary oil.
194
Unconventional gas
Natural gas that is present in other geological formations like coal seams and shale rock, that require advanced technology or nontraditional extraction methods to access. Include: ■ Tight gas—Tight gas is the type of gas that is trapped in a low-permeability source rock deep underground, such as sandstone or limestone. Because this gas is not freely flowing, methods have been developed to liberate the gas from the rock, called well stimulation. The most common method uses high-pressure water injection to break up the source rock, also called hydraulic fracturing, or fracking. ■ Shale gas—Shale gas represents natural gas trapped in even deeper and denser shale source rock, and obtaining commercial quantities of natural gas requires even higher stimulation of the rock through hydraulic fracturing. To make these types of extraction cost-effective, additional techniques of directional drilling (drilling at angles to follow hydrocarbon deposits) and horizontal drilling (drilling laterally across the source rock) have been developed and used in combination with hydraulic fracturing. Due to the geology of tight gas and shale gas deposits, these wells typically have very high initial production (IP) rates that drop off quickly without additional stimulation. ■ Coal-bed methane—Another, shallower source of natural gas is found colocated with coal deposits, where the natural decomposition of organic material over time created some methane that remains trapped inside the deposit. Geologically, this coal-bed methane (CBM), also called coal-seam methane, stays trapped in the coal due to the presence of large amounts of water, creating pressure that prevents the gas from escaping upward through the rock. Extracting the natural gas from these coal seams, a process originally developed to make the coal seams safe for mining, is typically just a matter of removing the water, thereby allowing pressure in the reservoir to drop and the gas to escape simultaneously. 
While CBM wells are initially very expensive and typically start out with lower IP rates, they tend to perform consistently over a long period of time once completed. ■ Landfill gas—Another source of natural gas is landfill gas, the methane that is captured from the decomposition of material in waste landfills. ■ Biogas—Finally, biogas can be an important source of natural gas, particularly in places with abundant organic waste matter. Biogas (described in more detail in Chapter 8) is created by breaking the organic matter down with anaerobic bacteria in a digester, resulting in methane that can be used for thermal energy purposes. -- Natural gas from associated and nonassociated deposits is collectively referred to as conventional gas. However, natural gas is also present in many other geological formations that have trapped hydrocarbons, including coal seams and shale rock. Natural gas deposits in these formations are collectively referred to as unconventional gas deposits and often require advanced technology or nontraditional extraction methods to access.
195
Gas initially in place (GIIP)
Gas initially in place (GIIP), which estimates the total amount of gas in the reservoir based on seismic data and exploratory well experience. Not every bit of the GIIP is obtainable, but the technology and geology will allow some overall recovery factor (percentage of the total GIIP ultimately obtainable), which will result in a total amount of natural gas that can be produced over the life of the field. Improving technology can help improve the recovery factor and therefore ultimate production of a given field, though the cost of doing so will always need to be evaluated vs. the benefit of the marginal production.
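The recovery-factor arithmetic above can be sketched simply; the reservoir size and recovery factor below are hypothetical.

```python
# Recoverable gas sketch: producible volume equals GIIP times the
# recovery factor permitted by technology and geology.

def recoverable_gas(giip_bcf, recovery_factor):
    return giip_bcf * recovery_factor

# A reservoir estimated at 500 Bcf GIIP with a 60% recovery factor:
print(recoverable_gas(500, 0.60))  # 300.0 Bcf producible over the field life
```

Improved technology that lifts the recovery factor raises ultimate production, but the cost of the improvement must be weighed against the value of that marginal production.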
196
Fugitive emissions
Emissions from natural gas supply chain - creating concerns and the potential for additional future regulation (mainly VOC and methane). VOCs: The production of natural gas creates a number of these fugitive emissions, including volatile organic compounds (VOCs), a class of organic chemicals, such as benzene, that can have significant health effects on local populations. Methane: Another fugitive emission problem is the release of methane into the atmosphere. While nearly all of the methane in the natural gas supply chain is captured, small amounts do leak out both at the point of production and elsewhere in the transmission and distribution systems. Aside from the loss of potential revenue this creates, methane is a powerful greenhouse gas, which has an equivalent heating effect (global warming potential, or GWP) 86 times that of carbon dioxide over a 20-year time frame and 34 times over a 100-year time frame.
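Using the GWP values cited above (86 over a 20-year horizon, 34 over 100 years), a fugitive methane leak can be converted into CO2-equivalent terms:

```python
# CO2-equivalent sketch for fugitive methane, using the GWP values cited
# in the text (86x CO2 over 20 years, 34x over 100 years).

GWP_CH4 = {20: 86, 100: 34}

def co2e_tonnes(methane_tonnes, horizon_years):
    return methane_tonnes * GWP_CH4[horizon_years]

# One tonne of leaked methane:
print(co2e_tonnes(1.0, 20))   # 86.0 tonnes CO2e over a 20-year horizon
print(co2e_tonnes(1.0, 100))  # 34.0 tonnes CO2e over a 100-year horizon
```

The gap between the two horizons is why even small leakage rates matter so much for the near-term climate impact of the natural gas supply chain.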
197
Volatile organic compounds (VOCs)
Type of fugitive emissions from natural gas production - class of organic chemicals, such as benzene, that can have significant health effects on local populations. The production of natural gas creates a number of these fugitive emissions, including volatile organic compounds (VOCs), a class of organic chemicals, such as benzene, that can have significant health effects on local populations. VOCs are toxic and cause respiratory problems, particularly in infants and the elderly; in addition, constant exposure to VOCs increases the risk of cancer and other long-term impairment. VOCs are also linked to dramatic increases in ozone levels, which result in local smog and air quality deterioration that can amplify the negative health impacts of VOCs.
198
Henry Hub (HH)
main gas distribution hub in Louisiana, where the US benchmark price is established --- As with all prices, they must reflect a functional product with a very precise set of characteristics, and these are typically standardized by market makers in well-established benchmark prices to simplify transactions. The US benchmark price in Figure 18.7 is established at the Henry Hub (HH), a main gas distribution hub in Louisiana. The UK benchmark number is from the National Balancing Point (NBP), a constructed measure for all UK gas. For Japan, the number comes from spot liquefied natural gas (LNG) import prices, since all gas used there must be imported from overseas.
199
Working gas
Working gas refers to the actual gas that can be withdrawn from storage above any amount of base gas, or the minimum amount of gas necessary in a storage device at any time to keep it pressurized and flowing at commercial volumes. ----- Because the supply of natural gas is generally much more constant than its demand, managing this seasonality in demand requires natural gas storage using the techniques described in the section above. In the US example, strong seasonal demand for building thermal energy requires storage to be filled during the warmer months, and drawn down over the winter. Figure 18.12 shows the regular patterns of this storage, with a 5-year band of high and low levels of working gas. Working gas refers to the actual gas that can be withdrawn from storage above any amount of base gas, or the minimum amount of gas necessary in a storage device at any time to keep it pressurized and flowing at commercial volumes. The relative amount of storage in the United States tends to be higher than in other OECD countries due to the combination of isolated geography and strong winter seasonality.
200
Base gas
minimum amount of gas necessary in a storage device at any time to keep it pressurized and flowing at commercial volumes (as opposed to working gas, which is the amount above this that can be withdrawn)
201
Stranded gas
Gas in countries with an abundance of natural gas but lacking the geographic connections or capital to establish pipeline facilities. --- Places that lack the geographic connections or capital to establish pipeline facilities are effectively cut off from international trade in natural gas. Countries with an abundance of natural gas, therefore, have stranded gas, because they cannot get this gas to market, while countries with no domestic supplies cannot procure the fuel and take advantage of its benefits.
202
Liquefied natural gas (LNG)
Natural gas cooled down to the temperature at which it becomes liquid, or –162°C (–260°F). This liquefied natural gas (LNG) increases the amount of natural gas energy that can be stored in a given volume by 600 times and contains close to triple the energy content per unit of volume compared to CNG. As a result, LNG can be more cost-effectively transported in special ships over long ocean routes. However, the process for safely cooling, loading, shipping, offloading, and reheating the gas at its destination requires a staggering amount of physical and financial capital.
203
Indexation
Adjustments to pricing based on changes in the corresponding index (in the context of an index contract for natural gas). Base pricing and indexation—This comprises the total pricing algorithm between any fixed base price (the minimum or floor price) and the indexation (adjustments to that pricing based on changes in the corresponding index). It may also provide a cap on pricing to protect the buyer.
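One way to sketch the base-price-plus-indexation structure described above, with an optional cap. The linear pass-through formula and all figures are illustrative, not taken from any actual contract.

```python
# Indexed contract pricing sketch: a base (floor) price adjusted by the
# movement of a reference index, optionally capped to protect the buyer.

def contract_price(floor, index_now, index_base, slope, cap=None):
    price = floor + slope * (index_now - index_base)
    price = max(price, floor)      # the floor protects the seller
    if cap is not None:
        price = min(price, cap)    # the cap protects the buyer
    return price

# Oil-linked example: $3/MMBtu floor, 5% of the oil price move passes through
print(contract_price(3.0, 80.0, 60.0, 0.05))            # 4.0
print(contract_price(3.0, 120.0, 60.0, 0.05, cap=5.5))  # 5.5 (capped)
```

An oil-linked contract is simply this structure with an oil benchmark as the reference index.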
204
Oil-linked index contract
Natural gas contracts are often indexed not to the spot price for natural gas but to the price of oil. It is common for a contracted commodity to be indexed to the spot market price of that commodity but much rarer to be indexed to a different commodity. This type of arrangement emerged because of the long-term relationship between oil prices and natural gas prices and the competitive dynamics of the gas-exporting countries of Qatar and Russia. Qatar is gas rich but oil poor and wanted to establish trading relationships for its LNG that did not economically disadvantage it against its oil-exporting neighbors. Russia, a big producer of both oil and natural gas, also wanted to make sure that its natural gas was not economically disadvantaged vs. its oil exports, particularly to an energy-hungry Europe. Due to the market power these two gas-exporting countries had in the early formation of international natural gas trade, an oil-linked index contract became the standard. Today nearly all of the international trade of natural gas into the Japanese market and about half of that sold into Europe is indexed to the price of oil.
205
Shale gas
Natural gas produced from ultrahard shale deep underground using advanced methods, including:
- hydraulic fracturing (use of high-pressure water to fracture the rock containing hydrocarbons)
- injection of slickwater (adding chemicals to allow easier fluid flows within the well)
- proppants (injecting sand or other material to hold open fractures and cracks in the rock)
- horizontal drilling (allowing drilling along the contours of the deposit)
206
Well productivity
Output of each well drilled, often measured in cubic feet per day (cf/d) or million cubic feet per day (MMcf/d). Productivity index: a measure of a well's ability to produce gas, calculated by dividing the flow rate by the pressure difference between the reservoir and the wellbore.
207
Non-associated gas
Subsequent discoveries of substantial gas fields without the presence of significant oil (nonassociated gas) in the United States, the Netherlands, and Siberia in the mid-twentieth century led to efforts to capture and process this gas and deliver it to customers to help meet their growing energy needs.
208
Conventional gas
Natural gas from associated and nonassociated deposits is collectively referred to as conventional gas.
209
Scarcity
the notion that in making economic decisions, actors are necessarily prioritizing between options due to an inability, through lack of physical or financial resources or time, to choose all options at once. “Economics is a science which studies human behaviour as a relationship between ends and scarce means which have alternative uses.” - Lionel Robbins
210
Circular system
system with lots of feedback loops that influence each other and no obvious beginning or ending point
211
Physical dependence
Physical dependence is a measure of how much of the energy services in an economy would cease if the physical supply of energy were disrupted. It refers to the need for energy to meet the actual requirements of people, government, and business in the economy, and is related to the direct use of energy.
212
Economic dependence
Economic dependence measures focus on the relative impact of energy price changes on economic activity. --- The relationship between energy and the economy can also be measured through economic dependence or trade dependence, though the two are often intertwined in practice.
213
Dutch disease
Example of how large resource endowments can lower performance in other economic sectors. When Dutch disease occurs, the external demand for a country's resources (e.g., oil or iron ore) puts strong upward pressure on that country's currency. This in turn causes the currency to appreciate (meaning it is more expensive to purchase with other currencies), which then harms other exporting sectors in that economy—often manufacturing. In addition to currency effects, Dutch disease can also put pressure on manufacturing by increasing wages across the economy and adding to inflationary pressures, particularly through an increase in industrial prices, measured by the producer price index. Because energy resources and other commodities responsible for causing Dutch disease are often subject to a cyclical boom-bust pattern, the benefits to the resources sector are often short term, while the damage to manufacturing or other sectors can be long term and practically irreparable. Although resources can contribute greatly to the wealth and prosperity of a nation, valuable indigenous resources can also damage economic performance.
214
Leapfrogging
The idea of skipping the centralized approach to go straight to distributed generation powered by renewable energy
215
Crowding out
the potential for subsidy or giveaway programs to impair normal market function
216
Subsidy dependence
A market failure in which local users come to expect future subsidies and wait for the next giveaway program rather than purchase on their own. When customers receive free or subsidized energy generation or financial solutions, it may reduce the demand for existing or future market transactions for these or competing solutions. It can also change customers' expectations about the future availability of subsidies, creating a subsidy dependence among local users.
217
Sources and sinks
Sources: Part of ecosystem providing resources Sink: Part of ecosystem collecting undesirable consequences The energy-economic system is still an open, nested system within the larger vessel of the global ecosystem, from which it draws resources (sources) and into which it puts the undesirable consequences of energy and economic transformations (sinks).
218
Natural capital
Form of capital provided by nature, providing services that might otherwise be impossible or would need to be replaced by other forms of capital (e.g., water purification). While many of these services support the healthy functioning of individuals and societies, they also provide essential economic transformations that would need to be replicated if the ecosystem were unavailable to do so. From an economic system perspective, this natural capital exists to provide or facilitate other transformations in support of human industrial activity. Water can be delivered from a wide basin to thirsty urban populations both through groundwater funneling into streams and through underground aquifers. Some of these methods also purify that water in the process, in a way that would need to be replicated through physical and financial capital if the natural capital to do so were unavailable.
219
Environmental insult
The specific harm or stress inflicted on the ecosystem. Borrowing a term from the study of medicine, this injury is described as an environmental insult, or just insult for short. The insult can be, for example, increased levels of pollutants, absorption of water supplies above the recharge rate, or depletion of fuel stocks.
220
Environmental impact
The outcome the ecosystem experiences as a result of an insult. When the ecosystem endures such an insult, a combination of direct and indirect environmental impacts affects the functioning of the overall ecosystem services, sometimes through multiple channels simultaneously. In the case of increased air pollution, the contaminated air is breathed by both humans and the surrounding plants and animals on which they rely, creating illness and disease and reducing their natural growth rates.
221
Montreal Protocol
Protocol of 1987 that was key to the protection of the atmospheric ozone layer from damaging chemicals ---- The Montreal Protocol, which addressed chemicals that harm the ozone layer in the earth's atmosphere, represents one of the great success stories in environmental regulation, as the world's governments came together to develop a shared policy framework. Former UN Secretary General Kofi Annan called the protocol “perhaps the single most successful international agreement to date.” Faced with mounting evidence that certain classes of ozone-depleting substances (or ODSs, including CFCs and HFCs described later in this chapter) were leading to a growing hole in the protective ozone layer, a UN treaty codified acceptable quantities for each pollutant, as well as a graduated reduction of emissions down to zero from 1987 to 1996. Faced with strict and clear emission quantity constraints, producers responded by finding the most cost-effective way to reduce these emissions over time, and met the phase-out requirements much more cheaply than originally estimated. ---- Even as national environmental protection laws continued to improve throughout the twentieth century across geography and scope, the issues became increasingly recognized as international in scope. Dealing with pollution becomes substantially more complicated when it crosses a sovereign legal boundary, as happens for many types of air pollution, water pollution, and nuclear contamination, as well as for climate change. Some of the first international agreements to deal with transboundary pollution were bilateral agreements between two countries across a single border, but, by the 1960s, the recognition of acid rain sources and sinks across much of northern Europe led to the signing of the first internationally legally binding instrument for dealing with transboundary pollution in 1979, called the Convention on Long-range Transboundary Air Pollution (LRTAP). 
This successful multilateral cooperation on environmental issues provided the foundation for expansion of this convention, as well as the establishment of other conventions and protocols, including the Montreal Protocol of 1987 that was key to the protection of the atmospheric ozone layer from damaging chemicals (see Economics Box on establishing markets for externalities later in this chapter for more detail), and the UN Framework Convention on Climate Change (UNFCCC).
222
Greenhouse gases (GHGs)
gases that, when resident in the atmosphere, cause sunlight falling on the earth to be increasingly captured in the atmosphere as heat, functioning much as the glass panes of a greenhouse. Main gases:
■ Carbon dioxide (CO2)—76% of CO2eq annual emissions—Carbon dioxide is the primary contributor to GHGs and climate change and is generated both through human fossil fuel combustion and other industrial processes and through forestry and other agricultural activities.
■ Methane (CH4)—16% of CO2eq annual emissions—Methane is a hydrocarbon (CH4) that is the main component of natural gas. It is released not only from the fossil fuel supply chain in natural gas extraction and delivery but also from many other anthropogenic sources, including deforestation, soil degradation, and land-use changes. The anthropogenic volume of methane emitted from both of these sources is modest, but it has a high GWP of 34 over a 100-year period, and higher over shorter periods.
■ Nitrous oxide (N2O)—6% of CO2eq annual emissions—While large contributions of nitrous oxide are emitted through normal biological processes, the anthropogenic sources of nitrous oxide are primarily a result of land use, soil use, and nitrogen-based fertilizers and feeds that accelerate bacterial activity. About 20% of anthropogenic nitrous oxide comes from combustion. In addition to the damage caused by local air pollution and ozone formation described in the previous section, nitrous oxide has a very long lifetime in the atmosphere and, as a result, a high GWP.
■ Fluorinated gases (F-gases)—2% of CO2eq annual emissions—These gases are industrial chemicals designed for use in refrigeration, insulation, and other thermal applications for consumer and industrial devices, including hydrofluorocarbons (HFCs), perfluorocarbons (PFCs), sulfur hexafluoride (SF6), and chlorofluorocarbons (CFCs). CFCs were largely regulated out of use under the Montreal Protocol.
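The GWP figure the card cites (34 for methane over 100 years) is what converts emissions of different gases into a common CO2-equivalent unit. A minimal sketch of that arithmetic; the 10-tonne quantity is illustrative, and only the CH4 value of 34 comes from the text:

```python
# 100-year global warming potentials; CH4 = 34 is the value cited in
# the card, CO2 = 1 by definition.
GWP_100 = {"CO2": 1, "CH4": 34}

def co2_equivalent(tonnes, gas):
    """Convert tonnes of a gas into tonnes of CO2-equivalent."""
    return tonnes * GWP_100[gas]

print(co2_equivalent(10, "CH4"))  # 340 -> 10 t of methane counts as 340 t CO2eq
```

This is why a "modest" volume of methane can still account for 16% of CO2eq annual emissions.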
223
Carbon dioxide removal (CDR)
Geoengineering approach to remove carbon from the atmosphere - Some of the more benign versions of this technology include increased forest cover (afforestation or reforestation) or capturing carbon in biochar, a charcoal product that can be used to simultaneously sequester carbon in soils and improve agricultural productivity in some areas. - More exotic versions of this technology involve iron fertilization of oceans, which relies on iron to increase biological activity and uptake of carbon dioxide before it sinks to the bottom, but little real scientific study has been done verifying this approach. - Finally, air capture technologies involve both biological and chemical routes for capturing carbon dioxide from ambient air and sequestering it. While these pathways can simultaneously reduce carbon dioxide and the negative impacts it creates, achieving sufficient scale, cost, and safety of these capture technologies and ensuring long-term sequestration of the captured carbon dioxide remain significant unaddressed challenges.
224
Bradford Rule
This assessment relies on an important principle in capital-intensive systems, the Bradford rule, which is that capital-intensive systems are primarily altered by changing the flow of capital into (and out of) them. --- (Not defined as part of the Bradford rule, but these are the three main takeaways from decarbonization pathways, which connect to the rule:)
Step 1: Use less.
Step 2: Rotate primary energy supply from carbon-emitting to carbon-free.
Step 3: Alter the flows of new capital in the system to meet those goals.
225
Shadow carbon price
implicit carbon price used in capital budgeting by corporations Corporations also increasingly consider a shadow carbon price, or an implicit carbon price, in their capital budgeting (i.e., investment) decisions to ensure that their multiyear or multidecadal investments appropriately consider the risk of future carbon prices in calculating expected returns. Even large oil and gas producers have included a price of carbon, ranging from US$40 to $80 per ton of CO2, substantially higher than the prices reached in explicit carbon markets today.
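In capital budgeting, a shadow carbon price simply enters the cash-flow projection as an internal charge per tonne emitted before discounting. A minimal sketch; the cash flows, emissions, and discount rate are illustrative assumptions, and only the $40–$80/t range comes from the text:

```python
def npv_with_carbon(cash_flows, emissions_t, carbon_price, rate):
    """NPV of yearly cash flows, net of an internal (shadow) carbon
    charge of carbon_price per tonne emitted that year."""
    return sum(
        (cf - e * carbon_price) / (1 + rate) ** t
        for t, (cf, e) in enumerate(zip(cash_flows, emissions_t), start=1)
    )

# Hypothetical project: $100/yr for two years, 1 t CO2/yr, zero discounting
# for simplicity. A $60/t shadow price cuts the NPV from 200 to 80.
print(npv_with_carbon([100, 100], [1, 1], 0, 0.0))   # 200.0
print(npv_with_carbon([100, 100], [1, 1], 60, 0.0))  # 80.0
```

The screen makes carbon-intensive projects look less attractive today, even before any explicit carbon market reaches those price levels.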
226
System modeling
the process of building mathematical relationships that integrate all of the key variables of that system
227
Scenario planning
the process of altering the inputs of those models to determine how big an impact different pathways will have on the outcome of target system variables
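The two cards fit together: system modeling builds the mathematical relationships, and scenario planning sweeps the model's inputs. A toy sketch under stated assumptions (the one-line emissions model and all scenario numbers are invented for illustration):

```python
def emissions_model(energy_demand, carbon_intensity):
    """Toy system model: emissions = demand x carbon intensity.
    Real models integrate many more interrelated variables."""
    return energy_demand * carbon_intensity

# Scenario planning: alter the inputs and compare the target output.
scenarios = {
    "baseline": (100, 0.5),     # (demand, intensity)
    "efficiency": (80, 0.5),    # use less
    "decarbonize": (100, 0.3),  # rotate supply toward carbon-free
}
results = {name: emissions_model(d, i) for name, (d, i) in scenarios.items()}
print(results)  # {'baseline': 50.0, 'efficiency': 40.0, 'decarbonize': 30.0}
```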
228
Adoption Constraints
Six of them:
■ Economic constraints—As mentioned above, in some circumstances, EVs are economically inferior to ICEs on a TCO basis. High electricity prices, low fuel prices, and high-cost batteries or other utilization features will make ICEs an economically preferable solution.
■ Substitutability constraints—EVs are not perfect substitutes for ICEs for most users. Some LDV owners use their vehicles in ways that EVs have trouble accommodating, including long or unpredictable daily range requirements or duty-cycle requirements to handle agricultural work, construction, or operation in wide temperature extremes. While EVs could be configured to accommodate these demands, ICE vehicles have natural advantages that will be hard to overcome.
■ Access to pairing technology—EVs, like many other technologies, require access to other devices and technologies to be useful. Notably, EVs need access to charging infrastructure to supply the electricity that powers the vehicle. Customers who do not have ready access to this infrastructure at home, at work, or at third-party charging stations in some combination may find it impractical to adopt EVs. Conversely, ICE vehicles utilize an embedded network of fueling stations that makes access to pairing technology less of a concern.
■ Investment constraints—Beyond the relative TCO of competing alternatives, higher upfront capital investment may be needed to adopt EVs vs. comparable ICEs. Not only are the vehicles more expensive because of the substantial battery components, but installing local charging equipment at a home or workplace may also substantially add to the capital required for EV adoption. While these upfront capital costs are potentially financeable, allowing them to be spread over the useful life of the vehicle, in practice they can prove a deterrent to adoption.
■ Market failures—Even when all of the economic and operational considerations are favorable, customers may fail to adopt an alternative due to classic market failures. These include myopia, costly information about economics or performance, and excessive risk aversion. Customers do very limited ex post economic analysis of their vehicle operation, and so may not know the real value of adoption, or may hold an excessive and unjustified perception of risk.
■ Behavioral and social constraints—Even when fully aware of the economic and operational benefits, consumers may still be uncomfortable with adoption for behavioral reasons. Behavioral issues may also arise because customers value other features more, including the social and peer effects of owning a particular type of car. Having many neighbors with similar vehicles may encourage adoption, while living in communities with social norms against certain vehicle choices can restrict it.