Week 7 - Chapter 11 Flashcards
- distributed generation (DG)
One of the most profound changes in the overall structure of the electricity system in the last decade has been the rapid emergence of distributed generation (DG) options for producing energy at, or very near, the customer’s load. (See Figure 11.1.) This generation is increasingly on the customer side of the electricity meter, creating interesting implications for the traditional grid operation model.
What is new, though, is the emergence of a set of technologies that is cost-effective at the very small scale and that may be located and sized to match the specific customer load. To date, this phenomenon has been driven by solar photovoltaic (PV) devices, but these devices are just one among a class of technology options for generating electricity at a home, a business, or an industrial site. Many such generation options are getting cheaper at an astounding rate and are approaching cost-effectiveness in their own right. This process has been helped by favorable subsidies and targeted policy in many places around the world over the last decade, but, increasingly, DG technologies are becoming viable even without government support.
- photovoltaic effect
The photovoltaic effect was first observed in experiments by A. E. Becquerel in 1839, when he noticed that some electrolytic cells showed increased electric activity (voltage or current) when exposed to sunlight (specifically, photons, though these were not fully understood until much later).
Essentially, photons of light shining on layers of certain materials eject free electrons, which can be captured as they move toward different materials. This process creates an electric current that can power electronic devices (see Figure 11.2).2 Later work by many scientists, including Heinrich Hertz, explored the basic photoelectric effect across a number of metals and semiconductors.
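Although the chapter stops at the qualitative description, the standard relations behind the photoelectric effect (due to Einstein, building on this early work, and not drawn from the text) can be written as:

```latex
E_{\text{photon}} = h\nu, \qquad KE_{\max} = h\nu - \phi
```

Here h is Planck's constant, ν the frequency of the incident light, and φ the work function of the material; electrons are ejected only when hν exceeds φ.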
- PV modules
In 1954, scientists at Bell Labs in the United States demonstrated the first working PV device with enough power to run standard electrical equipment like light bulbs and small motors. Initial designs converted 4% of incoming light into electricity, a figure that quickly grew to 11%.3 These first devices used a purified silicon wafer as the semiconductor, doped with impurities to enhance the photoelectric effect. They were named PV cells.
Additional supporting elements were created to package the cells into PV modules. This required stringing the cells together to achieve specific voltage outputs from the module and encapsulating the components to protect them from water and other contaminants that would degrade overall module performance. Unless converted with an on-board device, the electricity that comes out of a PV module is direct current (DC) electricity.
- off-grid systems
Almost immediately, applications were developed that connected the DC power from the module to batteries that could store the excess electricity and make it available when sunlight was unavailable. Because most batteries store DC power, a relatively simple charge controller can both charge the batteries in a way that is not detrimental to their long-term operation and regulate the use of electricity from the module and the batteries to meet the load. This opened up new applications, including solar lanterns, solar-powered electric fences, marine applications, and remote communications, as well as powering homes with no access to grid electricity—collectively referred to as off-grid systems.
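As a rough illustration of what such a charge controller does, here is a minimal sketch in Python (the function, names, and thresholds are illustrative assumptions, not from the text): in each time step, PV output first serves the load, surplus charges the battery up to a safe ceiling, and deficits draw it down only to a safe floor.

```python
def step(pv_kw, load_kw, soc_kwh, capacity_kwh, dt_h=1.0,
         min_soc=0.2, max_soc=0.95):
    """One time step of a naive charge controller (illustrative only).

    Surplus PV charges the battery up to max_soc; deficits discharge it
    down to min_soc, protecting the battery's long-term health.
    Returns the new state of charge and any unserved load (kWh).
    """
    surplus_kwh = (pv_kw - load_kw) * dt_h
    floor = min_soc * capacity_kwh
    ceiling = max_soc * capacity_kwh
    if surplus_kwh >= 0:
        # Charge, spilling any energy beyond the safe ceiling.
        soc_kwh = min(soc_kwh + surplus_kwh, ceiling)
        unserved = 0.0
    else:
        # Discharge, but never below the safe floor.
        available = soc_kwh - floor
        draw = min(-surplus_kwh, available)
        soc_kwh -= draw
        unserved = -surplus_kwh - draw
    return soc_kwh, unserved

# e.g., soc, unserved = step(pv_kw=3.0, load_kw=1.5, soc_kwh=5.0, capacity_kwh=10.0)
```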
- grid-connected systems
Once a system is designed to accept AC power, it is a small matter to accept AC power from the grid when it is available. The grid can then function as a supplemental source of electricity supply, much as a generator does in the hybrid systems described under the inverter entry below, and the connection can also displace the need for (costly) on-site batteries by absorbing surplus generation from DG systems. Since 2004, these grid-connected systems have become the dominant method of deploying distributed PV, primarily because they reduce the upfront cost, LCOE, and complexity of systems compared with those that rely on a dedicated battery or backup generator.
While off-grid systems still have a very important niche in the overall distributed energy landscape for remote applications and small devices, grid-connected systems are the primary thrust of current distributed generation deployment and will be the primary focus of the technical and economic evaluation that follows.
- inverter
Integrating these off-grid systems with devices that run on AC power, instead of the DC power a PV module or battery produces, requires an inverter to convert DC to AC. As with all transformations, this device incurs some losses of useful energy and requires additional capital investment, but the advantages of pairing AC components with DC power production are often worth the trade-off.
Once a system can manage AC power, it can also be supplemented with a generator to ensure that power is available even when sunlight is not and the batteries have been depleted. These hybrid PV systems give additional assurance of electricity under a wider range of needs and ambient conditions.
- insolation
In addition to all of the physical capital, PV systems require a source of primary solar energy for conversion into electricity. While the amount of solar energy striking a surface perpendicular to the sun's rays at the top of the atmosphere is essentially constant over time (the solar constant, roughly 1.36 kW/m²), several factors determine how much sunlight is available for capture at any point on the planet's surface (insolation). These include the following (a brief sizing sketch follows this list):
- Where and how much
- When
- How certain
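One common way these factors enter system sizing is through "peak sun hours," average daily insolation expressed as hours of full-strength sun. A minimal sizing sketch, with illustrative numbers that are assumptions rather than figures from the text:

```python
# Annual energy from a PV array using average daily peak sun hours.
# All inputs are illustrative assumptions.
dc_capacity_kw = 5.0        # nameplate DC rating of the array
peak_sun_hours = 4.5        # avg daily insolation, kWh/m^2/day ~ hours of full sun
performance_ratio = 0.80    # losses: inverter, wiring, soiling, temperature

annual_kwh = dc_capacity_kw * peak_sun_hours * 365 * performance_ratio
print(f"Estimated output: {annual_kwh:,.0f} kWh/year")  # ~6,570 kWh/year
```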
- polysilicon PV
The dominant technology today for producing PV modules (polysilicon PV) uses crystalline silicon (polysilicon) as a semiconductor base similar to those used in computer memory and processing devices. There are many methods of converting raw metallurgical grade silicon into highly purified forms (sometimes called semiconductor grade, or nine-nines pure; i.e., 99.9999999% pure silicon), but all of them require substantial capital and energy to liquefy silicon in the purification process.
Over the last decade, PV modules have become the dominant use of this polysilicon, and the manufacturing supply chain has at times had trouble keeping up with the dramatic growth in PV demand. Once the polysilicon is melted, purified, and formed into a solid silicon ingot, it is sliced into PV wafers, and then chemically treated (doped) to have specific photoelectric properties as a PV cell. These cells are then assembled and encapsulated (nearly always with a glass layer that is both rugged and impermeable to water), becoming useful PV modules for installation. See Figure 11.5 for a diagram of these steps.
- thin-film PV
Due to the inherent limitations of the capital-intensive process of producing polysilicon, and to circumvent occasional bottlenecks in the polysilicon supply chain, new technology pathways have been developed. Three broad classes of these technologies have been commercialized and are referred to as thin-film PV because they eliminate the use of a thick silicon wafer in PV construction.
- balance-of-systems (BOS) components
As mentioned above, once a module is ready to be used, a number of other components are necessary to configure it into a working system at the customer site. These are referred to as balance-of-systems (BOS) components and include:
■ Mounting, racking, and wiring—The PV modules need to be affixed to a mounting structure, usually through the use of a rack onto which they can be bolted. These mountings are often placed on the roof of the building (roof mounted) to maximize the access to sunlight and minimize the interference with ground-level activities. Alternatively, systems can be ground mounted, or placed on the ground, with ample insolation, a setup that can be cheaper and easier to install if space is available. Wires to connect the modules together into strings and arrays and deliver that energy to the inverter are also required.
■ Inverter and power management—Modern grid-connected PV systems must have a DC to AC inverter to convert the solar power into a form useful for the grid. These can be anything from very small microinverters located directly on the back of a module all the way up to large container-sized inverters for significant commercial or industrial applications. In addition to transforming the type of electricity, these systems function as a power management tool to optimize the use and synchronization of the modules for maximum electricity output. They may also have communications and testing protocols to assist in the remote monitoring of PV systems.
■ Labor and inspection—Finally, labor is needed to deliver materials to the site and assemble it into a completed system. This labor requires certain technical ability in the mechanical and electrical trades, and thus often commands a higher wage than traditional manual labor. Once a system is installed, internal and external inspection to test the system and certify it for use is required.
- soft costs
In the solar business, installers are the people who simply install the modules and BOS (a cost anticipated in the BOS costs above), but developers have to do more, including finding viable customers and seeing them through to a completed installation. Project developers often charge a developer margin above the cost of components and installation to compensate them for these costs and to provide an adequate profit incentive. These costs include:
■ Customer acquisition costs—Finding people who are both willing and qualified to install a solar system on a home or business represents a meaningful investment of time and money. Finding customers (through many different methods of outreach and advertising) and getting them to signal their initial interest is only the beginning. Qualifying customers requires an investment of time for designing and pricing the systems to determine whether the systems are suitable (or the customers are ultimately willing) to go forward. As a result, customer acquisition costs, particularly for the smallest installations, can end up being the single largest cost component after the cost of the module.
■ Design and approvals—Determining the site specifications and the specific components required to optimize the solar array requires technical talent and time. Depending on the local zoning and building inspection requirements, getting these plans and the final installation certified for use can also be a complex undertaking.
■ Financing—Somebody has to provide the financial capital to purchase the system, whether that is the customer or a third-party financial provider. Ensuring that adequate financing solutions are available and that the customer can take advantage of them requires time and expertise. Even when the direct cost of obtaining financing is low, a customer who ultimately cannot qualify for a loan represents a loss of all the productive time spent identifying that customer and developing the system.
■ Monitoring and billing—Once the system is installed, it is important to continually monitor it to ensure optimal performance and to identify any faults, failures, or hazards. Depending on the nature of the billing process, accurate tracking of the system output may also be needed to determine the amount paid by the customer each month. Regardless of the compensation structure, sending statements and collecting payments is often complicated for the smallest and least creditworthy portion of the customer base.
- net metering
Net metering—Once the system is connected, the electricity flowing back and forth between the distributed generation and the grid must be fairly compensated. The simplest method is to count the net kilowatt-hours that flow into the building and charge the customer for that amount. This allows any surplus generation from the DG system to be used (often thought of as being stored) by the grid. This method of compensation was originally developed because analog meters on typical homes simply run backward when electricity flows out to the grid, so the net measurement happens automatically.
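A minimal sketch of the accounting (the rate and quantities are illustrative assumptions): under net metering, imports and exports simply net against each other before the volumetric rate is applied.

```python
# Net metering: bill the customer on net kWh flowing into the building.
# All numbers are illustrative assumptions.
consumed_kwh = 900.0   # total kWh drawn from the grid this month
exported_kwh = 350.0   # surplus DG kWh pushed back onto the grid
rate_per_kwh = 0.15    # volumetric retail rate, $/kWh

net_kwh = consumed_kwh - exported_kwh
bill = net_kwh * rate_per_kwh
print(f"Net usage: {net_kwh:.0f} kWh, bill: ${bill:.2f}")  # 550 kWh, $82.50
```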
- rate design
Rate design—As discussed in Chapter 4, the allocation of the grid’s costs to the various users of its services is done through the process of rate design. Today’s rate design is predominantly driven by volumetric considerations, and it allocates the costs over the volume of energy used by the customers. However, the specific features of the rate design can dramatically affect the economics of the DG intervention. Some considerations for rate design include:
– Flat rate vs. time of use
– Connection charges
– Demand charges
- time of use (TOU) pricing
Flat rate vs. time of use—Customers can be charged the same volumetric rate for a kilowatt-hour regardless of the time of day in which it is consumed (flat-rate pricing). Alternatively, customers with appropriate metering technology can be charged based on the time of day they consume electricity and the relative value of electricity at that time. Technologies like solar, whose output is correlated with peak demand, would be compensated more favorably under time-of-use (TOU) pricing, at least at low levels of penetration.
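The sketch below contrasts the two approaches (the periods, prices, and usage profile are illustrative assumptions): the same total consumption yields different bills depending on when the energy is used, which is exactly what rewards peak-coincident generation like solar under TOU.

```python
# Flat-rate vs. time-of-use (TOU) billing for one day of usage.
# All rates and the usage profile are illustrative assumptions.
usage = {"off_peak": 10.0, "mid_peak": 8.0, "on_peak": 6.0}  # kWh by period
flat_rate = 0.15                                             # $/kWh, any hour
tou_rates = {"off_peak": 0.08, "mid_peak": 0.15, "on_peak": 0.35}

flat_bill = sum(usage.values()) * flat_rate
tou_bill = sum(kwh * tou_rates[period] for period, kwh in usage.items())
print(f"Flat: ${flat_bill:.2f}, TOU: ${tou_bill:.2f}")
# Flat: $3.60, TOU: $4.10 -- offsetting or shifting on-peak kWh matters most.
```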
- connection charge
Connection charges—Connecting to the grid often creates overhead costs and expenses that need to be compensated, regardless of the amount of electricity used. Charging for these expenses is usually done with a connection charge, which is often the same for all customers of a certain size class. Higher connection charges result in a lower proportion of expenses allocated to volumetric charges, and therefore a lower incentive to offset kilowatt-hours with DG.
- demand charge
Demand charges—The grid also must be prepared to provide adequate power at any given time to all customers. This creates capacity requirements for which the grid must contract. To allocate the costs for this, customers are often charged a fixed demand charge based on their highest historical power requirements. When DG can reliably reduce the peak power requirements, this provides a benefit that must be considered in economic calculations.
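A minimal sketch of that calculation (the load profile and price are illustrative assumptions): the charge keys off the single highest power draw, so reliably shaving that one peak with DG reduces the charge even if total energy use is unchanged.

```python
# Demand charge: billed on the highest power draw in the period.
# Profile and price are illustrative assumptions.
hourly_load_kw = [40, 38, 35, 50, 72, 95, 88, 60, 45, 42]  # sampled kW
demand_rate = 12.00                                        # $/kW-month

peak_kw = max(hourly_load_kw)
demand_charge = peak_kw * demand_rate
print(f"Peak demand: {peak_kw} kW, demand charge: ${demand_charge:.2f}")
# Peak 95 kW -> $1,140.00; shaving the peak to 80 kW would save $180.
```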
- investment tax credit (ITC)
Types of incentive mechanisms include:
■ Equipment buy-downs or rebates—Helping buy down the installed cost of the system through direct rebates or other tax incentives can reduce the LCOE of PV systems to the point where customers find them economic and compelling. These have taken the form of direct rebates to customers in Japan's New Sunshine Program in the 1990s and in the California Solar Initiative (CSI) program in the 2000s. The US government has for many years offered a federal investment tax credit (ITC) that functions in a similar way through the issuance of monetizable tax credits, and some states have offered similar credits.
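The arithmetic of such a buy-down is simple; the sketch below uses an illustrative credit rate, system cost, and lifetime output (assumptions, not figures from the text) to show how the effective capital cost, and with it a simplified LCOE, falls.

```python
# Effect of an investment tax credit (ITC) on effective system cost.
# Numbers are illustrative assumptions.
gross_cost = 20_000.0       # installed system cost, $
itc_rate = 0.30             # tax credit as a fraction of installed cost
lifetime_kwh = 150_000.0    # lifetime energy output, kWh (simplified)

net_cost = gross_cost * (1 - itc_rate)
# Simplified LCOE ignoring discounting and O&M, just to show the lever:
print(f"LCOE before: ${gross_cost / lifetime_kwh:.3f}/kWh, "
      f"after ITC: ${net_cost / lifetime_kwh:.3f}/kWh")
# $0.133/kWh -> $0.093/kWh
```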
- learning rate (LR)
The learning rate (LR) is defined as the percentage drop in the cost to produce the technology for each doubling of cumulative production.
- progress ratio (PR)
The progress ratio (PR) is defined as one minus the learning rate (PR = 1 – LR).
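Combining the two definitions gives the standard experience-curve cost function (conventional in the learning-curve literature, though the notation is not from the text), where C_0 and q_0 are the cost and cumulative production at a reference point and q is cumulative production:

```latex
C(q) = C_0 \left( \frac{q}{q_0} \right)^{b}, \qquad PR = 2^{b} = 1 - LR
```

Each doubling of cumulative production multiplies cost by 2^b = PR; for example, with LR = 20%, PR = 0.8 and cost falls to 80% of its previous level with every doubling.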
- product innovation
Product innovation is generally the ability to improve the performance of a device, often through R&D or design innovation, resulting in a productivity boost: more of the desired output from a given device. While this effectively reduces the materials and costs embedded in each unit of output, product innovation is best thought of as targeting the denominator, or output, of the standardized cost calculation.
- process innovation
Process innovation, by contrast, does not normally affect the performance of the device but instead drives down the costs of manufacturing it. While this can sometimes include modest adjustments to material specifications or input costs, it more often involves manufacturing process adjustments, reduction of the number of process steps, and scale economies in manufacturing. In this way, process innovation typically targets the cost elements of standardized costing; that is, the numerator.
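For PV the standardized cost is usually expressed in dollars per watt, which makes the numerator/denominator framing concrete (the decomposition is implied by the text rather than written out in it):

```latex
\$/\mathrm{W} = \frac{\text{manufacturing cost per module (\$)}}{\text{rated power per module (W)}}
```

Process innovation attacks the numerator (a cheaper module to make); product innovation attacks the denominator (more watts from the same module).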
- market shakeout
■ Market shakeout—A second nonlinear change in experience curves can arise when the experience curve is drawn using market prices for the technology rather than the underlying cost to manufacture it. In these situations, the balance of supply and demand can shift and, for periods of time, demand can outpace supply. This tends to bid up the market price of the device even as the underlying cost to manufacture it continues to fall through experience curve effects. When this happens, margins for producers expand as long as the price umbrella persists. Eventually, supply catches up to demand, often overshooting and creating overcapacity, and a market shakeout ensues until margins normalize and the price returns to its normal experience curve trajectory.
- parity
The point at which a technology becomes cost-competitive with the incumbent solution for a particular customer need in a market is called parity.
As with all competitive analysis, clearly identifying the customer need that is being met in the market requires understanding the fungibility, or substitutability, of one solution vs. another. However, when one formerly expensive technology falls in cost to parity with the current best solution in the marketplace, market dynamics have the potential to shift dramatically.
Figure 11.11 shows how the PV experience curve is moving toward a break-even point, or parity, with its direct alternatives. This point of parity can be established in any market with an emerging alternative and an industry incumbent, but for distributed PV it is specifically defined as grid parity. Grid parity is the point at which distributed PV falls to the same cost as the grid electricity it displaces.
- learning investment triangle
Another interesting question about parity arises from the realization that prior to parity, technologies tend to be uneconomic, and therefore not likely to be aggressively deployed or to easily achieve the resulting cost reductions that would enable them to reach parity. Looking again at Figure 11.11, the learning investment triangle defines the total excess costs above the market alternative that need to be incurred to see a technology reach the scale, and therefore the cost, to make that technology competitive in the marketplace.
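One way to formalize the triangle (a sketch in conventional notation, not the book's): with C(q) the experience-curve cost and P the price of the market alternative, the learning investment is the area between the two curves from today's cumulative production q_0 out to the parity volume q_p where C(q_p) = P:

```latex
\text{Learning investment} = \int_{q_0}^{q_p} \big( C(q) - P \big) \, dq
```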
- disruptive technology
If the emerging technology is absorbed into the existing infrastructure and its deployment is limited by the incumbent delivery architecture, the new technology is referred to as a sustaining technology. Such innovations may improve or optimize the incumbent business model and its economics but do not fundamentally change the system architecture.
Other times, however, the incumbent technology providers cannot change the price or respond effectively even as the emerging technology continues to get cheaper. Some emerging technologies can bypass the limiting features of the incumbent delivery architecture and offer wholly new and cost-effective solutions to the incumbent’s customers. These new offerings are referred to as disruptive technology and can cause substantial change to a system’s overall performance and character.
Clayton Christensen, a Harvard Business School professor and one of the leading thinkers on disruptive innovation, describes the characteristics that these innovations typically share. Figure 11.12 demonstrates some of these characteristics, including:
■ Their technology is not new—Disruptive technologies rarely start out as a profound technological breakthrough. They are not usually some incredible product innovation discovered in a lab or garage; instead, they represent existing technologies that have some current use or verified performance, without which they would have trouble gaining the necessary confidence of customers or investors.
■ They are initially more expensive—Disruptive technologies typically start out more expensive due to their lower scale of deployment and earlier stage of development.
■ They start out niche oriented—Because they are more expensive, these disruptive technologies will find their initial application in niche applications where the customer value proposition is much higher or the incumbent solution is unavailable.
■ They build scale and drive trust as a mass solution—As customers get comfortable with these technologies in the initial applications, increased deployment can drive down costs (through experience curve effects) while increased visibility can improve the risk profile for customers in larger and more price-sensitive markets.
■ They reach parity—Lower costs through higher volumes can trigger a positive feedback loop until price thresholds are crossed and the formerly more expensive niche technology becomes the best solution for mass-market application.
- isocost curve
Examining a range of possible system prices against a range of possible electricity prices that DG will compete with allows the construction of an isocost curve like the one in Figure 11.14. An isocost holds the cost of the system constant and shows the relationship between the amount of insolation in a particular place and the resulting LCOE. This relationship is clearly negative: the fixed cost of the system must be amortized over fewer kilowatt-hours in lower-insolation jurisdictions, resulting in a comparatively higher LCOE.
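A minimal sketch of how one isocost line is computed (a simplified LCOE with no discounting or O&M; all parameters are illustrative assumptions): holding the installed cost fixed and varying insolation traces out the curve, and the inverse relationship appears directly.

```python
# One isocost line: fixed system price, LCOE as a function of insolation.
# Simplified LCOE (no discounting or O&M); all numbers are assumptions.
system_cost_per_kw = 2000.0   # installed cost, $/kW (held constant -> "isocost")
lifetime_years = 25
performance_ratio = 0.80

for peak_sun_hours in (3.0, 4.0, 5.0, 6.0):
    lifetime_kwh_per_kw = peak_sun_hours * 365 * lifetime_years * performance_ratio
    lcoe = system_cost_per_kw / lifetime_kwh_per_kw
    print(f"{peak_sun_hours} sun-hours/day -> LCOE ${lcoe:.3f}/kWh")
# Lower insolation spreads the same fixed cost over fewer kWh, raising LCOE.
```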
- duck curve
Over time, however, the additional generation during the middle part of the day is changing the load profile observed by utility operators. Figure 11.17 shows a graph—colloquially called the duck curve due to its shape—that demonstrates how quickly these effects are occurring as midday solar generation reduces the net load the grid needs to supply. Even as peak load grows in the early evening, midday power needs are falling dramatically. This creates much more dynamic generation requirements for the rest of the generation base as load ramps up and down quickly. It also exposes grid operators to rising risks from a growing share of intermittent generation.
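The net-load arithmetic behind the curve is simple (the hourly profiles below are illustrative assumptions, not the data in Figure 11.17): subtracting midday solar output from gross load carves out the duck's belly, and the evening ramp is the hour-over-hour change in that net load.

```python
# Net load and evening ramp behind the "duck curve".
# Hourly profiles (MW) are illustrative assumptions.
hours      = [10,   12,   14,   16,   18,   20]
gross_load = [20.0, 21.0, 22.0, 24.0, 28.0, 27.0]
solar_gen  = [6.0,  9.0,  8.0,  4.0,  0.5,  0.0]

net_load = [g - s for g, s in zip(gross_load, solar_gen)]
ramps = [b - a for a, b in zip(net_load, net_load[1:])]
for h, n in zip(hours, net_load):
    print(f"{h:02d}:00  net load {n:4.1f} MW")  # the belly appears at midday
print("2-hour ramps (MW):", ramps)              # steep late-afternoon climb
```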
- microgrids
As the grid evolves, more and more operational capacity is being developed and placed near the customer. Many features of grid management discussed in Chapter 9 on demand management are increasingly important for controlling load on the customer's side of the meter, and the inclusion of DG and distributed storage is making these customer-sited solutions more complete and cost-effective. These solutions allow specific customer needs to be targeted more precisely, and their costs come down as deployment of the technologies scales up.
Once the ability to generate electricity, store it, and manage both supply and demand is deployed locally, however, it is entirely possible to combine these features into smaller integrated microgrids that can operate independently from the grid for some period of time. One motivation may be to save overall costs through reduced energy requirements and lower payments to the traditional grid provider. Another may be to ensure that a local electricity supply is available even when the grid is not, a concern in areas of frequent grid outages or weather damage, particularly for vital infrastructure such as hospitals.
Microgrids must manage all of the local features of supply and demand and balance those constantly in the same way the centralized grid would. This requires a constant supply of electricity that can be met through a combination of renewable DG, fuel-based DG, and storage to buffer the system. How long a microgrid can operate independently from the grid is determined by how much storage buffer is available and how easily fuel can be obtained for fuel-based generation. Storage tanks for liquid fuels or even connection to a natural gas delivery infrastructure can provide long run times for microgrids as required by local circumstances.
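A back-of-the-envelope sketch of that run-time question (all quantities are illustrative assumptions): hours of autonomy are roughly the usable storage buffer divided by the net load remaining after local generation.

```python
# Rough microgrid autonomy: how long can it island from the grid?
# All inputs are illustrative assumptions.
usable_storage_kwh = 400.0   # battery buffer available for discharge
avg_load_kw = 60.0           # average site load while islanded
avg_local_gen_kw = 35.0      # renewable DG + fuel-based DG output

net_draw_kw = avg_load_kw - avg_local_gen_kw
hours = usable_storage_kwh / net_draw_kw if net_draw_kw > 0 else float("inf")
print(f"Estimated autonomy: {hours:.1f} hours")  # 16.0 hours
```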
- grid defection
Various combinations of electricity generation and storage options through microgrids are expected to increasingly enable generation of electricity more cheaply at the load while retaining the high reliability that grid customers expect. As more people adopt microgrid solutions and gain comfort with their technical and economic features, they may begin to question the need to simultaneously maintain both their local electricity supply and management through a microgrid and their connection to the traditional grid. Under some circumstances, customers may choose grid defection, or severing their connection to the traditional grid altogether.
The simplest technical mechanism of grid defection may be the combination of distributed solar generation and storage, as is already done in off-grid applications today. Under the distributed solar-and-storage model, there are different parity and break-even points, but the grid defection parity point has already been reached in sunny and remote areas and will occur in others as well.18 However, microgrids that include fuel-based generators may eventually be favored for their ability to store substantially larger amounts of energy and thus provide greater service reliability. Customers with both distributed renewable generation options and a connection to a natural gas distribution network may sever their connection to the electric grid entirely. The economic choice to do so is another matter and will be driven by local resource endowments, costs, and price volatility.
However, falling costs of DG components vs. grid alternatives will likely make grid defection more economically competitive in the future. Grid parity that triggers customer defection would strand costly utility assets that have economic lives planned well beyond the time it would take to reach parity, and the resulting rising electricity rates from underutilized grid assets would accelerate incentives for more customers to defect—referred to as an economic death spiral for utilities.19 As a multitude of microgrids begin allowing circumvention of the grid, the traditional centralized grid architecture will continue to be undermined, and utilities and their regulators may be forced to rethink the very nature of utilities’ relationships to their customers and to society.
- learning curve / experience curve
For many products (and services too, but for ease of exposition, this section will refer just to products), there is a clear negative relationship between the cost to produce something and the amount produced. Essentially, the more time and effort people and companies invest in producing certain things, the better they get at it and the cheaper those things become.
When applied to a single person or a firm, this effect is sometimes referred to as a learning curve, which shows how much more efficiently an operation is executed the more times it is performed by a single operator. But the concept of learning can expand beyond the single operator or process and apply to whole industries. Aggregate learning (or “experience”) in industries occurs through many complex and diffuse mechanisms, and it tends to benefit not only the firm or person making the effort but also the whole industry as new methods are developed and copied and as competition rewards innovators with market share growth.
The analytical tool for understanding this broader march of product and process innovation across many firms (and even nations) is an experience curve. It is derived by plotting the observed market price or cost data for a type of product on the y-axis against the cumulative volume produced for that product by all manufacturers on the x-axis (and using a logarithmic scale for both to compensate for the typically exponential nature of the growth). Figure 11.7 gives an example of this analysis.
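A minimal sketch of how such a curve is fit in practice (the data points below are fabricated purely for illustration): regress log cost on log cumulative volume, then convert the slope into the progress ratio and learning rate defined above.

```python
import math

# Fit an experience curve: log(cost) vs. log(cumulative volume).
# Data points are purely illustrative, not real PV figures.
cum_volume = [1, 2, 4, 8, 16, 32]            # cumulative GW produced
cost = [4.00, 3.20, 2.56, 2.05, 1.64, 1.31]  # $/W at each volume

xs = [math.log(v) for v in cum_volume]
ys = [math.log(c) for c in cost]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))

progress_ratio = 2 ** slope          # cost multiplier per doubling
learning_rate = 1 - progress_ratio
print(f"PR = {progress_ratio:.2f}, LR = {learning_rate:.1%}")  # ~0.80, ~20%
```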