Making Sense of Smart Grid Blog

Over the past five years Portland State University has offered a unique class on the emerging Smart Grid: what it will mean for Oregon energy consumers, how it will affect utilities and regulators, and what markets, services and products it might stimulate. The course is the first of its kind in the country, tracking this emerging trend and the new technologies behind it in real time. After taking a year off in 2012, the course is back for winter and spring terms this year. PSU decided to offer this experimental course after being approached by some visionaries at Portland General Electric. This year the course continues to be sponsored by PGE, as well as Intel and Veris Industries. To accommodate the rapid evolution of this new discipline, Jeff Hammarlund designed the course to integrate current developments in the Northwest implementation of Smart Grid even as they unfold! Joining Jeff on the faculty are James Mater, Michael Jung, Mark Osborn and Lawrence Beaty. The class convenes every Thursday in the Urban Center Building from 6:30 to 9:40 PM.
  • 08 Feb 2011 11:00 AM | Anonymous

    PSU’s third annual Smart Grid class is now about halfway through the syllabus, and there still seems so much to consider about this evolving technology and energy-delivery approach. The more we delve into Smart Grid, the more we realize how vast the implications are.

    After nearly a dozen classes I am also sensing that there is almost too much information being presented. At times the sheer volume of the in-class material and the readings is overwhelming, and this intensity is exacerbated by a disjointed progression of speakers, whose availability trumps a more logical sequence. That said, let’s return to covering the material offered by PSU’s PA 510 – their exemplary course on the Smart Grid.


    This fifth session of the class hosted three distinct presentations. The first was Conrad Eustis’ presentation on the regulatory trade-offs posed by Smart Grid technology. This was followed by James Mater’s presentation on the interoperability challenges inherent in the technology layers used to link the elements of the Smart Grid. Finally, Ken Nichols presented his perspective on energy trading in the western part of the continent.


    Trade-offs in the Smart Grid

    - Conrad Eustis

    The introduction of the Smart Grid into a regulated electricity distribution system that hasn’t changed much over the last 100 years poses some fundamental questions about investment costs, social equity issues, environmental effects and economic impacts.

    The regulators are grappling with questions like:

    • Should low-income electricity users be subsidized?
    • Who should pay for these subsidies: ratepayers or taxpayers?
    • At what cost should we pursue one policy trajectory over another?
    • How expensive can we make electricity to preserve the environment?
    • To what extent are regional assets accessible to the nation?

    These are some of the knotty policy conundrums that regulators are grappling with. According to Conrad, decisions by electric industry regulators are mostly based on economics and turn on cost justifications – either direct (“hard”) costs or indirect (“soft”) costs.

    The delivery of electricity entails heavy investment in physical infrastructure to deliver energy at a capacity sufficient to support society’s needs. Implicit in these decisions is the trade-off between capital investment and operations & maintenance costs: the more we invest in expensive capital equipment, the more we can reduce “O&M” costs. That’s one of the trade-offs. Others include how much we should invest to create jobs, what investment is justified to preserve environmental health, to what degree industrial or commercial users should subsidize residential users, whether urban users should help pay for rural users, and what cost is acceptable to support an infrastructure that doesn’t compromise our national security.

    To the extent that laws require certain investments, no justifications are needed by the regulators. But there remains much “art” in how social and environmental costs are included in regulators’ calculations. According to Conrad, many policy decisions are influenced by emotion or the “movement du jour”.

    But fundamentally, regulators’ decisions are based on serving the public interest at a fair and reasonable price. In many cases the dynamics of a free market are considered the more efficient way to allocate resources, but in a regulated industry we need to ask whether there isn’t a cheaper way to achieve the desired end. That’s the whole point of having a regulated industry in the first place.

    In general, regulators are concerned with managing the costs of delivering electricity – from a financial perspective, from a social perspective, and increasingly from an environmental perspective as well. Their mandates are usually circumscribed by the concept of delivering electricity at fair and reasonable prices, supported by prudent investments.

    Cost justifications can be successfully argued if it can be shown that the alternative energy source is less expensive than the conventional source, that its capacity is cheaper, or that its O&M costs are lower. Indirect cost arguments might claim that the expense was justified by the job creation the investment triggered. Increasingly, environmental costs are also included – if they have not already been codified into environmental regulations that must be heeded. But as suggested above, the indirect costs of social and environmental impacts can often be influenced by emotional appeals and popular concerns.

    But here, Conrad introduces an interesting caveat. Suppose you built a super energy-efficient house with hyper-effective solar panels and ultra-efficient appliances. And suppose further, with the help of an overly optimistic nature, that this house achieved and even exceeded net-zero energy status – resulting in a net “export” of energy to the grid. Would the utility have to consider this a cheaper energy source than conventional generation? The answer is no, because of the cross-subsidies that understate the real costs of providing retail electricity. Why is this so? Because utilities usually prefer to book some fixed costs (poles and wires) as variable costs in order to artificially lower retail electricity rates for residential users. Thus, when we add these variable distribution costs to the exported power (from your “net-negative” home), the cost is much higher than the utility’s cost of generation.

    Fundamental to this discussion of the relative costs of power generation is a definition of the efficiency of energy conversion. The second law of thermodynamics established the concept of entropy, which explains why converting energy from one form to another always loses some of it. Lord Kelvin stated it as follows: “No process is possible in which the sole result is the absorption of heat from a reservoir and its complete conversion into work.” In other words, converting heat to power entails some loss, and the degree to which energy survives the conversion is the efficiency of the process.

    Most conventional generation plants (renewable energy sources excepted) convert heat into electricity. The efficiency of this process is captured by the heat rate, which measures the heat (expressed in Btu) required to generate 1 kWh.

    Heat Rate = Btu of heat required to generate 1 kWh of electricity; since 1 kWh is equivalent to 3,412 Btu, the conversion efficiency is η = 3,412 ÷ Heat Rate.

    The heat rate gives us a convenient way to compare the efficiency of various ways of converting energy into power. Conrad then gave a list of comparable conversion processes to illustrate their relative efficiencies.

    • Combined Cycle Combustion Turbine: Heat Rate ~7,000; η = 49%
    • New Coal Plant: Heat Rate 9,800; η = 35%
    • Low-Speed Diesel on Liquid Fuel: Heat Rate 8,540; η = 40%
    • Nuclear Plant: Heat Rate 11,000; η = 31%
    • Simple Combustion Turbine: Heat Rate 11,400; η = 30%
    • High-Speed Diesel: Heat Rate 12,200; η = 28%
    • Automotive Diesel: Heat Rate 14,200; η = 24%
    • Typical 4-cylinder engine: Heat Rate 18,000; η = 19%
    • PV Panel: η = 6 to 35% of energy in incident sunlight
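    The relationship between heat rate and efficiency is simple arithmetic, since 1 kWh of electricity is equivalent to 3,412 Btu. A short sketch (using the class’s rounded heat-rate figures) reproduces the efficiencies above:

```python
# Sketch: converting heat rate (Btu per kWh) to thermal efficiency.
# Uses the conversion 1 kWh = 3,412 Btu; heat rates are the rounded
# figures from the lecture list above.
BTU_PER_KWH = 3412

def efficiency(heat_rate_btu_per_kwh):
    """Fraction of input heat that ends up as electricity."""
    return BTU_PER_KWH / heat_rate_btu_per_kwh

plants = {
    "Combined Cycle CT": 7000,
    "New Coal Plant": 9800,
    "Nuclear Plant": 11000,
    "Simple Cycle CT": 11400,
}

for name, heat_rate in plants.items():
    print(f"{name}: eta = {efficiency(heat_rate):.0%}")
```

    Running this gives roughly 49%, 35%, 31% and 30%, matching Conrad’s list.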

    Knowing the energy conversion rate is not enough to evaluate the practical operating costs of a plant. To do this we need to look at the levelized plant operating costs, which take into account the cost of fuel and the maintenance costs for the plant – as well as the fraction of time the plant will actually be in operation.

    First we determine the levelized annual revenue requirement (“LAR”) of the plant, expressed as a fraction of its capital cost per year. Multiplying the LAR by the plant’s cost per kW of capacity gives an annual fixed cost per kW. We then divide by the plant’s actual utilization – 8,760 hours per year times the capacity factor, divided by 1,000 to convert dollars to mills – which yields the fixed cost of operating the plant in mills per kWh. This capital cost (based on the “overnight installation cost”) represents the cost of building, financing and operating the plant, spread over each kWh it actually generates.

    To complete the cost calculation we add the fuel component: the heat rate times the prevailing fuel cost, again expressed in mills per kWh. The sum of the construction-and-maintenance component and this fuel component gives the levelized plant operating cost.
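    A minimal sketch of that arithmetic, using illustrative input numbers (assumed for the example, not PGE’s actual figures), might look like this:

```python
# Sketch of the levelized-cost arithmetic described above.
# All input numbers below are illustrative assumptions.
HOURS_PER_YEAR = 8760

def levelized_cost_mills_per_kwh(overnight_cost_per_kw, lar,
                                 capacity_factor,
                                 heat_rate_btu_per_kwh,
                                 fuel_price_per_mmbtu):
    # Fixed component: annualized capital cost spread over the kWh
    # actually generated (1 dollar = 1,000 mills).
    annual_fixed = lar * overnight_cost_per_kw          # $/kW-yr
    kwh_per_kw = HOURS_PER_YEAR * capacity_factor       # kWh/kW-yr
    fixed = annual_fixed / kwh_per_kw * 1000            # mills/kWh
    # Fuel component: heat needed per kWh times the fuel price.
    fuel = heat_rate_btu_per_kwh / 1e6 * fuel_price_per_mmbtu * 1000
    return fixed + fuel

# Illustrative simple-cycle CT at moderate utilization:
print(levelized_cost_mills_per_kwh(
    overnight_cost_per_kw=700,   # assumed $/kW
    lar=0.12,                    # assumed levelized annual rate
    capacity_factor=0.30,
    heat_rate_btu_per_kwh=11400,
    fuel_price_per_mmbtu=5.0))
```

    With these assumed inputs the result is roughly 89 mills (about 9 cents) per kWh, most of it fuel.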

    Conrad then presented several other examples of levelized costs for various projects: a wind farm (8.5 cents/kWh), a 3 kW home PV installation (9.5 cents/kWh), and a D-cell battery ($16/kWh). The simple cycle combustion turbine is the lowest-cost plant that PGE can build. This is an important example: even as the cheapest resource they can build, if the plant is used to meet peak load for only 87 hours per year, the energy costs $1.87 per kWh. But what matters more is the fixed cost per kW. This is the “bogie” of the demand response program: if you’re trying to beat the cost of building a simple cycle CT plant, the cost of the energy is no longer the relevant measure. What’s relevant is the cost to displace one kW of the existing capacity.

    He also provided the calculation for a DR program.

    One of the important points in this discussion was that the variable (fuel) cost calculation makes a dramatic difference in setting the overall cost. Most capital-intensive plants are very sensitive to part-time operation: even though the fixed costs stay the same, reduced operating time materially raises the levelized cost. This is a challenge for independent power producers, which is why they rely on fixed contracts – otherwise the risk is far too high. For peaking plants that may run only about 1% of the time, the cost of running the plant rises significantly: most of the time the plant sits idle, but the financing and construction costs must still be borne.
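    The sensitivity to utilization falls straight out of the formula: the capital component scales as one over the capacity factor. A quick sweep with assumed numbers shows the effect:

```python
# Sketch of why a peaker's levelized cost explodes at low utilization:
# the fixed (capital) cost per kWh scales as 1 / capacity_factor.
# The $700/kW cost and 12% levelized annual rate are assumptions.
HOURS_PER_YEAR = 8760

def fixed_cost_per_kwh(overnight_cost_per_kw, lar, capacity_factor):
    """Annualized capital cost in $/kWh of actual generation."""
    return lar * overnight_cost_per_kw / (HOURS_PER_YEAR * capacity_factor)

for cf in (0.01, 0.10, 0.50, 0.90):
    cost = fixed_cost_per_kwh(700, 0.12, cf)
    print(f"capacity factor {cf:>4.0%}: ${cost:.2f}/kWh")
```

    At a 1% capacity factor (roughly the 87 hours per year mentioned above) the fixed component alone approaches a dollar per kWh under these assumptions, which is the right order of magnitude for Conrad’s $1.87 figure once fuel and other costs are added.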

    With respect to PV panels, conversion efficiency matters because roof space is limited. Efficiency also makes a huge difference in the amount of land a large installation must purchase, and a larger footprint increases the amount of wiring required – a significant cost.

    In response to a question about the German subsidies for PV installations, which eventually became very widespread, Conrad explained that feed-in rates as high as 40 cents per kWh (versus a 10-cent base rate) were effectively pushed onto all base-rate users. That meant a substantial subsidy had to be shared among all the other users. As installations multiplied, these expanding subsidies were spread across a relatively smaller group of payers, which eventually forced a reduction in the subsidies.
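    The mechanics can be sketched with back-of-the-envelope numbers (all of them assumed for illustration, not actual German figures):

```python
# Back-of-the-envelope version of the feed-in-tariff dynamic: the
# premium over the base rate is recovered from all other ratepayers,
# so the surcharge grows as PV output grows. All numbers are
# illustrative assumptions.
feed_in_tariff = 0.40        # $/kWh paid to PV owners (assumed)
base_rate = 0.10             # $/kWh base retail rate (assumed)
pv_exports_kwh = 1_000_000   # annual PV energy bought at the tariff
other_load_kwh = 50_000_000  # energy billed to non-PV ratepayers

subsidy = (feed_in_tariff - base_rate) * pv_exports_kwh  # $ to recover
surcharge = subsidy / other_load_kwh                     # $/kWh added
print(f"surcharge: {surcharge * 100:.2f} cents/kWh on everyone else")
```

    Double the PV exports, or halve the non-PV load as more customers install panels, and the surcharge per remaining ratepayer climbs accordingly – the squeeze Conrad described.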


    Smart Grid Interoperability

    - James Mater

    James Mater is one of the founders of Smart Grid Oregon – one of the nation’s first industry associations focused on promoting the advancement of Smart Grid technologies and of the companies that produce goods and services to support their successful implementation. He is also the co-founder, Director and Smart Grid Evangelist of QualityLogic, a firm developing test and certification tools for the technology market and currently expanding into verification of Smart Grid interoperability requirements.

    James is also part of the teaching staff for this seminal Smart Grid course. This was his first contribution to the course.

    His objective with this initial presentation was to help the students gain an appreciation for the challenges of achieving “plug and play” interoperability between smart grid components and applications. With the lecture he planned to identify the key organizations working on smart grid standards and the status of current efforts to achieve a national consensus on those standards.

    He started his presentation by quoting from George Arnold’s testimony to Congress this past July.

    “The U.S. grid, which is operated by over 3100 electric utilities using equipment and systems from hundreds of suppliers, has historically not had much emphasis on standardization and thus incorporates many proprietary interfaces and technologies that result in the equivalents of stand-alone silos.
    “Transforming this infrastructure into an interoperable system capable of supporting the nation’s vision of extensive distributed and renewable resources, energy efficiency, improved reliability and electric transportation may well be described by future generations as the first great engineering achievement of the 21st century.”

    In short, George Arnold was pointing out that efforts to achieve any sort of standardization are encumbered by what are effectively 3,100 “silos,” each intent on seeking its own home-grown solutions. Achieving any standardization would be a huge task, George Arnold warned. He ought to know: Arnold came out of the telecom business, where he helped to develop the WiMAX standard. The telecommunications industry, however, has only 3 standards organizations; Smart Grid has 15.

    NIST echoed this concern when in their January 2010 Framework and Roadmap for the Smart Grid, they declared that the “lack of standards may also impede future innovation and the realization of promising applications“.

    Yet the opportunity was enormous. In the same report NIST forecast that the, “U.S. market for Smart Grid-related equipment, devices, information and communication technologies, and other hardware, software, and services will double between 2009 and 2014—to nearly $43 billion…the global market is projected to grow to more than $171 billion, an increase of almost 150 percent.”

    In an observation that may seem out of place in the highly regulated electricity market, NIST went so far as to declare that “standards enable economies of scale and scope that help to create competitive markets in which vendors compete on the basis of a combination of price and quality“.

    James Mater then reviewed the NIST conceptual model. In the terminology being used for Smart Grid discussions, each of these seven “cloud” categories is called a “domain.” Within any particular domain, there may be a number of different “stakeholders”. The framework being used by NIST to coordinate this effort identifies 22 stakeholder groups, from “appliance and consumer electronics providers” and “municipal electric utility companies” to “standards-development organizations” and “state and local regulators.”

    He also went on to consider how that model looked when overlaid over the existing structure of a utility, which had already moved towards some degree of proprietary automation.

    These slides, already presented by Conrad Eustis, showed the difficulty of trying to achieve economies of scale on an incremental basis. Interoperability was achieved, but in a proprietary manner that denied the benefits of economies of scale and innovation that accrue from open standards and competitive markets. Moreover, the challenge is even more complex when we consider the whole panoply of systems used in the context of a major utility. Each of these systems, whether management software or operational tools, is supported by a proprietary enterprise service bus that communicates bilaterally, but not universally. This is the challenge facing the designers of the interoperability standards, which have to connect many stand-alone systems.

    This is the task that has been assigned to NIST – mapping the standards for Smart Grid.

    Who is active in developing the Smart Grid standards?

    NIST has published 25 of the most important standards. OpenADR, which facilitates DR in commercial buildings, has just been published as a standard. NIST is now identifying the “critical” standards in every domain. So far there are about 33 standards that enjoy a degree of consensus, with another 70 under consideration. The standards for security are the last to be promulgated, in part because they are a foil for the naysayers. And to be certain, security for the Smart Grid is a serious challenge that needs a solution at least as reliable as we expect when flying aircraft or banking.

    So who are the key players involved in developing the Smart Grid standards?

    • NIST
    • Smart Grid Interoperability Panel (SGIP)
    • GridWise Architecture Council (GWAC)
    • UCA International
    • GridWise Alliance
    • EPRI/EEI
    • ZigBee, Wi-Fi – low-power radio inside buildings
    • IEEE PES/2030 – engineering institute
    • SDOs
    • ISO – international standards; technology standards
    • IEC – Geneva-based; active in generation & transmission
    • ANSI – working on meters
    • NAESB – industry group
    • NRECA
    • State legislatures
    • Federal/state regulators
    • FERC
    • NERC
    • State PUCs
    • International standards bodies
    • OASIS – good at internet standards
    • ASHRAE – heating & HVAC
    • BACnet – standard for commercial buildings
    • OPC

    The most active of these groups include the following:

    The GridWise Architecture Council:

    The GridWise Architecture Council (GWAC) is DOE-sponsored and has 13 council members drawn from different parts of the domain. Under the Energy Independence and Security Act (EISA) of 2007, the National Institute of Standards and Technology (NIST) has “primary responsibility to coordinate development of a framework that includes protocols and model standards for information management to achieve interoperability of smart grid devices and systems…” EISA requires that NIST consult with GWAC to define the standards and set up investment grants. The GWAC also sponsors 3 annual conferences:

    • Connectivity Week
    • GridWeek
    • Interop

    The GridWise Architecture Council has enormous influence. It is developing the context-setting framework and designed the GWAC stack, which is adapted from the highly successful OSI layered stack that helped stimulate innovation in the computer industry.

    To carry out its EISA-assigned responsibilities, NIST devised a three-phase plan to rapidly establish an initial set of standards. In April 2009, the new office launched the  plan to expedite development and promote widespread adoption of Smart Grid interoperability standards:

    • Engage stakeholders in a participatory public process to identify applicable standards, gaps in currently available standards, and priorities for new standardization activities.
    • Establish a formal private-public partnership to drive longer-term progress.
    • Develop and implement a framework for testing and certification.

    Smart Grid Interoperability Panel (SGIP):

    The Smart Grid Interoperability Panel (SGIP) is the way NIST interacts with industry. Some of the SGIP players came from GWAC. They are working on Smart Grid standards, developing priority action plans, and designing the testing and certification standards. SGIP developed the Smart Grid conceptual model (see the earlier graphics with the domains shown as clouds) and is working on Smart Grid cyber-security solutions, as well as the interoperability knowledge base (IKB). Importantly, SGIP also runs a TWiki for technical collaboration – an open site where the Smart Grid (SG) community works with NIST in developing this framework. The board of the SGIP is chosen from across industry and government.

    SG Architectural Committee: Semantic Model work

    A canonical data model (CDM) is a semantic model chosen as a unifying model that will govern a collection of data specifications.
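    As a toy illustration of the idea (the field names and vendor formats below are hypothetical): each vendor format gets one adapter to and from the canonical model, so consumers code against a single schema instead of one per vendor:

```python
# Sketch of what a canonical data model buys you: each vendor format
# maps to one shared model instead of to every other format.
# All field names here are hypothetical.
def from_vendor_a(msg):
    # Vendor A reports power in watts under "pwr".
    return {"meter_id": msg["id"], "kw": msg["pwr"] / 1000}

def from_vendor_b(msg):
    # Vendor B already reports kW, under different keys.
    return {"meter_id": msg["meter"], "kw": msg["demand_kw"]}

canonical = [
    from_vendor_a({"id": "A-17", "pwr": 4500}),
    from_vendor_b({"meter": "B-03", "demand_kw": 2.2}),
]
# Any consumer now codes against one schema: {"meter_id", "kw"}.
print(canonical)
```

    With N vendor formats, adapters to a canonical model need N translations rather than the N×(N−1) pairwise mappings that bilateral integration implies.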


    NERC Critical Infrastructure Protection (CIP) standards require compliance every 6 months, with penalties of up to $1 million per violation per day. Industry was shell-shocked by these requirements and is very anxious about any standards that might conflict with the CIP. The CIP is actually a whole family of standards that essentially require you to document everything you do. These CIP standards were originally devised and implemented to prevent big blackouts – so they’re both rigorous and heavily enforced.


    Smart Power Pricing, Space and Time

    -Ken Nichols

    Electricity futures are the most volatile of commodity futures, because electricity can’t be stored. Futures prices are based on quantity, location and time. This mirrors futures contracts in other commodities, such as orange juice, for which a typical futures contract might specify:

    Quantity: 15,000 lb of Frozen Concentrate OJ

    Location: exchange-certified warehouses in California and Florida

    Time: 1st business day of the month

    For natural gas, the primary distribution point is known as “Henry Hub,” located in Louisiana. Other locations can be specified, though the price of natural gas delivered elsewhere will vary because of constraints and the cost of transportation. This variance is called the “basis price”.
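    The basis idea can be sketched in a few lines. The hub price and the basis differentials below are assumed for illustration, not actual quotes:

```python
# Sketch of basis pricing: the delivered gas price at a non-hub point
# is the Henry Hub price plus a location-specific basis differential
# (which can be negative). All numbers are illustrative assumptions.
henry_hub = 4.00          # $/MMBtu at the hub (assumed)
basis = {                 # assumed basis differentials, $/MMBtu
    "AECO (Alberta)": -0.60,
    "Malin (Oregon)": -0.25,
    "SoCal Border": 0.15,
}
for point, b in basis.items():
    print(f"{point}: ${henry_hub + b:.2f}/MMBtu")
```

    A point downstream of transport constraints trades above the hub; a point stranded behind abundant supply trades below it.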

    Apparently there are two variants of traded energy contracts: financial instruments, referred to as “space contracts,” and “physical contracts.” Physical contracts are traded bilaterally and are more likely to be delivered. Some space contracts also offer a conversion price that allows buyers to secure a theoretical amount of energy and later convert the contract into a physical contract for actual delivery if the energy is ultimately needed.

    The NERC interconnections:

    The North American Electric Reliability Corporation’s (NERC) mission is to ensure the reliability of the North American bulk power system. NERC is the electric reliability organization certified by the Federal Energy Regulatory Commission to establish and enforce reliability standards for the bulk-power system. NERC also oversees the regional entities whose large transmission networks span North America.

    NERC sets rules for regional system operation and reliability, including reserve margin requirements and transmission scheduling.

    The three interconnections that span North America are each a contiguous AC system, linked to one another by small DC interties. In particular, Tres Amigas, a proposed superstation just outside ERCOT’s border in eastern New Mexico, would provide limited connectivity among ERCOT, the Western Electricity Coordinating Council (WECC) and the Southwest Power Pool (SPP), which is part of the Eastern Interconnection.

    The major transmission pools can be further distinguished as Independent System Operators (ISOs) and Regional Transmission Organizations (RTOs). Both kinds of organization are interested in improving the quality of the information supporting market transactions and in consolidating the operations of the multiple owners of transmission. By driving down the physical barriers between the regional electricity pools, NERC is encouraging increased integration of a national electricity market. For that reason it has not been fully embraced by Pacific Northwest energy providers, who enjoy a substantial energy price advantage thanks to the region’s hydroelectric resources.



    Non-RTO transmission organizations:


    Ken Nichols discussed the formation of the RTOs and ISOs. He explained that, “the PJM, New England and New York ISO’s were established on the platform of existing tight power pools. It appears that the principal motivation for creating ISO’s in these situations was FERC Order No. 888 that required that there be a single systemwide transmission tariff for tight pools”. In contrast, Ken asserted that “the establishment of the California ISO and the ERCOT ISO was the direct result of mandates by state governments”. The Midwest ISO is unique; it was neither required by government nor based on an existing institution. Apparently, “two states in the region required utilities in their states to participate in either a Commission-approved ISO (which occurred in Illinois and Wisconsin), or sell their transmission assets to an independent transmission company that would operate under a regional ISO (applied in Wisconsin).”

    The ISOs include Texas (ERCOT), California, Alberta and the New York ISO (NYISO). The western half of the United States is dominated by the Western Electricity Coordinating Council (WECC). California has an ISO; however, SMUD, TID and LADWP are independent utilities operating inside the California ISO’s footprint. The California ISO and the AESO are the only ISOs in the western market. The rest of WECC is made up of “balancing authorities,” each of which controls what comes into and out of its area; in this region BPA plays that role. Balancing authorities are responsible for balancing schedules, managing transmission, and keeping the electricity at 60 Hz.

    PJM is the darling of the Smart Grid promoters. It is an RTO (facilitating transmission and markets across the several states in which it operates).

    For energy traders operating in the western market there are several distinct locations on which prices are predicated. These include:

    • COB (California-Oregon border)
    • Mid-Columbia (Mid-C)
    • Four Corners
    • Palo Verde
    • South of Path 15
    • North of Path 15

    Energy pricing is based on the Platts month-ahead price. It is divided into “on peak” (6 am to 10 pm) and “off peak” (10 pm to 6 am). Trading is usually done in blocks of 25 megawatts.

    The energy market handles financial (futures) contracts, defined by specific delivery locations, delivery times, and quantities. The trades are cleared through an exchange or clearinghouse.

    An example would be: “Mid-C, off peak, 25 MW block, March delivery”.
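    A block like that is easy to size. The sketch below uses the 10 pm to 6 am off-peak window from above and an assumed price (real hub conventions also treat Sundays and holidays as off-peak, which this simplification ignores):

```python
# Sketch: sizing and valuing a standard western off-peak power block.
# The price is an assumed figure, not an actual Mid-C quote.
block_mw = 25
off_peak_hours_per_day = 8   # 10pm-6am, simplified
days = 31                    # March delivery
price = 22.50                # assumed $/MWh at Mid-C

energy_mwh = block_mw * off_peak_hours_per_day * days
value = energy_mwh * price
print(f"{energy_mwh} MWh, contract value ${value:,.0f}")
```

    The quantity, location and time terms of the quoted contract map directly onto the three variables above – the same triad Ken described for orange juice futures.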

    Physical contracts can carry the same terms as financial ones, but bilateral trades (mainly financial/speculative) permit more flexibility in the terms.

    The proposed WECC Energy Imbalance Market  (EIM):

    The proposed EIM is a sub-hourly, real-time energy market providing centralized, automated, generation dispatch over a wide area. However, unlike an RTO, it would not replace the current bilateral energy market, but would instead supplement the bilateral market with real-time balancing. The automation of the EIM would allow for a more efficient dispatch of the system by providing access to balancing services from generation resources located throughout the EIM footprint and optimizing the overall dispatch, while incorporating real-time generation capabilities, transmission constraints, and pricing.

    While the EIM market design has many similarities to those administered by ISOs and RTOs, this proposal does not include implementing an RTO in the Western Interconnection. The EIM could utilize tools and algorithms that have been successfully implemented in other centralized markets, but an EIM would not include a consolidated regional tariff for basic transmission service (e.g. network or point-to-point). But the EIM would use a coordination tariff to address provision of generation and load energy imbalance, replacing some Ancillary Service Schedules of participating transmission provider tariffs.
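    The “more efficient dispatch” claim boils down to a merit-order idea: pool the balancing offers from the whole footprint and take the cheapest first, instead of each balancing authority covering its own imbalance locally. A toy sketch (the offers and the imbalance are hypothetical, and a real EIM dispatch also honors transmission constraints, which this ignores):

```python
# Sketch of centralized merit-order dispatch across an EIM footprint:
# rank available balancing offers by price and dispatch the cheapest
# first. Resources, quantities and prices are hypothetical.
offers = [  # (resource, available MW, offer $/MWh)
    ("BA-1 hydro", 40, 12.0),
    ("BA-2 gas CT", 60, 45.0),
    ("BA-3 wind", 25, 5.0),
]

def dispatch(offers, imbalance_mw):
    """Return (resource, MW taken, price) in merit order."""
    plan, remaining = [], imbalance_mw
    for name, mw, price in sorted(offers, key=lambda o: o[2]):
        take = min(mw, remaining)
        if take > 0:
            plan.append((name, take, price))
            remaining -= take
    return plan

print(dispatch(offers, imbalance_mw=70))
```

    With a 70 MW imbalance, the pool takes all 25 MW of the cheapest offer, all 40 MW of the next, and only 5 MW of the expensive gas CT – cheaper overall than any single balancing authority running its own CT for the full amount.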

  • 01 Feb 2011 1:15 PM | Anonymous

    Getting Smarter

    Attending the Smart Grid Class reminds me of an old Russian joke that I heard from my grandfather about a Muscovite nobleman sharing a train compartment with a southern Russian from Georgia. While the lavishly dressed nobleman had a gourmet hamper of Moscow delicacies to satisfy his hunger on the long train trip, the shabbily dressed southerner produced a heap of fish heads wrapped in oily newspaper. The nobleman had often heard of Georgians’ legendary business acumen and remarkable intelligence. So he asked his travelling companion whether it was true, as folklore suggested, that this fishy repast was what made Georgians so smart? The swarthy Georgian merely shrugged. After further fruitless attempts to get a satisfactory answer, the nobleman finally proposed to swap their respective foods for the length of the journey so that he could satisfy his curiosity about this supposed correlation. Soon the Georgian was enjoying the nobleman’s sumptuous repast, while the Muscovite began daintily picking through the fish heads.

    Two days later, as they approached the final stop in Semipalatinsk, the nobleman crumpled up the last of the oily newspaper while his taciturn travelling companion returned the now-empty picnic hamper. “You know,” said the nobleman, “after eating all those fish heads I really don’t feel much smarter.”

    The Georgian merely cracked a toothy grin from under his flourishing mustache and with a twinkle he allowed as how the nobleman was nonetheless “getting smarter!”


    While I am still quite unclear about where the Smart Grid begins and the dumb grid ends, and whether it is all about modernizing our infrastructure, or providing consumers with choices, or unleashing latent innovation … I think I’m “getting smarter” about Smart Grid.


    Utility Operations Today

    - Conrad Eustis

    Conrad began by reviewing the NIST model of interoperability that we discussed at the end of the class on January 24th. He emphasized again that even though many of the functions he had described were interoperable, they mostly used proprietary means to do so. He showed how the systems were indeed bridging the vertical silos, but often with manual steps in the process. He pointed out that the cost of making sweeping changes was often greater than the benefit derived from a complete conversion, so it was reasonable to make these changes incrementally.

    Next he introduced the OSI (“Open Systems Interconnection”) model. This layered architecture comprises the following layers:

    1. Physical Layer
    2. Data Link Layer
    3. Network Layer
    4. Transport Layer
    5. Session Layer
    6. Presentation Layer
    7. Application Layer

    Conrad then used the example of mailing a valentine to explain the various layers:

    The words in the valentine correspond to the “application” layer. The envelope is the equivalent of the “presentation” layer. There is no equivalent of the session layer, since writing letters does not involve parallel activities. The transport layer is represented by the stamp that paid for the voyage, and the network layer is the addressing. Finally, the data link is the equivalent of placing the letter in the mailbox, and the physical layer is the way the message is actually moved – by truck or plane.

    Importantly, the complete layered architecture required that the message then reverse the flow, unwrapping each layer in turn until the message was delivered and the words were interpreted at the "application" layer again.
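    The down-and-back-up flow of the valentine example can be sketched as nested encapsulation: each layer wraps the payload on the way down and unwraps it in reverse on the way up. This is only an illustrative toy (the wrap/unwrap functions are hypothetical, not any real protocol stack):

```python
# Sketch of OSI-style encapsulation: each layer wraps the message on the
# way down (sender) and unwraps it in reverse order on the way up (receiver).
LAYERS = ["application", "presentation", "session",
          "transport", "network", "datalink", "physical"]

def send(message):
    """Wrap the payload with one header per layer, top to bottom."""
    frame = message
    for layer in LAYERS:
        frame = {"layer": layer, "payload": frame}
    return frame  # outermost wrapper is the physical layer

def receive(frame):
    """Unwrap in reverse: physical first, application last."""
    for expected in reversed(LAYERS):
        assert frame["layer"] == expected
        frame = frame["payload"]
    return frame  # the original words of the valentine

delivered = receive(send("Be mine"))
print(delivered)  # -> Be mine
```

The point of the sketch is that the receiver must peel the layers in exactly the opposite order the sender applied them – the same round trip the valentine makes.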

    The next example showed how this OSI layered architecture analysis could be applied to the outage call system employed by the utility. In this case the report is the application layer and the presentation is the phone; once again the session is singular. The transport layer is the telephone, the network layer is the phone number, the datalink layer is the IVR, etc.

    Conrad went on to show how interoperability is mostly about combining two sets of data to extrapolate useful information and then acting on it.

    While today’s systems can synthesize information from the smart meters and correlate it with customer information systems, most of these interactions are proprietary; there are no standardized ways to report the information. The bottom half of the slide explained what will be needed in the future…
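    Conrad's point that interoperability is largely about combining two data sets can be made concrete: join smart-meter readings with customer records to produce actionable information. A minimal sketch – the field names and threshold are illustrative assumptions, not PGE's actual schema:

```python
# Join smart-meter readings with customer-information records and flag
# unusually high usage for follow-up.
meter_readings = {  # meter_id -> kWh for the billing period (illustrative)
    "M-100": 850, "M-101": 2400, "M-102": 310,
}
customers = [  # records from a hypothetical customer-information system
    {"name": "Alice", "meter_id": "M-100", "rate_class": "residential"},
    {"name": "Bob",   "meter_id": "M-101", "rate_class": "residential"},
    {"name": "Cara",  "meter_id": "M-102", "rate_class": "residential"},
]

HIGH_USAGE_KWH = 2000  # illustrative threshold

def flag_high_usage(customers, readings, threshold=HIGH_USAGE_KWH):
    """Combine the two systems' data to produce actionable information."""
    return [c["name"] for c in customers
            if readings.get(c["meter_id"], 0) > threshold]

print(flag_high_usage(customers, meter_readings))  # -> ['Bob']
```

Neither data set is useful alone; the value – and the interoperability challenge – is in the join.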


    Smart Grid:  Cyber-Security Concepts

    - Linda Rankin

    Linda’s talk fit perfectly in line with the preceding explanation of the OSI model.

    Linda Rankin is an early graduate of the course who was later hired by PSU to co-teach it, and subsequently by QualityLogic, the firm co-founded by James Mater (part of the instructional team for this course). After these introductions, Conrad Eustis added that Linda had been quite active in interface development during her time at Intel, making this work an ideal extension of her earlier work developing standard interfaces for the computer industry.

    Linda began by reintroducing the OSI layered architecture stack that Conrad had referenced in the previous session, which the GridWise Architecture Council has adopted as a model for building a layered approach to standardized communications between the various layers of the emerging Smart Grid:

    1.      physical

    2.      datalink

    3.      network

    4.      transport

    5.      session

    6.      presentation

    7.      application

    Linda discussed the various types of layers:

    • Semantic layers – the meaning of the data (e.g., a price)
    • Syntactic layers – the format and units of measurement
    • Networking layers – addressing
    • Basic connectivity – wired and wireless

    …and how standardizing the connections between the layers allows development to occur more quickly. Security belongs to all layers.

    Linda then gave the class three examples of how the integration of these many layers is accomplished. To start, she explained how a SCADA system works:

    A SCADA (supervisory control and data acquisition) system controls many devices on an electrical network. It is comprised of:

    • A control unit
    • Many remote terminal units (RTUs)
    • Many sensors, and
    • Programmable logic controllers

    The whole system is connected through wired or microwave communications networks. It provides two-way communications and polls the devices continuously. SCADA systems were designed for reliability, with stand-alone functionality.
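    The continuous polling Linda described can be sketched as a control unit cycling through its RTUs each scan. A toy model under stated assumptions (simulated sensors, no real network, and an illustrative ANSI-style voltage band):

```python
# Toy SCADA loop: a control unit polls each remote terminal unit (RTU),
# collects sensor readings, and flags out-of-range values for alarm.
class RTU:
    def __init__(self, rtu_id, voltage):
        self.rtu_id = rtu_id
        self.voltage = voltage  # simulated sensor reading

    def poll(self):
        """Simulate the two-way exchange: return current sensor readings."""
        return {"rtu": self.rtu_id, "voltage": self.voltage}

def scan_cycle(rtus, low=114.0, high=126.0):
    """One polling cycle: gather readings, flag anything outside the band."""
    alarms = []
    for rtu in rtus:
        reading = rtu.poll()
        if not (low <= reading["voltage"] <= high):
            alarms.append(reading["rtu"])
    return alarms

rtus = [RTU("RTU-1", 120.1), RTU("RTU-2", 109.5), RTU("RTU-3", 124.8)]
print(scan_cycle(rtus))  # -> ['RTU-2']
```

A real control unit would run this cycle endlessly on a fixed period, which is the "pinging" behavior described above.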

    Linda then went on to explain how three typical systems work with respect to designing the linkages between the various levels of the “stack”.

    1. PSU building control system

    A central computer acts as the server for the Siemens building system, which uses the BACnet protocol to communicate between the layers. Clients can log into the BACnet network. The Siemens building service runs on a schedule and supplies the algorithms. BACnet integrates all the interlayer connections in one consolidated software solution: a single proprietary interface that connects to the controller, to the sensors and actuators, and to non-native applications. It suffers from low performance and limited addressability.

    Part of the reason this system is not configured for greater standardization is that converting the old (pneumatic) system to digital costs $3,000 or more. The arrangement is kept proprietary to reduce the need to change all the remote units tied into the interface. This system is used extensively in commercial buildings; Siemens models the building and develops the algorithms used by BACnet.

    2. PGE distributed energy system: GenonSys

    This system uses software known as GenonSys. In this architecture the control server is a computer maintained behind a firewall at PGE, and clients can interface with it. The system connects to clients (responsive assets) and communicates through Ethernet to a communication server, which connects using Internet radio (high-speed wireless) with a range of 5 miles. Each site has a modem that delivers the Internet over wireless Ethernet; each of these (Motorola) modems costs about $1,000. The system is totally controlled by PGE and is very similar to a SCADA system.

    The modem communicates with the RTUs, which in turn connect to the PLCs, which then control the responsive asset (the customer’s generator). The standard is Modbus (a lot like BACnet), a daisy-chain, master/slave protocol that queries every x seconds. It mixes the syntactic with the addressing and the semantic – it blends all the layers together. One Modbus master can access only 247 devices; one is deployed per home.

    A home Modbus master costs about $300. Here again it may not be economical to invest in a full OSI model. The GenonSys system is really a proprietary application: it seeks the specialized information necessary for managing generation – how much energy the generator is putting out, how hot the machine is, fuel levels, whether anyone is in the area, etc.
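    The master/slave daisy chain can be sketched as one master querying numbered slave addresses in turn (Modbus slave addresses run 1–247, which is the 247-device limit mentioned above). A simplified model, not a real Modbus implementation – the register values are invented for illustration:

```python
# Simplified Modbus-style polling: one master queries slave devices by
# address (valid Modbus slave addresses are 1-247) on each cycle.
MAX_SLAVES = 247

class Slave:
    def __init__(self, address, register_value):
        assert 1 <= address <= MAX_SLAVES, "Modbus allows addresses 1-247"
        self.address = address
        self.register_value = register_value

    def read_register(self):
        return self.register_value

class Master:
    def __init__(self):
        self.slaves = {}

    def attach(self, slave):
        self.slaves[slave.address] = slave

    def poll_all(self):
        """Query every attached slave in address order, as a master would."""
        return {addr: self.slaves[addr].read_register()
                for addr in sorted(self.slaves)}

master = Master()
master.attach(Slave(1, 72.5))   # e.g., generator temperature (illustrative)
master.attach(Slave(2, 0.80))   # e.g., fuel level fraction (illustrative)
print(master.poll_all())  # -> {1: 72.5, 2: 0.8}
```

Because only the master initiates traffic, each added sensor lengthens the polling cycle – one reason the protocol "blends the layers together" rather than scaling like a routed network.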

    On security: a recent SCADA exploit in the UK showed that once inside, an attacker can gain access to the whole system.

    3. Tendril – for home-based networks.

    This area has the most potential. The architecture is really an aggregator (service provider) accommodating consumer-based systems, and it also supports OpenADR. The Tendril server is based on an open API (application programming interface).

    The Tendril server supports the open API and communicates via a backhaul to the Internet. The Internet interface could be SilverSpring, Comcast, etc. – your ISP. It can go to your meter (SilverSpring) or to a gateway – your home modem. The ZigBee architecture supports a network of devices – sensors and actuators connected through a wireless ZigBee interface.

    A Home Energy Management System is a different model – it is best suited for sorting out direct pricing.

    The Tendril model ultimately still needs smart appliances to be connected to the system. To optimize energy use, you have to be constantly vigilant.


    Security is an attribute across the entire stack. First you have to identify the risk. You need to preserve the integrity of the system and ensure its confidentiality. And you have to ensure availability – especially with a SCADA network.

    Linda finished her presentation by discussing the following issues related to security:

    • Attacks: phishing, man-in-the-middle, brute-force attacks
    • Authentication: passwords, security, multi-factor
    • Encryption: keys, certificates (PKI), digital signatures
    • Trade-offs:
      • Services offered versus security provided
      • Ease of use versus security
      • Cost of the security versus the risk of the loss
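    One of the tools in the encryption bucket above – message authentication – can be shown with Python's standard library alone: an HMAC lets a receiver verify both the integrity and the origin of a reading, defeating the man-in-the-middle tampering listed under attacks. The key and message here are purely illustrative:

```python
import hmac
import hashlib

# Authenticate a meter reading with an HMAC: without the shared key, an
# attacker who tampers with the message cannot forge a matching tag.
SHARED_KEY = b"illustrative-shared-secret"  # hypothetical key, not a real one

def sign(message: bytes, key: bytes = SHARED_KEY) -> str:
    return hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(message: bytes, tag: str, key: bytes = SHARED_KEY) -> bool:
    # compare_digest avoids timing side channels when checking the tag
    return hmac.compare_digest(sign(message, key), tag)

reading = b"meter=M-100;kWh=850"
tag = sign(reading)
print(verify(reading, tag))                  # -> True
print(verify(b"meter=M-100;kWh=8500", tag))  # -> False (tampered reading)
```

This illustrates the trade-off list as well: the HMAC adds a few bytes and a key-management burden, a small cost weighed against the risk of acting on forged grid data.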
  • 25 Jan 2011 12:34 PM | Anonymous

    This was the third class of PSU’s 2011 iteration of the Smart Grid Course – a path-breaking instructional offering that for the last three years has delivered the best thinking and newest developments pertaining to the implementation of this new technology and the evolution of this new paradigm for energy delivery.

    In this third class, Steve Hawke, PGE’s Senior Vice President for Customer Service, Transmission and Distribution, talked about how he views the Smart Grid challenges. He sees this as an opportunity to introduce more effective market signals to replace the cumbersome direction of regulation.

    Jeff Hammarlund followed this presentation with a review of how the current electrical distribution system evolved from Edison and Westinghouse’s days – with a special emphasis on implementations that shaped the Pacific Northwest T&D environment.

    Finally, Conrad Eustis, Director of Retail Technology Development at PGE, introduced the basic elements of the Smart Grid interoperability challenge and how PGE was trying to comply with the NIST interoperability model. This presentation was continued on January 31st and served as the foundation for Linda Rankin’s presentation on system architectures.

    Guest Presenter: Steve Hawke

    And to help us on our way to becoming even smarter about Smart Grid, we were honored to have Steve Hawke, Senior Vice President, Customer Service, Transmission and Distribution at Portland General Electric talk to us about why PGE is so supportive of the Smart Grid.

    The gist of his energetic presentation was that the energy paradigm was rapidly changing and that new aggregators were entering the field whose business models might eventually pose a huge threat to large and even mid-sized electric utilities, like PGE.

    PGE is a regulated monopoly.

    PGE is a mid-size company: not so big that it is hampered by its internal processes and the costs of embarking on new investments, nor so small that it lacks the resources. PGE is trying to be a “good early adopter” and pick its “sweet spot”. Like all other companies, PGE has to consider its (business) environment.

    All providers of energy in the United States are regulated monopolies. PGE is regulated by a public utility commission; municipal boards regulate municipal utilities, and customers regulate their cooperative utilities. They are regulated because they are capital intensive. Building a transmission line from Boardman to Salem costs $1 billion, and if PacifiCorp later built an identical line right next door for the same investment, it would not be an efficient use of public resources – this is the basis of the regulatory structure that has evolved since the early 1900s. The way we provide electric service in this country has been developed around a regulated monopoly business model. That’s not a given for all time. There have been lots of previous monopolies that are no longer regulated:

    Grain silos, insurance salesmen and even movie ticket sales were once regulated. AT&T was the biggest regulated monopoly. Beginning in the 1970s, MCI attacked only one tiny part of AT&T’s service – the long distance routes from Los Angeles to New York. The legal basis for the challenge was that competition could provide a more reliable and lower-cost service. Eventually this argument stripped AT&T of its lucrative business lines. AT&T was left with only its least profitable lines and could no longer cross-subsidize its remaining services. Within 10 years the AT&T business model collapsed.

    So over the course of the last three decades we’ve seen the $12-per-month regulated land line to the home replaced by a smart mobile phone, internet, and cable service pouring out nearly unlimited data streams for somewhere between $250 and $300 per month. That’s the value proposition we get when we replace the regulatory structure with free-market competition – when technology, law, and politics support a change from a regulated monopoly to a competitive enterprise.

    Deregulation has been tried.

    The idea was that technology had developed to the point that utilities could offer unlimited supply. It was presumed that investors would build speculative power plants that would drive down the cost of energy. It turned out to be too early for that business model, and it failed miserably: prices went up and California went into a tailspin. The legacy of that disaster still haunts the industry. The regulated business model is not sacrosanct, and it changes in a number of different ways when attacked.

    How does PGE see its business environment?

    1.     Electricity has been the most important product of the last century.

    2.     Electricity will see increased use until 2080. We will need 8-12 times more power on the planet than we have now; in the US we’ll need twice the energy we use now.

    3.     A system that takes 3 million years to set up but is used up in 80-90 years is not sustainable. The transportation system burns up 24 times more energy each day than all the generation plants in the country, mainly because the car is so inefficient. From the utility’s perspective, you can’t just look at one part of the environmental problem; you have to look at the entire system when deciding where to spend your time solving the carbon problem. In other words, the transportation system is another place where PGE might be able to derive more energy savings toward its goals – hence PGE’s support for the emerging EV industry…

    4.     The North American transmission and distribution system is the most complicated system in existence. Its product moves at the speed of light – fast enough to circle the equator 7.5 times in a second – and yet we choose to have more than 3,000 people manage it! It has evolved organically; you would never plan to run it the way it is currently managed. From the utility’s perspective, they are saddled with an antiquated delivery infrastructure that desperately needs renovation.

    5.     According to the basic laws of thermodynamics, everything gets reduced to heat; heat is the common currency of energy. Energy efficiency is about conserving heat, and industrial efficiencies try to reduce line loss and heat dissipation. Over time we will have to consider how to reduce emissions of all chemical substances, and in the end we’ll be talking about heat as a currency. From the utility’s perspective, the current focus on carbon is just the beginning; we will have to reconsider all emissions.

    6.     Utilities have a load curve that resembles a (two-humped) “Bactrian camel”. Of course, the ideal load curve for a utility would be flat, so that a steady and cheap energy source can satisfy the demand and minimize loss of heat. Similarly to the regulated telecommunications business model, utilities blend the different cost structures of their businesses to develop the most compelling value propositions for the different customer classes that they serve. This blending of cost structures depends upon their ability to cross-subsidize different business products.

    7.     Legal challenges to this regulated monopoly business model tend to focus on those lucrative business products and services that sustain the cross-subsidization of less profitable, but necessary product lines. Typically sophisticated services aimed at mid-sized businesses are priced well above their delivery costs. Legal challenges to the regulated monopoly business models have contended that precisely these lucrative services can be reliably delivered at a much lower cost. No doubt this is true, but the gradual deregulation of the profitable lines of business will eventually collapse the utilities’ ability to cross-subsidize the portfolio of products they are required to provide. The most successful way to attack a regulated business is to “skim off the cream”, or acquire the most lucrative customers while leaving the regulated model with the less profitable customers.

    Why is this relevant to PGE? If a competitor were able to skim off those customers with a flatter load curve, it could provide them with a lower-cost solution and still generate enough profit for shareholders. Unfortunately, this gradual siphoning off of customers with flatter load curves will undercut the utilities’ ability to profitably serve those customers with more variable load curves.
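    The "flatter load curve" argument can be quantified with load factor – average demand divided by peak demand. A customer near 1.0 can be served with cheap, steady baseload generation; a peaky customer forces the utility to carry capacity that sits idle most of the day. A small sketch with invented hourly data:

```python
# Load factor = average demand / peak demand. A flat curve (factor near 1)
# is cheaper to serve per kWh than a peaky one, which is why "cream
# skimming" the flat-load customers undermines the regulated model.
def load_factor(hourly_demand_mw):
    return sum(hourly_demand_mw) / len(hourly_demand_mw) / max(hourly_demand_mw)

flat_customer = [10, 10, 10, 10]   # illustrative, e.g. a round-the-clock plant
peaky_customer = [2, 4, 20, 6]     # illustrative, e.g. an evening peak

print(load_factor(flat_customer))   # -> 1.0
print(load_factor(peaky_customer))  # -> 0.4
```

The numbers are invented, but the asymmetry is the point: both customers here have the same peak-capacity requirement (the max), while the flat one pays for far more delivered energy.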

    Today PGE is able to provide its customers with reliable service because they are able to aggregate all the customers in a territory. But what if another unregulated competitor were to be able to aggregate these same energy users and “cream skim” the most lucrative customers? Who might these more efficient aggregators be? Possibly Google – they also have all the customers in the country. Any technology that can provide energy storage, or that could bypass wires would have a significant competitive advantage.

    The question that PGE is seeking to answer is how to survive the gradual deregulation of the energy industry. It may take as long as fifty years to bring down the cost of energy delivery to a deregulated price point. Deregulation may also engender a more distributed infrastructure that can withstand security challenges.


    Wiring the Smart Grid into History: the Historical Precedents of the Smart Grid.

    Jeff Hammarlund

    Jeff quickly summarized the “key components of a systems framework to support the smart grid” that he had discussed at the end of the class on January 18th.

    Drawing in part from the work of Mark Johnson and Josh Suskewicz of Innosight, Jeff had postulated that new clean-tech initiatives like the smart grid had the best chance for success when a systemic framework was established that offered four interdependent and mutually reinforcing components.  They were:

    1. An enabling technological system. A technological breakthrough is not enough; Edison recognized this with the invention of the light bulb. Although he borrowed a few components from legacy lighting systems, he wouldn’t have been successful without the development of an enabling technology system – wires, generators, meters, transmission lines, and new appliances – that had to be built from the ground up.

    2. An innovative business model.  Success requires combining a technological breakthrough with a business model that provides value for both the customer and the company.

    3. A market adoption strategy that ensures a foothold. The smart grid and other clean-tech systems, like the systems they are intended to modify or replace, are complicated. They are fraught with unknowns and how best to integrate their parts won’t be clear at the outset. Smart Grid advocates need to be humble, nimble, and willing to act when their initial hunches turn out to be wrong. When possible, they should incubate their new ventures outside demanding, competitive markets – in foothold markets, where the value proposition offered by even early-stage technologies and business models is so great that customers are willing to overlook their shortcomings. Portland might serve as such an accommodating incubator for hosting a pilot that links demand response programs with electric vehicles via the emerging Smart Grid.

    4.  A favorable government policy.  Federal support for the smart grid has been vital, particularly given the depressed economy.  But it is not without risks.  Government support is most effective when it is directed not just at new technologies but at new business models as well. For instance, in the early 1900s, Samuel Insull’s development of the regulatory concept of a “natural monopoly” and the related concept of a “load factor” were instrumental in producing a regulatory compact that allowed the industry to develop more compelling value propositions while still responding to public policy requirements.

    A Short History of the Electrical Industry in the Pacific Northwest.

    - Jeff Hammarlund

    After this brief review of favorable systemic conditions, Jeff began to outline the original development of electricity, and the early transmission and distribution systems. He described Edison’s opening of the Pearl Street Station on September 4th, 1882 near Wall Street. The original $160,000 power plant served an area of only 1/6 sq. mile and included the stock exchange, two newspapers and several prominent investment banks.

    Jeff then described the fierce competition between Westinghouse and Edison. The population remained very skeptical of this technology, fearing electrocution. Edison actually founded the first co-generation plant when he used the steam from his powerhouse to heat nearby offices. But the limited reach of DC current meant that Edison had to build twenty separate transmission and distribution systems. Nonetheless, Edison believed that only direct current (DC) transmission lines could make it possible to transmit power over longer distances, such as the 14 miles that separated Oregon City and Portland – the site of his first demonstration projects in the West. Much of the history that Jeff summarized was gleaned from his 2002 draft article on Oregon’s Role as an Energy Innovator; the following citations (in italics) are taken from that article.

    The Battle of the Currents

    When Edison refused to listen, Tesla quit and joined forces with Edison’s chief rival, George Westinghouse. The Pittsburgh-based industrialist had just bought out some of Edison’s other competitors and boasted that his new Westinghouse Electric and Manufacturing Company would become “the most progressive” company in the field. As a young man, Westinghouse had been an inventor in his own right; he developed his most famous invention, the railroad air brake, by his 23rd birthday. He was both excited by Tesla’s inventions and intrigued by the young Serb’s claims that he was a “scientific mystic” whose inventions came from “spiritual visions.” After purchasing the rights to Tesla’s 40 AC patents, including the polyphase AC and the induction motor, Westinghouse hired the young emigrant as director of research. Together, they began to develop and market Tesla’s inventions. Thus began the “Battle of the Currents” between the two camps. …Tesla insisted that his system offered two important advantages. First, the electricity could travel much further, possibly many miles from a remote power plant to a population center. Second, the voltage from a single polyphase AC system could be stepped up or down to meet the specific power requirements of residential, commercial, or industrial customers.

    The operators of Willamette Falls Electric, where the first long distance transmission line to Portland was to be installed, were not cowed by Edison’s scare tactics. After the success of the initial DC transmission, they nonetheless switched to AC. The Willamette Falls Electric transmission test confirmed Tesla’s contention that transmission with AC would significantly reduce power losses over the lines; in fact, the line losses were even less than Tesla had anticipated.

    Henry Villard and the Genesis of PGE

    In January of 1880, just three months after Thomas Edison invented the incandescent light bulb, shipping and railroad magnate Henry Villard, owner of the Oregon Railroad and Navigation Company, gave Edison’s fledgling company its first commercial order. Four small electric generators called dynamos were placed in the engine room of Villard’s new ship, the SS Columbia. Each generator lit 60 small lamps of 16 candlepower. Once the ship reached Portland in the fall of 1880, wires were strung from ship to shore to light up a lamp hung from the porch of Portland’s Clarendon Hotel. “The powerful rays lighted up the whole neighborhood to the brightness of day,” proclaimed The Oregonian. Sensing the significance of the event, an accompanying editorial boasted, “The enterprise of a Western railroad gives Edison’s greatest invention, the electric light, its first practical use while the conservative East is still trying to laugh it off as a ridiculous joke.”

    Villard began to invest heavily in the fledgling Edison General Electric Company and, by 1888, had been elected as the company’s president. The novel idea of festoon lighting appealed to many Northwest shop owners, most notably the proprietors of our popular saloons, and soon many companies sprang up to offer this service. In Seattle alone, nearly 30 minuscule start-up companies placed dynamos in their basements and competed for this cutting-edge business. But start-up costs were high and potential customers were more curious than convinced of the viability of electricity. Most failed in the financial panic of 1893.

    In more laid-back Portland, a firm that built hydraulic elevators bought three steam dynamos in 1884. Using excess steam from the elevator company’s boilers, each dynamo powered 20 light bulbs. Under the grandiose name of United States Electric Light and Power Company, the tiny company soon proved itself to be the most successful utility in town. This was the predecessor of today’s PGE.

    Insull and the Regulatory Compact

    From the beginning, the most successful electric holding company was J. Pierpont Morgan’s Electric Bond and Share Company. When J. Pierpont Morgan died in 1912, his son, J.P. Morgan, Jr., continued his efforts to use the holding company structure to consolidate the utility industry. Morgan Jr.’s goal was to make the United Corporation the dominant power in the utility industry. While he never fully succeeded, he and his 17 partners controlled 13 utility holding companies, which in turn controlled a network of utilities from coast to coast.

    The only large utility holding company independent of the House of Morgan was controlled by Thomas Edison’s former personal secretary, Samuel Insull.  After witnessing Edison lose control of Edison General Electric to the elder Morgan, Insull moved to Chicago to manage a fledgling utility, where he quickly demonstrated the remarkable business acumen and political savvy to make his utility, now known as Commonwealth Edison, the dominant regional energy provider.

    Insull proved to be one of the most innovative leaders in the young electric utility industry. He was the first to demonstrate the profitability of linking central generating systems, small towns, and rural areas with extensive distribution systems. And he was one of the leading proponents of state regulation of investor-owned utilities.

    Insull argued that utilities were “natural monopolies”. He insisted that competition was not in the best interests of either company owners or the customers since competing power lines and power plants increased costs for consumers and reduced company profits. The costs of generating, transmitting, or distributing electricity would be lower if only one company handled all these functions in a particular area.

    Insull’s vision was to establish electricity as an “essential service” and to assign utilities specific service territories. In return, the utilities would agree to serve all the customers in their territories and they would be granted a set rate of return for their services. They would be regulated according to their ownership. Municipal electric companies were regulated by city councils, rural cooperatives by their customers, investor owned utilities by the public utility commissions, and public utilities by their county commissions.

    In 1935 the Northwest had 8 investor-owned utilities (IOUs) and 42 citizen-owned utilities. The scope of state authority varied. Federal wholesale rates and local retail rates were regulated to be “just and reasonable”. The basic principles used by the regulatory bodies were:

    • Utilities were to operate safely and reliably
    • Rates were to be both just and reasonable
    • Utilities could recover prudently invested costs, and
    • They were allowed to earn a reasonable rate of return.

    At this point, Jeff concluded his presentation in order to give Conrad sufficient time to present his overview of how utilities structure their responsibilities and how that relates to the successful implementation of the  Smart Grid.


    Utility Operations Today

    -  Conrad Eustis

    Towards the end of the class Conrad began his presentation on how utilities operate today and the implications of the management structure on the implementation of the Smart Grid.  Conrad began with an explanation of the National Institute for Standards and Technology’s (NIST) model for the Smart Grid.

    The NIST conceptual model divides the functions into the following categories:

    1. The service provider

    2. The markets – where we get energy

    3. Operations, including:

    a.      power generation

    b.     transmission operator

    c.      distribution and repair

    d.     customer premise management

    All these elements should be interoperable!

    Conrad began by mapping out the major functions of an actual electric utility.

    For instance, the typical service provider functions included:

    • Develop Product Offers
    • Start/Stop Service
    • Read Meter
    • Calculate Bill
    • Answer Questions
    • Acknowledge Outage

    The operations function included the following tasks:

    • Acquire Energy for Tomorrow
    • Maintain Grid Stability
    • Reroute Power to minimize customers out of power
    • FERC:  Control Area Operator
    • Repair Broken Wires
    • Distribution Sys Operator

    The transmission and distribution functions included:

    • Create new Services
    • Upgrade Plant as Required
    • Create and Monitor Telemetry

    And finally, the bulk generation function included:

    • Build, Operate and Maintain Power plants

    Then he mapped PGE’s officer positions to the NIST model.

    • 7 administrative officer positions were directly related to the Service Provider role.
    • 3 operational roles mapped to Operations,
    • 1 executive mapped to the transmission and distribution functions, and
    • 1 executive was assigned to the bulk generation function.

    The clear lesson from superimposing the theoretical NIST model on an actual utility was that interoperability is far from complete: in most cases it had not yet been addressed, and in many others it was still under construction.

    That concluded the third session of the Smart Grid Class on January 24th, 2011.

  • 19 Jan 2011 12:57 PM | Anonymous

    This second class was devoted to laying the foundation for the examination of the Smart Grid. Conrad Eustis provided a primer on electricity and the basic infrastructure of a “dumb” electric grid. Jeff started to lay out the history of energy policy. Some time was spent on class communications logistics, the course textbook (Smart Power by Peter Fox-Penner), and the small group learning exercise. This blog will not concern itself with logistics or the small group exercise, but will focus instead on the substantive Smart Grid lectures and, in part, on the readings for that week.

    Conrad Eustis led off the class by presenting what he calls “Grid 101”, beginning with the basics of how electricity works. Starting at the most basic level, he explained that electricity is the flow of electrons along a conductor such as a wire. The rate at which the electrons move is measured in amperes, or “amps”. But as we all know, homes are equipped with sockets that provide both 120 and 240 volts of power. How does voltage relate to amps?

    Voltage measures the electrical “pressure” inherent in the flow of electrons. This is similar to water pressure in a pipe held at various angles: a flat pipe has no ability to exert pressure, a pipe held at a 45° angle has some pressure, and a vertical pipe has the greatest pressure. The voltage is thus the “pressure” driving the electrons and their commensurate ability to do work.

    Thus the power that an electrical charge exerts (expressed in watts) is a combination of the voltage and the rate of flow (amps). However, most voltages are fixed (for example, 120 or 240 volts for households), so watts are proportional to the current flow (amps). For example:

    • A 1,200-watt hair dryer draws 10 amps on a 120-volt line.
    • A 5,520-watt clothes dryer draws 23 amps on a 240-volt line.

    By definition a 100-watt appliance running for an hour uses 100 watt-hours, or 100 Wh, or 0.1 kWh (zero point one kilowatt-hours). The kilowatt-hour, “kWh”, is the comprehensive unit of measurement for energy consumption since it combines amps, voltage and time.
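    The two relationships above, watts as volts times amps and kilowatt-hours as kilowatts times hours, can be sketched in a few lines (a quick sanity check on the class examples, not from the course materials):

    ```python
    def amps(watts, volts):
        """Current drawn by an appliance: P = V * I, so I = P / V."""
        return watts / volts

    def kwh(watts, hours):
        """Energy consumed: kilowatt-hours = kilowatts * hours."""
        return watts / 1000 * hours

    print(amps(1200, 120))  # hair dryer -> 10.0 amps
    print(amps(5520, 240))  # clothes dryer -> 23.0 amps
    print(kwh(100, 1))      # 100 W appliance for one hour -> 0.1 kWh
    ```

    Both examples from the lecture check out against the formula.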

    Now we get to another important distinction: the difference between energy and power.

    The 1st Law of Thermodynamics states that energy can neither be created nor destroyed; it can only be transformed from one form to another. In that form it can be harnessed to do work. “Power” is the rate at which we use that harnessed energy.

    In the electric industry “capacity” and “demand” are also used instead of “power”. For people in the industry, power is also an attribute of the electrical delivery system or of a power conversion unit: delivery infrastructure and specific devices are designed, or rated, to accommodate specific amounts of energy. Applying too much energy reduces efficiency and increases the amount of energy lost as heat. When efficiently consumed, energy creates value; for example, about 1,000 kCal/day (roughly 4 oz of fat, or ~50 watts) keeps an adult warm.
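    The parenthetical conversion can be verified with a quick back-of-the-envelope sketch (mine, not from the slides): 1,000 kcal per day works out to roughly 48 watts of continuous power.

    ```python
    KCAL_TO_JOULES = 4184        # 1 kilocalorie = 4,184 joules
    SECONDS_PER_DAY = 24 * 3600

    def kcal_per_day_to_watts(kcal):
        """Convert a daily energy budget to average continuous power (W = J/s)."""
        return kcal * KCAL_TO_JOULES / SECONDS_PER_DAY

    print(round(kcal_per_day_to_watts(1000), 1))  # -> 48.4, i.e. roughly 50 W
    ```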

    The 2nd law of thermodynamics also implies that no transformation is 100% efficient. When energy is transformed some of it leaves the system, often in the form of ambient heat. Another way to say this is that energy flows from an organized state to a disorganized state, from high availability to low availability. This is referred to as entropy. But not all systems experience a net exodus of energy. An “anti-entropic” system benefits from external energy sources that replenish and increase the amount of available energy. Thus Earth benefits from 174 billion MW of solar energy that radiates into our system continuously. Most systems and living organisms progress from an anti-entropic state, to a state of equilibrium, and finally to an entropic state until all energy has escaped and the system ceases to exist.

    Such a digression into the Law of Entropy (2nd law of thermodynamics) is probably more relevant to a broader discussion of sustainability, but for now we are primarily concerned with the transformation of energy into power and the delivery of that power across the country to the end-users.

    As Conrad stated before, to produce power we must transform energy into a form that can be harnessed. We can use kinetic, chemical, nuclear, potential, solar, wind, wave or geothermal energy, but for the purpose of supplying the grid we need to produce a steady stream of electrical power. Generation plants are typically rated by the amount of power they can produce, their capacity. So a 1 MW plant that operates all year produces 8,760,000 kWh, yet if that same plant operates for only 4 hours in a year it produces only 4,000 kWh. Since these plants have high fixed costs, it is desirable to run them continuously.
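    Conrad's arithmetic here is just nameplate capacity times hours of operation; a brief sketch (my illustration, not from the class) that also derives the capacity factor, the fraction of the theoretical year-round maximum a plant actually produces:

    ```python
    HOURS_PER_YEAR = 8760

    def output_kwh(capacity_mw, hours_run):
        """Energy produced: MW * hours * 1,000 kWh per MWh."""
        return capacity_mw * hours_run * 1000

    def capacity_factor(capacity_mw, actual_kwh):
        """Actual output divided by the year-round theoretical maximum."""
        return actual_kwh / output_kwh(capacity_mw, HOURS_PER_YEAR)

    print(output_kwh(1, 8760))  # 1 MW plant running all year -> 8,760,000 kWh
    print(output_kwh(1, 4))     # same plant running 4 hours -> 4,000 kWh
    ```

    Running continuously gives a capacity factor of 1.0; four hours a year gives a factor near zero, which is why high-fixed-cost plants are run as baseload.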

    Note that the levelized cost of the renewable energy sources is significantly higher than that of the fossil fuel sources, with wind estimated at $.10/kWh and biomass coming in at $.095-.099/kWh. Of particular interest was the slide Conrad presented showing the costs of new power plants. Note the rising cost of the pulverized coal (PC) and the integrated coal gasification combined cycle (IGCC) plants as the cost of carbon emissions rises. The carbon cost of the natural gas combined cycle (NGCC) turbines also rises, but because their carbon footprint is smaller the increase is not as dramatic.

    Conrad then provided a brief overview of PGE:
    Operating in 52 Oregon cities, Portland General Electric Company serves approximately 816,000 customers, including nearly 100,000 commercial customers. PGE has a diverse mix of generating resources that includes hydropower, coal and gas combustion, wind and solar, as well as key transmission resources. Their 13 power plants have a total combined generating capacity of 2,434 megawatts.
    PGE began back in 1889, when a generator at Willamette Falls in Oregon City produced power to light 55 street lamps 14 miles away in Portland, the first long-distance transmission line in the nation.

    Key facts:

    • Service area: 4,000 square miles
    • Population served: 1.6 million
    • 710,000 residential customers
    • Average residential use: 11,000 kWh/year (about 1.25 kW average)
    • 100,000 commercial and industrial customers
    • Total sales: 20 billion kWh
    • Annual revenue: about $1.7 billion
    • 825,000 meters; 180,000 street lights
    • 4,000 MW peak demand (winter and summer)
      • In winter, 2,100 MW from residential customers (3 kW average each)
      • Each service drop supports 24 to 48 kW
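    The per-customer figures in the list above follow from simple division; a sketch (my arithmetic check, not PGE's):

    ```python
    HOURS_PER_YEAR = 8760

    def average_kw(annual_kwh):
        """Average continuous demand implied by annual consumption."""
        return annual_kwh / HOURS_PER_YEAR

    # Average residential use of 11,000 kWh/year is about 1.26 kW continuous,
    # matching the roughly 1.25 kW average figure above.
    print(round(average_kw(11_000), 2))   # -> 1.26

    # Winter residential load of 2,100 MW spread over 710,000 customers:
    print(round(2_100_000 / 710_000, 1))  # -> 3.0 kW each
    ```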

    Discussion of Grid Infrastructure:
    The balance of Conrad’s presentation was a detailed explanation of the generation, transmission and distribution system that makes up today’s “dumb” grid. According to Conrad, PGE produces about 1/200th of the nation’s electric energy.

    PGE is a vertically integrated utility, owning generation, transmission and distribution facilities. It is not self-sufficient in power and depends upon long-term contracts and spot-market purchases to fill in the additional demand. PGE typically has higher household usage rates because of the demand for heating and cooling, so it is both winter and summer “peaking”. PGE owns most of its own transmission and distribution equipment, except where it uses BPA substations and lines.

    A giant ring of bulk substations surrounds Portland and Vancouver so that power delivery is redundantly supplied – power can circle to the customer in either direction. This ring consists of 10 bulk power substations that provide 115 kV transmission feeds to about 90 distribution substations.

    The local transmission and distribution infrastructure is made up of:

    • 1,600 Miles of Transmission
    • 12 Bulk Power Substations
    • 158 Distribution Substations
    • 260 Distribution Transformers (Transmission to 13kV)
    • 585 Feeders, about 18 Switches per feeder
    • 220,000 Power Poles Owned; Rent on 43,000 Poles
    • 15,500 miles of Primary Voltage distribution Circuits (half underground)
    • 190,000 Utilization Transformers (e.g. 7.6 kV to 240 V)

    The book value (what was originally paid for it) of all this hardware is between $3 and $4 billion, but after depreciation it is only about $1 billion. To replace it today would cost closer to $10 billion. All of this equipment must be mapped and tracked to ensure timely maintenance and effective system repair.

    What followed was a pictorial guide to the transmission and distribution system featuring enough poles and wires to satisfy any linesman’s nightmares. But without delving into the details of all the pole configurations to be found, the following key observations should be noted:

    • To move electricity over greater distances it is more efficient to do so at a higher voltage, but this requires that the power be “stepped down” as it is distributed to customers and households. Using higher voltages decreases line loss, which is often estimated at 9-10% overall.
    • The DC intertie transmission lines moving power across the region (from BC to California) covering as much as 865 miles typically carry voltages of 500 kV – these are not part of PGE equipment, but belong to the BPA.
    • The AC transmission lines that carry the power from 50 to 500 miles into the bulk power substations at voltages of 500kV and 230kV serve the next biggest transmission requirements.
    • From the bulk power substations to the distribution substations the lines “transmit” 110 kV, or in older areas 57 kV.
    • At the local distribution substation the power is then transformed from 110 kV to a primary voltage of 7,200 volts in our system.
    • Finally, the power is routed via feeder lines to 2,000 – 3,000 customers. At PGE these feeder lines typically carry 7,200 volts in order to reduce line loss, and each feeder has a breaker. “One wire to ground is 7,200 volts; phase to phase (there are three phases) are 13,000 volts.” At regular intervals “service drops” connect to individual customers.
    • Feeders:
      • Typical voltages: 25 kV, 13 kV, 4 kV
      • Starts with a breaker at the substation
      • Serves one large industrial customer or 500 to 6,000 residential customers
    • Service drops:
      • A few large industrial customers at transmission voltage (e.g. 110,000 volts)
      • Several hundred at primary voltage (13,000 volts)
      • Most at secondary voltage via a utilization transformer that serves 1 to 12 customers
      • Secondary service drop distinctions:
        • Number of phases: 1, 2, or 3
        • Wye or delta wiring
        • Voltage: 480, 277, 240, 208, or 120
    • The system is designed to accommodate power flowing in one direction, which poses a problem for the Smart Grid, which envisions two-way flows of both power and communication.
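    The efficiency claim in the first bullet above can be illustrated with the standard I²R loss relation (a sketch with made-up line resistance, not PGE data): for a fixed amount of delivered power, doubling the voltage halves the current and cuts resistive loss by a factor of four.

    ```python
    def line_loss_watts(power_w, volts, resistance_ohms):
        """Resistive loss on a line: current I = P / V, loss = I**2 * R."""
        current = power_w / volts
        return current ** 2 * resistance_ohms

    # Deliver 1 MW over a line with 10 ohms of resistance (illustrative values):
    print(line_loss_watts(1e6, 115_000, 10))  # ~756 W of loss at 115 kV
    print(line_loss_watts(1e6, 230_000, 10))  # ~189 W at 230 kV: 4x lower
    ```

    This quadratic relationship is why long-haul transmission runs at hundreds of kilovolts and is stepped down only near the customer.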

    Transmission and Distribution points to remember:
    • The system is designed for power flow in one direction.
    • When a wire breaks, you still need to roll a crew to fix it.
    • Protection devices are designed to protect wires and transformers; every fuse is an engineering design task.

    After a break the class reviewed the mechanics of Small Group Learning, and the formation of said small groups. They subsequently met after class to assign roles and adopt norms.

    Jeff began his “History of Energy Policy and its Implications for the Smart Grid”, but was soon cut off by the end of class. His presentation will be covered in the next blog entry.

  • 12 Jan 2011 1:33 PM | Anonymous

    It was hard to tell from the assembly of people gathered in front of room 303, who was teaching the class and who was attending. There were lots of familiar faces from the local utilities, from the BPA, energy efficiency consultants, lobbyists, regulators, Intel employees and several veterans of Jeff’s previous Smart Grid classes who were taking the course simply to absorb the updated information.

    This year’s iteration of this cutting edge course, Smart Grid and Sustainable Communities,  is being taught by:

    Jeffrey Hammarlund, Adjunct Professor, PSU’s Mark O. Hatfield School of Government; President, Northwest Energy and Environmental Strategies; and Oregon Caucus Chair, NW Energy Coalition;

    Conrad Eustis, PSU Adjunct Professor and Director, Retail Technology Development, Portland General Electric.

    James Mater, co-founder and Director, QualityLogic, and founding board member, Smart Grid Oregon;

    Ken Nichols, Smart Grid Consultant and former TransCanada energy trader; plus many other nationally and regionally known experts on various aspects of the Smart Grid.

    What is Smart Grid?

    As might be expected the first order of business was to try to define the amorphous concept loosely referred to as the “Smart Grid”.

    Citing the Electric Power Research Institute (EPRI), we were reminded that “it matters how you define the problem; it can make a difference on the solution you arrive at.” In all, EPRI has catalogued over 80 different definitions of the Smart Grid! Two other perspectives worth calling out were offered by Jesse Burst, editor of the Smart Grid Newsletter, who defines the Smart Grid as the application of digital technology to the electric power infrastructure, and by Thomas Friedman, whose vision of the Smart Grid includes the following scenario:

    “In the early years of the Energy-Climate Era, we progressed from an Internet that connected computers and the World Wide Web that connected content and Web sites…to an Internet of Things: an Energy Internet in which every device – from light switches to air conditioners, to basement boilers, to car batteries and power lines and power stations – incorporated microchips that could inform your utility either directly or through a special device, of the energy level at which it was operating, take instructions from you or your utility as to when it should operate and at what level of power and tell your utility when it wanted to purchase or sell electricity.”

    Jeff Hammarlund then offered, if not a definition, then at least a set of characteristics that helped to define a framework within which Smart Grid could be said to be evolving. His definition, like the preceding ones, reflects upon the definer’s perspective – in Jeff's case, his experience drafting energy policies.

    1. Smart Grid is not a thing or an end state.
    2. It involves a combination of values and characteristics that differ markedly from the present energy delivery system.
    3. It is a technology enabling process that will support new technologies, services, products and markets for the benefit of utilities, society and the environment.
    4. Smart Grid constitutes a process or journey that leverages new ways of measuring the energy market.

    Smart Grid is what it delivers – defining the scope of its potential contributions.

    The second part of the class was delivered by Conrad Eustis, a PGE engineer with deep Smart Grid knowledge. Unlike Jeff’s policy-driven perspective, Conrad’s “take” on Smart Grid was a more “bottom-up” definition based on its expected benefits.

    He began by considering the different classes of benefits that might be derived from the implementation of Smart Grid technology:

    1. Reliability – driven by the expensive loss of business continuity suffered during the East Coast blackout of 2003, utilities and their customers want to avoid another scenario of cascading failures that takes down the entire eastern seaboard. Investment in a more intelligent and communicative grid would be a start, and the designation of some ARRA monies for this purpose has helped catalyze such efforts.
    2. Economic – The cost of the economic dislocation caused by both intermittent and major service interruption is said to cost the nation over $100 billion. Increasingly, as our nation’s infrastructure is automated through computerized management, the cost of outages becomes ruinously expensive, as an ever broader array of pivotal management functions become energy dependent for both analysis and execution.

    3. Security – Our current system for managing energy generation, distribution and planning future investments is based on an antiquated and uncommunicative system. Not only is it prone to failure, but it is increasingly vulnerable to sophisticated attack and to the stresses of natural disasters.
    4. Efficiency - The Smart Grid should optimize asset utilization and allow utilities to operate more efficiently. Operationally, it should improve load factors, lower system losses and dramatically improve outage management. Additional grid intelligence should inform planners and engineers to build what is needed, when it is needed, to extend the life of assets, and to anticipate imminent failures, while at the same time maximizing labor deployment. Operational, management and capital costs should be reduced.
    5. Safety – A more intelligent grid should vastly increase the safety of the electrical delivery system. Not only will this improve the safety of power infrastructure workers by permitting intelligent switches and more robust safety systems, but it will also help ensure the reliability of end-use devices whose uninterrupted operation is critical to society’s safe functioning.
    6. Environmental benefits – not only will reduced energy use contribute to a lessening of carbon emissions, but more efficient allocation of energy to integrate renewable energy will become increasingly important.

    The Pacific Northwest Perspective

    Here in the Pacific Northwest there is particular interest in the implementation of the Smart Grid. The BPA, in conjunction with Battelle, is currently running the largest Smart Grid project in the country here in the Pacific Northwest.

    The Smart Grid Oregon Association is one of the first organizations across the nation to bring together representatives from utilities, governmental agencies, energy consultancies, third party service providers, engineering firms, equipment manufacturers, environmental groups, non profit groups and regulators to envision and evolve a common vision for the statewide and regional implementation of the Smart Grid. The association has already convened one conference that attracted local, regional and national experts to address the emerging regulatory, equity and market models upon which regulatory and incentive structures can be erected.

    The Portland State University courses devoted to the evolution of Smart Grid implementation represent a unique contribution, and serve to provide a base of educated managers that can participate in the ongoing design of the regional implementation.

    The Oregon Public Utility Commission (OPUC) has also been quite active in seeking to lay out a framework for a comprehensive set of regulatory guidelines that safeguard the public investment and ensure stable, economical power, even as we begin to rethink the basic concepts underlying the delivery of energy resources across the region.

    Beginning from today’s status quo…

    Having provided a parallel definition of the Smart Grid derived from an engineering perspective (as opposed to a policy-centric viewpoint), Conrad Eustis began to lay the ground work for exploring the structural aspects of a Smart Grid. To do this he began to lay out the fundamental elements of today’s mostly “dumb” grid.

    He described the network of power generation plants that distribute their energy to bulk power distribution hubs, which in turn route the needed energy to substations and neighborhood feeder lines, ultimately arriving at individual residential, commercial and industrial premises. This grid was designed to flow outwards from the generation plants to the end-user consumption nodes. There is virtually no ability for the distribution system to track its own performance, and the primary way utilities learn about outages is from customer calls. Most of the technology in use today has no communication capabilities, either from the consumption point back up the line to the generation source, or from the generator down to the consumer.

    Conrad Eustis acknowledged the current fascination with the Smart Grid topic: “Lots of people are stepping up to the IT challenges, but there is no shared vision.” Moreover, the regulatory approval path is interminably long, with some approvals taking more than a year to be granted. He acknowledged that with the introduction of AMI technologies at the consumption end, the amount of data will increase exponentially, making analysis of energy usage patterns even more complex. He also noted the lack of standards and utilities’ preference for proprietary metrics over standardization. Even when technologies have been adopted, Conrad pointed out, it can still take decades to integrate a new technological standard, as with DSL technology for transmitting data over phone lines. He expects progress on Smart Grid issues to be equally slow, predicting that it will take decades to fully implement Smart Grid in the United States. “When you automate you have to do it right,” he cautioned.

    More fundamentally, Conrad indicated that the utilities are also cognizant that this evolution of the energy market will change the basis upon which their business models rest. With the entrance of altogether new players into the mix, as well as increased uncertainty about how this market will segment service delivery there is more than enough anxiety to go around.

    Review of the Class structure and readings

    Following Conrad’s presentation, the balance of the class was devoted to reviewing how the materials would be broken down into the various class segments. Some of the later classes were reserved for as-yet unannounced topics, as Jeff felt current events would most likely generate new material by then.

    The major topics that the class would cover were:

    • Integration of wind energy
    • Review of the Pacific Northwest Smart grid project
    • Regulatory issues
    • Customer concerns
    • Sustainability and environmental benefits of Smart grid

    - Jim Thayer

© Smart Grid Oregon