Power and Patience

The U.S. utility industry – particularly its electricity-producing branch (there are also natural gas and water utilities) – has found itself in a new, and very uncomfortable, position. Throughout the first quarter of 2009 it was front and center in the political arena.

Politics has been involved in the U.S. electric generation and distribution industry since its founding by Thomas Edison in the late 19th century. Utilities have been regulated entities almost from the beginning, and especially since the 1930s, when the federal government began to take a much greater role in the direction and regulation of private enterprise and the national economy.

What is new, as we are about to enter the second decade of the 21st century, is that the industry is not only being blamed in large part for a newly discovered pollutant – carbon dioxide, which is naturally ubiquitous in the Earth’s atmosphere – but is also being tasked with pulling the nation out of its worst economic recession since the Great Depression of the 1930s. Oh, and in your spare time, electric utilities: enable the remaking of the automobile industry, eliminate the fossil fuels you have used to generate ubiquitous electricity for 100 years, and accomplish all this while remaining fiscally sound and providing service to all Americans. Finally, please don’t make electricity unaffordable for the majority of Americans.

It’s doubtful that very many people have ever accused politicians of being logical, but in 2009 they seem to have decided to simultaneously defy the laws of physics, gravity, time, history and economics. They want the industry to completely remake itself: going from the centralized, large-plant generation model created by Edison to widely dispersed, smaller-scale generation; from fossil fuel generation to clean “renewable” generation; from a mostly manually controlled and maintained system to a self-healing, ubiquitously digitized and computer-controlled enterprise; and from a marginally profitable (5-7 percent), mostly privately owned system to a massive tax collection system for the federal government.

Is all this possible? The answer likely is yes, but in the timeframe being posited, no.

Despite political co-option of the terms “intelligent utility” and “smart grid” in recent times, the electric utility industry has been working in these directions for many years. Distribution automation (DA) – being able to control the grid remotely – is nothing new. Utilities have been working on DA and SCADA (supervisory control and data acquisition) systems for more than 20 years. They also have been building out communications systems, first analog radio for dispatching service crews to far-flung territories, and in recent times, digital systems to reach all of the millions of pieces of equipment they service. The terms themselves were not invented by politicians, but by utilities themselves.

Prior to 2009, all of these concepts were under way at utilities. WE Energies has a working “pod” of all-digital, self-healing, radial-designed feeders. The concept is being tried in Oklahoma, Canada and elsewhere. But the pods are small and still experimental. Pacific Gas and Electric, PEPCO and a few others have demonstration projects of “artificial intelligence” on the grid to automatically switch power around outages. TVA and several others have new substation-level servers that allow communications with, data collection from and monitoring of IEDs (intelligent electrical devices) while simultaneously providing a “view” into the grid from anywhere else in the utility, including the boardroom. But all of these are relatively small-scale installations at this point. To distribute them across the national grid is going to take time and a tremendous amount of money. The transformation to a smart grid is under way and accelerating. However, to this point, the penetration is relatively small. Most of the grid still is big and dumb.

Advanced metering infrastructure (AMI) actually was invented by utilities, although vendors serving the industry have greatly advanced the art since the mid-1990s. Utilities installed earlier-generation AMI, called automated meter reading (AMR), for about 50 percent of all customers; the other 50 percent still were being read by meter readers traipsing through people’s yards.

AMI, which allows two-way communications with the meters (AMR is mostly one-way), is advancing rapidly, but still has reached less than 20 percent of American homes, according to research by AMI guru Howard Scott and Sierra Energy Group, the research and analysis division of Energy Central. Large-scale installations by Southern Company, Pacific Gas and Electric, Edison International and San Diego Gas and Electric are pushing that percentage up rapidly in 2009, and other utilities are in various stages of pilots. The first installation of a true two-way metering system was at Kansas City Power & Light Co. (now Great Plains Energy) in the mid-1990s.

So the intelligent utility and smart grid were under development by utilities before politicians got into the act. However, the build-out was expected to take perhaps 30 years or more before being completed down to the smallest municipal and co-operative utilities. Many of the smaller utilities haven’t even started pilots. Xcel Energy, Minneapolis, is building a smart grid model in one city, Boulder, Colo., but by May 2009, two of the primary architects of the effort, Ray Gogel and Mike Carlson, had left Xcel. Austin Energy has parts of a smart grid installed, but it still reaches only a portion of Austin’s population, and “home automation” reaches an even smaller proportion.

There are numerous “paper” models existent for these concepts. One, developed by Sierra Energy Group more than three years ago, is shown in Figure 1.

Other major portions of what is being envisioned by politicians have yet to be invented or developed. There is no reasonably priced, reasonably practical electric car, nor any standardized connection system to recharge one. There are no large-scale transmission systems to reach remote windmill farms or solar-generating facilities, and there is large-scale resistance from environmentalists to building such transmission facilities. Despite some political pronouncements, renewable generation other than hydroelectric dams still produces less than 3 percent of America’s electricity, and that percentage is climbing very slowly.

Yes, the federal government was throwing some money at the build-out in early 2009 – about $4 billion for smart grid and some $30 billion to $45 billion for renewable energy. But these are drops in the bucket compared with the amount of money – estimated by responsible economists at $3 trillion or more – required just to build and replace the aging transmission systems and automate the grid. This is money utilities don’t have and can’t get without making the cost of electricity prohibitive for a large percentage of the population. Despite one political pronouncement, windmills in the Atlantic Ocean are not going to replace coal-fired generation in any conceivable time frame, certainly not in the four years of the current administration.

Then there is global warming. As a political movement, global warming serves as a useful stick to the carrot of federal funding for renewable energy. However, the costs to the average American of any type of tax on carbon dioxide are likely to be very heavy.

In the midst of all this, utilities still have to go to public service commissions in all 50 states for permission to raise rates. If they can’t raise rates – something resisted by most PSCs – they can’t generate the cash to pay for this massive build-out. PSC commissioners also are politicians, by the way, with an average tenure of only about four years, which is hardly long enough to learn how the industry works, much less how to radically reconfigure it in a similar time-frame.

Despite a shortage of engineers and other highly skilled workers in the United States, the smart grid and intelligent utilities will be built in the U.S. But it is a generational transformation, not something that can be done overnight. To expect the utility industry to gear up to get all this done in time to “pull us out” of the most serious recession of modern times just isn’t realistic – it’s political. Add to the scale of the problem political wrangling over every concept and every dollar, mix in a lot of government bureaucracy that takes months to decide how to distribute deficit dollars, and throw in carbon mitigation for global warming and it’s a recipe for disaster. Expect the lights to start flickering along about…now. Whether they only flicker or go out for longer periods is out of the hands of utilities – it’s become a political issue.

Modeling Distribution Demand Reduction

In the past, distribution demand reduction was a technique used only in emergency situations a few times a year – if that. It was an all-or-nothing capability that you turned on and hoped for the best until the emergency was over. Few utilities could measure the effectiveness, let alone the potential, of any solutions that were devised.

Now, demand reduction is evolving to better support the distribution network during typical peaking events, rather than just emergencies. However, in this mode, it is important not only to understand the solution’s effectiveness, but to be able to treat it like any other dispatchable load-shaping resource. Advanced modeling techniques and capabilities are allowing utilities to do just that. This paper outlines various methods and tools that allow utilities to model distribution demand reduction capabilities within set time periods, or even in near real time.

Electricity demand continues to outpace the ability to build new generation and the infrastructure needed to meet ever-growing, demand-side increases driven by population growth and smart residences across the globe. In most parts of the world, electrical energy is one of the hallmarks of modern civilization. It helps produce our food, keeps us comfortable, and provides lighting, security, information and entertainment. In short, it is a part of almost every facet of life; without electrical energy, the modern interconnected world as we know it would cease to exist.

Every country has one or more initiatives underway, or in planning, to deal with some aspect of generation and storage, delivery or consumption issues. Additionally, greenhouse gases (GHG) and carbon emissions need to be tightly controlled and monitored. This must be carefully balanced with expectations from financial markets that utilities deliver balanced and secure investment portfolios by demonstrating fiduciary responsibility to sustain revenue projections and measured growth.

The architects of today’s electric grid probably never envisioned the day when electric utility organizations would purposefully take measures to reduce the load on the network, deal with highly variable localized generation and reverse power flows, or anticipate a regulatory climate that impacts the decisions for these measures. They designed the electric transmission and distribution systems to be robust, flexible and resilient.

When first conceived, the electric grid was far from stable and resilient. It took growth, prudence and planning to continue the expansion of the electric distribution system. This grid was made up of a limited number of real power and reactive power devices that responded to occasional changes in power flow and demand. However, it was also designed in a world with far fewer people, with a virtually unlimited source of power, and without much concern or knowledge of the environmental effects that energy production and consumption entail.

To effectively mitigate these complex issues, a new type of electric utility business model must be considered. It must rapidly adapt to ever-changing demands in terms of generation, consumption, and environmental and societal benefits. A grid made up of many intelligent and active devices that can manage consumption from both the consumer and utility side of the meter must be developed. This new business model will utilize demand management as a key element of the operation of the utility, while at the same time shaping consumer spending behavior.

To that end, a holistic model is needed – one that understands all aspects of the energy value chain across generation, delivery and consumption, and can optimize the solution in real time. While a unifying model may still be a number of years away, much can be gained today from modeling and visualizing the distribution network to gauge the effect that demand reduction can – and does – have in near real time. The solutions that follow are worth considering.

Advanced Feeder Modeling

First, a utility needs to understand in more detail how its distribution network behaves. When distribution networks were conceived, they were designed primarily with sources (the head of the feeder and substation) and sinks (the consumers, or load) spread out along the distribution network. Power flows were assumed to run in one direction only, and the feeders were modeled for the largest peak level.

Voltage and reactive power (VAR) management were generally considered for loss optimization, not load reduction. No thought was given to limiting power to segments of the network, or to distributed storage or generation, all of which could dramatically affect the flows on the network, even causing reverse flows at times. Sensors to measure voltage and current were applied at the head of the feeder and at a few critical points (mostly in historical problem areas).

Planning feeders at most utilities is an exercise performed when large changes are anticipated (i.e., a new subdivision or major customer) or on a periodic basis, usually every three to five years. Loads were traditionally well understood with predictable variability, so this type of approach worked reasonably well. The utility also was in control of all generation sources on the network (i.e., peakers), and when there was a need for demand reduction, it was controlled by the utility, usually only during critical periods.

Today’s feeders are much more complex, and are being significantly influenced by both generation and demand from entities outside the control of the utility. Even within the utility, various seemingly disparate groups will, at times, attempt to alter power flows along the network. The simple model of worst-case peaking on a feeder is not sufficient to understand the modern distribution network.

The following factors must be considered in the planning model:

  • Various demand-reduction techniques, when and where they are applied and the potential load they may affect;
  • Use of voltage reduction as a load-shedding technique, and where it will most likely yield significant results (i.e., resistive load);
  • Location, size and capacity of storage;
  • Location, size and type of renewable generation systems;
  • Use and location of plug-in electrical vehicles;
  • Standby generation that can be fed into the network;
  • Various social ecosystems and their characteristics to influence load; and
  • Location and types of sensors available.

Generally, feeders are modeled as a single unit with their power characteristic derived from the maximum peaking load and connected kilovolt-amperage (KVA) of downstream transformers. A more advanced model treats the feeder as a series of connected segments. The segment definitions can be arbitrary, but are generally chosen where the utility will want to understand and potentially control these segments differently than others. This may be influenced by voltage regulation, load curtailment, stability issues, distributed generation sources, storage, or other unique characteristics that differ from one segment to the next.
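
The segment-based view described above can be sketched in code. The following Python sketch is purely illustrative – the class names, fields and figures are assumptions for the example, not drawn from any vendor model – but it shows a feeder treated as a series of segments, each carrying its own connected kVA, allocated peak, and any storage or distributed generation:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Segment:
    # All names and numbers here are illustrative, not from any vendor model.
    name: str
    connected_kva: float   # sum of downstream transformer nameplate kVA
    peak_kw: float         # allocated or historical peak demand
    has_storage: bool = False
    dg_kw: float = 0.0     # distributed generation capacity on the segment

@dataclass
class Feeder:
    name: str
    segments: List[Segment] = field(default_factory=list)

    def peak_kw(self) -> float:
        # Simple sum of segment peaks: a conservative stand-in for
        # a true coincident peak.
        return sum(s.peak_kw for s in self.segments)

    def connected_kva(self) -> float:
        return sum(s.connected_kva for s in self.segments)

feeder = Feeder("FDR-12", [
    Segment("seg-1", connected_kva=1500, peak_kw=900),
    Segment("seg-2", connected_kva=800, peak_kw=420, dg_kw=150),
    Segment("seg-3", connected_kva=600, peak_kw=310, has_storage=True),
])
print(feeder.peak_kw(), feeder.connected_kva())   # 1630 2900
```

As the text notes, segment boundaries are arbitrary; in practice they would fall at the switch points where the utility wants to monitor or control one stretch of feeder differently from the next.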

The following is an advanced approach to modeling electrical distribution feeder networks. It provides for segmentation and sensor placement in the absence of a complete network and historical usage model. The modeling combines traditional electrical engineering and power-flow modeling, using tools such as CYME, with non-traditional approaches using geospatial and statistical analysis.

The model builds upon information such as usage data, network diagrams, device characteristics and existing sensors. It then adds elements that could present a discrepancy with the known model, such as social behavior, demand-side programs, and future grid operations, based on both spatio-temporal and statistical modeling. Finally, suggestions can be made about the placement and characteristics of sensors on the network to support system monitoring once it is in place.

Generally, a utility would take a more simplistic view of the problem. It would start by directly applying statistical analysis and stochastic modeling across the grid to develop a generic methodology for selecting the number of sensors and where to place them, based on sensor accuracy, cost and the risk of error introduced by basic modeling assumptions (load allocation, timing of peak demand, and other influences on error). However, doing so would limit the utility to dealing only with the data it has, in an environment that will be changing dramatically.

The recommended approach performs some analysis to determine what the potential error sources are, which sources are material to the sensor question, and which could influence the system’s power flows. Next, an attempt can be made to geographically characterize where on the grid these influences are most significant. Then, a statistical approach can be applied to develop a model for setting the number, type and location of additional sensors. Lastly, sensor density and placement can be addressed.

Feeder Modeling Technique

Feeder conditioning is important to minimize the losses, especially when the utility wants to moderate voltage levels as a load modification method. Without proper feeder conditioning and sufficient sensors to monitor the network, the utility is at risk of either violating regulatory voltage levels, or potentially limiting its ability to reduce the optimal load amount from the system during voltage reduction operations.
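
The voltage-reduction trade-off described above is often summarized by a conservation voltage reduction (CVR) factor – the percent load change per percent voltage change, which is highest for resistive load. A minimal sketch follows; the CVR factor and load figures are illustrative assumptions, not measured values, while the 114-126 V band is ANSI C84.1 Range A for 120 V nominal service:

```python
def cvr_load_reduction(load_kw, voltage_drop_pct, cvr_factor):
    """Estimate kW shed by lowering voltage on a feeder segment.

    cvr_factor: percent load change per percent voltage change
    (illustrative; largely resistive segments tend toward higher values).
    """
    return load_kw * cvr_factor * voltage_drop_pct / 100.0

def within_ansi(service_voltage):
    # ANSI C84.1 Range A for 120 V nominal service: 114-126 V.
    return 114.0 <= service_voltage <= 126.0

# A 2.5 percent reduction on a 4,000 kW, largely resistive segment,
# assuming an illustrative CVR factor of 0.8:
shed = cvr_load_reduction(4000, 2.5, cvr_factor=0.8)
print(round(shed, 1))          # 80.0 (kW)
print(within_ansi(120 * 0.975))  # True: 117 V stays inside Range A
```

This is exactly why the text stresses sensing: without enough voltage measurements along the segment, the utility cannot tell whether the lowest customer voltage is still inside the regulatory band, and must either risk a violation or leave reduction potential on the table.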

Traditionally, feeder modeling is a planning activity done at periodic (for example, yearly) intervals or ahead of an expected change in usage. Tools such as CYME’s CYMDIST provide feeder analysis using:

  • Balanced and unbalanced voltage drop analysis (radial, looped or meshed);
  • Optimal capacitor placement and sizing to minimize losses and/or improve voltage profile;
  • Load balancing to minimize losses;
  • Load allocation/estimation using customer consumption data (kWh), distribution transformer size (connected kVA), real consumption (kVA or kW) or the REA method, with the algorithm treating multiple metering units as fixed demands and large metered customers as fixed loads;
  • Flexible load models for uniformly distributed loads and spot loads featuring independent load mix for each section of circuit;
  • Load growth studies for multiple years; and
  • Distributed generation.
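
The load allocation/estimation item in the list above can be illustrated with a short sketch: unmetered feeder demand is spread across distribution transformers in proportion to connected kVA, while large metered customers are held at their metered value as fixed load. The transformer names and figures below are invented for the example:

```python
def allocate_load(feeder_demand_kw, transformers, fixed_loads_kw):
    """Spread unmetered feeder-head demand across transformers by connected kVA.

    transformers:   {name: connected kVA}
    fixed_loads_kw: {name: metered kW} -- large metered customers held fixed,
                    mirroring the treatment described for CYMDIST-style
                    load allocation.
    """
    remaining = feeder_demand_kw - sum(fixed_loads_kw.values())
    pool = {n: kva for n, kva in transformers.items() if n not in fixed_loads_kw}
    total_kva = sum(pool.values())
    alloc = {n: remaining * kva / total_kva for n, kva in pool.items()}
    alloc.update(fixed_loads_kw)   # metered customers keep their metered demand
    return alloc

alloc = allocate_load(
    1000.0,                                        # measured feeder-head demand
    {"T1": 500, "T2": 300, "T3": 200, "BIG": 750}, # connected kVA
    {"BIG": 400.0},                                # metered large customer
)
# 600 kW spread over 1,000 kVA of unmetered transformers:
print(alloc)  # {'T1': 300.0, 'T2': 180.0, 'T3': 120.0, 'BIG': 400.0}
```

A real allocation would also reconcile against kWh billing data and losses, but the proportional-to-kVA step is the core of the technique.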

However, in many cases, much of the information required to run an accurate model is not available: the data may not exist, the feeder usage paradigm may be changing, the sampling period may not represent true usage of the network, the network usage may be about to undergo significant change, or other non-electrical factors may intervene.

This represents a bit of a chicken-or-egg problem. A utility needs to condition its feeders to change the operational paradigm, but it also needs operational information to make decisions on where and how to change the network. The solution is a combination of using existing known usage and network data, and combining it with other forms of modeling and approximation to build the best future network model possible.

Therefore, this exercise refines traditional modeling with three additional techniques: geospatial analysis; statistical modeling; and sensor selection and placement for accuracy.

If a distribution management system (DMS) will be deployed, or is being considered, its modeling capability may be used as an additional basis and refinement employing simulated and derived data from the above techniques. Lastly, if high accuracy is required and time allows, a limited number of feeder segments can be deployed and monitored to validate the various modeling theories prior to full deployment.

The overall goals for using this type of technique are:

  • Limit customer over- or under-voltage;
  • Maximize returned megawatts in the system in load reduction modes;
  • Optimize the effectiveness of the DMS and its models;
  • Minimize cost of additional sensors to only areas that will return the most value;
  • Develop automated operational scenarios, test and validation prior to system-wide implementation; and
  • Provide a foundation for additional network automation capabilities.

The first step starts by setting up a short period of time to thoroughly vet possible influences on the number, spacing and value offered by additional sensors on the distribution grid. This involves understanding and obtaining the information that will most influence the model, and therefore, the use of sensors. Information could include historical load data, distribution network characteristics, transformer nameplate loading, customer survey data, weather data and other related information.

The second step is the application of geospatial analysis to identify areas of the grid most likely to have influences driving a need for additional sensors. It is important to recognize that within this step is a need to correlate those influential geospatial parameters with load profiles of various residential and commercial customer types. This step represents an improvement over simply applying the same statistical analysis generically over the entirety of the grid, allowing for two or more “grades” of feeder segment characteristics for which different sensor standards would be developed.
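
A toy version of this second, geospatial step might score each feeder segment on a few influence factors and bucket it into one of two sensor “grades.” The factor names, weights and threshold below are assumptions chosen purely for illustration:

```python
def grade_segments(segments, threshold=0.5):
    """Assign each feeder segment a sensor 'grade' from geospatial factors.

    Each factor is a 0-1 score; the weights are illustrative assumptions,
    not utility-derived values. Grade 'A' segments would get denser sensing
    standards than grade 'B' segments.
    """
    weights = {"pv_penetration": 0.4, "ev_density": 0.3, "profile_variability": 0.3}
    graded = {}
    for name, factors in segments.items():
        score = sum(weights[k] * factors.get(k, 0.0) for k in weights)
        graded[name] = "A" if score >= threshold else "B"
    return graded

grades = grade_segments({
    "seg-1": {"pv_penetration": 0.8, "ev_density": 0.6, "profile_variability": 0.7},
    "seg-2": {"pv_penetration": 0.1, "ev_density": 0.2, "profile_variability": 0.3},
})
print(grades)  # {'seg-1': 'A', 'seg-2': 'B'}
```

The point of the exercise matches the text: rather than one generic statistical rule for the whole grid, each grade gets its own sensor standard, correlated with the load profiles of the customer types on those segments.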

The third step is the statistical analysis and stochastic modeling to develop recommended standards and a methodology for determining sensor placement, based on the characteristic segments developed from the geospatial assessment. Items set aside as not material to sensor placement serve as a necessary input to the coming “predictive model” exercise.

Lastly, a traditional electrical and accuracy-based analysis is used to model the exact number and placement of additional sensors to support the derived models and the planned usage of the system for all scenarios depicted in the model – not just summertime peaking.
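
The statistical portion of these steps can be sketched as a crude Monte Carlo experiment: for each candidate sensor count, randomly place sensors among the nodes of a segment, estimate the resulting worst-case estimation error, and accept the smallest count that meets a target. The distance-based error model below is a toy stand-in for a real power-flow-based state-estimation study:

```python
import random

def estimation_error(n_sensors, n_nodes, base_error=0.05, trials=2000, seed=7):
    """Average worst-case estimation error when only n_sensors of n_nodes
    are metered and the rest are interpolated.

    Toy assumption: error at a node grows with its distance (in nodes)
    to the nearest sensor. A real study would use power-flow results.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        sensed = rng.sample(range(n_nodes), n_sensors)
        worst = max(min(abs(i - s) for s in sensed) * base_error
                    for i in range(n_nodes))
        total += worst
    return total / trials

def sensors_needed(n_nodes, max_error):
    """Smallest sensor count whose simulated error meets the target."""
    for n in range(1, n_nodes + 1):
        if estimation_error(n, n_nodes) <= max_error:
            return n
    return n_nodes

n = sensors_needed(20, max_error=0.2)
print(n)  # smallest count meeting the (toy) error target
```

In practice the toy error model would be replaced by state-estimation error drawn from power-flow runs such as the CYMDIST analyses described earlier, and the cost of each sensor would be weighed against the value of the error reduction it buys.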

Conclusion

The modern distribution network built for the smart grid will need to undergo significantly more detailed planning and modeling than a traditional network. No one tool is suited to the task, and it will take multiple disciplines and techniques to derive the most benefit from the modeling exercise. However, if a utility embraces the techniques described within this paper, it will not only have a better understanding of how its networks perform in various smart grid scenarios, but will also be better positioned to optimize its networks for load and losses.

Measuring Smart Metering’s Progress

Smart or advanced electricity metering, using a fixed network communications path, has been with us since pioneering installations in the U.S. Midwest in the mid-1980s. That was 25 years ago, and in the time since we have seen incredible advancements in information and communication technologies.

Remember the technologies of 1985? The very first mobile phones were just being introduced. They weighed as much as a watermelon and cost nearly $9,000 in today’s dollars. SAP had just opened its first sales office outside of Germany, and Oracle had fewer than 450 employees. The typical personal computer had a 10 megabyte hard drive, and a dot-com Internet domain was just a concept.

We know how much these technologies have changed since then, how they have been embraced by the public, and (to some degree at least) where they are going in the future. This article looks at how smart metering technology has developed over the same period. What has been the catalyst for advancements? And, most important, what does that past tell us about the future of smart metering?

Peter Drucker once said that “trying to predict the future is like trying to drive down a country road at night with no lights while looking out the back window.”

Let’s take a brief look out the back window, before driving forward.

Past Developments

Developments in the parallel field of wireless communications, with its strong standards base, are readily delineated into clear technology generations. While we cannot as easily pinpoint definitive phases of smart metering technology, we can see some major transitions and discern patterns from the large deployments illustrated in Figure 1, and perhaps, even identify three broad smart metering “generations.”

The first generation is probably the clearest to delineate. The first 10 years of smart metering deployments (until about 2004) were all one-way wireless, limited two-way wireless, or very low-bandwidth power-line carrier (PLC) communications to the meter, concentrated in the U.S. The market at this time was dominated by Distribution Control Systems, Inc. (DCSI) and what was then CellNet Data Systems, Inc. Itron’s Fixed Network 2.0 and Hunt Technologies’ TS1 solution would also fit into this generation.

More than technology, the strongest characteristic of this first generation is the limited scope of business benefits considered. With the exception of Puget Sound Energy’s time-of-use pricing program, the business case for these early deployments was focused almost exclusively on reducing meter reading costs. Effectively, these early deployments reproduced the same business case as mobile automated meter reading (AMR).

By 2004, approximately 10 million of these smart meters had been installed in the U.S. (about 7 percent of the national total); however, whatever public perception of smart metering there was at the time was decidedly mixed. The deployments received scant media coverage, which focused almost solely on troubled time-of-use pricing programs, perhaps digressing briefly to cover smart metering vendor mergers and lawsuits. But generally smart meters, by any name, were unknown among the general population.

Today’s Second Generation

By the early 2000s, some utilities, notably PPL and PECO, both in Pennsylvania, were beginning to expand the use of their smart metering infrastructure beyond the simple meter-to-cash process. With incremental enhancements to application integration that were based on first generation technology, they were initiating projects to use smart metering to: transform outage identification and response; explore more frequent reading and more granular data; and improve theft detection.

These initiatives were the first to give shape to a new perspective on smart metering, but it was power company Enel’s dramatic deployment of 30 million smart meters across Italy that crystallized the second generation.

For four years leading to 2005, Enel fully deployed key technology advancements, such as universal and integrated remote disconnect and load limiting, that previously did not exist on any real scale. These changes enabled a dramatically broader scope of business benefits as this was the first fully deployed solution designed from the ground up to look well beyond reducing meter reading costs.

The impact of Enel’s deployment and subsequent marketing campaign on smart metering developments in other countries should not be underestimated, particularly among politicians and regulators outside the U.S. In European countries, particularly Italy, and regions such as Scandinavia, the same model (and in many cases the same technology) was deployed. Enel demonstrated to the rest of the world what could be done without any high-profile public backlash. It set a competitive benchmark that had policymakers in other countries questioning progress in their jurisdictions and challenging their own utilities to achieve the same.

North American Resurgence

As significant as Enel’s deployment was to the global development of smart metering, it is not the basis for the ongoing smart metering technology deployments now concentrated in North America.

Beyond the challenges of translating a European technology to North America, the business objectives and customer environments were different. As the Enel deployment came to an end, governments and regulators – particularly those in California and Ontario – were looking for smart metering technology to be the foundation for major energy conservation and peak-shifting programs. They expected the technology to support a broad range of pricing programs, provide on-demand reads within minutes, and gather hourly interval profile data from every meter.

Utilities responded. Pacific Gas & Electric (PG&E), with a total of 9 million electric and natural gas meters, kick-started the movement. Others, notably Southern California Edison (SCE), invested the time and effort to advance the technology, championing additions such as remote firmware upgrades and home area network support.

As a result, a near-dormant North American smart metering market was revived in 2007. The standard functionality we see in most smart metering specifications today, and the technology basis for most planned deployments in North America, were established.

These technology changes also contributed to a shift in public awareness of smart meters. As smart metering was considered by more local utilities, and more widely associated with growing interest in energy conservation, media interest grew exponentially. Between 2004 and 2008, references to smart or advanced meters (carefully excluding smart parking meters) in the world’s major newspapers nearly doubled every year, to the point where the technology is now almost common knowledge in many countries.

The Coming Third Generation

In the 25 years since smart meters were first substantially deployed, the technology has progressed considerably. While progress has not been as rapid as advancements in consumer communications technologies, smart metering developments such as universal interval data collection, integrated remote disconnect and load limiting, remote firmware upgrades and links to a home network are substantial advancements.

All of these advancements have been driven by the combination of forward-thinking government policymakers, a supportive regulator and, perhaps most important, a large utility willing to invest the time and effort to understand and demand more from the vendor community.

With this understanding of the drivers, and based on the technology deployment plans, we can map out key future smart metering technology directions. We expect the next generation of smart metering to exhibit two dominant differences from today’s technology: increased standardization across the entire smart metering solution scope, and changes to back-office systems architecture that enable the extended benefits of smart metering.

Increased Standardization

The transition to the next generation of smart metering will be known more for its changes to how a smart meter works, rather than what a smart meter does.

The direct functions of a smart meter appear to be largely set. We expect to see continued incremental advancements in data quality and read reliability; improved power quality measurement; and more universal deployment of a remote disconnect and load limiting.

But how a smart meter provides these functions will further change. We believe the smart meter will become a much more integrated part of two networks: one inside the home; the other along the electricity distribution network.

Generally, an expectation of standards for communication from the meter into a home area network is well accepted by the industry – although the actual standard to be applied is still in question. As this home area network develops, we expect a smart meter to increasingly become a member of this network, rather than the principal mechanism in creating one.

As other smart grid devices are deployed further down the low voltage distribution system, we expect utilities to demand that the meter conform to these network communications standards. In other words, utilities will continue to reject the idea that other types of smart grid devices – those with even greater control of the electrical network – be incorporated into a proprietary smart meter local area network.

It appears that most of this drive to standardization will not be led by utilities in North America. For one, technology decisions in North America are rapidly being completed (for this first round of replacements, at least). The recent Federal Energy Regulatory Commission (FERC) staff report, entitled “2008 Assessment of Demand Response and Advanced Metering,” found that of the 145 million meters in the U.S., utilities have already contracted to replace nearly 52 million with smart meters over the next five to seven years.

IBM’s analysis indicated that larger utilities have declared plans to replace these meters even faster – approximately 33 million smart meters by 2013. The meter communications approach, and quite often the vendors chosen for these deployments, has typically already been selected, leaving little room to fundamentally change the underlying technological approach.

Outside of Worldwide Interoperability for Microwave Access (WiMAX) experiments by utilities such as American Electric Power (AEP) and those in Ontario, and shared services initiatives in Texas and Ontario, none of the remaining large North American utilities appear to have a compelling need to drive dramatic technology advancements, given rate and time pressures from regulators.

Conversely, a few very large European programs are poised to push the technology toward much greater standards adoption:

  • EDF in France has started a trial of 300,000 meters using standard power line carrier (PLC) communications from the meter to the concentrator. Full deployment to all 35 million EDF meters is expected to follow.
  • The U.K. government recently announced a mandatory replacement of both electricity and natural gas meters for all 46 million customers between 2010 and 2020. The U.K.’s unique market structure with competitive retailers having responsibility for meter ownership and operation is driving interoperability standards beyond currently available technology.
  • With its PRIME initiative, the Spanish utility Iberdrola plans to develop a new PLC-based, open standard for smart metering. It is starting with a pilot project in 2009, leading to full deployment to more than 10 million residential customers.

The combination of these three smart metering projects alone will affect 91 million smart meters, equal to roughly two-thirds of the total U.S. market. This European focus is expected to grow now that the Iberdrola project has taken the first steps to become the basis for the European Commission’s Open Meter initiative, involving 19 partners from seven European countries.
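The arithmetic behind that two-thirds comparison can be checked directly (meter counts in millions, as cited above):

```python
# Meter counts (millions) from the three European programs cited above.
edf_france = 35
uk_mandate = 46
iberdrola_spain = 10

european_total = edf_france + uk_mandate + iberdrola_spain  # 91 million
us_market = 145  # total U.S. meters, per the FERC staff report

share_of_us = european_total / us_market
print(european_total, round(share_of_us, 2))  # 91 0.63 - roughly two-thirds
```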

Rethinking Utility System Architectures

Perhaps the greatest changes to future smart metering systems will have nothing to do with the meter itself.

To date, standard utility applications for customer care and billing, outage management, and work management have been largely unchanged by smart metering. In fact, to reduce risk and meet schedules, utilities have understandably shielded legacy systems from the changes needed to support a smart meter rollout or new tariffs. They have looked to specialized smart metering systems, particularly meter data management systems (MDMS), to bridge the gap between a new smart metering infrastructure and their legacy systems.

As a result, many of the potential benefits of a smart metering infrastructure have yet to be fully realized. For instance, billing systems still operate on cycles set by past meter reading routes. Most installed outage management applications are unable to take advantage of a direct near-real-time connection to nearly every end point.

As application vendors catch up, we expect the third generation of smart meters to be characterized by changes to the overall utility architectures and the applications that comprise them. As applications are enhanced, and enterprise architectures adapted to the smart grid, we expect to see significant architectural changes, such as:

  • Much of the message brokering function – routing data from disparate head-end systems to utility applications, today housed in an MDMS – will migrate to the utility’s service bus.
  • As smart meters increasingly become devices on a standards-based network, more general network management applications now widely deployed for telecommunications networks will supplement vendor head-end systems.
  • Complex estimating and editing functions will become less valuable as the technology in the field becomes more reliable.
  • Security of the system, from home network to the utility firewall, needs to meet the much higher standards associated with grid operations, rather than those arising from the current meter-as-the-cash-register perspective.
  • Add-on functionality provided by some niche vendors will migrate to larger utility systems as they evolve to a smart metering world. For instance, Web presentment of interval data to customers will move from dedicated sites to become a broad part of utilities’ online offerings.
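The first architectural change above – moving message brokering out of the MDMS and onto the utility’s service bus – is essentially a publish/subscribe pattern. A minimal sketch (class, topic and field names are illustrative, not drawn from any vendor product):

```python
from collections import defaultdict

class ServiceBus:
    """Toy enterprise service bus: head-end systems publish meter
    events to topics; billing, outage and other apps subscribe."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        # Deliver the message to every application subscribed to the topic.
        for handler in self._subscribers[topic]:
            handler(message)

# Illustrative wiring: two applications consume the same meter reading
# without the head-end system knowing about either of them.
bus = ServiceBus()
readings = []
bus.subscribe("meter.reading", readings.append)
bus.subscribe("meter.reading", lambda m: print("billing saw", m["kwh"]))

bus.publish("meter.reading", {"meter_id": "M-001", "kwh": 12.4})
```

The point of the pattern is decoupling: a new application subscribes to the bus rather than being wired into each vendor head-end or into the MDMS.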

Conclusions

Looking back at 25 years of smart metering technology development, we can see that while it has progressed, it has not developed at the pace of the consumer communications and computing technologies it relies upon – and for good reasons.

Utilities operate under a very different investment timeframe compared to consumer electronics; decisions made by utilities today need to stand for decades, rather than mere months. And while consumer expectations of technology and service continue to grow with each generation, in the regulated electricity distribution industry customer demands are often filtered through a blurry political and regulatory lens.

Even with these constraints, smart metering technology has evolved rapidly, and will continue to change in the future. The next generation, with increased standardized integration with other networks and devices, as well as changes to back office systems, will certainly transform what we now call smart metering. So much so that, much sooner than 25 years from now, those looking back at today’s smart meters may very well see them as we now see those watermelon-sized cell phones of the 1980s.

Silver Spring Networks

When engineers built the national electric grid, their achievement made every other innovation built on or run by electricity possible – from the car and airplane to the radio, television, computer and the Internet. Over decades, all of these inventions have gotten better, smarter and cheaper while the grid has remained exactly the same. As a result, our electrical grid is operating under tremendous stress. The Department of Energy estimates that by 2030, demand for power will outpace supply by 30 percent. And this increasing demand for low-cost, reliable power must be met alongside growing environmental concerns.

Silver Spring Networks (SSN) provides the first proven technology to enable the smart grid. SSN is a complete smart grid solutions company that enables utilities to achieve operational efficiencies, reduce carbon emissions and offer their customers new ways to monitor and manage their energy consumption. SSN provides hardware, software and services that allow utilities to deploy and run unlimited advanced applications, including smart metering, demand response, distribution automation and distributed generation, over a single, unified network.

The smart grid should operate like the Internet for energy, without proprietary networks built around a single application or device. In the same way that one can plug any laptop or device into the Internet, regardless of its manufacturer, utilities should be able to “plug in” any application or consumer device to the smart grid. SSN’s Smart Energy Network is based on open, Internet Protocol (IP) standards, allowing for continuous, two-way communication between the utility and every device on the grid – now and in the future.

The IP networking standard adopted by Federal agencies has proven secure and reliable over decades of use in the information technology and finance industries. This network provides a high-bandwidth, low-latency and cost-effective solution for utility companies.

SSN’s network interface cards (NICs) are installed in “smart” devices, like smart meters at the consumer’s home, allowing them to communicate with SSN’s access points. Each access point communicates with networked devices over a radius of one to two miles, creating a wireless communication mesh that connects every device on the grid to one another and to the utility’s back office.
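The one-or-two-mile access point coverage described above can be illustrated with a simple nearest-in-range assignment (coordinates, names and the flat-grid distance model are hypothetical simplifications; a real mesh also routes traffic meter-to-meter):

```python
import math

def assign_to_access_point(device, access_points, max_radius_miles=2.0):
    """Return the nearest access point within radius, or None.
    Points are (x, y) in miles on a flat local grid - a simplification."""
    best, best_dist = None, max_radius_miles
    for name, location in access_points.items():
        d = math.dist(device, location)
        if d <= best_dist:
            best, best_dist = name, d
    return best

# Hypothetical access point locations (miles).
aps = {"AP-north": (0.0, 0.0), "AP-south": (3.0, 0.0)}
print(assign_to_access_point((0.5, 0.5), aps))    # AP-north
print(assign_to_access_point((10.0, 10.0), aps))  # None (out of range)
```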

Using the Smart Energy Network, utilities will be able to remotely connect or disconnect service, send pricing information to customers who can understand how much their energy is costing in real time, and manage the integration of intermittent renewable energy sources like solar panels, plug-in electric vehicles and wind farms.

In addition to providing The Smart Energy Network and the software/firmware that makes it run smoothly, SSN develops applications like outage detection and restoration, and provides support services to their utility customers. By minimizing or eliminating interruptions, the self-healing grid could save industrial and residential consumers over $100 billion per year.

Founded in 2002 and headquartered in Redwood City, Calif., SSN is a privately held company backed by Foundation Capital, Kleiner Perkins Caufield & Byers and Northgate Capital. The company has over 200 employees and a global reach, with partnerships in Australia, the U.K. and Brazil.

SSN is the leading smart grid solutions provider, with successful deployments with utilities serving 20 percent of the U.S. population, including Florida Power & Light (FPL), Pacific Gas & Electric (PG&E), Oklahoma Gas & Electric (OG&E) and Pepco Holdings, Inc. (PHI), among others.

FPL is one of the largest electric utilities in the U.S., serving approximately 4.5 million customers across Florida. In 2007, SSN and FPL partnered to deploy SSN’s Smart Energy Network to 100,000 FPL customers. It began with rigorous environmental and reliability testing to ensure that SSN’s technology would hold up under the harsh environmental conditions in some areas of Florida. Few companies are able to sustain the scale and quality of testing that FPL required during this deployment, including power outage notification testing, exposure to water and salt spray, and network throughput performance testing for self-healing failover characteristics.

SSN’s solution has met or exceeded all FPL acceptance criteria. FPL plans to continue deployment of SSN’s Smart Energy Network at a rate of one million networked meters per year beginning in 2010 to all 4.5 million residential customers.

PG&E is currently rolling out SSN’s Smart Energy Network to all 5 million electric customers over a 70,000-square-mile service area.

OG&E, a utility serving 770,000 customers in Oklahoma and western Arkansas, worked with SSN to deploy a small-scale pilot project to test The Smart Energy Network and gauge customer satisfaction. The utility deployed SSN’s network, along with a web-based energy management portal, in 25 homes in northwest Oklahoma City. Another 6,600 apartments were given networked meters to allow remote initiation and termination of service.

Consumer response to the project was overwhelmingly positive. Participating residents said they gained flexibility and control over their household’s energy consumption by monitoring their usage on in-home touch screen information panels. According to one customer, “It’s the three A’s: awareness, attitude and action. It increased our awareness. It changed our attitude about when we should be using electricity. It made us take action.”

Based on the results, OG&E presented a plan for expanded deployment to the Oklahoma Corporation Commission for its consideration.

PHI recently announced its partnership with SSN to deliver The Smart Energy Network to its 1.9 million customers across Washington, D.C., Delaware, Maryland and New Jersey. The first phase of the smart grid deployment will begin in Delaware in March 2009 and involve SSN’s advanced metering and distribution automation technology. Additional deployment will depend on regulatory authorization.

The impact of energy efficiency is enormous. More aggressive energy efficiency efforts could cut the growth rate of worldwide energy consumption by more than half over the next 15 years, according to the McKinsey Global Institute. The Brattle Group states that demand response could reduce peak load in the U.S. by at least 5 percent over the next few years, saving over $3 billion per year in electricity costs. The discounted present value of these savings would be $35 billion over the next 20 years in the U.S. alone, with significantly greater savings worldwide.
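The Brattle Group’s $35 billion present value can be reproduced with a standard annuity calculation. The discount rate is not given in the source; roughly 6 percent (an assumption) makes the figures line up:

```python
def present_value(annual_saving, rate, years):
    """Discounted present value of a constant annual saving."""
    return sum(annual_saving / (1 + rate) ** t for t in range(1, years + 1))

# $3 billion per year, 20 years, assumed 6 percent discount rate.
pv = present_value(annual_saving=3.0, rate=0.06, years=20)
print(round(pv, 1))  # 34.4 (billions), close to the quoted $35 billion
```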

Governments throughout the EU, Canada and Australia are now mandating implementation of alternate energy and grid efficiency network programs. The Smart Energy Network is the technology platform that makes energy efficiency and the smart grid possible. And, it is working in the field today.

PHEVs Are on a Roll

The electric vehicle first made its appearance about a century ago, but it is only in recent years – months, to be more precise – that it has achieved breakthrough status as, quite possibly, the single most important technological development having a positive impact on society today.

Climate change, over-dependence on fossil fuels and the current economic crisis have combined to impact the automobile sector to an unforeseen degree, forcing technological innovation to direct its urgent attention toward the development of electric vehicles as an alternative means of transport and a substitute for internal combustion engines. Many countries are supporting the approach in their political, energy and industrial planning directed toward the introduction of this type of vehicle. For example, the U.S. has a target of 1 million plug-in hybrid electric vehicles (PHEVs) in operation by 2015; Spain expects to achieve the same number by 2014.

It is certainly true that there exist pressures capable of driving the introduction of the PHEV forward, but technological advances are the factors that underpin and give coherence to its development. There are several progressive improvements being made in technology, materials, and power generation and supply, which will support the deployment and use of electric vehicles in the coming years. They include: advances in battery manufacture and electronics (particularly in terms of power); the development of new communication protocols; ever more efficient and flexible information technologies; the growth of renewable energy sources in the electrical energy generation mix; and the concept of smart grids focused on more efficient electricity distribution. All of these improvements are underscored by a much greater degree of passion and personal involvement by the end-user.

Stakeholders and Utilities

With technology as the underlying catalyst, the scenario for electric vehicle use will include the impact and involvement of various stakeholders. These include: society itself, government and municipal entities, regulators, universities and research institutions, vehicle manufacturers, the ancillary automobile industry and its technological partners, battery manufacturers, the manufacturers of components, electrical and electronics systems, infrastructure suppliers, companies dedicated to mediation, billing and payment methods, ICT (Information and Communication Technology) companies and, of course, utilities.

If the electric vehicle is to become a genuinely alternative means of transportation, then this will depend on the involvement of, and interrelationship between, the above groups. One example of this is the formalizing of various agreements between certain stakeholders at both the national and international level (for example, Saab, Volvo, Vattenfall and ETC Battery in Sweden; Renault, PSA Peugeot Citroën, Toyota and EDF in France; and Iberdrola and General Motors at a global level) and the establishment of consortiums such as EDISON (Electric Vehicles in a Distributed and Integrated Market using Sustainable Energy and Open Networks) in Denmark.

If there is one dimension, however, which will be impacted most throughout the whole of the value chain, it is the electrical one. From power generation to retail, the introduction of this vehicle will require changes in current business models and, foreseeably, in utilities’ operational models. The short-term aim is to provide electrical energy for use in these vehicles in a more reliable and efficient way.

Battery Charging Impact

Given that charging could be the action having the greatest impact on the electrical sector, there are various alternatives for effecting it. These include:

  • Substitution. This involves a rapid exchange of vehicles and/or batteries, and the subsequent charging of both in an offline mode. It would require sharing of cars (vehicle usage and substitution) and battery charging stations for quick and automated battery exchange.
  • Direct Charging. This includes regular charging points situated in car parks, shopping centers and residences, providing battery recharge while the vehicle is parked. There would also need to be fast-charging points capable of charging a battery in 10 to 15 minutes.

To examine the advantages and disadvantages of the above methods, it helps to note the various pilot projects and research programs underway at both the conceptual and demonstration stages. These indicate the possibility of a coexistence scenario. Offline charging could be the least invasive method given the current system of fuel distribution. A network of “electricity stations” (as opposed to petrol stations) could provide a dedicated system of energy generation in a given location. Direct charging, by contrast, introduces greater uncertainty and a greater impact on the electricity grid, given the itinerant nature of user demand and the user’s expected freedom to choose a particular charging method or location; it requires a system that better adapts to the lifestyle of the user.

Direct Charging and Its Impact on the Electricity Grid

Direct charging depends on various factors – notably battery characteristics (directly related to vehicle performance) and the range of time spans chosen to carry out the recharge. Associated with these are other variables: charging voltage, mode (DC, single-phase AC, and three-phase AC) and the characteristics of the charging systems employed: technology, components and their location, connectors, insulation, and the power and control electronics. All of these variables will influence the charging times, and will vary according to the power input (more power, less time) as shown in Figure 1. Therefore, depending on the kind of recharging, there will be an impact not only on the characteristics of the individual charging points but also on the supporting system.

Using extended range electrical vehicles (EREV) such as the Chevrolet Volt or Opel/Vauxhall Ampera as an example, it is estimated that annual home energy consumption from vehicle charging could be around 20 percent of the total, although some studies suggest this amount may be twice as much, based on the customer profile.

Based on the charging power input – and this is, of course, related to the methodology employed – it would be possible to fully recharge an EREV battery in about three hours. A fully charged battery would enable operation solely on electrical power for approximately 40 miles, a distance representing about 80 percent of daily car journeys based on current averages. For a scenario like this it would be possible to use a charging method of about 4 kilowatts at 220 volts.
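The charge-time arithmetic here is simply energy divided by power. Assuming a usable battery capacity of about 12 kWh for an EREV of this class (an assumption; exact figures vary by model):

```python
usable_battery_kwh = 12.0  # assumed usable capacity for an EREV of this class
charger_kw = 4.0           # ~4 kW at 220 V, as described above

hours_to_full = usable_battery_kwh / charger_kw
print(hours_to_full)  # 3.0 hours, matching the estimate in the text

# Electric range at an assumed consumption of 0.3 kWh per mile:
miles = usable_battery_kwh / 0.3
print(round(miles))  # 40 miles
```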

If we analyze the impact in terms of energy supply and power capacity, there appear to be no medium-term problems in supporting this charging, according to the data above. This is, however, a matter which depends on each individual country and also on the power transmission interconnections between them. In terms of the instantaneous power available, the charging method will have a greater or lesser impact, particularly on the distribution assets, depending on how it is carried out. Figure 2 shows how the power varies according to the charging method and the time of day when it is in use, taking into account the daily energy demand curve. We can, therefore, identify different scenarios from the most favorable (slow charging at off-peak times) to the most unfavorable (fast charging at peak times). With the latter we may find ourselves with distribution assets (e.g., transformers) incapable of supporting the heavy load of instantaneous energy consumption.

It is necessary to link electric vehicle charging to the daily energy demand curve and instantaneous power availability in such a way that charging impacts the system as little as possible and maximizes the available energy resources. Ideally, there would be a move toward slow charging during off-peak periods. Furthermore, this kind of charging would not impact users, as 90 percent of vehicles are not in use between 11 p.m. and 6 a.m. Operating under such conditions would also permit the use of excess wind-generated power during off-peak times, enabling a clean vehicle such as the PHEV to use renewable (clean) energy as its primary source.

This all sounds reasonable, but the itinerant nature of vehicle demand, together with relatively limited battery life, means that other variables, such as home charging versus remote charging with the ability to measure consumption and set tariffs, must be taken into account. What will be the charging price? How will charging be carried out when the vehicle is parked neither at home nor at its usual charging center? What method will be used for making payments? Who will be involved in developing all this infrastructure and how will it all interrelate?

Smart Charging

One system providing answers to these questions is smart charging. Based on the concept, purpose and architecture of the smart grid, such technology can optimize charging in the most favorable way by considering several parameters. These may include: the current state of the electrical system; the battery charging level; tariff modes and associated demand-response models which may be applied (such as time-of-use, or TOU, tariffs); and the ability to use energy distributed and stored locally through an energy management system.

Smart charging would be capable of deciding when to charge in relation to different variables (for example, price and energy availability), and which energy sources to use (in-home energy storage, local and decoupled energy supply, plug-in to the distribution grid, etc.) Supporting the vehicle-to-grid (V2G) paradigm would enable managing and deciding not only when and how to best charge the vehicle, but also when to store energy in the vehicle battery that can later be returned to the grid for use in a local mode as a distributed energy source.
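The “decide when to charge” logic can be sketched as a greedy selection of the cheapest hours under a hypothetical time-of-use tariff (prices and hours are invented; a real smart charging system would also weigh grid state, battery level and V2G opportunities):

```python
import math

def cheapest_charging_hours(hourly_price, kwh_needed, charger_kw):
    """Pick the cheapest plug-in hours until the energy need is met (greedy)."""
    hours_needed = math.ceil(kwh_needed / charger_kw)
    ranked = sorted(hourly_price, key=hourly_price.get)  # cheapest first
    return sorted(ranked[:hours_needed])

# Hypothetical TOU prices ($/kWh) for four hours the vehicle is plugged in.
prices = {22: 0.08, 23: 0.07, 0: 0.06, 7: 0.15}
plan = cheapest_charging_hours(prices, kwh_needed=12, charger_kw=4)
print(plan)  # [0, 22, 23]: three off-peak hours, skipping the 7 a.m. peak
```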

For all of this to be effective, a power and control electronics system (in both local and global modes), supported by information systems to manage these functions, is required. This will enable an optimal charging process (avoiding peak times, and doing fast charging only when necessary) and an intelligent measuring and tariff system. The latter may be either managed by utilities through advanced meter management (AMM), or handled virtually through energy tariffs and physical economic transactions. Such systems should allow for the interaction of various agents: end users, utilities, energy service companies (ESCOs), infrastructure providers, banks and other method-of-payment companies.

Conclusion

Although there are still many unresolved issues around the introduction of electric vehicles (for example, incentives, carbon caps, tax collection, readiness of systems and business processes), the challenge associated with this means of locomotion and its effect on current business systems and models is a fascinating one. From an electrical viewpoint, there would not appear to be any significant medium-term impact on energy management, though perhaps more so in terms of power requirements – much as some regions have had to adjust to the massive introduction of air conditioning systems in recent years. While we are reassured as to the viability of electric vehicles, we are also alert to the possible significant impact of widespread vehicle charging, above all in a fast charging scenario.

The special characteristics of battery charging and its itinerant nature, the predicted volumes of power and energy demand, the current state of tariff systems, the available technology, and the vision and state of deployment of smart grids and AMM all suggest that a smart charging type of system would be the best option – though certainly a complex one to implement. Given the prominent role that information and communication technologies will play in such a system, it will be necessary to achieve consensus among the various stakeholders on the methodologies to be used, on standards development, and on establishing a regulatory framework capable of supporting all the mechanisms and systems to be introduced.

We have already made good progress, and the electric vehicle could become an example that drives change in other business and technology models. It may well stimulate more rapid development of smart grids, encourage the creation of more efficient energy services and technologies, and lead to greater development and use of renewable energy sources, including a generation and distribution scenario based on the V2G paradigm.

It also may open the door to new businesses and stakeholders as well (such as the ESCOs) to introduce more dynamic, interactive demand response programs and broaden the function of battery storage as a provider of spinning reserves and ancillary services. These are all aspects for which it is now necessary to establish a basis for implementation and a short-term viability plan that will allow for the use of this technology with the aim of reaping its recognized benefits. Are we ready to step up to the challenge?

Thinking Smart

For more than 30 years, Newton-Evans Research Company has been studying the initial development and the embryonic and emergent stages of what the world now collectively terms the smart, or intelligent, grid. In so doing, our team has examined the technology behind the smart grid, the adoption and utilization rates of this technology bundle and the related market segments for more than a dozen major components of today’s – and tomorrow’s – intelligent grid.

This white paper contains information on eight of these key components of the smart grid: control systems, smart grid applications, substation automation programs, substation IEDs and devices, advanced metering infrastructure (AMI) and automated meter-reading devices (AMR), protection and control, distribution network automation and telecommunications infrastructure.

Keep in mind that there is a lot more to the smart grid equation than simply installing advanced metering devices and systems. A large AMI program may not even be the correct starting point for hundreds of the world’s utilities. Perhaps it should be a near-term upgrade to control center operations or to electronic device integration of the key substations, or an initial effort to deploy feeder automation, or even a complete protection and control (P&C) migration to digital relaying technology.

There simply is not a straightforward roadmap to show utilities how to develop a smart grid that is truly in that utility’s unique best interests. Rather, each utility must endeavor to take a step back and evaluate, analyze and plan for its smart grid future based on its (and its various stakeholders’) mission, its role, its financial and human resource limitations and its current investment in modern grid infrastructure and automation systems and equipment.

There are multiple aspects of smart grid development, some involving administrative as well as operational components of an electric power utility. They include:

  • IT involvement as well as operations and engineering;
  • administrative management of customer information systems (CIS) and geographic information systems (GIS), as well as control center and dispatching operation of distribution and outage management systems (DMS and OMS);
  • substation automation as well as true field automation;
  • third-party services as well as in-house commitment;
  • and of course, smart metering at all levels.

Space Station

I have often compared the evolution of the smart grid to the iterative process of building the International Space Station: a long-term strategy, a flexible planning environment and responsive changes incorporated into the plan as technology develops and matures, all properly phased. What we may really need is a skilled smart grid architect to oversee the increasingly complex duties of an effective systems planning organization within the utility.

All of these soon-to-be-interrelated activities need to be viewed in light of the value they add to operational effectiveness and operating efficiencies, as well as their effects on one another. If the utility has not yet done so, it must strive to adopt a system-wide approach to problem solving for any one grid-related investment strategy. Forty years of accumulated utility operational insight gained in the digital age shows that decisions made for one aspect of control and automation will have an impact on other components.

No utility can today afford to play whack-a-mole with its approach to the intelligent grid and related investments, isolating and solving one problem while inadvertently creating another larger or more costly problem elsewhere because of limited visibility and “quick fix” decision making.

As these smart grid building blocks are put into service, as they become integrated and are made accessible remotely, the overall smart grid necessarily becomes more complex, more communications-centric and more reliant on sensor-based field developments.

In some sense, it reminds one of building the space station. It takes time. The process is iterative. One component follows another, with planning on a system-wide basis. There are no quick solutions. Everything must be very systematically approached from the outset.

Buckets of Spending

We often tackle questions about the buckets of spending for smart grid implementations. This is the trigger for the supply side of the smart grid equation. Suppliers are capable of developing, and will make the required R&D investment in, any aspect of transmission and distribution network product development – if favorable market conditions exist or if market outlooks can be supported with field research. Hundreds of major electric power utilities from around the world have already contributed substantially to our ongoing studies of smart grid components.

In looking at the operational/engineering components of smart grid developments, centering on the physical grid itself (whether a transmission grid, a distribution grid or both), one must include what today comprises P&C, feeder and switch automation, control center-based systems, substation measurement and automation systems, and other significant distribution automation activities.

On the IT and administrative side of smart grid development, one has to include the upgrades that will definitely be required in the near- or mid-term, including CIS, GIS, OMS and the wide area communications infrastructure required as the foundation for automatic metering. Based on our internal estimates and those of others, 2008 spending for grid automation is pegged at or slightly above $1 billion nationwide and will approach $3.5 billion globally. When (if) we add in annual spending for CIS, GIS, meter data management and communications infrastructure developments, several additional billions of dollars become part of the overall smart grid pie.

In a new question included in the 2008 Newton-Evans survey of control center managers, these officials were asked to check the two most important components of near-term (2008-2010) work on the intelligent grid. A total of 136 North American utilities and nearly 100 international utilities provided their comments by indicating their two most important efforts during the planning horizon.

On a summary basis, AMI led, with mentions from 48 percent of the group. EMS/SCADA investments in upgrades, new applications, interfaces and the like were next, mentioned by 42 percent of the group. Distribution automation was cited by 35 percent.

Spending Outlook

The financial environment and economic outlook do not bode well for many segments of the national and global economies. One question we have repeatedly been asked well into this year is whether the electric power industry will suffer the fate of other industries and significantly scale back planned spending on T&D automation because of possible revenue erosion amid the slowdown and fallout from this year’s difficult industrial and commercial environments.

Let’s first take a summary look at each of the five major components of T&D automation because these all are part and parcel of the operations/engineering view of the smart grid of the future.

Control Systems Outlook: Driven by SCADA-like systems and including energy management systems and distribution management software, this segment of the market is hovering around the $500 million mark on a global scale – excluding the values of turn-key control center projects (engineering, procurement and construction (EPC) of new control center facilities and communications infrastructure). We see neither growth nor erosion in this market for the near-term, with some up-tick in spending for new applications software and better visualization tools to compensate for the “aging” of installed systems. While not a control center-based system, outage management is a closely aligned technology development, and will continue to take hold in the global market. Sales of OMS software and platforms are already approaching the $100 million mark led by the likes of Oracle Utilities, Intergraph and MilSoft.

Substation Automation and Integration Programs: The market for substation IEDs, for new communications implementations and for integration efforts has grown to nearly $500 million. Multiyear programs aimed at upgrading, integrating and automating the existing global base of about a quarter million transmission and primary distribution substations have been underway for some time. Some programs launched in 2008 will continue into 2011. We see a continuation of the growth in spending for critical substation A&I programs, although 2009 will likely see the slowest rate of growth in several years (less than 3 percent) if the current economic malaise persists through the year. Continuing emphasis will be on HV transmission substations as the first priority for upgrades and the addition of more intelligent electronic devices.

AMI/AMR: This is the lynchpin for the smart grid in the eyes of many industry observers, utility officials and, perhaps most importantly, regulators at the state and federal levels in the U.S., Canada, Australia and throughout Western Europe. With nearly 1.5 billion electricity meters installed around the world, about 93 percent of them electro-mechanical, interest in smart metering can also be found in dozens of other countries, including Indonesia, Russia, Honduras, Malaysia and Thailand. Another form of smart meter, the prepayment meter, is taking hold in some of the developing nations of the world. The combined resources of Itron, coupled with its Actaris acquisition, make this U.S. firm the global share leader in sales and installations of AMI and AMR systems and meters.

Protection and Control: The global market for protective relays, the foundation for P&C, has climbed well above $1.5 billion. Will 2009 see a drop in spending for protective relays? Not likely, as these devices continue to expand in capabilities and take on additional functions (sequence-of-event recording, fault recording and analysis, and even acting as a remote terminal unit). To the surprise of many, a substantial amount (perhaps as much as $125 million) is still being spent annually on electro-mechanical relays nearly 20 years into the digital relay era. The North American leader in protective relay sales to utilities is SEL, while GE Multilin continues to hold a leading share in industrial markets.

Distribution Automation: Today, when we discuss distribution automation, the topic can encompass any and all aspects of a distribution network automation scheme, from the control center-based SCADA and distribution management system on out to the substation, where RTUs, PLCs, power meters, digital relays, bay controllers and a myriad of communicating devices now help operate, monitor and control power flow and measurement in the medium voltage ranges.

Nonetheless, it is beyond the substation fence, reaching further down into the primary and secondary network, where we find reclosers, capacitors, pole-top RTUs, automated overhead switches, automated feeders and associated smart controls. These are the new smart devices that comprise the basic building blocks for distribution automation. The objective is the ability to detect and isolate faults at the feeder level and enable ever faster service restoration. With spending approaching $1 billion worldwide, DA implementations will continue to expand over the coming decade, nearing $2.6 billion in annual spending by 2018.
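The feeder-level fault isolation objective can be sketched in miniature. The snippet below is an illustration only, not any vendor's algorithm: it models a radial feeder as hypothetical segments separated by automated switches; when a segment faults, the switches bracketing it open, and the remaining segments can be re-energized from the source side or from a tie switch.

```python
# Toy model of feeder-level fault isolation (illustrative only).
# A radial feeder is a list of segments; switch sw_i sits between
# segment i-1 and segment i (sw_0 is the feeder breaker).

def isolate(segments, faulted):
    """Return (switches_to_open, segments_fed_from_source, segments_fed_via_tie)."""
    i = segments.index(faulted)
    switches_to_open = (f"sw_{i}", f"sw_{i + 1}")  # bracket the faulted segment
    return switches_to_open, segments[:i], segments[i + 1:]

feeder = ["seg_A", "seg_B", "seg_C", "seg_D"]
opened, upstream, downstream = isolate(feeder, "seg_C")
print("open:", opened)                      # switches around the fault
print("restore from source:", upstream)     # healthy upstream segments
print("restore via tie switch:", downstream)
```

A production scheme would of course work from fault-indicator and recloser telemetry rather than a known faulted segment, but the open-bracket-restore sequence is the core idea.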

Summary

The T&D automation market and the smart grid market will not go away this year, nor will they shrink. When telecommunications infrastructure developments are included, about $5 billion will have been spent in 2008 on global T&D automation programs. When AMI programs are added into the mix, the total exceeds $7 billion. T&D automation spending growth will likely be subdued, perhaps into 2010. However, the overall market for T&D automation is likely to remain at or near current spending levels for 2009 and into 2010, benefiting from the continued regulatory-driven momentum for AMI/AMR, renewable portfolio standards and demand response initiatives. By 2011, we should once again see healthier capital expenditure budgets, prompting overall T&D automation spending to reach about $6 billion annually. Over the 2008-2018 period, we anticipate more than $75 billion in cumulative smart grid expenditures.

Expenditure Outlook

Newton-Evans staff has examined the current outlook for smart grid-related expenditures and has made a serious attempt to avoid double counting potential revenues from all of the components of information systems spending and the emerging smart grid sector of utility investment.

While the enterprise-wide IT portions (blue and red segments) of Figure 1 include all major components of IT (hardware, software, services and staffing), the “pure” smart grid components tend to be primarily hardware, in our view. These components overlap significantly with both administrative and operational IT, whose supporting infrastructure is vital to every smart grid program underway at this time.

Between “traditional IT” and the evolving smart grid components, nearly $25 billion will likely be spent this year by the world’s electric utilities. Nearly one-third of all 2009 information technology investments will be “smart grid” related.

By 2013, the total value of the various pie segments is expected to increase substantially, with “smart grid” spending possibly exceeding $12 billion. While this amount is generally understood to be conservative, and somewhat lower than smart grid spending totals forecasted by other firms, we will stand by our forecasts, based on 31 years of research history with electric power industry automation and IT topics.

Some industry sources may include the total value of T&D capital spending in their smart grid outlook.

But that portion of the market is already approaching $100 billion globally, and will likely top $120 billion by 2013. Much of that market would go on whether or not a smart grid is involved. Clearly, all new procurements of infrastructure equipment will be made with an eye to including as much smart content as is available from the manufacturers and integrators.

What we are limiting our definition to is edge investment: the components of the 21st century digital transport and delivery systems being added onto, or incorporated into, the building blocks (power transformers, lines, switchgear, etc.) of electric power transmission and delivery.

At Your Service

Today’s utility companies are being driven to upgrade their aging transmission and distribution networks in the face of escalating energy generation costs, serious environmental challenges and rising demand for cleaner, distributed generation from both developing and digital economies worldwide.

The current utilities environment requires companies to drive down costs while increasing their ability to monitor and control utility assets. Yet, due to aging infrastructure, many utilities operate without the benefit of real-time visibility into usage and distribution loads – while also contending with limited resources for repair and improvement. Even consumers, with climate change on their minds, are demanding that utilities find more innovative ways to help them reduce energy consumption and costs.

One of the key challenges facing the industry is how to take advantage of new technologies to better manage customer service delivery today and into the future. While introducing this new technology, utilities must keep data and networks secure to be in compliance with critical infrastructure protection regulations. The concept of “service management” for the smart grid provides an approach for getting started.

A Smart Grid

A smart grid is created with new solutions that enable new business models. It brings together processes, technology and business partners, empowering utilities with an IP-enabled, continuous sensing network that overlays and connects a utility’s equipment, devices, systems, customers, partners and employees. A smart grid also enables on-demand access to data and information, which is used to better manage, automate and optimize operations and processes throughout the utility.

A utility relies on numerous systems, which reside both within and outside its physical boundaries. Common internal systems include energy trading systems (ETS), customer information systems (CIS), supervisory control and data acquisition (SCADA) systems, outage management systems (OMS), enterprise asset management (EAM) systems, mobile workforce management (MWFM) systems, geospatial information systems (GIS) and enterprise resource planning (ERP) systems.

These systems are purchased from multiple vendors and often use a variety of protocols to communicate. In addition, utilities must interface with external systems – and often integrate all of them using a point-to-point model and establish connectivity on an as-needed basis. The point-to-point approach can result in numerous complex connections that need to be maintained.
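The cost of the point-to-point model is easy to quantify: fully meshing n systems requires n(n-1)/2 interfaces, while a shared integration bus needs only one adapter per system. The system list below is taken from the article; the comparison itself is a standard integration-architecture argument, not a figure from the text.

```python
# Interfaces needed to connect n systems pairwise vs. via a shared bus.

def point_to_point_links(n):
    """Every system talks directly to every other: n*(n-1)/2 interfaces."""
    return n * (n - 1) // 2

def bus_adapters(n):
    """Each system gets one adapter to a shared integration bus."""
    return n

# The internal systems named in the text.
systems = ["ETS", "CIS", "SCADA", "OMS", "EAM", "MWFM", "GIS", "ERP"]
n = len(systems)
print(f"{n} systems: {point_to_point_links(n)} point-to-point interfaces "
      f"vs {bus_adapters(n)} bus adapters")
```

For the eight systems listed, that is 28 bilateral interfaces to build and maintain versus eight adapters, and the gap widens quadratically as external systems are added.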

Service Management

The key concept behind service management is the idea of managing assets, networks and systems to provide a “service,” as opposed to simply operating the assets. For example, the Rolls-Royce Civil Aerospace division uses this concept to sell “pounds of thrust” as a service. Critical to a utility’s operation is the ability to manage all facets of the services being delivered. Also critical to the operation of the smart grid are new solutions in advanced meter management (AMM), network automation and analytics, and EAM, including meter asset management.

A service management platform provides a way for utility companies to manage the services they deliver with their enterprise and information technology assets. It provides a foundation for managing the assets, their configuration, and the interrelationships key to delivering services. It also provides a means of defining workflow for the instantiation and management of the services being delivered. Underlying this platform is a range of tools that can assist in management of the services.

Gathering and analyzing data from advanced meters, network components, distribution devices, and legacy SCADA systems provides a solid foundation for automating service management. When combined with the information available in their asset management systems, utility companies can streamline operations and make more efficient use of valuable resources.

Advanced Reading

AMM centers on a more global view of the informational infrastructure, examining how automatic meter reading (AMR) and advanced metering infrastructure (AMI) integrate with other information systems to provide value-added benefits. It is important to note that for many utilities, AMM is considered to be a “green” initiative since it has the ability to influence customer usage patterns and, therefore, lower peak demand.

The potential for true business transformation exists through AMM, and adopting this solution is the first stage in a utility’s transformation to a more information-powered business model. New smart meters are network addressable, and along with AMM, are core components of the grid. Smart meters and AMM provide the capability to automatically collect usage data in near real time and to transport meter reads at regular intervals or on demand.

AMR/AMI systems that aggregate their data in collection servers or concentrators, and expose it through an interface, can be augmented with event management products to monitor each meter’s health and operational status. Many organizations already deploy these solutions for event management within network operations center environments, and for consolidated operations management as a top-level “manager of managers.”

A smart grid includes many devices other than meters, so event management can also be used to monitor the health of the rest of the network and IT equipment in the utility infrastructure. Integrating meter data with operations events gives network operations center operators a much broader view of a utility’s distribution system.

These solutions enable end-to-end data integration, from the meter collection server in a substation to the back-end helpdesk and billing applications. This approach can lead to improved speed and accuracy of data, while leveraging existing equipment and applications.

Network Automation and Analytics

Most utility companies use SCADA systems to collect data from sensors on the energy grid and send events to applications with SCADA interfaces. These systems collect data from substations, power plants and other control centers. They then process the data and allow for control actions to be sent back out. Energy management and distribution management systems typically provide additional features on top of SCADA, targeting either the transmission or distribution grids.

SCADA systems are often distributed across several servers (anywhere from two to 100) connected via a redundant local area network. The SCADA system, in turn, communicates with remote terminal units (RTUs), other devices and other computer networks. RTUs reside in a substation or power plant, and are hardwired to other devices to bring back meaningful information such as current megawatts, amps, volts, pressure, and open/closed or tripped status. Distribution business units within a utility company also use SCADA systems to track low-voltage assets, such as meters and pole drops, in contrast to the transmission business units’ larger assets, including towers, circuits and switchgear.
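As a rough illustration of the polling just described, the sketch below simulates a SCADA master reading points from an RTU. Real systems use protocols such as DNP3 or IEC 60870-5; the point names and values here are hypothetical, and the RTU is reduced to a dictionary so the example is self-contained.

```python
# Simulated RTU point map: in practice these values arrive over a
# SCADA protocol from hardwired field devices, not from a dict.
RTU_POINTS = {
    "feeder_12/megawatts": 4.2,
    "feeder_12/amps": 210.0,
    "feeder_12/volts": 12470.0,
    "breaker_12/status": "closed",   # open / closed / tripped
}

def poll(rtu, point_names):
    """Return a snapshot of the requested points, flagging unknown ones."""
    return {name: rtu.get(name, "BAD_POINT") for name in point_names}

snapshot = poll(RTU_POINTS, ["feeder_12/megawatts", "breaker_12/status"])
for name, value in snapshot.items():
    print(f"{name} = {value}")
```

The master would repeat such a scan on a fixed cycle, feeding the snapshots to the EMS or DMS applications layered on top of SCADA.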

To facilitate network automation, IT solutions can help utilities to monitor and analyze data from SCADA systems in real time, monitor the computer network systems used to deploy SCADA systems, and better secure the SCADA network and applications using authentication software. An important element of service management is the use of automation to perform a wide range of actions to improve workflow efficiency. Another key ingredient is the use of service level agreements (SLAs) to give a business context for IT, enabling greater accountability to business user needs, and improving a utility’s ability to prioritize and optimize.

A smart grid includes a large number of devices and meters – millions in a large utility – and these are critical to a utility’s operations. A combination of IT solutions can be deployed to manage events from SCADA devices, as well as the IT equipment they rely on.

EAM For Utilities

Historically, many utility companies have managed their assets in silos. However, the emergence of the smart grid and smart meters, challenges of an aging workforce, an ever-demanding regulatory environment, and the availability of common IT architecture standards, are making it critical to standardize on one asset management platform as new requirements to integrate physical assets and IT assets arise (see Figure 1).

Today, utility companies are using EAM to manage work in gas and electric distribution operations, including construction, inspections, leak management, vehicles and facilities. In transmission and substation operations, EAM software is used for preventive and corrective maintenance and inspections.

EAM also helps track the financial aspects of assets, such as purchasing, depreciation, asset valuation and replacement costs. This solution helps integrate this data with ERP systems, and stores the history of asset testing and maintenance management. It integrates with GIS or other mapping tools to create geographic and spatial views of all distribution and smart grid assets.

Meter asset management is another area of increasing interest, as meters have an asset lifecycle similar to most other assets in a utility. Meter asset management involves tracking the meter from receipt to storeroom, to truck, to final location – as compared to managing the data the meter produces.

Now there is an IT asset management solution with the ability to manage meters as part of the IT network. This solution can be used to provision the meter, track configurations and provide service desk functionality. IT asset management solutions also have the ability to update meter firmware, and easily move and track the location and status of the assets over time in conjunction with a configuration database.

Reducing the number of truck rolls is another key focus area for utility companies. Using a combination of solutions, companies can:

  • Better manage the lifecycles of physical assets such as meters, meter cell relays, and broadband over powerline (BPL) devices to improve preventive maintenance;
  • Reconcile deployed asset information with information collected by meter data management systems;
  • Correlate the knowledge of physical assets with problems experienced with the IT infrastructure to better analyze a problem for root cause; and
  • Establish more efficient business process workflows and strengthen governance across a company.

Utilities face many challenges today, and they are taking advantage of new technologies that will help them better manage the delivery of service to customers tomorrow. The deployment of the smart grid and related solutions is a significant initiative that will occupy utilities for the next 10 years or more.

The concept of “service management” for the smart grid provides an approach for getting started, but these initiatives do not need to be tackled all at once. Utilities should develop a roadmap for the smart grid; each utility’s roadmap will depend on its specific priorities. But utilities don’t have to go it alone. The smart grid maturity model (SGMM) can help a utility develop a roadmap of activities, investments and best practices to ensure success and progress with available resources.

Infrastructure and the Economy

With utility infrastructure aging rapidly, reliability of service is threatened. Yet the economy is hurting, unemployment is accelerating, environmental mandates are rising, and the investment portfolios of both seniors and soon-to-retire boomers have fallen dramatically. Everyone agrees change is needed. The question is: how?

In every one of these respects, state regulators have the power to effect change. In fact, the policy-setting authority of the states is not only an essential complement to federal energy policy, it is a critical building block for economic recovery.

There is no question we need infrastructure development. Almost 26 percent of the distribution infrastructure owned and operated by the electric industry is at or past the end of its service life. For transmission, the number is approximately 15 percent, and for generation, about 23 percent. And that’s before considering the rising demand for electricity needed to drive our digital economy.

The new administration plans to spend hundreds of billions of dollars on infrastructure projects. However, most of the money will go towards roads, transportation, water projects and waste water systems, with lesser amounts designated for renewable energy. It appears that only a small portion of the funds will be designated for traditional central station generation, transmission and distribution. And where such funds are available, they appear to be in the form of loan guarantees, especially in the transmission sector.

The U.S. transmission system is in need of between $50 billion and $100 billion of new investment over the next 10 years, and approximately $300 billion by 2030. These investments are required to connect renewable energy sources, make the grid smarter, improve electricity market efficiency, reduce transmission-related energy losses, and replace assets that are too old. In the next three years alone, the investor-owned utility sector will need to spend about $30 billion on transmission lines.

Spending on distribution over the next decade could approximate $200 billion, rising to $600 billion by 2030. About $60 billion to $70 billion of this will be spent in just the next three years.

The need for investment in new generating stations is a bit more difficult to estimate, owing to the uncertainties surrounding the technologies that will prove the most economic under future greenhouse gas regulations and other technology preferences of the Congress and administration. However, it could easily be somewhere between $600 billion and $900 billion by 2030. Of this amount, between $100 billion and $200 billion could be invested over the next three years and as much as $300 billion over the next 10. It will be mostly later in that 10-year period, and beyond, that new nuclear and carbon-compliant coal capacity is expected to come on line in significant amounts. That will raise generating plant investments dramatically.

Jobs, and the Job of Regulators

All of this construction would maintain or create a significant number of jobs. We estimate that somewhere between 150,000 and 300,000 jobs could be created annually by this build out, including jobs related to construction, post-construction utility operating positions, and general economic "ripple effect" jobs through 2030.

These are sustainable levels of employment – jobs every year, not just one-time surges.

In addition, others have estimated that the development of the smart grid could add between 150,000 and 280,000 jobs. Clearly, then, utility generation, transmission and distribution investments can provide a substantial boost for the economy, while at the same time improving energy efficiency, interconnecting critical renewable energy sources and making the grid smarter.

The beauty is that no federal legislation, no taxpayer money and no complex government grant or loan processes are required. This is virtually all within the control of state regulators.

Timely consideration of utility permit applications and rate requests, as well as project pre-approvals by regulators, allowance of construction work in progress in rate base, and other progressive regulatory practices would vastly accelerate the pace at which these investments could be made and financed, and new jobs created. Delays in permitting and approval not only slow economic recovery, but also create financial uncertainty, potentially threatening ratings, reducing earnings and driving up capital costs.

Helping Utility Shareholders

This brings us to our next point: Regulators can and should help utility shareholders. Although they have a responsibility for controlling utility rates charged to consumers, state regulators also need to provide returns on equity and adopt capital structures that recognize the risks, uncertainties and investor expectations that utilities face in today’s and tomorrow’s very de-leveraged and uncertain financial markets.

It is now widely acknowledged that risk has not been properly priced in the recent past. As with virtually all other industries, equity will play a far more critical role in utility project and corporate finance than in the past. For utilities to attract the equity needed for the buildout just described, equity must earn its full, risk-adjusted return. This requires a fresh look at stockholder expectations and requirements.

A typical utility stockholder is not some abstract, occasionally demonized, capitalist, but rather a composite of state, city, corporate and other pension funds, educational savings accounts, individual retirement accounts and individual shareholders who are in, or close to, retirement. These shares are held largely by, or for the benefit of, everyday workers of all types, both employed and retired: government employees, first responders, trades and health care workers, teachers, professionals, and other blue and white collar workers throughout the country.

These people live across the street from us, around the block, down the road or in the apartments above and below us. They rely on utility investments for stable income and growth to finance their children’s education, future home purchases, retirement and other important quality-of-life activities. They comprise a large segment of the population that has been injured by the economy as much as anyone else.

Fair public policy suggests that regulators be mindful of this and that they allow adequate rates of return needed for financial security. It also requires that regulatory commissions be fair and realistic about the risk premiums inherent in the cost of capital allowed in rate cases.

The cost of providing adequate returns to shareholders is not particularly high. Ironically, the passion of the debate that surrounds cost of capital determinations in a rate case is far greater than the monetary effect that any given return allowance has on an individual customer’s bill.

Typically, the differential return on equity at dispute in a rate case – perhaps between 100 and 300 basis points – represents between 0.5 and 2 percent of a customer’s bill for a "wires only" company. (The impact on the bills of a vertically integrated company would be higher.) Acceptance of the utility’s requested rate of return would no doubt have a relatively small adverse effect on customers’ bills, while making a substantial positive impact on the quality of the stockholders’ holdings. Fair, if not favorable, regulatory treatment also results in improved debt ratings and lower debt costs, which accrue to the benefit of customers through reduced rates.
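The arithmetic behind that claim can be made concrete. The sketch below uses hypothetical figures (a $2,000 rate base per customer, a 40 percent equity ratio, a $1,200 annual wires bill; none of these appear in the article) to show how an ROE differential in basis points maps to a share of a customer’s bill, ignoring income tax gross-up.

```python
# Illustrative only: translate a disputed ROE differential into its
# share of a "wires only" customer's annual bill. All inputs below
# are hypothetical assumptions, not figures from the article.

def bill_impact(rate_base_per_customer, equity_ratio, roe_differential_bp,
                annual_bill):
    """Extra annual revenue requirement per customer from an ROE change,
    as a fraction of the annual bill (income tax gross-up ignored)."""
    extra_return = (rate_base_per_customer * equity_ratio
                    * roe_differential_bp / 10_000)
    return extra_return / annual_bill

# Hypothetical wires-only customer: $2,000 rate base, 40% equity, $1,200 bill.
for bp in (100, 200, 300):
    share = bill_impact(2_000, 0.40, bp, 1_200)
    print(f"{bp} bp differential -> {share:.1%} of the annual bill")
```

With these assumed inputs the 100-300 basis point range lands within the 0.5-2 percent band the text cites; an actual rate case would also gross up the return for income taxes, raising the figure somewhat.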

The List Doesn’t Stop There

Regulators can also be helpful in addressing other challenges of the future. The lynchpin of cost-effective energy and climate change policy is energy efficiency (EE) and demand side management (DSM).

Energy efficiency is truly the low-hanging fruit, capable of providing immediate, relatively inexpensive reductions in emissions and customers’ bills. However, reductions in customers’ energy use run contrary to utility financial interests, unless offset by regulatory policy that removes the disincentives. Depending upon the particulars of a given utility, these policies could include revenue decoupling and the authorization of incentive – or at least fully adequate – returns on EE, DSM and smart grid investments, as well as recovery of related expenses.

Additional considerations could include accelerated depreciation of EE and DSM investments and the approval of rate mechanisms that recover lost profit margins created by reduced sales. These policies would positively address a host of national priorities in one fell swoop: the promotion of energy efficiency, greenhouse gas reduction, infrastructure investment, technology development, increased employment and, through appropriate rate base and rate of return policy, improved stockholder returns.

The Leadership Opportunity

Oftentimes, regulatory decision making is narrowly focused on a few key issues in isolation, usually in the context of a particular utility, but sometimes on a statewide generic basis. Rarely is state regulatory policy viewed in a national context. Almost always, issues are litigated individually in highly partisan fashion, with little integration into a larger whole, and utility shareholder interests are usually underrepresented.

The time seems appropriate – and propitious – for regulators to lead the way to a major change in this paradigm while addressing the many urgent issues that face our nation. Regulators can make a difference, probably far beyond that for which they presently give themselves credit.

The Smart Grid: A Balanced View

Energy systems in both mature and developing economies around the world are undergoing fundamental changes. There are early signs of a physical transition from the current centralized energy generation infrastructure toward a distributed generation model, where active network management throughout the system creates a responsive and manageable alignment of supply and demand. At the same time, the desire for market liquidity and transparency is driving the world toward larger trading areas – from national to regional – and providing end-users with new incentives to consume energy more wisely.

CHALLENGES RELATED TO A LOW-CARBON ENERGY MIX

The structure of current energy systems is changing. As load and demand for energy continue to grow, many existing generation assets – particularly coal and nuclear plants – are aging and reaching the end of their useful lives. Increasing public awareness of sustainability is simultaneously driving the international community and national governments alike to accelerate the adoption of low-carbon generation methods. Complicating matters, public acceptance of nuclear energy varies widely from region to region.

Public expectations of what distributed renewable energy sources can deliver – for example, wind, photovoltaic (PV) or micro-combined heat and power (micro-CHP) – are increasing. But unlike conventional sources of generation, the output of many of these sources is not based on electricity load but on weather conditions or heat. From a system perspective, this raises new challenges for balancing supply and demand.

In addition, these new distributed generation technologies require system-dispatching tools to effectively control the low-voltage side of electrical grids. Moreover, they indirectly create a scarcity of “regulating energy” – the energy transmission operators need to maintain the real-time balance of their grids. This forces the industry to try to harness the power of conventional central generation technologies, such as nuclear power, in new ways.

A European Union-funded consortium named Fenix is identifying innovative network and market services that distributed energy resources can potentially deliver, once the grid becomes “smart” enough to integrate all energy resources.

In Figure 1, the Status Quo Future represents how system development would play out under the traditional system operation paradigm characterized by today’s centralized control and passive distribution networks. The alternative, Fenix Future, represents the system capacities with distributed energy resources (DER) and demand-side generation fully integrated into system operation, under a decentralized operating paradigm.

CHALLENGES RELATED TO NETWORK OPERATIONAL SECURITY

The regulatory push toward larger trading areas is increasing the number of market participants. This trend is in turn driving the need for increased network dispatch and control capabilities. Simultaneously, grid operators are expanding their responsibilities across new and complex geographic regions. Combine these factors with an aging workforce (particularly when trying to staff strategic processes such as dispatching), and it’s easy to see why utilities are becoming increasingly dependent on information technology to automate processes that were once performed manually.

Moreover, the stochastic nature of energy sources significantly increases uncertainty regarding supply. Researchers are trying to improve the accuracy of the information captured in substations, but this requires new online dispatching stability tools. Additionally, as grid expansion remains politically controversial, current efforts are mostly focused on optimizing energy flow in existing physical assets, and on trying to feed asset data into systems calculating operational limits in real time.

Last but not least, these developments extend generation dispatch and congestion management into low-voltage distribution grids. Although these grids traditionally carried energy one way – from generation through transmission to end-users – the increasing penetration of distributed resources creates a new need to dispatch those resources locally and to minimize transportation costs.

CHALLENGES RELATED TO PARTICIPATING DEMAND

Recent events have shown that decentralized energy markets are vulnerable to price volatility. This poses potentially significant economic threats for some nations: large industrial companies may quit deregulated markets because they lack visibility into long-term energy price trends.

One potential solution is to improve market liquidity in the shorter term by providing end-users with incentives to conserve energy when demand exceeds supply. The growing public awareness of energy efficiency is already leading end-users to be much more receptive to using sustainable energy; many utilities are adding economic incentives to further motivate end-users.

These trends are expected to create radical shifts in transmission and distribution (T&D) investment activities. After all, traditional centralized system designs, investments and operations are based on the premise that demand is passive and uncontrollable, and that it makes no active contribution to system operations.

However, the extensive rollout of intelligent metering capabilities has the potential to reverse this, and to enable demand to respond to market signals, so that end-users can interact with system operators in real or near real time. The widening availability of smart metering thus has the potential to bring with it unprecedented levels of demand response that will completely change the way power systems are planned, developed and operated.
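The demand-response mechanism sketched above reduces to a simple rule at the meter. The thresholds, loads and prices below are hypothetical examples, not a description of any particular metering product:

```python
# A minimal sketch of price-responsive demand: a smart meter receives a
# real-time price signal and defers flexible load when the price crosses
# a threshold the end-user has configured. All figures are illustrative.

BASE_KW = 1.5        # load that always runs (lighting, refrigeration)
FLEXIBLE_KW = 2.0    # load the user allows to be deferred (e.g. water heating)
PRICE_CAP = 0.20     # $/kWh above which flexible load is curtailed

def metered_demand_kw(price_per_kwh: float) -> float:
    """Demand reported for the next interval, given the price signal."""
    if price_per_kwh > PRICE_CAP:
        return BASE_KW                # scarcity price: flexible load deferred
    return BASE_KW + FLEXIBLE_KW      # normal price: all load runs

print(metered_demand_kw(0.12))  # 3.5
print(metered_demand_kw(0.35))  # 1.5
```

Aggregated over millions of meters, this simple interaction is what turns previously passive demand into a resource system operators can plan around.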

CHALLENGES RELATED TO REGULATION

Parallel with these changes to the physical system structure, the market and regulatory frameworks supporting energy systems are likewise evolving. Numerous energy directives have established the foundation for a decentralized electricity supply industry that spans formerly disparate markets. This evolution is changing the structure of the industry from vertically integrated, state-owned monopolies into an environment in which unbundled generation, transmission, distribution and retail organizations interact freely via competitive, accessible marketplaces to procure and sell system services and contracts for energy on an ad hoc basis.

Competition and increased market access seem to be working at the transmission level in markets where there are just a handful of large generators. However, this approach has yet to be proven at the distribution level, where it could facilitate thousands and potentially millions of participants offering energy and systems services in a truly competitive marketplace.

MEETING THE CHALLENGES

As a result, despite all the promise of distributed generation, an increasingly decentralized system will become unstable without the corresponding development of technical, market and regulatory frameworks over the next three to five years.

System management costs are increasing, and threats to system security are a growing concern as installed distributed generating capacity in some areas exceeds local peak demand. The amount of “regulating energy” provisions rises as stress on the system increases; meanwhile, governments continue to push for distributed resource penetration and launch new energy efficiency ideas.

At the same time, most of the large T&D utilities intend to launch new smart grid prototypes that, once stabilized, will be scalable to millions of connection points. The majority of these rollouts are expected to occur between 2010 and 2012.

From a functionality standpoint, the majority of these associated challenges are related to IT system scalability. The process will require applying existing algorithms and processes to generation activities, but in an expanded and more distributed manner.

The following new functions will be required to build a smart grid infrastructure that enables all of this:

New generation dispatch. This will enable utilities to expand their portfolios of generation dispatching tools to schedule generation assets at both the transmission and distribution levels. Utilities could thus better manage the growing number of parameters affecting dispatch decisions, including fuel options, maintenance strategies, each generation unit’s physical capability, weather, network constraints, load models, emissions (modeling, rights, trading) and market dynamics (indices, liquidity, volatility).

Renewable and demand-side dispatching systems. By expanding current energy management systems (EMS) capability and architecture, utilities should be able to scale to include millions of active producers and consumers. Resources will be distributed in real time by energy service companies, promoting the most eco-friendly portfolio dispatch methods based on contractual arrangements between the energy service providers and these distributed producers and consumers.

Integrated online asset management systems. New technology tools that help transmission grid operators assess the condition of their overall assets in real time will not only maximize asset usage, but will also lead to better leveraging of utilities’ field forces. New standards such as IEC 61850 offer opportunities to manage such models more centrally and more consistently.

Online stability and defense plans. The increasing penetration of renewable generation into grids, combined with deregulation, increases the need for flow control on interconnections between transmission system operators (TSOs). Additionally, the industry requires improved “situation awareness” tools in the control centers of utilities operating in larger geographical markets. Although conventional transmission security steady-state indicators have improved, utilities still need better early warning applications and adaptable defense plan systems.
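The first of these functions, generation dispatch, can be illustrated with a stripped-down sketch. Real dispatching tools weigh all the parameters listed above – network constraints, emissions, maintenance and market dynamics – but the core loop is the same: commit the cheapest available capacity first until demand is met. All unit names, capacities and costs below are hypothetical:

```python
# A simplified merit-order dispatch sketch. Units and costs are made up;
# a production dispatcher would add network, emissions and ramp constraints.

units = [
    {"name": "coal",   "capacity_mw": 400, "cost_per_mwh": 30},
    {"name": "ccgt",   "capacity_mw": 300, "cost_per_mwh": 45},
    {"name": "peaker", "capacity_mw": 150, "cost_per_mwh": 90},
]

def dispatch(demand_mw):
    """Allocate demand across units in ascending cost order (merit order)."""
    schedule = {}
    remaining = demand_mw
    for unit in sorted(units, key=lambda u: u["cost_per_mwh"]):
        take = min(unit["capacity_mw"], remaining)
        if take > 0:
            schedule[unit["name"]] = take
        remaining -= take
    if remaining > 0:
        raise RuntimeError("demand exceeds installed capacity")
    return schedule

print(dispatch(550))  # {'coal': 400, 'ccgt': 150}
```

Extending this loop from a few dozen transmission-connected plants to millions of distributed producers and consumers is precisely the IT scalability challenge described above.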

MOVING TOWARDS A DISTRIBUTED FUTURE

As concerns about energy supply have increased worldwide, the focus on curbing demand has intensified. Regulatory bodies around the world are thus actively investigating smart meter options. But despite the benefits that smart meters promise, they also raise new challenges on the IT infrastructure side. Before each end-user is able to flexibly interact with the market and the distribution network operator, massive infrastructure re-engineering will be required.

Nonetheless, energy systems throughout the world are already evolving from a centralized to a decentralized model. But to successfully complete this transition, utilities must implement active network management throughout their systems to enable a responsive and manageable alignment of supply and demand. By accomplishing this, energy producers and consumers alike can better match supply and demand, and drive the world toward sustainable energy conservation.

Real-Time Automation Solutions for Operation of Energy Assets and Markets

Areva T&D offers solutions that bring electricity from the source to end-users: it builds high- and medium-voltage substations and develops technologies to manage power grids and energy markets worldwide. It is a full-fledged solution provider, offering safe, reliable, efficient power distribution down to the lowest level of end-user consumption. Its software applications cover all the strategic operational business processes of an energy utility, including optimization of transmission and distribution grid operation; management of wholesale and retail market operations; and energy transaction solutions spanning strategic business processes from energy trading, energy scheduling and dispatch management to demand-side management and settlements.

For as long as advanced monitoring and control infrastructures have been used for grid management, Areva T&D has been at the forefront of innovation. Its strategy has always been to supply the most accurate real-time vision of the network infrastructure. This has led to several major breakthroughs, including Areva’s latest e-terraVision™ product.

The e-terraVision technology provides control rooms with higher level decision support capabilities through visualization tools, “smart applications” and simulation – thus improving situation awareness. This operator-friendly system enables power dispatchers to fully visualize their networks with the right level of situation awareness and proactively operate the grid by taking the necessary real-time corrective actions.

Expertise acquired in high-voltage networks enables Areva to supply distribution monitoring and control applications as well, and this has greatly influenced its distribution management strategy. Building on early successes, the company developed an adapted e-terra product offering for distribution customers.

Areva T&D continues to integrate new concepts to meet market trends and drive innovation. For example, Areva T&D SmartGrid solutions are designed to deliver the following benefits.

  1. Alignment with deregulation trends in the consumer electricity market, including:
    • Making the process of changing energy supplier easier;
    • Providing better service quality for energy usage, including accurate and appropriate billing of actual consumed energy;
    • For specific countries where nontechnical losses are significant, allowing accurate audits to be conducted; and
    • Allowing for differentiated energy offerings with greater pricing flexibility and integration of renewable energy offers.
  2. Support for further structural benefits discussed and validated as part of international working groups on SmartGrid initiatives:
    • Better selectivity of the IEDs in medium- and low-voltage networks reduces the number of customers affected by outages, thus improving service quality and reducing maintenance costs.
    • Careful monitoring of low-voltage grids, including consumption by phase and distribution cell – which is especially relevant in terms of renewable energy generation.
    • Online asset monitoring, which enables predictive maintenance, thus increasing assets’ life span.
    • Dynamic security management of primary and secondary networks. Introducing renewable energy sources into the distribution network poses a challenge; combined monitoring infrastructures for distribution and metering will be needed in the near future.

All these challenges have driven the definition and development of Areva SmartGrid solutions. The company’s enhanced supervision and control center products, including smart metering, supply all the advantages of automation technologies to distribution networks.