Infrastructure and the Economy

With utility infrastructure aging rapidly, reliability of service is threatened. At the same time, the economy is hurting, unemployment is rising, environmental mandates are expanding, and the investment portfolios of both seniors and soon-to-retire boomers have fallen dramatically. Everyone agrees change is needed. The question is: how?

In every one of these respects, state regulators have the power to effect change. In fact, the policy-setting authority of the states is not only an essential complement to federal energy policy, it is a critical building block for economic recovery.

There is no question we need infrastructure development. Almost 26 percent of the distribution infrastructure owned and operated by the electric industry is at or past the end of its service life. For transmission, the number is approximately 15 percent, and for generation, about 23 percent. And that’s before considering the rising demand for electricity needed to drive our digital economy.

The new administration plans to spend hundreds of billions of dollars on infrastructure projects. However, most of the money will go toward roads, transportation, water projects and wastewater systems, with lesser amounts designated for renewable energy. It appears that only a small portion of the funds will be designated for traditional central-station generation, transmission and distribution. And where such funds are available, they appear to be in the form of loan guarantees, especially in the transmission sector.

The U.S. transmission system is in need of between $50 billion and $100 billion of new investment over the next 10 years, and approximately $300 billion by 2030. These investments are required to connect renewable energy sources, make the grid smarter, improve electricity market efficiency, reduce transmission-related energy losses, and replace assets that are too old. In the next three years alone, the investor-owned utility sector will need to spend about $30 billion on transmission lines.

Spending on distribution over the next decade could approximate $200 billion, rising to $600 billion by 2030. About $60 billion to $70 billion of this will be spent in just the next three years.

The need for investment in new generating stations is more difficult to estimate, owing to uncertainty about which technologies will prove most economic under future greenhouse gas regulations and about the technology preferences of Congress and the administration. However, it could easily be somewhere between $600 billion and $900 billion by 2030. Of this amount, between $100 billion and $200 billion could be invested over the next three years and as much as $300 billion over the next 10. It will be mostly later in that 10-year period, and beyond, that new nuclear and carbon-compliant coal capacity is expected to come on line in significant amounts. That will raise generating plant investments dramatically.

Jobs, and the Job of Regulators

All of this construction would maintain or create a significant number of jobs. We estimate that somewhere between 150,000 and 300,000 jobs could be created annually by this build out, including jobs related to construction, post-construction utility operating positions, and general economic "ripple effect" jobs through 2030.

These are sustainable levels of employment – jobs every year, not just one-time surges.

In addition, others have estimated that the development of the smart grid could add between 150,000 and 280,000 jobs. Clearly, then, utility generation, transmission and distribution investments can provide a substantial boost for the economy, while at the same time improving energy efficiency, interconnecting critical renewable energy sources and making the grid smarter.

The beauty is that no federal legislation, no taxpayer money and no complex government grant or loan processes are required. This is virtually all within the control of state regulators.

Timely consideration of utility permit applications and rate requests, as well as project pre-approvals by regulators, allowance of construction work in progress in rate base, and other progressive regulatory practices would vastly accelerate the pace at which these investments could be made and financed, and new jobs created. Delays in permitting and approval not only slow economic recovery, but also create financial uncertainty, potentially threatening ratings, reducing earnings and driving up capital costs.

Helping Utility Shareholders

This brings us to our next point: Regulators can and should help utility shareholders. Although they have a responsibility for controlling utility rates charged to consumers, state regulators also need to provide returns on equity and adopt capital structures that recognize the risks, uncertainties and investor expectations that utilities face in today’s and tomorrow’s very de-leveraged and uncertain financial markets.

It is now widely acknowledged that risk has not been properly priced in the recent past. As with virtually all other industries, equity will play a far more critical role in utility project and corporate finance than in the past. For utilities to attract the equity needed for the buildout just described, equity must earn its full, risk-adjusted return. This requires a fresh look at stockholder expectations and requirements.

A typical utility stockholder is not some abstract, occasionally demonized, capitalist, but rather a composite of state, city, corporate and other pension funds, educational savings accounts, individual retirement accounts and individual shareholders who are in, or close to, retirement. These shares are held largely by, or for the benefit of, everyday workers of all types, both employed and retired: government employees, first responders, trades and health care workers, teachers, professionals, and other blue and white collar workers throughout the country.

These people live across the street from us, around the block, down the road or in the apartments above and below us. They rely on utility investments for stable income and growth to finance their children’s education, future home purchases, retirement and other important quality-of-life activities. They comprise a large segment of the population that has been injured by the economy as much as anyone else.

Fair public policy suggests that regulators be mindful of this and that they allow adequate rates of return needed for financial security. It also requires that regulatory commissions be fair and realistic about the risk premiums inherent in the cost of capital allowed in rate cases.

The cost of providing adequate returns to shareholders is not particularly high. Ironically, the passion of the debate that surrounds cost of capital determinations in a rate case is far greater than the monetary effect that any given return allowance has on an individual customer’s bill.

Typically, the differential return on equity at dispute in a rate case – perhaps between 100 and 300 basis points – represents between 0.5 and 2 percent of a customer’s bill for a "wires only" company. (The impact on the bills of a vertically integrated company would be higher.) Acceptance of the utility’s requested rate of return would no doubt have a relatively small adverse effect on customers’ bills, while making a substantial positive impact on the quality of the stockholders’ holdings. Fair, if not favorable, regulatory treatment also results in improved debt ratings and lower debt costs, which accrue to the benefit of customers through reduced rates.
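
For illustration, the sketch below works through that arithmetic with purely hypothetical inputs; the rate base, equity ratio and revenue figures are assumptions chosen only to show the calculation, not numbers drawn from any actual rate case.

```python
# Minimal sketch, with purely hypothetical inputs, of how a disputed
# return-on-equity (ROE) allowance translates into a customer bill impact
# for a "wires only" utility. None of these figures come from the article.

rate_base = 2_000_000_000               # assumed wires rate base: $2 billion
equity_ratio = 0.50                     # assumed 50% equity capital structure
roe_differential_bps = 200              # 200 basis points of ROE in dispute
annual_delivery_revenue = 1_500_000_000 # assumed annual delivery revenue

# Added annual revenue requirement if the higher ROE is granted
# (income-tax gross-up ignored for simplicity).
added_revenue = rate_base * equity_ratio * (roe_differential_bps / 10_000)
bill_impact_pct = 100 * added_revenue / annual_delivery_revenue

print(f"Added revenue requirement: ${added_revenue:,.0f}")   # $20,000,000
print(f"Impact on delivery bills:  {bill_impact_pct:.1f}%")  # about 1.3%
```

With these assumed inputs, the added revenue requirement works out to roughly 1.3 percent of delivery revenue, consistent with the 0.5 to 2 percent range noted above.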

The List Doesn’t Stop There

Regulators can also be helpful in addressing other challenges of the future. The linchpin of cost-effective energy and climate change policy is energy efficiency (EE) and demand-side management (DSM).

Energy efficiency is truly the low-hanging fruit, capable of providing immediate, relatively inexpensive reductions in emissions and customers’ bills. However, reductions in customers’ energy use run contrary to utility financial interests unless offset by regulatory policy that removes the disincentives. Depending upon the particulars of a given utility, these policies could include revenue decoupling and the authorization of incentive – or at least fully adequate – returns on EE, DSM and smart grid investments, as well as recovery of related expenses.

Additional considerations could include accelerated depreciation of EE and DSM investments and the approval of rate mechanisms that recover lost profit margins created by reduced sales. These policies would positively address a host of national priorities in one fell swoop: the promotion of energy efficiency, greenhouse gas reduction, infrastructure investment, technology development, increased employment and, through appropriate rate base and rate of return policy, improved stockholder returns.

The Leadership Opportunity

Oftentimes, regulatory decision making is narrowly focused on a few key issues in isolation, usually in the context of a particular utility, but sometimes on a statewide generic basis. Rarely is state regulatory policy viewed in a national context. Almost always, issues are litigated individually in highly partisan fashion, with little integration into a larger whole, and utility shareholder interests are usually underrepresented.

The time seems appropriate – and propitious – for regulators to lead the way to a major change in this paradigm while addressing the many urgent issues that face our nation. Regulators can make a difference, probably far beyond that for which they presently give themselves credit.

Smart Metering Options for Electric and Gas Utilities

Should utilities replace current consumption meters with “smart metering” systems that provide more information to both utilities and customers? Increasingly, the answer is yes. Today, utilities and customers are beginning to see the advantages of metering systems that provide:

  • Two-way communication between the utility and the meter; and
  • Measurement that goes beyond a single consolidated quarterly or monthly consumption total to include time-of-use and interval measurement.

For many, “smart metering” is synonymous with an advanced metering infrastructure (AMI) that collects, processes and distributes metered data effectively across the entire utility as well as to the customer base (Figure 1).

SMART METERING REVOLUTIONIZES UTILITY REVENUE AND SERVICE POTENTIAL

When strategically evaluated and deployed, smart metering can deliver a wide variety of benefits to utilities.

Financial Benefits

  • Significantly speeds cash flow and associated earnings on revenue. Smart metering permits utilities to read meters and send the data directly to the billing application. Bills go out immediately, cutting days off the meter-to-cash cycle.
  • Improves return on investment via faster processing of final bills. Customers can request disconnects as the moving van pulls away. Smart metering polls the meter and gives the customer the amount of the final bill. Online or credit card payments effectively transform final bill collection cycles from a matter of weeks to a matter of seconds.
  • Reduces bad debt. Smart metering helps prevent bad debt by facilitating the use of prepayment meters. It also reduces the size of overdue bills by enabling remote disconnects, which do not depend on crew availability.

Operational Cost Reductions

  • Slashes the cost to connect and disconnect customers. Smart metering can virtually eliminate the costs of field crews and vehicles previously required to change service from the old to the new residents of a metered property.
  • Lowers insurance and legal costs. Field crew insurance costs are high – and they’re even higher for employees subject to stress and injury while disconnecting customers with past-due bills. Remote disconnects through smart metering lower these costs. They also reduce medical leave, disability pay and compensation claims. Remote disconnects also significantly cut the number of days that employees and lawyers spend on perpetrator prosecutions and attempts to recoup damages.
  • Cuts the costs of managing vegetation. Smart metering can pinpoint blinkouts, reducing the cost of unnecessary tree trimming.
  • Reduces grid-related capital expenses. With smart metering, network managers can analyze and improve block-by-block power flows. Distribution planners can better size transformers. Engineers can identify and resolve bottlenecks and other inefficiencies. The benefits include increased throughput and reductions in grid overbuilding.
  • Shaves supply costs. Supply managers use interval data to fine-tune supply portfolios. Because smart metering enables more efficient procurement and delivery, supply costs decline.
  • Cuts fuel costs. Many utility service calls are “false alarms.” Checking meter status before dispatching crews prevents many unnecessary truck rolls.
  • Reduces theft. Smart metering can identify illegal attempts to reconnect meters, or to use energy and water in supposedly vacant premises. It can also detect theft by comparing flows through a valve or transformer with billed consumption.

Compliance Monitoring

  • Ensures contract compliance. Gas utilities can use one-hour interval meters to monitor compliance from interruptible, or “non-core,” customers and to levy fines against contract violators.
  • Ensures regulatory compliance. Utilities can monitor the compliance of customers with significant outdoor lighting by comparing similar intervals before and during a restricted time period. For example, a jurisdiction near a wildlife area might order customers to turn off outdoor lighting so as to promote breeding and species survival.
  • Reduces outage duration by identifying outages more quickly and pinpointing outage and nested outage locations. Smart metering also permits utilities to ensure outage resolution at every meter location.
  • Sizes outages more accurately. Utilities can ensure that they dispatch crews with the skills needed – and adequate numbers of personnel – to handle a specific job.
  • Provides updates on outage location and expected duration. Smart metering helps call centers inform customers about the timing of service restoration. It also facilitates display of outage maps for customer and public service use.
  • Detects voltage fluctuations. Smart metering can gather and report voltage data. Customer satisfaction rises with rapid resolution of voltage issues.

New Services

For utilities that offer services besides commodity delivery, smart metering provides an entry to such new business opportunities as:

  • Monitoring properties. Landlords reduce costs of vacant properties when utilities notify them of unexpected energy or water consumption. Utilities can perform similar services for owners of vacation properties or the adult children of aging parents.
  • Monitoring equipment. Power-use patterns can reveal a need for equipment maintenance. Smart metering enables utilities to alert owners or managers to a need for maintenance or replacement.
  • Facilitating home and small-business networks. Smart metering can provide a gateway to equipment networks that automate control or permit owners to access equipment remotely. Smart metering also facilitates net metering, offering some utilities a path toward involvement in small-scale solar or wind generation.

Environmental Improvements

Many of the smart metering benefits listed above include obvious environmental benefits. When smart metering lowers a utility’s fuel consumption or slows grid expansion, cleaner air and a better preserved landscape result. Smart metering also facilitates conservation through:

  • Leak detection. When interval reads identify premises where water or gas consumption never drops to zero, leaks are an obvious suspect.
  • Demand response and critical peak pricing. Demand response encourages more complete use of existing base power. Employed in conjunction with critical peak pricing, it also reduces peak usage, lowering needs for new generators and transmission corridors.
  • Load control. With the consent of the owner, smart metering permits utilities or other third parties to reduce energy use inside a home or office under defined circumstances.

CHALLENGES IN SMART METERING

Utilities preparing to deploy smart metering systems need to consider these important factors:

System Intelligence. There’s a continuing debate in the utility industry as to whether smart metering intelligence should be distributed or centralized. Initial discussions of advanced metering tended to assume intelligence embedded in meters. Distributed intelligence seemed part of a trend, comparable to “smart cards,” “smart locks” and scores of other everyday devices with embedded computing power.

Today, industry consensus favors centralized intelligence. Why? Because while data processing for purposes of interval billing can take place in either distributed or central locations, other applications for interval data and related communications systems cannot. In fact, utilities that opt for processing data at the meter frequently make it impossible to realize a number of the benefits listed above.

Data Volume. Smart metering inevitably increases the amount of meter data that utilities must handle. In the residential arena, for instance, using hour-long measurement intervals rather than monthly consumption totals replaces 12 annual reads per customer with 8,760 reads – a 730-fold increase.
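
The sketch below simply checks that arithmetic and extends it to an assumed one-million-meter deployment, to show the scale of data a billing or meter data management system must absorb.

```python
# Quick check of the data-volume arithmetic above, extended to a
# utility-scale deployment. The one-million-meter population is an
# assumed example, not a figure from the article.

reads_per_year_monthly = 12
reads_per_year_hourly = 365 * 24          # 8,760 hourly interval reads
increase_factor = reads_per_year_hourly / reads_per_year_monthly

meters = 1_000_000                        # assumed residential meter count
annual_interval_reads = meters * reads_per_year_hourly

print(f"Interval reads per meter per year: {reads_per_year_hourly:,}")
print(f"Increase over monthly reads:       {increase_factor:.0f}x")      # 730x
print(f"Annual reads for 1M meters:        {annual_interval_reads:,}")   # 8.76 billion
```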

In most utilities today, billing departments “own” metering data. Interval meter reads, however, are useful to many departments. These readings can provide information on load size and shape – data that can then be analyzed to help reduce generation and supply portfolio costs. Interval reads are even more valuable when combined with metering features like two-way communication between meter and utility, voltage monitoring and “last gasp” messages that signal outages.

This new data provides departments outside billing with an information treasure trove. But when billing departments control the data, others frequently must wait for access lest they risk slowing down billing to a point that damages revenue flow.

Meter Data Management. An alternative way to handle the data volume and multiple data requests is to offload them to a stand-alone meter data management (MDM) application.

MDM applications gather and store meter data. They can also perform the preliminary processing required for different departments and programs. Most important, MDM gives all units equal access to commonly held meter data resources (Figure 2).

MDM provides an easy pathway between data and the multiple applications and departments that need it. Utilities can more easily consolidate and integrate data from multiple meter types, and reduce the cost of building and maintaining application interfaces. Finally, MDM provides a place to store and use data, whose flow into the system cannot be regulated – for example, in situations such as the flood of almost simultaneous messages from tens of thousands of meters sending a “last gasp” during a major outage.
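
To make the concept concrete, the following sketch shows one minimal way such a shared repository might be organized. The class names, fields and event labels are invented for illustration and do not represent any particular MDM product.

```python
# Minimal, illustrative sketch of the MDM idea described above: a single
# shared store that ingests interval reads and event messages from meters
# and serves billing, outage management and planning from the same data.
# All class, field and event names here are invented for illustration.

from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime
from typing import Optional, List

@dataclass
class MeterRead:
    meter_id: str
    timestamp: datetime
    kwh: Optional[float]          # None for event-only messages
    event: Optional[str] = None   # e.g. "LAST_GASP" outage signal

class MeterDataStore:
    """Commonly held meter data, equally accessible to all departments."""
    def __init__(self):
        self._reads = defaultdict(list)   # meter_id -> list of MeterRead

    def ingest(self, read: MeterRead) -> None:
        # Ingestion is decoupled from billing, so a flood of near-simultaneous
        # "last gasp" messages during a major outage does not block billing.
        self._reads[read.meter_id].append(read)

    def monthly_total(self, meter_id: str, year: int, month: int) -> float:
        """Billing view: consolidated consumption for a billing period."""
        return sum(r.kwh or 0.0 for r in self._reads[meter_id]
                   if r.timestamp.year == year and r.timestamp.month == month)

    def outage_events(self, event: str = "LAST_GASP") -> List[MeterRead]:
        """Outage-management view: meters that reported the given event."""
        return [r for reads in self._reads.values() for r in reads
                if r.event == event]

# Billing and outage management query the same store independently.
store = MeterDataStore()
store.ingest(MeterRead("M1001", datetime(2009, 3, 1, 0, 0), 1.2))
store.ingest(MeterRead("M1001", datetime(2009, 3, 1, 1, 0), 0.9))
store.ingest(MeterRead("M1001", datetime(2009, 3, 15, 17, 3), None, "LAST_GASP"))
print(store.monthly_total("M1001", 2009, 3))   # 2.1
print(len(store.outage_events()))              # 1
```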

WEIGHING THE COSTS AND BENEFITS OF SMART METERING

Smart metering on a mass scale is relatively new. No utility can answer all questions in advance. There are ways, however, to mitigate the risks:

Consider all potential benefits. Smart metering may be difficult to cost-justify if the case rests solely on customer acceptance of demand response. It is far easier to justify when the deployment plan also counts the value of the many other benefits listed above.

Evaluate pilots. Technology publications are full of stories about successful pilots followed by unsuccessful products. That’s because pilots frequently protect participants from harsh financial consequences. And it’s difficult for utility personnel to avoid spending time and attention on participants in ways that encourage them to buy into the program. Real-life program rollouts lack these elements.

Complicating the problem are likely differences between long-term and short-term behavior. The history of gasoline conservation programs suggests that while consumers initially embrace incentives to car pool or use public transportation, few make such changes on a permanent basis.

Examining the experiences of utilities in the smart metering forefront – in Italy, for example, or in California and Idaho – may provide more information than a pilot.

Develop a complete business case. Determining the cost-benefit ratio of smart metering is challenging. Some costs – for example, meter prices and installation charges – may be relatively easy to determine. Others require careful calculations. As an example, when interval meters replace time-of-use meters, how does the higher cost of interval meters weigh against the fact that they don’t require time-of-use manual reprogramming?
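
The sketch below illustrates one way that particular trade-off might be framed; every figure in it is an assumption chosen only to show the calculation, not vendor or survey data.

```python
# Illustrative sketch of the weighing described above: an interval meter's
# higher purchase cost against the avoided cost of manually reprogramming
# time-of-use meters. Every figure is an assumption for illustration only.

interval_meter_cost = 120.0       # $ per meter, installed (assumed)
tou_meter_cost = 90.0             # $ per meter, installed (assumed)
reprogram_cost_per_visit = 25.0   # field visit to change a TOU schedule (assumed)
reprogram_visits_per_year = 0.5   # e.g., a schedule change every two years (assumed)
meter_life_years = 15             # assumed service life

extra_capital = interval_meter_cost - tou_meter_cost
avoided_reprogramming = (reprogram_cost_per_visit *
                         reprogram_visits_per_year * meter_life_years)

print(f"Extra capital per interval meter: ${extra_capital:.2f}")
print(f"Avoided reprogramming over life:  ${avoided_reprogramming:.2f}")
print(f"Net benefit per meter (undiscounted): "
      f"${avoided_reprogramming - extra_capital:.2f}")
```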

As in any business case, some inputs can only be estimated:

  • Will customer sign-up equal the number needed to break even?
  • How long will the new meters last?
  • Do current meter readers need to be retrained, and if so, what will that cost?
  • Will smart metering help retain customers that might otherwise be lost?
  • Can new services such as equipment efficiency analyses be offered, and if so, how much should the utility charge for them?

Since some utilities are already rolling out smart metering programs, it’s becoming easier to obtain real-life numbers (rather than estimates) to plug into your business case.

CONSIDER ALTERNATIVES

Technology is “smart” only when it reduces the cost of achieving specified objectives. Utilities may find it valuable to try lower-cost routes to some results, including:

  • Customer charges to prevent unnecessary truck rolls. Such fees are common among telephone service providers and have worked well for some gas utilities responding to repeated false alarms from householder-installed carbon monoxide detectors.
  • Time-of-use billing with time/rate relationships that remain constant for a year or more. This gives consumers opportunities to make time-shifting a habit.
  • Customer education to encourage consumers to use the time-shifting features on their appliances as a contribution to the environment. Most consumers have no idea that electricity goes to waste at night. Keeping emissions out of the air and transmission towers out of the landscape could be far more compelling to many consumers than a relatively small saving resulting from an on- and off-peak pricing differential.
  • Month-to-month rate variability. One study found that approximately a third of the efficiency gains from real-time interval pricing could be captured by simply varying the flat retail rates monthly – and at no additional cost for metering. [1] While a third of the efficiency gains might not be enough to attain long-term goals, they might be enough to fill in a shorter-term deficit, permitting technology costs and regulatory climates to stabilize before decisions must be made.
  • Multitier pricing based on consumption. Today, two-tier pricing – that is, a lower rate for the first few hundred kilowatt-hours per month and a higher rate for additional usage – is common. However, three or four tiers might better capture the attention of those whose consumption is particularly high – owners of large homes and pool heaters, for instance – without burdening those at the lower end of the economic ladder. Tiers plus exception handling for hardships like high-consuming medical equipment would almost certainly be less difficult and expensive than universal interval metering. (A simple tiered-bill calculation is sketched below.)
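
The following sketch shows how such a multitier bill might be computed; the breakpoints and rates are hypothetical, not drawn from any actual tariff.

```python
# Minimal sketch of a multitier bill calculation, using hypothetical
# breakpoints and rates rather than any actual tariff. Hardship exceptions
# (e.g., high-consumption medical equipment) could be handled by shifting
# a customer's breakpoints upward rather than by installing interval meters.

def tiered_bill(kwh, tiers):
    """tiers: list of (upper_bound_kwh, rate_per_kwh); last bound is None."""
    bill, prior_bound = 0.0, 0
    for bound, rate in tiers:
        block = (kwh if bound is None else min(kwh, bound)) - prior_bound
        if block <= 0:
            break
        bill += block * rate
        prior_bound = kwh if bound is None else bound
    return bill

# Hypothetical four-tier residential schedule ($/kWh)
tiers = [(300, 0.10), (600, 0.13), (1000, 0.18), (None, 0.25)]
for usage in (250, 550, 1400):   # small apartment, typical home, large home
    print(f"{usage} kWh -> ${tiered_bill(usage, tiers):.2f}")
```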

A thorough evaluation of the benefits and challenges of advanced metering systems, along with an understanding of alternative means to achieving those benefits, is essential to utilities considering deployment of advanced metering systems.

Note: The preceding was excerpted from the Oracle white paper “Smart Metering for Electric and Gas Utilities.” To receive the complete paper, Email oracleutilities_ww@oracle.com.

ENDNOTE

  1. Holland and Mansur, “The Distributional and Environmental Effects of Time-varying Prices in Competitive Electricity Markets.” Results published in “If RTP Is So Great, Why Don’t We See More of It?” Center for the Study of Energy Markets Research Review, University of California Energy Institute, Spring 2006. Available at www.ucei.berkeley.edu/

Policy and Regulatory Initiatives And the Smart Grid

Public policy is commonly defined as a plan of action designed to guide decisions for achieving a targeted outcome. In the case of smart grids, new policies are needed if smart grids are actually to become a reality. This statement may sound dire, given the recent signing into law of the 2007 Energy Independence and Security Act (EISA) in the United States. And in fact, work is underway in several countries to encourage smart grids and smart grid components such as smart metering. However, the risk still exists that unless stronger policies are enacted, grid modernization investments will fail to leverage the newer and better technologies now emerging, and smart grid efforts will never move beyond demonstration projects. This would be an unfortunate result when you consider the many benefits of a true smart grid: cost savings for the utility, reduced bills for customers, improved reliability and better environmental stewardship.

REGIONAL AND NATIONAL EFFORTS

As mentioned above, several regions are experimenting with smart grid provisions. At the national level, the U.S. federal government has enacted two pieces of legislation that support advanced metering and smart grids. The Energy Policy Act of 2005 directed U.S. utility regulators to consider time-of-use meters for their states. The 2007 EISA legislation has several provisions, including a list of smart grid goals to encourage two-way, real-time digital networks that stretch from a consumer’s home to the distribution network. The law also provides monies for regional demonstration projects and matching grants for smart grid investments. The EISA legislation also mandates the development of an “interoperability framework.”

In Europe, the European Union (E.U.) introduced a strategic energy technology plan in 2006 for the development of a smart electricity system over the next 30 years. The European Technology Platform organization includes representatives from industry, transmission and distribution system operators, research bodies and regulators. The organization has identified objectives and proposes a strategy to make the smart grid vision a reality.

Regionally, several U.S. states and Canadian provinces are focused on smart grid investments. In Canada, the Ontario Energy Board has mandated smart meters, with meter installation completion anticipated by 2010. In Texas, the Public Utilities Commission of Texas (PUCT) has finalized advanced metering legislation that authorizes metering cost recovery through surcharges. The PUCT also stipulated key components of an advanced metering system: two-way communications, time-date stamp, remote connect/disconnect, and access to consumer usage for both the consumer and the retail energy provider. The Massachusetts State Senate approved an energy bill that includes smart grid and time-of-use pricing. The bill requires that utilities submit a plan by Sept. 1, 2008, to the Massachusetts Public Utilities Commission, establishing a six-month pilot program for a smart grid. Most recently, California, Washington state and Maryland all introduced smart grid legislation.

AN ENCOMPASSING VISION

While these national and regional examples represent just a portion of the ongoing activity in this area, the issue remains that smart grid and advanced metering pilot programs do not guarantee a truly integrated, interoperable, scalable smart grid. Granted, a smart grid is not achieved overnight, but an encompassing smart grid vision should be in place as modernization and metering decisions are made, so that investments remain consistent with that plan. Obviously, challenges – such as financing, system integration and customer education – exist in moving from pilot to full grid deployment. However, many utility and regulatory personnel perceive these challenges to be ones of costs and technology readiness.

The costs are considerable. KEMA, the global energy consulting firm, estimates that the average cost of a smart meter project (representing just a portion of a smart grid project) is $775 million. The E.U.’s Strategic Energy Technology Plan estimates that the total smart grid investment required could be as much as $750 billion. These amounts are staggering when you consider that the market capitalization of all U.S. investor-owned electric utilities is roughly $550 billion. However, they’re not nearly as significant when you subtract the costs of fixing the grid using business-as-usual methods. Transmission and distribution expenditures are occurring with and without intelligence. The Energy Information Administration (EIA) estimates that between now and 2020 more than $200 billion will be spent to maintain and expand electricity transmission and distribution infrastructures in the United States alone.

Technology readiness will always be a concern in large system projects. Advances are being made in communication, sensor and security technologies, and IT. The Federal Communications Commission is pushing for auctions to accelerate adoption of different communication protocols. Price points are decreasing for pervasive cellular communication networks. Electric power equipment manufacturers are utilizing the new IEC 61850 standard to ensure interoperability among sensor devices. Vendors are developing security products that will enable North American Electric Reliability Corp. (NERC) critical infrastructure protection (CIP) compliance.

In addition, IT providers are using event-driven architecture to ensure responsiveness to external events, rather than processing transactional events, and reaching new levels with high-speed computer analytics. Leading service-oriented architecture companies are working with utilities to establish the underlying infrastructure critical to system integration. Finally, work is occurring in the standards community by the E.U., the GridWise Architecture Council (GAC), IntelliGrid, the National Energy Technology Laboratory (NETL) and others to create frameworks for linking communication and electricity interoperability among devices, systems and data flows.

THE TIME IS NOW

These challenges should not halt progress – especially when one considers the societal benefits. Time stops for no one, and certainly in the case of the energy sector that statement could not be more accurate. Energy demand is increasing. The Energy Information Administration estimates that annual energy demand will increase roughly 50 percent over the next 25 years. Meanwhile, the debate over global warming seems to have waned. Few authorities are disputing the escalating concentrations of several greenhouse gases due to the burning of fossil fuels. The E.U. is attempting to decrease emissions through its 2006 Energy Efficiency Directive. Many industry observers in the United States believe that there will likely be federal regulation of greenhouse gases within the next three years.

A smart grid would address many of these issues, giving consumers options to manage their usage and costs. By optimizing asset utilization, the smart grid will provide savings in that there is less need to build more power plants to meet increased electricity demand. As a self-healing grid that detects, responds and restores functions, the smart grid can greatly reduce the economic impact of blackouts and other power interruptions.

A smart grid that provides the needed power quality can ensure the strong and resilient energy infrastructure necessary for the 21st-century economy. Lastly, a smart grid will enable plug-and-play integration of renewables, distributed resources and control systems.

INCENTIVES FOR MODERNIZATION

Despite all of these potential benefits, more incentives are needed to drive grid modernization efforts. Several mechanisms are available to encourage investment. Some utilities are already using or evaluating alternative rate structures such as net metering and revenue decoupling to give utilities and consumers incentives to use less energy. Net metering awards energy incentives or credits for consumer-based renewables, and revenue decoupling is a mechanism designed to eliminate or reduce the dependence of a utility’s revenues on sales. Other programs – such as energy-efficiency or demand-reduction incentives – motivate consumers and businesses to adopt long-term energy-efficient behaviors (such as using programmable thermostats) and to consider energy efficiency when using appliances and computers, and even operating their homes.

Policy and regulatory strategy should incorporate these means and include others, such as accelerated depreciation and tax incentives. Accelerated depreciation encourages businesses to purchase new assets, since depreciation is steeper in the earlier years of the asset’s life and taxes are deferred to a later period. Tax incentives could be put in place for purchasing smart grid components. Utility commissions could require utilities to consider all societal benefits, rather than just rate impacts, when crafting the business case. Utilities could take federal income tax credits for the investments. Leaders could include smart grid technologies as a critical component of their overall energy policy.

Only when all of these policies and incentives are put in place will smart grids truly become a reality.

Collaborative Policy Making And the Smart Grid

A search on Google for the keywords smart grid returns millions of results, and a list of organizations talking about or working on smart grid initiatives would likely be nearly as long. Though the comparison is made half in jest, it illustrates the proliferation of groups interested in redesigning and rebuilding the varied power infrastructure to support the future economy. Since building a smart infrastructure is clearly in the public’s interest, it’s important that all affected stakeholders – from utilities and legislators to consumers and regulators – participate in creating the vision, policies and framework for these critical investments.

One organization, the GridWise Alliance, was formed specifically to promote a broad collaborative effort for all interest groups shaping this agenda. Representing a consortium of more than 60 public organizations and private companies, GridWise Alliance members are aligned around a shared vision of a transformed and modern electric system that integrates infrastructure, processes, devices, information and market structure so that energy can be generated, distributed and consumed more reliably and cost-effectively.

From the time of its creation in 2003, the GridWise Alliance has focused on the federal legislative process to ensure that smart grid programs and policies were included in the priorities of the various federal agencies. The Alliance continues to focus on articulating to elected officials, public policy agencies and the private sector the urgent need to build a smarter 21st-century utility infrastructure. Last year, the Alliance provided significant input into the development of smart grid legislation, which was passed by both houses of Congress and signed into law by the President at the end of 2007. The Alliance has evolved to become one of the “go-to” parties for members of Congress and their staffs as they prepare for new legislation aimed at transforming to a modern and intelligent electricity grid.

The Alliance continues to demonstrate its effectiveness in various ways. The chair of the Alliance, Guido Bartels, along with representatives from seven other Alliance member companies, was recently named to the U.S. Department of Energy’s Electricity Advisory Committee (EAC). This organization is being established to “enhance leadership in electricity delivery modernization and provide senior-level counsel to DOE on ways that the nation can meet the many barriers to moving forward, including the deployment of smart grid technologies.” Another major area of focus is the national GridWeek conference. This year’s GridWeek 2008 is focused on “delivering sustainable energy.” The Alliance expects more than 800 participants to discuss and debate topics such as Enabling Energy Efficiency, Smart Grid in a Carbon Economy and Securing the Smart Grid.

Going forward, the Alliance will expand its reach by continuing to broaden its membership and by working with other U.S. stakeholder organizations to provide a richer understanding of the value and impacts of a smart grid. The Alliance is already working with organizations such as the NARUC-FERC Smart Grid Collaborative, the National Conference of State Legislatures (NCSL), the National Governors Association (NGA), the American Public Power Association (APPA) and others. Beyond U.S. borders, the Alliance will continue to strengthen its relations with other smart grid organizations, like those in the European Union and Australia, to ensure that we’re gaining insight and best practices from other markets.

Collaboration such as that exemplified by the Alliance is critical for making effective and impactful public policy. The future of our nation’s electricity infrastructure, economy and, ultimately, health and safety depends on the leadership of organizations such as the GridWise Alliance.

Utility Mergers and Acquisitions: Beating the Odds

Merger and acquisition activity in the U.S. electric utility industry has increased following the 2005 repeal of the Public Utility Holding Company Act (PUHCA). A key question for the industry is not whether M&A will continue, but whether utility executives are prepared to manage effectively the complex regulatory challenges that have evolved.

M&A activity is (and always has been) the most potent, visible and (often) irreversible option available to utility CEOs who wish to reshape their portfolios and meet their shareholders’ expectations for returns. However, M&A has too often been applied reflexively – much like the hammer that sees everything as a nail.

The American utility industry is likely to undergo significant consolidation over the next five years. There are several compelling rationales for consolidation. First, M&A has the potential to offer real economic value. Second, capital-market and competitive pressures favor larger companies. Third, the changing regulatory landscape favors larger entities with the balance sheet depth to weather the uncertainties on the horizon.

LEARNING FROM THE PAST

Historically, however, acquirers have found it difficult to derive value from merged utilities. With the exception of some vertically integrated deals, most M&A deals have been value-neutral or value-diluting. This track record can be explained by a combination of factors: steep acquisition premiums, harsh regulatory givebacks, anemic cost reduction targets and (in more than half of the deals) a failure to achieve targets quickly enough to make a difference. In fact, over an eight-year period, less than half the utility mergers actually met or exceeded the announced cost reduction levels resulting from the synergies of the merged utilities (Figure 1).

The lessons learned from these transactions can be summarized as follows: Don’t overpay; negotiate a good regulatory deal; aim high on synergies; and deliver on them.

In trying to deliver value-creating deals, CEOs often bump up against the following realities:

  • The need to win approval from the target’s shareholders drives up acquisition premiums.
  • The need to receive regulatory approval for the deal and to alleviate organizational uncertainty leads to compromises.
  • Conservative estimates of the cost reductions resulting from synergies are made to reduce the risk of giving away too much in regulatory negotiations.
  • Delivering on synergies proves tougher than anticipated because of restrictions agreed to in regulatory deals or because of the organizational inertia that builds up during the 12- to 18-month approval process.

LOOKING AT PERFORMANCE

Total shareholder return (TSR) is significantly affected by two external deal negotiation levers – acquisition premiums and regulatory givebacks – and two internal levers – synergies estimated and synergies delivered. Between 1997 and 2004, mergers in all U.S. industries created an average TSR of 2 to 3 percent relative to the market index two years after closing. In contrast, utility mergers typically underperformed the utility index by about 2 to 3 percent three years after the transaction announcement. T&D mergers underperformed the index by about 4 percent, whereas mergers of vertically integrated utilities beat the index by about 1 percent three years after the announcement (Figure 2).

For 10 recent mergers, the lower the share of the merger savings retained by the utilities and the higher the premium paid for the acquisition, the greater the likelihood that the deal destroyed shareholder value, resulting in negative TSR.

Although these appear to be obvious pitfalls that a seasoned management team should be able to recognize and overcome, translating this knowledge into tangible actions and results has been difficult.

So how can utility boards and executives avoid being trapped in a cycle of doing the same thing again and again while expecting different results (Einstein’s definition of insanity)? We suggest that a disciplined end-to-end M&A approach will (if well-executed) tilt the balance in the acquirer’s favor and generate long-term shareholder value. That approach should include the following four broad objectives:

  • Establishment of compelling strategic logic and rationale for the deal;
  • A carefully managed regulatory approval process;
  • Integration that takes place early and aggressively; and
  • A top-down approach for designing realistic but ambitious economic targets.

GETTING IT RIGHT: FOUR BROAD OBJECTIVES THAT ENHANCE M&A VALUE CREATION

To complete successful M&As, utilities must develop a more disciplined approach that incorporates the lessons learned from both utilities and other industrial sectors. At the highest level, adopting a framework with four broad objectives will enhance value creation before the announcement of the deal and through post-merger integration. To do this, utilities must:

  1. Establish a compelling strategic logic and rationale for the deal. A critical first step is asking the question, why do the merger? To answer this question, deal participants must:
    • Determine the strategic logic for long-term value creation with and without M&A. Too often, executives are optimistic about the opportunity to improve other utilities, but they overlook the performance potential in their current portfolio. For example, without M&A, a utility might be able to invest and grow its rate base, reduce the cost of operations and maintenance, optimize power generation and assets, explore more aggressive rate increases and changes to the regulatory framework, and develop the potential for growth in an unregulated environment. Regardless of whether a utility is an acquirer or a target, a quick (yet comprehensive) assessment will provide a clear perspective on potential shareholder returns (and risks) with and without M&A.
    • Conduct a value-oriented assessment of the target. Utility executives typically have an intuitive feel for the status of potential M&A targets adjacent to their service territories and in the broader subregion. However, when considering M&A, they should go beyond the obvious criteria (size and geography) and candidates (contiguous regional players) to consider specific elements that expose the target’s value potential for the acquirer. Such value drivers could include an enhanced power generation and asset mix, improvements in plant availability and performance, better cost structures, an ability to respond to the regulatory environment, and a positive organizational and cultural fit. Also critical to the assessment are the noneconomic aspects of the deal, such as headquarters sharing, potential loss of key personnel and potential paralysis of the company (for example, when a merger or acquisition freezes a company’s ability to pursue M&A and other large initiatives for two years).
    • Assess internal appetites and capabilities for M&A. Successful M&A requires a broad commitment from the executive team, enough capable people for diligence and integration, and an appetite for making the tough decisions essential to achieving aggressive targets. Acquirers should hold pragmatic executive-level discussions with potential targets to investigate such aspects as cultural fit and congruence of vision. Utility executives should conduct an honest assessment of their own management teams’ M&A capabilities and depth of talent and commitment. Among historic M&A deals, those that involved fewer than three states and those in which the acquirer was twice as big as the target were easier to complete and realized more value.
  2. Carefully manage the regulatory approval process. State regulatory approvals present the largest uncertainty and risk in utility M&A, clearly affecting the economics of any deal. However, too often, these discussions start and end with rate reductions so that the utility can secure approvals. The regulatory approval process should be similar to the rigorous due diligence that’s performed before the deal’s announcement. This means that when considering M&A, utilities should:
    • Consider regulatory benefits beyond the typical rate reductions. The regulatory approval process can be used to create many benefits that share rewards and risks, and to provide advantages tailored to the specific merger’s conditions. Such benefits include a stronger combined balance sheet and a potential equity infusion into the target’s subsidiaries; an ability to better manage and hedge a larger combined fuel portfolio; the capacity to improve customer satisfaction; a commitment to specific rate-based investment levels; and a dedication to relieving customer liability on pending litigation. For example, to respond to regulatory policies that mandate reduced emissions, merged companies can benefit not only from larger balance sheets but also from equity infusions to invest in new technology or proven technologies. Merged entities are also afforded the opportunity to leverage combined emissions reduction portfolios.
    • Systematically price out a full range of regulatory benefits. The range should include the timing of “gives” (that is, the sharing of synergy gains with customers in the form of lower rates) as a key value lever; dedicated valuations of potential plans and sensitivities from all stakeholders’ perspectives; and a determination of the features most valued by regulators so that they can be included in a strategy for getting M&A approvals. Executives should be wary of settlements tied to performance metrics that are vaguely defined or inadequately tracked. They should also avoid deals that require new state-level legislation, because too much time will be required to negotiate and close these complex deals. Finally, executives should be wary of plans that put shareholder benefits at the end of the process, because current PUC decisions may not bind future ones.
    • Be prepared to walk away if the settlement conditions imposed by the regulators dilute the economics of the deal. This contingency plan requires that participating executives agree on the economic and timing triggers that could lead to an unattractive deal.
  3. Integrate early and aggressively. Historically, utility transactions have taken an average of 15 months from announcement to closing, given the required regulatory approvals. With such a lengthy time lag, it’s been easy for executives to fall into the trap of putting off important decisions related to the integration and post-merger organization. This delay often leads to organizational inertia as employees in the companies dig in their heels on key issues and decisions rather than begin to work together. To avoid such inertia, early momentum in the integration effort, embodied in the steps outlined below, is critical.
    • Announce the executive team’s organization early on. Optimally, announcements should be made within the first 90 days, and three or four well-structured senior-management workshops with the two CEOs and key executives should occur within the first two months. The decisions announced should be based on such considerations as the specific business unit and organizational options, available leadership talent and alignment with synergy targets by area.
    • Make top-down decisions about integration approach according to business and function. Many utility mergers appear to adopt a “template” approach to integration that leads to a false sense of comfort regarding the process. Instead, managers should segment decision making for each business unit and function. For example, when the acquirer has a best-practice model for fossil operations, the target’s plants and organization should simply be absorbed into the acquirer’s model. When both companies have strong practices, a more careful integration will be required. And when both companies need to transform a particular function, the integration approach should be tailored to achieve a change in collective performance.
    • Set clear guidelines and expectations for the integration. A critical part of jump-starting the integration process is appointing an integration officer with true decision-making authority, and articulating the guidelines that will serve as a road map for the integration teams. These guidelines should clearly describe the roles of the corporation and individual operating teams, as well as provide specific directions about control and organizational layers and review and approval mechanisms for major decisions.
    • Systematically address legal and organizational bottlenecks. The integration’s progress can be impeded by legal or organizational constraints on the sharing of sensitive information. In such situations, significant progress can be achieved by using clean teams – neutral people who haven’t worked in the area before – to ensure data is exchanged and sanitized analytical results are shared. Improved information sharing can aid executive-level decision making when it comes to commercially sensitive areas such as commercial marketing-and-trading portfolios, performance improvements, and other unregulated business-planning and organizational decisions.
  4. Use a top-down approach to design realistic but ambitious economic targets. Synergies from utility mergers have short shelf lives. With limits on a post-merger rate freeze or rate-case filing, the time to achieve the targets is short. To achieve their economic targets, merged utilities should:
    • Construct the top five to 10 synergy initiatives to capture value and translate them into road maps with milestones and accountabilities. Identifying and promoting clear targets early in the integration effort lead to a focus on the merger’s synergy goals.
    • Identify the links between synergy outcomes and organizational decisions early on, and manage those decisions from the top. Such top-down decisions should specify which business units or functional areas are to be consolidated. Integration teams often become gridlocked over such decisions because of conflicts of interest and a lack of objectivity.
    • Control the human resources policies related to the merger. Important top-down decisions include retention and severance packages and the appointment process. Alternative severance, retirement and retention plans should be priced explicitly to ensure a tight yet fair balance between the plans’ costs and benefits.
    • Exploit the merger to create opportunities for significant reductions in the acquirer’s cost base. Typical merger processes tend to focus on reductions in the target’s cost base. However, in many cases the acquirer’s cost base can also be reduced. Such reductions can be a significant source of value, making the difference between success and failure. They also communicate to the target’s employees that the playing field is level.
    • Avoid the tendency to declare victory too soon. Most synergies are related to standardization and rationalization of practices, consolidation of line functions and optimization of processes and systems. These initiatives require discipline in tracking progress against key milestones and cost targets. They also require a tough-minded assessment of red flags and cost increases over a sustained time frame – often two to three years after the closing.

RECOMMENDATIONS: A DISCIPLINED PROCESS IS KEY

Despite the inherent difficulties, M&A should remain a strategic option for most utilities. If they can avoid the pitfalls of previous rounds of mergers, executives have an opportunity to create shareholder value, but a disciplined and comprehensive approach to both the M&A process and the subsequent integration is essential.

Such an approach begins with executives who insist on a clear rationale for value creation with and without M&A. Their teams must make pragmatic assessments of a deal’s economics relative to its potential for improving base business. If they determine the deal has a strong rationale, they must then orchestrate a regulatory process that considers broad options beyond rate reductions. Having the discipline to walk away if the settlement conditions dilute the deal’s economics is a key part of this process. A disciplined approach also requires that an aggressive integration effort begin as soon as the deal has been announced – an effort that entails a modular approach with clear, fast, top-down decisions on critical issues. Finally, a disciplined process requires relentless follow-through by executives if the deal is to achieve ambitious yet realistic synergy targets.

The Technology Demonstration Center

When a utility undergoes a major transformation – such as adopting new technologies like advanced metering – the costs and time involved require that the changes are accepted and adopted by each of the three major stakeholder groups: regulators, customers and the utility’s own employees. A technology demonstration center serves as an important tool for promoting acceptance and adoption of new technologies by displaying tangible examples and demonstrating the future customer experience. IBM has developed the technology center development framework as a methodology to efficiently define the strategy and tactics required to develop a technology center that will elicit the desired responses from those key stakeholders.

KEY STAKEHOLDER BUY-IN

To successfully implement major technology change, utilities need to consider the needs of the three major stakeholders: regulators, customers and employees.

Regulators. Utility regulators are naturally wary of any transformation that affects their constituents on a grand scale, and thus their concerns must be addressed to encourage regulatory approval. The technology center serves two purposes in this regard: educating the regulators and showing them that the utility is committed to educating its customers on how to receive the maximum benefits from these technologies.

Given the size of a transformation project, it’s critical that regulators support the increased spending required and any consequent increase in rates. Many regulators, even those who favor new technologies, believe that the utility will benefit the most and should thus cover the cost. If utilities expect cost recovery, the regulators need to understand the complexity of new technologies and the costs of the interrelated systems required to manage these technologies. An exhibit in the technology center can go “behind the curtain,” giving regulators a clearer view of these systems, their complexity and the overall cost of delivering them.

Finally, each stage in the deployment of new technologies requires a new approval process and provides opportunities for resistance from regulators. For the utility, staying engaged with regulators throughout the process is imperative, and the technology center provides an ideal way to continue the conversation.

Customers. Once regulators give their approval, the utility must still make its case to the public. The success of a new technology project rests on customers’ adoption of the technology. For example, if customers continue using appliances as they always have, spreading usage evenly through the day rather than shifting it in response to off-peak pricing, the utility will fail to achieve the major planned cost advantage: a reduction in production facilities. Wide-scale customer adoption is therefore key. Indeed, general estimates indicate that customer adoption rates of roughly 20 percent are needed to break even in a critical peak-pricing model.[1]
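
To see where a figure like 20 percent can come from, it helps to walk through the arithmetic. The short sketch below is purely illustrative: the program cost, the peak reduction per participant and the avoided capacity value are hypothetical assumptions, not figures from the source cited in the endnote.

    # Illustrative break-even arithmetic for a critical peak-pricing program.
    # All inputs are hypothetical assumptions chosen only to show the calculation.
    program_cost_per_customer = 30.0       # $/customer/year: metering, billing systems, outreach
    peak_reduction_per_participant = 1.5   # kW each participating customer shifts off-peak
    avoided_capacity_value = 100.0         # $/kW-year of peaking capacity the utility avoids

    # Annual value created by each customer who actually changes behavior.
    value_per_participant = peak_reduction_per_participant * avoided_capacity_value   # $150/year

    # The per-customer program cost must be covered by the participating fraction.
    break_even_adoption = program_cost_per_customer / value_per_participant
    print(f"Break-even adoption rate: {break_even_adoption:.0%}")   # -> 20%

With these hypothetical inputs, the program pays for itself only if roughly one customer in five responds to the price signal, which is precisely why adoption, and hence customer education, matters so much.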

Given the complexity of these technologies, it’s quite possible that customers will fail to see the value of the program – particularly in the context of the changes in energy use they will need to undertake. A well-designed campaign that demonstrates the benefits of tiered pricing will go a long way toward encouraging adoption. By showcasing the future customer experience, the technology center can provide a tangible example that serves to create buzz, get customers excited and educate them about benefits.

Employees. Obtaining employee buy-in on new programs is as important as winning over the other two stakeholder groups. For transformation to be successful, an understanding of the process must be moved out of the boardroom and communicated to the entire company. Employees whose responsibilities will change need to know how they will change, how their interactions with the customer will change and what benefits are in it for them. At the same time, utility employees are also customers. They talk to friends and spread the message. They can be the utility’s best advocates or its greatest detractors. Proper internal communication is essential for a smooth transition from the old ways to the new, and the technology center can and should be used to educate employees on the transformation.

OTHER GOALS FOR THE TECHNOLOGY DEMONSTRATION CENTER

The objectives discussed above represent one possible set of goals for a technology center. Utilities may well have other reasons for erecting the technology center, and these should be addressed as well. As an example, the utility may want to present a tangible display of its plans for the future to its investors, letting them know what’s in store for the company. Likewise, the utility may want to be a leader in its industry or region, and the technology center provides a way to demonstrate that to its peer companies. The utility may also want to be recognized as a trendsetter in environmental progress, and a technology center can help people understand the changes the company is making.

The technology center needs to be designed with the utility’s particular environment in mind. The technology center development framework is, in essence, a road map that helps the utility rank the center’s key strategic priorities and components so as to maximize its impact on the intended audience.

DEVELOPING THE TECHNOLOGY CENTER

Unlike other aspects of a traditional utility, the technology center needs to appeal to customers visually, as well as explain the significance and impact of new technologies. The technology center development framework presented here was developed by leveraging trends and experiences in retail, including “experiential” retail environments such as the Apple Stores in malls across the United States. These new retail environments offer a much richer and more interactive experience than traditional retail outlets, which may employ some basic merchandising and simply offer products for sale.

Experiential environments have arisen partly as a response to competition from online retailers and the increased complexity of products. The technology center development framework applies the same state-of-the-art design strategies adopted by high-end retailers, helping the utility’s executives and leadership create a compelling experience that elicits the desired response and buy-in from the stakeholders described above.

Phase 1: Technology Center Strategy

During this phase, a utility typically spends four to eight weeks developing an optimal strategy for the technology center. To accomplish this, planners identify and delineate in detail three major elements:

  • The technology center’s goals;
  • Its target audience; and
  • Content required to achieve those goals.

As shown in Figure 1, these pieces are not defined in isolation; rather, the process is iterative: The technology center’s goals set the stage for determining the audience and content, and those two elements in turn influence each other. The outcome of this phase is a complete strategy road map that defines the direction the technology center will take.

To understand the Phase 1 objectives properly, it’s necessary to examine the logic behind them. The methodology focuses on the three elements mentioned previously – goals, audience and content – because these are easily overlooked and misaligned by organizations.

Utility companies inevitably face multiple and competing goals. Thus, it’s critical to identify the goals specifically associated with the technology center and to distinguish them from other corporate goals or goals associated with implementing a new technology. Taking this step forces the organization to define which goals can be met by the technology center with the greatest efficiency, and establishes a clear plan that can be used as a guide in resolving the inevitable future conflicts.

Similarly, the stakeholders served by the utility represent distinct audiences. Based on the goals of the center and the organization, as well as the internal expectations set by managers, the target audience needs to be well defined. Many important facets of the technology center, such as content and location, will be partly determined by the target audience. Finally, the right content is critical to success. A regulator may want to see different information than customers.

In addition, the audience’s specific needs dictate different content options. Do the utility’s customers care about the environment? Do they care more about advances in technology? Are they concerned about how their lives will change in the future? These questions need to be answered early in the process.

The key to successfully completing Phase 1 is constant engagement with the utility’s decision makers, since their expectations for the technology center will vary greatly depending on their responsibilities. Throughout this phase, the technology center’s planners need to meet with these decision makers on a regular basis, gather and respect their opinions, and come to the optimal mix for the utility on the whole. This can be done through interviews or a series of workshops, whichever is better suited for the utility. We have found that by employing this process, an organization can develop a framework of goals, audience and content mix that everyone will agree on – despite differing expectations.

Phase 2: Design Characteristics

The second phase of the development framework focuses on the high-level physical layout of the technology center. These “design characteristics” will affect the overall layout and presentation of the technology center.

We have identified six key characteristics that need to be determined. Each is developed as a trade-off between two extremes; this helps utilities understand the issues involved and debate the solutions. Again, there are no right answers to these issues – the optimal solution depends on the utility’s environment and expectations:

  • Small versus large. The technology center can be small, like a cell phone store, or large, like a Best Buy.
  • Guided versus self-guided. The center can be designed to allow visitors to guide themselves, or staff can be retained to guide visitors through the facility.
  • Single versus multiple. There may be a single site, or multiple sites. As with the first issue (small versus large), one site may be a large flagship facility, while the others represent smaller satellite sites.
  • Independent versus linked. Depending on the nature of the exhibits, technology center sites may operate independently of each other or include exhibits that are remotely linked in order to display certain advanced technologies.
  • Fixed versus mobile. The technology center can be in a fixed physical location, but it can also be mounted on a truck bed to bring the center to audiences around the region.
  • Static versus dynamic. The exhibits in the technology center may become outdated. How easy will it be to change or swap them out?

Figure 2 illustrates a sample set of design characteristics for one technology center, using a sample design characteristic map. This map shows each of the characteristics laid out around the hexagon, with the preference ranges represented at each vertex. By mapping out the utility’s options with regard to the design characteristics, it’s possible to visualize the trade-offs inherent in these decisions, and thus identify the optimal design for a given environment. In addition, this type of map facilitates reporting on the project to higher-level executives, who may benefit from a visual executive summary of the technology center’s plan.
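
As a simple illustration of how such a map might be produced, the sketch below plots six ratings on a hexagonal radar chart with matplotlib. The characteristic names come from the list above; the ratings (0 representing the first extreme, 1 the second) are invented for the example rather than drawn from any actual utility.

    # Sketch of a design characteristic map: six trade-offs rated 0..1 and drawn
    # as a hexagonal radar chart. The ratings below are invented examples.
    import numpy as np
    import matplotlib.pyplot as plt

    characteristics = ["Small vs. large", "Guided vs. self-guided", "Single vs. multiple",
                       "Independent vs. linked", "Fixed vs. mobile", "Static vs. dynamic"]
    ratings = [0.7, 0.3, 0.5, 0.2, 0.6, 0.8]   # hypothetical preference toward the second extreme

    # Space the six axes evenly around the circle and close the polygon.
    angles = np.linspace(0, 2 * np.pi, len(characteristics), endpoint=False).tolist()
    angles += angles[:1]
    values = ratings + ratings[:1]

    ax = plt.subplot(polar=True)
    ax.plot(angles, values, linewidth=2)
    ax.fill(angles, values, alpha=0.25)
    ax.set_xticks(angles[:-1])
    ax.set_xticklabels(characteristics)
    ax.set_ylim(0, 1)
    plt.title("Sample design characteristic map")
    plt.show()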

The tasks in Phase 2 require the utility’s staff to be just as engaged as in the strategy phase. A workshop or interviews with staff members who understand the various needs of the utility’s region and customer base should be conducted to work out an optimal plan.

Phase 3: Execution Variables

Phases 1 and 2 provide a strategy and design for the technology center, and allow the utility’s leadership to formulate a clear vision of the project and come to agreement on the ultimate purpose of the technology center. Phase 3 involves engaging the technology developers to identify which aspects of the new technology – for example, smart appliances, demand-side management, outage management and advanced metering – will be displayed at the technology center.

During this phase, utilities should create a complete catalog of the technologies that will be demonstrated, and match them up against the strategic content mix developed in Phase 1. A ranking is then assigned to each potential new technology based on several considerations, such as how well it matches the strategy, how feasible it is to demonstrate the given technology at the center, and what costs and resources would be required. Only the most efficient and well-matched technologies and exhibits will be displayed.
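
One straightforward way to formalize that ranking is a weighted scoring sheet. The sketch below is a minimal example: the candidate exhibits, criteria weights and scores are hypothetical, and a real evaluation would use the criteria and content mix agreed in Phase 1.

    # Minimal weighted-scoring sketch for ranking candidate exhibit technologies.
    # Candidates, weights and 1-5 scores are hypothetical examples.
    weights = {"strategy_fit": 0.5, "demo_feasibility": 0.3, "cost_and_resources": 0.2}

    candidates = {
        "Advanced metering display":  {"strategy_fit": 5, "demo_feasibility": 4, "cost_and_resources": 3},
        "Smart appliance corner":     {"strategy_fit": 4, "demo_feasibility": 5, "cost_and_resources": 4},
        "Outage management wall map": {"strategy_fit": 3, "demo_feasibility": 2, "cost_and_resources": 2},
    }

    def weighted_score(scores):
        """Combine criterion scores (1 = poor, 5 = excellent) using the agreed weights."""
        return sum(weights[criterion] * value for criterion, value in scores.items())

    ranked = sorted(candidates.items(), key=lambda item: weighted_score(item[1]), reverse=True)
    for name, scores in ranked:
        print(f"{name}: {weighted_score(scores):.2f}")

Exhibits that score well on strategic fit but poorly on feasibility or cost naturally fall to the bottom of the list, giving the team an objective starting point for the final selection.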

During Phase 3, outside vendors are also engaged, including architects, designers, mobile operators (if necessary) and real estate agents, among others. With the first two phases providing a guide, the utility can now open discussions with these vendors and present a clear picture of what it wants. The technical requirements for each exhibit will be cataloged and recorded to ensure that any design will take all requirements into account. Finally, the budget and work plan are written and finalized.

CONCLUSION

With the planning framework completed, the team can now build the center. The framework serves as the blueprint for the center, and all relevant benchmarks must be transparent and open for everyone to see. Disagreements during the buildout phase can be referred back to the framework, and issues that don’t fit the framework are discarded. In this way, the utility can ensure that the technology center will meet its goals and serve as a valuable tool in the process of transformation.

Thank you to Ian Simpson, IBM Global Business Services, for his contributions to this paper.

ENDNOTE

  1. Critical peak pricing refers to the model whereby utilities use peak pricing only on days when demand for electricity is at its peak, such as extremely hot days in the summer.

Microsoft Helps Utilities Use IT to Create Winning Relationships

The utilities industry worldwide is experiencing growing energy demand in a world with shifting fuel availability, increasing costs, a shrinking workforce and mounting global environmental pressures. Rate case filings and government regulations, especially those regarding environmental health and safety, require utilities to streamline reporting and operate safely enterprise-wide. At the same time, increasing competition and costs drive the need for service reliability and better customer service. Each issue causes utilities to depend more and more on information technology (IT).

The Microsoft Utility team works with industry partners to create and deploy industry-specific solutions that help utilities transform challenges into opportunities and empower utility workers to thrive in today’s market-driven environment. Solutions are based on the world’s most cost-effective, functionally rich, and secure IT platform. The Microsoft platform is interoperable with a wide variety of systems and proven to improve people’s ability to access information and work with others across boundaries. Together, Microsoft and its partners help utilities optimize operations in each line of business.

Customer care. Whether a utility needs to modernize a call center, add customer self-service or respond to new business requirements such as green power, Microsoft and its partners provide solutions for turning the customer experience into a powerful competitive advantage with increased cost efficiencies, enhanced customer service and improved financial performance.

Transmission and distribution. Growing energy demand makes it critical to effectively address safe, reliable and efficient power delivery worldwide. To help utilities meet these needs, Microsoft and its partners offer EMS, DMS and SCADA systems; mobile workforce management solutions; project intelligence; geographic information systems; smart metering/grid; and work/asset/document management tools that streamline business processes and offer connectivity across the enterprise and beyond.

Generation. Microsoft and its partners provide utilities with a view across and into their generation operations that enables them to make better decisions to improve cycle times, output and overall effectiveness while reducing the carbon footprint. With advanced software solutions from Microsoft and its partners, utilities can monitor equipment to catch early failure warnings, measure fleets’ economic performance and reduce operational and environment risk.

Energy trading and risk management. Market conditions require utilities to optimize energy supply performance. Microsoft and its partners’ enterprise risk management and trading solutions help utilities meet relentless energy demand in a resource-constrained world.

Regulatory compliance. Microsoft and its partners offer solutions to address the compliance requirements of the European Union; the Federal Energy Regulatory Commission; the North American Electric Reliability Corporation; the Sarbanes-Oxley Act of 2002; environmental, health and safety rules; and other regional regulations and rate case issues. With solutions from Microsoft partners, utilities can take a proactive approach to compliance, the most effective way to manage operational risk across the enterprise.

Enterprise. To optimize their businesses, utility executives need real-time visibility across the enterprise. Microsoft and its partners provide integrated e-business solutions that help utilities optimize their interactions with customers, vendors and partners. These enterprise applications address business intelligence and reporting, customer relationship management, collaborative workspaces, human resources and financial management.

Weathering the Perfect Storm

A “perfect storm” of daunting proportions is bearing down on utility companies: assets are aging; the workforce is aging; and legacy information technology (IT) systems are becoming an impediment to efficiency improvements. This article suggests a three-pronged strategy to meet the challenges posed by this triple threat. By implementing best practices in the areas of business process management (BPM), system consolidation and IT service management (ITSM), utilities can operate more efficiently and profitably while addressing their aging infrastructure and staff.

BUSINESS PROCESS MANAGEMENT

In a recent speech before the Utilities Technology Conference, the CIO of one of North America’s largest integrated gas and electric utilities commented that “information technology is a key to future growth and will provide us with a sustainable competitive advantage.” The quest by utilities to improve shareholder and customer satisfaction has led many CIOs to reach this same conclusion: nearly all of their efforts to reduce the costs of managing assets depend on information management.

Echoing this observation, a survey of utility CIOs showed that the top business issue in the industry was the need to improve business process management (BPM).[1] It’s easy to see why.

BPM enables utilities to capture, propagate and evolve asset management best practices while maintaining alignment between work processes and business goals. For most companies, the standardized business processes associated with BPM drive work and asset management activities and bring a host of competitive advantages, including improvements in risk management, revenue generation and customer satisfaction. Standardized business processes also allow management to more successfully implement business transformation in an environment that may include workers acquired in a merger, workers nearing retirement and new workers of any age.

BPM also helps foster a desirable culture change by creating an adaptive enterprise in which agility, flexibility and top-to-bottom alignment of work processes with business goals drive the utility’s operations. These work processes need to be flexible so management can respond quickly to the next shift in the competitive landscape. Using standard work processes encourages desired behavior across the organization while promoting the capture of asset-related knowledge held by many long-term employees.

Utility executives also depend on technology-based BPM to improve processes for managing assets. This allows them to reduce staffing levels without affecting worker safety, system reliability or customer satisfaction. These processes, when standardized and enforced, result in common work practices throughout the organization, regardless of region or business unit. BPM can thus yield an integrated set of applications that can be deployed in a pragmatic manner to improve work processes, meet regulatory requirements and reduce total cost of ownership (TCO) of assets.

BPM Capabilities

Although the terms business process management and work flow are often used synonymously – and are indeed related – they refer to distinctly different things. BPM is a strategic activity undertaken by an organization looking to standardize and optimize business processes, whereas work flow refers to IT solutions that automate processes – for example, solutions that support the execution phase of BPM.

There are a number of core BPM capabilities that, although individually important, are even more powerful than the sum of their parts when leveraged together. Combined, they provide a powerful solution to standardize, execute, enforce, test and continuously improve asset management business processes. These capabilities include:

  • Support for local process variations within a common process model;
  • Visual design tools;
  • Revision management of process definitions;
  • Web services interaction with other solutions;
  • XML-based process and escalation definitions (see the sketch following this list);
  • Event-driven user interface interactions;
  • Component-based definition of processes and subprocesses; and
  • Single engine supporting push-based (work flow) and polling-based (escalation) processes.
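
To make two of these capabilities concrete (XML-based definitions and a single engine handling both push-based and polling-based processes), the sketch below parses a small process definition and runs it through a toy engine. The XML schema, element names and process content are invented for the example and do not represent any particular vendor’s format.

    # Toy illustration of an XML-defined process and a single engine supporting
    # push-based (work flow) and polling-based (escalation) behavior.
    # The XML schema below is invented for the example, not a vendor format.
    import xml.etree.ElementTree as ET

    PROCESS_XML = """
    <process name="work-order-approval">
      <step name="create"  assignee="planner"/>
      <step name="approve" assignee="supervisor"/>
      <escalation afterHours="24" action="notify-manager"/>
    </process>
    """

    def run(process_xml, hours_elapsed):
        root = ET.fromstring(process_xml)
        # Push-based work flow: each step is pushed to the next assignee in order.
        for step in root.findall("step"):
            print(f"Push step '{step.get('name')}' to {step.get('assignee')}")
        # Polling-based escalation: periodically check whether a time limit was exceeded.
        escalation = root.find("escalation")
        if escalation is not None and hours_elapsed > float(escalation.get("afterHours")):
            print(f"Escalate: {escalation.get('action')}")

    run(PROCESS_XML, hours_elapsed=30)   # pushes both steps, then fires the escalation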

Since BPM supports knowledge capture from experienced employees, what is the relationship between BPM and knowledge management? Research has shown that the best way to capture the knowledge residing in workers’ heads is to transfer it into the systems they already use. Work and asset management systems hold job plans, operational steps, procedures, images, drawings and other documents. These systems are also the best place to put information required to perform a task that an experienced worker “just knows” how to do.

By creating appropriate work flows in support of BPM, workers can be guided through a “debriefing” stage, where they can review existing job plans and procedures, and look for tasks not sufficiently defined to be performed without the tacit knowledge learned through experience. Then, the procedure can be flagged for additional input by a knowledgeable craftsperson. This same approach can even help ensure the success of the “debriefing” application itself, since BPM tools by definition allow guidance to be built in by creating online help or by enhancing screen text to explain the next step.

SYSTEM CONSOLIDATION

System consolidation needs to involve more than simply combining applications. For utilities, consolidation efforts ought to focus on making systems agile enough to support near real-time visibility into critical asset data. This agility yields transparency across lines of business on the one hand and satisfies regulators and customers on the other. To achieve this level of transparency, utilities need to enforce a modern enterprise architecture that supports both service-oriented architectures (SOAs) and BPM.

Done right, system consolidation allows utilities to create a framework supporting three key business areas:

  • Optimization of both human and physical assets;
  • Standardization of processes, data and accountability; and
  • Flexibility to change and adapt to what’s next.

The Need for Consolidation

Many utility transmission and distribution (T&D) divisions exhibit this need for consolidation. Over time, the business operations of many of these divisions have introduced different systems to support a perceived immediate need – without considering similar systems that may already be implemented within the utility. Eventually, the business finds it owns three different “stacks” of systems managing assets, work assignments and mobile workers – one for short-cycle service work, one for construction and still another for maintenance and inspection work.

With these systems in place, it’s nearly impossible to implement productivity programs – such as cross-training field crews in both construction and service work – or to take advantage of a “common work queue” that would allow workers to fill open time slots without returning to their regional service center. In addition, owning and operating these “siloed” systems adds significant IT costs, as each one has annual maintenance fees, integration costs, yearly application upgrades and retraining requirements.

In such cases, using one system for all work and asset management would eliminate multiple applications and deliver bottom-line operational benefits: more productive workers, more reliable assets and technology cost savings. One large Midwestern utility adopting the system consolidation approach was able to standardize on six core applications: work and asset management, financials, document management, geographic information systems (GIS), scheduling and mobile workforce management. The asset management system alone was able to consolidate more than 60 legacy applications. In addition to the obvious cost savings, these consolidated asset management systems are better able to address operational risk, worker health and safety and regulatory compliance – both operational and financial – making utilities more competitive.

A related benefit of system consolidation concerns the elimination of rogue “pop-up” applications. These are niche applications, often spreadsheets or standalone databases, that “pop up” throughout an organization on engineers’ desktops. Many of these applications perform critical roles in regulatory compliance yet are unlikely to pass muster at any Sarbanes-Oxley review. Typically, these pop-up applications are built to fill a “functionality gap” in existing legacy systems. Using an asset management system with a standards-based platform allows utilities to roll these pop-up applications directly into their standard supported work and asset management system.

Employees must interact with many systems in a typical day. How productive is the maintenance electrician who uses one system for work management, one for ordering parts and yet another for reporting his or her time at the end of a shift? Think of the time wasted navigating three distinct systems with different user interfaces, and the duplication of data that unavoidably occurs. How much more efficient would it be if the electrician were able to use one system that supported all of his or her work requirements? A logical grouping of systems clearly enables all workers to leverage information technology to be more efficient and effective.

Today, using modern, standards-based technologies like SOAs, utilities can eliminate the counterproductive mix of disparate commercial and “home-grown” systems. Automated processes can be delivered as Web services, allowing asset and service management to be included in the enterprise application portfolio, joining the ranks of human resource (HR), finance and other business-critical applications.
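
As a purely illustrative sketch of that idea, the snippet below exposes a hypothetical work-order operation as a small web service (here using Flask, one of many possible frameworks), so that other enterprise applications could create and query work orders over standard protocols. The endpoint, payload fields and in-memory store are all assumptions made for the example.

    # Minimal sketch: a work-order operation exposed as a web service so other
    # enterprise applications can call it over HTTP. Fields and routes are hypothetical.
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    work_orders = []   # stand-in for the work and asset management system

    @app.route("/work-orders", methods=["POST"])
    def create_work_order():
        payload = request.get_json(force=True)
        order = {"id": len(work_orders) + 1,
                 "asset": payload.get("asset"),
                 "description": payload.get("description"),
                 "status": "OPEN"}
        work_orders.append(order)
        return jsonify(order), 201

    @app.route("/work-orders/<int:order_id>", methods=["GET"])
    def get_work_order(order_id):
        match = next((o for o in work_orders if o["id"] == order_id), None)
        return (jsonify(match), 200) if match else (jsonify(error="not found"), 404)

    if __name__ == "__main__":
        app.run(port=8080)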

But although system consolidation in general is a good thing, there is a “tipping point” where consolidating simply for the sake of consolidation no longer provides a meaningful return and can actually erode savings and productivity gains. A system consolidation strategy should center on core competencies. For example, accountants and doctors are both skilled service professionals. But their similarity at that high level doesn’t mean you would trade one for the other just to “consolidate” the bills you receive and the checks you have to write. You don’t want accountants reading your X-rays. The same is true of your systems. Your organization’s accounting or human resource software does not possess the unique capabilities to help you manage your mission-critical transmission and distribution, facilities, vehicle fleet or IT assets. Hence it is unwise to consolidate these mission-critical systems.

System consolidation strategically aligned with business requirements offers huge opportunities for improving productivity and reducing IT costs. It also improves an organization’s agility and reverses the historical drift toward stovepipe or niche systems by providing appropriate systems for critical roles and stakeholders within the organization.

IT SERVICE MANAGEMENT

IT Service Management (ITSM) is critical to helping utilities deal with aging assets, infrastructure and employees primarily because ITSM enables companies to surf the accelerating trend of asset management convergence instead of falling behind more nimble competitors. Used in combination with pragmatic BPM and system consolidation strategies, ITSM can help utilities exploit the opportunities that this trend presents.

Three key factors are driving the convergence of management processes across IT assets (PCs, servers and the like) and operational assets (the systems and equipment through which utilities deliver service). The first concerns corporate governance, whereby corporate-wide standards and policies are forcing operational units to rethink their use of “siloed” technologies and are paving the way for new, more integrated investments. Second, utilities are realizing that to deal with their aging assets, workforce and systems dilemmas, they must increase their investments in advanced information and engineering technologies. Finally, the functional boundaries between the IT and operational assets themselves are blurring beyond recognition as more and more equipment utilizes on-board computational systems and is linked over the network via IP addresses.

Utilities need to understand this growing interdependency among assets, including the way individual assets affect service to the business, and they need to provide visibility into asset status in order to properly address questions of risk management and compliance.

Corporate Governance Fuels a Cultural Shift

The convergence of IT and operational technology is changing the relationship between the formerly separate operational and IT groups. The operational units are increasingly relying on IT to help deal with their “aging trilogy” problem, as well as to meet escalating regulatory compliance demands and customers’ reliability expectations. In the past, operating units purchased advanced technology (such as advanced metering or substation automation systems) on an as-needed basis, unfettered by corporate IT policies and standards. In the process, they created multiple silos of nonstandard, non-integrated systems. But now, as their dependence on IT grows, corporate governance policies are forcing operating units to work within IT’s framework. Utilities can’t afford the liability and maintenance costs of nonstandard, disparate systems scattered across their operational and IT efforts. This growing dependence on IT has thus created a new cultural challenge.

A Gartner study of the interactions between IT and operational technology highlights this challenge. It found that “to improve agility and achieve the next level of efficiencies, utilities must embrace technologies that will enable enterprise application access to real-time information for dynamic optimization of business processes. On the other hand, lines of business (LOBs) will increasingly rely on IT organizations because IT is pervasively embedded in operational and energy technologies, and because standard IT platforms, application architectures and communication protocols are getting wider acceptance by OT [operational technology] vendors.”[2]

In fact, an InformationWeek article (“Changes at C-Level,” August 1, 2006) warned that this cultural shift could result in operational conflict if not dealt with. In that article, Nathan Bennett and Stephen Miles wrote, “Companies that look to the IT department to bring a competitive edge and drive revenue growth may find themselves facing an unexpected roadblock: their CIO and COO are butting heads.” As IT assumes more responsibility for running a utility’s operations, the roles of CIO and COO will increasingly converge.

What Is an IT Asset, Anyhow?

An important reason for this shift is the changing nature of the assets themselves, as mentioned previously. Consider the question “What is an IT asset?” In the past, most people would say that this referred to things like PCs, servers, networks and software. But what about a smart meter? It has firmware that needs updates; it resides on a wired or wireless network; and it has an IP address. In an intelligent utility network (IUN), this is true of substation automation equipment and other field-located equipment. The same is true for plant-based monitoring and control equipment. So today, if a smart device fails, do you send a mechanic or an IT technician?

This question underscores why IT asset and service management will play an increasingly important role in a utility’s operations. Utilities will certainly be using more complex technology to operate and maintain assets in the future. Electronic monitoring of asset health and performance based on conditions such as meter or sensor readings and state changes can dramatically improve asset reliability. Remote monitoring agents – from third-party condition monitoring vendors or original equipment manufacturers (OEMs) of highly specialized assets – can help analyze the increasingly complex assets being installed today as well as optimize preventive maintenance and resource planning.
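
A simple way to picture condition-based monitoring on such hybrid assets is sketched below: a smart meter is flagged for attention when either its physical condition or its firmware falls outside an expected baseline. The asset fields, thresholds and version scheme are hypothetical.

    # Sketch of condition-based monitoring for a hybrid IT/operational asset.
    # Asset attributes, readings and thresholds are hypothetical examples.
    from dataclasses import dataclass

    @dataclass
    class SmartMeter:
        asset_id: str
        ip_address: str          # the meter is a network device...
        firmware_version: str    # ...with firmware that needs patching
        temperature_c: float     # ...and an operational asset with a physical condition

    def needs_attention(meter, max_temp_c=70.0, minimum_firmware="2.4"):
        """Return the reasons (if any) to raise a work order or a remote patch."""
        reasons = []
        if meter.temperature_c > max_temp_c:
            reasons.append("temperature above threshold - dispatch a field technician")
        if meter.firmware_version < minimum_firmware:   # simple string compare, fine for this sketch
            reasons.append("firmware below baseline - schedule a remote update")
        return reasons

    meter = SmartMeter("MTR-00017", "10.20.30.17", "2.1", temperature_c=74.5)
    for reason in needs_attention(meter):
        print(f"{meter.asset_id}: {reason}")

Depending on which condition fires, the “mechanic or IT technician” question answers itself: the temperature alert generates a field work order, while the firmware alert is handled remotely.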

Moreover, utilities will increasingly rely on advanced technology to help them overcome the challenges of their aging assets, workers and systems. For example, as noted above, advanced information technology will be needed to capture the tacit knowledge of experienced workers as well as replace some manual functions with automated systems. Inevitably, operational units will become technology-driven organizations, heavily dependent on the automated systems and processes associated with IT asset and service management.

The good news for utilities is that a playbook of sorts is available that can help them chart the ITSM waters in the future. The de facto global standard for best practices process guidance in ITSM is the IT Infrastructure Library (ITIL), which IT organizations can adopt to support their utility’s business goals. ITIL-based processes can help utilities better manage IT changes, assets, staff and service levels. ITIL extends beyond simple management of asset and service desk activities, creating a more proactive organization that can reduce asset failures, improve customer satisfaction and cut costs. Key components of ITIL best practices include configuration, problem, incident, change and service-level management activities.
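
As a simplified example of how ITIL-style configuration and incident management fit together, the sketch below walks a tiny, hypothetical configuration management database (CMDB) to see which business services are affected when a device fails; that impact list is what lets an incident be prioritized by business consequence rather than by the failed asset alone.

    # Tiny, hypothetical CMDB sketch: trace which services depend on a failed asset
    # so an incident can be prioritized by business impact.
    depends_on_me = {
        "substation-router-07": ["feeder-RTU-31", "feeder-RTU-32"],
        "feeder-RTU-31": ["outage-management-service"],
        "feeder-RTU-32": ["outage-management-service", "smart-meter-headend"],
        "smart-meter-headend": ["customer-billing-service"],
    }

    def impacted(asset, seen=None):
        """Walk the dependency graph and collect everything downstream of a failure."""
        seen = set() if seen is None else seen
        for dependent in depends_on_me.get(asset, []):
            if dependent not in seen:
                seen.add(dependent)
                impacted(dependent, seen)
        return seen

    print(sorted(impacted("substation-router-07")))
    # ['customer-billing-service', 'feeder-RTU-31', 'feeder-RTU-32',
    #  'outage-management-service', 'smart-meter-headend']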

Implemented together, ITSM best practices as embodied in ITIL can help utilities:

  • Better align asset health and performance with the needs of the business;
  • Improve risk and compliance management;
  • Improve operational excellence;
  • Reduce the cost of infrastructure support services;
  • Capture tacit knowledge from an aging workforce;
  • Utilize business process management concepts; and
  • More effectively leverage their intelligent assets.

CONCLUSION

The “perfect storm” brought about by aging assets, an aging workforce and legacy IT systems is challenging utilities in ways many have never experienced. The current, fragmented approach to managing assets and services has been a “good enough” solution for most utilities until now. But good enough isn’t good enough anymore, since this fragmentation often has led to siloed systems and organizational “blind spots” that compromise business operations and could lead to regulatory compliance risks.

The convergence of IT and operational technology (with its attendant convergence of asset management processes) represents a challenging cultural change; however, it’s a change that can ultimately confer benefits for utilities. These benefits include not only improvements to the bottom line but also improvements in the agility of the operation and its ability to control risks and meet compliance requirements associated with asset and service management activity.

To help weather the coming perfect storm, utilities can implement best practices in three key areas:

  • BPM technology can help utilities capture and propagate asset management best practices to mitigate the looming “brain drain” and improve operational processes.
  • Judicious system consolidation can improve operational efficiency and eliminate legacy systems that are burdening the business.
  • ITSM best practices as exemplified by ITIL can streamline the convergence of IT and operational assets while supporting a positive cultural shift to help operational business units integrate with IT activities and standards.

Best-practices management of all critical assets based on these guidelines will help utilities facilitate the visibility, control and standardization required to continuously improve today’s power generation and delivery environment.

ENDNOTES

  1. Gartner’s 2006 CIO Agenda survey.
  2. Bradley Williams, Zarko Sumic, James Spiers and Kristian Steenstrup, “IT and OT Interaction: Why Conflict Resolution Is Important,” Gartner Industry Research, Sept. 15, 2006.