Using Analytics for Better Mobile Technology Decisions

Mobile computing capabilities have been proven to drive business value by providing traveling executives, field workers and customer service personnel with real-time access to customer data. Better and more timely access to information shortens response times, improves accuracy and makes the workforce more productive.

However, although your organization may agree that technology can improve business processes, different stakeholders – IT management, financial and business leadership and operations personnel – often have different perspectives on the real costs and value of mobility. For example, operations wants tools that help employees work faster and focus more intently on the customer; finance wants the solution that costs the least amount this quarter; and IT wants to implement mobile projects that can succeed without draining resources from other initiatives.

It may not be obvious, but there are ways to achieve everyone’s goals. Analytics can help operations, finance and IT find common ground. When teams understand the data, they can understand the logic. And when they understand the logic they can support making the right decision.

EXPOSING THE FORMULA

Deploying mobile technology is a strategic initiative with far-reaching consequences for the health of an enterprise. In the midst of evaluating a mobile project, however, it’s easy to forget that the real goal of hardware-acquisition initiatives is to make the workforce more productive and improve both the top and bottom lines over the long term.

Most decision-analytics tools focus on up-front procurement questions alone, because the numbers seem straightforward and uncomplicated. But these analyses miss the point. The best analysis is one that can determine which of the solutions will provide the most advantages to the workforce at the lowest possible overall cost to the organization.

To achieve the best return on investment we must do more than recoup an out-of-pocket expense: Are customers better served? Are employees working better, faster, smarter? Though hard to quantify, these are the fundamental aspects that determine the return on investment (ROI) of technology.

It’s possible to build a vendor-neutral analysis to calculate the total cost of ownership (TCO) and ROI of mobile computers. Panasonic Computer Solutions Company, the manufacturer of Toughbook notebooks, enlisted the services of my analytics company, Serious Networks, Inc., to develop an unbiased TCO/ROI application to help companies make better decisions when purchasing mobile computers.

The Panasonic-sponsored operational analysis tool provides statistically valid answers by performing a simulation of the devices as they would be used and managed in the field, generating a model that compares the costs and benefits of multiple manufacturers’ laptops. Purchase cost, projected downtime, the range of wireless options, notebook features, support and other related costs are all incorporated into this analytic toolset.

From more than 100 unique simulations run with actual customers, four key TCO/ROI questions emerged:

  • What will it cost to buy a proposed notebook solution?
  • What will it cost to own it over the life of the project?
  • What will it cost to deploy and decommission the units?
  • What value will be created for the organization?

MOVING BEYOND GUESSTIMATES – CONSIDERING COSTS AND VALUE OVER A LIFETIME

There is no such thing as an average company, so an honest analysis uses actual corporate data instead of industry averages. Just because a device is the right choice for one company does not make it the right choice for yours.

An effective simulation takes into account the cost of each competing device, the number of units and the rate of deployment. It calculates the cost of maintaining a solution and establishes the value of productive time using real loaded labor rates or revenue hours. It considers buy versus lease questions and can extrapolate how features will be used in the field.

As real-world data is entered, the software determines which mobile computing solution is most likely to help the company reach its goals. Managers can perform what-if analyses by adjusting assumptions and re-running the simulation. Within this framework, managers will build a business case that forecasts the costs of each mobile device against the benefits derived over time (see Figures 1 and 2).
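
As a rough illustration of the arithmetic behind such a business case – not the Panasonic/Serious Networks tool itself – the sketch below compares two hypothetical devices. Every figure in it is an invented placeholder that a real analysis would replace with actual corporate data.

```python
# Minimal, hypothetical TCO/ROI what-if sketch -- not the actual Panasonic tool.
# All figures below are illustrative placeholders, not customer data.

def tco_roi(purchase_price, units, annual_support_cost, annual_failure_rate,
            hours_lost_per_failure, loaded_labor_rate,
            productivity_gain_hrs_per_year, years=3):
    """Return (total cost of ownership, value created, ROI) over the project life."""
    acquisition = purchase_price * units
    support = annual_support_cost * units * years
    downtime_cost = (annual_failure_rate * hours_lost_per_failure *
                     loaded_labor_rate * units * years)
    tco = acquisition + support + downtime_cost

    value = productivity_gain_hrs_per_year * loaded_labor_rate * units * years
    roi = (value - tco) / tco
    return tco, value, roi

# What-if comparison of two hypothetical devices for a 500-unit deployment.
for name, price, support, fail_rate, gain_hrs in [
    ("Rugged notebook",   4000, 300, 0.05, 40),
    ("Consumer notebook", 1500, 450, 0.30, 25),
]:
    tco, value, roi = tco_roi(price, 500, support, fail_rate,
                              hours_lost_per_failure=8,
                              loaded_labor_rate=75,
                              productivity_gain_hrs_per_year=gain_hrs)
    print(f"{name}: TCO=${tco:,.0f}  value=${value:,.0f}  ROI={roi:.1%}")
```

Re-running the loop with adjusted assumptions is the essence of the what-if exercise: the device with the lowest purchase price is not necessarily the one with the best return.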

MAKING INTANGIBLES TANGIBLE

The 90-minute analysis process is very granular. It’s based on the industry segment – because it simulates the tasks of the workforce – and compares up to 10 competing devices.

Once devices are selected, purchase or lease prices are entered, followed by value-added benefits like no-fault warranties and on-site support. Intangible factors favoring one vendor over another, such as incumbency, are added to the data set. The size and rate of the deployment, as well as details that determine the cost of preparing the units for the workforce, are also considered.

Next the analysis accounts for the likelihood and cost of failure, using your own experience as a baseline. Somewhat surprisingly, the impact of failure is given less weight than most outside observers would expect. Reliability is important, but it’s not the only or most important attribute.

Greater weight is given to productivity and operational enhancements, which can have a significantly larger financial impact than reliability because, statistically, employees spend far more of their time working than dealing with equipment malfunctions.

A matrix of features and key workforce behaviors is developed to examine the relative importance of touch screens, wireless and GPS, as well as each computer vendor’s ability to provide those features as standard or extra-cost equipment. The features are rated for their time and motion impact on your organization, and an operations efficiency score is applied to imitate real-world results.

During the session, the workforce is described in detail, because this information directly affects the cost and benefit. To assess the value of a telephone lineman’s time, for example, the system must know the average number of daily service orders, the percentage of those service calls that require re-work and whether linemen are normally in the field five, six or seven days a week.
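
As a hedged illustration of how those workforce details translate into dollars, the fragment below estimates the annual value of time recovered per lineman. The order counts, minutes saved and labor rate are invented assumptions for the sake of the example, not figures from the actual model.

```python
# Hypothetical estimate of annual productive value recovered per field worker.
# All inputs are illustrative assumptions, not data from the actual analysis.

daily_service_orders = 6          # average orders completed per day
rework_fraction = 0.10            # share of orders requiring a second visit
minutes_saved_per_order = 12      # time saved by mobile access to customer data
field_days_per_year = 5 * 50      # five days a week, roughly 50 working weeks
loaded_labor_rate = 75.0          # dollars per hour, fully loaded

effective_orders = daily_service_orders * (1 + rework_fraction)
hours_saved_per_year = (effective_orders * minutes_saved_per_order / 60.0
                        * field_days_per_year)
annual_value_per_worker = hours_saved_per_year * loaded_labor_rate

print(f"Hours recovered per lineman per year: {hours_saved_per_year:.0f}")
print(f"Estimated annual value per lineman:   ${annual_value_per_worker:,.0f}")
```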

Once the data is collected and input, it can be modified to provide instantaneous what-if, heads-up and break-even analyses – without interference from the vendor. The model is built in Microsoft Excel so that anyone can assess the credibility of the analysis and determine independently that there are no hidden calculations or unfair formulas skewing the results.

CONCLUSION

The Panasonic simulation tool can help different organizations within a company come to consensus before making a buying decision. Analytics help clarify whether a purpose-built rugged or business-rugged system or some other commercial notebook solution is really the right choice for minimizing the TCO and maximizing the ROI of workforce mobility.

ABOUT THE AUTHOR

Jason Buk is an operations director at Serious Networks, Inc., a Denver-based business analytics firm. Serious Networks uses honest forecasting and rigorous analysis to determine what resources are most likely to increase the effectiveness of the workforce, meet corporate goals and manage risk in the future.

Microsoft Helps Utilities Use IT to Create Winning Relationships

The utilities industry worldwide is experiencing growing energy demand in a world with shifting fuel availability, increasing costs, a shrinking workforce and mounting global environmental pressures. Rate case filings and government regulations, especially those regarding environmental health and safety, require utilities to streamline reporting and operate safely enterprise-wide. At the same time, increasing competition and costs drive the need for service reliability and better customer service. Each issue causes utilities to depend more and more on information technology (IT).

The Microsoft Utility team works with industry partners to create and deploy industry-specific solutions that help utilities transform challenges into opportunities and empower utilities workers to thrive in today’s market-driven environment. Solutions are based on the world’s most cost-effective, functionally rich, and secure IT platform. The Microsoft platform is interoperable with a wide variety of systems and proven to improve people’s abilities to access information and work with others across boundaries. Together, they help utilities optimize operations in each line of business.

Customer care. Whether a utility needs to modernize a call center, add customer self-service or respond to new business requirements such as green power, Microsoft and its partners provide solutions for turning the customer experience into a powerful competitive advantage with increased cost efficiencies, enhanced customer service and improved financial performance.

Transmission and distribution. Growing energy demand makes it critical to effectively address safe, reliable and efficient power delivery worldwide. To help utilities meet these needs, Microsoft and its partners offer energy management systems (EMS), distribution management systems (DMS) and supervisory control and data acquisition (SCADA) systems; mobile workforce management solutions; project intelligence; geographic information systems; smart metering/grid; and work/asset/document management tools that streamline business processes and offer connectivity across the enterprise and beyond.

Generation. Microsoft and its partners provide utilities with a view across and into their generation operations that enables them to make better decisions to improve cycle times, output and overall effectiveness while reducing the carbon footprint. With advanced software solutions from Microsoft and its partners, utilities can monitor equipment to catch early failure warnings, measure fleets’ economic performance and reduce operational and environmental risk.

Energy trading and risk management. Market conditions require utilities to optimize energy supply performance. Microsoft and its partners’ enterprise risk management and trading solutions help utilities feed the relentless energy demands in a resource-constrained world.

Regulatory compliance. Microsoft and its partners offer solutions to address the compliance requirements of the European Union; the Federal Energy Regulatory Commission; the North American Electric Reliability Corporation; the Sarbanes-Oxley Act of 2002; environmental, health and safety rules; and other regional regulations and rate case issues. With solutions from Microsoft partners, utilities have a proactive approach to compliance, the most effective way to manage operational risk across the enterprise.

Enterprise. To optimize their businesses, utility executives need real-time visibility across the enterprise. Microsoft and its partners provide integrated e-business solutions that help utilities optimize their interactions with customers, vendors and partners. These enterprise applications address business intelligence and reporting, customer relationship management, collaborative workspaces, human resources and financial management.

The Power of Prediction: Improving the Odds of a Nuclear Renaissance

After 30 years of disfavor in the United States, the nuclear power industry is poised for resurgence. With the passage of the Energy Policy Act of 2005, the specter of over $100 per barrel oil prices and the public recognition that global warming is real, nuclear power is now considered one of the most practical ways to clean up the power grid and help the United States reduce its dependence on foreign oil. The industry has responded with a resolve to build a new fleet of nuclear plants in anticipation of what has been referred to as a nuclear renaissance.

The nuclear power industry is characterized by a remarkable level of physics and mechanical science. Yet, given the confluence of a number of problematic issues – an aging workforce, the shortage of skilled trades, the limited availability of equipment and parts, and a history of late, over-budget projects – questions arise about whether the level of management science the industry plans to use is sufficient to navigate the challenges ahead.

According to data from the Energy Information Administration (EIA), nuclear power provides about 20 percent of U.S. electricity generation, with approximately 106 gigawatts (GW) of capacity from 66 plants housing 104 reactor units. To date, more than 30 new reactors have been proposed, which would produce a net increase of approximately 19 GW of nuclear capacity through 2030. Considering the growth of energy demand, this increased capacity will barely keep pace with increasing base load requirements.

According to Assistant Secretary for Nuclear Energy Dennis Spurgeon, we will need approximately 45 new reactors online by 2030 just to maintain the 20 percent share of U.S. electricity generation that nuclear power already holds.

Meanwhile, Morgan Stanley vice chairman Jeffrey Holzschuh is very positive about the next generation of nuclear power but warns that the industry’s future is ultimately a question of economics. “Given the history, the markets will be cautious,” he says.

As shown in Figures 1-3, nuclear power is cost competitive with other forms of generation, but its upfront capital costs are comparatively high. Historically, long construction periods have led to serious cost volatility. The viability of the nuclear power industry ultimately depends on its ability to demonstrate that plants can be built economically and reliably. Holzschuh predicts, “The first few projects will be under a lot of public scrutiny, but if they are approved, they will get funded. The next generation of nuclear power will likely be three to five plants or 30, nothing in between.”

Due to its cohesive identity, the nuclear industry is viewed by the public and investors as a single entity, making the fate of industry operators – for better or for worse – a shared destiny. For that reason, it’s widely believed that if these first projects suffer the same sorts of significant cost overruns and delays experienced in the past, the projected renaissance will quickly give way to a return to the dark ages.

THE PLAYERS

Utility companies, regulatory authorities, reactor manufacturers, design and construction vendors, financiers and advocacy groups all have critical roles to play in creating a viable future for the nuclear power industry – one that will begin with the successful completion of the first few plants in the United States. By all accounts, an impressive foundation has been laid, beginning with an array of government incentives (such as loan guarantees and tax credits) and simplified regulation to help jump-start the industry.

Under the Energy Policy Act of 2005, the U.S. Department of Energy has the authority to issue $18.5 billion in loan guarantees for new nuclear plants and $2 billion for uranium enrichment projects. In addition, there’s standby support for indemnification against Nuclear Regulatory Commission (NRC) and litigation-oriented delays for the first six advanced nuclear reactors. The Treasury Department has issued guidelines for an allocation and approval process for production tax credits for advanced nuclear power: a 1.8-cent-per-kilowatt-hour credit for the first eight years of operation, with final rules to be issued in fiscal year 2008.
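
To put the 1.8-cent credit in rough perspective, the back-of-the-envelope calculation below assumes a hypothetical 1,100-MW reactor running at a 90 percent capacity factor (both assumptions mine, not figures from the Treasury guidance) and ignores any program caps or allocation limits that may apply.

```python
# Rough, illustrative estimate of the advanced-nuclear production tax credit.
# Plant size and capacity factor are assumptions for the example only.

capacity_mw = 1100            # assumed net capacity of one new reactor
capacity_factor = 0.90        # assumed average output as a share of capacity
credit_per_kwh = 0.018        # 1.8 cents per kilowatt-hour (Energy Policy Act of 2005)
credit_years = 8              # credit applies to the first eight years of operation

annual_kwh = capacity_mw * 1000 * capacity_factor * 8760
annual_credit = annual_kwh * credit_per_kwh
print(f"Annual credit: ${annual_credit / 1e6:,.0f} million")
print(f"Eight-year total (undiscounted): ${annual_credit * credit_years / 1e9:,.2f} billion")
```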

The 20-year renewal of the Price-Anderson Act in 2005 and anticipated future restrictions on carbon emissions further improve the comparative attractiveness of nuclear power. To be eligible for the 2005 production tax credits, a license application must be tendered to the NRC by the end of 2008, with construction beginning before 2014 and the plant placed in service before 2021.

The NRC has formed an Office of New Reactors (NRO), and David Matthews, director of the Division of New Reactor Licensing, led the development of the latest revision of a new licensing process that’s designed to be more predictable by encouraging the standardization of plant designs, resolving safety and environmental issues and providing for public participation before construction begins. With a fully staffed workforce and a commitment to “enable the safe, secure and environmentally responsible use of nuclear power in meeting the nation’s future energy needs,” Matthews is determined to ensure that the NRC is not a risk factor that contributes to the uncertainty of projects but rather an organizing force that will create predictability. Matthews declares, “This isn’t your father’s NRC.”

This simplified licensing process consists of the following elements:

  • An early site permit (ESP) for locations of potential facilities.
  • Design certification (DC) for the reactor design to be used.
  • Combined operating license (COL) for the certified reactor as designed to be located on the site. The COL contains the inspections, tests, analyses and acceptance criteria (ITAAC) to demonstrate that the plant was built to the approved specifications.

According to Matthews, the best-case scenario for the time period between when a COL is docketed and when the license process is complete is 33 months, with an additional 12 months for public hearings. When asked if anything could be done to speed this process, Matthews reported that every delay he’s seen thus far has been attributable to a cause beyond the NRC’s control. Most often, it’s the applicant that’s having a hard time meeting the schedule. Recently approved schedules are several months longer than the best-case estimate.

The manufacturers of nuclear reactors have stepped up to the plate to achieve standard design certification for their nuclear reactors; four are approved, and three are in progress.

Utility companies are taking innovative approaches to support the NRC’s standardization principles, which directly impact costs. (Current conventional wisdom puts the price of a new reactor at between $4 billion and $5.5 billion, with some estimates of fully loaded costs as high as $7 billion.) Consortiums have been formed to support cross-company standardization around a particular reactor design. NuStart and UniStar are multi-company consortiums collaborating on the development of their COLs.

Bryce Shriver, who leads PPL Corp.’s nuclear power strategy and recently announced that PPL had selected UniStar to build its next nuclear facility, is impressed with the level of standardization UniStar is employing for its plants. From the specifics of the reactor design to the carpet color, UniStar – with four plants on the drawing board – intends to make each plant as identical as possible.

Reactor designers and construction companies are adding to the standardization with turnkey approaches, formulating new construction methods that include modular techniques; sophisticated scheduling and configuration management software; automated data, project management and document control; and designs that are substantially complete before construction begins. Contractors are taking seriously the lessons learned from plants built outside the United States, and they hope to leverage what they have learned in the first few U.S. projects.

The stewards of the existing nuclear fleet also see themselves as part of the future energy solution. They know that continued safe, high-performance operation of current plants is key to maintaining public and state regulator confidence. Most of the scheduled plants are to be co-located with existing nuclear facilities.

Financing nuclear plant construction involves equity investors, utility boards of directors, debt financiers and (ultimately) the ratepayers represented by state regulatory commissions. Despite the size of these deals, the financial community has indicated that debt financing for new nuclear construction will be available. The bigger issue lies with the investors. The more equity-oriented the risk (principally borne by utilities and ratepayers), the more caution there is about the structure of these deals. The debt financiers are relying on the utilities and the consortiums to do the necessary due diligence and put up the equity. There’s no doubt that the federal loan guarantees and subsidies are an absolute necessity, but this form of support is largely driven by the perceived risk of the first projects. Once the capability to build plants in a predictable way (in terms of time, cost, output and so on) has been demonstrated, market forces are expected to be very efficient at allocating capital to these kinds of projects.

The final key to the realization of a nuclear renaissance is the public. Americans have become increasingly concerned about fossil fuels, carbon emissions and the nation’s dependence on foreign oil. The surge in oil prices has focused attention on energy costs and national security. Coal-based energy production is seen as an environmental issue. Although the United States has plenty of access to coal, dealing with carbon emissions using clean coal technology involves sequestering it and pumping it underground. PPL chairman Jim Miller describes the next challenge for clean coal as NUMBY – the “Not under my back yard” attitude the public is likely to adopt if forced to consider carbon pumped under their communities. Alternative energy sources such as wind, solar and geothermal enjoy public support, but they are not yet scalable for the challenge of cleaning up the grid. In general, the public wants clean, safe, reliable, inexpensive power.

THE RISKS

Will nuclear fill that bill and look attractive compared with the alternatives? Although progress has been made and the stage is set, critical issues remain, and they could become problematic. While the industry clearly sees and is actively managing some of these issues, there are others the industry sees but is not as certain about how to manage – and still others that are so much a part of the fabric of the industry that they go unrecognized. Any one of these issues could slow progress; the fact that there are several that could hit simultaneously multiplies the risk exponentially.

The three widely accepted risk factors for the next phase of nuclear power development are the variability of the cost of uranium, the availability of quality equipment for construction and the availability of well-trained labor. Not surprising for an industry that’s been relatively sleepy for several decades, the pipeline for production resources is weak – a problem compounded by the well-understood coming wave of retirements in the utility workforce and the general shortage of skilled trades needed to work on infrastructure projects. Combine these constraints with a surge in worldwide demand for power plants, and it’s easy to understand why the industry is actively pursuing strategies to secure materials and train labor.

The reactor designers, manufacturers and construction companies that would execute these projects display great confidence. They’re keen on the “turnkey solution” as a way to reduce the risk of multiple vendors pointing fingers when things go wrong. Yet these are the same firms that have been openly criticized for change orders and cost overruns. Christopher Crane, chief operating officer of the utility Exelon Corp., warned contractors in a recent industry meeting that the utilities would “not take all the risk this time around.” When faced with complicated infrastructure development in the past, vendors have often pointed to their expertise with complex projects. Is the development of more sophisticated scheduling and configuration management capability, along with the assignment of vendor accountability, enough to handle the complexity issue? The industry is aware of this limitation but does not as yet have strong management techniques for handling it effectively.

Early indications from regulators are that the COLs submitted to date are not meeting the NRC’s guidance and expectations in all regards, possibly a result of the applicants’ rush to make the 2008 year-end deadline for the incentives set forth in the Energy Policy Act. This could extend the licensing process and strain the resources of the NRC. In addition, the requirements of the NRC principally deal with public safety and environmental concerns. There are myriad other design requirements entailed in making a plant operate profitably.

The bigger risk is that the core strength of the industry – its ability to make significant incremental improvements – could also serve as the seed of its failure as it faces this next challenge. Investors, state regulators and the public are not likely to excuse serious cost overruns and time delays as they may have in the past. Utility executives are clear that nuclear is good to the extent that it’s economical. When asked what single concern they find most troubling, they often reply, “That we don’t know what we don’t know.”

What we do know is that there are no methods currently in place for beginning successful development of this next generation of nuclear power plants, and that the industry’s core management skill set may not be sufficient to build a process that differs from a “learn as you go” approach. Thus, it’s critical that the first few plants succeed – not just for their investors but for the entire industry.

THE OPPORTUNITY – KNOWING WHAT YOU DON’T KNOW

The vendors supporting the nuclear power industry represent some of the most prestigious engineering, equipment design and manufacturing firms in the world: Bechtel, Fluor, GE, Westinghouse, Areva and Hitachi. Despite this, the industry is not known for having a strong foundation in managing innovation. Getting a plant to the point of producing power requires complex physical capital and myriad intangible human assets, and it depends on political forces and public opinion as well as technology. Thus, more advanced management science could represent the missing piece of the puzzle for the nuclear power industry.

An advanced decision-making framework can help utilities manage unpredictable events, increasing their ability to handle the planning and anticipated disruptions that often beset long, complex projects. By using advanced management science, the nuclear industry can take what it knows and create a learning environment to find out more about what it doesn’t know, improving its odds for success.

Growing (or Shrinking) Trends in Nuclear Power Plant Construction

Around the world, the prospects for nuclear power generation are increasing – opportunities made clear by the number of nuclear plants now under construction that are smaller than those currently in the limelight. Offering advantages in certain situations, these smaller plants can more readily serve smaller grids as well as be used for distributed generation (with power plants located close to the demand centers and the main grid providing back-up). Smaller plants are also easier to finance, particularly in countries that are still in the early days of their nuclear power programs.

In recent years, development and licensing efforts have focused primarily on large, advanced reactors, due to their economies of scale and obvious application to developed countries with substantial grid infrastructure. Meanwhile, the wide scope for smaller nuclear plants has received less attention. However, of the 30 or more countries that are moving toward implementing nuclear power programs, most are likely to be looking initially for units under 1,000 MWe, and some for units of less than half that amount.

EXISTING DESIGNS

With that in mind, let’s take a look at some of the current designs.

There are many plants under 1,000 MWe now in operation, even if their replacements tend to be larger. (In 2007 four new units were connected to the grid – two large ones, one 202-MWe unit and one 655-MWe unit.) In addition, some smaller reactors are either on offer now or likely to be available in the next few years.

Five hundred to 700 MWe. There are several plants in this size range, including Westinghouse AP600 (which has U.S. design certification) and the Canadian Candu-6 (being built in Romania). In addition, China is building two CNP-600 units at Qinshan but does not plan to build any more of them. In Japan, Hitachi-GE has completed the design of a 600-MWe version of its 1,350-MWe ABWR, which has been operating for 10 years.

Two hundred and fifty to 500 MWe. In the 250- to 500-MWe category (output that is electric rather than heat), there are a few designs pending but little immediately on offer.

IRIS. Being developed by an international team led by Westinghouse in the United States, IRIS – or, more formally, International Reactor Innovative and Secure – is an advanced third-generation modular 335-MWe pressurized water reactor (PWR) with integral steam generators and a primary coolant system all within the pressure vessel. U.S. design certification is at pre-application stage with a view to final design approval by 2012 and deployment by 2015 to 2017.

VBER-300 PWR. This 295- to 325-MWe unit from Russia was designed by OKBM based on naval power plants and is now being developed as a land-based unit with the state-owned nuclear holding company Kazatomprom, with a view to exporting it. The first two units will be built in Southwest Kazakhstan under a Russian-Kazakh joint venture.

VK-300. This Russian-built boiling water reactor is being developed for co-generation of both power and district heating or heat for desalination (150 MWe plus 1675 GJ/hr) by the nuclear research and development organization NIKIET. The unit evolved from the VK-50 BWR at Dimitrovgrad but uses standard components from larger reactors wherever possible. In September 2007, it was announced that six of these units would be built at Kola and at Primorskaya in Russia’s far east, to start operating between 2017 and 2020.

NP-300 PWR. Developed in France from submarine power plants and aimed at export markets for power, heat and desalination, this Technicatome (Areva)-designed reactor has passive safety systems and can be built for applications of 100 to 300 MWe.

China is also building a 300-MWe PWR (pressurized water reactor) nuclear power plant in Pakistan at Chasma (alongside another that started up in 2000); however, this is an old design based on French technology and has not been offered more widely. The new unit is expected to come online in 2011.

One hundred to 300 MWe. This category includes both conventional PWR and high-temperature gas-cooled reactors (HTRs); however, none in the second category are being built yet. Argentina’s CAREM nuclear power plant is being developed by CNEA and INVAP as a modular 27-MWe simplified PWR with integral steam generators designed to be used for electricity generation or for water desalination.

FLOATING PLANTS

After many years of promoting the idea, Russia’s state-run atomic energy corporation Rosatom has approved construction of a nuclear power plant on a 21,500-ton barge to supply 70 MWe of power plus 586 GJ/hr of heat to Severodvinsk, in the Archangelsk region of Russia. The contract to build the first unit was let by nuclear power station operator Rosenergoatom to the Sevmash shipyard in May 2006. Expected to cost $337 million (including $30 million already spent in design), the project is 80 percent financed by Rosenergoatom and 20 percent financed by Sevmash. Operation is expected to begin in mid-2010.

Rosatom is planning to construct seven additional floating nuclear power plants, each (like the initial one) with two 35-MWe OKBM KLT-40S nuclear reactors. Five of these will be used by Gazprom – the world’s biggest extractor of natural gas – for offshore oil and gas field development and for operations on Russia’s Kola and Yamal Peninsulas. One of these reactors is planned for 2012 commissioning at Pevek on the Chukotka Peninsula, and another is planned for the Kamchatka region, both in the far east of the country. Even farther east, sites being considered include Yakutia and Taimyr. Electricity cost is expected to be much lower than from present alternatives. In 2007 an agreement was signed with the Sakha Republic (Yakutia region) to build a floating plant for its northern parts, using smaller ABV reactors.

OTHER DESIGNS

On a larger scale, South Korea’s SMART is a 100-MWe PWR with integral steam generators and advanced safety features. It is designed to generate electricity and/or thermal applications such as seawater desalination. Indonesia’s national nuclear energy agency, Batan, has undertaken a pre-feasibility study for a SMART reactor for power and desalination on Madura Island. However, this awaits the building of a reference plant in Korea.

There are three high-temperature, gas-cooled reactors capable of being used for power generation, but much of the development impetus has been focused on the thermo-chemical production of hydrogen. Fuel for the first two consists of billiard ball-size pebbles that can withstand very high temperatures. These aim for a step-change in safety, economics and proliferation resistance.

China’s 200-MWe HTR-PM is based on a well-tested small prototype, and a two-module plant is due to start construction at Shidaowan in Shandong province in 2009. This reactor will use the conventional steam cycle to generate power. Start-up is scheduled for 2013. After the demonstration plant, a power station with 18 modules is envisaged.

Very similar to China’s plant is South Africa’s Pebble Bed Modular Reactor (PBMR), which is being developed by a consortium led by the utility Eskom. Production units will be 165 MWe. The PBMR will have a direct-cycle gas turbine generator driven by hot helium. The PBMR Demonstration unit is expected to start construction at Koeberg in 2009 and achieve criticality in 2013.

Both of these designs are based on earlier German reactors that have some years of operational experience. A U.S. design, the Gas Turbine-Modular Helium Reactor (GT-MHR), is being developed in Russia; in its electrical application, each unit would directly drive a gas turbine giving 280 MWe.

These three designs operate at much higher temperatures than ordinary reactors and offer great potential as sources of industrial heat, including for the thermo-chemical production of hydrogen on a large scale. Much of the development thinking going into the PBMR has been geared to synthetic oil production by Sasol (South African Coal and Oil).

MODULAR CONSTRUCTION

The IRIS developers have outlined the economic case for modular construction of their design (about 330 MWe), and it’s an argument that applies similarly to other smaller units. These developers point out that IRIS, with its moderate size and simple design, is ideally suited for modular construction. The economy of scale is replaced here with the economy of serial production of many small and simple components and prefabricated sections. They expect that construction of the first IRIS unit will be completed in three years, with subsequent production taking only two years.

Site layouts have been developed with multiple single units or multiple twin units. In each case, units will be constructed with enough space around them to allow the next unit to be built while the previous one is operating and generating revenue. And even with this separation, the plant footprint can be very compact: a site with three single IRIS modules providing 1,000 MWe is similar in size to, or smaller than, a site with a single unit of comparable total power.

Eventually, IRIS’ capital and production costs are expected to be comparable to those of larger plants. However, any small unit offers potential for a funding profile and flexibility impossible to achieve with larger plants. As one module is finished and starts producing electricity, it will generate positive cash flow for the construction of the next module. Westinghouse estimates that 1,000 MWe delivered by three IRIS units built at three-year intervals, financed at 10 percent for 10 years, requires a maximum negative cash flow of less than $700 million (compared with about three times that for a single 1,000-MWe unit). For developed countries, small modular units offer the opportunity of building as necessary; for developing countries, smaller units may represent the only option, since such countries’ electric grids are likely unable to take 1,000-plus-MWe single units.
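
The staggered-build argument can be sketched numerically. The toy cash-flow model below uses invented per-module costs, revenues and schedules (not Westinghouse’s figures) and ignores financing costs and discounting, but it reproduces the qualitative point: serial construction keeps the deepest negative cash position to roughly a third of that for a single large unit.

```python
# Toy cash-flow comparison: three small modules built serially vs. one large unit.
# Costs, revenues and schedules are illustrative assumptions; financing costs
# and discounting are ignored for simplicity.

def max_negative_cash_flow(builds, revenue_per_unit_year, horizon_years):
    """builds: list of (start_year, build_years, total_cost) for each unit.
    Returns the deepest cumulative cash position reached over the horizon."""
    cumulative, worst = 0.0, 0.0
    for year in range(horizon_years):
        spend = sum(cost / dur for start, dur, cost in builds
                    if start <= year < start + dur)
        units_online = sum(1 for start, dur, _ in builds if year >= start + dur)
        cumulative += units_online * revenue_per_unit_year - spend
        worst = min(worst, cumulative)
    return worst

modules = [(0, 3, 1200.0), (3, 3, 1200.0), (6, 3, 1200.0)]  # three ~330-MWe units, $M
single = [(0, 6, 3600.0)]                                    # one 1,000-MWe unit, $M

print(f"Deepest cash position, three modules: {max_negative_cash_flow(modules, 400.0, 12):,.0f} $M")
print(f"Deepest cash position, single unit:   {max_negative_cash_flow(single, 1200.0, 12):,.0f} $M")
```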

Distributed generation. The advent of reactors much smaller than those being promoted today means that reactors will be available to serve smaller grids and to be put into use for distributed generation (with power plants close to the demand centers and the main grid used for back-up). This does not mean, however, that large units serving national grids will become obsolete – as some appear to wish.

WORLD MARKET

One aspect of the Global Nuclear Energy Partnership program is international deployment of appropriately sized reactors with desirable designs and operational characteristics (some of which include improved economics, greater safety margins, longer operating cycles with refueling intervals of up to three years, better proliferation resistance and sustainability). Several of the designs described earlier in this paper are likely to meet these criteria.

IRIS itself is being developed by an international team of 20 organizations from ten countries (Brazil, Croatia, Italy, Japan, Lithuania, Mexico, Russia, Spain, the United Kingdom and the United States) on four continents – a clear demonstration of how reactor development is proceeding more widely.

Major reactor designers and vendors are now typically international in character and marketing structure. To wit: the United Kingdom’s recent announcement that it would renew its nuclear power capacity was anticipated by four companies lodging applications for generic design approval – two from the United States (each with Japanese involvement), one from Canada and one from France (with German involvement). These are all big units, but in demonstrating the viability of late third-generation technology, they will also encourage consideration of smaller plants where those are most appropriate.

Wind Energy: Balancing the Demand

In recent years, exponential demand for new U.S. wind energy-generating facilities has nearly doubled America’s installed wind generation. By the end of 2007, our nation’s total wind capacity stood at more than 16,000 megawatts (MW) – enough to power more than 4.5 million average American homes each year. And in 2007 alone, America’s new wind capacity grew 45 percent over the previous year – a record 5,244 MW of new projects and more new generating capacity than any other single electricity resource contributed in the same year. At the same time, wind-related employment nearly doubled in the United States during 2007, totaling 20,000 jobs. At more than $9 billion in cumulative investment, wind also pumped new life into regional economies hard hit by the recent economic downturn. [1]

The rapid development of wind installations in the United States comes in response to record-breaking demand driven by a confluence of factors: overwhelming consumer demand for clean, renewable energy; skyrocketing oil prices; power costs that compete with natural gas-fired power plants; and state legislatures that are competing to lure new jobs and wind power developments to their states. Despite these favorable conditions, the wind energy industry has been unable to meet America’s true demand for new wind energy-generating facilities. The barriers include the following: availability of key materials, the ability to manufacture large key components and the accessibility of skilled factory workers.

With the proper policies and related investments in infrastructure and workforce development, the United States stands to become a powerhouse exporter of wind power equipment, a wind technology innovator and a wind-related job creation engine. Escalating demand for wind energy is spurred by wind’s competitive cost against rising fossil fuel prices and mounting concerns over the environment, climate change and energy security.

Meanwhile, market trends and projections point to strong, continued demand for wind well into the future. Over the past decade, a similar surge in wind energy demand has taken place in the European Union (E.U.) countries. Wind power capacity there currently totals more than 50,000 MW, with projections that wind could provide at least 15 percent of the E.U.’s electricity by 2020 – amounting to an installed wind capacity of 180,000 MW and an estimated workforce of more than 200,000 people in wind power manufacturing, installation and maintenance jobs.

How is it, then, that European countries were able to secure the necessary parts and people while the United States fell short in its efforts on these fronts? After all, America has a bigger land mass and a larger, higher-quality wind resource than the E.U. countries. Indeed, the United States is already home to the world’s largest wind farms, including the 735-MW Horse Hollow Wind Energy Center in Texas, which generates power for about 230,000 average homes each year. What’s more, this country also has an extensive manufacturing base, a skilled labor pool and a pressing need to address energy and climate challenges.

So what’s missing? In short, robust national policy support – a prerequisite for strong, long-term investment in the sector. Such support would enable the industry to secure long lead-time materials and sufficient ramp-up to train and employ workers to continue wind power’s surging growth. Thus, the United States must rise to the occasion and assemble several key, interrelated puzzle pieces – policy, parts and people – if it’s to tap the full potential of wind energy.

POLICY: LONG-TERM SUPPORT AND INVESTMENT

In the United States, the federal government has played a key role in funding research and development, commercialization and large-scale deployment of most of the energy sources we rely on today. The oil and natural gas industry has enjoyed permanent subsidies and tax credits that date back to 1916 when Congress created the first tax breaks for oil and gas production. The coal industry began receiving similar support in 1932 with the passage of the first depletion allowances that enabled mining companies to deduct the value of coal removed from a mine from their taxable revenue.

Still in effect today, these incentives were designed to spur exploration and extraction of oil, gas and coal, and have since evolved to include such diverse mechanisms as royalty relief for resources developed on public lands; accelerated depreciation for investments in projects like pipelines, drilling rigs and refineries; and ongoing support for technology R&D and commercialization, such as the Department of Energy’s now defunct FutureGen program for coal research, its Deep Trek program for natural gas development and the VortexFlow SX tool for low-producing oil and gas wells.

For example, the 2005 energy bill passed by Congress provided more than $2 billion in tax relief for the oil and gas industry to encourage investment in exploration and distribution infrastructure. [2] The same bill also provided an expansion of existing support for coal, which in 2003 had a 10-year value of more than $3 billion. Similarly, the nuclear industry receives extensive support for R&D – the 2008 federal budget calls for more than $500 million in support for nuclear research – as well as federal indemnity that helps lower its insurance premiums. [3]

Over the past 15 years, the wind power industry has also enjoyed federal support, with a small amount of funding for R&D (the federal FY 2006 budget allotted $38 million for wind research) and the bulk of federal support taking the form of the Production Tax Credit (PTC) for wind power generation. The PTC has helped make wind energy more cost-competitive with other federally subsidized energy sources; just as importantly, its relatively routine renewal by Congress has created conditions under which market participants have grown accustomed to its effect on wind power finance.

However, in contrast to its consistent policies for coal, natural gas and nuclear power, Congress has never granted long-term approval to the wind power PTC. For more than a decade, in fact, Congress has failed to extend the PTC for longer than two years. And in three different years, the credit was allowed to expire, with substantial negative consequences for the industry. Each year that the PTC has expired, major suppliers have had to, in the words of one senior wind power executive, “shut down their factories, lay off their people and go home.”

In 2000, 2002 and 2004, the expiration of the PTC sent wind development plummeting, with an almost complete collapse of the industry in 2000. If the PTC is allowed to expire at the end of 2008, the American Wind Energy Association (AWEA) estimates that as many as 75,000 domestic jobs could be lost as the industry slows production of turbines and power consumers reduce demand for new wind power projects.

The last three years have seen tenuous progress, with Congress extending the PTC for one and then two years; however, the wind industry is understandably concerned about these short-term extensions. Of significant importance is the corresponding effect a long-term or permanent extension of the PTC would have on the U.S. manufacturing sector and related investment activity. For starters, it would put the industry on an even footing with its competitors in the fossil fuels and nuclear industries. More importantly, it would send a clear signal to the U.S. manufacturing community that wind power is a solid, long-term investment.

PARTS: UNLEASHING THE NEXT MANUFACTURING BOOM

To fully grasp the trickle-down effects of an uncertain PTC on the wind power and related manufacturing industries, one must understand the industrial scale of a typical wind power development. Today’s wind turbines represent the largest rotating machinery in the world: a modern-day, 1.5-megawatt machine towers more than 300 feet above the ground with blades that out-span the wings of a 747 jetliner, and a typical utility-scale wind farm will include anywhere from 30 to 200 of these machines, planted in rows or staggered lines across the landscape.

The sheer size and scope of a utility-scale wind farm demands a sophisticated and established network of heavy equipment and parts manufacturers that can fulfill orders in a timely fashion. Representing a familiar process for anyone who’s worked in a steel mill, forge, gear-works or similar industrial facility, the manufacture of each turbine requires massive, rolled steel tubes for the tower; a variety of bearings and related components for lubricity in the drive shaft and hub; cast steel for housings and superstructure; steel forgings for shafts and gears; gearboxes for torque transmission; molded fiberglass, carbon fiber or hybrid blades; and electronic components for controls, monitoring and other functions.

U.S. manufacturers have extensive experience making all of these components for other end-use applications, and many have even succeeded in becoming suppliers to the wind industry. For example, Ameron International – a Pasadena, Calif.-based maker of industrial steel pipes, poles and related coatings – converted an aging heavy-steel fabrication plant in Fontana, Calif., to make wind towers. At 80 meters tall, 4.8 meters in diameter and weighing in at 200 tons, a wind tower requires large production facilities that have high up-front capital costs. By converting an existing facility, Ameron was able to capture a key and rapidly growing segment of the U.S. wind market in high-wind Western states while maintaining its position in other markets for its steel products.

Other manufacturers have also seen the opportunity that wind development presents and have taken similar steps. For example, Beaird Co. Ltd, a Shreveport, La.-based metal fabrication and machined parts manufacturer, supplies towers to the Midwest, Texas and Florida wind markets, as does DMI Industries from facilities in Fargo, N.D., and Tulsa, Okla.

But the successful conversion of existing manufacturing facilities to make parts for the wind industry belies an underlying challenge: investment in new manufacturing capacity to serve the wind industry is hindered by the lack of a clear policy framework. Even at wind’s current growth rates and with the resulting pent-up domestic demand for parts, the U.S. manufacturing sector is understandably reticent to invest in new production capacity.

The cause for this reticence is depicted graphically in Figure 1. With the stop-and-go nature of the PTC regarding U.S. wind development, and the consistent demand for their products in other end-use sectors, American manufacturers have strong disincentives to invest in new capital projects targeting the wind industry. It can take two to six years to build a new factory and 15 or more years to recapture the investment. The one- to two-year investment cycle of the U.S. wind industry is therefore only attractive to players who are comfortable with the risk and can manage wind as a marginal customer rather than an anchor tenant. This means that over the long haul, the United States could be legislating itself out of the “renewables” space, which arguably has a potential of several trillion dollars of global infrastructure.

The result in the marketplace: the United States ends up importing many of the large manufactured parts that go into a modern wind turbine – translating to a missed opportunity for domestic manufacturers that could be claiming a larger chunk of the underdeveloped U.S. wind market. As the largest consumer of electricity on earth, the United States also represents the biggest untapped market for wind power. At the end of 2007, with multiple successive years of 30 to 40 percent growth, wind power claimed just 1 percent of the U.S. electricity market. The raw potential for wind power in the United States is three times our total domestic consumption, according to the U.S. Energy Information Administration; if supply chain issues weren’t a problem, wind power could feasibly grow to supply as much as 20 to 30 percent of our $330 billion annual domestic electricity market. At 20 percent of domestic energy supply, the United States would need 300,000 MW of installed wind power capacity – an amount that would take 20 to 30 years of sustained manufacturing and development to achieve. But that would require growth well above our current pace of 4,000 to 5,000 MW annually – growth that simply isn’t possible given current supply constraints.
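
A quick sanity check of that build-out arithmetic, using the capacity figures quoted above (the 25-year ramp-up scenario is an illustrative assumption):

```python
# Back-of-the-envelope check of the wind build-out arithmetic in the text.
# Targets and current figures come from the article; the 25-year scenario is assumed.

target_mw = 300_000            # installed capacity for roughly a 20 percent supply share
installed_2007_mw = 16_000     # U.S. installed wind capacity at the end of 2007
current_annual_mw = 5_000      # roughly the current pace of annual additions

years_at_current_pace = (target_mw - installed_2007_mw) / current_annual_mw
print(f"Years at ~5,000 MW/yr: {years_at_current_pace:.0f}")

# To hit the target in 25 years instead, the average build rate would need to be:
required_rate = (target_mw - installed_2007_mw) / 25
print(f"Average additions needed for a 25-year build-out: {required_rate:,.0f} MW/yr")
```

At the current pace, the target sits more than half a century away, which is the supply-constraint point the paragraph above makes.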

Of course, that’s just the U.S. market. Global wind development is set to more than triple by 2015, with cumulative installed capacity expected to rise from approximately 91 gigawatts (GW) by the end of 2007 to more than 290 GW by the end of 2015, according to forecasts by Emerging Energy Research (EER). Annual MW added for global wind power is expected to increase more than 50 percent, from approximately 17.5 GW in 2007 to more than 30 GW in 2015, according to EER’s forecasts. [4]

By offering the wind power industry the same long-term tax benefits enjoyed by other energy sources, Congress could trigger a wave of capital investment in new manufacturing capacity and turn the United States from a net importer of wind power equipment to a net exporter. But extending the PTC is not the final step: as much as any other component, a robust wind manufacturing sector needs skilled and dedicated people.

PEOPLE: RECLAIMING OUR MANUFACTURING ROOTS

In 2003, the National Association of Manufacturers released a study outlining many of the challenges facing our domestic manufacturing base. “Keeping America Competitive – How a Talent Shortage Threatens U.S. Manufacturing” highlights the loss of skilled manufacturing workers to foreign competitors, the problem of an aging workforce and a shift to a more urban, high-tech economy and culture.

In particular, the study notes a number of “image” problems for the manufacturing industry. To wit: Among a geographically, ethnically and socio-economically diverse set of respondents – ranging from students, parents and teachers to policy analysts, public officials, union leaders, and manufacturing employees and executives – the sector’s image was found to be heavily loaded with negative connotations (and universally tied to the old “assembly line” stereotype) and perceived to be in a state of decline.

When asked to describe the images associated with a career in manufacturing, student respondents offered phrases such as “serving a life sentence,” being “on a chain gang” or a “slave to the line,” and even being a “robot.” Even more telling, most adult respondents said that people “just have no idea” of manufacturing’s contribution to the American economy.

The effect of this “sector fatigue” can be seen across the Rust Belt in the aging factories, retiring workforce and depressed communities being heavily impacted by America’s turn away from manufacturing. Wind power may be uniquely positioned to help reverse this trend. A growing number of America’s young people are concerned about environmental issues, such as pollution and global warming, and want to play a role in solving these problems. With the lure of good-paying jobs in an industry committed to environmental quality and poised for tremendous growth, wind power may provide an answer to manufacturers looking to lure and retain top talent.

We’ve already seen that you don’t need a large wind power resource in your state to enjoy the economic benefits of wind’s surging growth: whether it’s rolled steel from Louisiana and Oklahoma, gear boxes and cables from Wisconsin and New Hampshire, electronic components from Massachusetts and Vermont, or substations and blades from Ohio and Florida, the wind industry’s need for manufactured parts – and the skilled labor that makes them – is massive, distributed and growing by the day.

UNLEASHING THE POWER OF EVOLUTION

The wind power industry offers a unique opportunity for revitalizing America’s manufacturing sector, creating vibrant job growth in currently depressed regions and tapping new export markets for American- made parts. For utilities and energy consumers, wind power provides a hedge against volatile energy costs and harvests one of our most abundant natural resources for energy security.

The time for wind power is now. As mankind has evolved, so too have our primary sources of energy: from the burning of wood and animal dung to whale oil and coal; to petroleum, natural gas and nuclear fuels; and (now) to wind turbines. The shift to wind power represents a natural evolution and progression that will provide both the United States and the world with critical economic, environmental and technological solutions. As energy technologies continue to evolve and mature, wind power will soon be joined by solar power, ocean current power and even hydrogen as cost-competitive solutions to our pressing energy challenges.

ENDNOTES

  1. “American Wind Energy Association 2007 Market Report” (January 2008). www.awea.org/Market_Report_Jan08.pdf
  2. Energy Policy Act of 2005, Section 1323-1329. www.citizen.org/documents/energyconferencebill0705.pdf
  3. Aileen Roder, “An Overview of Senate Energy Bill Subsidies to the Fossil Fuel Industry” (2003), Taxpayers for Common Sense website. www.taxpayer.net/greenscissors/LearnMore/senatefossilfuelsubsidies.htm
  4. “Report: Global Wind Power Base Expected to Triple by 2015” (November 2007), North American Windpower. www.nawindpower.com/naw/e107_plugins/content/content_lt.php?content.1478

The Virtual Generator

Electric utility companies today constantly struggle to find a balance between generating sufficient power to satisfy their customers’ dynamic load requirements and minimizing their capital and operating costs. They spend a great deal of time and effort attempting to optimize every element of their generation, transmission and distribution systems to achieve both their physical and economic goals.

In many cases, “real” generators waste valuable resources – waste that, if not managed efficiently, can go directly to the bottom line. Energy companies therefore find the concept of a “virtual generator,” or a virtual source of energy that can be turned on when needed, very attractive. Although they generally represent only a small percentage of utilities’ overall generation capacity, virtual generators are quick to deploy, affordable and cost-effective, and they represent a form of “green energy” that can help utilities meet carbon emission standards.

Virtual generators use forms of dynamic voltage and capacitance (Volt/VAr) adjustments that are controlled through sensing, analytics and automation. The overall process involves first flattening or tightening the voltage profiles by adding additional voltage regulators to the distribution system. Then, by moving the voltage profile up or down within the operational voltage bounds, utilities can achieve significant benefits (Figure 1). It’s important to understand, however, that because voltage adjustments will influence VArs, utilities must also adjust both the placement and control of capacitors (Figure 2).
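
To make the process concrete, the short Python sketch below walks through the two steps just described on a toy feeder voltage profile: it first “flattens” the profile (approximating the effect of added voltage regulators) and then shifts the whole profile toward the lower end of the operating band. The profile values, band limits, flattening step and safety margin are illustrative assumptions, not field data or any vendor’s algorithm.

```python
# Toy illustration of the two-step Volt/VAr process described above.
# All numbers are hypothetical; a real deployment would use measured data
# and would coordinate capacitor (VAr) control, which this sketch omits.

V_MIN, V_MAX = 114.0, 126.0   # regulatory service-voltage band used in this sketch (volts)
TARGET_MARGIN = 1.0           # assumed safety margin above the lower limit (volts)

# Hypothetical readings from the head to the tail of one feeder.
profile = [125.5, 124.0, 122.5, 120.5, 118.0, 116.0]

def flatten(profile, step=2.0):
    """Approximate the effect of added voltage regulators: cap the
    head-to-tail spread so no point sits more than `step` volts above
    the feeder minimum."""
    floor = min(profile)
    return [min(v, floor + step) for v in profile]

def lower_within_band(profile, v_min=V_MIN, margin=TARGET_MARGIN):
    """Shift the whole (flattened) profile down as far as the band allows."""
    shift = min(profile) - (v_min + margin)
    return [round(v - max(shift, 0.0), 1) for v in profile]

flat = flatten(profile)
optimized = lower_within_band(flat)
assert all(V_MIN <= v <= V_MAX for v in optimized)  # still inside the envelope
print("original :", profile)
print("flattened:", flat)
print("optimized:", optimized)
```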

Various business drivers will influence the use of Volt/VAr. A utility could, for example, use Volt/VAr to:

  • Respond to an external system-wide request for emergency load reduction;
  • Assist in reducing a utility’s internal load – both regional and throughout the entire system;
  • Target specific feeder load reduction through the distribution system;
  • Respond as a peak load relief (a virtual peaker);
  • Optimize Volt/VAr for better reliability and more resiliency;
  • Maximize the efficiency of the system and subsequently reduce energy generation or purchasing needs;
  • Achieve economic benefits, such as generating revenue by selling power on the spot market; and
  • Supply VArs to supplement off-network deficiencies.

Each of the above potential benefits falls into one of four domains: peaking relief, energy conservation, VAr management or reliability enhancement. The peaking relief and energy conservation domains deal with load reduction; VAr management, logically enough, involves management of VArs; and reliability enhancement actually increases load. In this latter domain, the utility will use increased voltage to enable greater voltage tolerances in self-healing grid scenarios, or to improve the performance of non-constant power devices so that they complete their cycles and come off the system sooner, thereby improving load diversity.

Volt/VAr optimization can be applied to all of these scenarios. It is intended either to optimize a utility’s distribution network power factor toward unity, or to purposefully make the power factor leading in anticipation of a change in load characteristics.

Each of these potential benefits comes from solving a different business problem. Because of this, at times they can even be at odds with each other. Utilities must therefore create fairly complex business rules supported by automation to resolve any conflicts that arise.

Although the concept of load reduction using Volt/VAr techniques is not new, the ability to automate the capabilities in real time and drive the solutions with various business requirements is a relatively recent phenomenon. Energy produced with a virtual generator is neither free nor unlimited. However, it is real in the sense that it allows the system to use energy more efficiently.

A number of things are driving utilities’ current interest in virtual generators, including the fact that sensors, analytics, simulation, geospatial information, business process logic and other forms of information technology are increasingly affordable and robust. In addition, lower-cost intelligent electronic devices (IEDs) make virtual generators possible and bring them within reach of most electric utility companies.

The ability to innovate an entirely new solution to support the above business scenarios is now within the realm of possibility for the electric utility company. As an added benefit, much of the base IT infrastructure required for virtual generators is the same as that required for other forms of “smart grid” solutions, such as advanced meter infrastructure (AMI), demand side management (DSM), distributed generation (DG) and enhanced fault management. Utilities that implement a well-designed virtual generator solution will ultimately be able to align it with these other power management solutions, thus optimizing all customer offerings that will help reduce load.

HOW THE SOLUTION WORKS

All utilities are required, for regulatory or reliability reasons, to stay within certain high- and low-voltage parameters for all of their customers. In the United States the American Society for Testing and Materials (ASTM) guidelines specify that the nominal voltage for a residential single-phase service should be 120 volts with a plus or minus 6-volt variance (that is, 114 to 126 volts). Other countries around the world have similar guidelines. Whatever the actual values are, all utilities are required to operate within these high- and low-voltage “envelopes.” In some cases, additional requirements may be imposed as to the amount of variance – the number of volts changed or the percent change in the voltage – that can take place over a period of minutes or hours.
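
As a simple illustration, the Python sketch below checks a series of hourly service-voltage readings against the 114- to 126-volt envelope cited above, plus a hypothetical hour-to-hour variance limit; the variance figure is an assumed placeholder, since such limits differ by jurisdiction.

```python
# Minimal sketch of the service-voltage checks described above. The band
# (114-126 V) comes from the article; the per-hour variance limit is an
# assumed placeholder, since actual limits vary by jurisdiction.

V_MIN, V_MAX = 114.0, 126.0
MAX_DELTA_PER_HOUR = 3.0  # assumed limit on voltage change per hour (volts)

def check_service_voltage(hourly_readings):
    """Return a list of violations for one customer's hourly voltage readings."""
    violations = []
    for hour, volts in enumerate(hourly_readings):
        if not V_MIN <= volts <= V_MAX:
            violations.append((hour, volts, "outside 114-126 V envelope"))
        if hour > 0 and abs(volts - hourly_readings[hour - 1]) > MAX_DELTA_PER_HOUR:
            violations.append((hour, volts, "excessive hour-to-hour variance"))
    return violations

print(check_service_voltage([121.0, 120.0, 116.5, 113.8, 118.2]))
```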

Commercial customers may have different high/low values, but the principle remains the same. In fact, it is the mixture of residential, commercial and industrial customers on the same feeder that makes the virtual generation solution almost a requirement if a utility wants to optimize its voltage regulation.

Although it would be ideal for a utility to deliver 120-volt power consistently to all customers, the physical properties of the distribution system as well as dynamic customer loading factors make this difficult. Most utilities are already trying to accomplish this through planning, network and equipment adjustments, and in many cases use of automated voltage control devices. Despite these efforts, however, in most networks utilities are required to run the feeder circuit at higher-than-nominal levels at the head of the circuit in order to provide sufficient voltage for downstream users, especially those at the tails or end points of the circuit.

In a few cases, electric utilities have added manual or automatic voltage regulators to step up voltage at one or more points in a feeder circuit because of nonuniform loading and/or varied circuit impedance characteristics throughout the circuit profile. This stepped-up slope, or curve, allows the utility company to comply with the voltage level requirements for all customers on the circuit. In addition, utilities can satisfy the VAr requirements for operational efficiency of inductive loads using switched capacitor banks, but they must coordinate those capacitor banks with voltage adjustments as well as power demand. Refining voltage profiles through virtual generation usually implies a tight corresponding control of capacitance as well.

The theory behind a robust Volt/VAr regulated feeder circuit is based on the same principles but applied in an innovative manner. Rather than just using voltage regulators to keep the voltage profile within the regulatory envelope, utilities try to “flatten” the voltage curve or slope. In reality, the overall effect is a stepped/slope profile due to economic limitations on the number of voltage regulators applied per circuit. This flattening has the effect of allowing an overall reduction in nominal voltage. In turn, the operator may choose to move the voltage curve up or down within the regulatory voltage envelope. Utilities can derive extra benefit from this solution because all customers within a given section of a feeder circuit could be provided with the same voltage level, which should result in fewer “problem” customers who may not be in the ideal place on the circuit. It could also minimize the power wasted by overdriving the voltage at the head of the feeder in order to satisfy customers at the tails.

THE ROLE OF AUTOMATION IN DELIVERING THE VIRTUAL GENERATOR

Although theoretically simple in concept, executing and maintaining a virtual generator solution is a complex task that requires real-time coordination of many assets and business rules. Electrical distribution networks are dynamic systems with constantly changing demands, parameters and influencers. Without automation, utilities would find it impossible to deliver and support virtual generators, because it’s infeasible to expect a human – or even a number of humans – to operate such systems affordably and reliably. Therefore, utilities must leverage automation to put humans in monitoring rather than controlling roles.

There are many “inputs” to an automated solution that supports a virtual generator. These include both dynamic and static information sources. For example, real-time sensor data monitoring the condition of the networks must be merged with geospatial information, weather data, spot energy pricing and historical data in a moment-by-moment, repeating cycle to optimize the business benefits of the virtual generator. Complicating this, in many cases the team managing the virtual generator will not “own” all of the inputs required to feed the automated system. Frequently, they must share this data with other applications and organizational stakeholders. It’s therefore critical that utilities put into place an open, collaborative and integrated technology infrastructure that supports multiple applications from different parts of the business.

One of the most critical aspects of automating a virtual generator is having the right analytical capabilities to decide where and how the virtual generator solution should be applied to support the organization’s overall business objectives. For example, utilities should use load predictors and state estimators to determine future states of the network based on load projections given the various Volt/VAr scenarios they’re considering. Additionally, they should use advanced analytics to determine the resiliency of the network or the probability of internal or external events influencing the virtual generator’s application requirements. Still other types of analyses can provide utilities with a current view of the state of the virtual generator and how much energy it’s returning to the system.
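
The following Python sketch hints at what such analytics might look like in miniature: a naive load predictor (an average over prior days, built from the kinds of merged historical inputs described earlier) is combined with an assumed conservation-voltage-reduction (CVR) factor to compare candidate Volt/VAr scenarios. The load history, CVR factor and candidate voltage reductions are hypothetical; real state estimators and load forecasters are far more sophisticated.

```python
# Hedged sketch of the scenario analysis described above: a naive load
# predictor plus an assumed conservation-voltage-reduction (CVR) factor is
# used to compare candidate Volt/VAr setpoints. The history, CVR factor and
# candidate voltage reductions are illustrative assumptions.

history_mw = {                      # hypothetical feeder load by hour for 3 prior days
    "day1": [42, 40, 45, 52, 58, 55],
    "day2": [41, 39, 46, 53, 60, 56],
    "day3": [43, 41, 44, 51, 57, 54],
}
CVR_FACTOR = 0.8                    # assumed percent load drop per 1 percent voltage drop

def predict_load(history):
    """Naive predictor: average load at each hour across prior days."""
    days = list(history.values())
    return [sum(day[h] for day in days) / len(days) for h in range(len(days[0]))]

def scenario_reduction(predicted_mw, pct_voltage_drop):
    """Estimated MW returned by the 'virtual generator' for one scenario."""
    return [mw * CVR_FACTOR * pct_voltage_drop / 100.0 for mw in predicted_mw]

forecast = predict_load(history_mw)
for pct in (1.0, 2.0, 3.0):
    saved = scenario_reduction(forecast, pct)
    print(f"{pct:.0f}% voltage reduction -> peak relief ~{max(saved):.2f} MW")
```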

While it is important that all these techniques be used in developing a comprehensive load-management strategy, they must be unified into an actionable, business-driven solution. The business solution must incorporate the values achieved by the virtual generator solutions, their availability, and the ability to coordinate all of them at all times. A voltage management solution that is already being used to support customer load requirements throughout the peak day will be of little use to the utility for load management. It becomes imperative that the utility understand the effect of all the voltage management solutions when they are needed to support the energy demands on the system.

The Technology Demonstration Center

When a utility undergoes a major transformation – such as adopting new technologies like advanced metering – the costs and time involved require that the changes are accepted and adopted by each of the three major stakeholder groups: regulators, customers and the utility’s own employees. A technology demonstration center serves as an important tool for promoting acceptance and adoption of new technologies by displaying tangible examples and demonstrating the future customer experience. IBM has developed the technology center development framework as a methodology to efficiently define the strategy and tactics required to develop a technology center that will elicit the desired responses from those key stakeholders.

KEY STAKEHOLDER BUY-IN

To successfully implement major technology change, utilities need to consider the needs of the three major stakeholders: regulators, customers and employees.

Regulators. Utility regulators are naturally wary of any transformation that affects their constituents on a grand scale, and thus their concerns must be addressed to encourage regulatory approval. The technology center serves two purposes in this regard: educating the regulators and showing them that the utility is committed to educating its customers on how to receive the maximum benefits from these technologies.

Given the size of a transformation project, it’s critical that regulators support the increased spending required and any consequent increase in rates. Many regulators, even those who favor new technologies, believe that the utility will benefit the most and should thus cover the cost. If utilities expect cost recovery, the regulators need to understand the complexity of new technologies and the costs of the interrelated systems required to manage these technologies. An exhibit in the technology center can go “behind the curtain,” giving regulators a clearer view of these systems, their complexity and the overall cost of delivering them.

Finally, each stage in the deployment of new technologies requires a new approval process and provides opportunities for resistance from regulators. For the utility, staying engaged with regulators throughout the process is imperative, and the technology center provides an ideal way to continue the conversation.

Customers. Once regulators give their approval, the utility must still make its case to the public. The success of a new technology project rests on customers’ adoption of the technology. For example, if customers continue using appliances as they always did, at a regular pace throughout the day and not adjusting for off-peak pricing, the utility will fail to achieve the major planned cost advantage: a reduction in production facilities. Wide-scale customer adoption is therefore key. Indeed, general estimates indicate that customer adoption rates of roughly 20 percent are needed to break even in a critical peak-pricing model. [1]

Given the complexity of these technologies, it’s quite possible that customers will fail to see the value of the program – particularly in the context of the changes in energy use they will need to undertake. A well-designed campaign that demonstrates the benefits of tiered pricing will go a long way toward encouraging adoption. By showcasing the future customer experience, the technology center can provide a tangible example that serves to create buzz, get customers excited and educate them about benefits.

Employees. Obtaining employee buy-in on new programs is as important as winning over the other two stakeholder groups. For transformation to be successful, an understanding of the process must be moved out of the boardroom and communicated to the entire company. Employees whose responsibilities will change need to know how they will change, how their interactions with the customer will change and what benefits are in it for them. At the same time, utility employees are also customers. They talk to friends and spread the message. They can be the utility’s best advocates or its greatest detractors. Proper internal communication is essential for a smooth transition from the old ways to the new, and the technology center can and should be used to educate employees on the transformation.

OTHER GOALS FOR THE TECHNOLOGY DEMONSTRATION CENTER

The objectives discussed above represent one possible set of goals for a technology center. Utilities may well have other reasons for erecting the technology center, and these should be addressed as well. As an example, the utility may want to present a tangible display of its plans for the future to its investors, letting them know what’s in store for the company. Likewise, the utility may want to be a leader in its industry or region, and the technology center provides a way to demonstrate that to its peer companies. The utility may also want to be recognized as a trendsetter in environmental progress, and a technology center can help people understand the changes the company is making.

The technology center needs to be designed with the utility’s particular environment in mind. The technology center development framework is, in essence, a road map created to help the utility rank the technology center’s key strategic priorities and components so as to maximize its impact on the intended audience.

DEVELOPING THE TECHNOLOGY CENTER

Unlike other aspects of a traditional utility, the technology center needs to appeal to customers visually, as well as explain the significance and impact of new technologies. The technology center development framework presented here was developed by leveraging trends and experiences in retail, including “experiential” retail environments such as the Apple Stores in malls across the United States. These new retail environments offer a much richer and more interactive experience than traditional retail outlets, which may employ some basic merchandising and simply offer products for sale.

Experiential environments have arisen partly as a response to competition from online retailers and the increased complexity of products. The technology center development framework uses the same state-of-the-art design strategies adopted by high-end retailers, inspiring the utility’s executives and leadership to create a compelling experience that elicits the desired response and buy-in from the stakeholders described above.

Phase 1: Technology Center Strategy

During this phase, a utility typically spends four to eight weeks developing an optimal strategy for the technology center. To accomplish this, planners identify and delineate in detail three major elements:

  • The technology center’s goals;
  • Its target audience; and
  • Content required to achieve those goals.

As shown in Figure 1, these pieces are not mutually exclusive; in fact, they’re more likely to be iterative: The technology center’s goals set the stage for determining the audience and content, and those two elements influence each other. The outcome of this phase is a complete strategy road map that defines the direction the technology center will take.

To understand the Phase 1 objectives properly, it’s necessary to examine the logic behind them. The methodology focuses on the three elements mentioned previously – goals, audience and content – because these are easily overlooked and misaligned by organizations.

Utility companies inevitably face multiple and competing goals. Thus, it’s critical to identify the goals specifically associated with the technology center and to distinguish them from other corporate goals or goals associated with implementing a new technology. Taking this step forces the organization to define which goals can be met by the technology center with the greatest efficiency, and establishes a clear plan that can be used as a guide in resolving the inevitable future conflicts.

Similarly, the stakeholders served by the utility represent distinct audiences. Based on the goals of the center and the organization, as well as the internal expectations set by managers, the target audience needs to be well defined. Many important facets of the technology center, such as content and location, will be partly determined by the target audience. Finally, the right content is critical to success: a regulator may want to see different information than customers do.

In addition, the audience’s specific needs dictate different content options. Do the utility’s customers care about the environment? Do they care more about advances in technology? Are they concerned about how their lives will change in the future? These questions need to be answered early in the process.

The key to successfully completing Phase 1 is constant engagement with the utility’s decision makers, since their expectations for the technology center will vary greatly depending on their responsibilities. Throughout this phase, the technology center’s planners need to meet with these decision makers on a regular basis, gather and respect their opinions, and come to the optimal mix for the utility on the whole. This can be done through interviews or a series of workshops, whichever is better suited for the utility. We have found that by employing this process, an organization can develop a framework of goals, audience and content mix that everyone will agree on – despite differing expectations.

Phase 2: Design Characteristics

The second phase of the development framework focuses on the high-level physical layout of the technology center. These “design characteristics” will affect the overall layout and presentation of the technology center.

We have identified six key characteristics that need to be determined. Each is developed as a trade-off between two extremes; this helps utilities understand the issues involved and debate the solutions. Again, there are no right answers to these issues – the optimal solution depends on the utility’s environment and expectations:

  • Small versus large. The technology center can be small, like a cell phone store, or large, like a Best Buy.
  • Guided versus self-guided. The center can be designed to allow visitors to guide themselves, or staff can be retained to guide visitors through the facility.
  • Single versus multiple. There may be a single site, or multiple sites. As with the first issue (small versus large), one site may be a large flagship facility, while the others represent smaller satellite sites.
  • Independent versus linked. Depending on the nature of the exhibits, technology center sites may operate independently of each other or include exhibits that are remotely linked in order to display certain advanced technologies.
  • Fixed versus mobile. The technology center can be in a fixed physical location, but it can also be mounted on a truck bed to bring the center to audiences around the region.
  • Static versus dynamic. The exhibits in the technology center may become outdated. How easy will it be to change or swap them out?

Figure 2 illustrates a sample set of design characteristics for one technology center, using a sample design characteristic map. This map shows each of the characteristics laid out around the hexagon, with the preference ranges represented at each vertex. By mapping out the utility’s options with regard to the design characteristics, it’s possible to visualize the trade-offs inherent in these decisions, and thus identify the optimal design for a given environment. In addition, this type of map facilitates reporting on the project to higher-level executives, who may benefit from a visual executive summary of the technology center’s plan.
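
As a lightweight illustration of how the map’s inputs might be captured before any graphic is produced, the Python sketch below records a position for each of the six characteristics and renders a simple text summary. The positions shown are hypothetical examples, not recommendations.

```python
# A minimal sketch of how the six design characteristics might be captured
# and reviewed before being plotted on a map like the one in Figure 2.
# The characteristic names come from the list above; the 0-10 positions are
# hypothetical, where 0 leans toward the first extreme and 10 toward the second.

design_characteristics = {
    "small vs. large":          3,   # closer to a small, cell-phone-store format
    "guided vs. self-guided":   7,   # mostly self-guided
    "single vs. multiple":      2,   # one flagship site
    "independent vs. linked":   5,   # undecided
    "fixed vs. mobile":         1,   # fixed location
    "static vs. dynamic":       8,   # exhibits designed to be swapped out
}

def render_map(characteristics, width=10):
    """Print a simple text view of the trade-off positions."""
    for name, position in characteristics.items():
        bar = "#" * position + "-" * (width - position)
        print(f"{name:<26} [{bar}] {position}/10")

render_map(design_characteristics)
```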

The tasks in Phase 2 require the utility’s staff to be just as engaged as in the strategy phase. A workshop or interviews with staff members who understand the various needs of the utility’s region and customer base should be conducted to work out an optimal plan.

Phase 3: Execution Variables

Phases 1 and 2 provide a strategy and design for the technology center, and allow the utility’s leadership to formulate a clear vision of the project and come to agreement on the ultimate purpose of the technology center. Phase 3 involves engaging the technology developers to identify which aspects of the new technology – for example, smart appliances, demand-side management, outage management and advanced metering – will be displayed at the technology center.

During this phase, utilities should create a complete catalog of the technologies that will be demonstrated, and match them up against the strategic content mix developed in Phase 1. A ranking is then assigned to each potential new technology based on several considerations, such as how well it matches the strategy, how feasible it is to demonstrate the given technology at the center, and what costs and resources would be required. Only the most efficient and well-matched technologies and exhibits will be displayed.
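
A minimal Python sketch of this ranking step is shown below, assuming a simple weighted-scoring scheme. The criteria mirror the ones named above (strategy fit, feasibility of demonstration, and cost/resources), but the candidate exhibits, scores and weights are purely illustrative.

```python
# Illustrative sketch of the Phase 3 ranking step described above. The
# criteria mirror the text (strategy fit, feasibility, cost/resources);
# the candidate technologies, scores and weights are hypothetical.

WEIGHTS = {"strategy_fit": 0.5, "feasibility": 0.3, "cost": 0.2}

candidates = [
    # scores on a 1-5 scale; for "cost", higher means cheaper to exhibit
    {"name": "smart appliances",       "strategy_fit": 5, "feasibility": 4, "cost": 3},
    {"name": "advanced metering",      "strategy_fit": 4, "feasibility": 5, "cost": 4},
    {"name": "outage management demo", "strategy_fit": 3, "feasibility": 2, "cost": 2},
]

def score(candidate):
    """Weighted score used to rank candidate exhibits."""
    return sum(WEIGHTS[k] * candidate[k] for k in WEIGHTS)

for c in sorted(candidates, key=score, reverse=True):
    print(f"{c['name']:<24} weighted score: {score(c):.2f}")
```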

During Phase 3, outside vendors are also engaged, including architects, designers, mobile operators (if necessary) and real estate agents, among others. With the first two phases providing a guide, the utility can now open discussions with these vendors and present a clear picture of what it wants. The technical requirements for each exhibit will be cataloged and recorded to ensure that any design will take all requirements into account. Finally, the budget and work plan are written and finalized.

CONCLUSION

With the planning framework completed, the team can now build the center. The framework serves as the blueprint for the center, and all relevant benchmarks must be transparent and open for everyone to see. Disagreements during the buildout phase can be referred back to the framework, and issues that don’t fit the framework are discarded. In this way, the utility can ensure that the technology center will meet its goals and serve as a valuable tool in the process of transformation.

Thank you to Ian Simpson, IBM Global Business Services, for his contributions to this paper.

ENDNOTE

  1. Critical peak pricing refers to the model whereby utilities use peak pricing only on days when demand for electricity is at its peak, such as extremely hot days in the summer.

Utility Mergers and Acquisitions: Beating the Odds

Merger and acquisition activity in the U.S. electric utility industry has increased following the 2005 repeal of the Public Utility Holding Company Act (PUHCA). A key question for the industry is not whether M&A will continue, but whether utility executives are prepared to manage effectively the complex regulatory challenges that have evolved.

M&A activity is (and always has been) the most potent, visible and (often) irreversible option available to utility CEOs who wish to reshape their portfolios and meet their shareholders’ expectations for returns. However, M&A has too often been applied reflexively – much like the hammer that sees everything as a nail.

The American utility industry is likely to undergo significant consolidation over the next five years. There are several compelling rationales for consolidation. First, M&A has the potential to offer real economic value. Second, capital-market and competitive pressures favor larger companies. Third, the changing regulatory landscape favors larger entities with the balance sheet depth to weather the uncertainties on the horizon.

LEARNING FROM THE PAST

Historically, however, acquirers have found it difficult to derive value from merged utilities. With the exception of some vertically integrated deals, most M&A deals have been value-neutral or value-diluting. This track record can be explained by a combination of factors: steep acquisition premiums, harsh regulatory givebacks, anemic cost reduction targets and (in more than half of the deals) a failure to achieve targets quickly enough to make a difference. In fact, over an eight-year period, less than half the utility mergers actually met or exceeded the announced cost reduction levels resulting from the synergies of the merged utilities (Figure 1).

The lessons learned from these transactions can be summarized as follows: Don’t overpay; negotiate a good regulatory deal; aim high on synergies; and deliver on them.

In trying to deliver value-creating deals, CEOs often bump up against the following realities:

  • The need to win approval from the target’s shareholders drives up acquisition premiums.
  • The need to receive regulatory approval for the deal and to alleviate organizational uncertainty leads to compromises.
  • Conservative estimates of the cost reductions resulting from synergies are made to reduce the risk of giving away too much in regulatory negotiations.
  • Delivering on synergies proves tougher than anticipated because of restrictions agreed to in regulatory deals or because of the organizational inertia that builds up during the 12- to 18-month approval process.

LOOKING AT PERFORMANCE

Total shareholder return (TSR) is significantly affected by two external deal negotiation levers – acquisition premiums and regulatory givebacks – and two internal levers – synergies estimated and synergies delivered. Between 1997 and 2004, mergers in all U.S. industries created an average TSR of 2 to 3 percent relative to the market index two years after closing. In contrast, utility mergers typically underperformed the utility index by about 2 to 3 percent three years after the transaction announcement. T&D mergers underperformed the index by about 4 percent, whereas mergers of vertically integrated utilities beat the index by about 1 percent three years after the announcement (Figure 2).

For 10 recent mergers, the lower the share of the merger savings retained by the utilities and the higher the premium paid for the acquisition, the greater the likelihood that the deal destroyed shareholder value, resulting in negative TSR.

Although these appear to be obvious pitfalls that a seasoned management team should be able to recognize and overcome, translating this knowledge into tangible actions and results has been difficult.

So how can utility boards and executives avoid being trapped in a cycle of doing the same thing again and again while expecting different results (Einstein’s definition of insanity)? We suggest that a disciplined end-to-end M&A approach will (if well-executed) tilt the balance in the acquirer’s favor and generate long-term shareholder value. That approach should include the four following broad objectives:

  • Establishment of compelling strategic logic and rationale for the deal;
  • A carefully managed regulatory approval process;
  • Integration that takes place early and aggressively; and
  • A top-down approach for designing realistic but ambitious economic targets.

GETTING IT RIGHT: FOUR BROAD OBJECTIVES THAT ENHANCE M&A VALUE CREATION

To complete successful M&As, utilities must develop a more disciplined approach that incorporates the lessons learned from both utilities and other industrial sectors. At the highest level, adopting a framework with four broad objectives will enhance value creation before the announcement of the deal and through post-merger integration. To do this, utilities must:

  1. Establish a compelling strategic logic and rationale for the deal. A critical first step is asking the question, why do the merger? To answer this question, deal participants must:
    • Determine the strategic logic for long-term value creation with and without M&A. Too often, executives are optimistic about the opportunity to improve other utilities, but they overlook the performance potential in their current portfolio. For example, without M&A, a utility might be able to invest and grow its rate base, reduce the cost of operations and maintenance, optimize power generation and assets, explore more aggressive rate increases and changes to the regulatory framework, and develop the potential for growth in an unregulated environment. Regardless of whether a utility is an acquirer or a target, a quick (yet comprehensive) assessment will provide a clear perspective on potential shareholder returns (and risks) with and without M&A.
    • Conduct a value-oriented assessment of the target. Utility executives typically have an intuitive feel for the status of potential M&A targets adjacent to their service territories and in the broader subregion. However, when considering M&A, they should go beyond the obvious criteria (size and geography) and candidates (contiguous regional players) to consider specific elements that expose the target’s value potential for the acquirer. Such value drivers could include an enhanced power generation and asset mix, improvements in plant availability and performance, better cost structures, an ability to respond to the regulatory environment, and a positive organizational and cultural fit. Also critical to the assessment are the noneconomic aspects of the deal, such as headquarters sharing, potential loss of key personnel and potential paralysis of the company (for example, when a merger or acquisition freezes a company’s ability to pursue M&A and other large initiatives for two years).
    • Assess internal appetites and capabilities for M&A. Successful M&A requires a broad commitment from the executive team, enough capable people for diligence and integration, and an appetite for making the tough decisions essential to achieving aggressive targets. Acquirers should hold pragmatic executive-level discussions with potential targets to investigate such aspects as cultural fit and congruence of vision. Utility executives should conduct an honest assessment of their own management teams’ M&A capabilities and depth of talent and commitment. Among historic M&A deals, those that involved fewer than three states and those in which the acquirer was twice as big as the target were easier to complete and realized more value.
  2. Carefully manage the regulatory approval process. State regulatory approvals present the largest uncertainty and risk in utility M&A, clearly affecting the economics of any deal. However, too often, these discussions start and end with rate reductions so that the utility can secure approvals. The regulatory approval process should be similar to the rigorous due diligence that’s performed before the deal’s announcement. This means that when considering M&A, utilities should:
    • Consider regulatory benefits beyond the typical rate reductions. The regulatory approval process can be used to create many benefits that share rewards and risks, and to provide advantages tailored to the specific merger’s conditions. Such benefits include a stronger combined balance sheet and a potential equity infusion into the target’s subsidiaries; an ability to better manage and hedge a larger combined fuel portfolio; the capacity to improve customer satisfaction; a commitment to specific rate-based investment levels; and a dedication to relieving customer liability on pending litigation. For example, to respond to regulatory policies that mandate reduced emissions, merged companies can benefit not only from larger balance sheets but also from equity infusions to invest in new technology or proven technologies. Merged entities are also afforded the opportunity to leverage combined emissions reduction portfolios.
    • Systematically price out a full range of regulatory benefits. The range should include the timing of “gives” (that is, the sharing of synergy gains with customers in the form of lower rates) as a key value lever; dedicated valuations of potential plans and sensitivities from all stakeholders’ perspectives; and a determination of the features most valued by regulators so that they can be included in a strategy for getting M&A approvals. Executives should be wary of settlements tied to performance metrics that are vaguely defined or inadequately tracked. They should also avoid deals that require new state-level legislation, because too much time will be required to negotiate and close these complex deals. Finally, executives should be wary of plans that put shareholder benefits at the end of the process, because current PUC decisions may not bind future ones.
    • Be prepared to walk away if the settlement conditions imposed by the regulators dilute the economics of the deal. This contingency plan requires that participating executives agree on the economic and timing triggers that could lead to an unattractive deal.
  3. Integrate early and aggressively. Historically, utility transactions have taken an average of 15 months from announcement to closing, given the required regulatory approvals. With such a lengthy time lag, it’s been easy for executives to fall into the trap of putting off important decisions related to the integration and post-merger organization. This delay often leads to organizational inertia as employees in the companies dig in their heels on key issues and decisions rather than begin to work together. To avoid such inertia, early momentum in the integration effort, embodied in the steps outlined below, is critical.
    • Announce the executive team’s organization early on. Optimally, announcements should be made within the first 90 days, and three or four well-structured senior-management workshops with the two CEOs and key executives should occur within the first two months. The decisions announced should be based on such considerations as the specific business unit and organizational options, available leadership talent and alignment with synergy targets by area.
    • Make top-down decisions about integration approach according to business and function. Many utility mergers appear to adopt a “template” approach to integration that leads to a false sense of comfort regarding the process. Instead, managers should segment decision making for each business unit and function. For example, when the acquirer has a best-practice model for fossil operations, the target’s plants and organization should simply be absorbed into the acquirer’s model. When both companies have strong practices, a more careful integration will be required. And when both companies need to transform a particular function, the integration approach should be tailored to achieve a change in collective performance.
    • Set clear guidelines and expectations for the integration. A critical part of jump-starting the integration process is appointing an integration officer with true decision-making authority, and articulating the guidelines that will serve as a road map for the integration teams. These guidelines should clearly describe the roles of the corporation and individual operating teams, as well as provide specific directions about control and organizational layers and review and approval mechanisms for major decisions.
    • Systematically address legal and organizational bottlenecks. The integration’s progress can be impeded by legal or organizational constraints on the sharing of sensitive information. In such situations, significant progress can be achieved by using clean teams – neutral people who haven’t worked in the area before – to ensure data is exchanged and sanitized analytical results are shared. Improved information sharing can aid executive-level decision making when it comes to commercially sensitive areas such as commercial marketing-and-trading portfolios, performance improvements, and other unregulated business-planning and organizational decisions.
  4. Use a top-down approach to design realistic but ambitious economic targets. Synergies from utility mergers have short shelf lives. With limits on a post-merger rate freeze or rate-case filing, the time to achieve the targets is short. To achieve their economic targets, merged utilities should:
    • Construct the top five to 10 synergy initiatives to capture value and translate them into road maps with milestones and accountabilities. Identifying and promoting clear targets early in the integration effort lead to a focus on the merger’s synergy goals.
    • Identify the links between synergy outcomes and organizational decisions early on, and manage those decisions from the top. Such top-down decisions should specify which business units or functional areas are to be consolidated. Integration teams often become gridlocked over such decisions because of conflicts of interest and a lack of objectivity.
    • Control the human resources policies related to the merger. Important top-down decisions include retention and severance packages and the appointment process. Alternative severance, retirement and retention plans should be priced explicitly to ensure a tight yet fair balance between the plans’ costs and benefits.
    • Exploit the merger to create opportunities for significant reductions in the acquirer’s cost base. Typical merger processes tend to focus on reductions in the target’s cost base. However, in many cases the acquirer’s cost base can also be reduced. Such reductions can be a significant source of value, making the difference between success and failure. They also communicate to the target’s employees that the playing field is level.
    • Avoid the tendency to declare victory too soon. Most synergies are related to standardization and rationalization of practices, consolidation of line functions and optimization of processes and systems. These initiatives require discipline in tracking progress against key milestones and cost targets. They also require a tough-minded assessment of red flags and cost increases over a sustained time frame – often two to three years after the closing.

RECOMMENDATIONS: A DISCIPLINED PROCESS IS KEY

Despite the inherent difficulties, M&A should remain a strategic option for most utilities. If they can avoid the pitfalls of previous rounds of mergers, executives have an opportunity to create shareholder value, but a disciplined and comprehensive approach to both the M&A process and the subsequent integration is essential.

Such an approach begins with executives who insist on a clear rationale for value creation with and without M&A. Their teams must make pragmatic assessments of a deal’s economics relative to its potential for improving base business. If they determine the deal has a strong rationale, they must then orchestrate a regulatory process that considers broad options beyond rate reductions. Having the discipline to walk away if the settlement conditions dilute the deal’s economics is a key part of this process. A disciplined approach also requires that an aggressive integration effort begin as soon as the deal has been announced – an effort that entails a modular approach with clear, fast, top-down decisions on critical issues. Finally, a disciplined process requires relentless follow-through by executives if the deal is to achieve ambitious yet realistic synergy targets.

Software-Based Intelligence: The Missing Link in the SmartGrid Vision

Achieving the SmartGrid vision requires more than advanced metering infrastructure (AMI), supervisory control and data acquisition (SCADA), and advanced networking technologies. While these critical technologies provide the main building blocks of the SmartGrid, its fundamental keystone – its missing link – will be embedded software applications located closer to the edge of the electric distribution network. Only through embedded software will the true SmartGrid vision be realized.

To understand what we mean by the SmartGrid, let’s take a look at some of its common traits:

  • It’s highly digital.
  • It’s self-healing.
  • It offers distributed participation and control.
  • It empowers the consumer.
  • It fully enables electricity markets.
  • It optimizes assets.
  • It’s evolvable and extensible.
  • It provides information security and privacy.
  • It features an enhanced system for reliability and resilience.

All of the above-described traits – which together comprise a holistic definition of the SmartGrid – share the requirement to embed intelligence in the hardware infrastructure (which is composed of advanced grid components such as AMI and SCADA). Just as important as the hardware for hosting the embedded software are the communications and networking technologies that enable real-time and near real-time communications among the various grid components.

The word intelligence has many definitions; however, the one cited in the 1994 Wall Street Journal article “Mainstream Science on Intelligence” (by Linda Gottfredson, and signed by 51 other professors) offers a reasonable application to the SmartGrid. It defines the word intelligence as the “ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience.”

While the ability of the grid to approximate the reasoning and learning capabilities of humans may be a far-off goal, the fact that the terms intelligence and smart appear so often these days raises the following question: How can the existing grid become the SmartGrid?

THE BRAINS OF THE OPERATION

It can’t be overemphasized that the SmartGrid derives its intelligence directly from analytics and algorithms delivered through embedded software applications. While seemingly simple in concept and well understood in other industries, this topic typically isn’t addressed in any depth in many SmartGrid R&D and pilot projects. Given the viral nature of the SmartGrid industry, every company with any related technology is calling that technology SmartGrid technology – all well and good, as long as you aren’t concerned about actually having intelligence in your SmartGrid project. It is this author’s opinion, however, that very few companies actually have the right stuff to claim the “smart” or “intelligence” part of the SmartGrid infrastructure – what we see as the missing link in the SmartGrid value chain.

A more realistic way to define intelligence in reference to the SmartGrid might read as follows:

The ability to provide longer-term planning and balancing of the grid; near real-time and real-time sensing, filtering, planning and balancing; and additional capabilities for self-healing, adaptive response and upgradeable logic to support continuous changes to grid operations in order to ensure cost reductions, reliability and resilience.

Software-based intelligence can be applied to all aspects or characteristics of the SmartGrid as discussed above. Figure 1 summarizes these roles.

BASIC BUILDING BLOCKS

Taking into consideration the very high priority that must be placed on established IT-industry concepts of security and interoperability as defined in the GridWise Architecture Council (GWAC) Framework for Interoperability, the SmartGrid should include as its basic building blocks the components outlined in Figure 2.

The real-world grid and supporting infrastructure will need to incorporate legacy systems as well as incremental changes consisting of multiple and disparate upgrade paths. The ideal path to realizing the SmartGrid vision must consider the installation of any SmartGrid project using the order shown in Figure 2 – that is, the device hardware would be installed in Block 1, communications and networking infrastructure added in Block 2, embedded intelligence added in Block 3, and middleware services and applications layered in Block 4. In a perfect world, the embedded intelligence software in Block 3 would be configured into the device block at the time of design or purchase. Intelligence types (in the form of services or applications) that could be preconfigured into the device layer with embedded software include, but aren’t limited to, the following (a minimal interface sketch follows the list):

  • Capture. Provides status and reports on operation, performance and usage of a given monitored device or environment.
  • Diagnose. Enables a device to self-optimize or allows a service person to monitor, troubleshoot, repair and maintain devices; upgrades or augments performance of a given device; and prevents problems with version control, technology obsolescence and device failure.
  • Control and automate. Coordinates the sequenced activity of several devices. This kind of intelligence can also cause devices to perform discrete on/off actions.
  • Profile and track behavior. Monitors variations in the location, culture, performance, usage and sales of a device.
  • Replenishment and commerce. Monitors consumption of a device and buying patterns of the end-user (allowing applications to initiate purchase orders or other transactions when replenishment is needed); provides location mapping and logistics; tracks and optimizes the service support system for devices.
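
The Python sketch below shows one way the intelligence types listed above could be expressed as a common interface that device-level agents implement; the class and method names are hypothetical illustrations rather than an actual product API.

```python
# A hedged sketch of how the intelligence types listed above could be
# expressed as a common software interface for embedded device agents.
# Class and method names are illustrative, not a real product API.

from abc import ABC, abstractmethod

class EmbeddedIntelligence(ABC):
    """Base contract for intelligence preconfigured into a grid device."""

    @abstractmethod
    def capture(self) -> dict:
        """Report status, performance and usage of the monitored device."""

    @abstractmethod
    def diagnose(self) -> list:
        """Self-optimize or surface issues for a service person to act on."""

    @abstractmethod
    def control(self, command: str) -> bool:
        """Coordinate sequenced activity or perform discrete on/off actions."""

    @abstractmethod
    def profile(self) -> dict:
        """Track variations in location, performance and usage behavior."""

    @abstractmethod
    def replenish(self) -> None:
        """Monitor consumption and trigger replenishment or commerce events."""
```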

EMBEDDED INTELLIGENCE AT WORK

Intelligence types will, of course, differ according to their application. For example, a distribution utility looking to optimize assets and real-time distribution operations may need sophisticated mathematical and artificial intelligence solutions with dynamic, nonlinear optimization models (to accommodate a high degree of uncertainty), while a homeowner wishing to participate in demand response may require only simple business rules. The embedded intelligence is, therefore, responsible for the management and mining of potentially billions, if not trillions, of device-generated data points for decision support, settlement, reliability and other financially significant transactions. This computational intelligence can sense, store and analyze any number of information patterns to support the SmartGrid vision. In every case, the software infrastructure portion of the SmartGrid building blocks must accommodate this full range of applications – from simple to complex – if the economics are to be viable.
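
To illustrate the simpler end of that spectrum, the Python sketch below implements a homeowner-style demand-response rule that curtails a few flexible loads when a price signal crosses a threshold. The price threshold, device names and example prices are assumptions made for the example, not a real utility protocol.

```python
# Sketch of the simpler, rule-based end of the spectrum described above:
# a homeowner demand-response rule that curtails load when a price signal
# crosses a threshold. Prices, threshold and device names are hypothetical.

PRICE_THRESHOLD = 0.25       # assumed $/kWh trigger for curtailment
CURTAILABLE_DEVICES = ["water_heater", "pool_pump", "ev_charger"]

def demand_response_rule(current_price, devices_on):
    """Return the devices to switch off for this interval."""
    if current_price <= PRICE_THRESHOLD:
        return []
    return [d for d in devices_on if d in CURTAILABLE_DEVICES]

# Example: a price spike during a peak interval
print(demand_response_rule(0.42, ["water_heater", "refrigerator", "ev_charger"]))
# -> ['water_heater', 'ev_charger']
```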

For example, the GridAgents software platform is being used in several large U.S. utility distribution automation infrastructure enhancements to embed intelligence in the entire distribution and extended infrastructure; this in turn facilitates multiple applications simultaneously, as depicted in Figure 3 (highlighting microgrids and compact networks). Included are the following example applications: renewables integration, large-scale virtual power plant applications, volt and VAR management, SmartMeter management and demand response integration, condition-based maintenance, asset management and optimization, fault location, isolation and restoration, look-ahead contingency analysis, distribution operation model analysis, relay protection coordination, “islanding” and microgrid control, and sense-and-respond applications.

Using this model of embedded intelligence, the universe of potential devices that could be directly included in the grid system includes building and home automation, distribution automation, substation automation, transmission systems, and energy market and operations systems – all part of what Harbor Research terms the Pervasive Internet. The Pervasive Internet concept assumes that devices are connected using TCP/IP protocols; however, it is not limited by whether a particular network is a mission-critical SCADA network or a home automation network (which obviously require very different security protocols). As the missing link, the embedded software intelligence we’ve been talking about can be present in any of these Pervasive Internet devices.

DELIVERY SYSTEMS

There are many ways to deliver the embedded software intelligence building block of the SmartGrid, and many vendors will be vying to participate in this rapidly expanding market. In a physical sense, the embedded intelligence can be delivered through various grid interfaces, including facility-level and distribution-system automation and energy management systems. The best way to realize the SmartGrid vision, however, will most likely come out of making as much use as possible of the existing infrastructure (since installing new infrastructure is extremely costly). The most promising areas for embedding intelligence include the various gateways and collector nodes, as well as devices on the grid itself (as shown in Figure 4). Examples of such devices include SmartMeter gateways, substation PCs, inverter gateways and so on. By taking advantage of the natural and distributed hierarchy of device networks, multiple SmartGrid service offerings can be delivered with a common infrastructure and common protocols.

Some of the most promising technologies for delivering the embedded intelligence layer of the SmartGrid infrastructure include the following:

  • The semantic Web is an extension of the current Web that permits machine-understandable data. It provides a common framework that allows data to be shared and reused across application and company boundaries. It integrates applications using URIs for naming and XML for syntax.
  • Service-oriented computing represents a cross-disciplinary approach to distributed software. Services are autonomous, platform-independent computational elements that can be described, published, discovered, orchestrated and programmed using standard protocols. These services can be combined into networks of collaborating applications within and across organizational boundaries.
  • Software agents are autonomous, problem-solving computational entities. They often interact and cooperate with other agents (both people and software) that may have conflicting aims. Known as multi-agent systems, such environments add the ability to coordinate complex business processes and adapt to changing conditions on the fly (a minimal sketch of this kind of agent interaction follows this list).
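
The minimal Python sketch below, referenced in the last bullet, shows the flavor of such agent interaction: two hypothetical agents exchange a message about a feeder overload and the downstream agent adapts without central control. The agent names, message and response are illustrative only.

```python
# A minimal, hypothetical sketch of the multi-agent idea described above:
# autonomous agents exchange messages and adapt to a changing condition
# (a feeder overload) without central control. Names and messages are
# illustrative only, not any specific agent platform's API.

class Agent:
    def __init__(self, name):
        self.name = name
        self.inbox = []

    def send(self, other, message):
        """Deliver a message to another agent's inbox."""
        other.inbox.append((self.name, message))

    def step(self):
        """Process queued messages and react to known conditions."""
        for sender, message in self.inbox:
            print(f"{self.name} received '{message}' from {sender}")
            if message == "feeder_overload":
                print(f"{self.name}: shedding 5% of local load")
        self.inbox.clear()

feeder_agent = Agent("feeder_monitor")
meter_agent = Agent("smart_meter_42")

# The feeder agent detects an overload and asks a downstream meter agent to react.
feeder_agent.send(meter_agent, "feeder_overload")
meter_agent.step()
```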

CONCLUSION

By incorporating the missing link in the SmartGrid infrastructure – the embedded-intelligence software building block – the SmartGrid vision can not only be achieved, but significant benefits to the utility and other stakeholders can be delivered much more efficiently and with incremental changes to the functions supporting the SmartGrid vision. Embedded intelligence provides a structured way to communicate with and control the large number of disparate energy-sensing, communications and control systems within the electric grid infrastructure. This includes the capability to deploy at low cost, scale and enable security as well as the ability to interoperate with the many types of devices, communication networks, data protocols and software systems used to manage complex energy networks.

A fully distributed intelligence approach based on embedded software offers potential advantages in lower cost, flexibility, security, scalability and acceptance among a wide group of industry stakeholders. By embedding functionality in software and distributing it across the electrical distribution network, the intelligence is pushed to the edge of the system network, where it can provide the most value. In this way, every node can be capable of hosting an intelligent software program. Although decentralized structures remain a controversial topic, this author believes they will be critical to the success of next-generation energy networks (the SmartGrid). The current electrical grid infrastructure is composed of a large number of existing devices that provide data which can serve as the starting point for embedded smart monitoring and decision support, including electric meters, distribution equipment, network protectors, distributed energy resources and energy management systems.

From a high-level design perspective, the embedded intelligence software architecture needs to support the following:

  • Decentralized management and intelligence;
  • Extensibility and reuse of software applications;
  • New components that can be added to or removed from the system with little central control or coordination;
  • Fault tolerance both at the system level and the subsystem level to detect and recover from system failures;
  • Support for carrying out analysis and control where the resources are available, not where the results are needed (at the edge versus the central grid);
  • Compatibility with different information technology devices and systems;
  • Open communication protocols that run on any network; and
  • Interoperability and integration with existing and evolving energy standards.

Adding the embedded-intelligence building block to existing SmartGrid infrastructure projects (including AMI and SCADA) and advanced networking technology projects will bring the SmartGrid vision to market faster and more economically while accommodating the incremental nature of SmartGrid deployments. The embedded intelligence software can provide some of the greatest benefits of the SmartGrid, including asset optimization, run-time intelligence and flexibility, the ability to solve multiple problems with a single infrastructure and greatly reduced integration costs through interoperability.

The Smart Grid: A Balanced View

Energy systems in both mature and developing economies around the world are undergoing fundamental changes. There are early signs of a physical transition from the current centralized energy generation infrastructure toward a distributed generation model, where active network management throughout the system creates a responsive and manageable alignment of supply and demand. At the same time, the desire for market liquidity and transparency is driving the world toward larger trading areas – from national to regional – and providing end-users with new incentives to consume energy more wisely.

CHALLENGES RELATED TO A LOW-CARBON ENERGY MIX

The structure of current energy systems is changing. As load and demand for energy continue to grow, many current-generation assets – particularly coal and nuclear systems – are aging and reaching the end of their useful lives. The increasing public awareness of sustainability is simultaneously driving the international community and national governments alike to accelerate the adoption of low-carbon generation methods. Complicating matters, public acceptance of nuclear energy varies widely from region to region.

Public expectations of what distributed renewable energy sources can deliver – for example, wind, photovoltaic (PV) or micro-combined heat and power (micro-CHP) – are increasing. But unlike conventional sources of generation, the output of many of these sources is not based on electricity load but on weather conditions or heat. From a system perspective, this raises new challenges for balancing supply and demand.

In addition, these new distributed generation technologies require system-dispatching tools to effectively control the low-voltage side of electrical grids. Moreover, they indirectly create a scarcity of “regulating energy” – the energy transmission operators need to maintain the real-time balance of their grids. This forces the industry to try to harness the power of conventional central generation technologies, such as nuclear power, in new ways.
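
As a rough illustration of why weather-driven output tightens the regulating-energy budget, reserve needs are often approximated as a root-sum-of-squares combination of independent uncertainties (load forecast error, outage risk, renewable forecast error). The sketch below uses that common approximation with invented numbers; it is not any operator’s actual methodology.

    # Root-sum-of-squares reserve estimate; all figures are hypothetical.
    import math


    def regulating_reserve_mw(load_error_mw, outage_risk_mw, renewable_error_mw):
        """Combine independent uncertainties into one reserve requirement (MW)."""
        return math.sqrt(load_error_mw**2 + outage_risk_mw**2 + renewable_error_mw**2)


    print(round(regulating_reserve_mw(300, 500, 0)))     # ~583 MW without wind
    print(round(regulating_reserve_mw(300, 500, 400)))   # ~707 MW once wind error is added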

A European Union-funded consortium named Fenix is identifying innovative network and market services that distributed energy resources can potentially deliver, once the grid becomes “smart” enough to integrate all energy resources.

In Figure 1, the Status Quo Future represents how system development would play out under the traditional system operation paradigm characterized by today’s centralized control and passive distribution networks. The alternative, Fenix Future, represents the system capacities with distributed energy resources (DER) and demand-side generation fully integrated into system operation, under a decentralized operating paradigm.

CHALLENGES RELATED TO NETWORK OPERATIONAL SECURITY

The regulatory push toward larger trading areas is increasing the number of market participants. This trend is in turn driving the need for increased network dispatch and control capabilities. Simultaneously, grid operators are expanding their responsibilities across new and complex geographic regions. Combine these factors with an aging workforce (particularly when trying to staff strategic processes such as dispatching), and it’s easy to see why utilities are becoming increasingly dependent on information technology to automate processes that were once performed manually.

Moreover, the stochastic nature of energy sources significantly increases uncertainty regarding supply. Researchers are trying to improve the accuracy of the information captured in substations, but this requires new online dispatching stability tools. Additionally, as grid expansion remains politically controversial, current efforts are mostly focused on optimizing energy flow in existing physical assets, and on trying to feed asset data into systems calculating operational limits in real time.

Last but not least, these developments extend generation dispatch and congestion management into low-voltage distribution grids. Although these grids traditionally carried energy one way – from generation through transmission to end-users – the increasing penetration of distributed resources creates a new need to coordinate the dispatch of those resources locally and to minimize transportation costs.

CHALLENGES RELATED TO PARTICIPATING DEMAND

Recent events have shown that decentralized energy markets are vulnerable to price volatility. This poses a potentially significant economic threat for some nations: large industrial companies may quit deregulated markets because they lack visibility into long-term energy price trends.

One potential solution is to improve market liquidity in the shorter term by providing end-users with incentives to conserve energy when demand exceeds supply. The growing public awareness of energy efficiency is already leading end-users to be much more receptive to using sustainable energy; many utilities are adding economic incentives to further motivate end-users.

These trends are expected to create radical shifts in transmission and distribution (T&D) investment activities. After all, traditional centralized system designs, investments and operations are based on the premise that demand is passive and uncontrollable, and that it makes no active contribution to system operations.

However, the extensive rollout of intelligent metering capabilities has the potential to reverse this, and to enable demand to respond to market signals, so that end-users can interact with system operators in real or near real time. The widening availability of smart metering thus has the potential to bring with it unprecedented levels of demand response that will completely change the way power systems are planned, developed and operated.
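
As a minimal sketch of what “demand responding to market signals” could look like behind a smart meter, the following fragment sheds deferrable loads when the near-real-time price crosses an end-user threshold. The loads, threshold and price are hypothetical.

    # Illustrative demand-response logic; loads and prices are hypothetical.
    def respond_to_price(price_per_kwh, flexible_loads, threshold=0.30):
        """Return the deferrable loads to curtail when the price exceeds the threshold."""
        if price_per_kwh <= threshold:
            return []
        # Shed the most power-hungry deferrable loads first.
        return sorted(flexible_loads, key=lambda l: l["kw"], reverse=True)


    loads = [{"name": "water_heater", "kw": 4.5},
             {"name": "ev_charger", "kw": 7.2},
             {"name": "pool_pump", "kw": 1.1}]
    print([l["name"] for l in respond_to_price(0.42, loads)])
    # ['ev_charger', 'water_heater', 'pool_pump']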

CHALLENGES RELATED TO REGULATION

Parallel with these changes to the physical system structure, the market and regulatory frameworks supporting energy systems are likewise evolving. Numerous energy directives have established the foundation for a decentralized electricity supply industry that spans formerly disparate markets. This evolution is changing the structure of the industry from vertically integrated, state-owned monopolies into an environment in which unbundled generation, transmission, distribution and retail organizations interact freely via competitive, accessible marketplaces to procure and sell system services and contracts for energy on an ad hoc basis.

Competition and increased market access seem to be working at the transmission level in markets where there are just a handful of large generators. However, this approach has yet to be proven at the distribution level, where it could involve thousands – and potentially millions – of participants offering energy and system services in a truly competitive marketplace.

MEETING THE CHALLENGES

As a result, despite all the promise of distributed generation, the system will become increasingly unstable as it decentralizes unless technical, market and regulatory frameworks develop in parallel over the next three to five years.

System management costs are increasing, and threats to system security are a growing concern as installed distributed generating capacity in some areas exceeds local peak demand. The amount of “regulating energy” that must be provisioned rises as stress on the system increases; meanwhile, governments continue to push for greater distributed-resource penetration and to launch new energy-efficiency initiatives.

At the same time, most of the large T&D utilities intend to launch new smart grid prototypes that, once stabilized, will be scalable to millions of connection points. The majority of these rollouts are expected to occur between 2010 and 2012.

From a functionality standpoint, the majority of these associated challenges are related to IT system scalability. The process will require applying existing algorithms and processes to generation activities, but in an expanded and more distributed manner.

The following new functions will be required to build a smart grid infrastructure that enables all of this:

New generation dispatch. This will enable utilities to expand their portfolios of generation-dispatching tools to schedule generation assets across transmission and distribution. Utilities could thus better manage the growing number of parameters affecting each dispatch decision, including fuel options, maintenance strategies, each generation unit’s physical capability, weather, network constraints, load models, emissions (modeling, rights, trading) and market dynamics (indices, liquidity, volatility).
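
To give a feel for how a few of these parameters interact, the fragment below performs a deliberately simplified merit-order dispatch that weighs fuel cost and an emissions price against each unit’s physical capability. All plant data, the CO2 price and the demand figure are invented for illustration.

    # Simplified merit-order dispatch; all plant data are hypothetical.
    def dispatch(units, demand_mw, co2_price=30.0):
        """Commit the cheapest effective cost (fuel + emissions) first, up to capacity."""
        def effective_cost(u):
            return u["fuel_cost"] + co2_price * u["tons_co2_per_mwh"]

        schedule, remaining = {}, demand_mw
        for u in sorted(units, key=effective_cost):
            take = min(u["capacity_mw"], remaining)
            if take > 0:
                schedule[u["name"]] = take
                remaining -= take
        return schedule, remaining  # remaining > 0 means unserved demand


    units = [
        {"name": "wind_farm", "capacity_mw": 150, "fuel_cost": 0.0,  "tons_co2_per_mwh": 0.0},
        {"name": "ccgt",      "capacity_mw": 400, "fuel_cost": 45.0, "tons_co2_per_mwh": 0.35},
        {"name": "coal",      "capacity_mw": 600, "fuel_cost": 30.0, "tons_co2_per_mwh": 0.95},
    ]
    print(dispatch(units, demand_mw=700))
    # ({'wind_farm': 150, 'ccgt': 400, 'coal': 150}, 0)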

Renewable and demand-side dispatching systems. By expanding current energy management system (EMS) capabilities and architectures, utilities should be able to scale to include millions of active producers and consumers. Resources will be dispatched in real time by energy service companies, promoting the most eco-friendly portfolio dispatch based on contractual arrangements between the energy service providers and these distributed producers and consumers.
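
One way such a system can stay tractable at that scale – an illustrative assumption rather than a design stated here – is for each energy service company to aggregate its contracted producers and consumers into a few dispatchable blocks per region. A minimal sketch:

    # Illustrative aggregation of many small resources; figures are hypothetical.
    from collections import defaultdict


    def aggregate(resources):
        """Roll small contracted resources up into per-region, per-kind blocks
        that an EMS can treat as a handful of dispatchable 'virtual' units."""
        blocks = defaultdict(float)
        for r in resources:
            blocks[(r["region"], r["kind"])] += r["flexible_kw"]
        return dict(blocks)


    resources = [
        {"region": "north", "kind": "pv",        "flexible_kw": 3.0},
        {"region": "north", "kind": "micro_chp", "flexible_kw": 1.2},
        {"region": "south", "kind": "pv",        "flexible_kw": 4.5},
    ]
    print(aggregate(resources))
    # {('north', 'pv'): 3.0, ('north', 'micro_chp'): 1.2, ('south', 'pv'): 4.5}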

Integrated online asset management systems. New technology tools that help transmission grid operators assess the condition of their assets in real time will not only maximize asset usage, but will also lead to better leveraging of utilities’ field forces. New standards such as IEC 61850 offer opportunities to manage such models more centrally and more consistently.
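
The kind of real-time condition assessment such tools perform can be pictured as a weighted health index over normalized monitoring values. The weights, thresholds and readings below are invented and are not drawn from IEC 61850 or any vendor’s product.

    # Purely illustrative health-index calculation; weights and thresholds are invented.
    def transformer_health_index(readings):
        """Blend normalized condition indicators (0 = good, 1 = poor) into one score."""
        weights = {"oil_temp": 0.3, "dissolved_gas": 0.5, "load_factor": 0.2}
        return sum(weights[k] * readings[k] for k in weights)


    def maintenance_priority(index):
        return "inspect now" if index > 0.6 else "routine"


    readings = {"oil_temp": 0.4, "dissolved_gas": 0.8, "load_factor": 0.5}
    score = transformer_health_index(readings)
    print(round(score, 2), maintenance_priority(score))  # 0.62 inspect now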

Online stability and defense plans. The increasing penetration of renewable generation into grids, combined with deregulation, increases the need for flow control on interconnections between transmission system operators (TSOs). Additionally, the industry requires improved “situation awareness” tools in the control centers of utilities operating in larger geographical markets. Although conventional steady-state transmission security indicators have improved, utilities still need better early-warning applications and adaptable defense-plan systems.
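
At its simplest, an early-warning application of this kind compares real-time interconnection flows and system frequency against alarm bands. The limits and snapshot values below are hypothetical, not operator settings.

    # Illustrative early-warning check; limits are hypothetical.
    def early_warnings(snapshot, line_limit_mw=2000, freq_band_hz=0.2, nominal_hz=50.0):
        """Flag conditions a control-room situation-awareness display might raise."""
        alerts = []
        for line, flow in snapshot["interconnection_flows_mw"].items():
            if abs(flow) > 0.9 * line_limit_mw:
                alerts.append(f"{line}: flow {flow} MW within 10% of limit")
        if abs(snapshot["frequency_hz"] - nominal_hz) > freq_band_hz:
            alerts.append(f"frequency deviation: {snapshot['frequency_hz']} Hz")
        return alerts


    snapshot = {"interconnection_flows_mw": {"TSO-A to TSO-B": 1850, "TSO-A to TSO-C": 900},
                "frequency_hz": 49.75}
    print(early_warnings(snapshot))
    # ['TSO-A to TSO-B: flow 1850 MW within 10% of limit', 'frequency deviation: 49.75 Hz']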

MOVING TOWARDS A DISTRIBUTED FUTURE

As concerns about energy supply have increased worldwide, the focus on curbing demand has intensified. Regulatory bodies around the world are thus actively investigating smart meter options. But despite the benefits that smart meters promise, they also raise new challenges on the IT infrastructure side. Before each end-user is able to flexibly interact with the market and the distribution network operator, massive infrastructure re-engineering will be required.

Nonetheless, energy systems throughout the world are already evolving from a centralized to a decentralized model. But to complete this transition successfully, utilities must implement active network management throughout their systems to enable a responsive and manageable alignment of supply and demand. By accomplishing this, energy producers and consumers alike can better match supply and demand and drive the world toward sustainable energy conservation.