Successful Smart Grid Architecture

The smart grid is progressing well on several fronts. Groups such as the GridWise Alliance, events such as GridWeek, and national policy milestones such as the American Recovery and Reinvestment Act in the U.S. have all brought positive attention to this opportunity. The boom in distributed renewable energy and its demand for a bidirectional grid is driving the need forward, as is the push to improve consumer control and awareness, giving customers the ability to engage in real-time energy conservation.

On the technology front, advances in wireless and other data communications make wide-area sensor networks more feasible. Distributed computation is certainly more powerful – just consider your iPod! Even architectural issues such as interoperability are now being addressed in dedicated forums such as Grid-Interop. It seems that the recipe for a smart grid is coming together in a way that would make many of its early visionaries proud. But to avoid making a gooey mess in the oven, an overall architecture that carefully considers seven key ingredients for success must first exist.

Sources of Data

Utilities have eons of operational data: both real-time and archival, both static (such as nodal diagrams within distribution management systems) and dynamic (such as switching orders). There is a wealth of information generated by field crews and from root-cause analyses of past system failures. Advanced metering infrastructure (AMI) implementations become a fine-grained distribution sensor network feeding communication aggregation systems such as Silver Spring Networks’ UtilityIQ or Trilliant’s Secure Mesh Network.

These data sources need to be architected for availability, so they can enhance, support and provide context for real-time data coming in from new intelligent electronic devices (IEDs) and other smart grid devices. In an era of renewable energy sources, grid-connection controllers become yet another data source. With renewables, micro-scale weather forecasting such as IBM Research’s Deep Thunder can provide valuable context for grid operation.

Data Models

Once data is obtained, preserving its value in a standard format suggests thinking in terms of an extensible markup language (XML)-oriented database. Modern implementations of these databases have improved performance characteristics, and the International Electrotechnical Commission (IEC) common information model/generic interface definition (CIM/GID), though oriented more to assets than operations, is a front-running candidate for consideration.

Newer entries, such as the device language message specification/companion specification for energy metering (DLMS-COSEM) for AMI, are also coming into practice. Sometimes more important than the technical implementation of the data, however, is the model that is employed. A well-designed data model not only makes data exchange and legacy program adjustments easier, but also helps security and performance requirements be applied consistently. The existence of data models is often a good indicator of an intact governance process, for it facilitates use of the data by multiple applications.
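
To make this concrete, here is a minimal sketch of serializing a meter reading into an XML document with CIM-flavored naming; the element names are illustrative assumptions, not drawn from the actual IEC schema:

```python
# Minimal sketch: emit a meter reading as XML. Element names such as
# "MeterReading" and "mRID" are illustrative, not the real CIM schema.
import xml.etree.ElementTree as ET

def reading_to_xml(meter_id, timestamp, kwh):
    root = ET.Element("MeterReading")
    ET.SubElement(root, "Meter", attrib={"mRID": meter_id})
    interval = ET.SubElement(root, "IntervalReading")
    ET.SubElement(interval, "timeStamp").text = timestamp
    ET.SubElement(interval, "value").text = str(kwh)
    return ET.tostring(root, encoding="unicode")

print(reading_to_xml("MTR-001", "2008-06-01T00:00:00Z", 1.25))
```

Because every producer and consumer agrees on one schema, a new application can parse these records without bilateral format negotiations – the governance benefit described above.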

Communications

Customer workshops and blueprinting sessions have shown that one of the most common issues needing to be addressed is the design of the wide-area communication system. The data communications architecture affects data-rate performance, the cost of distributed intelligence and the identification of security vulnerabilities.

There is no single communications technology that is suitable for all utilities, or even for all operational areas across any individual utility. Rural areas may be served by broadband over powerline (BPL), while urban areas benefit from multi-protocol label switching (MPLS) and purpose-designed mesh networks, enhanced by their proximity to fiber.

In the future, there could be entirely new choices in communications. So, the smart grid architect needs to focus on security, standardized interfaces to accept new technology, enablement of remote configuration of devices to minimize any touching of smart grid devices once installed, and future-proofing the protocols.

The architecture should also be traceable to the business case. This needs to include probable use cases that may not be in the public utility commission (PUC) filing – such as AMI now, but smart grid later. Few utilities will be pleased with the idea of a communication network rebuild within five years of deploying an AMI-only network.

Communications architecture must also consider power outages, so battery backup, solar recharging, or other equipment may be required. Even arcane details such as “Will the antenna on a wireless device be the first thing to blow off in a hurricane?” need to be considered.

Security

Certainly, the smart grid’s purpose is to enhance network reliability, not lower its security. But with the advent of the North American Electric Reliability Corporation’s Critical Infrastructure Protection standards (NERC-CIP), security has risen to become a prime consideration, usually addressed in phase one of the smart grid architecture.

Unlike the data center, field-deployed security presents many new situations and challenges. There is security at the substation and control center – for example, who can access which networks, and when. At the other end, security of the meter data in a proprietary AMI system needs to be addressed so that only authorized applications and personnel can access the data.

Service-oriented architecture (SOA) appliances are network devices that enable integration and help provide security at the Web-services message level. These typically include an integration device, which streamlines SOA infrastructures; an XML accelerator, which offloads XML processing; and an XML security gateway, which helps provide message-level Web-services security. A security gateway helps ensure that only authorized applications are allowed to access the data, whether it originates from an IP meter or an IED. SOA appliance security features complement the SOA security management capabilities of software.
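
The gateway’s authorization role can be illustrated with a deliberately simple sketch; the application names and the single-operation check below are hypothetical, not the behavior of any particular appliance:

```python
# Conceptual sketch of a message-level authorization check: only
# registered applications may read meter data. Names are invented.
AUTHORIZED_READERS = {"mdms", "billing", "outage-analytics"}

def authorize(app_id, operation):
    """Allow meter-data reads only for applications on the allow list."""
    return operation == "read_meter_data" and app_id in AUTHORIZED_READERS

assert authorize("billing", "read_meter_data")
assert not authorize("unknown-app", "read_meter_data")
```

A real gateway would, of course, authenticate the caller cryptographically and validate the XML payload as well; the point is that the policy decision happens at the message level, before data is released.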

Proper architectures could address dynamic, trusted virtual security domains, combined not only with intrusion prevention systems but also with anomaly detection systems. If hackers can introduce viruses through data (such as malformed video images that exploit faults in media players), then similar concerns should be under discussion for smart grid data. Is tampering with 300 megawatts (MW) of demand response much different from a cyber attack on a 300 MW generator?

Analytics

A smart grid cynic might ask, “Who is going to look at all of this new data?” That is where analytics comes in, supporting the processing, interpretation and correlation of the flood of new grid observations. Part of the analytics would be performed by existing applications – this is where data models and integration play a key role. Another part lies with new applications, and with the ability of engineers to use a workbench to create their own customized analytics dashboards in a self-service model.

Many utilities have power system engineers in a back office using spreadsheets; part of the smart grid concept is making all data available to that community so it can use modern tools to analyze and predict grid operation. Analytics may need a dedicated data bus, separate from an enterprise service bus (ESB) or enterprise SOA bus, to meet the timeliness and quality-of-service requirements of operational analytics.

A two-tier or three-tier (if one considers the substations) bus is an architectural approach that segregates data by speed while maintaining the interconnections that support a holistic view of the operation. Connections to standard industry tools such as ABB’s NEPLAN® or Siemens Power Technologies International’s PSS®E, or general tools such as MATLAB, should be considered at design time, rather than as an additional expense committed to after smart grid commissioning.
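
As a toy illustration of tiering, routing can key off how quickly a message must arrive; the one-second threshold and the bus names here are hypothetical:

```python
# Segregate traffic by delivery deadline: time-critical operational
# messages ride a dedicated bus; the rest use the enterprise bus.
def route(deadline_ms):
    """Pick a bus tier based on the message's delivery deadline."""
    return "operational-bus" if deadline_ms < 1000 else "enterprise-bus"

print(route(100))    # feeder voltage sample -> operational-bus
print(route(60000))  # work-order update     -> enterprise-bus
```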

Integration

Once data is sensed, securely communicated, modeled and analyzed, the results need to be applied for business optimization. This means new smart grid data gets integrated with existing applications, and metadata locked in legacy systems is made available to provide meaningful context.

This is typically accomplished by enabling systems as services per the classic SOA model. However, issues of common data formats, data integrity and name services must be considered. Data integrity includes verification and cross-correlation of information for validity, and designation of authoritative sources and specific personnel who own the data.

Name services addresses the common issue of an asset – whether transformer or truck – having multiple names in multiple systems. An example might be a substation that has a location name, such as Walden; a geographic information system (GIS) identifier such as latitude and longitude; a map name such as nearest cross streets; a capital asset number in the financial system; a logical name in the distribution system topology; an abbreviated logical name to fit in the distribution management system graphical user interface (DMS GUI); and an IP address for the main network router in the substation.

Different applications may know new data by association with one of those names, and that name may need translation to be used in a query against another application. While rewriting the applications to a common model may seem appealing, it may very well send a CIO into shock. The smart grid should help propagate intelligence throughout the utility, but this doesn’t necessarily mean replacing everything; it does mean “information-enabling” everything.
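
A name service for the Walden example can be pictured as a simple registry keyed by a canonical asset identifier; the data below is invented sample data echoing the example in the text:

```python
# One asset, many system-specific names. translate() returns the name
# a given system knows the asset by.
NAME_REGISTRY = {
    "asset-0042": {
        "location":  "Walden",
        "gis":       (42.44, -71.27),   # latitude, longitude
        "finance":   "CAP-88-1234",     # capital asset number
        "dms_gui":   "WLDN",            # abbreviated logical name
        "router_ip": "10.12.34.1",
    }
}

def translate(asset_id, system):
    """Look up the name this system uses for the asset."""
    return NAME_REGISTRY[asset_id][system]

print(translate("asset-0042", "dms_gui"))  # -> WLDN
```

With such a registry in place, a query result from one application can be re-keyed for another without rewriting either system – the “information-enable, don’t replace” approach described above.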

Interoperability is essential at both the service level and the application level. Some vendors focus more on the service level, but consider, for example, making a cell phone call from the U.S. to France – your voice data may well be code division multiple access (CDMA) in the U.S., travel by microwave and fiber along its path, and emerge in France in a global system for mobile communications (GSM) environment, yet your speech, the “application-level data,” is retained transparently (though technology does not yet address accents!).

Hardware

The world of computerized solutions is not about software alone. For instance, AMI storage consolidation addresses the concern that the volume of data coming into the utility will increase exponentially. As more meter data is read on demand, data analytics will be employed to properly understand it all, requiring a sound hardware architecture to manage, back up and feed the data into the analytics engines. In particular, storage is needed in the head-end systems and the meter data management system (MDMS).

Head-end systems pull data from the meters to provide management functionality, while the MDMS collects data from the head-end systems and validates it; the data can then be used by billing and other business applications. Data in both the head-end systems and the master copy of the MDMS is replicated into multiple copies for full backup and disaster recovery. For the MDMS, the master database that stores all the aggregated data is replicated for other business applications, such as a customer portal or data analytics, so that the master copy of the data is not tampered with.

Since the smart grid essentially performs in real time, and the electricity business is non-stop, one must think of hardware and software solutions as needing to be fail-safe, with automated redundancy. The AMI data especially needs to be reliable. The key factors then become: operating system stability; hardware true memory access speed and range; server and power supply reliability; file system redundancy, such as a journaled file system (JFS); and techniques such as FlashCopy to provide a point-in-time copy of a logical drive.

FlashCopy can be useful in speeding up database hot backups and restores. VolumeCopy can extend the replication functionality by providing the ability to copy the contents of one volume to another. Enhanced remote mirroring (Global Mirror, Global Copy and Metro Mirror) can provide the ability to mirror data from one storage system to another over extended distances.

Conclusion

Those are seven key ingredients for designing or evaluating a recipe for success with regard to implementing the smart grid at your utility. Addressing these dimensions will help achieve a solid foundation for a comprehensive smart grid computing system architecture.

Improving Call Center Performance Through Process Enhancements

The great American philosopher Yogi Berra once said, “If you don’t know where you’re going, chances are you will end up somewhere else.” Yet many utilities possess only a limited understanding of their call center operations, which can prevent them from reaching the ultimate goal: improving performance and customer satisfaction, and reducing costs.

Utilities face three key barriers in seeking to improve their call center operations:

  • Call centers routinely collect data on “average” performance, such as average handle time, average speed of answer and average hold time, without delving into the details behind the averages. The risk is that instances of poor and exemplary performance alike are not revealed by such averages.
  • Call centers typically perform quality reviews on less than one-half percent of calls received. Poor performance by individual employees – and perhaps the overall call center – can thus be masked by insufficient data.
  • Call centers often fail to periodically review their processes. When they do, they frequently lack statistically valid data to perform the reviews. Without detailed knowledge of call center processes, utilities are unlikely to recognize and correct problems.

There are, however, proven methods for overcoming these problems. We advocate a three-step process designed to achieve more effective and efficient call center operations: collect sufficient data; analyze the data; and review and monitor progress on an ongoing basis.

STEP 1: COLLECT SUFFICIENT DATA

The ideal sample size is 1,000 randomly selected calls. A sample of this size typically provides results that are accurate to within +/- 3 percent, at better than a 90 percent level of confidence – the levels of accuracy and confidence that businesses typically require before they will act.
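
As a quick sanity check on those figures, the standard margin-of-error formula for a proportion – assuming a simple random sample and the worst-case proportion p = 0.5 – bears the claim out:

```python
# Margin of error for a sampled proportion: z * sqrt(p*(1-p)/n).
import math

n, p = 1000, 0.5
z90, z95 = 1.645, 1.960                 # normal critical values
se = math.sqrt(p * (1 - p) / n)         # standard error
print(f"90% margin of error: +/-{z90 * se:.1%}")  # ~ +/-2.6%
print(f"95% margin of error: +/-{z95 * se:.1%}")  # ~ +/-3.1%
```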

The types of data that should be collected from each call include:

  • Call type, such as new service, emergency, bill payment or high bill, and subcall type.
  • Number of systems and/or screens used – for example, how many screens did it take to complete a new service request?
  • Actions taken during the call, such as greeting the customer, gathering customer-identity data, understanding the problem or delivering the solution.
  • Actions taken after the call – for example, entering data into computer systems, or sending notes or emails to the customer or contact center colleagues.

Having the right tool can greatly facilitate data collection. For example, the call center data collection tool pictured in Figure 1 captures this information quickly and easily, using three push-button timers that enable accurate data collection.

When a call is being reviewed, the analyst pushes the green buttons to indicate which of 12 different steps within a call sequence is occurring. The steps include greeting, hold and transfer, among others. Similarly, the yellow buttons enable the analyst to collect the time elapsed for each of 15 different screens that may be used and up to 15 actions taken after the call is finished.

This analysis resembles a traditional “time and motion” study, because in many ways it is just that. But the difference here is that we can use new automated tools, such as the voice and screen capture tools and data collector shown, as well as new approaches, to gain new insights.

The data capture tool also enables the analyst to collect up to 100 additional pieces of data, including the “secondary and tertiary call type.” (As an example, a credit call may be the primary call type, a budget billing the secondary call type and a customer in arrears the tertiary call type.) The tool also lets the analyst use drop-down boxes to quickly collect data on transfers, hold time, mistakes made and opportunities noted.
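
One way to picture the resulting record for a single reviewed call is as a structure like the following; the field names are hypothetical, chosen to mirror the data the tool captures:

```python
# Hypothetical record for one reviewed call: call types, per-step and
# per-screen timings, after-call work, transfers and noted mistakes.
from dataclasses import dataclass, field

@dataclass
class CallRecord:
    primary_type: str                                   # e.g. "credit"
    secondary_type: str = ""                            # e.g. "budget billing"
    tertiary_type: str = ""                             # e.g. "in arrears"
    step_seconds: dict = field(default_factory=dict)    # greeting, hold, ...
    screen_seconds: dict = field(default_factory=dict)  # per screen used
    after_call_seconds: float = 0.0
    transfers: int = 0
    mistakes: list = field(default_factory=list)

call = CallRecord("credit", "budget billing", "in arrears",
                  step_seconds={"greeting": 8, "hold": 45})
```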

Moreover, this process can be executed quickly. In our experience, it takes four trained employees five days to gather data on 1,000 calls.

STEP 2: ANALYZE THE DATA

Having collected this large amount of data, how do you use the information to reduce costs and improve customer and employee satisfaction? Again, having the right tool enables analysts to easily generate statistics and graphs from the collected data. Figure 2 shows the type of report that can be generated based on the recommended data collection.

The analytic value of Figure 2 is that it moves past the “averages” on which most call center reports focus and reveals the details behind them. Figure 2 shows the 1,000 calls by call-handle time. Note that while the “average” call took 4.65 minutes, many calls took a minute or less, and a disturbingly large number took well over 11 minutes.
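
A made-up sample, chosen so its mean matches the study’s 4.65 minutes, shows how the average hides the tails:

```python
# Same mean, very different stories: the median and the longest call
# say more than the average does. Durations are invented, in minutes.
import statistics

handle_times = [0.8, 1.0, 2.5, 3.0, 3.5, 4.0, 4.5, 5.0, 6.0, 16.2]
print(f"mean:    {statistics.mean(handle_times):.2f} min")   # 4.65
print(f"median:  {statistics.median(handle_times):.2f} min") # 3.75
print(f"longest: {max(handle_times):.1f} min")               # 16.2
```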

Using the captured data, utilities can then analyze what causes problem calls. In this example, we analyzed 5 percent of the calls (49 in total) and identified several problems:

  • Customer service representatives (CSRs) were taking calls for which they were inadequately trained, causing high hold times and inordinately large screen usage numbers.
  • IT systems were slow on one particular call type.
  • There were no procedures in place to intercede when an employee took more than a specified number of minutes to complete a call.
  • Procedures were laborious, due to Public Utilities Commission (PUC) regulations or – more likely – internally mandated rules.

This kind of analysis, which we describe as a “longest call” review, typically helps identify problems that can be resolved at minimal cost. In fact, our experience in utility and other call centers confirms that this kind of analysis often allows companies to cut call-handle time by 10 to 15 seconds.

It’s important to understand what 10 to 15 fewer seconds of call-handle time means to the call center – and, most importantly, to customers. For a typical utility call center with 200 or more CSRs, the shorter handle time can result in a 5 percent cost reduction, or roughly $1 million annually. Companies that can comprehend the economic value and customer satisfaction associated with reducing average handle time, even by one second, are likely to be better focused on solving problems and prioritizing solutions.
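
The arithmetic behind that claim is straightforward; every figure below is an illustrative assumption, not data from a specific utility:

```python
# Rough model of the "10-15 seconds ~ 5 percent ~ $1M" claim.
avg_handle_s = 4.65 * 60       # average call from the study, in seconds
saving_s = 14                  # seconds shaved off each call
annual_cost = 20_000_000       # assumed loaded cost of a 200-CSR center ($)

fraction_saved = saving_s / avg_handle_s
print(f"handle-time reduction: {fraction_saved:.1%}")           # ~5.0%
print(f"annual savings: ${fraction_saved * annual_cost:,.0f}")  # ~$1M
```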

Surprisingly, the longest 5 percent of calls typically account for nearly 15 percent of total call center handle time – a mother lode of opportunity for improvement.

Another important benefit that can result from this detailed examination of call center sampling data involves looking at hold time. A sample hold time analysis graph is pictured in Figure 3.

Excessive hold times tend to be caused by bad call routing, lengthy notes on file, unclear processes and customer issues. Each of these problems has a solution, usually low-cost and easily implemented. Most importantly, the value of each action is quantified and understood, based on the data collected.

Other useful questions to ask include:

  • What are the details behind the high average after-call work (ACW) time? How does this affect your call center costs?
  • How would it help budget discussions with IT if you knew the impact of such things as inefficient call routing, poor integrated voice response (IVR) scripts or low screen pop percentages?
  • What analyses can you perform to understand how you should improve training courses and focus your quality review efforts?

The output of these analyses can prove invaluable in budget discussions and in prioritizing improvement efforts, and is also useful in communicating proposals to senior management, CSRs, quality review staff, customers and external organizations. The data can also be the starting point for a Six Sigma review.

Utilities can frequently achieve a 20 percent cost reduction by collecting the right data and analyzing it at a sufficiently granular level. Following is a breakdown of the potential savings:

  • Three percent savings can be achieved by reducing longest calls by 10 seconds.
  • Five percent savings can be gained by reducing ACW by 15 seconds.
  • Five percent savings can be realized by improving call routing – usually by aligning CSR skills required with CSR skills available – by 15 seconds.
  • Three percent savings can be achieved by improving two frequently executed processes by 10 seconds each.
  • Three percent savings can be realized by improving IVR and screen pop frequency and quality of information by 10 seconds.
  • One percent savings can be gained by improving IT response time on selected screens by three seconds.

STEP 3: REVIEW AND MONITOR PROGRESS ON AN ONGOING BASIS

Although this white paper focuses on the data collection and analysis procedures used, the key difference in this approach is the optimization strategy behind it.

The approach outlined above starts with utilities recognizing that improvement opportunities exist, understanding the value of detailed data in identifying those opportunities and enabling the data collected to be easily presented and reviewed. Taken as a whole, this process can produce prioritized, high-ROI recommendations.

To gain the full value of this approach, utilities should do the following:

  • Engage the quality review team, trainers, supervisors and CSRs in the review process;
  • Expand the focus of the quality review team from looking only at individual CSRs’ performance to looking at organizational processes as well;
  • Have trainers embed the new lessons learned in training classes;
  • Encourage supervisors to reinforce lessons learned in team meetings and one-on-one coaching; and
  • Require CSRs to identify issues that can be studied in future reviews and follow the lessons learned.

Leading organizations perform these reviews periodically, building on their understanding of their call centers’ current status and using that understanding to formulate actions for future improvement.

Once the first study is complete, utilities also have a benchmark to which results from future studies can be compared. The value of having these prior analyses should be obvious in each succeeding review, as hold times decline, average handle times decrease, calls are routed more frequently to the properly skilled person and IT investments made based on ROI analyses begin to yield benefits.

Beyond these savings, customer and employee satisfaction should increase. When a call is routed to the CSR with the requisite skills needed to handle it, both the customer and the CSR are happier. Customer and CSR frustration will also be reduced when there are clear procedures to escalate calls, and IT systems fail less frequently.

IMPLEMENTING A CALL CENTER REVIEW

Although there are some commonalities in improving utilities’ call center performance, there are always unique findings specific to a given call center that help define the nature and volume of opportunities, as well as help chart the path to improvement.

By realizing that benefit opportunities exist and applying the process steps described above, and by using appropriate tools to reduce costs and improve customer and CSR satisfaction, utilities have the opportunity to transform the effectiveness of their call centers.

Perhaps we should end with another quote from Yogi: “The future ain’t what it used to be.” In fact, for utilities that implement these steps, the future will likely be much better.

Smart Metering Options for Electric and Gas Utilities

Should utilities replace current consumption meters with “smart metering” systems that provide more information to both utilities and customers? Increasingly, the answer is yes. Today, utilities and customers are beginning to see the advantages of metering systems that provide:

  • Two-way communication between the utility and the meter; and
  • Measurement that goes beyond a single consolidated quarterly or monthly consumption total to include time-of-use and interval measurement.

For many, “smart metering” is synonymous with an advanced metering infrastructure (AMI) that collects, processes and distributes metered data effectively across the entire utility as well as to the customer base (Figure 1).

SMART METERING REVOLUTIONIZES UTILITY REVENUE AND SERVICE POTENTIAL

When strategically evaluated and deployed, smart metering can deliver a wide variety of benefits to utilities.

Financial Benefits

  • Significantly speeds cash flow and associated earnings on revenue. Smart metering permits utilities to read meters and send the data directly to the billing application. Bills go out immediately, cutting days off the meter-to-cash cycle.
  • Improves return on investment via faster processing of final bills. Customers can request disconnects as the moving van pulls away. Smart metering polls the meter and gives the customer the amount of the final bill. Online or credit card payments effectively transform final bill collection cycles from a matter of weeks to a matter of seconds.
  • Reduces bad debt. Smart metering helps prevent bad debt by facilitating the use of prepayment meters. It also reduces the size of overdue bills by enabling remote disconnects, which do not depend on crew availability.

Operational Cost Reductions

  • Slashes the cost to connect and disconnect customers. Smart metering can virtually eliminate the costs of field crews and vehicles previously required to change service from the old to the new residents of a metered property.
  • Lowers insurance and legal costs. Field crew insurance costs are high – and they’re even higher for employees subject to stress and injury while disconnecting customers with past-due bills. Remote disconnects through smart metering lower these costs. They also reduce medical leave, disability pay and compensation claims. Remote disconnects also significantly cut the number of days that employees and lawyers spend on perpetrator prosecutions and attempts to recoup damages.
  • Cuts the costs of managing vegetation. Smart metering can pinpoint blinkouts, reducing the cost of unnecessary tree trimming.
  • Reduces grid-related capital expenses. With smart metering, network managers can analyze and improve block-by-block power flows. Distribution planners can better size transformers. Engineers can identify and resolve bottlenecks and other inefficiencies. The benefits include increased throughput and reductions in grid overbuilding.
  • Shaves supply costs. Supply managers use interval data to fine-tune supply portfolios. Because smart metering enables more efficient procurement and delivery, supply costs decline.
  • Cuts fuel costs. Many utility service calls are “false alarms.” Checking meter status before dispatching crews prevents many unnecessary truck rolls.
  • Reduces theft. Smart metering can identify illegal attempts to reconnect meters, or to use energy and water in supposedly vacant premises. It can also detect theft by comparing flows through a valve or transformer with billed consumption.

Compliance Monitoring

  • Ensures contract compliance. Gas utilities can use one-hour interval meters to monitor compliance from interruptible, or “non-core,” customers and to levy fines against contract violators.
  • Ensures regulatory compliance. Utilities can monitor the compliance of customers with significant outdoor lighting by comparing similar intervals before and during a restricted time period. For example, a jurisdiction near a wildlife area might order customers to turn off outdoor lighting so as to promote breeding and species survival.
Outage Management

  • Reduces outage duration by identifying outages more quickly and pinpointing outage and nested outage locations. Smart metering also permits utilities to ensure outage resolution at every meter location.
  • Sizes outages more accurately. Utilities can ensure that they dispatch crews with the skills needed – and adequate numbers of personnel – to handle a specific job.
  • Provides updates on outage location and expected duration. Smart metering helps call centers inform customers about the timing of service restoration. It also facilitates display of outage maps for customer and public service use.
  • Detects voltage fluctuations. Smart metering can gather and report voltage data. Customer satisfaction rises with rapid resolution of voltage issues.

New Services

For utilities that offer services besides commodity delivery, smart metering provides an entry to such new business opportunities as:

  • Monitoring properties. Landlords reduce costs of vacant properties when utilities notify them of unexpected energy or water consumption. Utilities can perform similar services for owners of vacation properties or the adult children of aging parents.
  • Monitoring equipment. Power-use patterns can reveal a need for equipment maintenance. Smart metering enables utilities to alert owners or managers to a need for maintenance or replacement.
  • Facilitating home and small-business networks. Smart metering can provide a gateway to equipment networks that automate control or permit owners to access equipment remotely. Smart metering also facilitates net metering, offering some utilities a path toward involvement in small-scale solar or wind generation.

Environmental Improvements

Many of the smart metering benefits listed above include obvious environmental benefits. When smart metering lowers a utility’s fuel consumption or slows grid expansion, cleaner air and a better preserved landscape result. Smart metering also facilitates conservation through:

  • Leak detection. When interval reads identify premises where water or gas consumption never drops to zero, leaks are an obvious suspect (a minimal version of this check is sketched after this list).
  • Demand response and critical peak pricing. Demand response encourages more complete use of existing base power. Employed in conjunction with critical peak pricing, it also reduces peak usage, lowering needs for new generators and transmission corridors.
  • Load control. With the consent of the owner, smart metering permits utilities or other third parties to reduce energy use inside a home or office under defined circumstances.
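
The leak heuristic in the first item above reduces to a one-line check; the flow data and threshold are invented for illustration:

```python
# Flag premises whose hourly consumption never drops to zero.
def possible_leak(hourly_reads):
    return min(hourly_reads) > 0.0

overnight_flow = [0.4, 0.3, 0.3, 0.4] + [1.2] * 20  # gallons per hour
print(possible_leak(overnight_flow))  # True -> worth investigating
```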

CHALLENGES IN SMART METERING

Utilities preparing to deploy smart metering systems need to consider these important factors:

System Intelligence. There’s a continuing debate in the utility industry as to whether smart metering intelligence should be distributed or centralized. Initial discussions of advanced metering tended to assume intelligence embedded in meters. Distributed intelligence seemed part of a trend, comparable to “smart cards,” “smart locks” and scores of other everyday devices with embedded computing power.

Today, industry consensus favors centralized intelligence. Why? Because while data processing for purposes of interval billing can take place in either distributed or central locations, other applications for interval data and related communications systems cannot. In fact, utilities that opt for processing data at the meter frequently make it impossible to realize a number of the benefits listed above.

Data Volume. Smart metering inevitably increases the amount of meter data that utilities must handle. In the residential arena, for instance, using hour-long measurement intervals rather than monthly consumption totals replaces 12 annual reads per customer with 8,760 reads – a 730-fold increase.

In most utilities today, billing departments “own” metering data. Interval meter reads, however, are useful to many departments. These readings can provide information on load size and shape – data that can then be analyzed to help reduce generation and supply portfolio costs. Interval reads are even more valuable when combined with metering features like two-way communication between meter and utility, voltage monitoring and “last gasp” messages that signal outages.

This new data provides departments outside billing with an information treasure trove. But when billing departments control the data, others frequently must wait for access lest they risk slowing down billing to a point that damages revenue flow.

Meter Data Management. An alternative way to handle the data volume and multiple data requests is to offload the data into a stand-alone meter data management (MDM) application.

MDM applications gather and store meter data. They can also perform the preliminary processing required for different departments and programs. Most important, MDM gives all units equal access to commonly held meter data resources (Figure 2).

MDM provides an easy pathway between data and the multiple applications and departments that need it. Utilities can more easily consolidate and integrate data from multiple meter types, and reduce the cost of building and maintaining application interfaces. Finally, MDM provides a place to store and use data whose flow into the system cannot be regulated – for example, the flood of nearly simultaneous messages from tens of thousands of meters sending a “last gasp” during a major outage.
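
Conceptually, the MDM layer is a shared store that ingests reads from any head-end and answers every department alike; the class and method names below are invented for illustration:

```python
# Sketch of the MDM idea: one shared store, equal access for all users.
from collections import defaultdict

class MeterDataStore:
    def __init__(self):
        self._reads = defaultdict(list)   # meter_id -> [(timestamp, kwh)]

    def ingest(self, meter_id, timestamp, kwh):
        self._reads[meter_id].append((timestamp, kwh))

    def reads_for(self, meter_id):
        """Same answer whether billing, planning or analytics asks."""
        return list(self._reads[meter_id])

mdm = MeterDataStore()
mdm.ingest("MTR-001", "2008-06-01T01:00Z", 1.2)
print(mdm.reads_for("MTR-001"))
```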

WEIGHING THE COSTS AND BENEFITS OF SMART METERING

Smart metering on a mass scale is relatively new. No utility can answer all questions in advance. There are ways, however, to mitigate the risks:

Consider all potential benefits. Smart metering may be a difficult cost to justify if it rests solely on customer acceptance of demand response. Smart metering is easier to cost-justify when its deployment includes, for instance, the value of the many benefits listed above.

Evaluate pilots. Technology publications are full of stories about successful pilots followed by unsuccessful products. That’s because pilots frequently protect participants from harsh financial consequences. And it’s difficult for utility personnel to avoid spending time and attention on participants in ways that encourage them to buy into the program. Real-life program rollouts lack these elements.

Complicating the problem are likely differences between long-term and short-term behavior. The history of gasoline conservation programs suggests that while consumers initially embrace incentives to car pool or use public transportation, few make such changes on a permanent basis.

Examining the experiences of utilities in the smart metering forefront – in Italy, for example, or in California and Idaho – may provide more information than a pilot.

Develop a complete business case. Determining the cost-benefit ratio of smart metering is challenging. Some costs – for example, meter prices and installation charges – may be relatively easy to determine. Others require careful calculations. As an example, when interval meters replace time-of-use meters, how does the higher cost of interval meters weigh against the fact that they don’t require time-of-use manual reprogramming?

As in any business case, some costs must be estimated:

  • Will customer sign-up equal the number needed to break even?
  • How long will the new meters last?
  • Do current meter readers need to be retrained, and if so, what will that cost?
  • Will smart metering help retain customers that might otherwise be lost?
  • Can new services such as equipment efficiency analyses be offered, and if so, how much should the utility charge for them?

Since some utilities are already rolling out smart metering programs, it’s becoming easier to obtain real-life numbers (rather than estimates) to plug into your business case.

CONSIDER ALTERNATIVES

Technology is “smart” only when it reduces the cost of obtaining specified objectives. Utilities may find it valuable to try lower-cost routes to some results, including:

  • Customer charges to prevent unnecessary truck rolls. Such fees are common among telephone service providers and have worked well for some gas utilities responding to repeated false alarms from householder-installed carbon monoxide detectors.
  • Time-of-use billing with time/rate relationships that remain constant for a year or more. This gives consumers opportunities to make time-shifting a habit.
  • Customer education to encourage consumers to use the time-shifting features on their appliances as a contribution to the environment. Most consumers have no idea that electricity goes to waste at night. Keeping emissions out of the air and transmission towers out of the landscape could be far more compelling to many consumers than a relatively small saving resulting from an on- and off-peak pricing differential.
  • Month-to-month rate variability. One study found that approximately a third of the efficiency gains from real-time interval pricing could be captured by simply varying the flat retail rates monthly – and at no additional cost for metering. [1] While a third of the efficiency gains might not be enough to attain long-term goals, they might be enough to fill in a shorter-term deficit, permitting technology costs and regulatory climates to stabilize before decisions must be made.
  • Multitier pricing based on consumption. Today, two-tier pricing – that is, a lower rate for the first few hundred kilowatt-hours per month and a higher rate for additional consumption – is common. However, three or four tiers might better capture the attention of those whose consumption is particularly high – owners of large homes and pool heaters, for instance – without burdening those at the lower end of the economic ladder. Tiers plus exception handling for hardships like high-consuming medical equipment would almost certainly be less difficult and expensive than universal interval metering (a worked example of tiered billing follows this list).
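
Tiered billing, as described in the last item, prices each block of consumption at its own rate; the rates and tier boundaries below are invented for illustration:

```python
# Each tier prices only the kWh that fall within it.
def tiered_bill(kwh, tiers=((300, 0.08), (700, 0.12), (float("inf"), 0.18))):
    total, prev_cap = 0.0, 0.0
    for cap, rate in tiers:
        total += max(0.0, min(kwh, cap) - prev_cap) * rate
        prev_cap = cap
    return total

print(f"${tiered_bill(250):.2f}")   # $20.00 - all in tier 1
print(f"${tiered_bill(1200):.2f}")  # $162.00 - spills into tier 3
```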

A thorough evaluation of the benefits and challenges of advanced metering systems, along with an understanding of alternative means to achieving those benefits, is essential to utilities considering deployment of advanced metering systems.

Note: The preceding was excerpted from the Oracle white paper “Smart Metering for Electric and Gas Utilities.” To receive the complete paper, email oracleutilities_ww@oracle.com.

ENDNOTE

  1. Holland and Mansur, “The Distributional and Environmental Effects of Time-varying Prices in Competitive Electricity Markets.” Results published in “If RTP Is So Great, Why Don’t We See More of It?” Center for the Study of Energy Markets Research Review, University of California Energy Institute, Spring 2006. Available at www.ucei.berkeley.edu/

Microsoft Helps Utilities Use IT to Create Winning Relationships

The utilities industry worldwide is experiencing growing energy demand in a world with shifting fuel availability, increasing costs, a shrinking workforce and mounting global environmental pressures. Rate case filings and government regulations, especially those regarding environmental health and safety, require utilities to streamline reporting and operate safely enterprise-wide. At the same time, increasing competition and costs drive the need for service reliability and better customer service. Each issue causes utilities to depend more and more on information technology (IT).

The Microsoft Utility team works with industry partners to create and deploy industry-specific solutions that help utilities transform challenges into opportunities and empower utility workers to thrive in today’s market-driven environment. Solutions are based on the world’s most cost-effective, functionally rich and secure IT platform, which is interoperable with a wide variety of systems and proven to improve people’s ability to access information and work with others across boundaries. Together, Microsoft and its partners help utilities optimize operations in each line of business.

Customer care. Whether a utility needs to modernize a call center, add customer self-service or respond to new business requirements such as green power, Microsoft and its partners provide solutions for turning the customer experience into a powerful competitive advantage with increased cost efficiencies, enhanced customer service and improved financial performance.

Transmission and distribution. Growing energy demand makes it critical to effectively address safe, reliable and efficient power delivery worldwide. To help utilities meet these needs, Microsoft and its partners offer EMS, DMS and SCADA systems; mobile workforce management solutions; project intelligence; geographic information systems; smart metering/grid; and work/asset/document management tools that streamline business processes and offer connectivity across the enterprise and beyond.

Generation. Microsoft and its partners provide utilities with a view across and into their generation operations that enables them to make better decisions to improve cycle times, output and overall effectiveness while reducing the carbon footprint. With advanced software solutions from Microsoft and its partners, utilities can monitor equipment to catch early failure warnings, measure fleets’ economic performance and reduce operational and environment risk.

Energy trading and risk management. Market conditions require utilities to optimize energy supply performance. Microsoft and its partners’ enterprise risk management and trading solutions help utilities feed the relentless energy demands in a resource-constrained world.

Regulatory compliance. Microsoft and its partners offer solutions to address the compliance requirements of the European Union; the Federal Energy Regulatory Commission; the North American Electric Reliability Council; the Sarbanes-Oxley Act of 2002; environmental, health and safety rules; and other regional jurisdiction regulations and rate case issues. With solutions from Microsoft partners, utilities gain a proactive approach to compliance – the most effective way to manage operational risk across the enterprise.

Enterprise. To optimize their businesses, utility executives need real-time visibility across the enterprise. Microsoft and its partners provide integrated e-business solutions that help utilities optimize their interactions with customers, vendors and partners. These enterprise applications address business intelligence and reporting, customer relationship management, collaborative workspaces, human resources and financial management.

Using Analytics for Better Mobile Technology Decisions

Mobile computing capabilities have been proven to drive business value by providing traveling executives, field workers and customer service personnel with real-time access to customer data. Better and more timely access to information shortens response times, improves accuracy and makes the workforce more productive.

However, although your organization may agree that technology can improve business processes, different stakeholders – IT management, financial and business leadership and operations personnel – often have different perspectives on the real costs and value of mobility. For example, operations wants tools that help employees work faster and focus more intently on the customer; finance wants the solution that costs the least amount this quarter; and IT wants to implement mobile projects that can succeed without draining resources from other initiatives.

It may not be obvious, but there are ways to achieve everyone’s goals. Analytics can help operations, finance and IT find common ground. When teams understand the data, they can understand the logic. And when they understand the logic they can support making the right decision.

EXPOSING THE FORMULA

Deploying mobile technology is a strategic initiative with far-reaching consequences for the health of an enterprise. In the midst of evaluating a mobile project, however, it’s easy to forget that the real goal of hardware-acquisition initiatives is to make the workforce more productive and improve both the top and bottom lines over the long term.

Most decision-analytics tools focus on up-front procurement questions alone, because the numbers seem straightforward and uncomplicated. But these analyses miss the point. The best analysis is one that can determine which of the solutions will provide the most advantages to the workforce at the lowest possible overall cost to the organization.

To achieve the best return on investment we must do more than recoup an out-of-pocket expense: Are customers better served? Are employees working better, faster, smarter? Though hard to quantify, these are the fundamental aspects that determine the return on investment (ROI) of technology.

It’s possible to build a vendor-neutral analysis to calculate the total cost of ownership (TCO) and ROI of mobile computers. Panasonic Computer Solutions Company, the manufacturer of Toughbook notebooks, enlisted the services of my analytics company, Serious Networks, Inc., to develop an unbiased TCO/ROI application to help companies make better decisions when purchasing mobile computers.

The Panasonic-sponsored operational analysis tool provides statistically valid answers by performing a simulation of the devices as they would be used and managed in the field, generating a model that compares the costs and benefits of multiple manufacturers’ laptops. Purchase cost, projected downtime, the range of wireless options, notebook features, support and other related costs are all incorporated into this analytic toolset.

From over 100 unique simulations with actual customers, four key TCO/ROI questions emerged:

  • What will it cost to buy a proposed notebook solution?
  • What will it cost to own it over the life of the project?
  • What will it cost to deploy and decommission the units?
  • What value will be created for the organization?

MOVING BEYOND GUESSTIMATES – CONSIDERING COSTS AND VALUE OVER A LIFETIME

There is no such thing as an average company, so an honest analysis uses actual corporate data instead of industry averages. Just because a device is the right choice for one company does not make it the right choice for yours.

An effective simulation takes into account the cost of each competing device, the number of units and the rate of deployment. It calculates the cost of maintaining a solution and establishes the value of productive time using real loaded labor rates or revenue hours. It considers buy versus lease questions and can extrapolate how features will be used in the field.

As real-world data is entered, the software determines which mobile computing solution is most likely to help the company reach its goals. Managers can perform what-if analyses by adjusting assumptions and re-running the simulation. Within this framework, managers will build a business case that forecasts the costs of each mobile device against the benefits derived over time (see Figures 1 and 2).
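
The flavor of such a forecast can be captured in a few lines; this is a generic sketch, not the Panasonic tool itself, and every input below is an invented assumption:

```python
# Compare five-year total cost of ownership for two candidate devices.
def five_year_tco(purchase, annual_support, failure_rate, downtime_cost):
    """Purchase price plus five years of support and expected downtime."""
    return purchase + 5 * (annual_support + failure_rate * downtime_cost)

candidates = {
    "rugged notebook":   five_year_tco(3500, 200, 0.05, 2000),
    "commercial laptop": five_year_tco(1800, 250, 0.20, 2000),
}
for name, tco in sorted(candidates.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${tco:,.0f}")
```

Re-running such a model with different assumptions is exactly the what-if exercise described above: the cheaper device to buy is not necessarily the cheaper device to own.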

MAKING INTANGIBLES TANGIBLE

The 90-minute analysis process is very granular. It’s based on the industry segment – because it simulates the tasks of the workforce – and compares up to 10 competing devices.

Once devices are selected, purchase or lease prices are entered, followed by value-added benefits like no-fault warranties and on-site support. Intangible factors favoring one vendor over another, such as incumbency, are added to the data set. The size and rate of the deployment, as well as details that determine the cost of preparing the units for the workforce, are also considered.

Next the analysis accounts for the likelihood and cost of failure, using your own experience as a baseline. Somewhat surprisingly, the impact of failure is given less weight than most outside observers would expect. Reliability is important, but it’s not the only or most important attribute.

More weight is given to productivity and operational enhancements, which can have a significantly greater financial impact than reliability, because statistically employees spend far more of their time working than dealing with equipment malfunctions.

A matrix of features and key workforce behaviors is developed to examine the relative importance of touch screens, wireless and GPS, as well as each computer vendor’s ability to provide those features as standard or extra-cost equipment. The features are rated for their time and motion impact on your organization, and an operations efficiency score is applied to imitate real-world results.

During the session, the workforce is described in detail, because this information directly affects the cost and benefit. To assess the value of a telephone lineman’s time, for example, the system must know the average number of daily service orders, the percentage of those service calls that require re-work and whether linemen are normally in the field five, six or seven days a week.

Once the data is collected and input it can be modified to provide instantaneous what-if, heads-up and break-even analyses reports – without interference from the vendor. The model is built in Microsoft Excel so that anyone can assess the credibility of the analysis and determine independently that there are no hidden calculations or unfair formulas skewing the results.

CONCLUSION

The Panasonic simulation tool can help different organizations within a company come to consensus before making a buying decision. Analytics help clarify whether a purpose-built rugged or business-rugged system or some other commercial notebook solution is really the right choice for minimizing the TCO and maximizing the ROI of workforce mobility.

ABOUT THE AUTHOR

Jason Buk is an operations director at Serious Networks, Inc., a Denver-based business analytics firm. Serious Networks uses honest forecasting and rigorous analysis to determine what resources are most likely to increase the effectiveness of the workforce, meet corporate goals and manage risk in the future.

Analyzing Substation Asset Health Information for Increased Reliability And Return on Investment

Asset Management, Substation Automation, AMI and Intelligent Grid Monitoring are common and growing investments for most utilities today. The foundation for effective execution of these initiatives is built upon the ability to efficiently collect, store, analyze and report information from the rapidly growing number of smart devices and business systems. Timely and automated access to this information is now more than ever helping drive the profitability and success of utilities. Most utilities have made significant investments in modern substation equipment but fail to continuously analyze and interpret the real-time health indicators of these assets. Continued investment in state-of-the-art operational assets will yield little return on investment unless the information can be harvested and interpreted in a meaningful way.

DATA CAPTURE AND PRESENTATION

InStep’s eDNA (Enterprise Distributed Network Architecture) software is used by many of the world’s leading utilities to collect, store, display and report on the operational and asset health-related information produced by their intelligent assets. eDNA is a highly scalable enterprise application specifically designed for integrating data from SCADA, IEDs, utility meters and other smart devices with the corporate enterprise. This provides centralized access to the real-time, historical and asset health related data that most applications throughout a utility depend upon for managing reliability and profitability.

A real-time historian is needed to collect, organize and report the substation asset measurement data. Today, asset health monitoring is often absent, or consists of fixed alarm limits defined within the device or historian, supplemented by fixed end-of-life calculations for determining an asset’s health. It is a daunting task to identify and maintain fixed limits and calculations when the appropriate values vary with actual device characteristics, operating history, ambient conditions and device settings. As a result, the historian alone does not provide a complete asset monitoring strategy.

ADVANCED ANALYTICS

InStep’s PRiSM software is a self-learning analytic application for monitoring the real-time health of critical assets in support of Condition Based Maintenance (CBM). PRiSM uses artificial intelligence and sophisticated data-mining techniques to determine when a piece of equipment is performing poorly or is likely to fail. The early identification of equipment problems leads to reduced maintenance costs and increased availability, reliability, production quality and capacity.

The software learns from an asset’s individual operating history and develops a series of operational profiles for each piece of equipment. These operational profiles are compared against the equipment’s real-time data to identify and predict failures before they occur. Alarms and email notifications alert personnel to pending problems. PRiSM includes an advanced analysis application for identifying why an asset is not performing as expected.
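
In spirit, profile-based condition monitoring can be sketched in a few lines; this is a generic illustration of the approach, not InStep’s actual algorithm, and the readings are invented:

```python
# Learn a normal operating band from history, then flag readings that
# stray outside it.
import statistics

def learn_profile(history, k=3.0):
    """Return a (low, high) band of mean +/- k standard deviations."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return (mu - k * sigma, mu + k * sigma)

history = [62.1, 63.0, 61.8, 62.5, 63.2, 62.0, 61.9, 62.7]  # e.g. oil temp, C
low, high = learn_profile(history)

def check(reading):
    """True when the reading deviates from the learned profile."""
    return not (low <= reading <= high)

print(check(62.4))  # False: normal operation
print(check(71.5))  # True: alert maintenance
```

Learning the band from each asset’s own history, rather than applying one fixed limit to all devices, is what addresses the maintenance burden of fixed alarm limits described earlier.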

TECHNOLOGY ADVANCEMENT

Utilities are rapidly replacing legacy devices and systems with modern technologies. These new systems are typically better instrumented to provide utilities with the information necessary to more effectively operate and better maintain their assets. The status of a breaker can be good information for determining the path of power flow but does not provide enough information to determine the health of the device or when it is likely to fail. Modern IEDs and utility meters support tens to hundreds of data points in a single device. This data is quite valuable and necessary in supporting a modern utility asset management program. Many common utility applications such as maintenance management, outage management, meter data management, capacity planning and other advanced analytical systems can be best leveraged when accurate high-resolution historical data is readily available. An intelligent condition monitoring analytical layer is needed for effective monitoring of such a large population of devices and sensors.

CONCLUSION

The need for efficient and effective data management is rapidly growing as utilities continue to update their assets and business systems. This is further driving the need for a highly scalable enterprise historian. The historian is expanding beyond the traditional role of supporting operations and is becoming a key application for effective asset management and overall business success. The historian alone does not provide for a robust real-time asset health monitoring strategy, but when combined with an advanced online condition monitoring application such as InStep’s PRiSM technology, significant savings and increased reliability can be achieved. InStep continues to play a key and growing role in supporting many of the most successful utilities in their operational, reliability and asset monitoring efforts.

Technology with Vision for Today’s Utilities

Around the world, utilities are under pressure. Citizens demand that utilities provide energy and water without undermining environmental quality. Customers seek choice and convenience, and regulators respond with new market structures. Financial stakeholders look for operational efficiency at a time when aging workforces and infrastructures need replacement.

Pressures like these are forcing utilities to re-examine every aspect of the utility business, from supply to consumption. And no utility can handle those changes alone.

Oracle has positioned itself to become utilities’ software partner of choice in the quest to respond positively and completely to these pressures. To do so, Oracle brings together a worldwide team of utility experts, software applications that address mission-critical utility needs, a rock-solid suite of corporate operational software and world-leading middleware and technology.

The result: Flexible, innovative solutions that increase efficiency, improve stakeholder satisfaction and future-proof the organization.

Oracle has reshaped the utilities IT marketplace. During the past year, by acquiring two world leaders in utility-specific applications – SPL WorldGroup and Lodestar – Oracle has created Oracle Utilities, a new brand that establishes a unique portfolio of proven software, integrating industry-specific applications with the capabilities of Oracle Applications, Oracle Fusion Middleware and Oracle Database.

Oracle Utilities offers the world’s most complete suite of end-to-end information technology solutions for the gas, water and electric utilities that communities around the world depend on. Our revolutionary approach to providing utilities with the applications and expertise they need brings together:

  • Oracle Utilities solutions, utility-specific revenue and operations management applications:
    • Customer Care and Billing
    • Mobile Workforce Management
    • Network Management System
    • Work and Asset Management
    • Meter Data Management
    • Load Analysis
    • Load Profiling and Settlement
    • Portfolio Management
    • Quotations Management
    • Business Intelligence

These solutions are available stand-alone, or as an integrated suite.

  • Oracle’s ERP, database and infrastructure software:
    • Oracle E-Business Suite and other ERP applications
    • TimesTen and Sleepycat for real-time data management
    • Data hubs for customer and product master data management
    • Analytics that provide insight and customer intelligence
    • ContentDB, SpatialDB and RecordsDB for content management
    • Secure Enterprise Search for enterprise-wide search needs
  • Siebel CRM for larger competitive utilities’ call centers, specialized contacts and sales:
    • Most comprehensive solution for Sales, Service and Marketing
    • Complete out-of-the-box solution that’s easy to tailor to your needs
    • Measurable results, such as increased sales pipeline, user adoption and opportunity-to-win ratios, and doubled revenue growth

Stand-alone, each of these products meets utilities’ unique customer and service needs. Together, they enable multi-departmental business processes. The result is an unparalleled set of technologies that address utilities’ most pressing current and emerging issues.

THE VISION

Cross-organizational business processes and best practices are key to addressing today’s complex challenges. Oracle Utilities provides a path by which utilities can:

  • Advance customer care with:
    • Real-time 360-degree views of customer information
    • Tools to help customers save time and money
    • Ability to introduce or retire products and services quickly in response to emerging customer needs
  • Enhance revenue and operations management:
    • Avoid revenue leakage across end-to-end transactions
    • Increase the visibility and auditability of key business processes
    • Manage assets strategically
    • Bill for services and collect revenue cost-effectively
    • Increase field crew and network efficiency
    • Track and improve performance against goals
    • Achieve competitive advantage with a leading-edge infrastructure that helps utilities respond quickly to change
  • Reduce total cost of ownership through access to a single global vendor with:
    • Proven best-in-class utility management solutions
    • Comprehensive, world-class capabilities in applications and technology infrastructure
    • A global 24/7 distribution and support network with 7,000 service personnel
    • Over 14,000 software developers
    • Over 19,000 partners
  • Address the “Green Agenda”:
    • Help reduce pollution
    • Increase efficiency

STRATEGIC TECHNOLOGY FOR THE EMERGING UTILITY

Today’s utility is beset by urgent issues – environmental concerns, rising costs, aging workforces, changing markets, regulatory demands and rising stakeholder expectations.

Oracle Utilities can help meet these challenges by providing the leading mission-critical utilities suite in the marketplace today. Oracle integrates industry-specific customer care and billing, network management, work and asset management, mobile workforce management and meter data management applications with the capabilities of Oracle’s industry-leading enterprise applications, business intelligence tools, middleware and database technologies. We enable customers to adapt more nimbly to market deregulation, help them meet ever-evolving customer demands, enhance operational excellence and deliver on commitments to environmental conservation.

Oracle Utilities’ flexible, standards-based applications and architecture help utilities innovate. They lead toward coherent technology solutions. Oracle helps utilities keep pace with change without losing focus on the energy, water and waste services fundamental to local and global human and economic welfare.

Only Oracle powers the information-driven enterprise by offering a complete, integrated solution for every segment of the utilities industry – from generation and transmission to distribution and retail services. And when you run Oracle applications on Oracle technology, you speed implementation, optimize performance and maximize ROI.

Utilities today need a suite of software applications and technology to serve as a robust springboard from which to meet the challenges of the future.

Oracle offers that suite.

Oracle Utilities solutions enable you to meet tomorrow’s customer needs while addressing the varying concerns of financial stakeholders, employees, communities and governments. We work with you to address emerging issues and changing business conditions. We help you to evolve to take advantage of new technology directions and to incorporate innovation into ongoing activity.

Partnering with Oracle helps you to future-proof your utility.

CONTACT US

For more information, call +1.800.275.4775 to speak to an Oracle representative, or visit oracle.com/industries/utilities.

Copyright © 2008, Oracle. All rights reserved. Published in the U.S.A. This document is provided for information purposes only and the contents hereof are subject to change without notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document and no contractual obligations are formed either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without our prior written permission.

Oracle is a registered trademark of Oracle Corporation and/or its affiliates. Other names may be trademarks of their respective owners.

Leveraging the Data Deluge: Integrated Intelligent Utility Network

If you define a machine as a series of interconnected parts serving a unified purpose, the electric power grid is arguably the world’s largest machine. The next-generation version of the electric power grid – called the intelligent utility network (IUN), the smart grid or the intelligent grid, depending on your nationality or information source – provides utilities with enhanced transparency into grid operations.

Considering the geographic and logical scale of the electric grid from any one utility’s point of view, a tremendous amount of data will be generated by the additional “sensing” of the workings of the grid provided by the IUN. This output is often described as a “data flood,” and the implication that businesses could drown in it is apropos. For that reason, utility business managers and engineers need analytical tools to keep their heads above water and obtain insight from all this data. Paraphrasing the psychologist Abraham Maslow, the “hierarchy of needs” for applying analytics to make sense of this data flood could be represented as follows (Figure 1).

  • Insight represents decisions made based on analytics calculated using new sensor data integrated with existing sensor or quasi-static data.
  • Knowledge means understanding what the data means in the context of other information.
  • Information means understanding precisely what the data measures.
  • Data represents the essential reading of a parameter – often a physical parameter.

In order to reap the benefits of accessing the higher levels of this hierarchy, utilities must apply the correct analytics to the relevant data. One essential element is integrating the new IUN data with other data over the various time dimensions. Indeed, it is analytics that allow utilities to truly benefit from the enhanced capabilities of the IUN compared to the traditional electric power grid. Analytics can consist solely of calculations (such as measuring reactive power), or they can be rule-based (such as rating a transformer as “stressed” if it operates above 120 percent of its nameplate rating over a two-hour period).
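
To make the distinction concrete, here is a minimal Python sketch of a rule-based analytic of the kind just described. The reading format, threshold and window handling are illustrative assumptions, not any vendor’s implementation.

    # Minimal sketch of a rule-type analytic; input format is assumed:
    # per-minute transformer load readings as (minute, megawatts) pairs.
    def is_stressed(readings, nameplate_mw, threshold=1.20, window_min=120):
        """Flag 'stressed' if loading exceeds the threshold share of
        nameplate continuously over the trailing two-hour window."""
        latest_minute = readings[-1][0]
        window = [mw for minute, mw in readings
                  if latest_minute - minute < window_min]
        return bool(window) and all(mw > threshold * nameplate_mw
                                    for mw in window)

    # Two hours of readings at 125 percent of a 100 MW nameplate rating:
    history = [(minute, 125.0) for minute in range(120)]
    print(is_stressed(history, nameplate_mw=100.0))   # True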

The data to be analyzed comes from multiple sources. Utilities have for years had supervisory control and data acquisition (SCADA) systems in place that employ technologies to transmit voltage, current, watts, volt ampere reactives (VARs) and phase angle via leased telephone lines at 9,600 baud, using the distributed network protocol (DNP3). Utilities still need to integrate this basic information from these systems.
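
The quantities in such a scan are related by standard power-triangle arithmetic. The short sketch below shows only those textbook relationships; it assumes nothing about any particular SCADA interface, and the input values are made up.

    import math

    # Power-triangle relationships among quantities a SCADA scan reports.
    def apparent_power(p_watts, q_vars):
        return math.hypot(p_watts, q_vars)            # S = sqrt(P^2 + Q^2)

    def phase_angle_deg(p_watts, q_vars):
        return math.degrees(math.atan2(q_vars, p_watts))

    def power_factor(p_watts, q_vars):
        return p_watts / apparent_power(p_watts, q_vars)

    # Illustrative values only:
    print(round(power_factor(900.0, 436.0), 3))       # 0.9
    print(round(phase_angle_deg(900.0, 436.0), 1))    # 25.8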

In addition, modern electrical power equipment often comes with embedded microprocessors capable of generating useful non-operational information. This can include switch closing time, transformer oil chemistry and arc durations. These pieces of equipment – generically called intelligent electronic devices (IEDs) – often have local high-speed sequence-of-events recorders that can be programmed to deliver even more data for post-event analysis.

An increasing number of utilities are beginning to see the business case for implementing an advanced metering infrastructure (AMI). A large-scale deployment of such meters would also function as a fine-grained edge sensor system for the distribution network, providing not only consumption data but also voltage, power quality and load phase angle information. In addition, an AMI can be a strategic platform for initiating a program of demand-response load control. Indeed, some innovative utilities are considering two-way AMI meters that include a wireless connection such as ZigBee to the consumer’s home automation network (HAN), providing even finer detail on load usage and potential controllability.
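
A hypothetical record shape for one such AMI interval reading, carrying the edge-sensor quantities mentioned above, might look like the following. The field names are illustrative, not a standard schema.

    from dataclasses import dataclass

    @dataclass
    class MeterInterval:
        meter_id: str
        interval_end: str              # ISO 8601 timestamp
        consumption_kwh: float         # the traditional billing quantity
        voltage_v: float               # edge-sensor voltage reading
        thd_percent: float             # a power-quality measure
        load_phase_angle_deg: float    # load phase angle

    reading = MeterInterval("M-001", "2008-06-01T12:15:00",
                            0.42, 238.6, 2.1, 12.5)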

Companies must find ways to analyze all this data, both from explicit sources such as IEDs and implicit sources such as AMI or geographic information systems (GIS). A crucial aspect of IUN analysis is the ability to integrate conventional database data with time-synchronized data, since an isolated analytic may be less useful than no analytic at all.

CATEGORIES AND RELATIONSHIPS

There are many different categories of analytics that address the specific needs of the electric power utility in dealing with the data deluge presented by the IUN. Some depend on the state regulatory environment, which not only imposes operational constraints on utilities but also determines the scope and effect of the analytics information exchange that is required. For example, a generation-to-distribution utility – what some fossil plant owners call “fire to wire” – may have system-wide analytics that link load dispatch to generation economics, transmission line realities and distribution customer load profiles. Other utilities operate power lines only, and may not have their own generation capabilities or interact with consumers at all. Utilities like these may choose to focus initially on distribution analytics such as outage prediction and fault location.

Even the term analytics can have different meanings for different people. To the power system engineer it involves phase angles, voltage support from capacitor banks and equations that take the form “a + j*b.” To the line-of-business manager, integrated analytics may include customer revenue assurance, lifetime stress analysis of expensive transformers and dashboard analytics driving business process models. Customer service executives could use analytics to derive emergency load control measures based on a definition of fairness that could become quite complex. But perhaps the best general definition of analytics comes from the Six Sigma process mantra of “define, measure, analyze, improve, control.” In the computer-driven IUN, this would involve:

  • Define. This involves sensor selection and location.
  • Measure. SCADA systems enable this process.
  • Analyze. This can be achieved using IUN Analytics.
  • Improve. This involves grid performance optimization, as well as business process enhancements.
  • Control. This is achieved by sending commands back to grid devices via SCADA, and by business process monitoring.
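
As a conceptual sketch only, the closed loop above might be wired together as follows. Each stub stands in for a real subsystem (SCADA I/O, the analytics layer, command dispatch), and the feeder-load threshold is a made-up example.

    def scan():                        # Measure: one SCADA scan (stubbed)
        return {"feeder_load_mw": 105.0}

    def analyze(measurements):         # Analyze: a rule-based advisory
        if measurements["feeder_load_mw"] > 100.0:
            return ["reduce_feeder_load"]
        return []

    def control(advisories):           # Improve/Control: commands go back out
        for command in advisories:
            print("SCADA command:", command)

    control(analyze(scan()))           # one pass through the loop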

The term optimization can also be interpreted in several ways. Utilities can attempt to optimize key performance indicators (KPIs) such as the system average interruption duration index (SAIDI, which is somewhat consumer-oriented), grid efficiency in terms of megawatts lost to component heating, business processes (such as minimizing outage time to repair) or meeting energy demand at minimum incremental fuel cost.
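
For reference, SAIDI is conventionally computed as total customer-minutes of interruption divided by total customers served (per IEEE 1366). The figures below are illustrative.

    # SAIDI: customer-minutes of interruption / customers served.
    def saidi(interruptions, customers_served):
        """interruptions: iterable of (duration_minutes, customers_affected)."""
        customer_minutes = sum(duration * affected
                               for duration, affected in interruptions)
        return customer_minutes / customers_served

    events = [(90, 1200), (45, 300)]                  # two outages
    print(saidi(events, customers_served=50000))      # 2.43 minutes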

Although optimization issues often cross departmental boundaries, utilities may make compromises for the sake of achieving an overall strategic goal that can seem elusive or even run counter to individual financial incentives. An important part of higher-level optimization – in a business sense rather than a mathematical one – is the need for a utility to document its enterprise functions using true business process modeling tools. These are essential to finding better application integration strategies. That way, the business can monitor the advisories from analytics in the tool itself, and more easily identify business process changes suggested by patterns of online analytics.

Another aspect of IUN analytics involves – using a favorite television news phrase – “connecting the dots.” This means ensuring that a utility actually realizes the impact of a series of events on an end state, even though the individual events may appear unrelated.

For example, take complex event processing (CEP). A “simple” event might involve a credit card company’s software verifying that your credit card balance is under the limit before sending an authorization to the merchant. A “complex” event would take place if a transaction request for a given credit card account was made at a store in Boston, and another request came an hour later in Chicago. After taking into account certain realities of time and distance, the software would take an action other than approval – such as instructing the merchant to verify the cardholder’s identity.
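
A toy version of that complex-event rule is easy to state in code. Everything here is invented for illustration: the event shape, the speed threshold and the data.

    from datetime import datetime

    MAX_PLAUSIBLE_MPH = 600.0    # assumed ceiling on travel between uses

    def needs_verification(event_a, event_b):
        """Each event: (account, city, miles_between_events, timestamp)."""
        account_a, _, _, time_a = event_a
        account_b, _, miles, time_b = event_b
        hours = abs((time_b - time_a).total_seconds()) / 3600.0
        return (account_a == account_b and hours > 0
                and miles / hours > MAX_PLAUSIBLE_MPH)

    boston = ("acct-42", "Boston", 0.0, datetime(2008, 6, 1, 10, 0))
    chicago = ("acct-42", "Chicago", 850.0, datetime(2008, 6, 1, 11, 0))
    print(needs_verification(boston, chicago))   # True -> verify identity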

Back in the utilities world, consideration of weather forecasts in demand-response action planning, or distribution circuit redundancy in the face of certain existing faults, can be handled by such software. The key in developing these analytics is not so much about establishing valid mathematical relationships as it is about giving a businessperson the capability to create and define rules. These rules must be formulated within an integrated set of systems that support cross-functional information. Ultimately, it is the businessperson who relates the analytics back to business processes.

AVAILABLE TOOLS

Time can be a critical variable in successfully using analytics. In some cases, utilities require analytics to be responsive to the electric power grid’s need to input, calculate and output in an actionable time frame.

Utilities often have analytics built into functions in their distribution management or energy management systems, as well as individual analytic applications, both commercial and home-grown. And some utilities are still making certain decisions by importing data into a spreadsheet and using a self-developed algorithm. No matter what the source, the architecture of the analytics system should provide a non-real-time “bus,” often a service-oriented architecture (SOA) or Web services interface, but also a more time-dependent data bus that supports common industry tools used for desktop analytics within the power industry.

It’s important that the utility publish internal standards for connecting analytics to these buses, so that all authorized stakeholders can access the data. Utilities should also set enterprise policy for special connectors, manual entry and duplication of data – a discipline otherwise known as SOA governance.

The easier it is for utilities to use the IUN data, the less likely it is that their engineering, operations and maintenance staffs will be overwhelmed by the task of actually acquiring the data. Although the term “plug and play” has taken on certain negative connotations – largely due to the fact that few plug-and-play devices actually do that – the principle of easily adding a tool is still both valid and valuable. New instances of IUN can even include Web 2.0 characteristics for the purpose of mash-ups – easily configurable software modules that link, without pain, via Web services.

THE GOAL OF IMPLEMENTING ANALYTICS

Utilities benefit from applying analytics by making the best use of integrated utility enterprise information and data models, and by unlocking employee ideas and hypotheses about ways to improve operations. Often, analytics are also useful in helping employees identify suspicious relationships in the data. The widely lamented “aging workforce” issue typically involves the loss of senior staff who could visualize relationships that weren’t formally captured and make connections that others didn’t see. Higher-level analytics can partly offset the impact of this brain drain.

Another type of analytics is commonly called “business intelligence.” But although a number of best-selling general-purpose BI tools are commercially available, utilities need to ensure that the tools have access to the correct, unique, authoritative data. Upon first installing BI software, there’s sometimes a tendency among new users to quickly assemble a highly visual dashboard – without regard to the integrity of the data they’re importing into the tool.

Utilities should also create enterprise data models and data dictionaries to ensure the accuracy of the information being disseminated throughout the organization. After all, utilities frequently use analytics to create reports that summarize data at a high level. Yet some fault detection schemes – such as identifying problems in buried cables – may need original, detailed source data. For that reason utilities must have an enterprise data governance scheme in place.

In newer systems, data dictionaries and models can be provided by a Web service. But even if the dictionary consists of an intermediate lookup table in a relational database, the principles still hold: Every process and calculated variable must have a non-ambiguous name, a cross-reference to other major systems (such as a distribution management system [DMS] or geographic information system [GIS]), a pointer to the data source and the name of the person who owns the data. It is critical for utilities to assign responsibility for data accuracy, validation, source and caveats at the beginning of the analytics engineering process. Finding data faults after they contribute to less-than-correct results from the analytics is of little use. Utilities may find data scrubbing and cross-validation tools from the IT industry to be useful where massive amounts of data are involved.
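
One simple way to capture the four dictionary fields listed above is shown below. The class and naming scheme are illustrative assumptions, not a standard.

    from dataclasses import dataclass

    @dataclass
    class DictionaryEntry:
        name: str               # non-ambiguous variable name
        cross_references: dict  # links into other major systems (DMS, GIS)
        source: str             # pointer to the authoritative data source
        owner: str              # person responsible for accuracy/validation

    entry = DictionaryEntry(
        name="sub12.transformer3.top_oil_temp_c",
        cross_references={"DMS": "XFMR_1042", "GIS": "assets/1042"},
        source="historian://sub12/t3/oil_temp",
        owner="j.doe@utility.example",
    )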

Utilities have traditionally used simulation primarily as a planning tool. However, with the continued application of Moore’s law, the ability to feed a power system simulation with real-time data and solve a state estimation in real time can result in an affordable crystal ball for predicting problems, finding anomalies or performing emergency problem solving.

THE IMPORTANCE OF STANDARDS

The emergence of industry-wide standards is making analytics easier to deploy across utility companies. Standards also help ease the path to integration. After all, most electrons look the same around the world, and the standards arising from the efforts of Kirchhoff, Tesla and Maxwell have been broadly adopted globally. (Contrary views from the quantum mechanics community will not be discussed here!) Indeed, having a documented, self-describing data model is important for any utility hoping to make enterprise-wide use of data for analytics; using an industry-standard data model makes the analytics more easily shareable. In an age of greater grid interconnection, more mergers and acquisitions, and staff shortages, utilities’ ability to reuse and share analytics and to build tools on top of standards-based data models has become increasingly important.

Standards are also important when interfacing to existing utility systems. Although the IUN may be new, data on existing grid apparatus and layout may be decades old. By combining the newly added grid observations with the existing static system information to form a complete integration scenario, utilities can leverage analytics much more effectively.

When deploying an IUN, there can be a tendency to use just the newer, sensor-derived data to make decisions, because one knows where it is and how to access it. But using standardized data models makes incorporating existing data less of an issue. There is nothing wrong with creating new data models for older data.

CONCLUSION

To understand the importance of analytics in relation to the IUN, imagine an ice-cream model (pick your favorite flavor). At the lowest level we have data: the ice cream is 30 degrees. At the next level we have information: you know that it is 30 degrees on the surface of the ice cream, and that it will start melting at 32 degrees. At the next level we have knowledge: you’re measuring the temperature of the middle scoop of a three-scoop cone, and therefore when it melts, the entire structure will collapse. At the insight level we bring in other knowledge – such as that the ambient air temperature is 80 degrees, and that the surface temperature of the ice cream has been rising 0.5 degrees per minute since you purchased it. Then the gastronomic analytics activate and take preemptive action, causing you to eat the whole cone in one bite, because the temporary frozen-teeth phenomenon is less of a business risk than having the scoops melt and fault to ground.

Business Intelligence: The ‘Better Light Bulb’ for Improved Decision Making

Although some utilities have improved organizational agility by providing high-level executives with real-time visibility into operations, if they’re to be truly effective, these businesses must do more than simply implement CEO-level dashboards. They must provide this kind of visibility to every employee who needs it. To achieve this, utilities need to be able to collect data from many disparate sources and present it in a way that allows people company-wide to access the right information at the right time in the form of easy-to-use and actionable business intelligence (BI).

Consider the following statement from the Gartner EXP CIO report “Creating Enterprise Leverage: The 2007 CIO Agenda,” led by Mark McDonald and Tina Nunno (February 2007):

Success in 2007 requires making the enterprise different to attract and retain customers. In response, many CIOs are looking for new sources of enterprise leverage, including technical excellence, agility, information and innovation.

This statement holds true. But converting data into useful information for employees at different levels and in different roles creates a new challenge. Technological advances that produce exponentially increasing volumes of data, coupled with historical data silos, have made it extremely difficult for utilities professionals to access, process and analyze data in a way that allows them to make effective decisions. What’s needed: BI technology tools that are available not only to the C-level executive or the accounting department, but to everyone – civil and electrical engineers, technicians, planners, customer service representatives, safety officers and others.

BI solutions also need to handle data in a way that mirrors the way people work. Such solutions should be capable of supporting the full spectrum of use – from individuals’ personal content, to information created by team members for use by the team, to formal IT-created structured and controlled content for use enterprise-wide.

The good news is that BI has become more accessible, easier to use and more affordable so that people throughout the enterprise – not just accountants or senior executives – can gain insight into the business and make better informed decisions.

RIGHT-TIME PERFORMANCE MANAGEMENT

“The Gartner Magic Quadrant for Business Intelligence Platforms, 2008,” by James Richardson, Kurt Schlegel, Bill Hostmann and Neil McMurchy (February 2008), has this to say about the value of BI:

CIOs are coming under increasing pressure to invest in technologies that drive business transformation and strategic change. BI can deliver on this promise if deployed successfully, because it could improve decision making and operational efficiency, which in turn drive the top line and the bottom line.

Greg Todd, global lead for resources at Accenture Information Management Services, advises that monthly, or even weekly, reports just aren’t enough for utilities to remain agile. Says Todd, “The utilities industry is dynamic. Everything from plant status and market demand to generation capacity and asset condition needs near real-time performance management to provide the insight for people enterprise-wide to make the right decisions in a timely fashion – not days or weeks after the event.”

By having access to near real-time performance monitoring across the enterprise, utilities executives, managers, engineers and front-line operations personnel can rapidly analyze information and make decisions to improve performance. This in turn allows them more agility to respond to today’s regulatory, competitive and economic imperatives.

For example, Edipower, one of Italy’s leading energy providers, has implemented an infrastructure that will grow as its business grows and support the BI technology it needs to guarantee power plant availability as market conditions and regulations dictate. According to Massimo Pernigotti, CIO of Edison, consolidating the family of companies’ technology platforms and centralizing its data network allowed the utility to fully integrate its financial and production data analyses. Says Pernigotti, “Using the new application, staff can prepare scorecards and business intelligence summaries that plant managers can then access from portable devices, ensuring near real-time performance management.”

To achieve this level of performance management, utilities professionals need easy access to both structured and unstructured data from multiple sources, as illustrated in Figure 1. This data can be “owned” by many different departments and span multiple locations. It can come from operational control systems, meter data systems, customer information systems, financial systems and human resources and enterprise resource planning (ERP) systems, to name a few sources. New and more widely available BI tools allow engineers and others to quickly view near real-time information and use it to create key performance indicators (KPIs) that can be used to monitor and manage the operational health of an organization.

KPIs commonly include things like effective forced outage factors (EFOFs), average customer downtime, average customer call resolution time, fuel cost per megawatt hour (MWh), heat rates, capacity utilization, profit margin, total sales and many other critical indicators. Traditionally, this data would be reported in dozens of documents that took days or weeks to compile while problems continued to progress. Using BI, however, these KPIs can be calculated in minutes.
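
Two of the named KPIs reduce to simple arithmetic once the inputs are gathered; the figures below are illustrative only.

    # Fuel cost per MWh and capacity utilization from illustrative inputs.
    def fuel_cost_per_mwh(total_fuel_cost, mwh_generated):
        return total_fuel_cost / mwh_generated

    def capacity_utilization(mwh_generated, capacity_mw, hours_in_period):
        return mwh_generated / (capacity_mw * hours_in_period)

    print(fuel_cost_per_mwh(1200000.0, 40000.0))      # 30.0 dollars per MWh
    print(round(capacity_utilization(40000.0, 80.0, 720.0), 2))  # 0.69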

With context-sensitive BI, safety professionals have the visibility to monitor safety incidents and environmental impacts. In addition, engineers can analyze an asset’s performance and energy consumption – and solve problems before they become critical.

One of the largest U.S.-based electric power companies recently completed a corporate acquisition and divestiture. As part of its reorganization, the company sought a way to reduce capital expenditures for producing power as well as an effective way to capture and transfer knowledge in light of an aging workforce. By adopting a new BI platform and monitoring a comprehensive set of custom KPIs in near real time, the company was able to give employees access to its generation performance metrics, which in turn led to improved generation demand-and-surplus forecasts. As a result, the company was able to better utilize its existing power plants and reduce capital expenditures for building new ones.

BI tools are also merging with collaboration tools to provide right-time information about business performance that employees at every organizational level can access and share across corporate boundaries and continents. This will truly change the way people work. Indeed, the right solution combines BI and collaboration, which not only improves business insight but also enables staff to work together in real time to make sound decisions more quickly and easily and to solve problems proactively.

With these collaboration capabilities increasingly built into today’s BI solutions, firms can create virtual teams that interact using audio and video over large geographical distances. When coupled with real-time monitoring and alerting, this virtual collaboration enables employees – and companies – to make more informed decisions and subsequently become more agile.

Andre Blumberg, group information technology manager for Hong Kong’s CLP Group, believes that user friendliness and user empowerment are key success factors for BI adoption. Says Blumberg, “Enabling users to create reports and perform slice-and-dice analysis in a familiar Windows user interface is important to successfully leveraging BI capabilities.”

As more utilities implement KPI dashboards and scorecards as performance management tools, they open the door for next-generation technologies that feature dynamic mashups and equipment animations, and that create a 24×7 collaborative environment to help managers, engineers and operations personnel detect and analyze problems faster and more effectively. This environment will be common across roles, familiar and secure, and will cost much less than other solutions with similar capabilities. All this allows utilities operations personnel to “see the needle in the haystack” and make quicker, better decisions that drive operational efficiency and improve the bottom line. Collaboration enables personnel to engage in key issues in a timely fashion via this new desktop environment. In addition, utilities can gain preemptive knowledge of operational problems and act before those problems become critical.

BETTER DECISIONS IMPROVE BUSINESS INSIGHT

Everyone in the organization can benefit from understanding what drives a utility, the key metrics for success and how the company is performing against those metrics (see Figure 2). By definition, BI encompasses everyone, so logically everyone should be able to use it.

According to Rick Nicholson, vice president of research for Energy Insights, an IDC company, the nature of BI recently changed dramatically. For many years, BI was a reporting solution and capability used primarily by a small number of business analysts. “Today, BI solutions have become more accessible, easier to use and more affordable, and they’re being deployed to managers, supervisors, line-of-business staff and external stakeholders,” says Nicholson. “We expect the use of business intelligence in the utility industry to continue to increase due to factors such as new report and compliance requirements, changes in trading markets, new customer programs such as energy efficiency and demand response, and intelligent grid initiatives.”

Accenture’s Todd believes that traditional BI focuses on analyzing the past, whereas real-time BI today can provide an immediate chance to affect the future. Says Todd, “Smart users of BI today take the growing volume of corporate operational data and the constant flow of raw information and turn it into usable and business-relevant insight – in near real time – and even seek to manage future events using analytics.” (See Figure 2.)

Most importantly, today’s BI gives utility information workers a way of understanding what’s going on in the business that’s both practical and actionable. Dr. J. Patrick Kennedy, the founder and CEO of performance management vendor OSIsoft, says that the transaction-level detail provided from enterprise software often offers a good long-term history, but it does not answer many of the important operations questions. Further, this type of software typically represents a “pull” rather than a “push” technology.

Says Kennedy, “People think in terms of context, trends, interactions, risk and reward – to answer these questions effectively requires actionable information to help them make the right decisions. Integrating systems enables these decisions by providing users with a dynamic BI application within a familiar platform.”

WHAT GOOD BI SYSTEMS LOOK LIKE

Here are some critical characteristics to look for in an enterprise-class BI solution:

  • The BI solution should integrate with the existing IT infrastructure and not require major infrastructure changes or replacement of legacy software applications.
  • The technology should mirror day-to-day business processes already in place (rather than expect users to adapt to it).
  • The application should be easy to use without extensive IT support.
  • The BI solution should connect seamlessly to multiple data sources rather than require workers to toggle in and out of a broad range of proprietary applications.
  • An effective BI solution will provide the ability to forecast, plan, budget and create scorecards and consolidated financial reports in a single, integrated product.
  • The BI solution should support navigation directly from each KPI to the underlying data supporting that KPI.
  • Analysis and reporting capabilities should be flexible and allow for everything from collecting complex data from unique sources to heavy-duty analytics and enterprise-wide production reporting.
  • The BI solution should support security by role, location and more. If access to certain data needs to be restricted, access management should be automated.

The true measure of BI success is that users actually use it. For this to happen, BI must be easy to learn and use. It should provide the right information in the right amount of detail to the right people. And it must present this information in easily customized scorecards, dashboards and wikis, and be available to anyone. If utilities can achieve this, they’ll be able to make better decisions much more quickly.

SEEING THE LIGHT

BI is about empowering people to make decisions based on relevant and current information so that they can focus on the right problems and pay attention to the right customers. By using BI to monitor performance and analyze both financial and operational data, organizations can perform real-time collaboration and make truly transformational decisions. Given the dynamic nature of the utilities industry, BI is a critical tool for making organizations more flexible and agile – and for enabling them to easily anticipate and manage change.