Successful Smart Grid Architecture

The smart grid is progressing well on several fronts. Groups such as the GridWise Alliance, events such as GridWeek, and national policy such as the American Recovery and Reinvestment Act in the U.S. have all brought positive attention to this opportunity. The boom in distributed renewable energy and its demands for a bidirectional grid are driving the need forward, as is the push to improve consumer control and awareness, giving customers the ability to engage in real-time energy conservation.

On the technology front, advances in wireless and other data communications make wide-area sensor networks more feasible. Distributed computation is certainly more powerful – just consider your iPod! Even architectural issues such as interoperability are now being addressed in dedicated forums such as Grid-Interop. It seems that the recipe for a smart grid is coming together in a way that would make many of those who envisioned it proud. But to avoid making a gooey mess in the oven, an overall architecture that carefully considers seven key ingredients for success must first exist.

Sources of Data

Utilities have eons of operational data: both real-time and archival, both static (such as nodal diagrams within distribution management systems) and dynamic (such as switching orders). There is a wealth of information generated by field crews and from root-cause analyses of past system failures. Advanced metering infrastructure (AMI) implementations become a fine-grained distribution sensor network feeding communication aggregation systems such as Silver Spring Networks’ UtilityIQ or Trilliant’s SecureMesh network.

These data sources need to be architected to be available to enhance, support and provide context for real-time data coming in from new intelligent electronic devices (IEDs) and other smart grid devices. In an era of renewable energy sources, grid connection controllers become yet another data source. With renewables, micro-scale weather forecasting such as IBM Research’s Deep Thunder can provide valuable context for grid operation.

Data Models

Once data is obtained, in order to preserve its value in a standard format, one can think in terms of an extensible markup language (XML)-oriented database. Modern implementations of these databases have improved performance characteristics, and the International Electrotechnical Commission (IEC) common information model/generic interface definition (CIM/GID), though oriented more to assets than operations, is a front-running candidate for consideration.

Newer entries, such as the device language message specification/companion specification for energy metering (DLMS/COSEM) for AMI, are also coming into practice. Sometimes more important than the technical implementation of the data, however, is the model that is employed. A well-designed data model not only makes exchange of data and legacy program adjustments easier, but also helps in applying security and performance requirements. The existence of data models is often a good indicator of an intact governance process, for it facilitates use of the data by multiple applications.
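As a toy illustration of the idea, the sketch below parses a small, CIM-style XML asset record into a neutral structure that multiple applications could share. The element names here are invented for illustration and are not the actual IEC CIM schema.

```python
import xml.etree.ElementTree as ET

# A hypothetical, CIM-style asset record (element names are illustrative,
# not the real IEC CIM schema).
record = """
<Transformer id="TX-1042">
  <name>Walden Substation T1</name>
  <ratedMVA>25.0</ratedMVA>
  <installed>1998-06-15</installed>
</Transformer>
"""

root = ET.fromstring(record)

# Map the markup into a neutral dictionary any downstream
# application can consume without knowing the XML layout.
asset = {
    "id": root.get("id"),
    "name": root.findtext("name"),
    "rated_mva": float(root.findtext("ratedMVA")),
    "installed": root.findtext("installed"),
}
print(asset["id"], asset["rated_mva"])
```

The point is less the parsing than the governance: once the model is agreed, every consumer reads the same fields the same way.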


Communications

Customer workshops and blueprinting sessions have shown that one of the most common issues needing to be addressed is the design of the wide-area communication system. The data communications architecture affects data rate performance, the cost of distributed intelligence and the identification of security vulnerabilities.

There is no single communications technology that is suitable for all utilities, or even for all operational areas across any individual utility. Rural areas may be served by broadband over powerline (BPL), while urban areas benefit from multi-protocol label switching (MPLS) and purpose-designed mesh networks, enhanced by their proximity to fiber.

In the future, there could be entirely new choices in communications. So, the smart grid architect needs to focus on security, standardized interfaces to accept new technology, enablement of remote configuration of devices to minimize any touching of smart grid devices once installed, and future-proofing the protocols.

The architecture should also be traceable to the business case. This needs to include probable use cases that may not be in the public utility commission (PUC) filing – such as AMI now, but smart grid later. Few utilities will be pleased with the idea of a communication network rebuild within five years of deploying an AMI-only network.

Communications architecture must also consider power outages, so battery backup, solar recharging, or other equipment may be required. Even arcane details such as “Will the antenna on a wireless device be the first thing to blow off in a hurricane?” need to be considered.


Security

Certainly, the smart grid’s purpose is to enhance network reliability, not lower its security. But with the advent of the North American Electric Reliability Corp. Critical Infrastructure Protection (NERC CIP) standards, security has risen to become a prime consideration, usually addressed in phase one of the smart grid architecture.

Unlike the data center, field-deployed security presents many new situations and challenges. There is security at the substation – for example, controlling who can access which networks, and when, from within the control center. At the other end, security of the meter data in a proprietary AMI system needs to be addressed so that only authorized applications and personnel can access the data.

Service oriented architecture (SOA) appliances are network devices to enable integration and help provide security at the Web services message level. These typically include an integration device, which streamlines SOA infrastructures; an XML accelerator, which offloads XML processing; and an XML security gateway, which helps provide message-level, Web-services security. A security gateway helps to ensure that only authorized applications are allowed to access the data, whether an IP meter or an IED. SOA appliance security features complement the SOA security management capabilities of software.
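At its core, the message-level authorization such a gateway enforces reduces to a policy check on each request. The sketch below is illustrative only; the operations, principals and policy table are invented, and a real gateway would evaluate signed Web-services credentials rather than bare strings.

```python
# Hypothetical gateway policy: which application may invoke which
# operation. In a real SOA appliance this would be externalized policy,
# not a hard-coded set.
ALLOWED = {
    ("meter_data_read", "billing_app"),
    ("meter_data_read", "outage_analytics"),
}

def authorize(operation: str, principal: str) -> bool:
    """Admit a request only if the (operation, principal) pair is
    explicitly permitted - default deny."""
    return (operation, principal) in ALLOWED

print(authorize("meter_data_read", "billing_app"))    # True
print(authorize("meter_data_read", "random_client"))  # False
```

Default-deny is the design choice worth noting: an unknown application touching meter data is rejected unless policy says otherwise.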

Proper architectures could address dynamic, trusted virtual security domains, and be combined not only with intrusion protection systems but also with anomaly detection systems. If hackers can introduce viruses in data (such as malformed video images that leverage faults in media players), then similar concerns should be under discussion for smart grid data. Is messing with 300 megawatts (MW) of demand response much different from cyber attacking a 300 MW generator?


Analytics

A smart grid cynic might say, “Who is going to look at all of this new data?” That is where analytics comes in, supporting the processing, interpretation and correlation of the flood of new grid observations. One part of the analytics would be performed by existing applications; this is where data models and integration play a key role. Another part of the analytics dimension lies with new applications and the ability of engineers to use a workbench to create their own customized analytics dashboards in a self-service model.

Many utilities have power system engineers in a back office using spreadsheets; part of the smart grid concept is that all data is available to the community, which can use modern tools to analyze and predict grid operation. Analytics may need a dedicated data bus, separate from an enterprise service bus (ESB) or enterprise SOA bus, to meet the timeliness and quality-of-service requirements of operational analytics.

A two-tier or three-tier (if one considers the substations) bus is an architectural approach that segregates data by speed while still maintaining the interconnections that support a holistic view of the operation. Connections to standard industry tools such as ABB’s NEPLAN® or Siemens Power Technologies International’s PSS®E, or general tools such as MATLAB, should be considered at design time, rather than as an additional expense commitment after smart grid commissioning.


Integration

Once data is sensed, securely communicated, modeled and analyzed, the results need to be applied for business optimization. This means new smart grid data gets integrated with existing applications, and metadata locked in legacy systems is made available to provide meaningful context.

This is typically accomplished by enabling systems as services per the classic SOA model. However, issues of common data formats, data integrity and name services must be considered. Data integrity includes verification and cross-correlation of information for validity, and designation of authoritative sources and specific personnel who own the data.

Name services addresses the common issue of an asset – whether transformer or truck – having multiple names in multiple systems. An example might be a substation that has a location name, such as Walden; a geographic information system (GIS) identifier such as latitude and longitude; a map name such as nearest cross streets; a capital asset number in the financial system; a logical name in the distribution system topology; an abbreviated logical name to fit in the distribution management system graphical user interface (DMS GUI); and an IP address for the main network router in the substation.

Different applications may know new data by association with one of those names, and that name may need translation to be used in a query with another application. While rewriting the applications to a common model may seem appealing, it may very well send a CIO into shock. While the smart grid should help propagate intelligence throughout the utility, this doesn’t necessarily mean to replace everything, but it should “information-enable” everything.
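A minimal name service can be sketched as a single cross-reference table plus a translation function, so that each application keeps using the name it already knows while edits happen in one place. All names and field values below are hypothetical.

```python
# Hypothetical cross-reference table for one substation. Each column is
# the name some system already uses for the same asset.
NAME_SERVICE = [
    {
        "canonical": "SUB-0042",
        "location_name": "Walden",
        "gis": (42.3601, -71.0589),
        "capital_asset_no": "CA-778812",
        "dms_gui": "WLDN",
        "router_ip": "10.42.0.1",
    },
]

def translate(value, from_field, to_field):
    """Look up an asset by the name one system uses and return the
    name another system expects; None if the asset is unknown."""
    for entry in NAME_SERVICE:
        if entry.get(from_field) == value:
            return entry.get(to_field)
    return None

print(translate("Walden", "location_name", "dms_gui"))  # WLDN
```

A query arriving with the location name can thus be rewritten on the fly for the DMS GUI, without touching either application.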

Interoperability is essential at both the service level and the application level. Some vendors focus more on the service level, but consider, for example, making a cell phone call from the U.S. to France: your voice data may well be code division multiple access (CDMA) in the U.S., travel by microwave and fiber along its path, and emerge in France in a global system for mobile communications (GSM) environment, yet your speech – the “application-level data” – is retained transparently (though technology does not yet address accents!).


Hardware

The world of computerized solutions is not about software alone. For instance, AMI storage consolidation addresses the concern that the volume of data coming into the utility will be increasing exponentially. As more meter data can be read in an on-demand fashion, data analytics will be employed to properly understand it all, requiring a sound hardware architecture to manage, back up and feed the data into the analytics engines. In particular, storage is needed in the head-end systems and the meter data management system (MDMS).

Head-end systems pull data from the meters to provide management functionality, while the MDMS collects data from head-end systems and validates it. Then the data can be used by billing and other business applications. Data in both the head-end systems and the master copy of the MDMS is replicated into multiple copies for full backup and disaster recovery. For the MDMS, the master database that stores all the aggregated data is replicated for other business applications, such as a customer portal or data analytics, so that the master copy of the data is not tampered with.
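A first-pass validation step of the kind an MDMS applies before data reaches billing can be sketched as below. The checks and the threshold are invented for illustration; production systems apply far richer validation, estimation and editing rules.

```python
def validate_readings(readings, max_jump_kwh=50.0):
    """Toy meter-data validation pass (threshold invented): flag
    negative registers and implausible interval-to-interval jumps.
    Returns the indices of suspect readings."""
    flagged = []
    prev = None
    for i, kwh in enumerate(readings):
        if kwh < 0 or (prev is not None and abs(kwh - prev) > max_jump_kwh):
            flagged.append(i)
        prev = kwh
    return flagged

# Reading 3 spikes implausibly, and reading 4 drops back just as fast.
print(validate_readings([10.0, 12.5, 11.9, 300.0, 12.2]))  # [3, 4]
```

Flagged intervals would go to an estimation step rather than straight to a bill.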

Since the smart grid essentially performs in real time, and the electricity business is non-stop, one must think of hardware and software solutions as needing to be fail-safe, with automated redundancy. The AMI data especially needs to be reliable. The key factors then become: operating system stability; hardware true memory access speed and range; server and power supply reliability; file system redundancy, such as a journaled file system (JFS); and techniques such as FlashCopy to provide a point-in-time copy of a logical drive.

FlashCopy can be useful in speeding up database hot backups and restores. VolumeCopy can extend the replication functionality by providing the ability to copy the contents of one volume to another. Enhanced remote mirroring (Global Mirror, Global Copy and Metro Mirror) can provide the ability to mirror data from one storage system to another over extended distances.


Those are seven key ingredients for designing or evaluating a recipe for success with regard to implementing the smart grid at your utility. Addressing these dimensions will help achieve a solid foundation for a comprehensive smart grid computing system architecture.

Leveraging the Data Deluge: Integrated Intelligent Utility Network

If you define a machine as a series of interconnected parts serving a unified purpose, the electric power grid is arguably the world’s largest machine. The next-generation version of the electric power grid – called the intelligent utility network (IUN), the smart grid or the intelligent grid, depending on your nationality or information source – provides utilities with enhanced transparency into grid operations.

Considering the geographic and logical scale of the electric grid from any one utility’s point of view, a tremendous amount of data will be generated by the additional “sensing” of the workings of the grid provided by the IUN. This output is often described as a “data flood,” and the implication that businesses could drown in it is apropos. For that reason, utility business managers and engineers need analytical tools to keep their heads above water and obtain insight from all this data. Paraphrasing the psychologist Abraham Maslow, the “hierarchy of needs” for applying analytics to make sense of this data flood could be represented as follows (Figure 1).

  • Insight represents decisions made based on analytics calculated using new sensor data integrated with existing sensor or quasi-static data.
  • Knowledge means understanding what the data means in the context of other information.
  • Information means understanding precisely what the data measures.
  • Data represents the essential reading of a parameter – often a physical parameter.

In order to reap the benefits of accessing the higher levels of this hierarchy, utilities must apply the correct analytics to the relevant data. One essential element is integrating the new IUN data with other data over the various time dimensions. Indeed, it is analytics that allows utilities to truly benefit from the enhanced capabilities of the IUN compared to the traditional electric power grid. Analytics can consist solely of calculations (such as measuring reactive power), or they can be rule-based (such as rating a transformer as “stressed” if it runs at more than 120 percent of nameplate rating over a two-hour period).
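Both flavors can be sketched in a few lines: a calculated analytic (reactive power from voltage, current and phase angle) and a rule-based one (the 120 percent, two-hour stress rule mentioned above, with an assumed half-hour sample interval).

```python
import math

def reactive_power(v_rms, i_rms, phase_angle_deg):
    """Calculated analytic: Q = V * I * sin(phi)."""
    return v_rms * i_rms * math.sin(math.radians(phase_angle_deg))

def transformer_stressed(load_samples_mva, nameplate_mva, hours_per_sample=0.5):
    """Rule-based analytic (sample interval is an assumption): flag the
    unit 'stressed' if load exceeds 120% of nameplate for a cumulative
    two hours or more."""
    hours_over = sum(
        hours_per_sample for s in load_samples_mva if s > 1.2 * nameplate_mva
    )
    return hours_over >= 2.0

print(round(reactive_power(240.0, 10.0, 30.0), 1))            # 1200.0
print(transformer_stressed([31, 31, 30, 31, 31], nameplate_mva=25))
```

The calculation needs only sensor data; the rule also needs the nameplate rating from an asset system, which is exactly the integration point the text describes.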

The data to be analyzed comes from multiple sources. Utilities have for years had supervisory control and data acquisition (SCADA) systems in place that employ technologies to transmit voltage, current, watts, volt-amperes reactive (VARs) and phase angle via leased telephone lines at 9,600 baud, using the distributed network protocol (DNP3). Utilities still need to integrate this basic information from these systems.

In addition, modern electrical power equipment often comes with embedded microprocessors capable of generating useful non-operational information. This can include switch closing time, transformer oil chemistry and arc durations. These pieces of equipment – generically called intelligent electrical devices (IEDs) – often have local high-speed sequences of event recorders that can be programmed to deliver even more data for a report for post-event analysis.

An increasing number of utilities are beginning to see the business case for implementing an advanced metering infrastructure (AMI). A large-scale deployment of such meters would also function as a fine-grained edge sensor system for the distribution network, providing not only consumption but also voltage, power quality and load phase angle information. In addition, an AMI can be a strategic platform for initiating a program of demand-response load control. Indeed, some innovative utilities are considering two-way AMI meters that include a wireless connection such as ZigBee to the consumer’s home automation network (HAN), providing even finer detail on load usage and potential controllability.

Companies must find ways to analyze all this data, both from explicit sources such as IEDs and implicit sources such as AMI or geographic information systems (GIS). A crucial aspect of IUN analysis is the ability to integrate conventional database data with time-synchronized data, since an isolated analytic may be less useful than no analytic at all.


Categories of Analytics

There are many different categories of analytics that address the specific needs of the electric power utility in dealing with the data deluge presented by the IUN. Some depend on the state regulatory environment, which not only imposes operational constraints on utilities but also determines the scope and effect of what analytics information exchange is required. For example, a generation-to-distribution utility – what some fossil plant owners call “fire to wire” – may have system-wide analytics that link load dispatch to generation economics, transmission line realities and distribution customer load profiles. Other utilities operate power lines only, and may not have their own generation capabilities or interact with consumers at all. Utilities like these may choose to focus initially on distribution analytics such as outage prediction and fault location.

Even the term analytics can have different meanings for different people. To the power system engineer it involves phase angles, voltage support from capacitor banks and equations that take the form “a + j*b.” To the line-of-business manager, integrated analytics may include customer revenue assurance, lifetime stress analysis of expensive transformers and dashboard analytics driving business process models. Customer service executives could use analytics to derive emergency load control measures based on a definition of fairness that could become quite complex. But perhaps the best general definition of analytics comes from the Six Sigma process mantra of “define, measure, analyze, improve, control.” In the computer-driven IUN, this would involve:

  • Define. This involves sensor selection and location.
  • Measure. SCADA systems enable this process.
  • Analyze. This can be achieved using IUN Analytics.
  • Improve. This involves grid performance optimization, as well as business process enhancements.
  • Control. This is achieved by sending commands back to grid devices via SCADA, and by business process monitoring.

The term optimization can also be interpreted in several ways. Utilities can attempt to optimize key performance indicators (KPIs) such as the system average interruption duration index (SAIDI, which is somewhat consumer-oriented), grid efficiency in terms of megawatts lost to component heating, business processes (such as minimizing outage time to repair) or meeting energy demand with minimum incremental fuel cost.
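As a concrete example of such a KPI, SAIDI is total customer-minutes of interruption divided by total customers served (the IEEE 1366 definition); the outage events below are made up for illustration.

```python
def saidi(interruptions, total_customers):
    """SAIDI = sum(customers_affected * outage_minutes) / customers served,
    yielding average interruption minutes per customer per period."""
    customer_minutes = sum(c * m for c, m in interruptions)
    return customer_minutes / total_customers

# (customers_affected, minutes) per outage event over the reporting period
events = [(1200, 90), (300, 45), (5000, 10)]
print(round(saidi(events, total_customers=50_000), 2))  # 3.43 minutes/customer
```

Tracking this one number already requires joining outage records with the customer count per feeder, which is why KPI analytics lean on integrated data.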

Although optimization issues often cross departmental boundaries, utilities may make compromises for the sake of achieving an overall strategic goal that can seem elusive or even run counter to individual financial incentives. An important part of higher-level optimization – in a business sense rather than a mathematical one – is the need for a utility to document its enterprise functions using true business process modeling tools. These are essential to finding better application integration strategies. That way, the business can monitor the advisories from analytics in the tool itself, and more easily identify business process changes suggested by patterns of online analytics.

Another aspect of IUN analytics involves – using a favorite television news phrase – “connecting the dots.” This means ensuring that a utility actually realizes the impact of a series of events on an end state, even though the individual events may appear unrelated.

For example, take complex event processing (CEP). A “simple” event might involve a credit card company’s software verifying that your credit card balance is under the limit before sending an authorization to the merchant. A “complex” event would take place if a transaction request for a given credit card account was made at a store in Boston, and another request came an hour later in Chicago. After taking into account certain realities of time and distance, the software would take an action other than approval – such as instructing the merchant to verify the cardholder’s identity.

Back in the utilities world, consideration of weather forecasts in demand-response action planning, or distribution circuit redundancy in the face of certain existing faults, can be handled by such software. The key in developing these analytics is not so much about establishing valid mathematical relationships as it is about giving a businessperson the capability to create and define rules. These rules must be formulated within an integrated set of systems that support cross-functional information. Ultimately, it is the businessperson who relates the analytics back to business processes.
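A rule of that kind can be sketched as a check over a window of otherwise unrelated events; the event kinds and the deferral logic below are purely illustrative, standing in for rules a businessperson would author in a CEP tool.

```python
from dataclasses import dataclass, field

@dataclass
class Event:
    kind: str              # e.g. "fault", "weather_forecast", "dr_request"
    detail: dict = field(default_factory=dict)

def dr_action_feasible(events):
    """Toy complex-event rule (logic is illustrative): defer a
    demand-response action when a heat wave is forecast AND a feeder
    fault already limits circuit redundancy. Neither event alone
    blocks the action - only the combination does."""
    heat = any(e.kind == "weather_forecast" and e.detail.get("heat_wave")
               for e in events)
    fault = any(e.kind == "fault" for e in events)
    return not (heat and fault)

window = [
    Event("weather_forecast", {"heat_wave": True}),
    Event("fault", {"feeder": "F-17"}),
    Event("dr_request", {"mw": 300}),
]
print(dr_action_feasible(window))  # False
```

The value is in connecting the dots: each event in the window looks routine on its own, and only the correlation triggers a different action.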


Timeliness

Time can be a critical variable in successfully using analytics. In some cases, utilities require analytics to be responsive to the electric power grid’s need to input, calculate and output in an actionable time frame.

Utilities often have analytics built into functions in their distribution management or energy management systems, as well as individual analytic applications, both commercial and home-grown. And some utilities are still making certain decisions by importing data into a spreadsheet and using a self-developed algorithm. No matter what the source, the architecture of the analytics system should provide a non-real-time “bus,” often a service-oriented architecture (SOA) or Web services interface, but also a more time-dependent data bus that supports common industry tools used for desktop analytics within the power industry.

It’s important that everyone in the utility has internally published standards for interconnecting their analytics to the buses, so that all authorized stakeholders can access them. Utilities should also set enterprise policy for special connectors, manual entry and duplication of data – otherwise known as SOA governance.

The easier it is for utilities to use the IUN data, the less likely it is that their engineering, operations and maintenance staffs will be overwhelmed by the task of actually acquiring the data. Although the term “plug and play” has taken on certain negative connotations – largely due to the fact that few plug-and-play devices actually do that – the principle of easily adding a tool is still both valid and valuable. New instances of IUN can even include Web 2.0 characteristics for the purpose of mash-ups – easily configurable software modules that link, without pain, via Web services.


Benefits of Analytics

Utilities benefit from applying analytics by making the best use of integrated utility enterprise information and data models, and by unlocking employee ideas or hypotheses about ways to improve operations. Often, analytics are also useful in helping employees identify suspicious relationships between data. The widely lamented “aging workforce” issue typically involves the loss of senior staff who can visualize relationships that aren’t formally captured, and who were able to make connections that others didn’t see. Higher-level analytics can partly offset the impact of the aging-workforce brain drain.

Another type of analytics is commonly called “business intelligence” (BI). But although a number of best-selling general-purpose BI tools are commercially available, utilities need to ensure that the tools have access to the correct, unique, authoritative data. Upon first installing BI software, there’s sometimes a tendency among new users to quickly assemble a highly visual dashboard – without regard to the integrity of the data they’re importing into the tool.

Utilities should also create enterprise data models and data dictionaries to ensure the accuracy of the information being disseminated throughout the organization. After all, utilities frequently use analytics to create reports that summarize data at a high level. Yet some fault detection schemes – such as identifying problems in buried cables – may need original, detailed source data. For that reason utilities must have an enterprise data governance scheme in place.

In newer systems, data dictionaries and models can be provided by a Web service. But even if the dictionary consists of an intermediate lookup table in a relational database, the principles still hold: Every process and calculated variable must have a non-ambiguous name, a cross-reference to other major systems (such as a distribution management system [DMS] or geographic information system [GIS]), a pointer to the data source and the name of the person who owns the data. It is critical for utilities to assign responsibility for data accuracy, validation, source and caveats at the beginning of the analytics engineering process. Finding data faults after they contribute to less-than-correct results from the analytics is of little use. Utilities may find data scrubbing and cross-validation tools from the IT industry to be useful where massive amounts of data are involved.
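A data-dictionary entry honoring those principles can be as small as the sketch below; every field value is invented for illustration, and a real dictionary would sit behind a Web service or lookup table as described above.

```python
# Minimal data-dictionary entry per the principles in the text: an
# unambiguous name, cross-references to major systems (DMS, GIS), a
# pointer to the data source, and a named owner. Values are invented.
DATA_DICTIONARY = {
    "feeder17.load_mw": {
        "description": "15-minute average real power on feeder 17",
        "units": "MW",
        "cross_refs": {"DMS": "FDR_17_LOAD", "GIS": "feeder/17"},
        "source": "SCADA historian",
        "owner": "j.smith@example-utility.com",
    },
}

def lookup(name):
    """Resolve a governed variable; refuse names nobody has registered,
    so ungoverned data cannot silently enter an analytic."""
    entry = DATA_DICTIONARY.get(name)
    if entry is None:
        raise KeyError(f"{name!r} is not governed; register it before use")
    return entry

print(lookup("feeder17.load_mw")["owner"])
```

Refusing unregistered names is the governance hook: responsibility for accuracy and validation is assigned before the variable is ever used.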

Utilities have traditionally used simulation primarily as a planning tool. However, with the continued application of Moore’s law, the ability to feed a power system simulation with real-time data and solve a state estimation in real time can result in an affordable crystal ball for predicting problems, finding anomalies or performing emergency problem solving.


The Role of Standards

The emergence of industry-wide standards is making analytics easier to deploy across utility companies. Standards also help ease the path to integration. After all, most electrons look the same around the world, and the standards arising from the efforts of Kirchhoff, Tesla and Maxwell have been broadly adopted globally. (Contrary views from the quantum mechanics community will not be discussed here!) Indeed, having a documented, self-describing data model is important for any utility hoping to make enterprise-wide use of data for analytics; using an industry-standard data model makes the analytics more easily shareable. In an age of greater grid interconnection, more mergers and acquisitions, and staff shortages, utilities’ ability to reuse and share analytics and create tools on top of standards-based data models has become increasingly important.

Standards are also important when interfacing to existing utility systems. Although the IUN may be new, data on existing grid apparatus and layout may be decades old. By combining the newly added grid observations with the existing static system information to form a complete integration scenario, utilities can leverage analytics much more effectively.

When deploying an IUN, there can be a tendency to use just the newer, sensor-derived data to make decisions, because one knows where it is and how to access it. But using standardized data models makes incorporating existing data less of an issue. There is nothing wrong with creating new data models for older data.


An Ice-Cream Model

To understand the importance of analytics in relation to the IUN, imagine an ice-cream model (pick your favorite flavor). At the lowest level we have data: the ice cream is 30 degrees. At the next level we have information: you know that it is 30 degrees on the surface of the ice cream, and that it will start melting at 32 degrees. At the next level we have knowledge: you’re measuring the temperature of the middle scoop of a three-scoop cone, and therefore when it melts, the entire structure will collapse. At the insight level we bring in other knowledge – such as that the ambient air temperature is 80 degrees, and that the surface temperature of the ice cream has been rising 0.5 degrees per minute since you purchased it. Then the gastronomic analytics activate and take preemptive action, causing you to eat the whole cone in one bite, because the temporary frozen-teeth phenomenon is less of a business risk than having the scoops melt and fault to ground.