Quality Improvement Assessments: Improving Quality, Reliability, and Efficiency of Utility Operational Systems


Utility organizations face increasing pressure from regulators, safety organizations, and their customers to maintain modern systems that provide real-time information updates and ever-higher levels of reliability. In response, many utilities engage in large projects to modernize their data or software systems. With substantial budget demands and a heavy burden on internal staff, these projects can be overwhelming to plan and execute.

Quality Improvement Assessments

Before embarking on a large-scale modernization effort, it is important to develop a clear picture of an organization's software and systems so decision makers can identify the specific components that can be improved to achieve higher reliability and efficiency and to support more modern platforms.

Many utilities are choosing to execute pre-emptive Quality Improvement Assessments to address the increasing demand for continuous improvement. These assessments can be the starting point for planning a phased approach to system, software, and process modernization.

Some of the greatest gains for utilities looking at system modernization can be realized by focusing their Quality Improvement Assessments in the following target areas: 

  1. Upgrade management patterns
  2. Modern platforms providing scalable infrastructure
  3. Advanced technology patterns
  4. Enhanced monitoring, reporting, and transparency

Setting a Pattern for Uninterrupted Technology Upgrade/Enhancement

As applications and platforms evolve, organizations encounter pressures from regulators and internal business units to continuously adopt and implement new technology. The cost and complexity of large, multi-year technology or application upgrades can be prohibitive for many utility organizations. 

One alternative that counters these challenges is to adopt continuous delivery pipelines within IT operations and across functional business units. This pattern of software and system delivery treats IT upgrades and enhancements as a continuous, repetitive activity rather than a periodic event.

The first step in this approach is to build an enterprise roadmap with multiple business releases per year to accommodate new technologies, new integrations, and upgrades to existing ones. The roadmap should be revisited periodically: release priorities may shift with business conditions, and technologies or integrations may need to be added or removed.

This delivery approach allows the business to manage system and software upgrades as a series of aggregated, incremental advancements. It prioritizes implementation of new technology as a constant process and avoids getting locked into software and system versions by long-running, big-bang replacements or infrequent upgrade projects. In addition, the iterative process of replacing and upgrading systems and software allows for repeated opportunities to assess the quality, reliability, and value of existing solutions and business workflows.

Utility organizations adopting this IT delivery style can achieve a sustainable upgrade pattern that includes the continuous assessment and improvement of the quality, reliability, and effectiveness within their existing software and data systems.

Modern Platforms Lending to Phased Upgrades and Improvements

Modern hosting platforms and hosted solutions provide additional opportunities for organizations to execute a phased approach to continuous improvement. As cloud hosting solutions become more secure and convenient, they can provide temporary, parallel environments in which upgrade or enhancement projects run while existing systems remain in operation.

Cloud-hosted infrastructure is a convenient, efficiently deployed, and cost-effective way to provide the temporary environments needed when conducting parallel releases within your organization. Additional environments can be quickly deployed and configured. Conversely, these hosted environments can also be quickly decommissioned to maximize cost savings when release cycles are complete.
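The lifecycle described above can be sketched as a simple model. This is an illustrative toy, not a real provisioning tool: the `EnvironmentPool` class and environment names are assumptions, and actual cloud provider APIs are abstracted away entirely.

```python
class EnvironmentPool:
    """Toy model of on-demand release environments: provision a parallel
    environment for a release cycle, then decommission it when the cycle
    completes so cost tracks actual use. Real implementations would call
    a cloud provider or infrastructure-as-code tool here."""

    def __init__(self):
        self.active = {}

    def provision(self, name: str, purpose: str):
        # In practice: create VMs/containers, configure, load test data.
        self.active[name] = purpose

    def decommission(self, name: str):
        # In practice: tear down resources so billing stops.
        self.active.pop(name, None)

pool = EnvironmentPool()
pool.provision("upgrade-qa", "parallel upgrade testing")
# ... the release cycle runs against 'upgrade-qa' while production stays live ...
pool.decommission("upgrade-qa")
print(len(pool.active))  # -> 0
```

The point of the sketch is the shape of the workflow: environments exist only for the duration of a release cycle, which is what makes parallel releases affordable.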

A phased pattern for incorporating system/software improvements is particularly useful for organizations with multiple, disparate jurisdictions or regions. Having segmented entities within an organization provides opportunities for more manageable and efficient upgrade efforts and release cycles. New functionality and capabilities can be developed, tested, and released by region, reducing the overhead and overall risk to the organization. 

After a capability or enhancement has been delivered to one region, the cost, complexity, and risk of delivering it to the remaining jurisdictions are significantly reduced. By compartmentalizing the release of improvements and new capabilities, your organization can reduce the long-term costs of upgrades and enhancements. In addition, each iterative release cycle is another opportunity to evaluate and improve the quality, reliability, and efficiency of the software, systems, or processes involved.

This pattern can be challenging to implement on procured, internally managed hardware. Continuous improvement and delivery patterns create fluctuating demand for the infrastructure and environments that support parallel release activities. Scalable, cloud-hosted infrastructure provides the platform flexibility to meet this demand.

Evaluating Opportunities for Advanced Technology Patterns

Regulators and customers now expect near real-time reporting and status updates on critical issues and assets; up-to-the-minute visibility is becoming the standard. To meet this expectation, organizations look for solutions and improvements that reduce the latency of their systems, solutions, and processes. Advancements in common practices and technology solutions are giving utility companies opportunities to reduce the latency inherent in traditional systems.

Integration mechanisms that synchronize assets and data between systems have traditionally implemented a passive synchronization pattern: status updates applied in one system are periodically sent to integrated data systems. Unfortunately, this passive pattern builds latency into the system by design.

A solution quality assessment would identify these mechanisms and look for software and solutions that implement more active integration patterns, which synchronize data changes across integrated systems at a significantly higher velocity.

With more of today's platforms and solutions supporting event-driven paradigms, organizations have an opportunity to apply improvements that significantly reduce the time it takes for data to synchronize across the enterprise. Apache Kafka is an example of an event streaming platform, often paired with complex event processing (CEP), that many organizations are investing in to support higher visibility, in-flight computations, and reduced latency across their business.
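The difference between passive and active integration comes down to push versus poll. The following minimal sketch shows the active, event-driven shape in plain Python: a change published to a topic reaches every subscriber immediately, with no polling interval. The topic name, asset ID, and `outage_mirror` structure are hypothetical; a platform like Kafka adds durability, partitioning, and scale on top of this basic idea.

```python
from collections import defaultdict

class EventBus:
    """Minimal in-process event bus illustrating an active integration
    pattern: changes are pushed to subscribers as they happen, rather
    than waiting for a periodic batch synchronization."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Every subscriber sees the change as soon as it is published.
        for handler in self._subscribers[topic]:
            handler(event)

# Hypothetical downstream system mirroring asset status updates.
outage_mirror = {}

bus = EventBus()
bus.subscribe("asset-status",
              lambda e: outage_mirror.update({e["asset_id"]: e["status"]}))

# A status change in the source system propagates immediately --
# no polling interval, so no designed-in latency.
bus.publish("asset-status", {"asset_id": "XFMR-1041", "status": "de-energized"})
print(outage_mirror["XFMR-1041"])  # -> de-energized
```

In a passive pattern, the `outage_mirror` side would instead wake up on a schedule and ask for changes, and every update would wait out the remainder of the polling interval before becoming visible.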

Many mobile, web, and cloud-based services offer mechanisms such as webhooks, which immediately send changes applied in one interface to the URL of an event processor. Evaluating an organization's current integration patterns and technical solutions reveals opportunities to harness these advanced patterns and solutions to reduce latency across the landscape.
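On the receiving end, a webhook is just an HTTP POST whose body describes the change. The sketch below shows the dispatch step only, with the HTTP server omitted for brevity; the `{"event": ..., "data": ...}` payload shape and the `work_order.updated` event name are assumptions for illustration, since every service defines its own webhook schema.

```python
import json

def handle_webhook(raw_body: str, registry: dict) -> str:
    """Parse a webhook POST body and dispatch to the matching processor.
    In production this function would sit behind an HTTP endpoint and
    should also verify the sender's signature before trusting the body."""
    payload = json.loads(raw_body)
    handler = registry.get(payload["event"])
    if handler is None:
        return "ignored"   # unknown event types are skipped, not errors
    handler(payload["data"])
    return "ok"

work_orders = []
registry = {"work_order.updated": work_orders.append}

status = handle_webhook(
    json.dumps({"event": "work_order.updated",
                "data": {"id": 7, "state": "closed"}}),
    registry,
)
print(status)  # -> ok
```

Because the sending service initiates the call the moment the change occurs, the receiver learns about it in roughly one network round trip instead of one polling cycle.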

Consolidation of integration technologies is another area in which many companies are realizing benefits. Traditionally, companies maintain an integration layer built from multiple technologies and solutions, which becomes difficult and costly to maintain.

Some organizations are resolving this cost and complexity by consolidating within this layer of the system. API management platforms, such as MuleSoft, Apigee, and many others, are becoming popular with utilities as a way to centralize integrations between systems into a single, capable tier with advanced monitoring, logging, and alerting capabilities.

By centralizing integrations between systems into fewer components, many organizations reduce the number of integration failures that go undetected or unreported. An assessment of your existing integrations and the technologies used within the integration tier can determine whether a consolidated solution would help increase the visibility and transparency of your enterprise systems.
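The visibility benefit comes from having a single choke point that every cross-system call passes through. This toy sketch illustrates the idea in a few lines; the `IntegrationGateway` class and system names are hypothetical, and a real platform such as MuleSoft or Apigee would add routing, policy enforcement, and dashboards on top.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("integration-gateway")

class IntegrationGateway:
    """Toy sketch of a consolidated integration tier: every cross-system
    call passes through one component that logs and records failures,
    so broken integrations surface instead of failing silently."""

    def __init__(self):
        self.failures = []

    def call(self, system: str, operation, *args):
        try:
            result = operation(*args)
            log.info("call to %s succeeded", system)
            return result
        except Exception as exc:
            # Centralized record of every failure, regardless of which
            # pair of systems was talking -- nothing goes undetected.
            self.failures.append((system, str(exc)))
            log.error("call to %s failed: %s", system, exc)
            raise

gateway = IntegrationGateway()
gateway.call("gis", lambda: "asset list")        # succeeds
try:
    gateway.call("outage-mgmt", lambda: 1 / 0)   # fails, but is recorded
except ZeroDivisionError:
    pass
print(len(gateway.failures))  # -> 1
```

With dozens of point-to-point integrations, each one logs (or fails to log) in its own way; with one gateway, a single failure record and alerting policy covers them all.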

Assessing Opportunities for Enhanced Monitoring, Reporting, and Transparency

Amid increasing occurrences of extreme weather events and expectations of greater transparency, regulating authorities are tightening the granularity and requirements of monitoring, reporting, and data integrity. Consolidated monitoring solutions are being deployed to make monitoring and logging more efficient and more easily managed across all systems.

Consolidated monitoring components can support more advanced alerting by applying refined condition filters and sending real-time alerts to IT administrators and business owners when failures or priority events occur across the enterprise. Mechanisms for higher data quality are also becoming available: refined data models, along with enhanced runtime validation components, are creating much higher levels of data integrity.
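A refined condition filter is simply a predicate attached to an alert rule, evaluated against each incoming event. The sketch below shows the shape of that evaluation; the rule names and the `severity`/`system` event fields are illustrative assumptions, not any particular monitoring product's schema.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class AlertRule:
    name: str
    condition: Callable[[dict], bool]   # the refined condition filter

def evaluate(event: dict, rules: list) -> list:
    """Return the names of rules whose condition filter matches the
    event -- the core of condition-filtered alerting, stripped of the
    notification delivery a real monitoring platform would perform."""
    return [r.name for r in rules if r.condition(event)]

rules = [
    AlertRule("page-oncall", lambda e: e["severity"] >= 3),
    AlertRule("notify-owner", lambda e: e["system"] == "scada"),
]

fired = evaluate({"severity": 4, "system": "scada"}, rules)
print(fired)  # -> ['page-oncall', 'notify-owner']
```

Filtering at the rule level is what keeps a consolidated monitor useful: administrators see only events that match a condition they chose, instead of the raw firehose from every system.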

A good example of this trend in the utility GIS landscape is Esri's UPDM data model and ArcGIS Utility Network. The combination of a refined data model, an enhanced rule base, and in-edit validation mechanisms supports an organization's movement toward higher data quality within its enterprise GIS and related systems.
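In-edit validation means rules run before an edit is committed, so bad data is rejected at the source. The following is a simplified, hypothetical analogue of that idea, not Esri's API: the field names (`diameter_in`, `material`) and rule messages are assumptions made for illustration.

```python
def validate_edit(feature: dict, rules: list) -> list:
    """Run validation rules against a feature at edit time, returning
    the messages for every rule that fails. An empty list means the
    edit may be committed; a non-empty list means it is rejected."""
    return [msg for rule, msg in rules if not rule(feature)]

# Hypothetical attribute rules for a gas pipe feature class.
rules = [
    (lambda f: f.get("diameter_in", 0) > 0, "pipe diameter must be positive"),
    (lambda f: f.get("material") in {"steel", "pe", "cast-iron"},
     "unknown material"),
]

errors = validate_edit({"diameter_in": 0, "material": "pe"}, rules)
print(errors)  # -> ['pipe diameter must be positive']
```

Because the check happens during the edit rather than in a downstream cleanup job, invalid records never enter the database in the first place, which is what drives the data-integrity gains described above.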

Enforcing higher standards on data as it is entered or migrated increases the reliability of reporting. Together, consolidated monitoring solutions, enhanced alerting, and improved data integrity create the level of transparency increasingly demanded by business leaders and regulatory commissions.

Benefits of Quality Improvement Assessments

Business systems that support energy organizations can be complex, highly distributed, and integrated with many other components within a utility company's IT landscape. This complexity can make it challenging for an organization to monitor, report on, apply enhancements to, and adapt to evolving regulatory requirements and IT advancements. Many organizations are choosing to proactively execute Quality Improvement Assessments that allow software developers and solution architects to evaluate existing software and data solutions for opportunities to improve reliability, stability, efficiency, security, and quality.

Findings derived from performing an effective Quality Improvement Assessment can provide utility organizations with: 

Informed Decision Making – Better understanding of quality, reliability, or traceability gaps within existing software or data solutions that may increase risks of being out of compliance with regulatory requirements

Increased Effectiveness and Efficiency – A means of identifying opportunities to implement improved workflows, patterns, or technical solutions to achieve higher operational efficiency, reducing internal costs

Greater ROI – Opportunities to reduce complexity, redundancy, or long-term maintenance efforts, lowering the total cost of ownership for the organization


10 years at UDC / 32 years in GIS

Tom Helmer

An Executive Solution Architect for UDC and a SAFe® 4 Certified Agilist (SA), Tom has extensive experience designing and integrating utility solutions around GIS and related technologies, including advanced metering infrastructure and smart grid solutions.

10 years at UDC / 27 years in GIS

Ben Dwinal

Ben is a Solution Architect and certified GISP with extensive experience in remote sensing, geospatial intelligence, military and defense technologies, application development, and related technologies.