Confident Engineering Starts Here: Inside Nuclearn’s Agentic 50.59 Workflows

Few areas in nuclear engineering are as foundational—or as complex—as 10 CFR 50.59. Whether evaluating a plant modification, equipment upgrade, or digital system implementation, the question remains the same: Does this change require prior NRC approval? For decades, answering that question has required long hours of research, interpretation, and justification—often across fragmented guidance, buried precedent, and aging internal documentation.

With the launch of Agentic 50.59 Workflows, Nuclearn is changing that. Part of our Engineering AI solution suite, these new 50.59 capabilities bring nuclear-specific AI directly into the licensing workflow—helping engineers and licensing professionals move faster, with more clarity, and without compromising traceability or safety.

“Every minute spent searching through fragmented documentation is a minute not spent on what matters most—safe, reliable nuclear operations,” said Brad Fox, CEO and Co-Founder of Nuclearn.

“Our 50.59 solution transforms weeks of manual research into minutes of AI-powered analysis, giving engineers the confidence to move forward with complete regulatory clarity. This isn’t just about efficiency—it’s about empowering the nuclear workforce to focus on innovation while maintaining the highest safety standards.”


The Problem: Complex, Manual, and Costly

The traditional 50.59 process relies heavily on institutional memory, document repositories, and scattered evaluations. Teams tasked with screening a proposed change must often:

  • Manually search through decades of precedent
  • Cross-check NRC guidance, internal evaluations, and site-specific bases
  • Align interpretations that vary between individuals or sites
  • Spend valuable engineering time on documentation—not decision-making

This approach isn’t just inefficient. It slows innovation, increases the risk of misinterpretation, and puts undue burden on engineers who should be focused on solving real-world problems.


The Solution: Agentic 50.59 Workflows

With Agentic 50.59, Nuclearn introduces a transformative way to evaluate whether a proposed change impacts the facility’s licensing basis. This isn’t a form fill or a static checklist—it’s a dynamic, interactive workflow powered by AI built for nuclear.

Here’s how it works:

  • 🧠 Semantic Search: Quickly surface relevant past 50.59 evaluations, NRC precedent, and guidance based on meaning—not just keywords.
  • 🔗 Cited Authority: Automatically reference regulatory sources, site-specific licensing bases, and industry standards to support decision-making.
  • 💬 Conversational Interface: Ask detailed questions and receive context-aware responses from an AI trained on real nuclear data.
  • 📁 Traceability: Capture every step of the evaluation process, including justification, sourcing, and reasoning—ready for audit or peer review.

The result? A workflow that reduces hours of research into minutes of meaningful analysis, and one that scales with your team—whether you’re evaluating a single change or hundreds during an outage or upgrade window.


More Than a Feature—It’s a Foundation

Agentic 50.59 is more than just a search tool or a digital form. It’s a new foundation for how engineering teams can tackle complex licensing workflows using purpose-built AI.

It serves as the cornerstone of Engineering AI, a larger solution suite designed to support engineering, QA, and licensing teams with smarter tools for repetitive, high-stakes tasks.

Other key products in the Engineering AI suite include:


📋 Reportability Screener

Quickly pre-screen CRs and proposed changes against NRC reporting criteria. Offers rapid insights into whether a condition may require reporting—reducing uncertainty and speeding up response.


🔎 CR Smart Search

AI-driven search that allows users to explore past condition reports based on similarity, outcome, and resolution—ideal for CAP, QA, and engineering teams trying to learn from precedent.


🧾 Document Comparison Tool

Compare procedures, licensing documents, design packages, or evaluations side-by-side. Highlights structural and content differences for easier review, traceability, and QA.


Together, these solutions create a connected workflow environment that supports end-to-end engineering decisions—especially in high-impact areas like plant modifications, configuration control, and licensing justification.


Built for What’s Next

Nuclearn built Agentic 50.59 in partnership with top utilities and engineering firms—teams who know what it means to screen hundreds of modifications in a matter of weeks during an outage or digital upgrade.

They asked for a solution that could:

  • Reduce the manual burden of research
  • Help newer engineers ramp up faster
  • Preserve institutional knowledge
  • Deliver fast, consistent results that hold up to regulatory scrutiny

And that’s exactly what Engineering AI delivers.


Who Benefits?

The release of this product comes at a critical time for the industry. Many plants are navigating:

  • License renewal and extension projects
  • Power uprates and equipment modifications
  • Aging management strategies
  • Fleet-wide standardization of licensing practices

In all of these efforts, 50.59 plays a defining role. Nuclearn’s solution doesn’t remove engineering judgment—it strengthens it with data, precedent, and intelligent workflows that match the pace and complexity of the industry today.


Smarter Doesn’t Mean Riskier

At Nuclearn, we know that in nuclear, speed means nothing without safety. That’s why the Agentic 50.59 workflows are built with Part 810 compliance, on-premise deployment options, and full traceability baked in from the start. You get efficiency, yes—but never at the cost of oversight, documentation, or regulatory alignment.


See It in Action

The Agentic 50.59 capability is now available as part of the Nuclearn Platform and can be demoed live with your engineering or licensing team.

Whether you’re a utility preparing for your next project window or an EPC firm supporting client upgrades, this is your chance to see how AI can meaningfully improve your engineering outcomes—without changing your standards.

📅 Schedule a live demo
🌐 Explore Engineering AI at: www.nuclearn.ai


Final Thought

Modernizing 50.59 isn’t just a nice-to-have—it’s essential to nuclear innovation, safety, and performance. With Agentic 50.59 Workflows, Nuclearn is giving engineers what they’ve long needed: a faster, smarter, and more transparent path to confident licensing decisions.

Because real modernization doesn’t start with a new form. It starts here—with the power to ask better questions, find better answers, and act with clarity.


Trusted by Nuclear. Built for Engineers. Powered by Nuclearn.

The Risk of Unvalidated Research: Why AtomAssist Is Built for the Work That Matters Most

When it comes to nuclear, energy, and environmental work—there’s no room for guesswork.

In today’s fast-paced professional world, where timelines are short and the information we rely on must be accurate, many teams are turning to artificial intelligence to support research and reporting. But in industries where compliance, safety, and regulatory integrity are non-negotiable, the source of that information matters just as much as the speed of the answer—if not more.

That’s where AtomAssist comes in.

Designed for engineers, field professionals, analysts, and managers in highly regulated fields like nuclear and utilities, AtomAssist was created to solve a specific problem: helping users access, understand, and trust their own documents and data—faster and more reliably than ever before.

A First-Hand Use Case from Deep Fission

During a recent session, Ingrid Nordby of Deep Fission walked through how she used AtomAssist to navigate a complex research task focused on groundwater contamination and borehole data—critical components in environmental and nuclear facility assessments.

“I was particularly interested in groundwater contamination test results,” Ingrid shared. “I had a collection of scientific articles, reports, and field data, and I uploaded everything into AtomAssist to see how it could help.”

Once the materials were in the system, Ingrid asked AtomAssist to generate summaries, extract specific insights, and even build a clear, technical narrative. The results were impressive.

“It returned exactly what I uploaded—only now it was organized and explained in a way I could use in a report,” she said. “It saved me hours of work.”

Built for Validation

What sets AtomAssist apart is its commitment to validation. In high-risk sectors, an answer is only as good as its proof—and AtomAssist ensures every output is traceable back to original, verified source documents.

Ingrid explained how easy it was to confirm where the information was coming from:
“I clicked on the ‘Sources’ tab, and it gave me all the validation information I needed. I knew the data it was referencing was the exact documentation I had uploaded.”

This level of traceability gives teams peace of mind. When regulators or internal stakeholders ask, “Where did this come from?”—the answer is a click away.

From Raw Data to Ready-to-Use Narratives

AtomAssist doesn’t just analyze documents—it helps translate them into usable content. Ingrid was able to pull results from multiple uploaded files and ask AtomAssist to build a narrative that aligned with her technical goals.

“I wasn’t just looking for information,” she said. “I wanted information I could use right away—and that’s what AtomAssist gave me.”

The narrative tools also allow for follow-up questions, refinements, and targeted insights—so if you need a version for a technical appendix, a stakeholder update, or a management summary, the system can help build each from the same core data.

Creating Reusable Knowledge Sets

In regulated industries, the same data often needs to be used across teams and departments. One of the most powerful features Ingrid used was the ability to write extracted insights into new datasets within the AtomAssist platform.

With help from the Nuclearn team, she learned how to consolidate all validated source references into a structured dataset that could be referenced again and again.

“Now I’m thinking about how to create a single-source document that my whole team can use,” Ingrid said. “Once the content is verified and structured, AtomAssist makes it easy to pull from that data in the future.”

This capability supports knowledge retention, reduces rework, and keeps everyone aligned on the same version of the truth—without the chaos of emails, folders, or uncontrolled edits.

Precision Is a Requirement, Not a Bonus

For professionals in nuclear, utilities, safety, and compliance, documentation isn’t a suggestion—it’s a system of record. Misinformation, outdated reports, or vague sourcing can have consequences ranging from delayed operations to regulatory penalties.

That’s why AtomAssist was built with precision and trust at its core. Every analysis, summary, or insight provided by the platform is grounded in what’s already approved by your organization.

It’s not searching a public database. It’s not scanning the internet. It’s referencing only the material you’ve given it—the material that meets your compliance requirements, your safety standards, and your internal review processes.

This difference is what makes AtomAssist not just useful, but essential in high-stakes environments.

Security and Compliance by Design

AtomAssist is built for deployment in secure environments. It meets the demands of on-premise requirements, data confidentiality, and Part 810 compliance.

Whether you’re a nuclear site manager, a corrective action program lead, or an engineer managing records for regulatory filings, AtomAssist respects the boundaries and expectations of your industry.

And it doesn’t require users to learn a new interface or scripting language. It works where you work—using your documents, your taxonomy, and your subject matter.

Reducing Risk and Enhancing Productivity

Ingrid’s experience underscores what so many professionals in complex industries already know: you don’t have time to double-check everything manually—but you can’t afford to get it wrong.

AtomAssist eliminates the guesswork. It enables you to pull trusted data from your own source library, validate it instantly, and build what you need with confidence.

From policies and procedures to test reports and technical briefs, AtomAssist can support:

  • Engineering & Maintenance Documentation
  • Licensing and Environmental Reports
  • Root Cause & Corrective Action Narratives
  • Outage Preparation Materials
  • Executive Summaries & Stakeholder Briefings

All while ensuring your work is based on real, validated information—not approximations.

Looking Forward: A Smarter Way to Work

What Ingrid found in AtomAssist wasn’t just an AI system. It was a work partner. One that respects the technical rigor of her field, the pressure of her deadlines, and the importance of making sure every claim is backed up.

As she put it:

“AtomAssist helped me get to what I needed faster. But more importantly, it helped me trust the process. Everything I used had validation behind it.”

For teams working in regulated industries, that level of trust is priceless.

The Bottom Line

In critical sectors, research isn’t just about finding information—it’s about defending it. Every decision, every report, and every stakeholder update must stand up to scrutiny.

That’s what AtomAssist is built for. It empowers professionals to do their best work, backed by the sources they already trust. It’s secure, compliant, and ready to be deployed in the toughest documentation environments.

So, the next time you’re preparing a report, chasing down test results, or building a summary for executive review, remember:

With AtomAssist, you’re not just answering questions. You’re building with certainty.

NuclearN v1.9 Release

“At NuclearN, we are committed to continuous innovation. Our goal is to release a new version of our platform every 3 months, ensuring that our customers always have access to the latest advancements in technology and efficiency.”

— Jerrold Vincent & Brad Fox, NuclearN co-founders

The release of NuclearN version 1.9 at the end of 2023 introduced a new product plus new features and enhancements aimed at improving operational efficiency and the user experience for power generating utilities and beyond.


NuclearN Project Genius

The major addition with this release – Project Genius – integrates analytics and intelligence for large and complex projects. By using AI to learn from historical project data, and leveraging Monte Carlo simulations for new projects, Project Genius can automatically identify key project risks and highlight opportunities for improving schedule, quality, and cost.

Project Genius is now being implemented across a customer fleet in the United States, capitalizing on its strength in using Monte Carlo simulations for fleet-wide projects. This feature excels in forecasting uncertain project outcomes, streamlining risk identification, and uncovering opportunities to enhance project schedules, ultimately boosting decision-making and overall project efficiency. For more information about Project Genius, click here.
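The Monte Carlo approach described above can be illustrated with a minimal sketch. The task names, durations, and triangular distributions below are hypothetical stand-ins for illustration, not Project Genius’s actual model:

```python
import random

# Hypothetical outage tasks: (most-likely, min, max) duration in days.
# Triangular distributions are a simple, common choice for task-duration risk.
tasks = {
    "isolate_system": (2.0, 1.5, 4.0),
    "replace_valve": (5.0, 4.0, 9.0),
    "post_maintenance_test": (1.0, 0.5, 3.0),
}

def simulate_total_duration(n_trials=10_000, seed=42):
    """Sample total project duration, assuming tasks run in sequence."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_trials):
        # random.triangular takes (low, high, mode)
        total = sum(rng.triangular(lo, hi, mode) for mode, lo, hi in tasks.values())
        totals.append(total)
    return totals

totals = sorted(simulate_total_duration())
p50 = totals[len(totals) // 2]          # median forecast
p90 = totals[int(len(totals) * 0.9)]    # conservative forecast
print(f"P50 duration: {p50:.1f} days, P90 duration: {p90:.1f} days")
```

Sampling thousands of possible schedules turns single-point estimates into a distribution, from which percentile forecasts like P50 and P90 fall out directly.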


Critical vs Non-Critical Field Classification in Automation

This update allows users to classify fields in automation workflows as critical or non-critical, a crucial distinction for prioritizing decisions like condition reporting and significance levels. The platform now tracks accuracy separately for critical and non-critical fields. The changes are reflected in AutoFlow reports and KPIs, facilitating an evaluation of results that aligns more naturally with actual business value and impact.
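Tracking accuracy separately per field class can be sketched as follows; the field names and records below are illustrative, not the platform’s actual schema:

```python
# Each automated decision records which field it filled, whether the field
# is classified as critical, and whether it matched the human reviewer.
decisions = [
    {"field": "significance_level", "critical": True,  "correct": True},
    {"field": "significance_level", "critical": True,  "correct": False},
    {"field": "department",         "critical": False, "correct": True},
    {"field": "department",         "critical": False, "correct": True},
]

def accuracy_by_criticality(decisions):
    """Return separate accuracy figures for critical and non-critical fields."""
    stats = {True: [0, 0], False: [0, 0]}  # criticality -> [correct, total]
    for d in decisions:
        stats[d["critical"]][1] += 1
        stats[d["critical"]][0] += int(d["correct"])
    return {
        "critical": stats[True][0] / stats[True][1],
        "non_critical": stats[False][0] / stats[False][1],
    }

print(accuracy_by_criticality(decisions))
```

Splitting the metric this way keeps a few easy non-critical wins from masking errors on the decisions that actually carry business risk.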



Bug Reporter

Our new email-based Bug Reporter captures error information and relevant logs, encrypts them, and creates a downloadable file for users to email to our support team. This simplifies bug reporting, making communication of issues more efficient.



Report Template Updates

We have refined our report templates, enhancing their intuitiveness and user-friendliness, ensuring the valuable data NuclearN provides is more accessible and actionable.

Version 1.9 showcases our continuous innovation and responsiveness to the energy sector’s needs. It emphasizes robust, secure solutions that leverage AI and advanced technologies to amplify human expertise, reflecting our commitment to precision, safety, and reliability and positioning NuclearN as a leader in operational excellence and forward-thinking energy generation, with safety and efficiency as our guiding principles.



Stay informed and engaged with everything AI in the nuclear sector by visiting The NuclearN Blog. Join the conversation and be part of the journey as we explore the future of AI in power generation together.

How AI is Powering Up the Nuclear Industry 


Sequoyah Nuclear Power Plant 

In an era where digital fluency is the new literacy, Large Language Models (LLMs) have emerged as revolutionary game-changers. These models are not just regurgitating information; they’re learning procedures and grasping formal logic. This isn’t an incremental change; it’s a leap. They’re making themselves indispensable across sectors as diverse as finance, healthcare, and cybersecurity. And now, they’re lighting up a path forward in another high-stakes arena: the nuclear sector.



The Limits of One-Size-Fits-All: Why Specialized Domains Need More Than Standard LLMs

In today’s digital age, Large Language Models (LLMs) like GPT-4 have become as common as smartphones, serving as general-purpose tools across various sectors. While their wide-ranging training data, which spans from social media to scientific papers, is useful for general capabilities, this limits their effectiveness in specialized domains. This limitation is especially glaring in fields that require precise and deep knowledge, such as nuclear physics or complex legal systems. It’s akin to using a Swiss Army knife when what you really need is a surgeon’s scalpel.

In contrast, specialized fields like nuclear engineering demand custom-tailored AI solutions. Publicly-available LLMs lack the precision needed to handle the nuanced language, complex protocols, and critical safety standards inherent in these areas. Custom-built AI tools go beyond mere language comprehension; they become repositories of essential field-specific knowledge, imbued with the necessary legal norms, safety protocols, and operational parameters. By focusing on specialized AI, we pave the way for more reliable and precise tools, moving beyond the “Swiss Army knife” approach to meet the unique demands of specialized sectors.

LLMs are Swiss Army knives: great at a multitude of tasks. Paradoxically, that very breadth limits their utility in a field like nuclear, where nuance is everything.


The Swiss Army Knife In Action

Below is a common response from a public chatbot to most plant-specific questions. The information about this site is widely available online and was published well before 2022; the plant was commissioned in 1986.

As the chatbot’s response shows, the generic information provided by this publicly available model does not give experts enough clarity to rely on. To answer the question above, the model must be adapted to a specific domain.

Adapting general models to be domain-specific is not easy, however. Challenges with this task include:

  1. Financial and Technical Hurdles in Fine-Tuning: Fine-tuning public models is a costly affair. Beyond the financial aspect, modifications risk destabilizing the intricate instruct/RLHF tuning, a nuanced balance established by experts.
  2. Data Security, a Custodian Crisis: Public models weren’t built with high-security data custodianship in mind. This lack of a secure foundation poses risks, especially for sensitive information.
  3. A Dead End for Customization: Users face a brick wall when it comes to customizing these off-the-shelf models. Essential access to model weights is restricted, stifling adaptability and innovation.
  4. Stagnation in Technological Advancement: These models lag behind revolutionary AI developments like RLAIF, DPO, and soft prompting, limiting their applicability and efficiency in evolving landscapes.
  5. The Impossibility of Refinement and Adaptation: Processes integral to optimization, such as model pruning, knowledge distillation, or weight sharing, are off the table. Without them, the models remain cumbersome and incompatible with consumer-grade hardware.


NuclearN

NuclearN specializes in AI-driven solutions tailored for the nuclear industry, combining advanced hardware, expert teams, and a rich data repository of nuclear information to create Large Language Models (LLMs) that excel in both complexity and precision. Unlike generic LLMs, ours are fine-tuned with nuclear-specific data, allowing us to automate a range of tasks from information retrieval to analytics with unparalleled accuracy.


What makes our models better than off-the-shelf LLMs? 

NuclearN’s Large Language Models (LLMs) are trained on specialized nuclear data and are transforming several core tasks within the nuclear industry, leveraging a vast knowledge base and an advanced understanding of nuclear-specific processes. When expertly trained with the right blend of data, algorithms, and parameters, these models can facilitate a range of complex tasks and information-management functions with remarkable efficiency and precision.

NuclearN is training our LLMs to enhance several core functions:

  1. Routine Question-Answering: NuclearN trains its LLMs on a rich dataset of nuclear terminologies, protocols, and safety procedures. They offer accurate and context-aware answers to technical and procedural questions, serving as a reliable resource that reduces the time needed for research and minimizes human error.
  2. Task-Specific and Site-Specific Fine Tuning: Even though our LLMs are trained to be nuclear-specific, different sites can have very specific plant designs, processes, and terminology.  Tasks such as engineering evaluations or work instruction authoring may be performed in a style unique to the site.  NuclearN offers private and secure, site and task-specific fine tuning of our LLMs to meet these needs and deliver unparalleled performance.
  3. Neural Search: The search capabilities of our LLMs go beyond mere keyword matching. They understand the semantic and contextual relationships between different terminologies and concepts in nuclear science. This advanced capability is critical when one needs to sift through large volumes of varied documents—be it scientific papers, historical logs, or regulatory guidelines—to extract the most pertinent information. It enhances both the efficiency and depth of tasks like literature review and risk assessment.
  4. Document Summarization: In an industry awash with voluminous reports and papers, the ability to quickly assimilate information is vital. Our LLMs can parse through these lengthy documents and distill them into concise yet comprehensive summaries. They preserve key findings, conclusions, and insights, making it easier for professionals to stay informed without being overwhelmed by data.
  5. Trend Analysis from Time-Series Data: The nuclear industry often relies on process and operational data gathered from sensors in the plant to track equipment performance and impacts from various activities. NuclearN is training our LLMs to be capable of analyzing these time-series data sets to discern patterns, correlations, or trends over time. This allows our LLMs to have a significantly more comprehensive view of the plant, which is particularly valuable for monitoring equipment health and predicting operational impacts.

By leveraging the capabilities of NuclearN’s specialized LLMs in these functional areas, the nuclear industry can realize measurable improvements in operational efficiency and strategic decision-making.

Stay informed and engaged with everything AI in the nuclear sector by visiting The NuclearN Blog. Join the conversation and be part of the journey as we explore the future of AI in nuclear technology together. 

Nuclearn v1.8 – Neural Search and Easier Automation

Nuclearn recently released version 1.8 of its analytics and automation platform, bringing major upgrades like neural search for intuitive queries, configurable automation routines, expanded analytics outputs, and enhanced ETL data integration. Together these features, some of them AI-driven, aim to optimize workflows and performance.

Neural Search

The neural search upgrade allows searching based on intent rather than keywords, even with ambiguous queries. Neural algorithms understand semantics, context, synonyms, and data formats. This saves time compared to traditional keyword searches, and provides significant advantages when context-sensitive information retrieval is crucial.

Some of the benefits of neural search include:
Precision of Search Results: Traditional keyword-based searches often yield an overwhelming number of irrelevant results, making it difficult for plant personnel to find the specific information they need. Neural search engines deliver results with ranked relevance. This means results are not just based on keyword match but on the basis of how closely the content of the document matches the intent of the search query.  

Contextual Understanding: Boolean queries, which are typically used in traditional search engines, lack the ability to understand the contextual nuances of complex technical language often found in engineering and compliance documentation. Neural search algorithms have a kind of “semantic understanding” that can understand the context behind a query, providing more relevant results. In addition, Neural search understands synonyms and related terms, crucial when dealing with the specialized lexicon in nuclear, thus making searches more robust.

Multiple Data Formats: Nuclear plants often store data in different formats, such as PDFs, Word documents, sensor logs, and older, legacy systems. A neural search engine can be trained to understand and index different types of data, providing a unified search experience across multiple data formats. 

Selective Classification for Unmatched Automation Accuracy

AutoCAP Screener also saw major improvements in v1.8. You can now set desired overall accuracy levels for automation templates. The Nuclearn platform then controls the confidence thresholds using a statistical technique called “selective classification” that enables theoretically guaranteed risk controls. This enables the system to ensure it operates above a user-defined automation accuracy level.


With selective classification, plants can improve automation rates and efficiency without compromising the quality of critical decisions. Risk is minimized by abstaining from acting in uncertain cases. The outcome is automation that consistently aligns with nuclear-grade precision and trustworthiness. By giving you accuracy configuration control, we ensure our AI technology conforms to your reliability needs. 
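A simplified version of the idea is easy to sketch: on held-out validation data, find the lowest confidence threshold whose accepted predictions meet the target accuracy, and abstain below it. This sketch omits the statistical risk bounds the platform layers on top, and the numbers are purely illustrative:

```python
# Validation predictions: (model confidence, was the prediction correct?)
validation = [
    (0.99, True), (0.97, True), (0.95, True), (0.93, False),
    (0.90, True), (0.85, True), (0.80, False), (0.70, False),
]

def pick_threshold(validation, target_accuracy=0.95):
    """Lowest confidence threshold whose accepted subset meets the target.

    Records below the threshold are abstained on (routed to a human)."""
    for threshold in sorted({conf for conf, _ in validation}):
        accepted = [ok for conf, ok in validation if conf >= threshold]
        if accepted and sum(accepted) / len(accepted) >= target_accuracy:
            return threshold
    return None  # no threshold achieves the target on this data

threshold = pick_threshold(validation, target_accuracy=0.95)
print(f"Automate only when confidence >= {threshold}; abstain otherwise")
```

Raising the target accuracy simply pushes the threshold up, trading automation rate for precision, which is exactly the dial the platform exposes to users.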

Additionally, multiple quality-of-life enhancements were added to the AutoCAP audit pages. Users can now sort the audit page results, add filters, integrate PowerBI dashboards with audit results, and even export the automation results to CSV. These enhancements make it easier and more flexible for users to assess, evaluate, and monitor the automation system.

Analytics & Reporting Enhancements

On the analytics front, our customers wanted more customizations. v1.8 answers their request with the ability to upload their own custom report templates. In addition, customers can change date aggregations in reports to tailor the visualizations for specific audiences and uses. Enhanced dataset filtering and exporting also allows sending analyzed data to PowerBI or Excel for further manipulation or presentation.

Buckets

Editing analytics buckets is now more flexible too, with overwrite and save-as options. We added the ability to exclude and filter buckets from the visualization more easily and make changes to existing buckets, including their name.  

Data Integration

Behind the scenes, ETL workflows (meaning “extract, transform, load” data) were upgraded to more seamlessly ingest plant data into the Nuclearn platform. Users can now schedule recurring ETL jobs and share workflows between sites. With smooth data onboarding, you can focus your time on analytics and automation rather than manually uploading data. 
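A multi-step ETL job of the kind described above is, at heart, an ordered pipeline of extract, transform, and load steps. The sketch below uses in-memory stand-ins for the source database and destination dataset; the field names are hypothetical:

```python
def extract(source_rows):
    """Pull raw rows from a source system (here, an in-memory stand-in)."""
    return list(source_rows)

def transform(rows):
    """Normalize field names and drop incomplete records."""
    return [
        {"id": r["ID"], "title": r["Title"].strip()}
        for r in rows
        if r.get("Title")  # skip rows with no title
    ]

def load(rows, destination):
    """Write the cleaned rows into the destination dataset."""
    destination.extend(rows)
    return len(rows)

# A multi-step job is an ordered chain of these steps; a scheduler
# would re-run the same chain on a recurring basis.
source = [{"ID": 1, "Title": " Pump seal leak "}, {"ID": 2, "Title": ""}]
dataset = []
loaded = load(transform(extract(source)), dataset)
print(f"Loaded {loaded} records")
```

Keeping each step a small, composable function is what makes jobs easy to schedule, share between sites, and inspect when a run fails.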

With advanced search, configurable automation, expanded analytics, and optimized data integration in v1.8, the Nuclearn Platform is better equipped to drive operational optimization using AI-powered technology. This release highlights Nuclearn’s commitment to meaningful innovation that solves real-world needs.

Nuclearn CAP Screening Video Series Index

This is a short informational blog that indexes videos explaining Nuclearn’s CAP Automation system.

Navigating to the AutoFlow Screen:

The AutoFlow screen is where the entire CAP Pipeline is configured and visually displayed. It consists of individual decision points in green blocks.

Navigating the Individual Decision Blocks:

The individual decision blocks are where the decision automations are controlled. Set thresholds and enable or disable automations at a per decision level for the overall decision block.

Navigating the Record Audit Page:

This video shows how to get from the AutoFlow to the audit page.

Explaining the Audit Table:

The record audit page contains a historical record of every issue/CR that has been processed by Nuclearn. All of the information that was available at prediction time is displayed in this table, as well as all of the decisions made by Nuclearn about this record.

Navigating the Screening Decision KPIs:

KPIs are displayed for several different metrics that Nuclearn measures from the overall system. Includes items like automation efficiency, accuracy, records processed, etc…

Quickly get to the Audit Table:

This video simply shows how to quickly get from the homepage to the audit screen of interest.


Nuclearn v1.7 – An Optimized Customer Experience

Nuclearn v1.7 is our quickest release yet, coming just two months after v1.6! The theme of this release is responding to and delivering on our customers’ evolving needs. In this version we’ve focused on integrating our platform with a nuclear site’s existing systems, redesigning the user interface, and optimizing our software for increased performance.

Seamless Integration of Customer Platform to Nuclearn

Over the last year, we have observed a challenge facing several customers: data integrations were taking time and money to develop and deploy, and would sometimes delay projects. To further improve the value to our customers, this release simplifies that integration process between the Nuclearn platform and external application databases. We now have the functionality to extract and transform a site’s data from various databases and load them into Nuclearn data models.

Customers can easily process and manipulate their data through the new job functionality. The feature allows the creation of multi-step jobs to extract, transform, and load data into and from internal and external datasets. Administrators have the flexibility to execute jobs manually or schedule them to run automatically, and they have access to job status and log views. Additionally, the ability to create write-back integrations has been added to our roadmap.

User Interface Redesign

One of the major changes in v1.7.1 is simplifying the user experience for common tasks. In previous releases, some common tasks involved dozens of clicks across multiple screens, making it difficult and unintuitive for users. Our new design features a more task-based approach, where key tasks can be performed on a single screen.

The first example of this new approach is the new functionality for Dataset Scoring. Users are now walked through the step-by-step process needed to score and analyze a dataset from scratch on a single screen: selecting a dataset, choosing the appropriate Nuclearn AI model, mapping data to the model inputs, scoring the dataset, and analyzing the outputs.

We’ve also improved the layout on various menus and tables across the platform. Users should see more information about key objects, and not have tables load wider than their screens.

Optimization to increase performance

In v1.7, Nuclearn’s AI models have been optimized! Our new models achieve a 10x model size reduction and a 2-5x speedup in per-record dataset scoring while maintaining their accuracy. What does this mean for our customers? Faster installation of our products and increased speed when transferring a site’s data to our platform. These new models are not activated by default, giving customers time to test and convert existing processes to the new models, but we strongly encourage enabling them as soon as you can!

Nuclearn Platform Release Detailed Notes

V1.7.1

  • Refinements to the Extract-Transform-Load (ETL) job user interface and validations
  • Improvements to the Cron job scheduling page appearance
  • Early release of the Dataset Scoring wizard. Home page defaults to the wizard page now
  • Action buttons now display the icon only, and expand to display text on hover
  • Misc frontend appearance tweaks

Updated frontend to v0.5.2

  • [Bug Fix] Missing Hamburger button and collapse/expand icon
  • [Bug Fix] Clicking on a dataset row that has a nested json/detail data displays object instead of value
  • [Bug Fix] Name columns are too narrow and truncate text most of the time in data tables
  • [Bug Fix] Attempting cron job scheduling displays Undefined error
  • [Bug Fix] Job schedules breadcrumb in the header is wrong
  • [Bug Fix] Load dataset job step defaults to unique id column of first dataset in the dropdown
  • [Bug Fix] Having Load Dataset as last step in an ETL job allows empty names to be saved
  • [Bug Fix] Cannot create a job until at least one external connection gets created
  • [Bug Fix] Error encountered during step – Dataset id 0 not found
  • [Bug Fix] Cannot create a new report with only one bucket present
  • [Bug Fix] Unable to navigate to Job Schedules page
  • [Bug Fix] Run job notification message inconsistent
  • [Bug Fix] Modify existing and create new job allows the save with an empty name
  • Added early version of dataset scoring wizard that guides the user through various steps needed to score a dataset
  • Frontend ETL job step failure error display
  • Empty name validation error in job step not specific enough
  • Datasets table display on some screen sizes displays horizontal scroll bar
  • User session is now shared across web browser tabs so frontend can be used from multiple tabs at the same time
  • Change secondary button color to make it easier to distinguish between disabled and clickable buttons
  • Enable modifying model version config and model base image
  • Cron job scheduling UI now guides the user through various options
  • Previous job runs page button alignment
  • Allow updates to External Connection name
  • Clicking Run from Update Job screen now executes the job right away and no longer needs two clicks
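The scheduling items above rely on standard cron expressions: five whitespace-separated fields (minute, hour, day-of-month, month, day-of-week), so "0 2 * * *" means daily at 02:00. A minimal shape check, purely for illustration (Nuclearn’s actual scheduler validation may differ):

```python
# Illustrative validator for standard five-field cron expressions
# (minute hour day-of-month month day-of-week), e.g. "0 2 * * *" = daily at 02:00.
import re

# Each field: "*" or a number/range, optionally with a step, in a comma list.
FIELD = r"(\*|\d+(-\d+)?)(/\d+)?(,(\*|\d+(-\d+)?)(/\d+)?)*"

def looks_like_cron(expr: str) -> bool:
    """Return True if expr has five whitespace-separated cron fields of plausible shape."""
    parts = expr.split()
    if len(parts) != 5:
        return False
    return all(re.fullmatch(FIELD, p) for p in parts)
```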

Updated MRE to v0.6.1

  • Support running a record through the automation route when it is posted to the upsert source data record
  • When deleting a job, ensure scheduled jobs are deleted so that we don’t have orphans

v1.7.0

  • Extract-Transform-Load (ETL) job and scheduling functionality now in preview
    • Extract and transform data from SQL Server and Oracle databases and load into Nuclearn datasets
    • Setup automatic job execution on a recurring schedule
    • Simplify integration between Nuclearn and external application database
  • Brand new model base runtime environment shipped in addition to the traditional one
    • Enables up to 10x model size reduction and 2-5x speedup of per record dataset scoring
    • Scheduled to completely replace traditional model base runtime environment in version 1.9
    • Shipped new versions of WANO and PO&C models based on new model base runtime environment
      • New models are undeployed by default to give customers time to test and convert existing processes to new models
  • Enabled support for multiple runtime environments and allowed per model environment assignment
  • Enabled binding of each Nuclearn user role to separate Active Directory user group
  • Other misc updates

Updated frontend to v0.5.0

  • Job functionality now in preview
    • Added Jobs link to sidebar (preview)
      • Allows creation of multi-step jobs to extract, transform and load data into and from internal and external datasets
      • Jobs can be executed manually, or scheduled to run automatically
      • Job status and logs view added
    • Added External Connections to the sidebar
      • Allows Nuclearn to connect to external databases
      • Current support for Oracle or SQL Server databases only
  • Azure Active Directory Integration Improvements
    • Rerun the Azure login flow and clear the Nuclearn Authz token cache during the Azure AD login procedure or on an HTTP 401 error from the API
    • Invalidate react query when nuclearn token is expired
    • [Bug fix] Log off fails under some circumstances
  • Misc updates
    • [Bug fix] Hamburger button on collapsed left panel does not display full size panel on click
    • [Bug fix] Visualization is spelled wrong on the buckets page
    • [Bug fix] Dataset and bucket lists render poorly on certain screen sizes
    • [Bug fix] User profile dropdown opens under the automation template toolbar
    • [Bug fix] Infinite loader on datasets is missing records on display
    • Change secondary button color to make it easier to distinguish when a button can be clicked
    • Make undeployed models collapsed on Models page
    • Disable the ability to create new versions for pre-installed models
    • Only run isUTF8 validator on csv upload when file is below a certain size; display warning that file won’t be validated otherwise
    • Improved footer appearance

Updated MRE to v0.6.0

  • Extract-Transform-Load (ETL) job and external connection functionality (preview).
    • Added APIs to check the status of a job
    • Added APIs to run a job
    • Added APIs to store a job
    • Added APIs to store external connections
    • Added job scheduler component
    • Added APIs to create, update or delete job schedules
  • Enabled support for multiple runtime environments and allowed per model environment assignment
    • Added API to update model versions
    • Model versions can be updated to use a different model base runtime environment
    • If model base runtime environment is not specified, most current one is picked by default
    • All existing model versions will be updated to use traditional model base runtime environment
  • Misc fixes and improvements

Misc Updates

  • Updated database to PostgreSQL 13.8

Nuclearn v1.6 – Leveling Up Analytics and Automation

It’s been a while since we last posted about a release, so this update is going to cover two minor releases of Nuclearn! Nuclearn Platform v1.5 and v1.6 have been delivering value to our customers over the last 6 months, and we are excited to share some of the new features to the general public. While extensive detailed release notes can be found at the bottom of this post, we want to highlight three enhancements that delivered considerable functionality and greatly enhanced the customer experience. 

  • End to End Assessment Readiness
  • Prediction Overrides
  • Enhancements to Automation and Audit

End to End Assessment Readiness

Nuclearn v1.6 gives customers the ability to automate the entire data analysis portion of preparing for an upcoming INPO, WANO or other audit assessment. Customers can now automatically tag each piece of Corrective Action Program data with Performance Objectives & Criteria, perform comprehensive visual analytics highlighting areas for improvement, and generate a comprehensive Assessment Readiness report including supporting data.

We’ve made significant enhancements to our Cluster Analytics Visualizations, including additional options for customization, improved readability, and additional functionality for search, filtering, and interactivity. Once a potential area of concern is discovered, customers can now save the set of selected labels and analytics parameters in a Bucket.

New Report functionality allows customers to generate their own reports within Nuclearn. With v1.6, customers can use the “Automated AFI Report Template” to select multiple Buckets from an Assessment Readiness analysis and automatically generate a comprehensive Assessment Readiness report. These reports are customizable, easily previewed in a browser and can even be downloaded as an editable Word document or pdf file.

Prediction Overrides

v1.6 now allows our customers to override model predictions. Even the best machine learning models are sometimes wrong, and users now have the ability to view and override model predictions for any record. The overridden values can then be used for subsequent analysis and to improve and further fine-tune future models.

Enhancements to Automation and Audit

We’ve made various improvements to the Automation functionality within Nuclearn in v1.6, including a major UI update to the Audit pane. It is now much easier to see what records were automated or manually sampled, view incorrect predictions, and explore automation performance. We have also added the ability to “AutoFlow” a Dataset through an Automation Pipeline, allowing customers with non-integrated Nuclearn deployments to easily produce automation recommendations on uploaded CAP data.

Beyond the most notable items we’ve highlighted, there are plenty more enhancements and bug fixes. Additional details can be found in the release notes below, covering all minor and patch releases from v1.5.0 to v1.6.1.

Nuclearn Platform Release Detailed Notes

v1.6.1

  • Fixed issue with upgrade script, where RHEL upgrades from 1.5.x to 1.6 would partially fail.
  • Updated np_app_storage_service to version 0.4.0 to ensure the default report template actually ships with the platform.

Upgraded MRE to v0.5.1

  • Added artifact record for AFI Report template.
  • Updated libreoffice dependencies.

Upgraded frontend to v0.4.1

  • Fixed bug in “where” filters on Analytics, where the filter would update incorrectly.

v1.6.0

  • Reports functionality now in preview. Automatically generate editable reports from a selection of Buckets.
  • Major quality of life enhancements to Analytics and Cluster Analytics, reducing workarounds and improving user experience.
  • Improvements to Automations, including a major UI update to the Audit pane.
  • Other misc updates.

Updated frontend to v0.4.0

  • Reports functionality now in preview.
    • Added reports link to sidebar.
    • Added ability to generate reports based on a selection of Buckets.
      • New report template available to generate an AFIs and Strengths report.
      • Easily preview the report in the browser.
      • Choose to download the report as an editable .docx or as a .pdf file.
  • Significant enhancements to Analytics and the Cluster Analytics visualization.
    • Cluster Analytics visualization enhancements
      • Added ability to adjust thresholds and colors.
      • Improved tooltips to add additional information and make them easier to read.
        • Tooltips now additionally include record count and the detailed “heat” calculation.
        • Tooltips also added to PO&C badges in the Bucket Details pane.
      • Added ability to exclude buckets and PO&Cs from the Cluster Analytics visualization. Exclusion pane is now available underneath the chart.
      • Added ability to search the PO&C labels and descriptions using the magnifying glass icon on the top right of the visualization.
      • Added ability to reset the zoom/pan on the Cluster Analytics visualization using the expand icon on the top right of the visualization.
    • Added support to include more than one split date in an analytic.
    • Added ability to include custom filters in an analytic.
    • Renamed additional analytics dropdown to “Export”, and renamed options to better reflect what they do.
    • Included option in Raw Predictions CSV export to choose whether user wants no additional data or all of the input columns for the analytic in the export.
  • Major UI update in the Automation Audit pane.
    • It is now much easier to see what records were automated or manually sampled.
    • Incorrect predictions are now colored red.
    • If a record is not automated, the fields that were the cause have a “person” icon next to them, indicating the system was not confident enough in the prediction and a human needs to review the record.
    • When an audit record is expanded in the Automation Audit pane, the predictions now appear at the top of the expansion, as well as the actual values (if available). If there is a mismatch, the prediction is colored red.
  • “Quality of Life” updates to Automations.
    • Added the ability to manually “AutoFlow” a Dataset through an Automation pipeline. This functionality is available on the “Overview” pane of an Automation.
    • Automation Configs now have an option to “Prohibit Duplicate Automation”. When this option is enabled, if the Automation encounters a record UID it has processed before, it returns an HTTP 422 error response.
    • When creating a new Automation Config, user must select which Model Version they want to use (used to always use the latest model version).
  • Misc updates.
    • Upgraded react version to 18.2.
    • Cleaned up unused code in several source files.

Updated MRE to v0.5.0

  • Report generation (preview).
    • Create, update and delete reports and report templates.
    • Report templates are stored as word documents, using a jinja-like template format.
    • An unlimited number of buckets can be tied to a report and used to render it.
    • Rendered reports can be downloaded as docx or pdf.
    • First report template “AFIs and Strengths Report” added to platform.
    • Added “artifact” storage capabilities.
      • Can now create, update, and delete media artifacts.
    • 3 new tables added – report, artifact, and bucketreports.
  • Various improvements to Automations.
    • Created a tie between Automation Configs and Model Versions.
      • During upgrade, existing Automation Configs will be tied to the latest version of the model their parent Automation Config Template is associated with.
      • When calling the automation route, the Model Version tied to the Automation Config is now used to predict the fields, which may not be the latest version.
      • Test Runs are also processed and displayed based on the tied Model Version.
    • Automation Configs can now be configured to prohibit duplicate automation. If the automation route is called with a record uid that has been previously automated by the Automation Config Template, an HTTP 422 response is returned.
    • Data from a Dataset can now be fed directly to an Automation Template from within the platform by performing an AutoFlow run on a Dataset. Previously an outside script was needed to call the automation api.
    • Automation Data Records can now be retrieved with the current ground truth Source Data Record.
  • Various enhancements to Analytics and Datasets.
    • Improved handling of scoring runs, especially when errors are encountered during scoring. A scoring run can now be canceled by calling the route /datasets/{dataset_id}/cancel-score.
    • Increased Dataset description maximum length from 300 characters to 1,000.
    • Platform now ships with a demo Dataset (NRC Event Reports 2019-2020), Analytic Buckets, Automation Template, and associated examples.
    • Fixed bug where the unique column field would only be stored the first time data was uploaded to a Dataset.
    • Added benchmark proportion and relative differences to analytic results when a benchmark dataset is configured.
    • When downloading a raw predictions csv for an Analytic, columns used for model inputs and analytic inputs are now included in the download.
    • Added support for an unlimited number of arbitrary split dates in an Analytic (previously only one was supported).
  • Misc fixes and improvements.
    • Upgraded docker image base to ubuntu:22.04.
    • Removed several old source code files that were no longer being used.
    • Upgraded target python version from 3.9 to 3.10.
    • Improved error handling for a variety of issues.
    • Fixed bug where a corrupted model wheel file could be saved in file cache. MRE will clear the cache and attempt to redownload if a corrupted file is encountered.
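One behavior worth noting for integrators is the “prohibit duplicate automation” response described above: when enabled, calling the automation route with a previously automated record UID returns HTTP 422 rather than a success. Only the 422 meaning comes from these release notes; the helper below and the handling of other statuses are our own illustrative assumptions:

```python
# Sketch of client-side handling for the "Prohibit Duplicate Automation" behavior.
# Only the HTTP 422 meaning comes from the release notes; the rest is illustrative.
def classify_automation_response(status_code: int) -> str:
    """Map an automation-route HTTP status to a suggested client action."""
    if status_code == 200:
        return "processed"   # record automated (or routed for manual review)
    if status_code == 422:
        return "duplicate"   # record UID already automated; safe to skip
    if 500 <= status_code < 600:
        return "retry"       # transient server error; retry with backoff
    return "error"           # anything else needs investigation
```

Treating 422 as “already handled” rather than a failure keeps batch integrations idempotent when records are re-sent.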

v1.5.5

Patched various security vulnerabilities, including:

  • Forced TLS version >= 1.2
  • Fixed various content headers
  • Enabled javascript strict mode on config.js
  • Updated np_app_proxy to v0.0.3

v1.5.4

Updated MRE to v0.4.5

  • Added a route to retrieve prediction overrides directly
  • Patched various python package vulnerabilities

v1.5.3

  • Scored predictions override now in preview
  • Dataset viewer now has filtering
  • Enhancements to application authentication administration
  • Misc bug fixes and error handling improvements

Updated frontend to v0.3.1

  1. PREVIEW: Added ability to view and override scored predictions
    • Navigate to override page by clicking on a record in the dataset viewer
    • Users can view any predictions for any model a source data record has been scored on
    • Users can override any prediction confidence with a value between 0 and 1
    • Users can set all non-overridden values for a record to 0 confidence by using the “Set Remaining to No” button
  2. Application authentication enhancements
    • Added ability for admins to manually update “email_validated” for users on the user page
    • Added ability for admins to generate a password reset link on the user page
  3. Filters added to dataset view
    • Users can now filter the records being viewed in the dataset viewer by filtering on any column
    • Multiple filter conditions can be added
  4. Updated node.js to LTS version 16.18

Updated MRE to v0.4.4

  • Prediction overrides
    • New ability to override scored data record predictions via route /datasets/{dataset_id}/override_predictions/{source_uid}/{model_version_id}/{model_label_class_output_name}
    • Added ScoredDataRecordOverrides table
    • Added “override_order” and “override_confidence” columns to scored data record predictions that are updated when overrides are made
    • Added route /datasets/{dataset_id}/prediction_details/{source_uid}/{model_version_id}/{model_label_class_output_name} to get latest predictions
  • Dataset filters
    • Added support for filters to /datasets/{dataset_id}/records route
  • Cleanup logically deleted datasets and associated records
    • Added API route /datasets/permanent-delete-datasets to clean up logically deleted datasets
    • Added check to not allow a logical delete of a dataset when it is still being referenced by an automation config template
    • Added check to not allow a logical delete of an automation config template when it is parent to one or more other automation config templates
  • Better support for app authentication setup
    • Added route /auth/password/reset-request-token/ to produce a password reset link
    • Updated route /user/update-email-validated/ to set email_validated attribute on users to true or false
  • Misc
    • Improved performance and memory usage on setup of large scoring jobs by only storing scoring status in shared memory instead of the entire source data record
    • Improved error handling when duplicate records found in source data record sync
    • Added additional error handlers to improve error messages
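The override and prediction-details routes listed above are parameterized by dataset, source record, model version, and output name. A small helper that assembles those documented paths (the confidence clamp reflects the 0-1 range noted in the frontend release notes; everything else about how a client would call these routes, such as payload shape, is an assumption):

```python
# Builds the prediction-override and prediction-details routes listed above.
# Route templates come from the release notes; client usage details are assumptions.
def override_route(dataset_id: int, source_uid: str,
                   model_version_id: int, output_name: str) -> str:
    return (f"/datasets/{dataset_id}/override_predictions/"
            f"{source_uid}/{model_version_id}/{output_name}")

def details_route(dataset_id: int, source_uid: str,
                  model_version_id: int, output_name: str) -> str:
    return (f"/datasets/{dataset_id}/prediction_details/"
            f"{source_uid}/{model_version_id}/{output_name}")

def clamp_confidence(value: float) -> float:
    """Override confidences must fall between 0 and 1 (per the frontend notes)."""
    return min(1.0, max(0.0, value))
```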

Updated Nuclearn Platform Releases

  • Increased gunicorn worker timeout to 7,200 seconds from 240
  • Improved upgrade script to fix issues upgrading within patch versions
  • Improved nuclearn-save-images.sh to use pigz if installed to decrease zip file creation time

Updated Dependencies

  • np_app_db updated to version 0.0.3 to patch vulnerabilities
  • np_app_proxy updated to version 0.0.2 to patch vulnerabilities
  • np_app_storage_service updated to version 0.2.1 to patch vulnerabilities
  • modelbase updated to version 0.3.2 to patch vulnerabilities

v1.5.2

Updated MRE to v0.4.2

  • Fixed bug where analytic csv export was not returning a stream

v1.5.1

Updated MRE to v0.4.1

  • Fixed bug where only one model would deploy on restart

v1.5.0

Updated Frontend to v0.3.0

  • Release of Cluster Analytics
    • Added Cluster Analytics to the dataset analytics screen.
      • Ability to select a specific slice and time period for viewing.
      • Interactive cluster analytics displaying labels (PO&C codes), the number of records associated with the label, and a “heat” color based on a weighted average of key metrics.
      • Interactive cluster label locations are based on semantic similarity of the labels and the records within those labels.
      • Ability to click on one or more labels to view details, including a time series chart, slice comparison, and specific records.
    • Added “Buckets” (preview).
      • Buckets are a specific selection of labels for specific analytic options.
      • Buckets have a name and description.
      • Ability to navigate directly to cluster analytics with associated analytic parameters and selected labels by clicking the “Analyze” button on the Bucket list.
      • Ability to view all available buckets for selected analytic options from the Cluster Analytics pane.
    • Major updates to the dataset analytics screen.
      • Default options updated for most analytic options to match recommended values.
      • Default view of analytic options made much simpler, with only the most commonly adjusted options seen. Advanced options can be selected with a toggle button on the top right.
      • Added ability to select “Benchmark” datasets in analytic options. Benchmark values are retrieved from the provided dataset and joined onto the analytic results via the predicted label.
      • Less commonly used analytics viewing options have been consolidated behind an “Additional Analytics” dropdown button.
      • Significantly reduced need to pass around all analytic parameters for every analytic call, instead using the “Analytic” server-side persistence.
  • CAP Automation Minor Enhancements
    • Added automation configuration integration tab with dynamically generated code examples.
    • Added cumulative time period option to automation configuration KPIs.
    • Added the ground truth data record details to the automation configuration Audit table.
    • Automation configuration Audit table now displays accuracy, automated, and manual sample as three separate columns.
  • Misc Updates
    • Reorganized sidebar navigation to separate Admin (Models & Users), Analytics (Buckets & Datasets), and Products.
    • Added sorting to most dropdown selectors.
    • Added option to log inference request data to a dataset when mapping a dataset to a model version.
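The “heat” color described in the Cluster Analytics notes above is a weighted average of key metrics per label. The exact metrics and weights are not published, so the sketch below is purely illustrative of the general calculation:

```python
# Illustrative "heat" score: a weighted average of per-label metrics on a 0..1 scale.
# Metric names and weights here are assumptions, not Nuclearn's actual formula.
def heat_score(metrics: dict, weights: dict) -> float:
    """Weighted average of the metrics present, using the given per-metric weights."""
    total_weight = sum(weights.values())
    if total_weight == 0:
        return 0.0
    weighted = sum(metrics.get(name, 0.0) * w for name, w in weights.items())
    return weighted / total_weight

# Example: a label with a high normalized record count and a moderate recent trend.
label_metrics = {"record_count_norm": 0.8, "recent_trend": 0.5}
metric_weights = {"record_count_norm": 2.0, "recent_trend": 1.0}
```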

Updated MRE to v0.4.1

  • Version 3 of the WANO PO&C Labeler released
    • New Neural Network architecture implements latest state of the art techniques.
    • Improved accuracy and better coverage across a wider variety of PO&C codes.
  • Major updates to analytics
    • Added ability to include a “benchmark” value in stats analytics. The benchmark value is retrieved from a dataset, and joined onto the stats analytics results by matching predicted labels with a column from the benchmark dataset.
    • Added ability when running stats analytics to split time periods by an arbitrary date instead of just a week/month/quarter/year.
    • Stats analytics parameters are now persisted server-side as an “analytic”. This allows the frontend and other integrations to reference analytic parameters by a single ID rather than having to track and pass over a dozen different analytic parameters.
    • New “Bucket” functionality. Buckets track a set of selected labels and other parameters for an analytics, as well as a name and description. Added ability to create, update, delete and view buckets.
    • New route to get the source data records related to an analytic and specific label selections.
    • Added a route to produce a list of WANO PO&C codes and their descriptions as well as x/y coordinates for cluster mapping.
  • Quality of life improvements to dataset management
    • Added ability to log the data sent to an “infer” request for a model to a dataset. When datasets are mapped to a model version, the option to log infer requests to that dataset is now included.
    • The field selected as the source UID when uploading data to a dataset is now saved.
  • Update MRE to support multiple processes/workers running at the same time. This is the most significant performance improvement to MRE so far.
    • Updated connection pooling to be process specific.
    • Updated default number of workers to 8 from 1.
    • Updated model version deployment on startup to be multi-process safe.
    • Refactored dataset model scoring to be multi-process safe.
  • Misc updates
    • Upgraded target python version from 3.8 to 3.9.
    • Added more detailed exception handling in various places.
    • Added custom exception handlers and handle uncaught custom exceptions during route calls. This should reduce the number of nondescript HTTP500 errors.
    • Added “cumulative” time interval option to automation KPIs.
    • Added a check to ensure the database and MRE are operating in UTC.
    • Deprecated the following routes:
      • /models/{model_id}/version/{version_number}/config/
      • /models/active
      • /stats/

Misc

  • Updated np_app_storage_service to version 0.2.

Come visit us at nuclearn.ai, or follow our LinkedIn page for regular updates.

A New Approach To Safety Analytics

Over the last few months, we have been working on developing useful safety analytics for utilities. We’ve seen safety analytics challenges that both nuclear and non-nuclear utilities seem to have in common, specifically: (1) is there a way to analyze future work for potential injury risk and the types of injuries that may occur, and (2) can historically performed work be analyzed, binned, and coded to determine what kinds of safety issues occur on a regular or seasonal basis? After several experiments, trials, and a few new insights, we are excited to share these new Safety Analytics techniques!

Challenges with Safety Programs

Our solutions aim to solve several business problems that are surprisingly common to most utility organizations trying to analyze and improve safety performance. Some of those problems include:

(1) Scheduled safety communications can be too broad and non-specific to the work being performed, which dilutes the safety message and results in a lower safety ‘signal to noise’ ratio. Employees end up disregarding communications that they learn are irrelevant, and/or spending time on information that is not directly applicable or actionable.

Broad safety communications deliver well intentioned, but non-specific messages.

(2) Safety observations may not target the highest value (e.g. most risky or injury prone) activities being performed as that value is unknown or incalculable. Activities being observed may be low-risk, resulting in a confusing message to the frontline.

(3) The causes of upticks in injuries within certain workgroups are often unknown. Managers may see an uptrend in injuries within a certain group and do not have the tools, trend-codes, or identified commonalities needed to address the situation.

Our Approach

We’ve found a set of techniques that solve these business challenges and are pleased to offer customers a value-added safety analysis product. The solution is two-part.

First, we use machine learning models to review and apply a Significant Injury or Fatality (SIF) type based on information published by the Occupational Safety and Health Administration. This allows us to draw from previous injury data to determine the most likely injuries (injury type, location on body) for any work activity. We can apply this model to future work schedules, then bucket by elements like ‘Workgroup’ or ‘Week’. The resulting analytics provide data-driven forecasts for injury risk for which tailored communications can be crafted or observation tasks assigned.
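The bucketing step described above — grouping model-scored future work by elements like ‘Workgroup’ or ‘Week’ — can be sketched as a simple aggregation. The record fields below are assumptions for illustration only:

```python
# Illustrative aggregation of model-scored work activities into
# workgroup/week/injury-type buckets. Field names are assumptions for the sketch.
from collections import defaultdict

def bucket_injury_risk(scored_activities: list) -> dict:
    """Sum predicted injury risk per (workgroup, week, injury_type) bucket."""
    buckets = defaultdict(float)
    for rec in scored_activities:
        key = (rec["workgroup"], rec["week"], rec["injury_type"])
        buckets[key] += rec["risk"]
    return dict(buckets)

# Hypothetical scored schedule records, echoing the orgs shown in the dashboards.
scored = [
    {"workgroup": "maint", "week": "2022-W14", "injury_type": "burn", "risk": 0.6},
    {"workgroup": "maint", "week": "2022-W14", "injury_type": "burn", "risk": 0.3},
    {"workgroup": "ismc", "week": "2022-W14", "injury_type": "laceration", "risk": 0.4},
]
```

The resulting per-bucket totals are what feed the kind of forecast views shown in the dashboards below.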

Second, we’ve developed a novel trend code application mechanism that allows us to apply brand new codes without any historically coded data! This method uses recent advancements in Natural Language Processing (NLP) techniques to break a fundamental machine learning paradigm that would otherwise require mountains of historically labeled data to provide accurate results. Using this technique we have been able to create a suite of trend codes based directly on the OSHA Safety and Health Program Management Guidelines. This allows us to analyze safety data in a way that has never been done before, generating new, actionable insights for improving industrial safety.

Nuclearn Safety Analysis

These two new approaches come together to deliver our Nuclearn Safety Analysis product.

Forecast Injury Type Safety Analysis Dashboard Result

This PowerBI dashboard shows a forecasted increase in exposure to potential burn injuries, particularly for the ‘maint’ organization due to sulfuric acid sump inspections. The ‘ismc’ org is most at risk for cuts and lacerations, with electric shock second. Using these insights, tailored communications would be sent to these groups in a ‘just-in-time’ format to address potential challenges and reduce the risk of a significant injury.

Dashboard of Historically Coded Safety Data

Again, this PowerBI dashboard shows a high proportion of OSHA.MW.4 (hazard prevention and control at multiemployer worksites), followed by OSHA.PE.2 (correct program deficiencies and identify opportunities to improve). Analyzing over time, we see oscillation of some safety codes as well as seasonal volatility of certain codes.

By leveraging Nuclearn Safety Analysis, utilities can begin taking informed actions to improve industrial safety in ways never before possible:

  • A safety analyst or communications professional can automatically review and analyze weeks or months of both forward-looking and historical work activities for safety impact. They can use this information to tailor impactful and actionable safety messages that cut through the safety noise and drive results at the organizational level.
  • Observation program managers can use the forward-looking results to assign observation resources to the riskiest work with the highest likelihood of severe injury.
  • Front line managers can review tasks for their given work groups and adjust pre-job briefs or weekly meetings to put preventative measures in place for the week’s activities.

To learn more about Nuclearn’s Safety Analysis offering and the Nuclearn Platform, send us an email at contact@nuclearn.ai.

Capitalizing on CAP Screening Automation

CAP Screening automation continues to be adopted across the Nuclear industry. As of April 2022, at least 4 nuclear utilities in North America have implemented or are currently implementing CAP Screening automation, and at least a half dozen more are strongly considering pursuing it in the near future. However, not everyone in the nuclear industry is intimately familiar with the concept, or may only have a partial picture of the scope of CAP Screening Automation. In this post, we will quickly cover the basics of CAP Screening, automation, and the value it can deliver for utilities operating Nuclear Power Plants.

Corrective Action Programs and Nuclear Power Plants

For those unfamiliar with nuclear power operations, every nuclear power plant operating within the US is required by law to run a Corrective Action Program (CAP). In the Nuclear Regulatory Commission's own words, CAP is:

The system by which a utility finds and fixes problems at the nuclear plant. It includes a process for evaluating the safety significance of the problems, setting priorities in correcting the problems, and tracking them until they have been corrected.

https://www.nrc.gov/reading-rm/basic-ref/glossary/corrective-action-program.html

CAP is an integral part of operating a nuclear power plant, and touches almost every person and process inside the organization. It also happens to be a manually intensive process, and costs each utility millions of dollars in labor costs each year to run.

CAP Screening

Screening incoming issue reports is the biggest process component of running a CAP, and is how utilities “…[evaluate] the safety significance of the problems [and set] priorities in correcting the problems…”. The screening process often starts immediately after a Condition Report is initiated, when a frontline leader reviews the report, verifies all appropriate information is captured, and sometimes escalates the issue to operations or maintenance. Next, the Condition Report is sent to either a centralized “screening committee” or to distributed CAP coordinators. These groups review each and every Condition Report to evaluate safety significance, assess priority, and assign tasks. Somewhere between 5,000 and 10,000 Condition Reports per reactor are generated and go through this process each year.

Example CAP Screening process with a centralized screening committee.

In addition to the core screening, most utilities also screen Condition Reports for regulatory impacts, reportability, maintenance rule functional failure applicability, trend codes, and other impacts. These are important parts of the CAP Screening process, even if they are sometimes left out of conversations about CAP Screening automation.

Automating CAP Screening with AI

Every step in CAP Screening listed above is a manual process. The leader review, screening, and impact assessments are all performed by people. Each of the listed steps has well-defined inputs and outputs, and has been performed more or less the same way for years. This consistency and wealth of historical data makes CAP Screening ripe for automation using artificial intelligence.

Introducing AI-driven automation into the CAP Screening process allows many of the Condition Reports to bypass the manual steps in the process. Before being screened, Condition Reports are instead sent through an AI agent trained on years of historical data that predicts the safety impacts, priorities, etc. and produces a confidence score for each prediction. Based on system configuration, Condition Reports with the highest confidence bypass the manual screening process altogether.
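The routing logic described above can be sketched in a few lines. This is a hypothetical illustration only: the class, field names, and threshold are invented for the example and do not reflect Nuclearn's actual API or configuration.

```python
from dataclasses import dataclass

@dataclass
class ScreeningPrediction:
    """Hypothetical output of an AI screening model for one Condition Report."""
    cr_id: str
    safety_significance: str  # e.g. "low", "medium", "high"
    priority: int
    confidence: float         # model confidence in [0, 1]

AUTO_SCREEN_THRESHOLD = 0.95  # illustrative, site-configurable value

def route(prediction: ScreeningPrediction) -> str:
    """Route a Condition Report based on the model's confidence."""
    if prediction.confidence >= AUTO_SCREEN_THRESHOLD:
        return "auto-screened"      # bypasses the manual screening process
    return "manual-screening"       # falls back to the human process

pred = ScreeningPrediction("CR-2022-0417", "low", 3, 0.98)
print(route(pred))  # -> auto-screened
```

In practice the threshold is tuned per site, trading automation rate against the acceptable risk of an inaccurate screening.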

An example CAP Screening workflow with the introduction of AI-driven automation.

In the best implementations, CAP Screening automation will also include sending a small portion of “automatable” Condition Reports through the manual screening process. This “human in the loop” approach facilitates continuous quality control of the AI by comparing results from the manual process to what the AI would have done. When combined with detailed audit records, the CAP Screening automation system can produce audit reports and metrics that help the organization ensure the quality of their CAP Screening.
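A minimal sketch of this human-in-the-loop sampling, assuming an invented QC sample rate and confidence threshold (neither reflects any utility's actual configuration):

```python
import random

QC_SAMPLE_RATE = 0.05   # hypothetical: 5% of automatable CRs still go to humans
AUTO_THRESHOLD = 0.95   # hypothetical minimum confidence to be automatable

def route_with_qc(confidence, rng=None):
    """Route a Condition Report, diverting a random QC sample to humans."""
    rng = rng or random.Random()
    if confidence < AUTO_THRESHOLD:
        return "manual-screening"      # AI not confident enough to automate
    if rng.random() < QC_SAMPLE_RATE:
        return "manual-screening-qc"   # human result compared against the AI's
    return "auto-screened"
```

The Condition Reports tagged for QC are screened normally by humans, and the recorded AI predictions are compared against the human answers to produce the audit metrics described above.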

Results will vary by utility, but a site adopting CAP Screening automation can expect to automate screening on anywhere from 10% to 70% of their Condition Reports. The proportion of Condition Reports automated is a function of the accuracy of the AI models, the consistency of the historical screening process, and the “risk of inaccuracy” the utility is willing to take. We expect this proportion to continue to increase in the future as AI models improve and CAP programs are adjusted to include automation.

Why are Utilities Interested in CAP Screening Automation?

Correctly implemented, CAP Screening automation is a very high value proposition for a utility. CAP Screening personnel are often highly experienced, highly paid, and in short supply. Reducing the number of Condition Reports that have to be manually screened reduces the number of personnel that have to be dedicated to CAP Screening. Automation also improves the consistency of screening and assignment, reducing rework and reassignments. It also eliminates the screening lead time for many Condition Reports, allowing utilities to act more quickly on the issues identified in CAP.

Various nuclear power plants in North America are automating portions of their CAP Screening processes using artificial intelligence and realizing the value today. Automated screening is one of the reasons why we believe AI is the promising future of nuclear CAP. The efficiency savings, improved consistency, reduced CAP-maintain-operate cycle times, and other benefits from CAP Screening automation are too valuable to ignore, and we expect most nuclear utilities to capitalize on CAP Screening automation over the next several years.

Interested in automating the CAP Screening process at your plant? Nuclearn offers a commercial CAP Screening automation software solution leveraging state-of-the-art AI models tailored to nuclear power. Learn more by setting up a call or emailing us at sales@nuclearn.ai.