“At NuclearN, we are committed to continuous innovation. Our goal is to release a new version of our platform every 3 months, ensuring that our customers always have access to the latest advancements in technology and efficiency.”
— Jerrold Vincent & Brad Fox, NuclearN co-founders
The release of NuclearN version 1.9 at the end of 2023 introduced a new product, along with new features and enhancements aimed at improving operational efficiency and the user experience for power-generating utilities and beyond.
NuclearN Project Genius
The major addition in this release – Project Genius – integrates analytics and intelligence for large, complex projects. By using AI to learn from historical project data and leveraging Monte Carlo simulations for new projects, Project Genius can automatically identify key project risks and highlight opportunities to improve schedule, quality, and cost.
Project Genius is now being implemented across a customer fleet in the United States, where its Monte Carlo simulations are applied to fleet-wide projects. It excels at forecasting uncertain project outcomes, streamlining risk identification, and uncovering opportunities to improve project schedules, ultimately strengthening decision-making and overall project efficiency.
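To give a flavor of the Monte Carlo side of this, here is a minimal sketch in Python. The three-point duration estimates and the serial task chain are hypothetical illustrations only; Project Genius's actual models, inputs, and risk logic are not shown here.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical (optimistic, most likely, pessimistic) durations in days
# for a simple serial task chain -- illustrative numbers only.
tasks = {
    "scaffold_build":        (2, 3, 6),
    "valve_overhaul":        (4, 5, 10),
    "post_maintenance_test": (1, 2, 4),
}

N = 100_000  # number of simulated project runs

# Sample each task from a triangular distribution and sum the chain.
total = np.zeros(N)
for lo, mode, hi in tasks.values():
    total += rng.triangular(lo, mode, hi, size=N)

# Percentiles of the simulated completion time drive risk-informed planning;
# e.g. the P80 duration is a common scheduling target.
p50, p80, p95 = np.percentile(total, [50, 80, 95])
print(f"P50={p50:.1f}d  P80={p80:.1f}d  P95={p95:.1f}d")
print(f"Chance of finishing within 12 days: {(total <= 12).mean():.1%}")
```

Running many simulated schedules like this turns single-point estimates into a distribution of outcomes, which is what makes quantified statements about schedule risk possible.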
Critical vs Non-Critical Field Classification in Automation
This update allows users to classify fields in automation workflows as critical or non-critical, a crucial distinction for prioritizing decisions like condition reporting and significance levels. The platform now tracks accuracy separately for critical and non-critical fields. The changes are reflected in Auto Flow reports and KPIs, so results can be evaluated in line with their actual business value and impact.
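In KPI terms, the distinction comes down to computing accuracy over two disjoint sets of fields. A toy sketch of that split, with hypothetical field names and data (the real field classifications are user-configured in the platform):

```python
# Hypothetical criticality map -- illustrative field names only.
CRITICAL = {"condition_report_required", "significance_level"}
NON_CRITICAL = {"trend_code", "owning_department"}

def accuracy(records, fields):
    """Fraction of (record, field) predictions matching the human decision."""
    pairs = [(r["predicted"][f], r["actual"][f]) for r in records for f in fields]
    return sum(p == a for p, a in pairs) / len(pairs)

records = [
    {"predicted": {"condition_report_required": "yes", "significance_level": "C",
                   "trend_code": "EQ-01", "owning_department": "maint"},
     "actual":    {"condition_report_required": "yes", "significance_level": "C",
                   "trend_code": "EQ-02", "owning_department": "maint"}},
]

print("critical accuracy:    ", accuracy(records, CRITICAL))      # 1.0
print("non-critical accuracy:", accuracy(records, NON_CRITICAL))  # 0.5
```

Reporting the two numbers separately keeps a harmless trend-code miss from masking (or inflating) performance on the decisions that actually matter.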
Bug Reporter
Our new email-based Bug Reporter captures error information and relevant logs, encrypts them, and creates a downloadable file for users to email to our support team. This simplifies bug reporting, making communication of issues more efficient.
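We haven't published the Bug Reporter's internals, but the general pattern (bundle the error and logs, encrypt them, write a file the user can attach to an email) looks roughly like this sketch using the open-source `cryptography` library. The key handling here is illustrative, not NuclearN's actual scheme:

```python
import json
import traceback
from cryptography.fernet import Fernet

# Illustrative only: in a real deployment the key would belong to the
# support team (e.g. provisioned at install time), not generated per run.
SUPPORT_KEY = Fernet.generate_key()

def build_bug_report(error: Exception, log_lines: list[str]) -> bytes:
    # Bundle the error, traceback, and relevant logs, then encrypt the bundle.
    payload = json.dumps({
        "error": repr(error),
        "traceback": traceback.format_exc(),
        "logs": log_lines,
    }).encode()
    return Fernet(SUPPORT_KEY).encrypt(payload)

try:
    1 / 0
except Exception as exc:
    blob = build_bug_report(exc, ["app started", "scoring dataset 42"])
    with open("bug_report.enc", "wb") as f:
        f.write(blob)  # the user emails this file to the support team
```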
Report Template Updates
We have refined our report templates, enhancing their intuitiveness and user-friendliness, ensuring the valuable data NuclearN provides is more accessible and actionable.
Version 1.9 showcases our continuous innovation and responsiveness to the energy sector’s needs: robust, secure solutions that leverage AI and advanced technologies to amplify human expertise. With precision, safety, and reliability as our guiding principles, this focus positions NuclearN as a leader in operational excellence and forward-thinking energy generation.
Stay informed and engaged with everything AI in the nuclear sector by visiting The NuclearN Blog. Join the conversation and be part of the journey as we explore the future of AI in power generation together.
In an era where digital fluency is the new literacy, Large Language Models (LLMs) have emerged as revolutionary game-changers. These models are not just regurgitating information; they’re learning procedures and grasping formal logic. This isn’t an incremental change; it’s a leap. They’re making themselves indispensable across sectors as diverse as finance, healthcare, and cybersecurity. And now, they’re lighting up a path forward in another high-stakes arena: the nuclear sector.
The Limits of One-Size-Fits-All: Why Specialized Domains Need More Than Standard LLMs
In today’s digital age, Large Language Models (LLMs) like GPT-4 have become as common as smartphones, serving as general-purpose tools across various sectors. While their wide-ranging training data, spanning everything from social media to scientific papers, provides broad general capability, it limits their effectiveness in specialized domains. This limitation is especially glaring in fields that demand precise, deep knowledge, such as nuclear physics or complex legal systems. It’s akin to using a Swiss Army knife when what you really need is a surgeon’s scalpel.
In contrast, specialized fields like nuclear engineering demand custom-tailored AI solutions. Publicly-available LLMs lack the precision needed to handle the nuanced language, complex protocols, and critical safety standards inherent in these areas. Custom-built AI tools go beyond mere language comprehension; they become repositories of essential field-specific knowledge, imbued with the necessary legal norms, safety protocols, and operational parameters. By focusing on specialized AI, we pave the way for more reliable and precise tools, moving beyond the “Swiss Army knife” approach to meet the unique demands of specialized sectors.
LLMs are Swiss Army knives in that they are great at a multitude of tasks; that very breadth works against their utility in a field like nuclear, where nuance is everything.
The Swiss Army Knife In Action
Below is a typical response from a public chatbot to a plant-specific question. Information about this site has been widely available online since well before 2022, and the plant was commissioned in 1986.
As the response shows, the generic information provided by this publicly available model is not precise enough for experts to rely on. To answer the question above, the model needs to be adapted to a specific domain.
Adapting general models to a specific domain is not easy, however. Challenges include:
Financial and Technical Hurdles in Fine-Tuning – Fine-tuning public models is a costly affair. Beyond the financial aspect, modifications risk destabilizing the intricate instruct/RLHF tuning, a nuanced balance established by experts.
Data Security: A Custodian Crisis – Public models weren’t built with high-security data custodianship in mind. This lack of a secure foundation poses risks, especially for sensitive information.
A Dead End for Customization – Users hit a brick wall when it comes to customizing these off-the-shelf models. Essential access to model weights is restricted, stifling adaptability and innovation.
Stagnation in Technological Advancement – These models lag behind, missing out on newer AI developments like RLAIF, DPO, and soft prompting. This stagnation limits their applicability and efficiency in evolving landscapes.
The Impossibility of Refinement and Adaptation – Processes integral to optimization, such as model pruning, knowledge distillation, and weight sharing, are off the table. Without them, the models remain cumbersome and incompatible with consumer-grade hardware.
NuclearN
NuclearN specializes in AI-driven solutions tailored for the nuclear industry, combining advanced hardware, expert teams, and a rich repository of nuclear information to create Large Language Models (LLMs) that handle complex material with precision. Unlike generic LLMs, ours are fine-tuned with nuclear-specific data, allowing us to automate a range of tasks from information retrieval to analytics with unparalleled accuracy.
What makes our models better than off-the-shelf LLMs?
NuclearN’s Large Language Models (LLMs) are trained on specialized nuclear data and are transforming several core tasks within the nuclear industry, leveraging a vast knowledge base and an advanced understanding of nuclear-specific context and processes. When expertly trained with the right blend of data, algorithms, and parameters, these models can facilitate a range of complex tasks and information-management functions with remarkable efficiency and precision.
NuclearN is training our LLMs to enhance several core functions:
Routine Question-Answering: NuclearN trains LLMs on a rich dataset of nuclear terminology, protocols, and safety procedures. They offer accurate, context-aware answers to technical and procedural questions, serving as a reliable resource that reduces research time and minimizes human error.
Task-Specific and Site-Specific Fine-Tuning: Even though our LLMs are trained to be nuclear-specific, different sites can have very specific plant designs, processes, and terminology. Tasks such as engineering evaluations or work-instruction authoring may be performed in a style unique to the site. NuclearN offers private, secure, site- and task-specific fine-tuning of our LLMs to meet these needs and deliver unparalleled performance.
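Mechanically, parameter-efficient approaches such as LoRA are a natural fit for private, site-specific adaptation, since only small adapter weights are trained and stored alongside the base model. A hedged sketch using the open-source Hugging Face transformers and peft libraries; the base model, data, and hyperparameters are placeholders, not NuclearN's actual training setup:

```python
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "EleutherAI/pythia-160m"  # stand-in; our base models are not public
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small adapter matrices instead of all weights -- cheap enough
# to fine-tune privately per site without modifying the base model.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16,
                                         task_type="CAUSAL_LM"))

site_docs = Dataset.from_dict({"text": [
    "CR-2023-0412: 'A' EDG output breaker failed to close during surveillance...",
]})  # in practice: a corpus of site procedures, CRs, and evaluations
tokenized = site_docs.map(lambda b: tok(b["text"], truncation=True), batched=True)

Trainer(
    model=model,
    args=TrainingArguments("site_adapter", num_train_epochs=1,
                           per_device_train_batch_size=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
```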
Neural Search: The search capabilities of our LLMs go beyond mere keyword matching. They understand the semantic and contextual relationships between different terminologies and concepts in nuclear science. This advanced capability is critical when one needs to sift through large volumes of varied documents—be it scientific papers, historical logs, or regulatory guidelines—to extract the most pertinent information. It enhances both the efficiency and depth of tasks like literature review and risk assessment.
Document Summarization: In an industry awash with voluminous reports and papers, the ability to quickly assimilate information is vital. Our LLMs can parse through these lengthy documents and distill them into concise yet comprehensive summaries. They preserve key findings, conclusions, and insights, making it easier for professionals to stay informed without being overwhelmed by data.
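A common pattern for summarizing documents longer than a model's context window is map-reduce: summarize chunks, then summarize the combined summaries. A sketch with an open-source summarizer (an illustration of the technique, not the NuclearN pipeline; the input file is a placeholder):

```python
from transformers import pipeline

# Illustrative open-source model -- not the NuclearN summarizer.
summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

def summarize_long(text: str, chunk_chars: int = 2500) -> str:
    # Map: summarize fixed-size chunks. Reduce: summarize the concatenation.
    chunks = [text[i:i + chunk_chars] for i in range(0, len(text), chunk_chars)]
    partials = [summarizer(c, max_length=80, min_length=20)[0]["summary_text"]
                for c in chunks]
    combined = " ".join(partials)
    if len(chunks) == 1:
        return combined
    return summarizer(combined, max_length=120, min_length=40)[0]["summary_text"]

long_report = open("lengthy_event_report.txt").read()  # placeholder input
print(summarize_long(long_report))
```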
Trend Analysis from Time-Series Data: The nuclear industry often relies on process and operational data gathered from sensors in the plant to track equipment performance and impacts from various activities. NuclearN is training our LLMs to be capable of analyzing these time-series data sets to discern patterns, correlations, or trends over time. This allows our LLMs to have a significantly more comprehensive view of the plant, which is particularly valuable for monitoring equipment health and predicting operational impacts.
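In its simplest form, this kind of trend detection amounts to a rolling statistic with a drift check; a toy pandas example on synthetic sensor data is below. The LLM-based approach described above goes well beyond this classical baseline, but the baseline shows the shape of the problem:

```python
import numpy as np
import pandas as pd

# Hypothetical hourly bearing-temperature readings with a slow upward drift.
idx = pd.date_range("2023-01-01", periods=24 * 30, freq="h")
temps = pd.Series(60 + 0.01 * np.arange(len(idx))
                  + np.random.default_rng(1).normal(0, 0.5, len(idx)),
                  index=idx)

rolling = temps.rolling("7D").mean()
baseline, latest = rolling.iloc[24 * 7], rolling.iloc[-1]

# Flag a sustained trend when the 7-day average drifts beyond a tolerance band.
if latest - baseline > 2.0:
    print(f"Rising trend: 7-day mean up {latest - baseline:.1f} degF "
          "since week one; worth an equipment-health review.")
```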
By leveraging the capabilities of NuclearN’s specialized LLMs in these functional areas, the nuclear industry can realize measurable improvements in operational efficiency and strategic decision-making.
Stay informed and engaged with everything AI in the nuclear sector by visiting The NuclearN Blog. Join the conversation and be part of the journey as we explore the future of AI in nuclear technology together.
Nuclearn recently released version 1.8 of its analytics and automation platform, bringing major upgrades like neural search for intuitive queries, configurable automation routines, expanded analytics outputs, and enhanced ETL data integration. Together these features, some of them AI-driven, aim to optimize workflows and performance.
Neural Search
The neural search upgrade allows searching based on intent rather than keywords, even with ambiguous queries. Neural algorithms understand semantics, context, synonyms, and data formats. This saves time compared to traditional keyword searches, and provides significant advantages when context-sensitive information retrieval is crucial.
Some of the benefits of neural search include:
Precision of Search Results: Traditional keyword-based searches often yield an overwhelming number of irrelevant results, making it difficult for plant personnel to find the specific information they need. Neural search engines deliver results with ranked relevance: results are based not just on keyword match but on how closely the content of a document matches the intent of the search query.
Contextual Understanding: Boolean queries, typically used in traditional search engines, cannot capture the contextual nuances of the complex technical language found in engineering and compliance documentation. Neural search algorithms have a kind of “semantic understanding” that grasps the context behind a query, returning more relevant results. Neural search also understands synonyms and related terms, crucial when dealing with the specialized lexicon of nuclear, making searches more robust.
Multiple Data Formats: Nuclear plants often store data in different formats, such as PDFs, Word documents, sensor logs, and older, legacy systems. A neural search engine can be trained to understand and index different types of data, providing a unified search experience across multiple data formats.
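Under the hood, neural search of this kind typically embeds both documents and the query into a shared vector space and ranks by similarity. A minimal sketch with the open-source sentence-transformers library, as an illustration of the technique rather than the Nuclearn implementation:

```python
from sentence_transformers import SentenceTransformer, util

# Any embedding model works for illustration; this is not the model
# shipped in the Nuclearn platform.
model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "EDG lube oil pressure low during monthly surveillance run.",
    "Scaffolding erected in RCA without proper work order.",
    "Diesel generator failed to reach rated speed on start demand.",
]
doc_emb = model.encode(docs, convert_to_tensor=True)

# The query shares almost no keywords with the matching documents,
# but semantic similarity still ranks the diesel issues on top.
query_emb = model.encode("emergency diesel won't start", convert_to_tensor=True)
scores = util.cos_sim(query_emb, doc_emb)[0]
for score, doc in sorted(zip(scores.tolist(), docs), reverse=True):
    print(f"{score:.2f}  {doc}")
```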
Selective Classification for Unmatched Automation Accuracy
AutoCAP Screener also saw major improvements in v1.8. You can now set desired overall accuracy levels for automation templates. The Nuclearn platform then controls the confidence thresholds using a statistical technique called “selective classification,” which provides theoretically guaranteed risk controls and ensures the system operates above a user-defined automation accuracy level.
With selective classification, plants can improve automation rates and efficiency without compromising the quality of critical decisions. Risk is minimized by abstaining from acting in uncertain cases. The outcome is automation that consistently aligns with nuclear-grade precision and trustworthiness. By giving you accuracy configuration control, we ensure our AI technology conforms to your reliability needs.
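The core idea behind selective classification is simple: accept a prediction only when its confidence clears a threshold, and choose that threshold from held-out data so the accepted set meets the accuracy target. Here is an empirical sketch on synthetic data; the platform's actual method wraps this in statistical guarantees that are not reproduced here:

```python
import numpy as np

def calibrate_threshold(conf, correct, target_acc=0.95):
    """Pick the loosest confidence threshold whose accepted predictions
    meet the target accuracy on held-out data (empirical version only --
    a production method adds statistical bounds on top of this)."""
    order = np.argsort(-conf)                 # most confident first
    acc = np.cumsum(correct[order]) / np.arange(1, len(conf) + 1)
    ok = np.where(acc >= target_acc)[0]       # prefix sizes meeting the target
    if len(ok) == 0:
        return float("inf")                   # abstain on everything
    return conf[order][ok.max()]

rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, 5000)            # synthetic model confidences
correct = rng.uniform(size=5000) < conf       # higher confidence -> more correct
thr = calibrate_threshold(conf, correct, target_acc=0.95)

automated = conf >= thr
print(f"threshold={thr:.3f}  automation rate={automated.mean():.1%}  "
      f"accuracy on automated={correct[automated].mean():.1%}")
```

Everything below the threshold abstains and falls back to human review, which is exactly the trade the paragraph above describes: automation rate is spent to buy guaranteed accuracy.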
Additionally, multiple quality of life enhancements were added to the AutoCAP audit pages. Users can now sort the audit page results, add filters, integrate PowerBI dashboards with audit results, and even export the automation results to csv. These enhancements make it easier and more flexible for users to assess, evaluate, and monitor the automation system.
Analytics & Reporting Enhancements
On the analytics front, our customers wanted more customization, and v1.8 answers that request with the ability to upload custom report templates. Customers can also change date aggregations in reports to tailor visualizations for specific audiences and uses, while enhanced dataset filtering and exporting allows sending analyzed data to PowerBI or Excel for further manipulation or presentation.
Buckets
Editing analytics buckets is now more flexible too, with overwrite and save-as options. We added the ability to exclude and filter buckets from the visualization more easily and make changes to existing buckets, including their name.
Data Integration
Behind the scenes, ETL (“extract, transform, load”) workflows were upgraded to more seamlessly ingest plant data into the Nuclearn platform. Users can now schedule recurring ETL jobs and share workflows between sites. With smooth data onboarding, you can focus your time on analytics and automation rather than manually uploading data.
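For a sense of the shape of such a job, here is a generic extract-transform-load sketch in Python. The connection string, query, columns, and load step are all placeholders, not the platform's internal job format:

```python
import pandas as pd
import sqlalchemy

# Illustrative connection details only -- not a Nuclearn API.
engine = sqlalchemy.create_engine("oracle+oracledb://user:pass@plant-db/ORCL")

def run_etl_job():
    # Extract: pull the last day's condition reports from the plant database.
    df = pd.read_sql(
        "SELECT cr_id, title, description, created_at "
        "FROM condition_reports WHERE created_at > SYSDATE - 1",
        engine)
    # Transform: normalize the text fields the target dataset expects.
    df["text"] = (df["title"].fillna("") + " "
                  + df["description"].fillna("")).str.strip()
    # Load: hand the records to the platform (CSV upload or API, per setup).
    df[["cr_id", "text", "created_at"]].to_csv("cap_daily_extract.csv",
                                               index=False)

run_etl_job()  # in practice, scheduled to recur (e.g. nightly) by the platform
```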
With advanced search, configurable automation, expanded analytics, and optimized data integration in v1.8, the Nuclearn Platform is better equipped to drive operational optimization using AI-powered technology. This release highlights Nuclearn’s commitment to meaningful innovation that solves real-world needs.
This is a short informational blog that indexes videos explaining Nuclearn’s CAP Automation system.
Navigating to the AutoFlow Screen:
The AutoFlow screen is where the entire CAP Pipeline is configured and visually displayed. It consists of individual decision points in green blocks.
Navigating the Individual Decision Blocks:
The individual decision blocks are where the decision automations are controlled. Thresholds can be set, and automations enabled or disabled, for each decision within the overall decision block.
Navigating the Record Audit Page:
This video shows how to get from the AutoFlow to the audit page.
Explaining the Audit Table:
The record audit page contains a historical record of every issue/CR that has been processed by Nuclearn. All of the information that was available at prediction time is displayed in this table, as well as all of the decisions made by Nuclearn about this record.
Navigating the Screening Decision KPIs:
KPIs are displayed for several metrics that Nuclearn measures from the overall system, including automation efficiency, accuracy, records processed, and more.
Quickly get to the Audit Table:
This video simply shows how to quickly get from the homepage to the audit screen of interest.
Nuclearn v1.7 is our quickest release yet, coming just two months after v1.6! The theme of this release is responding to and delivering on our customers’ evolving needs. In this version we’ve focused on integrating our platform with a nuclear site’s systems, redesigning the user interface, and optimizing our software for increased performance.
Seamless Integration of Customer Platforms with Nuclearn
Over the last year, we have observed a challenge facing several customers: data integrations were taking time and money to develop and deploy, and would sometimes delay projects. To further improve the value to our customers, this release simplifies that integration process between the Nuclearn platform and external application databases. We now have the functionality to extract and transform a site’s data from various databases and load them into Nuclearn data models.
Customers can easily process and manipulate their data through the new job functionality. The feature allows the creation of multi-step jobs to extract, transform, and load data into and from internal and external datasets. Administrators have the flexibility to execute jobs manually or schedule them to run automatically, and they have access to a job status and logs view. Additionally, the ability to create write-back integrations has been added to our roadmap.
User Interface Redesign
One of the major changes in v1.7.1 is simplifying the user experience for common tasks. In previous releases, some common tasks involved dozens of clicks across multiple screens, making it difficult and unintuitive for users. Our new design features a more task-based approach, where key tasks can be performed on a single screen.
The first example of this new approach is the new Dataset Scoring functionality. Users are now walked through the step-by-step process needed to score and analyze a dataset from scratch on a single screen, including selecting a dataset, choosing the appropriate Nuclearn AI model, mapping data to the model inputs, scoring the dataset, and analyzing the outputs.
We’ve also improved the layout on various menus and tables across the platform. Users should see more information about key objects, and not have tables load wider than their screens.
Optimization to increase performance
In v1.7, Nuclearn’s AI models have been optimized! The new models achieve a 10x size reduction and a 2-5x speedup in per-record dataset scoring while maintaining their accuracy. What does this mean for our customers? Faster installation of our products and faster transfer of a site’s data to our platform. The new models are not activated by default, to give customers time to test and convert existing processes, but we strongly encourage enabling them as soon as you can!
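We haven't detailed the optimization techniques here, but gains of this magnitude are typical of methods like quantization and distillation. For flavor, here is a dynamic-quantization sketch in PyTorch on a stand-in model; it is illustrative of the general idea, not how the v1.7 models were actually optimized:

```python
import torch

class TinyClassifier(torch.nn.Module):
    """Stand-in model; the actual Nuclearn models are not shown here."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(768, 512), torch.nn.ReLU(),
            torch.nn.Linear(512, 32))
    def forward(self, x):
        return self.net(x)

model = TinyClassifier().eval()

# Dynamic int8 quantization of Linear layers: one common route to large
# size reductions and faster CPU inference with minimal accuracy loss.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8)

x = torch.randn(1, 768)
print(model(x).shape, quantized(x).shape)  # same interface, smaller model
```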
Nuclearn Platform Release Detailed Notes
V1.7.1
Refinements to the Extract-Transform-Load (ETL) job user interface and validations
Improvements to the Cron job scheduling page appearance
Early release of the Dataset Scoring wizard. Home page defaults to the wizard page now
Action buttons now display the icon only, and expand to display text on hover
Misc frontend appearance tweaks
Updated frontend to v0.5.2
[Bug Fix] Missing Hamburger button and collapse/expand icon
[Bug Fix] Clicking on a dataset row that has a nested json/detail data displays object instead of value
[Bug Fix] Name columns are too narrow and truncate text most of the time in data tables
[Bug Fix] Job schedules breadcrumb in the header is wrong
[Bug Fix] Load dataset job step defaults to unique id column of first dataset in the dropdown
[Bug Fix] Having Load Dataset as last step in an ETL job allows empty names to be saved
[Bug Fix] Cannot create a job until at least one external connection gets created
[Bug Fix] Error encountered during step – Dataset id 0 not found
[Bug Fix] Cannot create a new report with only one bucket present
[Bug Fix] Unable to navigate to Job Schedules page
[Bug Fix] Run job notification message inconsistent
[Bug Fix] Modifying an existing job or creating a new job allows saving with an empty name
Added early version of dataset scoring wizard that guides the user through various steps needed to score a dataset
Frontend ETL job step failure error display
Empty name validation error in job step not specific enough
Datasets table display on some screen sizes displays horizontal scroll bar
User session is now shared across web browser tabs so frontend can be used from multiple tabs at the same time
Change secondary button color to make it easier to distinguish between disabled and clickable buttons
Enable modifying model version config and model base image
Cron job scheduling UI now guides the user through various options
Previous job runs page button alignment
Allow updates to External Connection name
Clicking Run from Update Job screen now executes the job right away and no longer needs two clicks
Updated MRE to v0.6.1
Support running a record through the automation route when it is posted to the upsert source data record
When deleting a job ensure scheduled jobs are deleted so that we don’t have orphans
v1.7.0
Extract-Transform-Load (ETL) job and scheduling functionality now in preview
Extract and transform data from SQL Server and Oracle databases and load into Nuclearn datasets
Setup automatic job execution on a recurring schedule
Simplify integration between Nuclearn and external application database
Brand new model base runtime environment shipped in addition to the traditional one
Enables up to 10x model size reduction and 2-5x speedup of per record dataset scoring
Scheduled to completely replace traditional model base runtime environment in version 1.9
Shipped new versions of WANO and PO&C models based on new model base runtime environment
New models are undeployed by default to give customers time to test and convert existing processes to new models
Enabled support for multiple runtime environments and allowed per model environment assignment
Enabled binding of each Nuclearn user role to separate Active Directory user group
Other misc updates
Updated frontend to v0.5.0
Job functionality now in preview
Added Jobs link to sidebar (preview)
Allows creation of multi-step jobs to extract, transform and load data into and from internal and external datasets
Jobs can be executed manually, or scheduled to run automatically
Job status and logs view added
Added External Connections to the sidebar
Allows Nuclearn to connect to external databases
Current support for Oracle or SQL Server databases only
Azure Active Directory Integration Improvements
Rerun Azure login flow and clear Nuclearn Authz token cache during the Azure AD login procedure or on an HTTP 401 error from the API
Invalidate react query when nuclearn token is expired
[Bug fix] Log off fails under some circumstances
Misc updates
[Bug fix] Hamburger button on collapsed left panel does not display full size panel on click
[Bug fix] Visualization is spelled wrong on the buckets page
[Bug fix] Dataset and bucket lists render poorly on certain screen sizes
[Bug fix] User profile dropdown opens under the automation template toolbar
[Bug fix] Infinite loader on datasets is missing records on display
Change secondary button color to make it easier to distinguish when button can be clicked
Make undeployed models collapsed on Models page
Disable the ability to create new versions for pre-installed models
Only run isUTF8 validator on csv upload when file is below a certain size, display warning that file won’t be validated otherwise
Improved footer appearance
Updated MRE to v0.6.0
Extract-Transform-Load (ETL) job and external connection functionality (preview).
Added APIs to check the status of a job
Added APIs to run a job
Added APIs to store a job
Added APIs to store external connections
Added job scheduler component
Added APIs to create, update or delete job schedules
Enabled support for multiple runtime environments and allowed per model environment assignment
Added API to update model versions
Model versions can be updated to use a different model base runtime environment
If model base runtime environment is not specified, most current one is picked by default
All existing model versions will be updated to use traditional model base runtime environment
It’s been a while since we last posted about a release, so this update is going to cover two minor releases of Nuclearn! Nuclearn Platform v1.5 and v1.6 have been delivering value to our customers over the last 6 months, and we are excited to share some of the new features to the general public. While extensive detailed release notes can be found at the bottom of this post, we want to highlight three enhancements that delivered considerable functionality and greatly enhanced the customer experience.
End to End Assessment Readiness
Prediction Overrides
Enhancements to Automation and Audit
End to End Assessment Readiness
Nuclearn v1.6 gives customers the ability to automate the entire data analysis portion of preparing for an upcoming INPO, WANO or other audit assessment. Customers can now automatically tag each piece of Corrective Action Program data with Performance Objectives & Criteria, perform comprehensive visual analytics highlighting areas for improvement, and generate a comprehensive Assessment Readiness report including supporting data.
We’ve made significant enhancements to our Cluster Analytics Visualizations, including additional options for customization, improved readability, and additional functionality for search, filtering, and interactivity. Once a potential area of concern is discovered, customers can now save the set of selected labels and analytics parameters in a Bucket.
New Report functionality allows customers to generate their own reports within Nuclearn. With v1.6, customers can use the “Automated AFI Report Template” to select multiple Buckets from an Assessment Readiness analysis and automatically generate a comprehensive Assessment Readiness report. These reports are customizable, easily previewed in a browser and can even be downloaded as an editable Word document or pdf file.
Prediction Overrides
v1.6 now allows our customers to override model predictions. Even the best machine learning models are sometimes wrong, and now users have the ability to view and override model predictions for any record. The overridden values can then be used for subsequent analysis and to improve and further fine-tune future models.
Enhancements to Automation and Audit
We’ve made various improvements to the Automation functionality within Nuclearn in v1.6, including a major UI update to the Audit pane. It is now much easier to see what records were automated or manually sampled, view incorrect predictions, and explore automation performance. We have also added the ability to “AutoFlow” a Dataset through an Automation Pipeline, allowing customers with non-integrated Nuclearn deployments to easily produce automation recommendations on uploaded CAP data.
Beyond the most notable items we’ve highlighted, there are plenty more enhancements and bug fixes. Additional details can be found in the release notes below, covering all minor and patch releases from v1.5.0 to v1.6.1.
Nuclearn Platform Release Detailed Notes
v1.6.1
Fixed issue with upgrade script, where RHEL upgrades from 1.5.x to 1.6 would partially fail.
Updated np_app_storage_service to version 0.4.0 to ensure default report template actually ships with platform.
Upgraded MRE to v0.5.1
Added artifact record for AFI Report template.
Updated libreoffice dependencies.
Upgraded frontend to v0.4.1
Fixed bug in “where” filters on analytics, where the filter would update incorrectly.
v1.6.0
Reports functionality now in preview. Automatically generate editable reports from a selection of Buckets.
Major quality of life enhancements to Analytics and Cluster Analytics, reducing workarounds and improving user experience.
Improvements to Automations, including a major UI update to the Audit pane.
Other misc updates.
Updated frontend to v0.4.0
Reports functionality now in preview.
Added reports link to sidebar.
Added ability to generate reports based on a selection of Buckets.
New report template available to generate an AFIs and Strengths report.
Easily preview the report in the browser.
Choose to download the report as an editable .docx or as a .pdf file.
Significant enhancements to Analytics and the Cluster Analytics visualization.
Cluster Analytics visualization enhancements
Added ability to adjust thresholds and colors.
Improved tool tips to add additional information and make them easier to read.
Tooltips now additionally include record count and the detailed “heat” calculation.
Tooltips also added to PO&C badges in the Bucket Details pane.
Added ability to exclude buckets and PO&Cs from the Cluster Analytics visualization. Exclusion pane is now available underneath the chart.
Added ability to search the PO&C labels and descriptions using the magnifying glass icon on the top right of the visualization.
Added ability to reset the zoom/pan on the Cluster Analytics visualization using the expand icon on the top right of the visualization.
Added support to include more than one split date in an analytic.
Added ability to include custom filters in an analytic.
Renamed additional analytics dropdown to “Export”, and renamed options to better reflect what they do.
Included option in Raw Predictions CSV export to choose whether user wants no additional data or all of the input columns for the analytic in the export.
Major UI update in the Automation Audit pane.
It is now much easier to see what records were automated or manually sampled.
Incorrect predictions are now colored red.
If a record is not automated, the fields that were the cause have a “person” icon next to them, indicating the system was not confident enough in the prediction and a human needs to review the record.
When an audit record is expanded in the Automation Audit pane, the predictions now appear at the top of the expansion, as well as the actual values (if available). If there is a mismatch, the prediction is colored red.
“Quality of Life” updates to Automations.
Added the ability to manually “AutoFlow” a Dataset through an Automation pipeline. This functionality is available on the “Overview” pane of an Automation.
Automation Configs now have an option to “Prohibit Duplicate Automation”. When this option is enabled, if the Automation encounters a record UID it has processed before, it returns an HTTP 422 error response.
When creating a new Automation Config, user must select which Model Version they want to use (used to always use the latest model version).
Misc updates.
Upgraded react version to 18.2.
Cleaned up unused code in several source files.
Updated MRE to v0.5.0
Report generation (preview).
Create, update and delete reports and report templates.
Report templates are stored as word documents, using a jinja-like template format.
An unlimited number of buckets can be tied to a report and used to render it.
Rendered reports can be downloaded as docx or pdf.
First report template “AFIs and Strengths Report” added to platform.
Added “artifact” storage capabilities.
Can now create, update, and delete media artifacts.
3 new tables added – report, artifact, and bucketreports.
Various improvements to Automations.
Created a tie between Automation Configs and Model Versions.
During upgrade, existing Automation Configs will be tied to the latest version of the model their parent Automation Config Template is associated with.
When calling the automation route, the Model Version tied to the Automation Config is now used to predict the fields, which may not be the latest version.
Test Runs are also processed and displayed based on the tied Model Version.
Automation Configs can now be configured to prohibit duplicate automation. If the automation route is called with a record uid that has been previously automated by the Automation Config Template, an HTTP 422 response is returned.
Data from a Dataset can now be fed directly to an Automation Template from within the platform by performing an AutoFlow run on a Dataset. Previously an outside script was needed to call the automation api.
Automation Data Records can now be retrieved with the current ground truth Source Data Record.
Various enhancements to Analytics and Datasets.
Improved handling of scoring runs, especially when errors are encountered during scoring. A scoring run can now be canceled by calling the route /datasets/{dataset_id}/cancel-score.
Increased Dataset description maximum length from 300 characters to 1,000.
Platform now ships with a demo Dataset (NRC Event Reports 2019-2020), Analytic Buckets, Automation Template, and associated examples.
Fixed bug where the unique column field would only be stored the first time data was uploaded to a Dataset.
Added benchmark proportion and relative differences to analytic results when a benchmark dataset is configured
When downloading a raw predictions csv for an Analytic, columns used for model inputs and analytic inputs are now included in the download.
Added support for an unlimited number of arbitrary split dates in an Analytic (previously only one was supported).
Misc fixes and improvements.
Upgraded docker image base to ubuntu:22.04.
Removed several old source code files that were no longer being used.
Upgraded target python version from 3.9 to 3.10.
Improved error handling for a variety of different issues
Fixed bug where a corrupted model wheel file could be saved in file cache. MRE will clear the cache and attempt to redownload if a corrupted file is encountered.
v1.5.5
Patched various security vulnerabilities, including:
Forced TLS version >= 1.2
Fixed various content headers
Enabled javascript strict mode on config.js
Updated np_app_proxy to v0.0.3
v1.5.4
Updated MRE to v0.4.5
Added a route to retrieve prediction overrides directly
Patched various python package vulnerabilities
v1.5.3
Scored predictions override now in preview
Dataset viewer now has filtering
Enhancements to application authentication administration
Misc bug fixes and error handling improvements
Updated frontend to v0.3.1
PREVIEW: Added ability to view and override scored predictions
Navigate to override page by clicking on a record in the dataset viewer
Users can view any predictions for any model a source data record has been scored on
Users can override any prediction confidence with a value between 0 and 1
Users can set all non-overridden values for a record to 0 confidence by using the “Set Remaining to No” button
Application authentication enhancements
Added ability for admins to manually update “email_validated” for users on the user page
Added ability for admins to generate a password reset link on the user page
Filters added to dataset view
Users can now filter the records being viewed in the dataset viewer by filtering on any column
Multiple filter conditions can be added
Updated node.js to LTS version 16.18
Updated MRE to v0.4.4
Prediction overrides
New ability to override scored data record predictions via route /datasets/{dataset_id}/override_predictions/{source_uid}/{model_version_id}/{model_label_class_output_name}
Added ScoredDataRecordOverrides table
Added “override_order” and “override_confidence” columns to scored data record predictions that are updated when overrides are made
Added route /datasets/{dataset_id}/prediction_details/{source_uid}/{model_version_id}/{model_label_class_output_name} to get latest predictions
Dataset filters
Added support for filters to /datasets/{dataset_id}/records route
Cleanup logically deleted datasets and associated records
Added API route /datasets/permanent-delete-datasets to clean up logically deleted datasets
Added check to not allow a logical delete of a dataset when it is still being referenced by an automation config template
Added check to not allow a logical delete of an automation config template when it is parent to one or more other automation config templates
Better support for app authentication setup
Added route /auth/password/reset-request-token/ to produce a password reset link
Updated route /user/update-email-validated/ to set email_validated attribute on users to true or false
Misc
Improved performance and memory usage on setup of large scoring jobs by only storing scoring status in shared memory instead of the entire source data record
Improved error handling when duplicate records found in source data record sync
Added additional error handlers to improve error messages
Updated Nuclearn Platform Releases
Increased gunicorn worker timeout to 7,200 seconds from 240
Improved upgrade script to fix issues upgrading within patch versions
Improved nuclearn-save-images.sh to use pigz if installed to decrease zip file creation time
Updated Dependencies
np_app_db updated to version 0.0.3 to patch vulnerabilities
np_app_proxy updated to version 0.0.2 to patch vulnerabilities
np_app_storage_service updated to version 0.2.1 to patch vulnerabilities
modelbase updated to version 0.3.2 to patch vulnerabilities
v1.5.2
Updated MRE to v0.4.2
Fixed bug where analytic csv export was not returning a stream
v1.5.1
Updated MRE to v0.4.1
Fixed bug where only one model would deploy on restart
v1.5.0
Updated Frontend to v0.3.0
Release of Cluster Analytics
Added Cluster Analytics to the dataset analytics screen.
Ability to select a specific slice and time period for viewing.
Interactive cluster analytics displaying labels (PO&C codes), the number of records associated with the label, and a “heat” color based on a weighted average of key metrics.
Interactive cluster label locations are based on semantic similarity of the labels and the records within those labels.
Ability to click on one or more labels to view details, including a time series chart, slice comparison, and specific records.
Added “Buckets” (preview).
Buckets are a specific selection of labels for specific analytic options.
Buckets have a name and description.
Ability to navigate directly to cluster analytics with associated analytic parameters and selected labels by clicking the “Analyze” button on the Bucket list.
Ability to view all available buckets for selected analytic options from the Cluster Analytics pane.
Major updates to the dataset analytics screen.
Default options updated for most analytic options to match recommended values.
Default view of analytic options made much simpler, with only the most commonly adjusted options seen. Advanced options can be selected with a toggle button on the top right.
Added ability to select “Benchmark” datasets in analytic options. Benchmark values are retrieved from the provided dataset and joined onto the analytic results via the predicted label.
Less commonly used analytics viewing options have been consolidated behind an “Additional Analytics” dropdown button.
Significantly reduced need to pass around all analytic parameters for every analytic call, instead using the “Analytic” server-side persistence.
CAP Automation Minor Enhancements
Added automation configuration integration tab with dynamically generated code examples.
Added cumulative time period option to automation configuration KPIs.
Added the ground truth data record details to the automation configuration Audit table.
Automation configuration Audit table now displays accuracy, automated, and manual sample as three separate columns.
Misc Updates
Reorganized sidebar navigation to separate Admin (Models & Users), Analytics (Buckets & Datasets), and Products.
Added sorting to most dropdown selectors.
Added option to log inference request data to a dataset when mapping a dataset to a model version.
Update MRE to v0.4.1
Version 3 of the WANO PO&C Labeler released
New Neural Network architecture implements latest state of the art techniques.
Improved accuracy and better coverage across a wider variety of PO&C codes.
Major updates to analytics
Added ability to include a “benchmark” value in stats analytics. The benchmark value is retrieved from a dataset, and joined onto the stats analytics results by matching predicted labels with a column from the benchmark dataset.
Added ability when running stats analytics to split time periods by an arbitrary date instead of just a week/month/quarter/year.
Stats analytics parameters are now persisted server-side as an “analytic”. This allows the frontend and other integrations to reference analytic parameters by a single ID rather than having to track and pass over a dozen different analytic parameters.
New “Bucket” functionality. Buckets track a set of selected labels and other parameters for an analytics, as well as a name and description. Added ability to create, update, delete and view buckets.
New route to get the source data records related to an analytic and specific label selections.
Added a route to produce a list of WANO PO&C codes and their descriptions as well as x/y coordinates for cluster mapping.
Quality of life improvements to dataset management
Added ability to log the data sent to an “infer” request for a model to a dataset. When datasets are mapped to a model version, the option to log infer requests to that dataset is now included.
The field selected as the source UID when uploading data to a dataset is now saved.
Update MRE to support multiple processes/workers running at the same time. This is the most significant performance improvement to MRE so far.
Updated connection pooling to be process specific.
Updated default number of workers to 8 from 1.
Updated model version deployment on startup to be multi-process safe.
Refactored dataset model scoring to be multi-process safe.
Misc updates
Upgraded target python version from 3.8 to 3.9.
Added more detailed exception handling in various places.
Added custom exception handlers and handle uncaught custom exceptions during route calls. This should reduce the number of nondescript HTTP500 errors.
Added “cumulative” time interval option to automation KPIs.
Added a check to ensure the database and mre are operating in UTC.
Over the last few months, we have been working on developing useful safety analytics for utilities. We’ve seen safety analytics challenges that Nuclear and Non-Nuclear Utilities seem to have in common, specifically: (1) is there a way to analyze future work for potential injury risk and the types of injuries that may occur, and (2) can historically performed work be analyzed, binned, and coded to determine what kinds of safety issues occur on a regular or seasonal basis? After several experiments, trials, and a few new insights, we are excited to share these new Safety Analytics techniques!
Challenges with Safety Programs
Our solutions aim to solve several business problems that are surprisingly common to most utility organizations trying to analyze and improve safety performance. Some of those problems include:
(1) Scheduled safety communications can be too broad and non-specific to the work being performed, which dilutes the safety message and results in a lower safety ‘signal-to-noise’ ratio. Employees end up disregarding communications they learn are irrelevant, and/or spending time on information that is not directly applicable or actionable.
(2) Safety observations may not target the highest value (e.g. most risky or injury prone) activities being performed as that value is unknown or incalculable. Activities being observed may be low-risk, resulting in a confusing message to the frontline.
(3) The causes of upticks in injuries within certain workgroups are often unknown. Managers may see an uptrend in injuries within a certain group and do not have the tools, trend-codes, or identified commonalities needed to address the situation.
Our Approach
We’ve found a set of techniques that solve these business challenges and are pleased to offer customers a value-added safety analysis product. The solution has two parts.
First, we use machine learning models to review and apply a Significant Injury or Fatality (SIF) type based on information published by the Occupational Safety and Health Administration. This allows us to draw from previous injury data to determine the most likely injuries (injury type, location on body) for any work activity. We can apply this model to future work schedules, then bucket by elements like ‘Workgroup’ or ‘Week’. The resulting analytics provide data-driven forecasts for injury risk for which tailored communications can be crafted or observation tasks assigned.
Second, we’ve developed a novel trend code application mechanism that allows us to apply brand new codes without any historically coded data! This method uses recent advancements in Natural Language Processing (NLP) techniques to break a fundamental machine learning paradigm that would otherwise require mountains of historically labeled data to provide accurate results. Using this technique we have been able to create a suite of trend codes based directly on the OSHA Safety and Health Program Management Guidelines. This allows us to analyze safety data in a way that has never been done before, generating new, actionable insights for improving industrial safety.
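One way to apply codes with no historically labeled data is zero-shot classification with an NLI model, where the code descriptions themselves serve as candidate labels. The sketch below shows the general technique, not necessarily Nuclearn's exact mechanism; the model, report text, and labels (paraphrased from OSHA guideline themes) are illustrative:

```python
from transformers import pipeline

# Zero-shot classification: apply labels without any coded training data.
clf = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

report = ("Contractor observed grinding without a face shield near the "
          "turbine deck; stop-work was called and PPE requirements reviewed.")

labels = [
    "hazard prevention and control at multiemployer worksites",
    "management leadership and employee participation",
    "safety and health training",
]
result = clf(report, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{score:.2f}  {label}")
```

Because the labels are plain-language descriptions, adding a brand-new trend code is as simple as adding a new candidate string, which is what makes coding without historical data possible.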
Nuclearn Safety Analysis
These two new approaches come together to deliver our Nuclearn Safety Analysis product.
This PowerBI dashboard shows a forecasted increase in exposure to potential burn injuries, with the ‘maint’ organization particularly exposed to burns due to sulfuric acid sump inspections. The ‘ismc’ org is at particular risk for cuts and lacerations first, with electric shock second. Using these insights, tailored communications would be sent to these groups in a ‘just-in-time’ format to address potential challenges and reduce the risk of a significant injury.
This second PowerBI dashboard shows a high proportion of OSHA.MW.4 (hazard prevention and control at multiemployer worksites), followed by OSHA.PE.2 (correct program deficiencies and identify opportunities to improve). Analyzing over time, we see oscillation in some safety codes as well as seasonal volatility in others.
By leveraging Nuclearn Safety Analysis, utilities can begin taking informed actions to improve industrial safety in ways never before possible:
A safety analyst or communications professional can automatically review and analyze weeks or months of both forward-looking and historical work activities for safety impact. They can use this information to tailor impactful and actionable safety messages that cut through the safety noise and drive results at the organizational level.
Observation program managers can use the forward-looking results to assign observation resources to the riskiest work with the highest likelihood of severe injury.
Front line managers can review tasks for their work groups and adjust pre-job briefs or weekly meetings to put preventative measures in place for the week’s activities.
To learn more about Nuclearn’s Safety Analysis offering and the Nuclearn Platform, send us an email at contact@nuclearn.ai.
CAP Screening automation continues to be adopted across the Nuclear industry. As of April 2022, at least 4 nuclear utilities in North America have implemented or are currently implementing CAP Screening automation, and at least a half dozen more are strongly considering pursuing it in the near future. However, not everyone in the nuclear industry is intimately familiar with the concept, or may only have a partial picture of the scope of CAP Screening Automation. In this post, we will quickly cover the basics of CAP Screening, automation, and the value it can deliver for utilities operating Nuclear Power Plants.
Corrective Action Programs and Nuclear Power Plants
For those unfamiliar with nuclear power operations, every Nuclear Power Plant operating within the US is required by law to run a Corrective Action Program (CAP). In the Nuclear Regulatory Commission’s own words, CAP is:
The system by which a utility finds and fixes problems at the nuclear plant. It includes a process for evaluating the safety significance of the problems, setting priorities in correcting the problems, and tracking them until they have been corrected.
CAP is an integral part of operating a nuclear power plant, and touches almost every person and process inside the organization. It also happens to be a manually intensive process, and costs each utility millions of dollars in labor costs each year to run.
CAP Screening
Screening incoming issue reports is the biggest process component of running a CAP, and is how utilities “…[evaluate] the safety significance of the problems [and set] priorities in correcting the problems…”. The screening process often starts immediately after a Condition Report is initiated, when a frontline leader reviews the report, verifies all appropriate information is captured, and sometimes escalates the issue to operations or maintenance. Next, the Condition Report is sent to either a centralized “screening committee” or to distributed CAP coordinators. These groups review each and every Condition Report to evaluate safety significance, assess priority, and assign tasks. Somewhere between 5,000 and 10,000 Condition Reports per reactor are generated and go through this process each year.
In addition to the core screening, most utilities also screen Condition Reports for regulatory impacts, reportability, maintenance rule functional failure applicability, trend codes, and other impacts. These are important parts of the CAP Screening process, even if they are sometimes left out of conversations about CAP Screening automation.
Automating CAP Screening with AI
Every step in CAP Screening listed above is a manual process. The leader review, screening, and impact assessments are all performed by people. Each of the listed steps has well-defined inputs and outputs, and has been performed more or less the same way for years. This consistency and wealth of historical data makes CAP Screening ripe for automation using artificial intelligence.
Introducing AI-driven automation into the CAP Screening process allows many of the Condition Reports to bypass the manual steps in the process. Before being screened, Condition Reports are instead sent through an AI agent trained on years of historical data that predicts the safety impacts, priorities, etc. and produces a confidence in its predictions. Based on system configuration, Condition Reports with the highest confidence bypass the manual screening process altogether.
In the best implementations, CAP Screening automation also includes sending a small portion of “automatable” condition reports through the manual screening process. This “human in the loop” approach facilitates continuous quality control of the AI by comparing results from the manual process to what the AI would have done. When combined with detailed audit records, the CAP Screening automation system can produce audit reports and metrics that help the organization ensure the quality of their CAP Screening.
Results will vary by utility, but a site adopting CAP Screening automation can expect to automate screening on anywhere between 10% to 70% of their Condition Reports. The proportion of Condition Reports automated is a function of the accuracy of the AI models, the consistency of the historical screening process, and the “risk of inaccuracy” the utility is willing to take. We expect this proportion to continue to increase in the future as AI models improve and CAP programs are adjusted to include automation.
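Put together, the routing logic amounts to: automate when every predicted field clears its confidence threshold, keep a small random sample of automatable reports in the manual path for quality control, and send everything else to humans. A hedged sketch of that control flow; the field names, threshold, and sampling rate are made up for illustration:

```python
import random

def route_condition_report(cr, predict, threshold=0.97, qc_sample_rate=0.05):
    """Illustrative routing only -- thresholds, fields, and the predict()
    model are placeholders, not Nuclearn's implementation."""
    preds = predict(cr)  # e.g. {"significance": ("C", 0.99), "priority": ("3", 0.95)}
    confident = all(conf >= threshold for _, conf in preds.values())

    if confident and random.random() >= qc_sample_rate:
        return "automated", preds         # bypasses the screening committee
    elif confident:
        return "manual-qc-sample", preds  # human screens it; result audited vs AI
    else:
        return "manual", preds            # low confidence: human screens it

demo_predict = lambda cr: {"significance": ("C", 0.99), "priority": ("3", 0.98)}
print(route_condition_report({"text": "Oil spot found under 'B' CCW pump."},
                             demo_predict))
```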
Why are Utilities Interested in CAP Screening Automation?
Correctly implemented, CAP Screening automation is a very high value proposition for a utility. CAP Screening personnel are often highly experienced, highly paid, and in short supply. Reducing the number of Condition Reports that have to be manually screened reduces the number of personnel that have to be dedicated to CAP Screening. Automation also improves the consistency of screening and assignment, reducing rework and reassignments. Automation also eliminates the screening lead time for many Condition Reports, allowing utilities to act more quickly on the issues identified in CAP.
Various Nuclear Power Plants in North America are automating portions of the CAP Screening process using artificial intelligence and realizing the value today. Automated screening is one of the reasons why we believe AI is the promising future of Nuclear CAP. The efficiency savings, improved consistency, reduced CAP-maintain-operate cycle times, and other benefits from CAP Screening automation are too valuable to ignore, and we expect most nuclear utilities to capitalize on CAP Screening automation over the next several years.
Interested in automating the CAP Screening Processes at your plant? Nuclearn offers a commercial CAP Screening Automation software solution leveraging state of the art AI models tailored to nuclear power. Learn more by setting up a call or emailing us at sales@nuclearn.ai
Nuclearn Platform v1.4.0 is by far our biggest release yet! This release brings a lot of functionality we have been excited about for a long time to our customers. While the detailed release notes are quite extensive, there are 4 major enhancements that will interest most customers:
CAP Screening Automation & Workflow Automation General Availability
Improvements to Dataset and Dataset Analytics
Kubernetes Support
Azure AD Authentication Support
CAP Screening Automation & Workflow Automation General Availability
Nuclearn’s Workflow Automation features have been in preview since Q4 2021, and are core to enabling our CAP Screening Automation products. With Nuclearn v1.4.0, these features are now generally available for our customers using our CAP Screening Automation module! This release exposes the capability to build automation templates and configurations via the web interface, making it very easy to set up new CAP Screening Automations.
This release ties the Automation workflows much more closely in with our existing Dataset and Model functionality, making it even easier to deploy, maintain, monitor, and audit CAP Screening Automations. Additionally, the functionality added in this release makes it very easy to apply Nuclearn’s Workflow Automation to other processes beyond CAP Screening!
Improvements to Dataset and Dataset Analytics
v1.4.0 brings many new user experiences and stability enhancements to the Dataset and Dataset Analytics feature added in 1.3. These include a far more intuitive user interface, progress bars for monitoring the status of long-running scoring jobs, more flexible analytic options, and numerous bug fixes. These enhancements should make using Datasets for INPO Evaluation Readiness Assessments or Continuous Monitoring even easier!
Datasets UI Updates
Kubernetes Support
With the release of v1.4.0, Nuclearn is now supported on Kubernetes. As many enterprises move their containerized applications to Kubernetes, this is an important addition to the platform. Nuclearn Platform releases now include a Helm Chart for the entire platform and detailed instructions for configuring Nuclearn on Kubernetes. We have found that our deployments are actually easier to configure and install on Kubernetes, in addition to the horizontal scalability and fault tolerance a Kubernetes deployment provides.
Azure AD Authentication Support
In addition to Active Directory authentication via LDAPS and native application authentication, Nuclearn v1.4.0 includes top-level support for Azure Active Directory (Azure AD) authentication. Customers leveraging Azure AD authentication within Nuclearn are able to SSO into the Nuclearn platform, and easily configure group permissions with Azure AD.
Beyond the most notable items already listed, there are even more miscellaneous enhancements and bug fixes. Additional details can be found in the detailed release notes below.
Nuclearn Platform Release Detailed Notes
v1.4.0
Highlights
Workflow Automation General Availability
Improvements to Dataset and Dataset Analytics
Support for Kubernetes
AzureAD authentication support
Updated Web Frontend To v1.2
Workflow Automation General Availability
Forms for creating, editing and deleting automation templates from scratch
Ability to view parent and child automation templates
Automation template overview page
Improvements to autoflow configuration page
Ability to kick off automation config test runs directly from the frontend
Several fixes to audit details page for correctness and performance
New KPI page for each automation template
Automation template can now either render automation configuration for a model or be a “parent” automation with children automations, and pass global parameters down to children
Improvements to datasets and dataset analytics
Redesigned dataset UI buttons to be more intuitive
Added a progress bar during dataset scoring
Added ability to select whether to include first and last time periods in dataset analytics
Added “week” as an option for time grouping in dataset analytics
Added ability to directly download raw predictions from analytics page
Added ability to download an entire dataset as a csv file
Improved error messages for dataset uploads
Major enhancements to model and model version management
Changed model details routes to no longer be product specific
Standardized and improved model deployment buttons
Added new forms for creating models from scratch
Added new forms for creating new model versions and uploading supporting whl files
Model admin page now uses collapsing cards for models and model versions to make UI easier to navigate
Most API calls related to models migrated from axios to React Query, which will improve performance and enable better UI in the future
Most model React components migrated from legacy classes to React hooks
“Predict Now” can now support models that require more than one input field
Fixed bug where UI would not render if a model did not have an associated model version
Misc
New login page supporting AzureAD authentication
Fixed bug where users had to log in twice after their session timed out
Minor UI fixes so pages fit without scrolling more often
Improved loading icons/UI in many places across the application
Updated Model Engine to v1.3
Workflow Automation General Availability
Many additional routes for supporting Automation Templates, Configs, Audit, and KPIs on the frontend
Added ability to specify parent/child automation config templates
Added ability to provide configuration data for an automation config template
Refactored “Test Runs” to be generated from an automation template, dataset, and model version instead of just a model version
Automation configuration templates can now be tied to a “ground truth” dataset
Accuracy is now calculated and saved on the automation data record rather than calculated on the fly
Added unique constraint on AutomationConfigTemplate names
Removed the maximum name length limit for automation configuration templates
Soft deletes for automation configuration templates
Removed hardcoded product ids and automation configuration template ids from routes and operations
Updated permissions and roles across all automation config routes
Updated testdata routes to still return model labels if no test runs have been completed
Dataset & Analytics Improvements
Added “week” as a valid option for dataset stat timeslices
A central dataset scoring queue is maintained so that multiple requests to score a dataset do not conflict
Added scoring progress route to check scoring progress
Improvements to csv upload validation, including checking for null UIDs, verifying encoding is either UTF-8 or ASCII, and other misc improvements (a sketch of these checks follows this list)
Added route for downloading dataset data as a csv file
Added route for retrieving scored model predictions as a csv file
Added support to dataset stats for including/excluding first and last time periods
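As an aside, the upload checks described above are easy to picture in code. Below is a minimal sketch of null-UID and encoding validation; the “uid” column name and the error messages are assumptions for illustration, not the platform’s actual implementation.

```python
# Illustrative sketch of csv upload validation: encoding and null-UID checks.
# The "uid" column name and error strings are hypothetical, not Nuclearn's.
import csv
import io

def validate_csv_upload(raw: bytes, uid_column: str = "uid") -> list[str]:
    errors = []
    try:
        text = raw.decode("utf-8")  # ASCII is a strict subset of UTF-8
    except UnicodeDecodeError:
        return ["File encoding must be UTF-8 or ASCII"]

    reader = csv.DictReader(io.StringIO(text))
    if uid_column not in (reader.fieldnames or []):
        return [f"Missing required column: {uid_column}"]

    for line_no, row in enumerate(reader, start=2):  # line 1 is the header
        if not (row.get(uid_column) or "").strip():
            errors.append(f"Null UID at line {line_no}")
    return errors
```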
Model Deployment & Orchestration Overhaul
Support for multiple model backends
DockerOrchestrator and KubeOrchestrator added as supported model backends
Configuration for multiple backends provided via mre-config “ModelOrchestration” entry
Disable undeploying models on startup by setting ModelOrchestration -> undeploy_model_on_startup_failure = false
Orchestrators are now mostly “stateless”, and query backends to retrieve model status (see the sketch after this list)
Major improvements to model binary handling
Added routes for creating and deleting model binaries
Better support for uploading new model binaries and tying to model versions
Significant performance improvement in get_model_binary route
Ability to provide pre-signed temporary tokens to model orchestration interfaces so containers can pull binaries from MRE, rather than MRE pushing model binaries to containers directly
Fixed bug where updating an existing model/dataset mapping would fail
Added routes for creating new models and model versions
Changed model deletions to be soft deletes
Removed “basemodel” tables and references as it was no longer used after model refactors
Better GRPC error handling
All model inference operations now support field translations to map input data to required model inputs
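To make the “stateless” orchestrator idea concrete, here is a hypothetical sketch of the pattern using the Docker SDK for Python: rather than caching deployment state, the orchestrator queries the backend as the source of truth. The Orchestrator interface and the model_id label are illustrative assumptions, not the platform’s real classes.

```python
# Hypothetical sketch of the stateless-orchestrator pattern: keep no local
# state and treat the backend (here, the Docker daemon) as the source of truth.
# The Orchestrator protocol and the "model_id" label are illustrative only.
from typing import Protocol

import docker  # Docker SDK for Python


class Orchestrator(Protocol):
    def get_status(self, model_id: str) -> str: ...


class DockerOrchestrator:
    def __init__(self) -> None:
        self._client = docker.from_env()

    def get_status(self, model_id: str) -> str:
        # Query the daemon directly instead of consulting a cached registry.
        containers = self._client.containers.list(
            all=True, filters={"label": f"model_id={model_id}"}
        )
        if not containers:
            return "undeployed"
        return containers[0].status  # e.g. "running", "exited"
```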
Kubernetes support
Support for MRE sitting behind a “/api” root path by setting the “RUNTIME_ENV” environment variable to “K8S”
Added KubeOrchestrator model orchestration interface
Azure AD Support
Azure AD added as a supported authentication provider
Users now have a “username” and a separate “user_display_name”. These are always the same except for users created via AzureAD, since Azure AD does not use email addresses as a unique identifier.
Added functions for syncing user roles with remote authentication providers
Misc
Created new model version entries for the WANO PO&C Labeler and PO&C Labeler
Old model versions are undeployed by default and new model versions are deployed by default
Existing integrations via the APIs may break if they specify v1 of the labeler models
Configuration file location is now set via the “CONFIG_FILE” environment variable
Added support for deprecating and removing API routes
Deprecated routes can be forced to fail by setting the “STRICT_DEPRECTATION” environment variable to “true”
In the near future, the Nuclear Corrective Action Program (CAP) will be sleek, streamlined, and highly efficient, with human participants needed only occasionally to review and deliberate over the most complicated issues requiring their vast experience and wisdom. For everything else, a trained army of CAP AI agents will invisibly process issues, review and alert on trends, assign corrective actions, and take feedback from human coaches via purpose-designed human/AI interfaces.
No longer will a team of humans spend hours upon days analyzing data for trend detection, a Senior Reactor Operator be forced to process another condition report about a cracked sidewalk, or an Engineer be left waiting for a corrective action item to arrive in her inbox. These functions will have been largely automated through the focused application of AI-based technology. Here are the five reasons this future is highly probable, based on both the current state of the Nuclear Industry and leading-edge AI technology.
Cost Savings and Improved Quality
It comes as no surprise to anyone who has worked in the Nuclear Industry that running an effective CAP program is expensive. CAP demands a significant investment in personnel with enough experience to effectively diagnose and resolve the problems encountered in an operating power plant. In practice, this requires either dedicated staffing or rotating employees out of their primary roles to fulfill a CAP function.
Applying intelligent automation to the Screening, Work Generation, and Issue Trending processes is expected to reduce the resources required by approximately 45%.
Beyond reducing the number of people required, AI reduces the total time needed to execute portions of the CAP process. While a human screening team may only be able to process conditions once a day, an AI system can review and screen conditions and issue work items immediately. Getting workable tasks into employees’ hands faster saves money and improves CAP quality.
For issues too complex for AI to handle effectively, a human-in-the-loop system can be employed, in which the AI knows when it is unsure and reaches out for human assistance. Human-in-the-loop reduces the cost of the CAP program while keeping quality the same or better.
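In practice, “knowing when it is unsure” typically comes down to a confidence threshold. Below is a minimal sketch of the routing pattern; the 0.90 cutoff and all function names are illustrative assumptions, not our production logic.

```python
# Minimal sketch of human-in-the-loop routing via a confidence threshold.
# The 0.90 cutoff and all function names are illustrative placeholders.
CONFIDENCE_THRESHOLD = 0.90

def auto_screen(report: str, label: str) -> None:
    print(f"Auto-screened as '{label}': {report}")

def send_to_human_queue(report: str, suggestion: str) -> None:
    print(f"Escalated to human (suggested '{suggestion}'): {report}")

def route_condition_report(report: str, label: str, confidence: float) -> str:
    # Above the threshold the AI acts on its own; below it, a human decides.
    if confidence >= CONFIDENCE_THRESHOLD:
        auto_screen(report, label)
        return "automated"
    send_to_human_queue(report, suggestion=label)
    return "escalated"

route_condition_report("Cracked sidewalk near turbine building", "facilities", 0.97)
route_condition_report("Unexpected RCS pressure transient", "operability-review", 0.62)
```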
Additionally, AI can lower the threshold for issue documentation. Deploying an information extraction AI lets employees capture issues more naturally, using plain language rather than specialized forms. When issues become easier to document, they are documented more often, the overall information flowing into the CAP program increases, and the chance that an issue is corrected grows. AI that immediately evaluates the quality and completeness of a submitted report also enables an automated dialogue with the submitter, encouraging behaviors that promote report quality – adding information, clarifying the issue, correcting spelling – and increasing the effectiveness of the overall CAP program.
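A bare-bones version of that dialogue can be as simple as a few completeness checks that prompt the submitter before a report is accepted. The specific checks below are made-up examples of the idea, not our actual rules.

```python
# Illustrative sketch of automated report-quality feedback at submission time.
# These checks and prompts are assumptions, not the platform's actual rules.
def quality_feedback(report: str) -> list[str]:
    prompts = []
    if len(report.split()) < 15:
        prompts.append("Can you add more detail about what was observed?")
    if not any(ch.isdigit() for ch in report):
        prompts.append("If applicable, include an equipment tag or location number.")
    if report.strip().endswith("?"):
        prompts.append("Please state the condition as an observation, not a question.")
    return prompts

for prompt in quality_feedback("Pump making noise?"):
    print(prompt)
```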
Scale
The most valuable problems to solve are frequently the largest, and CAP and its associated activities are one of the largest opportunities in Nuclear. CAP lies at the heart of the Nuclear Industry and requires participation from almost every trade and profession at each site. The ubiquity of CAP, combined with its savings potential, gives plant operators, industry vendors, and industry research groups an immense incentive to discover and implement ways to make these programs run more sustainably and efficiently. Specialized AI that can automate CAP tasks is top of mind for industry groups such as the Electric Power Research Institute, the Idaho National Laboratory, and various utility in-house teams.
A fortunate side effect of the CAP program is the production of large quantities of high-quality data – data ideal for training the very AI systems that will automate those same functions. Most of this data is captured as free-form text in natural language – language with a specifically Nuclear vocabulary and dialect, but natural language nonetheless. The scale of this data puts it on par with the datasets used by large technology vendors and academic institutions to develop and train the most effective AI systems. Thanks to that scale, these large AI systems can be specialized to operate in the Nuclear domain – increasing performance and effectiveness for the tasks at hand.
Transportability
The most notable AI advancements of the late 2010s centered on natural language AI that can understand human language more naturally and effectively than previously thought possible. Breakthroughs in this area are characterized by the ability of AI to transfer learning from one problem to another: an AI that is good at classifying condition report quality will be better at identifying equipment tags than one trained only to identify equipment tags.
The benefit for the nuclear industry is that an AI system trained at Plant X can transfer its learning to Plant Y and outperform one trained only at Plant Y. This is similar to how a working professional at Diablo Canyon would more easily adapt and apply their knowledge when transferring to Turkey Point than someone who has never worked in the nuclear industry at all. Like a human employee, an AI system benefits from the variety of knowledge in general industry data: once trained on that data, it will learn any one plant’s specifics faster, cheaper, and more easily for any plant wishing to specialize it for automation.
As a result, solutions developed at one site can be shared. With commonly applicable training and similar problems, the industry can solve the big problems once with ‘large’ or ‘hard’ AI, then transport the solution from plant to plant for the benefit of the entire industry.
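The pattern described here is standard transfer learning: start from a model pretrained on broad text, then fine-tune it on one plant’s data. The sketch below uses the Hugging Face transformers library with a generic base model and made-up reports – it shows the shape of the approach, not our actual training pipeline.

```python
# Sketch of transfer learning: fine-tune a pretrained language model on
# plant-specific condition reports. The base model, labels, and data are
# illustrative placeholders, not Nuclearn's actual pipeline.
import torch
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2  # e.g. equipment vs non-equipment issue
)

# A handful of hypothetical "Plant Y" reports; real fine-tuning needs far more.
texts = ["Oil leak observed on EDG 1B", "Procedure step 4.2 wording is unclear"]
labels = [1, 0]
enc = tokenizer(texts, truncation=True, padding=True, return_tensors="pt")

class ReportDataset(torch.utils.data.Dataset):
    def __len__(self):
        return len(labels)
    def __getitem__(self, i):
        item = {k: v[i] for k, v in enc.items()}
        item["labels"] = torch.tensor(labels[i])
        return item

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="plant_y_model", num_train_epochs=1),
    train_dataset=ReportDataset(),
)
trainer.train()  # pretrained weights give a head start over training from scratch
```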
Automated Screening
One of the clearest applications of AI to the CAP process is automating condition screening. Condition screening is the process of reviewing a reported non-standard condition in or around the plant, applying the appropriate tags, codes, or classifications, assigning an owner, and generating the work items that address the condition. At some plants, dedicated groups of senior employees perform this process manually every day; at others, dispersed resources periodically gather to complete screening. In either case, the resources involved are usually senior-level and experienced, and therefore expensive. Even a rough estimate of the resources the industry spends on this process each year shows just how large the opportunity is.
The screening process has certain properties – repeatability and complexity of task, quality of data, scale, cost, and so on – that make it extremely promising for AI-powered automation. That discussion is worthy of a separate blog post…coming soon!
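In the meantime, the core of automated screening can be pictured as plain text classification. The sketch below uses scikit-learn with made-up reports and screening codes – it conveys the shape of the problem, not our production models.

```python
# Illustrative sketch: screening condition reports as text classification.
# The training data and screening codes are made up for demonstration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reports = [
    "Oil leak on emergency diesel generator 1B",
    "Cracked sidewalk near the protected area fence",
    "Valve 2-FW-101 packing leak found during rounds",
    "Tripping hazard from extension cord in hallway",
]
codes = ["equipment", "facilities", "equipment", "facilities"]

screener = make_pipeline(TfidfVectorizer(), LogisticRegression())
screener.fit(reports, codes)

new_report = "Packing leak identified on charging pump suction valve"
print(screener.predict([new_report])[0])           # predicted screening code
print(screener.predict_proba([new_report]).max())  # confidence, usable for HITL routing
```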
Automated Trending
Automated trending is the sequel to Automated Screening – it is what comes after conditions have been identified and actions issued. Normally done ‘cognitively’ or via brute-force search of the condition data, trending is resource-intensive and largely manual. Read Nuclearn’s Nuclear CAP Coding AI – Better Performance at a Lower Cost to find out more about how AI can help automate and simplify the trending task.
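To picture what replaces the brute-force search, the sketch below flags a screening code whose weekly report count spikes above its recent baseline. The data and the two-sigma rule are placeholders; real trending logic would be considerably richer.

```python
# Illustrative sketch of automated trending: flag a screening code when its
# weekly report count spikes above its rolling baseline. The data is made up.
import pandas as pd

# Hypothetical weekly counts of condition reports for one screening code.
counts = pd.Series(
    [4, 5, 3, 6, 4, 5, 14],
    index=pd.date_range("2022-01-03", periods=7, freq="W-MON"),
)

baseline = counts.rolling(window=4).mean().shift(1)  # prior 4-week average
spread = counts.rolling(window=4).std().shift(1)
is_trend = counts > baseline + 2 * spread            # simple two-sigma rule

print(counts[is_trend])  # weeks that would be flagged for human review
```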
Bonus: The Rapid Progress of AI Technology
The five points above are only achievable because of the explosion of progress in the technologies that underpin how AI learns and behaves. The speed with which new AI tools have reached human-level performance on vision and language tasks in recent years is unprecedented. Developing AI that could recognize simple numerical digits at human-level performance took over 10 years; recognizing cats, dogs, cars, and other everyday objects in images took about 5 years; and, most recently, developing AI that can recognize and manipulate human language took only about 2 years.
The accelerating pace of AI advancement shows no sign of slowing anytime soon. This rapid progress, combined with the scale, transportability, and savings potential of CAP, is why Nuclearn can confidently say that AI is the future of Nuclear CAP.