Powering the Future: Amazon and Google’s Investment in SMRs and How NuclearN is Driving AI Integration for a Carbon-Free Tomorrow

NuclearN applauds the groundbreaking investments from Amazon and Google in Small Modular Reactors (SMRs), recognizing how these developments align with the future of clean, sustainable energy. Amazon’s $500 million partnership with Energy Northwest and Google’s collaboration with Kairos Power are significant milestones in the integration of nuclear energy with advanced technology. These partnerships emphasize that nuclear energy, combined with cutting-edge technologies like artificial intelligence (AI), is crucial to powering the infrastructure required for future technological advancements.

As Phil Zeringue, VP of Strategic Partnerships at NuclearN, points out, “These partnerships are critical to powering the future technologies we rely on, and the synergy between nuclear and AI is key to a sustainable, energy-secure future.” Zeringue’s perspective reflects the evolving role of nuclear energy as a foundational pillar for the world’s energy needs, especially as global tech giants like Amazon and Google take action to secure carbon-free power sources for their growing infrastructure.

NuclearN is well-positioned at the intersection of nuclear energy and technology. With over 48 nuclear reactors across North America and Europe currently utilizing our AI-driven tools to enhance their operations, we understand the immense potential that AI holds in transforming how nuclear energy is deployed. We also recognize that Amazon’s and Google’s investment in SMRs goes beyond simply meeting their energy needs; it represents a commitment to long-term sustainability and a clear acknowledgment of the need for innovation to address the global energy crisis.

The Intersection of AI and Nuclear Energy

Nuclear energy has long been regarded as a critical component of achieving a carbon-free future. However, the rise of SMRs provides a more flexible, scalable option for energy generation. Unlike traditional, larger nuclear reactors, SMRs can be deployed in a wider range of locations, require less upfront capital, and offer shorter construction timelines. Yet, despite these advantages, challenges such as human error, design changes, and logistical issues remain. This is where AI comes into play.

At NuclearN, we see AI as a critical enabler in the SMR deployment process. By integrating AI into SMR projects, we can significantly reduce human error, streamline design changes, and improve operational efficiency. Our AI-driven tools are already being used at nuclear sites to automate planning, streamline documentation processes, and enhance safety protocols. These tools allow engineers and operators to focus on building and maintaining SMRs efficiently while minimizing risks.

Phil Zeringue underscores the importance of these technologies, stating, “The integration of AI with SMRs is crucial for enhancing safety, reducing risks, and increasing the overall efficiency of these projects. These technologies not only align with our social impact goals but are also necessary to ensure the reliable, secure, and sustainable energy needed to power the future.”

By leveraging AI, we can address the two largest drivers of delays in SMR construction: human error and design changes. AI solutions can automate repetitive tasks, provide real-time insights for decision-making, and ensure that complex data is handled with precision. This not only improves the safety and efficiency of SMR projects but also reduces costs and keeps projects on schedule.

Amazon and Google: Leading the Charge in SMR Development

The recent announcements from Amazon and Google demonstrate that some of the world’s most innovative companies are betting on nuclear energy to meet their future energy needs. Amazon’s $500 million investment in partnership with Energy Northwest, along with Google’s collaboration with Kairos Power, showcases a growing recognition of SMRs’ potential to provide reliable, carbon-free energy. These companies are not just investing in their own infrastructure—they are signaling to the world that nuclear energy, paired with advanced technology, is a critical solution to the energy and environmental challenges we face.

Zeringue notes, “As more technology companies like Amazon and Google invest in nuclear technologies to power their infrastructure requirements, new nuclear power is needed more than ever. That means a new workforce, new manufacturing, and new tools to shrink the timelines for bringing these crucial assets online.”

These tech giants rely on vast amounts of energy to power their global operations, data centers, and the technologies that billions of people use every day. As they continue to expand their reach, the demand for secure and sustainable energy sources grows. Nuclear energy, particularly SMRs, offers a viable path forward. By investing in these solutions, Amazon and Google are not only securing their own energy futures but also helping to pave the way for the wider adoption of SMRs.

NuclearN’s Role in Supporting the Future of Energy

NuclearN’s mission is to drive forward the advancement of AI technologies alongside scalable energy solutions like SMRs. With our expertise in AI and nuclear energy, we are uniquely positioned to support the growth of SMRs and help meet the world’s increasing energy demands. Our work with 48 nuclear sites across North America and Europe has provided us with valuable insights into how AI can transform nuclear operations. By automating critical processes, improving safety, and reducing human error, we help nuclear facilities operate more efficiently and safely.

Our alignment with companies like Amazon and Google goes beyond shared goals of carbon neutrality. We recognize the importance of building partnerships that drive innovation and ensure a sustainable future. As SMRs become an increasingly important part of the energy landscape, the role of AI will only continue to grow. By integrating AI into SMR projects, we can accelerate the deployment of these reactors and ensure they operate safely and efficiently.

As Phil Zeringue emphasizes, “AI is a major force driving the need for more carbon-free energy. It is not lost on us that we are providing AI solutions to make nuclear power the responsible, reliable, and affordable choice, which can power more AI, which can in turn further improve nuclear. It’s a virtuous cycle we are proud to be at the forefront of.”

Looking Ahead: A Carbon-Free, Energy-Secure Future

The investments from Amazon and Google mark an exciting chapter in the future of nuclear energy. Their leadership in this space highlights the importance of sustainable energy solutions to meet the demands of the modern world. NuclearN remains committed to advancing AI technologies that support the deployment and operation of SMRs, ensuring that the energy needs of the future are met with innovation, safety, and sustainability.

As the energy landscape evolves, it’s clear that partnerships between technology companies and nuclear energy providers are key to driving the development of scalable, carbon-free energy. NuclearN is proud to be part of this movement and looks forward to collaborating with industry leaders to create a cleaner, safer, and more secure energy future for all.



Securing the Future: How NuclearN’s On-Premise AI Solutions Provide Unmatched Security

In industries where security is non-negotiable, the cloud can pose serious risks. From nuclear energy to utilities and beyond, handling sensitive data requires an approach that prioritizes control, compliance, and security at every step. That’s where NuclearN comes in.

At NuclearN, we’re proud to offer Advanced & Generative AI solutions that operate entirely on-premise. Our approach ensures that your organization’s critical data remains within your environment, giving you complete oversight and protection from external threats.

The Risks of Cloud Dependency

While cloud-based AI platforms can offer flexibility, they come with a host of risks—especially for industries handling highly regulated or confidential information. These include:

  • Data breaches and unauthorized access
  • Compliance challenges with industry regulations
  • Vulnerabilities in cloud provider systems

For organizations in the nuclear and utilities sectors, such risks can lead to severe consequences. That’s why NuclearN’s on-premise solutions are the ideal choice for organizations that require the highest level of security.

Why On-Premise AI is the Best Choice for the Nuclear Industry

Our on-premise AI solutions offer a more secure alternative to cloud-based platforms. With NuclearN’s technology, your data never leaves your secure environment, ensuring full control over how it’s stored, accessed, and used.

Here’s why our on-premise solutions stand out:

  • Data Control: You maintain complete ownership of your data, ensuring it’s never exposed to third-party providers.
  • Compliance: Our AI solutions help you meet stringent regulatory requirements, keeping your data safe from both legal and security threats.
  • Security: With no cloud involvement, our AI solutions provide an extra layer of defense, securing sensitive data against cyberattacks and breaches.

NuclearN’s AI Solutions: Built for Security and Efficiency

NuclearN’s AI-powered tools, including AtomAssist and Capitalizer, are designed to optimize workflows while protecting your organization from security threats. Our solutions allow teams to focus on innovation and problem-solving, without worrying about data exposure.

  • AtomAssist: Automates and optimizes outage planning and reporting with AI-powered insights, all while ensuring data stays on your secure servers.
  • Capitalizer: Maximizes financial efficiency by automating expense classification, helping nuclear facilities improve their income statements securely.

A Secure Future with NuclearN

When it comes to securing sensitive data, compromise isn’t an option. NuclearN offers on-premise AI solutions that ensure your data stays safe, compliant, and under your control—so you can focus on driving innovation.

Ready to experience the next level of AI security? Discover how NuclearN’s on-premise AI solutions can transform your organization’s workflows while protecting what matters most.


Learn More About NuclearN: NuclearN.ai

Enhance Your Outage Management with NuclearN

Managing nuclear plant outages is a critical task that demands meticulous planning, precise execution, and rapid responses to unforeseen challenges. At NuclearN, our platform, designed by nuclear engineers for nuclear engineers, offers a comprehensive solution to transform your outage management process, ensuring safety, compliance, and cost-effectiveness.

Comprehensive Outage Schedule Support

NuclearN provides robust schedule support to help you manage and mitigate risks effectively:

  • Identify Risks: Quickly identify high-schedule risk activities by department to prioritize and mitigate potential delays.
  • Accurate Predictions: Obtain precise outage duration predictions to plan effectively.
  • Flexibility in Planning: Run “what-if” scenarios to understand the impact of scope changes and adjust plans proactively.

Financial Support for Optimal Budget Management

Efficient financial management is crucial during outages. NuclearN offers tools to optimize your budget and ensure financial accuracy:

  • Budget Optimization: Strategically reduce online and spring outage O&M charges to accommodate outage expenses.
  • Asset Validation: Ensure all planned outage work orders are correctly classified as fixed assets, preventing financial discrepancies.

Engineering Support for Efficient Operations

NuclearN enhances engineering support by speeding up processes and ensuring compliance:

  • Expedite Changes: Speed up the process for DCNs and temporary alterations to keep projects on track.
  • Compliance Checks: Efficient 50.59 screening for replacing obsolete parts, ensuring regulatory compliance.
  • Scope Review: Accelerate the engineering review of scope, ensuring all aspects are covered in time.

Outage Readiness: Preparing for Success

With NuclearN, your facility is well-prepared before the outage begins, setting the stage for success with fewer resources:

  • Daily Monitoring: See changes to schedule risks daily as tasks are completed, allowing quick adjustments.
  • Impact Analysis: Understand each activity’s percentage impact on the critical path to prioritize effectively.
  • Scenario Planning: Continuously run “what-if” scenarios to adapt to new challenges and changes.
  • Risk Identification: View high-schedule risk activities by group to allocate resources where needed most.

Observation Program for Continuous Improvement

NuclearN supports an efficient observation program to drive continuous improvement:

  • Efficient Observations: Supervisors can conduct faster, lower-friction observations.
  • Real-Time Analysis: Benefit from real-time trending and analysis to detect and address issues promptly.
  • Enhanced Quality: Ensure higher quality and more meaningful observations.

Issue Resolution and Enhanced Operations

NuclearN enables your team to react swiftly and efficiently during outages:

  • Expedite Changes: Speed up DCNs and temporary alterations.
  • Compliance Checks: Efficient 50.59 screening for obsolete parts.
  • Scope Review: Accelerate the engineering review of scope.

Optimize Financial and Engineering Efficiency

NuclearN helps you optimize budgets and ensure accurate financial management:

  • Expense Management: Reduce unnecessary O&M charges.
  • Asset Classification: Ensure accurate classification of planned outage work orders.

Stay ahead with NuclearN and transform your outage management strategy today!

NuclearN v1.9 Release

“At NuclearN, we are committed to continuous innovation. Our goal is to release a new version of our platform every 3 months, ensuring that our customers always have access to the latest advancements in technology and efficiency.”

— Jerrold Vincent & Brad Fox, NuclearN co-founders

The release of NuclearN version 1.9 at the end of 2023 introduced a new product plus new features and enhancements aimed at improving operational efficiency and the user experience for power generating utilities and beyond.


NuclearN Project Genius

The major addition with this release – Project Genius – integrates analytics and intelligence for large and complex projects. By using AI to learn from historical project data, and leveraging Monte Carlo simulations for new projects, Project Genius can automatically identify key project risks and highlight key opportunities for improving schedule, quality and cost.

Project Genius is now being implemented across a customer fleet in the United States, capitalizing on its strength in using Monte Carlo simulations for fleet-wide projects. This feature excels in forecasting uncertain project outcomes, streamlining risk identification, and uncovering opportunities to enhance project schedules, ultimately boosting decision-making and overall project efficiency. For more information about Project Genius, click here.
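
The underlying technique can be illustrated with a short sketch. Assuming a simple serial task list with three-point duration estimates (all names and numbers below are invented, and this is not Project Genius’s actual implementation):

```python
# Minimal Monte Carlo schedule-risk sketch. Tasks, (optimistic, most
# likely, pessimistic) durations in days, and the serial-path assumption
# are all illustrative placeholders.
import random

tasks = {
    "shutdown":    (1.0, 1.5, 3.0),
    "inspection":  (4.0, 6.0, 12.0),
    "maintenance": (8.0, 10.0, 20.0),
    "startup":     (2.0, 2.5, 5.0),
}

def simulate_outage() -> float:
    """Sample one outage duration, assuming tasks run in series."""
    return sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks.values())

runs = sorted(simulate_outage() for _ in range(10_000))
p50, p90 = runs[len(runs) // 2], runs[int(len(runs) * 0.9)]
target = 22.0
overrun = sum(d > target for d in runs) / len(runs)
print(f"P50={p50:.1f}d  P90={p90:.1f}d  P(exceed {target}d)={overrun:.1%}")
```

Running thousands of sampled schedules like this yields a distribution of outcomes rather than a single-point estimate, which is what makes forecasting uncertain project durations and spotting schedule risk tractable.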


Critical vs Non-Critical Field Classification in Automation

This update allows users to classify fields in automation workflows as critical or non-critical, a crucial distinction for prioritizing decisions like condition reporting and significance levels. The platform now tracks accuracy separately for critical and non-critical fields. The changes are reflected in Auto Flow reports and KPIs, enabling a more natural evaluation of results aligned with actual business value and impact.
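
The mechanics of the split metric are simple to illustrate. A minimal sketch, assuming a hypothetical prediction log with criticality labels (none of this is the platform’s actual code or data):

```python
# Hypothetical prediction log: (field, criticality, predicted, actual).
records = [
    ("significance_level", "critical",     "high",  "high"),
    ("adverse_to_quality", "critical",     "yes",   "no"),
    ("department_code",    "non-critical", "MAINT", "MAINT"),
    ("keyword_tag",        "non-critical", "valve", "pump"),
]

def accuracy(group: str) -> float:
    """Share of correct predictions among fields with this criticality."""
    hits = [pred == act for _, crit, pred, act in records if crit == group]
    return sum(hits) / len(hits)

for group in ("critical", "non-critical"):
    print(f"{group} field accuracy: {accuracy(group):.0%}")
```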



Bug Reporter

Our new email-based Bug Reporter captures error information and relevant logs, encrypts them, and creates a downloadable file for users to email to our support team. This simplifies bug reporting, making communication of issues more efficient.
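
The flow can be sketched roughly as follows. This is an illustrative pattern, not NuclearN’s code; a production reporter would encrypt to a vendor-held public key rather than a throwaway symmetric key generated on the spot:

```python
# Sketch of an email-based bug report bundle: capture error details and
# logs, encrypt them, and write a file the user can attach to an email.
import json
import traceback
from cryptography.fernet import Fernet

def build_bug_report(exc: Exception, log_tail: str, path: str = "bug_report.bin") -> str:
    """Bundle error details and recent logs into an encrypted file."""
    payload = json.dumps({
        "error": repr(exc),
        "traceback": "".join(traceback.format_exception(type(exc), exc, exc.__traceback__)),
        "logs": log_tail,
    }).encode()
    key = Fernet.generate_key()            # illustrative: a real product would
    token = Fernet(key).encrypt(payload)   # encrypt to a vendor-held key
    with open(path, "wb") as fh:
        fh.write(token)
    return path

try:
    1 / 0
except ZeroDivisionError as exc:
    print(build_bug_report(exc, "last 100 log lines here..."))
```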



Report Template Updates

We have refined our report templates, enhancing their intuitiveness and user-friendliness, ensuring the valuable data NuclearN provides is more accessible and actionable.

Version 1.9 showcases our continuous innovation and responsiveness to the energy sector’s needs, emphasizing robust, secure solutions that leverage AI and advanced technologies to amplify human expertise. This focus reflects our commitment to precision, safety, and reliability, positioning NuclearN as a leader in operational excellence and forward-thinking energy generation, with safety and efficiency as our guiding principles.



Stay informed and engaged with everything AI in the nuclear sector by visiting The NuclearN Blog. Join the conversation and be part of the journey as we explore the future of AI in power generation together.

How AI is Powering Up the Nuclear Industry 


Sequoyah Nuclear Power Plant 

In an era where digital fluency is the new literacy, Large Language Models (LLMs) have emerged as revolutionary game-changers. These models are not just regurgitating information; they’re learning procedures and grasping formal logic. This isn’t an incremental change; it’s a leap. They’re making themselves indispensable across sectors as diverse as finance, healthcare, and cybersecurity. And now, they’re lighting up a path forward in another high-stakes arena: the nuclear sector.



The Limits of One-Size-Fits-All: Why Specialized Domains Need More Than Standard LLMs

In today’s digital age, Large Language Models (LLMs) like GPT-4 have become as common as smartphones, serving as general-purpose tools across various sectors. While their wide-ranging training data, spanning social media to scientific papers, gives them broad general capabilities, it limits their effectiveness in specialized domains. This limitation is especially glaring in fields that require precise and deep knowledge, such as nuclear physics or complex legal systems. It’s akin to using a Swiss Army knife when what you really need is a surgeon’s scalpel.

In contrast, specialized fields like nuclear engineering demand custom-tailored AI solutions. Publicly available LLMs lack the precision needed to handle the nuanced language, complex protocols, and critical safety standards inherent in these areas. Custom-built AI tools go beyond mere language comprehension; they become repositories of essential field-specific knowledge, imbued with the necessary legal norms, safety protocols, and operational parameters. By focusing on specialized AI, we pave the way for more reliable and precise tools, moving beyond the “Swiss Army knife” approach to meet the unique demands of specialized sectors.

LLMs are Swiss Army knives: good at a multitude of tasks. Paradoxically, that very breadth limits their utility in a field like nuclear, where nuance is everything.


The Swiss Army Knife In Action

Below is a typical response from a public chatbot to a plant-specific question. The information about this site is widely available online and was published well before 2022; the plant was commissioned in 1986.

As the chatbot’s response shows, the generic information provided by this publicly available model is not precise enough for experts to rely on. To answer the question above, the model needs to be adapted to a specific domain.

Adapting general models to be domain-specific is not easy, however. Some of the challenges include:

  1. Financial and Technical Hurdles in Fine-Tuning: Fine-tuning public models is a costly affair. Beyond the financial aspect, modifications risk destabilizing the intricate instruct/RLHF tuning, a nuanced balance established by experts.
  2. Data Security, a Custodian Crisis: Public models weren’t built with high-security data custodianship in mind. This lack of a secure foundation poses risks, especially for sensitive information.
  3. A Dead End for Customization: Users face a brick wall when it comes to customizing these off-the-shelf models. Essential access to model weights is restricted, stifling adaptability and innovation.
  4. Stagnation in Technological Advancement: These models lag behind, missing out on revolutionary AI developments like RLAIF, DPO, or soft prompting. This stagnation limits their applicability and efficiency in evolving landscapes.
  5. The Impossibility of Refinement and Adaptation: Processes integral to optimization, such as model pruning, knowledge distillation, or weight sharing, are off the table. Without these, the models remain cumbersome and incompatible with consumer-grade hardware.


NuclearN

NuclearN specializes in AI-driven solutions tailored for the nuclear industry, combining advanced hardware, expert teams, and a rich data repository of nuclear information to create Large Language Models (LLMs) that excel in both complexity and precision. Unlike generic LLMs, ours are fine-tuned with nuclear-specific data, allowing us to automate a range of tasks from information retrieval to analytics with unparalleled accuracy.


What makes our models better than off-the-shelf LLMs? 

Large Language Models (LLMs) from NuclearN are trained on specialized nuclear data and are transforming several core tasks within the nuclear industry, leveraging a vast knowledge base and an advanced understanding of nuclear context-specific processes. When expertly trained with the right blend of data, algorithms, and parameters, these models can facilitate a range of complex tasks and information-management functions with remarkable efficiency and precision.

NuclearN is training our LLMs to enhance several core functions:

  1. Routine Question-Answering: NuclearN trains its LLMs on a rich dataset of nuclear terminologies, protocols, and safety procedures. They offer accurate and context-aware answers to technical and procedural questions, serving as a reliable resource that reduces the time needed for research and minimizes human error.
  2. Task-Specific and Site-Specific Fine-Tuning: Even though our LLMs are trained to be nuclear-specific, different sites can have very specific plant designs, processes, and terminology. Tasks such as engineering evaluations or work instruction authoring may be performed in a style unique to the site. NuclearN offers private, secure, site- and task-specific fine-tuning of our LLMs to meet these needs and deliver unparalleled performance.
  3. Neural Search: The search capabilities of our LLMs go beyond mere keyword matching. They understand the semantic and contextual relationships between different terminologies and concepts in nuclear science. This advanced capability is critical when one needs to sift through large volumes of varied documents—be it scientific papers, historical logs, or regulatory guidelines—to extract the most pertinent information. It enhances both the efficiency and depth of tasks like literature review and risk assessment.
  4. Document Summarization: In an industry awash with voluminous reports and papers, the ability to quickly assimilate information is vital. Our LLMs can parse lengthy documents and distill them into concise yet comprehensive summaries. They preserve key findings, conclusions, and insights, making it easier for professionals to stay informed without being overwhelmed by data (a generic sketch of this capability follows this list).
  5. Trend Analysis from Time-Series Data: The nuclear industry often relies on process and operational data gathered from sensors in the plant to track equipment performance and impacts from various activities. NuclearN is training our LLMs to be capable of analyzing these time-series data sets to discern patterns, correlations, or trends over time. This allows our LLMs to have a significantly more comprehensive view of the plant, which is particularly valuable for monitoring equipment health and predicting operational impacts.
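
As a rough illustration of the summarization capability in item 4, a few lines of Python with a generic open-source model suffice; the model and report text are illustrative, not a NuclearN-tuned LLM:

```python
# Illustrative only: a generic open-source summarizer, not a
# nuclear-tuned NuclearN model. The report text is invented.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

report = (
    "During the refueling outage, inspection of the reactor coolant pump "
    "identified minor seal degradation. The seal package was replaced per "
    "the approved work order, post-maintenance testing was completed "
    "satisfactorily, and the pump was returned to service with no further "
    "anomalies noted."
)
summary = summarizer(report, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```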

By leveraging the capabilities of NuclearN’s specialized LLMs in these functional areas, the nuclear industry can realize measurable improvements in operational efficiency and strategic decision-making.

Stay informed and engaged with everything AI in the nuclear sector by visiting The NuclearN Blog. Join the conversation and be part of the journey as we explore the future of AI in nuclear technology together. 

Nuclearn v1.8 – Neural Search and Easier Automation

Nuclearn recently released version 1.8 of its analytics and automation platform, bringing major upgrades like neural search for intuitive queries, configurable automation routines, expanded analytics outputs, and enhanced ETL data integration. Together these features, some of them AI-driven, aim to optimize workflows and performance.

Neural Search

The neural search upgrade allows searching based on intent rather than keywords, even with ambiguous queries. Neural algorithms understand semantics, context, synonyms, and data formats. This saves time compared to traditional keyword searches, and provides significant advantages when context-sensitive information retrieval is crucial.

Some of the benefits of neural search include:
Precision of Search Results: Traditional keyword-based searches often yield an overwhelming number of irrelevant results, making it difficult for plant personnel to find the specific information they need. Neural search engines deliver results ranked by relevance: results are based not just on keyword match but on how closely the content of a document matches the intent of the search query.

Contextual Understanding: Boolean queries, which are typically used in traditional search engines, lack the ability to understand the contextual nuances of the complex technical language often found in engineering and compliance documentation. Neural search algorithms have a kind of “semantic understanding” that captures the context behind a query, providing more relevant results. In addition, neural search understands synonyms and related terms, which is crucial when dealing with the specialized lexicon of nuclear, making searches more robust.

Multiple Data Formats: Nuclear plants often store data in different formats, such as PDFs, Word documents, sensor logs, and older, legacy systems. A neural search engine can be trained to understand and index different types of data, providing a unified search experience across multiple data formats. 
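
To make the contrast with keyword search concrete, here is a minimal sketch of embedding-based retrieval using an open-source sentence-embedding model; the model choice and documents are illustrative, not Nuclearn’s index or pipeline:

```python
# Rank documents by semantic similarity to the query rather than by
# keyword overlap. Model and documents are illustrative placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Procedure for auxiliary feedwater pump quarterly surveillance",
    "Condition report: minor oil leak on turbine bearing #3",
    "50.59 screening guidance for replacement of obsolete relays",
]
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(docs, normalize_embeddings=True)

query = "test requirements for AFW pump"   # note: no keyword overlap needed
q_vec = model.encode([query], normalize_embeddings=True)[0]
scores = doc_vecs @ q_vec                  # cosine similarity (unit vectors)
for i in np.argsort(-scores):
    print(f"{scores[i]:.2f}  {docs[i]}")
```

The query shares almost no keywords with the best-matching document, yet the embedding model ranks it first because “AFW pump” and “auxiliary feedwater pump” are semantically close.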

Selective Classification for Unmatched Automation Accuracy

AutoCAP Screener also saw major improvements in v1.8. You can now set desired overall accuracy levels for automation templates. The Nuclearn platform then controls the confidence thresholds using a statistical technique called “selective classification” that enables theoretically guaranteed risk controls. This enables the system to ensure it operates above a user-defined automation accuracy level.


With selective classification, plants can improve automation rates and efficiency without compromising the quality of critical decisions. Risk is minimized by abstaining from acting in uncertain cases. The outcome is automation that consistently aligns with nuclear-grade precision and trustworthiness. By giving you accuracy configuration control, we ensure our AI technology conforms to your reliability needs. 
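
The core idea can be sketched with a simple empirical threshold search on a labeled calibration set. This toy version (synthetic data, plain empirical estimate) stands in for the statistically rigorous risk bounds the platform uses:

```python
# Sketch of selective classification: pick the confidence threshold on a
# labeled calibration set so that accuracy on the items the system
# accepts (automates) stays at or above the target; everything below the
# threshold is deferred to human review.
def pick_threshold(confidences, correct, target_acc=0.98):
    """Lowest confidence we can accept down to while meeting target_acc."""
    pairs = sorted(zip(confidences, correct), reverse=True)
    threshold, hits = None, 0
    for n, (conf, ok) in enumerate(pairs, start=1):
        hits += ok
        if hits / n >= target_acc:
            threshold = conf   # accepting down to here still meets the target
    return threshold

confs   = [0.99, 0.97, 0.95, 0.90, 0.80, 0.60]
correct = [1,    1,    1,    1,    0,    1]
thr = pick_threshold(confs, correct)
print(f"automate when confidence >= {thr}; defer the rest to humans")
```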

Additionally, multiple quality-of-life enhancements were added to the AutoCAP audit pages. Users can now sort audit page results, add filters, integrate PowerBI dashboards with audit results, and even export automation results to CSV. These enhancements make it easier and more flexible for users to assess, evaluate, and monitor the automation system.

Analytics & Reporting Enhancements

On the analytics front, our customers wanted more customization. v1.8 answers that request with the ability to upload custom report templates. In addition, customers can change date aggregations in reports to tailor visualizations for specific audiences and uses. Enhanced dataset filtering and exporting also allow sending analyzed data to PowerBI or Excel for further manipulation or presentation.

Buckets

Editing analytics buckets is now more flexible too, with overwrite and save-as options. We added the ability to exclude and filter buckets from the visualization more easily and to make changes to existing buckets, including their names.

Data Integration

Behind the scenes, ETL (extract, transform, load) workflows were upgraded to ingest plant data into the Nuclearn platform more seamlessly. Users can now schedule recurring ETL jobs and share workflows between sites. With smooth data onboarding, you can focus your time on analytics and automation rather than manually uploading data.
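
For readers unfamiliar with the pattern, a generic recurring ETL job looks roughly like the sketch below; the stubs and the third-party `schedule` library stand in for real plant-data connectors and are not the Nuclearn platform’s API:

```python
# Generic recurring ETL sketch: extract records, transform them, and
# load them into an analytics store on a nightly schedule.
import time
import schedule

def extract():
    # Stub: replace with a query against the plant's source system.
    return [{"id": 1, "desc": " Pump seal replaced "}]

def transform(row):
    # Stub: normalize whitespace and case before loading.
    return {**row, "desc": row["desc"].strip().lower()}

def load(rows):
    # Stub: insert into the analytics platform.
    print(f"loaded {len(rows)} rows")

def run_etl():
    load([transform(r) for r in extract()])

schedule.every().day.at("02:00").do(run_etl)  # recurring nightly job
while True:
    schedule.run_pending()
    time.sleep(60)
```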

With advanced search, configurable automation, expanded analytics, and optimized data integration in v1.8, the Nuclearn Platform is better equipped to drive operational optimization using AI-powered technology. This release highlights Nuclearn’s commitment to meaningful innovation that solves real-world needs.

5 Reasons AI is the Promising Future of Nuclear CAP

In the near future, the Nuclear Corrective Action Program (CAP) will be sleek, streamlined, and highly efficient, with human participants required only occasionally to review and deliberate over the most complicated issues, those demanding their vast experience and wisdom. For everything else, a trained army of CAP AI agents will invisibly process issues, review and alert on trends, assign corrective actions, and take feedback from human coaches via purpose-designed human/AI interfaces.

No longer will a team of humans be subject to hours upon days of analysis for trend detection, a Senior Reactor Operator forced to process another condition report about a cracked sidewalk, or an Engineer left waiting for a corrective action item to be issued to her inbox. These functions will have been largely automated with the focused application of AI-based technology. Here are the five reasons this future is highly probable, based on both the current state of the Nuclear Industry and leading-edge AI technology.

Cost Savings and Improved Quality

It comes as no surprise to anyone that has worked in the Nuclear Industry that running an effective CAP program is expensive. CAP demands a significant investment into human resources that have adequate experience to effectively diagnose and resolve the problems experienced in an operating power plant. In practice, this requires either dedicated staffing or rotating employees out of primary roles to fulfill a CAP function.

By applying intelligent automation to the Screening, Work Generation, and Issue Trending processes, a resource reduction of approximately 45% is expected.

Beyond reducing the number of resources required, AI reduces the total amount of time required to execute portions of the CAP process. While a human screening team may only be able to process conditions on a daily basis, an AI system can review and screen conditions and issue work items immediately. Getting workable tasks into the hands of employees more quickly saves money and improves CAP quality.

For issues that may be too complex for AI to handle effectively, a human-in-the-loop system can be employed, in which the AI knows when it is unsure and can reach out for human assistance. Using a human in the loop reduces the cost of the CAP program while keeping quality the same or better.

Additionally, AI can lower the threshold for issue documentation. Deploying an information-extraction AI lets employees capture issues more naturally using natural language, without filling out specialized forms. When issues become easier to document, they are documented more often, the overall information input into the CAP program increases, and the chance an issue is corrected becomes greater. AI that immediately evaluates the quality and completeness of a submitted report enables automated dialogue with the submitter, encouraging behaviors that promote report quality, such as adding information, clarifying issues, and correcting spelling, and thereby increasing the effectiveness of the overall CAP program.

Scale

The most valuable problems to solve are frequently the largest, and CAP and its associated activities are one of the largest opportunities in Nuclear. CAP lies at the heart of the Nuclear Industry and requires participation from almost every trade and profession at each site. The ubiquity of CAP, combined with its savings potential, provides an immense incentive for plant operators, industry vendors, and industry research groups to discover and implement ways to make these programs run more sustainably and efficiently. Specialized AI that can automate these tasks is top of mind for industry groups such as the Electric Power Research Institute, Idaho National Laboratory, and various utility in-house teams.

A fortunate side effect of the CAP program is the production of large quantities of high-quality data – data ideal for training the AI systems that will be used to automate the same functions. Most of this data is captured in free-form text as natural language. Language with a specific Nuclear vocabulary and dialect, but natural language nonetheless. The scale of this data puts it on par with the datasets utilized by the large technology vendors and academic institutions to develop and train the most effective AI systems. Thanks to the scale of Nuclear CAP data, these large AI systems can be specialized to operate in the Nuclear domain – increasing performance and effectiveness for the tasks at hand.

Transportability

The most notable advancements in AI technology in the late 2010s centered on natural language-based AI, which can understand human language more naturally and effectively than previously thought possible. Breakthroughs in this area are characterized by the ability of AI to transfer learning from one problem to another: an AI that is good at classifying condition report quality will identify equipment tags better than one trained only to identify equipment tags.

The benefit for the nuclear industry is that an AI system trained at Plant X will be able to transfer its learning to Plant Y and be more performant than one trained at Plant Y alone. This is similar to how a working professional at Diablo Canyon would adapt and apply their knowledge more easily when transferring to Turkey Point than someone who has never worked in the nuclear industry. Like a human employee, an AI system benefits from the variety of knowledge obtained from general industry data: once trained on that data, it is faster, cheaper, and easier to specialize for any plant wishing to use it for automation.
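
The pattern behind this claim is standard transfer learning: start from a checkpoint pretrained on broad industry text, then fine-tune briefly on plant-specific examples. A minimal sketch, in which the base checkpoint, labels, and condition report texts are generic placeholders rather than an actual industry model:

```python
# Transfer-learning sketch: load a pretrained checkpoint, then run a
# brief plant-specific fine-tune. All names and data are illustrative.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Stand-in base checkpoint; a real "industry" model would already be
# fine-tuned on fleet-wide CAP text before this plant-specific step.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

texts = ["CR: EDG output breaker failed to close during surveillance",
         "CR: crack observed in sidewalk near turbine building"]
labels = torch.tensor([1, 0])     # 1 = adverse to quality (illustrative)

batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
opt = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):                # brief plant-specific fine-tune
    loss = model(**batch, labels=labels).loss
    loss.backward()
    opt.step()
    opt.zero_grad()
```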

As a result, solutions developed at one site will be able to be shared. With commonly applicable training and similar problems, the industry can work to solve the big problems once with ‘large’ or ‘hard’ AI, and transport the solution from plant to plant for the benefit of the entire industry.

Automated Screening

One of the more specific solutions apparent when applying AI to the CAP process is the automation of the condition screening process. Condition screening is the process of reviewing a received report of a non-standard condition in or around the plant, then applying certain tags, codes, or classifications, assigning an owner, and generating the appropriate work items to address the condition. For some plants, this process involves dedicated groups of senior employees who work daily to perform it manually. For others, it involves dispersed resources periodically gathering to complete screening. In either case, the resources are usually senior-level and experienced, and thus expensive. Estimating the resources the industry spends on this process each year illustrates just how large the opportunity is.

The screening process has certain properties (repeatability and complexity of the task, quality of data, scale, cost, and so on) that make it extremely promising for AI-powered automation, a discussion worthy of a separate blog post…coming soon!

Automated Trending

Automated trending is the sequel to Automated Screening – it’s what comes after the conditions have been identified and actions issued. Normally done ‘cognitively’ or via brute force search of the condition data, trending is resource-intensive and largely manual. Read Nuclearn’s Nuclear CAP Coding AI – Better Performance at a Lower Cost to find out more about how AI can help automate and simplify the trending task.

Bonus: The Rapid Progress of AI Technology

The five points above are only achievable due to the explosion in the progress of the various technologies that underpin how AI learns and behaves. The speed with which new AI tools have reached human-level performance on vision and language tasks in recent years is unprecedented. Developing AI that can recognize simple numerical digits at human-level performance took over 10 years; recognizing cats, dogs, cars, and other everyday objects in images took about 5 years. More recently, developing AI that can recognize and manipulate human language took only about 2 years.

The accelerating pace of AI advancements shows no sign of stopping anytime soon. This type of rapid advancement, combined with the scale, transportability, and savings of CAP, allows Nuclearn to confidently say AI is the future of Nuclear CAP.

 

DARSA: The Guide to Full Process Automation Using AI

You don’t automate right away…

Process automation using Artificial Intelligence is a complex endeavor. To successfully automate a process, automation systems and their implementers need to effectively incorporate complex technologies, a deep understanding of the business processes, risk-based decision making, and organizational change management all at once! This challenge can feel insurmountable to many organizations looking to start adopting AI-driven process automation. And unfortunately for some, it has proven to be so.

Luckily, there are battle-tested methods for bringing an automation system to life and avoiding the potential pitfalls. Here at Nuclearn, we have developed a project implementation process we call DARSA that helps guide us through automation projects. DARSA helps us deliver maximum value with minimal risk by leveraging an iterative, agile approach to AI-driven automation.

So what is DARSA? DARSA is a five-step linear process that stands for Decisions-Data-Direction, Assess, Recommend, Semi-Automate, and Automate. Each step in DARSA is a distinct phase with distinct characteristics, and transitions between these phases are planned explicitly and usually require system changes. To learn more about DARSA, we must dive into the first phase: “Decisions, Data and Direction”.

1) Decisions – Data – Direction

Before starting AI-driven automation, it is important to specify several key factors that will guide the project. These items are Decisions, Data, and Direction.

Decisions are the first and most critical item to define early in an AI-driven automation project. At the end of the day, if you are embarking on an AI-driven automation project, you are doing so because you need to automate challenging decisions that currently require a human. If there is no decision to be made, then AI will not help automation in any meaningful way. If the decisions are trivial or based on simple rules, there is no need for AI; those processes should be automated with traditional software. So the first, essential part of an AI-driven automation project is identifying and defining the decisions to be automated.

Data is the next item that must be specified. An AI-driven automation project needs to identify two key sets of data: the input data being used to influence decisions, and the data created as a result of those decisions. These are critical, as AI-driven automation relies heavily on machine learning. To learn how to make decisions automatically, machine learning requires historical data. For each decision, there should be a detailed log of all data used at decision time, and a log of all historical human decisions.
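
Concretely, each logged decision should pair the inputs available at decision time with the human decision that followed. A hypothetical record might look like this (all field names are illustrative assumptions):

```python
# Hypothetical decision-log record: decision-time inputs paired with the
# human decision, ready to serve as a machine-learning training example.
decision_log_row = {
    "decision_id": "CR-2021-00431-screening",
    "inputs": {
        "cr_text": "Oil spot observed under charging pump 1B",
        "equipment_tag": "CHG-P-1B",
        "reported_by": "operations",
    },
    "human_decision": {"adverse_to_quality": False, "owner": "maintenance"},
    "decided_at": "2021-03-14T09:30:00Z",
}
```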

Direction. AI projects must begin with clear direction, but unlike large traditional software projects, AI-driven automation projects cannot begin with a detailed project plan that lays out requirements gathering and design up front. AI and machine learning are notoriously unpredictable: even the most experienced practitioners have trouble predicting how well models will perform on new datasets and new challenges. Automation systems often have to evolve around the unpredictable strengths and weaknesses of the AI. As a result, it is important to specify a clear direction for the project. All members of the project should be aligned with this direction and use it to guide their iterations and decisions. For example, in an AI-driven automation system for helping generate Pre-Job Brief forms, the direction might be “Reduce the amount of time required to assemble Pre-Job Briefs while maintaining or improving their quality.” This simple statement of direction goes a long way toward bounding the project scope, ruling out unacceptable system design decisions, and fostering innovation.

2) Assess

Once the Decisions, Data and Direction are specified, the most important factors for success in an AI-Driven automation project are fast iterations and good feedback. That is why the second phase in DARSA is “Assess”. During this stage of the project, nothing is actually automated! The “automation” results are being shared with subject matter experts, so they can assess the results and provide feedback. Automation system designers and Data Scientists are already quite familiar with testing how well their system works via various traditional methods. While these can help with generating accuracy metrics (the model is 95% accurate!), these methods are quite poor at evaluating exactly where the automations will fail or be inaccurate, why they are that way, and what the impacts are. The Assess phase is often where important risks and caveats are identified, and where additional considerations for the project are discovered.

Let’s take for example my experience with a project attempting to automate the screening of Condition Reports (CR) at a Nuclear Power Plant. One of the key decisions in screening a CR is determining whether the documented issue has the potential to affect the safe operation of the plant, often referred to as a “Condition Adverse to Quality”. Before even showing our AI model to users, my team had produced some highly accurate models, north of 95% accurate! We knew at the time that the human benchmark for screening was 98%, and we figured we were very close to that number, surely close enough to have successful automation. It was only after going through the “Assess” phase that we learned from our subject matter experts that we had missed a key part of the automation.

We learned during the Assess phase that not all Condition Reports are the same. In fact, there was a drastically asymmetric cost associated with wrong predictions. Overall accuracy was important, but what would make or break the project was the percentage of “Conditions Adverse to Quality” that we incorrectly classified as Not Conditions Adverse to Quality. The reverse error (classifying Not Adverse to Quality as Adverse to Quality) had a cost associated with it: we might end up performing some unnecessary paperwork. But a few high-profile errors the other way, and we would potentially miss safety-impacting conditions, undermining regulatory trust in the automation system and the CAP Program as a whole.

As a result, we made some fundamental changes to the AI models, as well as the automation system that would eventually be implemented. The AI models were trained to be more conscious of higher-profile errors, and the automation system would take into consideration “confidence” levels of predictions, with a more conservative bias. A thorough Assessment phase reduced the risk of adverse consequences and ensured any pitfalls were detected and mitigated prior to implementation.

3) Recommend

After the Assess phase, Recommendation begins. The Recommend phase typically involves providing the AI results to the manual task performers in real-time, but not automating any of their decisions. This stage is often very low risk – if the AI system is wrong or incorrect there is someone manually reviewing and correcting the errors, preventing any major inaccuracies. This is also the first stage that realizes delivered value of an AI-driven automation system.

Increased manual efficiency is often recognized as a benefit in the Recommend phase. In the majority of cases, it is physically faster for someone to perform a review of the AI’s output and make small corrections versus working the decision task from start to finish. Paired with a proper Human/AI interface, the cognitive load, manual data entry, and the number of keystrokes/mouse clicks are drastically reduced. This helps drive human efficiencies that translate to cost savings.

The Recommendation phase also permits the capture of metrics tracking how well the automation system is performing under real-world use. This is absolutely critical if partial or complete automation is desired. By running in a recommendation setup, exact data about performance can be gathered and analyzed to help improve system performance and gain a deeper objective understanding of your automation risk. This is important for deciding how to proceed with any partial automation while providing evidence to help convince those skeptical of automation.

This stage may last as short as a few weeks or as long as several years. If the automation system needs additional training data and tweaking, the Recommend phase provides a long runway for doing so safely. Since the system is already delivering value, there is less pressure to reach additional levels of automation. On some projects, this phase may provide enough ROI on its own that stakeholders no longer feel the need to take on the additional risk of partial or complete automation.

4) Semi-Automate

The next step in DARSA is “Semi-Automate”. This is the first stage to both fully realize the benefits and risks of automation. This phase is characterized by true automation of a task – but only for a subset of the total tasks performed.

The metrics gathered in the Recommend phase play a key role here, as they inform which parts of the task are acceptable for automation. As the system encounters different inputs and situations, total system confidence will vary. Based on this confidence, among other metrics, automation can be implemented as a graded approach: low-risk, high-confidence tasks are usually automated first, and as the system continues to learn and stakeholder confidence improves, higher-risk automations can be turned on.

For example, take a system intended to automate the planning and scheduling functions required for Nuclear Work Management. Such a system would begin to partially automate the scheduling of work activities that have low safety and operational impacts, and the planning of repetitive activities that have little historical deviation in the execution work steps. These activities are low-risk (if something goes wrong there are minimal consequences) and high-confidence (the AI has lots of previous examples with defined conditions).
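
In its simplest form, the graded approach reduces to a per-risk-tier confidence gate, as in the sketch below; the tiers and thresholds are illustrative policy choices, not fixed values:

```python
# Sketch of a graded automation gate: automate only when predicted
# confidence clears the threshold assigned to the task's risk tier.
THRESHOLDS = {"low": 0.90, "medium": 0.97, "high": 1.01}  # high: never automate

def disposition(risk_tier: str, confidence: float) -> str:
    return "automate" if confidence >= THRESHOLDS[risk_tier] else "human review"

print(disposition("low", 0.93))      # automate
print(disposition("medium", 0.93))   # human review
print(disposition("high", 0.999))    # human review (threshold unreachable)
```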

During semi-automation, it is prudent to still have a manual review of a portion of automated tasks to monitor model performance, as well as provide additional training data. Without manual review, there is no longer a “ground truth” for items that have been automated. This makes it challenging to know whether the system is working well! Additionally, AI performance may begin to stagnate without the inclusion of new training examples, similar to how human performance may stagnate without new learnings and experiences.

5) Automate

The final phase that every automation system aims to achieve: complete automation. This phase is characterized by the complete automation of all tasks planned in the scope of the project. The system has been running for long enough and has gathered enough data to prove that there is no human involvement necessary. From this point forward, the only costs associated with the task are the costs associated with running and maintaining the system. Complete automation is more common with tasks that have a lower overall level of risk, yet require a lot of manual effort without automation. The most common example of this in the Nuclear Industry today is automated Corrective Action Program trend coding.

Some movement back and forth between partial and complete automation should be expected. A common case is when the automated task or decision changes and the automation system hasn’t yet learned what those changes are. Some amount of manual intervention will be needed until the system learns the new changes and full automation can be turned back on. An example would be a trend-coding automation system in which the “codes” or “tags” applied to the data are altered.

Start Using DARSA

DARSA provides a proven roadmap to designing, building, implementing, and iterating an AI-driven automation system. Using this process, organizations embarking on the development of new automation systems can deliver maximum value with minimal risk, using a methodology appropriate for modern AI in practical automation applications.

Visit https://nuclearn.ai to learn more about how Nuclearn uses DARSA to help Nuclear Power Plants achieve AI-driven automation.