NuclearN v1.9 Release

“At NuclearN, we are committed to continuous innovation. Our goal is to release a new version of our platform every 3 months, ensuring that our customers always have access to the latest advancements in technology and efficiency.”

— Jerrold Vincent & Brad Fox, NuclearN co-founders

The release of NuclearN version 1.9 at the end of 2023 introduced a new product plus new features and enhancements aimed at improving operational efficiency and the user experience for power generating utilities and beyond.


NuclearN Project Genius

The major addition with this release – Project Genius – integrates analytics and intelligence for large and complex projects. By using AI to learn from historical project data, and leveraging Monte Carlo simulations for new projects, Project Genius can automatically identify key project risks and highlight key opportunities for improving schedule, quality and cost.

Project Genius is now being implemented across a customer fleet in the United States, where its Monte Carlo simulation capabilities are applied to fleet-wide projects. The feature excels at forecasting uncertain project outcomes, streamlining risk identification, and uncovering opportunities to improve project schedules, ultimately strengthening decision-making and overall project efficiency.
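To illustrate the kind of Monte Carlo schedule forecasting described above, here is a minimal sketch (not Project Genius's actual implementation; the task names and three-point estimates are hypothetical): each task duration is sampled from a triangular distribution and the simulated totals yield percentile forecasts.

```python
import random

# Hypothetical three-point estimates (optimistic, most likely, pessimistic)
# for tasks executed in sequence, in days.
tasks = {
    "scaffolding": (2, 3, 6),
    "valve_replacement": (4, 5, 9),
    "weld_inspection": (1, 2, 4),
}

def simulate_schedule(tasks, n_trials=10_000, seed=7):
    """Monte Carlo totals: sample each task from a triangular distribution."""
    rng = random.Random(seed)
    totals = sorted(
        sum(rng.triangular(lo, hi, mode) for lo, mode, hi in tasks.values())
        for _ in range(n_trials)
    )
    # Percentiles of the simulated completion time.
    p50 = totals[int(n_trials * 0.50)]
    p80 = totals[int(n_trials * 0.80)]
    return p50, p80

p50, p80 = simulate_schedule(tasks)
```

The gap between the P50 and P80 totals is one simple way to quantify schedule risk before committing to a completion date.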


Critical vs Non-Critical Field Classification in Automation

This update allows users to classify fields in automation workflows as critical or non-critical, a crucial distinction for prioritizing decisions such as condition reporting and significance levels. The platform now tracks accuracy separately for critical and non-critical fields, and the distinction is reflected in Auto Flow reports and KPIs, enabling an evaluation of results that aligns more naturally with actual business value and impact.
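As a rough illustration of tracking accuracy separately by field criticality (a hypothetical sketch, not the platform's internal code; the field names and audit records are invented):

```python
# Hypothetical audit records: (field_name, is_critical, predicted, actual).
records = [
    ("significance_level", True, "high", "high"),
    ("condition_report", True, "yes", "yes"),
    ("department_code", False, "ops", "maint"),
    ("keyword_tag", False, "pump", "pump"),
]

def accuracy_by_criticality(records):
    """Compute automation accuracy separately for critical and non-critical fields."""
    tallies = {True: [0, 0], False: [0, 0]}  # criticality -> [correct, total]
    for _, critical, predicted, actual in records:
        tallies[critical][0] += int(predicted == actual)
        tallies[critical][1] += 1
    return {
        "critical": tallies[True][0] / tallies[True][1],
        "non_critical": tallies[False][0] / tallies[False][1],
    }

result = accuracy_by_criticality(records)
```

Reporting the two numbers separately prevents a high volume of easy non-critical fields from masking errors on the decisions that matter most.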



Bug Reporter

Our new email-based Bug Reporter captures error information and relevant logs, encrypts them, and creates a downloadable file for users to email to our support team. This simplifies bug reporting, making communication of issues more efficient.



Report Template Updates

We have refined our report templates to be more intuitive and user-friendly, making the valuable data NuclearN provides more accessible and actionable.

Version 1.9 showcases our continuous innovation and responsiveness to the energy sector’s needs, emphasizing robust, secure solutions that leverage AI and advanced technologies to amplify human expertise. This focus reflects our commitment to precision, safety, and reliability, positioning NuclearN as a leader in operational excellence and forward-thinking energy generation, with safety and efficiency as our guiding principles.



Stay informed and engaged with everything AI in the nuclear sector by visiting The NuclearN Blog. Join the conversation and be part of the journey as we explore the future of AI in power generation together.

How AI is Powering Up the Nuclear Industry 


Sequoyah Nuclear Power Plant 

In an era where digital fluency is the new literacy, Large Language Models (LLMs) have emerged as revolutionary game-changers. These models are not just regurgitating information; they’re learning procedures and grasping formal logic. This isn’t an incremental change; it’s a leap. They’re making themselves indispensable across sectors as diverse as finance, healthcare, and cybersecurity. And now, they’re lighting up a path forward in another high-stakes arena: the nuclear sector.



The Limits of One-Size-Fits-All: Why Specialized Domains Need More Than Standard LLMs

In today’s digital age, Large Language Models (LLMs) like GPT-4 have become as common as smartphones, serving as general-purpose tools across various sectors. While their wide-ranging training data, which spans from social media to scientific papers, is useful for general capabilities, this limits their effectiveness in specialized domains. This limitation is especially glaring in fields that require precise and deep knowledge, such as nuclear physics or complex legal systems. It’s akin to using a Swiss Army knife when what you really need is a surgeon’s scalpel.

In contrast, specialized fields like nuclear engineering demand custom-tailored AI solutions. Publicly-available LLMs lack the precision needed to handle the nuanced language, complex protocols, and critical safety standards inherent in these areas. Custom-built AI tools go beyond mere language comprehension; they become repositories of essential field-specific knowledge, imbued with the necessary legal norms, safety protocols, and operational parameters. By focusing on specialized AI, we pave the way for more reliable and precise tools, moving beyond the “Swiss Army knife” approach to meet the unique demands of specialized sectors.

LLMs are Swiss Army knives in that they are competent at a multitude of tasks; paradoxically, that breadth limits their utility in a field like nuclear, where nuance is everything.


The Swiss Army Knife In Action

Below is a typical response from a public chatbot to a plant-specific question. Information about this site is widely available online and was published well before 2022; the plant was commissioned in 1986.

As the chatbot's response shows, the generic information provided by this publicly available model is not precise enough for experts to rely on. To answer the question above, the model needs to be adapted to a specific domain.

Adapting general models to a specific domain is not easy, however. Challenges include:

  1. Financial and Technical Hurdles in Fine-Tuning: Fine-tuning public models is costly. Beyond the financial aspect, modifications risk destabilizing the intricate instruct/RLHF tuning, a nuanced balance established by experts.
  2. Data Security, a Custodian Crisis: Public models weren't built with high-security data custodianship in mind. This lack of a secure foundation poses risks, especially for sensitive information.
  3. A Dead End for Customization: Users hit a brick wall when customizing off-the-shelf models. Essential access to model weights is restricted, stifling adaptability and innovation.
  4. Stagnation in Technological Advancement: These models lag behind, missing revolutionary AI developments such as RLAIF, DPO, and soft prompting. This stagnation limits their applicability and efficiency in evolving landscapes.
  5. The Impossibility of Refinement and Adaptation: Processes integral to optimization, such as model pruning, knowledge distillation, and weight sharing, are off the table. Without them, the models remain cumbersome and incompatible with consumer-grade hardware.


NuclearN

NuclearN specializes in AI-driven solutions tailored for the nuclear industry, combining advanced hardware, expert teams, and a rich data repository of nuclear information to create Large Language Models (LLMs) that excel in both complexity and precision. Unlike generic LLMs, ours are fine-tuned with nuclear-specific data, allowing us to automate a range of tasks from information retrieval to analytics with unparalleled accuracy.


What makes our models better than off-the-shelf LLMs? 

NuclearN's Large Language Models (LLMs) are trained on specialized nuclear data and are transforming several core tasks within the nuclear industry, leveraging a vast knowledge base and an advanced understanding of nuclear context-specific processes. When expertly trained with the right blend of data, algorithms, and parameters, these models can facilitate a range of complex tasks and information-management functions with remarkable efficiency and precision.

NuclearN is training our LLMs to enhance several core functions:

  1. Routine Question-Answering: NuclearN trains LLMs on a rich dataset of nuclear terminologies, protocols, and safety procedures. They offer accurate and context-aware answers to technical and procedural questions, serving as a reliable resource that reduces the time needed for research and minimizes human error.
  2. Task-Specific and Site-Specific Fine Tuning: Even though our LLMs are trained to be nuclear-specific, different sites can have very specific plant designs, processes, and terminology.  Tasks such as engineering evaluations or work instruction authoring may be performed in a style unique to the site.  NuclearN offers private and secure, site and task-specific fine tuning of our LLMs to meet these needs and deliver unparalleled performance.
  3. Neural Search: The search capabilities of our LLMs go beyond mere keyword matching. They understand the semantic and contextual relationships between different terminologies and concepts in nuclear science. This advanced capability is critical when one needs to sift through large volumes of varied documents—be it scientific papers, historical logs, or regulatory guidelines—to extract the most pertinent information. It enhances both the efficiency and depth of tasks like literature review and risk assessment.
  4. Document Summarization: In an industry awash with voluminous reports and papers, the ability to quickly assimilate information is vital. Our LLMs can parse through these lengthy documents and distill them into concise yet comprehensive summaries. They preserve key findings, conclusions, and insights, making it easier for professionals to stay informed without being overwhelmed by data.
  5. Trend Analysis from Time-Series Data: The nuclear industry often relies on process and operational data gathered from sensors in the plant to track equipment performance and impacts from various activities. NuclearN is training our LLMs to be capable of analyzing these time-series data sets to discern patterns, correlations, or trends over time. This allows our LLMs to have a significantly more comprehensive view of the plant, which is particularly valuable for monitoring equipment health and predicting operational impacts.
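The trend-analysis idea in item 5 can be illustrated with a simple non-LLM baseline: fit a least-squares slope over a recent sensor window and flag sustained drift. This is a sketch with synthetic data, not NuclearN's method; the sensor name, units, and alert threshold are invented for illustration.

```python
def slope(values):
    """Least-squares slope of evenly spaced samples against their index."""
    n = len(values)
    xs = range(n)
    mx = sum(xs) / n
    my = sum(values) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, values))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic hourly bearing-temperature readings (degrees F) showing a drift.
bearing_temp = [70.1, 70.3, 70.2, 70.6, 70.9, 71.4, 71.8, 72.5]

# Flag a sustained upward trend above a hypothetical 0.1 degrees/hour threshold.
drifting = slope(bearing_temp) > 0.1
```

An LLM-based analysis could layer context on top of such signals, e.g. correlating a flagged drift with recent maintenance records described in free text.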

By leveraging the capabilities of NuclearN’s specialized LLMs in these functional areas, the nuclear industry can realize measurable improvements in operational efficiency and strategic decision-making.


Nuclearn v1.8 – Neural Search and Easier Automation

Nuclearn recently released version 1.8 of its analytics and automation platform, bringing major upgrades like neural search for intuitive queries, configurable automation routines, expanded analytics outputs, and enhanced ETL data integration. Together these features, some of them AI-driven, aim to optimize workflows and performance.

Neural Search

The neural search upgrade allows searching based on intent rather than keywords, even with ambiguous queries. Neural algorithms understand semantics, context, synonyms, and data formats. This saves time compared to traditional keyword searches, and provides significant advantages when context-sensitive information retrieval is crucial.

Some of the benefits of neural search include:
Precision of Search Results: Traditional keyword-based searches often yield an overwhelming number of irrelevant results, making it difficult for plant personnel to find the specific information they need. Neural search engines deliver results with ranked relevance. This means results are not just based on keyword match but on the basis of how closely the content of the document matches the intent of the search query.  

Contextual Understanding: Boolean queries, typically used in traditional search engines, cannot capture the contextual nuances of the complex technical language found in engineering and compliance documentation. Neural search algorithms have a kind of “semantic understanding” that captures the context behind a query, yielding more relevant results. Neural search also understands synonyms and related terms, crucial when dealing with the specialized nuclear lexicon, making searches more robust.

Multiple Data Formats: Nuclear plants often store data in different formats, such as PDFs, Word documents, sensor logs, and older, legacy systems. A neural search engine can be trained to understand and index different types of data, providing a unified search experience across multiple data formats. 
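The contrast with keyword matching can be sketched as embedding-based retrieval: queries and documents are mapped to vectors and ranked by cosine similarity. In the sketch below, a trivial bag-of-words `embed()` stands in for a learned sentence-embedding model so the example stays self-contained; in a real neural search engine, the learned embeddings are what capture synonyms and context. The documents are invented.

```python
import math
from collections import Counter

def embed(text):
    """Stand-in embedding: bag-of-words counts. A real neural search engine
    would use a learned sentence-embedding model here."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b.get(t, 0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "reactor coolant pump seal leakage observed during startup",
    "quarterly fire extinguisher inspection completed",
]

query = "coolant pump seal leakage"
# Rank documents by similarity to the query vector, most relevant first.
ranked = sorted(docs, key=lambda d: cosine(embed(query), embed(d)), reverse=True)
```

Because ranking is by vector similarity rather than exact term match, swapping in learned embeddings lets the same pipeline retrieve documents that share meaning but not vocabulary with the query.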

Selective Classification for Unmatched Automation Accuracy

AutoCAP Screener also saw major improvements in v1.8. You can now set desired overall accuracy levels for automation templates. The Nuclearn platform then controls the confidence thresholds using a statistical technique called “selective classification” that enables theoretically guaranteed risk controls. This enables the system to ensure it operates above a user-defined automation accuracy level.


With selective classification, plants can improve automation rates and efficiency without compromising the quality of critical decisions. Risk is minimized by abstaining from acting in uncertain cases. The outcome is automation that consistently aligns with nuclear-grade precision and trustworthiness. By giving you accuracy configuration control, we ensure our AI technology conforms to your reliability needs. 
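A minimal empirical version of this idea (a sketch only, not Nuclearn's statistical machinery, which provides formal risk guarantees): on held-out audit data, pick the lowest confidence threshold whose accepted predictions meet the target accuracy, and abstain below it. The calibration data here is invented.

```python
# Invented held-out calibration data: (model confidence, prediction correct?).
calibration = [
    (0.99, True), (0.95, True), (0.90, True), (0.85, False),
    (0.80, True), (0.70, False), (0.60, True), (0.50, False),
]

def pick_threshold(calibration, target_accuracy):
    """Lowest confidence threshold whose accepted subset meets the target.

    Scanning candidate thresholds in ascending order means the first one
    that passes is the lowest, i.e. the one that automates the most cases.
    """
    for threshold, _ in sorted(calibration):
        accepted = [ok for conf, ok in calibration if conf >= threshold]
        if accepted and sum(accepted) / len(accepted) >= target_accuracy:
            return threshold
    return None  # no threshold meets the target: abstain on everything

threshold = pick_threshold(calibration, target_accuracy=0.9)
```

Everything below the chosen threshold is routed to humans, so the automated subset stays above the configured accuracy while the abstained cases carry the uncertainty.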

Additionally, multiple quality-of-life enhancements were added to the AutoCAP audit pages. Users can now sort audit page results, add filters, integrate PowerBI dashboards with audit results, and even export automation results to CSV. These enhancements make it easier and more flexible for users to assess, evaluate, and monitor the automation system.

Analytics & Reporting Enhancements

On the analytics front, our customers wanted more customizations. v1.8 answers their request with the ability to upload their own custom report templates. In addition, customers can change date aggregations in reports to tailor the visualizations for specific audiences and uses. Enhanced dataset filtering and exporting also allows sending analyzed data to PowerBI or Excel for further manipulation or presentation.

Buckets

Editing analytics buckets is now more flexible too, with overwrite and save-as options. We also made it easier to exclude and filter buckets from visualizations and to modify existing buckets, including renaming them.

Data Integration

Behind the scenes, ETL (extract, transform, load) workflows were upgraded to ingest plant data into the Nuclearn platform more seamlessly. Users can now schedule recurring ETL jobs and share workflows between sites. With smooth data onboarding, you can focus your time on analytics and automation rather than on manually uploading data.

With advanced search, configurable automation, expanded analytics, and optimized data integration in v1.8, the Nuclearn Platform is better equipped to drive operational optimization using AI-powered technology. This release highlights Nuclearn’s commitment to meaningful innovation that solves real-world needs.

5 Reasons AI is the Promising Future of Nuclear CAP

In the near future, the Nuclear Corrective Action Program (CAP) will be sleek, streamlined, and highly efficient, with human participants occasionally required to review and deliberate over only the most complicated issues requiring their vast experience and wisdom. For everything else, a trained army of CAP AI agents will invisibly process issues, review and alert on trends, assign corrective actions, and take feedback from human coaches via purpose-designed human/AI interfaces.

No longer will a team of humans spend hours or days of analysis on trend detection, a Senior Reactor Operator be forced to process another condition report about a cracked sidewalk, or an Engineer be left waiting for a corrective action item to arrive in her inbox. These functions will have been largely automated with the focused application of AI-based technology. Here are the five reasons this future is highly probable, based on both the current state of the Nuclear Industry and leading-edge AI technology.

Cost Savings and Improved Quality

It comes as no surprise to anyone that has worked in the Nuclear Industry that running an effective CAP program is expensive. CAP demands a significant investment into human resources that have adequate experience to effectively diagnose and resolve the problems experienced in an operating power plant. In practice, this requires either dedicated staffing or rotating employees out of primary roles to fulfill a CAP function.

By applying intelligent automation to the Screening, Work Generation, and Issue Trending processes, a resource reduction of approximately 45% is expected.

Beyond reducing the number of resources required, AI reduces the total time required to execute portions of the CAP process. While a human screening team may only be able to process conditions on a daily basis, an AI system can review and screen conditions and issue work items immediately. Getting workable tasks into employees' hands more quickly saves money and improves CAP quality.

For issues that may be too complex for AI to handle effectively, a human-in-the-loop system can be employed, where the AI knows when it is unsure and can reach out for human assistance. Human-in-the-loop review reduces the cost of the CAP program while keeping quality the same or better.
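A human-in-the-loop router of this kind can be sketched as follows; `toy_classify` is a stand-in for a trained screening model, and the labels and confidence threshold are hypothetical:

```python
def route(report, classify, threshold=0.9):
    """Dispatch automatically when confidence clears the threshold,
    otherwise queue the report for human review."""
    label, confidence = classify(report)
    return ("auto", label) if confidence >= threshold else ("human_review", label)

def toy_classify(text):
    """Toy stand-in for a trained screening model (hypothetical labels)."""
    if "leak" in text.lower():
        return ("maintenance_work_order", 0.95)
    return ("needs_screening", 0.55)

decision = route("Coolant leak near pump 2B", toy_classify)
deferred = route("Unusual noise in turbine building", toy_classify)
```

The routine cases flow straight to work generation, while the uncertain ones land in the same screening queue humans use today.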

Additionally, AI can lower the threshold for issue documentation. Deploying an information-extraction AI lets employees capture issues more naturally using natural language, without filling out specialized forms. When issues become easier to document, they are documented more often, the overall information input into the CAP program increases, and the chance an issue is corrected grows. AI that immediately evaluates the quality and completeness of a submitted report enables an automated dialogue with the submitter. This can encourage behaviors that promote report quality, such as adding information, clarifying issues, and correcting spelling, increasing the effectiveness of the overall CAP program.

Scale

The most valuable problems to solve are frequently the largest, and CAP and its associated activities are among the largest opportunities in Nuclear. CAP lies at the heart of the Nuclear Industry and requires participation from almost every trade and profession at each site. The ubiquity of CAP, combined with its savings potential, provides an immense incentive for plant operators, industry vendors, and industry research groups to discover and implement ways to make these programs run more sustainably and efficiently. Specialized AI that can automate these tasks is top of mind for industry groups such as the Electric Power Research Institute, the Idaho National Laboratory, and various utility in-house teams.

A fortunate side effect of the CAP program is the production of large quantities of high-quality data – data ideal for training the AI systems that will be used to automate the same functions. Most of this data is captured in free-form text as natural language. Language with a specific Nuclear vocabulary and dialect, but natural language nonetheless. The scale of this data puts it on par with the datasets utilized by the large technology vendors and academic institutions to develop and train the most effective AI systems. Thanks to the scale of Nuclear CAP data, these large AI systems can be specialized to operate in the Nuclear domain – increasing performance and effectiveness for the tasks at hand.

Transportability

The most notable advancements in AI technology of the late 2010s centered on advanced natural language-based AI, which can understand human language more naturally and effectively than previously thought possible. Breakthroughs in this area are characterized by the ability of AI to transfer learning from one problem to another: an AI that is good at classifying condition report quality will identify equipment tags better than one trained solely to identify equipment tags.

The benefit for the nuclear industry is that an AI system trained at Plant X will be able to transfer its learning to Plant Y and be more performant than one trained only at Plant Y. This is similar to how a working professional at Diablo Canyon would adapt and apply their knowledge more easily when transferring to Turkey Point than someone who has never worked in the nuclear industry. Like a human employee, an AI system benefits from the variety of knowledge obtained from general industry data: once trained on it, learning the specifics of any one plant becomes faster, cheaper, and easier for any plant wishing to specialize the system for automation.

As a result, solutions developed at one site can be shared. With commonly applicable training and similar problems, the industry can solve the big problems once with ‘large’ or ‘hard’ AI, then transport the solution from plant to plant for the benefit of the entire industry.

Automated Screening

One of the more specific solutions apparent when applying AI to the CAP process is the automation of condition screening. Condition screening is the process of reviewing a received report of a non-standard condition in or around the plant, applying certain tags, codes, or classifications, assigning an owner, and generating the appropriate work items to address the condition. At some plants, dedicated groups of senior employees work daily to perform this process manually; at others, dispersed resources periodically gather to complete screening. In either case, the resources are usually senior-level and experienced, and thus expensive. An estimate of the resources the industry spends on this process each year illustrates just how large the opportunity is.

The screening process has certain properties (task repeatability and complexity, data quality, scale, cost, etc.) that make it an extremely promising target for AI-powered automation, a discussion worthy of a separate blog post. Coming soon!

Automated Trending

Automated trending is the sequel to Automated Screening – it’s what comes after the conditions have been identified and actions issued. Normally done ‘cognitively’ or via brute force search of the condition data, trending is resource-intensive and largely manual. Read Nuclearn’s Nuclear CAP Coding AI – Better Performance at a Lower Cost to find out more about how AI can help automate and simplify the trending task.

Bonus: The Rapid Progress of AI Technology

The five points above are only achievable due to the explosion of progress in the technologies that underpin how AI learns and behaves. The speed with which new AI tools have reached human-level performance on various vision and language tasks in recent years is unprecedented. Developing AI that can recognize simple numerical digits at human-level performance took over 10 years; recognizing cats, dogs, cars, and other everyday objects in images took about 5 years. More recently, developing AI that can recognize and manipulate human language took only about 2 years.

The accelerating pace of AI advancements shows no sign of stopping anytime soon. This type of rapid advancement, combined with the scale, transportability, and savings of CAP, allows Nuclearn to confidently say AI is the future of Nuclear CAP.