
MRNEC's Deep Learning Approach for COGNIPLANT

MRNEC has recently developed a deep learning approach for understanding and predicting the changes in set points and controls needed to optimise the production processes of COGNIPLANT's use-case partners. In the FORNA use case, for example, the challenge is to predict set-point and control changes that reduce the number of kiln stops, thereby increasing production and reducing waste-gas emissions. The problem is complicated by a very complex interplay between the set points / controls and the key performance indicators, each influenced by multiple process parameters. Rescheduling multiple set points and controls, each at the right time, for an appropriate duration, with the proper intensity, and consistently with the other observed process parameters (all in combination with the plant's size, peculiar architecture and health condition) is what makes optimising the sometimes conflicting key performance indicators hard. This is a realistic industrial problem of a kind rarely, if ever, addressed in the scientific literature; certainly no toy problem.

Our approach to minimising FORNA's kiln downtime relative to uptime (time-to-stop) consists of five methodological steps. Firstly, we select principal components of the set-point and control parameters (related to the amount of sawdust fed per line for each shaft during a cycle, the amount of limestone fed into each shaft during a cycle, etc.) and of the process-value parameters (related to the pressure and temperature of the kiln channel connecting the shafts, the lime temperatures, the fuel-line pressure for each shaft, etc.). To this end, we pre-process and analyse the process parameters and the key performance indicator: we segment the parameter and indicator data streams into contiguous time segments of full runs and stops, and determine characteristic features, such as rolling averages of these streams, for each run and stop.
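The segmentation step can be sketched as follows. This is a hypothetical, dependency-free illustration: the boolean status stream, the example data and the window size are our assumptions, not FORNA's actual signals.

```python
# Illustrative sketch: split a kiln status stream (True = running) into
# contiguous run/stop segments, and compute a trailing rolling average
# as one example of a characteristic per-segment feature.

def segment_runs_and_stops(status):
    """Return contiguous segments as (is_run, start_index, end_index)."""
    segments = []
    start = 0
    for i in range(1, len(status) + 1):
        if i == len(status) or status[i] != status[start]:
            segments.append((status[start], start, i - 1))
            start = i
    return segments

def rolling_average(values, window=3):
    """Trailing rolling average; windows are shorter at the start."""
    return [sum(values[max(0, i - window + 1):i + 1]) /
            len(values[max(0, i - window + 1):i + 1])
            for i in range(len(values))]

status = [True, True, True, False, False, True, True]
print(segment_runs_and_stops(status))
# segments: run 0-2, stop 3-4, run 5-6
```

Each run and stop segment then gets its own feature vector (rolling averages, durations, etc.) before the principal-component analysis.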

Next, we determine which principal components of the process parameters and features (PCA features) are at least modestly correlated with, or explain variation in, the time-to-stop indicator.
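A minimal sketch of this selection, on synthetic data: PCA via SVD, then keeping only components whose absolute correlation with the time-to-stop proxy clears a threshold. The 0.3 cut-off and the synthetic latent driver are assumptions for illustration only.

```python
# Sketch: select principal components that are at least modestly
# correlated with the time-to-stop indicator (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
t = rng.normal(size=200)                    # latent driver of kiln stops
X = rng.normal(scale=0.3, size=(200, 5))    # 200 runs x 5 process features
X[:, 0] += t                                # two features share the driver
X[:, 1] += t
y = t + rng.normal(scale=0.1, size=200)     # time-to-stop proxy

Xc = X - X.mean(axis=0)                     # centre before PCA
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                          # principal-component scores

corr = np.array([abs(np.corrcoef(scores[:, k], y)[0, 1])
                 for k in range(scores.shape[1])])
selected = np.where(corr >= 0.3)[0]         # keep modestly correlated PCs
print(selected, corr.round(2))
```

The dominant component, which captures the shared driver, is the one retained; components carrying only noise fall below the threshold.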

Secondly, we frame the principal process parameters for supervised learning: we select the target constraints and key-performance-indicator parameters y for prediction and the other process parameters X for monitoring, set the number of input time samples of the process parameters and of output time samples (the time horizon), split the data set of principal process parameters into training, validation and test sets, scale the original training, validation and test sets, and split each into X and y. The main idea behind this particular framing is that although the historical data sets hide when, why and how the time-to-stop parameter is influenced by the principal process parameters related to set points and controls (while respecting the production target constraints), building a deep learning model may reveal and quantify these relationships.
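The framing step can be sketched as sliding-window construction followed by a chronological split. Window length, horizon, split ratios and the synthetic data are illustrative assumptions; scaling statistics are computed on the training portion only, to avoid leakage into validation and test.

```python
# Sketch: turn scaled multivariate process streams into (X, y) windows
# with a fixed input length and forecast horizon, then split in time order.
import numpy as np

def make_windows(series, target, n_in=8, horizon=2):
    """series: (T, F) process parameters; target: (T,) time-to-stop.
    y is the target value `horizon` steps after each input window."""
    X, y = [], []
    for t in range(len(series) - n_in - horizon + 1):
        X.append(series[t:t + n_in])
        y.append(target[t + n_in + horizon - 1])
    return np.array(X), np.array(y)

T, F = 100, 4
series = np.random.default_rng(1).normal(size=(T, F))
target = series[:, 0].cumsum()

n_train = int(0.7 * T)                       # scale with training stats only
mu, sd = series[:n_train].mean(axis=0), series[:n_train].std(axis=0)
scaled = (series - mu) / sd

X, y = make_windows(scaled, target)
X_train, y_train = X[:60], y[:60]
X_val, y_val = X[60:75], y[60:75]
X_test, y_test = X[75:], y[75:]
print(X.shape, y.shape)                      # (91, 8, 4) (91,)
```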

Thirdly, we create, compile and fit various deep learning models, i.e., we set the architecture type (MLP, LSTM, Transformer, etc.) and the hyper-parameters (number of hidden layers, units or filters per layer, optimiser, etc.).
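As a framework-free stand-in for this step, the sketch below builds a tiny MLP whose architecture is driven by a hyper-parameter dictionary, mirroring the choices (hidden layers, units per layer) that are tuned later. A real pipeline would use Keras or PyTorch for creating, compiling and fitting; this numpy forward pass only shows the idea.

```python
# Sketch: an MLP whose shape comes from a hyper-parameter dict.
import numpy as np

def build_mlp(n_in, hidden_units, n_out=1, seed=0):
    """Initialise (weight, bias) pairs: n_in -> hidden_units... -> n_out."""
    rng = np.random.default_rng(seed)
    sizes = [n_in] + list(hidden_units) + [n_out]
    return [(rng.normal(scale=0.1, size=(a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """ReLU hidden layers, linear output layer."""
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)
    return x

hparams = {"hidden_units": (32, 16)}         # tunable hyper-parameters
model = build_mlp(n_in=8 * 4, hidden_units=hparams["hidden_units"])
x = np.zeros((5, 32))                        # 5 flattened input windows
print(forward(model, x).shape)               # (5, 1)
```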

Fourthly, we optimise these deep-learning-based set-point/control schedule prediction models using Optuna on top of MLflow: we set the Optuna study purpose (learning or prediction) and the objective and score functions for optimisation, and log the study results (statistics, (best-)trial parameters, artifacts such as learning curves and prediction plots, and model instances) on MLflow.
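The study/trial pattern can be sketched without either dependency: a study loops over trials, each trial samples hyper-parameters, scores them with an objective function, and logs parameters and score; the best trial is kept. In the real pipeline `optuna.create_study` / `study.optimize` and `mlflow.log_params` / `mlflow.log_metric` play these roles; the quadratic score below is a placeholder objective, not the actual validation loss.

```python
# Dependency-free sketch of the Optuna-style study loop with trial logging.
import random

def objective(trial_params):
    """Placeholder score: pretend validation loss is minimised at lr=0.01."""
    return (trial_params["lr"] - 0.01) ** 2

def run_study(n_trials=50, seed=0):
    rng = random.Random(seed)
    log = []                                  # stands in for MLflow runs
    for t in range(n_trials):
        params = {"lr": rng.uniform(0.001, 0.1)}  # sample a trial
        score = objective(params)
        log.append({"trial": t, "params": params, "score": score})
    best = min(log, key=lambda r: r["score"])
    return best, log

best, log = run_study()
print(best["params"], round(best["score"], 6))
```

Optuna replaces the random sampling above with smarter samplers and pruning, and MLflow persists each trial's parameters, metrics and artifacts for later comparison.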

Finally, we provide a stored-model serving app for offline and online prediction, and for updating model instances with new data.
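A minimal stand-in for such a serving app: load a stored model instance, serve offline (batch) and online (single-sample) predictions, and swap in a retrained instance when one arrives. The pickled linear placeholder model and the in-memory store are assumptions for illustration, not the actual COGNIPLANT app.

```python
# Sketch: load a stored model, serve predictions, update with a new instance.
import io
import pickle

class ModelServer:
    def __init__(self, stored):
        self.model = pickle.load(stored)       # e.g. a file opened in "rb"

    def predict(self, rows):                   # offline: batch of samples
        w, b = self.model["w"], self.model["b"]
        return [sum(wi * xi for wi, xi in zip(w, x)) + b for x in rows]

    def predict_one(self, x):                  # online: single sample
        return self.predict([x])[0]

    def update(self, new_model, store):        # swap in a retrained instance
        self.model = new_model
        pickle.dump(new_model, store)

store = io.BytesIO()                           # stands in for model storage
pickle.dump({"w": [0.5, -1.0], "b": 2.0}, store)
store.seek(0)
server = ModelServer(store)
print(server.predict_one([2.0, 1.0]))          # 0.5*2 - 1.0*1 + 2.0 = 2.0
```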
