
Deep Learning for Robust Rescheduling of Industrial Processes

Updated: Feb 17, 2023

Mr. NeC has recently developed a deep learning approach for understanding and predicting the changes in set points and controls required to optimise the production processes of Cogniplant's use-case partners. In the FORNA use case, for example, the challenge is to predict changes in set points and controls such that the number of kiln stops is reduced, thereby increasing production and reducing waste-gas emissions. The task is complicated by a very complex interplay between set points/controls and key performance indicators that are influenced by multiple process parameters. Rescheduling multiple set points and controls, each at the right time, for an appropriate duration, with the proper intensity and consistent with the other observed process parameters, all in combination with the size, peculiar architecture and health condition of the plant, is the hard problem in optimising the sometimes-conflicting key performance indicators (no toy problem, and one that is hardly ever, if at all, addressed in science).





Our approach to minimising FORNA's kiln downtime relative to uptime (time-to-stop) consists of six methodological steps. First, we select principal components of the set-point and control parameters (related to the amount of sawdust fed per line into each shaft during a cycle, the amount of limestone fed into each shaft during a cycle, etc.) and of the process-value parameters (related to the pressure and temperature in the kiln channel connecting the shafts, lime temperatures, fuel-line pressure for each shaft, etc.). To that end, we pre-process and analyse the process parameters and the key performance indicator: we segment and split the parameter and indicator data streams into contiguous time segments of full runs and stops, and determine characteristic features such as rolling averages of these streams for each run and stop.






Next, we determine those principal components among the process parameters and features (PCA features) that are at least modestly correlated with, or explain variation in, the time-to-stop indicator.
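As a minimal illustration of this first step, the sketch below uses pandas and scikit-learn (an assumed stack; all column names, the 15-minute window and the correlation threshold are illustrative stand-ins, not FORNA's actual parameters) to segment the streams into runs and stops, build rolling-average features, extract principal components and keep those correlated with time-to-stop.

```python
# Sketch of step 1; column names and thresholds are illustrative only.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

param_cols = ["sawdust_feed", "limestone_feed", "channel_pressure", "channel_temp"]
idx = pd.date_range("2023-01-01", periods=1000, freq="min")
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(1000, 4)), index=idx, columns=param_cols)
df["kiln_running"] = (np.arange(1000) % 200) < 150      # stand-in run/stop flag
df["time_to_stop"] = rng.normal(size=1000)              # stand-in indicator

# Segment the streams into contiguous runs and stops.
df["segment_id"] = (df["kiln_running"] != df["kiln_running"].shift()).cumsum()

# Characteristic features per stream, e.g. 15-minute rolling averages
# computed within each run/stop segment.
features = (df.groupby("segment_id")[param_cols]
              .transform(lambda s: s.rolling("15min").mean())
              .add_suffix("_roll15"))

# Principal components of the scaled features.
pca = PCA(n_components=0.95)                            # keep 95% of variance
pcs = pd.DataFrame(pca.fit_transform(StandardScaler().fit_transform(features)),
                   index=features.index)

# Retain only components at least modestly correlated with time-to-stop.
corr = pcs.corrwith(df["time_to_stop"]).abs()
selected = pcs.loc[:, corr > 0.3]                       # illustrative threshold
```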


Second, we frame the principal process parameters for supervised learning: we select the target constraints and key-performance-indicator parameters y for prediction and the other process parameters X for monitoring, set the number of input time samples of the process parameters and of output time samples (the time horizon), split the data set of principal process parameters into training, validation and test sets, scale these sets, and split each of them into X and y. The main idea behind this particular framing is that although the historical data sets hide when, why and how the time-to-stop parameter is influenced by the principal process parameters related to set points and controls (while respecting the production target constraints), building a deep learning model may give a clue, reveal and quantify these influences.
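A minimal sketch of this framing, assuming the principal process parameters arrive as a plain (time, features) array; the window lengths, split fractions and target column are illustrative.

```python
# Sketch of step 2: frame a (time, features) array for supervised learning.
import numpy as np

def frame_series(values, target_col, n_in=60, n_out=10):
    """Build input windows X of n_in time samples and output horizons y
    of n_out samples of the target parameter."""
    X, y = [], []
    for t in range(len(values) - n_in - n_out + 1):
        X.append(values[t:t + n_in])                             # monitored parameters
        y.append(values[t + n_in:t + n_in + n_out, target_col])  # prediction target
    return np.array(X), np.array(y)

values = np.random.default_rng(0).normal(size=(1000, 8))  # stand-in data
n = len(values)
train = values[:int(0.7 * n)]
val = values[int(0.7 * n):int(0.85 * n)]
test = values[int(0.85 * n):]

# Scale with statistics fitted on the training portion only, then frame.
mu, sigma = train.mean(axis=0), train.std(axis=0)
X_train, y_train = frame_series((train - mu) / sigma, target_col=0)
X_val, y_val = frame_series((val - mu) / sigma, target_col=0)
X_test, y_test = frame_series((test - mu) / sigma, target_col=0)
```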



Third, we create, compile and fit various deep learning models, i.e., we set the architecture type (MLP, LSTM, Transformer, etc.) and the hyper-parameters (number of hidden layers, units or filters per layer, optimisers, etc.).
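For instance, a model-building step along these lines could look as follows in Keras (an assumed framework; the post does not prescribe one), here for the LSTM architecture type with the framed data from the previous sketch.

```python
# Sketch of step 3: create, compile and fit an LSTM model.
from tensorflow import keras

def build_model(n_in, n_features, n_out, hidden_layers=1, units=64, lr=1e-3):
    model = keras.Sequential([keras.Input(shape=(n_in, n_features))])
    for i in range(hidden_layers):
        # Intermediate layers return sequences; the last returns a vector.
        model.add(keras.layers.LSTM(units, return_sequences=i < hidden_layers - 1))
    model.add(keras.layers.Dense(n_out))   # one output per step of the horizon
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=lr), loss="mse")
    return model

model = build_model(n_in=60, n_features=8, n_out=10)
model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=50, verbose=0)
```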



Fourth, we optimise these deep-learning-based set-point/control schedule prediction models using Optuna on top of MLflow: we set the Optuna study purpose (learning or prediction) and its objective and score function for optimisation, and log the study results (statistics, (best) trial parameters, artifacts such as learning curves and/or prediction plots, and model instances) to MLflow.
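A hedged sketch of how such a study could be wired up, reusing build_model and the framed data from the previous sketches; the search ranges and run names are illustrative, not the actual study configuration.

```python
# Sketch of step 4: an Optuna study whose trials are logged to MLflow.
import mlflow
import optuna

def objective(trial):
    params = {
        "hidden_layers": trial.suggest_int("hidden_layers", 1, 3),
        "units": trial.suggest_int("units", 32, 256, log=True),
        "lr": trial.suggest_float("lr", 1e-4, 1e-2, log=True),
    }
    with mlflow.start_run(nested=True):           # one MLflow run per trial
        mlflow.log_params(params)
        model = build_model(n_in=60, n_features=8, n_out=10, **params)
        history = model.fit(X_train, y_train, validation_data=(X_val, y_val),
                            epochs=20, verbose=0)
        score = min(history.history["val_loss"])  # score function to minimise
        mlflow.log_metric("val_loss", score)
    return score

with mlflow.start_run(run_name="time_to_stop_study"):
    study = optuna.create_study(direction="minimize")
    study.optimize(objective, n_trials=50)
    mlflow.log_params(study.best_trial.params)    # best trial parameters
```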




Fifth, we propose an averaged multi-scale reactive multi-step process-rescheduling prediction model <m>, defined as the sum of scale-weighted prediction models <m_a-b> for the time intervals [0, 1], [1, 10] and [10, 100] minutes ahead. Such a model is based on the set-point and control-variable prediction models m_s at one's disposal at several temporal scales, say s_1, s_10 and s_100 minutes, each with a prediction horizon of just one (or only a few) steps ahead of equivalent duration, i.e., step = s_1, s_10 and s_100 minutes, and keeping in mind, during those future steps, the optimisation of the KPIs and the maintenance of process conditions and production requirements:


<m> = <m_0-1> + <m_1-10> + <m_10-100>


with


<m_0-1> = G_0-1 * (m_1/s_1 + m_10/s_10 + m_100/s_100) * 1_0-1


<m_1-10> = G_1-10 * ((s_10 - 1) * m_10/s_10 + (s_10 - 1) * m_100/s_100) * 1_1-10


<m_10-100> = G_10-100 * m_100/s_100 * 1_10-100


where


1_a-b = 1 for t in [a,b] minutes from prediction time else 0


and

G_0-1 = 1/(1/s_1 + 1/s_10 + 1/s_100), with the other normalisation factors G_1-10 and G_10-100 defined analogously.


Use rolling averages of the process parameters at scale s, i.e., with a window size of 2n+1 or 2n, centred in the middle or, even better, aligned to the right; having a maximum scale of S = 2N+1 (or 2N) then allows the multi-scale prediction models to be combined accordingly, as above. Note that the notion of the scales needs to be integrated into the data-framing problem for supervised learning of the multi-scale models.
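The combination rule above can be sketched in a few lines of Python; the three single-step predictions and the scales are stand-ins, and the code assumes that each G_a-b normalises its interval's weights to sum to one (only G_0-1 is given explicitly above), in which case the (s_10 - 1) factors in <m_1-10> cancel.

```python
# Sketch of the averaged multi-scale combination <m> defined above.
def multiscale_prediction(t, m_1, m_10, m_100, scales=(1, 10, 100)):
    """Scale-weighted prediction <m> for t minutes ahead of prediction time."""
    s_1, s_10, s_100 = scales
    if t <= s_1:          # interval [0, 1]: all three models contribute
        terms = [(1 / s_1, m_1), (1 / s_10, m_10), (1 / s_100, m_100)]
    elif t <= s_10:       # interval [1, 10]: m_1 no longer reaches this far
        terms = [(1 / s_10, m_10), (1 / s_100, m_100)]
    else:                 # interval [10, 100]: only the coarsest model remains
        terms = [(1 / s_100, m_100)]
    g = 1 / sum(w for w, _ in terms)          # normalisation factor G_a-b
    return g * sum(w * m for w, m in terms)

# Example: 25 minutes ahead only m_100 contributes, so <m> = m_100.
print(multiscale_prediction(25, m_1=0.9, m_10=1.1, m_100=1.3))   # -> 1.3
```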


The advantage of this averaged multi-scale (multi-step) approach is that the predictions are statistically more robust than those of direct methods, which employ many steps at the primal scale (1-minute resolution) and readily succumb to (numerical) noise in the data or in the processing. Of course, there is more to multi-scale or critical-scale prediction for robust rescheduling of industrial processes than we can convey here; the reader may find some inspiration in one of my other blogs on how to retain more sophisticated neural networks for real-world problems.


Finally, we provide a stored-model serving app for offline and online prediction and for updating model instances with new data.
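One possible shape for such a serving app, sketched with FastAPI and the MLflow model registry (an assumed stack; the model name and endpoint are hypothetical):

```python
# Sketch of step 6: serve a stored model instance over HTTP.
import mlflow.keras
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
# Load a stored model instance from the MLflow model registry
# (hypothetical registered name and stage).
model = mlflow.keras.load_model("models:/time_to_stop_predictor/Production")

class Window(BaseModel):
    samples: list[list[float]]   # n_in time samples of the monitored parameters

@app.post("/predict")
def predict(window: Window):
    x = np.array(window.samples)[None, ...]            # add a batch dimension
    return {"time_to_stop_prediction": model.predict(x).tolist()}
```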


