Effective production scheduling is crucial for optimizing manufacturing efficiency, particularly in environments characterized by stochastic demand, sequence-dependent setup times, and dynamic inventory costs. Traditional heuristic-based scheduling methods, while computationally efficient, often struggle to adapt to real-time uncertainties. To address these challenges, this interdisciplinary project explores the application of Deep Reinforcement Learning (DRL) for batching and lot streaming in a single-machine scheduling setting. We develop a multi-agent scheduling framework that combines heuristic decision-making agents with a DRL-based agent trained using Proximal Policy Optimization (PPO). A simulation-based evaluation systematically examines the performance of these approaches under varying conditions. Key performance metrics, such as machine utilization, order fulfillment rates, and cumulative-reward-based efficiency, provide a basis for comparison. Results indicate that heuristic scheduling methods, which prioritize just-in-time production, effectively minimize inventory costs but struggle to adapt dynamically and account poorly for sequence-dependent setup times. In contrast, the DRL-based approach demonstrates greater flexibility and adaptability, leading to improved machine utilization and production efficiency. However, this adaptability comes at the cost of increased inventory accumulation and computational complexity. These findings suggest that while DRL holds significant potential for real-time scheduling, its practical implementation must carefully balance adaptability with inventory control and computational feasibility. This project contributes to the integration of artificial intelligence in production scheduling, demonstrating how reinforcement learning (RL) can enhance real-time decision-making in dynamic manufacturing environments.
Future work should explore extensions to multi-machine systems, hybridizing rule-based and learning-based approaches, and leveraging digital twin technologies for real-time optimization.
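To make the evaluated setting concrete, the following is a minimal, illustrative sketch of a single-machine scheduling simulation with stochastic demand and sequence-dependent setup times, evaluated with a just-in-time style earliest-due-date heuristic. All names, parameters, and setup/processing times here are hypothetical and not taken from the project's actual implementation; the metrics mirror two of those reported above (machine utilization and order fulfillment rate).

```python
import random

# Hypothetical single-machine scheduling simulation. Products, setup
# times, and processing times below are illustrative assumptions, not
# values from the project itself.
random.seed(0)

PRODUCTS = ["A", "B", "C"]
# Sequence-dependent setup: no setup when the product type is unchanged.
SETUP = {(p, q): (0 if p == q else 2) for p in PRODUCTS for q in PRODUCTS}
PROC_TIME = {"A": 3, "B": 4, "C": 2}

def make_orders(n):
    # Stochastic demand: each order gets a random product and due date.
    return [{"product": random.choice(PRODUCTS),
             "due": random.randint(5, 60)} for _ in range(n)]

def edd_heuristic(pending, current_product):
    # Just-in-time style rule: pick the order with the earliest due
    # date, ignoring setup costs (the weakness noted above).
    return min(range(len(pending)), key=lambda i: pending[i]["due"])

def simulate(orders, pick):
    t, busy, on_time, product = 0, 0, 0, PRODUCTS[0]
    pending = list(orders)
    while pending:
        order = pending.pop(pick(pending, product))
        t += SETUP[(product, order["product"])]   # sequence-dependent setup
        t += PROC_TIME[order["product"]]          # processing time
        busy += PROC_TIME[order["product"]]
        product = order["product"]
        on_time += order["due"] >= t              # fulfilled on time?
    return {"utilization": busy / t, "fulfillment": on_time / len(orders)}

metrics = simulate(make_orders(10), edd_heuristic)
print(metrics)
```

A DRL agent would replace the `pick` function with a learned policy (e.g. trained via PPO) that observes the pending orders and the current setup state, which is how it can trade setup avoidance against due-date pressure.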