Cloud computing provides a diverse and adaptable resource pool over the internet, allowing users to tap into various resources as needed. It has been seen as a robust solution to the relevant challenges. A significant delay can hamper the performance of IoT-enabled cloud platforms. However, efficient task scheduling can lower the cloud infrastructure's energy consumption, thus maximizing the service provider's revenue by decreasing user job processing times. The proposed Modified Chimp-Whale Optimization Algorithm (MCWOA) combines elements of the Chimp Optimization Algorithm (COA) and the Whale Optimization Algorithm (WOA). To enhance MCWOA's identification precision, the Sobol sequence is used in the population initialization phase, ensuring an even distribution of the population across the solution space. Moreover, the traditional MCWOA's local search capabilities are augmented by incorporating the whale optimization algorithm's bubble-net hunting and random search mechanisms into MCWOA's position-updating process. This study demonstrates the effectiveness of the proposed approach using a two-story rigid frame and a simply supported beam model. Simulated outcomes reveal that the new method outperforms the original MCWOA, especially in multi-damage detection scenarios. MCWOA excels in avoiding false positives and enhancing computational speed, making it an optimal choice for structural damage detection. The efficiency of the proposed MCWOA is assessed against metrics such as energy usage, computational expense, task duration, and delay. The simulated data indicate that the new MCWOA outpaces other methods across all metrics. The study also references the Whale Optimization Algorithm (WOA), Chimp Algorithm (CA), Ant Lion Optimizer (ALO), Genetic Algorithm (GA), and Grey Wolf Optimizer (GWO).
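As a rough illustration of the Sobol-sequence initialization described above, the following Python sketch draws a quasi-random population and scales it onto the search bounds; the population size, dimension, and bounds are illustrative assumptions, not values from the study.

```python
# Minimal sketch: Sobol-sequence population initialization (assumed sizes and bounds).
import numpy as np
from scipy.stats import qmc

def init_population(pop_size, dim, lower, upper, seed=0):
    # Sobol points cover the unit hypercube more evenly than plain uniform sampling.
    sampler = qmc.Sobol(d=dim, scramble=True, seed=seed)
    unit = sampler.random(n=pop_size)        # points in [0, 1)^dim
    return qmc.scale(unit, lower, upper)     # map onto the search-space bounds

pop = init_population(pop_size=32, dim=10,
                      lower=np.full(10, -5.0), upper=np.full(10, 5.0))
print(pop.shape)  # (32, 10)
```

Keeping the population size a power of two preserves the balance properties of the Sobol sequence.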
California is one of the major alfalfa (Medicago sativa L.) forage-producing states in the U.S., but its production area has decreased significantly in the last couple of decades. Selection of cultivars with high yield and nutritive value under a late-cutting schedule strategy may help identify cultivars that growers can use to maximize yield while maintaining area for sustainable alfalfa production, but there is little information on this strategy. A field study was conducted to determine cumulative dry matter (DM) yield and nutritive values of 20 cultivars with semi- and non-fall-dormant (FD) ratings (FD 7 and FD 8-10, respectively) under a 35-day cutting schedule in California's Central Valley in 2020-2022. Seasonal cumulative DM yields ranged from 6.8 Mg·ha−1 in 2020 to 37.0 Mg·ha−1 in 2021. Four FD 8-9 cultivars were the highest yielding, with 3-yr average DM greater than that of the lowest-yielding lines by 46%. The FD 7 cultivar "715RR" produced the highest crude protein (CP: 240 g·kg−1), while the FD 8 cultivar "HVX840RR" gave the highest neutral detergent fiber digestibility (NDFD: 484 g·kg−1, 7% greater than the top-yielding cultivars) but intermediate DM yield. Yields and NDFD correlated positively but weakly, indicating that some semi- and non-FD cultivars performed similarly. These results suggest that selecting high-yielding cultivars under a 35-day cutting schedule can help growers maximize yield while achieving good-quality forage for sustainable alfalfa production in California's Central Valley.
To solve the sparse reward problem of job-shop scheduling by deep reinforcement learning, a deep reinforcement learning framework that accounts for sparse rewards is proposed. The job-shop scheduling problem is transformed into a Markov decision process, and six state features are designed to improve the state representation using a two-way scheduling method, including four state features that distinguish the optimal action and two state features related to the learning goal. An extended variant of the graph isomorphism network, GIN++, is used to encode disjunctive graphs to improve the performance and generalization ability of the model. An iterative greedy algorithm generates a random strategy as the initial strategy, and the action with the maximum information gain is selected to expand it, improving the exploration ability of the Actor-Critic algorithm. Validation of the trained policy model on multiple public test datasets and comparison with other advanced DRL methods and scheduling rules show that the proposed method reduces the minimum average gap by 3.49%, 5.31%, and 4.16%, respectively, compared with the priority-rule-based methods, and by 5.34%, 11.97%, and 5.02% compared with the learning-based methods, effectively improving the accuracy with which DRL approximates the minimum completion time of the JSSP.
Funding: Shaanxi Provincial Key Research and Development Project (2023YBGY095) and Shaanxi Provincial Qin Chuangyuan "Scientist+Engineer" Project (2023KXJ247).
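The gap figures quoted above are typically relative deviations from a reference makespan; a minimal sketch of such a metric, with purely illustrative numbers, is shown below (the paper's exact definition may differ).

```python
# Minimal sketch: relative makespan gap to a reference (e.g., best-known) solution.
def relative_gap(makespan: float, reference: float) -> float:
    return 100.0 * (makespan - reference) / reference

# Illustrative values only: a learned policy's makespan vs. a best-known makespan.
print(round(relative_gap(makespan=1030, reference=980), 2), "%")
```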
The meta-heuristic algorithm with local search is an excellent choice for the job-shop scheduling problem (JSP). However, due to the unique nature of the JSP, local search may generate infeasible neighbourhood solutions. In the existing literature, although some domain knowledge of the JSP can be used to avoid infeasible solutions, the constraint conditions in this domain knowledge are sufficient but not necessary; they may exclude many feasible solutions and make the local search inadequate. By analysing the causes of infeasible neighbourhood solutions, this paper further explores the domain knowledge contained in the JSP and proposes sufficient and necessary constraint conditions to find all feasible neighbourhood solutions, allowing the local search to be carried out thoroughly. With the proposed conditions, a new neighbourhood structure is designed. Then, a fast calculation method for all feasible neighbourhood solutions is provided, significantly reducing the calculation time compared with ordinary methods. A set of standard benchmark instances is used to evaluate the performance of the proposed neighbourhood structure and calculation method. The experimental results show that the calculation method is effective and that the new neighbourhood structure is more reliable and superior to other famous and influential neighbourhood structures, with 90% of the results being the best compared with three other well-known neighbourhood structures. Finally, the result from a tabu search algorithm with the new neighbourhood structure is compared with the current best results, demonstrating the superiority of the proposed neighbourhood structure.
Funding: National Natural Science Foundation of China (Grant Nos. U21B2029 and 51825502).
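The following schematic sketch only frames the idea of enumerating candidate moves and keeping the feasible ones; the paper's sufficient and necessary feasibility conditions are its contribution and are represented here by a placeholder predicate.

```python
# Schematic sketch: enumerate candidate swap moves on a critical block and keep only
# those passing a feasibility test. `is_feasible` is a placeholder for the paper's
# sufficient and necessary conditions, which are not reproduced here.
from typing import Callable, List, Tuple

Move = Tuple[int, int]   # a pair of operation indices processed on the same machine

def feasible_neighbourhood(critical_block: List[int],
                           is_feasible: Callable[[Move], bool]) -> List[Move]:
    candidates = [(critical_block[i], critical_block[j])
                  for i in range(len(critical_block))
                  for j in range(i + 1, len(critical_block))]
    return [move for move in candidates if is_feasible(move)]

# Usage with a permissive placeholder predicate:
print(feasible_neighbourhood([3, 7, 12], lambda move: True))
```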
In Mobile ad hoc Networks (MANETs), the packet scheduling process is considered a major challenge because error-prone connectivity among mobile nodes introduces intolerable delay and insufficient throughput with high packet loss. In this paper, a Modified Firefly Optimization Algorithm-improved Fuzzy Scheduler-based Packet Scheduling (MFPA-FSPS) mechanism is proposed for sustaining Quality of Service (QoS) in the network. The MFPA-FSPS mechanism includes a fuzzy-based priority scheduler that inherits the merits of the Sugeno fuzzy inference system, which potentially and adaptively estimates packet priority to guarantee optimal network performance. It further uses the modified Firefly Optimization Algorithm to optimize the rules utilized by the fuzzy inference engine to achieve an effective packet scheduling process. This adoption of a fuzzy inference engine uses dynamic optimization that guarantees excellent scheduling of the necessary packets at an appropriate time with minimized waiting time. The statistical validation of the proposed MFPA-FSPS, conducted using a one-way Analysis of Variance (ANOVA) test, confirmed its predominance over the benchmarked schemes used for investigation.
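To make the Sugeno-style priority estimation concrete, here is a minimal zero-order Sugeno sketch with made-up membership functions and rule constants; the actual rule base in MFPA-FSPS is tuned by the modified Firefly algorithm and is not reproduced here.

```python
# Minimal zero-order Sugeno sketch: packet priority from two made-up inputs.
# Membership functions and rule constants are illustrative assumptions only.
def tri(x, a, b, c):
    """Triangular membership function with corners a <= b <= c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def packet_priority(delay_ms: float, queue_len: float) -> float:
    # Rule firing strengths (min as AND); each rule maps to a constant priority.
    rules = [
        (min(tri(delay_ms, 0, 0, 50), tri(queue_len, 0, 0, 20)), 0.2),       # low delay, short queue
        (min(tri(delay_ms, 30, 80, 150), tri(queue_len, 10, 30, 60)), 0.6),  # medium
        (min(tri(delay_ms, 100, 200, 200), tri(queue_len, 40, 80, 80)), 0.9) # high delay, long queue
    ]
    num = sum(w * p for w, p in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0    # weighted average of rule outputs

print(round(packet_priority(delay_ms=120, queue_len=50), 3))
```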
The default scheduler of Apache Hadoop demonstrates operational inefficiencies when connecting external sources and processing transformation jobs. This paper proposes a novel scheduler, the Adaptive Node and Container Aware Scheduler (ANACRAC), to enhance the performance of the Hadoop Yet Another Resource Negotiator (YARN) scheduler by aligning cluster resources to the demands of real-world applications. The approach leverages user-provided configurations to apportion nodes, or containers within the nodes, to application thresholds. Additionally, it gives applications the flexibility to select which nodes' resources they want to use and adds limits to prevent threshold breaches when additional jobs are added. Node or container awareness can be utilized individually or in combination to increase efficiency, and the resource availability within the nodes and containers can also be inspected. The paper also addresses the elasticity of the containers and their self-adaptiveness depending on the job type. The results showed that a 15%–20% performance improvement was achieved compared with the node and container awareness feature of ANACRAC, and it has been validated that the ANACRAC scheduler demonstrates a 70%–90% performance improvement compared with the default Fair scheduler. Experimental results also demonstrated a performance improvement in the range of 60% to 200% when applications were connected with external interfaces under high workloads.
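The node/container threshold idea can be pictured with a generic admission check like the one below; it is not the ANACRAC or YARN API, and all field names and limits are assumptions.

```python
# Generic sketch (not the YARN/ANACRAC API): admit an application's container request
# only while its node- and container-level thresholds are respected.
def can_admit(app, node, requested_containers):
    node_ok = node["used_containers"] + requested_containers <= node["max_containers"]
    app_ok = app["running_containers"] + requested_containers <= app["container_threshold"]
    return node_ok and app_ok

node = {"used_containers": 6, "max_containers": 8}
app = {"running_containers": 3, "container_threshold": 4}
print(can_admit(app, node, requested_containers=1))  # True
print(can_admit(app, node, requested_containers=2))  # False (app threshold breached)
```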
The study aimed to identify factors causing delays in scheduled gynaeco-obstetric surgeries at CHUMEFJE in Libreville from January 2019 to July 2020. Through a 16-month observational survey, it was found that out of 346 scheduled procedures, 128 (36.4%) were postponed. Organizational issues in the operating theatre were responsible for 80.3% of these delays, with 95.3% being preventable. To enhance efficiency, improvements in operating theatre organization are recommended.
Time-Sensitive Network (TSN), with its deterministic transmission capability, is increasingly used in many emerging fields. It mainly guarantees the Quality of Service (QoS) of applications with strict requirements on timing and security. One of the core features of TSN is traffic scheduling with bounded low delay in the network. However, traffic scheduling schemes in TSN are usually synthesized offline and lack dynamism. To implement incremental scheduling of newly arrived traffic in TSN, we propose a Dynamic Response Incremental Scheduling (DR-IS) method for time-sensitive traffic and deploy it on a software-defined time-sensitive network architecture. Under the premise of meeting the traffic scheduling requirements, we adopt two modes, traffic shift and traffic exchange, to dynamically adjust the time-slot injection positions of the traffic in the original scheme, and we determine the sending offset time of the new time-sensitive traffic to minimize the global traffic transmission jitter. The evaluation results show that the DR-IS method can effectively control the large increase of traffic transmission jitter in incremental scheduling without affecting the transmission delay, thus realizing dynamic incremental scheduling of time-sensitive traffic in TSN.
Funding: Innovation Scientists and Technicians Troop Construction Projects of Henan Province (224000510002).
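A simplified way to picture the offset-selection step is to score candidate time slots with a jitter proxy and pick the best, as in the sketch below; the cycle length, existing offsets, and scoring function are assumptions, not the DR-IS formulation.

```python
# Schematic sketch: choose the sending offset of a new time-sensitive flow by scoring
# candidate slots with a jitter proxy. All values and the proxy itself are illustrative.
import statistics

CYCLE_US = 1000
existing_offsets = [0, 260, 510, 770]          # already-scheduled flows (made up)

def jitter_proxy(offset: int) -> float:
    # Spread of the gaps to existing flows; a more even spread suggests less contention jitter.
    gaps = [(o - offset) % CYCLE_US for o in existing_offsets]
    return statistics.pvariance(gaps)

best = min(range(0, CYCLE_US, 10), key=jitter_proxy)
print(best, round(jitter_proxy(best), 1))
```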
The distributed flexible job shop scheduling problem (DFJSP) has attracted great attention with the growth of the global manufacturing industry. General DFJSP research only considers machine constraints and ignores worker constraints. As one critical factor of production, effective utilization of worker resources can increase productivity. Meanwhile, energy consumption is a growing concern due to increasingly serious environmental issues. Therefore, the distributed flexible job shop scheduling problem with dual resource constraints (DFJSP-DRC), minimizing makespan and total energy consumption, is studied in this paper. To solve the problem, we present a multi-objective mathematical model for DFJSP-DRC and propose a Q-learning-based multi-objective grey wolf optimizer (Q-MOGWO). In Q-MOGWO, high-quality initial solutions are generated by a hybrid initialization strategy, and an improved active decoding strategy is designed to obtain the scheduling schemes. To further enhance the local search capability and expand the solution space, two wolf predation strategies and three critical-factory neighborhood structures based on Q-learning are proposed. These strategies and structures enable Q-MOGWO to explore the solution space more efficiently and thus find better Pareto solutions. The effectiveness of Q-MOGWO in addressing DFJSP-DRC is verified through comparison with four algorithms on 45 instances. The results reveal that Q-MOGWO outperforms the comparison algorithms in terms of solution quality.
Funding: Natural Science Foundation of Anhui Province (2208085MG181); Science Research Project of Higher Education Institutions in Anhui Province, Philosophy and Social Sciences (2023AH051063); Open Fund of the Key Laboratory of Anhui Higher Education Institutes (CS2021-ZD01).
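Since Q-MOGWO reports Pareto solutions over the two minimized objectives (makespan, total energy consumption), a small dominance check like the following sketch underlies the comparison; the numbers are illustrative only.

```python
# Minimal sketch: Pareto dominance for two minimized objectives (makespan, energy).
def dominates(a, b):
    """True if solution a is no worse than b in every objective and strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# Illustrative (makespan, energy) pairs:
print(pareto_front([(120, 55.0), (118, 60.0), (130, 50.0), (125, 58.0)]))
```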
In recent years, target tracking has been considered one of the most important applications of wireless sensor networks (WSNs). Optimizing target tracking performance and prolonging network lifetime are two equally critical objectives in this scenario, and existing mechanisms still have weaknesses in balancing the two demands. The proposed heuristic multi-node collaborative scheduling mechanism (HMNCS) comprises cluster head (CH) election, pre-selection, and task set selection mechanisms, where the latter two selections form a two-layer selection mechanism. The CH election innovatively introduces the movement trend of the target and establishes a scoring mechanism to determine the optimal CH, which can delay CH rotation and thus reduce energy consumption. The pre-selection mechanism adaptively filters out suitable nodes as the candidate task set to apply for tracking tasks, which reduces the application consumption and the overhead of the subsequent task set selection. Finally, the task node selection is mathematically transformed into an optimization problem, and a genetic algorithm is adopted to form the final task set in the task set selection mechanism. Simulation results show that HMNCS outperforms other compared mechanisms in tracking accuracy and network lifetime.
Funding: Project Program of the Science and Technology on Micro-System Laboratory (6142804220101).
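The CH scoring idea can be sketched as a weighted combination of residual energy, distance to the target, and alignment with the target's movement trend, as below; the weights and score form are assumptions, not the HMNCS definition.

```python
# Illustrative CH scoring sketch: residual energy, distance, and movement-trend alignment.
# Weights, coordinates and the score form are placeholder assumptions.
import math

def ch_score(node_xy, node_energy, target_xy, target_velocity,
             w_energy=0.5, w_dist=0.3, w_trend=0.2):
    # Vector from the target to the candidate node.
    dx, dy = node_xy[0] - target_xy[0], node_xy[1] - target_xy[1]
    dist = math.hypot(dx, dy)
    speed = math.hypot(*target_velocity) or 1.0
    # Cosine between the target's velocity and the target-to-node direction:
    # close to 1 when the target is heading toward this node.
    trend = (dx * target_velocity[0] + dy * target_velocity[1]) / (speed * (dist or 1.0))
    return w_energy * node_energy - w_dist * dist + w_trend * trend

nodes = {"n1": ((10, 5), 0.9), "n2": ((3, 2), 0.6), "n3": ((8, 9), 0.8)}
best = max(nodes, key=lambda k: ch_score(nodes[k][0], nodes[k][1], (6, 6), (1.0, 0.5)))
print(best)
```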
As cloud quantum computing gains broader acceptance, a growing number of researchers are directing their focus towards this domain. Nevertheless, the rapid surge in demand for cloud-based quantum computing resources has led to a scarcity, which in turn hampers users from achieving optimal satisfaction. Therefore, cloud quantum computing service providers require a unified analysis and scheduling framework for their quantum resources and user jobs to meet the ever-growing usage demands. This paper introduces a new multi-programming scheduling framework for quantum computing in a cloud environment. The framework addresses the issue of limited quantum computing resources in cloud environments and ensures a satisfactory user experience. It introduces three innovative designs: 1) the framework automatically allocates tasks to different quantum backends while ensuring fairness among users by considering both the cloud-based quantum resources and the user-submitted tasks; 2) a multi-programming mechanism is employed across different quantum backends to enhance the overall throughput of the quantum cloud, and, in comparison to conventional task schedulers, the proposed framework achieves a throughput improvement of more than two-fold in the quantum cloud; 3) the framework can balance fidelity and user waiting time by adaptively adjusting scheduling parameters.
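A much-simplified sketch of the allocation idea, combining user fairness with least-loaded backend placement, is given below; the data structures, backend names, and policy details are assumptions rather than the framework's actual design.

```python
# Simplified sketch (not the paper's policy): pick the next job from the user with the
# least recent service (fairness) and place it on the quantum backend with the shortest
# queue (throughput). All structures and names are assumptions.
from collections import deque

def schedule_step(user_queues: dict, served_count: dict, backend_load: dict):
    waiting = [u for u, q in user_queues.items() if q]
    if not waiting:
        return None
    user = min(waiting, key=lambda u: served_count[u])   # fairness: least-served user first
    job = user_queues[user].popleft()
    backend = min(backend_load, key=backend_load.get)    # least-loaded backend
    backend_load[backend] += 1
    served_count[user] += 1
    return user, job, backend

users = {"alice": deque(["j1", "j2"]), "bob": deque(["j3"])}
served = {"alice": 5, "bob": 1}
load = {"backend_a": 4, "backend_b": 2}
print(schedule_step(users, served, load))  # ('bob', 'j3', 'backend_b')
```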
In current research on task offloading and resource scheduling in vehicular networks, vehicles are commonly assumed to maintain constant speed or relatively stationary states, and the impact of speed variations on task offloading is often overlooked. It is frequently assumed that vehicles can be accurately modeled during actual motion processes. However, in dynamic vehicular environments, both the tasks generated by the vehicles and the vehicles' surroundings are constantly changing, making it difficult to achieve real-time modeling for actual dynamic vehicular network scenarios. Taking actual dynamic vehicular scenarios into account, this paper considers the real-time non-uniform movement of vehicles and proposes a vehicular task dynamic offloading and scheduling algorithm for single-task, multi-vehicle vehicular network scenarios, aiming to solve the dynamic decision-making problem in the task offloading process. The optimization objective is to minimize the average task completion time, which is formulated as a multi-constrained non-linear programming problem. Due to the mobility of vehicles, a constraint model is applied in the decision-making process to dynamically determine whether the communication range is sufficient for task offloading and transmission. Finally, the proposed vehicular task dynamic offloading and scheduling algorithm based on multi-agent deep deterministic policy gradient (MADDPG) is applied to obtain the optimal solution of the optimization problem. Simulation results show that the proposed algorithm achieves lower-latency task computation offloading; its average task completion time is improved by 7.6% compared with the MADDPG scheme and by 51.1% compared with deep deterministic policy gradient (DDPG).
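The communication-range constraint can be illustrated with a simple kinematic check of whether a vehicle stays inside an edge node's coverage for the whole offload duration, as in the sketch below; the 1-D constant-acceleration model and all numbers are assumptions.

```python
# Minimal sketch of the range-constraint idea: under assumed 1-D kinematics (constant
# acceleration), check whether a vehicle stays inside an edge node's coverage for the
# whole offload duration. All parameters are illustrative.
def stays_in_range(x0, v0, a, edge_x, radius, duration, dt=0.1):
    t = 0.0
    while t <= duration:
        x = x0 + v0 * t + 0.5 * a * t * t      # position under constant acceleration
        if abs(x - edge_x) > radius:
            return False
        t += dt
    return True

# Vehicle at 100 m moving at 20 m/s and accelerating; edge node at 300 m with 250 m radius;
# the task needs 6 s of connectivity (upload + compute + download).
print(stays_in_range(x0=100, v0=20, a=1.5, edge_x=300, radius=250, duration=6.0))
```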
Currently, applications access remote computing resources mainly through cloud data centers, but this mode of operation greatly increases communication latency and reduces overall quality of service (QoS) and quality of experience (QoE). Edge computing technology extends cloud service functionality to the edge of the mobile network, closer to the task execution end, and can effectively mitigate the communication latency problem. However, the massive and heterogeneous nature of servers in edge computing systems brings new challenges to task scheduling and resource management, and the booming development of artificial neural networks provides more powerful methods to alleviate this limitation. Therefore, in this paper, we propose a time series forecasting model incorporating Conv1D, LSTM and GRU layers for edge computing device resource scheduling, train and test the forecasting model using a small self-built dataset, and achieve competitive experimental results.
Funding: National Natural Science Foundation of China (62172192, U20A20228, 62171203); Science and Technology Demonstration Project of Social Development of Jiangsu Province (BE2019631).
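A minimal Keras sketch of a Conv1D + LSTM + GRU forecaster in the spirit described above is shown below; the layer sizes, window length, and feature count are assumptions, not the paper's configuration.

```python
# Minimal Keras sketch: Conv1D + LSTM + GRU next-step resource-demand forecaster.
# Window length, feature count and layer sizes are illustrative assumptions.
import tensorflow as tf

window, n_features = 24, 4          # e.g., 24 past steps of 4 resource metrics (assumed)
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu",
                           input_shape=(window, n_features)),      # local temporal patterns
    tf.keras.layers.LSTM(32, return_sequences=True),               # longer-range dependencies
    tf.keras.layers.GRU(16),                                       # compact recurrent summary
    tf.keras.layers.Dense(1),                                      # next-step resource demand
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```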
Cloud service providers generally co-locate online services and batch jobs onto the same computer cluster, where resources can be pooled in order to maximize data center resource utilization. Due to resource competition between batch jobs and online services, co-location frequently impairs the performance of online services. This study presents a quality of service (QoS) prediction-based scheduling model (QPSM) for co-located workloads. The performance prediction in QPSM consists of two parts: prediction of an online service's QoS anomaly based on XGBoost and prediction of the completion time of an offline batch job based on random forest. Online-service QoS anomaly prediction is used to evaluate the influence of the batch job mix on online-service performance, and batch-job completion time prediction is utilized to reduce the total waiting time of batch jobs. When the same number of batch jobs are scheduled in experiments using typical test sets such as CloudSuite, the scheduling time required by QPSM is reduced by about 6 h on average compared with the first-come, first-served strategy and by about 11 h compared with the random scheduling strategy. Compared with the non-co-located situation, QPSM improves CPU resource utilization by 12.15% and memory resource utilization by 5.7% on average. Experiments show that the QPSM scheduling strategy can effectively guarantee the quality of online services and further improve cluster resource utilization.
Funding: National Natural Science Foundation of China (61972118); Key R&D Program of Zhejiang Province (2023C01028).
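The two predictors can be sketched with off-the-shelf models, an XGBoost classifier for QoS anomalies and a random-forest regressor for batch-job completion time, as below; the features and data are synthetic placeholders, not the paper's dataset.

```python
# Rough sketch of the two predictors: XGBoost for online-service QoS anomalies and a
# random forest for batch-job completion time. Features and targets are synthetic.
import numpy as np
from xgboost import XGBClassifier
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.random((200, 6))                                        # e.g., CPU, memory, cache, I/O, net, job-mix
qos_anomaly = (X[:, 0] + X[:, 1] > 1.2).astype(int)             # synthetic anomaly label
completion_time = 50 + 120 * X[:, 2] + rng.normal(0, 5, 200)    # synthetic runtime (s)

qos_model = XGBClassifier(n_estimators=100, max_depth=4).fit(X, qos_anomaly)
runtime_model = RandomForestRegressor(n_estimators=100).fit(X, completion_time)

candidate = rng.random((1, 6))
print(qos_model.predict(candidate), runtime_model.predict(candidate))
```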
The flow shop scheduling problem is important for the manufacturing industry, and effective flow shop scheduling can bring great benefits. However, there is little research on Distributed Hybrid Flow Shop Problems (DHFSP) using learning-assisted meta-heuristics. This work addresses a DHFSP with the objective of minimizing the maximum completion time (makespan). First, a mathematical model is developed for the concerned DHFSP. Second, four Q-learning-assisted meta-heuristics, based on the genetic algorithm (GA), artificial bee colony algorithm (ABC), particle swarm optimization (PSO), and differential evolution (DE), are proposed. According to the nature of DHFSP, six local search operations are designed for finding high-quality solutions in the local space. Instead of random selection, Q-learning assists the meta-heuristics in choosing the appropriate local search operations during iterations. Finally, comprehensive numerical experiments on 60 cases are conducted to assess the effectiveness of the proposed algorithms. The experimental results and discussions prove that using Q-learning to select appropriate local search operations is more effective than the random strategy. To verify the competitiveness of the Q-learning-assisted meta-heuristics, they are compared with the improved iterated greedy algorithm (IIG), which also solves DHFSP. The Friedman test is executed on the results of the five algorithms. It is concluded that the four Q-learning-assisted meta-heuristics perform better than IIG, and the Q-learning-assisted PSO shows the best competitiveness.
Funding: Guangdong Basic and Applied Basic Research Foundation (2023A1515011531); National Natural Science Foundation of China (62173356); Science and Technology Development Fund (FDCT), Macao SAR (0019/2021/A); Zhuhai Industry-University-Research Project with Hong Kong and Macao (ZH22017002210014PWC); Key Technologies for Scheduling and Optimization of Complex Distributed Manufacturing Systems (22JR10KA007).
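A minimal sketch of Q-learning-driven operator selection is shown below: the agent keeps a Q-value per (state, local-search operation) and chooses operations epsilon-greedily; the state encoding, reward, and hyper-parameters are placeholders for the paper's definitions.

```python
# Minimal sketch: epsilon-greedy Q-learning over six local search operations.
# State encoding, reward and hyper-parameters are illustrative assumptions.
import random

N_OPS = 6                       # six local search operations
Q = {}                          # (state, op) -> value

def choose_op(state, eps=0.2):
    if random.random() < eps:
        return random.randrange(N_OPS)
    return max(range(N_OPS), key=lambda op: Q.get((state, op), 0.0))

def update(state, op, reward, next_state, alpha=0.1, gamma=0.9):
    best_next = max(Q.get((next_state, o), 0.0) for o in range(N_OPS))
    old = Q.get((state, op), 0.0)
    Q[(state, op)] = old + alpha * (reward + gamma * best_next - old)

# One illustrative step: an improvement in makespan is used as the reward signal.
op = choose_op(state="stagnating")
update("stagnating", op, reward=1.0, next_state="improving")
print(op, Q)
```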
Improving the cooperative scheduling efficiency of equipment is key for automated container terminals to cope with the trend toward large-scale ships. In order to improve the solution efficiency of the existing space-time network (STN) model for the cooperative scheduling problem of yard cranes (YCs) and automated guided vehicles (AGVs), and to extend its application scenarios, two improved STN models are proposed. The flow balance constraints in the original model are decomposed, and the trajectory constraints of YCs and AGVs are added to obtain the model STN_A. The coupling constraint in STN_A is updated, and buffer constraints are added to STN_A so that the model STN_B is built. As the size of the problem increases, the solution speed of CPLEX becomes the bottleneck, so a heuristic method containing three groups of heuristic rules is designed to obtain a near-optimal solution quickly. Experimental results show that the computation time of STN_A is shortened by 49.47% on average and the gap is reduced by 1.69% on average compared with the original model. The gap between the solution of the heuristic rules and the solution of CPLEX is less than 3.50%, and the solution time of the heuristic rules is on average 99.85% less than that of CPLEX. Compared with STN_A, the computation time for solving STN_B increases by 58.93% on average.
Funding: National Natural Science Foundation of China (62073212).
The growing development of the Internet of Things (IoT) is accelerating the emergence and growth of new IoT services and applications, which will result in massive amounts of data being generated, transmitted and processed in wireless communication networks. Mobile Edge Computing (MEC) is a desirable paradigm for processing IoT data in a timely manner to maximize its value. In MEC, a number of computing-capable devices are deployed at the network edge near the data sources to support edge computing, so that the long network transmission delay of the cloud computing paradigm can be avoided. Since an edge device might not always have sufficient resources to process the massive amount of data, computation offloading with cooperation among edge devices is significantly important. However, the dynamic traffic characteristics and heterogeneous computing capabilities of edge devices challenge the offloading. In addition, different scheduling schemes might provide different computation delays to the offloaded tasks; thus, offloading in mobile nodes and scheduling in the MEC server are coupled in determining service delay. This paper seeks to guarantee low delay for computation-intensive applications by jointly optimizing the offloading and scheduling in such an MEC system. We propose a Delay-Greedy Computation Offloading (DGCO) algorithm to make offloading decisions for new tasks in distributed computing-enabled mobile devices. A Reinforcement Learning-based Parallel Scheduling (RLPS) algorithm is further designed to schedule offloaded tasks in the multi-core MEC server. With an offloading-delay broadcast mechanism, DGCO and RLPS cooperate to achieve the goal of maximizing the delay-guarantee ratio. Finally, the simulation results show that our proposal can bound the end-to-end delay of various tasks: even under a slightly heavy task load, the delay-guarantee ratio given by DGCO-RLPS can still approximate 95%, while that given by the benchmarked algorithms drops to an intolerable value. The simulation results demonstrate the effectiveness of DGCO-RLPS for delay guarantees in MEC.
Funding: National Natural Science Foundation of China (61901128, 62273109); Natural Science Foundation of the Jiangsu Higher Education Institutions of China (21KJB510032).
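A delay-greedy decision of the kind DGCO makes can be sketched as comparing the estimated local execution delay with the estimated offload delay (transmission plus queueing plus remote computation); all rates and task parameters below are illustrative assumptions.

```python
# Minimal sketch of a delay-greedy offloading decision: offload a task only when the
# estimated offload delay beats local execution. All numbers are illustrative.
def offload_decision(task_bits, task_cycles, local_cps, remote_cps, uplink_bps, remote_queue_s):
    local_delay = task_cycles / local_cps
    offload_delay = task_bits / uplink_bps + remote_queue_s + task_cycles / remote_cps
    return ("offload", offload_delay) if offload_delay < local_delay else ("local", local_delay)

print(offload_decision(task_bits=2e6, task_cycles=8e8, local_cps=5e8,
                       remote_cps=4e9, uplink_bps=1e7, remote_queue_s=0.3))
```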