Since Grover’s algorithm was first introduced, it has grown into a category of quantum algorithms that can be applied to many problems through the exploitation of quantum parallelism. The original application was the unstructured search problem, with a time complexity of O(√N). In Grover’s algorithm, the key components are the Oracle and Amplitude Amplification. In this paper, our purpose is to show through examples that, in general, the time complexity of the Oracle Phase is O(N), not O(1). As a result, the time complexity of Grover’s algorithm is O(N), not O(√N). As a secondary purpose, we also attempt to restore the time complexity of Grover’s algorithm to its original form, O(√N), by introducing an O(1) parallel algorithm for unstructured search without repeated items, which works for most cases. In the worst-case scenarios, where the number of repeated items is O(N), the time complexity of the Oracle Phase remains O(N) even after additional preprocessing.
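As a rough illustration of the amplitude-amplification step discussed above (not the paper's own construction), the following NumPy sketch simulates Grover iterations on a classical state vector; the search-space size N = 16 and the marked index are arbitrary example values. Note that each classical application of the oracle and of the diffusion operator touches all N amplitudes, which mirrors the O(N) oracle cost the paper argues for.

```python
import numpy as np

N = 16                                    # size of the unstructured search space (example value)
marked = 7                                # index of the marked item (example value)
state = np.full(N, 1 / np.sqrt(N))        # uniform superposition over N basis states

for _ in range(int(np.pi / 4 * np.sqrt(N))):   # ~ (pi/4) * sqrt(N) Grover iterations
    state[marked] *= -1                   # oracle: phase-flip the marked amplitude
    state = 2 * state.mean() - state      # diffusion: inversion about the mean

print(int(np.argmax(state ** 2)))         # prints 7 with high probability
```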
Computational time complexity analyses of evolutionary algorithms (EAs) have been performed since the mid-nineties. The first results were related to very simple algorithms, such as the (1+1)-EA, on toy problems. These efforts produced a deeper understanding of how EAs perform on different kinds of fitness landscapes, together with general mathematical tools that may be extended to the analysis of more complicated EAs on more realistic problems. In fact, in recent years it has been possible to analyze the (1+1)-EA on combinatorial optimization problems with practical applications, and more realistic population-based EAs on structured toy problems. This paper presents a survey of the results obtained in the last decade along these two research lines. The most common mathematical techniques are introduced, the basic ideas behind them are discussed, and their respective applications are highlighted. Solved problems that were previously open are enumerated, as are those still awaiting a solution. New questions and problems that have arisen in the meantime are also considered.
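Since the (1+1)-EA is the survey's starting point, a minimal sketch of it on the OneMax problem (maximize the number of ones in a bit string) may help fix ideas; the string length and evaluation budget below are illustrative choices, not taken from the survey.

```python
import random

def one_plus_one_ea(n=50, max_evals=100_000):
    """(1+1)-EA on OneMax: flip each bit independently with prob. 1/n, accept if not worse."""
    x = [random.randint(0, 1) for _ in range(n)]
    best = sum(x)                                         # OneMax fitness = number of ones
    for evals in range(max_evals):
        y = [b ^ (random.random() < 1.0 / n) for b in x]  # standard bit mutation
        fy = sum(y)
        if fy >= best:                                    # elitist acceptance
            x, best = y, fy
        if best == n:                                     # global optimum reached
            return evals + 1
    return max_evals

print(one_plus_one_ea())   # the expected hitting time on OneMax is Theta(n log n)
```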
In this paper, we use a cellular automaton model to simulate the earthquake process and draw some conclusions of general applicability. First, it is confirmed that the earthquake process has some ordering characteristics, and it is shown that both the existence of faults and their mutual arrangement can obviously influence the overall character of the earthquake process. Then the characteristics of each stage of the model's evolution are explained with self-organized critical state theory. Finally, earthquake sequences produced by the models are analysed in terms of algorithmic complexity, and the result shows that AC-values of algorithmic complexity can be used to study the earthquake process and its evolution.
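For readers unfamiliar with self-organized criticality, the sketch below implements the classic Bak-Tang-Wiesenfeld sandpile, a generic toy cellular automaton that exhibits it; this is only a stand-in for the idea and is not the fault model used in the paper. The grid size and number of grains are arbitrary example values.

```python
import numpy as np

def sandpile_avalanches(size=20, grains=5000, seed=0):
    """Bak-Tang-Wiesenfeld sandpile: drop grains at random sites; any site holding
    4 or more grains topples, sending one grain to each neighbour. Returns the
    avalanche size (number of topplings) triggered by each added grain."""
    rng = np.random.default_rng(seed)
    grid = np.zeros((size, size), dtype=int)
    avalanche_sizes = []
    for _ in range(grains):
        i, j = rng.integers(0, size, size=2)
        grid[i, j] += 1
        topplings = 0
        while (grid >= 4).any():
            for a, b in zip(*np.where(grid >= 4)):
                grid[a, b] -= 4
                topplings += 1
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    na, nb = a + da, b + db
                    if 0 <= na < size and 0 <= nb < size:
                        grid[na, nb] += 1        # grains toppling off the edge are lost
        avalanche_sizes.append(topplings)
    return avalanche_sizes

print(max(sandpile_avalanches()))   # avalanche sizes follow a heavy-tailed distribution
```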
We establish polynomial complexity bounds for Mehrotra-type predictor-corrector algorithms for linear programming over symmetric cones. We first slightly modify the maximum step size in the predictor step of the safeguard-based Mehrotra-type algorithm for linear programming that was proposed by Salahi et al. Then, using the machinery of Euclidean Jordan algebras, we extend the modified algorithm to symmetric cones. Based on the Nesterov-Todd direction, we obtain an O(r log ε^(-1)) iteration complexity bound for this algorithm, where r is the rank of the Jordan algebra and ε is the required precision. We also present a new variant of the Mehrotra-type algorithm using a new adaptive updating scheme for the centering parameter, and show that this algorithm enjoys the same order of complexity bound as the safeguarded algorithm. We illustrate the numerical behaviour of the methods on some small examples.
This paper uses the concept of algorithmic efficiency to present a unified theory of intelligence. Intelligence is defined informally, formally, and computationally. We introduce the concept of dimensional complexity in algorithmic efficiency and deduce that an optimally efficient algorithm has zero time complexity, zero space complexity, and an infinite dimensional complexity. This algorithm is used to generate the number line.
In a satellite mobile communication system, the relative movement of the satellite and the terminal causes a large Doppler offset. Timing advance estimation with Zadoff-Chu (ZC) sequences is sensitive to frequency offset: when the frequency offset is larger than one subcarrier spacing, the correlation peak cannot be detected at the receiving end. To suppress the large Doppler frequency shift, this paper proposes a novel timing advance estimation scheme (TAE-MCD) for satellite communication systems. In this algorithm, the transmitted signal is divided into a ZC sequence and its conjugate sequence. Multiplication and DFT operations are used to find the estimated peaks at the receiving end, and the results obtained for the two sequences are finally subtracted. The scheme not only effectively suppresses the adverse effects of large Doppler frequency shifts on timing estimation, but also reduces the computational complexity at the receiving end and improves the efficiency of the hardware. Simulation results show that TAE-MCD outperforms existing timing advance estimation methods while requiring no additional time or frequency resources.
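As background for the scheme above, the sketch below shows plain ZC-based timing estimation via a DFT cross-correlation and how a large carrier frequency offset corrupts the peak; it is a baseline illustration, not the TAE-MCD algorithm itself, and the sequence length, root, delay, and offset are arbitrary example values.

```python
import numpy as np

def zadoff_chu(root, length):
    """Root-`root` Zadoff-Chu sequence of odd length `length`."""
    n = np.arange(length)
    return np.exp(-1j * np.pi * root * n * (n + 1) / length)

Nzc, root, true_delay = 139, 25, 17            # example values, not from the paper
zc = zadoff_chu(root, Nzc)
rx = np.roll(zc, true_delay)                   # received signal = cyclically delayed copy (noise omitted)

# Circular cross-correlation via DFT: IFFT( FFT(rx) * conj(FFT(zc)) ) peaks at the delay.
corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(zc)))
print(int(np.argmax(np.abs(corr))))            # -> 17

# A carrier frequency offset of 1.5 subcarrier spacings smears and shifts the peak.
cfo = 1.5 / Nzc
rx_cfo = rx * np.exp(2j * np.pi * cfo * np.arange(Nzc))
corr_cfo = np.fft.ifft(np.fft.fft(rx_cfo) * np.conj(np.fft.fft(zc)))
print(int(np.argmax(np.abs(corr_cfo))))        # the estimate is typically no longer 17
```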
In this paper, enhanced greedy bit and power allocation algorithms for orthogonal frequency division multiplexing (OFDM) communication systems are introduced. These algorithms combine low-complexity greedy power allocation with a simplified maximum ratio combining (MRC) precoding technique at the transmitter to maximize the average data throughput of OFDM communication systems. Results of computer simulations show that precoding is an effective technique for improving the throughput performance of the proposed bit and power allocation algorithms.
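To make the greedy idea concrete, here is a minimal Hughes-Hartogs-style bit-loading sketch: each iteration grants one extra bit to the subcarrier whose next bit costs the least additional power, until the power budget is exhausted. This is a generic baseline under an SNR-gap approximation, not the paper's enhanced algorithm, and it omits the MRC precoding; the gains, budget, and bit cap are example values.

```python
import heapq
import numpy as np

def greedy_bit_loading(gains, power_budget, gap=1.0, noise=1.0, max_bits=8):
    """Greedy bit loading: repeatedly add one bit where it is cheapest in power."""
    num_sc = len(gains)
    bits = np.zeros(num_sc, dtype=int)

    def extra_power(k, b):
        # Additional power to move subcarrier k from b to b+1 bits (SNR-gap model).
        return (2 ** (b + 1) - 2 ** b) * gap * noise / gains[k]

    heap = [(extra_power(k, 0), k) for k in range(num_sc)]
    heapq.heapify(heap)
    used = 0.0
    while heap:
        dp, k = heapq.heappop(heap)
        if used + dp > power_budget:
            break                      # even the cheapest next bit no longer fits
        used += dp
        bits[k] += 1
        if bits[k] < max_bits:
            heapq.heappush(heap, (extra_power(k, bits[k]), k))
    return bits

gains = np.array([2.5, 1.0, 0.3, 4.0])    # example per-subcarrier channel power gains
print(greedy_bit_loading(gains, power_budget=20.0))
```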
In certain computational systems, the amount of space required to execute an algorithm is even more restrictive than the corresponding time necessary for the solution of a problem. In this paper, an algorithm for the modular multiplicative inverse is introduced and its computational space complexity is analyzed. A tight upper bound on the bit storage required for execution of the algorithm is provided. It is demonstrated that, for the range of numbers used in public-key encryption systems, the size of the bit storage does not exceed a 2K-bit threshold in the worst case. This feature of the Enhanced-Euclid algorithm allows designing special-purpose hardware for its implementation as a subroutine in communication-secure wireless devices.
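For reference, the textbook extended Euclidean computation of a modular inverse is sketched below; it illustrates the operation the Enhanced-Euclid algorithm implements space-efficiently, but it is not the paper's algorithm and makes no claim about its storage bound.

```python
def mod_inverse(a, m):
    """Modular multiplicative inverse of a mod m via the extended Euclidean algorithm.
    Returns x with (a * x) % m == 1, or raises if gcd(a, m) != 1."""
    old_r, r = a % m, m        # remainders
    old_s, s = 1, 0            # coefficients of a, so old_s * a ≡ old_r (mod m) throughout
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("a is not invertible modulo m")
    return old_s % m

print(mod_inverse(7, 2**16 + 1))   # 7 * result ≡ 1 (mod 65537)
```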
An algorithm's complexity, or efficiency (meaning its evaluation time), is a primary concern in algorithmic problem solving. Increasing the memory used may reduce the complexity of an algorithm drastically. We present an example of two algorithms on a finite set, where changing the approach to the same problem and introducing a memory array decreases the complexity of the algorithm from order O(n^2) to order O(n).
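The following sketch illustrates the general trade-off: a pairwise comparison takes O(n^2) time, while spending extra memory on a "seen" array brings the same duplicate test down to O(n). It is an illustration of the idea, not necessarily the specific pair of algorithms discussed in the paper.

```python
def has_duplicate_quadratic(values):
    """O(n^2): compare every pair of elements."""
    n = len(values)
    for i in range(n):
        for j in range(i + 1, n):
            if values[i] == values[j]:
                return True
    return False

def has_duplicate_linear(values, universe_size):
    """O(n): use O(universe_size) extra memory to avoid the inner loop."""
    seen = [False] * universe_size
    for v in values:               # each element is inspected exactly once
        if seen[v]:
            return True
        seen[v] = True
    return False

data = [3, 14, 15, 9, 2, 6, 5, 3]
print(has_duplicate_quadratic(data), has_duplicate_linear(data, universe_size=100))
```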
A fast algorithm is proposed in this paper to solve a kind of high-complexity multi-objective problem. It takes advantage of both the orthogonal design method, to search evenly, and the statistical optimal method, to speed up the computation. It is very suitable for solving high-complexity problems, and it quickly yields solutions that converge to the Pareto-optimal set with high precision and uniform distribution. Some complicated multi-objective problems are solved by the algorithm, and the results show that the algorithm is not only fast but also superior to other MOGAs and MOEAs, such as the currently efficient algorithm SPEA, in terms of the precision, quantity and distribution of solutions.
This paper proposes the alternating direction method of multipliers-based infinity-norm (ADMIN) with threshold (ADMIN-T) and with percentage (ADMIN-P) detection algorithms, which make full use of the distribution of the signal to interference plus noise ratio (SINR) for an uplink massive MIMO system. The ADMIN-T and ADMIN-P detection algorithms are improved versions of the ADMIN detection algorithm, in which an appropriate SINR threshold in the ADMIN-T detection algorithm and a certain percentage in the ADMIN-P detection algorithm are designed to reduce the overall computational complexity. The detected symbols are divided into two parts by the SINR threshold, which is based on the cumulative distribution function (CDF) of the SINR, and by a percentage, respectively. The symbols in the higher-SINR part are detected by MMSE, and their interference is then cancelled by successive interference cancellation (SIC). Afterwards, the remaining symbols with low SINR are iteratively detected by ADMIN. The simulation results show that the ADMIN-T and ADMIN-P detection algorithms provide a significant performance gain compared with some recently proposed detection algorithms. In addition, the computational complexity of ADMIN-T and ADMIN-P is significantly reduced. Furthermore, with the same number of transmit and receive antennas, the proposed algorithms achieve higher performance than in the case of asymmetric transceiver antennas.
Two efficient and low-complexity multiuser scheduling algorithms are proposed for uplink multiple-input multiple-output systems in this paper. Conventionally, the exhaustive search algorithm (ESA) gives the optimal performance; however, its complexity is prohibitive for practical implementation. Aiming at reducing the complexity while keeping the achievable sum-rate performance, two heuristic algorithms are proposed for the multiuser scheduling problem: an improved genetic algorithm and a simplified norm-based greedy algorithm. Moreover, we also consider the heterogeneous scenario, where a modified grouping-based user selection algorithm is given to guarantee user fairness. Specifically, the asymptotic behavior of the norm-based greedy algorithm is given when each user is equipped with one antenna. Numerical examples demonstrate the superiority of our proposed scheduling and grouping algorithms.
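As a simple point of reference for the norm-based idea, the sketch below selects the users with the largest channel norms and evaluates the resulting uplink sum rate; it is a generic baseline with arbitrary dimensions and equal per-user power, not the paper's simplified or grouping-based algorithms.

```python
import numpy as np

def norm_based_selection(H, num_selected):
    """Pick the users with the largest channel norms from an M x K uplink channel
    matrix H (one column per single-antenna user)."""
    norms = np.linalg.norm(H, axis=0)
    return np.sort(np.argsort(norms)[::-1][:num_selected])

def uplink_sum_rate(H, users, power=1.0):
    """Sum rate log2 det(I + P * Hs Hs^H) for the selected user set, unit noise power."""
    Hs = H[:, users]
    det = np.linalg.det(np.eye(H.shape[0]) + power * Hs @ Hs.conj().T)
    return float(np.log2(det.real))

rng = np.random.default_rng(1)
M, K = 4, 10                                  # 4 receive antennas, 10 candidate users (example)
H = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
selected = norm_based_selection(H, num_selected=4)
print(selected, round(uplink_sum_rate(H, selected), 2))
```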
Zernike polynomials have been used in different fields such as optics, astronomy, and digital image analysis for many years. To form these polynomials, Zernike moments are essential to be determined. One of the main issues in realizing the moments is the use of factorial terms in their equation, which causes higher time complexity. As a solution, several methods have been presented in recent years to reduce the time complexity of these polynomials. The purpose of this research is to study several of the most popular recursive methods for fast Zernike computation and to compare them using a global theoretical evaluation criterion, worst-case time complexity. In this study, we have analyzed the selected algorithms and calculated the worst-case time complexity for each one. After that, the results are presented and explained, and finally a conclusion is drawn by comparing these criteria among the studied algorithms. According to time complexity, we have observed that although some algorithms, such as the Wee method and the Modified Prata method, were successful in having smaller time complexities, some other approaches did not make any significant difference compared to the classical algorithm.
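For context, the classical factorial-based definition of the radial Zernike polynomial, which the recursive methods try to compute more cheaply, can be written directly as below; this is the textbook formula, not any of the surveyed recursive algorithms.

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Radial Zernike polynomial R_n^m(rho) from the classical factorial formula,
    valid for n >= |m| with n - |m| even."""
    m = abs(m)
    if (n - m) % 2:
        return 0.0
    total = 0.0
    for k in range((n - m) // 2 + 1):
        coeff = ((-1) ** k * factorial(n - k)
                 / (factorial(k)
                    * factorial((n + m) // 2 - k)
                    * factorial((n - m) // 2 - k)))
        total += coeff * rho ** (n - 2 * k)
    return total

print(zernike_radial(4, 2, 0.5))   # R_4^2(rho) = 4*rho^4 - 3*rho^2, so -0.5 here
```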
The traditional A* algorithm exhibits low efficiency in the path planning of unmanned surface vehicles (USVs). In addition, the planned path contains numerous redundant inflection waypoints and its security is low, which is not conducive to the control of the USV and also affects navigation safety. In this paper, these problems were addressed through the following improvements. First, the path search angle and security were comprehensively considered, and a security expansion strategy for nodes based on a 5×5 neighborhood was proposed: the A* search neighborhood was expanded from 3×3 to 5×5, and safe nodes were screened out for expansion via the node security expansion strategy. This also optimizes path search angles while improving path security. Second, the distance from the current node to the target node was introduced into the heuristic function, improving the efficiency of the A* algorithm, and the path was smoothed using the Floyd algorithm. To dynamically adjust the weights and improve the efficiency of the dynamic-window approach (DWA), the distance from the USV to the target point was introduced into the DWA evaluation function. Finally, combined with the local target point selection strategy, the optimized DWA algorithm was used for local path planning. The experimental results show that the fusion algorithm plans a smooth and safe path, successfully avoids dynamic obstacles, and is effective and feasible for USV path planning.
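For orientation, a compact baseline A* on an occupancy grid with an 8-connected neighborhood and a Euclidean heuristic is sketched below; it is the standard 3×3-neighborhood version, not the paper's 5×5 security-expanded variant or the fused DWA planner, and the small grid is an arbitrary example.

```python
import heapq
import math

def a_star(grid, start, goal):
    """Baseline A* on a grid of 0 (free) / 1 (obstacle) cells with 8-connected moves.
    Returns the path as a list of (row, col) cells, or None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: math.dist(c, goal)                   # Euclidean heuristic
    open_heap = [(h(start), 0.0, start, None)]         # (f, g, cell, parent)
    parent, best_g = {}, {start: 0.0}
    while open_heap:
        _, g, cell, prev = heapq.heappop(open_heap)
        if cell in parent:
            continue                                   # already expanded
        parent[cell] = prev
        if cell == goal:
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                nxt = (cell[0] + dr, cell[1] + dc)
                if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols) or grid[nxt[0]][nxt[1]]:
                    continue
                ng = g + math.hypot(dr, dc)
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt] = ng
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt, cell))
    return None

grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 1, 0],
        [1, 0, 0, 0]]
print(a_star(grid, (0, 0), (3, 3)))
```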
In this paper, we analyze the complexity and entropy of different data compression algorithms: LZW, Huffman, Fixed-length code (FLC), and Huffman after using Fixed-length code (HFLC). We test these algorithms on files of different sizes and conclude that LZW is the best one at all compression scales that we tested, especially on large files, followed by Huffman, HFLC, and FLC, respectively. Data compression is still an important research topic with many applications. Therefore, we suggest continuing research in this field, trying to combine two techniques in order to reach a better one, or using another source mapping (Hamming), such as embedding a linear array into a hypercube, together with other good techniques like Huffman, in order to reach good results.
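As a minimal illustration of how the dictionary-based LZW scheme achieves compression, the sketch below encodes a byte string and prints how many codes it emits; it is a bare-bones encoder for illustration, not the implementation evaluated in the paper.

```python
def lzw_encode(data: bytes):
    """Minimal LZW encoder: grow a dictionary of byte sequences and emit their codes."""
    table = {bytes([i]): i for i in range(256)}   # start with all single bytes
    w, out = b"", []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc                                # keep extending the current match
        else:
            out.append(table[w])                  # emit the code of the longest match
            table[wc] = len(table)                # register the new sequence
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

text = b"TOBEORNOTTOBEORTOBEORNOT"
codes = lzw_encode(text)
print(len(codes), "codes emitted for", len(text), "input bytes")   # fewer codes than bytes
```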
In this paper, we prove that Euclid's algorithm, Bezout's equation and the division algorithm are equivalent to each other. Our result shows that Euclid had already preliminarily established the theory of divisibility and the greatest common divisor. We further provide several suggestions for teaching.
Previous studies have shown that deep learning is very effective in detecting known attacks. However, when facing unknown attacks, models such as Deep Neural Networks (DNN) combined with Long Short-Term Memory (LSTM), Convolutional Neural Networks (CNN) combined with LSTM, and so on are built by simple stacking, which causes feature loss, low efficiency, and low accuracy. Therefore, this paper proposes an autonomous detection model for Distributed Denial of Service attacks, the Multi-Scale Convolutional Neural Network-Bidirectional Gated Recurrent Units-Single Headed Attention (MSCNN-BiGRU-SHA) model, which is based on a Multi-strategy Integrated Zebra Optimization Algorithm (MI-ZOA). The model undergoes training and testing with the CICDDoS2019 dataset, and its performance is evaluated on a new GINKS2023 dataset. The hyperparameters for Conv_filter and GRU_unit are optimized using the MI-ZOA. The experimental results show that the test accuracy of the MSCNN-BiGRU-SHA model based on the MI-ZOA proposed in this paper is as high as 0.9971 on the CICDDoS2019 dataset. The evaluation accuracy on the new GINKS2023 dataset created in this paper is 0.9386. Compared to the MSCNN-BiGRU-SHA model based on the Zebra Optimization Algorithm (ZOA), the detection accuracy on the GINKS2023 dataset has improved by 5.81%, precision has increased by 1.35%, recall has improved by 9%, and the F1 score has increased by 5.55%. Compared to the MSCNN-BiGRU-SHA models developed using Grid Search, Random Search, and Bayesian Optimization, the MSCNN-BiGRU-SHA model optimized with the MI-ZOA exhibits better performance in terms of accuracy, precision, recall, and F1 score.
The minimum mean square error-successive interference cancellation (MMSE-SIC) multiuser detection algorithm has high complexity and long processing latency. A multiuser detection algorithm is proposed for multi-beam satellite systems in order to decrease the complexity and latency. In the proposed algorithm, the spot beams are grouped based on the distance between them. Some groups are detected in parallel after a crucial group-wise interference cancellation. Furthermore, a multi-stage structure is introduced to improve the performance. Simulation results show that the proposed algorithm achieves better performance with less complexity compared with the existing group detection algorithm. Moreover, the proposed algorithm using one stage reduces the complexity relative to the fast MMSE-SIC and the existing group detection algorithm by 9% and 20.9%, respectively. The processing latency is reduced significantly compared with the MMSE-SIC.
In order to improve the efficiency of cloud-based web services, an improved plant growth simulation algorithm scheduling model is proposed. This model first used mathematical methods to describe the relationships between cloud-based web services and the constraints of system resources. Then, a light-induced plant growth simulation algorithm was established. The performance of the algorithm was compared across several plant types, and the best plant model was selected as the setting for the system. Experimental results show that when the number of test cloud-based web services reaches 2048, the model is 2.14 times faster than PSO, 2.8 times faster than the ant colony algorithm, 2.9 times faster than the bee colony algorithm, and a remarkable 8.38 times faster than the genetic algorithm.
Accurate short-term wind power forecasting plays a crucial role in maintaining the safety and economic efficiency of smart grids. Although numerous studies have employed various methods to forecast wind power, there remains a research gap in leveraging swarm intelligence algorithms to optimize the hyperparameters of the Transformer model for wind power prediction. To improve the accuracy of short-term wind power forecasts, this paper proposes a hybrid short-term wind power forecasting approach named STL-IAOA-iTransformer, which is based on seasonal and trend decomposition using LOESS (STL) and an iTransformer model optimized by an improved arithmetic optimization algorithm (IAOA). First, to fully extract the power data features, STL is used to decompose the original data into components with less redundant information. The extracted components, as well as the weather data, are then input into iTransformer for short-term wind power forecasting. The final predicted short-term wind power curve is obtained by combining the predicted components. To improve the model accuracy, IAOA is employed to optimize the hyperparameters of iTransformer. The proposed approach is validated using real generation data from different seasons and different power stations in Northwest China, and ablation experiments have been conducted. Furthermore, to validate the superiority of the proposed approach under different wind characteristics, real power generation data from Southwest China are utilized for experiments. The comparative results with six other state-of-the-art prediction models show that the proposed model fits the true values of the generation series well and achieves high prediction accuracy.
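The decomposition front-end can be illustrated with the STL implementation in statsmodels, shown below on a synthetic stand-in for an hourly wind-power series (the paper's data are not reproduced here); the iTransformer forecasting model and the IAOA hyperparameter search are not part of this sketch.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

# Synthetic hourly series with a daily pattern, a slow trend, and noise (illustrative only).
idx = pd.date_range("2024-01-01", periods=24 * 60, freq="h")
power = (50 + 20 * np.sin(2 * np.pi * idx.hour / 24)
         + np.linspace(0, 10, len(idx))
         + np.random.default_rng(0).normal(0, 5, len(idx)))
series = pd.Series(power, index=idx)

# STL splits the series into trend, seasonal, and residual components; each component
# (together with weather features) would then be fed to the downstream forecaster.
result = STL(series, period=24).fit()
components = pd.DataFrame({"trend": result.trend,
                           "seasonal": result.seasonal,
                           "resid": result.resid})
print(components.head())
```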