Recent developments in Computer Vision have presented novel opportunities to tackle complex healthcare issues, particularly in the field of lung disease diagnosis. One promising avenue involves the use of chest X-Rays, which are commonly utilized in radiology. To fully exploit their potential, researchers have suggested utilizing deep learning methods to construct computer-aided diagnostic systems. However, constructing and compressing these systems presents a significant challenge, as it relies heavily on the expertise of data scientists. To tackle this issue, we propose an automated approach that utilizes an evolutionary algorithm (EA) to optimize the design and compression of a convolutional neural network (CNN) for X-Ray image classification. Our approach accurately classifies radiography images and detects potential chest abnormalities and infections, including COVID-19. Furthermore, our approach incorporates transfer learning, where a pre-trained CNN model on a vast dataset of chest X-Ray images is fine-tuned for the specific task of detecting COVID-19. This method can help reduce the amount of labeled data required for the task and enhance the overall performance of the model. We have validated our method via a series of experiments against state-of-the-art architectures.
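As a rough illustration of the evolutionary-search idea described above (not the authors' actual encoding, operators, or objective), the sketch below evolves a toy CNN description; the genome fields, mutation ranges, and placeholder fitness are all hypothetical.

```python
import random

# Minimal sketch of an evolutionary search over CNN hyperparameters.
# The genome, mutation ranges, and fitness function are illustrative
# placeholders, not the paper's actual encoding or objective.

def random_genome():
    return {
        "num_blocks": random.randint(2, 6),       # depth of the CNN
        "filters": random.choice([16, 32, 64]),   # filters in the first block
        "prune_ratio": random.uniform(0.0, 0.5),  # channels removed (compression)
    }

def mutate(genome):
    child = dict(genome)
    key = random.choice(list(child))
    if key == "num_blocks":
        child[key] = max(2, min(6, child[key] + random.choice([-1, 1])))
    elif key == "filters":
        child[key] = random.choice([16, 32, 64])
    else:
        child[key] = min(0.5, max(0.0, child[key] + random.uniform(-0.1, 0.1)))
    return child

def fitness(genome):
    # Placeholder: in practice this would train/fine-tune the candidate CNN on
    # chest X-ray data and return validation accuracy minus a model-size penalty.
    return -abs(genome["num_blocks"] - 4) - genome["prune_ratio"] * 0.1

population = [random_genome() for _ in range(10)]
for generation in range(20):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                                        # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(5)]

print("best genome:", max(population, key=fitness))
```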
Traditional laboratory tests for measuring rock uniaxial compressive strength (UCS) are tedious and time-consuming. There is a pressing need for more effective methods to determine rock UCS, especially in deep mining environments under high in-situ stress. Thus, this study aims to develop an advanced model for predicting the UCS of rock material in deep mining environments by combining three boosting-based machine learning methods with four optimization algorithms. For this purpose, the Lead-Zinc mine in Southwest China is considered as the case study. Rock density, P-wave velocity, and point load strength index are used as input variables, and UCS is regarded as the output. Subsequently, twelve hybrid predictive models are obtained. Root mean square error (RMSE), mean absolute error (MAE), coefficient of determination (R2), and the proportion of samples with an absolute percentage error below 20% (A-20) are selected as the evaluation metrics. Experimental results showed that the hybrid model consisting of the extreme gradient boosting method and the artificial bee colony algorithm (XGBoost-ABC) achieved satisfactory results on the training dataset and exhibited the best generalization performance on the testing dataset. The values of R2, A-20, RMSE, and MAE on the training dataset are 0.98, 1.0, 3.11 MPa, and 2.23 MPa, respectively. The highest values of R2 and A-20 (0.93 and 0.96), and the smallest RMSE and MAE values of 4.78 MPa and 3.76 MPa, are observed on the testing dataset. The proposed hybrid model can be considered a reliable and effective method for predicting rock UCS in deep mines.
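A minimal sketch of the four evaluation metrics named in this abstract, computed with NumPy on made-up UCS values; the A-20 definition used here (share of samples with absolute percentage error below 20%) is the common one and may differ in detail from the paper's.

```python
import numpy as np

# Illustrative measured vs. predicted UCS values (MPa); the numbers are made up.
y_true = np.array([95.0, 110.0, 87.5, 132.0, 101.0])
y_pred = np.array([98.0, 104.0, 90.0, 128.0, 103.5])

rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
mae = np.mean(np.abs(y_true - y_pred))
r2 = 1.0 - np.sum((y_true - y_pred) ** 2) / np.sum((y_true - y_true.mean()) ** 2)
# A-20: proportion of samples whose absolute percentage error is below 20%
a20 = np.mean(np.abs(y_true - y_pred) / y_true < 0.20)

print(f"RMSE={rmse:.2f} MPa, MAE={mae:.2f} MPa, R2={r2:.3f}, A-20={a20:.2f}")
```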
A novel visually meaningful image encryption algorithm is proposed based on a hyperchaotic system and compressive sensing (CS), which aims to improve the visual security of the steganographic image and the decrypted quality. First, a dynamic spiral block scrambling is designed to encrypt the sparse matrix generated by performing a discrete wavelet transform (DWT) on the plain image. Then, the encrypted image is compressed and quantized to obtain the noise-like cipher image. Next, the cipher image is embedded into the alpha channel of the carrier image in portable network graphics (PNG) format to generate the visually meaningful steganographic image. In our scheme, the hyperchaotic Lorenz system controlled by the hash value of the plain image is utilized to construct the scrambling matrix, the measurement matrix and the embedding matrix to achieve higher security. In addition, compared with other existing encryption algorithms, the proposed PNG-based embedding method can blindly extract the cipher image, thus effectively reducing the transmission cost and storage space. Finally, the experimental results indicate that the proposed encryption algorithm has very high visual security.
Many classical encoding algorithms of vector quantization (VQ) of image compression that can obtain the global optimal solution have computational complexity O(N). A pure quantum VQ encoding algorithm with a probability of success near 100% has been proposed, which performs approximately 45√N operations. In this paper, a hybrid quantum VQ encoding algorithm between the classical method and the quantum algorithm is presented. The number of its operations is less than √N for most images, and it is more efficient than the pure quantum algorithm.
Vector quantization (VQ) is an important data compression method. The key to VQ encoding is to find the closest vector among N vectors for a feature vector. Many classical linear search algorithms take O(N) steps of distance computation between two vectors. The quantum VQ iteration and a corresponding quantum VQ encoding algorithm that takes O(√N) steps are presented in this paper. The unitary operation of distance computation can be performed on a number of vectors simultaneously because a quantum state can exist in a superposition of states. The quantum VQ iteration comprises three oracles, whereas many quantum algorithms, such as Shor's factorization algorithm and Grover's algorithm, have only one oracle. An entangled state is generated and used, whereas the state in Grover's algorithm is not entangled. The quantum VQ iteration is a rotation over a subspace, whereas the Grover iteration is a rotation over the global space. The quantum VQ iteration thus extends the Grover iteration to more complex searches that require more oracles. The method of the quantum VQ iteration is universal.
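For background on the O(√N) scaling cited here, the snippet below computes the standard Grover iteration count for a single marked item; it is generic textbook material, not the paper's three-oracle VQ iteration.

```python
import math

# Standard Grover iteration count for finding `marked` items among N,
# which underlies the O(sqrt(N)) scaling quoted in the abstract above.
def grover_iterations(N, marked=1):
    theta = math.asin(math.sqrt(marked / N))
    return round(math.pi / (4 * theta) - 0.5)

for N in (256, 1024, 65536):
    print(N, grover_iterations(N))
```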
A real-time data compression wireless sensor network based on the Lempel-Ziv-Welch (LZW) encoding algorithm is designed for the increasing data volume of terminal nodes when using ZigBee for long-distance wireless communication. The system consists of a terminal node, a router, a coordinator, and an upper computer. The terminal node is responsible for storing and sending the collected data after compressing it with the LZW algorithm; the router is responsible for relaying data in the wireless network; the coordinator is responsible for sending the received data to the upper computer. In terms of network function realization, the development and configuration of CC2530 chips on the terminal, router, and coordinator nodes are completed using the Z-stack protocol stack, and the network is successfully organized. Simulation analysis and test verification show that the system realizes the wireless acquisition and storage of remote data and reduces the network occupancy rate through data compression, which gives it practical value and application prospects.
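A minimal pure-Python LZW compressor sketch illustrating the dictionary growth the terminal node would perform; a real CC2530 implementation would bound the dictionary to fit its limited RAM, which this sketch omits.

```python
# Minimal LZW compressor for byte strings. No dictionary-size bound is
# applied here, unlike a real embedded implementation.
def lzw_compress(data: bytes) -> list[int]:
    table = {bytes([i]): i for i in range(256)}
    next_code = 256
    w = b""
    out = []
    for byte in data:
        wc = w + bytes([byte])
        if wc in table:
            w = wc
        else:
            out.append(table[w])
            table[wc] = next_code
            next_code += 1
            w = bytes([byte])
    if w:
        out.append(table[w])
    return out

sample = b"ABABABABABABAB" * 4
codes = lzw_compress(sample)
print(len(sample), "input bytes ->", len(codes), "output codes")
```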
Medical imaging plays a key role within modern hospital management systems for diagnostic purposes. Compression methodologies are extensively employed to mitigate storage demands and enhance transmission speed, all while upholding image quality. Moreover, an increasing number of hospitals are embracing cloud computing for patient data storage, necessitating meticulous scrutiny of server security and privacy protocols. Nevertheless, considering the widespread availability of multimedia tools, the preservation of digital data integrity surpasses the significance of compression alone. In response to this concern, we propose a secure storage and transmission solution for compressed medical image sequences, such as ultrasound images, utilizing a motion vector watermarking scheme. The watermark is generated employing an error-correcting code known as Bose-Chaudhuri-Hocquenghem (BCH) and is subsequently embedded into the compressed sequence via block-based motion vectors. In the process of watermark embedding, motion vectors are selected based on their magnitude and phase angle. No specific spatial area, such as a region of interest (ROI), is used in the images; the embedding of watermark bits depends only on the motion vectors. Although reversible watermarking allows the restoration of the original image sequences, we use an irreversible watermarking method, because the restoration of the original data or images could be used to dispute ownership and other legal claims. The peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) serve as metrics for evaluating the watermarked image quality. Across all images, the PSNR value exceeds 46 dB and the SSIM value exceeds 0.92. Experimental results substantiate the efficacy of the proposed technique in preserving data integrity.
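A small sketch of the PSNR metric quoted above, using synthetic 8-bit frames; SSIM can be obtained from scikit-image's structural_similarity if that dependency is available, which is an assumption.

```python
import numpy as np

# PSNR between an original and a (slightly perturbed) watermarked frame.
# The frames here are synthetic stand-ins, not ultrasound data.
def psnr(original: np.ndarray, watermarked: np.ndarray, peak: float = 255.0) -> float:
    mse = np.mean((original.astype(np.float64) - watermarked.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
marked = np.clip(frame.astype(int) + rng.integers(-2, 3, size=frame.shape), 0, 255)
print(f"PSNR = {psnr(frame, marked):.2f} dB")
```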
Facing constraints imposed by storage and bandwidth limitations, the vast volume of phasor measurement unit (PMU) data collected by the wide-area measurement system (WAMS) for power systems cannot be fully utilized. This limitation significantly hinders the effective deployment of situational awareness technologies for systematic applications. In this work, an effective curvature quantified Douglas-Peucker (CQDP)-based PMU data compression method is proposed for situational awareness of power systems. First, a curvature integrated distance (CID) for measuring the local flection and fluctuation of PMU signals is developed. The Douglas-Peucker (DP) algorithm integrated with a quantile-based parameter adaptation scheme is then proposed to extract feature points for profiling the trends within the PMU signals. This allows adaptive adjustment of the algorithm parameters, so as to maintain the desired compression ratio and reconstruction accuracy as much as possible, irrespective of the power system dynamics. Finally, case studies on the Western Electricity Coordinating Council (WECC) 179-bus system and the actual Guangdong power system are performed to verify the effectiveness of the proposed method. The simulation results show that the proposed method achieves a stably higher compression ratio and reconstruction accuracy in both the steady state and transients of the power system, and alleviates the compression performance degradation problem faced by existing compression methods. Index Terms: curvature quantified Douglas-Peucker, data compression, phasor measurement unit, power system situational awareness.
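The sketch below applies the classical Douglas-Peucker recursion to a synthetic 1-D signal with a plain perpendicular distance and a fixed threshold; the paper's curvature integrated distance and quantile-based parameter adaptation are not reproduced.

```python
import numpy as np

# Textbook Douglas-Peucker on a 1-D signal (sample index vs. value):
# keep the endpoints, recurse on the point farthest from the chord.
def douglas_peucker(t, y, epsilon):
    p0, p1 = np.array([t[0], y[0]], dtype=float), np.array([t[-1], y[-1]], dtype=float)
    seg = p1 - p0
    pts = np.column_stack([t, y]).astype(float)
    # perpendicular distance of every point to the chord p0-p1
    d = np.abs(seg[0] * (pts[:, 1] - p0[1]) - seg[1] * (pts[:, 0] - p0[0]))
    d /= np.linalg.norm(seg) + 1e-12
    i = int(np.argmax(d))
    if d[i] > epsilon:
        left = douglas_peucker(t[: i + 1], y[: i + 1], epsilon)
        right = douglas_peucker(t[i:], y[i:], epsilon)
        return left[:-1] + right          # split point kept once
    return [(t[0], y[0]), (t[-1], y[-1])]

t = np.arange(200)
signal = np.sin(t / 15.0) + 0.01 * np.random.default_rng(1).standard_normal(t.size)
kept = douglas_peucker(t, signal, epsilon=0.02)
print(f"feature points kept: {len(kept)}, compression ratio ~ {t.size / len(kept):.1f}:1")
```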
Through a series of studies on arithmetic coding and arithmetic encryption, a novel image joint compression-encryption algorithm based on adaptive arithmetic coding is proposed. The contexts produced in the process of image compression are modified by keys in order to achieve joint image compression and encryption. Combined with the bit-plane coding technique, the discrete wavelet transform coefficients in different resolutions can be encrypted with different keys, so that resolution-selective encryption is realized to meet different application needs. Zero-tree coding is improved, and adaptive arithmetic coding is introduced. Then, the proposed joint compression-encryption algorithm is simulated. The simulation results show that, as long as the parameters are selected appropriately, the compression efficiency of the proposed image joint compression-encryption algorithm is basically identical to that of the original image compression algorithm, and the security of the proposed algorithm is better than that of the joint encryption algorithm based on interval splitting.
The escalating deployment of distributed power sources and random loads in DC distribution networks has amplified the potential consequences of faults if left uncontrolled. To expedite the process of achieving an optimal configuration of measurement points, this paper presents an optimal configuration scheme for fault location measurement points in DC distribution networks based on an improved particle swarm optimization algorithm. Initially, a measurement point distribution optimization model is formulated, leveraging compressive sensing. The model aims to achieve the minimum number of measurement points while attaining the best compressive sensing reconstruction effect. It incorporates constraints from the compressive sensing algorithm and network-wide observability. Subsequently, the traditional particle swarm algorithm is enhanced by utilizing the Halton sequence for population initialization, generating uniformly distributed individuals. This enhancement reduces individual search blindness and overlap probability, thereby promoting population diversity. Furthermore, an adaptive t-distribution perturbation strategy is introduced during the particle update process to enhance the global search capability and search speed. The established model for the optimal configuration of measurement points is solved, and the results demonstrate the efficacy and practicality of the proposed method. The optimal configuration reduces the number of measurement points, enhances localization accuracy, and improves the convergence speed of the algorithm. These findings validate the effectiveness and utility of the proposed approach.
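A minimal sketch of the Halton-sequence population initialization described above, with illustrative bounds and swarm size; the adaptive t-distribution perturbation is not shown.

```python
import numpy as np

# Radical-inverse Halton value for a given index and prime base.
def halton(index: int, base: int) -> float:
    result, f = 0.0, 1.0
    while index > 0:
        f /= base
        result += f * (index % base)
        index //= base
    return result

# Low-discrepancy initial swarm: one prime base per dimension,
# scaled from the unit hypercube to the search box.
def init_population(n_particles: int, lower: np.ndarray, upper: np.ndarray) -> np.ndarray:
    primes = [2, 3, 5, 7, 11, 13][: lower.size]
    unit = np.array([[halton(i + 1, b) for b in primes] for i in range(n_particles)])
    return lower + unit * (upper - lower)

lower, upper = np.array([0.0, 0.0]), np.array([10.0, 5.0])  # illustrative bounds
swarm = init_population(20, lower, upper)
print(swarm[:3])
```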
This paper presents a description and performance evaluation of a new bit-level, lossless, adaptive, and asymmetric data compression scheme that is based on the adaptive character wordlength (ACW(n)) algorithm. The proposed scheme enhances the compression ratio of the ACW(n) algorithm by dividing the binary sequence into a number of subsequences (s), each of them satisfying the condition that the number of decimal values (d) of the n-bit length characters is equal to or less than 256. Therefore, the new scheme is referred to as ACW(n, s), where n is the adaptive character wordlength and s is the number of subsequences. The new scheme was used to compress a number of text files from standard corpora. The obtained results demonstrate that the ACW(n, s) scheme achieves a higher compression ratio than many widely used compression algorithms and achieves competitive performance compared to state-of-the-art compression tools.
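A rough sketch of the subsequence-splitting rule as stated above (at most 256 distinct n-bit character values per subsequence); the ACW coding step itself is omitted and the bit string is synthetic.

```python
import random

# Scan the n-bit characters of a binary sequence and start a new subsequence
# whenever the number of distinct character values would exceed 256.
def split_subsequences(bits: str, n: int) -> list[list[str]]:
    chars = [bits[i:i + n] for i in range(0, len(bits) - len(bits) % n, n)]
    subsequences, current, seen = [], [], set()
    for ch in chars:
        if ch not in seen and len(seen) == 256:
            subsequences.append(current)
            current, seen = [], set()
        seen.add(ch)
        current.append(ch)
    if current:
        subsequences.append(current)
    return subsequences

random.seed(0)
bits = "".join(random.choice("01") for _ in range(20000))
parts = split_subsequences(bits, n=10)
print(f"n=10 gives s={len(parts)} subsequences")
```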
This paper reviewed the recent progress in the field of electrocardiogram (ECG) compression and compared the efficiency of several compression algorithms. By experimenting on 500 cases of ECG signals from the ECG database of China, it obtained numerical indices for each algorithm. Then, using the automatic diagnostic program developed by Shanghai Zhongshan Hospital, it also obtained the parameters of the reconstructed signals from the linear approximation distance threshold (LADT), wavelet transform (WT), differential pulse code modulation (DPCM) and discrete cosine transform (DCT) algorithms. The results show that when the percent root-mean-square difference (PRD) is less than 2.5%, the diagnostic agreement ratio is more than 90%; the PRD index alone cannot completely reflect the loss of significant clinical information; and the performance of the wavelet algorithm exceeds the other methods at the same compression ratio (CR). Given the statistical results of the parameters of the various methods and the clinical diagnostic results, the study is of value and originality in the field of ECG compression research.
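A short sketch of the PRD metric referenced above, on a synthetic signal; note that PRD is sometimes defined without mean removal in the denominator, so the exact variant used in the paper is an assumption.

```python
import numpy as np

# Percent root-mean-square difference between an original and a reconstructed
# signal. This variant subtracts the mean in the denominator; other papers
# use the raw signal energy instead.
def prd(original: np.ndarray, reconstructed: np.ndarray) -> float:
    num = np.sum((original - reconstructed) ** 2)
    den = np.sum((original - original.mean()) ** 2)
    return 100.0 * np.sqrt(num / den)

t = np.linspace(0, 2, 1000)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.25 * np.sin(2 * np.pi * 12 * t)  # toy ECG-like signal
reconstructed = ecg + 0.01 * np.random.default_rng(0).standard_normal(ecg.size)
print(f"PRD = {prd(ecg, reconstructed):.2f}%")
```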
The HT-7 superconducting tokamak at the Institute of Plasma Physics of the Chinese Academy of Sciences is an experimental device for fusion research in China. The main task of the HT-7 data acquisition system is to acquire, store, analyze and index the data, whose volume reaches hundreds of millions of bytes. Beyond hardware and software support, providing a large capacity for data storage, processing and transfer is an even more important problem, and the key technology for dealing with it is the data compression algorithm. In this paper, the data format in HT-7 is introduced first, and then LZO, a portable lossless data compression algorithm written in ANSI C, is analyzed. This compression algorithm, which fits well with data acquisition and distribution in nuclear fusion experiments, offers fairly fast compression and extremely fast decompression. Finally, a performance evaluation of the LZO application in HT-7 is given.
The clustering algorithm has very important applications in data mining technology and can achieve good results in data classification. With the rapid development of network communication technology, personal computers and other digital devices, real-time computer desktop image transmission technology has been widely used. A computer desktop image compression algorithm based on block classification can effectively realize the compression and storage of computer desktop images, and significantly improve the speed and quality of computer desktop image transmission.
This paper studied a fast recursive predictive algorithm for medical X-ray image compression. The algorithm consists of mathematical model building, fast recursive algorithm derivation, initial value determination, step-size selection, image compression encoding and original image recovery. The experimental results indicate that this algorithm not only achieves a higher compression ratio for medical X-ray images, but also greatly improves image compression speed.
Compared to the fixed virtual window algorithm (FVWA), the dynamic virtual window algorithm (DVWA) determines the length of each virtual container according to the sizes of the goods in each order, which saves space in the virtual containers and improves picking efficiency. However, the interval between consecutive goods caused by dispensers on the conveyor cannot be eliminated by DVWA, which limits further improvement of picking efficiency. To solve this problem, a compressible virtual window algorithm (CVWA) is presented. It not only inherits the merits of DVWA but also compresses the length of virtual containers without congestion of order accumulation, by advancing the beginning time of order picking and reasonably coordinating the pace of order accumulation. The simulation results prove that the picking efficiency of the automated sorting system is greatly improved by CVWA.
Compression index Cc is an essential parameter in geotechnical design for which the effectiveness of correlation is still a challenge. This paper suggests a novel modelling approach using machine learning (ML) techniques. The performance of five commonly used ML algorithms, i.e. back-propagation neural network (BPNN), extreme learning machine (ELM), support vector machine (SVM), random forest (RF) and evolutionary polynomial regression (EPR), in predicting Cc is comprehensively investigated. A database with a total of 311 datasets, including three input variables, i.e. initial void ratio e0, liquid limit water content wL and plasticity index Ip, and one output variable Cc, is first established. A genetic algorithm (GA) is used to optimize the hyper-parameters in the five ML algorithms, and the average prediction error over the 10-fold cross-validation (CV) sets is set as the fitness function in the GA to enhance the robustness of the ML models. The results indicate that the ML models outperform empirical prediction formulations with lower prediction error. RF yields the lowest error, followed by BPNN, ELM, EPR and SVM. If the ranges of the input variables in the database are large enough, the BPNN and RF models are recommended to predict Cc. Furthermore, if the distribution of the input variables is continuous, the RF model is the best one. Otherwise, the EPR model is recommended if the ranges of the input variables are small. The predicted correlations between input and output variables using the five ML models show great agreement with the physical explanation.
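A minimal sketch of the RF model evaluated with 10-fold cross-validation on synthetic stand-ins for e0, wL, Ip and Cc (scikit-learn assumed available); the GA hyper-parameter search from the paper is not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for the three inputs and the output Cc; the toy target
# uses a simple empirical-style relation plus noise, not the paper's database.
rng = np.random.default_rng(42)
n = 311
e0 = rng.uniform(0.5, 2.5, n)   # initial void ratio
wL = rng.uniform(20, 80, n)     # liquid limit water content (%)
Ip = rng.uniform(5, 50, n)      # plasticity index
X = np.column_stack([e0, wL, Ip])
Cc = 0.009 * (wL - 10) + 0.2 * (e0 - 0.5) + rng.normal(0, 0.02, n)

# RF model scored with 10-fold CV, mirroring the evaluation protocol above.
model = RandomForestRegressor(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, Cc, cv=10, scoring="neg_mean_absolute_error")
print(f"10-fold CV MAE = {-scores.mean():.4f}")
```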