Data compression plays a key role in optimizing the use of memory storage space and in reducing latency in data transmission. In this paper, we are interested in lossless compression techniques because their performance is exploited alongside lossy compression techniques for images and videos, which generally use a mixed approach. To achieve our objective, which is to study the performance of lossless compression methods, we first carried out a literature review, the summary of which enabled us to select the most relevant methods, namely: arithmetic coding, LZW, Tunstall's algorithm, RLE, BWT, Huffman coding and Shannon-Fano. Secondly, we designed a purposive text dataset with a repeating pattern in order to test the behavior and effectiveness of the selected compression techniques. Thirdly, we designed the compression algorithms and developed the programs (scripts) in Matlab in order to test their performance. Finally, following the tests conducted on the data constructed according to this deliberate model, the results show that these methods are very satisfactory, ranked in order of performance as: LZW, arithmetic coding, Tunstall's algorithm, and BWT + RLE. Likewise, it appears that the performance of certain techniques relative to others is strongly linked, on the one hand, to the sequencing and/or recurrence of the symbols that make up the message and, on the other hand, to the cumulative time of encoding and decoding.
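To illustrate why LZW tops this ranking on text with a repeating pattern, here is a minimal Python sketch of the encoder (our illustration; the paper's implementations are Matlab scripts not reproduced here):

```python
def lzw_encode(data: bytes) -> list[int]:
    """Minimal LZW encoder: grows a dictionary of previously seen phrases."""
    table = {bytes([i]): i for i in range(256)}
    w, out = b"", []
    for b in data:
        wc = w + bytes([b])
        if wc in table:
            w = wc                      # keep extending the current phrase
        else:
            out.append(table[w])
            table[wc] = len(table)      # register the new, longer phrase
            w = bytes([b])
    if w:
        out.append(table[w])
    return out

codes = lzw_encode(b"ABABABABABABAB" * 10)
print(f"{len(codes)} codes for 140 input bytes")  # repetition collapses quickly
```

Each repetition lets the dictionary phrases grow longer, which is exactly the "recurrence of symbols" effect the conclusion points to.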
In Ethernet lossless Data Center Networks (DCNs) deployed with Priority-based Flow Control (PFC), the head-of-line blocking problem is still difficult to prevent, because PFC is triggered under burst traffic scenarios even with the existing congestion control solutions. To address the head-of-line blocking problem of PFC, we propose a new congestion control mechanism. The key point of Congestion Control Using In-Network Telemetry for Lossless Datacenters (ICC) is to use In-Network Telemetry (INT) technology to obtain comprehensive congestion information, which is then fed back to the sender to adjust the sending rate in a timely and accurate manner. With ICC it is possible to control congestion in time, converge to the target rate quickly, and maintain a near-zero queue length at the switch. We conducted Network Simulator-3 (NS-3) simulation experiments to test ICC's performance. Compared to Congestion Control for Large-Scale RDMA Deployments (DCQCN), TIMELY: RTT-based Congestion Control for the Datacenter (TIMELY), and Re-architecting Congestion Management in Lossless Ethernet (PCN), ICC reduces PFC pause messages by 47%, 56%, and 34%, and reduces Flow Completion Time (FCT) by 15.3x, 14.8x, and 11.2x, respectively.
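The general shape of such a telemetry-driven sender is easy to sketch. The update rule, field names, and gains below are hypothetical stand-ins, not ICC's actual algorithm:

```python
# Hypothetical sender-side rate control fed by per-hop INT reports.
ALPHA, BETA = 0.8, 0.05        # illustrative decrease/increase gains

def update_rate(rate_gbps: float, queue_kb: float, link_util: float) -> float:
    if queue_kb > 0 or link_util > 0.95:
        # Telemetry reports a building queue: back off toward a near-zero queue.
        return rate_gbps * max(ALPHA, 1.0 - 0.01 * queue_kb)
    # No congestion signal: probe multiplicatively toward the target rate.
    return rate_gbps * (1.0 + BETA)

rate = 40.0
for q in [120.0, 60.0, 10.0, 0.0, 0.0]:   # queue depths (KB) from successive reports
    rate = update_rate(rate, q, 0.90)
    print(f"queue={q:5.0f} KB -> rate={rate:6.2f} Gbps")
```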
This paper proposes a lossless and high-payload data hiding scheme for JPEG images based on histogram modification. Most of a JPEG bitstream consists of a sequence of VLCs (variable length codes) and their appended bits, and each VLC has a corresponding RLV (run/length value) recording the AC/DC coefficients. To achieve lossless data hiding with high payload, we shift the histogram of VLCs and modify the DHT segment to embed data. Since we sort the histogram of VLCs in descending order, the file-size expansion is limited. The paper's key contributions are lossless data hiding, less file-size expansion at identical payload, and higher embedding efficiency.
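The histogram-shifting idea itself is compact. The toy below embeds bits at a peak value of a symbol sequence and restores the original exactly; the paper applies the same idea to JPEG VLC indices via the DHT segment, which this sketch does not parse:

```python
def hs_embed(values, bits, peak):
    """Shift values > peak up by 1; encode one bit at each occurrence of `peak`."""
    out, it = [], iter(bits)
    for v in values:
        if v > peak:
            out.append(v + 1)                 # make room in the bin next to the peak
        elif v == peak:
            out.append(v + next(it, 0))       # peak stays = bit 0, peak+1 = bit 1
        else:
            out.append(v)
    return out

def hs_extract(marked, peak):
    bits = [1 if v == peak + 1 else 0 for v in marked if v in (peak, peak + 1)]
    orig = [v - 1 if v > peak else v for v in marked]
    return bits, orig

vals = [3, 5, 3, 7, 3, 2, 3]
marked = hs_embed(vals, [1, 0, 1, 1], peak=3)
bits, restored = hs_extract(marked, peak=3)
assert restored == vals and bits == [1, 0, 1, 1]   # fully reversible
```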
A simple and adaptive lossless compression algorithm is proposed for remote sensing image compression, combining an integer wavelet transform with the Rice entropy coder. By analyzing the probability distribution of integer wavelet transform coefficients and the characteristics of the Rice entropy coder, a divide-and-rule method is applied to the high-frequency sub-bands and the low-frequency one: high-frequency sub-bands are coded directly by the Rice entropy coder, while low-frequency coefficients are predicted before coding. The role of the predictor is to map the low-frequency coefficients into symbols suitable for entropy coding. Experimental results show that the average Compression Ratio (CR) of our approach is about two, which is close to that of JPEG 2000. The algorithm is simple and easy to implement in hardware. Moreover, it has the merits of adaptability and independent data packets, so it is well suited to space lossless compression applications.
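A Rice (Golomb-Rice) code is short to state: for parameter k, emit the quotient n >> k in unary, then the remainder in k bits. A sketch, with a zigzag map for the signed wavelet residuals (the mapping choice is ours):

```python
def rice_encode(n: int, k: int) -> str:
    """Golomb-Rice code of a non-negative integer: unary quotient + k-bit remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def zigzag(v: int) -> int:
    """Map signed residuals to non-negative integers: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return (v << 1) if v >= 0 else (-(v << 1) - 1)

# Small residuals (the typical case after wavelet decorrelation) get short codes.
for v in [0, -1, 2, -5]:
    print(v, "->", rice_encode(zigzag(v), k=2))
```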
In this paper, the second generation wavelet transform is applied to lossless image coding, exploiting its characteristic of being a reversible integer wavelet transform. The second generation wavelet transform can provide a higher compression ratio than Huffman coding while, unlike the first generation wavelet transform, it reconstructs the image without loss. The experimental results show that the second generation wavelet transform can achieve excellent performance in medical image compression coding.
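The "second generation" construction is the lifting scheme. The reversible integer 5/3 transform below (the lifting wavelet also used for lossless JPEG 2000, shown here as a representative example with periodic boundaries) makes the losslessness concrete:

```python
import numpy as np

def lift53_forward(x):
    """Integer 5/3 lifting (1-D): reversible because every step uses integer math."""
    s, d = x[0::2].astype(int), x[1::2].astype(int)
    d = d - ((s + np.roll(s, -1)) >> 1)      # predict odd samples from even ones
    s = s + ((np.roll(d, 1) + d + 2) >> 2)   # update even samples
    return s, d

def lift53_inverse(s, d):
    s = s - ((np.roll(d, 1) + d + 2) >> 2)   # undo update
    d = d + ((s + np.roll(s, -1)) >> 1)      # undo predict
    x = np.empty(s.size + d.size, dtype=int)
    x[0::2], x[1::2] = s, d
    return x

x = np.array([10, 12, 11, 13, 40, 41, 39, 42])
s, d = lift53_forward(x)
assert (lift53_inverse(s, d) == x).all()     # exact reconstruction, hence lossless
```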
In this paper, we propose a novel image recompression framework and an image quality assessment (IQA) method to efficiently recompress Internet images. With this framework, image size is significantly reduced without affecting the spatial resolution or perceptible quality of the image. With the help of IQA, the relationship between image quality and image evaluation scores can be quickly established, and the optimal quality factor can be obtained quickly and accurately within a pre-determined perceptual quality range. This process, applied to each input image, ensures its perceptual quality. The test results show that, using the proposed method, the file size of images can be reduced by about 45%-60% without affecting their visual quality. Moreover, our new image recompression framework can be applied in many different scenarios.
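One way the "optimal quality factor within a pre-determined perceptual quality range" step can be realized is a monotone search over the JPEG quality parameter. In the sketch below, PSNR stands in for the paper's IQA score, and the 38 dB target and the use of Pillow are our assumptions:

```python
import io
import numpy as np
from PIL import Image

def psnr(a, b):
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

def best_quality(img: Image.Image, target_db: float = 38.0) -> int:
    ref, lo, hi = np.asarray(img), 5, 95
    while lo < hi:                        # quality vs. score is (roughly) monotone
        q = (lo + hi) // 2
        buf = io.BytesIO()
        img.save(buf, "JPEG", quality=q)
        buf.seek(0)
        score = psnr(ref, np.asarray(Image.open(buf).convert(img.mode)))
        if score >= target_db:
            hi = q                        # q already meets the target; try lower
        else:
            lo = q + 1
    return lo
```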
With the size of astronomical data archives continuing to increase at an enormous rate, the providers and end users of astronomical data sets will benefit from effective data compression techniques. This paper explores different lossless data compression techniques and aims to find an optimal compression algorithm to compress astronomical data obtained by the Square Kilometre Array (SKA), which are new and unique in the field of radio astronomy. It was required that the compressed data sets should be lossless and that they should be compressed while the data are being read. The project was carried out in conjunction with the SKA South Africa office. Data compression reduces the time taken and the bandwidth used when transferring files, and it can also reduce the costs involved with data storage. The SKA uses the Hierarchical Data Format (HDF5) to store the data collected from the radio telescopes, with the data used in this study ranging from 29 MB to 9 GB in size. The compression techniques investigated in this study include SZIP, GZIP, the LZF filter, LZ4 and the Fully Adaptive Prediction Error Coder (FAPEC). The algorithms and methods used to perform the compression tests are discussed and the results from the three phases of testing are presented, followed by a brief discussion on those results.
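For context, this is how the GZIP filter among those tested is attached to an HDF5 dataset with h5py; the file name, dataset shape, and chunk size are illustrative:

```python
import h5py
import numpy as np

data = np.random.default_rng(0).integers(0, 2**15, size=(1024, 1024), dtype=np.int16)

with h5py.File("visibilities.h5", "w") as f:
    # Chunked storage is required for filters; compression_opts is the GZIP level.
    f.create_dataset("vis", data=data, chunks=(128, 128),
                     compression="gzip", compression_opts=6)
    # compression="lzf" selects the LZF filter instead; SZIP needs the szip library.
```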
In this paper, a new predictive model adapted to QTM (Quaternary Triangular Mesh) pixel compression is introduced. Our approach starts with the principles of the proposed predictive models based on available QTM neighbor pixels, together with an algorithm for ascertaining the available QTM neighbors. Then, a method for reducing the space complexity of predicting QTM pixel values is presented, followed by a structure for storing compressed QTM pixels. Finally, an experiment comparing the compression ratio of this method with other methods is carried out using three wave bands of 1 km resolution NOAA images of China. The results indicate that: 1) the compression method performs better than alternatives such as Run Length Coding, Arithmetic Coding and Huffman Coding; 2) the average size of the compressed three-band data based on the neighbor QTM pixel predictive model is 31.58% of the original space requirement, and 67.5% of that of Arithmetic Coding without a predictive model.
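The predictive step has the familiar shape of neighbor prediction. The raster sketch below conveys the flavour only: it averages up/left neighbors on a square grid, whereas the paper's adjacency is over the triangular QTM mesh:

```python
import numpy as np

def residuals(img: np.ndarray) -> np.ndarray:
    """Predict each pixel as the mean of its up and left neighbors (toy model)."""
    pred = np.zeros_like(img, dtype=int)
    pred[1:, 1:] = (img[:-1, 1:].astype(int) + img[1:, :-1].astype(int)) // 2
    res = img.astype(int) - pred
    res[0, :], res[:, 0] = 0, 0   # borders zeroed for display; a real coder sends them
    return res

img = np.add.outer(np.arange(8), np.arange(8)) + 100
print(residuals(img))             # interior residuals are small and compress well
```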
Discrete (J,J′) lossless factorization is established by using conjugation. For the stable case, the existence of such a factorization is equivalent to the existence of a positive solution of a Riccati equation. For the unstable case, the existence conditions can be reduced to the existence of positive solutions of two Riccati equations.
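For orientation, a generic discrete-time algebraic Riccati equation of the kind such existence conditions involve is shown below; this is the standard form, not the paper's exact (J,J′)-structured equation:

```latex
X = A^{\top} X A - A^{\top} X B \left(R + B^{\top} X B\right)^{-1} B^{\top} X A + Q,
\qquad X \succ 0.
```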
The technique of lossless image compression plays an important role in high-quality image transmission and storage. In a real-time multimedia system, both the compression ratio and the processing speed must be considered. A novel lossless compression algorithm is investigated: a low-complexity predictive model is proposed that uses the correlation of pixels and color components, while a perceptron, as in a neural network, adaptively rectifies the prediction values. This makes the prediction residuals smaller and confines them to a small dynamic range. A color space transform is also used, yielding good decorrelation. Comparative experimental results show that our algorithm performs noticeably better than traditional algorithms. Compared to the new standard JPEG-LS, this predictive model reduces computational complexity, and it runs faster than JPEG-LS with a negligible performance sacrifice.
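Our reading of "a perceptron rectifies the prediction values adaptively" is an online least-mean-squares correction on top of a fixed predictor; the context choice, learning rate, and update rule below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
w, lr = np.zeros(3), 1e-5                 # neuron weights over (left, up, up-left)

def predict(ctx: np.ndarray) -> float:
    base = (ctx[0] + ctx[1]) / 2          # fixed predictor: mean of left and up
    return base + w @ ctx                 # adaptive correction term

img = rng.integers(100, 110, size=(64, 64)).astype(float)
sse = 0.0
for i in range(1, 64):
    for j in range(1, 64):
        ctx = np.array([img[i, j - 1], img[i - 1, j], img[i - 1, j - 1]])
        e = img[i, j] - predict(ctx)      # prediction residual to be coded
        w += lr * e * ctx                 # LMS update shrinks future residuals
        sse += e * e
print("mean squared residual:", sse / (63 * 63))
```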
Lossless data hiding can restore the original status of the cover media after the embedded secret data are extracted. In 2010, Wang et al. proposed a lossless data hiding scheme that hides secret data in vector quantization (VQ) indices, but the encoding strategies adopted by their scheme expand the final codestream. This paper designs four embedding and encoding strategies to improve Wang et al.'s scheme. Compared with Wang et al.'s scheme, the proposed scheme reduces the bit rate of the final codestream by 4.6% and raises the payload by 1.09% on average.
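A common flavour of VQ-index embedding pairs similar codewords and lets index parity carry a bit, as in the toy below. The pairing rule is our illustration, not Wang et al.'s or the authors' exact strategy, and the toy alone is not reversible; the lossless schemes discussed here add encoding strategies so the original indices can be restored:

```python
def embed(indices, bits, pair_of):  # pair_of[i] = the partner index of codeword i
    out, it = [], iter(bits)
    for idx in indices:
        b = next(it, None)
        if b is None:
            out.append(idx)
        else:
            out.append(idx if idx % 2 == b else pair_of[idx])
    return out

def extract(marked, n_bits):
    return [idx % 2 for idx in marked[:n_bits]]

pair_of = {0: 1, 1: 0, 2: 3, 3: 2}       # toy 4-entry codebook paired by similarity
marked = embed([0, 3, 2, 1], [1, 1, 0, 0], pair_of)
assert extract(marked, 4) == [1, 1, 0, 0]
```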
Mammography is a specific type of imaging that uses a low-dose X-ray system to examine breasts, and it is an efficient means of early detection of breast cancer. Archiving and retaining these data for at least three years is expensive and difficult, and it requires sophisticated data compression techniques. We propose a lossless compression method that makes use of the smoothness property of the images. In the first step, the given image is de-correlated using two efficient predictors, and the two residue images are partitioned into non-overlapping sub-images of size 4x4. At every instant one of the sub-images is selected and sent for coding. Sub-images whose pixels are all zero are identified using a one-bit code; the remaining sub-images are coded using the base switching method. Special techniques are used to save the overhead information. Experimental results indicate an average compression ratio of 6.44 on the selected database.
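The one-bit shortcut for all-zero 4x4 residual sub-images is easy to see in miniature; the 8-bits-per-pixel fallback below is a naive stand-in for the paper's base switching coder:

```python
import numpy as np

def code_blocks(residue: np.ndarray) -> int:
    """Count output bits: 1 flag bit per block, plus a naive payload for busy blocks."""
    bits = 0
    for i in range(0, residue.shape[0], 4):
        for j in range(0, residue.shape[1], 4):
            block = residue[i:i + 4, j:j + 4]
            if not block.any():
                bits += 1                    # "all zero" flag only
            else:
                bits += 1 + 16 * 8           # flag + 8 bits/pixel stand-in coder
    return bits

residue = np.zeros((16, 16), dtype=np.int16)
residue[0:4, 0:4] = 7                        # one busy block among sixteen
print(code_blocks(residue), "bits vs", 16 * 16 * 8, "bits raw")
```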
In this paper, a technique of quasi-lossless compression based on image restoration is presented. The compression technique described in the paper includes three steps, namely bit compression, correlation removal, and image restoration based on the theory of the modulation transfer function (MTF). The quasi-lossless compression achieves high speed, and the quality of the image reconstructed under restoration reaches the quasi-lossless level at a higher compression ratio. Experiments on TM and SPOT images show that the technique is reasonable and applicable.
The two mast cameras (Mastcams) onboard the Mars rover Curiosity are multispectral imagers with nine bands each. Currently, the images are compressed losslessly using JPEG, which achieves only two to three times compression. We present a comparative study of four approaches to compressing multispectral Mastcam images. The first approach divides the nine bands into three groups of three bands each; since the multispectral bands are strongly correlated, we treat the three groups of images as video frames, and we call this the Video approach. The second approach compresses each group separately; we call it the split band (SB) approach. The third applies a two-step approach in which the first step uses principal component analysis (PCA) to compress a nine-band image cube to six bands and the second step compresses the six PCA bands using conventional codecs. The fourth applies PCA only. In addition, we present subjective and objective assessment results for compressing RGB images, because RGB images have been used for stereo and disparity map generation. Five well-known compression codecs from the literature, namely JPEG, JPEG-2000 (J2K), X264, X265, and Daala, have been applied and compared in each approach. The performance of the different algorithms was assessed using four well-known performance metrics, two conventional and two known to correlate well with human perception. Extensive experiments using actual Mastcam images demonstrate the various approaches. We observed that perceptually lossless compression can be achieved at a 10:1 compression ratio. In particular, the performance gain of the SB approach with Daala is at least 5 dB in terms of peak signal-to-noise ratio (PSNR) at a 10:1 compression ratio over that of JPEG. Subjective comparisons also corroborated the objective metrics in that perceptually lossless compression can be achieved even at 20:1 compression.
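The grouping that both the Video and SB approaches rely on is just a reshape of the cube so that inter-band correlation becomes inter-frame correlation; the image size below is illustrative:

```python
import numpy as np

cube = np.random.default_rng(2).integers(0, 4096, size=(9, 512, 512), dtype=np.uint16)

groups = cube.reshape(3, 3, 512, 512)     # three groups of three adjacent bands
for g, frames in enumerate(groups):
    # Each group is handed to a codec (X264/X265/Daala in the paper) as a short
    # 3-frame sequence, letting motion/inter prediction exploit band correlation.
    print(f"group {g}: {frames.shape[0]} frames of {frames.shape[1:]}")
```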
This paper presents a new method of lossless image compression. An image is characterized by homogeneous parts. The high-weight bit planes, which consist of long runs of 0s and 1s, are encoded with RLE, whereas the other bit planes are encoded by arithmetic coding (AC) with a static or adaptive model. By combining AC (adaptive or static) with RLE, a high degree of adaptation and compression efficiency is achieved. The proposed method is compared to both the static and the adaptive model. Experimental results, based on a set of 12 gray-level images, demonstrate that the proposed scheme gives mean compression ratios higher than those of conventional arithmetic encoders.
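The split is motivated by run statistics per bit plane: high-weight planes of a smooth image have few long runs (good for RLE), while low-weight planes look noisy (better left to AC). A quick check on a synthetic gradient-plus-noise image:

```python
import numpy as np

def runs(bits: np.ndarray) -> int:
    """Number of runs in a bit plane scanned in raster order."""
    flat = bits.ravel()
    return 1 + int(np.count_nonzero(flat[1:] != flat[:-1]))

rng = np.random.default_rng(5)
img = (np.add.outer(np.arange(64), np.arange(64))
       + rng.integers(0, 4, (64, 64))).astype(np.uint8)
for p in range(7, -1, -1):
    plane = (img >> p) & 1
    print(f"plane {p}: {runs(plane):5d} runs")   # few runs up top, many at the bottom
```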
Hyperspectral images (HSI) have hundreds of bands, which impose a heavy burden on data storage and transmission bandwidth. Quite a few compression techniques have been explored for HSI in the past decades. One high-performing technique is the combination of principal component analysis (PCA) and JPEG-2000 (J2K). However, since several new compression codecs have been developed in the 15 years since J2K, it is worthwhile to revisit this research area and investigate whether there are better techniques for HSI compression. In this paper, we present some new results in HSI compression. We aim at perceptually lossless compression of HSI, meaning that the decompressed HSI data cube has a performance metric near 40 dB in terms of peak-signal-to-noise ratio (PSNR) or human visual system (HVS) based metrics. The key idea is to compare several combinations of PCA and video/image codecs. Three representative HSI data cubes were used in our studies. Four video/image codecs, including J2K, X264, X265, and Daala, were investigated, and four performance metrics were used in our comparative studies. Moreover, some alternative techniques, such as the video, split band, and PCA-only approaches, were also compared. It was observed that the combination of PCA and X264 yielded the best performance in terms of compression performance and computational complexity. In some cases, the PCA + X264 combination achieved gains of more than 3 dB over the PCA + J2K combination.
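The spectral PCA front end of these pipelines fits in a few lines; the cube size and the number of retained components below are illustrative choices, and random data stands in for a real HSI cube:

```python
import numpy as np

bands, h, w = 200, 64, 64
cube = np.random.default_rng(3).normal(size=(bands, h, w))
X = cube.reshape(bands, -1)               # one row per band, one column per pixel
mu = X.mean(axis=1, keepdims=True)
U, S, Vt = np.linalg.svd(X - mu, full_matrices=False)

k = 6                                     # retained components (illustrative)
comps = (U[:, :k].T @ (X - mu)).reshape(k, h, w)   # these go to X264/J2K/etc.
recon = (U[:, :k] @ comps.reshape(k, -1) + mu).reshape(bands, h, w)
print("reconstruction MSE:", float(((recon - cube) ** 2).mean()))
```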
We propose a novel lossless compression algorithm, based on the 2D Discrete Fast Fourier Transform, to approximate the Algorithmic (Kolmogorov) Complexity of Elementary Cellular Automata. Fast Fourier transforms are widely used in image compression, but their lossy nature excludes them as viable candidates for Kolmogorov Complexity approximations. For the first time, we present a way to adapt Fourier transforms for lossless image compression. The proposed method has a very strong Pearson correlation with existing complexity metrics, and we further establish its consistency as a complexity metric by confirming that its measurements never exceed the complexity of nothingness and randomness (representing the lower and upper limits of complexity). Surprisingly, many of the other methods tested fail this simple sanity check. A final symmetry-based test also demonstrates our method's superiority over existing lossless compression metrics. All complexity metrics tested, as well as the code used to generate and augment the original dataset, can be found in our GitHub repository: ECA complexity metrics.
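The sanity check rests on the standard fact that a lossless compressor's output length upper-bounds Kolmogorov complexity, so any well-behaved metric must sit between all-zeros and pure noise. The sketch below uses zlib as a stand-in for the paper's FFT-based coder:

```python
import zlib
import numpy as np

def complexity(bits: np.ndarray) -> int:
    """Compressed size in bytes: an upper bound on algorithmic complexity."""
    return len(zlib.compress(np.packbits(bits).tobytes(), 9))

def eca_rule30(width: int = 64, steps: int = 64) -> np.ndarray:
    """Evolve Rule 30 from a single seed cell, periodic boundaries."""
    row = np.zeros(width, dtype=np.uint8)
    row[width // 2] = 1
    rows = [row]
    for _ in range(steps - 1):
        left, right = np.roll(row, 1), np.roll(row, -1)
        row = left ^ (row | right)          # Rule 30 update
        rows.append(row)
    return np.concatenate(rows)

rng = np.random.default_rng(4)
nothing = np.zeros(64 * 64, dtype=np.uint8)               # lower complexity limit
noise = rng.integers(0, 2, 64 * 64).astype(np.uint8)      # upper complexity limit
print(complexity(nothing), complexity(eca_rule30()), complexity(noise))
# A consistent metric orders these: nothingness <= Rule 30 <= randomness.
```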
The artificial neural network-spiking neural network (ANN-SNN) conversion, as an efficient algorithm for training deep SNNs, promotes the performance of shallow SNNs and expands their application to various tasks. However, the existing conversion methods still face the problem of large conversion error within low conversion time steps. In this paper, a heuristic symmetric-threshold rectified linear unit (stReLU) activation function for ANNs is proposed, based on the intrinsically different responses between the integrate-and-fire (IF) neurons in SNNs and the activation functions in ANNs. The negative threshold in stReLU guarantees the conversion of negative activations, and the symmetric thresholds enable positive error to offset negative error between the activation value and the spike firing rate, thus reducing the conversion error from ANNs to SNNs. The lossless conversion from ANNs with stReLU to SNNs is demonstrated by theoretical formulation. By contrasting stReLU with asymmetric-threshold LeakyReLU and threshold ReLU, the effectiveness of symmetric thresholds is further explored. The results show that ANNs with stReLU can decrease the conversion error and achieve nearly lossless conversion on the MNIST, Fashion-MNIST, and CIFAR10 datasets, with 6x to 250x speedup compared with other methods. Moreover, a comparison of energy consumption between ANNs and SNNs indicates that this novel conversion algorithm can also significantly reduce energy consumption.
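A plausible reading of stReLU in code: clip activations to the symmetric thresholds [-θ, θ], and quantize to the T discrete firing-rate levels an IF neuron can express in T time steps. The quantization detail is our assumption drawn from standard ANN-SNN analyses, not necessarily the paper's exact definition:

```python
import numpy as np

def st_relu(x: np.ndarray, theta: float = 1.0, T: int = 32) -> np.ndarray:
    clipped = np.clip(x, -theta, theta)                 # symmetric thresholds
    return np.round(clipped * T / theta) * theta / T    # T firing-rate levels

x = np.linspace(-2, 2, 9)
print(st_relu(x, theta=1.0, T=4))   # negative activations survive, unlike plain ReLU
```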