Data compression plays a key role in optimizing the use of memory storage space and in reducing latency in data transmission. In this paper, we are interested in lossless compression techniques because their performance is exploited alongside lossy techniques for images and videos, which generally use a mixed approach. To achieve our objective, which is to study the performance of lossless compression methods, we first carried out a literature review, the summary of which enabled us to select the most relevant methods, namely: arithmetic coding, LZW, Tunstall's algorithm, RLE, BWT, Huffman coding and Shannon-Fano. Secondly, we designed a purposive text dataset with a repeating pattern in order to test the behavior and effectiveness of the selected compression techniques. Thirdly, we designed the compression algorithms and developed the programs (scripts) in Matlab in order to test their performance. Finally, following the tests conducted on this deliberately constructed data, the results show that these methods, listed in order of performance, are very satisfactory: LZW, arithmetic coding, Tunstall's algorithm, and BWT + RLE. Likewise, it appears that the performance of certain techniques relative to others is strongly linked, on the one hand, to the sequencing and/or recurrence of the symbols that make up the message and, on the other hand, to the cumulative time of encoding and decoding.
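Since the dataset in this study is a text with a repeating pattern, a minimal run-length encoding (RLE) sketch illustrates why such data compresses so readily. This is written in Python rather than the paper's Matlab scripts, and the test string is invented:

```python
def rle_encode(text):
    # Collapse each run of identical characters into a (char, count) pair.
    pairs = []
    i = 0
    while i < len(text):
        j = i
        while j < len(text) and text[j] == text[i]:
            j += 1
        pairs.append((text[i], j - i))
        i = j
    return pairs

def rle_decode(pairs):
    # Expand (char, count) pairs back to the original string.
    return "".join(ch * n for ch, n in pairs)

data = "ABBCCCDDDD" * 50            # invented repetitive test pattern
pairs = rle_encode(data)
assert rle_decode(pairs) == data    # lossless round trip
```

The longer and more regular the runs, the fewer pairs are emitted, which is exactly the sensitivity to symbol recurrence that the study observes.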
Filtering capacitors with a compact configuration and a wide range of operating voltage have been attracting increasing attention for the smooth conversion of electric signals in modern circuits. Lossless integration of capacitor units can be regarded as one of the most efficient ways to achieve a wider voltage range, but it has not yet been fully conquered due to the lack of rational designs for the electrode structure and integration technology. This study presents an alternately stacked assembly technology to conveniently fabricate compact aqueous hybrid integrated filtering capacitors on a large scale, in which a unit consists of an rGO/MXene composite film as the negative electrode and a PEDOT:PSS-based film as the positive electrode. Benefiting from the synergistic effect of the rGO and MXene components and the morphological characteristics of PEDOT:PSS, the capacitor unit exhibits outstanding AC line filtering with a large areal specific energy density of 1,015 μF V^(2) cm^(-2) (0.28 μW h cm^(-2)) at 120 Hz. After rational integration, the assembled capacitors present a compact/lightweight configuration and lossless frequency response, as reflected by an almost constant resistor-capacitor time constant of 0.2 ms and a dissipation factor of 15% at 120 Hz, identical to those of a single capacitor unit. Apart from standing steadily on a flower, an integrated capacitor of small volume (only 8.1 cm^(3)) with 70 units connected in series achieves hundred-volt alternating-current line filtering, which is superior to most reported filtering capacitors with a sandwich configuration. This study provides insight into the fabrication and application of compact/ultralight filtering capacitors with lossless frequency response and a wide range of operating voltage.
In Ethernet lossless Data Center Networks (DCNs) deployed with Priority-based Flow Control (PFC), the head-of-line blocking problem is still difficult to prevent, because PFC is triggered under burst traffic scenarios even with existing congestion control solutions. To address the head-of-line blocking problem of PFC, we propose a new congestion control mechanism. The key point of Congestion Control Using In-Network Telemetry for Lossless Datacenters (ICC) is to use In-Network Telemetry (INT) technology to obtain comprehensive congestion information, which is then fed back to the sender to adjust the sending rate timely and accurately. With ICC it is possible to control congestion in time, converge to the target rate quickly, and maintain a near-zero queue length at the switch. We conducted Network Simulator-3 (NS-3) simulation experiments to test ICC's performance. Compared to Congestion Control for Large-Scale RDMA Deployments (DCQCN), TIMELY: RTT-based Congestion Control for the Datacenter (TIMELY), and Re-architecting Congestion Management in Lossless Ethernet (PCN), ICC effectively reduces PFC pause messages by 47%, 56%, and 34%, and Flow Completion Time (FCT) by 15.3×, 14.8×, and 11.2×, respectively.
A semiconductor optical amplifier (SOA) gate based on tensile-strained quasi-bulk InGaAs is developed. At an injection current of 80 mA, a 3 dB optical bandwidth of more than 85 nm is achieved due to the dominant band-filling effect. Most importantly, a very low polarization dependence of gain (<0.7 dB), a fiber-to-fiber lossless operation current (70~90 mA) and a high extinction ratio (>50 dB) are simultaneously obtained over this wide 3 dB optical bandwidth (1520~1609 nm), which nearly covers the spectral region of the whole C band (1525~1565 nm) and the whole L band (1570~1610 nm). The gating time is also improved by decreasing the carrier lifetime. The wide-band polarization-insensitive SOA gate is promising for use in future dense wavelength division multiplexing (DWDM) communication systems.
For protecting the copyright of a text and recovering its original content harmlessly, this paper proposes a novel reversible natural language watermarking method that combines arithmetic coding and synonym substitution operations. By analyzing the relative frequencies of synonymous words, the synonyms employed for carrying the payload are quantized into an unbalanced and redundant binary sequence. The quantized binary sequence is compressed losslessly by adaptive binary arithmetic coding to provide spare room for accommodating additional data. Then, the compressed data appended with the watermark are embedded into the cover text via synonym substitutions in an invertible manner. On the receiver side, the watermark and compressed data can be extracted by decoding the values of the synonyms in the watermarked text, after which the original text can be perfectly recovered by decompressing the extracted compressed data and substituting the replaced synonyms with their original synonyms. Experimental results demonstrate that the proposed method can extract the watermark successfully and achieve a lossless recovery of the original text. Additionally, it achieves a high embedding capacity.
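The quantize-then-compress step can be sketched as follows. The two-entry synonym table is invented, and zlib's DEFLATE is used only as a stand-in for the paper's adaptive binary arithmetic coder; the point is that a heavily skewed bit sequence compresses well, freeing room for the watermark:

```python
import zlib

# Hypothetical synonym table: each pair lists the more frequent variant
# first (encoded as bit 0) and the rarer one second (bit 1); a real
# system would derive this ordering from corpus frequencies.
SYNONYMS = {"size": ("big", "large"), "speed": ("fast", "rapid")}

def words_to_bits(words):
    # Quantize each synonym occurrence to one bit.
    bits = []
    for w in words:
        for common, rare in SYNONYMS.values():
            if w == common:
                bits.append(0)
            elif w == rare:
                bits.append(1)
    return bits

cover = ["big", "fast", "big", "big", "fast", "big"] * 20
raw = bytes(words_to_bits(cover))       # unbalanced: mostly zeros
packed = zlib.compress(raw)             # stand-in for arithmetic coding
assert zlib.decompress(packed) == raw   # lossless
assert len(packed) < len(raw)           # the saved space holds the payload
```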
This paper proposes a lossless and high-payload data hiding scheme for JPEG images based on histogram modification. Most of a JPEG bitstream consists of a sequence of VLCs (variable length codes) and their appended bits. Each VLC has a corresponding RLV (run/length value) to record the AC/DC coefficients. To achieve lossless data hiding with a high payload, we shift the histogram of VLCs and modify the DHT segment to embed data. Since we sort the histogram of VLCs in descending order, the file-size expansion is limited. The paper's key contributions include lossless data hiding, less file-size expansion at identical payload, and higher embedding efficiency.
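The paper shifts the histogram of JPEG VLCs; the same histogram-shifting idea is easier to see on plain integer samples. A minimal reversible sketch (sample values and payload bits are invented, and the real scheme operates on VLC codes, not raw samples):

```python
from collections import Counter

def hs_embed(vals, bits):
    # Shift every value above the histogram peak up by one, emptying the
    # bin peak+1; each sample equal to the peak then carries one bit.
    peak = Counter(vals).most_common(1)[0][0]
    it = iter(bits)
    out = []
    for v in vals:
        if v > peak:
            out.append(v + 1)            # shift to free the bin peak+1
        elif v == peak:
            out.append(v + next(it, 0))  # embed one payload bit
        else:
            out.append(v)
    return out, peak

def hs_extract(marked, peak):
    # Invert the embedding exactly: recover both payload and cover.
    bits, vals = [], []
    for v in marked:
        if v == peak:
            bits.append(0); vals.append(peak)
        elif v == peak + 1:
            bits.append(1); vals.append(peak)
        elif v > peak + 1:
            vals.append(v - 1)
        else:
            vals.append(v)
    return vals, bits

vals = [3, 5, 5, 7, 5, 2, 5, 9]          # peak value is 5
marked, peak = hs_embed(vals, [1, 0, 1, 1])
restored, bits = hs_extract(marked, peak)
assert restored == vals and bits == [1, 0, 1, 1]
```

Capacity equals the count of the peak bin, which is why sorting the histogram in descending order (as the paper does) matters for payload.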
This paper proves a power balance theorem in the frequency domain. It becomes another circuit law concerning power conservation after Tellegen's theorem. Moreover, the universality and practical importance of the theorem are introduced in this paper. The various frequency-domain quantities in a nonlinear circuit obey fixed intrinsic rules. There exists a mutual influence of nonlinear coupling among the various harmonics, but every harmonic component must individually obey KCL, KVL and the conservation of complex power in a nonlinear circuit. A nonlinear conservative system with an excitation source and no dissipative elements is a lossless network. The theorem proved in this paper can be used directly to find the main harmonic solutions of a lossless circuit. The results are consistent with the balancing condition of reactive power and accord with the traditional harmonic analysis method. This paper demonstrates that a lossless network can universally produce chaos. The phase portrait is related closely to the initial conditions, and thus it is not an attractor. Furthermore, the paper also reveals the difference between attractiveness and boundedness for chaos.
A simple and adaptive lossless compression algorithm is proposed for remote sensing image compression, which includes an integer wavelet transform and the Rice entropy coder. By analyzing the probability distribution of the integer wavelet transform coefficients and the characteristics of the Rice entropy coder, a divide-and-rule method is used for the high-frequency sub-bands and the low-frequency one. The high-frequency sub-bands are coded by the Rice entropy coder, and the low-frequency coefficients are predicted before coding. The role of the predictor is to map the low-frequency coefficients into symbols suitable for entropy coding. Experimental results show that the average Compression Ratio (CR) of our approach is about two, which is close to that of JPEG 2000. The algorithm is simple and easy to implement in hardware. Moreover, it has the merits of adaptability and independent data packets, so it is well suited to space lossless compression applications.
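The Rice (Golomb-Rice) entropy coder encodes a non-negative integer as a unary quotient plus a k-bit binary remainder, which favors the small magnitudes typical of high-frequency wavelet coefficients. A minimal sketch, with the parameter k assumed fixed rather than adapted as a practical coder would do:

```python
def rice_encode(n, k):
    # Unary-coded quotient (q ones, then a zero) + k-bit remainder.
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, "b").zfill(k) if k else "")

def rice_decode(bits, k):
    # Returns (value, number of bits consumed).
    q = i = 0
    while bits[i] == "1":
        q += 1
        i += 1
    i += 1                                # skip the terminating zero
    r = int(bits[i:i + k], 2) if k else 0
    return (q << k) | r, i + k

code = rice_encode(9, 2)                  # q=2, r=1
assert code == "11001"
assert rice_decode(code, 2) == (9, 5)
```

Small inputs yield short codewords (e.g. 0 with k=2 is just "000"), matching the sharply peaked coefficient distribution the paper exploits.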
This paper presents the key optimization techniques for an efficient accelerator implementation in an image encoder IP core design for real-time Joint Photographic Experts Group Lossless (JPEG-LS) encoding. A pipeline architecture and accelerator elements have been utilized to enhance the throughput capability. Improved parameter mapping schemes and resource sharing have been adopted for the purpose of low complexity and a small chip die area. Module-level and fine-grained gating measures have been used to achieve a low-power implementation. It has been proved that these hardware-oriented optimization techniques make the encoder meet the requirements of the IP core implementation. The proposed optimization techniques have been verified in the implementation of the JPEG-LS encoder IP, and then validated in a real wireless endoscope system.
The qualitative solutions of a dynamical system expressed by nonlinear differential equations can be divided into two categories. In the first, the motion of the phase point eventually approaches infinity or a stable equilibrium point. Neither a periodic excitation source nor self-excited oscillation exists in such nonlinear dynamic circuits, so the solution cannot be treated as a synthesis of multiple harmonics. In the second, the endless vibration of the phase point is confined within a certain range and possesses the character of sustained oscillation, namely bounded nonlinear oscillation. It vibrates persistently and repeatedly after the dynamic variables enter the steady state; moreover, the motion of the phase point never approaches infinity, and the system has no stable equilibrium point. The motion trajectory can be described by a bounded space curve. So far, this curve cannot be represented in a concrete, explicit parametric form in mathematics; it cannot be expressed analytically. Chaos is the most universally common form of bounded nonlinear oscillation. A number of chaotic systems, such as the Lorenz equations, Chua's circuit and modern lossless systems, are a few examples among thousands of chaotic equations. In this work, the basic properties of bounded space curves are comprehensively summarized by analyzing these examples.
Due to the particularity of seismic data, in some cases they must be treated with a lossless compression algorithm. In this paper, a lossless compression algorithm based on the integer wavelet transform is studied. Compared with the traditional algorithm, it can better improve the compression ratio. The CDF (2, n) biorthogonal wavelet family leads to a better compression ratio than other CDF families, SWE and CRF, owing to its capability in canceling data redundancies and focusing data characteristics. The CDF (2, n) family is therefore suitable as the wavelet function for lossless compression of seismic data.
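The CDF (2, 2) member of this family is the integer 5/3 lifting transform. The sketch below (boundary handling by index clamping is a simplifying assumption in place of proper symmetric extension) shows why the transform is exactly invertible and hence usable for lossless coding:

```python
def fwd53(x):
    # Forward integer 5/3 (CDF(2,2)) lifting on a 1-D signal:
    # predict odd samples from even neighbors, then update evens.
    even, odd = x[0::2], x[1::2]
    d = [odd[i] - (even[i] + even[min(i + 1, len(even) - 1)]) // 2
         for i in range(len(odd))]
    s = [even[i] + (d[max(i - 1, 0)] + d[min(i, len(d) - 1)] + 2) // 4
         for i in range(len(even))]
    return s, d          # low-pass approximation, high-pass detail

def inv53(s, d):
    # Each lifting step is undone exactly, so reconstruction is lossless.
    even = [s[i] - (d[max(i - 1, 0)] + d[min(i, len(d) - 1)] + 2) // 4
            for i in range(len(s))]
    odd = [d[i] + (even[i] + even[min(i + 1, len(even) - 1)]) // 2
           for i in range(len(d))]
    x = []
    for i, e in enumerate(even):
        x.append(e)
        if i < len(odd):
            x.append(odd[i])
    return x

x = [12, 14, 15, 15, 16, 20, 19, 18]
s, d = fwd53(x)
assert inv53(s, d) == x      # perfect (lossless) reconstruction
```

On smooth data the detail coefficients d cluster near zero, which is the redundancy-canceling property the abstract credits to the CDF (2, n) family.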
Small storage space for photographs in formal documents is increasingly necessary given today's huge amounts of data communication and storage. Traditional compression algorithms do not sufficiently utilize the distinctness of formal photographs: the object is an image of a human head, and the background is unicolor. Therefore, the compression is of low efficiency and the image after compression is still space-consuming. This paper presents an image compression algorithm based on object segmentation for practical high-efficiency applications. To achieve high coding efficiency, shape-adaptive discrete wavelet transforms are used to transform arbitrarily shaped objects. The areas of the human head and its background are compressed separately to reduce the coding redundancy of the background. Two methods, lossless image contour coding based on differential chains, and a modified set partitioning in hierarchical trees (SPIHT) algorithm for arbitrary shapes, are discussed in detail. The experimental results show that at a bit-per-pixel (bpp) rate of 0.078, the peak signal-to-noise ratio (PSNR) of the reconstructed photograph exceeds that of standard SPIHT by nearly 4 dB.
In this paper, we propose a novel image recompression framework and image quality assessment (IQA) method to efficiently recompress Internet images. With this framework, image size is significantly reduced without affecting the spatial resolution or perceptible quality of the image. With the help of IQA, the relationship between image quality and image evaluation scores can be quickly established, and the optimal quality factor can be obtained quickly and accurately within a pre-determined perceptual quality range. This process ensures the perceptual quality of each input image. The test results show that, using the proposed method, the file size of images can be reduced by about 45%-60% without affecting their visual quality. Moreover, our new image recompression framework can be applied to many different application scenarios.
In this paper, a new predictive model adapted to QTM (Quaternary Triangular Mesh) pixel compression is introduced. Our approach starts from the principles of predictive models based on available QTM neighbor pixels, and an algorithm for ascertaining the available QTM neighbors is also proposed. Then, a method for reducing the space complexity of predicting QTM pixel values is presented. Next, a structure for storing compressed QTM pixels is proposed. Finally, an experiment comparing the compression ratio of this method with others is carried out using three wave bands of 1 km resolution NOAA imagery of China. The results indicate that: 1) the compression method performs better than any other, such as Run Length Coding, Arithmetic Coding, Huffman Coding, etc.; 2) the average size of the compressed three-band data based on the neighbor QTM pixel predictive model is 31.58% of the original space requirement and 67.5% of that of Arithmetic Coding without a predictive model.
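The QTM triangular-neighbor predictor itself is not reproduced here, but the principle of predictive decorrelation can be shown on an ordinary square grid with a simple west-neighbor predictor (the sample image values are invented):

```python
def encode_pred(img):
    # Replace each pixel by its difference from the left neighbor
    # (first pixel of a row is differenced against 0). On correlated
    # imagery the residuals cluster near zero, so they entropy-code well.
    res = []
    for row in img:
        prev, r = 0, []
        for v in row:
            r.append(v - prev)
            prev = v
        res.append(r)
    return res

def decode_pred(res):
    # Cumulative sum per row exactly inverts the prediction.
    img = []
    for r in res:
        prev, row = 0, []
        for dv in r:
            prev += dv
            row.append(prev)
        img.append(row)
    return img

img = [[10, 11, 11, 12], [10, 10, 13, 13]]
assert decode_pred(encode_pred(img)) == img   # lossless round trip
```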
With the size of astronomical data archives continuing to increase at an enormous rate, the providers and end users of astronomical data sets will benefit from effective data compression techniques. This paper explores different lossless data compression techniques and aims to find an optimal compression algorithm to compress astronomical data obtained by the Square Kilometre Array (SKA), which are new and unique in the field of radio astronomy. It was required that the compressed data sets be lossless and that the data be compressed while being read. The project was carried out in conjunction with the SKA South Africa office. Data compression reduces the time taken and the bandwidth used when transferring files, and it can also reduce the costs involved in data storage. The SKA uses the Hierarchical Data Format (HDF5) to store the data collected from the radio telescopes, with the data used in this study ranging from 29 MB to 9 GB in size. The compression techniques investigated in this study include SZIP, GZIP, the LZF filter, LZ4 and the Fully Adaptive Prediction Error Coder (FAPEC). The algorithms and methods used to perform the compression tests are discussed, and the results from the three phases of testing are presented, followed by a brief discussion of those results.
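Python's standard library exposes three general-purpose codec families relevant to this kind of comparison (DEFLATE as used by GZIP, plus bzip2 and LZMA), so a quick ratio test can be sketched as below. The payload is an invented text stand-in, not SKA visibility data, and real HDF5 files would behave differently:

```python
import bz2
import lzma
import zlib

# Invented, highly regular payload standing in for archive data.
data = b"channel-0031 amp=0.125 phase=1.571\n" * 5000

codecs = {
    "zlib (GZIP's DEFLATE)": (zlib.compress, zlib.decompress),
    "bz2": (bz2.compress, bz2.decompress),
    "lzma": (lzma.compress, lzma.decompress),
}

for name, (enc, dec) in codecs.items():
    blob = enc(data)
    assert dec(blob) == data                      # lossless round trip
    print(f"{name}: ratio {len(data) / len(blob):.1f}x")
```

The same harness extends naturally to third-party bindings for LZ4 or FAPEC when those libraries are available.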
In this paper, the second-generation wavelet transform is applied to lossless image coding, thanks to its reversible integer wavelet transform property. The second-generation wavelet transform can provide a higher compression ratio than Huffman coding while, unlike the first-generation wavelet transform, reconstructing the image without loss. The experimental results show that the second-generation wavelet transform can obtain excellent performance in medical image compression coding.
This paper studies the robustness of networks in terms of network structure. We define a strongly dominated relation between nodes and then use this relation to merge the network. Based on that, we design a dominated clustering algorithm aimed at finding the critical nodes in the network. Furthermore, this merging process is lossless, which means the original structure of the network is preserved. In order to realize the visualization of the network, we also apply a lossy consolidation to the network based on detection of the community structures. Simulation results show that, compared with six existing centrality algorithms, our algorithm performs better when the attack capacity is limited. The simulations also illustrate that our algorithm does better in assortative scale-free networks.
For digital communication, distributed storage and management of media contents across system holders are critical issues. In this article, an efficient verifiable sharing scheme is proposed that can satisfy the essential requirements of distributed sharing and can achieve a lossless property for the host media. Verifiability allows holders to detect and identify counterfeited shadows during cooperation in order to prevent cheating. Only authorized holders can reveal the lossless shared content and then reconstruct the original host image. The shared media capacity is adjustable and proportional to the number of distributed holders t: the more distributed holders, the larger the shared media capacity. Moreover, the ability to reconstruct the image preserves the fidelity of valuable host media, such as military and medical images. According to the results, the proposed approach achieves performance superior to that of related sharing schemes in effectively providing distributed media management and storage.
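The article's verifiable scheme is not detailed in the abstract; a classic Shamir (t, n) threshold sharing sketch illustrates the underlying idea that any t of the n holders can losslessly reconstruct the shared secret (the field prime and secret value here are illustrative, not the paper's construction):

```python
import random

P = 2**31 - 1  # Mersenne prime defining the finite field

def make_shares(secret, t, n):
    # Shamir (t, n) sharing: the secret is the constant term of a random
    # degree-(t-1) polynomial, evaluated at x = 1..n to form the shadows.
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the secret exactly.
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

shares = make_shares(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789   # any 3 shadows suffice
```

Fewer than t shadows reveal nothing about the secret, which is the property that lets media be distributed across holders without trusting any single one.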
Discrete cosine transform (DCT) is the key technique in JPEG and MPEG, which process an image block by block. However, this method is not suitable for blocks containing many edges, in particular when high-quality image reconstruction is required. An adaptive hybrid DPCM/DCT coding method is proposed to solve this problem. For each block, the edge detector switches to the DPCM or DCT coder automatically, depending upon the quality requirement. The edge blocks are coded by a DPCM coder that adaptively selects, from a given set, the predictor which results in the minimum prediction error, and the residues obtained are then coded. For non-edge blocks, DCT, run-length and variable length coding (VLC) are applied. Experimental results showed the proposed algorithm outperforms baseline JPEG and JPEG lossless mode both in compression ratio and decoding run-time at bit rates from approximately 1 to 4.
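The block classification step can be approximated by a crude gradient test; the threshold and sample blocks below are invented, and a real edge detector would be more elaborate:

```python
def is_edge_block(block, thresh=30):
    # Crude edge test: any large horizontal jump routes the block to the
    # DPCM path; smooth blocks go to the DCT path. Threshold is invented.
    return any(abs(row[i + 1] - row[i]) > thresh
               for row in block for i in range(len(row) - 1))

flat = [[100, 102, 101, 103]] * 4    # gentle variation -> DCT coder
sharp = [[10, 200, 10, 200]] * 4     # strong edges     -> DPCM coder
assert not is_edge_block(flat)
assert is_edge_block(sharp)
```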
Discrete (J, J′) lossless factorization is established by using conjugation. For the stable case, the existence of such a factorization is equivalent to the existence of a positive solution of a Riccati equation. For the unstable case, the existence conditions can be reduced to the existence of positive solutions of two Riccati equations.
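For reference, the standard discrete-time algebraic Riccati equation that such existence conditions typically involve can be written, in generic state-space notation (A, B, Q, R) that is assumed here rather than taken from the paper, as:

```latex
X = A^{\mathsf{T}} X A
    - A^{\mathsf{T}} X B \left( R + B^{\mathsf{T}} X B \right)^{-1} B^{\mathsf{T}} X A
    + Q, \qquad X \succ 0 .
```

The stable case of the factorization then corresponds to the existence of one such positive solution, and the unstable case to a pair of them.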
Funding: supported by the NSFC (21805072, 22075019, 22035005) and the National Key R&D Program of China (2017YFB1104300).
Funding: supported by the National Natural Science Foundation of China (No. 62102046, 62072249, 62072056); Jin Wang, Yongjun Ren, and Jinbin Hu receive the grant, and the sponsor's website is https://www.nsfc.gov.cn/. This work is also funded by the National Science Foundation of Hunan Province (No. 2022JJ30618, 2020JJ2029).
Funding: This project is supported by the National Natural Science Foundation of China (No. 61202439), partly supported by the Scientific Research Foundation of the Hunan Provincial Education Department of China (No. 16A008), and partly supported by the Hunan Key Laboratory of Smart Roadway and Cooperative Vehicle-Infrastructure Systems (No. 2017TP1016).
Funding: This research work is partly supported by the National Natural Science Foundation of China (61502009, 61525203, 61472235, U1636206, 61572308), the CSC Postdoctoral Project (201706505004), the Anhui Provincial Natural Science Foundation (1508085SQF216), the Key Program for Excellent Young Talents in Colleges and Universities of Anhui Province (gxyqZD2016011), and the Anhui University research and innovation training project for undergraduate students.
文摘This paper proposes a lossless and high payload data hiding scheme for JPEG images by histogram modification.The most in JPEG bitstream consists of a sequence of VLCs(variable length codes)and the appended bits.Each VLC has a corresponding RLV(run/length value)to record the AC/DC coefficients.To achieve lossless data hiding with high payload,we shift the histogram of VLCs and modify the DHT segment to embed data.Since we sort the histogram of VLCs in descending order,the filesize expansion is limited.The paper’s key contribution includes:Lossless data hiding,less filesize expansion in identical pay-load and higher embedding efficiency.
Abstract: This paper proves a power balance theorem in the frequency domain, another circuit law concerning power conservation after Tellegen's theorem, and discusses the universality and practical value of the theorem. Frequency-domain calculations in nonlinear circuits obey fixed intrinsic rules. There is mutual influence through nonlinear coupling among the various harmonics, but every harmonic component must individually satisfy KCL, KVL, and conservation of complex power in a nonlinear circuit. A nonlinear conservative system with an excitation source and no dissipative element is a lossless network. The theorem proved in this paper can be used directly to find the main harmonic solutions of a lossless circuit; the solutions are consistent with the balancing condition of reactive power and accord with the traditional harmonic analysis method. The paper also demonstrates that a lossless network can universally produce chaos. The phase portrait is closely related to the initial conditions and is therefore not an attractor. Furthermore, the paper reveals the difference between attractiveness and boundedness for chaos.
Abstract: A simple and adaptive lossless compression algorithm is proposed for remote sensing image compression, combining an integer wavelet transform with the Rice entropy coder. By analyzing the probability distribution of integer wavelet transform coefficients and the characteristics of the Rice entropy coder, a divide-and-rule strategy is applied to the high-frequency and low-frequency sub-bands: high-frequency sub-bands are coded directly by the Rice entropy coder, while low-frequency coefficients are predicted before coding, the predictor mapping them into symbols suited to entropy coding. Experimental results show that the average Compression Ratio (CR) of the approach is about two, close to that of JPEG 2000. The algorithm is simple and easy to implement in hardware; moreover, it is adaptive and produces independent data packets, so it is well suited to space lossless compression applications.
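A standard Rice coder of the kind the abstract relies on is short enough to sketch. This is the textbook formulation (unary quotient plus k-bit remainder for non-negative integers), not the paper's hardware version:

```python
def rice_encode(n, k):
    """Rice code of non-negative int n with parameter k:
    unary-coded quotient, '0' terminator, then k-bit remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    rem = format(r, "0%db" % k) if k else ""
    return "1" * q + "0" + rem

def rice_decode(bits, k):
    """Decode a concatenation of Rice codewords back to the value list."""
    out, i = [], 0
    while i < len(bits):
        q = 0
        while bits[i] == "1":   # count the unary quotient
            q += 1
            i += 1
        i += 1                  # skip the '0' terminator
        r = int(bits[i:i + k], 2) if k else 0
        i += k
        out.append((q << k) | r)
    return out

vals = [0, 1, 2, 3, 7, 12]
enc = "".join(rice_encode(v, 2) for v in vals)
assert rice_decode(enc, 2) == vals
```

Rice codes are near-optimal for the geometrically distributed magnitudes typical of wavelet high-frequency sub-bands, which is why the paper pairs them.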
Funding: Supported by the National High Technology Research and Development Program (No. 2008AA010707)
Abstract: This paper presents the key optimization techniques for an efficient accelerator implementation in an image encoder IP core design for real-time Joint Photographic Experts Group Lossless (JPEG-LS) encoding. A pipelined architecture and accelerator elements are utilized to enhance throughput. Improved parameter mapping schemes and resource sharing are adopted for low complexity and a small chip die area, while module-level and fine-grained gating measures achieve a low-power implementation. These hardware-oriented optimizations enable the encoder to meet the requirements of an IP core implementation. The proposed techniques have been verified in the implementation of the JPEG-LS encoder IP and then validated in a real wireless endoscope system.
Abstract: The qualitative solutions of a dynamical system described by a nonlinear differential equation can be divided into two categories. In the first, the motion of the phase point eventually approaches infinity or a stable equilibrium point; neither a periodic excitation source nor self-excited oscillation exists in such nonlinear dynamic circuits, so the solution cannot be treated as a synthesis of multiple harmonics. In the second, the endless vibration of the phase point is limited to a certain range and possesses the character of sustained oscillation, namely bounded nonlinear oscillation: the dynamic variables vibrate persistently and repeatedly after entering the steady state, the phase point never approaches infinity, and the system has no stable equilibrium point. The motion trajectory can be described by a bounded space curve which, so far, cannot be represented in an explicit parametric form and cannot be expressed analytically. Chaos is the most common form of bounded nonlinear oscillation; chaotic systems such as the Lorenz equations, Chua's circuit, and modern lossless systems are a few examples among thousands of chaotic equations. In this work, the basic properties of bounded space curves are comprehensively summarized by analyzing these examples.
Abstract: Due to the particularity of seismic data, they must in some cases be treated by a lossless compression algorithm. This paper studies a lossless compression algorithm based on the integer wavelet transform. Compared with traditional algorithms, it improves the compression rate. The CDF(2, n) biorthogonal wavelet family leads to a better compression ratio than other CDF families, SWE, and CRF, owing to its capability of canceling data redundancies and capturing data characteristics. The CDF(2, n) family is therefore a suitable wavelet function for lossless compression of seismic data.
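The reversibility that makes integer wavelets usable for lossless coding can be shown with the CDF(2, 2) member of the family, implemented via integer lifting. This is a minimal 1-D sketch with a simple symmetric boundary rule, not the paper's implementation:

```python
def cdf22_forward(x):
    """Integer CDF(2,2) (5/3) lifting: predict odds from even neighbors,
    then update evens from the new details. All steps are integer and
    exactly invertible, hence lossless."""
    s, d = x[0::2], x[1::2]                          # even / odd samples
    d = [d[i] - ((s[i] + s[min(i + 1, len(s) - 1)]) >> 1)
         for i in range(len(d))]                     # predict step
    s = [s[i] + ((d[max(i - 1, 0)] + d[min(i, len(d) - 1)] + 2) >> 2)
         for i in range(len(s))]                     # update step
    return s, d

def cdf22_inverse(s, d):
    """Undo update then predict with the same integer operations."""
    s = [s[i] - ((d[max(i - 1, 0)] + d[min(i, len(d) - 1)] + 2) >> 2)
         for i in range(len(s))]
    d = [d[i] + ((s[i] + s[min(i + 1, len(s) - 1)]) >> 1)
         for i in range(len(d))]
    x = [0] * (len(s) + len(d))
    x[0::2], x[1::2] = s, d
    return x

x = [3, 7, 2, 9, 4, 4, 1, 8]
assert cdf22_inverse(*cdf22_forward(x)) == x         # perfect reconstruction
```

Because the same floor divisions are applied and then subtracted back, rounding never accumulates, so the round trip is bit-exact on integer data.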
Funding: This work was supported by the National Natural Science Foundation of China (No. 60372066)
Abstract: Small storage space for photographs in formal documents is increasingly necessary given today's huge volumes of data communication and storage. Traditional compression algorithms do not sufficiently exploit the distinctness of formal photographs, namely that the object is an image of a human head and the background is a single color; consequently, compression is inefficient and the compressed image still consumes space. This paper presents an image compression algorithm based on object segmentation for practical high-efficiency applications. To achieve high coding efficiency, shape-adaptive discrete wavelet transforms are used to transform arbitrarily shaped objects. The area of the human head and its background are compressed separately to reduce the coding redundancy of the background. Two methods, lossless image contour coding based on differential chains and a modified set partitioning in hierarchical trees (SPIHT) algorithm for arbitrary shapes, are discussed in detail. Experiments show that at 0.078 bit per pixel (bpp), the peak signal-to-noise ratio (PSNR) of the reconstructed photograph exceeds that of standard SPIHT by nearly 4 dB.
Funding: Supported in part by the China "973" Program under Grant No. 2014CB340303
Abstract: In this paper, we propose a novel image recompression framework and an image quality assessment (IQA) method to efficiently recompress Internet images. With this framework, image size is significantly reduced without affecting the spatial resolution or perceptible quality of the image. With the help of IQA, the relationship between image quality and evaluation scores can be quickly established, and the optimal quality factor can be obtained quickly and accurately within a pre-determined perceptual quality range; this process ensures the perceptual quality of each input image. Test results show that with the proposed method the file size of images can be reduced by about 45%-60% without affecting their visual quality. Moreover, the new image recompression framework can be applied to many different scenarios.
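One common way to find the optimal quality factor within a target quality range, assuming the IQA score is non-decreasing in the quality factor, is a binary search. The paper does not specify its search, so this is only an illustrative sketch with a caller-supplied (here hypothetical) score model:

```python
def smallest_qf(score, target, lo=1, hi=100):
    """Binary-search the smallest JPEG quality factor whose IQA score
    meets `target`. `score(qf)` is a caller-supplied quality model and
    is assumed to be non-decreasing in qf."""
    while lo < hi:
        mid = (lo + hi) // 2
        if score(mid) >= target:
            hi = mid        # mid is good enough; try smaller
        else:
            lo = mid + 1    # mid falls short; go larger
    return lo

# Hypothetical linear quality model, purely for demonstration.
assert smallest_qf(lambda q: q / 100, 0.80) == 80
```

Picking the smallest acceptable quality factor is what yields the file-size reduction while staying inside the perceptual quality range.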
Funding: Project 40471108 supported by the National Natural Science Foundation of China
Abstract: In this paper, a new predictive model adapted to QTM (Quaternary Triangular Mesh) pixel compression is introduced. Our approach starts with the principles of the proposed predictive models based on available QTM neighbor pixels, together with an algorithm for ascertaining the available QTM neighbors. We then present a method for reducing the space complexity of predicting QTM pixel values, and propose a structure for storing compressed QTM pixels. Finally, an experiment comparing the compression ratio of this method with others is carried out using three wave bands of 1 km resolution NOAA images of China. The results indicate that: 1) the method outperforms Run Length Coding, Arithmetic Coding, Huffman Coding, etc.; 2) the average size of the compressed three-band data based on the neighbor QTM pixel predictive model is 31.58% of the original space requirement, versus 67.5% for Arithmetic Coding without a predictive model.
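The gain from neighbor prediction can be illustrated on an ordinary raster grid. This stand-in predicts each pixel from the average of its already-decoded left/up neighbors rather than QTM triangle neighbors, but the principle (residuals cluster near zero and compress better than raw values) is the same:

```python
def predict_residuals(img):
    """Raster-order neighbor prediction: each pixel is predicted from
    the integer mean of its left and up neighbors (a simplified stand-in
    for the QTM neighbor predictive model)."""
    h, w = len(img), len(img[0])
    res = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nb = []
            if x: nb.append(img[y][x - 1])
            if y: nb.append(img[y - 1][x])
            pred = sum(nb) // len(nb) if nb else 0
            res[y][x] = img[y][x] - pred
    return res

def reconstruct(res):
    """Invert the prediction using only already-reconstructed pixels,
    so the scheme is lossless."""
    h, w = len(res), len(res[0])
    img = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            nb = []
            if x: nb.append(img[y][x - 1])
            if y: nb.append(img[y - 1][x])
            pred = sum(nb) // len(nb) if nb else 0
            img[y][x] = pred + res[y][x]
    return img

img = [[10, 12, 11], [13, 14, 15], [9, 8, 20]]
assert reconstruct(predict_residuals(img)) == img
```

The residual grid, not the raw pixels, is what gets handed to the entropy coder; its smaller, peaked value range is the source of the compression gain reported above.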
Abstract: With the size of astronomical data archives continuing to increase at an enormous rate, both providers and end users of astronomical data sets will benefit from effective data compression techniques. This paper explores different lossless data compression techniques and aims to find an optimal algorithm for compressing astronomical data obtained by the Square Kilometre Array (SKA), which is new and unique in the field of radio astronomy. It was required that compression be lossless and applied while the data are being read. The project was carried out in conjunction with the SKA South Africa office. Data compression reduces the time taken and the bandwidth used when transferring files, and it can also reduce data storage costs. The SKA uses the Hierarchical Data Format (HDF5) to store the data collected from the radio telescopes, with the files used in this study ranging from 29 MB to 9 GB in size. The compression techniques investigated include SZIP, GZIP, the LZF filter, LZ4, and the Fully Adaptive Prediction Error Coder (FAPEC). The algorithms and methods used to perform the compression tests are discussed, and the results from the three phases of testing are presented, followed by a brief discussion of those results.
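The shape of such a benchmark (compress, verify exact roundtrip, record the ratio per codec) can be sketched with Python's standard-library codecs. These stand in for the SZIP/GZIP/LZF/LZ4/FAPEC set actually tested in the study:

```python
import bz2
import lzma
import zlib

def benchmark(data):
    """Losslessly compress `data` with three stdlib codecs and return
    the compression ratio of each, after verifying an exact roundtrip
    (the study's lossless requirement)."""
    codecs = {
        "zlib": (zlib.compress, zlib.decompress),
        "bz2":  (bz2.compress,  bz2.decompress),
        "lzma": (lzma.compress, lzma.decompress),
    }
    ratios = {}
    for name, (comp, decomp) in codecs.items():
        blob = comp(data)
        assert decomp(blob) == data          # must be bit-exact
        ratios[name] = len(data) / len(blob)
    return ratios

sample = b"radio astronomy visibility data " * 500
print(benchmark(sample))
```

Real SKA data would be read from HDF5 files; here a synthetic byte string keeps the sketch self-contained.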
Funding: Supported by the National Natural Science Foundation of China (69875009)
Abstract: In this paper, the second generation wavelet transform is applied to lossless image coding, exploiting its reversible integer wavelet transform. The second generation wavelet transform provides a higher compression ratio than Huffman coding while, unlike the first generation wavelet transform, reconstructing the image without loss. The experimental results show that the second generation wavelet transform achieves excellent performance in medical image compression coding.
Funding: Supported by the National Natural Science Foundation of China (Grant No. 61471055)
Abstract: This paper studies the robustness of a network in terms of its structure. We define a strongly dominated relation between nodes and use the relation to merge the network. Based on that, we design a dominated clustering algorithm for finding the critical nodes in the network. This merging process is lossless, meaning the original structure of the network is preserved. To realize visualization of the network, we also apply a lossy consolidation based on the detection of community structures. Simulation results show that, compared with six existing centrality algorithms, our algorithm performs better when the attack capacity is limited; the simulations also illustrate that our algorithm does better in assortative scale-free networks.
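One plausible formalisation of a domination relation between nodes, shown only as a sketch (the paper's exact definition of "strongly dominated" may differ), is neighborhood inclusion: u is dominated by v when every neighbor of u other than v is also a neighbor of v. Recording which node was merged into which keeps the merge lossless:

```python
def dominated_pairs(adj):
    """Return (u, v) pairs where v dominates u under an assumed
    neighborhood-inclusion reading: N(u) \ {v} is a subset of N(v) \ {u}.
    Merging each dominated u into its dominator, while recording the
    mapping, preserves enough information to restore the network."""
    pairs = []
    for u, nu in adj.items():
        for v, nv in adj.items():
            if u != v and set(nu) - {v} <= set(nv) - {u}:
                pairs.append((u, v))
    return pairs

# Star graph: every leaf is dominated by the hub, never the reverse.
star = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
print(dominated_pairs(star))
```

In the star example the hub is the only node that dominates without being structurally replaceable, which matches the intuition that it is the critical node.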
Abstract: For digital communication, distributed storage and management of media content across system holders are critical issues. In this article, an efficient verifiable sharing scheme is proposed that satisfies the significant essentials of distributed sharing and achieves a lossless property for the host media. Verifiability allows holders to detect and identify counterfeited shadows during cooperation in order to prevent cheating. Only authorized holders can reveal the lossless shared content and then reconstruct the original host image. The shared media capacity is adjustable and proportional to the number of distributed holders t: the more holders, the larger the shared media capacity. Moreover, the ability to reconstruct the image preserves the fidelity of valuable host media such as military and medical images. According to the results, the proposed approach achieves performance superior to related sharing schemes, effectively providing distributed media management and storage.
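The threshold-sharing core of such schemes can be illustrated with classic Shamir (t, n) secret sharing, used here as a generic stand-in for the paper's verifiable media-sharing construction (the verifiability layer is omitted, and the small prime field is for demonstration only):

```python
import random

P = 2**31 - 1  # prime modulus; real schemes use far larger fields

def make_shares(secret, t, n):
    """Shamir (t, n) sharing: evaluate a random degree-(t-1) polynomial
    with constant term `secret` at x = 1..n. Any t shares recover the
    secret losslessly; fewer reveal nothing."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

shares = make_shares(123456, t=3, n=5)
assert recover(shares[:3]) == 123456   # any 3 of the 5 shares suffice
```

In the paper's setting the "secret" would be derived from the host image data, and the scheme additionally binds each shadow so counterfeits are detectable.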
Abstract: The discrete cosine transform (DCT) is the key technique in JPEG and MPEG, which process images block by block. However, this method is not suitable for blocks containing many edges, in particular for high quality image reconstruction. An adaptive hybrid DPCM/DCT coding method is proposed to solve this problem. For each block, an edge detector switches to the DPCM or DCT coder automatically depending on the quality requirement. Edge blocks are coded by a DPCM coder that adaptively selects, from a given set, the predictor yielding the minimum prediction error, and the resulting residues are then entropy coded. For non-edge blocks, DCT, run-length, and variable length coding (VLC) are applied. Experimental results show that the proposed algorithm outperforms baseline JPEG and the JPEG lossless mode in both compression ratio and decoding run-time at bit rates from approximately 1 to 4.
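The adaptive-predictor DPCM step can be sketched in one dimension. This toy picks, from a supplied predictor set, the one minimizing total absolute residual, as the edge-block coder does; the block structure, edge detector, and entropy coder are omitted:

```python
def dpcm_encode(samples, predictors):
    """DPCM with adaptive predictor selection: choose the predictor
    (a function of the previous sample) that minimizes the total
    absolute prediction error, then emit its index and the residuals."""
    def residuals(pred):
        prev, out = 0, []
        for s in samples:
            out.append(s - pred(prev))
            prev = s
        return out
    best = min(range(len(predictors)),
               key=lambda i: sum(abs(r) for r in residuals(predictors[i])))
    return best, residuals(predictors[best])

def dpcm_decode(best, res, predictors):
    """Rebuild samples by adding each residual to the prediction."""
    pred, prev, out = predictors[best], 0, []
    for r in res:
        s = pred(prev) + r
        out.append(s)
        prev = s
    return out

# Two illustrative predictors: constant zero, and previous-sample.
preds = [lambda p: 0, lambda p: p]
best, res = dpcm_encode([10, 11, 12, 13], preds)
assert dpcm_decode(best, res, preds) == [10, 11, 12, 13]
```

For the smooth ramp above, the previous-sample predictor wins, leaving small residuals that an entropy coder can pack tightly; an edge block would likewise favor whichever predictor tracks its local gradient.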
Abstract: Discrete (J, J′) lossless factorization is established by using conjugation. For the stable case, the existence of such a factorization is equivalent to the existence of a positive solution of a Riccati equation. For the unstable case, the existence conditions can be reduced to the existence of two positive solutions of two Riccati equations.