At present, convolutional neural networks (CNNs) and transformers surpass humans in many situations (such as face recognition and object classification), but do not work well in identifying fibers in textile surface images. Hence, this paper proposes an architecture named FiberCT, which takes advantage of the feature extraction capability of CNNs and the long-range modeling capability of transformer decoders to adaptively extract multiple types of fiber features. Firstly, the convolution module extracts fiber features from the input textile surface images. Secondly, these features are sent into the transformer decoder module, where label embeddings are compared with the features of each type of fiber through multi-head cross-attention and the desired features are pooled adaptively. Finally, an asymmetric loss further purifies the extracted fiber representations. Experiments show that FiberCT extracts the representations of various types of fibers more effectively and improves fiber identification accuracy compared with state-of-the-art multi-label classification approaches.
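The label-embedding cross-attention pooling described above can be sketched in a few lines. A single attention head is used for brevity, and the shapes and random inputs are illustrative assumptions, not FiberCT's actual configuration:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def label_cross_attention(tokens, label_embed):
    # Each class-label embedding acts as a query over the CNN feature
    # tokens; the attention weights adaptively pool one feature per class.
    d = tokens.shape[-1]
    scores = label_embed @ tokens.T / np.sqrt(d)   # (C, N) similarities
    weights = softmax(scores, axis=-1)             # each row sums to 1
    return weights @ tokens                        # (C, d) pooled features

rng = np.random.default_rng(0)
tokens = rng.normal(size=(49, 64))   # e.g. a flattened 7x7 feature map
labels = rng.normal(size=(10, 64))   # one embedding per fiber type
pooled = label_cross_attention(tokens, labels)
print(pooled.shape)  # (10, 64)
```

Each pooled row is a convex combination of the feature tokens, weighted by how strongly they match that class's query, which is the "adaptive pooling" the abstract refers to.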
Quantum error-correction codes are invaluable resources for quantum computing and quantum communication. However, existing decoders are generally incapable of handling duplicated check nodes in belief propagation (BP) on quantum low-density parity-check (QLDPC) codes. Based on probability theory from machine learning, mathematical statistics, and topological structure, a GF(4) (the Galois field is abbreviated as GF) augmented-model BP decoder with a Tanner graph is designed. The problem of repeated check nodes can be solved by this decoder. In simulation, when the random perturbation strength p=0.0115-0.0116 and the number of attempts N=60-70, the augmented-model BP decoder reaches its highest decoding efficiency, and the frame error rate (FER) decreases to a low of 7.1975×10^(-5). Hence, we design a novel augmented-model decoder to compare the relationship between GF(2) and GF(4) for the quantum code [[450,200]] on the depolarization channel. It is verified that the proposed decoder offers a wider application range and better decoding performance on QLDPC codes.
Quantum error correction technology is an important solution to the noise interference generated during the operation of quantum computers. In order to find the best syndrome of the stabilizer code in quantum error correction, we need a fast decoder with a near-optimal threshold. In this work, we build a convolutional neural network (CNN) decoder to correct errors in the toric code, based on a systematic study of machine learning. We analyze and optimize various conditions that affect the CNN, and use the ResNet network architecture to reduce the running time, shortening it by 30%-40%; we finally design an optimized algorithm for the CNN decoder. In this way, the threshold accuracy of the neural network decoder reaches 10.8%, closer to the optimal threshold of about 11%. The previous threshold of 8.9%-10.3% is thus slightly improved, and there is no need to verify the basic noise.
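As a concrete illustration of such a decoder's input, the vertex-check syndrome of X errors on an L×L toric code can be computed directly from the error pattern. The edge-labeling convention below is one common choice, not necessarily the paper's:

```python
import numpy as np

def toric_syndrome(h_err, v_err):
    # h_err[i, j]: X error on the horizontal edge right of vertex (i, j)
    # v_err[i, j]: X error on the vertical edge below vertex (i, j)
    # Vertex (i, j) checks its four incident edges (periodic boundaries),
    # so the syndrome is the mod-2 sum of the four neighboring error bits.
    return (h_err + np.roll(h_err, 1, axis=1)
            + v_err + np.roll(v_err, 1, axis=0)) % 2

L = 5
h = np.zeros((L, L), dtype=int)
v = np.zeros((L, L), dtype=int)
h[2, 3] = 1                        # a single X error ...
print(toric_syndrome(h, v).sum())  # ... lights exactly 2 checks: prints 2
```

A CNN decoder of the kind described would take such syndrome images as input and predict the underlying error class.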
A low-complexity MP3 decoder based on the Broadcom embedded platform was proposed. C-code-level optimization algorithms for inverse quantization, stereo decoding, and alias reduction, developed on a PC, were proposed to further reduce the memory usage and the computational complexity. Furthermore, the executable file of the optimized MP3 decoder was generated under the Linux environment and transplanted to a set-top box based on the Broadcom embedded platform. Experimental results showed that the total decoding time was reduced on the embedded platform, and the goal of real-time and fluent playing of audio files was fulfilled, which demonstrated the effectiveness of the proposed MP3 decoder. The proposed MP3 decoder could be applied in fields such as set-top boxes based on the Broadcom embedded platform and other portable devices.
Benefiting from strong decoding capabilities, soft-decision decoding has been used to replace hard-decision decoding in various communication systems, and NAND flash memory systems are no exception. However, soft-decision decoding relies heavily on accurate soft information. Owing to incremental step pulse programming (ISPP), program errors (PEs) in multi-level cell (MLC) NAND flash memory have different characteristics from other types of errors, which makes such accurate soft information very difficult to obtain. Therefore, the characteristics of the log-likelihood ratio (LLR) of PEs are investigated first in this paper. Accordingly, a PE-aware statistical method is proposed to determine the usage of PE mitigation schemes. In order to reduce the PE-estimation workload of the controller, an adaptive blind clipping (ABC) scheme is proposed subsequently to approximate the PE-contaminated LLRs over different decoding trials. Finally, simulation results demonstrate that (1) the proposed PE-aware statistical method is effective in practice, and (2) the ABC scheme provides satisfactory bit error rate (BER) and frame error rate (FER) performance at the cost of a negligible increase in decoding latency.
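For intuition, the LLR of a read voltage under a simple equal-variance Gaussian threshold-voltage model, and the kind of magnitude clipping that a clipping-based approximation applies, can be sketched as follows. The voltage model and the clip level are illustrative assumptions, not the paper's ABC scheme:

```python
def llr(r, mu0, mu1, sigma):
    # log P(r | bit=0) / P(r | bit=1) for two equal-variance Gaussians
    # centered at mu0 and mu1; positive values favor '0'
    return ((r - mu1) ** 2 - (r - mu0) ** 2) / (2 * sigma ** 2)

def clip_llr(l, c):
    # bound the LLR magnitude, as clipping-based approximations do
    return max(-c, min(c, l))

raw = llr(1.0, mu0=1.0, mu1=3.0, sigma=0.5)
print(raw)                 # 8.0: a read exactly at mu0 is a confident '0'
print(clip_llr(raw, 4.0))  # 4.0 after clipping
```

Clipping limits how much weight any single (possibly PE-contaminated) read can carry in the soft-decision decoder.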
Modern satellite communication systems require on-board processing (OBP) for performance improvements, and SRAM-FPGAs are an attractive option for OBP implementation. However, SRAM-FPGAs are sensitive to radiation effects, among which single event upsets (SEUs) are important as they can lead to data corruption and system failure. This paper studies the fault tolerance of an SRAM-FPGA-implemented Viterbi decoder against SEUs in the user memory. Analysis and fault injection experiments verify that over 97% of SEUs in user memory do not lead to output errors. To achieve better reliability, selective protection schemes are then proposed to further improve the reliability of the decoder against SEUs in user memory with very small overhead. Although the results are obtained for a specific FPGA implementation, the developed reliability estimation model and the general conclusions still hold for other implementations.
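The idea of masking an SEU in user memory can be illustrated with triple modular redundancy (TMR), a standard FPGA protection technique; this is a generic sketch, not the paper's specific selective-protection scheme:

```python
def tmr_read(a, b, c):
    # bitwise majority vote across three copies masks any single upset
    return (a & b) | (a & c) | (b & c)

word = 0xDEADBEEF
copies = [word, word, word]
copies[1] ^= 1 << 5            # inject an SEU: flip one bit in one copy
print(hex(tmr_read(*copies)))  # 0xdeadbeef  (the upset is voted out)
```

Selective protection applies this kind of redundancy only to the memory bits whose upsets actually propagate to the output, which is why it can improve reliability with very small overhead.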
This paper presented a new solution for the motion compensation module in the high definition television (HDTV) video decoder. The overall architecture and the design of the major functional units, such as the motion vector decoder, the predictor, and the mixer, were discussed. Based on the exploitation of the special characteristics inherent in the motion compensation algorithm, the motion compensation module and its functional units adopt various novel architectures in order to allow the module to meet real-time constraints. This solution resolves the problems of high hardware cost, low bus efficiency, and complex control schemes in conventional designs.
The conventional methodology for designing QC-LDPC decoders is applied to the fixed configurations used in wireless communication standards, and the supported largest expansion factor Z (the parallelism of the layered decoding) is a fixed number. In this paper, we study the circular-shifting network for decoding LDPC codes with an arbitrary Z factor, especially for decoding large-Z (Z > P) codes, where P is the decoder parallelism. By buffering the P-length slices from the memory and assembling the shifted slices in a fixed routine, the P-parallelism shift network can process Z-parallelism circular-shifting tasks. The implementation results show that the proposed network for arbitrarily sized data shifting incurs only a one-fold additional resource cost compared with the traditional solution that handles only maximum-P-sized data shifting, and achieves significant savings in area and routing complexity.
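The slice-based shifting idea can be sketched functionally: a Z-length circular shift is produced slice by slice, with each P-length output slice drawing (with wrap-around) from the buffered input. Sizes and indices below are illustrative, not the paper's hardware parameters:

```python
def circular_shift(data, s):
    # reference left circular shift of a Z-length vector
    return data[s:] + data[:s]

def sliced_shift(data, s, P):
    # emit the shifted vector in P-length slices, as a P-parallel
    # network assembling buffered memory slices would; each output
    # slice draws from at most two consecutive buffered input slices
    Z = len(data)
    out = []
    for start in range(0, Z, P):
        out.extend(data[(start + s + j) % Z] for j in range(min(P, Z - start)))
    return out

data = list(range(10))
print(sliced_shift(data, 3, P=4))  # [3, 4, 5, 6, 7, 8, 9, 0, 1, 2]
```

Because each output slice is assembled from fixed-size buffered slices, a P-wide shifter can serve any Z, which is the core of the arbitrary-Z claim.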
Turbo codes have been shown to achieve performance close to the Shannon limit, and they have been adopted by various commercial communication systems. Both universal mobile telecommunications system (UMTS) TDD and FDD have employed turbo codes as the error correction coding scheme. Turbo codes outperform convolutional codes at large block sizes, but because of their time delay they are often used only for non-real-time services. In this paper, we discuss the encoder and decoder structure of turbo codes in the B3G mobile communication system. In addition, various decoding techniques, such as the Log-MAP, Max-Log-MAP and SOVA algorithms for non-real-time services, are deduced and compared. The performance results of the decoder and algorithms in different configurations are also shown.
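The difference between Log-MAP and Max-Log-MAP comes down to the max* operator used in the trellis recursions: Log-MAP keeps the Jacobian correction term, while Max-Log-MAP drops it. A minimal sketch:

```python
import math

def max_star_exact(a, b):
    # Log-MAP: ln(e^a + e^b) = max(a, b) + ln(1 + e^-|a-b|)
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_star_approx(a, b):
    # Max-Log-MAP: drop the correction term (faster, slightly suboptimal)
    return max(a, b)

print(round(max_star_exact(1.0, 1.0), 4))  # 1.6931  (i.e. 1 + ln 2)
print(max_star_approx(1.0, 1.0))           # 1.0
```

The correction term is largest when the two metrics are close, which is exactly where Max-Log-MAP loses a fraction of a dB relative to Log-MAP.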
Genetic algorithms are successfully used for decoding some classes of error correcting codes, and offer very good performance for solving large optimization problems. This article proposes a new decoder, the Serial Genetic Algorithm Decoder (SGAD), for decoding Low Density Parity Check (LDPC) codes. The results show that the proposed algorithm gives large gains over the sum-product decoder, which proves its efficiency.
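A genetic-algorithm decoder can be sketched on a toy (7,4) Hamming code rather than an LDPC code: individuals are information words, and fitness is the agreement of their codeword with the received hard bits. The operators and parameters below are generic GA choices, not SGAD's:

```python
import random

# Systematic generator matrix of the (7,4) Hamming code.
G = [[1, 0, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def encode(info):
    return [sum(i * g for i, g in zip(info, col)) % 2 for col in zip(*G)]

def fitness(info, received):
    # fewer disagreements between the candidate codeword and the
    # received hard-decision bits means a fitter individual
    return -sum(c != r for c, r in zip(encode(info), received))

def ga_decode(received, pop_size=20, gens=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda ind: fitness(ind, received), reverse=True)
        parents = pop[: pop_size // 2]        # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, 4)         # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:            # bit-flip mutation
                child[rng.randrange(4)] ^= 1
            children.append(child)
        pop = parents + children
    best = max(pop, key=lambda ind: fitness(ind, received))
    return encode(best)

sent = encode([1, 0, 1, 1])
noisy = sent[:]
noisy[2] ^= 1                                 # flip one received bit
print(ga_decode(noisy))
```

With a 16-point search space this is only a demonstration of the mechanics; real GA decoders work on much larger codes where exhaustive search is infeasible.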
For the well-known 3G mobile communications standard UMTS, four different service classes have been specified. Given two turbo decoding algorithms, SOVA and Log-MAP, it would be desirable to use an efficient turbo decoder. In this paper, this decoder is shown to adapt dynamically to different service scenarios, considering parameters such as performance and complexity for an indoor/low-range outdoor operating environment. The scenarios show that for real-time applications in the streaming service class, the proposed decoding algorithm depends on the data rate; for the majority of scenarios SOVA is proposed, whereas Log-MAP is optimal for increased data rates and medium-sized frames. On the other hand, real-time applications in the conversational service class cannot be established. For the majority of non-real-time applications (interactive and background service classes) either algorithm can be used, while Log-MAP is proposed for medium data rates and frame lengths.
A high-speed and low-power Viterbi decoder architecture design based on deep pipelining, clock gating and toggle filtering is presented in this paper. The Add-Compare-Select (ACS) and Trace Back (TB) units and their subcircuits have been operated in a deeply pipelined manner to achieve a high transmission rate. The power dissipation is also analyzed and compared with existing results. The techniques employed in our low-power design are clock gating and toggle filtering. The synthesized circuits are placed and routed in a standard-cell design environment and implemented on a Xilinx XC2VP2fg256-6 FPGA device. Power estimation obtained through gate-level simulations indicates that the proposed design reduces the power dissipation of the original Viterbi decoder design by 68.82%, and a speed of 145 MHz is achieved.
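The add-compare-select recursion at the heart of such a decoder can be sketched in software for a short constraint length (K = 3, generators 7/5 octal, chosen here for brevity rather than matching the paper's design); real hardware replaces the explicit path lists below with a dedicated trace-back unit:

```python
G_POLY = (0b111, 0b101)   # generators 7, 5 (octal) for a rate-1/2, K=3 code

def conv_encode(bits, g=G_POLY):
    s, out = 0, []
    for b in bits:
        reg = (b << 2) | s                    # [newest bit, s1, s0]
        out += [bin(reg & gi).count("1") & 1 for gi in g]
        s = reg >> 1                          # shift-register update
    return out

def viterbi(received, g=G_POLY):
    INF = float("inf")
    metric = [0, INF, INF, INF]               # encoder starts in state 0
    paths = [[], [], [], []]
    for t in range(0, len(received), 2):
        r = received[t:t + 2]
        new_metric = [INF] * 4
        new_paths = [None] * 4
        for s in range(4):
            if metric[s] == INF:
                continue
            for b in (0, 1):                  # Add: branch + path metric
                reg = (b << 2) | s
                out = [bin(reg & gi).count("1") & 1 for gi in g]
                m = metric[s] + sum(o != ri for o, ri in zip(out, r))
                ns = reg >> 1
                if m < new_metric[ns]:        # Compare-Select the survivor
                    new_metric[ns] = m
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(4), key=lambda s: metric[s])
    return paths[best]

msg = [1, 0, 1, 1, 0, 0]
noisy = conv_encode(msg)
noisy[2] ^= 1                                 # one channel-bit error
print(viterbi(noisy) == msg)                  # True
```

The inner loop over states and input bits is exactly what the ACS units compute in parallel; pipelining it is what buys the high clock rate reported in the paper.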
In this paper, a modified FPGA scheme for the convolutional encoder and Viterbi decoder based on the IEEE 802.11a WLAN standard is presented for OFDM baseband processing systems. The proposed design supports a generic, robust and configurable Viterbi decoder with a constraint length of 7, a code rate of 1/2 and a decoding depth of 36 symbols. The Viterbi decoder uses a fully parallel structure to improve the computational speed of the add-compare-select (ACS) modules, adopts an optimal data storage mechanism to avoid overflow, and employs three distributed RAM blocks to complete cyclic trace-back. It includes the core parts, for example, the state path metric computation, the preservation and transfer of the survivor path, and trace-back decoding. Compared with a general Viterbi decoder, this design effectively reduces chip logic elements by 10%, reduces power consumption by 5%, and increases the working performance of the encoder and decoder in the hardware implementation. Lastly, relevant simulation results written in Verilog HDL are verified on a Xilinx Virtex-II FPGA with ISE 7.1i. It is shown that the Viterbi decoder is capable of decoding (2, 1, 7) convolutional codes accurately with a throughput of 80 Mbps.
An all-optical 2-to-4 decoder unit based on the terahertz optical asymmetric demultiplexer (TOAD) is presented. The all-optical 2-to-4 decoder is designed with a set of all-optical switches and can be used to build a high-speed central processing unit using optical hardware. The unique output lines can be used for all-optical header processing. We attempt to develop an integrated all-optical circuit which can perform decoding of a signal. This scheme is very simple and flexible for performing different logic operations and for designing advanced complex logic. Simulated results confirm the described methods.
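Functionally, a 2-to-4 decoder asserts exactly one of four outputs for each two-bit input; the optical implementation aside, the logic itself is just:

```python
def decode_2to4(a, b):
    # (a, b) = (MSB, LSB); output at index 2a + b goes high
    return [int(not a and not b), int(not a and b),
            int(a and not b), int(a and b)]

# Print the full truth table: one-hot outputs for the four inputs.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, decode_2to4(a, b))
```

Each output line corresponds to one TOAD-switch path in the optical design, which is what makes the unique output lines usable for header processing.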
Decoding by the alternating direction method of multipliers (ADMM) is a promising linear programming decoder for low-density parity-check (LDPC) codes. In this paper, we propose a two-step scheme to lower the error floor of LDPC codes with the ADMM penalized decoder. For the undetected errors that cannot be avoided at the decoder side, we modify the code structure slightly to eliminate low-weight codewords. For the detected errors induced by small error-prone structures, we propose a post-processing method for the ADMM penalized decoder. Simulation results show that the error floor can be reduced significantly on three illustrative LDPC codes by the proposed two-step scheme.
Polar codes using successive-cancellation decoding always suffer from high latency due to the decoder's serial nature. The fast simplified successive-cancellation decoding algorithm improves the situation in theory, but does not perform as well as expected in practice, owing to the workload of node identification and the existence of many short blocks. Meanwhile, neural network (NN) based decoders have appeared as potential candidates to replace conventional decoders for polar codes, but their training complexity, which increases exponentially with the number of information bits, is unacceptable, meaning they are only suitable for short codes. In this paper, we present an improvement that increases decoding efficiency without degrading the error-correction performance. The long polar codes are divided into several sub-blocks, some of which can be decoded by the fast maximum-likelihood decoding method, while the remaining parts are replaced by several short-code NN decoders. The results show that the proposed algorithm needs only 79.8% of the time steps that fast simplified successive-cancellation decoders require. Moreover, it is up to 21.2 times faster than the successive-cancellation decoding algorithm. More importantly, the proposed algorithm reduces the implementation difficulty to some degree.
Genetic algorithms offer very good performance for solving large optimization problems, especially in the domain of error-correcting codes. However, they have a major drawback related to time complexity and memory occupation when running on a uniprocessor computer. This paper proposes a parallel decoder for linear block codes, using parallel genetic algorithms (PGA). The good performance and time complexity are confirmed by a theoretical study and by simulations on BCH(63,30,14) codes over both AWGN and flat Rayleigh fading channels. The simulation results show that the coding gain between the parallel and single genetic algorithm is about 0.7 dB at BER = 10^(-5) with only 4 processors.
This article has been retracted to straighten the academic record. In making this decision the Editorial Board follows COPE's Retraction Guidelines (http://publicationethics.org/files/retraction%20guidelines.pdf). The aim is to promote the circulation of scientific research by offering an ideal research publication platform with due consideration of internationally accepted standards on publication ethics. The Editorial Board would like to extend its sincere apologies for any inconvenience this retraction may have caused. Please see the article page (https://www.scirp.org/journal/paperinformation.aspx?paperid=101825) for more details. The full retraction notice in PDF (https://www.scirp.org/pdf/opj_2020072814494052.pdf) precedes the original paper, which is marked "RETRACTED".
In video captioning methods based on an encoder-decoder, limited visual features are extracted by an encoder, and a natural sentence describing the video content is generated using a decoder. However, this kind of method depends on a single video input source and few visual labels, and there is a problem of semantic alignment between video contents and generated natural sentences, which makes it unsuitable for accurately comprehending and describing video contents. To address this issue, this paper proposes a video captioning method with semantic topic-guided generation. First, a 3D convolutional neural network is utilized to extract the spatiotemporal features of videos during encoding. Then, the semantic topics of video data are extracted using the visual labels retrieved from similar video data. In decoding, a decoder is constructed by combining a novel Enhance-TopK sampling algorithm with a Generative Pre-trained Transformer-2 deep neural network, which decreases the influence of "deviation" in the semantic mapping process between videos and texts by jointly decoding a baseline and the semantic topics of video contents. During this process, the designed Enhance-TopK sampling algorithm can alleviate the long-tail problem by dynamically adjusting the probability distribution of the predicted words. Finally, experiments are conducted on two publicly used datasets, Microsoft Research Video Description and Microsoft Research-Video to Text. The experimental results demonstrate that the proposed method outperforms several state-of-the-art approaches. Specifically, the performance indicators Bilingual Evaluation Understudy, Metric for Evaluation of Translation with Explicit Ordering, Recall Oriented Understudy for Gisting Evaluation-longest common subsequence, and Consensus-based Image Description Evaluation of the proposed method are improved by 1.2%, 0.1%, 0.3%, and 2.4% on the Microsoft Research Video Description dataset, and 0.1%, 1.0%, 0.1%, and 2.8% on the Microsoft Research-Video to Text dataset, respectively, compared with existing video captioning methods. As a result, the proposed method can generate video captions that are more closely aligned with human natural language expression habits.
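Plain top-k sampling, the baseline that the Enhance-TopK algorithm builds on, can be sketched as follows; how Enhance-TopK dynamically reshapes the kept probabilities is specific to the paper and not reproduced here:

```python
import numpy as np

def top_k_sample(logits, k, temperature=1.0, seed=0):
    # keep only the k highest-logit tokens, renormalize, then sample;
    # this truncation is what suppresses the long tail of rare tokens
    rng = np.random.default_rng(seed)
    idx = np.argsort(logits)[-k:]
    z = logits[idx] / temperature
    p = np.exp(z - z.max())
    p /= p.sum()
    return int(idx[rng.choice(k, p=p)])

logits = np.array([2.0, 1.0, 0.5, -1.0, 3.0])
print(top_k_sample(logits, k=2))   # always 0 or 4: the two best tokens
```

An adjustment such as Enhance-TopK would modify `p` before sampling; the truncated-and-renormalized distribution above is the common starting point.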
Funding: National Natural Science Foundation of China (No. 61972081); Fundamental Research Funds for the Central Universities, China (No. 2232023Y-01); Natural Science Foundation of Shanghai, China (No. 22ZR1400200).
Funding: the National Natural Science Foundation of China (Grant Nos. 11975132 and 61772295); the Natural Science Foundation of Shandong Province, China (Grant No. ZR2019YQ01); the Higher Education Science and Technology Program of Shandong Province, China (Grant No. J18KZ012).
Funding: the National Natural Science Foundation of China (Grant Nos. 11975132 and 61772295); the Natural Science Foundation of Shandong Province, China (Grant No. ZR2019YQ01); the Project of Shandong Province Higher Educational Science and Technology Program, China (Grant No. J18KZ012).
Funding: Supported by the National Natural Science Foundation of China (60772066).
Funding: This work was supported by the Key Project of Sichuan Province (No. 2017SZYZF0002) and a Marie Curie Fellowship (No. 796426).
Funding: supported in part by the National Key R&D Program (Grant No. 2017YFE0121300); in part by the National Natural Science Foundation of China (Grant No. 61501321); in part by the Tianjin Science and Technology Program (Grant No. 17ZXRGGX00160); and by the TEXEO project TEC201680339R funded by the Spanish Ministry of Economy and Competitiveness.
文摘Modern satellite communication systems require on-board processing(OBP)for performance improvements,and SRAM-FPGAs are an attractive option for OBP implementation.However,SRAM-FPGAs are sensitive to radiation effects,among which single event upsets(SEUs)are important as they can lead to data corruption and system failure.This paper studies the fault tolerance capability of a SRAM-FPGA implemented Viterbi decoder to SEUs on the user memory.Analysis and fault injection experiments are conducted to verify that over 97%of the SEUs on user memory would not lead to output errors.To achieve a better reliability,selective protection schemes are then proposed to further improve the reliability of the decoder to SEUs on user memory with very small overhead.Although the results are obtained for a specific FPGA implementation,the developed reliability estimation model and the general conclusions still hold for other implementations.
文摘This paper presented a new solution for motion compensation module in the high definition television (HDTV) video decoder. The overall architecture and the design of the major functional units, such as the motion vector decoder, the predictor, and the mixer, were discussed. Based on the exploitation of the special characteristics inherent in the motion compensation algorithm, the motion compensation module and its functional units adopt various novel architectures in order to allow the module to meet real-time constraints. This solution resolves the problem of high hardware costs, low bus efficiency and complex control schemes in conventional designs.
文摘The conventional methodology for designing QC-LDPC decoders is applied for fixed configurations used in wireless communication standards, and the supported largest expansion factor Z (the parallelism of the layered decoding) is a fixed number. In this paper, we study the circular-shifting network for decoding LDPC codes with arbitrary Z factor, especially for decoding large Z (Z P) codes, where P is the decoder parallelism. By buffering the P-length slices from the memory, and assembling the shifted slices in a fixed routine, the P-parallelism shift network can process Z-parallelism circular-shifting tasks. The implementation results show that the proposed network for arbitrary sized data shifting consumes only one times of additional resource cost compared to the traditional solution for only maximum P sized data shifting, and achieves significant saving on area and routing complexity.
Abstract: Turbo codes have been shown to achieve performance close to the Shannon limit and have been adopted by various commercial communication systems. Both the universal mobile telecommunications system (UMTS) TDD and FDD modes employ turbo codes as the error correction coding scheme. Turbo codes outperform convolutional codes at large block sizes, but because of their decoding delay they are often used only for non-real-time services. In this paper, we discuss the encoder and decoder structure of turbo codes in a B3G mobile communication system. In addition, various decoding techniques for non-real-time services, such as the Log-MAP, Max-Log-MAP, and SOVA algorithms, are derived and compared. Performance results for the decoder and algorithms in different configurations are also shown.
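The essential difference between the Log-MAP and Max-Log-MAP algorithms compared above comes down to how they evaluate log(e^a + e^b): Log-MAP uses the exact Jacobian logarithm (the max* operation), while Max-Log-MAP drops the correction term. A small sketch of just that kernel:

```python
import math

def max_star(a, b):
    """Exact Jacobian logarithm used by Log-MAP:
    log(e^a + e^b) = max(a, b) + log(1 + e^-|a-b|)."""
    return max(a, b) + math.log1p(math.exp(-abs(a - b)))

def max_log(a, b):
    """Max-Log-MAP approximation: the correction term is omitted."""
    return max(a, b)

a, b = 1.3, 0.9
exact = math.log(math.exp(a) + math.exp(b))
assert abs(max_star(a, b) - exact) < 1e-12
# The approximation error of Max-Log-MAP is bounded by log(2) ~= 0.693:
assert 0 <= exact - max_log(a, b) <= math.log(2)
```

This bounded per-step error is why Max-Log-MAP trades a small BER loss for markedly lower complexity, which matches the complexity/performance comparison made in the paper.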
Abstract: Genetic algorithms are successfully used for decoding some classes of error-correcting codes and offer very good performance for solving large optimization problems. This article proposes a new decoder based on a Serial Genetic Algorithm Decoder (SGAD) for decoding low-density parity-check (LDPC) codes. The results show that the proposed algorithm gives large gains over the sum-product decoder, which proves its efficiency.
Abstract: For the well-known 3G mobile communications standard UMTS, four different service classes have been specified. Considering two turbo decoding algorithms, SOVA and Log-MAP, it is desirable to use an efficient turbo decoder. In this paper, such a decoder is shown to adapt dynamically to different service scenarios, considering parameters such as performance and complexity for indoor/low-range outdoor operating environments. The scenarios show that for real-time applications in the streaming service class, the proposed decoding algorithm depends on the data rate: for the majority of scenarios SOVA is proposed, whereas Log-MAP is optimal for higher data rates and medium-sized frames. On the other hand, real-time applications in the conversational service class cannot be established. For the majority of non-real-time applications (interactive and background service classes) either algorithm can be used, while Log-MAP is proposed for medium data rates and frame lengths.
Abstract: A high-speed, low-power Viterbi decoder architecture based on deep pipelining, clock gating, and toggle filtering is presented in this paper. The Add-Compare-Select (ACS) and Trace-Back (TB) units and their sub-circuits are operated in a deeply pipelined manner to achieve a high transmission rate. Power dissipation is also analyzed and compared with existing results. The low-power techniques employed in our design are clock gating and toggle filtering. The synthesized circuits are placed and routed in a standard cell design environment and implemented on a Xilinx XC2VP2fg256-6 FPGA device. Power estimation obtained through gate-level simulations indicates that the proposed design reduces the power dissipation of an original Viterbi decoder design by 68.82% while achieving a speed of 145 MHz.
Abstract: In this paper, a modified FPGA scheme for the convolutional encoder and Viterbi decoder based on the IEEE 802.11a WLAN standard is presented for OFDM baseband processing systems. The proposed design supports a generic, robust, and configurable Viterbi decoder with a constraint length of 7, a code rate of 1/2, and a decoding depth of 36 symbols. The Viterbi decoder uses a fully parallel structure to improve the computational speed of the add-compare-select (ACS) modules, adopts an optimal data storage mechanism to avoid overflow, and employs three distributed RAM blocks to complete cyclic trace-back. It includes the core parts, such as state path metric computation, preservation and transfer of the survivor paths, and trace-back decoding. Compared with a general Viterbi decoder, this design reduces chip logic elements by 10%, reduces power consumption by 5%, and increases the working performance of the encoder and decoder in hardware. Finally, relevant simulation results written in Verilog HDL are verified on a Xilinx Virtex-II FPGA with ISE 7.1i. It is shown that the Viterbi decoder is capable of accurately decoding (2, 1, 7) convolutional codes with a throughput of 80 Mbps.
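The ACS and trace-back steps named above can be sketched in software. This toy uses a rate-1/2, constraint-length-3 code (generators 7 and 5 octal) rather than the paper's (2, 1, 7) code, and keeps full survivor paths instead of a RAM-based cyclic trace-back, but the add-compare-select structure is the same:

```python
G = (0b111, 0b101)  # generator polynomials (7, 5 octal)

def encode(bits):
    """Rate-1/2 convolutional encoder, constraint length 3."""
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state            # 3-bit shift register
        out += [bin(reg & g).count('1') & 1 for g in G]
        state = reg >> 1
    return out

def viterbi_decode(received, n_bits):
    """Hard-decision Viterbi decoding over the 4 trellis states."""
    INF = float('inf')
    metric = [0, INF, INF, INF]           # encoder starts in state 0
    paths = [[] for _ in range(4)]
    for t in range(n_bits):
        r = received[2 * t: 2 * t + 2]
        new_metric, new_paths = [INF] * 4, [None] * 4
        for state in range(4):
            if metric[state] == INF:
                continue
            for b in (0, 1):              # Add: branch metric per input bit
                reg = (b << 2) | state
                expect = [bin(reg & g).count('1') & 1 for g in G]
                m = metric[state] + sum(x != y for x, y in zip(r, expect))
                nxt = reg >> 1
                if m < new_metric[nxt]:   # Compare-Select: keep survivor
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[state] + [b]
        metric, paths = new_metric, new_paths
    best = min(range(4), key=lambda s: metric[s])
    return paths[best]                    # trace-back along the survivor

msg = [1, 0, 1, 1, 0, 0]                  # trailing zeros flush the register
code = encode(msg)
code[3] ^= 1                              # inject one channel error
assert viterbi_decode(code, len(msg)) == msg
```

In the FPGA design, the ACS inner loop runs fully in parallel across states each cycle, which is where the throughput gain over this sequential sketch comes from.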
Abstract: An all-optical 2-to-4 decoder unit assisted by a terahertz optical asymmetric demultiplexer (TOAD) is presented. The all-optical 2-to-4 decoder is designed with a set of all-optical switches and can be used to build a high-speed central processing unit using optical hardware. The unique output lines can be used for all-optical header processing. We attempt to develop an integrated all-optical circuit that can perform signal decoding. The scheme is simple and flexible for performing different logic operations and for designing advanced complex logic. Simulation results confirm the described methods.
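The logical behavior realized by the TOAD-based switches is the standard 2-to-4 decoder truth table: exactly one of four output lines is asserted for each 2-bit input. A software sketch of that truth table (the optics themselves are of course not modeled here):

```python
def decoder_2to4(a1, a0):
    """2-to-4 line decoder: one-hot output selected by the 2-bit input."""
    index = (a1 << 1) | a0
    return [1 if i == index else 0 for i in range(4)]

for a1 in (0, 1):
    for a0 in (0, 1):
        print(a1, a0, decoder_2to4(a1, a0))
# 0 0 [1, 0, 0, 0]
# 0 1 [0, 1, 0, 0]
# 1 0 [0, 0, 1, 0]
# 1 1 [0, 0, 0, 1]
```

The one-hot outputs are what makes the unit usable for all-optical header processing: each header pattern activates a unique line.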
Funding: Supported in part by the National Natural Science Foundation of China under Grants No. 61471286 and No. 61271004, the Fundamental Research Funds for the Central Universities, and the open research fund of the Key Laboratory of Information Coding and Transmission, Southwest Jiaotong University (No. 2010-03).
Abstract: Decoding by the alternating direction method of multipliers (ADMM) is a promising linear programming decoder for low-density parity-check (LDPC) codes. In this paper, we propose a two-step scheme to lower the error floor of LDPC codes with the ADMM penalized decoder. For the undetected errors that cannot be avoided at the decoder side, we modify the code structure slightly to eliminate low-weight codewords. For the detected errors induced by small error-prone structures, we propose a post-processing method for the ADMM penalized decoder. Simulation results show that the proposed two-step scheme significantly lowers the error floor on three illustrative LDPC codes.
Abstract: Polar codes under successive-cancellation decoding suffer from high latency because of the algorithm's serial nature. The fast simplified successive-cancellation decoding algorithm improves the situation in theory, but in practice it does not perform as well as expected because of the workload of node identification and the existence of many short blocks. Meanwhile, neural network (NN) based decoders have appeared as potential candidates to replace conventional decoders for polar codes, but their training complexity grows exponentially with the number of information bits, which makes them suitable only for short codes. In this paper, we present an improvement that increases decoding efficiency without degrading error-correction performance. Long polar codes are divided into several sub-blocks: some are decoded with a fast maximum-likelihood decoding method, and the remaining parts are replaced by NN decoders for short codes. The results show that the proposed algorithm needs only 79.8% of the time steps required by fast simplified successive-cancellation decoders; moreover, it is up to 21.2 times faster than the successive-cancellation decoding algorithm. More importantly, the proposed algorithm reduces the difficulty of practical application to some degree.
Abstract: Genetic algorithms offer very good performance for solving large optimization problems, especially in the domain of error-correcting codes. However, they have a major drawback related to time complexity and memory occupation when running on a uniprocessor computer. This paper proposes a parallel decoder for linear block codes using parallel genetic algorithms (PGA). The good performance and time complexity are confirmed by theoretical study and by simulations on BCH(63,30,14) codes over both AWGN and flat Rayleigh fading channels. The simulation results show that the coding gain of the parallel over the single genetic algorithm is about 0.7 dB at BER = 10^-5 with only 4 processors.
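The genetic-algorithm decoding principle used in both GA papers above can be sketched on a toy scale. This sketch decodes the tiny (7,4) Hamming code rather than BCH(63,30,14), and runs serially rather than on parallel processors; fitness is the negative Hamming distance between a candidate's codeword and the received word, and all parameter values are illustrative:

```python
import random

def encode(d):
    """Systematic (7,4) Hamming encoder."""
    d0, d1, d2, d3 = d
    return [d0, d1, d2, d3, d0 ^ d1 ^ d2, d1 ^ d2 ^ d3, d0 ^ d1 ^ d3]

def ga_decode(received, pop_size=40, generations=60, seed=1):
    rng = random.Random(seed)
    fitness = lambda d: -sum(a != b for a, b in zip(encode(d), received))
    pop = [[rng.randint(0, 1) for _ in range(4)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]        # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size - 2:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, 4)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.3:             # bit-flip mutation
                child[rng.randrange(4)] ^= 1
            children.append(child)
        # Two random immigrants per generation keep the population diverse.
        immigrants = [[rng.randint(0, 1) for _ in range(4)] for _ in range(2)]
        pop = survivors + children + immigrants
    return max(pop, key=fitness)

msg = [1, 0, 1, 1]
word = encode(msg)
word[2] ^= 1                                   # one channel error
print(ga_decode(word))
```

A parallel GA, as in the paper, would evolve several such populations on separate processors and exchange their best individuals, which is where the reported 0.7 dB gain over the single-population version comes from.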
Abstract: This article has been retracted to straighten the academic record. In making this decision the Editorial Board follows COPE's Retraction Guidelines (http://publicationethics.org/files/retraction%20guidelines.pdf). The aim is to promote the circulation of scientific research by offering an ideal research publication platform with due consideration of internationally accepted standards on publication ethics. The Editorial Board would like to extend its sincere apologies for any inconvenience this retraction may have caused. Please see the article page (https://www.scirp.org/journal/paperinformation.aspx?paperid=101825) for more details. The full retraction notice in PDF (https://www.scirp.org/pdf/opj_2020072814494052.pdf) precedes the original paper, which is marked "RETRACTED".
Funding: Supported in part by the National Natural Science Foundation of China under Grant 61873277, in part by the Natural Science Basic Research Plan in Shaanxi Province of China under Grant 2020JQ-758, and in part by the China Postdoctoral Science Foundation under Grant 2020M673446.
Abstract: In video captioning methods based on an encoder-decoder framework, limited visual features are extracted by an encoder, and a natural-language sentence describing the video content is generated by a decoder. However, such methods depend on a single video input source and few visual labels, and suffer from poor semantic alignment between video content and the generated sentences, so they cannot accurately comprehend and describe video content. To address this issue, this paper proposes a video captioning method guided by semantic topics. First, a 3D convolutional neural network is used to extract the spatiotemporal features of videos during encoding. Then, the semantic topics of the video data are extracted using visual labels retrieved from similar video data. During decoding, a decoder is constructed by combining a novel Enhance-TopK sampling algorithm with a Generative Pre-trained Transformer-2 deep neural network, which decreases the influence of "deviation" in the semantic mapping between videos and texts by jointly decoding a baseline and the semantic topics of the video content. In this process, the designed Enhance-TopK sampling algorithm alleviates the long-tail problem by dynamically adjusting the probability distribution of the predicted words. Finally, experiments are conducted on two public datasets, Microsoft Research Video Description and Microsoft Research-Video to Text. The experimental results demonstrate that the proposed method outperforms several state-of-the-art approaches. Specifically, the performance indicators Bilingual Evaluation Understudy, Metric for Evaluation of Translation with Explicit Ordering, Recall-Oriented Understudy for Gisting Evaluation-longest common subsequence, and Consensus-based Image Description Evaluation are improved by 1.2%, 0.1%, 0.3%, and 2.4% on the Microsoft Research Video Description dataset, and by 0.1%, 1.0%, 0.1%, and 2.8% on the Microsoft Research-Video to Text dataset, respectively, compared with existing video captioning methods. As a result, the proposed method can generate video captions that align more closely with natural human language expression.
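The sampling mechanism that Enhance-TopK builds on is standard top-k sampling, which already truncates the long tail of the vocabulary distribution; the paper's contribution is to adjust the retained distribution dynamically, which is not reproduced in this baseline sketch:

```python
import math
import random

def top_k_sample(logits, k, rng=random.Random(0)):
    """Baseline top-k sampling: keep the k largest logits,
    renormalise with softmax, then sample one token index."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    m = max(logits[i] for i in top)
    weights = [math.exp(logits[i] - m) for i in top]   # stable softmax
    r = rng.random() * sum(weights)
    acc = 0.0
    for idx, w in zip(top, weights):
        acc += w
        if r <= acc:
            return idx
    return top[-1]

# With k = 2, only the two most probable tokens can ever be sampled;
# the tail tokens (indices 2-4) are cut off entirely.
logits = [2.0, 1.0, 0.5, -3.0, -5.0]
samples = [top_k_sample(logits, k=2) for _ in range(1000)]
assert set(samples) <= {0, 1}
```

A dynamic variant in the spirit of Enhance-TopK would reweight or resize the retained set per decoding step instead of using a fixed k and raw softmax weights.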