Funding: Supported by a research fund of Chungnam National University.
Abstract: Blockchain is a technology that provides security features usable well beyond cryptocurrencies. Blockchain achieves security by saving the information of one block in the next block: changing the information of one block requires changing all subsequent blocks for the change to take effect, which makes such an attack infeasible. However, the structure of blockchain leaves the last block always vulnerable to attack, because its information is not yet saved in any other block. A malicious node can therefore change the information of the last block, generate a new block, and broadcast it to the network. Since nodes always follow the longest-chain-wins rule, the malicious node wins because it holds the longest chain in the network. This paper proposes a solution to this issue: nodes send consistency-check messages before broadcasting a block, and a new block is broadcast only if the nodes successfully verify that the node that generated it has not tampered with the blockchain. Simulation results show that the suggested protocol provides better security than regular blockchain.
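The hash-link property this abstract relies on can be sketched as follows. This is a minimal illustration under our own assumptions (names such as `block_hash` and `chain_is_consistent` are hypothetical, not the paper's protocol): a peer verifies that each block stores the hash of its predecessor before accepting a broadcast chain.

```python
import hashlib
import json

def block_hash(block):
    """Deterministic hash of a block's contents."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(index, data, prev_hash):
    return {"index": index, "data": data, "prev_hash": prev_hash}

def chain_is_consistent(chain):
    """Verify that every block stores the hash of its predecessor."""
    return all(curr["prev_hash"] == block_hash(prev)
               for prev, curr in zip(chain, chain[1:]))

# Build a small chain, then tamper with an earlier block.
genesis = make_block(0, "genesis", "0" * 64)
b1 = make_block(1, "tx: A pays B 5", block_hash(genesis))
b2 = make_block(2, "tx: B pays C 2", block_hash(b1))
chain = [genesis, b1, b2]
assert chain_is_consistent(chain)

b1["data"] = "tx: A pays B 500"   # tampering breaks the link b1 -> b2
assert not chain_is_consistent(chain)
```

Note how the check catches tampering anywhere except in the newest block, which has no successor yet; that gap is exactly what the paper's pre-broadcast consistency-check messages target.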
Funding: This project was supported by the National Basic Research Program of China (No. 2010CB951603) and the Major Program of the National Social Science Foundation of China (No. 13&ZD161). We thank Prof. Jietai Mao of the Department of Atmospheric & Oceanic Sciences, Peking University, China for expert advice and assistance. We also thank the WDCGG for providing the CO2 data, NASA for providing the AIRS CO2 data, and NOAA for providing the IASI CO2 data.
Abstract: This article describes a global consistency check of CO2 satellite retrieval products from the Atmospheric Infrared Sounder (AIRS) and the Infrared Atmospheric Sounding Interferometer (IASI) using statistical analysis and data from the World Data Centre for Greenhouse Gases (WDCGG). We use the correlation coefficient (r), relative difference (RD), root mean square error (RMSE), and mean bias error (MBE) as evaluation indicators. Statistical results show a positive linear correlation between AIRS/IASI and WDCGG data for most regions of the world. Temporal and spatial variations of these statistics reveal clear differences between satellite-derived and ground-based data depending on geographic position, especially for stations near areas of intense human activity in the Northern Hemisphere. Notably, the correlation between AIRS/IASI data and ten ground-based observation stations in Europe, Asia, and North America is very weak. These results indicate that retrieval products from the two satellite-based instruments should be used with great caution.
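The four indicators can be computed as below. This is a sketch with assumed conventions, since exact formulas vary between studies; in particular, the RD here is taken as a mean pointwise relative difference, which is our assumption, not necessarily the article's definition. The input values are hypothetical.

```python
import math

def evaluation_stats(sat, ground):
    """Correlation (r), mean relative difference (RD), RMSE and MBE
    between satellite-retrieved and ground-based CO2 series."""
    n = len(sat)
    mean_s = sum(sat) / n
    mean_g = sum(ground) / n
    cov = sum((s - mean_s) * (g - mean_g) for s, g in zip(sat, ground))
    var_s = sum((s - mean_s) ** 2 for s in sat)
    var_g = sum((g - mean_g) ** 2 for g in ground)
    r = cov / math.sqrt(var_s * var_g)
    rd = sum((s - g) / g for s, g in zip(sat, ground)) / n
    rmse = math.sqrt(sum((s - g) ** 2 for s, g in zip(sat, ground)) / n)
    mbe = sum(s - g for s, g in zip(sat, ground)) / n
    return r, rd, rmse, mbe

sat = [395.1, 396.4, 397.0, 398.2]      # hypothetical CO2 values, ppm
ground = [394.8, 396.0, 397.5, 398.0]   # hypothetical station values, ppm
r, rd, rmse, mbe = evaluation_stats(sat, ground)
assert -1.0 <= r <= 1.0 and rmse >= 0.0
```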
Abstract: When the vectors of a judgement matrix in AHP are ranked according to their weights, uniform consistency is always required, and the check criterion is the uniformity check. This paper introduces a stronger uniform consistency check that achieves an exact and practical result by adjusting any matrix that fails the uniformity check.
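For context, the standard AHP consistency check that such work strengthens is Saaty's consistency ratio, CR = CI / RI with CI = (λ_max − n) / (n − 1). A minimal sketch (the paper's stronger check itself is not reproduced here):

```python
def principal_eigenvalue(A, iters=100):
    """Estimate the largest eigenvalue of a positive matrix by power iteration."""
    n = len(A)
    v = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(w)                   # normalize by the largest component
        v = [x / lam for x in w]
    return lam

def consistency_ratio(A):
    """Saaty's CR = CI / RI, with CI = (lambda_max - n) / (n - 1)."""
    RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}  # standard random indices
    n = len(A)
    ci = (principal_eigenvalue(A) - n) / (n - 1)
    return ci / RI[n]

# A perfectly consistent 3x3 judgement matrix (a_ij = w_i / w_j).
A = [[1.0,  2.0, 4.0],
     [0.5,  1.0, 2.0],
     [0.25, 0.5, 1.0]]
assert consistency_ratio(A) < 0.1   # CR below 0.1 is conventionally acceptable
```

For a perfectly consistent matrix λ_max = n, so CI and CR are zero; inconsistency pushes λ_max above n.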
Funding: This project was supported by the Zhejiang Provincial Natural Science Foundation of China (No. 601076).
Abstract: The main faults of current scale methods are that the scales do not represent the real importance of the alternatives and their relations. This paper presents a proportion judgement scale and introduces a new method, based on the proportion scale, for constructing the comparison matrix in the analytic hierarchy process (AHP). The proportion judgement scales do not have the faults of current scale methods, and the comparison matrix constructed with the new scale…
Abstract: Consistency checking is a fundamental computational problem in genetics. Given a pedigree and information on the genotypes of (some of) the individuals in it, the aim of consistency checking is to determine whether these data are consistent with the classic Mendelian laws of inheritance. This problem arose originally from the geneticists' need to filter erroneous information from their input data, and is well motivated from both a biological and a sociological viewpoint. This paper shows that consistency checking is NP-complete, even with focus on a single gene and in the presence of three alleles. Several other results on the computational complexity of problems from genetics related to consistency checking are also offered. In particular, it is shown that checking the consistency of pedigrees over two alleles, and of pedigrees without loops, can be done in polynomial time.
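The Mendelian constraint at the core of the problem is easy to state for a single parent-child trio: the child's unordered genotype must take one allele from each parent. A minimal sketch (our own illustration; the hardness result concerns whole pedigrees with missing genotypes, not single trios):

```python
def mendel_consistent(child, mother, father):
    """A child's unordered genotype must combine one allele from the
    mother with one allele from the father."""
    possible = {tuple(sorted((a, b))) for a in mother for b in father}
    return tuple(sorted(child)) in possible

# AA x aa parents can only produce Aa children.
assert mendel_consistent(("A", "a"), ("A", "A"), ("a", "a"))
assert not mendel_consistent(("a", "a"), ("A", "A"), ("a", "a"))
```

Checking one trio is trivial; the NP-completeness arises because unknown genotypes must be filled in consistently across all trios of a pedigree at once, which with three alleles amounts to a hard constraint-satisfaction problem.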
Funding: Supported by the CCF-Tencent Open Fund under Grant No. RAGR20200124 and the National Natural Science Foundation of China under Grant Nos. 61702253 and 61772258.
Abstract: MongoDB is one of the first commercial distributed databases to support causal consistency. Its implementation of causal consistency combines several research ideas for achieving scalability, fault tolerance, and security. Given its inherent complexity, a natural question arises: "Has MongoDB correctly implemented causal consistency as it claimed?" To address this concern, the Jepsen team conducted black-box testing of MongoDB. However, that testing has several drawbacks in terms of specification, test-case generation, implementation of the causal consistency checking algorithms, and testing scenarios, which undermine the credibility of its reports. In this work, we propose a more thorough design of Jepsen testing of causal consistency in MongoDB. Specifically, we fully implement the causal consistency checking algorithms proposed by Bouajjani et al. and test MongoDB against three well-known variants of causal consistency, namely CC, CCv, and CM, under various scenarios including node failures, data movement, and network partitions. In addition, we develop formal specifications of causal consistency and its checking algorithms in TLA+, verify them using the TLC model checker, and explain how the TLA+ specification relates to the Jepsen testing.
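A heavily simplified flavor of what such checkers verify: one necessary condition for causal consistency is that session order united with the read-from relation be acyclic. The sketch below is our own toy illustration, not the Bouajjani et al. algorithms (which additionally check the CC/CCv/CM axioms over full histories).

```python
from collections import defaultdict, deque

def causal_order_acyclic(sessions, read_from):
    """Necessary condition for causal consistency (greatly simplified):
    session order plus read-from must form a DAG.
    sessions: {session_id: [op, ...]}; read_from: {read_op: write_op}."""
    succ = defaultdict(set)
    indeg = defaultdict(int)
    nodes = {op for ops in sessions.values() for op in ops}
    def add_edge(u, v):
        if v not in succ[u]:
            succ[u].add(v)
            indeg[v] += 1
    for ops in sessions.values():
        for u, v in zip(ops, ops[1:]):   # session (program) order
            add_edge(u, v)
    for r, w in read_from.items():       # a read depends on its write
        add_edge(w, r)
    # Kahn's algorithm: all nodes drain iff the graph is acyclic.
    queue = deque(n for n in nodes if indeg[n] == 0)
    seen = 0
    while queue:
        u = queue.popleft()
        seen += 1
        for v in succ[u]:
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return seen == len(nodes)

# Fine: B reads A's write w1, then writes w2.
assert causal_order_acyclic({"A": ["w1"], "B": ["r1", "w2"]}, {"r1": "w1"})
# Violation: A reads w2 before issuing w1, while B reads w1 before w2.
assert not causal_order_acyclic({"A": ["r2", "w1"], "B": ["r1", "w2"]},
                                {"r1": "w1", "r2": "w2"})
```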
Funding: Co-supported by the National Natural Science Foundation of China (No. 51105369) and the Shanghai Aerospace Science and Technology Foundation (No. SAST201313).
Abstract: Testability plays an important role in improving the readiness and decreasing the lifecycle cost of equipment. Testability demonstration and evaluation is significant in measuring testability indexes such as the fault detection rate (FDR) and fault isolation rate (FIR); it helps the producer master the testability level and improve the testability design, and helps the consumer make purchase decisions. Aiming at the problems of a small sample of testability demonstration test data (TDTD), such as low evaluation confidence and inaccurate results, a testability evaluation method is proposed based on multi-source prior information and Bayes theory. First, the types of prior information are analyzed. The maximum entropy method is applied to prior information given as mean and interval estimates of the testability index to obtain the parameters of the prior probability density function (PDF), and the empirical Bayesian method is used to obtain the parameters for prior information in success-fail form. Then, a parametric data consistency check is used to test the compatibility between each source of prior information and the TDTD. For prior information that passes the check, a prior credibility is calculated, and a mixed prior distribution is formed from the prior PDFs and the corresponding credibilities. The Bayesian posterior distribution model is obtained from the mixed prior distribution and the TDTD, from which point and interval estimates are calculated. Finally, a flight control system example is used to verify the proposed method. The results show that the proposed method is feasible and effective.
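The conjugate update underlying the success-fail case can be sketched as follows. This shows only the textbook Beta-binomial step with hypothetical numbers, not the paper's maximum-entropy priors, consistency check, or prior mixing:

```python
def beta_posterior(a0, b0, successes, failures):
    """Conjugate Bayes update for a pass/fail testability index such as FDR:
    Beta(a0, b0) prior + binomial test data -> Beta(a0+s, b0+f) posterior."""
    a, b = a0 + successes, b0 + failures
    mean = a / (a + b)                          # posterior point estimate
    var = a * b / ((a + b) ** 2 * (a + b + 1))  # posterior variance
    return a, b, mean, var

# Hypothetical numbers: prior worth 18 detections out of 20 faults,
# plus a small demonstration test with 9 detections of 10 injected faults.
a, b, mean, var = beta_posterior(18, 2, 9, 1)
assert (a, b) == (27, 3)
assert abs(mean - 0.9) < 1e-12
```

The point of the paper's machinery is precisely that with such small samples, the prior dominates the estimate, so each prior source's compatibility with the test data must be checked and weighted before it is allowed to contribute.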
Funding: Supported by the National Grand Fundamental Research 973 Program of China (Grant No. 2002CB312103).
Abstract: An important topic in knowledge base revision is the design of an efficient implementation algorithm. Algebraic approaches have good characteristics and implementation methods, and may be a suitable choice for this problem. This paper presents an algebraic approach to revising propositional rule-based knowledge bases. First, a method is introduced to transform a propositional rule-based knowledge base into a Petri net: the knowledge base is represented by the net, and facts are represented by the initial marking. The consistency check of a knowledge base is thus equivalent to the reachability problem of Petri nets. Since the reachability of Petri nets can be decided by whether the state equation has a solution, the consistency check can also be implemented algebraically. Furthermore, algorithms are introduced to revise propositional rule-based knowledge bases, as well as extended logic programs. Compared with related works, the algorithms presented in this paper are efficient, with polynomial time complexities.
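The state-equation formulation can be illustrated on a tiny net. A necessary condition for marking m_target to be reachable from m0 is that m_target = m0 + C·x has a nonnegative integer solution x, where C is the incidence matrix and x counts transition firings. The brute-force search below is our own illustration, not the paper's polynomial algorithm:

```python
from itertools import product

def state_equation_solvable(C, m0, m_target, max_fires=3):
    """Search for a nonnegative integer firing-count vector x with
    m_target = m0 + C @ x. C is places x transitions; the bounded
    brute force is for illustration only."""
    places, transitions = len(C), len(C[0])
    for x in product(range(max_fires + 1), repeat=transitions):
        reached = [m0[p] + sum(C[p][t] * x[t] for t in range(transitions))
                   for p in range(places)]
        if reached == list(m_target):
            return True
    return False

# Two places, one transition moving a token from p0 to p1.
C = [[-1],
     [ 1]]
assert state_equation_solvable(C, [1, 0], [0, 1])       # fire t0 once
assert not state_equation_solvable(C, [1, 0], [2, 0])   # tokens cannot appear
```

In general, solvability of the state equation is necessary but not sufficient for reachability; the paper's transformation restricts the nets so that the algebraic check suffices for the consistency question.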
Abstract: The problem of ranking the importance of all n alternatives with respect to an upper-level attribute in terms of a given criterion scale is fundamental in the analytic hierarchy process (AHP). It requires a given criterion scale to obtain a paired comparison matrix, which must be consistent; the importance weight of each alternative is then derived from the matrix. In practical applications, however, the paired comparison matrix is usually inconsistent. This paper presents a completely new method for ranking the alternatives in AHP that needs neither a given criterion scale nor a paired comparison matrix, and directly obtains the importance weight of each alternative. It not only keeps real transitivity among the alternatives, but also keeps the relative importance between any two adjacent alternatives in the ranked sequence. Finally, two illustrative examples are given.
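For contrast with the comparison-matrix route this paper replaces, here is the classical step it avoids: deriving weights from a pairwise comparison matrix, sketched with the row geometric-mean method (one standard AHP prioritization technique; the matrix values are hypothetical):

```python
import math

def geometric_mean_weights(A):
    """Row geometric-mean weights for a pairwise comparison matrix:
    w_i proportional to (prod_j a_ij)^(1/n), normalized to sum to 1."""
    n = len(A)
    gm = [math.prod(row) ** (1.0 / n) for row in A]
    total = sum(gm)
    return [g / total for g in gm]

# A consistent matrix built from true weights 0.6 : 0.3 : 0.1 (a_ij = w_i/w_j).
A = [[1.0, 2.0, 6.0],
     [0.5, 1.0, 3.0],
     [1/6, 1/3, 1.0]]
w = geometric_mean_weights(A)
assert abs(w[0] - 0.6) < 1e-9 and abs(w[2] - 0.1) < 1e-9
```

When the matrix is consistent, this recovers the underlying weights exactly; when it is not, different prioritization methods disagree, which is the practical difficulty motivating matrix-free approaches like the one in this paper.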