Funding: Supported by Universiti Putra Malaysia and the Ministry of Education (MOE).
Abstract: As the amount of data continues to grow rapidly, the variety of data produced by applications is becoming richer than ever. Cloud computing is the best technology evolving today to provide multiple services for this mass and variety of data; its features support processing, managing, and storing all sorts of data. Although data is stored on many high-end nodes, either within the same data center or across many data centers in the cloud, performance issues are still inevitable. A cloud replication strategy is one of the best solutions for addressing the risk of performance degradation in the cloud environment. The real challenge is developing the right data replication strategy, with minimal data movement, that guarantees efficient network usage, fault tolerance, and minimal replication frequency. The key problem addressed in this research is the inefficient network usage that arises when selecting a suitable data center to store replica copies, caused by inadequate data center selection criteria. Hence, to mitigate this issue, we propose a Replication Strategy with a comprehensive Data Center Selection Method (RS-DCSM), which determines the appropriate data center in which to place replicas by considering three key factors: popularity, space availability, and centrality. The proposed RS-DCSM was simulated using CloudSim, and the results show that data movement between data centers is significantly reduced, with a 14% reduction in overall replication frequency and a 20% decrease in network usage, outperforming the current replication strategy known as the Dynamic Popularity aware Replication Strategy (DPRS) algorithm.
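The abstract does not state how the three selection factors are combined. The following is a minimal sketch, assuming a simple weighted composite score, of how a placement decision over popularity, space availability, and centrality might look; the class, weights, and normalization are illustrative and are not RS-DCSM's actual formulation.

    from dataclasses import dataclass

    @dataclass
    class DataCenter:
        name: str
        popularity: float    # assumed: normalized access popularity of the data, 0..1
        free_space: float    # free storage, e.g. in GB
        capacity: float      # total storage, e.g. in GB
        centrality: float    # assumed: normalized centrality of the DC in the topology, 0..1

    def selection_score(dc, w_pop=0.4, w_space=0.3, w_cent=0.3):
        # Weighted composite score; higher means a better placement candidate.
        # The weights are illustrative, not taken from the paper.
        space_availability = dc.free_space / dc.capacity if dc.capacity else 0.0
        return w_pop * dc.popularity + w_space * space_availability + w_cent * dc.centrality

    def select_data_center(candidates):
        # Place the replica in the highest-scoring data center.
        return max(candidates, key=selection_score)

    dcs = [DataCenter("dc-1", 0.9, 200.0, 1000.0, 0.5),
           DataCenter("dc-2", 0.4, 800.0, 1000.0, 0.8)]
    print(select_data_center(dcs).name)   # -> dc-2 under these illustrative inputs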
Abstract: Since each rock joint is unique by nature, the utilization of replicas in direct shear testing is required to carry out experimental parameter studies. However, information about the ability of the replicas to simulate the shear mechanical behavior of the rock joint, and about their dispersion in direct shear testing, is lacking. With the aim of facilitating the generation of high-quality direct shear test data from replicas, a novel component in the testing procedure is introduced by presenting two parameters for geometric quality assurance. The parameters are derived from surface comparisons of three-dimensional (3D) scanning data of the rock joint and its replicas. The first parameter, s_mf, captures morphological deviations between the replica and the rock joint surfaces; it is derived as the standard deviation of the deviations between the coordinate points of the replica and the rock joint. Four sources of error introduced in the replica manufacturing process employed in this study could be identified. These errors could be minimized, yielding replicas with s_mf ≤ 0.06 mm. The second parameter is a vector, V_Hp100, which describes deviations with respect to the shear direction. It is the projection of the 100 mm long normal vector of the best-fit plane of the replica joint surface onto the corresponding plane of the rock joint. |V_Hp100| was found to be less than or equal to 0.36 mm in this study. Application of these two geometric quality assurance parameters demonstrates that it is possible to manufacture replicas with high geometric similarity to the rock joint. In a subsequent paper (Part 2), s_mf and V_Hp100 are incorporated in a novel quality assurance method, in which the parameters shall be evaluated prior to direct shear testing. Replicas having parameter values below established thresholds shall have a known and narrow dispersion and imitate the shear mechanical behavior of the rock joint.
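As a rough illustration of the two parameters as described in the abstract, the sketch below computes s_mf as the standard deviation of point-wise deviations between registered replica and rock-joint scans, and V_Hp100 as the in-plane projection of the replica's 100 mm best-fit-plane normal onto the rock joint's best-fit plane. The function names, the assumption that the scans are already aligned and sampled at corresponding points, and the exact projection convention are interpretations, not the paper's published procedure.

    import numpy as np

    def s_mf(z_rock, z_replica):
        # Standard deviation (mm) of the point-wise deviations between corresponding,
        # already-registered surface points of the rock joint and its replica.
        deviations = np.asarray(z_replica, float) - np.asarray(z_rock, float)
        return float(np.std(deviations))

    def v_hp100(n_rock, n_replica, length_mm=100.0):
        # Projection of the replica's 100 mm best-fit-plane normal onto the rock
        # joint's best-fit plane; its magnitude indicates the tilt deviation (mm).
        n_rock = np.asarray(n_rock, float) / np.linalg.norm(n_rock)
        v = length_mm * np.asarray(n_replica, float) / np.linalg.norm(n_replica)
        return v - np.dot(v, n_rock) * n_rock   # in-plane component of the scaled normal

    # Example with synthetic deviations and nearly parallel plane normals
    print(s_mf([0.00, 0.10, -0.05], [0.02, 0.08, -0.01]))
    print(np.linalg.norm(v_hp100([0, 0, 1], [0.001, 0.002, 1.0])))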
Abstract: Each rock joint is unique by nature, which means that the utilization of replicas in direct shear tests is required in experimental parameter studies. However, a method to acquire knowledge about the ability of the replicas to imitate the shear mechanical behavior of the rock joint, and about their dispersion in direct shear testing, is lacking. In this study, a novel method is presented for geometric quality assurance of replicas. The aim is to facilitate the generation of high-quality direct shear testing data as a prerequisite for reliable subsequent analyses of the results. In Part 1 of this study, two quality assurance parameters, s_mf and V_Hp100, are derived and their usefulness for evaluation of geometric deviations, i.e. geometric reproducibility, is shown. In Part 2, the parameters are validated by showing a correlation between the parameters and the shear mechanical behavior, which qualifies the parameters for usage in the quality assurance method. Unique results from direct shear tests comparing replicas with the rock joint show that replicas fulfilling the proposed threshold values of s_mf < 0.06 mm and |V_Hp100| < 0.2 mm have a narrow dispersion and imitate the shear mechanical behavior of the rock joint in all aspects apart from having a slightly lower peak shear strength. The wear in these replicas, which have a morphology similar to that of the rock joint, occurs in the same areas as in the rock joint. The wear is slightly larger in the rock joint, and therefore the discrepancy in peak shear strength derives from differences in material properties, possibly from differences in toughness. Application of the suggested method shows that the quality-assured replicas manufactured following the process employed in this study phenomenologically capture the shear strength characteristics, which makes them useful in parameter studies.
Abstract: In a distributed storage system based on a three-replica strategy, when a disk on a storage node fails, the common approach is to wait for a preset system timeout; the replicas on the failed disk are recovered only if the disk has not come back within that time. The problem with this approach is that, when a replica group already contains a failed replica, a further disk failure affecting another replica in the same group leaves the system unable to continue providing service and unable to recover automatically. This paper proposes an improved Raft consensus algorithm based on log replicas, LR-Raft (log replica based Raft). A log replica has no complete state machine, so it can join the cluster quickly and participate in voting and consensus, improving system availability in the presence of failed disks; it can resolve the problem that failures of two of the three replicas within a short time render the cluster unavailable and cause data loss. Experimental results show that, after introducing log replicas into the replica group, LR-Raft significantly reduces read and write latency and substantially improves throughput compared with the original Raft under various workloads.
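As a toy illustration (not the paper's algorithm) of why a quickly-joining log replica helps availability: if the voting seat lost to the first disk failure is promptly filled by a log replica, a second failure still leaves a voting majority. The data structures and the quorum rule below are generic Raft-style assumptions.

    from dataclasses import dataclass

    @dataclass
    class Member:
        name: str
        has_state_machine: bool   # log replicas store the log but do not apply it
        healthy: bool

    def has_quorum(group):
        # A Raft-style group stays available while a majority of voting members
        # (full replicas and log replicas alike) is healthy.
        healthy = sum(1 for m in group if m.healthy)
        return healthy > len(group) // 2

    # Three voting members: two full replicas plus a log replica that replaced
    # the member lost to the first disk failure (it joined quickly because it
    # has no state machine to rebuild).
    group = [Member("full-1", True, True),
             Member("log-1", False, True),
             Member("full-3", True, True)]
    group[2].healthy = False              # a second disk fails shortly afterwards
    print(has_quorum(group))              # True: 2 of 3 voters remain healthy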
Abstract: Replicas can improve data reliability in a distributed system. However, traditional algorithms for replica management are based on the assumption that all replicas have uniform reliability, which is inaccurate in some actual systems. To address this problem, a novel algorithm based on dynamic programming is proposed to manage the number and distribution of replicas across different nodes. Using a Markov model, replica management is organized as a multi-phase process, and the recursion equations are provided. The algorithm takes into account the heterogeneity of the nodes, the expense of maintaining replicas, and the storage space occupied. Under these constraints, the algorithm achieves high data reliability in a distributed system. The results of a case analysis prove the feasibility of the algorithm.
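The abstract does not reproduce the recursion equations. The sketch below shows, under assumed per-node failure probabilities and storage costs, how a dynamic-programming recursion over nodes can trade replica placement against a storage budget to maximize data availability; it illustrates the general technique only, not the paper's Markov-based formulation.

    from functools import lru_cache

    # Illustrative inputs: per-node probability of losing its replica, per-node
    # storage cost of holding a replica, and an overall storage budget.
    failure_prob = [0.10, 0.05, 0.20, 0.15]
    cost         = [3, 5, 2, 4]
    budget = 8

    @lru_cache(maxsize=None)
    def best_loss(i, remaining):
        # Minimum achievable probability that *all* placed replicas are lost,
        # deciding for nodes i..n-1 with `remaining` storage budget.
        if i == len(failure_prob):
            return 1.0                       # no further replica placed
        skip = best_loss(i + 1, remaining)   # do not place a replica on node i
        take = 1.0
        if cost[i] <= remaining:             # place a replica on node i if affordable
            take = failure_prob[i] * best_loss(i + 1, remaining - cost[i])
        return min(skip, take)

    print("max availability:", 1 - best_loss(0, budget))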
Funding: Supported by the Electronic Industry Development Fund of MII, "Multi-Function Network Server".
Abstract: This paper introduces replication management policies in distributed file systems and presents FDRM, a novel decentralized dynamic replication management mechanism based on access-frequency detection. In FDRM, in order to provide better system performance and reduce network traffic, system nodes scan their local replicas to monitor the replicas' access patterns and make decisions independently to add, delete, or migrate replicas. In addition, the scanning interval of a replica varies according to the access frequency of that replica, which makes FDRM more sensitive to changes in system behavior, so that better performance can be obtained with less system overhead. Experiments show the efficiency and performance improvement of FDRM.
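A minimal sketch of the idea of frequency-dependent scanning: a heavily accessed replica is re-examined sooner than a cold one. The interval bounds and the inverse-frequency rule are assumptions made for illustration, not FDRM's published policy.

    def next_scan_interval(accesses_since_last_scan, last_interval_s,
                           min_interval_s=10.0, max_interval_s=3600.0):
        # Accesses per second observed since the previous scan of this replica.
        freq = accesses_since_last_scan / last_interval_s
        if freq == 0:
            # Cold replica: back off and scan less often.
            return min(last_interval_s * 2, max_interval_s)
        # Hot replica: scan sooner (roughly one scan per 100 accesses, clamped).
        return max(min_interval_s, min(max_interval_s, 100.0 / freq))

    print(next_scan_interval(500, 60.0))   # hot replica -> short interval
    print(next_scan_interval(0, 60.0))     # idle replica -> longer interval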
Funding: Supported by the Shanghai Scientific and Technological Commission.
Abstract: Twenty-one healthy hybrid dogs weighing (7 ± 3.2) kg, whose blood tested negative for eperythrozoon, were intramuscularly injected with tetracycline at a dose of 10 mg/kg body weight for 5 consecutive days and were divided randomly into three groups. The treatments were, respectively: Group I, lienectomy; Group II, lienectomy plus injection of an immunosuppressant; and Group III, control. All dogs were intraperitoneally injected with 4 mL of anticoagulated blood from a dog with eperythrozoonosis (eperythrozoon infection rate 94%, erythrocyte count 3.8×10⁶/mm³). The results showed that the eperythrozoon infection rates in Groups I and II were 1.5% and 2.1%, respectively, on the first day after administration. On the sixth day, the infection rates of Groups I, II, and III were 81.3%, 86.5%, and 75.2%, respectively. The hematological changes in Groups I and II included decreases in packed cell volume (HCT/PCV), hemoglobin, total erythrocyte count, eosinophil count, and lymphocyte count, and rises in total leukocyte, neutrophil, and monocyte counts. Five dogs in Group I and seven dogs in Group II showed apparent symptoms of anaemia, icterus, high fever, anorexia, diarrhea, emesis, etc. The morbidity rates in Groups I and II were 71% and 100%, respectively, and two dogs in Group II died. The changes in clinical symptoms, the hemogram, and the physiological indexes in Groups I and II were the same as those in natural eperythrozoonosis.
Funding: Financed by grants from the Ministry of Science and Technology, China (2007BA128B00), the Netherlands Academy of Science (08CDP011), and Radboud University, the Netherlands (R0000463).
Abstract: To compare the levels of agreement and the survival rates of sealant retention for different sealing materials over a 2-year period, assessed using the visual clinical examination and replica methods, sealant retention data were obtained by visual clinical examination and from replicas of the same sealed tooth at baseline and at the 0.5-, 1-, and 2-year evaluation points in 407 children, and were compared for agreement using kappa coefficients. Survival curves of retained sealants on occlusal surfaces were created using a modified categorisation (fully retained sealants and those having all pits and fissures partly covered with the sealant material versus completely lost sealants, which included pit and fissure systems that had ≥1 pit re-exposed) according to the Kaplan-Meier method. The kappa coefficient for the agreement between the two assessment methods over the three evaluation time points combined was 0.38 (95% confidence interval (CI): 0.35-0.41). More sealant retention was observed from replicas than through visual clinical examination. Cumulative survival curves at the three evaluation times were not statistically significantly higher when assessed from replicas (P = 0.47). Using the replica method, more retained sealant material was observed than through visual clinical examination during the 2-year period. This finding did not result in a difference in the survival rates of sealants assessed by the two methods. When replicas cast in die stone are used for assessing sealant retention, the level of reliability of the data is higher than that of data obtained through the commonly used visual clinical examination, particularly if such assessments are conducted over time.
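For reference, the kappa coefficient quoted above measures agreement between the two assessment methods beyond chance. Below is a minimal sketch of Cohen's kappa on a made-up 2×2 retained/lost contingency table; the counts are illustrative and are not the study's data.

    def cohens_kappa(table):
        # table[i][j] = number of surfaces rated category i by method A and j by method B.
        n = len(table)
        total = sum(sum(row) for row in table)
        observed = sum(table[i][i] for i in range(n)) / total
        row_m = [sum(row) / total for row in table]
        col_m = [sum(table[i][j] for i in range(n)) / total for j in range(n)]
        expected = sum(r * c for r, c in zip(row_m, col_m))
        return (observed - expected) / (1 - expected)

    # Rows/columns: retained vs. lost as judged by visual examination and by replica.
    print(round(cohens_kappa([[120, 30], [45, 60]]), 2))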
Abstract: In a distributed parallel server system, the location and redundancy of replicas have a great influence on the availability and efficiency of the system. In order to improve availability and efficiency, a two-phase decision algorithm for replica allocation is proposed. The algorithm uses an auto-regression model to dynamically predict the future counts of READ and WRITE operations, and then determines the location and redundancy of replicas by considering availability, CPU, and network bandwidth. The algorithm can not only ensure the availability requirement but also greatly reduce the system resources consumed by all operations. Analysis and tests show that both the communication complexity and the time complexity of the algorithm are O(n), and that the scale of resource optimization increases with the READ count.
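The abstract mentions an auto-regression model for predicting future READ and WRITE counts. Below is a minimal sketch of a one-step-ahead AR(p) forecast fitted by least squares; the order, the sample history, and the omission of an intercept term are illustrative assumptions, not the paper's model.

    import numpy as np

    def ar_predict(history, p=3):
        # Fit AR(p) coefficients to `history` (operation counts per interval)
        # and return a one-step-ahead forecast of the next interval's count.
        x = np.asarray(history, dtype=float)
        X = np.array([x[i:i + p] for i in range(len(x) - p)])   # lagged windows
        y = x[p:]                                               # value following each window
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        return float(x[-p:] @ coeffs)

    read_counts = [120, 135, 150, 160, 158, 170, 180, 178]     # illustrative history
    print(round(ar_predict(read_counts)))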