An efficient trust-aware secure routing and network strategy-based data collection scheme is presented in this paper to enhance the performance and security of wireless sensor networks during data collection. The method first discovers the routes between the data sensors and the sink node. Several factors are considered for each sensor node along the route, including energy, number of neighbours, previous transmissions, and energy depletion ratio. Considering all these variables, the method evaluates two distinct measures: the Sink Reachable Support Measure and the Secure Communication Support Measure. From these two metrics, the method calculates the data carrier support value, and a single route is chosen for data collection based on that value. This work also contributes the design of Secure Communication Support (SCS) estimation, which is measured according to the strategy of each hop of the route. The suggested method improves the security and efficacy of data collection in wireless sensor networks. The second stage uses the Twofish approach to build a trust model for secure data transfer. A simulation exercise was conducted to evaluate the effectiveness of the suggested framework. Metrics including PDR, end-to-end latency, and average residual energy were assessed for the proposed model. The average residual energy of the proposed framework serves as evidence of the efficiency of the suggested route design.
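The route-selection step described above can be sketched as scoring each candidate route by combining the two measures into one data carrier support value. The weighted blend below is a hypothetical illustration (the abstract does not give the authors' exact formula), and the route data are made up:

```python
# Hypothetical sketch: combine the Sink Reachable Support Measure (SRSM)
# and the Secure Communication Support Measure (SCSM) into one data
# carrier support (DCS) score per route, then pick the best route.
# The linear weighting with alpha is an assumption, not the paper's formula.

def data_carrier_support(srsm, scsm, alpha=0.5):
    """Blend the two per-route measures into one score (assumed form)."""
    return alpha * srsm + (1 - alpha) * scsm

def select_route(routes):
    """Pick the route with the highest data carrier support value."""
    return max(routes, key=lambda r: data_carrier_support(r["srsm"], r["scsm"]))

routes = [
    {"id": 1, "srsm": 0.8, "scsm": 0.6},
    {"id": 2, "srsm": 0.7, "scsm": 0.9},
]
best = select_route(routes)
```

With equal weights, route 2 wins here because its higher secure-communication support outweighs its slightly lower sink-reachability support.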
Out-of-band radiation is a severe problem for Cognitive Radio with OFDM systems (CR-OFDM), caused by the sidelobes of OFDM signals. Many studies have addressed suppressing the sidelobe power, and numerous methods have been proposed. In this paper, we propose a novel method to minimize the sidelobe by adding extended data carriers (EDCs) to the original data carriers so as to protect the primary user (PU) spectrum. Unlike previous methods, the EDCs are deployed within the secondary user (SU) data frequency spectrum to fully use the spectrum. Moreover, we derive a linear least squares problem to obtain the optimal weighting factors of the EDCs that minimize the sidelobe power, subject to an original-data interference constraint. Simulations show that EDC is more capable of sidelobe suppression than the Cancellation Carrier (CC) method, while EDC incurs only a small loss in BER performance.
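The core of the weight optimization is an ordinary least squares problem: if each EDC's sidelobe response is a column of a matrix A and the data carriers' sidelobe is a vector b, the weights w minimize ||Aw + b||². A minimal two-unknown sketch via the normal equations, with made-up A and b (the interference constraint is omitted for brevity):

```python
# Illustrative least-squares step (not the paper's exact formulation):
# choose weighting factors w for two EDCs so their combined sidelobe
# contribution A*w best cancels the data carriers' sidelobe b,
# i.e. minimize ||A w + b||^2 via w = -(A^T A)^-1 A^T b.

def lstsq_2var(A, b):
    """Solve min ||A w + b||^2 for 2 unknowns via the normal equations."""
    m = len(A)
    ata = [[sum(A[k][i] * A[k][j] for k in range(m)) for j in range(2)] for i in range(2)]
    atb = [sum(A[k][i] * b[k] for k in range(m)) for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    # w = -(AtA)^-1 Atb, 2x2 inverse written out explicitly
    w0 = -(ata[1][1] * atb[0] - ata[0][1] * atb[1]) / det
    w1 = -(-ata[1][0] * atb[0] + ata[0][0] * atb[1]) / det
    return [w0, w1]

A = [[1.0, 0.5], [0.2, 1.0], [0.3, 0.1]]  # sidelobe response of each EDC (made up)
b = [0.7, 0.4, 0.1]                        # sidelobe of the original data carriers (made up)
w = lstsq_2var(A, b)
residual = sum((sum(A[k][i] * w[i] for i in range(2)) + b[k]) ** 2 for k in range(len(A)))
```

The residual sidelobe power after weighting is far below the unweighted power ||b||², which is the effect the EDC method exploits.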
With the rapid development of information technology, IoT devices play a huge role in physiological health data detection. The exponential growth of medical data requires us to reasonably allocate storage space between cloud servers and edge nodes. The storage capacity of edge nodes close to users is limited, so hotspot data should be stored in edge nodes as much as possible to ensure response timeliness and access hit rate. However, current schemes cannot guarantee that every sub-message of a complete piece of data stored by an edge node meets the requirements of hot data. How to detect and delete redundant data in edge nodes while protecting user privacy and dynamic data integrity has therefore become a challenging problem. Our paper proposes a redundant data detection method that meets privacy protection requirements. By scanning the ciphertext, it determines whether each sub-message of the data in the edge node meets the requirements of hot data. It has the same effect as a zero-knowledge proof and does not reveal user privacy. In addition, for redundant sub-data that does not meet the requirements of hot data, our paper proposes a redundant data deletion scheme that preserves the dynamic integrity of the data. We use a Content Extraction Signature (CES) to generate a signature over the remaining hot data after the redundant data is deleted. The feasibility of the scheme is proved through security analysis and efficiency analysis.
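The key property of a content extraction signature is that sub-messages can be deleted while the rest of the document stays verifiable. A minimal sketch of that idea: sign each sub-message independently, so any extracted subset carries its own valid signatures. HMAC stands in here for the real public-key signatures a CES uses, and the block names are made up:

```python
import hashlib
import hmac

KEY = b"demo-key"  # stand-in for the signer's key; a real CES uses public-key signatures

def sign_blocks(blocks):
    """Sign each sub-message independently so any subset stays verifiable."""
    return [hmac.new(KEY, b, hashlib.sha256).hexdigest() for b in blocks]

def extract(blocks, sigs, keep):
    """Delete redundant (cold) sub-messages, keeping the matching signatures."""
    return [blocks[i] for i in keep], [sigs[i] for i in keep]

def verify(blocks, sigs):
    return all(hmac.new(KEY, b, hashlib.sha256).hexdigest() == s
               for b, s in zip(blocks, sigs))

blocks = [b"hot-1", b"cold", b"hot-2"]  # sub-messages; "cold" fails the hot-data test
sigs = sign_blocks(blocks)
hot_blocks, hot_sigs = extract(blocks, sigs, keep=[0, 2])
```

After deleting the cold sub-message, the remaining hot data still verifies against its retained signatures, which is the dynamic-integrity property the scheme relies on.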
Missing values are one of the main factors that cause dirty data. Without high-quality data, there will be no reliable analysis results and precise decision-making. Therefore, the data warehouse needs to integrate high-quality data consistently. In the power system, the electricity consumption data of some large users cannot be normally collected, resulting in missing data, which affects the calculation of power supply and eventually leads to a large error in the daily power line loss rate. For the problem of missing electricity consumption data, this study proposes a group method of data handling (GMDH) based data interpolation method for distribution power networks and applies it to the analysis of actually collected electricity data. First, the dependent and independent variables are defined from the original data, and the upper and lower limits of missing values are determined according to prior knowledge or existing data information. All missing data are randomly interpolated within these limits. Then, a GMDH network is established to obtain the optimal-complexity model, which is used to predict the missing data and replace the last imputed electricity consumption values. Finally, this process is iterated until the missing values no longer change. Under a relatively small noise level (α = 0.25), the proposed approach achieves a maximum error of no more than 0.605%. Experimental findings demonstrate the efficacy and feasibility of the proposed approach, which realizes the transformation from incomplete data to complete data. This data interpolation approach also provides a strong basis for electricity theft diagnosis and metering fault analysis in electricity enterprises.
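The iterative imputation loop described above (random fill within bounds, fit, re-predict the missing entries, repeat until they stop changing) can be sketched with a plain linear fit standing in for the GMDH network; the assumption is only that some regressor maps the independent variable to consumption. The data and bounds below are illustrative:

```python
import random

def fit_line(xs, ys):
    """Least-squares line, standing in for the GMDH optimal-complexity model."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return lambda x: my + slope * (x - mx)

def impute(x, y, lo, hi, tol=1e-6, max_iter=200):
    random.seed(0)  # reproducible random initial fill
    miss = [i for i, v in enumerate(y) if v is None]
    y = [random.uniform(lo, hi) if v is None else v for v in y]  # random fill within limits
    for _ in range(max_iter):
        model = fit_line(x, y)
        delta = 0.0
        for i in miss:                         # re-predict only the originally missing entries
            new = min(max(model(x[i]), lo), hi)  # clamp to the prior bounds
            delta = max(delta, abs(new - y[i]))
            y[i] = new
        if delta < tol:                        # stop when imputed values no longer change
            break
    return y

x = [1, 2, 3, 4, 5, 6]
y = [2.0, 4.1, None, 8.0, None, 12.1]  # consumption readings with gaps (made up)
filled = impute(x, y, lo=0.0, hi=20.0)
```

Because the observed readings here lie almost exactly on a line of slope 2, the fixed point of the iteration places the imputed values near 6 and 10.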
Cu2ZnSn(S,Se)4 (CZTSSe) solar cells have resource distribution and economic advantages. The main cause of their low efficiency is carrier loss resulting from the recombination of photo-generated electrons and holes. To overcome this, it is important to understand their electron-hole behavior characteristics. To determine the carrier separation characteristics, we measured the surface potential and the local current as functions of the absorber depth. The elemental variation in the intragrains (IGs) and at the grain boundaries (GBs) caused a band edge shift and bandgap (Eg) change. At the absorber surface and subsurface, an upward Ec and Ev band bending structure was observed at the GBs, and the carrier separation was improved. At the absorber center, both upward Ec and Ev and downward Ec-upward Ev band bending structures were observed at the GBs, and the carrier separation was degraded. To improve carrier separation and suppress carrier recombination, an upward Ec and Ev band bending structure at the GBs is desirable.
Time-series data provide important information in many fields, and their processing and analysis have been the focus of much research. However, detecting anomalies is very difficult due to data imbalance, temporal dependence, and noise. Therefore, methodologies for data augmentation and for converting time series into images for analysis have been studied. This paper proposes a fault detection model that uses time series data augmentation and transformation to address the problems of data imbalance, temporal dependence, and robustness to noise. The data augmentation method is the addition of noise: Gaussian noise, with the noise level set to 0.002, is added to maximize the generalization performance of the model. In addition, we use the Markov Transition Field (MTF) method to effectively visualize the dynamic transitions of the data while converting the time series into images. This enables the identification of patterns in time series data and assists in capturing their sequential dependencies. For anomaly detection, the PatchCore model is applied and shows excellent performance, with the detected anomaly areas represented as heat maps. By applying an anomaly map to the original image, it is possible to localize the areas where anomalies occur. The performance evaluation shows that both F1-score and accuracy are high when time series data are converted to images. Additionally, when the data were processed as images rather than as time series, both the data size and the training time were significantly reduced. The proposed method can provide an important springboard for research in the field of anomaly detection using time series data, and it helps solve problems such as analyzing complex patterns in data in a lightweight manner.
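The two preprocessing steps above can be sketched directly. The MTF construction below follows the standard recipe (bin the series into quantile states, estimate the state transition matrix W, then set MTF[i][j] = W[state(i)][state(j)]); the paper's exact binning may differ, and the series is made up:

```python
import random

def mtf(series, n_bins=3):
    """Minimal Markov Transition Field: series -> n x n image."""
    # Assign each point to a quantile bin (state)
    order = sorted(range(len(series)), key=lambda i: series[i])
    states = [0] * len(series)
    for rank, idx in enumerate(order):
        states[idx] = min(rank * n_bins // len(series), n_bins - 1)
    # Markov transition matrix between consecutive states
    W = [[0.0] * n_bins for _ in range(n_bins)]
    for a, b in zip(states, states[1:]):
        W[a][b] += 1.0
    for row in W:
        s = sum(row)
        if s:
            for j in range(n_bins):
                row[j] /= s
    # Spread transition probabilities over all time-point pairs
    n = len(series)
    return [[W[states[i]][states[j]] for j in range(n)] for i in range(n)]

series = [0.1, 0.2, 0.9, 0.8, 0.1, 0.3]
noisy = [v + random.gauss(0.0, 0.002) for v in series]  # Gaussian-noise augmentation step
img = mtf(series, n_bins=2)
```

Each pixel of the resulting image is a transition probability in [0, 1], so the image directly visualizes how likely the series is to move between the amplitude states of the two time points.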
We measured the time-resolved terahertz spectroscopy of a GeSn thin film and studied the ultrafast dynamics of its photo-generated carriers. The experimental results show that photo-generated carriers appear in GeSn under femtosecond laser excitation at 2500 nm, and its pump-induced photoconductivity can be explained by the Drude–Smith model. The carrier recombination process is mainly dominated by defect-assisted Auger processes and defect capture. The first- and second-order recombination rates obtained by rate equation fitting are (2.6±1.1)×10^(-2) ps^(-1) and (6.6±1.8)×10^(-19) cm^(3)·ps^(-1), respectively. Meanwhile, we also obtained the diffusion length of photo-generated carriers in GeSn, which is about 0.4 μm and changes with the pump delay time. These results are important for GeSn-based infrared optoelectronic devices and demonstrate that GeSn materials can be applied to high-speed optoelectronic detectors and other applications.
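The rate equation behind the fit combines a first-order and a second-order loss channel, dn/dt = -k1·n - k2·n². A numerical sketch using the fitted rates quoted above; the initial carrier density n0 is an assumed value, not from the paper:

```python
# dn/dt = -k1*n - k2*n^2, with k1 the first-order rate and k2 the
# second-order rate from the abstract. n0 is an assumed photo-carrier density.

k1 = 2.6e-2    # ps^-1, first-order recombination rate (from the abstract)
k2 = 6.6e-19   # cm^3 * ps^-1, second-order recombination rate (from the abstract)
n0 = 1.0e18    # cm^-3, assumed initial photo-carrier density

def evolve(n_start, t_max_ps, dt=0.01):
    """Explicit-Euler integration of dn/dt = -k1*n - k2*n^2."""
    n, t = n_start, 0.0
    while t < t_max_ps:
        n += dt * (-k1 * n - k2 * n * n)
        t += dt
    return n

n_100ps = evolve(n0, 100.0)
```

At this density the second-order term initially dominates (k2·n0 ≈ 0.66 ps⁻¹ versus k1 ≈ 0.026 ps⁻¹), so most carriers are lost within the first 100 ps.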
A Diesel Particulate Filter (DPF) is a critical device for treating diesel engine exhaust products. When using active-regeneration purification methods, on the one hand, a spatially irregular gas flow can produce relatively high local temperatures, potentially resulting in damage to the carrier; on the other hand, the internal temperature field can also undergo significant changes, contributing to this risk. This study explores the gas flow uniformity in a DPF carrier and the related temperature behavior under the drop-to-idle (DTI) condition by means of bench tests. It is shown that the considered silicon carbide carrier exhibits good flow uniformity, with a temperature deviation of no more than 2% among measurement points at the same radius at the outlet during the regeneration stage. In the DTI test, the temperature is relatively high within r/2 near the outlet end, where the maximum temperature peak occurs, and the maximum radial temperature gradient is located between r/2 and the edge. Both these quantities grow as the soot load increases, making the risk of carrier burnout greater. Finally, it is shown that the soot load limit of the silicon carbide DPF can be extended to 11 g/L, which reduces the frequency of active regeneration by approximately 40% compared with a cordierite DPF.
Large-scale wireless sensor networks (WSNs) play a critical role in monitoring dangerous scenarios and responding to medical emergencies. However, the inherent instability and error-prone nature of wireless links present significant challenges, necessitating efficient data collection and reliable transmission services. This paper addresses the limitations of existing data transmission and recovery protocols by proposing a systematic end-to-end design tailored for medical event-driven, cluster-based, large-scale WSNs. The primary goal is to enhance the reliability of data collection and transmission services, ensuring a comprehensive and practical approach. Our approach focuses on refining the hop-count-based routing scheme to achieve fairness in forwarding reliability. Additionally, it emphasizes reliable data collection within clusters and establishes robust data transmission over multiple hops. These systematic improvements are designed to optimize the overall performance of the WSN in real-world scenarios. Simulation results validate the exceptional performance of the proposed protocol compared with other prominent data transmission schemes. The evaluation spans varying sensor densities, wireless channel conditions, and packet transmission rates, showcasing the protocol's superiority in ensuring reliable and efficient data transfer. Our systematic end-to-end design successfully addresses the challenges posed by the instability of wireless links in large-scale WSNs. By prioritizing fairness, reliability, and efficiency, the proposed protocol demonstrates its efficacy in enhancing data collection and transmission services, thereby offering a valuable contribution to the field of medical event-driven WSNs.
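Hop-count-based routing schemes like the one refined above start from a hop-count assignment: a breadth-first flood from the sink labels every node with its minimum hop distance, and packets are then forwarded toward lower hop counts. A minimal sketch with a made-up topology:

```python
from collections import deque

def hop_counts(adj, sink):
    """BFS from the sink; each node's level is its minimum hop count."""
    hops = {sink: 0}
    q = deque([sink])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in hops:
                hops[v] = hops[u] + 1
                q.append(v)
    return hops

# Illustrative topology: adjacency lists of a small sensor network
adj = {
    "sink": ["a", "b"],
    "a": ["sink", "c"],
    "b": ["sink", "c"],
    "c": ["a", "b", "d"],
    "d": ["c"],
}
levels = hop_counts(adj, "sink")
```

Node `c` can reach the sink through either `a` or `b`; balancing the forwarding load between such equal-hop parents is where the fairness refinement the paper proposes comes in.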
Guest Editors: Prof. Ling Tian, University of Electronic Science and Technology of China, lingtian@uestc.edu.cn; Prof. Jian-Hua Tao, Tsinghua University, jhtao@tsinghua.edu.cn; Dr. Bin Zhou, National University of Defense Technology, binzhou@nudt.edu.cn. Since the concept of "Big Data" was first introduced in Nature in 2008, it has been widely applied in fields such as business, healthcare, national defense, education, transportation, and security. With the maturity of artificial intelligence technology, big data analysis techniques tailored to various fields have made significant progress, but they still face many challenges in terms of data quality, algorithms, and computing power.
Capabilities to assimilate Geostationary Operational Environmental Satellite R-series (GOES-R) Geostationary Lightning Mapper (GLM) flash extent density (FED) data within the operational Gridpoint Statistical Interpolation ensemble Kalman filter (GSI-EnKF) framework were previously developed and tested with a mesoscale convective system (MCS) case. In this study, such capabilities are further developed to assimilate GOES GLM FED data within the GSI ensemble-variational (EnVar) hybrid data assimilation (DA) framework. The results of assimilating the GLM FED data using 3DVar and pure En3DVar (PEn3DVar, using 100% ensemble covariance and no static covariance) are compared with those of EnKF/DfEnKF for a supercell storm case. The focus of this study is to validate the correctness and evaluate the performance of the new implementation rather than to compare the performance of FED DA among different DA schemes. Only the results of 3DVar and PEn3DVar are examined and compared with EnKF/DfEnKF. Assimilation of a single FED observation shows that the magnitude and horizontal extent of the analysis increments from PEn3DVar are generally larger than those from EnKF, which is mainly caused by the different localization strategies used in EnKF/DfEnKF and PEn3DVar, as well as by the integration limits of the graupel mass in the observation operator. Overall, the forecast performance of PEn3DVar is comparable to that of EnKF/DfEnKF, suggesting a correct implementation.
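The single-observation test mentioned above probes the analysis increment K·(y − Hx̄), where the Kalman gain K is built from the ensemble background covariance B and the observation error R. A scalar, identity-operator sketch (the numbers are illustrative, not GLM FED values, and localization and the real FED observation operator are omitted):

```python
def kalman_increment(ensemble, y_obs, obs_var):
    """Analysis increment for the ensemble mean: K*(y - H x_bar), H = identity."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)  # ensemble estimate of B
    gain = var / (var + obs_var)                            # K = B H^T (H B H^T + R)^-1
    return gain * (y_obs - mean)

# Four-member toy ensemble, one observation with unit error variance
inc = kalman_increment([1.0, 2.0, 3.0, 4.0], y_obs=5.0, obs_var=1.0)
```

Here B ≈ 1.667, so the gain is 0.625 and the mean is pulled 62.5% of the way toward the observation; differences in how B is localized are exactly what makes the PEn3DVar and EnKF increments differ in the study.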
It is a challenge to coordinate the carrier-kinetics performance and the redox capacity of photogenerated charges synchronously at the atomic level for boosting photocatalytic activity. Herein, atomic Ni was introduced into the lattice of hexagonal ZnIn_(2)S_(4) nanosheets (Ni/ZnIn_(2)S_(4)) via directional substitution of Zn atoms using a facile hydrothermal method. Electronic structure calculations indicate that the introduced Ni atoms effectively extract more electrons and act as active sites for the subsequent reduction reaction. Besides the optimized light absorption range, the elevation of E_(f) and E_(CB) endows the Ni/ZnIn_(2)S_(4) photocatalyst with an increased electron concentration and enhanced reduction ability for the surface reaction. Moreover, ultrafast transient absorption spectroscopy, as well as a series of electrochemical tests, demonstrates that Ni/ZnIn_(2)S_(4) possesses a 2.15-times longer lifetime of the excited charge carriers and an order-of-magnitude increase in carrier mobility and separation efficiency compared with pristine ZnIn_(2)S_(4). These efficient charge-carrier kinetics and the enhanced redox capacity synergistically boost photocatalytic activity, with a 3-times higher conversion efficiency of nitrobenzene reduction achieved with Ni/ZnIn_(2)S_(4). Our study not only provides in-depth insights into the effect of atomic directional substitution on the kinetic behavior of photogenerated charges, but also opens an avenue to the synchronous optimization of redox capacity and carrier kinetics for efficient solar energy conversion.
In source detection in the Tianlai project, accurately locating the interferometric fringes in visibility data strongly influences downstream tasks such as physical parameter estimation and weak source exploration. Considering that traditional locating methods are time-consuming and supervised methods require a large quantity of expensive labeled data, in this paper we first investigate the characteristics of interferometric fringes in simulation and in real scenarios separately, and integrate an almost parameter-free unsupervised clustering method with seed-filling and eraser algorithms to propose a hierarchical plug-and-play method that improves location accuracy. We then apply our method to locate the interferometric fringes of single and multiple sources in simulation data, and next to real data taken from the Tianlai radio telescope array. Finally, we compare it with state-of-the-art unsupervised methods. These results show that our method is robust in different scenarios and can effectively improve location measurement accuracy.
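The seed-filling step referenced above is a flood fill: starting from a seed pixel, grow a connected region over the above-threshold pixels of a binary fringe mask. A minimal 4-connected sketch on a made-up mask:

```python
from collections import deque

def seed_fill(mask, seed):
    """Return the set of pixels 4-connected to `seed` where mask is 1."""
    h, w = len(mask), len(mask[0])
    region, q = {seed}, deque([seed])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and mask[nr][nc] and (nr, nc) not in region:
                region.add((nr, nc))
                q.append((nr, nc))
    return region

# Illustrative binary mask: a small fringe blob plus an isolated noise pixel
mask = [
    [0, 1, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 1],
]
fringe = seed_fill(mask, (0, 1))
```

The isolated pixel at (2, 3) is not reached from the seed, which is how seed filling separates a fringe region from scattered noise; an "eraser" pass can then delete such isolated pixels.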
With the development of Industry 4.0 and big data technology, the Industrial Internet of Things (IIoT) is hampered by inherent issues such as privacy, security, and fault tolerance, which pose certain challenges to its rapid development. Blockchain technology has immutability, decentralization, and autonomy, which can greatly remedy the inherent defects of the IIoT. In a traditional blockchain, data is stored in a Merkle tree. As data continues to grow, the size of the proofs used to validate it grows, threatening the efficiency, security, and reliability of blockchain-based IIoT. Accordingly, this paper first analyzes the inefficiency of the traditional blockchain structure in verifying the integrity and correctness of data. To solve this problem, a new Vector Commitment (VC) structure, Partition Vector Commitment (PVC), is proposed by improving the traditional VC structure. Secondly, this paper uses PVC instead of the Merkle tree to store big data generated by the IIoT, improving on the efficiency of traditional VC in the commitment and opening processes. Finally, this paper uses PVC to build a blockchain-based IIoT data security storage mechanism and carries out a comparative experimental analysis. This mechanism can greatly reduce communication loss and make maximal, rational use of storage space, which is of great significance for maintaining the security and stability of blockchain-based IIoT.
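The problem the paper starts from is concrete in a Merkle tree: an inclusion proof carries one sibling hash per tree level, so proof size grows as O(log n) with the data, whereas vector commitments target constant-size openings. A textbook Merkle sketch (not the paper's PVC) that makes the log-sized proof visible:

```python
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def merkle_root_and_proof(leaves, index):
    """Build the tree and collect the inclusion proof for `leaves[index]`."""
    level = [h(l) for l in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd-sized levels
            level.append(level[-1])
        proof.append(level[index ^ 1])     # sibling of the tracked node
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return level[0], proof

def verify(root, leaf, index, proof):
    node = h(leaf)
    for sib in proof:
        node = h(node + sib) if index % 2 == 0 else h(sib + node)
        index //= 2
    return node == root

leaves = [b"block-%d" % i for i in range(8)]
root, proof = merkle_root_and_proof(leaves, 3)
```

With 8 leaves the proof already holds 3 sibling hashes; doubling the data adds another, which is exactly the growth PVC is designed to avoid.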
The 6th generation mobile network (6G) is a multi-network interconnection and multi-scenario coexistence network, where multiple network domains break their original fixed boundaries to form connections and convergence. With the optimization objective of maximizing network utility while ensuring performance-centric weighted fairness among flows, this paper designs a reinforcement learning-based cloud-edge autonomous multi-domain data center network architecture that achieves single-domain autonomy and multi-domain collaboration. Because the utilities of different flows conflict, the bandwidth fairness allocation problem for various types of flows is formulated by considering differently defined reward functions. Regarding the tradeoff between fairness and utility, this paper designs corresponding reward functions for the cases where the flows undergo abrupt changes and smooth changes. In addition, to accommodate the Quality of Service (QoS) requirements of multiple types of flows, this paper proposes a multi-domain autonomous routing algorithm called LSTM+MADDPG. By introducing a Long Short-Term Memory (LSTM) layer in the actor and critic networks, more information about temporal continuity is added, further enhancing adaptability to changes in the dynamic network environment. The LSTM+MADDPG algorithm is compared with the latest reinforcement learning algorithms through experiments on real network topologies and traffic traces, and the experimental results show that LSTM+MADDPG improves the delay convergence speed by 14.6% and delays the onset of packet loss by 18.2% compared with other algorithms.
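One classic way to encode "utility with weighted fairness" of the kind the objective above trades off is weighted proportional fairness: maximize Σ wᵢ·log(rᵢ) subject to Σ rᵢ = C, whose closed-form optimum allocates bandwidth in proportion to each flow's weight. This is a standard formulation offered for intuition; the paper's reward functions differ in detail:

```python
import math

def proportional_fair(weights, capacity):
    """Optimum of max sum(w_i * log(r_i)) s.t. sum(r_i) = capacity."""
    total = sum(weights)
    return [capacity * w / total for w in weights]

def utility(rates, weights):
    """Weighted log-utility of a bandwidth allocation."""
    return sum(w * math.log(r) for w, r in zip(weights, rates))

# Three flows sharing a 100-unit link; the middle flow is weighted double
rates = proportional_fair([1.0, 2.0, 1.0], capacity=100.0)
```

Any other allocation of the same total capacity yields lower weighted log-utility, which is what makes this objective a natural fairness-aware reward signal.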
Traditional IoT systems suffer from high equipment management costs and difficulty in trustworthy data sharing caused by centralization. Blockchain provides a feasible research direction to solve these problems. The main challenge at this stage is to integrate the blockchain with resource-constrained IoT devices and ensure the data of the IoT system is credible. We provide a general framework for intelligent IoT data acquisition and sharing in an untrusted environment based on the blockchain, where gateways become Oracles. A distributed Oracle network based on a Byzantine Fault Tolerant algorithm provides trusted data to the blockchain, making intelligent IoT data trustworthy. An aggregation contract is deployed to collect data from the various Oracles and share the credible data with all on-chain users. We also propose a gateway data aggregation scheme based on a REST API event publishing/subscribing mechanism that uses SQL to achieve flexible data aggregation. The experimental results show that the proposed scheme can alleviate the limited performance of IoT equipment, make data reliable, and meet the diverse data needs on the chain.
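The SQL-based gateway aggregation idea can be sketched in miniature: readings delivered by device events land in a table, and a subscriber-defined SQL statement produces the aggregate that would be forwarded on-chain. The schema and query below are illustrative, not the paper's actual API:

```python
import sqlite3

# In-memory stand-in for the gateway's local store of published device events
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (device TEXT, metric TEXT, value REAL)")
conn.executemany(
    "INSERT INTO readings VALUES (?, ?, ?)",
    [("dev1", "temp", 21.5), ("dev1", "temp", 22.5), ("dev2", "temp", 30.0)],
)

# Flexible aggregation expressed as plain SQL, as the scheme above proposes
rows = conn.execute(
    "SELECT device, AVG(value) FROM readings "
    "WHERE metric = 'temp' GROUP BY device ORDER BY device"
).fetchall()
```

Because the aggregation is just a SQL string, subscribers can swap AVG for MIN, MAX, or windowed variants without changing the gateway code, which is the flexibility the scheme aims for.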
Data stream clustering is integral to contemporary big data applications. However, handling the ongoing influx of data streams efficiently and accurately remains a primary challenge in current research. This paper aims to elevate the efficiency and precision of data stream clustering. Taking the TEDA (Typicality and Eccentricity Data Analysis) algorithm as a foundation, we introduce improvements by integrating a nearest neighbor search algorithm to enhance both the efficiency and the accuracy of the algorithm. The original TEDA algorithm, grounded in the concept of typicality and eccentricity data analytics, is an evolving, recursive method that requires no prior knowledge. While the algorithm autonomously creates and merges clusters as new data arrives, its efficiency is significantly hindered by the need to traverse all existing clusters whenever further data arrives. This work presents the NS-TEDA (Neighbor Search Based Typicality and Eccentricity Data Analysis) algorithm, which incorporates a KD-Tree (K-Dimensional Tree) integrated with a scapegoat tree. This ensures that newly arriving data points interact only with clusters in very close proximity, which significantly enhances algorithm efficiency while preventing a single data point from joining too many clusters and mitigating, to some extent, the merging of clusters with high overlap. We apply the NS-TEDA algorithm to several well-known datasets, comparing its performance with other data stream clustering algorithms and the original TEDA algorithm. The results demonstrate that the proposed algorithm achieves higher accuracy, and its runtime is almost linear in the volume of data, making it more suitable for large-scale data stream analysis.
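At the heart of TEDA is a recursive update of mean and variance per arriving sample, from which the eccentricity of each sample is computed with no prior knowledge. A one-dimensional sketch using the standard TEDA recursions (the NS-TEDA KD-tree/scapegoat-tree neighbor-search layer is omitted):

```python
# 1-D sketch of the recursive typicality/eccentricity update: the running
# mean and variance are updated per sample, and eccentricity flags samples
# far from the running mean. Standard TEDA recursions, simplified to 1-D.

class Teda1D:
    def __init__(self):
        self.k = 0
        self.mu = 0.0
        self.var = 0.0

    def update(self, x):
        """Ingest one sample; return its eccentricity at arrival time."""
        self.k += 1
        k = self.k
        self.mu = (k - 1) / k * self.mu + x / k            # recursive mean
        if k > 1:
            self.var = (k - 1) / k * self.var + (x - self.mu) ** 2 / (k - 1)
        if k == 1 or self.var == 0.0:
            return 1.0 / k                                  # degenerate case
        # Eccentricity: 1/k plus the normalized squared distance to the mean
        return 1.0 / k + (self.mu - x) ** 2 / (k * self.var)

teda = Teda1D()
eccs = [teda.update(x) for x in [1.0, 1.1, 0.9, 1.0, 5.0]]
```

The final sample (5.0) receives a much larger eccentricity than its well-behaved predecessor, which is the signal TEDA uses to keep it out of the existing cluster.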
Mg alloys possess an inherent plastic anisotropy owing to the selective activation of deformation mechanisms depending on the loading condition. This characteristic results in a diverse range of flow curves that vary with the deformation condition. This study proposes a novel approach for accurately predicting the anisotropic deformation behavior of wrought Mg alloys using machine learning (ML) with data augmentation. The developed model combines four key strategies from data science: learning the entire flow curves, generative adversarial networks (GAN), algorithm-driven hyperparameter tuning, and a gated recurrent unit (GRU) architecture. The proposed model, namely the GAN-aided GRU, was extensively evaluated in various predictive scenarios, such as interpolation, extrapolation, and a limited dataset size. The model exhibited significant predictability and improved generalizability in estimating the anisotropic compressive behavior of ZK60 Mg alloys under 11 annealing conditions and for three loading directions. The GAN-aided GRU results were superior to those of previous ML models and constitutive equations. The superior performance is attributed to hyperparameter optimization, GAN-based data augmentation, and the inherent predictivity of the GRU for extrapolation. As a first attempt to employ ML techniques other than artificial neural networks, this study offers a novel perspective on predicting the anisotropic deformation behaviors of wrought Mg alloys.
Data is regarded as a valuable asset, and sharing data is a prerequisite for fully exploiting its value. However, current medical data sharing schemes lack a fair incentive mechanism and cannot guarantee the authenticity of data, resulting in low enthusiasm among participants. A fair and trusted medical data trading scheme based on smart contracts is proposed, which aims to encourage participants to be honest and to improve their enthusiasm for participation. The scheme uses a zero-knowledge range proof for trusted verification, verifying the authenticity of the patient's data and the specific attributes of the data before the transaction, and realizing privacy protection. At the same time, the game-theoretic pricing strategy selects the best revenue strategy for all parties involved and realizes fairness and incentives in the transaction price. Smart contracts are used to complete the verification and game bargaining processes, and the blockchain serves as a distributed ledger recording the medical data transaction process, preventing data tampering and transaction denial. Finally, by deploying smart contracts on the Ethereum test network and conducting experiments and theoretical calculations, it is proved that the transaction scheme achieves trusted verification and fair bargaining while ensuring privacy protection in a decentralized environment. The experimental results show that the model improves the credibility and fairness of medical data transactions, maximizes social benefits, and encourages more patients and medical institutions to participate in the circulation of medical data, more fully tapping its potential value.
Funding: Sponsored by the National Natural Science Foundation of China under Grant Nos. 62172353, 62302114, U20B2046, and 62172115; the Innovation Fund Program of the Engineering Research Center for Integration and Application of Digital Learning Technology of the Ministry of Education under Nos. 1331007 and 1311022; the Natural Science Foundation of the Jiangsu Higher Education Institutions under Grant No. 17KJB520044; and the Six Talent Peaks Project in Jiangsu Province under No. XYDXX-108.
Abstract: With the rapid development of information technology, IoT devices play a major role in collecting physiological health data. The exponential growth of medical data requires a reasonable allocation of storage space between cloud servers and edge nodes. Because the storage capacity of edge nodes close to users is limited, hotspot data should be stored in edge nodes as much as possible to ensure response timeliness and a high access hit rate. However, current schemes cannot guarantee that every sub-message of a complete data item stored by an edge node meets the requirements of hot data. Detecting and deleting redundant data in edge nodes while protecting user privacy and preserving dynamic data integrity is therefore a challenging problem. This paper proposes a redundant data detection method that meets these privacy protection requirements: by scanning the ciphertext, it determines whether each sub-message of the data in an edge node qualifies as hot data. The check has the same effect as a zero-knowledge proof and does not reveal user privacy. In addition, for redundant sub-data that does not qualify as hot data, we propose a redundant data deletion scheme that preserves dynamic data integrity, using Content Extraction Signatures (CES) to generate a signature over the remaining hot data after the redundant data is deleted. The feasibility of the scheme is demonstrated through security and efficiency analyses.
Funding: This research was funded by the National Natural Science Foundation of China (Grant No. 42250410321).
Abstract: Missing values are one of the main causes of dirty data. Without high-quality data there can be no reliable analysis results or precise decision-making, so the data warehouse must consistently integrate high-quality data. In power systems, the electricity consumption data of some large users cannot be collected normally, resulting in missing data, which affects the calculation of power supply and eventually leads to a large error in the daily power line loss rate. To address missing electricity consumption data, this study proposes a data interpolation method for distribution power networks based on the group method of data handling (GMDH) and applies it to actually collected electricity data. First, the dependent and independent variables are defined from the original data, and the upper and lower limits of the missing values are determined from prior knowledge or existing data; all missing data are initialized randomly within these limits. Then, a GMDH network is established to obtain the optimal-complexity model, which predicts the missing data to replace the previously imputed electricity consumption values. This process is iterated until the missing values no longer change. Under a relatively small noise level (α=0.25), the proposed approach achieves a maximum error of no more than 0.605%. Experimental findings demonstrate the efficacy and feasibility of the approach, which transforms incomplete data into complete data. The proposed interpolation approach also provides a strong basis for electricity theft diagnosis and metering fault analysis in electricity enterprises.
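The iterate-until-stable imputation loop described above can be sketched as follows. The GMDH network itself is replaced by a caller-supplied `fit_predict` function, purely for illustration; the initialization bounds and convergence test mirror the procedure in the abstract, but the exact stopping rule is an assumption.

```python
import random

def iterative_impute(series, missing_idx, lo, hi, fit_predict,
                     tol=1e-6, max_iter=100):
    """Iteratively refine missing entries: initialize them randomly within
    [lo, hi], then repeatedly re-predict the missing positions with a model
    fitted on the current (fully filled) series until they stop changing."""
    data = list(series)
    for i in missing_idx:
        data[i] = random.uniform(lo, hi)       # random init within limits
    for _ in range(max_iter):
        preds = fit_predict(data, missing_idx)  # stand-in for the GMDH model
        delta = max(abs(data[i] - p) for i, p in zip(missing_idx, preds))
        for i, p in zip(missing_idx, preds):
            data[i] = min(max(p, lo), hi)       # clip to the known limits
        if delta < tol:                         # values no longer change
            break
    return data
```

With a trivial neighbor-mean predictor the loop converges in two passes; in the paper's setting `fit_predict` would be the optimal-complexity GMDH model refit on each pass.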
Funding: Supported by the National Research Foundation of Korea (NRF) grant funded by the Ministry of Science and ICT (No. 2022M3J1A1085371); by the DGIST R&D programs of the Ministry of Science and ICT (23-ET-08 and 23-CoE-ET-01); and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2018R1A6A1A03025340).
Abstract: Cu2ZnSn(S,Se)4 (CZTSSe) solar cells offer advantages in resource availability and cost. The main cause of their low efficiency is carrier loss resulting from recombination of photo-generated electrons and holes. To overcome this, it is important to understand their electron-hole behavior. To determine the carrier separation characteristics, we measured the surface potential and the local current as a function of absorber depth. Elemental variation in the intragrains (IGs) and at the grain boundaries (GBs) caused a band-edge shift and a bandgap (Eg) change. At the absorber surface and subsurface, an upward Ec and Ev band bending structure was observed at the GBs, and carrier separation was improved. At the absorber center, both upward Ec and Ev and downward Ec/upward Ev band bending structures were observed at the GBs, and carrier separation was degraded. To improve carrier separation and suppress carrier recombination, an upward Ec and Ev band bending structure at the GBs is desirable.
Funding: This research was financially supported by the Ministry of Trade, Industry, and Energy (MOTIE), Korea, under the "Project for Research and Development with Middle Markets Enterprises and DNA (Data, Network, AI) Universities" (AI-based Safety Assessment and Management System for Concrete Structures) (Reference Number P0024559), supervised by the Korea Institute for Advancement of Technology (KIAT).
Abstract: Time-series data provide important information in many fields, and their processing and analysis have been a research focus. However, anomaly detection is difficult due to data imbalance, temporal dependence, and noise, so methodologies for data augmentation and for converting time-series data into images have been studied. This paper proposes a fault detection model that uses time-series data augmentation and transformation to address data imbalance, temporal dependence, and robustness to noise. Augmentation is performed by adding Gaussian noise, with the noise level set to 0.002 to maximize the generalization performance of the model. In addition, the Markov Transition Field (MTF) method is used to visualize the dynamic transitions of the data while converting the time series into images; this reveals patterns in the time-series data and helps capture its sequential dependencies. For anomaly detection, the PatchCore model is applied with excellent performance, and the detected anomalous regions are represented as heat maps; applying an anomaly map to the original image makes it possible to localize where anomalies occur. The performance evaluation shows that both F1-score and accuracy are high when time-series data are converted to images. Additionally, processing the data as images rather than as raw time series significantly reduced both the data size and the training time. The proposed method provides a springboard for anomaly detection research on time-series data and helps keep the analysis of complex data patterns lightweight.
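The two preprocessing steps named above, Gaussian-noise augmentation and the Markov Transition Field, can both be sketched in a few lines. This is a minimal MTF (quantile binning plus an empirical transition matrix), a simplified stand-in for the full method rather than the paper's implementation.

```python
import random

def add_gaussian_noise(series, sigma=0.002):
    """Augment a time series by adding zero-mean Gaussian noise."""
    return [x + random.gauss(0.0, sigma) for x in series]

def markov_transition_field(series, n_bins=4):
    """Minimal Markov Transition Field: quantile-bin the series, estimate
    the bin-to-bin transition matrix W from consecutive points, then set
    MTF[i][j] = W[bin(x_i)][bin(x_j)], yielding an n x n image."""
    n = len(series)
    ranks = sorted(range(n), key=lambda i: series[i])
    bins = [0] * n
    for r, i in enumerate(ranks):                 # quantile binning
        bins[i] = min(r * n_bins // n, n_bins - 1)
    W = [[0.0] * n_bins for _ in range(n_bins)]
    for a, b in zip(bins, bins[1:]):              # count transitions
        W[a][b] += 1.0
    for row in W:                                 # row-normalize
        s = sum(row)
        if s:
            for j in range(n_bins):
                row[j] /= s
    return [[W[bins[i]][bins[j]] for j in range(n)] for i in range(n)]
```

The resulting n x n matrix can be rendered as an image and fed to an image-based detector such as PatchCore.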
基金Project supported by the National Natural Science Foundation of China(Grant Nos.12004067,11974070,62027807,and 52272137)the National Key R&D Program of China(Grant No.2022YFA1403000)。
Abstract: We measure the time-resolved terahertz spectroscopy of a GeSn thin film and study the ultrafast dynamics of its photo-generated carriers. The experimental results show that photo-generated carriers appear in GeSn under femtosecond laser excitation at 2500 nm, and the pump-induced photoconductivity can be explained by the Drude–Smith model. The carrier recombination process is dominated by defect-assisted Auger processes and defect capture. The first- and second-order recombination rates obtained by fitting the rate equation are (2.6±1.1)×10^(-2) ps^(-1) and (6.6±1.8)×10^(-19) cm^(3)·ps^(-1), respectively. We also obtain the diffusion length of photo-generated carriers in GeSn, which is about 0.4 μm and changes with the pump delay time. These results are important for GeSn-based infrared optoelectronic devices and demonstrate that GeSn materials can be applied in high-speed optoelectronic detectors and other applications.
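The rate equation referred to above is commonly written in the following form, with n the photo-generated carrier density, k1 the first-order (defect-related) rate, and k2 the second-order rate; this notation is an assumption consistent with the units quoted in the abstract:

```latex
\frac{\mathrm{d}n}{\mathrm{d}t} = -k_1 n - k_2 n^2,
\qquad k_1 = (2.6 \pm 1.1)\times 10^{-2}\ \mathrm{ps}^{-1},
\qquad k_2 = (6.6 \pm 1.8)\times 10^{-19}\ \mathrm{cm}^{3}\,\mathrm{ps}^{-1}.
```

Fitting the measured carrier-density decay to this equation yields the two rates; note the units confirm the orders: k1·n is first order in density, while k2·n² carries the extra cm³ factor.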
Funding: This work was supported by the National Key R&D Program Project [Grant Number 2020YFB0106603]; the Provincial Major Scientific and Technological Innovation Project [Grant Number 2021CXGC010207-1]; the Shantui Engineering Machinery Intelligent Equipment Innovation and Entrepreneurship Community Innovation Project [Grant Number GTT2021105]; the Shandong Provincial Science and Technology SMEs Innovation Capacity Improvement Project [Grant Number 2021TSGC1334]; and the Undergraduate School of Shandong University, China [Grant Number 2022Y155].
Abstract: A Diesel Particulate Filter (DPF) is a critical device for treating diesel engine exhaust. With active-regeneration purification methods, a spatially irregular gas flow can produce relatively high local temperatures, potentially damaging the carrier; in addition, significant changes in the internal temperature field can increase this risk. This study explores gas flow uniformity in a DPF carrier and the related temperature behavior under the drop-to-idle (DTI) condition by means of bench tests. The considered silicon carbide carrier exhibits good flow uniformity, with a temperature deviation of no more than 2% between measurement points at the same radius at the outlet during the regeneration stage. In the DTI test, the temperature is relatively high within r/2 of the outlet end, where the maximum temperature peak occurs, and the maximum radial temperature gradient lies between r/2 and the edge. Both quantities grow as the soot load increases, raising the risk of carrier burnout. Finally, it is shown that the soot load limit of the silicon carbide DPF can be extended to 11 g/L, which reduces the frequency of active regeneration by approximately 40% compared with a cordierite DPF.
Abstract: Large-scale wireless sensor networks (WSNs) play a critical role in monitoring dangerous scenarios and responding to medical emergencies. However, the inherent instability and error-prone nature of wireless links present significant challenges, necessitating efficient data collection and reliable transmission services. This paper addresses the limitations of existing data transmission and recovery protocols by proposing a systematic end-to-end design tailored for medical event-driven, cluster-based, large-scale WSNs. The primary goal is to enhance the reliability of data collection and transmission services through a comprehensive and practical approach. Our design refines the hop-count-based routing scheme to achieve fairness in forwarding reliability, emphasizes reliable data collection within clusters, and establishes robust data transmission over multiple hops. These systematic improvements are designed to optimize the overall performance of the WSN in real-world scenarios. Simulation results validate the exceptional performance of the proposed protocol compared with other prominent data transmission schemes. The evaluation spans varying sensor densities, wireless channel conditions, and packet transmission rates, showcasing the protocol's superiority in ensuring reliable and efficient data transfer. By prioritizing fairness, reliability, and efficiency, the proposed end-to-end design successfully addresses the challenges posed by unstable wireless links in large-scale WSNs and offers a valuable contribution to the field of medical event-driven WSNs.
Abstract: Guest Editors: Prof. Ling Tian (University of Electronic Science and Technology of China, lingtian@uestc.edu.cn), Prof. Jian-Hua Tao (Tsinghua University, jhtao@tsinghua.edu.cn), and Dr. Bin Zhou (National University of Defense Technology, binzhou@nudt.edu.cn). Since the concept of "Big Data" was first introduced in Nature in 2008, it has been widely applied in fields such as business, healthcare, national defense, education, transportation, and security. With the maturity of artificial intelligence technology, big data analysis techniques tailored to various fields have made significant progress, but they still face many challenges in terms of data quality, algorithms, and computing power.
Funding: Supported by a NOAA JTTI award via Grant #NA21OAR4590165 and NOAA GOES-R Program funding via Grant #NA16OAR4320115, provided by NOAA/Office of Oceanic and Atmospheric Research under NOAA-University of Oklahoma Cooperative Agreement #NA11OAR4320072, U.S. Department of Commerce; also supported by the National Oceanic and Atmospheric Administration (NOAA) of the U.S. Department of Commerce via Grant #NA18NWS4680063.
Abstract: Capabilities to assimilate Geostationary Operational Environmental Satellite R-series (GOES-R) Geostationary Lightning Mapper (GLM) flash extent density (FED) data within the operational Gridpoint Statistical Interpolation ensemble Kalman filter (GSI-EnKF) framework were previously developed and tested with a mesoscale convective system (MCS) case. In this study, such capabilities are further developed to assimilate GOES GLM FED data within the GSI ensemble-variational (EnVar) hybrid data assimilation (DA) framework. The results of assimilating the GLM FED data using 3DVar and pure En3DVar (PEn3DVar, using 100% ensemble covariance and no static covariance) are compared with those of EnKF/DfEnKF for a supercell storm case. The focus of this study is to validate the correctness and evaluate the performance of the new implementation rather than to compare the performance of FED DA among different DA schemes; only the results of 3DVar and PEn3DVar are examined and compared with EnKF/DfEnKF. Assimilation of a single FED observation shows that the magnitude and horizontal extent of the analysis increments from PEn3DVar are generally larger than those from EnKF, mainly because of the different localization strategies used in EnKF/DfEnKF and PEn3DVar as well as the integration limits of the graupel mass in the observation operator. Overall, the forecast performance of PEn3DVar is comparable to that of EnKF/DfEnKF, suggesting a correct implementation.
Funding: Supported by the National Natural Science Foundation of China (22209091); the Natural Science Foundation of Shandong Province (ZR2020QB057); the Key Program of the National Natural Science Foundation of China (22133006); and the Yankuang Group 2019 Science and Technology Program (YKKJ2019AJ05JG-R60).
Abstract: It is a challenge to coordinate carrier-kinetics performance and the redox capacity of photogenerated charges synchronously at the atomic level to boost photocatalytic activity. Herein, atomic Ni was introduced into the lattice of hexagonal ZnIn_(2)S_(4) nanosheets (Ni/ZnIn_(2)S_(4)) via directional substitution of Zn atoms using a facile hydrothermal method. Electronic structure calculations indicate that the introduced Ni atoms effectively extract more electrons and act as active sites for the subsequent reduction reaction. Besides the optimized light absorption range, the elevation of E_f and E_CB endows the Ni/ZnIn_(2)S_(4) photocatalyst with an increased electron concentration and enhanced reduction ability for the surface reaction. Moreover, ultrafast transient absorption spectroscopy, together with a series of electrochemical tests, demonstrates that Ni/ZnIn_(2)S_(4) possesses a 2.15-times-longer lifetime of the excited charge carriers and an order-of-magnitude increase in carrier mobility and separation efficiency compared with pristine ZnIn_(2)S_(4). These efficient charge-carrier kinetics and the enhanced redox capacity synergistically boost photocatalytic activity, achieving a 3-times-higher conversion efficiency of nitrobenzene reduction on Ni/ZnIn_(2)S_(4). Our study not only provides in-depth insights into the effect of atomic directional substitution on the kinetic behavior of photogenerated charges, but also opens an avenue to the synchronous optimization of redox capacity and carrier kinetics for efficient solar energy conversion.
Funding: Supported by the National Natural Science Foundation of China (NSFC, Grant Nos. 42172323 and 12371454).
Abstract: In source detection in the Tianlai project, accurately locating the interferometric fringes in visibility data strongly influences downstream tasks such as physical parameter estimation and weak source exploration. Considering that traditional locating methods are time-consuming and supervised methods require large quantities of expensive labeled data, in this paper we first investigate the characteristics of interferometric fringes in simulation and in real scenarios separately, and then integrate an almost parameter-free unsupervised clustering method with seed-filling or eraser algorithms into a hierarchical, plug-and-play method that improves location accuracy. We apply the method to locate the interferometric fringes of single and multiple sources in simulation data, then to real data taken from the Tianlai radio telescope array, and finally compare it with state-of-the-art unsupervised methods. The results show that our method is robust in different scenarios and can effectively improve location measurement accuracy.
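The seed-filling step mentioned above can be sketched as a 4-connected flood fill over a binary mask of clustered fringe pixels; reading "seed filling" as flood fill is an assumption, since the abstract does not define the algorithm.

```python
from collections import deque

def seed_fill(mask, seed):
    """Flood-fill from a seed pixel: return the set of connected 'fringe'
    pixels (value 1) reachable from the seed via 4-connectivity.
    `mask` is a 2-D list of 0/1; `seed` is a (row, col) tuple."""
    h, w = len(mask), len(mask[0])
    region, queue = set(), deque([seed])
    while queue:
        y, x = queue.popleft()
        if (y, x) in region or not (0 <= y < h and 0 <= x < w) or not mask[y][x]:
            continue
        region.add((y, x))
        queue.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return region
```

After clustering flags candidate fringe pixels, a fill from a confident seed extracts one connected fringe region; an "eraser" pass would remove regions judged spurious.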
Funding: Supported by the National Natural Science Foundation of China (Nos. 62072249, 62072056) and the Natural Science Foundation of Hunan Province (2020JJ2029).
Abstract: With the development of Industry 4.0 and big data technology, the Industrial Internet of Things (IIoT) is hampered by inherent issues such as privacy, security, and fault tolerance, which pose challenges to its rapid development. Blockchain technology offers immutability, decentralization, and autonomy, which can greatly mitigate these inherent defects of the IIoT. In a traditional blockchain, data is stored in a Merkle tree; as the data grows, so does the scale of the proofs used to validate it, threatening the efficiency, security, and reliability of blockchain-based IIoT. Accordingly, this paper first analyzes the inefficiency of the traditional blockchain structure in verifying data integrity and correctness. To solve this problem, a new Vector Commitment (VC) structure, the Partition Vector Commitment (PVC), is proposed by improving the traditional VC structure. Secondly, PVC is used instead of the Merkle tree to store the big data generated by IIoT; PVC improves the efficiency of traditional VC in the commitment and opening processes. Finally, PVC is used to build a blockchain-based IIoT data security storage mechanism, and comparative experiments are carried out. This mechanism greatly reduces communication overhead and makes rational use of storage space, which is of great significance for maintaining the security and stability of blockchain-based IIoT.
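For context on the proof-size problem the paper targets, the sketch below builds a standard Merkle tree and a membership proof in pure Python: the proof consists of one sibling hash per tree level, so it grows as O(log n) in the number of leaves. This is textbook Merkle hashing, not the paper's PVC construction.

```python
import hashlib

def h(b):
    return hashlib.sha256(b).digest()

def _pad(level):
    # Duplicate the last node when a level has odd length.
    return level + [level[-1]] if len(level) % 2 else level

def merkle_root(leaves):
    """Hash the leaves pairwise up to a single root."""
    level = [h(x) for x in leaves]
    while len(level) > 1:
        level = _pad(level)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves, idx):
    """Collect the sibling hash at each level needed to verify leaf `idx`;
    the proof length is O(log n), the overhead that PVC aims to reduce."""
    level = [h(x) for x in leaves]
    proof = []
    while len(level) > 1:
        level = _pad(level)
        sib = idx ^ 1
        proof.append((level[sib], sib < idx))  # (sibling hash, sibling-on-left?)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return proof

def verify(root, leaf, proof):
    """Recompute the root from a leaf and its sibling path."""
    node = h(leaf)
    for sib, sib_is_left in proof:
        node = h(sib + node) if sib_is_left else h(node + sib)
    return node == root
```

A vector commitment replaces the sibling path with a single constant-size (or shorter) opening per position, which is the efficiency gain the PVC design builds on.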
Abstract: The 6th-generation mobile network (6G) is a multi-network interconnection, multi-scenario coexistence network in which multiple network domains break their original fixed boundaries to form connections and convergence. With the optimization objective of maximizing network utility while ensuring performance-centric weighted fairness among flows, this paper designs a reinforcement learning-based cloud-edge autonomous multi-domain data center network architecture that achieves single-domain autonomy and multi-domain collaboration. Because the utilities of different flows conflict, the fair bandwidth allocation problem for the various flow types is formulated with differently defined reward functions. Regarding the tradeoff between fairness and utility, corresponding reward functions are designed for flows that change abruptly and for flows that change smoothly. In addition, to accommodate the Quality of Service (QoS) requirements of multiple flow types, this paper proposes a multi-domain autonomous routing algorithm called LSTM+MADDPG. Introducing a Long Short-Term Memory (LSTM) layer into the actor and critic networks adds information about temporal continuity, further enhancing adaptability to changes in the dynamic network environment. The LSTM+MADDPG algorithm is compared with recent reinforcement learning algorithms in experiments on real network topologies and traffic traces; the results show that LSTM+MADDPG improves the delay convergence speed by 14.6% and delays the onset of packet loss by 18.2% compared with the other algorithms.
Funding: Supported by the open research fund of the Key Lab of Broadband Wireless Communication and Sensor Network Technology (Nanjing University of Posts and Telecommunications), Ministry of Education (No. JZNY202114), and the Postgraduate Research & Practice Innovation Program of Jiangsu Province (No. KYCX210734).
Abstract: Traditional IoT systems suffer from high equipment management costs and difficulty in trustworthy data sharing caused by centralization. Blockchain provides a feasible research direction for solving these problems. The main challenge at this stage is to integrate the blockchain with resource-constrained IoT devices and to ensure that the data of the IoT system is credible. We provide a general framework for intelligent IoT data acquisition and sharing in an untrusted environment based on the blockchain, in which gateways become Oracles. A distributed Oracle network based on a Byzantine Fault Tolerant algorithm provides trusted data for the blockchain, making the intelligent IoT data trustworthy. An aggregation contract is deployed to collect data from the various Oracles and share the credible data with all on-chain users. We also propose a gateway data aggregation scheme based on the REST API event publishing/subscribing mechanism, which uses SQL to achieve flexible data aggregation. The experimental results show that the proposed scheme can alleviate the limited performance of IoT equipment, make the data reliable, and meet the diverse data needs on the chain.
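The SQL-based gateway aggregation can be sketched with the standard-library sqlite3 module: raw device readings are loaded into an in-memory table and a caller-supplied SQL query performs the aggregation. The table layout and the query are illustrative assumptions, not the paper's schema.

```python
import sqlite3

def aggregate_readings(rows,
                       sql="SELECT device, AVG(value) FROM readings GROUP BY device"):
    """Gateway-side aggregation sketch: load raw IoT readings (device, value)
    into an in-memory SQLite table and run a flexible, caller-supplied SQL
    aggregation, mirroring the SQL-based scheme described above."""
    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE readings (device TEXT, value REAL)")
    con.executemany("INSERT INTO readings VALUES (?, ?)", rows)
    result = con.execute(sql).fetchall()
    con.close()
    return result
```

In the described framework, the aggregated rows would then be signed by the gateway Oracle and submitted to the on-chain aggregation contract.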
Funding: This research was funded by the National Natural Science Foundation of China (Grant No. 72001190); by the Ministry of Education's Humanities and Social Science Project via the China Ministry of Education (Grant No. 20YJC630173); and by Zhejiang A&F University (Grant No. 2022LFR062).
Abstract: Data stream clustering is integral to contemporary big data applications. However, handling the continuous influx of data streams efficiently and accurately remains a primary challenge in current research. This paper aims to improve the efficiency and precision of data stream clustering. Taking the TEDA (Typicality and Eccentricity Data Analysis) algorithm as a foundation, we integrate a nearest-neighbor search algorithm to enhance both efficiency and accuracy. The original TEDA algorithm, grounded in the concept of typicality and eccentricity data analytics, is an evolving, recursive method that requires no prior knowledge. While the algorithm autonomously creates and merges clusters as new data arrive, its efficiency is significantly hindered by the need to traverse all existing clusters on each arrival. This work presents the NS-TEDA (Neighbor Search Based Typicality and Eccentricity Data Analysis) algorithm, which incorporates a KD-Tree (K-Dimensional Tree) integrated with a scapegoat tree so that each new data point interacts only with clusters in its close proximity. This significantly enhances efficiency while preventing a single data point from joining too many clusters and mitigating, to some extent, the merging of highly overlapping clusters. We apply NS-TEDA to several well-known datasets and compare its performance with other data stream clustering algorithms and the original TEDA algorithm. The results demonstrate that the proposed algorithm achieves higher accuracy, and its runtime depends almost linearly on the volume of data, making it well suited to large-scale data stream analysis.
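The recursive, prior-knowledge-free core of TEDA can be sketched in one dimension as follows, using the commonly published recursions for the running mean, variance, and eccentricity. Cluster creation/merging and the KD-tree neighbor search of NS-TEDA are omitted; this is only the per-point statistic update.

```python
class TedaStream:
    """Recursive eccentricity/typicality for a 1-D data stream: mean and
    variance are updated incrementally, so no pass over history is needed."""
    def __init__(self):
        self.k, self.mu, self.var = 0, 0.0, 0.0

    def update(self, x):
        """Ingest one sample; return its (eccentricity, typicality)."""
        self.k += 1
        self.mu += (x - self.mu) / self.k                      # running mean
        if self.k > 1:                                          # running variance
            self.var += ((x - self.mu) ** 2 / (self.k - 1)) - self.var / self.k
        ecc = 1.0 / self.k                                      # eccentricity
        if self.var > 0:
            ecc += (self.mu - x) ** 2 / (self.k * self.var)
        return ecc, 1.0 - ecc                                   # typicality = 1 - ecc
```

A point far from the running mean gets high eccentricity (low typicality), which TEDA uses to flag outliers and to decide cluster membership.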
Funding: Supported by a Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government (Grant No. 20214000000140, Graduate School of Convergence for Clean Energy Integrated Power Generation); a Korea Basic Science Institute (National Research Facilities and Equipment Center) grant funded by the Ministry of Education (2021R1A6C101A449); and a National Research Foundation of Korea grant funded by the Ministry of Science and ICT (2021R1A2C1095139), Republic of Korea.
Abstract: Mg alloys possess an inherent plastic anisotropy owing to the selective activation of deformation mechanisms depending on the loading condition. This characteristic results in a diverse range of flow curves that vary with the deformation condition. This study proposes a novel approach for accurately predicting the anisotropic deformation behavior of wrought Mg alloys using machine learning (ML) with data augmentation. The developed model combines four key strategies from data science: learning the entire flow curves, generative adversarial networks (GAN), algorithm-driven hyperparameter tuning, and a gated recurrent unit (GRU) architecture. The proposed model, GAN-aided GRU, was extensively evaluated in various predictive scenarios, including interpolation, extrapolation, and limited dataset sizes. The model exhibited significant predictability and improved generalizability in estimating the anisotropic compressive behavior of ZK60 Mg alloys under 11 annealing conditions and three loading directions. The GAN-aided GRU results were superior to those of previous ML models and constitutive equations; the superior performance is attributed to hyperparameter optimization, GAN-based data augmentation, and the inherent predictivity of the GRU for extrapolation. As a first attempt to employ ML techniques beyond conventional artificial neural networks, this study offers a novel perspective on predicting the anisotropic deformation behaviors of wrought Mg alloys.
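To make the GRU architecture mentioned above concrete, the sketch below implements a single scalar GRU step (hidden size 1) in pure Python. The scalar weights are toy stand-ins for learned parameters, not anything from the paper; in practice each would be a matrix and the step would run over a full flow-curve sequence.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_cell(x, h, W):
    """One scalar GRU step: gates decide how much of the previous hidden
    state h to keep versus replace with a candidate state. W is a dict of
    scalar weights {wz, uz, wr, ur, wh, uh} (illustrative stand-ins)."""
    z = sigmoid(W["wz"] * x + W["uz"] * h)                 # update gate
    r = sigmoid(W["wr"] * x + W["ur"] * h)                 # reset gate
    h_tilde = math.tanh(W["wh"] * x + W["uh"] * (r * h))   # candidate state
    return (1 - z) * h + z * h_tilde                       # gated interpolation
```

This gated interpolation is what lets a GRU carry stress-strain history along a flow curve, the property the study exploits for extrapolation.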
Funding: This research was funded by the Natural Science Foundation of Hebei Province (F2021201052).
Abstract: Data is regarded as a valuable asset, and sharing data is a prerequisite for fully exploiting its value. However, current medical data sharing schemes lack a fair incentive mechanism and cannot guarantee the authenticity of the data, resulting in low enthusiasm among participants. A fair and trusted medical data trading scheme based on smart contracts is proposed that aims to encourage participants to be honest and to improve their enthusiasm for participation. The scheme uses zero-knowledge range proofs for trusted verification: the authenticity of the patient's data and specific attributes of the data are verified before the transaction, realizing privacy protection. At the same time, a game-theoretic pricing strategy selects the best revenue strategy for all parties involved, realizing fairness and incentives in the transaction price. Smart contracts complete the verification and bargaining processes, and the blockchain serves as a distributed ledger recording the medical data transactions to prevent data tampering and transaction denial. Finally, by deploying the smart contracts on the Ethereum test network and conducting experiments and theoretical calculations, it is shown that the scheme achieves trusted verification and fair bargaining while ensuring privacy protection in a decentralized environment. The experimental results show that the model improves the credibility and fairness of medical data transactions, maximizes social benefits, encourages more patients and medical institutions to participate in the circulation of medical data, and more fully taps the potential value of medical data.
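As a toy stand-in for the game-theoretic pricing step (the abstract does not specify the paper's actual game model), the sketch below computes the symmetric Nash bargaining price: with zero disagreement payoffs, the trade surplus between the seller's cost and the buyer's valuation is split evenly, giving the midpoint price.

```python
def nash_bargaining_price(seller_cost, buyer_value):
    """Symmetric Nash bargaining with zero disagreement payoffs:
    maximize (price - seller_cost) * (buyer_value - price), which is
    achieved at the midpoint. Returns None when no beneficial trade exists."""
    if buyer_value <= seller_cost:
        return None  # no mutually beneficial price exists
    return (seller_cost + buyer_value) / 2.0
```

In the described scheme, this bargaining outcome would be computed inside a smart contract after the zero-knowledge verification succeeds, so neither party can deviate from the agreed price.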