Edge technology aims to bring cloud resources (specifically, computation, storage, and networking) into close proximity to edge devices, i.e., the smart devices where data are produced and consumed. Embedding computing and applications in edge devices has led to the emergence of two new concepts in edge technology: edge computing and edge analytics. Edge analytics uses techniques or algorithms to analyse the data generated by edge devices. With the emergence of edge analytics, edge devices have become a complete platform. Currently, however, edge analytics cannot fully support advanced analytic techniques: edge devices cannot execute advanced and sophisticated analytic algorithms owing to constraints such as limited power supply, small memory size, and limited resources. This article provides a detailed discussion of edge analytics. The key contributions of the paper are as follows: a clear explanation that distinguishes the three concepts of edge technology, namely edge devices, edge computing, and edge analytics, along with their issues. In addition, the article discusses the implementation of edge analytics to solve problems and support applications in areas such as retail, agriculture, industry, and healthcare. Moreover, state-of-the-art research papers on edge analytics are rigorously reviewed to explore existing issues, emerging challenges, research opportunities and directions, and applications.
In a network environment composed of different types of computing centers that can be divided into layers (cloud, edge, and others), the interconnection between them enables peer-to-peer task offloading. Many resource-constrained devices cannot execute certain types of tasks because they lack the memory and processing capacity such computations require. In this scenario, it is worth transferring these tasks to resource-rich platforms, such as edge data centers or remote cloud servers. Depending on the properties and state of the environment and the nature of the tasks, it is more appropriate to offload different tasks to different destinations. At the same time, establishing an optimal offloading policy that ensures all tasks are executed within the required latency while avoiding excessive workload on specific computing centers is not easy. This study presents two alternatives for the offloading decision problem based on two well-known algorithms, Graph Neural Networks (GNN) and Deep Q-Networks (DQN). It applies these alternatives in a well-known edge computing simulator, PureEdgeSim, and compares them with the two default methods, Trade-Off and Round Robin. Experiments showed that the variants offer a slight improvement in task success rate and workload distribution, and similar results in terms of energy efficiency. Finally, the success rates of the different computing centers are tested, demonstrating the inability of remote cloud servers to respond to applications in real time. These ways of finding an offloading strategy in a local networking environment are novel in that they model the state and structure of the environment, including the quality of its connections and its constant updates. The offloading score defined in this research is a crucial feature for determining the quality of an offloading path in the GNN training process and has not previously been proposed. At the same time, the suitability of Reinforcement Learning (RL) techniques is demonstrated, given the dynamism of the network environment, by considering all the key factors that affect the decision to offload a given task, including the actual state of all devices.
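As a companion to this abstract, the sketch below shows how a DQN-style agent of the kind described here could map an environment state (per-destination CPU load, queue length, and link quality) to an offloading destination. The state layout, the three-destination setup, and the network architecture are illustrative assumptions, not the paper's PureEdgeSim integration.

```python
# Minimal sketch of a DQN-style offloading decision agent (assumed state/action layout,
# not the paper's PureEdgeSim integration). State: per-destination CPU load, queue length
# and link quality; action: index of the offloading destination (local, edge DC, cloud).
import random
import torch
import torch.nn as nn

class OffloadQNet(nn.Module):
    def __init__(self, state_dim: int, n_destinations: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_destinations),   # one Q-value per candidate destination
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def choose_destination(qnet: OffloadQNet, state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy selection of the offloading destination."""
    if random.random() < epsilon:
        return random.randrange(qnet.net[-1].out_features)
    with torch.no_grad():
        return int(torch.argmax(qnet(state)).item())

# Hypothetical 3-destination example: [cpu_load, queue_len, link_quality] per destination.
qnet = OffloadQNet(state_dim=9, n_destinations=3)
state = torch.tensor([0.2, 0.1, 0.9,  0.7, 0.4, 0.8,  0.3, 0.0, 0.2])
print("chosen destination:", choose_destination(qnet, state, epsilon=0.1))
```

In the paper's setting, the reward driving the training loop would be built from the task outcome (latency, success, workload balance) rather than the placeholder state values used here.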
We investigate the behavior of edge modes in the presence of different edge terminations and long-range (LR) hopping. We mainly focus on model crystals with two different types of structures (type I: “…-P-Q-P-Q-…” and type II: “…=P-Q=P-Q=…”), where P and Q represent crystal lines (CLs), while the symbols “-” and “=” denote the distances between nearest-neighbor (NN) CLs. Based on the lattice model Hamiltonian with LR hopping, the existence of edge modes is determined analytically using the transfer matrix method (TMM) when different edge terminations are taken into consideration. Our findings are consistent with the numerical results obtained by the exact diagonalization method. We also observe that edge modes can exhibit different behaviors under different edge terminations. Our results are helpful for understanding novel edge modes in honeycomb crystalline graphene and transition metal dichalcogenides with different edge terminations.
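For readers unfamiliar with how edge modes are located numerically, the toy below builds a one-dimensional tight-binding chain with alternating bond strengths and a weak third-neighbor (long-range) hopping and diagonalizes it exactly, flagging near-zero-energy states concentrated at the chain ends. This is a generic SSH-like illustration with assumed parameters, not the paper's two-dimensional crystal-line model or its TMM derivation.

```python
# Toy illustration (not the paper's model): an SSH-like chain with alternating bond
# strengths t1/t2 plus a weak long-range hopping t3, diagonalized exactly to look for
# states localized at the chain ends.
import numpy as np

N, t1, t2, t3 = 60, 0.5, 1.0, 0.1   # sites, weak/strong alternating bonds, long-range hopping
H = np.zeros((N, N))
for i in range(N - 1):
    H[i, i + 1] = H[i + 1, i] = t1 if i % 2 == 0 else t2   # alternating "-" / "=" bonds
for i in range(N - 3):
    H[i, i + 3] = H[i + 3, i] = t3                          # long-range (third-neighbor) hopping

energies, states = np.linalg.eigh(H)
# Edge modes show up as near-zero-energy states with weight concentrated at the ends.
for e, psi in zip(energies, states.T):
    edge_weight = np.sum(np.abs(psi[:4]) ** 2 + np.abs(psi[-4:]) ** 2)
    if abs(e) < 0.2 and edge_weight > 0.5:
        print(f"candidate edge mode: E = {e:+.3f}, end-site weight = {edge_weight:.2f}")
```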
The rapid development of emerging technologies, such as edge intelligence and digital twins, has added momentum towards the development of the Industrial Internet of Things (IIoT). However, the massive amount of data generated by the IIoT, coupled with heterogeneous computation capacity across IIoT devices and users' data privacy concerns, poses challenges towards achieving industrial edge intelligence (IEI). To achieve IEI, in this paper we propose a semi-federated learning framework where a portion of the data with higher privacy is kept locally and a portion of the less private data can be uploaded to the edge server. In addition, we leverage digital twins to overcome the computation capacity heterogeneity of IIoT devices through the mapping of physical entities. We formulate a synchronization latency minimization problem that jointly optimizes edge association and the proportion of uploaded non-private data. As the joint problem is NP-hard and combinatorial, and taking into account the reality of large-scale device training, we develop a multi-agent hybrid-action deep reinforcement learning (DRL) algorithm to find the optimal solution. Simulation results show that our proposed DRL algorithm reduces latency and achieves better convergence performance for semi-federated learning compared to benchmark algorithms.
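To make the semi-federated split concrete, the sketch below shows a device keeping its private samples local and uploading a tunable fraction of its non-private samples, with a toy latency model of the kind the optimization would trade off. The field names, the latency formula, and the numbers are assumptions for illustration; the upload proportion is the sort of decision variable the paper's DRL agent optimizes.

```python
# Hedged sketch of the semi-federated data split: each device keeps its private samples
# and uploads a tunable fraction of the non-private ones to the edge server.
from dataclasses import dataclass

@dataclass
class DeviceData:
    private_samples: list      # never leaves the device
    nonprivate_samples: list   # candidate for upload

def split_for_upload(data: DeviceData, upload_ratio: float):
    """Return (samples trained locally, samples uploaded to the edge server)."""
    k = int(len(data.nonprivate_samples) * upload_ratio)
    uploaded = data.nonprivate_samples[:k]
    local = data.private_samples + data.nonprivate_samples[k:]
    return local, uploaded

def sync_latency(n_local: int, n_uploaded: int, cpu_rate: float, uplink_rate: float) -> float:
    """Toy synchronization latency: local training time plus upload time (illustrative model)."""
    return n_local / cpu_rate + n_uploaded / uplink_rate

data = DeviceData(private_samples=list(range(200)), nonprivate_samples=list(range(300)))
local, uploaded = split_for_upload(data, upload_ratio=0.4)
print(len(local), "trained locally,", len(uploaded), "uploaded,",
      f"latency ≈ {sync_latency(len(local), len(uploaded), cpu_rate=50.0, uplink_rate=80.0):.2f}")
```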
To support the explosive growth of Information and Communications Technology (ICT), Mobile Edge Computing (MEC) provides users with low-latency and high-bandwidth service by offloading computational tasks to the network's edge. However, resource-constrained mobile devices still suffer from a capacity mismatch when faced with latency-sensitive and compute-intensive emerging applications. To address the difficulty of running computationally intensive applications on resource-constrained clients, a model of the computation offloading problem in a network consisting of multiple mobile users and edge cloud servers is studied in this paper. A user benefit function, EoU (Experience of Users), is then proposed that jointly considers energy consumption and time delay. The EoU maximization problem is decomposed into two steps, i.e., resource allocation and offloading decision. The offloading decision is usually given by heuristic algorithms, which often face slow convergence and poor stability. Thus, a combined offloading algorithm, a Gini coefficient-based adaptive genetic algorithm (GCAGA), is proposed to alleviate this dilemma. The proposed algorithm optimizes the offloading decision by maximizing EoU and accelerates convergence with the Gini coefficient. The simulation compares the proposed algorithm with the genetic algorithm (GA) and the adaptive genetic algorithm (AGA). Experimental results show that the Gini coefficient and the adaptive heuristic operators accelerate the convergence speed, and the proposed algorithm converges better while obtaining higher EoU. The simulation code of the proposed algorithm is available at: https://github.com/Grox888/Mobile_Edge_Computing/tree/GCAGA.
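To make the Gini-based adaptation concrete, the sketch below computes the standard Gini coefficient over a population's fitness values and derives crossover/mutation rates from it. The linear adaptation rule and the rate ranges are assumptions for illustration; the exact GCAGA update rule may differ (see the released simulation code for the authors' implementation).

```python
# Illustrative sketch of using a Gini coefficient over population fitness to adapt
# genetic-operator rates (standard Gini formula; the adaptation rule is an assumption).
import numpy as np

def gini(fitness: np.ndarray) -> float:
    """Standard Gini coefficient of a non-negative fitness vector."""
    x = np.sort(fitness.astype(float))
    n = x.size
    cum = np.cumsum(x)
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

def adaptive_rates(fitness: np.ndarray, pc_range=(0.6, 0.9), pm_range=(0.01, 0.1)):
    """Low inequality (population converging) -> raise mutation, lower crossover."""
    g = gini(fitness)
    pc = pc_range[0] + g * (pc_range[1] - pc_range[0])
    pm = pm_range[1] - g * (pm_range[1] - pm_range[0])
    return pc, pm

fitness = np.array([0.81, 0.83, 0.82, 0.84, 0.80, 0.95])
print("gini = %.3f, crossover = %.2f, mutation = %.3f" % ((gini(fitness),) + adaptive_rates(fitness)))
```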
[Objective] Real-time monitoring of cow ruminant behavior is of paramount importance for promptly obtaining relevant information about cow health and predicting cow diseases. Various strategies have been proposed for monitoring cow ruminant behavior, including video surveillance, sound recognition, and sensor monitoring methods. However, the application of edge devices gives rise to the issue of inadequate real-time performance. To reduce the volume of data transmission and the cloud computing workload while achieving real-time monitoring of dairy cow rumination behavior, a real-time monitoring method for cow ruminant behavior based on edge computing was proposed. [Methods] Autonomously designed edge devices were used to collect and process six-axis acceleration signals from cows in real time. Based on these six-axis data, two distinct strategies, federated edge intelligence and split edge intelligence, were investigated for the real-time recognition of cow ruminant behavior. For the federated approach, the CA-MobileNet v3 network was proposed by enhancing the MobileNet v3 network with a collaborative attention mechanism, and a federated edge intelligence model was designed using the CA-MobileNet v3 network and the FedAvg federated aggregation algorithm. For split edge intelligence, a model named MobileNet-LSTM was designed by integrating the MobileNet v3 network with a fusion collaborative attention mechanism and the Bi-LSTM network. [Results and Discussions] In comparative experiments with MobileNet v3 and MobileNet-LSTM, the federated edge intelligence model based on CA-MobileNet v3 achieved an average Precision, Recall, F1-Score, Specificity, and Accuracy of 97.1%, 97.9%, 97.5%, 98.3%, and 98.2%, respectively, yielding the best recognition performance. [Conclusions] This work provides a real-time and effective method for monitoring cow ruminant behavior, and the proposed federated edge intelligence model can be applied in practical settings.
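For context on the aggregation step mentioned above, the sketch below shows the generic FedAvg rule: the aggregator averages client model parameters weighted by local sample counts. It illustrates only the FedAvg step, not the CA-MobileNet v3 training pipeline; the tiny two-layer models and client sizes are placeholders.

```python
# Minimal FedAvg aggregation sketch: average client parameters weighted by local sample counts.
import numpy as np

def fedavg(client_weights: list, client_sizes: list) -> list:
    """client_weights: per-client lists of numpy arrays (one array per layer)."""
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    aggregated = []
    for layer in range(n_layers):
        layer_sum = sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        aggregated.append(layer_sum)
    return aggregated

# Two hypothetical clients with a tiny two-layer model each.
c1 = [np.ones((2, 2)), np.zeros(2)]
c2 = [np.full((2, 2), 3.0), np.ones(2)]
global_model = fedavg([c1, c2], client_sizes=[100, 300])
print(global_model[0])   # -> 2.5 everywhere: (1 * 0.25 + 3 * 0.75)
```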
With the continuous development of network functions virtualization (NFV) and software-defined networking (SDN) technologies and the explosive growth of network traffic, the requirement for computing resources in the network has risen sharply. Due to the high cost of edge computing resources, coordinating the cloud and edge computing resources to improve the utilization efficiency of edge computing resources is still a considerable challenge. In this paper, we focus on optimizing the placement of network services in cloud-edge environments to maximize efficiency. It is first proved that, in cloud-edge environments, placing one service function chain (SFC) integrally in the cloud or at the edge can improve the utilization efficiency of edge resources. Then a virtual network function (VNF) performance-resource (P-R) function is proposed to represent the relationship between the computing performance of a VNF instance and the allocated computing resource. To select the SFCs that are most suitable to deploy at the edge, a VNF placement and resource allocation model is built that configures each VNF with its particular P-R function. Moreover, a heuristic recursive algorithm called the recursive algorithm for max edge throughput (RMET) is designed to solve the model. Through simulations on two scenarios, it is verified that RMET can improve the utilization efficiency of edge computing resources.
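The sketch below illustrates what a P-R function looks like and how whole SFCs might be ranked for edge placement by throughput gained per unit of edge resource. The concave curve, the example SFCs, and the greedy ranking are assumptions for illustration, not the paper's measured P-R data or the RMET algorithm itself.

```python
# Illustrative performance-resource (P-R) function for a VNF and a greedy check of
# which service function chains give the most throughput per unit of edge CPU.
def pr_function(cpu: float, max_throughput: float = 100.0, half_point: float = 2.0) -> float:
    """Diminishing-returns throughput (requests/s) as a function of allocated CPU cores."""
    return max_throughput * cpu / (cpu + half_point)

# Hypothetical SFCs: (name, CPU cores needed at the edge, throughput gained by edge placement)
sfcs = [("video-analytics", 4.0, pr_function(4.0)),
        ("iot-gateway", 1.0, pr_function(1.0)),
        ("vpn-chain", 2.0, pr_function(2.0))]

edge_budget = 5.0
# Place whole SFCs at the edge in order of throughput gained per core, as long as they fit.
for name, cores, gain in sorted(sfcs, key=lambda s: s[2] / s[1], reverse=True):
    if cores <= edge_budget:
        edge_budget -= cores
        print(f"place {name} at the edge ({gain:.1f} req/s for {cores} cores)")
```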
In order to reveal the complex network characteristics and evolution principles of the China aviation network, the relationship between the average degree and the average path length of edge vertices of the China aviation network in 1988, 1994, 2001, 2008, and 2015 was studied. According to the theory and methods of complex networks, the network was constructed with the cities where the airports are located as nodes and the airlines as edges. On the basis of the statistical data, the average degree and the average path length of edge vertices of the China aviation network in 1988, 1994, 2001, 2008, and 2015 were calculated. Through regression analysis, it was found that the average degree has a logarithmic relationship with the average path length of edge vertices, and the two parameters of the logarithmic relationship follow a linear evolutionary trend.
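As an illustration of the regression step, the sketch below fits a logarithmic relationship L ≈ a + b·ln(k) between average degree and average path length. The five data points are placeholders, not the actual 1988-2015 measurements of the China aviation network.

```python
# Sketch of the logarithmic fit between average degree k and average path length L
# of edge vertices: L ≈ a + b*ln(k). The data points below are hypothetical.
import numpy as np

avg_degree = np.array([4.2, 5.6, 7.9, 11.3, 15.8])     # hypothetical values
avg_path_len = np.array([3.1, 2.9, 2.6, 2.4, 2.2])     # hypothetical values

b, a = np.polyfit(np.log(avg_degree), avg_path_len, deg=1)  # fit L = a + b*ln(k)
print(f"L ≈ {a:.2f} {b:+.2f}·ln(k)")
```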
240 nm AlGaN-based micro-LEDs with different sizes are designed and fabricated. The external quantum efficiency (EQE) and light extraction efficiency (LEE) are then systematically investigated by comparing size and edge effects. It is revealed that the peak optical output power increases by 81.83% as the size shrinks from 50.0 to 25.0 μm. Of this, the LEE increases by 26.21%, and the LEE enhancement mainly comes from sidewall light extraction. Most notably, transverse-magnetic (TM) mode light intensifies faster as the size shrinks, owing to the tilted mesa sidewall and the Al reflector design. However, for 12.5 μm micro-LEDs, the output power is lower than for the 25.0 μm devices. The underlying mechanism is that, even with SiO2 passivation, the edge effect, which leads to current leakage and Shockley-Read-Hall (SRH) recombination, deteriorates rapidly as the size shrinks further. Moreover, the ratio of the p-contact area to the mesa area is much lower, which degrades p-type current spreading at the mesa edge. These findings provide a rule of thumb for the design of high-efficiency micro-LEDs with wavelengths below 250 nm, which will pave the way for wide applications of deep ultraviolet (DUV) micro-LEDs.
By pushing computation, caching, and network control to the edge, mobile edge computing (MEC) is expected to play a leading role in fifth generation (5G) and future sixth generation (6G) networks. Nevertheless, facing ubiquitous, fast-growing computational demands, it is impossible for a single MEC paradigm to effectively support high-quality intelligent services at end user equipments (UEs). To address this issue, we propose an air-ground collaborative MEC (AGC-MEC) architecture in this article. The proposed AGC-MEC integrates all potentially available MEC servers in the air and on the ground in the envisioned 6G, through a variety of collaborative ways to provide the best possible computation services for UEs. Firstly, we introduce the AGC-MEC architecture and elaborate three typical use cases. Then, we discuss four main challenges in the AGC-MEC as well as their potential solutions. Next, we conduct a case study of collaborative service placement for AGC-MEC to validate the effectiveness of the proposed collaborative service placement strategy. Finally, we highlight several potential research directions for the AGC-MEC.
In vehicle edge computing (VEC), asynchronous federated learning (AFL) is used, where the edge receives a local model and updates the global model, effectively reducing the global aggregation latency. Because the vehicles differ in their amounts of local data, computing capabilities, and locations, renewing the global model with the same weight for every vehicle is inappropriate. These factors affect the local computation time and the upload time of the local model, and a vehicle may also be affected by Byzantine attacks, leading to the deterioration of its data. Based on deep reinforcement learning (DRL), however, we can consider these factors comprehensively to eliminate vehicles with poor performance as far as possible and to exclude vehicles that have suffered Byzantine attacks before AFL. At the same time, when aggregating in AFL, we can focus on the vehicles with better performance to improve the accuracy and safety of the system. In this paper, we propose a vehicle selection scheme based on DRL in VEC. The scheme takes into account the vehicles' mobility, time-varying channel conditions, time-varying computational resources, different data amounts, transmission channel status, and Byzantine attacks. Simulation results show that the proposed scheme effectively improves the safety and accuracy of the global model.
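The sketch below shows one way the per-vehicle factors listed above could be assembled into a state vector for a DRL selector, together with a toy check that flags suspicious (Byzantine-looking) model updates. The field names, the cosine-similarity test, and the numbers are assumptions for illustration, not the paper's actual scheme.

```python
# Hedged sketch: build a per-vehicle state vector (mobility, channel, compute, data amount,
# link status) and filter out vehicles whose updates look Byzantine before aggregation.
import numpy as np

def vehicle_state(speed, channel_gain, cpu_freq, data_amount, link_up):
    """Feature vector a DRL selector could consume for one vehicle."""
    return np.array([speed, channel_gain, cpu_freq, data_amount, float(link_up)])

def looks_byzantine(local_update: np.ndarray, global_update: np.ndarray, thresh=0.0) -> bool:
    """Flag updates pointing away from the aggregated direction (toy detection rule)."""
    cos = np.dot(local_update, global_update) / (
        np.linalg.norm(local_update) * np.linalg.norm(global_update) + 1e-12)
    return cos < thresh

states = [vehicle_state(20.0, 0.8, 2.5, 1200, True),
          vehicle_state(35.0, 0.3, 1.2, 400, True)]
updates = [np.array([0.1, -0.2, 0.05]), np.array([-0.5, 1.0, -0.3])]
global_dir = np.mean(updates, axis=0)
selected = [i for i, u in enumerate(updates) if not looks_byzantine(u, global_dir)]
print("state dim per vehicle:", states[0].size, "| vehicles kept for aggregation:", selected)
```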
The increasing popularity of the metaverse has led to growing interest and market size in spatial computing from both academia and industry. Developing portable and accurate imaging and depth sensing systems is crucial for advancing next-generation virtual reality devices. This work demonstrates an intelligent, lightweight, and compact edge-enhanced depth perception system that utilizes a binocular meta-lens for spatial computing. The miniaturized system comprises a binocular meta-lens, a 532 nm filter, and a CMOS sensor. For disparity computation, we propose a stereo-matching neural network with a novel H-Module. The H-Module incorporates an attention mechanism into the Siamese network. The symmetric architecture, with cross-pixel interaction and cross-view interaction, enables a more comprehensive analysis of contextual information in stereo images. Based on spatial intensity discontinuity, the edge enhancement eliminates ill-posed regions of the image where ambiguous depth predictions may occur due to a lack of texture. With the assistance of deep learning, our edge-enhanced system provides prompt responses in less than 0.15 seconds. This edge-enhanced depth perception meta-lens imaging system will significantly contribute to accurate 3D scene modeling, machine vision, autonomous driving, and robotics development.
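For readers new to disparity computation, the sketch below shows the classical block-matching baseline: for each left-image pixel, find the horizontal shift that best matches the right image. This only illustrates what the stereo-matching task is; the paper's method is a Siamese network with an H-Module, not this baseline, and the toy images here are synthetic.

```python
# Classical block-matching baseline to illustrate disparity computation on toy arrays
# (the paper's method is a learned Siamese/H-Module network, not this baseline).
import numpy as np

def block_match_disparity(left, right, max_disp=8, window=3):
    """For each pixel, find the horizontal shift minimizing the sum of absolute differences."""
    h, w = left.shape
    half = window // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

rng = np.random.default_rng(0)
right_img = rng.random((32, 64))
left_img = np.roll(right_img, shift=4, axis=1)   # a uniform 4-pixel shift
print("median estimated disparity:",
      int(np.median(block_match_disparity(left_img, right_img)[8:-8, 16:-16])))
```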
In this paper, we consider mobile edge computing (MEC) networks under proactive eavesdropping. To maximize the transmission rate, IRS-assisted UAV communications are applied. We jointly design the trajectory of the UAV, the transmit beamforming of the users, and the phase shift matrix of the IRS. The original problem is strongly non-convex and difficult to solve. We first propose two basic modes of the proactive eavesdropper and obtain the closed-form solution for the boundary conditions of the two modes. Then we transform the original problem into an equivalent one and propose an alternating optimization (AO) based method to obtain a locally optimal solution. The convergence of the algorithm is illustrated by numerical results. Further, we propose a zero-forcing (ZF) based method as a sub-optimal solution, and the simulation section shows that the two proposed schemes obtain better performance than traditional schemes.
Security issues in cloud networks and edge computing have become very common. This research focuses on analyzing such issues and developing the best solutions. A detailed literature review has been conducted in this regard. The findings show that many challenges are linked to edge computing, such as privacy concerns, security breaches, high costs, and low efficiency. Therefore, proper security measures need to be implemented to overcome these issues. Emerging trends, such as machine learning, encryption, artificial intelligence, and real-time monitoring, can help mitigate security issues and support a secure and safe future for cloud computing. It is concluded that the security implications of edge computing can be addressed with the help of new technologies and techniques.
In mega-constellation communication systems, efficient routing algorithms and data transmission technologies are employed to ensure fast and reliable data transfer. However, the limited computational resources of satellites necessitate the use of edge computing to enhance secure communication. While edge computing reduces the burden on cloud computing, it introduces security and reliability challenges in open satellite communication channels. To address these challenges, we propose a blockchain architecture specifically designed for edge computing in mega-constellation communication systems. This architecture narrows down the consensus scope of the blockchain to meet the requirements of edge computing while ensuring comprehensive log storage across the network. Additionally, we introduce a reputation management mechanism for nodes within the blockchain, evaluating their trustworthiness, workload, and efficiency. Nodes with higher reputation scores are selected to participate in tasks and are appropriately incentivized. Simulation results demonstrate that our approach achieves a task result reliability of 95% while improving computational speed.
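The toy below illustrates the reputation idea: score each node from its trustworthiness, current workload, and efficiency, then pick the highest-scoring nodes for the next task. The weights, the 0-1 normalization, and the node names are assumptions for illustration, not the paper's concrete reputation formula.

```python
# Toy version of a reputation mechanism: weighted score over trust, workload and efficiency,
# then select the top-scoring nodes for a task. Weights and values are illustrative.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    trust: float        # 0..1, share of past results verified correct
    workload: float     # 0..1, current utilization (lower is better)
    efficiency: float   # 0..1, normalized task completion speed

def reputation(n: Node, w_trust=0.5, w_load=0.2, w_eff=0.3) -> float:
    return w_trust * n.trust + w_load * (1.0 - n.workload) + w_eff * n.efficiency

nodes = [Node("sat-edge-1", 0.98, 0.70, 0.80),
         Node("sat-edge-2", 0.90, 0.20, 0.65),
         Node("sat-edge-3", 0.60, 0.10, 0.95)]
chosen = sorted(nodes, key=reputation, reverse=True)[:2]
print("selected for the task:", [n.name for n in chosen])
```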
Graphite interfaces are an important part of the anode in lithium-ion batteries (LIBs), significantly influencing Li intercalation kinetics. Graphite anodes adopt different stacking sequences depending on the concentration of the intercalated Li ions. In this work, we performed first-principles calculations to comprehensively address the energetics and dynamics of Li intercalation and Li vacancy diffusion near the non-basal edges of graphite, namely the armchair and zigzag edges, at high Li concentration. We find that surface effects persist in stage-II and bind Li strongly at the edge sites. However, the pronounced effect previously identified at the zigzag edge of pristine graphite is reduced in LiC12, penetrating only to the subsurface site, and eventually disappears in LiC6. Consequently, the distinctive surface state at the zigzag edge significantly impacts and restrains the charging rate at the initial lithiation of graphite anodes, while it diminishes with an increasing degree of lithiation. A longer diffusion time for Li hopping to the bulk site from either the zigzag edge or the armchair edge in LiC6 was observed at high states of charge due to charge repulsion. Effectively controlling Li occupation and diffusion kinetics at this stage is also crucial for enhancing the charging rate.
With the rapid development of information technology, IoT devices play a major role in physiological health data detection. The exponential growth of medical data requires us to reasonably allocate storage space between cloud servers and edge nodes. The storage capacity of edge nodes close to users is limited, so hotspot data should be stored in edge nodes as much as possible to ensure response timeliness and access hit rate. However, current schemes cannot guarantee that every sub-message of a complete data item stored by an edge node meets the requirements of hot data. How to detect and delete redundant data in edge nodes while protecting user privacy and the dynamic integrity of the data has therefore become a challenging problem. This paper proposes a redundant data detection method that meets privacy protection requirements. By scanning the ciphertext, it determines whether each sub-message of the data in the edge node meets the requirements of hot data. This has the same effect as a zero-knowledge proof and does not reveal users' privacy. In addition, for redundant sub-data that do not meet the requirements of hot data, this paper proposes a redundant data deletion scheme that preserves the dynamic integrity of the data. We use Content Extraction Signatures (CES) to generate the signature of the remaining hot data after the redundant data are deleted. The feasibility of the scheme is proved through security analysis and efficiency analysis.
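To convey the general extract-and-verify idea behind a Content Extraction Signature, the sketch below commits to each sub-message separately so that non-hot sub-messages can be deleted while the remaining ones stay verifiable. This is a hash-based stand-in for illustration only: a real CES uses a signature scheme over such commitments, and the paper's privacy-preserving ciphertext scan is not modeled here.

```python
# Illustrative stand-in for the Content Extraction Signature flow: commit to each
# sub-message, delete redundant (non-hot) sub-messages, verify the remaining ones.
# A real CES signs these commitments; plain SHA-256 here only shows the flow.
import hashlib

def commit(sub_messages: list) -> list:
    return [hashlib.sha256(m.encode()).hexdigest() for m in sub_messages]

def extract(sub_messages: list, commitments: list, keep: set):
    """Drop redundant sub-messages, keep the commitments so the rest remains checkable."""
    kept = {i: sub_messages[i] for i in keep}
    return kept, commitments  # commitments stay intact (as the 'signature' would)

def verify(kept: dict, commitments: list) -> bool:
    return all(hashlib.sha256(m.encode()).hexdigest() == commitments[i] for i, m in kept.items())

subs = ["vitals-2024-01", "vitals-2024-02", "rarely-accessed-log"]
com = commit(subs)
kept, com = extract(subs, com, keep={0, 1})   # index 2 is not hot data -> deleted
print("remaining hot data verifies:", verify(kept, com))
```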
Hierarchical networks are frequently encountered in animal groups, gene networks, and artificial engineering systems such as multi-robot systems, unmanned vehicle systems, smart grids, wind farm networks, and so forth. The structure of a large directed hierarchical network is often strongly influenced by reverse edges from lower- to higher-level nodes, such as lagging birds' howls in a flock or the opinions of lower-level individuals feeding back to higher-level ones in a social group. This study reveals that, for most large-scale real hierarchical networks, the majority of the reverse edges do not affect the synchronization process of the entire network; the synchronization process is influenced only by a small fraction of these reverse edges along specific paths. More surprisingly, a single effective reverse edge can slow down the synchronization of a huge hierarchical network by over 60%. The effect of such edges depends not on the network size but only on the average in-degree of the involved subnetwork. The overwhelming majority of active reverse edges turn out to have a kind of "bunching" effect on the information flows of hierarchical networks, which slows down synchronization processes. This finding refines the current understanding of the role of reverse edges in many natural, social, and engineering hierarchical networks, which might be beneficial for precisely tuning the synchronization rhythms of these networks. Our study also proposes an effective way to attack a hierarchical network by adding a malicious reverse edge to it and provides guidance for protecting a network by screening out the specific small proportion of vulnerable nodes.
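A toy experiment in the same spirit is sketched below: on a small directed three-level tree, linear consensus is compared without a reverse edge, with a cross-branch reverse edge, and with a reverse edge that feeds back into its own ancestor, using the smallest nonzero real part of the Laplacian spectrum as a proxy for synchronization speed. The graph, the metric choice, and the observation that only the feedback edge changes the rate are properties of this toy only, not a reproduction of the paper's results.

```python
# Toy comparison: convergence rate of linear consensus on a small directed hierarchy
# without a reverse edge, with a cross-branch reverse edge, and with a feedback reverse edge.
import numpy as np

def laplacian(n, edges):
    """In-degree Laplacian for consensus x' = -L x; edge (u, v) means v listens to u."""
    A = np.zeros((n, n))
    for u, v in edges:
        A[v, u] = 1.0
    return np.diag(A.sum(axis=1)) - A

def sync_rate(L):
    """Smallest nonzero real part of the Laplacian spectrum (larger = faster consensus)."""
    ev = np.linalg.eigvals(L)
    return min(e.real for e in ev if abs(e) > 1e-9)

n = 7
tree = [(0, 1), (0, 2), (1, 3), (1, 4), (2, 5), (2, 6)]        # 3-level hierarchy, root 0
print("no reverse edge      :", round(sync_rate(laplacian(n, tree)), 3))
print("cross-branch reverse :", round(sync_rate(laplacian(n, tree + [(5, 1)])), 3))  # unchanged here
print("feedback reverse edge:", round(sync_rate(laplacian(n, tree + [(3, 1)])), 3))  # slower here
```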