Journal Articles
8,461 articles found
1. Computing Power Network: A Survey
Authors: Sun Yukun, Lei Bo, Liu Junlin, Huang Haonan, Zhang Xing, Peng Jing, Wang Wenbo. 《China Communications》, SCIE, CSCD, 2024, Issue 9, pp. 109-145 (37 pages).
With the rapid development of cloud computing, edge computing, and smart devices, computing power resources show a trend toward ubiquitous deployment. The traditional network architecture cannot efficiently leverage these distributed computing power resources because of the computing power island effect. To overcome these problems and improve network efficiency, a new network computing paradigm has been proposed: the Computing Power Network (CPN). A computing power network connects ubiquitous and heterogeneous computing power resources through networking to enable flexible computing power scheduling. In this survey, we present an exhaustive review of state-of-the-art research on computing power networks. We first give an overview of the computing power network, including its definition, architecture, and advantages. Next, we elaborate on computing power modeling, information awareness and announcement, resource allocation, network forwarding, computing power transaction platforms, and resource orchestration platforms. A computing power network testbed is built and evaluated, and applications and use cases are discussed. We then introduce the key enabling technologies for computing power networks. Finally, open challenges and future research directions are presented.
Keywords: computing power modeling; computing power network; computing power scheduling; information awareness; network forwarding
2. ATSSC: An Attack Tolerant System in Serverless Computing
Authors: Zhang Shuai, Guo Yunfei, Hu Hongchao, Liu Wenyan, Wang Yawen. 《China Communications》, SCIE, CSCD, 2024, Issue 6, pp. 192-205 (14 pages).
Serverless computing is a promising paradigm in cloud computing that greatly simplifies cloud programming: developers provide only function code to the serverless platform, and these functions are invoked by the events that drive them. Nonetheless, security threats in serverless computing, such as vulnerability-based attacks, have become the pain point hindering its wide adoption. Proactive-defense ideas such as redundancy, diversity, and dynamism offer promising ways to protect against cyberattacks. However, these security technologies are mostly applied to serverless platforms in a "stacked" mode, because they were designed independently of serverless computing. The lack of security consideration in the initial design makes it especially challenging to protect the whole life cycle of a serverless application at limited cost. In this paper, we present ATSSC, a proactive-defense-enabled, attack-tolerant serverless platform. ATSSC integrates redundancy, diversity, and dynamism seamlessly into serverless computing to achieve high security and efficiency. Specifically, ATSSC constructs multiple diverse function replicas to process incoming events and performs cross-validation to verify the results. To create diverse function replicas, both software diversity and environment diversity are adopted. Furthermore, a dynamic function refresh strategy is proposed to keep serverless functions in a clean state. We implement ATSSC on Kubernetes and Knative. Analysis and experimental results demonstrate that ATSSC can effectively protect serverless computing against cyberattacks at acceptable cost.
Keywords: active defense; attack tolerant; cloud computing; security; serverless computing
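To make the cross-validation step above concrete, here is a minimal Python sketch (not from the paper): several diverse replicas of a function handle the same event, and a majority vote accepts the result only when a quorum of replicas agree. The function names and quorum size are illustrative assumptions.

```python
from collections import Counter

def cross_validate(replicas, event, quorum=2):
    """Run diverse function replicas on one event and majority-vote the results.

    replicas: callables that should compute the same function; a compromised
    or faulty replica may return a divergent result.
    """
    results = [replica(event) for replica in replicas]
    value, votes = Counter(results).most_common(1)[0]
    if votes >= quorum:
        return value  # enough agreement: accept the result
    raise RuntimeError("cross-validation failed: no quorum among replicas")

# Illustrative use: three "diverse" replicas, one of them tampered with.
replicas = [lambda x: x * 2, lambda x: x + x, lambda x: x ** 2]
print(cross_validate(replicas, 3))  # 6 wins the vote (2 of 3 replicas agree)
```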
3. Complementary memtransistors for neuromorphic computing: How, what and why
Authors: Qi Chen, Yue Zhou, Weiwei Xiong, Zirui Chen, Yasai Wang, Xiangshui Miao, Yuhui He. 《Journal of Semiconductors》, EI, CAS, CSCD, 2024, Issue 6, pp. 64-80 (17 pages).
Memtransistors, in which the source-drain channel conductance can be nonvolatilely manipulated through gate signals, have emerged as promising components for implementing neuromorphic computing. On the other hand, complementary metal-oxide-semiconductor (CMOS) field-effect transistors play a fundamental role in modern integrated circuit technology. Will complementary memtransistors (CMT) play a similar role in future neuromorphic circuits and chips? In this review, the materials and physical mechanisms for constructing CMT (the how) are inspected, and their merits and open challenges are discussed. The unique properties (the what) and potential applications of CMT in different learning algorithms and scenarios of spiking neural networks (the why) are then reviewed, including supervised rules, reinforcement learning, and dynamic vision with in-sensor computing. By exploiting novel functions related to the complementary structure, significant reductions in hardware consumption, improved energy efficiency, and other advantages have been obtained, illustrating the alluring prospect of design-technology co-optimization (DTCO) of CMT for neuromorphic computing.
Keywords: complementary memtransistor; neuromorphic computing; reward-modulated spike timing-dependent plasticity; remote supervise method; in-sensor computing
4. Hybrid Approach for Cost Efficient Application Placement in Fog-Cloud Computing Environments
Authors: Abdulelah Alwabel, Chinmaya Kumar Swain. 《Computers, Materials & Continua》, SCIE, EI, 2024, Issue 6, pp. 4127-4148 (22 pages).
Fog computing has recently developed as a new paradigm that aims to serve time-sensitive applications better than cloud computing by placing and processing tasks close to the data sources. However, most fog nodes are geographically scattered and have limited resources compared with cloud nodes, making the application placement problem more complex than in cloud computing. A cost-efficient placement approach for fog-cloud environments must therefore combine the benefits of both fog and cloud computing, optimizing the placement of applications and services while minimizing cost; this is particularly relevant where latency, resource constraints, and cost are all crucial deployment factors. In this study, we propose a hybrid approach, called GA-FSA, that combines a genetic algorithm (GA) with the Flamingo Search Algorithm (FSA) to place application modules while minimizing cost. We consider four cost types for application deployment: computation, communication, energy consumption, and violations. GA-FSA places the application modules with respect to the application deadline, deploying them to fog or cloud nodes so as to curtail the overall cost of the system. An extensive simulation assesses the performance of the proposed approach against state-of-the-art alternatives. The results demonstrate that GA-FSA is superior to the other approaches with respect to task guarantee ratio (TGR) and total cost.
Keywords: placement mechanism; application module placement; fog computing; cloud computing; genetic algorithm; flamingo search algorithm
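As an illustration of the four-part cost model named in the abstract, the sketch below scores a candidate placement as a weighted sum of computation, communication, energy, and violation costs, the quantity a GA/FSA-style search would minimize. The weights, node attributes, and per-term formulas are assumptions for illustration, not the paper's model.

```python
# Hedged sketch of a cost function an approach like GA-FSA might minimize.
# The four cost terms come from the abstract; everything else is assumed.

def placement_cost(placement, modules, nodes, weights=(1.0, 1.0, 1.0, 10.0)):
    w_comp, w_comm, w_energy, w_viol = weights
    comp = sum(modules[m]["mips"] / nodes[n]["mips"] for m, n in placement.items())
    comm = sum(modules[m]["data"] * nodes[n]["latency"] for m, n in placement.items())
    energy = sum(modules[m]["mips"] * nodes[n]["watt_per_mips"] for m, n in placement.items())
    # Deadline violations are penalized heavily so feasible placements win.
    viol = sum(1 for m, n in placement.items()
               if modules[m]["mips"] / nodes[n]["mips"] > modules[m]["deadline"])
    return w_comp * comp + w_comm * comm + w_energy * energy + w_viol * viol

modules = {"cam": {"mips": 500, "data": 2.0, "deadline": 0.5}}
nodes = {"fog1": {"mips": 2000, "latency": 0.01, "watt_per_mips": 0.002},
         "cloud": {"mips": 20000, "latency": 0.12, "watt_per_mips": 0.001}}
print(placement_cost({"cam": "fog1"}, modules, nodes))   # fog candidate
print(placement_cost({"cam": "cloud"}, modules, nodes))  # cloud candidate
```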
5. Advances in neuromorphic computing: Expanding horizons for AI development through novel artificial neurons and in-sensor computing
Authors: 杨玉波, 赵吉哲, 刘胤洁, 华夏扬, 王天睿, 郑纪元, 郝智彪, 熊兵, 孙长征, 韩彦军, 王健, 李洪涛, 汪莱, 罗毅. 《Chinese Physics B》, SCIE, EI, CAS, CSCD, 2024, Issue 3, pp. 1-23 (23 pages).
AI development has brought great success to the information age. At the same time, the large-scale artificial neural networks used to build AI systems are thirsty for computing power, which conventional computing hardware can barely satisfy. In the post-Moore era, the increase in computing power brought about by CMOS size reduction in very-large-scale integrated circuits (VLSIC) struggles to meet the growing demand of AI. To address this issue, technical approaches such as neuromorphic computing attract great attention because they break with the von Neumann architecture and execute AI algorithms far more parallelly and energy-efficiently. Inspired by the architecture of human neural networks, neuromorphic computing hardware is built from novel artificial neurons constructed with new materials or devices. Although deploying a training process in neuromorphic architectures such as spiking neural networks (SNN) remains relatively difficult, development in this field has incubated promising technologies such as in-sensor computing, which brings new opportunities for multidisciplinary research spanning optoelectronic materials and devices, artificial neural networks, and microelectronics integration technology. Vision chips based on these architectures can reduce unnecessary data transfer and realize fast, energy-efficient visual cognitive processing. This paper first reviews the architectures and algorithms of SNN and the artificial neuron devices supporting neuromorphic computing, and then the recent progress of in-sensor computing vision chips, all of which will promote the development of AI.
Keywords: neuromorphic computing; spiking neural network (SNN); in-sensor computing; artificial intelligence
6. Task Offloading in Edge Computing Using GNNs and DQN
Authors: Asier Garmendia-Orbegozo, Jose David Nunez-Gonzalez, Miguel Angel Anton. 《Computer Modeling in Engineering & Sciences》, SCIE, EI, 2024, Issue 6, pp. 2649-2671 (23 pages).
In a network environment composed of different types of computing centers arranged in layers (cloud, edge, and others), the interconnection between them offers the possibility of peer-to-peer task offloading. For many resource-constrained devices, computing certain tasks is not feasible because the devices lack the memory and processing capacity to support them. In this scenario, it is worth transferring these tasks to resource-rich platforms such as edge data centers or remote cloud servers. Depending on the properties and state of the environment and the nature of the tasks, it is appropriate to offload different tasks to different destinations. At the same time, establishing an optimal offloading policy, one that executes all tasks within the required latency while avoiding excessive workload on specific computing centers, is not easy. This study presents two alternatives for the offloading decision problem based on two well-known algorithms: Graph Neural Networks (GNN) and Deep Q-Networks (DQN). It applies them in the PureEdgeSim edge computing simulator and compares them with the two default methods, Trade-Off and Round Robin. Experiments showed that the variants offer a slight improvement in task success rate and workload distribution, with similar energy efficiency. Finally, the success rates of the different computing centers are tested, and the inability of remote cloud servers to respond to applications in real time is demonstrated. These ways of finding an offloading strategy in a local networking environment are novel in that they emulate the state and structure of the environment, considering the quality of its connections and constant updates. The offloading score defined in this research, a crucial feature for determining the quality of an offloading path in the GNN training process, has not previously been proposed. The suitability of Reinforcement Learning (RL) techniques is likewise demonstrated by the dynamism of the network environment, considering all the key factors that affect the decision to offload a given task, including the actual state of all devices.
Keywords: edge computing; edge offloading; fog computing; task offloading
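The paper trains a Deep Q-Network inside PureEdgeSim; as a self-contained stand-in, the sketch below uses tabular Q-learning with an epsilon-greedy policy over three offloading destinations. The state encoding, reward shape, and hyperparameters are illustrative assumptions.

```python
import random
from collections import defaultdict

DESTINATIONS = ["local", "edge_dc", "cloud"]
Q = defaultdict(float)               # Q[(state, action)] -> estimated value
alpha, gamma, eps = 0.1, 0.9, 0.2    # assumed hyperparameters

def choose_destination(state):
    if random.random() < eps:        # explore occasionally
        return random.choice(DESTINATIONS)
    return max(DESTINATIONS, key=lambda a: Q[(state, a)])  # exploit

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in DESTINATIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# Illustrative step: reward is +1 when the task met its latency budget.
state = ("high_load", "small_task")
action = choose_destination(state)
update(state, action, reward=1.0, next_state=("low_load", "small_task"))
print(action, Q[(state, action)])
```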
7. Online Learning-Based Offloading Decision and Resource Allocation in Mobile Edge Computing-Enabled Satellite-Terrestrial Networks
Authors: Tong Minglei, Li Song, Han Wanjiang, Wang Xiaoxiang. 《China Communications》, SCIE, CSCD, 2024, Issue 3, pp. 230-246 (17 pages).
Mobile edge computing (MEC)-enabled satellite-terrestrial networks (STNs) can provide Internet of Things (IoT) devices with global computing services. Sometimes, however, the network state information is uncertain or unknown. To deal with this situation, we investigate online learning-based offloading decisions and resource allocation in MEC-enabled STNs. We formulate the problem of minimizing the average sum task completion delay of all IoT devices over all time periods and decompose it into a task offloading decision problem and a computing resource allocation problem. We then propose a joint optimization scheme consisting of a task offloading decision algorithm based on a device-cooperation-aided upper confidence bound (UCB) algorithm and a computing resource allocation algorithm based on the Lagrange multiplier method. Simulation results validate that the proposed scheme outperforms the baseline schemes.
Keywords: computing resource allocation; mobile edge computing; satellite-terrestrial networks; task offloading decision
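The offloading decision above builds on the classic UCB idea: try each target once, then repeatedly pick the target with the best mean reward plus an exploration bonus. A minimal UCB1 sketch follows, with the arm names, the reward 1/(1+delay), and the toy delays all assumed rather than taken from the paper.

```python
import math

def ucb_select(counts, means, t):
    """Pick the arm maximizing mean + sqrt(2 ln t / n); unplayed arms first."""
    for arm, n in counts.items():
        if n == 0:
            return arm
    return max(counts, key=lambda a: means[a] + math.sqrt(2 * math.log(t) / counts[a]))

arms = ["local", "LEO_sat", "ground_MEC"]
counts = {a: 0 for a in arms}
means = {a: 0.0 for a in arms}
observed_delay = {"local": 0.9, "LEO_sat": 0.4, "ground_MEC": 0.6}  # toy data

for t in range(1, 201):
    arm = ucb_select(counts, means, t)
    reward = 1.0 / (1.0 + observed_delay[arm])  # lower delay -> higher reward
    counts[arm] += 1
    means[arm] += (reward - means[arm]) / counts[arm]  # incremental mean

print(max(means, key=means.get))  # converges to the lowest-delay target
```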
8. Developments of Computing in Papua New Guinea in the Post-Independence Era
Authors: Zhaohao Sun, Xuehui Wei, Francisca Pambel. 《Journal of Computer and Communications》, 2024, Issue 8, pp. 141-160 (20 pages).
This article looks at the developments of computing in Papua New Guinea (PNG) in the post-independence era. More specifically, it examines the development of national policies on Information and Communications Technology (ICT), digital technologies in PNG, and the development of computing education in PNG since 1975. The research findings reveal that PNG has made solid progress in computing, ICT, national ICT policies, digital technologies, and computing education at universities in the post-independence era. The proposed approach might facilitate the research and development of computing, ICT, digital technologies, and big data analytics in PNG and beyond.
Keywords: computing; digital technologies; ICT; computing education; Papua New Guinea
9. Quantum-Edge Cloud Computing for IoT: Bridging the Gap between Cloud, Edge, and Quantum Technologies
Authors: Shahanaz Akter, Md. Khairul Islam Bhuiyan, Md. Bahauddin Badhon, Habib Md. Hasan, Fatema Akter, Mohammad Nahid Ul Islam. 《Advances in Internet of Things》, 2024, Issue 4, pp. 99-120 (22 pages).
The rapid expansion of the Internet of Things (IoT) has driven the need for advanced computational frameworks capable of handling the complex data processing and security challenges that modern IoT applications demand. However, traditional cloud computing frameworks face significant latency, scalability, and security issues. Quantum-Edge Cloud Computing (QECC) offers an innovative solution by integrating the computational power of quantum computing with the low-latency advantages of edge computing and the scalability of cloud computing resources. This study is grounded in an extensive literature review, performance improvements, and metrics data from Bangladesh, focusing on smart city infrastructure, healthcare monitoring, and the industrial IoT sector. The discussion covers vital elements, including integrating quantum cryptography to enhance data security, the critical role of edge computing in reducing response times, and cloud computing's ability to support large-scale IoT networks with its extensive resources. Through case studies such as the application of quantum sensors in autonomous vehicles, the practical impact of QECC is demonstrated. Additionally, the paper outlines future research opportunities, including developing quantum-resistant encryption techniques and optimizing quantum algorithms for edge computing. The convergence of these technologies in QECC has the potential to overcome the current limitations of IoT frameworks, setting a new standard for future IoT applications.
Keywords: Quantum-Edge Cloud Computing (QECC); Internet of Things (IoT); low latency; quantum computing (QC); scalable cloud services
10. Recent Advances in In-Memory Computing: Exploring Memristor and Memtransistor Arrays with 2D Materials (Cited: 1)
Authors: Hangbo Zhou, Sifan Li, Kah-Wee Ang, Yong-Wei Zhang. 《Nano-Micro Letters》, SCIE, EI, CAS, CSCD, 2024, Issue 7, pp. 1-30 (30 pages).
The conventional computing architecture faces substantial challenges, including high latency and energy consumption between memory and processing units. In response, in-memory computing has emerged as a promising alternative, enabling computing operations within memory arrays to overcome these limitations. Memristive devices have gained significant attention as key components for in-memory computing due to their high-density arrays, rapid response times, and ability to emulate biological synapses. Among these devices, two-dimensional (2D) material-based memristor and memtransistor arrays have emerged as particularly promising candidates for next-generation in-memory computing, thanks to their exceptional performance driven by the unique properties of 2D materials, such as layered structures, mechanical flexibility, and the capability to form heterojunctions. This review delves into the state-of-the-art research on 2D material-based memristive arrays, encompassing critical aspects such as material selection, device performance metrics, array structures, and potential applications. Furthermore, it provides a comprehensive overview of the current challenges and limitations associated with these arrays, along with potential solutions. The primary objective of this review is to serve as a significant milestone in realizing next-generation in-memory computing utilizing 2D materials and to bridge the gap from single-device characterization to array-level and system-level implementations of neuromorphic computing, leveraging the potential of 2D material-based memristive devices.
Keywords: 2D materials; memristors; memtransistors; crossbar array; in-memory computing
11. Multiframe-integrated, in-sensor computing using persistent photoconductivity (Cited: 1)
Authors: Xiaoyong Jiang, Minrui Ye, Yunhai Li, Xiao Fu, Tangxin Li, Qixiao Zhao, Jinjin Wang, Tao Zhang, Jinshui Miao, Zengguang Cheng. 《Journal of Semiconductors》, EI, CAS, CSCD, 2024, Issue 9, pp. 36-41 (6 pages).
The utilization of processing capabilities within the detector holds significant promise for addressing energy consumption and latency challenges, especially in dynamic motion recognition tasks, where the generation of extensive information and the need for frame-by-frame analysis necessitate substantial data transfers. Herein, we present a novel approach for dynamic motion recognition, leveraging a spatial-temporal in-sensor computing system rooted in multiframe integration with photodetectors. Our approach introduces a retinomorphic MoS₂ photodetector device for motion detection and analysis. The device generates informative final states that nonlinearly embed both past and present frames. Subsequent multiply-accumulate (MAC) calculations are then efficiently performed as the classifier. When evaluating our devices for target detection and direction classification, we achieved an impressive recognition accuracy of 93.5%. By eliminating the need for frame-by-frame analysis, our system not only achieves high precision but also facilitates energy-efficient in-sensor computing.
Keywords: in-sensor computing; MoS₂ photodetector; persistent photoconductivity; reservoir computing
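The classification step described above reduces to one matrix-vector multiply-accumulate over the detector's final states. A hedged NumPy sketch follows, with the readout weights and the four motion classes assumed rather than taken from the paper.

```python
import numpy as np

# The detector array leaves one "final state" per pixel that already mixes
# several frames; classification is a single MAC layer over those states.

rng = np.random.default_rng(0)
n_pixels, n_classes = 28 * 28, 4            # e.g., up/down/left/right motion

final_states = rng.random(n_pixels)         # stand-in for device conductances
W = rng.normal(size=(n_classes, n_pixels))  # trained readout weights (assumed)
b = np.zeros(n_classes)

scores = W @ final_states + b               # the MAC step: one matrix-vector product
print("predicted motion class:", int(np.argmax(scores)))
```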
12. Fabrication and integration of photonic devices for phase-change memory and neuromorphic computing (Cited: 1)
Authors: Wen Zhou, Xueyang Shen, Xiaolong Yang, Jiangjing Wang, Wei Zhang. 《International Journal of Extreme Manufacturing》, SCIE, EI, CAS, CSCD, 2024, Issue 2, pp. 2-27 (26 pages).
In the past decade, there has been tremendous progress in integrating chalcogenide phase-change materials (PCMs) on the silicon photonic platform for applications from non-volatile memory to neuromorphic in-memory computing. In particular, these non-von Neumann computational elements and systems benefit from the mass manufacturing of silicon photonic integrated circuits (PICs) on 8-inch wafers using a 130 nm complementary metal-oxide-semiconductor line. Chip manufacturing based on deep-ultraviolet lithography and electron-beam lithography enables rapid prototyping of PICs, which can be integrated with high-quality PCMs deposited by wafer-scale sputtering as a back-end-of-line process. In this article, we present an overview of recent advances in waveguide-integrated PCM memory cells, functional devices, and neuromorphic systems, with an emphasis on the fabrication and integration processes needed to attain state-of-the-art device performance. After a short overview of PCM-based photonic devices, we discuss the materials properties of the functional layer as well as progress on the light-guiding layer, namely the silicon and germanium waveguide platforms. Next, we discuss the cleanroom fabrication flow of waveguide devices integrated with thin films and nanowires, silicon waveguides, and plasmonic microheaters for the electrothermal switching of PCMs and mixed-mode operation. Finally, the fabrication of photonic and photonic-electronic neuromorphic computing systems is reviewed. These systems consist of arrays of PCM memory elements for associative learning, matrix-vector multiplication, and pattern recognition. With large-scale integration, the neuromorphic photonic computing paradigm holds the promise to outperform digital electronic accelerators by taking advantage of ultra-high bandwidth, high speed, and energy-efficient operation in running machine learning algorithms.
Keywords: nanofabrication; silicon photonics; phase-change materials; non-volatile photonic memory; neuromorphic photonic computing
13. Air-Ground Collaborative Mobile Edge Computing: Architecture, Challenges, and Opportunities (Cited: 1)
Authors: Qin Zhen, He Shoushuai, Wang Hai, Qu Yuben, Dai Haipeng, Xiong Fei, Wei Zhenhua, Li Hailong. 《China Communications》, SCIE, CSCD, 2024, Issue 5, pp. 1-16 (16 pages).
By pushing computation, caching, and network control to the edge, mobile edge computing (MEC) is expected to play a leading role in fifth generation (5G) and future sixth generation (6G) networks. Nevertheless, facing ubiquitous, fast-growing computational demands, a single MEC paradigm cannot effectively support high-quality intelligent services at end user equipments (UEs). To address this issue, we propose an air-ground collaborative MEC (AGC-MEC) architecture in this article. The proposed AGC-MEC integrates all potentially available MEC servers in the air and on the ground in the envisioned 6G, using a variety of collaborative mechanisms to provide the best possible computation services for UEs. Firstly, we introduce the AGC-MEC architecture and elaborate three typical use cases. Then, we discuss four main challenges in AGC-MEC and their potential solutions. Next, we conduct a case study of collaborative service placement to validate the effectiveness of the proposed placement strategy. Finally, we highlight several potential research directions for AGC-MEC.
Keywords: air-ground; architecture; collaborative; mobile edge computing
14. MCWOA Scheduler: Modified Chimp-Whale Optimization Algorithm for Task Scheduling in Cloud Computing (Cited: 1)
Authors: Chirag Chandrashekar, Pradeep Krishnadoss, Vijayakumar Kedalu Poornachary, Balasundaram Ananthakrishnan. 《Computers, Materials & Continua》, SCIE, EI, 2024, Issue 2, pp. 2593-2616 (24 pages).
Cloud computing provides a diverse and adaptable resource pool over the internet, allowing users to tap into various resources as needed. It has been seen as a robust solution to relevant challenges, but significant delay can hamper the performance of IoT-enabled cloud platforms. Efficient task scheduling can lower the cloud infrastructure's energy consumption, thus maximizing the service provider's revenue by decreasing user job processing times. The proposed Modified Chimp-Whale Optimization Algorithm (MCWOA) combines elements of the Chimp Optimization Algorithm (COA) and the Whale Optimization Algorithm (WOA). To enhance MCWOA's identification precision, the Sobol sequence is used in the population initialization phase, ensuring an even distribution of the population across the solution space. Moreover, the algorithm's local search capabilities are augmented by incorporating the whale optimization algorithm's bubble-net hunting and random search mechanisms into MCWOA's position-updating process. This study demonstrates the effectiveness of the proposed approach using a two-story rigid frame and a simply supported beam model. Simulated outcomes reveal that the new method outperforms the original MCWOA, especially in multi-damage detection scenarios. MCWOA excels in avoiding false positives and enhancing computational speed, making it an optimal choice for structural damage detection. The efficiency of the proposed MCWOA is assessed against metrics such as energy usage, computational expense, task duration, and delay. The simulated data indicate that the new MCWOA outpaces other methods across all metrics. The study also references the Whale Optimization Algorithm (WOA), Chimp Algorithm (CA), Ant Lion Optimizer (ALO), Genetic Algorithm (GA), and Grey Wolf Optimizer (GWO).
Keywords: cloud computing; scheduling; chimp optimization algorithm; whale optimization algorithm
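The Sobol-sequence initialization credited above is easy to reproduce. The sketch below draws a low-discrepancy initial population with SciPy and compares its discrepancy (a measure of how evenly the points cover the space; lower is better) against a plain uniform-random population. The search bounds and population size are assumptions.

```python
import numpy as np
from scipy.stats import qmc

dim, pop_size = 10, 32
lower, upper = np.zeros(dim), np.ones(dim) * 100.0   # assumed search bounds

sobol = qmc.Sobol(d=dim, scramble=True, seed=42)
population = qmc.scale(sobol.random(pop_size), lower, upper)  # low-discrepancy init

random_pop = np.random.default_rng(42).uniform(lower, upper, (pop_size, dim))

# Discrepancy is computed on the unit hypercube, hence the rescaling.
print(qmc.discrepancy(population / 100.0))   # Sobol: lower, more even coverage
print(qmc.discrepancy(random_pop / 100.0))   # uniform random: higher
```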
15. IRS Assisted UAV Communications against Proactive Eavesdropping in Mobile Edge Computing Networks (Cited: 1)
Authors: Ying Zhang, Weiming Niu, Leibing Yan. 《Computer Modeling in Engineering & Sciences》, SCIE, EI, 2024, Issue 1, pp. 885-902 (18 pages).
In this paper, we consider mobile edge computing (MEC) networks under proactive eavesdropping. To maximize the transmission rate, IRS-assisted UAV communications are applied, and we jointly design the trajectory of the UAV, the transmit beamforming of the users, and the phase shift matrix of the IRS. The original problem is strongly non-convex and difficult to solve. We first propose two basic modes of the proactive eavesdropper and obtain the closed-form solution for the boundary conditions of the two modes. We then transform the original problem into an equivalent one and propose an alternating optimization (AO) based method to obtain a locally optimal solution. The convergence of the algorithm is illustrated by numerical results. Further, we propose a zero-forcing (ZF) based method as a sub-optimal solution, and the simulation section shows that the two proposed schemes obtain better performance than traditional schemes.
Keywords: mobile edge computing (MEC); unmanned aerial vehicle (UAV); intelligent reflecting surface (IRS); zero forcing (ZF)
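A skeleton of the alternating-optimization (AO) pattern the abstract proposes: fix two of the three variable blocks (trajectory, beamforming, IRS phases) and optimize the third, cycling until the objective stops improving. The toy quadratic objective below only illustrates the control flow; the paper's real per-block subproblems are convex programs, not closed-form updates.

```python
def ao(rate, solvers, x, tol=1e-6, max_iter=100):
    """Alternating optimization: update one variable block at a time."""
    prev = rate(x)
    for _ in range(max_iter):
        for i, solve in enumerate(solvers):   # per-block subproblem
            x[i] = solve(x)
        cur = rate(x)
        if abs(cur - prev) < tol:             # monotone + bounded -> converges
            return x, cur
        prev = cur
    return x, prev

# Toy stand-ins: "trajectory", "beamforming", "phase" each a scalar block.
rate = lambda x: -((x[0] - 1) ** 2 + (x[1] - 2) ** 2 + (x[2] - 3) ** 2)
solvers = [lambda x: 1.0, lambda x: 2.0, lambda x: 3.0]  # per-block optima
print(ao(rate, solvers, [0.0, 0.0, 0.0]))    # converges to ([1, 2, 3], 0)
```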
16. Security Implications of Edge Computing in Cloud Networks (Cited: 1)
Authors: Sina Ahmadi. 《Journal of Computer and Communications》, 2024, Issue 2, pp. 26-46 (21 pages).
Security issues in cloud networks and edge computing have become very common. This research focuses on analyzing such issues and developing the best solutions, based on a detailed literature review. The findings show that many challenges are linked to edge computing, such as privacy concerns, security breaches, high costs, and low efficiency, so proper security measures are needed to overcome them. Emerging techniques like machine learning, encryption, artificial intelligence, and real-time monitoring can help mitigate these security issues and support a secure future for cloud computing. It was concluded that the security implications of edge computing can be covered with the help of new technologies and techniques.
Keywords: edge computing; cloud networks; artificial intelligence; machine learning; cloud security
17. Secure Computation Efficiency Resource Allocation for Massive MIMO-Enabled Mobile Edge Computing Networks
Authors: Sun Gangcan, Sun Jiwei, Hao Wanming, Zhu Zhengyu, Ji Xiang, Zhou Yiqing. 《China Communications》, SCIE, CSCD, 2024, Issue 11, pp. 150-162 (13 pages).
In this article, the secure computation efficiency (SCE) problem is studied in a massive multiple-input multiple-output (mMIMO)-assisted mobile edge computing (MEC) network. We first derive the secure transmission rate based on mMIMO under imperfect channel state information. Based on this, the SCE maximization problem is formulated by jointly optimizing the local computation frequency, the offloading time, the downloading time, and the transmit powers of the users and the base station. Because the formulated problem is difficult to solve directly, we first transform the fractional objective function into subtractive form via the Dinkelbach method. Next, the problem is transformed into a convex one by applying the successive convex approximation technique, and an iterative algorithm is proposed to obtain the solution. Finally, simulations are conducted to show that the performance of the proposed schemes is superior to that of the other schemes.
Keywords: eavesdropping; massive multiple input multiple output; mobile edge computing; partial offloading; secure computation efficiency
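The Dinkelbach step above replaces a fractional objective f(x)/g(x) with the subtractive form f(x) - lam*g(x) and updates lam to the achieved ratio each round, stopping when the subtractive optimum reaches zero. A minimal sketch follows, with toy one-dimensional f, g (e.g., secure bits over energy) and a grid search standing in for the paper's convex subproblem.

```python
import numpy as np

def dinkelbach(f, g, candidates, tol=1e-8, max_iter=100):
    lam = 0.0
    for _ in range(max_iter):
        x = max(candidates, key=lambda x: f(x) - lam * g(x))  # inner problem
        if abs(f(x) - lam * g(x)) < tol:   # optimality condition F(lam) = 0
            return x, lam
        lam = f(x) / g(x)                  # update the ratio
    return x, lam

f = lambda p: np.log2(1 + 10 * p)          # toy "secure bits" model
g = lambda p: p + 0.1                      # toy energy model
grid = np.linspace(0.01, 1.0, 1000)
p_opt, efficiency = dinkelbach(f, g, grid)
print(f"p* = {p_opt:.3f}, efficiency = {efficiency:.3f}")
```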
18. Exploring reservoir computing: Implementation via double stochastic nanowire networks
Authors: 唐健峰, 夏磊, 李广隶, 付军, 段书凯, 王丽丹. 《Chinese Physics B》, SCIE, EI, CAS, CSCD, 2024, Issue 3, pp. 572-582 (11 pages).
Neuromorphic computing, inspired by the human brain, uses memristor devices for complex tasks. Recent studies show that self-organizing random nanowires can implement neuromorphic information processing, enabling data analysis. This paper presents a model based on these nanowire networks, with an improved conductance variation profile. We suggest using these networks for temporal information processing via a reservoir computing scheme and propose an efficient data encoding method using voltage pulses. The nanowire network layer generates dynamic behaviors in response to pulse voltages, allowing time series prediction analysis. Our experiments use a double stochastic nanowire network architecture to process multiple input signals, outperforming traditional reservoir computing in terms of fewer nodes, enriched dynamics, and improved prediction accuracy. Experimental results confirm the high accuracy of this architecture on multiple real time-series datasets, making neuromorphic nanowire networks promising for the physical implementation of reservoir computing.
Keywords: double-layer stochastic (DS) nanowire network architecture; neuromorphic computation; nanowire network; reservoir computing; time series prediction
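In the reservoir-computing scheme above, only a linear readout is trained. As a stand-in for the nanowire network's physical dynamics, the sketch below drives a fixed random echo-state-style reservoir with a pulse-like input series and fits the readout by ridge regression; the sizes, leak rate, and sine-prediction task are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
n_res, leak, ridge = 100, 0.3, 1e-6
W_in = rng.uniform(-0.5, 0.5, n_res)            # fixed input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))      # fixed recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W))) # spectral radius < 1

u = np.sin(np.linspace(0, 20 * np.pi, 2000))    # input series
x = np.zeros(n_res)
states = []
for u_t in u[:-1]:                              # drive the reservoir
    x = (1 - leak) * x + leak * np.tanh(W_in * u_t + W @ x)
    states.append(x.copy())
X, y = np.array(states), u[1:]                  # predict the next sample

# Ridge-regression readout: solve (X^T X + ridge*I) w = X^T y.
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
print("train MSE:", np.mean((X @ W_out - y) ** 2))
```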
19. Performance Comparison of Hyper-V and KVM for Cryptographic Tasks in Cloud Computing
Authors: Nader Abdel Karim, Osama A. Khashan, Waleed K. Abdulraheem, Moutaz Alazab, Hasan Kanaker, Mahmoud E. Farfoura, Mohammad Alshinwan. 《Computers, Materials & Continua》, SCIE, EI, 2024, Issue 2, pp. 2023-2045 (23 pages).
As the extensive use of cloud computing raises questions about the security of any personal data stored there, cryptography is being used more frequently as a security tool to protect data confidentiality and privacy in the cloud environment. A hypervisor is the virtualization software used in cloud hosting to divide and allocate resources on various pieces of hardware, and the choice of hypervisor can significantly impact the performance of cryptographic operations. An important issue is that no hypervisor is completely superior in performance; each should be examined against specific needs. The main objective of this study is to provide accurate results comparing the performance of Hyper-V and the Kernel-based Virtual Machine (KVM) while implementing different cryptographic algorithms, to guide cloud service providers and end users in choosing the most suitable hypervisor for their cryptographic needs. This study evaluated the efficiency of the two hypervisors in implementing six cryptographic algorithms: Rivest-Shamir-Adleman (RSA), Advanced Encryption Standard (AES), Triple Data Encryption Standard (TripleDES), Carlisle Adams and Stafford Tavares (CAST-128), Blowfish, and Twofish. The findings show that KVM outperforms Hyper-V, with 12.2% less Central Processing Unit (CPU) use and 12.95% less time overall for encryption and decryption operations across various file sizes. The findings emphasize how crucial it is to pick a hypervisor appropriate for cryptographic needs in a cloud environment, which could assist both cloud service providers and end users. Future research may focus on how various hypervisors perform while handling other cryptographic workloads.
Keywords: cloud computing; performance; virtualization; hypervisors; Hyper-V; KVM; cryptographic algorithm
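A guest-side measurement of the kind such a comparison relies on: time AES encryption of a fixed payload and report throughput, running the same script in a Hyper-V guest and a KVM guest. This sketch assumes the third-party `cryptography` package; the payload size and AES-256-CTR mode are arbitrary choices for illustration, not the paper's exact methodology.

```python
import os, time
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_throughput(payload_mb=64, runs=5):
    """Return best-of-N AES-256-CTR encryption throughput in MB/s."""
    key, nonce = os.urandom(32), os.urandom(16)
    data = os.urandom(payload_mb * 1024 * 1024)
    best = float("inf")
    for _ in range(runs):
        enc = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
        t0 = time.perf_counter()
        enc.update(data)
        enc.finalize()
        best = min(best, time.perf_counter() - t0)  # best-of-N reduces noise
    return payload_mb / best

print(f"AES-256-CTR: {aes_throughput():.1f} MB/s")
```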
20. For Mega-Constellations: Edge Computing and Safety Management Based on Blockchain Technology
Authors: Zhen Zhang, Bing Guo, Chengjie Li. 《China Communications》, SCIE, CSCD, 2024, Issue 2, pp. 59-73 (15 pages).
In mega-constellation communication systems, efficient routing algorithms and data transmission technologies are employed to ensure fast and reliable data transfer. However, the limited computational resources of satellites necessitate the use of edge computing to enhance secure communication. While edge computing reduces the burden on cloud computing, it introduces security and reliability challenges in open satellite communication channels. To address these challenges, we propose a blockchain architecture specifically designed for edge computing in mega-constellation communication systems. This architecture narrows the consensus scope of the blockchain to meet the requirements of edge computing while ensuring comprehensive log storage across the network. Additionally, we introduce a reputation management mechanism for nodes within the blockchain, evaluating their trustworthiness, workload, and efficiency. Nodes with higher reputation scores are selected to participate in tasks and are appropriately incentivized. Simulation results demonstrate that our approach achieves a task result reliability of 95% while improving computational speed.
Keywords: blockchain; consensus mechanism; edge computing; mega-constellation; reputation management
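A hedged sketch of the reputation mechanism summarized above: score each node on trustworthiness, workload, and efficiency, then select the top-scoring nodes for the next task. The weights and the linear scoring rule are illustrative assumptions, not the paper's exact mechanism.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    trust: float       # fraction of past results that validated correctly
    workload: float    # 0 (idle) .. 1 (saturated)
    efficiency: float  # normalized tasks completed per unit time

def reputation(n: Node, w=(0.5, 0.2, 0.3)):
    # Lower workload is better, so it enters inverted.
    return w[0] * n.trust + w[1] * (1 - n.workload) + w[2] * n.efficiency

def select_workers(nodes, k=2):
    return sorted(nodes, key=reputation, reverse=True)[:k]

nodes = [Node("sat-edge-1", 0.98, 0.7, 0.8),
         Node("sat-edge-2", 0.80, 0.2, 0.9),
         Node("sat-edge-3", 0.99, 0.9, 0.4)]
for n in select_workers(nodes):
    print(n.name, round(reputation(n), 3))
```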