Journal Articles
106 articles found
A Review and Analysis of Localization Techniques in Underwater Wireless Sensor Networks (Cited by 1)
1
Authors: Seema Rani Anju (+6 co-authors) Anupma Sangwan Krishna Kumar Kashif Nisar Tariq Rahim Soomro Ag.Asri Ag.Ibrahim Manoj Gupta Laxmi Chandand Sadiq Ali Khan 《Computers, Materials & Continua》 SCIE EI 2023, No. 6, pp. 5697-5715 (19 pages)
In recent years, there has been rapid growth in Underwater Wireless Sensor Networks (UWSNs). The focus of research in this area is now on solving the problems associated with large-scale UWSNs. One of the major issues in such a network is the localization of underwater nodes. Localization is required for tracking objects and detecting targets. It is also essential for tagging data, since sensed content is of little use to an application until its position is confirmed. This article's major goal is to review and analyze underwater node localization and the localization issues in UWSNs. The paper describes various existing localization schemes and broadly categorizes them as centralized and distributed underwater localization schemes. A detailed subdivision of these schemes is given, and the schemes are compared from different perspectives. A detailed analysis of these schemes in terms of key performance metrics is presented. Finally, the paper addresses several future directions for research on improving localization in UWSNs.
Keywords: Underwater wireless sensor networks, localization schemes, node localization, ranging algorithms, estimation based, prediction based
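Many of the range-based schemes surveyed in this review estimate a node's position from distance measurements to a few anchor nodes. The following is a minimal, hypothetical sketch (not from the paper) of linearized least-squares trilateration with NumPy; the anchor coordinates and ranges are made-up values.

```python
import numpy as np

# Hypothetical anchor positions (x, y, z) in metres and measured ranges to the unknown node.
anchors = np.array([[0.0, 0.0, 0.0],
                    [100.0, 0.0, -10.0],
                    [0.0, 100.0, -20.0],
                    [100.0, 100.0, -5.0]])
true_pos = np.array([40.0, 60.0, -30.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1) + np.random.normal(0, 0.5, 4)  # noisy ranges

# Linearize by subtracting the first anchor's range equation from the others:
# 2(a_i - a_1) . p = ||a_i||^2 - ||a_1||^2 + r_1^2 - r_i^2, which is linear in p.
a1, r1 = anchors[0], ranges[0]
A = 2 * (anchors[1:] - a1)
b = (np.sum(anchors[1:] ** 2, axis=1) - np.sum(a1 ** 2)
     + r1 ** 2 - ranges[1:] ** 2)
est, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated position:", est, " error (m):", np.linalg.norm(est - true_pos))
```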
Qualitative Abnormalities of Peripheral Blood Smear Images Using Deep Learning Techniques
2
Authors: G.Arutperumjothi K.Suganya Devi (+1 co-author) C.Rani P.Srinivasan 《Intelligent Automation & Soft Computing》 SCIE 2023, No. 1, pp. 1069-1086 (18 pages)
In recent years, peripheral blood smear analysis has become a generic test for assessing a person's health status. Manual examination of peripheral blood smear images is difficult, time-consuming, and subject to human intervention and visual error. This has encouraged researchers to propose algorithms and techniques that perform peripheral blood smear analysis with the help of computer-assisted, decision-making techniques. Existing CAD-based methods fall short of accurately detecting the abnormalities present in the images. To mitigate this issue, an automatic classification technique based on a Deep Convolution Neural Network (DCNN) is introduced for classifying eight groups of peripheral blood cells such as basophil, eosinophil, lymphocyte, monocyte, neutrophil, erythroblast, platelet, myocyte, promyocyte, and metamyocyte. The proposed DCNN model employs a transfer learning approach and comprises three stages: pre-processing, feature extraction, and classification. Initially, pre-processing steps eliminate noisy content in the image using Histogram Equalization (HE), which also improves image contrast. To distinguish the dissimilar classes, segmentation is carried out with a Fuzzy C-Means (FCM) model whose centroid points are optimized using a Salp Swarm based optimization strategy. Moreover, a specific set of Gray Level Co-occurrence Matrix (GLCM) features of the segmented images is extracted to augment the performance of the proposed detection algorithm. Finally, the extracted features are passed to the DCNN, and the proposed classifier is also capable of extracting its own features. On this basis, the diverse set of classes is classified and distinguished from the qualitative abnormalities found in the image.
Keywords: Peripheral blood smear, DCNN classifier, pre-processing, segmentation, feature extraction, salp swarm optimization, classification
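As a small illustration of the histogram-equalization pre-processing stage described above, the following hypothetical NumPy sketch equalizes an 8-bit grayscale image via its cumulative histogram; it is a generic implementation, not the authors' code, and the synthetic image stands in for a real smear image.

```python
import numpy as np

def histogram_equalize(img: np.ndarray) -> np.ndarray:
    """Equalize an 8-bit grayscale image using its cumulative histogram."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_masked = np.ma.masked_equal(cdf, 0)            # ignore empty bins
    cdf_scaled = (cdf_masked - cdf_masked.min()) * 255 / (cdf_masked.max() - cdf_masked.min())
    lut = np.ma.filled(cdf_scaled, 0).astype(np.uint8)  # lookup table: old level -> new level
    return lut[img]

# Example with a synthetic low-contrast image.
img = np.clip(np.random.normal(110, 10, (64, 64)), 0, 255).astype(np.uint8)
eq = histogram_equalize(img)
print("contrast (std) before/after:", img.std(), eq.std())
```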
Moth Flame Optimization Based FCNN for Prediction of Bugs in Software
3
Authors: C.Anjali Julia Punitha Malar Dhas J.Amar Pratap Singh 《Intelligent Automation & Soft Computing》 SCIE 2023, No. 5, pp. 1241-1256 (16 pages)
Software engineering techniques make it possible to create high-quality software. One of the most significant qualities of good software is that it is devoid of bugs, and finding and fixing bugs is one of the most time-consuming and costly software processes. Although it is impossible to eradicate all bugs, it is feasible to reduce their number and their negative effects. To broaden the scope of bug prediction techniques and increase software quality, numerous causes of software problems must be identified and successful bug prediction models must be implemented. This study employs a hybrid of a Faster Convolution Neural Network (FCNN) and the Moth Flame Optimization (MFO) algorithm to forecast the number of bugs in software based on the program data itself, such as the number of lines of code, method characteristics, and other essential software aspects. Here, the MFO method is used to train the neural network to identify optimal weights. The proposed MFO-FCNN technique is compared with existing machine learning (ML) techniques such as AdaBoost (AB), Random Forest (RF), K-Nearest Neighbour (KNN), K-Means Clustering (KMC), Support Vector Machine (SVM), and Bagging Classifier (BC). The assessment revealed that machine learning techniques can be employed successfully with a high level of accuracy, and the obtained results showed that the proposed strategy outperforms the traditional approaches.
Keywords: Faster convolution neural network, Moth Flame Optimization (MFO), Support Vector Machine (SVM), AdaBoost (AB), software bug prediction
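The abstract benchmarks MFO-FCNN against several standard classifiers. The sketch below is a hypothetical, minimal version of that kind of baseline comparison using scikit-learn on synthetic software-metric data; the data are invented and the authors' MFO-FCNN model itself is not reproduced.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier, BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for code metrics (lines of code, method counts, complexity, ...).
X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                           weights=[0.7, 0.3], random_state=0)  # y = buggy / not buggy

baselines = {
    "AdaBoost (AB)": AdaBoostClassifier(random_state=0),
    "Random Forest (RF)": RandomForestClassifier(random_state=0),
    "K-Nearest Neighbour (KNN)": KNeighborsClassifier(),
    "Support Vector Machine (SVM)": SVC(),
    "Bagging Classifier (BC)": BaggingClassifier(random_state=0),
}
for name, clf in baselines.items():
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name:30s} mean CV accuracy = {acc:.3f}")
```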
Computation of PoA for Selfish Node Detection and Resource Allocation Using Game Theory
4
Authors: S.Kanmani M.Murali 《Computer Systems Science & Engineering》 SCIE EI 2023, No. 11, pp. 2583-2598 (16 pages)
The introduction of new technologies has increased communication network coverage and the number of associated nodes in dynamic communication networks (DCN). Because such a network is decentralized and dynamic, a few nodes in the network may not cooperate with other nodes. These uncooperative nodes, also known as selfish nodes, degrade the performance of the cooperative nodes: they cause congestion, high delay, security concerns, and resource depletion. This study presents an effective selfish node detection method to address these problems. The Price of Anarchy (PoA) and the Price of Stability (PoS) in game theory, in the presence of a Nash Equilibrium (NE), are used for selfish node detection. This is a novel experiment in detecting selfish nodes in a network using PoA. Moreover, a least-response-dynamic-based Capacitated Selfish Resource Allocation (CSRA) game is introduced to improve resource usage among the nodes. The suggested strategy is simulated using the Solar Winds simulator, and the simulation results show that, compared with earlier methods, the new scheme offers promising performance in terms of delivery rate, delay, and throughput.
Keywords: Dynamic communication network (DCN), price of anarchy (PoA), Nash equilibrium (NE), capacitated selfish resource allocation (CSRA) game, game theory, price of stability (PoS)
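To make the PoA and PoS notions used above concrete, here is a small, self-contained sketch (not from the paper) that brute-forces the pure Nash equilibria of a toy two-node resource-allocation game and computes PoA and PoS from the social cost; the cost functions are illustrative only.

```python
from itertools import product

# Two nodes each pick one resource; per-user cost depends on how many users share it.
resources = {"A": lambda load: load,        # cost grows with congestion
             "B": lambda load: 2 * load}    # more expensive resource
players = 2

def costs(profile):
    load = {r: profile.count(r) for r in resources}
    return [resources[r](load[r]) for r in profile]

def is_nash(profile):
    base = costs(profile)
    for i, r in enumerate(profile):
        for alt in resources:
            if alt != r:
                dev = list(profile)
                dev[i] = alt
                if costs(dev)[i] < base[i]:   # strictly profitable unilateral deviation
                    return False
    return True

profiles = list(product(resources, repeat=players))
social = {p: sum(costs(p)) for p in profiles}
nash = [p for p in profiles if is_nash(p)]
opt = min(social.values())
poa = max(social[p] for p in nash) / opt   # worst equilibrium vs. social optimum
pos = min(social[p] for p in nash) / opt   # best equilibrium vs. social optimum
print("Nash equilibria:", nash, " PoA =", round(poa, 3), " PoS =", round(pos, 3))
```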
Logistic Regression Trust–A Trust Model for Internet-of-Things Using Regression Analysis
5
Authors: Feslin Anish Mon Solomon Godfrey Winster Sathianesan R.Ramesh 《Computer Systems Science & Engineering》 SCIE EI 2023, No. 2, pp. 1125-1142 (18 pages)
The Internet of Things (IoT) is a popular social network in which devices are virtually connected for communicating and sharing information. It is applied widely in business enterprises and government sectors for delivering services to customers, clients, and citizens. However, interaction is successful only when devices trust one another, so trust is essential for such a social network. Because the Internet of Things has access to sensitive information, it is exposed to many threats that put data management at risk. This issue is addressed by trust management, which helps to decide on the trustworthiness of a requestor and a provider before communication and sharing take place. Several trust-based systems exist for different domains, using the dynamic weight method, fuzzy classification, Bayes inference, and, in a few cases, regression analysis for IoT. The proposed algorithm is based on logistic regression, which provides a strong statistical foundation for trust prediction. To strengthen the case for regression-based trust, its performance is compared with an equivalent Bayes analysis using the Beta distribution. The performance is studied in a simulated IoT setup with Quality of Service (QoS) and social parameters for the nodes, and the proposed model performs better in terms of various metrics. An IoT connects heterogeneous devices such as tags and sensor devices for information sharing and access to different application services. The most salient features of an IoT system are that it is designed for scalability, extendibility, compatibility, and resiliency against attack. In addition to the above features, the existing work finds a way to integrate direct and indirect trust to converge quickly and to estimate the bias due to attacks.
Keywords: LRTrust, logistic regression, trust management, internet of things
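As a toy illustration of regression-based trust prediction of the kind described above, the sketch below trains scikit-learn's LogisticRegression on synthetic QoS and social features; the feature set, labels, and thresholds are hypothetical and not taken from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical per-node observations: [packet delivery ratio, mean delay,
# energy level, community-of-interest overlap, friendship score].
X = rng.uniform(0, 1, size=(n, 5))
# Synthetic ground truth: nodes with good QoS and social overlap are trustworthy.
score = 2.5 * X[:, 0] - 1.5 * X[:, 1] + 1.0 * X[:, 3] + 0.5 * X[:, 4]
y = (score + rng.normal(0, 0.3, n) > 1.2).astype(int)      # 1 = trustworthy

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
trust_prob = model.predict_proba(X_te)[:, 1]                # trust value in [0, 1]
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
print("example trust values:", np.round(trust_prob[:5], 3))
```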
Nonlinear Dynamic System Identification of ARX Model for Speech Signal Identification
6
Authors: Rakesh Kumar Pattanaik Mihir N.Mohanty (+1 co-author) Srikanta Ku.Mohapatra Binod Ku.Pattanayak 《Computer Systems Science & Engineering》 SCIE EI 2023, No. 7, pp. 195-208 (14 pages)
System identification is crucial in the field of nonlinear, dynamic, and practical systems. Because most practical systems have no prior information about their behaviour, mathematical modelling is required. The authors propose a stacked Bidirectional Long Short-Term Memory (Bi-LSTM) model to handle the problem of nonlinear dynamic system identification. The proposed model is capable of faster learning and more accurate modelling because it can be trained in both forward and backward directions. The main advantage of Bi-LSTM over other algorithms is that it processes inputs in two ways: one from the past to the future and the other from the future to the past. In the proposed model, a backward-running Long Short-Term Memory (LSTM) stores information from the future, and applying the two hidden states together allows information from the past and the future to be stored at any moment in time. The proposed model is tested with a recorded speech signal to demonstrate its superiority, with performance evaluated through Mean Square Error (MSE) and Root Mean Square Error (RMSE). The RMSE and MSE obtained by the proposed model are 0.0218 and 0.0162, respectively, for 500 epochs. The comparison of results and further analysis illustrate that the proposed model achieves better performance than other models and can obtain higher prediction accuracy along with faster convergence speed.
Keywords: Nonlinear dynamic system identification, long short-term memory, bidirectional long short-term memory, auto-regressive with exogenous input
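For readers who want to see the general shape of such a model, here is a minimal, hypothetical PyTorch sketch of a stacked bidirectional LSTM trained to predict the next sample of a synthetic nonlinear signal; the layer sizes, window length, and data are illustrative and do not correspond to the paper's configuration.

```python
import torch
import torch.nn as nn

# Synthetic nonlinear signal standing in for the recorded speech samples.
torch.manual_seed(0)
t = torch.arange(2000, dtype=torch.float32)
signal = torch.sin(0.05 * t) + 0.3 * torch.sin(0.23 * t) ** 2 + 0.05 * torch.randn_like(t)

win = 20                                     # input window length
X = torch.stack([signal[i:i + win] for i in range(len(signal) - win)]).unsqueeze(-1)
y = signal[win:].unsqueeze(-1)               # one-step-ahead target

class BiLSTMIdentifier(nn.Module):
    def __init__(self, hidden=32, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, num_layers=layers,
                            batch_first=True, bidirectional=True)  # stacked Bi-LSTM
        self.out = nn.Linear(2 * hidden, 1)   # forward + backward hidden states
    def forward(self, x):
        h, _ = self.lstm(x)
        return self.out(h[:, -1, :])          # predict from the last time step

model = BiLSTMIdentifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(5):                        # a few epochs just to show the loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
    print(f"epoch {epoch}  MSE = {loss.item():.4f}")
```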
Implementation of VLSI on Signal Processing-Based Digital Architecture Using AES Algorithm
7
Authors: Mohanapriya Marimuthu Santhosh Rajendran (+5 co-authors) Reshma Radhakrishnan Kalpana Rengarajan Shahzada Khurram Shafiq Ahmad Abdelaty Edrees Sayed Muhammad Shafiq 《Computers, Materials & Continua》 SCIE EI 2023, No. 3, pp. 4729-4745 (17 pages)
Continuous improvements in very-large-scale integration (VLSI) technology and design software have significantly broadened the scope of digital signal processing (DSP) applications. The use of application-specific integrated circuits (ASICs) and programmable digital signal processors for many DSP applications has changed, even as new system implementations based on reconfigurable computing become more complex. Adaptable platforms that combine hardware and software programmability are rapidly maturing alongside discrete wavelet transformation (DWT) and sophisticated computerized design techniques, which are much needed today. New research and commercial efforts to sustain power optimization, cost savings, and improved runtime effectiveness have been initiated as the first reconfigurable technologies have emerged. Hence, this paper proposes that the DWT method can be implemented on a field-programmable gate array in a digital architecture (FPGA-DA). We examined the effects of quantization on DWT performance in classification problems to demonstrate its reliability with respect to fixed-point arithmetic implementations. The Advanced Encryption Standard (AES) algorithm for DWT learning used in this architecture is less sensitive to resampling errors than the previously proposed solution in the literature using the artificial neural network (ANN) method. The proposed system reduces hardware area by 57% and achieves a higher throughput rate of 88.72% and a reliability of 95.5% compared with other standard methods.
Keywords: VLSI, AES, discrete wavelet transformation, signal processing
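The abstract studies how fixed-point quantization affects DWT results. The following hypothetical sketch uses the PyWavelets library to take a single-level Haar DWT of a test signal, quantize the coefficients to a chosen number of fractional bits, and measure the reconstruction error; the signal and bit-widths are made up for illustration.

```python
import numpy as np
import pywt

# Test signal standing in for a DSP input stream.
t = np.linspace(0, 1, 512)
x = np.sin(2 * np.pi * 13 * t) + 0.2 * np.random.randn(512)

cA, cD = pywt.dwt(x, "haar")                  # single-level discrete wavelet transform

def quantize_fixed_point(c, frac_bits):
    """Round coefficients to a fixed-point grid with the given fractional bits."""
    step = 2.0 ** (-frac_bits)
    return np.round(c / step) * step

for frac_bits in (4, 8, 12):
    xq = pywt.idwt(quantize_fixed_point(cA, frac_bits),
                   quantize_fixed_point(cD, frac_bits), "haar")
    err = np.sqrt(np.mean((x - xq[:len(x)]) ** 2))
    print(f"{frac_bits:2d} fractional bits -> reconstruction RMSE = {err:.5f}")
```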
Homogeneous Batch Memory Deduplication Using Clustering of Virtual Machines
8
Authors: N.Jagadeeswari V.Mohan Raj 《Computer Systems Science & Engineering》 SCIE EI 2023, No. 1, pp. 929-943 (15 pages)
Virtualization is the backbone of cloud computing, a developing and widely used paradigm. By finding and merging identical memory pages, memory deduplication improves memory efficiency in virtualized systems. Kernel Same-page Merging (KSM) is a Linux service for sharing memory pages in virtualized environments. Memory deduplication is vulnerable to a memory disclosure attack, which uses covert channel establishment to reveal the contents of other co-located virtual machines. To avoid a memory disclosure attack, sharing of identical pages within a single user's virtual machine is permitted, but sharing of contents between different users is forbidden. In our proposed approach, virtual machines with similar operating systems among the active domains on a node are recognised and organised into a homogeneous batch, and memory deduplication is performed within that batch to improve page-sharing efficiency. Implementation results demonstrate a significant increase in the number of pages shared when memory deduplication is applied batch-wise compared with applying it to the entire host, although CPU (central processing unit) consumption also increases.
Keywords: Kernel same-page merging, memory deduplication, virtual machine sharing, content-based sharing
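A rough sense of batch-wise deduplication can be conveyed with a few lines of Python. The sketch below hashes simulated 4 KB pages and counts how many identical pages could be merged when sharing is restricted to a single VM versus when VMs are grouped into homogeneous batches by operating system; the VM inventory and page contents are entirely made up.

```python
import hashlib
import os
import random
from collections import defaultdict

random.seed(0)
PAGE = 4096

def make_pages(os_name, n_pages):
    """Simulate a VM's memory: many pages shared by the same OS image, some unique."""
    pages = []
    for i in range(n_pages):
        if random.random() < 0.6:      # OS/library page, common to VMs of the same OS
            pages.append(f"{os_name}-common-{i % 50}".encode().ljust(PAGE, b"\0"))
        else:                          # workload-specific page
            pages.append(os.urandom(PAGE))
    return pages

vms = {"vm1": ("linux", make_pages("linux", 400)),
       "vm2": ("linux", make_pages("linux", 400)),
       "vm3": ("windows", make_pages("windows", 400)),
       "vm4": ("windows", make_pages("windows", 400))}

def mergeable(page_lists):
    """Pages that can be merged = total pages minus distinct page contents."""
    hashes = [hashlib.sha1(p).hexdigest() for pages in page_lists for p in pages]
    return len(hashes) - len(set(hashes))

batches = defaultdict(list)            # homogeneous batches keyed by OS
for name, (os_name, pages) in vms.items():
    batches[os_name].append(pages)

per_vm_saved = sum(mergeable([pages]) for _, pages in vms.values())
batch_saved = sum(mergeable(page_lists) for page_lists in batches.values())
print(f"pages merged per-VM only: {per_vm_saved}, within homogeneous batches: {batch_saved}")
```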
A Robust Automated Framework for Classification of CT Covid-19 Images Using MSI-ResNet
9
Authors: Aghila Rajagopal Sultan Ahmad (+3 co-authors) Sudan Jha Ramachandran Alagarsamy Abdullah Alharbi Bader Alouffi 《Computer Systems Science & Engineering》 SCIE EI 2023, No. 6, pp. 3215-3229 (15 pages)
Nowadays, the COVID-19 virus disease is spreading rampantly. Some testing tools and kits are available for diagnosing the virus, but only in limited numbers. To diagnose the presence of the disease from radiological images, automated COVID-19 diagnosis techniques are needed. Previous research has focused on enhancing AI (Artificial Intelligence) methods that use X-ray images for detecting COVID-19. The most common symptoms of COVID-19 are fever, dry cough, and sore throat, and these symptoms may progress to a rigorous type of pneumonia with severe complications. Since medical imaging has not recently been recommended in Canada for critical COVID-19 diagnosis, computer-aided systems are implemented for the early identification of COVID-19, which helps track disease progression and thus decreases the death rate. Here, a deep learning-based automated method for feature extraction and classification is developed for the detection of COVID-19 from computer tomography (CT) images. The suggested method comprises three main processes: data preprocessing, feature extraction, and classification. The approach fuses deep features obtained with the Inception 14 and VGG-16 models. Finally, a Multi-scale Improved ResNet (MSI-ResNet) classifier is developed to detect and classify the CT images into unique class labels. The experimental validation of the suggested method is carried out on the open-source COVID-CT dataset, which consists of 760 CT images. The experimental results reveal that the proposed approach offers greater performance with high specificity, accuracy, and sensitivity.
Keywords: Covid-19, CT images, multi-scale improved ResNet, AI, Inception 14 and VGG-16 models
Energy efficient indoor localisation for narrowband internet of things
10
Authors: Ismail Keshta Mukesh Soni (+6 co-authors) Mohammed Wasim Bhatt Azeem Irshad Ali Rizwan Shakir Khan Renato RMaaliw III Arsalan Muhammad Soomar Mohammad Shabaz 《CAAI Transactions on Intelligence Technology》 SCIE EI 2023, No. 4, pp. 1150-1163 (14 pages)
An increasing number of Narrow Band IoT devices are being manufactured as the technology behind them develops quickly. The high co-channel interference and signal attenuation seen at edge Narrow Band IoT devices make it challenging to guarantee their service quality. To maximise the data rate fairness of Narrow Band IoT devices, a multi-dimensional indoor localisation model is devised, covering transmission power, data scheduling, and time slot scheduling, based on a network model that employs non-orthogonal multiple access (NOMA) via a relay. Based on this network model, the authors first establish the optimisation goal of Narrow Band IoT device data rate ratio fairness while taking the Narrow Band IoT network into account: the multi-dimensional indoor localisation optimisation model is subject to minimum data rate and energy constraints, energy-harvesting (EH) relay energy and data buffer constraints, and data and time slot scheduling constraints. As a result, each Narrow Band IoT device's data rate needs are met while the network's overall performance is optimised. We investigate the model's potential for convex optimisation and offer an algorithm for optimising the distribution of multiple resources using the KKT criterion. The current work primarily considers the NOMA Narrow Band IoT network under a single EH relay; however, the growth of Narrow Band IoT devices also leads to a rise in co-channel interference, which limits NOMA's performance enhancement. The proposed approach is demonstrated through simulation: it boosts the network's energy efficiency by 44.1%, data rate proportional fairness by 11.9%, and spectrum efficiency by 55.4%.
Keywords: artificial intelligence, detection of moving objects, internet of things
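The abstract describes a convex resource-allocation formulation solved via KKT conditions. The sketch below is a much-simplified, hypothetical stand-in: a max-min rate fairness power allocation across a few devices solved with CVXPY. The channel gains, noise, and power budget are invented, and the relay, scheduling, and buffer constraints of the paper are omitted.

```python
import numpy as np
import cvxpy as cp

g = np.array([0.9, 0.5, 0.25, 0.1])       # hypothetical channel gains per device
noise = 0.05                               # noise power
P_total = 2.0                              # total transmit power budget

p = cp.Variable(len(g), nonneg=True)       # power allocated to each device
rates = cp.log(1 + cp.multiply(g, p) / noise) / np.log(2)   # bits/s/Hz, concave in p

# Max-min fairness: make the worst device's rate as large as possible.
problem = cp.Problem(cp.Maximize(cp.min(rates)), [cp.sum(p) <= P_total])
problem.solve()

print("power allocation:", np.round(p.value, 3))
print("per-device rates :", np.round(np.log2(1 + g * p.value / noise), 3))
```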
THRFuzzy: Tangential holoentropy-enabled rough fuzzy classifier to classification of evolving data streams (Cited by 1)
11
Authors: Jagannath E.Nalavade T.Senthil Murugan 《Journal of Central South University》 SCIE EI CAS CSCD 2017, No. 8, pp. 1789-1800 (12 pages)
Rapid developments in the fields of telecommunication, sensor data, financial applications, and data stream analysis increase the rate of data arrival, for which data mining is considered a vital process. The data analysis process consists of different tasks, among which data stream classification approaches face more challenges than the other commonly used techniques. Even though classification is a continuous process, it requires a design that can adapt the classification model to concept change or boundary change between the classes. Hence, we design a novel fuzzy classifier known as THRFuzzy to classify new incoming data streams. Rough set theory along with a tangential holoentropy function helps in designing the dynamic classification model. The classification approach uses kernel fuzzy c-means (FCM) clustering for the generation of the rules and the tangential holoentropy function to update the membership function. The performance of the proposed THRFuzzy method is verified using three datasets, namely skin segmentation, localization, and breast cancer, with accuracy and time as the evaluation metrics, comparing its performance with the HRFuzzy and adaptive k-NN classifiers. The experimental results conclude that the THRFuzzy classifier gives better classification results, providing maximum accuracy while consuming minimal time compared with the existing classifiers.
Keywords: fuzzy classifier, data stream analysis, rough set theory, data mining techniques, fuzzy methods, k-NN classification, classification model, fuzzy c-means
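Since fuzzy c-means clustering is central to the rule-generation step described above, here is a compact, hypothetical NumPy implementation of standard (non-kernel) FCM on synthetic 2-D data; the fuzzifier, cluster count, and data are illustrative, and the kernel variant and holoentropy update from the paper are not reproduced.

```python
import numpy as np

def fuzzy_c_means(X, c=3, m=2.0, iters=100, tol=1e-5, seed=0):
    """Standard fuzzy c-means: returns cluster centres and the membership matrix U (n x c)."""
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.random((n, c))
    U /= U.sum(axis=1, keepdims=True)                    # memberships sum to 1 per sample
    for _ in range(iters):
        Um = U ** m
        centres = (Um.T @ X) / Um.sum(axis=0)[:, None]   # membership-weighted centroids
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / np.sum((d[:, :, None] / d[:, None, :]) ** (2 / (m - 1)), axis=2)
        if np.max(np.abs(U_new - U)) < tol:
            U = U_new
            break
        U = U_new
    return centres, U

# Synthetic data with three blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(loc, 0.4, size=(100, 2)) for loc in ([0, 0], [3, 3], [0, 4])])
centres, U = fuzzy_c_means(X)
print("centres:\n", np.round(centres, 2))
print("hard labels of first 5 points:", U[:5].argmax(axis=1))
```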
Scope of machine learning applications for addressing the challenges in next-generation wireless networks (Cited by 1)
12
Authors: Raj Kumar Samanta Bikash Sadhukhan (+3 co-authors) Hiranmay Samaddar Suvobrata Sarkar Chandan Koner Monidepa Ghosh 《CAAI Transactions on Intelligence Technology》 SCIE EI 2022, No. 3, pp. 395-418 (24 pages)
The convenience of availing quality services at affordable costs anytime and anywhere makes mobile technology very popular among users. Due to this popularity, there has been a huge rise in mobile data volume, applications, types of services, and number of customers. Furthermore, the worldwide lockdown during the COVID-19 pandemic has added fuel to this increase, as most professional and commercial activities are now done online from home. This massive increase in demand for multi-class services has posed numerous challenges to wireless network frameworks. The services offered through wireless networks must support this huge volume of data and multiple types of traffic, such as real-time live streaming of video, audio, text, and images, at a very high bit rate, with negligible transmission delay, and at permissible vehicular speeds of the customers. Next-generation wireless networks (NGWNs, i.e. 5G networks and beyond) are being developed to accommodate these and many other service qualities. However, achieving all the desired service qualities in the design of the 5G network infrastructure imposes large challenges for designers and engineers. It requires the analysis of a huge volume of network data (structured and unstructured) received or collected from heterogeneous devices, applications, services, and customers, and the effective and dynamic management of network parameters based on this analysis in real time. Given the ever-increasing network heterogeneity and complexity, machine learning (ML) techniques may become an efficient tool for effectively managing these issues. In recent years, the progress of artificial intelligence and ML techniques has generated interest in their application to the networking domain. This study discusses current wireless network research, briefly reviews ML methods that can be effectively applied to the wireless networking domain, describes some tools available to support and customise efficient mobile system design, and outlines some unresolved issues for future research directions.
Keywords: machine learning, network control, next-generation wireless networks
Hybrid XGBoost model with hyperparameter tuning for prediction of liver disease with better accuracy
13
Authors: Surjeet Dalal Edeh Michael Onyema Amit Malik 《World Journal of Gastroenterology》 SCIE CAS 2022, No. 46, pp. 6551-6563 (13 pages)
BACKGROUND: Liver disease indicates any pathology that can harm or destroy the liver or prevent it from functioning normally. The global community has recently witnessed an increase in the mortality rate due to liver disease. This could be attributed to many factors, among which are human habits, awareness issues, poor healthcare, and late detection. To curb the growing threats from liver disease, early detection is critical to help reduce the risks and improve treatment outcomes. Emerging technologies such as machine learning, as shown in this study, could be deployed to assist in enhancing its prediction and treatment. AIM: To present a more efficient system for timely prediction of liver disease using a hybrid eXtreme Gradient Boosting model with hyperparameter tuning, with a view to assisting in early detection, diagnosis, and reduction of the risks and mortality associated with the disease. METHODS: The dataset used in this study consisted of 416 people with liver problems and 167 with no such history. The data were collected from the state of Andhra Pradesh, India, through https://www.kaggle.com/datasets/uciml/indian-liver-patientrecords. The population was divided into two sets depending on the disease state of the patient, and this binary information was recorded in the attribute "is_patient". RESULTS: The results indicated that the chi-square automated interaction detection and classification and regression trees models achieved accuracy levels of 71.36% and 73.24%, respectively, which was much better than the conventional method. The proposed solution would assist patients and physicians in tackling the problem of liver disease and ensuring that cases are detected early to prevent them from developing into cirrhosis (scarring) and to enhance the survival of patients. The study showed the potential of machine learning in health care, especially as it concerns disease prediction and monitoring. CONCLUSION: This study contributed to the knowledge of machine learning application to health and to the efforts toward combating the problem of liver disease. However, relevant authorities have to invest more into machine learning research and other health technologies to maximize their potential.
Keywords: Liver infection, Machine learning, Chi-square automated interaction detection, Classification and regression trees, Decision tree, XGBoost, Hyperparameter tuning
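As a hedged illustration of the hyperparameter-tuned gradient boosting workflow described above, the sketch below wires xgboost's XGBClassifier into scikit-learn's GridSearchCV on a synthetic binary dataset; the parameter grid and data are hypothetical and not the study's actual configuration.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

# Synthetic stand-in for the liver-patient records (features -> is_patient label).
X, y = make_classification(n_samples=583, n_features=10, n_informative=6,
                           weights=[0.3, 0.7], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=42)

param_grid = {                      # small, illustrative grid
    "n_estimators": [100, 300],
    "max_depth": [3, 5],
    "learning_rate": [0.05, 0.1],
}
search = GridSearchCV(XGBClassifier(eval_metric="logloss"),
                      param_grid, cv=5, scoring="accuracy", n_jobs=-1)
search.fit(X_tr, y_tr)

print("best params:", search.best_params_)
print("test accuracy:", accuracy_score(y_te, search.best_estimator_.predict(X_te)))
```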
Test Vector Optimization Using Pocofan-Poframe Partitioning
14
Authors: P.PattunnaRajam Reeba korah G.Maria Kalavathy 《Computers, Materials & Continua》 SCIE EI 2018, No. 3, pp. 251-268 (18 pages)
This paper presents an automated POCOFAN-POFRAME algorithm that partitions large combinational digital VLSI circuits for pseudo-exhaustive testing. A simulation framework and partitioning technique are presented to guide VLSI circuits to work with fewer test vectors, in order to reduce testing time and to support VLSI circuit design. The framework utilizes two partitioning methods, Primary Output Cone Fanout Partitioning (POCOFAN) and POFRAME partitioning, to determine the number of test vectors in the circuit. The key role of partitioning is to identify reconvergent fanout branch pairs and the optimal values of the primary input node count N and fanout F used in partitioning with the I-PIFAN algorithm. The number of reconvergent fanouts and their locations are critical for the testing of VLSI circuits and for design for testability; hence, their selection is crucial in order to optimize system performance and reliability. In the present work, the design constraints of the partitioned circuit considered for optimization include critical path delay and test time. The POCOFAN-POFRAME algorithm uses the optimal values of the circuit's maximum primary input cone size (N) and minimum fan-out value (F) to determine the number of test vectors, the number of partitions, and their locations. The ISCAS'85 benchmark circuits have been successfully partitioned; the test results of C499 show a 45% reduction in test vectors, and compared with other partitioning methods, our algorithm produces fewer test vectors.
Keywords: Pseudo-exhaustive testing, POCOFAN (Primary Output Cone Fanout Partitioning), POFRAME partitioning, combinational digital VLSI circuit testing, critical path delay, testing time, design for testability
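The test-vector saving from cone partitioning can be seen with a small back-of-the-envelope calculation: pseudo-exhaustive testing applies an exhaustive pattern set to each output cone instead of to the whole circuit. The sketch below compares the two counts for a hypothetical circuit whose cone input sizes are invented for illustration.

```python
# Hypothetical circuit: 24 primary inputs, partitioned into output cones
# that each depend on only a subset of those inputs.
total_primary_inputs = 24
cone_input_sizes = [6, 8, 5, 7, 6]           # inputs feeding each primary-output cone

exhaustive = 2 ** total_primary_inputs        # test every input combination of the circuit
pseudo_exhaustive = sum(2 ** n for n in cone_input_sizes)  # exhaust each cone separately

print(f"exhaustive test vectors      : {exhaustive:,}")
print(f"pseudo-exhaustive (per cone) : {pseudo_exhaustive:,}")
print(f"reduction factor             : {exhaustive / pseudo_exhaustive:,.0f}x")
```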
TBDDoSA-MD:Trust-Based DDoS Misbehave Detection Approach in Software-defined Vehicular Network(SDVN)
15
Authors: Rajendra Prasad Nayak Srinivas Sethi (+4 co-authors) Sourav Kumar Bhoi Kshira Sagar Sahoo Nz Jhanjhi Thamer A.Tabbakh Zahrah A.Almusaylim 《Computers, Materials & Continua》 SCIE EI 2021, No. 12, pp. 3513-3529 (17 pages)
Reliable vehicles are essential in vehicular networks for effective communication. Since vehicles in the network are dynamic, even a short span of misbehavior by a vehicle can disrupt the whole network, which may lead to catastrophic consequences. In this paper, a Trust-Based Distributed DoS Misbehave Detection Approach (TBDDoSA-MD) is proposed to secure the Software-Defined Vehicular Network (SDVN). A malicious vehicle in this network performs DDoS misbehavior by attacking other vehicles in its neighborhood. It uses the jamming technique of sending unnecessary signals in the network, and as a result the network performance degrades; attacked vehicles no longer meet the service requests from other vehicles. Therefore, this paper proposes an approach to detect DDoS misbehavior by using the trust values of the vehicles. Trust values are calculated based on direct trust and recommendations (indirect trust), and they help to decide whether a vehicle is legitimate or malicious. Messages from malicious vehicles are simply discarded, whereas the authenticity of messages from legitimate vehicles is checked further before any action is taken based on those messages. The performance of TBDDoSA-MD is evaluated in the Veins hybrid simulator, which uses OMNeT++ and Simulation of Urban Mobility (SUMO). We compared the performance of TBDDoSA-MD with the recently proposed Trust-Based Framework (TBF) scheme using performance parameters such as detection accuracy, packet delivery ratio, detection time, and energy consumption. Simulation results show that the proposed work has a high detection accuracy of more than 90% while keeping the detection time as low as 30 s.
Keywords: Software-defined vehicular network, trust, evaluator node, denial of service, misbehavior
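A toy version of the direct-plus-recommendation trust calculation described above can be written in a few lines. In the sketch below, the weighting factor, threshold, and observation data are all hypothetical and chosen only to show the mechanics of flagging a vehicle as malicious.

```python
def direct_trust(successes: int, failures: int) -> float:
    """Direct trust from first-hand interactions (success ratio with smoothing)."""
    return (successes + 1) / (successes + failures + 2)

def combined_trust(direct: float, recommendations: list[float], alpha: float = 0.7) -> float:
    """Blend direct trust with neighbours' recommendations (indirect trust)."""
    indirect = sum(recommendations) / len(recommendations) if recommendations else direct
    return alpha * direct + (1 - alpha) * indirect

THRESHOLD = 0.5                     # below this, the vehicle is treated as malicious

# Hypothetical observations about two vehicles made by an evaluator node.
vehicles = {
    "V17": {"ok": 42, "bad": 3,  "recs": [0.9, 0.8, 0.85]},   # well-behaved neighbour
    "V52": {"ok": 2,  "bad": 38, "recs": [0.2, 0.1, 0.3]},    # suspected DDoS jammer
}
for vid, obs in vehicles.items():
    t = combined_trust(direct_trust(obs["ok"], obs["bad"]), obs["recs"])
    verdict = "legitimate" if t >= THRESHOLD else "malicious (discard its messages)"
    print(f"{vid}: trust = {t:.2f} -> {verdict}")
```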
5G Data Offloading Using Fuzzification with Grasshopper Optimization Technique
16
Authors: V.R.Balaji T.Kalavathi (+2 co-authors) J.Vellingiri N.Rajkumar Venkat Prasad Padhy 《Computer Systems Science & Engineering》 SCIE EI 2022, No. 7, pp. 289-301 (13 pages)
Data offloading in the network with less time and reduced energy consumption is highly important for every technology. Smart applications process data very quickly with less power consumption. As technology grows towards the 5G communication architecture, identifying a solution for QoS in 5G through energy-efficient computing is important. In the proposed model, we perform data offloading in 5G using the fuzzification concept. Mobile IoT devices create tasks in the network, and these are offloaded to the cloud or to mobile edge nodes based on energy consumption. Two types of base stations, small (SB) and macro (MB) stations, are initialized, and the first tasks are computed randomly. Then, the tasks are processed using a fuzzification algorithm to select SB or MB in the central server. Optimization is performed using a grasshopper algorithm to improve the QoS of the 5G network. The results are compared with existing algorithms and indicate that the proposed system improves the performance of the system with a cost of 44.64 J for computing 250 benchmark tasks.
Keywords: 5G, energy consumption, task offloading, fuzzification, grasshopper optimization, QoS, mobile IoT
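To give a feel for the fuzzification step that chooses between a small and a macro base station, here is a minimal, hypothetical sketch using triangular membership functions over task size and device energy; the membership breakpoints and decision rules are invented, and the grasshopper optimization stage is not included.

```python
def tri(x, a, b, c):
    """Triangular membership function over [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def offload_target(task_mb, battery_pct):
    """Fuzzy choice between a small (SB) and a macro (MB) base station."""
    small_task = tri(task_mb, 0, 10, 40)         # small tasks suit the nearby SB
    large_task = tri(task_mb, 30, 80, 130)       # large tasks need the MB's capacity
    low_energy = tri(battery_pct, 0, 10, 50)     # low-battery devices prefer the closer SB
    high_energy = tri(battery_pct, 40, 80, 120)

    sb_score = max(small_task, low_energy)       # rule: small task OR low energy -> SB
    mb_score = min(large_task, high_energy)      # rule: large task AND enough energy -> MB
    return ("SB", sb_score) if sb_score >= mb_score else ("MB", mb_score)

for task_mb, battery in [(10, 80), (90, 75), (60, 15)]:
    target, score = offload_target(task_mb, battery)
    print(f"task {task_mb:3d} MB, battery {battery:2d}% -> offload to {target} (degree {score:.2f})")
```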
Low Profile UHF Antenna Design for Low Earth-Observation CubeSats
17
Authors: Md.Amanath Ullah Touhidul Alam (+1 co-author) Ali F.Almutairi Mohammad Tariqul Islam 《Computers, Materials & Continua》 SCIE EI 2022, No. 5, pp. 2533-2542 (10 pages)
This paper reveals a new design of a UHF CubeSat antenna based on a modified Planar Inverted-F Antenna (PIFA) for CubeSat communication. The design utilizes a CubeSat face as the ground plane. There is a 5 mm gap beneath the radiating element, which provides the design with space for solar panels. The prototype was fabricated from an aluminum metal sheet and measured, and the antenna achieved resonance at 419 MHz. The response of the antenna was investigated after placing a solar panel; the lossy properties of the solar panel shifted the resonance by about 20 MHz. This design addresses the frequency-shifting issue that arises when the antenna is placed on the CubeSat body, a phenomenon analyzed for typical 1U and 2U CubeSat bodies with the antenna. The antenna achieved a positive realized gain of 0.7 dB and approximately 78% efficiency at the resonant frequency while leaving 85% of the face open for solar irradiance onto the solar panel.
Keywords: CubeSat antenna, UHF antenna, small satellite, satellite communication
Detection of Behavioral Patterns Employing a Hybrid Approach of Computational Techniques
18
Authors: Rohit Raja Chetan Swarup (+5 co-authors) Abhishek Kumar Kamred Udham Singh Teekam Singh Dinesh Gupta Neeraj Varshney Swati Jain 《Computers, Materials & Continua》 SCIE EI 2022, No. 7, pp. 2015-2031 (17 pages)
As far as the present state of detecting the behavioral patterns of humans (subjects) using morphological image processing is concerned, a considerable portion of previous studies has been conducted using frontal-view data of human faces. The present research work uses side-view human-face data to develop a theoretical framework via a hybrid analytical model approach. Here, hybridization combines an artificial neural network (ANN) with a genetic algorithm (GA). We studied the geometrical properties extracted from side-view human-face data, and an additional study was conducted to determine the ideal number of geometrical characteristics to select during clustering. Minimum-distance measurements in the close vicinity of these clusters are mapped for proper classification and for the decision process on the behavioral pattern. Support vector machines and artificial neural networks are utilized to identify the acquired data. A method known as adaptive-unidirectional associative memory (AUTAM) was used to map one side of a human face to the other side of the same subject. The behavioral pattern is detected based on a two-class classification problem, and the decision process is carried out using a genetic algorithm with best-fit measurements. The algorithm developed in the present work has been tested on a dataset of 100 subjects and with standard databases such as FERET, Multi-PIE, Yale Face database, RTR, and CASIA. The complexity measures have also been calculated under worst-case and best-case situations.
Keywords: Adaptive-unidirectional-associative-memory technique, artificial neural network, genetic algorithm, hybrid approach
Operations and Actions of Lie Groups on Manifolds
19
Authors: Sharmin Akter Mir Md. Moheuddin (+1 co-author) Saddam Hossain Asia Khatun 《American Journal of Computational Mathematics》 2020, No. 3, pp. 460-472 (13 pages)
As recounted in this paper, the idea of groups is one that has evolved from some very intuitive concepts. We can perform binary operations like adding or multiplying two elements, and also operations like taking the square root of an element (in which case the result is not always in the set). In this paper, we aim to describe the operations and actions of Lie groups on manifolds. These actions can be applied to matrix groups and bi-invariant forms of Lie groups and can be used to generalize the eigenvalues and eigenfunctions of differential operators on R^n. A Lie group is both a group and a differentiable manifold, with the property that the group operations are compatible with the smooth structure, i.e. the group operations of product and inverse are differentiable. Lie groups play an extremely important role in the theory of fiber bundles and also find vast applications in physics. They represent the best-developed theory of continuous symmetry of mathematical objects and structures, which makes them indispensable tools for many parts of contemporary mathematics as well as for modern theoretical physics. Here we have worked to present the mathematical aspects of Lie groups on manifolds.
Keywords: Group (G), Abelian group (g1g2 = g2g1), Subgroup (H is a subgroup of G), Cosets (gH), Lie groups (G×G → G, (x, y) ↦ x·y and G → G, g ↦ g⁻¹), Smooth mapping (σ: G × G → G)
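As a concrete, elementary instance of a matrix Lie group acting on a manifold, the short NumPy sketch below takes the rotation group SO(2) acting on the plane R^2, checks the group structure numerically (closure, inverse), and applies the action to a point; this is a generic textbook example, not code from the paper.

```python
import numpy as np

def rot(theta):
    """Element of SO(2): rotation of the plane by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

a, b = rot(0.7), rot(1.9)

# Group structure: the product and inverse stay in SO(2) (orthogonal, det = 1).
prod = a @ b
print("closure, det(ab) = 1:", np.isclose(np.linalg.det(prod), 1.0))
print("product of rotations is a rotation:", np.allclose(prod, rot(0.7 + 1.9)))
print("inverse is the transpose:", np.allclose(np.linalg.inv(a), a.T))

# Smooth action of G on the manifold M = R^2: (g, p) -> g @ p.
p = np.array([1.0, 0.0])
orbit = [rot(t) @ p for t in np.linspace(0, 2 * np.pi, 5)]
print("orbit of (1, 0) lies on the unit circle:",
      all(np.isclose(np.linalg.norm(q), 1.0) for q in orbit))
```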
Implementation of an Efficient Light Weight Security Algorithm for Energy-Constrained Wireless Sensor Nodes
20
Authors: A. Saravanaselvan B. Paramasivan 《Circuits and Systems》 2016, No. 9, pp. 2234-2241 (8 pages)
In-network data aggregation is severely affected by attacks on information in transit. This is an important problem, since wireless sensor networks (WSN) are highly vulnerable to node compromises under such attacks. As a result, a large error can appear in the aggregate computed at the base station due to false sub-aggregate values contributed by compromised nodes, and falsified event messages forwarded through intermediate nodes waste their limited energy as well. Since wireless sensor nodes are battery operated, they have low computational power and energy. In view of this, algorithms designed for wireless sensor nodes should extend the lifetime, use less computation, and enhance security so as to increase the network lifetime. This article presents a Vernam cipher cryptographic technique combined with a data compression algorithm using the Huffman source coding scheme in order to enhance the security and lifetime of energy-constrained wireless sensor nodes. In addition, this scheme is evaluated on different processor-based sensor node implementations, and the results are compared against other existing schemes. In particular, we present a secure lightweight algorithm for wireless sensor nodes that consumes less energy for its operation. Using this, a considerable improvement in entropy is achieved.
Keywords: In-network data aggregation, security attacks, Vernam cipher cryptographic technique, Huffman source coding, entropy
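The Vernam cipher at the heart of the scheme above is simply an XOR of the message with a one-time key of equal length. The following self-contained sketch illustrates it, together with a byte-entropy calculation of the kind a lightweight sensor-node implementation might report; the message and key generation are illustrative only.

```python
import math
import os
from collections import Counter

def vernam(data: bytes, key: bytes) -> bytes:
    """Vernam (one-time pad) cipher: XOR each message byte with the key byte."""
    assert len(key) >= len(data), "key must be at least as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

def byte_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

message = b"TEMP=23.4C;NODE=17;SEQ=00042" * 4     # a repetitive sensor reading
key = os.urandom(len(message))                     # one-time random key

cipher = vernam(message, key)
assert vernam(cipher, key) == message              # XOR with the same key decrypts

print(f"entropy of plaintext : {byte_entropy(message):.2f} bits/byte")
print(f"entropy of ciphertext: {byte_entropy(cipher):.2f} bits/byte")
```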