Forecasting river flow is crucial for the optimal planning, management, and sustainable use of freshwater resources. Many machine learning (ML) approaches have been enhanced to improve streamflow prediction. Hybrid techniques have been viewed as a viable method for enhancing the accuracy of univariate streamflow estimation compared to standalone approaches, and current researchers have also emphasised using hybrid models to improve forecast accuracy. Accordingly, this paper conducts an updated literature review of applications of hybrid models in estimating streamflow over the last five years, summarising data preprocessing, univariate machine learning modelling strategies, advantages and disadvantages of standalone ML techniques, hybrid models, and performance metrics. This study focuses on two types of hybrid models: parameter optimisation-based hybrid models (OBH) and hybridisation of parameter optimisation-based and preprocessing-based hybrid models (HOPH). Overall, this research supports the idea that meta-heuristic approaches effectively improve ML techniques. It is also one of the first efforts to comprehensively examine the efficiency of various meta-heuristic approaches (classified into four primary classes) hybridised with ML techniques. This study revealed that previous research applied swarm, evolutionary, physics, and hybrid metaheuristics in 77%, 61%, 12%, and 12% of studies, respectively. Finally, there is still room for improving OBH and HOPH models by examining different data pre-processing techniques and metaheuristic algorithms.
Autism Spectrum Disorder (ASD) is a neurodevelopmental condition characterized by significant challenges in social interaction, communication, and repetitive behaviors. Timely and precise ASD detection is crucial, particularly in regions with limited diagnostic resources like Pakistan. This study aims to conduct an extensive comparative analysis of various machine learning classifiers for ASD detection using facial images to identify an accurate and cost-effective solution tailored to the local context. The research involves experimentation with VGG16 and MobileNet models, exploring different batch sizes, optimizers, and learning rate schedulers. In addition, the "Orange" machine learning tool is employed to evaluate classifier performance, and its automated image processing capabilities are utilized. The findings establish VGG16 as the most effective classifier with a 5-fold cross-validation approach. Specifically, VGG16, with a batch size of 2 and the Adam optimizer, trained for 100 epochs, achieves a validation accuracy of 99% and a testing accuracy of 87%. Furthermore, the model achieves an F1 score of 88%, precision of 85%, and recall of 90% on test images. To validate the practical applicability of the VGG16 model with 5-fold cross-validation, the study conducts further testing on a dataset sourced from autism centers in Pakistan, resulting in an accuracy rate of 85%. This reaffirms the model's suitability for real-world ASD detection. This research offers valuable insights into classifier performance, emphasizing the potential of machine learning to deliver precise and accessible ASD diagnoses via facial image analysis.
Accurate software cost estimation in Global Software Development (GSD) remains challenging due to reliance on historical data and expert judgments. Traditional models, such as the Constructive Cost Model (COCOMO II), rely heavily on historical and accurate data. In addition, expert judgment is required to set many input parameters, which can introduce subjectivity and variability into the estimation process. Consequently, there is a need to improve current GSD models to mitigate reliance on historical data, subjectivity in expert judgment, inadequate consideration of GSD-based cost drivers, and limited integration of modern technologies, all of which contribute to cost overruns. This study introduces a novel hybrid model that synergizes COCOMO II with Artificial Neural Networks (ANN) to address these challenges. The proposed hybrid model integrates additional GSD-based cost drivers identified through a systematic literature review and further vetted by industry experts. This article compares the effectiveness of the proposed model with state-of-the-art machine learning-based models for software cost estimation. Evaluation on the NASA 93 dataset with twenty-six GSD-based cost drivers reveals that our hybrid model achieves superior accuracy, outperforming existing state-of-the-art models. The findings indicate the potential of combining COCOMO II, ANN, and additional GSD-based cost drivers to transform cost estimation in GSD.
Algorithms for steganography are methods of hiding data transfers in media files. Several machine learning architectures have been presented recently to improve stego image identification performance by using spatial information, and these methods have made it feasible to handle a wide range of problems associated with image analysis. Images with little information or low payload are used by information embedding methods, but the goal of most contemporary research is to employ high-payload images for classification. To address the need for both low- and high-payload images, this work provides a machine-learning approach to steganography image classification that uses the Curvelet transformation to efficiently extract characteristics from both types of images. A Support Vector Machine (SVM), a common classification technique, is employed to determine whether an image is a stego or cover image. The Wavelet Obtained Weights (WOW), Spatial Universal Wavelet Relative Distortion (S-UNIWARD), Highly Undetectable Steganography (HUGO), and Minimizing the Power of Optimal Detector (MiPOD) steganography techniques are used in a variety of experimental scenarios to evaluate the performance of the proposed method. Using WOW at several payloads, the proposed approach achieves a classification accuracy of 98.60%, demonstrating its superiority over state-of-the-art methods.
The Universal Soil Loss Equation (USLE) is the most comprehensive technique available to predict the long-term average annual rate of erosion on a field slope. USLE is governed by five factors: the soil erodibility factor (K), the rainfall and runoff erodibility index (R), the crop/vegetation and management factor (C), the support practice factor (P), and the slope length-gradient factor (LS). In the past, the K, R, and LS factors have been extensively studied, but the impacts of factors C and P on outfall Total Suspended Solids (TSS) and the percentage reduction of TSS have not yet been fully studied. Therefore, this study employs the Buffer Zone Calculator as a tool to determine the sediment removal efficiency for different C and P factors. The selected study area is the Santubong River, Kuching, Sarawak. Results show that the outfall TSS increases with increasing C values. The most effective and efficient land use for reducing TSS among the 17 land uses investigated is found to be forest with undergrowth, followed by mixed dipt. forest, forest with no undergrowth, cultivated grass, logging 30, logging 10^6, wet rice, new shifting agriculture, oil palm, rubber, cocoa, coffee, tea, and lastly settlement/cleared land. Results also indicate that the percentage reduction of TSS increases as the P factor decreases. The most effective support practice for reducing outfall TSS is found to be terracing, followed by contour-strip cropping, contouring, and lastly not implementing any soil conservation practice.
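The USLE combines the five factors above multiplicatively, so a minimal sketch is straightforward; the factor values below are illustrative only, not taken from the study:

```python
def usle_soil_loss(R, K, LS, C, P):
    """Average annual soil loss A = R * K * LS * C * P (e.g. t/ha/yr)."""
    return R * K * LS * C * P

# Lowering C (denser vegetation cover) or P (better support practice)
# reduces the predicted loss proportionally -- illustrative values only.
loss_cleared = usle_soil_loss(R=1200, K=0.3, LS=1.5, C=1.0, P=1.0)
loss_forest = usle_soil_loss(R=1200, K=0.3, LS=1.5, C=0.001, P=1.0)
```

This multiplicative structure is why land uses (via C) and support practices (via P) can be ranked independently of the site-specific R, K, and LS values.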
Samarahan has transformed from a small village into an education hub over the past two decades. Rapid development and population growth have led to rapid growth in water demand, and the situation is worsening as pipes deteriorate with age. Therefore, there is a need to study the adequacy of the water supply and the relationships among the roughness coefficient (C) values in the Hazen-Williams equation, head loss, and water pressure due to pipe aging at Uni-Central, a residential area in Samarahan, Sarawak. Investigations were carried out with Ductile Iron, Asbestos Cement, and Cast Iron pipes at age categories of 0 - 10 years, 10 - 30 years, 30 - 50 years, 50 - 70 years, and >70 years. Six critical nodes, named A, B, C, D, E, and F, were identified to study water pressure and head loss. A model was developed with the InfoWorks Water Supply (WS) Pro software. The impact of pipe aging and materials on water pressure and head loss was not significant at Nodes A, B, C, and F. However, the maximum water pressure at Nodes D and F only reached 6.30 m and 7.30 m, respectively, across all investigations, so some improvement works are required. Results also show that the Asbestos Cement pipe has the least impact on head loss and water pressure, followed by the Ductile Iron pipe and lastly the Cast Iron pipe. Simulation results also revealed that older pipes have higher roughness, indicated by lower "C" values, thus increasing head loss and reducing water pressure. Conversely, as "C" values increase, head loss is reduced and water pressure increases.
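The role of the roughness coefficient C can be sketched with the SI form of the Hazen-Williams head-loss equation; the pipe dimensions and C values below are illustrative, not taken from the Uni-Central model:

```python
def hazen_williams_headloss(Q, L, C, D):
    """Head loss h_f (m) for flow Q (m^3/s) through a pipe of length L (m),
    internal diameter D (m), and Hazen-Williams coefficient C (SI form):
    h_f = 10.67 * L * Q^1.852 / (C^1.852 * D^4.87)."""
    return 10.67 * L * Q**1.852 / (C**1.852 * D**4.87)

# An aged pipe (lower C, rougher wall) loses more head over the same
# length than a new one, leaving less pressure at downstream nodes.
h_new = hazen_williams_headloss(Q=0.01, L=100.0, C=140.0, D=0.1)
h_aged = hazen_williams_headloss(Q=0.01, L=100.0, C=80.0, D=0.1)
```

Because C appears in the denominator raised to the 1.852 power, even a moderate drop in C from aging produces a sharply higher head loss, consistent with the simulation trend reported above.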
This paper presents simulation results comparing three queuing mechanisms: First In First Out (FIFO), Priority Queuing (PQ), and Weighted Fair Queuing (WFQ). The comparison considers their effects on the network's routers: the load each algorithm places on router CPUs and memory, the delay that occurs between routers when each algorithm is used, and the network application throughput. The comparison shows that PQ does not need high-specification hardware (memory and CPU), but it is not fair, because it serves one application and ignores the others; the FIFO mechanism has a smaller queuing delay, whereas PQ has a larger delay.
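The fairness difference between FIFO and PQ can be illustrated with a minimal stdlib sketch (packet classes and priorities here are hypothetical, not from the simulation):

```python
import heapq
from collections import deque

arrivals = [("voice", 1), ("data", 2), ("voice", 1)]  # (class, priority)

# FIFO: packets are served strictly in arrival order.
fifo = deque(arrivals)
served_fifo = [fifo.popleft()[0] for _ in range(len(arrivals))]

# PQ: the highest-priority class is always served first, so lower-priority
# traffic ("data") waits -- and can starve under sustained high-priority load.
pq = []
for i, (cls, prio) in enumerate(arrivals):
    heapq.heappush(pq, (prio, i, cls))  # i breaks ties in arrival order
served_pq = [heapq.heappop(pq)[2] for _ in range(len(arrivals))]
```

WFQ sits between the two: it weights service across classes so that no class is starved outright.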
The implementation of wireless technologies based on the vehicular ad hoc sensor network (VASNET) may provide support for search and rescue (SAR) teams to operate effectively in natural disaster events such as landslides, earthquakes, flooding, and tsunamis. SAR operations are very challenging in such events due to possible damage to existing telecommunication infrastructure: the cellular communications infrastructure may be partially or completely destroyed after these natural disasters occur. Thus, the current VASNET infrastructure must be able to support an infrastructure-less network by integrating other green wireless technologies that can benefit the SAR team, which can indirectly save more human lives and reduce the number of casualties. Therefore, the integration of the green Internet of Things (IoT) and VASNET is proposed to form a heterogeneous framework for data dissemination in SAR operations. In addition, this paper discusses existing IoT frameworks in disaster scenarios, along with future research directions for IoT, especially those related to natural disaster scenarios.
The quick spread of the Coronavirus Disease (COVID-19) infection around the world is considered a real danger to global health. The biological structure and symptoms of COVID-19 are similar to those of other viral chest maladies, which makes it challenging to develop approaches for the efficient identification of COVID-19. In this study, an automatic COVID-19 identification approach is proposed to discriminate between healthy and COVID-19-infected subjects in X-ray images using two successful modern families of methods: traditional machine learning methods (e.g., artificial neural network (ANN), support vector machine (SVM) with linear and radial basis function (RBF) kernels, k-nearest neighbor (k-NN), Decision Tree (DT), and CN2 rule inducer techniques) and deep learning models (e.g., MobileNets V2, ResNet50, GoogleNet, DarkNet, and Xception). A large X-ray dataset has been created and developed, namely COVID-19 vs. Normal (400 healthy cases and 400 COVID cases). To the best of our knowledge, it is currently the largest publicly accessible COVID-19 dataset with the largest number of X-ray images of confirmed COVID-19 infection cases. Based on the experimental results, all the models performed well; among the deep learning models, ResNet50 achieved the optimum accuracy of 98.8%. In comparison, among the traditional machine learning techniques, the SVM demonstrated the best result with an accuracy of 95%, and the RBF kernel achieved 94% for the prediction of coronavirus disease 2019.
Lung cancer is among the most dangerous and deadly diseases, indicated by the presence of pulmonary nodules in the lung. It is mostly caused by the uncontrolled growth of cells in the lung. Lung nodule detection has a significant role in detecting and screening lung cancer in computed tomography (CT) scan images, and early detection plays an important role in the survival rate and treatment of lung cancer patients. Moreover, pulmonary nodule classification techniques based on convolutional neural networks can be used for the accurate and efficient detection of lung cancer. This work proposes an automatic nodule detection method for CT images based on a modified AlexNet architecture and the Support Vector Machine (SVM) algorithm, namely LungNet-SVM. The proposed model consists of seven convolutional layers, three pooling layers, and two fully connected layers used to extract features. A support vector machine classifier is applied for the binary classification of nodules into benign and malignant. The experimental analysis is performed using the publicly available benchmark dataset Lung Nodule Analysis 2016 (LUNA16). The proposed model achieved an accuracy of 97.64%, a sensitivity of 96.37%, and a specificity of 99.08%. A comparative analysis has been carried out between the proposed LungNet-SVM model and existing state-of-the-art approaches for the classification of lung cancer. The experimental results indicate that the proposed LungNet-SVM model achieved remarkable accuracy on the LUNA16 dataset.
Offline signature verification (OfSV) is essential in preventing the falsification of documents. Deep learning (DL) based OfSV systems require a high number of signature images to attain acceptable performance, but only a limited number of signature samples are available to train these models in real-world scenarios. Several researchers have proposed models to augment new signature images by applying various transformations; others have used human neuromotor and cognitive-inspired augmentation models to address the demand for more signature samples. Hence, augmenting a sufficient number of signatures with variations is still a challenging task. This study proposes OffSig-SinGAN: a deep learning-based image augmentation model to address the limited-signatures problem in offline signature verification. The proposed model is capable of augmenting better-quality signatures with diversity from a single signature image only. It is empirically evaluated on the widely used public dataset GPDSsyntheticSignature. The quality of augmented signature images is assessed using four metrics: pixel-by-pixel difference, peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and Fréchet inception distance (FID). Furthermore, various experiments were organised to evaluate the proposed image augmentation model's performance on selected DL-based OfSV systems and to determine whether it helped to improve the verification accuracy rate. Experimental results showed that the proposed augmentation model performed better on the GPDSsyntheticSignature dataset than other augmentation methods, and the improved verification accuracy rate of the selected DL-based OfSV system proved its effectiveness.
In this paper, we survey a number of studies in the literature on improving lightweight systems in the Internet of Things (IoT). The paper illustrates recent developments in the application of Boolean cryptographic functions and how they assist hardware such as Internet of Things devices. For a long time there seemed to be little progress in applying pure mathematics to security since the wide-ranging advances made by George Boole and Shannon. We discuss cryptanalysis of Boolean functions to avoid trapdoors and vulnerabilities in the development of block ciphers; it appears that there has been significant progress. A comparative analysis of lightweight cryptographic schemes is reported in terms of execution time, code size, and throughput. Depending on the schemes and the structure of the algorithms, these parameters change but remain within reasonable values, making the schemes suited for Internet of Things applications. The driving force of lightweight cryptography (LWC) stems mainly from its direct applications in the real world, since it provides solutions to actual problems faced by designers of IoT systems. Broadly speaking, lightweight cryptographic algorithms are designed to achieve two main goals. The first goal of a cryptographic algorithm is to withstand all known cryptanalytic attacks and thus to be secure in the black-box model. The second goal is to build the cryptographic primitive in such a way that its implementations satisfy a clearly specified set of constraints that depend on a case-by-case basis.
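Cryptanalysis of a Boolean function often starts from its Walsh spectrum, which measures how strongly the function correlates with each affine function (a near-linear component is a trapdoor-like weakness in a block cipher). A minimal sketch of the fast Walsh-Hadamard transform, using 2-input XOR as a worked example:

```python
def walsh_spectrum(truth_table):
    """Walsh spectrum of a Boolean function given as its 2^n-entry
    truth table of 0/1 outputs; a large |W| coefficient means the
    function correlates strongly with some affine function."""
    w = [1 - 2 * b for b in truth_table]  # map 0/1 -> +1/-1
    h = 1
    while h < len(w):
        # Butterfly step of the fast Walsh-Hadamard transform.
        for i in range(0, len(w), 2 * h):
            for j in range(i, i + h):
                x, y = w[j], w[j + h]
                w[j], w[j + h] = x + y, x - y
        h *= 2
    return w

# Nonlinearity = 2^(n-1) - max|W|/2.  XOR of two inputs is affine, so
# its whole spectrum collapses onto one coefficient and its
# nonlinearity is 0 -- exactly the behaviour a cipher designer avoids.
spectrum = walsh_spectrum([0, 1, 1, 0])
nonlinearity = len(spectrum) // 2 - max(abs(v) for v in spectrum) // 2
```

Highly nonlinear (e.g. bent) functions spread their spectrum evenly, which is what makes them resistant to linear cryptanalysis.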
This paper proposes a novel framework to detect cyber-attacks using machine learning coupled with User Behavior Analytics. The framework models user behavior as sequences of events representing the user's activities in the network. The represented sequences are then fitted into a recurrent neural network model to extract features that capture distinctive behavior for individual users, so the model can recognize frequencies of regular behavior and profile the user's manner in the network. The recurrent neural network then detects abnormal behavior by classifying unknown behavior as either regular or irregular. The importance of the proposed framework is due to the increase in cyber-attacks, especially when an attack is triggered from sources inside the network. Detecting insider attacks is typically much more challenging, in that security protocols can barely recognize attacks from trusted sources in the network, including users. Therefore, user behavior can be extracted and ultimately learned to recognize insightful patterns, in which regular patterns reflect a normal network workflow while irregular patterns can trigger an alert for a potential cyber-attack. The framework is fully described and the evaluation metrics are introduced. The experimental results show that the approach performed better than other approaches, and an AUC of 0.97 was achieved using RNN-LSTM. The paper concludes by providing potential directions for future improvements.
Crop diseases have a significant impact on plant growth and can lead to reduced yields. Traditional methods of disease detection rely on the expertise of plant protection experts, which can be subjective and dependent on individual experience and knowledge. To address this, the use of digital image recognition technology and deep learning algorithms has emerged as a promising approach for automating plant disease identification. In this paper, we propose a novel approach that utilizes a convolutional neural network (CNN) model in conjunction with Inception v3 to identify plant leaf diseases. The research focuses on developing a mobile application that leverages this mechanism to identify diseases in plants and provide recommendations for overcoming specific diseases. The models were trained using a dataset of 80,848 images representing 21 different plant leaves categorized into 60 distinct classes. Through rigorous training and evaluation, the proposed system achieved an accuracy rate of 99%. This mobile application serves as a convenient and valuable advisory tool, providing early detection and guidance in real agricultural environments. The significance of this research lies in its potential to improve plant disease detection and management practices: by automating the identification process through deep learning algorithms, the proposed system reduces the subjectivity of expert-based diagnosis and dependence on individual expertise. The integration of mobile technology further enhances accessibility and enables farmers and agricultural practitioners to swiftly and accurately identify diseases in their crops.
With the advancements in the era of artificial intelligence, blockchain, cloud computing, and big data, there is a need for secure, decentralized medical record storage and retrieval systems. While cloud storage solves storage issues, it is challenging to realize secure sharing of records over the network. The Medi-block record has brought a new digitalization method for patients' medical records in the healthcare system. This technology provides a symmetrical process between the hospital and doctors when patients urgently need to go to a different or nearby hospital. It enables electronic medical records to be available with the correct authentication and restricts access to medical data retrieval. The Medi-block record is a consumer-centered healthcare data system that brings reliable and transparent datasets for medical records. This study presents an extensive review of proposed solutions aiming to protect the privacy and integrity of medical data by securing data sharing for Medi-block records. It also provides a comprehensive investigation of recent advances in different methods of securing data sharing, such as Blockchain technology, Access Control, Privacy-Preserving, Proxy Re-Encryption, and Service-On-Chain approaches. Finally, we highlight the open issues and identify the challenges regarding secure data sharing for Medi-block records in healthcare systems.
Many Low Impact Developments (LIDs) have recently been developed as a sustainable integrated strategy for managing the quantity and quality of stormwater and surrounding amenities. Previous research showed that the green roof is one of the most promising LIDs for slowing down rainwater, controlling rainwater volume, and enhancing rainwater quality by filtering and leaching contaminants from the substrate. However, there is no guideline for green roof design in Malaysia; hence, investigating the viability of using green roofs to manage stormwater and address flash flood hazards is urgently necessary. This study used the Storm Water Management Model (SWMM) to evaluate the effectiveness of a green roof in managing stormwater and improving rainwater quality. The selected study area is the multistory car park (MSCP) rooftop at the Swinburne University of Technology Sarawak Campus. Nine green roof models with different configurations were created. Results revealed that the optimum green roof design uses 100 mm of berm height, 150 mm of soil thickness, and 50 mm of drainage mat thickness; it reduced runoff generation by 26.73%, TSS by 89.75%, TP by 93.07%, TN by 93.16%, and BOD by 81.33%. However, pH values dropped as low as 5.933, becoming more acidic due to the substrates in the green roof. These findings demonstrate that green roofs improve water quality and can temporarily store excess rainfall, making them a very promising and sustainable tool for managing stormwater.
The basic idea behind personalized web search is to deliver search results tailored to meet user needs, one of the growing concepts in web technologies. The personalized web search presented in this paper is based on exploiting implicit feedback on user satisfaction during the user's web browsing history to construct a user profile storing the web pages the user is highly interested in. A weight is assigned to each page stored in the user's profile; this weight reflects the user's interest in the page. We name this weight the relative rank of the page, since it depends on the user issuing the query. The ranking algorithm provided in this paper is therefore based on the principle that the rank assigned to a page is the sum of two rank values, R_rank and A_rank. A_rank is an absolute rank: it is fixed for all users issuing the same query, since it depends only on the link structure of the web and on the keywords of the query, and thus it can be calculated by the PageRank algorithm suggested by Brin and Page in 1998 and used by the Google search engine. R_rank is the relative rank; it is calculated by the methods given in this paper, which depend mainly on recording implicit measures of user satisfaction during the user's previous browsing history.
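The additive ranking rule described above can be sketched as follows; the page names and weights are illustrative, not taken from the paper:

```python
def final_rank(a_rank, r_rank):
    """Rank shown to the user = absolute rank (same for every user
    issuing the query) + relative rank (from this user's profile)."""
    return a_rank + r_rank

# A_rank comes from a PageRank-style global score; R_rank comes from
# the user's profile of implicitly preferred pages (0 if not stored).
a_ranks = {"docs/python": 0.5, "wiki/python": 0.2}
profile = {"wiki/python": 0.6}  # this user dwells on the wiki page

ranked = sorted(a_ranks,
                key=lambda p: final_rank(a_ranks[p], profile.get(p, 0.0)),
                reverse=True)
```

Because R_rank defaults to 0 for pages outside the profile, the scheme degrades gracefully to plain PageRank ordering for new users.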
A non-local denoising (NLD) algorithm for point-sampled surfaces (PSSs) is presented based on similarities, including the geometry intensity and features of sample points. Using a trilateral filtering operator, the differential signal of each sample point is determined and called the "geometry intensity". Based on covariance analysis, a regular grid of geometry intensity is constructed for each sample point, and the geometry-intensity similarity of two points is measured according to their grids. Based on mean shift clustering, the PSSs are clustered in terms of local geometry-feature similarity. The smoothed geometry intensity, i.e., the offset distance, of each sample point is estimated according to the two similarities. Using the resulting intensity, the noise component is finally removed from the PSSs by adjusting the position of each sample point along its own normal direction. Experimental results demonstrate that the algorithm is robust and can produce a more accurate denoising result while achieving better feature preservation.
文摘Forecasting river flow is crucial for optimal planning,management,and sustainability using freshwater resources.Many machine learning(ML)approaches have been enhanced to improve streamflow prediction.Hybrid techniques have been viewed as a viable method for enhancing the accuracy of univariate streamflow estimation when compared to standalone approaches.Current researchers have also emphasised using hybrid models to improve forecast accuracy.Accordingly,this paper conducts an updated literature review of applications of hybrid models in estimating streamflow over the last five years,summarising data preprocessing,univariate machine learning modelling strategy,advantages and disadvantages of standalone ML techniques,hybrid models,and performance metrics.This study focuses on two types of hybrid models:parameter optimisation-based hybrid models(OBH)and hybridisation of parameter optimisation-based and preprocessing-based hybridmodels(HOPH).Overall,this research supports the idea thatmeta-heuristic approaches precisely improveML techniques.It’s also one of the first efforts to comprehensively examine the efficiency of various meta-heuristic approaches(classified into four primary classes)hybridised with ML techniques.This study revealed that previous research applied swarm,evolutionary,physics,and hybrid metaheuristics with 77%,61%,12%,and 12%,respectively.Finally,there is still room for improving OBH and HOPH models by examining different data pre-processing techniques and metaheuristic algorithms.
文摘Autism Spectrum Disorder(ASD)is a neurodevelopmental condition characterized by significant challenges in social interaction,communication,and repetitive behaviors.Timely and precise ASD detection is crucial,particularly in regions with limited diagnostic resources like Pakistan.This study aims to conduct an extensive comparative analysis of various machine learning classifiers for ASD detection using facial images to identify an accurate and cost-effective solution tailored to the local context.The research involves experimentation with VGG16 and MobileNet models,exploring different batch sizes,optimizers,and learning rate schedulers.In addition,the“Orange”machine learning tool is employed to evaluate classifier performance and automated image processing capabilities are utilized within the tool.The findings unequivocally establish VGG16 as the most effective classifier with a 5-fold cross-validation approach.Specifically,VGG16,with a batch size of 2 and the Adam optimizer,trained for 100 epochs,achieves a remarkable validation accuracy of 99% and a testing accuracy of 87%.Furthermore,the model achieves an F1 score of 88%,precision of 85%,and recall of 90% on test images.To validate the practical applicability of the VGG16 model with 5-fold cross-validation,the study conducts further testing on a dataset sourced fromautism centers in Pakistan,resulting in an accuracy rate of 85%.This reaffirms the model’s suitability for real-world ASD detection.This research offers valuable insights into classifier performance,emphasizing the potential of machine learning to deliver precise and accessible ASD diagnoses via facial image analysis.
Abstract: Accurate software cost estimation in Global Software Development (GSD) remains challenging due to reliance on historical data and expert judgment. Traditional models, such as the Constructive Cost Model (COCOMO II), rely heavily on accurate historical data, and expert judgment is required to set many input parameters, which can introduce subjectivity and variability into the estimation process. Consequently, current GSD models need improvement to mitigate the reliance on historical data, the subjectivity of expert judgment, the inadequate consideration of GSD-based cost drivers, and the limited integration of modern technologies, all of which contribute to cost overruns. This study introduces a novel hybrid model that synergizes COCOMO II with Artificial Neural Networks (ANN) to address these challenges. The proposed hybrid model integrates additional GSD-based cost drivers identified through a systematic literature review and further vetted by industry experts. This article compares the effectiveness of the proposed model with state-of-the-art machine learning-based models for software cost estimation. Evaluation on the NASA 93 dataset with twenty-six GSD-based cost drivers reveals that the hybrid model achieves superior accuracy, outperforming existing state-of-the-art models. The findings indicate the potential of combining COCOMO II, ANN, and additional GSD-based cost drivers to transform cost estimation in GSD.
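The COCOMO II side of such a hybrid is well defined: post-architecture effort is PM = A * Size^E * prod(EM), with E = B + 0.01 * sum(SF). A minimal sketch follows, using the published COCOMO II.2000 nominal calibration (A = 2.94, B = 0.91, nominal scale-factor ratings); it is a sketch of the base model only, not of the study's ANN-augmented variant.

```python
def cocomo_ii_effort(ksloc, scale_factors, effort_multipliers,
                     a=2.94, b=0.91):
    """COCOMO II post-architecture effort in person-months:
    PM = A * Size^E * prod(EM),  where  E = B + 0.01 * sum(SF)."""
    e = b + 0.01 * sum(scale_factors)
    effort = a * ksloc ** e
    for em in effort_multipliers:
        effort *= em
    return effort

# 100 KSLOC project with all five scale factors at their nominal
# COCOMO II.2000 ratings and all 17 effort multipliers at 1.0
nominal = cocomo_ii_effort(ksloc=100.0,
                           scale_factors=[3.72, 3.04, 4.24, 3.29, 4.68],
                           effort_multipliers=[1.0] * 17)
```

In the hybrid described above, an ANN would learn corrections to this parametric estimate from the GSD-based cost drivers.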
Funding: Financially supported by the Deanship of Scientific Research at King Khalid University under Research Grant Number R.G.P.2/549/44.
Abstract: Steganography algorithms are methods of hiding data transfers in media files. Several machine learning architectures have been presented recently to improve stego image identification performance by using spatial information, and these methods have made it feasible to handle a wide range of problems associated with image analysis. Information embedding methods use images with little information or low payload, but contemporary research aims to employ high-payload images for classification. To address the need for both low- and high-payload images, this work provides a machine-learning approach to steganography image classification that uses the Curvelet transform to efficiently extract characteristics from both types of images. A Support Vector Machine (SVM), a commonplace classification technique, is employed to determine whether an image is a stego or cover image. The Wavelet Obtained Weights (WOW), Spatial Universal Wavelet Relative Distortion (S-UNIWARD), Highly Undetectable Steganography (HUGO), and Minimizing the Power of Optimal Detector (MiPOD) steganography techniques are used in a variety of experimental scenarios to evaluate the performance of the proposed method. Using WOW at several payloads, the proposed approach achieves a classification accuracy of 98.60%, demonstrating its superiority over state-of-the-art methods.
Abstract: The Universal Soil Loss Equation (USLE) is the most comprehensive technique available to predict the long-term average annual rate of erosion on a field slope. USLE is governed by five factors: the soil erodibility factor (K), the rainfall and runoff erosivity index (R), the crop/vegetation and management factor (C), the support practice factor (P), and the slope length-gradient factor (LS). The K, R, and LS factors have been studied extensively in the past, but the impacts of the C and P factors on outfall Total Suspended Solids (TSS) and the percentage reduction of TSS have not yet been fully studied. Therefore, this study employs the Buffer Zone Calculator as a tool to determine the sediment removal efficiency for different C and P factors. The selected study area is the Santubong River, Kuching, Sarawak. Results show that the outfall TSS increases as C values increase. The most effective and efficient land use for reducing TSS among the 17 land uses investigated is found to be forest with undergrowth, followed by mixed dipt. forest, forest with no undergrowth, cultivated grass, logging 30, logging 10^6, wet rice, new shifting agriculture, oil palm, rubber, cocoa, coffee, tea, and lastly settlement/cleared land. Results also indicate that the percentage reduction of TSS increases as the P factor decreases. The most effective support practice to reduce the outfall TSS is found to be terracing, followed by contour-strip cropping, contouring, and lastly not implementing any soil conservation practice.
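The USLE combines the five factors multiplicatively, A = R * K * LS * C * P, so the reported trends (loss rising with C, falling with better support practice, i.e. lower P) follow directly. A hedged sketch with purely hypothetical factor values:

```python
def usle_soil_loss(R, K, LS, C, P):
    """Universal Soil Loss Equation: A = R * K * LS * C * P
    (predicted average annual soil loss, e.g. t/ha/yr)."""
    return R * K * LS * C * P

# Hypothetical R, K, and LS values for a single slope
base = dict(R=1200.0, K=0.03, LS=1.5)

forest = usle_soil_loss(C=0.001, P=1.0, **base)   # forest with undergrowth
cleared = usle_soil_loss(C=0.5, P=1.0, **base)    # settlement/cleared land
terraced = usle_soil_loss(C=0.5, P=0.1, **base)   # cleared land, terraced
```

Even with these made-up numbers, dense cover (small C) and terracing (small P) each cut the predicted loss by an order of magnitude or more, mirroring the ranking of land uses and support practices above.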
Abstract: Samarahan has transformed from a small village into an education hub over the past two decades. Rapid development and population growth have led to speedy growth in water demand. The situation is worsening as the pipes deteriorate with age. Therefore, there is a need to study the adequacy of the water supply and the relationships among the roughness coefficient (C) values in the Hazen-Williams equation, head loss, and water pressure due to pipe aging at Uni-Central, a residential area located in Samarahan, Sarawak. Investigations were carried out with Ductile Iron, Asbestos Cement, and Cast Iron pipes in age categories of 0 - 10 years, 10 - 30 years, 30 - 50 years, 50 - 70 years, and >70 years. Six critical nodes, named A, B, C, D, E, and F, were identified to study the water pressure and head loss. A model was developed with the InfoWorks Water Supply (WS) Pro software. The impact of pipe aging and materials on water pressure and head loss was not significant at Nodes A, B, C, and F. However, the maximum water pressure at Nodes D and F reached only 6.30 m and 7.30 m, respectively, across all investigations, so some improvement works are required. Results also show that Asbestos Cement pipe has the least impact on head loss and water pressure, followed by Ductile Iron pipe and lastly Cast Iron pipe. Simulation results also revealed that older pipes have higher roughness, indicated by lower “C” values, which increases head loss and reduces water pressure. Conversely, as “C” values increase, head loss is reduced and water pressure increases.
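The roughness coefficient enters through the Hazen-Williams head-loss formula; a common SI form is h_f = 10.67 * L * Q^1.852 / (C^1.852 * D^4.87). The sketch below (pipe dimensions and flows hypothetical) shows why a lower C for an aged pipe produces the higher head loss reported above.

```python
def hazen_williams_headloss(L, Q, C, D):
    """Head loss (m) over a pipe of length L (m) and diameter D (m)
    carrying flow Q (m^3/s), with Hazen-Williams coefficient C.
    Common SI form: h_f = 10.67 * L * Q**1.852 / (C**1.852 * D**4.87)."""
    return 10.67 * L * Q ** 1.852 / (C ** 1.852 * D ** 4.87)

# Same hypothetical pipe, new (high C) versus heavily aged (low C)
new_pipe = hazen_williams_headloss(L=500.0, Q=0.05, C=140.0, D=0.2)
old_pipe = hazen_williams_headloss(L=500.0, Q=0.05, C=80.0, D=0.2)
```

Since C appears in the denominator raised to 1.852, halving C roughly triples the head loss, which in turn depresses the downstream pressure head.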
Abstract: This paper presents simulation results comparing three queuing mechanisms: First In First Out (FIFO), Priority Queuing (PQ), and Weighted Fair Queuing (WFQ). The comparison considers their effects on the network’s routers: the load each algorithm places on router CPUs and memory usage, the delay incurred between routers under each algorithm, and the network application throughput. The comparison shows that PQ does not need high-specification hardware (memory and CPU), but it is not fair when used, because it serves one application and ignores the others; FIFO, by contrast, has a smaller queuing delay, whereas PQ has a bigger delay.
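The unfairness finding can be reproduced with a toy scheduler: under strict PQ, a burst of high-priority packets consumes the whole service budget, while FIFO serves applications in arrival order. A minimal sketch (traffic mix and budget hypothetical, not from the simulation):

```python
import heapq
from collections import deque

def serve_fifo(apps, budget):
    """Serve packets strictly in arrival order."""
    q = deque(apps)
    return [q.popleft() for _ in range(min(budget, len(q)))]

def serve_pq(packets, budget):
    """Strict priority queuing: lowest priority number is served first;
    the arrival index breaks ties so tuples compare cleanly."""
    heap = [(prio, i, app) for i, (prio, app) in enumerate(packets)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(min(budget, len(heap)))]

# Hypothetical interleaved mix: bulk data (priority 1) and voice (priority 0);
# the router only has budget to serve half of the 8 queued packets
packets = [(1, "data"), (0, "voice")] * 4
fifo_served = serve_fifo([app for _, app in packets], budget=4)
pq_served = serve_pq(packets, budget=4)
```

FIFO splits the budget evenly between the two applications, while PQ spends it entirely on voice, starving the data application exactly as the paper describes.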
Abstract: The implementation of wireless technologies based on the vehicular ad hoc sensor network (VASNET) may provide support for search and rescue (SAR) teams to operate effectively in natural disaster events such as landslides, earthquakes, flooding, and tsunamis. SAR operations are very challenging in such events due to possible damage to the existing telecommunication infrastructure; the existing cellular communications infrastructure may be partially or completely destroyed after the occurrence of these natural disasters. Thus, the VASNET infrastructure must be able to support an infrastructure-less network by integrating other green wireless technologies that can benefit the SAR team, which can indirectly save more human lives and reduce the number of casualties. Therefore, the integration of the green Internet of Things (IoT) and VASNET is proposed to form a heterogeneous framework for data dissemination in SAR operations. In addition, this paper discusses existing IoT frameworks in disaster scenarios and outlines future research directions for IoT, especially those related to natural disaster scenarios.
Abstract: The quick spread of the Coronavirus Disease (COVID-19) infection around the world is considered a real danger to global health. The biological structure and symptoms of COVID-19 are similar to those of other viral chest maladies, which makes it challenging to develop approaches for efficient identification of COVID-19 disease. In this study, an automatic COVID-19 identification approach is proposed to discriminate between healthy and COVID-19 infected subjects in X-ray images using two successful modern families of methods: traditional machine learning methods (artificial neural network (ANN), support vector machine (SVM), linear kernel and radial basis function (RBF), k-nearest neighbor (k-NN), Decision Tree (DT), and the CN2 rule inducer) and deep learning models (MobileNet V2, ResNet50, GoogleNet, DarkNet, and Xception). A large X-ray dataset has been created and developed, namely COVID-19 vs. Normal (400 healthy cases and 400 COVID cases). To the best of our knowledge, it is currently the largest publicly accessible COVID-19 dataset with the largest number of X-ray images of confirmed COVID-19 infection cases. Based on the experimental results, all the models performed well: the deep learning models achieved the best accuracy of 98.8% with the ResNet50 model, while among the traditional machine learning techniques, the SVM demonstrated the best result with an accuracy of 95%, and RBF achieved 94% accuracy, for the prediction of coronavirus disease 2019.
Abstract: Lung cancer is a highly dangerous and death-causing disease indicated by the presence of pulmonary nodules in the lung, mostly caused by the uncontrolled growth of cells in the lung. Lung nodule detection has a significant role in detecting and screening lung cancer in computed tomography (CT) scan images, and early detection plays an important role in the survival rate and treatment of lung cancer patients. Moreover, pulmonary nodule classification techniques based on convolutional neural networks can be used for accurate and efficient detection of lung cancer. This work proposes an automatic nodule detection method for CT images based on a modified AlexNet architecture and a Support Vector Machine (SVM) algorithm, namely LungNet-SVM. The proposed model consists of seven convolutional layers, three pooling layers, and two fully connected layers used to extract features. A support vector machine classifier is applied for the binary classification of nodules into benign and malignant. The experimental analysis is performed using the publicly available benchmark dataset Lung Nodule Analysis 2016 (LUNA16). The proposed model achieves 97.64% accuracy, 96.37% sensitivity, and 99.08% specificity. A comparative analysis has been carried out between the proposed LungNet-SVM model and existing state-of-the-art approaches for the classification of lung cancer. The experimental results indicate that the proposed LungNet-SVM model achieves remarkable performance on the LUNA16 dataset in terms of accuracy.
Abstract: Offline signature verification (OfSV) is essential in preventing the falsification of documents. Deep learning (DL) based OfSV systems require a high number of signature images to attain acceptable performance; however, only a limited number of signature samples are available to train these models in real-world scenarios. Several researchers have proposed models to augment new signature images by applying various transformations, while others have used human neuromotor and cognitive-inspired augmentation models to address the demand for more signature samples. Hence, augmenting a sufficient number of signatures with variations is still a challenging task. This study proposes OffSig-SinGAN: a deep learning-based image augmentation model that addresses the limited-signatures problem in offline signature verification. The proposed model is capable of augmenting better-quality signatures with diversity from a single signature image only. It is empirically evaluated on the widely used public dataset GPDSsyntheticSignature. The quality of the augmented signature images is assessed using four metrics: pixel-by-pixel difference, peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and Fréchet inception distance (FID). Furthermore, various experiments were organised to evaluate the proposed image augmentation model’s performance on selected DL-based OfSV systems and to test whether it helped to improve the verification accuracy rate. Experiment results show that the proposed augmentation model performed better on the GPDSsyntheticSignature dataset than other augmentation methods, and the improved verification accuracy rate of the selected DL-based OfSV systems proves the effectiveness of the proposed augmentation model.
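Of the four quality metrics, PSNR is the simplest to state: PSNR = 10 * log10(MAX^2 / MSE), where MSE is the mean squared pixel difference and MAX the peak intensity. A minimal sketch (the tiny patches are hypothetical, not signature data):

```python
import math

def psnr(original, augmented, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-sized images,
    given as flat lists of pixel intensities: 10*log10(MAX^2 / MSE)."""
    mse = sum((a - b) ** 2 for a, b in zip(original, augmented)) / len(original)
    if mse == 0:
        return float("inf")   # identical images: noiseless, PSNR unbounded
    return 10 * math.log10(max_val ** 2 / mse)

# Hypothetical 2x2 grayscale patches differing by one intensity level per pixel
ref = [100, 120, 130, 140]
aug = [101, 119, 131, 139]
score = psnr(ref, aug)
```

Higher PSNR means the augmented signature stays closer to the reference; FID and SSIM complement it by measuring perceptual and structural similarity rather than raw pixel error.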
Abstract: In this paper, we survey a number of studies in the literature on improving lightweight systems in the Internet of Things (IoT). The paper illustrates recent developments in the application of Boolean cryptographic functions and how they assist in securing hardware such as IoT devices. For a long time there seemed to be little progress in applying pure mathematics to security since the foundational advances of George Boole and Shannon. We discuss cryptanalysis of Boolean functions to avoid trapdoors and vulnerabilities in the development of block ciphers; here, significant progress appears to have been made. A comparative analysis of lightweight cryptographic schemes is reported in terms of execution time, code size, and throughput. Depending on the schemes and the structure of the algorithms, these parameters change but remain within reasonable values, making the schemes suited for Internet of Things applications. The driving force of lightweight cryptography (LWC) stems mainly from its direct applications in the real world, since it provides solutions to actual problems faced by designers of IoT systems. Broadly speaking, lightweight cryptographic algorithms are designed to achieve two main goals. The first goal is to withstand all known cryptanalytic attacks and thus to be secure in the black-box model. The second goal is to build the cryptographic primitive in such a way that its implementations satisfy a clearly specified set of constraints that depend on a case-by-case basis.
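One routine cryptanalytic check on a Boolean function is its nonlinearity, the distance to the nearest affine function, computed from the Walsh-Hadamard spectrum as NL(f) = 2^(n-1) - max|W_f|/2; low nonlinearity signals vulnerability to linear cryptanalysis. A small illustrative sketch (the example function is textbook material, not from the surveyed schemes):

```python
def walsh_spectrum(truth_table):
    """Walsh-Hadamard spectrum of a Boolean function given as a
    truth table of 0/1 values indexed by the integer input x."""
    n_points = len(truth_table)
    spectrum = [1 - 2 * bit for bit in truth_table]   # map {0,1} -> {+1,-1}
    step = 1
    while step < n_points:                            # in-place fast WHT
        for i in range(0, n_points, 2 * step):
            for j in range(i, i + step):
                a, b = spectrum[j], spectrum[j + step]
                spectrum[j], spectrum[j + step] = a + b, a - b
        step *= 2
    return spectrum

def nonlinearity(truth_table):
    """NL(f) = 2^(n-1) - max|W_f(a)|/2: distance to the nearest affine function."""
    return len(truth_table) // 2 - max(abs(w) for w in walsh_spectrum(truth_table)) // 2

# f(x1, x2, x3) = x1*x2 XOR x3, a standard 3-variable example
table = [((x >> 2) & 1) * ((x >> 1) & 1) ^ (x & 1) for x in range(8)]
nl = nonlinearity(table)
```

For this function the spectrum peaks at |W| = 4, giving NL = 2, which is the maximum attainable for three variables; block-cipher S-box components are screened with exactly this kind of computation at larger n.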
Funding: Supported by the fund received from Al Baha University, 8/1440.
Abstract: This paper proposes a novel framework to detect cyber-attacks using machine learning coupled with User Behavior Analytics. The framework models user behavior as sequences of events representing the user’s activities in the network. The represented sequences are then fitted into a recurrent neural network model to extract features that draw a distinctive behavior profile for individual users; thus, the model can recognize frequencies of regular behavior to profile the user’s manner in the network. The recurrent neural network subsequently detects abnormal behavior by classifying unknown behavior as either regular or irregular. The importance of the proposed framework is due to the increase in cyber-attacks, especially attacks triggered from sources inside the network. Detecting insider attacks is typically much more challenging, in that security protocols can barely recognize attacks from trusted resources in the network, including users. Therefore, user behavior can be extracted and ultimately learned to recognize insightful patterns, in which regular patterns reflect a normal network workflow while irregular patterns can trigger an alert for a potential cyber-attack. The framework has been fully described, and the evaluation metrics have been introduced. The experimental results show that the approach performed better than other approaches, with an AUC of 0.97 achieved using RNN-LSTM 1. The paper concludes by providing potential directions for future improvements.
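The paper's model is an RNN-LSTM; as a deliberately simplified stand-in for the same profiling idea, the sketch below learns the frequencies of a user's event-to-event transitions and flags sequences dominated by transitions never seen before. All event names and logs are hypothetical.

```python
from collections import Counter

def profile_user(sequences):
    """Count event-to-event transitions (bigrams) across a user's
    historical event sequences."""
    counts = Counter()
    for seq in sequences:
        counts.update(zip(seq, seq[1:]))
    return counts

def anomaly_score(profile, seq):
    """Fraction of transitions in `seq` never seen in the user's profile;
    0.0 means fully regular, 1.0 means entirely novel behavior."""
    transitions = list(zip(seq, seq[1:]))
    unseen = sum(1 for t in transitions if profile[t] == 0)
    return unseen / len(transitions)

# Hypothetical historical event logs for one user
normal = [["login", "read", "mail", "logout"],
          ["login", "mail", "read", "logout"]]
profile = profile_user(normal)
regular = anomaly_score(profile, ["login", "read", "mail", "logout"])
irregular = anomaly_score(profile, ["login", "delete", "exfil", "logout"])
```

An LSTM generalizes this idea: instead of exact bigram counts it learns a distributed representation of likely next events, so it can score longer-range and fuzzier deviations.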
Funding: Supported by the Hainan Provincial Natural Science Foundation of China (No. 123QN182) and the Hainan University Research Fund (Project Nos. KYQD(ZR)-22064, KYQD(ZR)-22063, and KYQD(ZR)-22065).
Abstract: Crop diseases have a significant impact on plant growth and can lead to reduced yields. Traditional methods of disease detection rely on the expertise of plant protection experts, which can be subjective and dependent on individual experience and knowledge. To address this, the use of digital image recognition technology and deep learning algorithms has emerged as a promising approach for automating plant disease identification. In this paper, we propose a novel approach that utilizes a convolutional neural network (CNN) model in conjunction with Inception v3 to identify plant leaf diseases. The research focuses on developing a mobile application that leverages this mechanism to identify diseases in plants and provide recommendations for overcoming specific diseases. The models were trained using a dataset consisting of 80,848 images representing 21 different plant leaves categorized into 60 distinct classes. Through rigorous training and evaluation, the proposed system achieved an accuracy rate of 99%. The mobile application serves as a convenient and valuable advisory tool, providing early detection and guidance in real agricultural environments. The significance of this research lies in its potential to revolutionize plant disease detection and management practices: by automating the identification process through deep learning algorithms, the proposed system eliminates the subjective nature of expert-based diagnosis and reduces dependence on individual expertise, while the integration of mobile technology enhances accessibility and enables farmers and agricultural practitioners to swiftly and accurately identify diseases in their crops.
Abstract: With advancements in artificial intelligence, blockchain, cloud computing, and big data, there is a need for secure, decentralized medical record storage and retrieval systems. While cloud storage solves the storage problem, it is challenging to realize secure sharing of records over the network. The Medi-block record has brought a new digitalization method for patients’ medical records in the healthcare system. This technology provides a symmetrical process between the hospital and doctors when patients urgently need to go to a different or nearby hospital. It makes electronic medical records available with the correct authentication and restricts access to medical data retrieval. The Medi-block record is a consumer-centered healthcare data system that brings reliable and transparent datasets for medical records. This study presents an extensive review of proposed solutions aiming to protect the privacy and integrity of medical data by securing data sharing for Medi-block records. It also provides a comprehensive investigation of recent advances in different methods of securing data sharing, such as Blockchain technology, Access Control, Privacy-Preserving, Proxy Re-Encryption, and Service-On-Chain approaches. Finally, we highlight the open issues and identify the challenges regarding secure data sharing for Medi-block records in healthcare systems.
Abstract: Many Low Impact Developments (LIDs) have recently been developed as a sustainable, integrated strategy for managing the quantity and quality of stormwater and surrounding amenities. Previous research showed that the green roof is one of the most promising LIDs for slowing down rainwater, controlling rainwater volume, and enhancing rainwater quality by filtering and leaching contaminants from the substrate. However, there is no guideline for green roof design in Malaysia; hence, investigating the viability of using green roofs to manage stormwater and address flash flood hazards is urgently necessary. This study used the Storm Water Management Model (SWMM) to evaluate the effectiveness of green roofs in managing stormwater and improving rainwater quality. The selected study area is the multistorey car park (MSCP) rooftop at the Swinburne University of Technology Sarawak Campus. Nine green roof models with different configurations were created. Results revealed that the optimum design is 100 mm of berm height, 150 mm of soil thickness, and 50 mm of drainage mat thickness, which reduced runoff generation by 26.73%, TSS by 89.75%, TP by 93.07%, TN by 93.16%, and BOD by 81.33%. However, pH values dropped as low as 5.933, becoming more acidic due to the substrates in the green roof. These findings demonstrate that green roofs improve water quality and can temporarily store excess rainfall, making them a very promising and sustainable tool for managing stormwater.
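The runoff-reduction mechanism can be caricatured with a first-order event water balance: rain first fills the soil pore storage and the berm (ponding) storage, and only the excess leaves as runoff. This is not SWMM's routing, and every number below is hypothetical; it only illustrates why berm height and soil thickness dominate the optimum configuration.

```python
def green_roof_runoff(rain_mm, berm_mm, soil_mm, porosity=0.4):
    """Very simplified single-event water balance for a green roof:
    storage = soil pore volume + berm ponding depth (all depths in mm);
    runoff is the excess rain, and reduction is the retained fraction (%)."""
    storage = soil_mm * porosity + berm_mm
    runoff = max(0.0, rain_mm - storage)
    reduction = 100.0 * (rain_mm - runoff) / rain_mm
    return runoff, reduction

# Configuration matching the study's optimum: 100 mm berm, 150 mm soil,
# against a hypothetical 200 mm design storm
runoff, pct = green_roof_runoff(rain_mm=200.0, berm_mm=100.0, soil_mm=150.0)
```

SWMM adds infiltration dynamics, the drainage mat, and continuous simulation on top of this balance, which is why its reported reduction (26.73% over the modelled record) differs from any single-event figure.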
Abstract: The basic idea behind personalized web search is to deliver search results that are tailored to meet user needs, one of the growing concepts in web technologies. The personalized web search presented in this paper exploits the implicit feedback of user satisfaction during the user’s web browsing history to construct a user profile storing the web pages the user is highly interested in. A weight is assigned to each page stored in the user’s profile; this weight reflects the user’s interest in the page. We call this weight the relative rank of the page, since it depends on the user issuing the query. The ranking algorithm provided in this paper is therefore based on the principle that the rank assigned to a page is the sum of two rank values, R_rank and A_rank. A_rank is an absolute rank: it is fixed for all users issuing the same query, since it depends only on the link structure of the web and on the keywords of the query, and thus it can be calculated by the PageRank algorithm suggested by Brin and Page in 1998 and used by the Google search engine. R_rank is the relative rank; it is calculated by the methods given in this paper, which depend mainly on recording implicit measures of user satisfaction during the user’s previous browsing history.
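The additive combination above can be sketched in a few lines: every user shares the same A_rank scores, while each user's profile contributes a personal R_rank boost. All page names and scores are hypothetical.

```python
def combined_rank(a_rank, r_rank, pages):
    """Order pages by final rank = A_rank (query-dependent, user-independent)
    + R_rank (user-dependent relative rank from implicit feedback)."""
    return sorted(pages,
                  key=lambda p: a_rank[p] + r_rank.get(p, 0.0),
                  reverse=True)

# Hypothetical A_rank scores shared by all users issuing this query ...
a_rank = {"pageA": 0.50, "pageB": 0.45, "pageC": 0.30}
# ... while this user's browsing history boosts pageB via R_rank
r_rank = {"pageB": 0.20}
ranking = combined_rank(a_rank, r_rank, ["pageA", "pageB", "pageC"])
```

A user with an empty profile (R_rank of zero everywhere) simply sees the plain PageRank ordering, so personalization degrades gracefully.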
Funding: Supported by the Hi-Tech Research and Development Program (863) of China (Nos. 2007AA01Z311 and 2007AA04Z1A5) and the Research Fund for the Doctoral Program of Higher Education of China (No. 20060335114).
Abstract: A non-local denoising (NLD) algorithm for point-sampled surfaces (PSSs) is presented based on similarities, including the geometry intensity and features of sample points. Using a trilateral filtering operator, the differential signal of each sample point is determined and called the "geometry intensity". Based on covariance analysis, a regular grid of geometry intensity is constructed for each sample point, and the geometry-intensity similarity of two points is measured according to their grids. Based on mean shift clustering, the PSSs are clustered in terms of local geometry-feature similarity. The smoothed geometry intensity, i.e., the offset distance, of each sample point is estimated according to the two similarities. Using the resulting intensity, the noise component is finally removed from the PSSs by adjusting the position of each sample point along its own normal direction. Experimental results demonstrate that the algorithm is robust and can produce a more accurate denoising result while better preserving features.