Measuring software quality requires software engineers to understand the system's quality attributes and their measurements. A quality attribute is a qualitative property; however, a quantitative measure is needed for software measurement, which is not considered during the development of most software systems. Many research studies have investigated approaches for measuring software quality, but none offers a practical way to quantify and measure quality attributes. This paper proposes a software quality measurement model, based on a software interconnection model, to measure the quality of software components and the overall quality of the software system. Unlike most existing approaches, the proposed approach can be applied at the early stages of software development, to different architectural design models, and at different levels of system decomposition. The model uses a heuristic normalization of the software's internal quality attributes, i.e., coupling and cohesion, for software quality measurement. In this model, the quality of a software component is measured from its internal strength and the coupling it exhibits with other components. The model was evaluated with nine software engineering teams that agreed to participate in the experiment during the development of their software systems. The experiments showed that coupling reduces the internal strength of the coupled components by the amount of coupling they exhibit, which degrades their quality and the overall quality of the software system. The introduced model can help in understanding the quality of a software design. In addition, it identifies the locations in a software design that exhibit unnecessary couplings, which degrade the quality of the software system and can be eliminated.
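The abstract reports the direction of the effect (coupling subtracts from a component's internal strength) but not the published formula. As a hedged illustration only, the heuristic below treats both attributes as values normalized to [0, 1] and clamps the result; the function names and the averaging step are assumptions, not the authors' model.

```python
def component_quality(cohesion, coupling):
    """Heuristic: coupling is subtracted from internal strength (cohesion).

    Both inputs are assumed normalized to [0, 1]; the result is clamped so a
    heavily coupled component bottoms out at zero quality.
    """
    return max(0.0, min(1.0, cohesion - coupling))

def system_quality(components):
    """Overall quality as the mean of the component qualities
    (an assumed aggregation, for illustration only)."""
    return sum(component_quality(c, k) for c, k in components) / len(components)

# A component with no coupling keeps its full internal strength; the same
# component loses quality by exactly the coupling it exhibits.
assert component_quality(0.9, 0.0) == 0.9
assert abs(component_quality(0.9, 0.3) - 0.6) < 1e-9
```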
With the development of hardware devices and the upgrading of smartphones, a large number of users save privacy-related information on mobile devices, mainly smartphones, which places higher demands on the protection of mobile users' private information. Mobile user authentication methods based on human-computer interaction have been extensively studied for their high precision and non-perception, but shortcomings remain, such as low data-collection efficiency, untrustworthy participating nodes, and a lack of practicality. To this end, this paper proposes a privacy-enhanced mobile user authentication method based on motion sensors, which mainly includes: (1) constructing a smart-contract-based private chain and federated learning to improve the data-collection efficiency of mobile user authentication, reduce the probability of the model being bypassed by attackers, and reduce both the overhead of centralized data processing and the risk of privacy leakage; (2) using certificateless encryption for device authentication, ensuring the credibility of the client nodes participating in the computation; and (3) combining Variational Mode Decomposition (VMD) and Long Short-Term Memory (LSTM) to analyze and model the motion-sensor data of mobile devices, improving the model's authentication accuracy. Experimental results on a real-environment dataset of 1,513 people show that the proposed method can effectively resist poisoning attacks while ensuring the accuracy and efficiency of mobile user authentication.
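The abstract does not describe the preprocessing of the motion-sensor stream, and the VMD+LSTM pipeline itself is not reproduced here. As a sketch of one common first step in sensor-based authentication (an assumption, not the paper's stated method), the snippet below windows a 1-D accelerometer stream and computes per-window summary statistics.

```python
import statistics

def window_features(samples, width, step):
    """Split a 1-D sensor stream into (possibly overlapping) windows and
    compute per-window mean and population standard deviation, a minimal
    feature set often fed to sequence models such as an LSTM."""
    feats = []
    for start in range(0, len(samples) - width + 1, step):
        win = samples[start:start + width]
        feats.append((statistics.fmean(win), statistics.pstdev(win)))
    return feats
```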
Software engineering has been taught at many institutions as an individual course for many years. Recently, many higher-education institutions have begun to offer a BSc degree in Software Engineering. Software engineers are required, especially at small enterprises, to play many roles, sometimes simultaneously. Besides technical and managerial skills, software engineers need additional intellectual skills such as domain-specific abstract thinking. Therefore, a software engineering curriculum should help students build and improve these skills to meet labor-market needs. This study explores the perceptions of software engineering students of the influence of learning software modeling and design on their domain-specific abstract thinking, as well as the role of the course project in improving it. The results show that most of the surveyed students believe that learning and practicing modeling and design concepts contributes to their ability to think abstractly about a specific domain. However, this finding is influenced by the students' incomplete comprehension of some modeling and design aspects (e.g., generalization). We believe such aspects should be introduced at early levels of the software engineering curriculum, which will improve students' ability to think abstractly about a specific domain.
Recommendation services have become an essential and popular research topic. Social data such as reviews play an important role in product recommendation, and deep learning approaches have improved the capture of user and product information from short text. However, previous approaches do not fairly and efficiently incorporate users' preferences and product characteristics. The proposed Hybrid Deep Collaborative Filtering (HDCF) model combines deep learning capabilities and deep interaction modeling with high performance for true recommendations. To overcome the cold-start problem, a new overall rating is generated by aggregating a Deep Multivariate Rating, DMR (votes, likes, stars, and sentiment scores of reviews), from different external data sources, because different sites assign different rating scores to the same product, making it hard for users to judge whether a product is truly popular. The HDCF model consists of four major modules: User Product Attention, Deep Collaborative Filtering, Neural Sentiment Classifier, and Deep Multivariate Rating (UPA-DCF + NSC + DMR). Experimental results on the IMDb, Yelp2013, and Yelp2014 datasets demonstrate that the model outperforms the state of the art for true top-n product recommendation, increasing the accuracy, confidence, and trust of recommendation services.
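The aggregation of ratings from sites with different scales can be sketched as a normalize-then-average step. The scales, the equal weighting, and the 0-to-5 output range below are illustrative assumptions; the paper's DMR module is not specified at this level of detail.

```python
def normalize(value, lo, hi):
    """Map a site-specific score onto [0, 1]."""
    return (value - lo) / (hi - lo)

def overall_rating(sources):
    """Aggregate heterogeneous scores, each given as (value, scale_min,
    scale_max), into one rating on [0, 5]. Equal weighting is an
    illustrative assumption, not the paper's DMR formula."""
    unit = [normalize(v, lo, hi) for v, lo, hi in sources]
    return 5.0 * sum(unit) / len(unit)

# Stars on a 1-10 site, stars on a 1-5 site, and a sentiment score in [-1, 1]
rating = overall_rating([(8, 1, 10), (4, 1, 5), (0.5, -1.0, 1.0)])
```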
The Internet of Vehicles (IoV) is a networking paradigm for the intercommunication of vehicles over a network. In a dynamic network, one of the key challenges in IoV is managing traffic as the number of vehicles grows, so as to avoid congestion; optimal path selection for routing traffic between origin and destination is therefore vital. This research proposes a realistic strategy to reduce traffic-management service response time by enabling real-time content distribution in IoV systems using heterogeneous network access. The work makes novel use of the Ant Colony Optimization (ACO) algorithm and formulates the path-planning optimization problem as an Integer Linear Program (ILP), integrating a future-estimation metric that predicts vehicle arrivals while searching for optimal routes. Considering the mobile nature of IoV, fuzzy logic is used for congestion-level estimation alongside ACO to determine the optimal path. The results indicate that the suggested scheme outperforms existing state-of-the-art methods by identifying the shortest and most cost-effective path, which strongly supports its use in applications with stringent Quality of Service (QoS) requirements for vehicles.
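A minimal version of the ACO path-selection step can be sketched as follows. The pheromone parameters, the inverse-cost heuristic, and the toy graph are assumptions; the paper additionally formulates an ILP and couples ACO with fuzzy congestion estimation, neither of which is reproduced here.

```python
import random

def aco_shortest_path(graph, src, dst, n_ants=50, evaporation=0.5, seed=0):
    """Minimal Ant Colony Optimization sketch for path selection.

    `graph` maps node -> {neighbor: edge_cost}. Each ant walks from src to
    dst, choosing the next hop with probability proportional to
    pheromone * (1 / edge_cost); the best path found so far is reinforced.
    """
    rng = random.Random(seed)
    pher = {u: {v: 1.0 for v in nbrs} for u, nbrs in graph.items()}
    best_path, best_cost = None, float("inf")
    for _ in range(n_ants):
        node, path, cost = src, [src], 0.0
        while node != dst:
            choices = [v for v in graph[node] if v not in path]
            if not choices:                 # dead end: abandon this ant
                cost = float("inf")
                break
            weights = [pher[node][v] / graph[node][v] for v in choices]
            nxt = rng.choices(choices, weights=weights)[0]
            cost += graph[node][nxt]
            path.append(nxt)
            node = nxt
        if cost < best_cost:
            best_path, best_cost = path, cost
        # Evaporate everywhere, then deposit along the best path so far.
        for u in pher:
            for v in pher[u]:
                pher[u][v] *= evaporation
        if best_path:
            for u, v in zip(best_path, best_path[1:]):
                pher[u][v] += 1.0 / best_cost
    return best_path, best_cost
```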
The detection of rice leaf disease is significant because, as an agricultural country and rice exporter, Pakistan needs to raise production and lower the risk of disease. In this era of rapid globalization, information technology use has increased, and a sensing system is needed to detect rice diseases using Artificial Intelligence (AI), which is being adopted across the medical and plant sciences to measure the accuracy of detection while lowering disease risk. A Deep Neural Network (DNN) can help detect disease on a rice leaf, as DNNs are considered a state-of-the-art solution for image detection using sensing nodes. This paper adopts a Deep Convolutional Neural Network (Deep CNN), a class of deep-learning neural networks widely used for image recognition, to increase the effectiveness of the proposed method. A dataset of images with three main leaf diseases was selected for training and testing the proposed model. After image acquisition and preprocessing, the Deep CNN model was trained to detect and classify three rice diseases (brown spot, bacterial blight, and blast). The proposed model achieved 98.3% accuracy in comparison with similar state-of-the-art techniques.
Speech signals play an essential role in communication and provide an efficient way to exchange information between humans and machines. Speech Emotion Recognition (SER) is a critical source for human evaluation, applicable in many real-world settings such as healthcare, call centers, robotics, safety, and virtual reality. This work develops a novel TCN-based emotion recognition system that uses a spatial-temporal convolution network over speech signals to recognize the speaker's emotional state. The authors designed a Temporal Convolutional Network (TCN) core block to capture long-term dependencies in speech signals and then feed these temporal cues to a dense network that fuses the spatial features and aggregates global information for final classification. The proposed network automatically extracts valid sequential cues from speech signals and performs better than state-of-the-art (SOTA) and traditional machine-learning algorithms. The final unweighted accuracies of 80.84% and 92.31% on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) and Berlin Emotional Database (EMO-DB) datasets indicate the robustness and efficiency of the designed model.
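The core operation of a TCN block, a causal dilated 1-D convolution, can be sketched directly; it shows how each output depends only on past samples, which is what lets the network capture long-term dependencies without leaking future information. The weights and dilation below are illustrative.

```python
def causal_dilated_conv(x, w, dilation):
    """One causal dilated 1-D convolution, the core operation of a TCN block.

    Each output y[t] sees only x[t], x[t - d], x[t - 2d], ... (zero-padded
    on the left), so no future samples influence the output at time t.
    """
    y = []
    for t in range(len(x)):
        acc = 0.0
        for k, wk in enumerate(w):
            i = t - k * dilation
            acc += wk * x[i] if i >= 0 else 0.0
        y.append(acc)
    return y
```

Stacking such layers with growing dilations (1, 2, 4, ...) is what gives a TCN its exponentially growing receptive field.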
The N-1 criterion is a critical factor for ensuring the reliable and resilient operation of electric power distribution networks. However, the increasing complexity of distribution networks and the associated growth in data size have created a significant challenge for distribution network planners. To address this issue, we propose a fast N-1 verification procedure for urban distribution networks that combines CIM file data analysis with MILP-based mathematical modeling. We develop a mathematical model of the distribution network from CIM data and transfer it into a MILP. We also take into account the behavior of medium-voltage distribution networks after a line failure and select the feeder section at the exit of each substation with a high load rate to improve the efficiency of the N-1 analysis. We validate the approach through a series of case studies and demonstrate its scalability and superiority over traditional N-1 analysis and heuristic optimization algorithms. By enabling online N-1 analysis, the approach significantly improves the work efficiency of distribution network planners, providing a valuable tool that enhances the accuracy and efficiency of their N-1 analyses and contributes to more resilient and reliable electric power distribution networks.
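The N-1 idea itself, independent of the CIM/MILP machinery, can be sketched as an exhaustive screening loop: remove each feeder in turn and check whether its designated backup can absorb the transferred load. The two-feeder data structure and the fixed tie mapping are simplifying assumptions, not the paper's model.

```python
def n1_check(feeders, ties):
    """Exhaustive N-1 screening (illustrative only, not the paper's MILP).

    `feeders` maps name -> (load, capacity); `ties` maps a failed feeder to
    the backup feeder that picks up its load. A contingency passes if the
    backup can carry its own load plus the transferred load.
    Returns the list of feeders whose failure violates the N-1 criterion.
    """
    failures = []
    for lost, (load, _cap) in feeders.items():
        backup = ties[lost]
        b_load, b_cap = feeders[backup]
        if b_load + load > b_cap:
            failures.append(lost)
    return failures
```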
Social media networks are becoming essential to our daily activities, and this deep involvement in our lives brings many issues. Cyberbullying is one such issue, a global crisis affecting victims and society as a whole that stems from a misunderstanding of freedom of speech. In this work, we propose a methodology for detecting such behaviors (bullying, harassment, and hate-related texts) using supervised machine-learning algorithms (SVM, Naïve Bayes, logistic regression, and random forest) and for predicting a topic associated with these texts using unsupervised natural language processing, namely latent Dirichlet allocation. We use accuracy, precision, recall, and F1 score to assess the classifiers. Results show that logistic regression, support vector machine, random forest, and Naïve Bayes achieve 95%, 94.97%, 94.66%, and 93.1% accuracy, respectively.
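One of the four classifiers, Naïve Bayes, is simple enough to sketch from scratch. The snippet below implements multinomial Naïve Bayes with Laplace smoothing on token lists; the toy training documents are invented, and the study's actual feature pipeline is not reproduced.

```python
import math
from collections import Counter

def train_nb(docs):
    """Train multinomial Naive Bayes; `docs` is a list of
    (token_list, label) pairs."""
    labels = Counter(lbl for _, lbl in docs)
    words = {lbl: Counter() for lbl in labels}
    vocab = set()
    for toks, lbl in docs:
        words[lbl].update(toks)
        vocab.update(toks)
    return labels, words, vocab, len(docs)

def predict_nb(model, toks):
    """Pick the label maximizing log P(label) + sum log P(token | label),
    with add-one (Laplace) smoothing over the vocabulary."""
    labels, words, vocab, n = model
    best, best_lp = None, -math.inf
    for lbl, cnt in labels.items():
        lp = math.log(cnt / n)
        total = sum(words[lbl].values()) + len(vocab)
        for t in toks:
            lp += math.log((words[lbl][t] + 1) / total)
        if lp > best_lp:
            best, best_lp = lbl, lp
    return best
```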
The quality of pharmaceutical products plays a crucial role in healthcare systems such as hospitals for better patient services, and drug supply chain management requires approaches that account for uncertainty and risk. This study presents a comprehensive multi-objective mathematical model that considers the uncertainties and potential reserves in the supply of medicine. The proposed model includes three general objective functions: it minimizes total production costs, including the costs of transportation, maintenance, breakdown, collection, and waste disposal, and it maximizes the quality of potential storage. The results show that the proposed method solves the model with high quality and optimizes the drug supply chain for the proposed example. We identify three important risks and uncertainties in drug supply planning: the indefinite duration of the licensing process, the risk of a forced brand change, and indefinite repayment levels that lead to varied demand diversification. Comparison with other multi-objective optimization methods in the literature also shows the better performance of the proposed model. Implementing our model, instead of using over-storage to estimate the volume of active drug ingredients as is done in today's industry, yields a significant cost reduction.
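One standard way to trade off competing objectives like these is weighted-sum scalarization over candidate plans. The candidate (cost, quality) pairs and the equal weights below are invented for illustration; the paper's full multi-objective formulation is far richer than this sketch.

```python
def best_plan(plans, w_cost, w_quality):
    """Score candidate supply plans, given as (total_cost, storage_quality)
    pairs, by scalarizing the two competing objectives: minimize cost,
    maximize quality. Each objective is min-max normalized across the
    candidates; assumes at least two distinct values per objective.
    """
    costs = [c for c, _ in plans]
    quals = [q for _, q in plans]
    c_lo, c_hi = min(costs), max(costs)
    q_lo, q_hi = min(quals), max(quals)

    def score(c, q):
        c_n = (c - c_lo) / (c_hi - c_lo)   # normalized cost: lower is better
        q_n = (q - q_lo) / (q_hi - q_lo)   # normalized quality: higher is better
        return w_quality * q_n - w_cost * c_n

    return max(plans, key=lambda p: score(*p))
```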
Advanced integrated circuits are widely used in many settings, including the Internet of Things and wireless communication, but their manufacturing process is unreliable, so cryptographic chips must be rigorously tested. Because scan testing provides high test coverage, it is applied to the testing of cryptographic integrated circuits. However, while providing good controllability and observability, it also gives attackers a backdoor for stealing keys. This paper puts forward a novel protection scheme to resist scan-based attacks: the responses generated by a strong physical unclonable function circuit are first used to solidify fuse-antifuse structures in a non-linear shift register (NLSR), and the scan input code is then determined by the configuration of the fuse-antifuse structures and the connections between the NLSR cells and the scan cells. If the key is right, the chip can be tested normally; otherwise, the data in the scan chain cannot propagate normally, and illegal users cannot derive the desired scan data. The proposed technique not only enhances the security of cryptographic chips but also incurs acceptable overhead.
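The lock-and-key behavior can be sketched at a high level: the entered scan input code is compared against the PUF-derived secret, and a mismatch yields scrambled rather than true scan data. The XOR scrambling below merely stands in for the NLSR/fuse-antifuse circuit, which is hardware and cannot be reproduced meaningfully in software.

```python
def scan_read(stored_key, entered_key, scan_chain):
    """Behavioral sketch of a locked scan chain (illustration only).

    With the correct PUF-derived key, the true scan-chain bits are returned;
    with a wrong key, the data cannot propagate normally, modeled here by
    flipping every bit so nothing useful leaks.
    """
    if entered_key == stored_key:
        return list(scan_chain)
    return [b ^ 1 for b in scan_chain]
```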
The overgrowth of weeds alongside the primary crop reduces crop production. Conventional solutions like hand weeding are labor-intensive, costly, and time-consuming, so farmers have used herbicides; herbicide application is effective but raises environmental and health concerns. Hence, Precision Agriculture (PA) suggests variable spraying of herbicides so that herbicide chemicals do not affect the primary plants. Motivated by this gap, we propose a Deep Learning (DL) based model for detecting eggplant (brinjal) weed. The key objective of this study is to detect plant and non-plant (weed) parts in crop images; with object detection, the precise location of weeds in images can be obtained. The dataset was collected manually from a private farm in Gandhinagar, Gujarat, India. The proposed model combines classification and object detection: a Convolutional Neural Network (CNN) classifies weed and non-weed images, and DL models are then applied for object detection. We compared DL models based on accuracy, memory usage, and Intersection over Union (IoU), using ResNet-18, YOLOv3, CenterNet, and Faster R-CNN. CenterNet outperforms all other models in accuracy, at 88%, while YOLOv3 is the least memory-intensive, using 4.78 GB to evaluate the data.
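Of the comparison metrics, Intersection over Union is fully determined by its definition and can be shown exactly for axis-aligned boxes:

```python
def iou(box_a, box_b):
    """Intersection over Union for axis-aligned boxes (x1, y1, x2, y2),
    the standard overlap metric used to compare object detectors."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # intersection width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # intersection height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0
```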
This paper presents a new approach to managing software requirement elicitation with a high level of analysis based on domain-ontology techniques. We establish a mapping between user scenarios, structured requirements, and domain-ontology techniques to improve attributes such as requirement consistency and completeness and to eliminate duplicate requirements, reducing the risk of overrunning time and budget. One of the main targets of requirements engineering is to develop a requirements document of high quality, so we propose a user interface that collects all vital information about the project directly from the end user and the requirements engineer; the proposal then generates an ontology based on semantic relations and rules. Requirements engineering tries to keep requirements clear, consistent, and up to date throughout a project's life cycle. The prototype maps requirement scenarios onto ontology elements so that they can be interpreted semantically. Its general aims are to guarantee the identification of requirements and to improve the quality of the Software Requirements Specification (SRS) by resolving incomplete and conflicting information in the requirements specification.
The medical community has growing concern about lung cancer analysis. Manual segmentation of lung cancers by medical experts is time-consuming and needs to be automated. The objective of this study is to diagnose lung tumors at an early stage, using deep-learning techniques, to extend patients' lives. A Computer-Aided Diagnostic (CAD) system aids in the diagnosis and shortens the time necessary to detect a tumor. The application of Deep Neural Networks (DNN) has also been shown to be an excellent and effective method in classification and segmentation tasks. This research aims to separate lung cancers from Magnetic Resonance Imaging (MRI) images with threshold segmentation. The Honey hook process categorizes lung cancer based on characteristics retrieved using several classifiers. Considering this principle, the work presents a solution for image compression utilizing a Deep Wave Auto-Encoder (DWAE). The combination of the two approaches significantly reduces the overall size of the feature set required for any future classification performed using a DNN. The proposed DWAE-DNN image classifier was applied to a lung imaging dataset with a Radial Basis Function (RBF) classifier. The study reported promising results with an accuracy of 97.34%, whereas using a Decision Tree (DT) classifier gives an accuracy of 94.24%. The proposed approach (DWAE-DNN) classifies the images, as either malignant or normal, with an accuracy of 98.67%. Beyond accuracy, the work also uses benchmark measures such as specificity, sensitivity, and precision to evaluate the efficiency of the network. The investigation found that the DT classifier provides the maximum performance in the DWAE-DNN, based on the network's performance on image testing, as shown by the data acquired from the categorizers themselves.
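The benchmark measures named above, sensitivity, specificity, and precision, follow directly from the binary confusion matrix:

```python
def diagnostic_metrics(y_true, y_pred):
    """Sensitivity, specificity, and precision from binary labels
    (1 = malignant, 0 = normal), the benchmark measures used alongside
    accuracy to evaluate a diagnostic classifier."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "sensitivity": tp / (tp + fn),   # recall on malignant cases
        "specificity": tn / (tn + fp),   # recall on normal cases
        "precision": tp / (tp + fp),
    }
```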
Twenty samples of endothelia removed from normal corneas and from corneas after penetrating keratoplasty (0.5, 1, 2, and 3 months postoperatively) were observed by scanning electron microscopy. Photographs of the endothelia at the graft-host junction were analyzed by a computer-assisted image analysis system; the morphometric indexes examined included cell area, perimeter, density, figure coefficient, long axis, and coefficient of variation of the area, among others. The results showed that the morphology and density of the endothelial cells changed obviously after the operation and improved slowly but progressively with time, although some differences still existed at 3 months postoperatively. Using these techniques, the experiment confirmed and enriched the theories of corneal endothelial wound healing, revealing new characteristics of endothelial wound healing following penetrating keratoplasty.
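Two of the listed morphometric indexes can be computed directly from the measurements. The figure coefficient is assumed here to be the standard circularity 4*pi*A/P^2 (the paper does not state its formula), and the coefficient of variation is the standard deviation of the cell areas over their mean.

```python
import math
import statistics

def figure_coefficient(area, perimeter):
    """Circularity 4*pi*A/P**2: 1.0 for a perfect circle, smaller for
    irregular cells (an assumed definition of the 'figure coefficient')."""
    return 4.0 * math.pi * area / perimeter ** 2

def coefficient_of_variation(areas):
    """Relative spread of cell areas: population std dev over mean; a
    higher value indicates more polymegethism (variation in cell size)."""
    return statistics.pstdev(areas) / statistics.fmean(areas)
```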
In this study, the hourly directions of eight banking stocks in Borsa Istanbul were predicted using linear-based, deep-learning (LSTM), and ensemble-learning (LightGBM) models. These models were trained with four different feature sets, and their performance was evaluated in terms of accuracy and F-measure. The first experiments used each stock's own features directly as model inputs, while the second experiments used stock features reduced through Variational AutoEncoders (VAE). In the last experiments, to grasp the effects of the other banking stocks on individual stock performance, the features of the other stocks were also given as inputs to the models. The other stocks' features were combined with both the own (allstock_own) and the VAE-reduced (allstock_VAE) stock features, and the expanded feature sets were reduced by Recursive Feature Elimination. The highest success rate, 0.685, was obtained with allstock_own and the LSTM-with-attention model, while the combination of allstock_VAE and the LSTM-with-attention model achieved an accuracy of 0.675. Although the classification results with both feature types were close, allstock_VAE achieved them using nearly 16.67% fewer features than allstock_own. Overall, the models trained with allstock_own and allstock_VAE achieved higher accuracy rates than those using individual stock features, and the results obtained with the VAE-reduced features were similar to those obtained with the own stock features.
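The Recursive Feature Elimination step can be sketched as greedy backward elimination: repeatedly drop the feature whose removal hurts a scoring function the least. The scoring function below is a stand-in; the study's models and feature sets are not reproduced.

```python
def recursive_feature_elimination(features, score_fn, keep):
    """Greedy backward elimination sketch: while more than `keep` features
    remain, drop the one whose removal leaves the highest `score_fn` value
    on the surviving subset. `score_fn` stands in for a model-based score."""
    selected = list(features)
    while len(selected) > keep:
        worst = max(
            selected,
            key=lambda f: score_fn([g for g in selected if g != f]),
        )
        selected.remove(worst)
    return selected
```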
Coronavirus disease 2019 (COVID-19) has been termed a "pandemic disease" that has infected many people and caused many deaths on a nearly unprecedented level. As more people are infected each day, it continues to pose a serious threat to humanity worldwide, and healthcare systems around the world face a shortage of medical space such as wards and sickbeds. In most cases, healthy people experience tolerable symptoms if they are infected; in other cases, however, patients may suffer severe symptoms and require treatment in an intensive care unit. Hospitals should therefore identify patients at high risk of death and treat them first. A number of models have been developed for mortality prediction, but they lack interpretability and generalization. To address these issues, we propose a COVID-19 mortality prediction model that can provide new insights. We identified blood factors that affect the prediction of COVID-19 mortality, focusing on dependency reduction using partial correlation and mutual information. Next, we used the Class-Attribute Interdependency Maximization (CAIM) algorithm to bin continuous values. We then used Jensen-Shannon divergence (JSD) and Bayesian posterior probability to create less redundant and more accurate rules, yielding a ruleset with its own posterior probability. The extracted rules take the form "if antecedent then result, posterior probability (θ)"; if a sample matches the extracted rules, the result is positive. The average AUC score was 96.77% for the validation dataset and the F1-score was 92.8% for the test data. Compared with the results of previous studies, the model shows good classification performance, generalization, and interpretability.
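The Jensen-Shannon divergence used to score candidate rules is fully specified by its definition and can be computed in a few lines (natural-log version shown):

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence (natural log); terms with p_i = 0
    contribute nothing by convention."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    """Jensen-Shannon divergence between two discrete distributions: the
    symmetric, bounded smoothing of KL via the mixture m = (p + q) / 2."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Because m is strictly positive wherever p or q is, jsd is always finite, which makes it better suited than raw KL for comparing sparse rule distributions.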
Magnetic Resonance Imaging (MRI) is a noninvasive, nonradioactive, and meticulous diagnostic modality in the field of medical imaging. However, the efficiency of MR image reconstruction is limited by bulky image sets and slow processing. Therefore, to obtain a high-quality reconstructed image, we present a sparse-aware noise removal technique that uses a convolutional neural network (SANR_CNN) to eliminate noise and improve MR image reconstruction quality. The proposed denoising technique adopts a fast CNN architecture that aids in training larger datasets with improved quality, and the SARN algorithm is used to build a dictionary-learning technique for denoising large image datasets. The proposed SANR_CNN model also preserves image details and edges during reconstruction. An experiment comparing SANR_CNN with existing models in terms of peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and mean squared error (MSE) showed that SANR_CNN achieves better PSNR, SSIM, and MSE results than the other noise removal techniques. The proposed architecture also allows these denoised medical images to be transmitted through a secured IoT architecture.
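The evaluation metrics MSE and PSNR follow directly from their definitions; the sketch below assumes 8-bit pixels (peak value 255) and flat pixel lists.

```python
import math

def mse(a, b):
    """Mean squared error between two equal-size images (flat pixel lists)."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the denoised image is
    closer to the reference. `peak` assumes 8-bit pixel depth."""
    err = mse(a, b)
    return math.inf if err == 0 else 10.0 * math.log10(peak ** 2 / err)
```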
Funding: Wenzhou Key Scientific and Technological Projects (No. ZG2020031); Wenzhou Polytechnic Research Projects (No. WZY2021002); Key R&D Projects in Zhejiang Province (No. 2021C01117); Major Program of the Natural Science Foundation of Zhejiang Province (LD22F020002); the Cloud Security Key Technology Research Laboratory; and the Researchers Supporting Project Number (RSP2023R509), King Saud University, Riyadh, Saudi Arabia.
Abstract: With the development of hardware devices and the upgrading of smartphones, a large number of users save privacy-related information on mobile devices, mainly smartphones, which places higher demands on the protection of mobile users' private information. At present, mobile user authentication methods based on human-computer interaction have been extensively studied due to their advantages of high precision and non-perception, but shortcomings remain, such as low data collection efficiency, untrustworthy participating nodes, and a lack of practicability. To this end, this paper proposes a privacy-enhanced mobile user authentication method with motion sensors, which mainly includes: (1) constructing a smart contract-based private chain and federated learning to improve the data collection efficiency of mobile user authentication, reduce the probability of the model being bypassed by attackers, and reduce both the overhead of centralized data processing and the risk of privacy leakage; (2) using certificateless encryption to authenticate the device, ensuring the credibility of the client nodes participating in the computation; and (3) combining Variational Mode Decomposition (VMD) and Long Short-Term Memory (LSTM) to analyze and model the motion sensor data of mobile devices, improving authentication accuracy. Experimental results on a real-environment dataset of 1513 people show that the proposed method can effectively resist poisoning attacks while ensuring the accuracy and efficiency of mobile user authentication.
Abstract: Software engineering has been taught at many institutions as an individual course for many years. Recently, many higher education institutions have begun to offer a BSc degree in Software Engineering. Software engineers are required, especially at small enterprises, to play many roles, sometimes simultaneously. Besides technical and managerial skills, software engineers should have additional intellectual skills such as domain-specific abstract thinking. Therefore, a software engineering curriculum should help students build and improve the skills needed to meet labor market needs. This study aims to explore the perceptions of software engineering students on the influence of learning software modeling and design on their domain-specific abstract thinking. We also explore the role of the course project in improving their domain-specific abstract thinking. The study results show that most of the surveyed students believe that learning and practicing modeling and design concepts contribute to their ability to think abstractly about a specific domain. However, this finding is influenced by the students' incomplete comprehension of some modeling and design aspects (e.g., generalization). We believe that such aspects should be introduced to students at early levels of the software engineering curriculum, which will certainly improve their ability to think abstractly about a specific domain.
Abstract: Recommendation services are an essential and hot research topic nowadays. Social data such as reviews play an important role in product recommendation. Deep learning approaches have achieved improvements by capturing user and product information from short text. However, such previously used approaches do not fairly and efficiently incorporate users' preferences and product characteristics. The proposed novel Hybrid Deep Collaborative Filtering (HDCF) model combines deep learning capabilities and deep interaction modeling with high performance for true recommendations. To overcome the cold-start problem, a new overall rating is generated by aggregating the Deep Multivariate Rating (DMR: votes, likes, stars, and sentiment scores of reviews) from different external data sources, because different sites assign different rating scores to the same product, which confuses users trying to decide whether a product is truly popular. The proposed HDCF model consists of four major modules, User Product Attention, Deep Collaborative Filtering, Neural Sentiment Classifier, and Deep Multivariate Rating (UPA-DCF+NSC+DMR), to solve the addressed problems. Experimental results demonstrate that the novel model outperforms the state of the art on the IMDb, Yelp2013, and Yelp2014 datasets for true top-n product recommendation, increasing the accuracy, confidence, and trust of recommendation services.
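The multi-source aggregation step can be sketched as follows. This is a minimal illustration of the idea only, assuming equal weights and simple min-max scaling; the paper's actual DMR formula, field names, and weighting are not specified here.

```python
# Hypothetical sketch of aggregating votes, likes, stars (1-5), and review
# sentiment (-1..1) from several sites into one overall rating in [0, 1].

def normalize(value, lo, hi):
    return (value - lo) / (hi - lo) if hi > lo else 0.0

def overall_rating(sources, weights=(0.25, 0.25, 0.25, 0.25)):
    """sources: list of dicts holding one site's raw signals for a product."""
    totals = []
    for s in sources:
        parts = (
            normalize(s["votes"], 0, s["max_votes"]),
            normalize(s["likes"], 0, s["max_likes"]),
            normalize(s["stars"], 1, 5),
            normalize(s["sentiment"], -1, 1),
        )
        totals.append(sum(w * p for w, p in zip(weights, parts)))
    return sum(totals) / len(totals)  # average over external sources

site_a = {"votes": 800, "max_votes": 1000, "likes": 90, "max_likes": 100,
          "stars": 4.0, "sentiment": 0.6}
site_b = {"votes": 300, "max_votes": 1000, "likes": 40, "max_likes": 100,
          "stars": 3.0, "sentiment": 0.0}
rating = overall_rating([site_a, site_b])
```

Putting every signal on a common scale before averaging is what lets ratings from sites with different scoring conventions be compared at all.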
Funding: Supported by the "Human Resources Program in Energy Technology" of the Korea Institute of Energy Technology Evaluation and Planning (KETEP), with financial resources granted by the Ministry of Trade, Industry & Energy, Republic of Korea (No. 20204010600090).
Abstract: The Internet of Vehicles (IoV) is a networking paradigm for the intercommunication of vehicles over a network. In a dynamic network, one of the key challenges in IoV is managing traffic as the number of vehicles grows, to avoid congestion. Therefore, optimal path selection to route traffic between origin and destination is vital. This research proposes a realistic strategy to reduce traffic management service response time by enabling real-time content distribution in IoV systems using heterogeneous network access. First, this work proposes a novel use of the Ant Colony Optimization (ACO) algorithm and formulates the path planning optimization problem as an Integer Linear Program (ILP). It integrates a future-estimation metric that predicts the future arrivals of vehicles while searching for optimal routes. Considering the mobile nature of IoV, fuzzy logic is used for congestion level estimation along with ACO to determine the optimal path. The model results indicate that the suggested scheme outperforms existing state-of-the-art methods by identifying the shortest and most cost-effective path. Thus, this work strongly supports its use in applications with stringent Quality of Service (QoS) requirements for vehicles.
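A bare-bones ACO path search illustrates the pheromone mechanism at the heart of the scheme. This is a toy sketch on a hypothetical road graph; the paper's full method additionally involves the ILP formulation, arrival prediction, and fuzzy congestion estimation, all omitted here.

```python
import random

def aco_route(graph, src, dst, ants=20, iters=30, evap=0.5, seed=0):
    """Minimal ant colony: ants walk src->dst, choosing edges with probability
    proportional to pheromone * (1 / edge cost); the best tour is reinforced."""
    rng = random.Random(seed)
    tau = {(n, m): 1.0 for n in graph for m in graph[n]}  # initial pheromone
    best_path, best_cost = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            path, node, cost = [src], src, 0.0
            while node != dst and len(path) <= len(graph):
                choices = [m for m in graph[node] if m not in path]
                if not choices:
                    break
                weights = [tau[(node, m)] / graph[node][m] for m in choices]
                nxt = rng.choices(choices, weights=weights)[0]
                cost += graph[node][nxt]
                path.append(nxt)
                node = nxt
            if node == dst and cost < best_cost:
                best_path, best_cost = path, cost
        for edge in tau:              # evaporate old pheromone...
            tau[edge] *= (1 - evap)
        if best_path:                 # ...then reinforce the best tour so far
            for a, b in zip(best_path, best_path[1:]):
                tau[(a, b)] += 1.0 / best_cost

    return best_path, best_cost

# Toy network: O->A->C costs 4, O->B->C costs 6.
roads = {"O": {"A": 2, "B": 5}, "A": {"C": 2}, "B": {"C": 1}, "C": {}}
path, cost = aco_route(roads, "O", "C")
```

On this tiny graph the colony converges to the cheaper O-A-C route; in the paper's setting the edge costs would themselves reflect the fuzzy congestion estimates.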
Funding: Funded by the University of Haripur, KP, Pakistan, Researchers Supporting Project Number (PKURFL2324L33).
Abstract: The detection of rice leaf disease is significant because, as an agricultural country and rice exporter, Pakistan needs to advance production and lower the risk of disease. In this era of rapid globalization, the reach of information technology has grown. A sensing system is needed to detect rice diseases using Artificial Intelligence (AI), which is being adopted across the medical and plant sciences to assess the accuracy of results and detection while lowering the risk of disease. A Deep Neural Network (DNN) can help detect disease present on a rice leaf, as DNNs are considered a state-of-the-art solution for image detection using sensing nodes. Further, the adoption of a mixed-method approach with a Deep Convolutional Neural Network (Deep CNN) increased the effectiveness of the proposed method. Deep CNNs, a class of deep-learning neural networks, are used for image recognition and are popular and widely used in that field. A dataset of images with three main leaf diseases was selected for training and testing the proposed model. After image acquisition and preprocessing, the Deep CNN model was trained to detect and classify three rice diseases (brown spot, bacterial blight, and blast disease). The proposed model achieved 98.3% accuracy in comparison with similar state-of-the-art techniques.
Abstract: Speech signals play an essential role in communication and provide an efficient way to exchange information between humans and machines. Speech Emotion Recognition (SER) is one of the critical sources for human evaluation and is applicable in many real-world applications such as healthcare, call centers, robotics, safety, and virtual reality. This work developed a novel TCN-based emotion recognition system that uses speech signals and a spatial-temporal convolution network to recognize the speaker's emotional state. The authors designed a Temporal Convolutional Network (TCN) core block to recognize long-term dependencies in speech signals and then feed these temporal cues to a dense network to fuse the spatial features and recognize global information for final classification. The proposed network extracts valid sequential cues automatically from speech signals and performed better than state-of-the-art (SOTA) and traditional machine learning algorithms. Results of the proposed method show a high recognition rate compared with SOTA methods. The final unweighted accuracies of 80.84% and 92.31% on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) and Berlin Emotional Database (EMO-DB) datasets indicate the robustness and efficiency of the designed model.
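The building block of a TCN is the dilated causal 1-D convolution: the output at time t depends only on inputs at or before t, and stacking layers with growing dilation widens the receptive field exponentially. A stdlib sketch of one such layer (the weights and dilations below are illustrative, not trained values):

```python
def causal_dilated_conv(x, weights, dilation):
    """y[t] = sum_k weights[k] * x[t - k*dilation], zero-padded on the left,
    so no output ever depends on future samples (causality)."""
    out = []
    for t in range(len(x)):
        acc = 0.0
        for k, w in enumerate(weights):
            idx = t - k * dilation
            if idx >= 0:  # indices before the start are treated as zero
                acc += w * x[idx]
        out.append(acc)
    return out

signal = [1.0, 2.0, 3.0, 4.0, 5.0]
layer1 = causal_dilated_conv(signal, [1.0, 1.0], dilation=1)  # looks 1 step back
layer2 = causal_dilated_conv(layer1, [1.0, 1.0], dilation=2)  # widens the field
```

After two layers, each output already summarizes up to four input samples, which is how a TCN captures long-term dependencies without recurrence.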
Funding: Supported by the National Natural Science Foundation of China (52207105).
Abstract: The N-1 criterion is a critical factor for ensuring the reliable and resilient operation of electric power distribution networks. However, the increasing complexity of distribution networks and the associated growth in data size have created a significant challenge for distribution network planners. To address this issue, we propose a fast N-1 verification procedure for urban distribution networks that combines CIM file data analysis with MILP-based mathematical modeling. Our proposed method leverages the principles of CIM file analysis for distribution network N-1 analysis. We develop a mathematical model of distribution networks based on CIM data and translate it into an MILP. We also take into account the characteristics of medium-voltage distribution networks after a line failure and select the feeder section at the exit of each substation with a high load rate to improve the efficiency of N-1 analysis. We validate our approach through a series of case studies and demonstrate its scalability and superiority over traditional N-1 analysis and heuristic optimization algorithms. By enabling online N-1 analysis, our approach significantly improves the work efficiency of distribution network planners. In summary, by leveraging the advantages of CIM file data analysis and MILP-based mathematical modeling, the proposed method gives planners a valuable tool for more accurate and efficient N-1 analyses and contributes to more resilient and reliable electric power distribution networks.
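The essence of N-1 verification, independent of the CIM/MILP machinery, is: remove each line in turn and check that every load can still be served. The sketch below checks only connectivity on a toy topology (hypothetical bus names); real verification adds capacity, voltage, and load-transfer constraints, which the paper's MILP captures.

```python
from collections import deque

def reachable(lines, sources):
    """Breadth-first search over an undirected line list from the substations."""
    adj = {}
    for a, b in lines:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, queue = set(sources), deque(sources)
    while queue:
        n = queue.popleft()
        for m in adj.get(n, ()):
            if m not in seen:
                seen.add(m)
                queue.append(m)
    return seen

def n_minus_1_ok(lines, sources, loads):
    """True iff every single-line outage leaves all load buses energized."""
    for i in range(len(lines)):
        remaining = lines[:i] + lines[i + 1:]
        if not set(loads) <= reachable(remaining, sources):
            return False  # some load is stranded after this outage
    return True

ring = [("S1", "A"), ("A", "B"), ("B", "S2")]  # loads fed from both ends
radial = [("S1", "A"), ("A", "B")]             # single feed: fails N-1
ok_ring = n_minus_1_ok(ring, ["S1", "S2"], ["A", "B"])
ok_radial = n_minus_1_ok(radial, ["S1"], ["A", "B"])
```

The ring topology passes because each load can be back-fed from the second substation; the radial feeder fails as soon as its only line is removed.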
Abstract: Social media networks are becoming essential to our daily activities, and this deep involvement in our lives brings many issues. Cyberbullying is one such issue, a global crisis affecting victims and society as a whole that stems from misunderstandings regarding freedom of speech. In this work, we propose a methodology for detecting such behaviors (bullying, harassment, and hate-related texts) using supervised machine learning algorithms (SVM, Naïve Bayes, logistic regression, and random forest) and for predicting a topic associated with these texts using unsupervised natural language processing, namely latent Dirichlet allocation. In addition, we use accuracy, precision, recall, and F1 score to assess the classifiers. Results show that logistic regression, support vector machine, random forest, and Naïve Bayes achieve 95%, 94.97%, 94.66%, and 93.1% accuracy, respectively.
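The four evaluation metrics named above are simple functions of the confusion matrix. A from-scratch sketch for a binary bullying/non-bullying labelling (the toy labels below are illustrative, not the paper's data):

```python
def classification_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 from binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
acc, prec, rec, f1 = classification_metrics(y_true, y_pred)
```

Reporting all four matters here because bullying texts are typically a minority class, so accuracy alone can hide a classifier that rarely flags them.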
Abstract: The quality of pharmaceutical products plays a crucial role in healthcare systems such as hospitals and in providing better patient services. Drug supply chain management requires approaches that account for uncertainty and risk. This study presents a comprehensive multi-objective mathematical model considering the uncertainties and potential reserves in the supply of medicine. The proposed model includes three general objective functions that minimize total production costs, including the costs of transportation, maintenance, breakdown, collection, and disposal of waste. The model also maximizes the quality of potential storage. The results show that the proposed method solves the model well and optimizes the drug supply chain for the given example. We identified three important risks and uncertainties in drug supply planning: the indefinite duration of the licensing process, the risk of a forced brand change, and indefinite repayment levels that lead to varied demand diversification. Comparisons with other multi-objective optimization methods in the literature also show better performance of the proposed model. Implementing our model instead of relying on over-storage to estimate the volume of active drug elements, as seen in today's industry, yields a significant cost reduction.
Funding: This work was funded by the Researchers Supporting Project No. (RSP2022R509), King Saud University, Riyadh, Saudi Arabia; the Natural Science Foundation of Hunan Province under Grant Nos. 2020JJ5604, 2022JJ2029, and 2020JJ4622; and the National Natural Science Foundation of China under Grant No. 62172058.
Abstract: Advanced integrated circuits are widely used in various settings, including the Internet of Things and wireless communication. However, their manufacturing process is not fully reliable, so cryptographic chips must be rigorously tested. Because scan testing provides high test coverage, it is applied to the testing of cryptographic integrated circuits. However, while providing good controllability and observability, it also provides attackers with a backdoor to steal keys. In this paper, a novel protection scheme is put forward to resist scan-based attacks, in which we first use the responses generated by a strong physical unclonable function circuit to configure fuse-antifuse structures in a non-linear shift register (NLSR), then determine the scan input code according to the configuration of the fuse-antifuse structures and the connections between the NLSR cells and the scan cells. If the key is right, the chip can be tested normally; otherwise, the data in the scan chain cannot be propagated normally, and it is impossible for illegal users to derive the desired scan data. The proposed technique not only enhances the security of cryptographic chips but also incurs acceptable overhead.
Funding: Funded by the Researchers Supporting Project Number (RSP2023R509), King Saud University, Riyadh, Saudi Arabia.
Abstract: The overgrowth of weeds growing alongside the primary crop reduces crop production. Conventional solutions like hand weeding are labor-intensive, costly, and time-consuming, so farmers have used herbicides. Herbicide application is effective but raises environmental and health concerns. Hence, Precision Agriculture (PA) suggests variable spraying of herbicides so that herbicide chemicals do not affect the primary plants. Motivated by this gap, we propose a Deep Learning (DL) based model for detecting eggplant (brinjal) weeds in this paper. The key objective of this study is to detect plant and non-plant (weed) parts in crop images. With the help of object detection, the precise location of weeds in images can be determined. The dataset was collected manually from a private farm in Gandhinagar, Gujarat, India. The proposed model applies a combined approach of classification and object detection. A Convolutional Neural Network (CNN) model is used to classify weed and non-weed images; further DL models are applied for object detection. We compared DL models based on accuracy, memory usage, and Intersection over Union (IoU). ResNet-18, YOLOv3, CenterNet, and Faster R-CNN are used in the proposed work. CenterNet outperforms all other models in terms of accuracy, at 88%. Compared to the other models, YOLOv3 is the least memory-intensive, using 4.78 GB to evaluate the data.
Abstract: This paper presents a new approach to managing software requirement elicitation with a high level of analysis based on domain ontology techniques. We establish a mapping between user scenarios, structured requirements, and domain ontology techniques to improve attributes such as requirement consistency and completeness and to eliminate duplicate requirements, reducing the risk of time and budget overruns. One of the main targets of requirements engineering is to develop a requirements document of high quality, so we propose a user interface to collect all vital information about the project directly from the end user and the requirements engineer; after that, the proposal generates an ontology based on semantic relations and rules. Requirements engineering tries to keep requirements consistent, clear, and up to date throughout a project's life cycle. This prototype maps requirement scenarios onto ontology elements for semantic interpretation. The general aims of our prototype are to guarantee the identification of requirements and to improve the quality of the Software Requirements Specification (SRS) by resolving incomplete and conflicting information in the requirements specification.
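The duplicate- and conflict-detection idea can be illustrated by mapping requirements onto (subject, relation, object) triples and comparing them. This is a deliberately naive sketch with hypothetical parsing rules; the paper's prototype uses a real ontology with richer semantic relations and rules.

```python
# Pairs of modalities treated as contradictory (an illustrative rule set).
NEGATIONS = {("shall", "shall not"), ("shall not", "shall")}

def to_triple(req):
    """Very naive parse: '<actor> shall [not] <action>' -> (actor, modality, action)."""
    words = req.lower().split()
    actor = words[0]
    if words[1:3] == ["shall", "not"]:
        return (actor, "shall not", " ".join(words[3:]))
    return (actor, "shall", " ".join(words[2:]))

def analyze(requirements):
    """Flag requirements whose triples duplicate or contradict earlier ones."""
    seen, duplicates, conflicts = {}, [], []
    for req in requirements:
        subject, relation, obj = to_triple(req)
        key = (subject, obj)
        if key in seen:
            if seen[key] == relation:
                duplicates.append(req)
            elif (seen[key], relation) in NEGATIONS:
                conflicts.append(req)
        else:
            seen[key] = relation
    return duplicates, conflicts

reqs = ["System shall log failed logins",
        "System shall log failed logins",       # duplicate
        "System shall not log failed logins"]   # contradiction
dups, confs = analyze(reqs)
```

Once requirements live as triples, consistency and completeness checks reduce to queries over the ontology rather than manual document review.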
Funding: The Researchers Supporting Project Number (RSP2023R509), King Saud University, Riyadh, Saudi Arabia. This work was supported in part by the Higher Education Sprout Project of the Ministry of Education (MOE) and the National Science and Technology Council, Taiwan (109-2628-E-224-001-MY3), and in part by Isuzu Optics Corporation. Dr. Shih-Yu Chen is the corresponding author.
Abstract: Lung cancer analysis is of growing concern to the medical community. Manual segmentation of lung tumors by medical experts is time-consuming and needs to be automated. The objective of this research is to diagnose lung tumors at an early stage, using deep learning techniques to extend patients' lives. A Computer-Aided Diagnostic (CAD) system aids in diagnosis and shortens the time necessary to detect a tumor. The application of Deep Neural Networks (DNN) has also proven to be an excellent and effective method for classification and segmentation tasks. This research aims to segment lung tumors from Magnetic Resonance Imaging (MRI) images with threshold segmentation. The Honey hook process categorizes lung cancer based on characteristics retrieved using several classifiers. Following this principle, the work presents a solution for image compression utilizing a Deep Wave Auto-Encoder (DWAE). The combination of the two approaches significantly reduces the overall size of the feature set required for any subsequent classification performed using a DNN. The proposed DWAE-DNN image classifier was applied to a lung imaging dataset with a Radial Basis Function (RBF) classifier. The study reported promising results with an accuracy of 97.34%, whereas the Decision Tree (DT) classifier gave an accuracy of 94.24%. The proposed approach (DWAE-DNN) classifies the images as either malignant or normal with an accuracy of 98.67%. Beyond accuracy, the work also uses benchmark measures such as specificity, sensitivity, and precision to evaluate the efficiency of the network. The investigation found that the DT classifier provides the maximum performance in the DWAE-DNN, based on the network's performance in image testing as shown by the data acquired from the classifiers themselves.
Abstract: Twenty samples of endothelia removed from normal corneas and after penetrating keratoplasty (0.5, 1, 2, and 3 months postoperatively) were observed by scanning electron microscopy. Photographs of the endothelia at the graft-host junction were analyzed by a computer-assisted image analysis system, and the morphometric indexes examined were cell area, perimeter, density, figure coefficient, long axis, coefficient of variation of the area, and others. Results showed that the morphology and density of the endothelial cells changed markedly after the operation and improved slowly but progressively with time, although some differences still existed at 3 months postoperatively. Using these new techniques, the experiment confirmed and enriched the theories on corneal endothelial wound healing, revealing some new characteristics of endothelial wound healing following penetrating keratoplasty.
Abstract: In this study, the hourly directions of eight banking stocks on Borsa Istanbul were predicted using linear-based, deep-learning (LSTM), and ensemble-learning (LightGBM) models. These models were trained with four different feature sets, and their performance was evaluated in terms of accuracy and F-measure metrics. While the first experiments directly used each stock's own features as model inputs, the second experiments used stock features reduced through Variational AutoEncoders (VAE). In the last experiments, in order to grasp the effects of the other banking stocks on individual stock performance, features belonging to other stocks were also given as inputs to our models. Combining other stock features was done for both own (named allstock_own) and VAE-reduced (named allstock_VAE) stock features, and the expanded feature sets were reduced by Recursive Feature Elimination. While the highest success rate increased to 0.685 with allstock_own and the LSTM-with-attention model, the combination of allstock_VAE and the LSTM-with-attention model obtained an accuracy of 0.675. Although the classification results achieved with both feature types were close, allstock_VAE achieved these results using nearly 16.67% fewer features than allstock_own. When all experimental results were examined, it was found that the models trained with allstock_own and allstock_VAE achieved higher accuracy rates than those using individual stock features. It was also concluded that the results obtained with VAE-reduced stock features were similar to those obtained with the stocks' own features.
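Recursive Feature Elimination repeatedly drops the feature a model finds least useful until the desired count remains. The sketch below substitutes a simple absolute-Pearson-correlation scorer for the real model-based ranking; the feature names and data are hypothetical, and the study's actual RFE is driven by trained models on VAE-reduced features.

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def eliminate(features, target, keep):
    """features: name -> column; drop the weakest |correlation| until `keep` remain."""
    features = dict(features)
    while len(features) > keep:
        weakest = min(features, key=lambda f: abs(pearson(features[f], target)))
        del features[weakest]
    return sorted(features)

target = [1.0, 2.0, 3.0, 4.0]
cols = {
    "trend":   [1.1, 2.0, 2.9, 4.2],  # strongly correlated with target
    "noise":   [3.0, 1.0, 4.0, 1.0],  # weak: eliminated first
    "inverse": [4.0, 3.0, 2.0, 1.0],  # strong negative correlation: kept
}
selected = eliminate(cols, target, keep=2)
```

Ranking by the absolute value matters: a perfectly anti-correlated feature is as predictive as a perfectly correlated one, which is why "inverse" survives here.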
Funding: This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2021-2020-0-01602) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).
Abstract: Coronavirus disease 2019 (COVID-19) has been termed a "pandemic disease" that has infected many people and caused many deaths on a nearly unprecedented level. As more people are infected each day, it continues to pose a serious threat to humanity worldwide. As a result, healthcare systems around the world are facing a shortage of medical space such as wards and sickbeds. In most cases, healthy people experience tolerable symptoms if they are infected. In other cases, however, patients may suffer severe symptoms and require treatment in an intensive care unit. Thus, hospitals should identify patients with a high risk of death and treat them first. To solve this problem, a number of models have been developed for mortality prediction, but they lack interpretability and generalization. To prepare a model that addresses these issues, we proposed a COVID-19 mortality prediction model that can provide new insights. We identified blood factors that can affect the prediction of COVID-19 mortality. In particular, we focused on dependency reduction using partial correlation and mutual information. Next, we used the Class-Attribute Interdependency Maximization (CAIM) algorithm to bin continuous values. Then, we used Jensen-Shannon Divergence (JSD) and Bayesian posterior probability to create less redundant and more accurate rules. The result is a ruleset in which each rule carries its own posterior probability, in the form "if antecedent then result, posterior probability (θ)". If a sample matches an extracted rule, the result is positive. The average AUC score was 96.77% on the validation dataset, and the F1-score was 92.8% on the test data. Compared to the results of previous studies, the model shows good performance in terms of classification, generalization, and interpretability.
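Jensen-Shannon divergence, used above to prune redundant rules, is the symmetrized, bounded cousin of Kullback-Leibler divergence. A stdlib implementation for discrete distributions (natural-log base; the toy distributions are illustrative):

```python
import math

def kl(p, q):
    """Kullback-Leibler divergence KL(p || q); terms with p_i = 0 contribute 0."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def jsd(p, q):
    """JSD(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), with m the midpoint."""
    m = [(pi + qi) / 2 for pi, qi in zip(p, q)]
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

same = jsd([0.5, 0.5], [0.5, 0.5])    # identical distributions -> 0
apart = jsd([1.0, 0.0], [0.0, 1.0])   # disjoint support -> ln 2 (the maximum)
```

Because JSD is symmetric and bounded by ln 2, it gives a well-behaved score for how differently two candidate rules distribute over the outcome classes, so near-duplicate rules can be merged.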
Funding: This research was financially supported in part by the Ministry of Trade, Industry and Energy (MOTIE) and the Korea Institute for Advancement of Technology (KIAT) through the International Cooperative R&D program (Project No. P0016038), and in part by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2021-2016-0-00312) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation).
Abstract: Magnetic Resonance Imaging (MRI) is a noninvasive, nonradioactive, and precise diagnostic modality in the field of medical imaging. However, the efficiency of MR image reconstruction is affected by bulky image sets and slow processing. Therefore, to obtain a high-quality reconstructed image, we present a sparsity-aware noise removal technique that uses a convolutional neural network (SANR_CNN) to eliminate noise and improve MR image reconstruction quality. The proposed denoising technique adopts a fast CNN architecture that aids in training larger datasets with improved quality, and the SANR algorithm is used to build a dictionary-learning technique for denoising large image datasets. The proposed SANR_CNN model also preserves the details and edges in the image during reconstruction. An experiment was conducted to compare the performance of SANR_CNN with a few existing models with respect to peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and mean squared error (MSE). The proposed SANR_CNN model achieved better PSNR, SSIM, and MSE than the other noise removal techniques. The proposed architecture also provides transmission of these denoised medical images through a secured IoT architecture.