The act of transmitting photos via the Internet has become a routine and significant activity. Enhancing the security measures that safeguard these images against counterfeiting and modification is a critical domain that can still be improved. This study presents a system that employs a range of approaches and algorithms to ensure the security of transmitted venous images. The main goal of this work is to create a highly effective system for compressing individual biometrics in order to improve the overall accuracy and security of digital photographs by means of image compression. This paper introduces a content-based image authentication mechanism that is suitable for use across an untrusted network and resistant to data loss during transmission. By employing scale attributes and a key-dependent parametric Long Short-Term Memory (LSTM) network, it is feasible to improve the resilience of digital signatures against image deterioration and strengthen their security against malicious actions. Furthermore, biometric data has been successfully transmitted in compressed form over a wireless network. For applications involving the transmission and sharing of images across a network, the suggested technique exploits the scalability of a structural digital signature to attain a satisfactory equilibrium between security and image transfer. An effective adaptive compression strategy was created to lengthen the overall lifetime of the network by distributing the processing workload. This scheme ensures a large reduction in computational and energy requirements while minimizing image quality loss. The approach employs multi-scale characteristics to improve the resistance of signatures against image deterioration. The proposed system attained an accuracy of 98% under Gaussian noise and a rotation accuracy surpassing 99%.
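As a loose illustration of the content-based authentication idea above (multi-scale features combined with a key-dependent signature), the following Python sketch derives coarse multi-scale block statistics from an image and signs them with a keyed HMAC. The key-dependent LSTM of the actual system is replaced here by a plain HMAC, and all names, scales, and quantization values are assumptions for illustration only.

```python
import hmac, hashlib
import numpy as np

def multiscale_features(img: np.ndarray, scales=(2, 4, 8)) -> np.ndarray:
    """Coarse block-mean features at several scales; robust to mild degradation."""
    feats = []
    for s in scales:
        h, w = (img.shape[0] // s) * s, (img.shape[1] // s) * s
        blocks = img[:h, :w].reshape(h // s, s, w // s, s).mean(axis=(1, 3))
        feats.append(blocks.ravel())
    return np.concatenate(feats)

def sign_image(img: np.ndarray, key: bytes) -> bytes:
    """Quantize features so tiny pixel noise rarely flips the signature, then HMAC."""
    q = np.round(multiscale_features(img) / 16).astype(np.int32)
    return hmac.new(key, q.tobytes(), hashlib.sha256).digest()

def verify_image(img: np.ndarray, key: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign_image(img, key), signature)

# Usage: sign on the sender side, verify after (possibly lossy) transmission.
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
sig = sign_image(image, key=b"shared-secret")
print(verify_image(image, b"shared-secret", sig))  # True; quantization adds mild noise tolerance
```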
Detecting hate speech automatically in social media forensics has emerged as a highly challenging task due to the complex nature of language used on such platforms. Several methods currently exist for classifying hate speech, but they still suffer from ambiguity when differentiating between hateful and offensive content, and they also lack accuracy. The work suggested in this paper uses a combination of the Whale Optimization Algorithm (WOA) and Particle Swarm Optimization (PSO) to adjust the weights of two Multi-Layer Perceptrons (MLPs) for classification with neutrosophic sets. During training, the WOA is employed to explore the search space and determine a strong set of weights; the PSO algorithm then fine-tunes these weights to optimize the performance of the MLPs. Additionally, two separate MLP models are employed: one is dedicated to predicting degrees of truth membership, while the other predicts degrees of false membership. The difference between these memberships quantifies uncertainty, indicating the degree of indeterminacy in the predictions. The experimental results indicate the superior performance of our model compared to previous work when evaluated on the Davidson dataset.
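A minimal sketch of the weight-optimization idea follows, assuming a tiny two-layer MLP whose flattened weight vector is refined by a standard PSO loop; the WOA exploration stage is represented here only by the randomly scattered initial swarm, and the data, network size, and hyperparameters are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 8))                      # toy features
y = (X[:, 0] + X[:, 1] > 0).astype(float)         # toy binary labels
D_IN, D_HID = 8, 6
N_W = D_IN * D_HID + D_HID                         # weights of a tiny MLP

def mlp_forward(w: np.ndarray, X: np.ndarray) -> np.ndarray:
    W1 = w[:D_IN * D_HID].reshape(D_IN, D_HID)
    w2 = w[D_IN * D_HID:]
    h = np.tanh(X @ W1)
    return 1.0 / (1.0 + np.exp(-(h @ w2)))         # sigmoid output

def loss(w: np.ndarray) -> float:
    p = np.clip(mlp_forward(w, X), 1e-7, 1 - 1e-7)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

# PSO fine-tuning: each particle is one full weight vector.
n_particles, iters = 30, 100
pos = rng.normal(scale=0.5, size=(n_particles, N_W))   # "exploration" init
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_f = np.array([loss(p) for p in pos])
gbest = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random((2, n_particles, N_W))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    f = np.array([loss(p) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    gbest = pbest[pbest_f.argmin()].copy()

print(f"final loss: {pbest_f.min():.4f}")
```

In the paper's two-MLP setup, one such optimization would run for the truth-membership network and one for the false-membership network, with indeterminacy read off from the gap between their outputs.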
The number of blogs and other forms of opinionated online content has increased dramatically in recent years. Many fields, including academia and national security, place an emphasis on automated detection of political article orientation. Political articles (especially in the Arab world) differ from other articles in their subjectivity: the author's beliefs and political affiliation can strongly influence a political article. With categories representing the main political ideologies, this problem may be treated as a subset of text categorization (classification). In general, the performance of machine learning models for text classification is sensitive to hyperparameter settings. Furthermore, the feature vector used to represent a document must capture, to some extent, the complex semantics of natural language. To this end, this paper presents an intelligent system for detecting political Arabic article orientation that adapts the categorical boosting (CatBoost) method combined with a multi-level feature concept. Extracting features at multiple levels can enhance the model's ability to discriminate between classes or patterns, since each level may capture different aspects of the input data and contribute to a more comprehensive representation. CatBoost, a robust and efficient gradient-boosting algorithm, is utilized to learn and predict the complex relationships between these features and the political orientation labels associated with the articles. A dataset of political Arabic texts collected from diverse sources, including postings and articles, is used to assess the suggested technique. The opinions fall into three subcategories: conservative, reform, and revolutionary. The results of this study demonstrate that, compared to other frequently used machine learning models for text classification, the CatBoost method using multi-level features performs better, with an accuracy of 98.14%.
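One plausible reading of the multi-level feature concept is to combine feature views extracted at different granularities. The sketch below, with a toy stand-in corpus and illustrative parameters, concatenates word-level and character-level TF-IDF views and feeds them to CatBoost; it is a schematic of the pipeline shape, not the paper's exact feature design.

```python
from catboost import CatBoostClassifier
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "the party defends traditional institutions",
    "the movement calls for gradual reform",
    "citizens demand sweeping change",
    "the old order must be preserved",
]
labels = ["conservative", "reform", "revolutionary", "conservative"]

# Two feature "levels": word-level and character n-gram TF-IDF.
word_vec = TfidfVectorizer(analyzer="word", ngram_range=(1, 2))
char_vec = TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4))
X = hstack([word_vec.fit_transform(docs), char_vec.fit_transform(docs)])

model = CatBoostClassifier(iterations=200, depth=6, verbose=False)
model.fit(X, labels)
print(model.predict(X))
```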
Electronic patient data brings many advantages, but also new difficulties. Deadlocks may delay procedures such as acquiring patient information. Distributed deadlock-resolution solutions introduce uncertainty due to inaccurate transaction properties, and soft computing-based solutions have been developed to address this challenge. Handling ambiguous, vague, incomplete, and inconsistent transaction attribute information in a single framework has received minimal attention. The work presented in this paper employs type-2 neutrosophic logic, an extension of type-1 neutrosophic logic, to handle uncertainty in real-time deadlock-resolving systems. The proposed method is structured to reflect multiple types of knowledge and relations among transaction features, including validation factor degree, slackness degree, and degree of deadline-missed transactions, based on the degrees of membership of truth, indeterminacy, and falsity. Here, the footprint of uncertainty (FOU) for truth, indeterminacy, and falsity represents the level of uncertainty in the value of a grade of membership. We employed a distributed real-time transaction processing simulator (DRTTPS) to conduct the simulations and ran experiments using the benchmark Pima Indians diabetes dataset (PIDD). The results show an increase in detection rate and a large drop in rollback rate when this new strategy is used. Type-2 neutrosophic-based resolution also outperforms the type-1 neutrosophic-based approach on the execution-ratio scale, with an improvement of 10% to 20% depending on the number of arrived transactions.
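To make the type-2 representation concrete: where type-1 neutrosophic logic assigns each membership a single number, type-2 leaves a footprint of uncertainty around it. A minimal sketch, assuming the FOU is modeled as a [lower, upper] interval per membership and using an invented defuzzification rule for victim selection:

```python
from dataclasses import dataclass

@dataclass
class Type2NeutrosophicGrade:
    """Each membership is an interval (the FOU), not a single fixed value."""
    truth: tuple[float, float]
    indeterminacy: tuple[float, float]
    falsity: tuple[float, float]

    def centers(self) -> tuple[float, float, float]:
        mid = lambda iv: (iv[0] + iv[1]) / 2.0
        return mid(self.truth), mid(self.indeterminacy), mid(self.falsity)

def abort_priority(g: Type2NeutrosophicGrade) -> float:
    """Illustrative defuzzified score: likelier deadlock victims score higher."""
    t, i, f = g.centers()
    return (t + (1 - f) + 0.5 * i) / 2.5

# A transaction judged likely to miss its deadline (high truth, low falsity).
txn = Type2NeutrosophicGrade(truth=(0.7, 0.9), indeterminacy=(0.1, 0.3), falsity=(0.0, 0.2))
print(f"abort priority: {abort_priority(txn):.3f}")
```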
Signature verification is regarded as the most beneficial behavioral-characteristic-based biometric feature in security and fraud protection. It is also a popular biometric authentication technology in forensic and commercial transactions due to its various advantages, including noninvasiveness, user-friendliness, and social and legal acceptability. According to the literature, extensive research has been conducted on signature verification systems in a variety of languages, including English, Hindi, Bangla, and Chinese. However, Arabic Offline Signature Verification (OSV) remains a challenging problem that has received comparatively little attention, because Arabic script is distinguished by changing letter shapes, diacritics, ligatures, and overlapping, making verification more difficult. Recently, signature verification systems have shown promising results in recognizing genuine and forged signatures; however, performance on skilled forgery detection is still unsatisfactory. Most existing methods require many learning samples to improve verification accuracy, which is a major drawback because the number of available signature samples is often limited in practical applications. This study addresses these issues by presenting an OSV system based on multi-feature fusion and discriminant feature selection using a genetic algorithm (GA). In contrast to existing methods, which use multiclass learning approaches, this study uses a one-class learning strategy to address the imbalanced signature data encountered in practical verification settings. The proposed approach is tested on three signature databases (SID): Arabic handwriting signatures, CEDAR (Center of Excellence for Document Analysis and Recognition), and UTSig (University of Tehran Persian Signature). Experimental results show that the proposed system outperforms existing systems in reducing the False Acceptance Rate (FAR), False Rejection Rate (FRR), and Equal Error Rate (EER), achieving a 5% improvement.
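The following is a compact sketch of GA-based discriminant feature selection in a one-class setting. The fitness function (separation between decision scores of genuine and forged samples under a one-class SVM trained only on genuine samples), the toy data, and all GA parameters are assumptions for illustration, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(7)
genuine = rng.normal(0.0, 1.0, size=(40, 20))       # toy fused feature vectors
forgery = rng.normal(1.5, 1.0, size=(40, 20))

def fitness(mask: np.ndarray) -> float:
    """Higher when the selected features separate genuine from forged signatures."""
    if mask.sum() == 0:
        return -1.0
    clf = OneClassSVM(nu=0.1).fit(genuine[:, mask])
    return float(clf.decision_function(genuine[:, mask]).mean()
                 - clf.decision_function(forgery[:, mask]).mean())

pop = rng.random((30, 20)) < 0.5                    # population of binary masks
for _ in range(40):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]         # truncation selection
    children = []
    for _ in range(len(pop)):
        a, b = parents[rng.integers(10, size=2)]
        cut = rng.integers(1, 20)
        child = np.concatenate([a[:cut], b[cut:]])  # one-point crossover
        child ^= rng.random(20) < 0.02              # bit-flip mutation
        children.append(child)
    pop = np.array(children)

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```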
When using healthcare data, it is crucial to weigh the advantages of data privacy against its possible drawbacks. Data from several sources must be combined for use in many data mining applications, and medical practitioners may use the results of association rule mining performed on this aggregated data to better personalize patient care and implement preventive measures. Historically, numerous heuristic (e.g., greedy search) and metaheuristic (e.g., evolutionary algorithm) techniques have been created for positive association rules in privacy-preserving data mining (PPDM). When it comes to connecting seemingly unrelated diseases and drugs, however, negative association rules may be more informative than their positive counterparts. It is well known that negative association rule mining produces a large number of uninteresting rules, which makes this a difficult problem to tackle. In this research, we offer an adaptive, privacy-respecting method for negative association rule mining in vertically partitioned healthcare datasets. The approach dynamically determines the transactions to be interrupted for information hiding, as opposed to predefining them. The study introduces a novel method, based on the Tabu-genetic optimization paradigm, for addressing the problem of negative association rules in healthcare data mining. Tabu search is advantageous because it removes a huge number of unnecessary rules and itemsets. Experiments using benchmark healthcare datasets show that the discussed scheme outperforms state-of-the-art solutions in decreasing side effects and data distortions, as measured by the hiding-failure indicator.
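A small sketch of the underlying sanitization idea follows: the support of a sensitive negative rule is pushed below the mining threshold by editing dynamically chosen transactions, with a tabu list preventing the same transaction from being revisited. The rule encoding, threshold, and victim-selection heuristic are illustrative, not the paper's Tabu-genetic procedure.

```python
# Transactions as sets of items; the sensitive negative rule is A => not-B,
# supported by transactions that contain A but lack B.
transactions = [
    {"A", "C"}, {"A", "D"}, {"A"}, {"A", "B"}, {"B", "C"}, {"A", "E"},
]
MIN_SUP = 0.4           # mining threshold the hidden rule must fall below
tabu: set[int] = set()  # indices we are not allowed to modify again

def support_A_not_B(txns) -> float:
    return sum("A" in t and "B" not in t for t in txns) / len(txns)

while support_A_not_B(transactions) >= MIN_SUP:
    # Dynamically pick a victim: the smallest supporting transaction not in tabu,
    # so each edit distorts as little other information as possible.
    candidates = [i for i, t in enumerate(transactions)
                  if "A" in t and "B" not in t and i not in tabu]
    victim = min(candidates, key=lambda i: len(transactions[i]))
    transactions[victim].add("B")   # breaks the "A without B" pattern
    tabu.add(victim)

print("hidden; support is now", round(support_A_not_B(transactions), 3))
```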
Routing is a key function in Wireless Sensor Networks (WSNs), since it facilitates data transfer to base stations, and routing attacks have the potential to destroy or degrade WSN functionality. A trustworthy routing system is therefore essential for routing security and WSN efficiency. Numerous methods have been implemented to build trust between routing nodes, including cryptographic methods and centralized routing. Nonetheless, the majority of routing techniques are impractical because properly identifying untrusted routing-node activity is difficult, and at present there is no effective way to avoid malicious node attacks. In response to these concerns, this paper proposes a trusted routing technique that combines blockchain infrastructure, deep neural networks, and Markov Decision Processes (MDPs) to improve the security and efficiency of WSN routing. To authenticate the transmission process, the suggested methodology uses a Proof of Authority (PoA) mechanism inside the blockchain network. The validation group required for proofing is chosen using a deep learning approach that prioritizes each node's characteristics. MDPs are then utilized to determine a suitable next hop: a forwarding node capable of securely transmitting messages. According to the test data, our routing system outperforms current routing algorithms in a scenario with 50% malicious nodes.
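To make the MDP step concrete, here is a tiny value-iteration sketch over a hypothetical hop graph in which each node carries a trust score; the topology, reward shape, and trust values are all invented for illustration.

```python
import numpy as np

# Hypothetical 4-node network: 0 is the source, 3 is the base station.
# neighbors[i] lists the next hops reachable from node i.
neighbors = {0: [1, 2], 1: [3], 2: [3], 3: []}
trust = np.array([1.0, 0.9, 0.3, 1.0])   # learned per-node trust scores
GAMMA, BASE_STATION = 0.9, 3

V = np.zeros(4)
for _ in range(50):                        # value iteration to convergence
    for node, hops in neighbors.items():
        if node == BASE_STATION or not hops:
            continue
        # Reward for choosing hop h: its trust score (untrusted hops are
        # penalized), plus the discounted value of continuing the route from h.
        V[node] = max(trust[h] + GAMMA * V[h] for h in hops)

def next_hop(node: int) -> int:
    return max(neighbors[node], key=lambda h: trust[h] + GAMMA * V[h])

print("next hop from 0:", next_hop(0))     # picks the trusted path via node 1
```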
Medical image segmentation has consistently been a significant research topic and a prominent goal, particularly in computer vision. Brain tumor research plays a major role in medical imaging applications by providing a tremendous amount of anatomical and functional knowledge that facilitates diagnosis and disease-therapy preparation. Because tumors often have complex structures, automated tumor segmentation and detection have become among the most demanded tools for radiologists and physicians, helping to prevent or minimize manual segmentation error. Many detection and segmentation methods exist, but all lack high accuracy. This paper's key contribution is the evaluation of machine learning techniques intended to reduce the effect of issues frequently found in brain tumor research, with particular attention to the challenges of level-set segmentation. The study uses the Population-based Artificial Bee Colony Clustering (P-ABCC) methodology to reliably collect initial contour points, which helps minimize the number of iterations and the segmentation errors of the level-set process. The proposed model computes cluster centroids (the ABC population) and uses a level-set approach to resolve contour differences, since brain tumors vary in form, structure, and volume. The suggested model comprises three major steps. First, pre-processing separates the brain from the rest of the head and improves contrast stretching. Second, P-ABCC is used to obtain tumor edges that serve as the initial contour for the MRI sequence. Third, level-set segmentation is used to detect tumor regions from all volume slices with fewer iterations. Results suggest improved model efficiency compared to state-of-the-art methods on both the BRATS 2019 and BRATS 2017 datasets. On BRATS 2019, Dice improvements of 0.03%, 0.03%, and 0.01% were achieved for Whole Tumor (WT), Tumor Core (TC), and Enhancing Tumor (ET), respectively. On BRATS 2017, precision for WT increased by 5.27%.
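The centroid-finding step can be illustrated with a stripped-down artificial-bee-colony search over pixel intensities. The within-cluster-distance fitness, colony size, scout rule, and omission of the onlooker phase are simplifying assumptions; in the paper these centroids would seed the initial level-set contour.

```python
import numpy as np

rng = np.random.default_rng(3)
pixels = np.concatenate([rng.normal(40, 8, 300), rng.normal(160, 10, 200)])
K, N_BEES, LIMIT = 2, 12, 10        # clusters, food sources, scout trigger

def fitness(c: np.ndarray) -> float:
    """Negative within-cluster distance: higher means better centroids."""
    return -np.abs(pixels[:, None] - c[None, :]).min(axis=1).sum()

sources = rng.uniform(pixels.min(), pixels.max(), size=(N_BEES, K))
trials = np.zeros(N_BEES, dtype=int)

for _ in range(100):
    for i in range(N_BEES):
        # Employed bee: perturb one centroid toward a random other source.
        j, d = rng.integers(N_BEES), rng.integers(K)
        cand = sources[i].copy()
        cand[d] += rng.uniform(-1, 1) * (sources[i, d] - sources[j, d])
        if fitness(cand) > fitness(sources[i]):     # greedy selection
            sources[i], trials[i] = cand, 0
        else:
            trials[i] += 1
        # Scout bee: abandon a stagnant source and re-initialize it.
        if trials[i] > LIMIT:
            sources[i] = rng.uniform(pixels.min(), pixels.max(), K)
            trials[i] = 0

best = sources[np.argmax([fitness(s) for s in sources])]
print("centroids:", np.sort(best).round(1))  # typically near the two intensity modes
```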
Communication between mobile hosts and database servers presents many problems in the Mobile Database System (MDS). It is harmed by a variety of causes, including handoff, inadequate capacity, frequent transaction updates, and repeated failures, all of which contribute to serious issues with the information system's consistency. Fault-tolerance techniques, however, allow devices to continue performing their functions in the event of a failure. The aim of this paper is to identify the optimal recovery approach from among the available state-of-the-art techniques in MDS by employing game theory. Several of the presented recovery protocols are chosen and evaluated in order to determine the most critical factors affecting the recovery mechanism, such as the number of processes, the time required to deliver messages, and the number of messages logged over time. Then, using the suggested payoff matrix, a game-theoretic strategy is applied to choose the optimal recovery technique for the specified environmental variables. The NS2 simulator was used to carry out the tests and apply the chosen recovery protocols. The experiments validate the proposed model's usefulness in comparison with other methods.
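A toy sketch of the selection step: score each candidate recovery protocol against the environment factors in a payoff matrix and pick the row that maximizes the weighted payoff. The protocols, factors, weights, and numbers below are invented for illustration.

```python
import numpy as np

protocols = ["checkpointing", "message-logging", "hybrid"]   # hypothetical rows
factors = ["processes", "delivery time", "logged messages"]  # payoff columns

# payoff[i, j]: how well protocol i performs on factor j (higher is better).
payoff = np.array([
    [0.8, 0.4, 0.9],
    [0.5, 0.9, 0.3],
    [0.7, 0.7, 0.6],
])

# Environment weights: here message-delivery time dominates.
weights = np.array([0.2, 0.6, 0.2])

scores = payoff @ weights
best = protocols[int(np.argmax(scores))]
print(dict(zip(protocols, scores.round(2))), "->", best)
```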
A robust smile recognition system could be widely used in many real-world applications. Classifying a facial smile in an unconstrained setting is difficult due to the wide variability of face images. In this paper, an adaptive model for smile-expression classification is suggested that integrates a fast feature-extraction algorithm with cascade classifiers. Our model takes advantage of the intrinsic association between face detection, smiles, and other facial features to alleviate over-fitting on the limited training set and to improve classification results. Features are extracted so as to exclude any unnecessary coefficients from the feature vector, thereby enhancing the discriminatory capacity of the extracted features and reducing the computational cost. Still, the main causes of error in learning are noise, bias, and variance, and ensembling helps to minimize these factors: combining multiple classifiers decreases variance, especially for unstable classifiers, and may produce a more reliable classification than a single classifier. However, a shortcoming of bagging, otherwise among the best ensemble classifiers, is its random selection: classification performance relies on chance to pick an appropriate subset of training items. The suggested model deals with this challenge by employing a modified form of bagging (error-based bootstrapping) when creating training sets. Experimental results for smile classification on the JAFFE, CK+, and CK+48 benchmark datasets show the feasibility of our proposed model.
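One way to realize error-based bootstrapping is sketched below: after each bagged learner is trained, the sampling weights of misclassified training items are increased so that later bootstrap samples focus on them. The reweighting factor, base learner, and synthetic data are assumptions, not the paper's exact scheme.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
rng = np.random.default_rng(0)
n = len(y)
weights = np.full(n, 1.0 / n)          # bootstrap sampling probabilities
ensemble = []

for _ in range(15):
    idx = rng.choice(n, size=n, replace=True, p=weights)   # weighted bootstrap
    clf = DecisionTreeClassifier(max_depth=4).fit(X[idx], y[idx])
    ensemble.append(clf)
    # Error-based update: misclassified items become likelier to be drawn.
    wrong = clf.predict(X) != y
    weights = np.where(wrong, weights * 1.5, weights)
    weights /= weights.sum()

def predict(X_new: np.ndarray) -> np.ndarray:
    votes = np.stack([clf.predict(X_new) for clf in ensemble])
    return (votes.mean(axis=0) > 0.5).astype(int)          # majority vote

print("train accuracy:", (predict(X) == y).mean())
```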
Signature verification involves vague situations in which a signature can resemble many reference samples or might differ because of handwriting variance. By presenting the features and similarity score of signatures from the matching algorithm as fuzzy sets and capturing degrees of membership, non-membership, and indeterminacy, a neutrosophic engine can contribute significantly to signature verification, addressing the uncertainties and ambiguities inherent in signatures. Type-1 neutrosophic logic, however, gives these membership functions fixed values, which cannot adequately capture the varying degrees of uncertainty in signature characteristics; type-1 neutrosophic representation is also unable to adjust to different degrees of uncertainty. The proposed work explores type-2 neutrosophic logic to enable additional flexibility and granularity in handling ambiguity, indeterminacy, and uncertainty, hence improving the accuracy of signature verification systems. Because type-2 neutrosophic logic allows the assessment of many sources of ambiguity and conflicting information, decision-making becomes more flexible. The experimental results demonstrate the potential benefits of a type-2 neutrosophic engine for signature verification: its superior handling of uncertainty and variability over type-1 ultimately yields more accurate False Rejection Rate (FRR) and False Acceptance Rate (FAR) results. In a comparative analysis on a benchmark dataset of handwritten signatures, the type-2 neutrosophic similarity measure achieves an accuracy of 98%, versus 95% for type-1.
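A sketch of an interval-valued (type-2 style) neutrosophic similarity between a questioned signature and a reference follows, assuming each match is described by interval grades for truth, indeterminacy, and falsity. The distance-based similarity formula and the threshold are one common textbook choice, not necessarily the paper's measure.

```python
# Each signature's match quality is an interval-valued neutrosophic triple:
# {"T": (lo, hi), "I": (lo, hi), "F": (lo, hi)}.
Triple = dict[str, tuple[float, float]]

def similarity(a: Triple, b: Triple) -> float:
    """1 minus the normalized Hamming distance over interval endpoints."""
    d = sum(abs(a[k][0] - b[k][0]) + abs(a[k][1] - b[k][1]) for k in "TIF")
    return 1.0 - d / 6.0

reference  = {"T": (0.8, 0.9), "I": (0.1, 0.2), "F": (0.0, 0.1)}
questioned = {"T": (0.7, 0.9), "I": (0.1, 0.3), "F": (0.1, 0.2)}
THRESHOLD = 0.9                     # illustrative accept/reject cut-off

score = similarity(reference, questioned)
print(f"similarity {score:.3f} ->", "accept" if score >= THRESHOLD else "reject")
```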