Abstract: With the development of large-scale text processing, the dimensionality of the text feature space has grown steadily, adding considerable difficulty to natural language processing. How to reduce this dimensionality has become a practical problem in the field. Here we present two clustering methods, concept association and concept abstraction, to achieve this goal. The first refers to keyword clustering based on the co-occurrence of ...
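The abstract is truncated, so the paper's own concept-association algorithm is not fully specified; as a rough illustration of keyword clustering by co-occurrence, the sketch below builds a document-level co-occurrence matrix over a toy corpus and groups keywords with similar co-occurrence profiles. The corpus, vocabulary, and cluster count are all invented for illustration.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Toy corpus: each document is a set of keywords (invented for illustration).
docs = [
    {"neural", "network", "training"},
    {"neural", "network", "inference"},
    {"stock", "market", "trading"},
    {"stock", "market", "index"},
]
vocab = sorted(set().union(*docs))
idx = {w: i for i, w in enumerate(vocab)}

# Co-occurrence matrix: C[i, j] = number of documents containing both words.
C = np.zeros((len(vocab), len(vocab)))
for d in docs:
    for a in d:
        for b in d:
            C[idx[a], idx[b]] += 1

# Cluster keywords whose co-occurrence profiles are similar; each cluster can
# then stand in for its member keywords as a single "concept" feature,
# shrinking the feature space.
labels = AgglomerativeClustering(n_clusters=2).fit_predict(C)
for w in vocab:
    print(w, labels[idx[w]])
```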
Funding: Supported by the National Natural Science Foundation of China under Grant Nos. U21A20515, 62172416, 52175493, U2003109, 61972459, and 62102414, and by the Youth Innovation Promotion Association of the Chinese Academy of Sciences (2022131).
Abstract: The popularity of online home design and floor plan customization has been steadily increasing. However, manually converting floor plan images from books or paper materials into electronic resources is challenging given the vast amount of historical data available. Leveraging neural networks to recognize and parse floor plans can significantly streamline this conversion. In this paper, we present a novel learning framework for automatically parsing floor plan images. Our key insight is that room-type text is common and crucial in floor plan images, as it identifies the important semantic information of the corresponding room; yet this clue is rarely considered in previous learning-based methods. We therefore propose the Row and Column network (RC-Net) for recognizing floor plan elements by integrating the text feature. Specifically, we add a text feature branch to the network that extracts text features corresponding to the room type to guide room-type predictions. More importantly, we formulate the Row and Column constraint module (RC constraint module) to share and constrain features across entire rows and columns of the feature maps, so that as far as possible only one type is predicted in each room, making the segmentation boundaries between rooms more regular and cleaner. Extensive experiments on three benchmark datasets validate that our framework substantially outperforms other state-of-the-art approaches in terms of FWIoU, mACC and mIoU.
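The abstract does not give the RC constraint module's exact formulation; the PyTorch sketch below illustrates only the general idea of sharing features along entire rows and columns, here by augmenting each pixel with the mean of its row and column before a 1x1 fusion convolution. The module name, the mean-pooling choice, and the fusion layer are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class RowColumnConstraint(nn.Module):
    """Share evidence along full rows and columns of a feature map."""
    def __init__(self, channels: int):
        super().__init__()
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, C, H, W)
        row = x.mean(dim=3, keepdim=True).expand_as(x)   # shared along each row
        col = x.mean(dim=2, keepdim=True).expand_as(x)   # shared along each column
        return self.fuse(torch.cat([x, row, col], dim=1))

feats = torch.randn(1, 64, 128, 128)      # stand-in backbone feature map
out = RowColumnConstraint(64)(feats)      # same spatial size, row/column-aware
print(out.shape)                          # torch.Size([1, 64, 128, 128])
```

Pixels belonging to one room tend to span contiguous rows and columns, so letting every pixel see row- and column-wide statistics pushes the classifier toward predicting a single type per room, which is the constraint the abstract describes.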
Funding: This project was supported under the Fundamental Research Grant Scheme (FRGS) FRGS/1/2019/ICT02/UKM/02/9, entitled "Convolution Neural Network Enhancement Based on Adaptive Convexity and Regularization Functions for Fake Video Analytics". The grant was received by Prof. Assis. Dr. S.N.H. Sheikh Abdullah, https://www.ukm.my/spifper/research_news/instrumentfunds.
Abstract: Text extraction from images using traditional techniques of image collection and pattern recognition with machine learning is time-consuming because of the number of features extracted from the images. Deep neural networks offer effective ways to extract text features from images using only a few techniques, together with the ability to train on large image datasets with significant results. This study proposes using dual Maxpooling and concatenated Convolutional Neural Network (CNN) layers with the ReLU activation function and an Optimized Leaky ReLU (OLReLU). The proposed method divides the word image into slices that contain characters, then passes them to deep learning layers to extract feature maps and reconstruct the predicted words. Bidirectional Long Short-Term Memory (BiLSTM) layers extract more compelling features and link the time sequence in the forward and backward directions during the training phase. The Connectionist Temporal Classification (CTC) function calculates the training and validation loss rates, and also decodes the extracted features to reconstruct characters and link them according to their time sequence. The model's performance is evaluated using training and validation loss errors on the MJSynth and IAM datasets. On IAM, the average loss error was 2.09% with the proposed dual Maxpooling and OLReLU. On MJSynth, the best validation loss rate shrank to 2.2% by applying concatenated CNN layers and ReLU.
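The pipeline described here (CNN feature maps over character slices, BiLSTM over the time sequence, CTC loss) follows the familiar CRNN pattern. Below is a minimal PyTorch sketch of that pattern; the layer sizes are placeholders, and plain ReLU with standard max pooling stands in for the paper's dual Maxpooling and OLReLU, whose definitions the abstract does not give.

```python
import torch
import torch.nn as nn

class CRNNSketch(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.cnn = nn.Sequential(                       # feature extractor
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.rnn = nn.LSTM(128 * 8, 256, bidirectional=True, batch_first=True)
        self.head = nn.Linear(512, num_classes)         # classes include CTC blank

    def forward(self, x):                  # x: (B, 1, 32, W) grayscale word image
        f = self.cnn(x)                    # (B, 128, 8, W/4)
        f = f.permute(0, 3, 1, 2).flatten(2)   # (B, W/4, 1024): one step per slice
        out, _ = self.rnn(f)               # forward + backward time links
        return self.head(out)              # per-timestep class logits

model = CRNNSketch(num_classes=37)         # e.g. 26 letters + 10 digits + blank
logits = model(torch.randn(4, 1, 32, 128))
log_probs = logits.log_softmax(2).permute(1, 0, 2)     # (T, B, C) for CTCLoss
targets = torch.randint(1, 37, (4, 6))                 # dummy label sequences
loss = nn.CTCLoss(blank=0)(log_probs, targets,
                           input_lengths=torch.full((4,), logits.size(1)),
                           target_lengths=torch.full((4,), 6))
```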
Funding: The author extends his appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under Grant Number (R.G.P.2/55/40/2019), received by Fahd N. Al-Wesabi. www.kku.edu.sa
Abstract: Digital text is the most common medium transferred via the internet for various purposes, and it is highly vulnerable to illegal tampering attacks during online transfer. Improving the security and authenticity of text transferred via the internet has therefore become one of the most difficult challenges researchers face today. Arabic text is more sensitive than other languages because of the Harakat, Arabic diacritics such as Kasra and Damma, where basic changes such as modifying diacritic arrangements can change the meaning of the text. In this paper, an intelligent hybrid solution is proposed that detects any tampering with Arabic text exchanged via the internet with high sensitivity. Natural language processing, entropy, and watermarking techniques are integrated in this method to improve the security and reliability of Arabic text without limitations on the nature or size of the text, or on the type or volume of tampering attack. The proposed scheme is implemented, simulated, and validated using four standard Arabic datasets of varying lengths under multiple random locations of insertion, reorder, and deletion attacks. The experimental and simulation results demonstrate the tampering-detection accuracy of the proposed scheme against all kinds of tampering attacks. Comparison results show that the proposed approach outperforms all baseline approaches in tampering-detection accuracy.
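The paper's hybrid of NLP, entropy, and watermarking is not spelled out in the abstract; the toy sketch below shows only the general zero-watermark idea such schemes build on: derive a signature from the text with diacritics included (so a single Haraka change is visible), store it at send time, and re-derive it on receipt to flag tampered windows. The character n-gram hashing is an assumption for illustration, not the authors' method.

```python
import hashlib

def signature(text: str, n: int = 4) -> list:
    """Hash every overlapping n-gram of the text, diacritics included."""
    grams = [text[i:i + n] for i in range(max(1, len(text) - n + 1))]
    return [hashlib.sha256(g.encode("utf-8")).hexdigest()[:8] for g in grams]

original = "كَتَبَ الطالبُ الدرسَ"   # Harakat are part of the signal
received = "كَتِبَ الطالبُ الدرسَ"   # one diacritic changed: Fatha -> Kasra

sig_sent, sig_recv = signature(original), signature(received)
tampered = [i for i, (a, b) in enumerate(zip(sig_sent, sig_recv)) if a != b]
print("tampered windows near character positions:", tampered)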
Funding: Project supported by the OMRON and SJTU Collaborative Foundation under the PVS project (2005.03~2005.10).
Abstract: This paper proposes a learning-based method for text detection and text segmentation in natural scene images. First, the input image is decomposed into multiple connected components (CCs) by the Niblack clustering algorithm. All CCs, text and non-text alike, are then verified on their text features by a two-stage classification module: most non-text CCs are discarded by an attentional cascade classifier, and the remaining CCs are further verified by an SVM. All accepted CCs are output to produce a text-only binary image. Experiments with many images of different scenes showed satisfactory performance of the proposed method.
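A sketch of the front end only, using scikit-image: Niblack local thresholding followed by connected-component extraction. The cascade classifier and SVM verification stages are the paper's contribution and are not reproduced here; the input path, window size, and area filter are placeholders.

```python
from skimage import io, measure
from skimage.color import rgb2gray
from skimage.filters import threshold_niblack

image = rgb2gray(io.imread("scene.png"))      # hypothetical input path
# Dark-on-light text: pixels below the local Niblack threshold become foreground.
binary = image < threshold_niblack(image, window_size=25, k=0.2)

labels = measure.label(binary, connectivity=2)
for region in measure.regionprops(labels):
    if region.area < 20:                      # drop speckle before classification
        continue
    # each surviving CC would be passed to the cascade classifier + SVM here
    print(region.bbox, region.area)
```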
Funding: Supported by a grant of the Ministry of Research, Innovation and Digitization, CNCS-UEFISCDI, Project Number PN-III-P4-PCE-2021-0334, within PNCDI III.
Abstract: The potential of text analytics is revealed by Machine Learning (ML) and Natural Language Processing (NLP) techniques. In this paper, we propose an NLP framework that is applied to multiple datasets to detect malicious Uniform Resource Locators (URLs). The framework comprises three categories of features, both ML and Deep Learning (DL) algorithms, and a ranking schema. We apply frequency- and prediction-based embeddings to extract features from text, such as the hash vectorizer, Term Frequency-Inverse Document Frequency (TF-IDF), and the word2vec predictors from Google (continuous bag of words and skip-gram). We further apply state-of-the-art methods such as GloVe to create vectorized features. Additionally, feature engineering specific to URL structure is deployed to detect scams and other threats. For framework assessment, four ranking indicators are weighted: computational time and performance measured as accuracy, F1 score, and type II error. For computational time, we propose a new metric, Feature Building Time (FBT), since cutting-edge feature builders (like doc2vec or GloVe) require more time. Applying the proposed assessment step, the skip-gram algorithm of word2vec surpasses the other feature builders in performance, and eXtreme Gradient Boosting (XGB) outperforms the other classifiers. With this setup, we attain an accuracy of 99.5% and an F1 score of 0.99.
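A minimal reproduction of one arm of such a framework: character n-gram TF-IDF features over URLs fed to an XGBoost classifier. The URLs and labels below are invented placeholders, and the word2vec/GloVe branches and FBT timing from the paper are omitted.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from xgboost import XGBClassifier

urls = ["http://example.com/login", "http://paypa1-secure.xyz/verify",
        "https://github.com/user/repo", "http://free-prizes.win/claim"]
labels = [0, 1, 0, 1]                 # 0 = benign, 1 = malicious (toy data)

# Character n-grams capture URL structure (odd TLDs, digit substitutions).
vec = TfidfVectorizer(analyzer="char", ngram_range=(2, 4))
X = vec.fit_transform(urls)

clf = XGBClassifier(n_estimators=50, eval_metric="logloss").fit(X, labels)
print(accuracy_score(labels, clf.predict(X)))   # in-sample fit on toy data
```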
Funding: This work is supported by the State Grid Shandong Electric Power Company Science and Technology Project Funding under Grant Nos. 520613180002 and 62061318C002, the Fundamental Research Funds for the Central Universities (Grant No. HIT.NSRIF.201714), the Weihai Science and Technology Development Program (2016DXGJMS15), and the Key Research and Development Program of Shandong Province (2017GGX90103).
Abstract: Natural scene recognition has important significance and value in the fields of image retrieval, autonomous navigation, human-computer interaction, and industrial automation. First, non-text content occupies a relatively high proportion of a natural scene image; second, natural scene images have cluttered backgrounds and complex lighting conditions, angles, fonts, and colors. Efficiently extracting text extremal regions from such complex and varied natural scene images therefore plays an important role in natural scene text recognition. In this paper, a Text extremum region Extraction algorithm based on Joint-Channels (TEJC) is proposed. On the one hand, it addresses the problem that the maximally stable extremal region (MSER) algorithm is only suitable for grayscale images and struggles with color images. On the other hand, it addresses the MSER algorithm's high complexity and low accuracy when extracting the most stable extremal regions. The proposed algorithm is tested and evaluated on the ICDAR dataset, and the experimental results show that the method is superior.
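Running OpenCV's MSER detector per color channel and pooling the candidate regions approximates the "joint channels" idea at a high level; the TEJC algorithm itself is more involved than this hedged sketch, and the input path is a placeholder.

```python
import cv2

image = cv2.imread("scene.jpg")               # hypothetical input path
mser = cv2.MSER_create()                      # default MSER parameters

candidates = []
for channel in cv2.split(image):              # B, G, R instead of gray only
    regions, boxes = mser.detectRegions(channel)
    if len(boxes):
        candidates.extend(boxes.tolist())     # (x, y, w, h) per extremal region

print(f"{len(candidates)} candidate text regions across 3 channels")
```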
Funding: This work was supported by the Sichuan Science and Technology Program (2021YFQ0003).
Abstract: Visual question answering (VQA) has attracted growing attention in computer vision and natural language processing. Scholars are committed to studying how to better integrate image features and text features to achieve better results in VQA tasks. Analyzing all features can cause information redundancy and a heavy computational burden, and attention mechanisms are a sensible way to address this; however, a single attention mechanism may attend to the features incompletely. This paper improves on such methods and proposes a hybrid attention mechanism that combines spatial attention and channel attention. Because an attention mechanism can cause loss of the original features, a small portion of the image features is added back as compensation. For the attention over text features, a self-attention mechanism is introduced, strengthening the internal structural features of sentences to improve the overall model. The results show that the attention mechanism and feature compensation add 6.1% accuracy to the multimodal low-rank bilinear pooling network.
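A hedged PyTorch sketch of the hybrid-attention idea: channel attention and spatial attention applied in sequence, with a fraction of the raw features added back as compensation. The module structure, reduction ratio, and compensation weight are illustrative assumptions, not the paper's exact design.

```python
import torch
import torch.nn as nn

class HybridAttention(nn.Module):
    def __init__(self, channels: int, compensation: float = 0.1):
        super().__init__()
        self.comp = compensation
        self.channel = nn.Sequential(                 # squeeze-and-excite style
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // 4, 1), nn.ReLU(),
            nn.Conv2d(channels // 4, channels, 1), nn.Sigmoid(),
        )
        self.spatial = nn.Sequential(                 # one-channel location map
            nn.Conv2d(channels, 1, kernel_size=7, padding=3), nn.Sigmoid(),
        )

    def forward(self, x):
        attended = x * self.channel(x)                # reweight channels
        attended = attended * self.spatial(attended)  # reweight locations
        return attended + self.comp * x               # feature compensation

y = HybridAttention(64)(torch.randn(2, 64, 14, 14))
print(y.shape)                                        # torch.Size([2, 64, 14, 14])
```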
Funding: Supported by the Science and Technology Research and Development Plan Contract of China National Railway Group Co., Ltd (Grant No. N2022G012) and the Railway Science and Technology Research and Development Center Project (Project No. SYF2022SJ004).
Abstract: The reliability of train control on-board equipment is inextricably linked to the safe functioning of a high-speed train. A fault diagnosis model of on-board equipment is built using the ensemble learning XGBoost (eXtreme Gradient Boosting) algorithm to help technicians assess the malfunction category of high-speed train control on-board equipment accurately and rapidly. XGBoost iterates multiple decision tree models, improving fault diagnosis accuracy by fitting the prediction residuals and adding regularization terms. First, text features were extracted using an improved TF-IDF (Term Frequency-Inverse Document Frequency) approach, and 24 fault feature words were chosen and converted into weighted word vectors. Second, given the imbalanced fault categories in the dataset, the ADASYN (Adaptive Synthetic sampling) oversampling technique was used to synthesize samples for the minority fault categories. Finally, the data samples were split into training and test sets based on the fault text data of CTCS-3 train control on-board equipment recorded by Guangzhou Railway Group maintenance personnel, and the XGBoost model, tuned by grid search, was used to locate faults in the test set automatically. Compared with other methods, the evaluation indices of the XGBoost model improved significantly; the diagnostic accuracy reached 95.43%, which verifies the effectiveness of the method for text-based fault diagnosis.
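The pipeline's three steps map directly onto standard libraries, as the sketch below shows: ADASYN oversampling of minority fault classes followed by a grid-searched XGBoost classifier. Random vectors stand in for the paper's improved-TF-IDF weight vectors over 24 fault feature words, and the class sizes and parameter grid are invented.

```python
import numpy as np
from imblearn.over_sampling import ADASYN
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 24))                 # stand-in 24-dim TF-IDF vectors
y = np.array([0] * 80 + [1] * 25 + [2] * 15)   # imbalanced fault categories

# Synthesize extra samples for the minority fault classes.
X_bal, y_bal = ADASYN(random_state=0).fit_resample(X, y)

# Grid search over a toy parameter grid, as the abstract describes.
search = GridSearchCV(
    XGBClassifier(eval_metric="mlogloss"),
    {"max_depth": [3, 5], "n_estimators": [100, 200]},
    cv=3,
)
search.fit(X_bal, y_bal)
print(search.best_params_, search.best_score_)
```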
Abstract: To understand and organize documents efficiently, multi-document summarization has become a prominent technique in the Internet world. Because the amount of available information is large, it is necessary to summarize documents to obtain condensed information. To perform multi-document summarization, a new Bayesian theory-based Hybrid Learning Model (BHLM) is proposed in this paper. Initially, the input documents are preprocessed by removing stop words. Then sentence features are extracted to determine each sentence's score for summarizing the document. The extracted features are fed into the hybrid learning model for learning. Subsequently, the learned features, training error, and correlation coefficient are integrated with the Bayesian model to develop BHLM. The proposed method also assigns class labels with the aid of mean, variance, and probability measures. Finally, based on the class labels, the sentences are sorted to generate the final summary of the multi-document set. The experimental results are validated in MATLAB, and performance is analyzed using precision, recall, F-measure, and ROUGE-1. The proposed model attains 99.6% precision and a 75% ROUGE-1 measure, showing that it can produce the final summary efficiently.
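A toy Python sketch of the extractive skeleton only: preprocess by removing stop words, score sentences on a simple feature, and emit the top-scoring ones as the summary. The BHLM itself, which learns the scoring with a Bayesian model, is not reproduced; the stop-word list and average-term-frequency score are illustrative stand-ins.

```python
import re
from collections import Counter

STOP = {"the", "a", "an", "is", "are", "of", "in", "to", "and", "for"}

def sentences(docs):
    for doc in docs:
        yield from re.split(r"(?<=[.!?])\s+", doc)

def summarize(docs, k=2):
    sents = [s for s in sentences(docs) if s]
    # Corpus-wide term frequencies after stop-word removal.
    tf = Counter(w for s in sents
                 for w in re.findall(r"\w+", s.lower()) if w not in STOP)

    def score(s):  # average term frequency of a sentence's content words
        words = [w for w in re.findall(r"\w+", s.lower()) if w not in STOP]
        return sum(tf[w] for w in words) / (len(words) or 1)

    return sorted(sents, key=score, reverse=True)[:k]

docs = ["Solar power capacity grew rapidly. Costs of solar panels fell again.",
        "Falling panel costs drove new solar capacity worldwide."]
print(summarize(docs))
```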