Retailing is a dynamic business domain where commodities and goods are sold in small quantities directly to customers. It deals with the end-user customers of a supply-chain network and therefore has to accommodate the needs and desires of a large group of customers over varied utilities. The volume and volatility of the business make it one of the prospective fields for analytical study and data modeling. This is also why customer segmentation plays a key role in multiple retail business decisions such as marketing budgeting, customer targeting, customized offers, and value propositions. The segmentation could be based on various aspects such as demographics, historic behavior, or preferences, depending on the use case. In this paper, historic retail transactional data are used to segment the customers using K-Means clustering, and the results are used to derive a transition matrix that predicts cluster movements over time using a Markov Model algorithm. This helps in calculating the future value a segment or a customer brings to the business. Strategic marketing designs and budgeting can be implemented using these results. The study is specifically useful for large-scale marketing in domains such as e-commerce, insurance, or retail to segment, profile, and measure customer lifecycle value over a short period of time.
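The clustering-to-Markov step described above can be sketched as follows. This is a minimal illustration, not the paper's actual pipeline: a row-stochastic transition matrix is estimated from the cluster labels of the same customers in two consecutive periods, then used to project the segment distribution one period ahead. The labels and distribution are hypothetical.

```python
import numpy as np

def transition_matrix(labels_t0, labels_t1, n_clusters):
    """Estimate a row-stochastic transition matrix from cluster labels
    observed for the same customers in two consecutive periods."""
    counts = np.zeros((n_clusters, n_clusters))
    for a, b in zip(labels_t0, labels_t1):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    rows[rows == 0] = 1  # avoid division by zero for empty clusters
    return counts / rows

# Hypothetical K-Means labels for six customers in two periods
t0 = [0, 0, 1, 1, 2, 2]
t1 = [0, 1, 1, 1, 2, 0]
P = transition_matrix(t0, t1, 3)

# Project the current segment distribution one period ahead
dist = np.array([1 / 3, 1 / 3, 1 / 3])
print(dist @ P)
```

Iterating `dist @ P` over several periods gives the forward-looking segment mix from which a segment-level future value can be derived.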
Epilepsy is one of the most prevalent neurological disorders, affecting 70 million people worldwide. The present work focuses on designing an efficient algorithm for automatic seizure detection using the electroencephalogram (EEG), a noninvasive procedure for recording neuronal activity in the brain. The underlying dynamics of EEG signals are extracted to differentiate healthy and seizure EEG signals. Shannon entropy, collision entropy, transfer entropy, conditional probability, and Hjorth parameter features are extracted from subbands of the tunable Q wavelet transform. An efficient decomposition level for each feature vector is selected using the Kruskal-Wallis test to achieve good classification. Different features are combined using the discriminant correlation analysis fusion technique to form a single fused feature vector. The accuracy of the proposed approach is highest for Q=2 and J=10. Transfer entropy is observed to be significant for different class combinations. The proposed approach achieved 100% accuracy in classifying healthy versus seizure EEG signals using simple and robust features and a hidden Markov model with little computation time. The efficiency of the proposed approach is also evaluated in classifying seizure and non-seizure surface EEG signals, where the system achieved 96.87% accuracy in classifying surface seizure and non-seizure EEG segments using efficient features extracted from different J levels.
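One of the listed features, Shannon entropy of a subband segment, can be sketched with a simple histogram estimator. The bin count and test signals below are assumptions for illustration, not the paper's exact settings:

```python
import numpy as np

def shannon_entropy(x, bins=16):
    """Histogram-based Shannon entropy (in bits) of a signal segment."""
    hist, _ = np.histogram(x, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]              # drop empty bins so log2 is defined
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)
flat = np.zeros(1024)                  # a constant segment carries no information
noisy = rng.standard_normal(1024)      # a noisy segment spreads across many bins
print(shannon_entropy(flat), shannon_entropy(noisy))
```

In the approach above, such scalar features computed per wavelet subband would be concatenated before the fusion step.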
Text information is principally dependent on natural languages. Improving the security and reliability of text information exchanged over the Internet has therefore become one of the most difficult challenges that researchers encounter. Content authentication and tampering detection of digital content have become a major concern in the area of communication and information exchange via the Internet. In this paper, an intelligent text zero-watermarking approach, SETZWMWMM (Smart English Text Zero-Watermarking Approach Based on Mid-Level Order and Word Mechanism of Markov Model), is proposed for the content authentication and tampering detection of English text. The SETZWMWMM approach embeds and detects the watermark logically, without altering the original English text document. Based on a Hidden Markov Model (HMM), a third-level-order word mechanism is used to analyze the interrelationships between contexts of given English texts. The extracted features are used as watermark information and integrated with digital zero-watermarking techniques. To detect eventual tampering, SETZWMWMM has been implemented and validated with attacked English text. Experiments were performed on four datasets of varying lengths under multiple random locations of insertion, reorder, and deletion attacks. The experimental results show that our method is more sensitive and efficient to all kinds of tampering attacks, with a higher level of tampering-detection accuracy than the compared methods.
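The word-mechanism idea can be sketched as follows: third-order word contexts and their successors form a signature that is stored rather than embedded (hence "zero" watermarking), and a received text is checked against it. This is a simplified illustration under assumed scoring, not the actual SETZWMWMM algorithm:

```python
from collections import Counter

def markov_signature(text, order=3):
    """Count order-3 word contexts and their successors; the counts act
    as a logical (zero) watermark -- nothing is embedded in the text."""
    words = text.lower().split()
    sig = Counter()
    for i in range(len(words) - order):
        sig[(tuple(words[i:i + order]), words[i + order])] += 1
    return sig

def tampering_score(original_sig, received_sig):
    """Fraction of original transitions missing from the received text."""
    missing = sum((original_sig - received_sig).values())
    return missing / max(sum(original_sig.values()), 1)

orig = "the quick brown fox jumps over the lazy dog"
sig = markov_signature(orig)
print(tampering_score(sig, markov_signature(orig)))
print(tampering_score(sig, markov_signature("the quick brown cat sleeps")))
```

A score of 0 indicates an intact text; insertion, reorder, and deletion attacks all break context-successor pairs and raise the score.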
Translation software has become an important tool for communication between different languages. Expectations for translation keep rising, mainly reflecting people's desire for barrier-free cultural exchange. Even with a large corpus, the performance of statistical machine translation based on words and phrases is limited by the small size of the modeling units. Previous statistical methods rely primarily on the size of the corpus and the number of its statistical results to avoid ambiguity in translation, ignoring context. To support the ongoing improvement of translation methods built upon deep learning, we propose a translation algorithm based on the Hidden Markov Model to improve the use of context in the process of translation. During translation, our Hidden Markov Model prediction chain selects a number of phrases with the highest result probability to form a sentence. The collection of all generated sentences forms a topic sequence. Using probabilities and article sequences determined from the training set, our method again applies the Hidden Markov Model to form the final translation, improving the context relevance of the translation process. This algorithm improves the accuracy of translation, avoids combinations of invalid words, and enhances the readability and meaning of the resulting translation.
Modeling experience with the traditional grey-Markov model shows that prediction results are not accurate when the analyzed data are sparse and fluctuating, so it is necessary to revise or improve the original modeling procedure of the grey-Markov (GM) model. A new idea is therefore brought forward in which Markov theory is used twice: first to extend the original data, and second to calculate and estimate the residual errors. Then, by comparing the original data sequence from a fault prediction case with the simulation sequences produced by GM(1,1) and the new GM method, the results are shown to conform to the original data. Finally, an assumption of the GM model is put forward as future work.
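The grey half of the grey-Markov pair, a GM(1,1) fit, can be sketched in the standard textbook form: accumulate the series, fit the grey differential equation by least squares, and difference the solution back. This is the classical formulation, not the paper's doubly-applied Markov refinement:

```python
import numpy as np

def gm11_forecast(x, horizon=1):
    """Fit GM(1,1) to a positive data series; returns fitted values
    followed by `horizon` forecast steps."""
    x = np.asarray(x, dtype=float)
    x1 = np.cumsum(x)                        # accumulated series (1-AGO)
    z = 0.5 * (x1[:-1] + x1[1:])             # background values
    B = np.column_stack([-z, np.ones(len(z))])
    a, b = np.linalg.lstsq(B, x[1:], rcond=None)[0]
    k = np.arange(len(x) + horizon)
    x1_hat = (x[0] - b / a) * np.exp(-a * k) + b / a
    return np.concatenate([[x[0]], np.diff(x1_hat)])

# A near-geometric series: GM(1,1) tracks it closely
out = gm11_forecast([1.0, 1.1, 1.21, 1.331], horizon=1)
print(out)
```

The grey-Markov refinement would then model the residuals between `out` and the data with a Markov chain over error states.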
In recent years, the accuracy of speech recognition (SR) has been one of the most active areas of research. Although SR systems work reasonably well in quiet conditions, they still suffer severe performance degradation in noisy conditions or over distorted channels. It is necessary to search for more robust feature extraction methods to gain better performance in adverse conditions. This paper investigates the performance of conventional and new hybrid speech feature extraction algorithms, namely Mel Frequency Cepstrum Coefficients (MFCC), Linear Prediction Coding Coefficients (LPCC), perceptual linear prediction (PLP), and RASTA-PLP, in noisy conditions using a multivariate Hidden Markov Model (HMM) classifier. The behavior of the proposed system is evaluated using the TIDIGIT human voice corpus, recorded from 208 different adult speakers, in both the training and testing processes. The theoretical basis for the speech processing and classifier procedures is presented, and the recognition results are reported in terms of word recognition rate.
The paper aims to analyze land use/land cover (LULC) changes in the western, populated part of Amman governorate and to identify the process of urbanization and urban expansion within the study area for the period 1984-2014. It also aims to predict a future LULC map for the year 2030 using a Markov Model, to provide city planners and decision makers with information about the past and current spatial dynamics of LULC change, particularly urban expansion, for successful management and better planning in the future. Images from Landsat 5-TM for the years 1984 and 1999 and from Landsat 8-OLI for the year 2014 were used to investigate LULC within the study area during 1984-2014, and the resulting LULC maps for 1999 and 2014 were used to predict the future LULC map based on the Markov Model. The results indicate that the urban/built-up area expanded by 147% during the period from 1984 to 2014 and is predicted to expand by 43.9% from 2014 to 2030 based on the Markov model predictions. The areas in the western, northwest, and southwest parts of Amman, as well as the areas of Marka and Uhud in the northeast of the study area, are predicted to witness the major urban expansion in 2030; these are the areas that city planners and decision makers should take into consideration in future plans for Amman. The urban expansion is mainly attributed to the high population growth rate, the large number of immigrants from neighboring countries, and other socio-economic changes.
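The Markov projection step used above can be sketched by cross-tabulating two classified maps of the same pixels into a transition matrix and projecting class proportions forward. The class codes and pixel values below are illustrative, not the study's data:

```python
import numpy as np

# Two hypothetical classified maps of the same pixels at two dates
# (0 = urban/built-up, 1 = agriculture, 2 = bare land)
map_t1 = np.array([0, 0, 1, 1, 1, 2, 2, 2])
map_t2 = np.array([0, 0, 0, 1, 1, 0, 2, 2])

n = 3
counts = np.zeros((n, n))
for a, b in zip(map_t1, map_t2):
    counts[a, b] += 1
P = counts / counts.sum(axis=1, keepdims=True)   # row-stochastic transitions

# Project current class proportions one interval ahead
prop_t2 = np.bincount(map_t2, minlength=n) / map_t2.size
prop_t3 = prop_t2 @ P
print(prop_t3)
```

Repeating the multiplication (or raising `P` to a power) extends the projection over multiple intervals, e.g. from 2014 to 2030.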
The assembly process of aerospace products such as satellites and rockets has the characteristics of single- or small-batch production, a long development period, high reliability, and frequent disturbances. How to predict and avoid quality abnormalities, quickly locate their causes, and improve product assembly quality and efficiency are urgent engineering issues. As the core technology for integrating virtual and physical space, digital twin (DT) technology can make full use of the low cost, high efficiency, and predictability of digital space to provide a feasible solution to such problems. Hence, a quality management method for the assembly process of aerospace products based on DT is proposed. Given that traditional quality control methods for this assembly process mostly rely on post-inspection, the Grey-Markov model and the T-K control chart are used with a small sample of assembly quality data to predict quality data values and the status of the assembly system. The Apriori algorithm is applied to mine strong association rules relating quality data anomalies to uncontrolled assembly systems, addressing the difficulty of tracing the complicated causes of abnormal quality. The implementation of the proposed approach is described using the collected centroid data of an aerospace product's cabin, one of the key quality data items in the assembly process, as an example. A DT-based quality management system for the assembly process of aerospace products is developed, which can effectively improve the efficiency of quality management and reduce quality abnormalities.
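The association-mining step can be sketched with the support-counting core of Apriori, restricted to 2-itemsets for brevity. The record items below are hypothetical abnormality labels, not the study's data:

```python
from collections import Counter
from itertools import combinations

def frequent_pairs(transactions, min_support):
    """Count 2-itemsets across transactions and keep those whose
    support (fraction of transactions containing the pair) meets
    the threshold -- the candidate-pruning core of Apriori."""
    counts = Counter()
    for t in transactions:
        for pair in combinations(sorted(set(t)), 2):
            counts[pair] += 1
    n = len(transactions)
    return {p: c / n for p, c in counts.items() if c / n >= min_support}

# Hypothetical abnormality records from assembly quality logs
records = [
    {"centroid_shift", "fixture_wear"},
    {"centroid_shift", "fixture_wear", "torque_drift"},
    {"centroid_shift", "torque_drift"},
    {"fixture_wear"},
]
print(frequent_pairs(records, min_support=0.5))
```

Frequent pairs like these seed the strong association rules that link an observed anomaly back to its likely assembly-system cause.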
With the emergence of the Internet of Things (IoT), there has been a proliferation of urban studies using big data. Yet another type of urban research innovation, involving interdisciplinary thinking and methods, remains underdeveloped. This paper represents an attempt to adopt a Hidden Markov Model (HMM) toolbox, developed in Computer Science for the analysis of eye movement patterns in Psychology, to answer urban mobility questions in Geography. The main idea is that both people's eye movements and travel behavior follow a stop-travel-stop pattern, which can be summarized using an HMM. Methodological challenges were addressed by adjusting the HMM to analyze territory-wide travel survey data in Hong Kong, China. By using the adjusted toolbox to identify the activity-travel patterns of working adults in Hong Kong, two distinctive groups with balanced (38.4%) and work-oriented (61.6%) lifestyles were identified. With some notable exceptions, working adults living in the urban core had a more work-oriented lifestyle. Those with a balanced lifestyle had a relatively compact zone of non-work activities around their homes but a relatively long commuting distance. Furthermore, working females tend to spend more time at home than their counterparts, regardless of their marital status and lifestyle. Overall, this interdisciplinary research demonstrates an attempt to integrate spatial, temporal, and sequential information for understanding people's behavior in urban mobility research.
Because the performance parameters of gears degrade, a method is proposed to recognize and analyze gear faults using the hidden Markov model (HMM). In this method, the delayed correlation-envelope method is first used to extract features from vibration signals. Then, HMMs are trained separately on data under the normal condition, the gear root crack condition, and the gear root breaking condition. The trained HMMs are then used for pattern recognition and model assessment. Finally, the results from the standard HMM and the proposed method are compared, which shows that the proposed methodology is feasible and effective.
Electric vehicles such as trains must match their electric power supply and demand, for example by using a composite energy storage system composed of lithium batteries and supercapacitors. In this paper, a predictive control strategy based on a Markov model is proposed for a composite energy storage system in an urban rail train. The model predicts the state of the train, and a dynamic programming algorithm is employed to solve the optimization problem over a forecast time domain. Real-time online control of power allocation in the composite energy storage system can thus be achieved. Using standard train operating conditions for simulation, we found that the proposed control strategy achieves a suitable match between power supply and demand when the train is running. Compared with traditional predictive control systems, energy efficiency is 10.5% higher. The system provides good stability and robustness, satisfactory speed tracking performance and control comfort, and significant suppression of disturbances, making it feasible for practical applications.
The links between low temperature and the incidence of disease have been studied by many researchers. What remains unclear is the exact nature of the relation, especially the mechanism by which changes in weather affect the onset of diseases. The existence of a lag period between exposure to temperature and its effect on mortality may reflect the nature of the onset of diseases; assessing lagged effects therefore becomes potentially important. Most studies on lags have used distributed-lag Poisson regression and neglected extreme cases as random noise when estimating correlations. To assess the lagged effect, we propose a new approach, a Hidden Markov Model with Self-Organizing Map (HMM by SOM), distinct from the well-known regression models. HMM by SOM includes randomness in its nature and encompasses the extreme cases neglected by auto-regression models. Daily data on the number of patients transported by ambulance in Nagoya, Japan, were used. SOM was carried out to classify the meteorological elements into six classes, which were used as the "states" of the HMM. The HMM was used to describe a background process that might produce the time series of disease incidence; this background process was considered to move randomly among the weather states classified by SOM. We estimated the lagged effects of weather change on the onset of both cerebral infarction and ischemic heart disease. This is potentially important in that if one could trace a path in the chain of events leading from temperature change to death, one might be able to prevent it and avert the fatal outcome.
A land cover classification procedure is presented utilizing the information content of fully polarimetric SAR images. The Cameron coherent target decomposition (CTD) is employed to characterize each pixel, using a set of canonical scattering mechanisms to describe the physical properties of the scatterer. The novelty of the proposed classification approach lies in the use of Hidden Markov Models (HMM) to uniquely characterize each type of land cover. The motivation for this approach is the investigation of the alternation between scattering mechanisms from SAR pixel to pixel. Based on the observed scattering mechanisms and exploiting the transitions between them, we decide upon the HMM, and hence the land cover type. The classification process is based on the likelihood of observation sequences as evaluated by each model. The performance of the classification approach was assessed by means of fully polarimetric SLC SAR data from the broader area of Vancouver, Canada, and was found satisfactory, reaching success rates from 87% to over 99%.
Text labeling is a very important tool for the automatic processing of language. It is used in several applications such as morphological and syntactic text analysis, indexing, retrieval, deterministic finite networks (in which all combinations of words accepted by the grammar are listed), statistical grammars (e.g., an n-gram model in which the probabilities of sequences of n words in a specific order are given), etc. In this article, we develop a morphosyntactic labeling system for the "Baoule" language using hidden Markov models. This will allow us to build a tagged reference corpus and represent the major grammatical rules of the "Baoule" language in general. To estimate the parameters of this model, we used a training corpus manually labeled with a set of morpho-syntactic labels. We then improve the system through a procedure that re-estimates the parameters of the model.
Several studies have investigated the effects of meteorological factors on the occurrence of stroke. Regression models have mostly been used to assess the correlation between weather and stroke incidence; however, these methods cannot describe the process proceeding in the background of stroke incidence. The purpose of this study was to provide a new approach based on Hidden Markov Models (HMMs) and self-organizing maps (SOM), interpreting the background from the viewpoint of weather variability. Based on meteorological data, SOM was performed to classify weather patterns. Using these SOM classes as randomly changing "states", our Hidden Markov Models were constructed with "observation data" extracted from the daily records of emergency transport in Nagoya City, Japan. We show that SOM is an effective method for obtaining weather patterns that can serve as the "states" of Hidden Markov Models, and that our Hidden Markov Models provide effective models for clarifying the background process of stroke incidence. The effectiveness of these Hidden Markov Models was estimated by a stochastic test on root mean square errors (RMSE). "HMMs with states by SOM" serve as a description of the background process of stroke incidence and are useful for showing the influence of weather on stroke onset. This finding will contribute to an improved understanding of the links between weather variability and stroke incidence.
In this paper, we tested our methodology on the stocks of four representative companies: Apple, Comcast Corporation (CMCST), Google, and Qualcomm. We compared performance across these stocks using the hidden Markov model (HMM) and evaluated forecasts using the mean absolute percentage error (MAPE). For simplicity, we considered four main features of these stocks: the open, close, high, and low prices. When used for forecasting, the HMM gives its best predictions for the daily low stock price of Apple and the daily high stock price of CMCST, respectively. Calculating the MAPE for the four data sets of Google shows that the close price has the largest prediction error, while the open price has the smallest. For Qualcomm, the HMM has the largest prediction error for the daily low stock price and the smallest for the daily high stock price.
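The MAPE metric used above is straightforward to compute; the price series below are illustrative values, not the paper's data:

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

# Illustrative daily closing prices vs. one-step-ahead forecasts
actual = [100.0, 102.0, 101.0, 105.0]
predicted = [101.0, 101.5, 102.0, 104.0]
print(round(mape(actual, predicted), 3))
```

Because MAPE normalizes each error by the actual price, it allows error comparison across the open, close, high, and low series even when their levels differ.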
Compared with traditional real-time forecasting, this paper proposes a Grey Markov Model (GMM) to forecast the maximum water levels at hydrological stations in an estuary area. The GMM combines Grey System and Markov theory into a higher-precision model: it takes advantage of the Grey System to predict trend values and uses Markov theory to forecast fluctuation values, thus giving forecast results that incorporate both aspects of information. The procedure for forecasting annual maximum water levels with the GMM contains five main steps: 1) establish the GM(1,1) model based on the data series; 2) estimate the trend values; 3) establish a Markov Model based on the relative error series; 4) modify the relative errors from step 2 to obtain second-order estimates of the relative errors; 5) compare the results with measured data and estimate the accuracy. Historical water level records (from 1960 to 1992) at Yuqiao Hydrological Station in the estuary area of the Haihe River near Tianjin, China, are utilized to calibrate and verify the proposed model according to the above steps. Each 25 years of data is regarded as a hydro-sequence. Eight groups of simulated results show reasonable agreement between the predicted values and the measured data. The GMM was also applied to the 10 other hydrological stations in the same estuary, and the forecast results for all of the stations are good or acceptable. The feasibility and effectiveness of this new forecasting model are demonstrated in this paper.
To overcome defects of the classical hidden Markov model (HMM), a new statistical model, the Markov family model (MFM), is proposed and applied to speech recognition and natural language processing. Speaker-independent continuous speech recognition experiments and part-of-speech tagging experiments show that the Markov family model has higher performance than the hidden Markov model: precision is enhanced from 94.642% to 96.214% in the part-of-speech tagging experiments, and the error rate is reduced by 11.9% in the speech recognition experiments with respect to the HMM baseline system.
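The decoding step shared by HMM-based tagging and recognition is Viterbi search; a minimal log-domain version on a toy two-state tagger is sketched below. The states, vocabulary, and probabilities are made up for illustration, and this is the classical HMM baseline, not the Markov family model itself:

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for a discrete observation
    sequence (log-domain Viterbi decoding)."""
    T, n = len(obs), len(pi)
    logA, logB, logpi = np.log(A), np.log(B), np.log(pi)
    delta = np.zeros((T, n))            # best log-score ending in each state
    psi = np.zeros((T, n), dtype=int)   # argmax back-pointers
    delta[0] = logpi + logB[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

# Toy tagger: states 0=Noun, 1=Verb; observations 0="dog", 1="barks"
pi = np.array([0.9, 0.1])
A = np.array([[0.3, 0.7],    # Noun -> {Noun, Verb}
              [0.8, 0.2]])   # Verb -> {Noun, Verb}
B = np.array([[0.9, 0.1],    # emission probabilities from Noun
              [0.2, 0.8]])   # emission probabilities from Verb
print(viterbi([0, 1], pi, A, B))   # "dog barks" -> [0, 1] (Noun, Verb)
```

A family of such chains over different feature streams, decoded jointly, is the direction the MFM takes; the abstract does not spell out those details.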
Funding: The author extends his appreciation to the Deanship of Scientific Research at King Khalid University for funding this work under grant number (R.G.P.2/55/40/2019), received by Fahd N. Al-Wesabi. www.kku.edu.sa
Funding: Support provided by the Cooperative Education Fund of the China Ministry of Education (201702113002 and 201801193119), the Hunan Natural Science Foundation (2018JJ2138), and the Degree and Graduate Education Reform Project of Hunan Province (JG2018B096) is greatly appreciated by the authors.
Funding: Supported by the National Natural Science Foundation of China (No. 61303098).
Abstract: Modeling experience with the traditional grey-Markov model shows that its predictions are inaccurate when the analyzed data are scarce and fluctuating, so the original modeling procedure of the grey-Markov (GM) model needs to be revised or improved. A new idea is therefore put forward: Markov theory is applied twice, first to extend the original data and then to calculate and estimate the residual errors. Comparing the original data sequence from a fault-prediction case with the simulation sequences produced by GM(1,1) and the new GM method shows that the results conform to the original data. Finally, an assumption about the GM model is put forward as future work.
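A minimal sketch of the GM(1,1) fitting step that grey-Markov procedures start from; the data series here is illustrative, not the paper's fault-prediction case.

```python
import numpy as np

def gm11(x0, horizon=1):
    """Fit a GM(1,1) grey model to a short positive series and extrapolate.
    Returns fitted values for the observed points plus `horizon` forecasts."""
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                       # accumulated generating series
    z1 = 0.5 * (x1[1:] + x1[:-1])            # mean generating series
    B = np.column_stack([-z1, np.ones_like(z1)])
    Y = x0[1:]
    a, b = np.linalg.lstsq(B, Y, rcond=None)[0]  # developing & grey coefficients
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    return np.diff(np.concatenate([[0.0], x1_hat]))  # restore by differencing

data = [104, 112, 119, 127, 135]   # illustrative monotone series
pred = gm11(data, horizon=2)
print(pred)
```

The grey-Markov variants then post-process the residuals of this trend model with a Markov chain.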
Abstract: In recent years, the accuracy of speech recognition (SR) has been one of the most active areas of research. Although SR systems work reasonably well in quiet conditions, they still suffer severe performance degradation under noisy conditions or distorted channels, so more robust feature-extraction methods are needed to obtain better performance in adverse conditions. This paper investigates the performance of conventional and new hybrid speech feature-extraction algorithms, Mel Frequency Cepstrum Coefficients (MFCC), Linear Prediction Coding Coefficients (LPCC), perceptual linear prediction (PLP), and RASTA-PLP, in noisy conditions using a multivariate Hidden Markov Model (HMM) classifier. The behavior of the proposed system is evaluated on the TIDIGITS human-voice corpus, recorded from 208 different adult speakers, in both training and testing. The theoretical basis for the speech-processing and classifier procedures is presented, and the recognition results are reported as word recognition rates.
Abstract: The paper aims to analyze land use/land cover (LULC) changes in the western, populated part of Amman governorate, and to identify the process of urbanization and urban expansion within the study area for the period 1984-2014. It also aims to predict the LULC map for the year 2030 using a Markov model, providing city planners and decision makers with information about the past and current spatial dynamics of LULC change, and of urban expansion in particular, for successful management and better planning. Images from Landsat 5-TM for 1984 and 1999 and from Landsat 8-OLI for 2014 were used to investigate LULC within the study area during 1984-2014, and the resulting LULC maps for 1999 and 2014 were used to predict the future LULC map with the Markov model. The results indicate that the urban/built-up area expanded by 147% from 1984 to 2014 and is predicted to expand by 43.9% from 2014 to 2030. The western, northwestern, and southwestern parts of Amman, as well as the areas of Marka and Uhud in the northeast of the study area, are predicted to witness the major urban expansion by 2030; these are the areas that city planners and decision makers should take into consideration in future plans for Amman. The urban expansion was mainly attributed to the high population growth rate, the large number of immigrants from neighboring countries, and other socio-economic changes.
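The Markov projection step works by applying a class-to-class transition matrix, estimated from two classified maps, to the current area shares. A minimal sketch with made-up classes and numbers (not the Amman data):

```python
import numpy as np

# Illustrative LULC transition matrix estimated from two classified maps
# (rows: from-class, columns: to-class, over one inter-map interval).
classes = ["urban", "vegetation", "bare"]
P = np.array([[0.95, 0.02, 0.03],
              [0.20, 0.70, 0.10],
              [0.25, 0.05, 0.70]])

share_now = np.array([0.30, 0.40, 0.30])   # current area proportions

# One Markov step projects the shares one interval ahead.
share_future = share_now @ P
print(dict(zip(classes, share_future.round(3))))
```

In practice the interval between the two calibration maps should match (or be rescaled to) the forecast horizon, and the matrix is usually estimated per pixel-count cross-tabulation.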
Funding: National Key Research and Development Program of China (Grant No. 2020YFB1710300); National Natural Science Foundation of China (Grant No. 52005042); National Defense Fundamental Research Foundation of China (Grant No. JCKY2020203B039); Equipment Pre-research Foundation of China (Grant No. 80923010101); Beijing Institute of Technology Research Fund Program for Young Scholars.
Abstract: The assembly process of aerospace products such as satellites and rockets is characterized by single-piece or small-batch production, long development periods, high reliability requirements, and frequent disturbances. How to predict and avoid quality abnormalities, quickly locate their causes, and improve product assembly quality and efficiency are urgent engineering issues. As the core technology for integrating virtual and physical space, digital twin (DT) technology can exploit the low cost, high efficiency, and predictive power of the digital space to provide a feasible solution to such problems. Hence, a DT-based quality management method for the assembly process of aerospace products is proposed. Given that traditional quality control in this setting is mostly post-inspection, the Grey-Markov model and the T-K control chart are used with small samples of assembly quality data to predict quality-data values and the status of the assembly system. The Apriori algorithm is applied to mine strong association rules linking quality-data anomalies with out-of-control assembly-system states, addressing the problem that the causes of abnormal quality are complicated and difficult to trace. The implementation of the proposed approach is described using the collected centroid data of an aerospace product's cabin, one of the key quality data in the assembly process, as an example. A DT-based quality management system for the assembly process is developed, which effectively improves the efficiency of quality management and reduces quality abnormalities.
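The association-rule mining step can be sketched with a tiny Apriori-style pass over "transactions" of co-occurring anomaly/cause tags; the tags, thresholds, and data below are hypothetical, not the paper's assembly logs.

```python
from itertools import combinations

# Hypothetical quality-event transactions: tags that co-occurred in one run.
transactions = [
    {"centroid_shift", "fixture_wear"},
    {"centroid_shift", "fixture_wear", "temp_drift"},
    {"temp_drift"},
    {"centroid_shift", "fixture_wear"},
    {"centroid_shift"},
]

def apriori_rules(db, min_support=0.4, min_conf=0.8):
    """Frequent 1- and 2-itemsets by support, then rules a -> b by confidence."""
    n = len(db)
    items = sorted({i for t in db for i in t})
    support = {}
    for size in (1, 2):                  # itemsets up to pairs, for brevity
        for cand in combinations(items, size):
            s = sum(set(cand) <= t for t in db) / n
            if s >= min_support:
                support[cand] = s
    rules = []
    for pair, s in support.items():
        if len(pair) != 2:
            continue
        for a, b in (pair, pair[::-1]):  # try both rule directions
            conf = s / support[(a,)]
            if conf >= min_conf:
                rules.append((a, b, round(conf, 2)))
    return rules

rules = apriori_rules(transactions)
print(rules)
```

A full Apriori implementation would grow candidate itemsets level by level; pairs suffice to show the support/confidence mechanics.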
Abstract: With the emergence of the Internet of Things (IoT), there has been a proliferation of urban studies using big data, yet urban research innovations that involve interdisciplinary thinking and methods remain underdeveloped. This paper adopts a Hidden Markov Model (HMM) toolbox, developed in computer science for the analysis of eye-movement patterns in psychology, to answer urban mobility questions in geography. The main idea is that both people's eye movements and their travel behavior follow a stop-travel-stop pattern, which can be summarized using an HMM. Methodological challenges were addressed by adjusting the HMM to analyze territory-wide travel survey data in Hong Kong, China. Using the adjusted toolbox to identify the activity-travel patterns of working adults in Hong Kong, two distinctive groups were identified: balanced (38.4%) and work-oriented (61.6%) lifestyles. With some notable exceptions, working adults living in the urban core had a more work-oriented lifestyle. Those with a balanced lifestyle had a relatively compact zone of non-work activities around their homes but a relatively long commuting distance. Furthermore, working females tended to spend more time at home than their counterparts, regardless of marital status and lifestyle. Overall, this interdisciplinary research demonstrates an attempt to integrate spatial, temporal, and sequential information for understanding people's behavior in urban mobility research.
Abstract: Because the performance parameters of gears degrade, a method is proposed to recognize and analyze gear faults using the hidden Markov model (HMM). In this method, the delayed correlation-envelope method is first used to extract features from vibration signals. HMMs are then trained on data from the normal condition, the gear-root crack condition, and the gear-root breaking condition, and the trained HMMs are used for pattern recognition and model assessment. Finally, the results from a standard HMM and from the proposed method are compared, showing that the proposed methodology is feasible and effective.
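The recognition step, one HMM per condition, classify by maximum likelihood, can be sketched with the discrete forward algorithm; the parameters below are illustrative stand-ins for models trained on envelope-feature symbols, not the paper's values.

```python
# Each gear condition gets its own small discrete HMM; classification picks the
# model assigning the highest forward likelihood to the observed symbols.

def forward_ll(obs, start, trans, emit):
    """Total likelihood of an observation sequence under a discrete HMM."""
    n = len(start)
    alpha = [start[i] * emit[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
                 for j in range(n)]
    return sum(alpha)

# Illustrative (start, transition, emission) triples for two conditions.
models = {
    "normal":     ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [[0.8, 0.2], [0.7, 0.3]]),
    "root_crack": ([0.5, 0.5], [[0.9, 0.1], [0.1, 0.9]], [[0.2, 0.8], [0.3, 0.7]]),
}

obs = [1, 1, 0, 1, 1]   # symbol 1 ~ high-impulse envelope feature
best = max(models, key=lambda m: forward_ll(obs, *models[m]))
print(best)
```

For long sequences the forward recursion is normally run in log space (or with per-step scaling) to avoid underflow.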
Funding: This work was supported by the Youth Backbone Teacher Training Program of Henan Colleges and Universities under Grant No. 2016ggjs-287, the Project of Science and Technology of Henan Province under Grant Nos. 172102210124 and 20210221026, and the Key Scientific Research Project in Colleges and Universities in Henan under Grant No. 18B460003.
Abstract: Electric vehicles such as trains must match their electric power supply and demand, for example by using a composite energy storage system composed of lithium batteries and supercapacitors. In this paper, a predictive control strategy based on a Markov model is proposed for the composite energy storage system of an urban rail train. The model predicts the state of the train, and a dynamic programming algorithm is employed to solve the optimization problem over a forecast time horizon, enabling real-time online control of power allocation in the composite energy storage system. Simulations under standard train operating conditions show that the proposed control strategy achieves a suitable match between power supply and demand while the train is running. Compared with traditional predictive control systems, energy efficiency is 10.5% higher. The system provides good stability and robustness, satisfactory speed tracking and control comfort, and significant suppression of disturbances, making it feasible for practical applications.
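The horizon optimization can be sketched as a backward dynamic program over a discretized supercapacitor energy state, splitting each step's predicted demand between the battery and the supercapacitor. Everything below (demand profile, grid, limits, quadratic battery-stress cost) is an illustrative stand-in, not the paper's model.

```python
# Toy battery + supercapacitor power split over a forecast horizon.
demand = [40, 80, 120, 60, -50]      # predicted demand, kW (negative = braking)
E_levels = list(range(0, 101, 10))   # discretized supercap energy grid
P_SC_MAX = 60                        # supercap power limit, kW
dt = 1.0

INF = float("inf")
cost = [0.0] * len(E_levels)         # terminal cost-to-go
policy = []
for p_dem in reversed(demand):       # backward DP over the horizon
    new_cost = [INF] * len(E_levels)
    step = [None] * len(E_levels)
    for i, e in enumerate(E_levels):
        for p_sc in range(-P_SC_MAX, P_SC_MAX + 1, 10):
            e_next = e - p_sc * dt
            if e_next not in E_levels:
                continue                       # energy bound violated
            j = E_levels.index(int(e_next))
            p_batt = p_dem - p_sc              # battery covers the remainder
            c = p_batt ** 2 + cost[j]          # quadratic battery-stress proxy
            if c < new_cost[i]:
                new_cost[i], step[i] = c, p_sc
    cost, policy = new_cost, [step] + policy

# Roll the optimal policy forward from a half-charged supercap.
e, split = 50, []
for t, p_dem in enumerate(demand):
    p_sc = policy[t][E_levels.index(e)]
    split.append((p_sc, p_dem - p_sc))
    e -= int(p_sc * dt)
print(split)                          # (supercap kW, battery kW) per step
```

In a receding-horizon controller this DP would be re-solved each step as the Markov model updates its demand forecast.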
Abstract: The links between low temperature and the incidence of disease have been studied by many researchers, but the exact nature of the relation remains unclear, especially the mechanism by which weather change affects the onset of disease. The existence of a lag between exposure to temperature and its effect on mortality may reflect the nature of disease onset, so assessing lagged effects is potentially important. Most studies of lags have used lag-distributed Poisson regression and neglected extreme cases as random noise when computing correlations. To assess the lagged effect, we propose a new approach, a Hidden Markov Model with states from a Self-Organizing Map (HMM by SOM), distinct from well-known regression models. HMM by SOM includes randomness in its nature and encompasses the extreme cases neglected by auto-regression models. Daily data on the number of patients transported by ambulance in Nagoya, Japan, were used. SOM was carried out to classify the meteorological elements into six classes, which were used as the "states" of the HMM. The HMM was used to describe a background process that might produce the time series of disease incidence; this background process was considered to move randomly among the weather states classified by SOM. We estimated the lagged effects of weather change on the onset of both cerebral infarction and ischemic heart disease. This is potentially important in that if one could trace a path in the chain of events leading from temperature change to death, one might be able to intervene and avert the fatal outcome.
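The SOM step, mapping daily weather vectors onto a small ordered set of "weather states", can be sketched with a tiny one-dimensional map; the weather vectors, map size, and schedules below are illustrative, not the Nagoya data or the paper's six-class map.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy daily weather vectors (temperature deg C, humidity %), standardized.
X = np.array([[30, 60], [29, 65], [5, 30], [4, 35], [18, 80], [17, 75]], float)
X = (X - X.mean(0)) / X.std(0)

n_nodes = 3
W = rng.normal(size=(n_nodes, 2)) * 0.1        # node weight vectors

for epoch in range(50):
    lr = 0.5 * (1 - epoch / 50)                # decaying learning rate
    sigma = max(1.0 * (1 - epoch / 50), 0.3)   # decaying neighborhood width
    for x in X:
        bmu = int(np.argmin(((W - x) ** 2).sum(1)))   # best-matching unit
        d = np.abs(np.arange(n_nodes) - bmu)          # grid distance to BMU
        h = np.exp(-(d ** 2) / (2 * sigma ** 2))      # neighborhood function
        W += lr * h[:, None] * (x - W)                # pull nodes toward x

# Each day's "weather state" is the index of its nearest node.
states = [int(np.argmin(((W - x) ** 2).sum(1))) for x in X]
print(states)
```

These discrete state labels are then what the HMM treats as its randomly changing background states.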
Abstract: A land cover classification procedure is presented that utilizes the information content of fully polarimetric SAR images. The Cameron coherent target decomposition (CTD) is employed to characterize each pixel using a set of canonical scattering mechanisms that describe the physical properties of the scatterer. The novelty of the proposed classification approach lies in the use of Hidden Markov Models (HMMs) to uniquely characterize each type of land cover. The motivation is to investigate the alternation between scattering mechanisms from SAR pixel to pixel: depending on the observed scattering mechanisms, and exploiting the transitions between them, we decide upon the HMM land-cover type. The classification process is based on the likelihood of observation sequences as evaluated by each model. The performance of the approach was assessed by means of fully polarimetric SLC SAR data from the broader area of Vancouver, Canada, and was found satisfactory, reaching success rates from 87% to over 99%.
Abstract: Text labeling is a very important tool for the automatic processing of language. It is used in several applications, such as morphological and syntactic text analysis, indexing, retrieval, deterministic finite-state networks (in which all combinations of words accepted by the grammar are listed), and statistical grammars (e.g., an n-gram model in which the probabilities of sequences of n words in a specific order are given). In this article, we developed a morphosyntactic labeling system for the Baoule language using hidden Markov models. This will allow us to build a tagged reference corpus and represent the major grammatical rules of the Baoule language in general. To estimate the parameters of this model, we used a training corpus labeled manually with a set of morpho-syntactic tags. We then improved the system through the model's parameter re-estimation procedure.
Abstract: Several studies have investigated the effects of meteorological factors on the occurrence of stroke, mostly using regression models to assess the correlation between weather and stroke incidence. However, these methods cannot describe the process proceeding in the background of stroke incidence. The purpose of this study was to provide a new approach based on Hidden Markov Models (HMMs) and self-organizing maps (SOM) that interprets this background from the viewpoint of weather variability. Based on meteorological data, SOM was performed to classify weather patterns. Using these SOM classes as randomly changing "states", Hidden Markov Models were constructed with "observation data" extracted from daily emergency-transport records in Nagoya City, Japan. We showed that SOM is an effective method for obtaining weather patterns that serve as the "states" of Hidden Markov Models, and that the resulting HMMs effectively clarify the background process of stroke incidence. Their effectiveness was estimated by a stochastic test on root mean square errors (RMSE). "HMMs with states by SOM" serve as a description of the background process of stroke incidence and are useful for showing the influence of weather on stroke onset. This finding will contribute to an improved understanding of the links between weather variability and stroke incidence.
Abstract: In this paper, we tested our methodology on the stocks of four representative companies: Apple, Comcast Corporation (CMCST), Google, and Qualcomm. We compared their performance using the hidden Markov model (HMM) and evaluated the forecasts using the mean absolute percentage error (MAPE). For simplicity, we considered four main features of these stocks: open, close, high, and low prices. When used for forecasting, the HMM gives its best predictions for the daily low stock price of Apple and the daily high stock price of CMCST. Calculating the MAPE for the four Google data sets, the close price has the largest prediction error, while the open price has the smallest. For Qualcomm, the HMM has the largest prediction error for the daily low stock price and the smallest for the daily high stock price.
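MAPE itself is straightforward to compute; a small sketch with made-up prices (not the paper's data):

```python
def mape(actual, forecast):
    """Mean absolute percentage error, in percent."""
    return 100 * sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual)

# Illustrative close-price series vs. one-step-ahead forecasts.
actual = [150.0, 152.0, 149.5, 151.0]
forecast = [151.5, 150.5, 150.0, 152.5]
print(round(mape(actual, forecast), 3))  # -> 0.829
```

Note that MAPE is undefined when an actual value is zero and asymmetric between over- and under-forecasts, which is worth keeping in mind when comparing price series of different scales.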
Funding: Sasakawa Scientific Foundation of Japan, No. 20-238; National Basic Research Program of China (973 Program), No. 2006CB403200; National Natural Science Foundation of China, No. 40261002 and No. 40561006.
Funding: Supported by the National Natural Science Foundation of China (50879085), the Program for New Century Excellent Talents in University (NCET-07-0778), and the Key Technology Research Project of Dynamic Environmental Flume for Ocean Monitoring Facilities (201005027-4).
Abstract: In contrast with traditional real-time forecasting, this paper proposes a Grey Markov Model (GMM) to forecast the maximum water levels at hydrological stations in an estuary area. The GMM combines grey system theory and Markov theory into a higher-precision model: the grey system predicts the trend values, the Markov theory forecasts the fluctuation values, and the forecast thus combines both kinds of information. The procedure for forecasting annual maximum water levels with the GMM has five main steps: 1) establish the GM(1,1) model based on the data series; 2) estimate the trend values; 3) establish a Markov model based on the relative-error series; 4) modify the relative errors produced in step 2 to obtain second-order estimates of the relative errors; 5) compare the results with measured data and estimate the accuracy. Historical water-level records (1960 to 1992) from Yuqiao Hydrological Station in the estuary of the Haihe River near Tianjin, China, were used to calibrate and verify the proposed model following these steps, with every 25 years of data treated as a hydro-sequence. Eight groups of simulated results show reasonable agreement between the predicted values and the measured data. The GMM was also applied to 10 other hydrological stations in the same estuary, and the forecast results for all of them were good or acceptable, proving the feasibility and effectiveness of this new forecasting model.
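Steps 3 and 4 of such procedures, classifying the trend model's relative errors into discrete states, estimating a transition matrix, and using the next-state distribution as a correction, can be sketched as follows. The error series, state bounds, and representative centers are illustrative, not the Yuqiao data.

```python
import numpy as np

# Illustrative relative errors of a GM(1,1) trend forecast.
errors = [-0.06, -0.02, 0.01, 0.05, 0.03, -0.01, -0.04, 0.02, 0.04, 0.01]
bounds = [-0.025, 0.025]              # state 0: low, 1: mid, 2: high

# Classify each error into a state by counting exceeded bounds.
states = [sum(e > b for b in bounds) for e in errors]

# Estimate the Markov transition matrix from consecutive state pairs.
P = np.zeros((3, 3))
for s, t in zip(states, states[1:]):
    P[s, t] += 1
P = P / P.sum(axis=1, keepdims=True)  # row-normalize the counts

# Expected relative error one step ahead, used to correct the trend forecast.
next_probs = P[states[-1]]
centers = [-0.05, 0.0, 0.05]          # representative error per state
correction = float(np.dot(next_probs, centers))
print(P.round(2), correction)
```

The corrected forecast is then the GM(1,1) trend value scaled by (1 + correction), which is the sense in which the Markov step supplies the fluctuation information.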
Funding: Project 60763001 supported by the National Natural Science Foundation of China; Projects 2009GZS0027 and 2010GZS0072 supported by the Natural Science Foundation of Jiangxi Province, China.
Abstract: In order to overcome defects of the classical hidden Markov model (HMM), a new statistical model, the Markov family model (MFM), was proposed and applied to speech recognition and natural language processing. Speaker-independent continuous speech recognition experiments and part-of-speech tagging experiments show that the Markov family model outperforms the hidden Markov model: precision is enhanced from 94.642% to 96.214% in the part-of-speech tagging experiments, and the error rate is reduced by 11.9% in the speech recognition experiments with respect to the HMM baseline system.