Desertification is increasingly serious in Xinjiang, and the construction of water conservancy is a precondition for the development of agriculture. The main agricultural water-conservancy project in Xinjiang was the building of Karez, which played a vital role in the development of Xinjiang agriculture in the Qing Dynasty. The Karez is recorded many times in historical documents of the Qing Dynasty, such as Lin Zexu's Diary, Tao Baolian's Diary, the Xinjiang Atlas, and Zuo Zongtang's memorials to the emperor, which record its condition and historical origin. The Karez made a significant contribution to the development of agriculture in the Qing Dynasty: it increased the cultivated land in Xinjiang at that time and raised the variety and yield of crops, which was conducive to the stability and development of Xinjiang's economy. To this day, the Karez remains an important water source for agricultural irrigation in Xinjiang.
Traditional human rights theory tends to hold that human rights should be aimed at defending against public authority and that the legal issue of human rights is a matter of public law. However, the development of human rights concepts and practices is not confined to this. A textual search shows that the term "human rights" is widely used in China's civil judicial documents. Among the 3,412 civil judicial documents we researched, the concept of "human rights" penetrates all kinds of disputes, ranging from property rights, contracts, labor, and torts to marital property, and is embedded both in the claims of the parties concerned and in the reasoning of judges. Human rights have become a discourse and yardstick for understanding and evaluating social behavior. The widespread use of the term "human rights" in civil judicial documents reflects at least three related concepts: first, the rights to subsistence and development are the primary basic human rights; second, the judicial protection of human rights is a bottom-line guarantee; third, the protection of human rights aims to achieve equal rights. Today, judges quote human rights theory in judicial judgments from time to time, evidencing that human rights have a practical function in adjudication. In practice this is mainly manifested in declaring righteous values and strengthening arguments with values and ideas related to human rights, using the provisions concerning human rights in the Constitution for constitutional interpretation, and using the principles of human rights to interpret ambiguous rules and to rank the importance of different rights.
This paper examines the automatic recognition and extraction of tables from a large collection of heterogeneous documents. The heterogeneous documents are first pre-processed and converted to HTML code, after which an algorithm recognises the table portion of each document. A Hidden Markov Model (HMM) is then applied to the HTML code in order to extract the tables. The model was trained and tested with 526 self-generated tables (321 for training and 205 for testing), with the Viterbi algorithm implemented for the testing stage. The system was evaluated in terms of accuracy, precision, recall and F-measure. The overall results show 88.8% accuracy, 96.8% precision, 91.7% recall and 88.8% F-measure, indicating that the method is good at solving the problem of table extraction.
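The HMM decoding step referred to above can be sketched with a toy Viterbi implementation; the states, observation symbols, and probabilities below are illustrative stand-ins, not the paper's trained model:

```python
# Toy Viterbi decoder: finds the most likely hidden-state sequence for an
# observation sequence under an HMM. All parameters here are illustrative,
# not the trained values from the paper.

def viterbi(obs, states, start_p, trans_p, emit_p):
    # V[t][s] = probability of the best path ending in state s at time t
    V = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
    path = {s: [s] for s in states}
    for t in range(1, len(obs)):
        V.append({})
        new_path = {}
        for s in states:
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p)
                for p in states
            )
            V[t][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(V[-1], key=V[-1].get)
    return path[best]

# Two hidden states (inside a table vs. ordinary text); observations are
# coarse HTML token classes.
states = ("table", "text")
start_p = {"table": 0.4, "text": 0.6}
trans_p = {"table": {"table": 0.7, "text": 0.3},
           "text": {"table": 0.2, "text": 0.8}}
emit_p = {"table": {"<td>": 0.6, "<p>": 0.1, "word": 0.3},
          "text": {"<td>": 0.05, "<p>": 0.45, "word": 0.5}}

print(viterbi(["<p>", "<td>", "<td>", "word"], states, start_p, trans_p, emit_p))
# → ['text', 'table', 'table', 'table']
```

Once a run of "table" states is decoded, the corresponding HTML span would be cut out as an extracted table.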
Deacidification and self-cleaning are important for the preservation of paper documents. In this study, nano-CaCO₃ was used as a deacidification agent and was stabilized by nanocellulose (CNC) and hydroxypropyl methylcellulose (HPMC) to form a uniform dispersion. Following polydimethylsiloxane (PDMS) treatment and chemical vapor deposition (CVD) of methyltrimethoxysilane (MTMS), a hydrophobic coating was constructed for self-cleaning purposes. The pH value of the treated paper was approximately 8.20, and the static contact angle was as high as 152.29°. Compared to the untreated paper, the tensile strength of the treated paper increased by 12.6%. This treatment endows the paper with a good deacidification effect and self-cleaning property, which are beneficial for its long-term preservation.
Universities produce a large number of composed documents in the teaching process, and most of them must be checked for similarity for validation. A similarity computation system is constructed for composed documents containing both images and text. First, each document is split into two parts: images and text. Then documents are compared by computing the similarities of the images and of the text contents independently. Using a Hadoop system, the text contents are separated easily and quickly. Experimental results show that the proposed system is efficient and practical.
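The text-side comparison can be sketched as a simple bag-of-words cosine similarity (a minimal stand-in for the system's text-content comparison; the actual features and the Hadoop pipeline are not reproduced here):

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity of two texts over simple word-count vectors."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

print(round(cosine_similarity("the quick brown fox", "the quick red fox"), 2))
# → 0.75
```

In a full system, each similarity (image side and text side) would be computed independently and then combined into an overall document score.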
Word segmentation is an integral step in many knowledge discovery applications. However, existing word segmentation methods face two problems when applied to Chinese judicial documents: (1) they rely on large-scale labeled data, which is typically unavailable for judicial documents, and (2) judicial documents have their own language features and writing formats. In this paper, a word segmentation method is proposed for Chinese judicial documents. The proposed method consists of two steps: (1) automatically generating labeled data in the form of legal dictionaries, and (2) applying a hybrid multilayer neural network that incorporates the legal dictionaries to perform word segmentation. Experiments on a dataset of Chinese judicial documents show that the proposed model achieves better results than existing methods.
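The dictionary side of such a pipeline can be illustrated with forward maximum matching, a classic dictionary-based segmentation baseline; the dictionary entries below are hypothetical toy examples, and the paper's neural component is not shown:

```python
def forward_max_match(sentence, dictionary, max_len=4):
    """Greedy longest-prefix segmentation against a word dictionary:
    at each position, take the longest dictionary word that matches,
    falling back to a single character."""
    words = []
    i = 0
    while i < len(sentence):
        for l in range(min(max_len, len(sentence) - i), 0, -1):
            cand = sentence[i:i + l]
            if l == 1 or cand in dictionary:
                words.append(cand)
                i += l
                break
    return words

# Toy legal dictionary (illustrative entries, not the paper's generated one).
legal_dict = {"被告人", "判决", "书"}
print(forward_max_match("被告人判决书", legal_dict))
# → ['被告人', '判决', '书']
```

A neural segmenter would refine such dictionary matches rather than rely on them alone.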
In his pioneering article "The Smell of the Cage" (Cuneiform Digital Library Journal 2009/4), Robert K. Englund discusses some archaic lists of slaves from Uruk III and Jemdet Nasr (ancient Ni-ru?), dating to about 3100-2900 B.C.
This paper introduces a new enhanced Arabic stemming algorithm for the information retrieval problem, especially in medical documents. The proposed algorithm is a light stemming algorithm that extracts stems and roots from the input data. One of the main challenges facing a light stemming algorithm is cutting the input word into its initial segments: when the light stemmer starts from strong initial segments, the final extracted stems and roots are more accurate. Therefore, a new enhanced segmentation based on a Directed Acyclic Graph (DAG) model is utilized. In addition to extracting strong initial segments, the two main procedures (stem and root extraction) are reinforced with more efficient operators to improve the final outputs. To validate the proposed enhanced stemmer, four data sets are used. The stems and roots produced by the proposed light stemmer are compared with the results of five other well-known Arabic light stemmers on the same data sets. This evaluation shows that the proposed enhanced stemmer outperforms the comparative stemmers.
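The basic affix-stripping step of a light stemmer can be sketched as follows; the prefix and suffix lists are small illustrative samples, not the paper's operator set or its DAG-based segmentation:

```python
def light_stem(word,
               prefixes=("وال", "بال", "كال", "فال", "ال"),
               suffixes=("ات", "ون", "ين", "ها", "ة")):
    """Strip at most one matching prefix and one matching suffix,
    longest first, while keeping a minimum stem length of 3 letters.
    Affix lists here are a small illustrative subset."""
    for p in sorted(prefixes, key=len, reverse=True):
        if word.startswith(p) and len(word) - len(p) >= 3:
            word = word[len(p):]
            break
    for s in sorted(suffixes, key=len, reverse=True):
        if word.endswith(s) and len(word) - len(s) >= 3:
            word = word[:-len(s)]
            break
    return word

# Strips the conjunction+article prefix and the feminine plural suffix.
print(light_stem("والمستشفيات"))
```

A real light stemmer would apply richer affix inventories and ordering rules; the point here is only the strip-longest-affix-first pattern.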
Mathematical expressions in the unstructured text fields of documents are hard to extract automatically, rapidly and effectively. To address this problem, a method based on a Hidden Markov Model (HMM) is proposed. First, the method trains the HMM using the symbol combination features of mathematical expressions. Then, preprocessing steps such as removing labels and filtering words are carried out. Finally, the preprocessed text is converted into an observation sequence, which is fed to the HMM to identify and extract the mathematical expressions. Experimental results show that the proposed method can effectively extract mathematical expressions from the text fields of documents, with relatively high accuracy and recall.
In Chinese language studies, both "The Textual Research on Historical Documents" and "The Comparative Study of Historical Data" are traditional methodologies, and both deserve to be treasured, passed on, and further developed. Giving either method unreasonable priority will certainly harm the development of academic research. The author claims that the best methodology, or one of the best, for the historical study of the Chinese language is the combination of the two, hence a new interpretation of "The Double-proof Method". Meanwhile, this essay also attempts to put forward "The Law of Quan-ma and Gui-mei" in Chinese language studies, by which the author means that it is not advisable to treat Gui-mei as Quan-ma, or vice versa, in linguistic research. It is crucial always to respect the language facts first, which is the very soul of linguistics.
The Extensible Markup Language (XML) is becoming a de facto standard for exchanging information among web applications. Efficient implementation of a web application requires efficient implementation of its XML and XML schema documents, and the quality of an XML document is strongly affected by the design quality of its schema. The design of XML schema documents therefore plays an important role in the web engineering process and should exhibit many qualities: functionality, extensibility, reusability, understandability, maintainability and so on. Three schema metrics are proposed: the Reusable Quality metric (RQ), the Extensible Quality metric (EQ) and the Understandable Quality metric (UQ), which measure the reusability, extensibility and understandability of XML schema documents, respectively. The base attributes are selected according to the XML Quality Assurance Design Guidelines, and the metrics are formulated using the Binary Entropy Function and the Rank Order Centroid method. To check the validity of the proposed metrics empirically and analytically, the self-organizing feature map (SOM) and Weyuker's nine properties are used.
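The Rank Order Centroid method mentioned above assigns weight w_i = (1/n) Σ_{k=i}^{n} 1/k to the attribute ranked i-th out of n, so higher-ranked attributes receive larger weights and the weights sum to 1. A minimal sketch (the attribute ranking itself is application-specific and not shown):

```python
def roc_weights(n):
    """Rank Order Centroid weights for n ranked attributes:
    w_i = (1/n) * sum_{k=i}^{n} 1/k, for i = 1..n; weights sum to 1."""
    return [sum(1.0 / k for k in range(i, n + 1)) / n
            for i in range(1, n + 1)]

print([round(w, 4) for w in roc_weights(3)])
# → [0.6111, 0.2778, 0.1111]
```

These weights would then combine the per-attribute scores into a single metric value such as RQ, EQ, or UQ.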
The state and government have introduced laws and regulations to improve the quality of judgment documents. However, the number of cases tried in our country is very large, and using a system to verify the documents can reduce the burden on judges and help ensure the accuracy of judgments. This paper describes an evaluation system for the reasoning description of judgment documents. The main evaluation steps are: segmenting the text before and after the cited law; extracting the key information in the document using XML parsing; constructing a legal-domain stop-word library and preprocessing the input text; feeding the text into the model to obtain a text-matching result; for the evaluation of "law and conclusion", judging whether the logic is consistent using the idea of "match keywords, compare sentencing degree"; and integrating the results of each evaluation component to feed clear, concise results back to the system user. Simulations of real application scenarios were conducted to test whether the reasoning lacks key links, is insufficient, or leads to an unreasonable judgment. The results show that the evaluation speed per document is relatively fast and that the evaluation accuracy for the nine common types of criminal cases is high.
Purpose: The thrust of this paper is to present a method for improving the accuracy of automatic indexing of Chinese-English mixed documents.
Design/methodology/approach: Based on the inherent characteristics of Chinese-English mixed texts and cybernetics theory, we proposed an integrated control method for indexing documents. It consists of "feed-forward control", "in-progress control" and "feed-back control", aiming at improving the accuracy of automatic indexing of Chinese-English mixed documents. An experiment was conducted to investigate the effect of the proposed method.
Findings: The method distinguishes Chinese and English documents by their grammatical structures and word-formation rules. With the method applied in the three phases of automatic indexing of Chinese-English mixed documents, the results were encouraging: precision increased from 88.54% to 97.10% and recall improved from 97.37% to 99.47%.
Research limitations: The indexing method is relatively complicated, and the whole indexing process requires substantial human intervention. Because pattern matching uses a brute-force (BF) approach, indexing efficiency is reduced to some extent.
Practical implications: The research is of both theoretical significance and practical value in improving the accuracy of automatic indexing of multilingual documents (not confined to Chinese-English mixed documents). The proposed method will benefit the indexing not only of life science documents but also of documents in other subject areas.
Originality/value: So far, few studies have been published on methods for increasing the accuracy of multilingual automatic indexing. This study provides insights into the automatic indexing of multilingual documents, especially Chinese-English mixed documents.
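The reported precision and recall follow their standard definitions over true-positive, false-positive, and false-negative counts; for concreteness, a small sketch with made-up counts (not the paper's data):

```python
def precision_recall(tp, fp, fn):
    """Standard precision and recall from true-positive (tp),
    false-positive (fp), and false-negative (fn) counts."""
    precision = tp / (tp + fp)  # fraction of assigned index terms that are correct
    recall = tp / (tp + fn)     # fraction of correct index terms that were assigned
    return precision, recall

# Hypothetical counts for illustration only.
p, r = precision_recall(tp=97, fp=3, fn=1)
print(round(p, 3), round(r, 3))
# → 0.97 0.99
```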
Plain Language has made a great difference nowadays; as it turns out, Plain Language works effectively for expressing ideas clearly, concisely and systematically. However, it is necessary for contemporary practitioners to review the origin and development of the Plain Language Movement and to examine whether Plain Language policies have been thoroughly implemented in every federal document. Examining a contemporary federal document against the Guidelines for Document Designers reveals problems that remain to be addressed.
Drawing on practical experience in preparing construction bidding documents, and taking the bidding documents of large campus infrastructure projects at colleges and universities as its main subject, this paper discusses construction technology, qualification and performance requirements, the bill of quantities, and the setting of contract terms, and puts forward practical measures and methods that can serve as a reference for preparing bidding documents for similar construction projects.
Purpose: Detecting research fields or topics and understanding their dynamics help the scientific community make decisions about the establishment of scientific fields, and also support better collaboration with governments and businesses. This study investigates the development of research fields over time by translating it into a topic detection problem.
Design/methodology/approach: To achieve these objectives, we propose a modified deep clustering method to detect research trends from the abstracts and titles of academic documents. Document embedding approaches are used to transform documents into vector-based representations. The proposed method is evaluated against a benchmark dataset by comparing it with combinations of different embedding and clustering approaches and with classical topic modeling algorithms (i.e., LDA). A case study is also conducted exploring the evolution of Artificial Intelligence (AI) by detecting the research topics or sub-fields in related AI publications.
Findings: Evaluation using clustering performance indicators shows that the proposed method outperforms similar approaches on the benchmark dataset. Using the proposed method, we also show how the topics have evolved over the past 30 years, taking advantage of a keyword extraction method for cluster tagging and labeling and demonstrating the context of the topics.
Research limitations: We noticed that no single solution generalizes to all downstream tasks; the solutions must be fine-tuned or optimized for each task and even each dataset. In addition, the interpretation of cluster labels can be subjective and vary with the reader's opinion. The labeling techniques are also very difficult to evaluate, which further limits the explanation of the clusters.
Practical implications: As the case study demonstrates in a real-world example, the proposed method enables researchers and reviewers of academic research to detect, summarize, analyze, and visualize research topics from decades of academic documents. This helps the scientific community and related organizations analyze fields quickly and effectively by establishing and explaining the topics.
Originality/value: In this study, we introduce a modified and tuned deep embedding clustering coupled with Doc2Vec representations for topic extraction, and we use a concept extraction method as a labeling approach. The effectiveness of the method is evaluated in a case study of AI publications, where we analyze AI topics over the past three decades.
The issue of document management has been raised for a long time, especially with the appearance of office automation in the 1980s, which led to dematerialization and Electronic Document Management (EDM). In the same period, workflow management developed significantly but became focused mainly on industry, and it seems to us that document workflows have not received the same interest from the scientific community. Nowadays, however, the emergence and supremacy of the Internet in electronic exchanges are leading to a massive dematerialization of documents, which requires a conceptual reconsideration of the organizational framework for processing those documents in both public and private administrations. This problem seems open to us and deserves the interest of the scientific community. Indeed, EDM has mainly focused on the storage (referencing) and circulation (traceability) of documents, and has paid little attention to the overall behavior of the system in processing them. The purpose of our research is to model document processing systems. In previous work, we proposed a general model and its specialization to the case of small documents (documents processed by a single person at a time throughout their processing life cycle), which, according to our study, represent 70% of the documents processed by administrations. In this contribution, we extend the model for processing small documents to the case where they are managed in a system comprising document classes organized into subclasses, which is the case in most administrations. We observe that this model is a Markovian M^(L×K)/M^(L×K)/1 queueing network. We analyze the constraints of this model and deduce certain characteristics and metrics. Ultimately, the objective of our work is to design a document workflow management system that integrates a component for predicting global behavior.
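The basic single-queue building block of such a model is the M/M/1 queue, whose standard steady-state metrics are straightforward to compute; this sketch covers only the textbook M/M/1 case, not the full M^(L×K)/M^(L×K)/1 network:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state metrics of an M/M/1 queue (requires lambda < mu):
    utilization rho, mean number in system L, mean sojourn time W."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = arrival_rate / service_rate              # server utilization
    L = rho / (1 - rho)                            # mean number of documents in system
    W = 1 / (service_rate - arrival_rate)          # mean time a document spends in system
    return rho, L, W

# Illustrative rates: 4 documents arrive and 5 are processed per time unit.
rho, L, W = mm1_metrics(arrival_rate=4.0, service_rate=5.0)
print(round(rho, 3), round(L, 3), round(W, 3))
# → 0.8 4.0 1.0
```

In the network model, each document (sub)class would contribute its own arrival and service rates, with such per-queue metrics aggregated across the L×K classes.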
While XML web services are becoming recognized as a solution for business-to-business transactions, many problems remain to be solved. For example, it is not easy to manipulate business documents that follow existing standards such as RosettaNet and UN/EDIFACT EDI, traditionally regarded as an important resource for managing B2B relationships. As a starting point for a complete implementation of B2B web services, this paper deals with how to support B2B business documents in XML web services. First, the basic requirements for driving XML web services by business documents are introduced. As a solution, the paper presents how to express B2B business documents in WSDL, a core standard for XML web services. This approach facilitates the reuse of existing business documents and enhances interoperability between implemented web services. Furthermore, it suggests how to link with other conceptual modeling frameworks such as ebXML/UMM, built on a rich heritage of electronic business experience.
Campus network construction is a systems engineering effort that requires cooperation among departments. Standardization is the key to this problem, and its core is the standardization of documents. This paper therefore discusses the relevant issues in combination with our own campus network practice.
Funding: This work was supported by the Science and Technology Plan Special Project of Guangzhou, China (No. GZDD201808) and the National Key Research Program for International Cooperation-MOST/STDF (2021YFE0104500).
文摘Deacidification and self-cleaning are important for the preservation of paper documents.In this study,nano-CaCO_(3) was used as a deacidification agent and stabilized by nanocellulose(CNC)and hydroxypropyl methylcellulose(HPMC)to form a uniform dispersion.Followed by polydimethylsiloxane(PDMS)treatment and chemical vapor deposition(CVD)of methyltrimethoxysilane(MTMS),a hydrophobic coating was constructed for self-cleaning purposes.The pH value of the treated paper was approximately 8.20,and the static contact angle was as high as 152.29°.Compared to the untreated paper,the tensile strength of the treated paper increased by 12.6%.This treatment method endows the paper with a good deacidification effect and self-cleaning property,which are beneficial for its long-term preservation.
文摘There exist a large number of composed documents in universities in the teaching process. Most of them are required to check the similarity for validation. A kind of similarity computation system is constructed for composed documents with images and text information. Firstly, each document is split and outputs two parts as images and text information. Then, these documents are compared by computing the similarities of images and text contents independently. Through Hadoop system, the text contents are easily and quickly separated. Experimental results show that the proposed system is efficient and practical.
文摘Word segmentation is an integral step in many knowledge discovery applications. However, existing word segmentation methods have problems when applying to Chinese judicial documents:(1) existing methods rely on large-scale labeled data which is typically unavailable in judicial documents, and (2) judicial document has its own language features and writing formats. In this paper, a word segmentation method is proposed for Chinese judicial documents. The proposed method consists of two steps:(1) automatically generating some labeled data as legal dictionaries, and (2) applying a hybrid multilayer neural networks to do word segmentation incorporating legal dictionaries. Experiments are conducted on a dataset of Chinese judicial documents showing that the proposed model can achieve better results than the existing methods.
文摘In his ice-breaking article "The Smell of the Cage in Cuneiform Digital Library" Journal 2009/4, Robert K. Englund discusses some archaic lists of slaves from Uruk III and Jemdet Nasr (Ancient Ni-ru?) about 3100-2900 B.C.
文摘This paper introduces a new enhanced Arabic stemming algorithm for solving the information retrieval problem,especially in medical documents.Our proposed algorithm is a light stemming algorithm for extracting stems and roots from the input data.One of the main challenges facing the light stemming algorithm is cutting off the input word,to extract the initial segments.When initiating the light stemmer with strong initial segments,the final extracting stems and roots will be more accurate.Therefore,a new enhanced segmentation based on deploying the Direct Acyclic Graph(DAG)model is utilized.In addition to extracting the powerful initial segments,the main two procedures(i.e.,stems and roots extraction),should be also reinforced with more efficient operators to improve the final outputs.To validate the proposed enhanced stemmer,four data sets are used.The achieved stems and roots resulted from our proposed light stemmer are compared with the results obtained from five other well-known Arabic light stemmers using the same data sets.This evaluation process proved that the proposed enhanced stemmer outperformed other comparative stemmers.
文摘Aiming at the problem that the mathematical expressions in unstructured text fields of documents are hard to be extracted automatically, rapidly and effectively, a method based on Hidden Markov Model (HMM) is proposed. Firstly, this method trained the HMM model through employing the symbol combination features of mathematical expressions. Then, some preprocessing works such as removing labels and filtering words were carried out. Finally, the preprocessed text was converted into an observation sequence as the input of the HMM model to determine which is the mathematical expression and extracts it. The experimental results show that the proposed method can effectively extract the mathematical expressions from the text fields of documents, and also has the relatively high accuracy rate and recall rate.
Abstract: In Chinese language studies, both "The Textual Research on Historical Documents" and "The Comparative Study of Historical Data" are traditional methodologies, and both deserve to be treasured, passed on, and further developed. It will certainly harm the development of academic research if either method is given unreasonable priority. The author claims that the best, or one of the best, methodologies for the historical study of Chinese is the combination of the two, hence a new interpretation of "The Double-Proof Method". Meanwhile, this essay also attempts to put forward "The Law of Quan-ma and Gui-mei" in Chinese language studies, by which the author means that it is not advisable to treat Gui-mei as Quan-ma, or vice versa, in linguistic research. It is crucial always to respect the language facts first, which is considered the very soul of linguistics.
Abstract: The Extensible Markup Language (XML) is becoming a de facto standard for exchanging information among web applications. Efficient implementation of a web application requires efficient implementation of its XML and XML schema documents. The quality of an XML document has a great impact on the design quality of its schema document. Therefore, the design of XML schema documents plays an important role in the web engineering process and must exhibit many qualities: functionality, extensibility, reusability, understandability, maintainability and so on. Three schema metrics are proposed: the Reusable Quality metric (RQ), the Extensible Quality metric (EQ) and the Understandable Quality metric (UQ), which respectively measure the reusability, extensibility and understandability of XML schema documents in the web engineering process. The base attributes are selected according to the XML Quality Assurance Design Guidelines. The metrics are formulated using the Binary Entropy Function and the Rank Order Centroid method. To check the validity of the proposed metrics empirically and analytically, the self-organizing feature map (SOM) and Weyuker's nine properties are used.
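A minimal sketch of the two mathematical ingredients named above — the binary entropy function and Rank Order Centroid (ROC) weights — combined into a weighted score. The composition shown (a weighted sum of entropies over ranked attribute ratios) is an assumption for illustration; the paper's exact metric formulation may differ:

```python
import math

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p), with H(0) = H(1) = 0."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def roc_weights(k):
    """Rank Order Centroid weights: w_i = (1/k) * sum_{j=i..k} 1/j.

    The weights are positive, decreasing with rank, and sum to 1."""
    return [sum(1.0 / j for j in range(i, k + 1)) / k for i in range(1, k + 1)]

def schema_quality(attribute_ratios):
    """Weighted sum of entropies over ranked base-attribute ratios
    (values in [0, 1], ordered from most to least important)."""
    ws = roc_weights(len(attribute_ratios))
    return sum(w * binary_entropy(p) for w, p in zip(ws, attribute_ratios))

print(roc_weights(3))                      # ≈ [0.611, 0.278, 0.111]
print(schema_quality([0.5, 0.25, 0.9]))    # hypothetical attribute ratios
```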
Abstract: In order to improve the quality of judgment documents, the state and government have introduced laws and regulations. However, trial courts in our country currently handle a very large number of cases. Using a system to verify the documents can reduce the burden on judges and help ensure the accuracy of the judgment. This paper describes an evaluation system for the reasoning sections of judgment documents. The main evaluation steps are: segmenting the text before and after the cited law; extracting the key information in the document using XML parsing technology; constructing a legal-domain stop-word library and preprocessing the input text; feeding the text into the model to obtain the text-matching result; for the "law and conclusion" evaluation, using the "match keyword, compare sentencing degree" idea to judge whether the logic is consistent; and integrating the results of each evaluation component to feed clear, concise results back to the system user. Simulations of real application scenarios were conducted to test whether the reasoning lacks key links, is insufficient, or leads to an unreasonable judgment. The results show that the evaluation of each document is relatively fast and that accuracy is high for the nine common types of criminal cases.
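The "match keyword, compare sentencing degree" idea can be sketched as below. The rule table, stop-word list and charge names are hypothetical placeholders; the real system extracts this information from the document XML and its purpose-built legal stop-word library:

```python
# Hypothetical rule table: charge -> expected keywords and sentencing range.
RULES = {
    "theft": {"keywords": {"theft", "stolen"}, "max_years": 3},
    "robbery": {"keywords": {"robbery", "violence"}, "max_years": 10},
}

STOP_WORDS = {"the", "of", "a", "and"}  # stand-in for the legal stop list

def preprocess(text):
    """Lowercase, tokenize and drop stop words (toy preprocessing)."""
    return [w for w in text.lower().split() if w not in STOP_WORDS]

def check_consistency(reasoning, charge, sentence_years):
    """Flag a document as inconsistent when the reasoning never mentions
    the charge's keywords, or the sentence exceeds the rule's range."""
    tokens = set(preprocess(reasoning))
    rule = RULES[charge]
    keyword_hit = bool(tokens & rule["keywords"])
    in_range = sentence_years <= rule["max_years"]
    return keyword_hit and in_range

print(check_consistency("the defendant committed theft of a bicycle", "theft", 2))  # True
print(check_consistency("the defendant committed theft of a bicycle", "theft", 5))  # False
```

The real system additionally aggregates such per-check results into the overall feedback shown to the user.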
Funding: Supported by Shanghai International Studies University (Grant No. 2011114061)
Abstract: Purpose: The thrust of this paper is to present a method for improving the accuracy of automatic indexing of Chinese-English mixed documents. Design/methodology/approach: Based on the inherent characteristics of Chinese-English mixed texts and cybernetics theory, we propose an integrated control method for indexing documents. It consists of "feed-forward control", "in-progress control" and "feed-back control", aiming at improving the accuracy of automatic indexing of Chinese-English mixed documents. An experiment was conducted to investigate the effect of the proposed method. Findings: This method distinguishes Chinese and English documents by their grammatical structures and word-formation rules. Through the implementation of this method in the three phases of automatic indexing of Chinese-English mixed documents, the results were encouraging: precision increased from 88.54% to 97.10% and recall improved from 97.37% to 99.47%. Research limitations: The indexing method is relatively complicated and the whole indexing process requires substantial human intervention. Because pattern matching uses a brute-force (BF) approach, indexing efficiency is reduced to some extent. Practical implications: The research is of both theoretical significance and practical value in improving the accuracy of automatic indexing of multilingual documents (not confined to Chinese-English mixed documents). The proposed method will benefit not only the indexing of life science documents but also the indexing of documents in other subject areas. Originality/value: So far, few studies have been published on methods for increasing the accuracy of multilingual automatic indexing. This study provides insights into the automatic indexing of multilingual documents, especially Chinese-English mixed documents.
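The brute-force matching named in the limitations re-compares the pattern at every text position, which is why it slows indexing; the sketch below shows the O(n·m) behavior in miniature (the actual system matches against mixed Chinese-English term lists, not single English words):

```python
def bf_match(text, pattern):
    """Naive brute-force matching, O(n*m): the bottleneck the paper
    notes; algorithms such as KMP or Aho-Corasick avoid re-scanning."""
    n, m = len(text), len(pattern)
    hits = []
    for i in range(n - m + 1):
        j = 0
        while j < m and text[i + j] == pattern[j]:
            j += 1          # extend the match one character at a time
        if j == m:
            hits.append(i)  # full pattern matched at position i
    return hits

print(bf_match("gene expression in gene therapy", "gene"))  # [0, 19]
```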
Abstract: Plain Language has made a great difference nowadays: it works effectively for expressing ideas clearly, concisely and systematically. However, it is necessary for contemporary practitioners to review the origin and development of the Plain Language Movement and to examine whether Plain Language policies have been thoroughly implemented in every federal document. Examining a contemporary federal document against the Guidelines for Document Designers reveals problems that remain for further development.
Abstract: Drawing on practical experience in preparing construction bidding documents, and taking large infrastructure projects in colleges and universities as the main subject, this paper discusses construction technology, qualification and performance requirements, the bill of quantities, and the setting of contract terms. It puts forward practical measures and methods, providing a reference for the preparation of bidding documents for similar construction projects.
Abstract: Purpose: Detecting research fields or topics and understanding their dynamics help the scientific community in decisions regarding the establishment of scientific fields. This also supports better collaboration with governments and businesses. This study aims to investigate the development of research fields over time, translating it into a topic detection problem. Design/methodology/approach: To achieve these objectives, we propose a modified deep clustering method to detect research trends from the abstracts and titles of academic documents. Document embedding approaches are utilized to transform documents into vector-based representations. The proposed method is evaluated by comparing it with combinations of different embedding and clustering approaches and with classical topic modeling algorithms (i.e., LDA) against a benchmark dataset. A case study is also conducted exploring the evolution of Artificial Intelligence (AI), detecting the research topics or sub-fields in related AI publications. Findings: Evaluating the performance of the proposed method using clustering performance indicators shows that it outperforms similar approaches on the benchmark dataset. Using the proposed method, we also show how the topics have evolved over the past 30 years, taking advantage of a keyword extraction method for cluster tagging and labeling and demonstrating the context of the topics. Research limitations: We noticed that it is not possible to generalize one solution for all downstream tasks; solutions must be fine-tuned or optimized for each task and even each dataset. In addition, the interpretation of cluster labels can be subjective and vary with the reader's opinion. It is also very difficult to evaluate the labeling techniques, which further limits the explanation of the clusters. Practical implications: As demonstrated in the case study, we show how, in a real-world example, the proposed method enables researchers and reviewers of academic research to detect, summarize, analyze and visualize research topics from decades of academic documents. This helps the scientific community and all related organizations in fast and effective analysis of the fields, by establishing and explaining the topics. Originality/value: In this study, we introduce a modified and tuned deep embedding clustering method coupled with Doc2Vec representations for topic extraction. We also use a concept extraction method as a labeling approach. The effectiveness of the method has been evaluated in a case study of AI publications, in which we analyze AI topics over the past three decades.
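To make the embed-then-cluster pipeline concrete, here is a greatly simplified stand-in: the paper uses Doc2Vec embeddings refined by deep embedded clustering, while this sketch substitutes bag-of-words vectors and plain k-means on four toy titles (all data and parameters are illustrative):

```python
import random
from collections import Counter

def embed(doc, vocab):
    """Toy embedding: bag-of-words counts over a fixed vocabulary."""
    counts = Counter(doc.lower().split())
    return [counts[w] for w in vocab]

def kmeans(vectors, k, iters=20, seed=0):
    """Plain Lloyd's k-means with squared Euclidean distance."""
    rng = random.Random(seed)
    centers = rng.sample(vectors, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for v in vectors:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centers[c])))
            groups[i].append(v)
        # Recompute each center as its group's mean; keep empty centers
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else centers[i]
                   for i, g in enumerate(groups)]
    return [min(range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centers[c])))
            for v in vectors]

docs = ["neural network training", "deep neural network",
        "queueing theory model", "markov queueing model"]
vocab = sorted({w for d in docs for w in d.lower().split()})
labels = kmeans([embed(d, vocab) for d in docs], k=2)
print(labels)
```

In the paper's pipeline, a cluster's label would then be produced by the keyword extraction step rather than read off the raw vectors.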
Abstract: The issue of document management has been raised for a long time, especially since the appearance of office automation in the 1980s, which led to dematerialization and Electronic Document Management (EDM). In the same period, workflow management experienced significant development, but became more focused on industry. It seems to us, however, that document workflows have not received the same interest from the scientific community. Nowadays, the emergence and supremacy of the Internet in electronic exchanges are leading to a massive dematerialization of documents, which requires a conceptual reconsideration of the organizational framework for processing those documents in both public and private administrations. This problem seems open to us and deserves the interest of the scientific community. Indeed, EDM has mainly focused on the storage (referencing) and circulation (traceability) of documents; it has paid little attention to the overall behavior of the system in processing documents. The purpose of our research is to model document processing systems. In previous works, we proposed a general model and its specialization to the case of small documents (any document processed by a single person at a time during its processing life cycle), which represent 70% of the documents processed by administrations, according to our study. In this contribution, we extend the model for processing small documents to the case where they are managed in a system comprising document classes organized into subclasses, which is the case in most administrations. We have thus observed that this model is a network of Markovian M^(L×K)/M^(L×K)/1 queues. We have analyzed the constraints of this model and deduced certain characteristics and metrics.
In fine, the ultimate objective of our work is to design a document workflow management system integrating a component for predicting global behavior.
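For context, the closed-form metrics of a single M/M/1 queue — the elementary building block of the M^(L×K)/M^(L×K)/1 network mentioned above — can be computed directly. The arrival and service rates below are illustrative, not taken from the paper:

```python
def mm1_metrics(lam, mu):
    """Standard M/M/1 results: lam = arrival rate of documents,
    mu = service rate; the queue is stable only when rho = lam/mu < 1."""
    if lam >= mu:
        raise ValueError("unstable queue: requires lambda < mu")
    rho = lam / mu                 # server utilization
    L = rho / (1 - rho)            # mean number of documents in system
    W = 1 / (mu - lam)             # mean time a document spends in system
    Lq = rho ** 2 / (1 - rho)      # mean queue length (waiting only)
    Wq = rho / (mu - lam)          # mean waiting time before processing
    return {"rho": rho, "L": L, "W": W, "Lq": Lq, "Wq": Wq}

m = mm1_metrics(lam=4.0, mu=5.0)   # e.g. 4 documents/hour arrive, 5 served
print(m)  # rho = 0.8, W = 1.0, L ≈ 4.0, Lq ≈ 3.2, Wq = 0.8
```

Note that Little's law holds as a sanity check: L = λW and Lq = λWq.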
Abstract: While XML web services are becoming recognized as a solution for business-to-business transactions, many problems remain to be solved. For example, it is not easy to manipulate business documents in existing standards such as RosettaNet and UN/EDIFACT EDI, traditionally regarded as an important resource for managing B2B relationships. As a starting point for a complete implementation of B2B web services, this paper deals with how to support B2B business documents in XML web services. In the first phase, basic requirements for driving XML web services by business documents are introduced. As a solution, this paper presents how to express B2B business documents in WSDL, a core standard for XML web services. This approach facilitates the reuse of existing business documents and enhances interoperability between implemented web services. Furthermore, it suggests how to link with other conceptual modeling frameworks such as ebXML/UMM, built on a rich heritage of electronic business experience.
Abstract: Campus network establishment belongs to the field of systems engineering and requires cooperation among departments. Standardization is the key to solving this problem, and its core is the standardization of documents. This paper therefore discusses the relevant problems in combination with our campus network practice.