Fund: Weaponry Equipment Pre-Research Foundation of PLA Equipment Ministry (No. 9140A06050409JB8102); Pre-Research Foundation of PLA University of Science and Technology (No. 2009JSJ11)
Abstract: To solve the query processing correctness problem for semantic-based relational data integration, the semantics of SPARQL (SPARQL Protocol and RDF Query Language) queries is defined. In the course of query rewriting, all relevant tables are found and decomposed into minimal connectable units. The minimal connectable units are joined according to the semantic queries to produce semantically correct query plans. Algorithms for query rewriting and transforming are presented, and their computational complexity is discussed. In the worst case, the query decomposing algorithm finishes in O(n^2) time and the query rewriting algorithm requires O(nm) time. The performance of the algorithms is verified by experiments; the results show that when the query length is less than 8, the query processing algorithms provide satisfactory performance.
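As a rough illustration of this kind of rewriting (not the paper's actual algorithm), the following Python sketch translates a SPARQL basic graph pattern into a single SQL join over relational tables, using an assumed predicate-to-table mapping; each mapped table plays the role of a minimal connectable unit, and units are connected by equating the columns bound to shared variables.

```python
# Hypothetical sketch of SPARQL-to-SQL rewriting over a relational source.
# The predicate-to-table mapping and the example schema are illustrative
# assumptions, not the mapping used in the paper.

PREDICATE_MAP = {
    # predicate IRI -> (table, subject column, object column)
    "ex:worksFor": ("employee", "emp_id", "dept_id"),
    "ex:deptName": ("department", "dept_id", "name"),
}

def rewrite_bgp(triple_patterns):
    """Rewrite a SPARQL basic graph pattern (list of (s, p, o) triples,
    variables prefixed with '?') into a single SQL join query."""
    tables, conditions, bindings = [], [], {}
    for i, (s, p, o) in enumerate(triple_patterns):
        table, s_col, o_col = PREDICATE_MAP[p]
        alias = f"t{i}"
        tables.append(f"{table} {alias}")
        for term, col in ((s, s_col), (o, o_col)):
            ref = f"{alias}.{col}"
            if term.startswith("?"):            # join variable
                if term in bindings:
                    conditions.append(f"{ref} = {bindings[term]}")
                else:
                    bindings[term] = ref
            else:                               # constant
                conditions.append(f"{ref} = '{term}'")
    select = ", ".join(f"{ref} AS {var[1:]}" for var, ref in bindings.items())
    where = " AND ".join(conditions) if conditions else "1=1"
    return f"SELECT {select} FROM {', '.join(tables)} WHERE {where}"

print(rewrite_bgp([("?e", "ex:worksFor", "?d"), ("?d", "ex:deptName", "Sales")]))
```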
Fund: Supported by the National Natural Science Foundation of China (60073045)
Abstract: In this paper, the constrained K closest pairs query is introduced, which retrieves the K closest pairs satisfying a given spatial constraint from two datasets. For datasets indexed by R-trees in spatial databases, three algorithms are presented for answering this kind of query. Among them, the two-phase Range+Join and Join+Range algorithms adopt the strategy of changing the execution order of the range and closest-pairs queries, while the constrained heap-based algorithm uses extended distance functions to prune the search space and minimize the pruning distance. Experimental results show that the constrained heap-based algorithm has better applicability and performance than the two-phase algorithms.
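The heap-based idea can be sketched as follows under simplifying assumptions: instead of traversing R-trees, both point sets are first filtered by the constraint rectangle (the "Range" step), and a bounded max-heap keeps the current K best pairs, so a candidate pair is pruned as soon as its distance exceeds the current K-th smallest distance.

```python
# Simplified, hypothetical sketch of a heap-based constrained K-closest-pairs
# search. Real systems would traverse two R-trees and prune whole subtrees by
# minimum distance; here we scan point lists and keep only the
# prune-by-current-Kth-distance idea.
import heapq, math

def constrained_k_closest_pairs(a_pts, b_pts, k, rect):
    """Return the k closest (distance, p, q) pairs with both points inside
    rect = (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = rect
    inside = lambda p: xmin <= p[0] <= xmax and ymin <= p[1] <= ymax
    a_pts = [p for p in a_pts if inside(p)]        # "Range" step
    b_pts = [q for q in b_pts if inside(q)]
    heap = []                                      # max-heap of size k (negated distances)
    for p in a_pts:
        for q in b_pts:
            d = math.dist(p, q)
            if len(heap) < k:
                heapq.heappush(heap, (-d, p, q))
            elif d < -heap[0][0]:                  # prune: must beat the current Kth distance
                heapq.heapreplace(heap, (-d, p, q))
    return sorted((-d, p, q) for d, p, q in heap)

print(constrained_k_closest_pairs([(0, 0), (2, 2)], [(1, 1), (9, 9)], 2,
                                  (0, 0, 5, 5)))
```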
Abstract: The idea of a positional inverted index is exploited for indexing graph databases. The main idea is the use of hashing tables to prune the considerable portion of the graph database that cannot contain the answer set. These tables are implemented using column-based techniques and are used to store the graphs of the database, frequent sub-graphs, and the neighborhoods of nodes. For exact checking of the remaining graphs, a vertex invariant is used for the isomorphism test, which can be implemented in parallel. The evaluation results indicate that the proposed method outperforms existing methods.
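A minimal sketch of the filter-then-verify flow, assuming labeled edges as the indexed features (the paper's positional and neighborhood tables are richer): an inverted index from features to graph ids prunes every graph that lacks some feature of the query, and only the surviving candidates would be passed to the exact isomorphism test.

```python
# Hypothetical sketch of hash-based pruning for a graph database. Feature
# choice (labeled edges) and the verification step are placeholders.
from collections import defaultdict

def edge_features(graph):
    """graph: list of (label_u, label_v) labeled edges."""
    return {tuple(sorted(e)) for e in graph}

def build_index(graphs):
    index = defaultdict(set)
    for gid, g in graphs.items():
        for f in edge_features(g):
            index[f].add(gid)
    return index

def candidates(index, query, universe):
    cand = set(universe)
    for f in edge_features(query):
        cand &= index.get(f, set())   # graphs missing any query feature are pruned
    return cand

graphs = {1: [("A", "B"), ("B", "C")], 2: [("A", "C")], 3: [("A", "B")]}
idx = build_index(graphs)
print(candidates(idx, [("A", "B")], graphs))   # {1, 3} go on to exact isomorphism checking
```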
Fund: This research was supported in part by the Natural Science Foundation of China (Nos. 62262033, 61962029, 61762055, 62062045 and 62362042), the Jiangxi Provincial Natural Science Foundation of China (Nos. 20224BAB202012, 20202ACBL202005 and 20202BAB212006), the Science and Technology Research Project of Jiangxi Education Department (Nos. GJJ211815, GJJ2201914 and GJJ201832), the Hubei Natural Science Foundation Innovation and Development Joint Fund Project (No. 2022CFD101), the Xiangyang High-Tech Key Science and Technology Plan Project (No. 2022ABH006848), the Hubei Superior and Distinctive Discipline Group of "New Energy Vehicle and Smart Transportation", the Project of Zhejiang Institute of Mechanical & Electrical Engineering, and the Jiangxi Provincial Social Science Foundation of China (No. 23GL52D).
Abstract: In a cloud environment, outsourced graph data is widely used by companies, enterprises, medical institutions, and so on. Data owners and users can save costs and improve efficiency by storing large amounts of graph data on cloud servers. However, servers on cloud platforms are subject to various subjective or objective attacks, which leave the outsourced graph data in an insecure state. Privacy protection has therefore become an important obstacle to data sharing and usage, and how to query outsourced graph data safely and effectively has become a research focus. The adjacency query is a basic and frequently used operation on graphs, and its range and capability are greatly extended if multi-keyword fuzzy search is supported at the same time. This work proposes to protect the privacy of outsourced graph data by encryption; it mainly studies the problem of multi-keyword fuzzy adjacency queries and puts forward a solution. In our scheme, a Bloom filter and an encryption mechanism are used to build a secure index and query tokens, and adjacency queries are executed on the cloud server through these indexes and tokens. The proposed scheme is proved secure by formal analysis, and its performance and effectiveness are illustrated by experimental analysis. The results of this work provide solid theoretical and technical support for the further popularization and application of encrypted graph data processing technology.
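A minimal sketch of the index-building step, assuming a plain (unencrypted) Bloom filter per vertex over the keywords of its neighbors; the encryption of the index and of the query tokens, which is the core of the scheme, is omitted here, and the filter size and hash count are assumptions.

```python
# Hypothetical sketch of a Bloom-filter index for keyword-based adjacency lookups.
import hashlib

class BloomFilter:
    def __init__(self, m=256, k=4):
        self.m, self.k, self.bits = m, k, 0
    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m
    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos
    def might_contain(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

def build_adjacency_index(graph_keywords):
    """graph_keywords: node -> list of keywords describing its neighbors."""
    index = {}
    for node, kws in graph_keywords.items():
        bf = BloomFilter()
        for kw in kws:
            bf.add(kw)
        index[node] = bf
    return index

index = build_adjacency_index({"v1": ["cardiology", "oncology"], "v2": ["finance"]})
print(index["v1"].might_contain("cardiology"), index["v2"].might_contain("oncology"))
```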
Fund: Partially supported by NSFC under Grant Nos. 61832001 and 62272008, and the ZTE Industry-University-Institute Fund Project.
Abstract: Query processing in distributed database management systems (DBMSs) faces more challenges than in a single-node DBMS, where query optimization is already an NP-hard problem: there are more operators, and more factors in the cost models and meta-data. Learned query optimizers (mainly for single-node DBMSs) have received attention due to their capability to capture data distributions and their flexibility in avoiding hand-crafted rules when refining plans and adapting to new hardware. In this paper, we focus on extending learned query optimizers to distributed DBMSs. Specifically, we propose one possible but general architecture for a learned query optimizer in the distributed context and highlight the differences from learned optimizers in the single-node case. In addition, we discuss the challenges and possible solutions.
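One way to picture the learned component is as a cost model that scores candidate distributed plans from simple features; in the sketch below the linear weights are placeholders for a model that would be trained on observed latencies, and the feature set (joins, shuffles, estimated rows, node count) is an assumption rather than the paper's design.

```python
# Hypothetical sketch of the "learned cost model" part of a learned optimizer
# in a distributed setting: candidate plans are featurized and ranked.

FEATURES = ["num_joins", "num_shuffles", "est_rows", "num_nodes"]
WEIGHTS = {"num_joins": 2.0, "num_shuffles": 5.0, "est_rows": 0.001, "num_nodes": -0.5}

def featurize(plan):
    return [float(plan.get(f, 0)) for f in FEATURES]

def predicted_cost(plan):
    # stand-in for a trained regressor over plan features
    return sum(WEIGHTS[f] * x for f, x in zip(FEATURES, featurize(plan)))

def choose_plan(candidate_plans):
    return min(candidate_plans, key=predicted_cost)

plans = [
    {"name": "broadcast-join", "num_joins": 2, "num_shuffles": 1, "est_rows": 1e4, "num_nodes": 8},
    {"name": "shuffle-join",   "num_joins": 2, "num_shuffles": 3, "est_rows": 1e4, "num_nodes": 8},
]
print(choose_plan(plans)["name"])
```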
Fund: Funded by the Natural Science Foundation of China (No. 60873030), the National High Technology Research and Development Program of China (No. 2007AA01Z309), and the Defense Pre-Research Foundation of China (No. 9140A04010209JW0504 and No. 9140A15040208JW0501)
Abstract: A defining characteristic of continuous queries over on-line data streams, possibly bounded by sliding windows, is the potentially infinite and time-evolving nature of their inputs and outputs. For different update patterns of continuous queries, suitable data structures bring great gains in query processing efficiency. In this paper, we propose a data structure suited to the weak non-monotonic update pattern, in which the lifetime of each tuple is known at generation time but the lengths of the lifetimes are not necessarily the same. The new data structure combines the ladder queue with the features of the weak non-monotonic update pattern. Experimental results show that the new data structure performs much better than the traditional calendar queue in many cases.
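A simplified sketch of the underlying idea, with a single level of fixed-width time buckets standing in for the ladder queue's rungs: because each tuple's expiration time is known on insertion, advancing the clock expires whole buckets at once instead of scanning every live tuple. The bucket width and the single-level layout are assumptions.

```python
# Hypothetical, simplified expiry structure for tuples with known lifetimes.
from collections import defaultdict

class BucketedExpiry:
    def __init__(self, bucket_width=10):
        self.width = bucket_width
        self.buckets = defaultdict(list)   # bucket index -> (expire_at, tuple) entries
        self.now = 0

    def insert(self, tup, expire_at):
        self.buckets[expire_at // self.width].append((expire_at, tup))

    def advance(self, new_time):
        """Move the clock forward and return every tuple that has expired."""
        expired = []
        for b in sorted(k for k in self.buckets if k <= new_time // self.width):
            keep = []
            for expire_at, tup in self.buckets.pop(b):
                (expired if expire_at <= new_time else keep).append((expire_at, tup))
            if keep:
                self.buckets[b] = keep
        self.now = new_time
        return [t for _, t in expired]

q = BucketedExpiry()
q.insert("a", 7); q.insert("b", 12); q.insert("c", 25)
print(q.advance(15))   # ['a', 'b'] expire; 'c' stays live
```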
Abstract: As one of the commonly used queries in modern databases, the skyline query has received extensive attention from the database research community. The uncertainty of data in wireless sensor networks makes the corresponding skyline uncertain and not unique. This paper investigates the Pr-Skyline problem, i.e., how to compute the skyline with the highest existence probability in a computationally and energy-efficient way. We formulate the problem and prove that it is NP-complete and cannot be approximated within a given expression. However, the proposed algorithm SKY-SEARCH with pruning techniques guarantees computational efficiency for relatively large input sizes, while the filter-based distributed optimization strategy significantly reduces the transmission cost and the required storage space of the sensor nodes. Extensive experiments verify the efficiency and scalability of SKY-SEARCH and the distributed optimization strategy.
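The following sketch is not SKY-SEARCH; it only illustrates, for independent tuples, how dominance interacts with existence probabilities: a tuple appears in the skyline exactly when it exists and none of its dominators do. The example readings are made up.

```python
# Hypothetical sketch: dominance test plus per-tuple skyline membership
# probability under an independence assumption.

def dominates(a, b):
    """a dominates b if a is no worse in every dimension and strictly better
    in at least one (smaller is better here)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline_membership_prob(points):
    """points: list of (coords, existence_probability)."""
    result = []
    for coords, p in points:
        prob = p
        for other, q in points:
            if other != coords and dominates(other, coords):
                prob *= (1 - q)        # survives only if the dominator is absent
        result.append((coords, round(prob, 4)))
    return result

readings = [((1, 4), 0.9), ((2, 2), 0.6), ((3, 3), 0.8)]
print(skyline_membership_prob(readings))
```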
Abstract: With the rapid development of artificial intelligence, large language models (LLMs) have demonstrated remarkable capabilities in natural language understanding and generation. These models have great potential to enhance database query systems, enabling more intuitive and semantic query mechanisms. The proposed model leverages the LLM's deep learning architecture to interpret natural language queries and translate them into accurate database queries. The system integrates an LLM-powered semantic parser that translates user input into structured queries that can be understood by the database management system. First, the user query is pre-processed: the text is normalized and ambiguity is removed. This is followed by semantic parsing, where the LLM interprets the pre-processed text and identifies key entities and relationships. Query generation then converts the parsed information into a structured query format tailored to the target database schema. Finally, query execution and feedback run the resulting query on the database and return the results to the user. The system also provides feedback mechanisms to improve and optimize future query interpretations. With advanced LLMs used for the implementation and fine-tuned on diverse datasets, experimental results show that the proposed method significantly improves the accuracy and usability of database queries, making data retrieval easy for users without specialized knowledge.
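A minimal end-to-end sketch of the four stages, with the LLM call replaced by a stub and an assumed single-table schema; in a real deployment `semantic_parse` would invoke the fine-tuned model, and its output would be validated against the schema before execution.

```python
# Hypothetical pipeline: preprocess -> semantic parse (LLM stub) -> SQL generation -> execution.
import re
import sqlite3

SCHEMA = {"patients": ["id", "name", "age", "department"]}   # assumed schema

def preprocess(text):
    return re.sub(r"\s+", " ", text).strip().lower()         # normalize whitespace and case

def semantic_parse(text):
    """Stand-in for the LLM: returns the entities/relations it would extract."""
    return {"table": "patients", "filter_column": "department",
            "filter_value": "cardiology", "select": ["name", "age"]}

def generate_sql(parsed):
    cols = ", ".join(c for c in parsed["select"] if c in SCHEMA[parsed["table"]])
    return (f"SELECT {cols} FROM {parsed['table']} "
            f"WHERE {parsed['filter_column']} = ?", (parsed["filter_value"],))

def execute(query, params):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE patients(id INTEGER, name TEXT, age INTEGER, department TEXT)")
    conn.execute("INSERT INTO patients VALUES (1, 'Lee', 54, 'cardiology')")
    rows = conn.execute(query, params).fetchall()
    conn.close()
    return rows

text = preprocess("Show the name and age of patients in Cardiology")
sql, params = generate_sql(semantic_parse(text))
print(sql, execute(sql, params))
```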
Fund: This work is supported by the Ministry of Information & Communications, Korea, under the Information Technology Research Center (ITRC) Support Program.
Abstract: Recently, new techniques for efficiently managing the current and past location information of moving objects have received significant interest in the areas of moving object databases and location-based service systems. In this paper, we explore query processing schemes for location management systems that consist of multiple data processing nodes handling massive volumes of moving objects such as cellular phone users. To show the usefulness of the proposed schemes, experimental results covering performance factors of distributed query processing are presented. In our experiments, we use two kinds of data sets: one generated by the extended GSTD simulator, and the other generated by a real-time data generator that produces location sensing reports for various types of users with different movement patterns.
Abstract: Cleaning duplicate data is a major problem that persists even though much work has been done to solve it, due to the exponential growth of the amount of data to be processed and the need for scalable, fast algorithms. The problem depends on the type and quality of the data and differs according to the volume of the data sets manipulated. In this paper, we introduce a novel framework based on an extended fuzzy C-means algorithm that uses a topic ontology. This work aims to improve OLAP querying over heterogeneous data warehouses containing big data sets by improving the integration of query results, eliminating redundancies with the extended classification algorithm, and measuring the loss of information.
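A plain fuzzy C-means sketch on numeric record features, as a stand-in for the paper's extended, ontology-aware variant: records whose memberships concentrate in the same cluster become candidate duplicates. The feature encoding and parameters are illustrative assumptions.

```python
# Plain fuzzy C-means (m = fuzzifier); the ontology-aware extension is not reproduced.
import math, random

def fuzzy_c_means(points, c=2, m=2.0, iters=50, seed=0):
    random.seed(seed)
    centers = random.sample(points, c)
    for _ in range(iters):
        # membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        u = []
        for x in points:
            dists = [max(math.dist(x, v), 1e-9) for v in centers]
            u.append([1.0 / sum((d_j / d_k) ** (2 / (m - 1)) for d_k in dists)
                      for d_j in dists])
        # center update: weighted mean with weights u_ij^m
        centers = []
        for j in range(c):
            w = [row[j] ** m for row in u]
            centers.append(tuple(sum(wi * xi[d] for wi, xi in zip(w, points)) / sum(w)
                                 for d in range(len(points[0]))))
    return centers, u

# toy records: (normalized name length, normalized price); near-duplicates cluster together
records = [(0.10, 0.20), (0.11, 0.21), (0.80, 0.90), (0.79, 0.88)]
centers, memberships = fuzzy_c_means(records)
print([max(range(len(row)), key=row.__getitem__) for row in memberships])
```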
Abstract: With the rapid growth of spatial data, POIs (Points of Interest) are becoming ever more dense, and the text description of each spatial point is also gradually increasing. Traditional query methods can only handle short text descriptions and single-keyword queries. In view of this situation, the paper proposes an approximate matching algorithm that supports spatial multi-keyword queries. A fuzzy matching algorithm is integrated into it, so the method not only supports multi-keyword POI queries but also tolerates errors in the query keywords. Simulation results demonstrate that the proposed algorithm improves the accuracy and efficiency of queries.
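A sketch of the fault-tolerant matching idea, assuming edit distance as the fuzzy keyword test and a circular query region as the spatial constraint; the matching rule and threshold below are illustrative, not the paper's algorithm.

```python
# Hypothetical fuzzy multi-keyword POI matching: a keyword matches if some
# description term is within a small edit distance, and POIs are filtered by radius.
import math

def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def fuzzy_poi_query(pois, keywords, center, radius, max_edits=1):
    """pois: list of (name, (x, y), description terms)."""
    hits = []
    for name, loc, terms in pois:
        if math.dist(loc, center) > radius:
            continue                                   # spatial filter
        matched = sum(1 for kw in keywords
                      if any(edit_distance(kw, t) <= max_edits for t in terms))
        if matched == len(keywords):                   # every keyword matched fuzzily
            hits.append(name)
    return hits

pois = [("CafeOne", (1, 1), ["coffee", "wifi"]),
        ("BookHub", (2, 2), ["books", "coffee"])]
print(fuzzy_poi_query(pois, ["cofee", "wifi"], center=(0, 0), radius=3))  # ['CafeOne']
```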
Fund: Funded by the Ministry of Industry and Information Technology of the People's Republic of China [Grant No. 2018473].
Abstract: Quality traceability plays an essential role in assembling and welding offshore platform blocks. Improving the welding quality traceability system is conducive to improving the durability of offshore platforms and the process level of the offshore industry. Currently, quality management remains at a primary information level, and there is a lack of effective tracking and recording of welding quality data. When welding defects are encountered, it is difficult to rapidly and accurately determine the root cause of the problem from the complex and scattered quality data. In this paper, a composite welding quality traceability model for the offshore platform block construction process is proposed; it contains a quality early-warning method based on long short-term memory (LSTM) and a quality data backtracking query optimization algorithm. Once the early-warning model is trained and the query optimization algorithm is implemented, the traceability model can assist enterprises in rapidly identifying and locating quality problems. Furthermore, the model and the traceability algorithm are checked against cases from actual working conditions. The verification analyses suggest that the proposed early-warning model for welding quality and the algorithm for optimizing backtracking queries are effective and can be applied to the actual construction process.
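A minimal sketch of what the LSTM-based early-warning component could look like (using PyTorch; the architecture, input features, and decision threshold are assumptions, and no trained weights from the paper are reproduced): a window of welding sensor readings is mapped to a defect-development probability.

```python
# Hypothetical LSTM early-warning sketch for welding parameter sequences
# (e.g., current, voltage, wire speed per time step).
import torch
import torch.nn as nn

class WeldEarlyWarning(nn.Module):
    def __init__(self, n_features=3, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)      # probability that a defect is developing

    def forward(self, x):                     # x: (batch, time, n_features)
        out, _ = self.lstm(x)
        return torch.sigmoid(self.head(out[:, -1, :]))   # score from the last time step

model = WeldEarlyWarning()
window = torch.randn(1, 20, 3)                # one 20-step window of 3 sensor channels
warning_prob = model(window).item()
print("raise early warning" if warning_prob > 0.5 else "normal", round(warning_prob, 3))
```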
Abstract: This study reviews the development history and research status of large language models (LLMs) and data query robots (DQRs). Through empirical analysis, it examines the practical effectiveness of LLM-based DQRs in the field of digital medicine and their role in the complex tasks of querying and analyzing medical data, confirming that an LLM-based DQR can provide non-technical users with an intuitive and convenient tool that significantly improves the efficiency of medical data queries and analysis. In addition, the paper discusses the limitations of current LLM and DQR applications and their future development potential, providing a reference for further research and application.