Visual data mining is an important branch of data mining. Most visual data mining techniques are based on computer graphics, but few exploit image processing. This paper proposes an image processing method, named RNAM (resemble neighborhood averaging method), to facilitate visual data mining: it post-processes the data mining result image and helps users discover significant features and useful patterns effectively. Experiments show that the method is intuitive, easy to understand, and effective, and it provides a new approach to visual data mining.
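The abstract does not spell out RNAM beyond its name; as a rough illustration of the underlying idea, post-processing a result image by neighborhood averaging, here is a minimal NumPy/SciPy sketch in which the window size, the synthetic image, and the thresholding rule are all assumptions, not the published method.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Hypothetical data mining result image: pixel intensity encodes record density.
rng = np.random.default_rng(0)
result_image = rng.random((256, 256))

# Generic neighborhood averaging: replace each pixel with the mean of its
# k x k neighborhood, suppressing isolated noise so coherent regions stand out.
# This is only the averaging step RNAM's name suggests, not the actual method.
k = 5  # assumed window size
smoothed = uniform_filter(result_image, size=k)

# Flag pixels that stay bright after smoothing as candidate significant regions.
significant = smoothed > smoothed.mean() + smoothed.std()
print(f"{significant.sum()} of {significant.size} pixels flagged")
```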
In order to explore the travel characteristics and space-time distribution of different groups of bikeshare users, an online analytical processing (OLAP) tool called a data cube was used to process and display multi-dimensional data. We extended the traditional three-dimensional data cube to four dimensions, namely space, date, time, and user, each with a user-specified hierarchy, and took transaction count and travel time as two quantitative measures. The results suggest two distinct transaction peaks during the morning and afternoon rush hours on weekdays, while weekend volume is distributed approximately evenly. Bad weather significantly restricts bikeshare usage. Moreover, seamless smartcard users generally take longer trips than exclusive smartcard users, and non-native users ride faster than native users. These findings not only support the applicability and efficiency of the data cube for visualizing massive smartcard data, but also raise equity concerns among bikeshare users with different demographic backgrounds.
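As an illustration, one cuboid of such a four-dimensional cube can be emulated in pandas; the column names, hierarchy levels, and sample records below are assumptions rather than the authors' schema.

```python
import pandas as pd

# Hypothetical smartcard transactions; the schema is assumed, not the paper's.
df = pd.DataFrame({
    "station":    ["A", "A", "B", "B"],        # space dimension
    "date":       pd.to_datetime(["2019-05-06", "2019-05-06",
                                  "2019-05-11", "2019-05-11"]),
    "hour":       [8, 18, 10, 14],             # time dimension
    "user_type":  ["native", "non-native", "seamless", "exclusive"],
    "travel_min": [12.0, 15.5, 22.0, 18.0],
})
df["is_weekend"] = df["date"].dt.dayofweek >= 5  # one level of the date hierarchy

# One cuboid of the 4D cube: (date level, time, user) with the study's two
# measures, transaction count and travel time.
cuboid = df.groupby(["is_weekend", "hour", "user_type"]).agg(
    transactions=("travel_min", "size"),
    avg_travel_min=("travel_min", "mean"),
)
print(cuboid)
```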
This study explores the application of Bayesian analysis based on neural networks and deep learning in data visualization. The background is that, as data grow in volume and complexity, traditional data analysis methods can no longer meet practical needs. The research methods include building neural network and deep learning models, optimizing and improving them through Bayesian analysis, and applying them to the visualization of large-scale data sets. The results show that combining neural networks and deep learning with Bayesian analysis can effectively improve the accuracy and efficiency of data visualization and enhance the intuitiveness and depth of data interpretation. The significance of the research is that it provides a new solution for data visualization in the big data environment and helps advance the development and application of data science.
This study addresses the challenges of big data visualization through data reduction based on feature selection, aiming to reduce the volume of big data and minimize model training time (Tt) while maintaining data quality. We compare an embedded method, "select from model" (SFM) driven by the random forest importance (RFI) algorithm, with a filter method, "select percentile" (SP) based on the chi-square (Chi2) statistic, for selecting the most important features. The selected features are then fed into classification using the logistic regression (LR) and k-nearest neighbor (KNN) algorithms. The classification accuracy (AC) of LR is compared with that of KNN, in Python, on eight data sets to determine which combination performs best when feature selection is applied. The study concludes that feature selection has a significant impact on the analysis and visualization of the data once redundant data and data that do not affect the goal are removed. After several comparisons, the study recommends SFMLR: SFM with the RFI algorithm for feature selection, combined with the LR algorithm for classification. The proposal proved its efficacy in comparisons with results from the recent literature.
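The quoted method names map directly onto scikit-learn, which is presumably the library behind them; a minimal sketch of the two pipelines on a stand-in dataset (the real study uses eight data sets not named here):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel, SelectPercentile, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Embedded method: SelectFromModel driven by random forest importances (SFM + RFI).
sfm = SelectFromModel(
    RandomForestClassifier(n_estimators=100, random_state=0)
).fit(X_tr, y_tr)
# Filter method: SelectPercentile with the chi-square score (SP + Chi2).
sp = SelectPercentile(chi2, percentile=50).fit(X_tr, y_tr)

for name, selector in [("SFM+RFI", sfm), ("SP+Chi2", sp)]:
    for clf in (LogisticRegression(max_iter=5000), KNeighborsClassifier()):
        acc = clf.fit(selector.transform(X_tr), y_tr).score(
            selector.transform(X_te), y_te)
        print(f"{name} -> {type(clf).__name__}: accuracy {acc:.3f}")
```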
The Growth Value Model (GVM) proposes theoretical closed-form formulas, consisting of return on equity (ROE) and the price-to-book ratio (P/B), for fair stock prices and expected rates of return. Although regression analysis can verify these closed-form formulas, they cannot be explored intuitively with classical quintile or decile sorting approaches because of their multi-factor and dynamic nature. This article uses visualization techniques to help explore GVM intuitively. The key finding and contribution of this paper is the concept of the smart frontier, which can be regarded as the reasonable lower limit of P/B at a specific ROE, obtained by exploring fair P/B with a 2D ROE-P/B dynamic-process visualization. The coefficients of the formula can be determined by quantile regression on market data. The moving paths of ROE and P/B in the current and subsequent quarters show that, after the portfolios are formed, portfolios at the lower right of the curve approach the curve and stagnate there. Furthermore, exploring expected rates of return with a 3D ROE-P/B-return dynamic-process visualization shows that data outside the lower-right edge of the smart frontier have positive quarterly returns not only in quarter t+1 but also in quarter t+2; the farther the data in quarter t lie from the smart frontier, the larger the returns in quarters t+1 and t+2.
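The "reasonable lower limit of P/B at a specific ROE" can be estimated with a low-quantile regression; a sketch with statsmodels, where the linear functional form, the 5% quantile, and the synthetic data are assumptions, since the paper's exact formula is not given in the abstract.

```python
import numpy as np
import statsmodels.api as sm

# Synthetic stand-in for quarterly (ROE, P/B) market data: noise only pushes
# P/B upward, so a lower envelope exists by construction.
rng = np.random.default_rng(1)
roe = rng.uniform(0.0, 0.3, 500)
pb = 1.0 + 4.0 * roe + rng.gamma(2.0, 0.3, 500)

# Fit a low quantile (assumed q = 0.05) of P/B on ROE; the fitted line plays
# the role of the "smart frontier", a lower envelope of fair P/B.
X = sm.add_constant(roe)
frontier = sm.QuantReg(pb, X).fit(q=0.05)
print(frontier.params)  # intercept and slope of the estimated frontier

# A portfolio sits "outside the lower-right edge" if its P/B is below the line.
below = pb < frontier.predict(X)
print(f"{below.sum()} of {len(pb)} points lie below the estimated frontier")
```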
Microsoft Excel is essential for the End-User Approach (EUA), offering versatility in data organization, analysis, and visualization, as well as widespread accessibility; it fosters collaboration and informed decision-making across diverse domains. Conversely, Python is indispensable for professional programming thanks to its versatility, readability, extensive libraries, and robust community support, enabling efficient development, advanced data analysis, data mining, and automation across industries and applications. However, one primary issue when using Microsoft Excel with Python libraries is compatibility and interoperability. While Excel is a widely used tool for data storage and analysis, it may not integrate seamlessly with Python libraries, leading to challenges in reading and writing data, especially in complex or large datasets. Additionally, manipulating Excel files with Python may not always preserve formatting or formulas accurately, potentially affecting data integrity. Moreover, depending on Excel's graphical user interface (GUI) for automation can limit scalability and reproducibility compared with Python's scripting capabilities. This paper covers an integration solution that empowers non-programmers to leverage Python's capabilities within the familiar Excel environment, enabling users to perform advanced data analysis and automation tasks without extensive programming knowledge. Based on feedback solicited from non-programmers who tested the integration solution, the case study evaluates its ease of implementation, its performance, and the compatibility of Python with different Excel versions.
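The read/write and formula-preservation issues described are commonly worked around with libraries such as pandas and openpyxl, not necessarily the paper's own integration layer; a minimal round-trip sketch with assumed file, sheet, and column names:

```python
import pandas as pd
from openpyxl import load_workbook

# Read a sheet into a DataFrame for analysis; file, sheet, and columns assumed.
df = pd.read_excel("sales.xlsx", sheet_name="Q1")
df["margin"] = df["revenue"] - df["cost"]

# Writing back with pandas recreates the file: cell values survive, but the
# source workbook's formulas and formatting do not; that is precisely the
# data-integrity concern the paper raises.
df.to_excel("sales_analyzed.xlsx", index=False)

# openpyxl keeps formulas as stored text when data_only=False.
wb = load_workbook("sales.xlsx", data_only=False)
print(wb["Q1"]["B2"].value)  # e.g. "=SUM(...)" if B2 holds a formula
```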
Gestational Diabetes Mellitus (GDM) is a significant health concern affecting pregnant women worldwide. It is characterized by elevated blood sugar levels during pregnancy and poses risks to both maternal and fetal health. Maternal complications of GDM include an increased risk of developing type 2 diabetes later in life, as well as hypertension and preeclampsia during pregnancy. Fetal complications may include macrosomia (large birth weight), birth injuries, and an increased risk of developing metabolic disorders later in life. Understanding the demographics, risk factors, and biomarkers associated with GDM is crucial for effective management and prevention strategies. This research addresses these aspects through the analysis of a dataset comprising 600 pregnant women. By exploring the demographics of the dataset and employing data modeling techniques, the study seeks to identify key risk factors associated with GDM. By analyzing various biomarkers, it also aims to gain insight into the physiological mechanisms underlying GDM and their implications for maternal and fetal health. The significance of this research lies in its potential to inform clinical practice and public health policy related to GDM: by identifying demographic patterns and risk factors, healthcare providers can better tailor screening and intervention strategies for pregnant women at risk of GDM, and insights into biomarkers associated with GDM may contribute to the development of novel diagnostic tools and therapeutic approaches. Ultimately, by enhancing our understanding of GDM, this research aims to improve maternal and fetal outcomes and reduce the burden of this condition on healthcare systems and society. It is important, however, to acknowledge the limitations of the dataset used in this study. Further research utilizing larger and more diverse datasets, perhaps employing advanced data analysis tools such as Power BI, is warranted to corroborate and expand upon these findings, underscoring the ongoing need for continued investigation into GDM to refine our understanding and improve clinical management strategies.
This article discusses the current status and development strategies of computer science and technology in the context of big data. It first explains the relationship between big data and computer science and technology, focusing on the current applications of computer science and technology in big data, including data storage, data processing, and data analysis. It then proposes development strategies for big data processing. Computer science and technology play a vital role in big data processing by providing strong technical support.
Data breaches have massive consequences for companies, affecting them financially and undermining their reputation, which poses significant challenges to online security and the long-term viability of businesses. This study analyzes trends in data breaches in the United States, examining the frequency, causes, and magnitude of breaches across various industries. We document that data breaches are increasing, with hacking emerging as the leading cause. Our descriptive analyses explore factors influencing breaches, including security vulnerabilities, human error, and malicious attacks. The findings provide policymakers and businesses with actionable insights for bolstering data security through proactive audits, patching, encryption, and response planning. By better understanding breach patterns and risk factors, organizations can take targeted steps to enhance protections and mitigate the potential damage of future incidents.
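The descriptive analysis described, breach frequency and cause over time, reduces to simple aggregation; a small pandas sketch over a hypothetical record layout (the schema and figures are illustrative, not the study's data):

```python
import pandas as pd

# Hypothetical breach records; the schema and values are assumptions.
breaches = pd.DataFrame({
    "year":  [2019, 2019, 2020, 2020, 2021, 2021],
    "cause": ["hacking", "human error", "hacking", "hacking", "malware", "hacking"],
    "records_exposed": [1.2e6, 3.0e4, 5.6e6, 8.0e5, 2.1e6, 9.4e6],
})

# Frequency of breaches per year, and causes ranked by count and magnitude.
per_year = breaches.groupby("year").size()
by_cause = breaches.groupby("cause")["records_exposed"].agg(["count", "sum"])
print(per_year)
print(by_cause.sort_values("count", ascending=False))  # hacking tops the count
```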
A visualization tool, WebScope, was developed for the web browser using Java applets embedded in HTML pages in order to provide worldwide access to the EAST experimental data. It can display data from various trees on different servers in a single panel. With WebScope, it is easy to compare different data sources and perform simple calculations across them.
Cyber security has been thrust into the limelight in the modern technological era because of an array of attacks that often bypass untrained intrusion detection systems (IDSs). Greater attention has therefore been directed at deciphering better methods of identifying attack types so that IDSs can be trained more effectively. Key cyber-attack insights exist in big data, but an efficient approach is required to determine the strong attack types on which to train IDSs. Despite the rising growth in IDS research, studies involving big data visualization, which is key, are lacking. The KDD99 data set has served as a strong benchmark since 1999, so we used it in our experiment. In this study, we employed a hash algorithm, a weight table, and a sampling method to deal with the inherent problems of analyzing big data: volume, variety, and velocity. Using a visualization algorithm, we gained insights into the KDD99 data set, with a clear identification of "normal" clusters and a description of distinct clusters of effective attacks.
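The abstract names a hash algorithm, a weight table, and sampling without detailing them; a generic sketch of how those three pieces might combine to thin high-volume traffic while keeping rare attacks, with all fields and weights assumed:

```python
import hashlib
import pandas as pd

# Hypothetical KDD99-like records; fields and labels are placeholders.
df = pd.DataFrame({
    "src":   ["10.0.0.1", "10.0.0.2", "10.0.0.1", "10.0.0.3"] * 250,
    "label": (["normal"] * 3 + ["smurf"]) * 250,
})

# Hash step: map each record key to a deterministic bucket so repeated flows
# collapse consistently however the data are partitioned.
df["bucket"] = df["src"].map(
    lambda s: int(hashlib.md5(s.encode()).hexdigest(), 16) % 100
)

# Weight table + sampling: keep all rare attack records, thin "normal" traffic.
weights = {"normal": 0.05, "smurf": 1.0}  # assumed weights
sample = pd.concat(
    g.sample(frac=weights[name], random_state=0) for name, g in df.groupby("label")
)
print(sample["label"].value_counts())
```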
The control system of Hefei Light Source II (HLS-II) is a distributed system based on the Experimental Physics and Industrial Control System (EPICS). The existing archiving system requires centrally maintained configuration files: when process variables in the control system are added, removed, or updated, the configuration files must be modified manually to stay consistent with the control system. This paper presents a new data archiving method that configures the archiving parameters automatically. The system uses a microservice architecture to integrate the EPICS Archiver Appliance and RecSync, so it can collect all the archiving meta-configuration from the distributed input/output controllers and enter it into the EPICS Archiver Appliance automatically. We also developed a web-based GUI that provides automatic visualization of real-time and historical data. The system is currently being commissioned at HLS-II. The results indicate that the new archiving system is reliable and convenient to operate, and its maintenance-free operation is valuable for large-scale scientific facilities.
Water resources are one of the basic resources for human survival, and water protection has become a major problem for countries around the world. However, most traditional water quality monitoring research is still concerned with collecting water quality indicators and ignores the analysis of water quality monitoring data and its value. In this paper, using the Laravel and AdminLTE frameworks, we describe the design and implementation of a water quality data visualization platform based on Baidu ECharts. Water quality indicator data collected by the deployed sensors are transmitted in real time over the 4G network to a big data processing platform deployed on Tencent Cloud. The collected monitoring data are analyzed, and the results are visualized with Baidu ECharts. Test results show that the system runs well and can provide decision support for water resource protection.
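On the Python side, line charts like the platform's can be produced with pyecharts, a Python binding for Baidu ECharts; the sensor names and readings below are placeholders, and the described platform itself renders through Laravel rather than Python.

```python
from pyecharts import options as opts
from pyecharts.charts import Line

# Placeholder readings; in the described platform these would arrive from
# deployed water quality sensors over the 4G network.
times = ["08:00", "09:00", "10:00", "11:00"]
ph = [7.1, 7.0, 6.8, 6.9]
turbidity = [3.2, 3.5, 4.1, 3.8]

chart = (
    Line()
    .add_xaxis(times)
    .add_yaxis("pH", ph)
    .add_yaxis("Turbidity (NTU)", turbidity)
    .set_global_opts(title_opts=opts.TitleOpts(title="Water quality monitoring"))
)
chart.render("water_quality.html")  # writes a self-contained ECharts page
```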
Exploration of artworks is enjoyable but often time consuming. For example, it is not always easy to discover one's favorite types of unknown painting works, nor to find unpopular paintings that look similar to works created by famous artists. This paper presents a painting image browser that assists the explorative discovery of paintings of interest to the user. The browser applies a new multidimensional data visualization technique that highlights particular ranges of particular numeric values, based on association rules, to suggest cues for finding favorite painting images. This study assumes a large collection of painting images with assigned categorical information (e.g., artist names, year of creation). The system first calculates feature values of the images as a preprocessing step. The browser then visualizes the multidimensional feature values as a heatmap and highlights association rules discovered in the relationships between the feature values and the categorical information. This mechanism enables users to explore favorite painting images, or images that look similar to famous works. Our case study and user evaluation demonstrate the effectiveness of the presented image browser.
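Association rules between discretized feature ranges and categorical labels, the kind the browser highlights, can be mined with mlxtend; the features, bins, and thresholds here are assumptions for illustration:

```python
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Hypothetical painting records: discretized feature ranges plus an artist
# label, one-hot encoded as boolean columns, which apriori expects.
items = pd.DataFrame({
    "brightness=high": [1, 1, 0, 1, 0, 1],
    "saturation=low":  [1, 1, 0, 1, 0, 0],
    "artist=Monet":    [1, 1, 0, 1, 0, 0],
}).astype(bool)

frequent = apriori(items, min_support=0.3, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.8)

# A rule like {artist=Monet} -> {brightness=high} tells the browser which
# value ranges to highlight in the heatmap for a chosen category.
print(rules[["antecedents", "consequents", "support", "confidence"]])
```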
One of the most indispensable needs of life is food, and endorsing its worldwide availability has made agriculture an essential sector in recent years. As technology evolved, maintaining a good and suitable climate in the greenhouse became imperative to make indoor plants more productive, and the agriculture sector was not left behind: the introduction and deployment of IoT technology in agriculture solves many problems and increases crop production. This paper focuses mainly on deploying the Internet of Things (IoT) to acquire real-time data on environmental parameters in the greenhouse. Various IoT technologies applicable to greenhouse monitoring systems are presented. In the proposed model, air temperature and humidity data obtained by a DHT11 sensor are sent to the cloud using an ESP8266-based NodeMCU, first to the ThingSpeak cloud platform and then to Adafruit.IO, where the MQTT protocol delivers the sensor data to the application layer, referred to as the Human-Machine Interface. The system has been fully implemented in an actual prototype, allowing data acquisition with the publisher/subscriber concept used for communication. The data are published with the aid of a broker, which is responsible for transferring messages to the intended clients based on topic choice. Finally, functionality testing of MQTT was carried out, and the results showed that the messages are published successfully.
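The publisher/subscriber flow can be sketched with the paho-mqtt client (1.x API); the broker host and topic are assumptions, a real Adafruit.IO connection needs account credentials, and the actual device-side publisher runs on the ESP8266 firmware rather than in Python.

```python
import json
import time

import paho.mqtt.client as mqtt

BROKER = "test.mosquitto.org"  # assumed public test broker; Adafruit.IO needs auth
TOPIC = "greenhouse/dht11"     # assumed topic

# Subscriber side: the Human-Machine Interface receives messages by topic choice.
def on_message(client, userdata, msg):
    print(f"{msg.topic}: {json.loads(msg.payload)}")

sub = mqtt.Client()
sub.on_message = on_message
sub.connect(BROKER, 1883)
sub.subscribe(TOPIC)
sub.loop_start()

# Publisher side: conceptually what the NodeMCU firmware does with each reading.
pub = mqtt.Client()
pub.connect(BROKER, 1883)
pub.publish(TOPIC, json.dumps({"temperature_c": 24.5, "humidity_pct": 61.0}))
pub.disconnect()

time.sleep(2)  # give the broker a moment to deliver in this sketch
sub.loop_stop()
```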
The study of marine data visualization is of great value. Marine data, because of their large scale, random variation, and multiresolution nature, are hard to visualize and analyze. Constructing ocean models and visualizing model results have become some of the most important research topics of the 'Digital Ocean'. In this paper, a spherical ray casting method is developed to improve the traditional ray-casting algorithm and make efficient use of GPUs. For ocean current data, a 3D view-dependent line integral convolution method is used, in which the spatial frequency is adapted according to the distance from the camera. The study is based on a 3D virtual reality and visualization engine, the VV-Ocean. Interactive operations are also provided to highlight interesting structures and the characteristics of the volumetric data. Finally, marine data gathered in the East China Sea are displayed and analyzed. The results show that the method meets the requirements of real-time, interactive rendering.
Scholarly communication of knowledge is predominantly document-based in digital repositories, and researchers find it tedious to automatically capture and process the semantics among related articles. Despite the present digital era of big data, visual representations of the knowledge in scholarly articles are lacking, and a time-saving approach to literature search and visual navigation is warranted. Most knowledge display tools cannot cope with current big data trends and fall short of the requirements of automatic knowledge representation, storage, and dynamic visualization. To address this limitation, the main aim of this paper is to model the visualization of unstructured data and explore the feasibility of visual navigation that gives researchers insight into the knowledge hidden in the scientific articles of digital repositories. Contemporary topics of research and practice, including the modifiable risk factors behind the dramatic increase in Alzheimer's disease and other forms of dementia, warrant deeper insight into the evidence-based knowledge available in the literature. The goal is to provide researchers with easy, visual-based traversal through a digital repository of research articles. This paper takes the first step in proposing a novel integrated model that uses knowledge maps and next-generation graph datastores to achieve semantic visualization with domain-specific knowledge, such as dementia risk factors. The model facilitates a deep conceptual understanding of the literature by automatically establishing visual relationships among the knowledge extracted from the big data resources of research articles. It also serves as an automated tool for visual navigation through the knowledge repository, for faster identification of the dementia risk factors reported in scholarly articles, and it facilitates semantic visualization and domain-specific knowledge discovery from a large digital repository and its associations. The implementation of the proposed model in the Neo4j graph data repository, along with the results achieved, is presented as a proof of concept. Using scholarly articles on dementia risk factors as a case study, automatic knowledge extraction, storage, intelligent search, and visual navigation are illustrated. The implementation of contextual knowledge and its relationships for visual exploration by researchers shows promising results in the discovery of dementia risk factors. Overall, this study demonstrates the significance of semantic visualization through the effective use of knowledge maps and paves the way for extending visual modeling capabilities in the future.
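Storing extracted knowledge and searching it rests on plain Cypher; a minimal sketch with the official Neo4j Python driver, where the connection details, node labels, relationship type, and example risk factor are assumptions rather than the paper's schema:

```python
from neo4j import GraphDatabase

# Assumed local instance and credentials.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("neo4j", "password"))

with driver.session() as session:
    # Store one extracted relationship: an article reports a risk factor.
    session.run(
        "MERGE (a:Article {doi: $doi}) "
        "MERGE (f:RiskFactor {name: $factor}) "
        "MERGE (a)-[:REPORTS]->(f)",
        doi="10.1000/example", factor="hypertension",
    )
    # Intelligent search: which articles report a given dementia risk factor?
    result = session.run(
        "MATCH (a:Article)-[:REPORTS]->(f:RiskFactor {name: $factor}) "
        "RETURN a.doi AS doi",
        factor="hypertension",
    )
    for record in result:
        print(record["doi"])

driver.close()
```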
The explosion of online information, together with recent advances in information processing, storage, and sharing, natural language processing, and text mining, has enabled stock investors to uncover market movement and volatility from heterogeneous content. For example, a typical stock market investor reads the news, explores market sentiment, and analyzes technical details in order to make a sound decision before purchasing or selling a particular company's stock. However, capturing a dynamic stock market trend is challenging owing to the high fluctuation and non-stationary nature of the stock market. Although existing studies have attempted to enhance stock prediction, few provide a complete decision-support system that lets investors retrieve real-time data from multiple sources and extract insightful information for sound decision-making. To address this challenge, we propose a unified solution for data collection, analysis, and visualization in real-time stock market prediction that retrieves and processes relevant financial data from news articles, social media, and company technical information. We aim to provide not only useful information for stock investors but also meaningful visualization that enables investors to effectively interpret the storyline of events affecting stock prices. Specifically, we utilize an ensemble stacking of diversified machine-learning-based estimators and innovative contextual feature engineering to predict the next day's stock prices. Experiment results show that our proposed stock forecasting method outperforms a traditional baseline with an average mean absolute percentage error of 0.93. Our findings confirm that leveraging an ensemble scheme of machine learning methods with contextual information improves stock prediction performance. Finally, our study could be extended to a wide variety of innovative financial applications that seek to incorporate external insight from contextual information such as large-scale online news articles and social media data.
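The ensemble-stacking scheme can be sketched with scikit-learn's StackingRegressor and the MAPE metric the paper reports; the base learners, meta-learner, and synthetic features below are assumptions, not the authors' configuration.

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor, StackingRegressor
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

# Stand-in features: technical indicators plus contextual (news/sentiment) scores.
X, y = make_regression(n_samples=500, n_features=12, noise=5.0, random_state=0)
y = y - y.min() + 100.0  # shift targets positive so MAPE is well defined
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)  # keep time order

# Diversified base estimators stacked under a meta-learner.
stack = StackingRegressor(
    estimators=[("rf", RandomForestRegressor(random_state=0)), ("svr", SVR())],
    final_estimator=Ridge(),
)
stack.fit(X_tr, y_tr)
print(f"MAPE: {mean_absolute_percentage_error(y_te, stack.predict(X_te)):.3f}")
```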
Funding (RNAM visual data mining): Supported by the National Natural Science Foundation of China (60173051), the Teaching and Research Award Program for Outstanding Young Teachers in Higher Education Institutions of the Ministry of Education of China, and the Liaoning Province Higher Education Research Foundation (20040206).
Funding (bikeshare data cube study): Supported by the Projects of International Cooperation and Exchange of the National Natural Science Foundation of China (51561135003), the Key Project of the National Natural Science Foundation of China (51338003), and the Scientific Research Foundation of the Graduate School of Southeast University (YBJJ1842).
Funding (EAST WebScope tool): Supported by the National Natural Science Foundation of China (No. 10835009), the Chinese Academy of Sciences Key Project of the Knowledge Innovation Program (No. KJCX3.SYW.N4), and the Chinese Ministry of Science 973 Project (No. 2009GB103000).
Funding (HLS-II archiving system): Supported by the National Natural Science Foundation of China (No. 11375186).
Funding (water quality visualization platform): Supported by the National Natural Science Foundation of China (61304208), the 2011 Collaborative Innovation Center for Development and Utilization of Finance and Economics Big Data Property Open Fund Project (20181901CRP04), the Scientific Research Fund of the Hunan Province Education Department (18C0003), the Research Project on Teaching Reform in General Colleges and Universities of the Hunan Provincial Education Department (20190147), and the Hunan Normal University Undergraduate Innovation and Entrepreneurship Training Plan Project (2019127).
Funding (marine data visualization): Supported by the Natural Science Foundation of China under Project 41076115, the Global Change Research Program of China under Project 2012CB955603, and the Public Science and Technology Research Funds of the Ocean under Project 201005019.
Funding (stock market prediction): Supported by Mahidol University (Grant No. MU-MiniRC02/2564). We also appreciate the partial computing resources from Grant No. RSA6280105, funded by Thailand Science Research and Innovation (TSRI, formerly the Thailand Research Fund (TRF)) and the National Research Council of Thailand (NRCT).