An enterprise Business Intelligence (BI) system refers to mining the data in an enterprise's existing databases and, through comprehensive processing, analyzing those data according to customer requirements. Such analysis is efficient, and the system is convenient to operate. This paper mainly analyzes the application of enterprise BI data analysis systems in enterprises.
This study investigates university English teachers' acceptance of and willingness to use learning management system (LMS) data analysis tools in their teaching practices. The research employs a mixed-methods approach, combining quantitative surveys and qualitative interviews, to understand teachers' perceptions and attitudes and the factors influencing their adoption of LMS data analysis tools. The findings reveal that perceived usefulness, perceived ease of use, technical literacy, organizational support, and data privacy concerns significantly affect teachers' willingness to use these tools. Based on these insights, the study offers practical recommendations for educational institutions to enhance the effective adoption of LMS data analysis tools in English language teaching.
Seeing is an important index for evaluating the quality of an astronomical site. To estimate seeing at the Muztagh-Ata site quantitatively as a function of height and time, the European Centre for Medium-Range Weather Forecasts reanalysis database (ERA5) is used. Seeing calculated from ERA5 is consistent with the Differential Image Motion Monitor seeing at a height of 12 m. Results show that seeing decays exponentially with height at the Muztagh-Ata site; in 2021 it decayed fastest with height in fall and most slowly in summer. The seeing condition is better in fall than in summer. The median seeing at 12 m is 0.89 arcsec, with a maximum of 1.21 arcsec in August and a minimum of 0.66 arcsec in October; the median at 12 m is 0.72 arcsec in the nighttime and 1.08 arcsec in the daytime. Seeing is a combination of annual and roughly biannual variations with the same phase as temperature and wind speed, indicating that its variation with time is influenced by temperature and wind speed. The Richardson number Ri is used to analyze atmospheric stability, and the variations of seeing are consistent with Ri between layers. These quantitative results can provide an important reference for telescope observation strategies.
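The abstract does not state how the exponential decay of seeing with height was fitted; a minimal sketch of one common approach, a least-squares fit on log-transformed values (all numbers below are invented for illustration), is:

```python
import math

def fit_exponential_decay(heights, seeing):
    """Least-squares fit of seeing(h) = s0 * exp(-h / H) on log-transformed
    values, where H is the e-folding height scale."""
    ys = [math.log(s) for s in seeing]
    n = len(heights)
    mx = sum(heights) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(heights, ys))
             / sum((x - mx) ** 2 for x in heights))
    intercept = my - slope * mx
    return math.exp(intercept), -1.0 / slope   # (s0, H)

# Invented profile: 0.9 arcsec near the ground, 500 m scale height
heights = [12, 50, 100, 200, 400, 800]
seeing = [0.9 * math.exp(-h / 500.0) for h in heights]
s0, H = fit_exponential_decay(heights, seeing)
```

On real profiles the fit would be applied per season or per night to obtain the decay rates compared in the text.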
To overcome the fact that multi-well typical-curve analysis of shale gas reservoirs is rarely applied in engineering, this study proposes a robust production data analysis method based on deconvolution for studying inter-well interference across multiple wells. A multi-well conceptual trilinear seepage model for multistage fractured horizontal wells was established, and its Laplace-domain solutions under two different outer boundary conditions were obtained. An improved pressure deconvolution algorithm was then used to normalize the scattered production data, and typical-curve fitting was carried out using the production data and the seepage model solution. Finally, reservoir and fracturing parameters were interpreted, and the intensity of inter-well interference was compared. The effectiveness of the method was verified by analyzing the production dynamics of six shale gas wells in the Duvernay area. The results showed that the fit of the typical curves was greatly improved by the mutual constraint between the deconvolution calculation parameters and the seepage model parameters during debugging. Moreover, using the morphological characteristics of the log-log typical curves, together with the time corresponding to the intersection point of the log-log typical curves of the two models under different outer boundary conditions, the strength of interference between wells on the same well platform was reliably judged. This work can provide a reference for optimizing well spacing and hydraulic fracturing measures for shale gas wells.
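Deconvolution here recovers a unit-rate-equivalent response from variable-rate production data. The paper's improved algorithm is not reproduced here; as a hedged illustration, the discrete convolution model dp[n] = Σ_k q[k]·g[n−k] can be inverted exactly by forward substitution when the rate history is known:

```python
def deconvolve(rates, pressure_drop):
    """Recover the unit-rate response g from dp[n] = sum_k q[k] * g[n-k].
    The convolution matrix is lower triangular, so forward substitution
    solves it exactly; the paper's algorithm additionally handles noisy,
    scattered field data, which this toy version does not."""
    n = len(pressure_drop)
    g = [0.0] * n
    for i in range(n):
        acc = sum(rates[k] * g[i - k] for k in range(1, i + 1))
        g[i] = (pressure_drop[i] - acc) / rates[0]
    return g

# Invented unit response and rate history; build dp by convolution, then invert
g_true = [1.0, 0.5, 0.25]
rates = [2.0, 1.0, 3.0]
dp = [sum(rates[k] * g_true[i - k] for k in range(i + 1)) for i in range(3)]
g = deconvolve(rates, dp)
```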
Peanut allergy is a major cause of severe food-induced allergic reactions. A handful of foods, including cow's milk, hen's eggs, soy, wheat, peanuts, tree nuts (walnuts, hazelnuts, almonds, cashews, pecans, and pistachios), fish, and shellfish, are responsible for more than 90% of food allergies. Here, we provide promising insights from a large-scale data-driven analysis comparing the mechanistic features and biological relevance of the different ingredients present in peanuts, tree nuts (walnuts, almonds, cashews, pecans, and pistachios), and soybean. Additionally, we analyzed the chemical composition of peanuts in different processed forms: raw, boiled, and dry-roasted. Using the data-driven approach, we are able to generate new hypotheses to explain why nuclear receptors such as the peroxisome proliferator-activated receptors (PPARs) and their isoforms, and their interaction with dietary lipids, may have a significant effect on allergic response. The results of this study will direct future experimental and clinical studies on how dietary lipids and PPAR isoforms exert pro-inflammatory or anti-inflammatory functions on cells of the innate immunity and influence antigen presentation to cells of the adaptive immunity.
The Internet of Multimedia Things (IoMT) refers to a network of interconnected multimedia devices that communicate with each other over the Internet. Recently, smart healthcare has emerged as a significant application of the IoMT, particularly in the context of knowledge-based learning systems. Smart healthcare systems leverage knowledge-based learning to become more context-aware, adaptable, and auditable while maintaining the ability to learn from historical data. In smart healthcare systems, devices capture images such as X-rays and magnetic resonance imaging (MRI) scans. The security and integrity of these images are crucial for the databases used in knowledge-based learning systems to foster structured decision-making and enhance the learning abilities of AI. Moreover, in knowledge-driven systems, the storage and transmission of HD medical images burden the limited bandwidth of the communication channel, leading to data transmission delays. To address these security and latency concerns, this paper presents a lightweight medical image encryption scheme utilising bit-plane decomposition and chaos theory. The experiments yield entropy, energy, and correlation values of 7.999, 0.0156, and 0.0001, respectively. This validates the effectiveness of the proposed encryption system, which offers high-quality encryption, a large key space, key sensitivity, and resistance to statistical attacks.
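As an illustrative sketch only (the paper's exact cipher, key schedule, and chaos parameters are not given in the abstract), bit-plane decomposition combined with a logistic-map keystream can be outlined as:

```python
def logistic_keystream(x0, r, n):
    """Byte keystream from the chaotic logistic map x -> r*x*(1-x).
    The values of x0 and r here are illustrative, not the paper's keys."""
    ks, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        ks.append(int(x * 256) % 256)
    return ks

def encrypt(pixels, x0=0.3141, r=3.99):
    """Split each 8-bit pixel into bit planes, XOR every plane bit with the
    matching keystream bit, and recompose. Applying it twice restores the input."""
    ks = logistic_keystream(x0, r, len(pixels))
    out = []
    for p, k in zip(pixels, ks):
        planes = [((p >> b) & 1) ^ ((k >> b) & 1) for b in range(8)]
        out.append(sum(bit << b for b, bit in enumerate(planes)))
    return out

pixels = [0, 64, 128, 255, 17, 200]   # a toy "image" row
cipher = encrypt(pixels)
restored = encrypt(cipher)            # XOR cipher: decryption = encryption
```

A real scheme would also permute pixel positions and plane order; this sketch only shows the substitution half.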
Maintaining the integrity and longevity of structures is essential in many industries, such as aerospace, nuclear, and petroleum. To achieve cost-effectiveness in large-scale petroleum drilling systems, a strong emphasis on structural durability and monitoring is required. This study focuses on the mechanical vibrations that occur in rotary drilling systems, which have a substantial impact on the structural integrity of drilling equipment. It specifically investigates axial, torsional, and lateral vibrations, which can lead to negative consequences such as bit-bounce, chaotic whirling, and high-frequency stick-slip. These events not only hinder drilling efficiency but also fatigue and damage the system's components, since they are difficult to detect and control in real time. The study investigates the dynamic interactions of these vibrations, specifically in their high-frequency modes, using field data obtained from measurement while drilling. The findings demonstrate the effect of strong coupling between the high-frequency modes of these vibrations on drilling system performance. The results highlight the importance of considering the interconnected impacts of these vibrations when designing and implementing robust control systems. Integrating these components can increase the durability of drill bits and drill strings and improve the ability to monitor and detect damage. By exploiting these findings, the assessment of structural resilience in rotary drilling systems can be enhanced. Furthermore, the study demonstrates the capacity of structural health monitoring to improve the quality, dependability, and efficiency of rotary drilling systems in the petroleum industry.
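Detecting high-frequency vibration modes in measurement-while-drilling data is essentially a spectral estimation task. A toy sketch using a direct DFT on a synthetic signal (the real analysis would use field data, an FFT, and proper windowing) is:

```python
import cmath
import math

def dominant_frequency(signal, sample_rate):
    """Return the frequency (Hz) of the largest-magnitude DFT bin, DC excluded.
    A direct O(n^2) DFT for clarity; production code would use an FFT."""
    n = len(signal)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        coeff = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k * sample_rate / n

# Synthetic drill-string signal: 5 Hz rotation plus a stronger 40 Hz mode
rate = 200   # samples per second, one second of data
sig = [math.sin(2 * math.pi * 5 * t / rate)
       + 2.0 * math.sin(2 * math.pi * 40 * t / rate)
       for t in range(rate)]
freq = dominant_frequency(sig, rate)
```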
Microsoft Excel is essential for the End-User Approach (EUA), offering versatility in data organization, analysis, and visualization, as well as widespread accessibility. It fosters collaboration and informed decision-making across diverse domains. Conversely, Python is indispensable for professional programming due to its versatility, readability, extensive libraries, and robust community support. It enables efficient development, advanced data analysis, data mining, and automation, catering to diverse industries and applications. However, one primary issue when using Microsoft Excel with Python libraries is compatibility and interoperability. While Excel is a widely used tool for data storage and analysis, it may not seamlessly integrate with Python libraries, leading to challenges in reading and writing data, especially in complex or large datasets. Additionally, manipulating Excel files with Python may not always preserve formatting or formulas accurately, potentially affecting data integrity. Moreover, dependency on Excel's graphical user interface (GUI) for automation can limit scalability and reproducibility compared to Python's scripting capabilities. This paper covers an integration solution that empowers non-programmers to leverage Python's capabilities within the familiar Excel environment. This enables users to perform advanced data analysis and automation tasks without requiring extensive programming knowledge. Based on feedback solicited from non-programmers who tested the integration solution, the case study evaluates its ease of implementation, performance, and compatibility with different Excel versions.
This research paper compares Excel and the R language for data analysis and concludes that R is more suitable for complex data analysis tasks. R's open-source nature makes it accessible to everyone, and its powerful data management and analysis tools make it suitable for handling complex data analysis tasks. It is also highly customizable, allowing users to create custom functions and packages to meet their specific needs. Additionally, R provides high reproducibility, making it easy to replicate and verify research results, and it has excellent collaboration capabilities, enabling multiple users to work on the same project simultaneously. These advantages make R a more suitable choice for complex data analysis tasks, particularly in scientific research and business applications. The findings of this study will help people understand that R is not just a language that can handle more data than Excel, and demonstrate that R is essential to the field of data analysis. At the same time, they will help users and organizations make informed decisions regarding their data analysis needs and software preferences.
Objective: To explain the use of concept mapping in a study about family members' experiences in taking care of people with cancer. Methods: This study used a phenomenological study design. In this study, we describe the analytical process of using concept mapping in our phenomenological studies about family members' experiences in taking care of people with cancer. Results: We developed several concept maps that aided us in analyzing our collected data from the interviews. Conclusions: The use of concept mapping is suggested to researchers who intend to analyze their data in any qualitative studies, including those using a phenomenological design, because it is a time-efficient way of dealing with large amounts of qualitative data during the analytical process.
Reviewing the empirical and theoretical relationships between various parameters is a good way to learn more about contact binary systems. In this investigation, two-dimensional (2D) relationships for P–M_V(system), P–L_1,2, M_1,2–L_1,2, and q–L_ratio were revisited. The sample comprises 118 contact binary systems with orbital periods shorter than 0.6 days whose absolute parameters were estimated based on the Gaia Data Release 3 parallax. We reviewed previous studies of the 2D relationships and updated six parameter relationships. Markov chain Monte Carlo and machine learning methods were used, and their outcomes were compared. For comparison, we selected 22 contact binary systems from eight previous studies that had light curve solutions using spectroscopic data. The results show that these systems are in good agreement with the results of this study.
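The abstract does not detail its Markov chain Monte Carlo setup; as a hedged sketch, a minimal Metropolis sampler for a 2D linear parameter relationship y = a·x + b (synthetic noise-free data, flat priors, and a hand-picked proposal scale, all invented for illustration) looks like:

```python
import math
import random

def metropolis_line_fit(xs, ys, steps=4000, sigma=0.1, seed=1):
    """Minimal Metropolis MCMC for y = a*x + b with Gaussian errors of
    known scale sigma. Returns the posterior means of a and b."""
    rng = random.Random(seed)

    def log_like(a, b):
        return -sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys)) / (2 * sigma ** 2)

    a, b = 2.0, 1.0          # initial guess
    ll = log_like(a, b)
    samples = []
    for _ in range(steps):
        a_new = a + rng.gauss(0, 0.05)
        b_new = b + rng.gauss(0, 0.05)
        ll_new = log_like(a_new, b_new)
        if math.log(rng.random()) < ll_new - ll:   # Metropolis acceptance rule
            a, b, ll = a_new, b_new, ll_new
        samples.append((a, b))
    a_hat = sum(s[0] for s in samples) / len(samples)
    b_hat = sum(s[1] for s in samples) / len(samples)
    return a_hat, b_hat

# Noise-free synthetic relation y = 2x + 1
xs = list(range(10))
ys = [2.0 * x + 1.0 for x in xs]
a_hat, b_hat = metropolis_line_fit(xs, ys)
```

The study's actual fits involve astrophysical parameter relations and machine-learning comparisons; this only illustrates the sampling mechanics.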
The application of single-cell RNA sequencing (scRNA-seq) in biomedical research has advanced our understanding of the pathogenesis of disease and provided valuable insights into new diagnostic and therapeutic strategies. With the expansion of capacity for high-throughput scRNA-seq, including clinical samples, the analysis of these huge volumes of data has become a daunting prospect for researchers entering this field. Here, we review the workflow for typical scRNA-seq data analysis, covering raw data processing and quality control, basic data analysis applicable to almost all scRNA-seq data sets, and advanced data analysis that should be tailored to specific scientific questions. While summarizing the current methods for each analysis step, we also provide an online repository of software and wrapped-up scripts to support the implementation. Recommendations and caveats are pointed out for some specific analysis tasks and approaches. We hope this resource will be helpful to researchers engaging with scRNA-seq, in particular for emerging clinical applications.
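Quality control is the first step of the reviewed workflow. A deliberately simplified sketch of cell-level QC filtering (thresholds and counts are invented for illustration; real pipelines use dedicated tools such as Scanpy or Seurat) is:

```python
def qc_filter(cells, min_genes, max_genes, max_total):
    """Basic cell-level quality control: keep cells whose number of detected
    genes and total count fall within the given (illustrative) thresholds.
    Low gene counts suggest empty droplets; very high totals suggest doublets."""
    kept = []
    for cell_id, gene_counts in cells.items():
        detected = sum(1 for c in gene_counts.values() if c > 0)
        total = sum(gene_counts.values())
        if min_genes <= detected <= max_genes and total <= max_total:
            kept.append(cell_id)
    return kept

# Tiny invented count table: cell -> {gene: count}
cells = {
    "cell_1": {"g1": 5, "g2": 3, "g3": 0},            # kept
    "cell_2": {"g1": 1, "g2": 0, "g3": 0},            # too few genes detected
    "cell_3": {"g1": 9000, "g2": 50000, "g3": 8000},  # suspiciously high total
}
kept = qc_filter(cells, min_genes=2, max_genes=3, max_total=60000)
```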
In the nonparametric data envelopment analysis literature, scale elasticity is evaluated in two alternative ways: using either the technical efficiency model or the cost efficiency model. This evaluation becomes problematic in several situations, for example (a) when input proportions change in the long run, (b) when inputs are heterogeneous, and (c) when firms face ex-ante price uncertainty in making their production decisions. To address these situations, a scale elasticity evaluation was performed using a value-based cost efficiency model. However, this alternative value-based scale elasticity evaluation is sensitive to the uncertainty and variability underlying input and output data. Therefore, in this study, we introduce a stochastic cost-efficiency model based on chance-constrained programming to develop a value-based measure of the scale elasticity of firms facing data uncertainty. An illustrative empirical application to the Indian banking industry, comprising 71 banks over eight years (1998–2005), was made to compare inferences about their efficiency and scale properties. The key findings are as follows. First, the deterministic model and our proposed stochastic model yield distinctly different efficiency and scale elasticity scores at various tolerance levels of the chance constraints. However, both models yield the same results at a tolerance level of 0.5, implying that the deterministic model is a special case of the stochastic model in that it reveals the same efficiency and returns-to-scale characterizations of banks. Second, the stochastic model generates higher efficiency scores for inefficient banks than its deterministic counterpart. Third, public banks exhibit higher efficiency than private and foreign banks. Finally, public and old private banks mostly exhibit either decreasing or constant returns to scale, whereas foreign and new private banks experience either increasing or decreasing returns to scale. Although the application of our proposed stochastic model is illustrative, it can potentially be applied to all firms in information- and distribution-intensive industries with high fixed costs, which have ample potential for reaping scale and scope benefits.
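In the simplest single-input, single-output case under constant returns to scale, the deterministic CCR efficiency score reduces to each unit's output/input ratio divided by the best observed ratio. A sketch with invented bank data (the paper's stochastic, chance-constrained model is far richer and needs a linear-programming solver) is:

```python
def ccr_efficiency(inputs, outputs):
    """CCR (constant returns to scale) efficiency in the single-input,
    single-output case: each unit's output/input ratio over the best ratio.
    A score of 1.0 marks a unit on the efficient frontier."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]

# Invented bank data: input = operating cost, output = loan volume
cost = [10.0, 20.0, 30.0]
loans = [5.0, 20.0, 15.0]
eff = ccr_efficiency(cost, loans)   # bank 2 defines the efficient frontier
```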
The Gravitational wave high-energy Electromagnetic Counterpart All-sky Monitor (GECAM), consisting of two microsatellites, is designed to detect gamma-ray bursts associated with gravitational-wave events. Here, we introduce the real-time burst alert system of GECAM, which adopts the BeiDou-3 short message communication service. We present the post-trigger operations, the detailed ground-based analysis, and the performance of the system. In the first year of in-flight operation, GECAM was triggered by 42 gamma-ray bursts. The GECAM real-time burst alert system can distribute an alert within ~1 minute of being triggered, which enables timely follow-up observations.
The electrocardiogram (ECG) is a low-cost, simple, fast, and non-invasive test. It reflects the heart's electrical activity and provides valuable diagnostic clues about the health of the entire body. ECG has therefore been widely used in various biomedical applications such as arrhythmia detection, disease-specific detection, mortality prediction, and biometric recognition. In recent years, ECG-related studies have been carried out using a variety of publicly available datasets, with many differences in the datasets used, data preprocessing methods, targeted challenges, and modeling and analysis techniques. Here we systematically summarize and analyze ECG-based automatic analysis methods and applications. Specifically, we first review 22 commonly used public ECG datasets and provide an overview of data preprocessing processes. We then describe some of the most widely used applications of ECG signals and analyze the advanced methods involved in these applications. Finally, we elucidate some of the challenges in ECG analysis and provide suggestions for further research.
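As a toy illustration of one analysis step common to the applications above, R-peak detection can be sketched as thresholded local-maximum picking on a synthetic trace (real detectors, e.g. the Pan–Tompkins algorithm, band-pass filter the signal first):

```python
def detect_r_peaks(signal, threshold):
    """Toy R-peak detector: mark local maxima above a fixed threshold.
    Only for illustration; real ECG peaks require filtering and adaptive
    thresholds to cope with noise and baseline wander."""
    peaks = []
    for i in range(1, len(signal) - 1):
        if (signal[i] > threshold
                and signal[i] >= signal[i - 1]
                and signal[i] > signal[i + 1]):
            peaks.append(i)
    return peaks

# Synthetic trace: flat baseline with a spike every 60 samples
sig = [1.0 if i % 60 == 30 else 0.1 for i in range(180)]
peaks = detect_r_peaks(sig, threshold=0.5)
```

The inter-peak spacing (here a constant 60 samples) is what heart-rate and arrhythmia analyses build on.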
As COVID-19 poses a major threat to people's health and the economy, there is an urgent need for forecasting methodologies that can anticipate its trajectory efficiently. In non-stationary time series forecasting tasks, the predicted values frequently lag behind the real values. To address this problem, this paper proposes an enhanced Multilayer Deep Time Convolutional Neural Network (MDTCNet) for COVID-19 prediction, which combines a multilayer deep-time convolutional network with a feature fusion network. In particular, it can capture the deep features and temporal dependencies in uncertain time series, and the features can then be combined using a feature fusion network and a multilayer perceptron. Finally, experimental verification is conducted on the task of predicting real daily confirmed COVID-19 cases, with uncertainty, for the world and the United States, realizing short-term and long-term prediction of daily confirmed cases, verifying the effectiveness and accuracy of the suggested prediction method, and reducing the hysteresis of the prediction results.
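The core building block of a temporal convolutional network is a causal 1-D convolution, in which each output depends only on current and past inputs. A minimal sketch (not MDTCNet itself, whose architecture the abstract only outlines; the kernel and series are invented) is:

```python
def causal_conv1d(series, kernel):
    """Causal 1-D convolution: the output at time t uses only samples at t and
    earlier (implicit zero padding on the left), as in a TCN layer."""
    out = []
    for t in range(len(series)):
        acc = 0.0
        for j, w in enumerate(kernel):
            if t - j >= 0:
                acc += w * series[t - j]
        out.append(acc)
    return out

# 3-tap moving average over a short invented daily case-count series
counts = [10.0, 20.0, 30.0, 40.0]
smoothed = causal_conv1d(counts, [1 / 3, 1 / 3, 1 / 3])
```

Stacking such layers with learned kernels, dilations, and nonlinearities yields the deep temporal features the paper fuses downstream.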
Today, we live in the era of "big data", where massive amounts of data are used for quantitative decisions and communication management. With the continuous penetration of big data-based intelligent technology into all fields of human life, the enormous commercial value inherent in the data industry has become a crucial force driving the aggregation of new industries. For the publishing industry, introducing big data and related intelligent technologies, such as data intelligence analysis and scenario services, into its structure and value system has become an effective path to expanding and reshaping the demand space of publishing products, content decisions, the workflow chain, and marketing direction. Through the integration and reconstruction of big data, cloud computing, artificial intelligence, and other related technologies, a generalized publishing industry pattern dominated by virtual interaction is expected to form in the future.
Law enforcement remains the main strategy used to combat poaching and accounts for a high share of protected area management budgets. Studies on the efficiency of wildlife law enforcement in protected areas are limited. This study analyzed the economic efficiency of wildlife law enforcement, in terms of resources used and output generated, in three different protected areas (PAs) of the Serengeti ecosystem: Serengeti National Park (SENAPA), Ikorongo/Grumeti Game Reserves (IGGR), and Ikona Wildlife Management Area (IWMA). Three years (2010-2012) of monthly data on wildlife law enforcement inputs and outputs were collected from the respective PA authorities and supplemented with key informant interviews and secondary data. Questionnaire surveys were administered to wildlife law enforcement staff. Shadow prices were estimated for non-marketed inputs, and market prices were used for marketed inputs. Data Envelopment Analysis (DEA) was used to estimate economic efficiency under Variable Returns to Scale (VRS) and Constant Returns to Scale (CRS) assumptions. Results revealed that wildlife law enforcement in all PAs was economically inefficient, with less inefficiency observed in IWMA. The lower inefficiency in IWMA is likely attributable to the sense of ownership and responsibility created through community-based conservation, which decreased law enforcement costs. A slacks evaluation revealed potential to reduce fuel consumption, the number of patrol vehicles, rations, and prosecution efforts, at different magnitudes across the studied protected areas. There is likewise potential to recruit more rangers while maintaining resting time. These findings form the basis for monitoring and evaluation of resource usage to enhance efficiency. It is further recommended to enhance community participation in conservation in SENAPA and IGGR to lower law enforcement costs. Collaboration among protected areas, the police, and the judiciary is fundamental to enhancing enforcement efficiency. Despite the age of the dataset, these findings remain relevant, since neither conservation policy nor the institutional framework has changed substantially in the last decade.
With the rapid development of the Internet, many enterprises have launched their own network platforms. When users browse, search, and click the products on these platforms, most platforms keep records of these behaviors; these records are often heterogeneous and are called log data. Analyzing and managing these heterogeneous log data effectively lets enterprises grasp the behavioral characteristics of their platform users in time, make targeted recommendations to users, increase product sales, and accelerate their development. We first design the system following the big data process of collection, storage, analysis, and visualization. We then build a Hadoop cluster to process the log data, adopting HDFS storage, Yarn resource management, and Nginx load balancing, and analyze the log data with MapReduce processing and the Hive data warehouse to obtain the results. Finally, the obtained results are displayed visually, and a log data analysis system is successfully constructed. Practice has proved that the system effectively realizes the collection, analysis, and visualization of log data and can accurately support product recommendations by enterprises. The system is stable and effective.
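The MapReduce analysis described above can be sketched in miniature: a map phase emitting (product, 1) pairs for click records and a reduce phase summing them per key (the "user,action,product" record format here is invented for illustration):

```python
from collections import defaultdict

def map_phase(log_lines):
    """Map step: emit a (product, 1) pair for every click record."""
    for line in log_lines:
        user, action, product = line.split(",")
        if action == "click":
            yield product, 1

def reduce_phase(pairs):
    """Reduce step: sum the counts for each product key, mimicking what the
    framework does per key after the shuffle."""
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return dict(totals)

# Toy log records in the invented "user,action,product" format
logs = ["u1,click,p1", "u2,view,p1", "u1,click,p2", "u3,click,p1"]
clicks = reduce_phase(map_phase(logs))
```

On a real cluster the same logic would be expressed as a Hadoop job or a Hive aggregation query over HDFS-resident logs.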
Air quality is a critical concern for public health and environmental regulation. The Air Quality Index (AQI), an index widely adopted by the US Environmental Protection Agency (EPA), serves as a crucial metric for reporting site-specific air pollution levels. Accurately predicting air quality, as measured by the AQI, is essential for effective air pollution management. In this study, we aim to identify the most reliable model among linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), logistic regression, and K-nearest neighbors (KNN). We conducted four different analyses using a machine learning approach to determine the model with the best performance. By employing the confusion matrix and error percentages, we selected the best-performing model; the prediction error rates were 22%, 23%, 20%, and 27%, respectively, for the LDA, QDA, logistic regression, and KNN models. The logistic regression model outperformed the other three statistical models in predicting AQI. Understanding these models' performance can help address an existing gap in air quality research and contribute to the integration of regression techniques in AQI studies, ultimately benefiting stakeholders like environmental regulators, healthcare professionals, urban planners, and researchers.
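The error percentages reported above come from confusion matrices. A minimal sketch of that computation, with a hypothetical two-class matrix chosen to reproduce a 20% error rate, is:

```python
def error_rate(confusion):
    """Prediction error from a confusion matrix:
    1 - (sum of the diagonal, i.e. correct predictions) / (sum of all entries)."""
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    total = sum(sum(row) for row in confusion)
    return 1.0 - correct / total

# Hypothetical 2-class matrix (rows = actual class, columns = predicted class)
cm = [[40, 10],
      [10, 40]]
err = error_rate(cm)   # 20 misclassified out of 100
```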
文摘In order to overcome the defects that the analysis of multi-well typical curves of shale gas reservoirs is rarely applied to engineering,this study proposes a robust production data analysis method based on deconvolution,which is used for multi-well inter-well interference research.In this study,a multi-well conceptual trilinear seepage model for multi-stage fractured horizontal wells was established,and its Laplace solutions under two different outer boundary conditions were obtained.Then,an improved pressure deconvolution algorithm was used to normalize the scattered production data.Furthermore,the typical curve fitting was carried out using the production data and the seepage model solution.Finally,some reservoir parameters and fracturing parameters were interpreted,and the intensity of inter-well interference was compared.The effectiveness of the method was verified by analyzing the production dynamic data of six shale gas wells in Duvernay area.The results showed that the fitting effect of typical curves was greatly improved due to the mutual restriction between deconvolution calculation parameter debugging and seepage model parameter debugging.Besides,by using the morphological characteristics of the log-log typical curves and the time corresponding to the intersection point of the log-log typical curves of two models under different outer boundary conditions,the strength of the interference between wells on the same well platform was well judged.This work can provide a reference for the optimization of well spacing and hydraulic fracturing measures for shale gas wells.
Abstract: Peanut allergy is a major cause of severe food-induced allergic reactions. Several foods, including cow's milk, hen's eggs, soy, wheat, peanuts, tree nuts (walnuts, hazelnuts, almonds, cashews, pecans, and pistachios), fish, and shellfish, are responsible for more than 90% of food allergies. Here, we provide promising insights from a large-scale data-driven analysis comparing the mechanistic features and biological relevance of different ingredients present in peanuts, tree nuts (walnuts, almonds, cashews, pecans, and pistachios), and soybean. Additionally, we have analysed the chemical composition of peanuts in different processed forms: raw, boiled, and dry-roasted. Using the data-driven approach, we are able to generate new hypotheses to explain why nuclear receptors such as the peroxisome proliferator-activated receptors (PPARs) and their isoforms, through their interaction with dietary lipids, may have a significant effect on allergic response. The results obtained from this study will direct future experimental and clinical studies on the role of dietary lipids and PPAR isoforms in exerting pro-inflammatory or anti-inflammatory functions on cells of the innate immunity and in influencing antigen presentation to cells of the adaptive immunity.
Abstract: The Internet of Multimedia Things (IoMT) refers to a network of interconnected multimedia devices that communicate with each other over the Internet. Recently, smart healthcare has emerged as a significant application of the IoMT, particularly in the context of knowledge-based learning systems. Smart healthcare systems leverage knowledge-based learning to become more context-aware, adaptable, and auditable while maintaining the ability to learn from historical data. In smart healthcare systems, devices capture images such as X-rays and magnetic resonance imaging scans. The security and integrity of these images are crucial for the databases used in knowledge-based learning systems to foster structured decision-making and enhance the learning abilities of AI. Moreover, in knowledge-driven systems, the storage and transmission of HD medical images burden the limited bandwidth of the communication channel, leading to data transmission delays. To address these security and latency concerns, this paper presents a lightweight medical image encryption scheme utilising bit-plane decomposition and chaos theory. The experiments yield entropy, energy, and correlation values of 7.999, 0.0156, and 0.0001, respectively. This validates the effectiveness of the proposed encryption system, which offers high-quality encryption, a large key space, key sensitivity, and resistance to statistical attacks.
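Bit-plane decomposition itself is straightforward. A minimal sketch for a grayscale image stored as nested lists of 0-255 integers might look like this; the chaos-based keystream of the actual scheme is omitted:

```python
def bit_planes(image):
    """Split an 8-bit grayscale image into 8 binary planes;
    plane k holds bit k of every pixel (k = 0 is the least significant)."""
    return [[[(pixel >> k) & 1 for pixel in row] for row in image]
            for k in range(8)]

def recombine(planes):
    """Inverse operation: rebuild the image from its bit planes."""
    rows, cols = len(planes[0]), len(planes[0][0])
    return [[sum(planes[k][r][c] << k for k in range(8)) for c in range(cols)]
            for r in range(rows)]
```

Encrypting the significant planes more heavily than the low-order ones is what makes such schemes lightweight.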
Abstract: Maintaining the integrity and longevity of structures is essential in many industries, such as aerospace, nuclear, and petroleum. To achieve cost-effectiveness in large-scale petroleum drilling systems, a strong emphasis on structural durability and monitoring is required. This study focuses on the mechanical vibrations that occur in rotary drilling systems, which have a substantial impact on the structural integrity of drilling equipment. It specifically investigates axial, torsional, and lateral vibrations, which can lead to negative consequences such as bit-bounce, chaotic whirling, and high-frequency stick-slip. These events not only hinder drilling efficiency but also fatigue and damage the system's components, since they are difficult to detect and control in real time. The study investigates the dynamic interactions of these vibrations, specifically in their high-frequency modes, using field data obtained from measurement while drilling. The findings demonstrate the effect of strong coupling between the high-frequency modes of these vibrations on drilling system performance. The results highlight the importance of considering the interconnected impacts of these vibrations when designing and implementing robust control systems. Integrating these components can increase the durability of drill bits and drill strings and improve the ability to monitor and detect damage. Moreover, by exploiting these findings, the assessment of structural resilience in rotary drilling systems can be enhanced. Furthermore, the study demonstrates the capacity of structural health monitoring to improve the quality, dependability, and efficiency of rotary drilling systems in the petroleum industry.
Abstract: Microsoft Excel is essential for the End-User Approach (EUA), offering versatility in data organization, analysis, and visualization, as well as widespread accessibility. It fosters collaboration and informed decision-making across diverse domains. Conversely, Python is indispensable for professional programming due to its versatility, readability, extensive libraries, and robust community support. It enables efficient development, advanced data analysis, data mining, and automation, catering to diverse industries and applications. However, one primary issue when using Microsoft Excel with Python libraries is compatibility and interoperability. While Excel is a widely used tool for data storage and analysis, it may not seamlessly integrate with Python libraries, leading to challenges in reading and writing data, especially in complex or large datasets. Additionally, manipulating Excel files with Python may not always preserve formatting or formulas accurately, potentially affecting data integrity. Moreover, dependency on Excel's graphical user interface (GUI) for automation can limit scalability and reproducibility compared to Python's scripting capabilities. This paper presents an integration solution that empowers non-programmers to leverage Python's capabilities within the familiar Excel environment, enabling them to perform advanced data analysis and automation tasks without extensive programming knowledge. Based on feedback solicited from non-programmers who tested the integration solution, the case study evaluates the ease of implementation, performance, and compatibility of Python with different Excel versions.
Abstract: This research paper compares Excel and the R language for data analysis and concludes that R is more suitable for complex data analysis tasks. R's open-source nature makes it accessible to everyone, and its powerful data management and analysis tools make it suitable for handling complex data analysis tasks. It is also highly customizable, allowing users to create custom functions and packages to meet their specific needs. Additionally, R provides high reproducibility, making it easy to replicate and verify research results, and it has excellent collaboration capabilities, enabling multiple users to work on the same project simultaneously. These advantages make R a more suitable choice for complex data analysis tasks, particularly in scientific research and business applications. The findings of this study will help readers understand that R is not just a language that can handle more data than Excel, and demonstrate that R is essential to the field of data analysis. They will also help users and organizations make informed decisions regarding their data analysis needs and software preferences.
Funding: Supported by the Faculty of Medicine, Ministry of Education, Culture, Research and Technology, Tanjungpura University (No. 3483/UN22.9/PG/2021).
Abstract: Objective: To explain the use of concept mapping in a study of family members' experiences in taking care of people with cancer. Methods: This study used a phenomenological design. We describe the analytical process of using concept mapping in our phenomenological studies of family members' experiences in taking care of people with cancer. Results: We developed several concept maps that aided us in analyzing the data collected from the interviews. Conclusions: Concept mapping is suggested to researchers who intend to analyze data in any qualitative study, including those using a phenomenological design, because it is a time-efficient way of dealing with large amounts of qualitative data during the analytical process.
Funding: The Binary Systems of South and North (BSN) project (https://bsnp.info/).
Abstract: Reviewing the empirical and theoretical relationships between various parameters is a good way to understand more about contact binary systems. In this investigation, two-dimensional (2D) relationships for P–MV (system), P–L1,2, M1,2–L1,2, and q–Lratio were revisited. The sample comprises 118 contact binary systems with orbital periods shorter than 0.6 days whose absolute parameters were estimated based on the Gaia Data Release 3 parallax. We reviewed previous studies on 2D relationships and updated six parameter relationships, using Markov chain Monte Carlo and machine learning methods and comparing the outcomes. For comparison, we selected 22 contact binary systems from eight previous studies that had light curve solutions using spectroscopic data. The results show that these systems are in good agreement with the results of this study.
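Revisiting a 2D parameter relation ultimately reduces to fitting a line in (log) parameter space. A minimal ordinary-least-squares sketch (not the MCMC or machine-learning machinery actually used in the study) is:

```python
def linear_fit(x, y):
    """Ordinary least squares for y = a*x + b; returns (a, b)."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    slope = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
             / sum((xi - mean_x) ** 2 for xi in x))
    return slope, mean_y - slope * mean_x
```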
Funding: Supported by the National Key Research and Development Program of China (2022YFC2702502); the National Natural Science Foundation of China (32170742, 31970646, and 32060152); the Start Fund for Specially Appointed Professor of Jiangsu Province; the Hainan Province Science and Technology Special Fund (ZDYF2021SHFZ051); the Natural Science Foundation of Hainan Province (820MS053); the Start Fund for High-level Talents of Nanjing Medical University (NMUR2020009); the Marshal Initiative Funding of Hainan Medical University (JBGS202103); the Hainan Province Clinical Medical Center (QWYH202175); the Bioinformatics for Major Diseases Science Innovation Group of Hainan Medical University; and the Shenzhen Science and Technology Program (JCYJ20210324140407021).
Abstract: The application of single-cell RNA sequencing (scRNA-seq) in biomedical research has advanced our understanding of the pathogenesis of disease and provided valuable insights into new diagnostic and therapeutic strategies. With the expansion of capacity for high-throughput scRNA-seq, including clinical samples, the analysis of these huge volumes of data has become a daunting prospect for researchers entering this field. Here, we review the workflow for typical scRNA-seq data analysis, covering raw data processing and quality control, basic data analysis applicable to almost all scRNA-seq data sets, and advanced data analysis that should be tailored to specific scientific questions. While summarizing the current methods for each analysis step, we also provide an online repository of software and wrapped-up scripts to support the implementation. Recommendations and caveats are pointed out for some specific analysis tasks and approaches. We hope this resource will be helpful to researchers engaging with scRNA-seq, in particular for emerging clinical applications.
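As an illustration of the quality-control step that opens almost every scRNA-seq workflow, a cell filter might be sketched as follows. The field names and thresholds are hypothetical; real cutoffs are dataset-specific:

```python
def qc_filter(cells, min_genes=200, max_mito_frac=0.1):
    """Keep cells that express at least min_genes genes and whose
    mitochondrial read fraction is below max_mito_frac.
    Each cell is a dict with hypothetical keys n_genes and mito_frac."""
    return [c for c in cells
            if c["n_genes"] >= min_genes and c["mito_frac"] < max_mito_frac]
```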
Abstract: In the nonparametric data envelopment analysis literature, scale elasticity is evaluated in two alternative ways: using either the technical efficiency model or the cost efficiency model. This evaluation becomes problematic in several situations, for example (a) when input proportions change in the long run, (b) when inputs are heterogeneous, and (c) when firms face ex-ante price uncertainty in making their production decisions. To address these situations, a scale elasticity evaluation was performed using a value-based cost efficiency model. However, this alternative value-based scale elasticity evaluation is sensitive to the uncertainty and variability underlying input and output data. Therefore, in this study, we introduce a stochastic cost-efficiency model based on chance-constrained programming to develop a value-based measure of the scale elasticity of firms facing data uncertainty. An illustrative empirical application to the Indian banking industry, comprising 71 banks over eight years (1998-2005), was made to compare inferences about their efficiency and scale properties. The key findings are as follows. First, the deterministic model and our proposed stochastic model yield distinctly different efficiency and scale elasticity scores at various tolerance levels of the chance constraints. However, both models yield the same results at a tolerance level of 0.5, implying that the deterministic model is a special case of the stochastic model in that it reveals the same efficiency and returns-to-scale characterizations of banks. Second, the stochastic model generates higher efficiency scores for inefficient banks than its deterministic counterpart. Third, public banks exhibit higher efficiency than private and foreign banks. Finally, public and old private banks mostly exhibit either decreasing or constant returns to scale, whereas foreign and new private banks experience either increasing or decreasing returns to scale. Although the application of our proposed stochastic model is illustrative, it can potentially be applied to all firms in information- and distribution-intensive industries with high fixed costs, which have ample potential for reaping scale and scope benefits.
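The claim that the deterministic model is the special case at a tolerance level of 0.5 follows from the normal deterministic equivalent of a chance constraint, since the standard normal quantile at 0.5 is zero. A sketch with hypothetical names:

```python
from statistics import NormalDist

def deterministic_equivalent(mean, std, tolerance):
    """Deterministic equivalent of the chance constraint
    Pr(x <= b) >= tolerance with b ~ N(mean, std^2):
    the bound becomes mean + z_tolerance * std, where z_tolerance
    is the standard normal quantile at the tolerance level."""
    return mean + NormalDist().inv_cdf(tolerance) * std
```

At tolerance 0.5 the quantile term vanishes and the constraint collapses to its deterministic form, matching the paper's observation.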
Funding: Supported by the National Key R&D Program of China (2021YFA0718500, 2022YFF0711404); the Strategic Priority Research Program on Space Science, Chinese Academy of Sciences (grant Nos. XDA15360300, XDA15052700, and E02212A02S); the National Natural Science Foundation of China (grant Nos. U2031205 and 12133007); and the Strategic Priority Research Program on Space Science, Chinese Academy of Sciences, grant No. XDA15360000.
Abstract: The Gravitational wave high-energy Electromagnetic Counterpart All-sky Monitor (GECAM), consisting of two microsatellites, is designed to detect gamma-ray bursts associated with gravitational-wave events. Here, we introduce the real-time burst alert system of GECAM, which adopts the BeiDou-3 short message communication service. We present the post-trigger operations, the detailed ground-based analysis, and the performance of the system. In the first year of in-flight operation, GECAM was triggered by 42 gamma-ray bursts. The GECAM real-time burst alert system can distribute an alert within ~1 minute of a trigger, enabling timely follow-up observations.
Funding: Supported by the NSFC-Zhejiang Joint Fund for the Integration of Industrialization and Informatization (U1909208), the Science and Technology Major Project of Changsha (kh2202004), and the Changsha Municipal Natural Science Foundation (kq2202106).
Abstract: The electrocardiogram (ECG) is a low-cost, simple, fast, and non-invasive test. It can reflect the heart's electrical activity and provide valuable diagnostic clues about the health of the entire body. Therefore, the ECG has been widely used in various biomedical applications such as arrhythmia detection, disease-specific detection, mortality prediction, and biometric recognition. In recent years, ECG-related studies have been carried out using a variety of publicly available datasets, with many differences in the datasets used, data preprocessing methods, targeted challenges, and modeling and analysis techniques. Here we systematically summarize and analyze ECG-based automatic analysis methods and applications. Specifically, we first review 22 commonly used public ECG datasets and provide an overview of data preprocessing processes. We then describe some of the most widely used applications of ECG signals and analyze the advanced methods involved in these applications. Finally, we elucidate some of the challenges in ECG analysis and provide suggestions for further research.
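A typical first preprocessing step, baseline-wander removal, can be sketched with a simple centered moving average. Real pipelines use proper digital filters; this is only illustrative:

```python
def remove_baseline(signal, window=5):
    """Subtract a centered moving average from the signal to suppress
    slow baseline wander; window is the averaging length in samples."""
    half = window // 2
    detrended = []
    for i in range(len(signal)):
        segment = signal[max(0, i - half):i + half + 1]
        detrended.append(signal[i] - sum(segment) / len(segment))
    return detrended
```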
Funding: Supported by the Major Scientific and Technological Research Project of the Chongqing Education Commission (KJZD-M202000802) and the first batch of Industrial and Informatization Key Special Fund Support Projects in Chongqing in 2022 (2022000537).
Abstract: As COVID-19 poses a major threat to people's health and the economy, there is an urgent need for forecasting methodologies that can anticipate its trajectory efficiently. In non-stationary time series forecasting tasks, the predicted values frequently lag behind the real values. To address this problem, this paper proposes an enhanced Multilayer Deep Time Convolutional Neural Network (MDTCNet) for COVID-19 prediction, which combines a multilayer deep-time convolutional network with a feature fusion network. In particular, it can capture the deep features and temporal dependencies in uncertain time series, and the features can then be combined using a feature fusion network and a multilayer perceptron. Finally, experimental verification is conducted on the task of predicting real daily confirmed COVID-19 cases worldwide and in the United States under uncertainty, realizing short-term and long-term prediction of daily confirmed cases, verifying the effectiveness and accuracy of the suggested prediction method, and reducing the lag in the prediction results.
Abstract: Today, we are living in the era of "big data," where massive amounts of data are used for quantitative decisions and communication management. With the continuous penetration of big data-based intelligent technology into all fields of human life, the enormous commercial value inherent in the data industry has become a crucial force driving the aggregation of new industries. For the publishing industry, introducing big data and related intelligent technologies, such as data intelligence analysis and scenario services, into its structure and value system has become an effective path to expanding and reshaping the demand space of publishing products, content decisions, the workflow chain, and marketing direction. Through the integration and reconstruction of big data, cloud computing, artificial intelligence, and other related technologies, a generalized publishing industry pattern dominated by virtual interaction is expected to form in the future.
Abstract: Law enforcement remains the main strategy used to combat poaching and accounts for a high share of the budget in protected area management. Studies on the efficiency of wildlife law enforcement in protected areas are limited. This study analyzed the economic efficiency of wildlife law enforcement, in terms of resources used and outputs generated, in three different protected areas (PAs) of the Serengeti ecosystem: Serengeti National Park (SENAPA), Ikorongo/Grumeti Game Reserves (IGGR), and Ikona Wildlife Management Area (IWMA). Three years (2010-2012) of monthly data on wildlife law enforcement inputs and outputs were collected from the respective PA authorities and supplemented with key informant interviews and secondary data. Questionnaire surveys were conducted with wildlife law enforcement staff. Shadow prices were estimated for non-marketed inputs, and market prices used for marketed inputs. Data Envelopment Analysis (DEA) was used to estimate economic efficiency under Variable Return to Scale (VRS) and Constant Return to Scale (CCR) assumptions. Results revealed that wildlife law enforcement in all PAs was economically inefficient, with less inefficiency observed in IWMA. The lower inefficiency in IWMA is likely attributable to the sense of ownership and responsibility created through community-based conservation, which resulted in decreased law enforcement costs. A slacks evaluation revealed the potential to reduce fuel consumption, the number of patrol vehicles, rations, and prosecution efforts at different magnitudes across the studied protected areas, and an equal potential to recruit more rangers while maintaining resting time. These findings form the basis for monitoring and evaluation of resource usage to enhance efficiency. It is further recommended to enhance community participation in conservation in SENAPA and IGGR to lower law enforcement costs. Collaboration between protected areas, police, and the judiciary is fundamental to enhancing enforcement efficiency. Despite the age of the dataset, these findings remain relevant, since neither conservation policy nor the institutional framework has changed substantially in the last decade.
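In the single-input, single-output case the DEA idea reduces to comparing output/input ratios against the best decision-making unit. The sketch below is a toy stand-in for the CCR/VRS linear programs actually solved:

```python
def naive_ccr_efficiency(inputs, outputs):
    """Efficiency of each DMU as its output/input ratio divided by the
    best ratio in the sample (1.0 marks the efficient frontier)."""
    ratios = [o / i for i, o in zip(inputs, outputs)]
    best = max(ratios)
    return [r / best for r in ratios]
```

With multiple inputs and outputs, each DMU instead solves a linear program choosing the weights most favorable to itself, which is what DEA software implements.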
Funding: Supported by the Huaihua University Science Foundation under Grant HHUY2019-24.
Abstract: With the rapid development of the Internet, many enterprises have launched network platforms. When users browse, search, and click the products of these platforms, most platforms keep records of these behaviors; these records are often heterogeneous and are called log data. Effectively analyzing and managing these heterogeneous log data lets enterprises grasp the behavior characteristics of their platform users in time, make targeted recommendations to users, increase product sales, and accelerate development. We first design the system following the big data process of collection, storage, analysis, and visualization. We then build a Hadoop cluster to process the log data using HDFS storage technology, Yarn resource management technology, and Nginx load-balancing technology, and analyze the log data with MapReduce processing technology and the Hive data warehouse to obtain the results. Finally, the results are displayed visually, and a log data analysis system is successfully constructed. Practice has proved that the system effectively realizes the collection, analysis, and visualization of log data and can accurately support enterprises' product recommendations. The system is stable and effective.
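The MapReduce analysis of log records can be illustrated in miniature. The log schema below ("user action product") is hypothetical, and Hadoop performs the shuffle between the two phases:

```python
from collections import defaultdict

def map_phase(log_lines):
    """Map step: emit a (user, 1) pair for every click record."""
    for line in log_lines:
        user, action, _product = line.split()
        if action == "click":
            yield user, 1

def reduce_phase(pairs):
    """Reduce step: sum the counts for each user key,
    as Hadoop would after grouping by key."""
    counts = defaultdict(int)
    for user, count in pairs:
        counts[user] += count
    return dict(counts)
```

Per-user click totals like these are the kind of aggregate a recommendation layer would consume downstream.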
文摘Air quality is a critical concern for public health and environmental regulation. The Air Quality Index (AQI), a widely adopted index by the US Environmental Protection Agency (EPA), serves as a crucial metric for reporting site-specific air pollution levels. Accurately predicting air quality, as measured by the AQI, is essential for effective air pollution management. In this study, we aim to identify the most reliable regression model among linear discriminant analysis (LDA), quadratic discriminant analysis (QDA), logistic regression, and K-nearest neighbors (KNN). We conducted four different regression analyses using a machine learning approach to determine the model with the best performance. By employing the confusion matrix and error percentages, we selected the best-performing model, which yielded prediction error rates of 22%, 23%, 20%, and 27%, respectively, for LDA, QDA, logistic regression, and KNN models. The logistic regression model outperformed the other three statistical models in predicting AQI. Understanding these models' performance can help address an existing gap in air quality research and contribute to the integration of regression techniques in AQI studies, ultimately benefiting stakeholders like environmental regulators, healthcare professionals, urban planners, and researchers.
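The reported error percentages come directly from each model's confusion matrix; the calculation is simply one minus the trace over the total, as sketched below:

```python
def error_rate(confusion):
    """Overall error rate from a square confusion matrix (rows = true
    class, columns = predicted class): 1 - correct / total."""
    total = sum(sum(row) for row in confusion)
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    return 1 - correct / total
```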