Data assimilation (DA) and uncertainty quantification (UQ) are extensively used in analysing and reducing error propagation in high-dimensional spatial-temporal dynamics. Typical applications span from computational fluid dynamics (CFD) to geoscience and climate systems. Recently, much effort has been devoted to combining DA, UQ and machine learning (ML) techniques. These research efforts seek to address critical challenges in high-dimensional dynamical systems, including but not limited to dynamical system identification, reduced-order surrogate modelling, error covariance specification and model error correction. A large number of the developed techniques and methodologies are broadly applicable across numerous domains, making a comprehensive guide necessary. This paper provides the first overview of state-of-the-art research in this interdisciplinary field, covering a wide range of applications. The review is aimed at ML scientists who seek to apply DA and UQ techniques to improve the accuracy and interpretability of their models, but also at DA and UQ experts who intend to integrate cutting-edge ML approaches into their systems. Therefore, this article places special focus on how ML methods can overcome the existing limits of DA and UQ, and vice versa. Some exciting perspectives of this rapidly developing research field are also discussed. Index Terms: Data assimilation (DA), deep learning, machine learning (ML), reduced-order modelling, uncertainty quantification (UQ).
The advent of healthcare information management systems (HIMSs) continues to produce large volumes of healthcare data for patient care and for compliance and regulatory requirements at a global scale. Analysis of this big data allows for boundless potential outcomes for discovering knowledge. Big data analytics (BDA) in healthcare can, for instance, help determine causes of diseases, generate effective diagnoses, enhance QoS guarantees by increasing the efficiency of healthcare delivery and the effectiveness and viability of treatments, generate accurate predictions of readmissions, enhance clinical care, and pinpoint opportunities for cost savings. However, BDA implementations in any domain are generally complicated and resource-intensive, with a high failure rate and no roadmap or success strategies to guide practitioners. In this paper, we present a comprehensive roadmap to derive insights from BDA in the healthcare (patient care) domain, based on the results of a systematic literature review. We initially determine big data characteristics for healthcare and then review BDA applications to healthcare in academic research, focusing particularly on NoSQL databases. We also identify the limitations and challenges of these applications and justify the potential of NoSQL databases to address these challenges and further enhance BDA healthcare research. We then propose and describe a state-of-the-art BDA architecture called Med-BDA for the healthcare domain, which solves all current BDA challenges and is based on the latest zeta big data paradigm. We also present success strategies to ensure the working of Med-BDA, along with outlining the major benefits of BDA applications to healthcare. Finally, we compare our work with other related literature reviews across twelve hallmark features to justify the novelty and importance of our work. The aforementioned contributions are collectively unique and clearly present a roadmap for clinical administrators, practitioners and professionals to successfully implement BDA initiatives in their organizations.
Excavation under complex geological conditions requires effective and accurate geological forward-prospecting to detect unfavorable geological structures and estimate the classification of surrounding rock in front of the tunnel face. In this work, a forward-prediction method for tunnel geology and classification of surrounding rock is developed based on seismic wave velocity layered tomography. In particular, for the strong multi-solution problem of wave velocity inversion caused by the few ray paths in the narrow space of the tunnel, a layered inversion based on regularization is proposed. By reducing the inversion area at each iteration step and applying a straight-line interface assumption, the convergence and accuracy of wave velocity inversion are effectively improved. Furthermore, a surrounding rock classification network based on an autoencoder is constructed. The mapping relationship between wave velocity and classification of surrounding rock is established with density, Poisson's ratio and elastic modulus as links. Two numerical examples with geological conditions similar to those in the field tunnel and a field case study in an urban subway tunnel verify the potential of the proposed method for practical application.
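As a much-simplified illustration of the regularized inversion idea in this abstract, the sketch below solves a tiny travel-time tomography problem with Tikhonov regularization. The ray-length operator `G`, the layer velocities, the noise, and the regularization weight `lam` are all assumed toy values, not the authors' actual setup:

```python
import numpy as np

# Simplified travel-time tomography: t = G s, where s holds layer
# slownesses (1/velocity) and G holds ray-path lengths per layer.
G = np.array([[10.0,  0.0,  0.0],
              [10.0, 12.0,  0.0],
              [10.0, 12.0, 15.0]])            # ray lengths in metres (assumed)
s_true = np.array([1/3000, 1/3500, 1/2500])   # true slownesses (s/m)
t_obs = G @ s_true + 1e-5 * np.array([0.3, -0.2, 0.4])  # noisy travel times

lam = 1e-4  # regularization weight (assumed)
# Tikhonov solution: minimize ||G s - t||^2 + lam * ||s||^2
s_est = np.linalg.solve(G.T @ G + lam * np.eye(3), G.T @ t_obs)
v_est = 1.0 / s_est  # recovered layer velocities
```

The regularization term damps the ill-posedness that, in the tunnel setting, comes from the small number of ray paths; the paper additionally restricts the inversion to one layer at a time, which this sketch does not reproduce.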
The decision-making method for tunnel boring machine (TBM) operating parameters provides significant guidance for safe and efficient TBM construction, and it has been one of the research hotspots in TBM tunneling. To this end, this paper introduces an intelligent decision-making method for TBM operating parameters based on multiple constraints and objective optimization. First, linear cutting tests and numerical simulations are used to investigate the physical rules relating different cutting parameters (penetration, cutter spacing, etc.) to rock compressive strength. Second, a dual-driven mapping between rock parameters and TBM operating parameters, based on data mining and the physical rules of rock breaking, is established with high accuracy by combining rock-breaking rules and deep neural networks (DNNs). The decision-making method is built on this dual-driven mapping, using the effective rock-breaking capacity and the rated values of mechanical parameters as constraints and the total excavation cost as the optimization objective. The best operating parameters can be obtained by searching for the revolutions per minute and penetration that correspond to the extremum of the constrained objective function. The practicability and effectiveness of the developed decision-making model are verified in the Second Water Source Channel of Hangzhou, China, with the average penetration rate increasing by 11.3% and the total cost decreasing by 10%.
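The constrained search over revolutions per minute and penetration can be sketched as a grid search under rated-capacity constraints. Every surrogate function and coefficient below is an illustrative assumption standing in for the paper's dual-driven mapping and cost model:

```python
import itertools

# Toy stand-ins for the dual-driven mapping and cost model (assumed forms).
def thrust_required(rpm, pen):            # kN, assumed surrogate
    return 8000 + 150 * pen + 5 * rpm

def torque_required(rpm, pen):            # kN*m, assumed surrogate
    return 2000 + 300 * pen - 2 * rpm

def cost_per_meter(rpm, pen):             # faster advance lowers cost,
    advance = rpm * pen / 1000.0          # wear terms push it up (assumed)
    return 50.0 / advance + 0.02 * rpm + 0.5 * pen

THRUST_MAX, TORQUE_MAX = 12000, 6000      # rated machine limits (assumed)

best = None
for rpm, pen in itertools.product(range(4, 11), range(2, 13)):
    # rpm in r/min, pen in mm/rev; skip operating points beyond rated capacity
    if thrust_required(rpm, pen) > THRUST_MAX:
        continue
    if torque_required(rpm, pen) > TORQUE_MAX:
        continue
    c = cost_per_meter(rpm, pen)
    if best is None or c < best[0]:
        best = (c, rpm, pen)

cost, rpm_opt, pen_opt = best             # constrained cost minimizer
```

In the paper the mapping is learned by DNNs rather than fixed closed forms, but the decision step has this same shape: evaluate cost only where the rock-breaking and mechanical constraints hold, then take the extremum.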
BACKGROUND: Alpha-1 antitrypsin deficiency is a rare genetic disease and a leading cause of inherited alterations in plasma protein metabolism (APPM). AIM: To understand the prevalence, burden and progression of liver disease in patients with APPM, including alpha-1 antitrypsin deficiency. METHODS: We conducted a retrospective analysis of anonymized patient-level claims data from a German health insurance provider (AOK PLUS). The APPM cohort comprised patients with APPM (identified using the German Modification of the International Classification of Diseases, 10th Revision [ICD-10-GM] code E88.0 between 01/01/2010 and 30/09/2020) and incident liver disease (ICD-10-GM codes K74, K70.2-3 and K71.7 between 01/01/2012 and 30/09/2020). The control cohort comprised patients without APPM but with incident liver disease. Outcomes were incidence/prevalence of liver disease in patients with APPM, demographics/baseline characteristics, diagnostic procedures, progression-free survival (PFS), disease progression and mortality. RESULTS: Overall, 2680 and 26299 patients were included in the APPM (fibrosis, 96; cirrhosis, 2584) and control (fibrosis, 1444; cirrhosis, 24855) cohorts, respectively. Per 100000 individuals, the annual incidence and prevalence of APPM and liver disease were 10-15 and 36-51, respectively. In the APPM cohort, median survival was 4.7 years [95% confidence interval (CI): 3.5-7.0] and 2.5 years (95% CI: 2.3-2.8) in patients with fibrosis and cirrhosis, respectively. A higher proportion of patients in the APPM cohort experienced disease progression (92.0%) compared with the control cohort (67.2%). Median PFS was shorter in the APPM cohort (0.9 years, 95% CI: 0.7-1.1) compared with the control cohort (3.7 years, 95% CI: 3.6-3.8; P<0.001). Patients with cirrhosis in the control cohort had longer event-free survival for ascites, hepatic encephalopathy, hepatic failure and esophageal/gastric varices than patients with cirrhosis in the APPM cohort (P<0.001). Patients with fibrosis in the control cohort had longer event-free survival for ascites, cirrhosis, hepatic failure and esophageal/gastric varices than patients with fibrosis in the APPM cohort (P<0.001). In the APPM cohort, the most common diagnostic procedures within 12 mo after the first diagnosis of liver disease were imaging procedures (66.3%) and laboratory tests (51.0%). CONCLUSION: Among patients with liver disease, those with APPM experience a substantial burden and earlier liver disease progression than patients without APPM.
In order to promote the application of clean energy technology in clothing and to promote the integration of industrial development with artificial intelligence wearable technology, this study elaborates the energy application characteristics of intelligent wearable products at home and abroad and their applications in different fields, in view of the current state of research on wearable technology in the textile and clothing field. Wearable distributed generation technologies are classified, and a creative clothing design for detecting climate temperature is presented. Based on the monitoring of body temperature, changes in the clothing pattern's color can reflect the wearer's health and emotional status. At the same time, the design can also be applied to the screening of abnormal body temperature during the COVID-19 pandemic.
Consciousness is one of the unique features of living creatures, and it is also the root of biological intelligence. Up to now, no machine or robot has had consciousness. Will artificial intelligence (AI) ever be conscious? Will robots have real intelligence without consciousness? The most primitive consciousness is the perception and expression of self-existence. In order to perceive the existence of the concept of 'I', a creature must first have a perceivable boundary, such as skin, to separate 'I' from 'non-I'. For robots to have self-awareness, they would also need to be wrapped in a similar sensory membrane. Nowadays, as intelligent tools, AI systems should be regarded as external extensions of human intelligence. These tools are unconscious. The development of AI shows that intelligence can exist without consciousness. When human beings move from AI into the era of life intelligence, it is not that AI will have become conscious, but that conscious lives will possess strong AI. Therefore, it becomes all the more necessary to be careful in applying AI to living creatures, even to lower-level animals with only primitive consciousness. The subversive revolution of such applications demands more careful thought.
This study uses an empirical analysis to quantify the downstream analysis effects of data pre-processing choices. Bootstrap data simulation is used to measure the bias-variance decomposition of an empirical risk function, mean square error (MSE). Results of the risk function decomposition are used to measure the effects of model development choices on model bias, variance, and irreducible error. Measurements of bias and variance are then applied as diagnostic procedures for model pre-processing and development. Best-performing model-normalization-data structure combinations were found to illustrate the downstream analysis effects of these model development choices. In addition, results found from simulations were verified and expanded to include additional data characteristics (imbalanced, sparse) by testing on benchmark datasets available from the UCI Machine Learning Repository. Normalization results on benchmark data were consistent with those found using simulations, while also illustrating that more complex and/or non-linear models provide better performance on datasets with additional complexities. Finally, applying the findings from simulation experiments to previously tested applications led to equivalent or improved results with less model development overhead and processing time.
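The bootstrap bias-variance decomposition mentioned here can be sketched in a few lines. The data-generating process, the deliberately biased model, and all constants below are assumed for illustration only, not the study's actual pipeline:

```python
import random
import statistics

random.seed(0)

# Assumed data-generating process: y = 2x + Gaussian noise (sigma = 0.5).
def sample_point():
    x = random.uniform(0, 1)
    return x, 2.0 * x + random.gauss(0, 0.5)

def fit_mean_model(data):
    # Deliberately biased model: predicts the mean of y, ignoring x.
    m = statistics.mean(y for _, y in data)
    return lambda x: m

x0, B = 0.9, 400                  # fixed query point, bootstrap rounds
train = [sample_point() for _ in range(50)]

preds = []
for _ in range(B):
    boot = [random.choice(train) for _ in train]   # bootstrap resample
    preds.append(fit_mean_model(boot)(x0))

f_true = 2.0 * x0                 # noiseless target at x0
bias_sq = (statistics.mean(preds) - f_true) ** 2
variance = statistics.pvariance(preds)
# MSE at x0 decomposes as bias^2 + variance + sigma^2 (0.25 here),
# so a pre-processing choice can be diagnosed by which term it moves.
```

Swapping `fit_mean_model` for different model-normalization combinations, as the study does, shows whether a given choice mainly shifts the bias term or the variance term.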
The quality of products manufactured or procured by organizations is an important aspect of their survival in the global market. The quality control processes put in place by organizations can be resource-intensive, but substantial savings can be realized by using acceptance sampling in conjunction with batch testing. This paper considers a batch testing model, based on the quality control process, in which batches that test positive are re-tested. The results show that re-testing greatly improves efficiency over one-stage batch testing based on quality control. This is observed using the Asymptotic Relative Efficiency (ARE), where for the values of p considered the computed ARE > 1, implying that our estimator has a smaller variance than the one-stage batch testing estimator. It was also found that the model is more efficient than the classical two-stage batch testing for relatively high values of the proportion.
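The ARE machinery used here is a ratio of estimator variances. The paper's re-tested estimator is not reproduced below; as a hedged illustration only, this sketch Monte Carlo-estimates the variance of the standard one-stage group-testing MLE and compares it with individual testing of the same number of items (prevalence `p`, batch size `k`, and counts are assumed toy values):

```python
import random
import statistics

random.seed(1)
p, k, n_batches, reps = 0.05, 10, 100, 2000   # prevalence, batch size (assumed)

def one_stage_estimate():
    # A batch tests positive iff at least one of its k members is positive.
    positives = sum(
        any(random.random() < p for _ in range(k)) for _ in range(n_batches)
    )
    t = positives / n_batches
    return 1.0 - (1.0 - t) ** (1.0 / k)       # standard group-testing MLE of p

est = [one_stage_estimate() for _ in range(reps)]
var_batch = statistics.pvariance(est)

# Variance of estimating p by testing all k * n_batches items individually:
var_individual = p * (1 - p) / (k * n_batches)

are = var_individual / var_batch   # ratio-of-variances form of the ARE
```

Here batch testing screens the same 1000 items with only 100 tests at a modest variance cost; the paper's point is that adding a re-testing stage shifts this ratio further in the batch estimator's favour (ARE > 1).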
Previous studies have shown that there is a potential semantic dependency between part-of-speech and semantic roles. At the same time, the predicate-argument structure in a sentence is important information for the semantic role labeling task. In this work, we introduce an auxiliary deep neural network model, which models the semantic dependency between part-of-speech and semantic roles and incorporates predicate-argument information into semantic role labeling. Within a joint learning framework, part-of-speech tagging is used as an auxiliary task to improve the results of semantic role labeling. In addition, we introduce an argument recognition layer in the training process of the main task (semantic role labeling), so that the argument-related structural information selected by the predicate through the attention mechanism is used to assist the main task. Because the model makes full use of the semantic dependency between part-of-speech and semantic roles and the structural information of the predicate-argument relation, it achieves an F1 value of 89.0% on the WSJ test set of CoNLL-2005, surpassing the existing state-of-the-art model by about 0.8%.
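The joint-learning setup described here typically combines a main-task loss with a weighted auxiliary-task loss over shared parameters. A minimal sketch, with toy probabilities and an assumed weighting hyperparameter `lambda_aux` (not the paper's values):

```python
import math

def cross_entropy(pred_probs, gold_index):
    # Negative log-likelihood of the gold label under the predicted distribution.
    return -math.log(pred_probs[gold_index])

srl_pred = [0.7, 0.2, 0.1]   # P(semantic role) for one token (toy values)
pos_pred = [0.6, 0.3, 0.1]   # P(part-of-speech) for the same token (toy values)
lambda_aux = 0.5             # auxiliary-task weight (assumed)

# Main-task loss plus weighted auxiliary loss; in the paper both heads
# share an encoder, so minimizing this updates the shared parameters.
loss = cross_entropy(srl_pred, 0) + lambda_aux * cross_entropy(pos_pred, 0)
```

Tuning `lambda_aux` controls how strongly the part-of-speech signal regularizes the shared representation used for semantic role labeling.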
Rotational Bose-Einstein condensates can exhibit quantized vortices as topological excitations. In this study, the ground and excited states of rotational Bose-Einstein condensates are systematically studied by calculating the stationary points of the Gross-Pitaevskii energy functional. Various excited states and their connections at different rotational frequencies are revealed in solution landscapes constructed with the constrained high-index saddle dynamics method. Four excitation mechanisms are identified: vortex addition, rearrangement, merging, and splitting. We demonstrate changes in the ground state with increasing rotational frequency and decipher the evolution of the stability of the ground states.
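For reference, the rotating Gross-Pitaevskii energy functional whose stationary points are computed has the standard form below (standard-textbook notation; the paper's symbols and normalization may differ):

```latex
% Rotating Gross-Pitaevskii energy under the unit-mass constraint:
E[\psi] = \int \left[ \tfrac{1}{2}\,|\nabla\psi|^{2}
        + V(\mathbf{x})\,|\psi|^{2}
        + \tfrac{\beta}{2}\,|\psi|^{4}
        - \Omega\,\psi^{*} L_{z}\psi \right] \mathrm{d}\mathbf{x},
\qquad \|\psi\|_{L^{2}} = 1,
```

where $V$ is the trapping potential, $\beta$ the interaction strength, $\Omega$ the rotational frequency, and $L_{z} = -\mathrm{i}\,(x\,\partial_{y} - y\,\partial_{x})$ the angular-momentum operator. Ground states are constrained minimizers of $E$; the excited states studied here are higher-index saddle points of the same functional.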
Background: The AbSeS classification defines specific phenotypes of patients with intra-abdominal infection based on (1) the setting of infection onset (community-acquired, early-onset hospital-acquired, or late-onset hospital-acquired), (2) the presence or absence of either localized or diffuse peritonitis, and (3) the severity of disease expression (infection, sepsis, or septic shock). This classification system demonstrated reliable risk stratification in intensive care unit (ICU) patients with intra-abdominal infection. This study aimed to describe the epidemiology of ICU patients with pancreatic infection and assess the relationship between the components of the AbSeS classification and mortality. Methods: This was a secondary analysis of an international observational study ("AbSeS") investigating ICU patients with intra-abdominal infection. Only patients with pancreatic infection were included in this analysis (n=165). Mortality was defined as ICU mortality within 28 days of observation for patients discharged earlier from the ICU. Relationships with mortality were assessed using logistic regression analysis and reported as odds ratios (OR) with 95% confidence intervals (CI). Results: The overall mortality was 35.2% (n=58). The independent risk factors for mortality included older age (OR=1.03, 95% CI: 1.0 to 1.1, P=0.023), localized peritonitis (OR=4.4, 95% CI: 1.4 to 13.9, P=0.011), and persistent signs of inflammation at day 7 (OR=9.5, 95% CI: 3.8 to 23.9, P<0.001) or after the implementation of additional source control interventions within the first week (OR=4.0, 95% CI: 1.3 to 12.2, P=0.013). Gram-negative bacteria were most frequently isolated (n=58, 49.2%) without clinically relevant differences in microbial etiology between survivors and non-survivors. Conclusions: In pancreatic infection, challenging source/damage control and ongoing pancreatic inflammation appear to be the strongest contributors to an unfavorable short-term outcome. In this limited series, essentials of the AbSeS classification, such as the setting of infection onset, diffuse peritonitis, and severity of disease expression, were not associated with an increased mortality risk.
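Odds ratios and confidence intervals of the kind reported above are recovered from a fitted logistic regression coefficient and its standard error. The coefficient and standard error below are illustrative values chosen only to land on the same scale as the localized-peritonitis estimate (OR=4.4, 95% CI: 1.4 to 13.9); they are not taken from the study:

```python
import math

# Assumed coefficient and standard error for a binary risk factor.
beta, se = 1.482, 0.588

odds_ratio = math.exp(beta)               # OR = e^beta
ci_low = math.exp(beta - 1.96 * se)       # Wald 95% confidence interval,
ci_high = math.exp(beta + 1.96 * se)      # exponentiated back to the OR scale
```

This is the standard Wald construction: the interval is symmetric on the log-odds scale and becomes asymmetric (here roughly 1.4 to 13.9) after exponentiation.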
One of the key goals of the FAIR guiding principles is defined by its final principle: to optimize data sets for reuse by both humans and machines. To do so, data providers need to implement and support consistent, machine-readable metadata to describe their data sets. This can seem like a daunting task for data providers, whether it is determining what level of detail should be provided in the provenance metadata or figuring out which common shared vocabularies should be used. Additionally, for existing data sets it is often unclear what steps should be taken to enable maximal, appropriate reuse. Data citation already plays an important role in making data findable and accessible, providing persistent and unique identifiers plus metadata for over 16 million data sets. In this paper, we discuss how data citation and its underlying infrastructure, in particular the associated metadata, provide an important pathway for enabling FAIR data reuse.
Deep learning (DL) is one of the fastest-growing topics in materials data science, with rapidly emerging applications spanning atomistic, image-based, spectral, and textual data modalities. DL allows analysis of unstructured data and automated identification of features. The recent development of large materials databases has fueled the application of DL methods to atomistic prediction in particular. In contrast, advances in image and spectral data have largely leveraged synthetic data enabled by high-quality forward models, as well as generative unsupervised DL methods. In this article, we present a high-level overview of deep learning methods, followed by a detailed discussion of recent developments of deep learning in atomistic simulation, materials imaging, spectral analysis, and natural language processing. For each modality we discuss applications involving both theoretical and experimental data, typical modeling approaches with their strengths and limitations, and relevant publicly available software and datasets. We conclude the review with a discussion of recent cross-cutting work related to uncertainty quantification in this field and a brief perspective on limitations, challenges, and potential growth areas for DL methods in materials science.
In this paper, we suggest a new methodology which incorporates neural networks (NNs) into data assimilation (DA). Focusing on structural model uncertainty, we propose a framework for integrating NNs with physical models through DA algorithms, to improve both the assimilation process and the forecasting results. The NNs are iteratively trained as observational data are updated. The main DA models used here are the Kalman filter and the variational approaches. The effectiveness of the proposed algorithm is validated by examples and by a sensitivity study.
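One of the DA building blocks named here, the Kalman filter analysis step, can be sketched on a toy two-dimensional state (all matrices below are assumed toy values; the paper's systems are far larger):

```python
import numpy as np

# One Kalman analysis step: blend the forecast x_f with an observation y.
x_f = np.array([1.0, 0.0])                 # forecast state (toy values)
P = np.array([[0.5, 0.0], [0.0, 0.5]])     # forecast error covariance
H = np.array([[1.0, 0.0]])                 # observe the first component only
R = np.array([[0.1]])                      # observation error covariance
y = np.array([1.4])                        # incoming observation

K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
x_a = x_f + K @ (y - H @ x_f)                  # analysis (updated) state
P_a = (np.eye(2) - K @ H) @ P                  # analysis error covariance
```

The analysis pulls the observed component toward the measurement in proportion to the gain, and shrinks its error variance; in the paper's framework the NN is retrained as each such analysis incorporates new observations.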
Numerical simulations are widely used as a predictive tool to better understand complex air flows and pollution transport on the scale of individual buildings, city blocks, and entire cities. To improve the prediction of air flows and pollution transport, we propose a variational data assimilation (VarDA) model which assimilates data from sensors into the open-source, finite-element, fluid dynamics model Fluidity. VarDA is based on the minimization of a function which estimates the discrepancy between numerical results and observations, assuming that the two sources of information, forecast and observations, have errors that are adequately described by error covariance matrices. The conditioning of the numerical problem is dominated by the condition number of the background error covariance matrix, which is ill-conditioned. In this paper, a preconditioned VarDA model is presented; it is based on a reduced background error covariance matrix. The Empirical Orthogonal Functions (EOF) method is used to alleviate the computational cost and reduce the space dimension. Experimental results are provided assuming observed values provided by sensors at positions mainly located on the roofs of buildings.
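The function minimized in such a VarDA scheme is, in its standard form (standard variational-DA notation; the paper's symbols may differ):

```latex
% Variational cost: background term plus observation term.
J(\mathbf{x}) = \tfrac{1}{2}\,(\mathbf{x}-\mathbf{x}_{b})^{\mathrm{T}}\,\mathbf{B}^{-1}\,(\mathbf{x}-\mathbf{x}_{b})
              + \tfrac{1}{2}\,\big(\mathbf{y}-\mathcal{H}(\mathbf{x})\big)^{\mathrm{T}}\,\mathbf{R}^{-1}\,\big(\mathbf{y}-\mathcal{H}(\mathbf{x})\big),
```

where $\mathbf{x}_{b}$ is the background (forecast) state, $\mathbf{y}$ the sensor observations, $\mathcal{H}$ the observation operator, and $\mathbf{B}$, $\mathbf{R}$ the background and observation error covariance matrices. The conditioning issue discussed above enters through $\mathbf{B}^{-1}$: the EOF truncation replaces $\mathbf{B}$ by a low-rank approximation $\mathbf{V}_{r}\boldsymbol{\Lambda}_{r}\mathbf{V}_{r}^{\mathrm{T}}$, which both reduces the dimension of the control variable and improves the condition number of the minimization.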
While the forward and backward modeling of the process-structure-property chain has received a lot of attention from the materials community, fewer efforts have taken uncertainties into consideration. These arise from a multitude of sources, and their quantification and integration in the inversion process are essential to meeting materials design objectives. The first contribution of this paper is a flexible, fully probabilistic formulation of materials optimization problems that accounts for the uncertainty in the process-structure and structure-property linkages and enables the identification of optimal, high-dimensional process parameters. We employ a probabilistic, data-driven surrogate for the structure-property link which expedites computations and enables the handling of non-differentiable objectives. We couple this with a problem-tailored active learning strategy, i.e., a self-supervised selection of training data, which significantly improves accuracy while reducing the number of expensive model simulations. We demonstrate its efficacy in optimizing the mechanical and thermal properties of two-phase, random media, but envision that its applicability encompasses a wide variety of microstructure-sensitive design problems.
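The surrogate-plus-active-learning loop described here can be sketched generically: fit a cheap surrogate to a few expensive evaluations, repeatedly query the design where the surrogate is least certain, then optimize on the surrogate. Everything below (the toy property curve, the nearest-neighbour surrogate, the distance-based uncertainty proxy) is an assumed stand-in for the paper's probabilistic machinery:

```python
import math

def expensive_model(x):                 # stand-in for a costly structure-property
    return math.sin(3 * x) + 0.5 * x    # simulation (assumed toy property curve)

X = [0.0, 1.0]                          # initial designs
Y = [expensive_model(x) for x in X]

def predict(x):                         # crude surrogate: nearest-neighbour value
    i = min(range(len(X)), key=lambda j: abs(X[j] - x))
    return Y[i]

def uncertainty(x):                     # proxy: distance to the nearest sample
    return min(abs(xj - x) for xj in X)

candidates = [i / 100 for i in range(101)]
for _ in range(8):                      # active learning: always query the
    x_next = max(candidates, key=uncertainty)   # most uncertain candidate
    X.append(x_next)
    Y.append(expensive_model(x_next))

x_best = max(X, key=predict)            # design maximizing the surrogate property
```

The paper uses a probabilistic surrogate whose own predictive variance drives the selection, which is what makes the data selection "self-supervised"; the distance proxy here merely mimics that behaviour in one dimension.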
In this paper, we establish extreme values of L-functions at the central point for almost-prime quadratic twists of an elliptic curve. As an application, we obtain extreme values for the Tate-Shafarevich groups in the quadratic twist family of an elliptic curve under the Birch and Swinnerton-Dyer conjecture.
Let (λ_f(n))_(n≥1) be the Hecke eigenvalues of either a holomorphic Hecke eigencuspform or a Hecke-Maass cusp form f. We prove that, for any fixed η>0, under the Ramanujan-Petersson conjecture for GL(2) Maass forms, the Rankin-Selberg coefficients (λ_f(n)^2)_(n≥1) admit a level of distribution θ=2/5+1/260-η in arithmetic progressions.
In this paper, we prove the conjectured order lower bound for the k-th moment of central values of quadratic twisted self-dual GL(3) L-functions for all k≥1, based on our recent work on the twisted first moment of central values in this family of L-functions.
基金the support of the Leverhulme Centre for Wildfires,Environment and Society through the Leverhulme Trust(RC-2018-023)Sibo Cheng,César Quilodran-Casas,and Rossella Arcucci acknowledge the support of the PREMIERE project(EP/T000414/1)+5 种基金the support of EPSRC grant:PURIFY(EP/V000756/1)the Fundamental Research Funds for the Central Universitiesthe support of the SASIP project(353)funded by Schmidt Futures–a philanthropic initiative that seeks to improve societal outcomes through the development of emerging science and technologiesDFG for the Heisenberg Programm Award(JA 1077/4-1)the National Natural Science Foundation of China(61976120)the Natural Science Key Foundat ion of Jiangsu Education Department(21KJA510004)。
文摘Data assimilation(DA)and uncertainty quantification(UQ)are extensively used in analysing and reducing error propagation in high-dimensional spatial-temporal dynamics.Typical applications span from computational fluid dynamics(CFD)to geoscience and climate systems.Recently,much effort has been given in combining DA,UQ and machine learning(ML)techniques.These research efforts seek to address some critical challenges in high-dimensional dynamical systems,including but not limited to dynamical system identification,reduced order surrogate modelling,error covariance specification and model error correction.A large number of developed techniques and methodologies exhibit a broad applicability across numerous domains,resulting in the necessity for a comprehensive guide.This paper provides the first overview of state-of-the-art researches in this interdisciplinary field,covering a wide range of applications.This review is aimed at ML scientists who attempt to apply DA and UQ techniques to improve the accuracy and the interpretability of their models,but also at DA and UQ experts who intend to integrate cutting-edge ML approaches to their systems.Therefore,this article has a special focus on how ML methods can overcome the existing limits of DA and UQ,and vice versa.Some exciting perspectives of this rapidly developing research field are also discussed.Index Terms-Data assimilation(DA),deep learning,machine learning(ML),reduced-order-modelling,uncertainty quantification(UQ).
基金supported by two research grants provided by the Karachi Institute of Economics and Technology(KIET)the Big Data Analytics Laboratory at the Insitute of Business Administration(IBAKarachi)。
Abstract: The advent of healthcare information management systems (HIMSs) continues to produce large volumes of healthcare data for patient care and for compliance and regulatory requirements at a global scale. Analysis of this big data offers boundless potential for knowledge discovery. Big data analytics (BDA) in healthcare can, for instance, help determine the causes of diseases, generate effective diagnoses, enhance QoS guarantees by increasing the efficiency of healthcare delivery and the effectiveness and viability of treatments, generate accurate predictions of readmissions, enhance clinical care, and pinpoint opportunities for cost savings. However, BDA implementations in any domain are generally complicated and resource-intensive, with a high failure rate and no roadmap or success strategies to guide practitioners. In this paper, we present a comprehensive roadmap for deriving insights from BDA in the healthcare (patient care) domain, based on the results of a systematic literature review. We first determine big data characteristics for healthcare and then review BDA applications to healthcare in academic research, focusing particularly on NoSQL databases. We also identify the limitations and challenges of these applications and justify the potential of NoSQL databases to address these challenges and further enhance BDA healthcare research. We then propose and describe a state-of-the-art BDA architecture for the healthcare domain, called Med-BDA, which addresses the current BDA challenges and is based on the latest zeta big data paradigm. We also present success strategies to ensure the working of Med-BDA, along with outlining the major benefits of BDA applications to healthcare. Finally, we compare our work with other related literature reviews across twelve hallmark features to justify the novelty and importance of our work. These contributions are collectively unique and clearly present a roadmap for clinical administrators, practitioners and professionals to successfully implement BDA initiatives in their organizations.
Funding: The research work described herein was funded by the National Natural Science Foundation of China (Grant No. 51922067), the Key Research and Development Plan of Shandong Province of China (Grant No. 2020ZLYS01), and the Taishan Scholars Program of Shandong Province of China (Grant No. tsqn201909003).
Abstract: Excavation under complex geological conditions requires effective and accurate geological forward-prospecting to detect unfavorable geological structures and estimate the classification of the surrounding rock in front of the tunnel face. In this work, a forward-prediction method for tunnel geology and surrounding rock classification is developed based on layered tomography of seismic wave velocity. In particular, to address the strong non-uniqueness of wave velocity inversion caused by the few ray paths available in the narrow space of the tunnel, a layered inversion based on regularization is proposed. By reducing the inversion area at each iteration step and applying a straight-line interface assumption, the convergence and accuracy of the wave velocity inversion are effectively improved. Furthermore, a surrounding rock classification network based on an autoencoder is constructed. The mapping relationship between wave velocity and surrounding rock classification is established with density, Poisson's ratio and elastic modulus as links. Two numerical examples with geological conditions similar to those in field tunnels and a field case study in an urban subway tunnel verify the potential of the proposed method for practical application.
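The core numerical difficulty described above — too few ray paths making the inversion ill-posed — is conventionally handled with Tikhonov-style regularization. The sketch below shows the generic regularized least-squares form of travel-time inversion; the ray matrix, slowness model and damping parameter are illustrative assumptions, not the paper's layered scheme.

```python
import numpy as np

rng = np.random.default_rng(4)

# Under-determined travel-time tomography: 6 rays crossing 12 cells.
n_rays, n_cells = 6, 12
G = rng.random((n_rays, n_cells))                # ray-path lengths through cells
s_true = 1 / rng.uniform(2000, 4000, n_cells)    # true slowness (s/m)
t = G @ s_true + rng.normal(0, 1e-6, n_rays)     # travel times with noise

# Tikhonov regularization: the lam * I term makes G^T G invertible and
# stabilizes the solution despite the sparse ray coverage.
lam = 1e-4
s_hat = np.linalg.solve(G.T @ G + lam * np.eye(n_cells), G.T @ t)
print(f"relative travel-time misfit: "
      f"{np.linalg.norm(G @ s_hat - t) / np.linalg.norm(t):.2e}")
```

The paper's layered approach additionally shrinks the inversion domain at each iteration, which this one-shot sketch does not attempt.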
Funding: Supported by the National Natural Science Foundation of China (Grant No. 52021005), the Outstanding Youth Foundation of Shandong Province of China (Grant No. ZR2021JQ22), and the Taishan Scholars Program of Shandong Province of China (Grant No. tsqn201909003).
Abstract: The choice of tunnel boring machine (TBM) operating parameters provides significant guidance for safe and efficient TBM construction, and it has been one of the research hotspots in TBM tunneling. To this end, this paper introduces an intelligent decision-making method for TBM operating parameters based on multiple constraints and objective optimization. First, linear cutting tests and numerical simulations are used to investigate the physical rules relating different cutting parameters (penetration, cutter spacing, etc.) to rock compressive strength. Second, a dual-driven mapping between rock parameters and TBM operating parameters, based on data mining and the physical rules of rock breaking, is established with high accuracy by combining rock-breaking rules and deep neural networks (DNNs). The decision-making method is built on this dual-driven mapping, using the effective rock-breaking capacity and the rated values of the mechanical parameters as constraints and the total excavation cost as the optimization objective. The best operating parameters can be obtained by searching for the revolutions per minute and penetration that correspond to the extremum of the constrained objective function. The practicability and effectiveness of the developed decision-making model are verified in the Second Water Source Channel of Hangzhou, China, where the average penetration rate increased by 11.3% and the total cost decreased by 10%.
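The constrained search described above can be sketched as a scan over the two decision variables (RPM and penetration), discarding pairs that violate the rated mechanical limits and keeping the pair that minimizes cost. All formulas and limits below are toy placeholders, not the paper's dual-driven mapping or real TBM ratings.

```python
import numpy as np

def excavation_cost(rpm, pen):
    """Toy total-cost objective: time cost falls with advance rate, wear cost rises with RPM."""
    advance = rpm * pen                       # advance rate (mm/min)
    return 1000.0 / advance + 0.002 * rpm**2

def feasible(rpm, pen):
    """Toy stand-ins for rated-value constraints (thrust <= 400, torque <= 120)."""
    thrust = 50 * pen
    torque = 20 * pen + 2 * rpm
    return thrust <= 400 and torque <= 120

# Grid search over the feasible region for the constrained minimum.
best = min(
    ((rpm, pen)
     for rpm in np.arange(2, 12, 0.5)
     for pen in np.arange(2, 12, 0.5)
     if feasible(rpm, pen)),
    key=lambda rp: excavation_cost(*rp),
)
print(f"best rpm={best[0]}, penetration={best[1]}")
```

In the paper the objective and constraints come from the DNN-plus-physics mapping rather than closed-form toys, but the search structure is the same.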
Abstract: BACKGROUND: Alpha-1 antitrypsin deficiency is a rare genetic disease and a leading cause of inherited alterations in plasma protein metabolism (APPM). AIM: To understand the prevalence, burden and progression of liver disease in patients with APPM, including alpha-1 antitrypsin deficiency. METHODS: We conducted a retrospective analysis of anonymized patient-level claims data from a German health insurance provider (AOK PLUS). The APPM cohort comprised patients with APPM (identified using the German Modification of the International Classification of Diseases, 10th Revision [ICD-10-GM] code E88.0 between 01/01/2010 and 30/09/2020) and incident liver disease (ICD-10-GM codes K74, K70.2-3 and K71.7 between 01/01/2012 and 30/09/2020). The control cohort comprised patients without APPM but with incident liver disease. Outcomes were the incidence/prevalence of liver disease in patients with APPM, demographics/baseline characteristics, diagnostic procedures, progression-free survival (PFS), disease progression and mortality. RESULTS: Overall, 2680 and 26299 patients were included in the APPM (fibrosis, 96; cirrhosis, 2584) and control (fibrosis, 1444; cirrhosis, 24855) cohorts, respectively. Per 100000 individuals, the annual incidence and prevalence of APPM with liver disease were 10-15 and 36-51, respectively. In the APPM cohort, median survival was 4.7 years [95% confidence interval (CI): 3.5-7.0] and 2.5 years (95% CI: 2.3-2.8) in patients with fibrosis and cirrhosis, respectively. A higher proportion of patients in the APPM cohort experienced disease progression (92.0%) than in the control cohort (67.2%). Median PFS was shorter in the APPM cohort (0.9 years, 95% CI: 0.7-1.1) than in the control cohort (3.7 years, 95% CI: 3.6-3.8; P<0.001). Patients with cirrhosis in the control cohort had longer event-free survival for ascites, hepatic encephalopathy, hepatic failure and esophageal/gastric varices than patients with cirrhosis in the APPM cohort (P<0.001). Patients with fibrosis in the control cohort had longer event-free survival for ascites, cirrhosis, hepatic failure and esophageal/gastric varices than patients with fibrosis in the APPM cohort (P<0.001). In the APPM cohort, the most common diagnostic procedures within 12 mo after the first diagnosis of liver disease were imaging procedures (66.3%) and laboratory tests (51.0%). CONCLUSION: Among patients with liver disease, those with APPM experience a substantial burden and earlier liver disease progression than patients without APPM.
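Effect estimates of the kind reported above (and in the other claims-based abstracts in this listing) come from regression coefficients exponentiated into odds or hazard ratios with Wald confidence intervals. A minimal sketch of that conversion, with a made-up coefficient and standard error rather than values from the study:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% Wald CI from a logistic-regression coefficient:
    OR = exp(beta), CI = exp(beta +/- z*se)."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Illustrative numbers only (not estimates from this paper).
or_, lo, hi = odds_ratio_ci(beta=0.03, se=0.013)
print(f"OR={or_:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```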
Abstract: In order to promote the application of clean energy technology in clothing and the integration of industrial development with artificial intelligence wearable technology, this study reviews the energy application characteristics of intelligent wearable products at home and abroad and their applications in different fields, in light of the current state of research on wearable technology in the textile and clothing field. Wearable distributed generation technologies are classified, and a creative clothing design for detecting climate temperature is presented. Based on the monitoring of body temperature, changes in the color of the clothing pattern can reflect the wearer's health and emotional status. The design can also be applied to the screening of abnormal body temperature during the COVID-19 pandemic.
Abstract: Consciousness is one of the unique features of living creatures and is also the root of biological intelligence. Up to now, no machines or robots have had consciousness. Will artificial intelligence (AI) become conscious? Will robots have real intelligence without consciousness? The most primitive consciousness is the perception and expression of self-existence. In order to perceive the existence of the concept of 'I', a creature must first have a perceivable boundary, such as skin, to separate 'I' from 'non-I'. For robots to have self-awareness, they would likewise need to be wrapped in a similar sensory membrane. Nowadays, as intelligent tools, AI systems should be regarded as external extensions of human intelligence. These tools are unconscious. The development of AI shows that intelligence can exist without consciousness. When human beings move from AI into the era of life intelligence, it is not that AI has become conscious, but that conscious lives will have strong AI. It therefore becomes all the more necessary to be careful in applying AI to living creatures, even to lower-level animals with only rudimentary consciousness. The subversive revolution of such applications calls for more careful thinking.
Abstract: This study uses an empirical analysis to quantify the downstream analysis effects of data pre-processing choices. Bootstrap data simulation is used to measure the bias-variance decomposition of an empirical risk function, the mean squared error (MSE). Results of the risk function decomposition are used to measure the effects of model development choices on model bias, variance, and irreducible error. Measurements of bias and variance are then applied as diagnostic procedures for model pre-processing and development. The best-performing model-normalization-data structure combinations were found, illustrating the downstream analysis effects of these model development choices. In addition, results found from simulations were verified and extended to additional data characteristics (imbalanced, sparse) by testing on benchmark datasets available from the UCI Machine Learning Repository. Normalization results on benchmark data were consistent with those found using simulations, while also illustrating that more complex and/or non-linear models provide better performance on datasets with additional complexities. Finally, applying the findings from the simulation experiments to previously tested applications led to equivalent or improved results with less model development overhead and processing time.
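The bootstrap bias-variance decomposition described above can be sketched on synthetic data: refit a model on bootstrap resamples, then split its test-point MSE into squared bias and variance (the remaining term being the irreducible noise). The sine ground truth, sample sizes and polynomial models are illustrative assumptions, not the study's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic ground truth with additive noise (illustrative assumption).
true_f = np.sin
n, sigma = 60, 0.3
x = rng.uniform(0, np.pi, n)
y = true_f(x) + rng.normal(0, sigma, n)
x_test = np.linspace(0.5, 2.5, 20)

def bootstrap_preds(deg, n_boot=300):
    """Predictions of a degree-`deg` polynomial refit on bootstrap resamples."""
    preds = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)                 # bootstrap resample
        coef = np.polyfit(x[idx], y[idx], deg)
        preds.append(np.polyval(coef, x_test))
    return np.array(preds)

for deg in (1, 5):
    P = bootstrap_preds(deg)
    bias2 = np.mean((P.mean(axis=0) - true_f(x_test)) ** 2)
    var = np.mean(P.var(axis=0))
    # MSE decomposes as bias^2 + variance + irreducible noise (sigma^2).
    print(f"degree {deg}: bias^2={bias2:.4f}  variance={var:.4f}")
```

The simple (linear) model shows higher bias and lower variance than the flexible (quintic) one, which is exactly the diagnostic signal the study uses to compare pre-processing and model choices.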
Abstract: The quality of products manufactured or procured by organizations is an important aspect of their survival in the global market. The quality control processes put in place by organizations can be resource-intensive, but substantial savings can be realized by using acceptance sampling in conjunction with batch testing. This paper considers a batch testing model based on a quality control process in which batches that test positive are re-tested. The results show that re-testing greatly improves efficiency over one-stage batch testing based on quality control. This is observed using the Asymptotic Relative Efficiency (ARE): for the values of the proportion p considered, the computed ARE > 1, implying that our estimator has a smaller variance than that of one-stage batch testing. It was also found that the model is more efficient than classical two-stage batch testing for relatively high values of the proportion.
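For context, the one-stage batch (group) testing baseline the paper improves on estimates the defect proportion p from the fraction of positive batches. A minimal simulation of that baseline, with batch size and counts chosen for illustration (the paper's re-testing model adds a second stage for positive batches):

```python
import numpy as np

rng = np.random.default_rng(1)

# n batches of k items each; a batch tests positive if any item is defective.
p_true, k, n = 0.05, 10, 2000
items = rng.random((n, k)) < p_true
batch_pos = items.any(axis=1)

# MLE of p from positive-batch count T:
# P(batch negative) = (1-p)^k  =>  p_hat = 1 - (1 - T/n)^(1/k)
T = batch_pos.sum()
p_hat = 1 - (1 - T / n) ** (1 / k)
print(f"true p = {p_true}, estimated p = {p_hat:.4f}")
```

The ARE comparison in the paper is between the variance of this one-stage estimator and that of the re-testing estimator.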
Funding: This work is supported by the Key Scientific Research Projects of Colleges and Universities in Henan Province (Grant No. 20A520007) and the National Natural Science Foundation of China (Grant No. 61402149).
Abstract: Previous studies have shown that there is a potential semantic dependency between part-of-speech tags and semantic roles. At the same time, the predicate-argument structure of a sentence is important information for the semantic role labeling task. In this work, we introduce an auxiliary deep neural network model which models the semantic dependency between part-of-speech tags and semantic roles and incorporates predicate-argument information into semantic role labeling. Within a joint learning framework, part-of-speech tagging is used as an auxiliary task to improve the results of semantic role labeling. In addition, we introduce an argument recognition layer in the training process of the main task, so that argument-related structural information selected by the predicate through an attention mechanism assists the main task. Because the model makes full use of the semantic dependency between part-of-speech tags and semantic roles and of the structural information of the predicate-argument relation, our model achieves an F1 score of 89.0% on the WSJ test set of CoNLL-2005, outperforming the existing state-of-the-art model by about 0.8%.
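The joint learning setup above amounts to optimizing a weighted sum of the main-task loss and the auxiliary-task loss. A minimal numpy sketch of that combined objective; the auxiliary weight `lam` is an assumed hyperparameter, not a value from the paper:

```python
import numpy as np

def cross_entropy(logits, target):
    """Mean per-token cross-entropy from raw logits (numerically stable)."""
    z = logits - logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    return -log_probs[np.arange(len(target)), target].mean()

def joint_loss(srl_logits, srl_tags, pos_logits, pos_tags, lam=0.5):
    """Main SRL loss plus weighted auxiliary POS-tagging loss."""
    return (cross_entropy(srl_logits, srl_tags)
            + lam * cross_entropy(pos_logits, pos_tags))

# Two tokens, two SRL labels, two POS tags (toy shapes).
srl_logits = np.array([[2.0, 0.0], [0.0, 2.0]])
pos_logits = np.array([[1.0, 1.0], [1.0, 1.0]])
loss = joint_loss(srl_logits, np.array([0, 1]), pos_logits, np.array([0, 1]))
print(f"joint loss: {loss:.4f}")
```

In the paper both losses share encoder parameters, so minimizing the sum lets the POS signal regularize the SRL representations.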
Funding: L.Z. is supported by the National Key Research and Development Program of China (2021YFF1200500) and the National Natural Science Foundation of China (Nos. 12225102, T2321001, 12050002, and 12288101). J.Y. is supported by the National Research Foundation, Singapore (Project No. NRF-NRFF13-2021-0005). Q.D. is supported by the National Science Foundation (DMS-2012562 and DMS-1937254). Y.C. is supported by the National Natural Science Foundation of China (No. 12171041).
Abstract: Rotational Bose-Einstein condensates can exhibit quantized vortices as topological excitations. In this study, the ground and excited states of rotational Bose-Einstein condensates are systematically studied by computing the stationary points of the Gross-Pitaevskii energy functional. Various excited states and their connections at different rotational frequencies are revealed in solution landscapes constructed with the constrained high-index saddle dynamics method. Four excitation mechanisms are identified: vortex addition, rearrangement, merging, and splitting. We demonstrate changes in the ground state with increasing rotational frequency and decipher the evolution of the stability of the ground states.
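For reference, the stationary points in question are those of the rotating-frame Gross-Pitaevskii energy functional; in its standard dimensionless two-dimensional form (conventions for the trap potential V and interaction strength beta vary across the literature) it reads:

```latex
E(\psi) = \int_{\mathbb{R}^2} \left[ \tfrac{1}{2}\,|\nabla\psi|^2
  + V(\mathbf{x})\,|\psi|^2 + \tfrac{\beta}{2}\,|\psi|^4
  - \Omega\,\bar{\psi}\, L_z \psi \right] \mathrm{d}\mathbf{x},
\qquad
L_z = -\mathrm{i}\,(x\,\partial_y - y\,\partial_x),
\qquad
\int_{\mathbb{R}^2} |\psi|^2 \,\mathrm{d}\mathbf{x} = 1 .
```

Here Omega is the rotational frequency and L_z the angular momentum operator; the stationary points solve the associated Euler-Lagrange (Gross-Pitaevskii) equation subject to the unit-mass constraint.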
Abstract: Background: The AbSeS classification defines specific phenotypes of patients with intra-abdominal infection based on (1) the setting of infection onset (community-acquired, early-onset, or late-onset hospital-acquired), (2) the presence or absence of either localized or diffuse peritonitis, and (3) the severity of disease expression (infection, sepsis, or septic shock). This classification system demonstrated reliable risk stratification in intensive care unit (ICU) patients with intra-abdominal infection. This study aimed to describe the epidemiology of ICU patients with pancreatic infection and assess the relationship between the components of the AbSeS classification and mortality. Methods: This was a secondary analysis of an international observational study ("AbSeS") investigating ICU patients with intra-abdominal infection. Only patients with pancreatic infection were included in this analysis (n=165). Mortality was defined as ICU mortality within 28 days of observation for patients discharged earlier from the ICU. Relationships with mortality were assessed using logistic regression analysis and reported as odds ratios (ORs) with 95% confidence intervals (CIs). Results: The overall mortality was 35.2% (n=58). The independent risk factors for mortality included older age (OR=1.03, 95% CI: 1.0 to 1.1, P=0.023), localized peritonitis (OR=4.4, 95% CI: 1.4 to 13.9, P=0.011), and persistent signs of inflammation at day 7 (OR=9.5, 95% CI: 3.8 to 23.9, P<0.001) or after the implementation of additional source control interventions within the first week (OR=4.0, 95% CI: 1.3 to 12.2, P=0.013). Gram-negative bacteria were most frequently isolated (n=58, 49.2%) without clinically relevant differences in microbial etiology between survivors and non-survivors. Conclusions: In pancreatic infection, challenging source/damage control and ongoing pancreatic inflammation appear to be the strongest contributors to an unfavorable short-term outcome. In this limited series, essentials of the AbSeS classification, such as the setting of infection onset, diffuse peritonitis, and severity of disease expression, were not associated with an increased mortality risk.
Funding: This work was partially supported by Horizon 2020, INFRADEV-4-2014-2015, grant 654248, CORBEL (Coordinated Research Infrastructures Building Enduring Life-science Services).
Abstract: One of the key goals of the FAIR guiding principles is defined by the final principle: to optimize data sets for reuse by both humans and machines. To do so, data providers need to implement and support consistent, machine-readable metadata to describe their data sets. This can seem like a daunting task for data providers, whether it is determining what level of detail should be provided in the provenance metadata or figuring out which common shared vocabularies should be used. Additionally, for existing data sets it is often unclear what steps should be taken to enable maximal, appropriate reuse. Data citation already plays an important role in making data findable and accessible, providing persistent and unique identifiers plus metadata for over 16 million data sets. In this paper, we discuss how data citation and its underlying infrastructure, in particular the associated metadata, provide an important pathway for enabling FAIR data reuse.
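The "consistent machine-readable metadata" the abstract refers to is commonly expressed as JSON-LD using a shared vocabulary such as schema.org. A minimal sketch of such a record; every field value below, including the DOI, is an illustrative placeholder rather than a real data set:

```python
import json

# Minimal schema.org "Dataset" description in JSON-LD -- the kind of
# machine-readable metadata that supports FAIR reuse and data citation.
dataset = {
    "@context": "https://schema.org",
    "@type": "Dataset",
    "name": "Example measurement series",          # placeholder title
    "identifier": "https://doi.org/10.0000/example",  # hypothetical DOI
    "creator": {"@type": "Person", "name": "Jane Doe"},
    "license": "https://creativecommons.org/licenses/by/4.0/",
    "datePublished": "2020-01-01",
}
jsonld = json.dumps(dataset, indent=2)
print(jsonld)
```

Persistent-identifier registries attach metadata of this general shape to each DOI, which is what makes the cited data sets findable by machines as well as humans.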
Funding: Contributions from K.C. were supported by financial assistance award 70NANB19H117 from the U.S. Department of Commerce, National Institute of Standards and Technology. E.A.H. and R.C. (CMU) were supported by the National Science Foundation under grant CMMI-1826218 and by the Air Force D3OM2S Center of Excellence under agreement FA8650-19-2-5209. A.J., C.C., and S.P.O. were supported by the Materials Project, funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences, Materials Sciences and Engineering Division under contract no. DE-AC02-05-CH11231, Materials Project program KC23MP. S.J.L.B. was supported by the U.S. National Science Foundation through grant DMREF-1922234. A.A. and A.C. were supported by NIST award 70NANB19H005 and NSF award CMMI-2053929.
Abstract: Deep learning (DL) is one of the fastest-growing topics in materials data science, with rapidly emerging applications spanning atomistic, image-based, spectral, and textual data modalities. DL allows analysis of unstructured data and automated identification of features. The recent development of large materials databases has fueled the application of DL methods in atomistic prediction in particular. In contrast, advances in image and spectral data have largely leveraged synthetic data enabled by high-quality forward models as well as by generative unsupervised DL methods. In this article, we present a high-level overview of deep learning methods, followed by a detailed discussion of recent developments of deep learning in atomistic simulation, materials imaging, spectral analysis, and natural language processing. For each modality we discuss applications involving both theoretical and experimental data, typical modeling approaches with their strengths and limitations, and relevant publicly available software and datasets. We conclude the review with a discussion of recent cross-cutting work related to uncertainty quantification in this field and a brief perspective on limitations, challenges, and potential growth areas for DL methods in materials science.
Funding: Supported by the EPSRC Grand Challenge grant "Managing Air for Green Inner Cities" (MAGIC), EP/N010221/1.
Abstract: In this paper, we suggest a new methodology which combines neural networks (NNs) with data assimilation (DA). Focusing on structural model uncertainty, we propose a framework for integrating NNs with physical models through DA algorithms, to improve both the assimilation process and the forecasting results. The NNs are iteratively trained as observational data are updated. The main DA models used here are the Kalman filter and variational approaches. The effectiveness of the proposed algorithm is validated by examples and by a sensitivity study.
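The Kalman filter mentioned above is the basic DA building block being coupled with neural networks. A single analysis step in its standard linear form, on a toy 2-state / 1-observation example (the matrices are illustrative, not the paper's configuration):

```python
import numpy as np

def kalman_update(x_f, P_f, y, H, R):
    """Combine forecast (x_f, P_f) with observation y via the Kalman gain."""
    S = H @ P_f @ H.T + R                   # innovation covariance
    K = P_f @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_a = x_f + K @ (y - H @ x_f)           # analysis state
    P_a = (np.eye(len(x_f)) - K @ H) @ P_f  # analysis covariance
    return x_a, P_a

x_f = np.array([1.0, 0.0])      # forecast state
P_f = np.eye(2)                 # forecast error covariance
H = np.array([[1.0, 0.0]])      # observe only the first state component
R = np.array([[0.25]])          # observation error covariance
y = np.array([2.0])             # observation

x_a, P_a = kalman_update(x_f, P_f, y, H, R)
print(x_a)  # analysis is pulled toward the observation
```

In the paper's framework the NN correction enters alongside steps like this, retrained as new observations arrive.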
Funding: Supported by the EPSRC Grand Challenge grant "Managing Air for Green Inner Cities" (MAGIC), EP/N010221/1.
Abstract: Numerical simulations are widely used as a predictive tool to better understand complex air flows and pollution transport on the scale of individual buildings, city blocks, and entire cities. To improve the prediction of air flows and pollution transport, we propose a variational data assimilation (VarDA) model which assimilates data from sensors into the open-source, finite-element fluid dynamics model Fluidity. VarDA is based on the minimization of a function which estimates the discrepancy between numerical results and observations, assuming that the two sources of information, forecast and observations, have errors that are adequately described by error covariance matrices. The conditioning of the numerical problem is dominated by the condition number of the background error covariance matrix, which is ill-conditioned. In this paper, a preconditioned VarDA model is presented, based on a reduced background error covariance matrix. The Empirical Orthogonal Functions (EOF) method is used to alleviate the computational cost and reduce the space dimension. Experimental results are provided assuming observed values provided by sensors positioned mainly on the roofs of buildings.
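The EOF truncation described above amounts to keeping only the leading eigenmodes of the background error covariance, which both shrinks the problem and improves its conditioning. A small sketch; the dimensions, spectrum and truncation rank are illustrative, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy ill-conditioned background error covariance B built from snapshots
# with a rapidly decaying spectrum.
snapshots = rng.normal(size=(50, 8)) @ np.diag([10, 5, 2, 1, 0.5, 0.1, 1e-3, 1e-5])
B = np.cov(snapshots.T)

# EOFs = eigenvectors of B; keep the r leading modes.
w, V = np.linalg.eigh(B)
order = np.argsort(w)[::-1]
w, V = w[order], V[:, order]
r = 4
B_r = V[:, :r] @ np.diag(w[:r]) @ V[:, :r].T   # reduced-rank approximation

cond_full = w[0] / w[-1]
cond_red = w[0] / w[r - 1]                      # conditioning in the retained subspace
print(f"condition number: full {cond_full:.1e} -> reduced {cond_red:.1e}")
```

Working in the retained EOF subspace is what preconditions the VarDA minimization: the tiny trailing eigenvalues that blow up the condition number are simply discarded.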
Funding: Funded under the Excellence Strategy of the Federal Government and the Länder in the context of the ARTEMIS Innovation Network.
Abstract: While forward and backward modeling of the process-structure-property chain has received a lot of attention from the materials community, fewer efforts have taken uncertainties into consideration. These arise from a multitude of sources, and their quantification and integration into the inversion process are essential to meeting materials design objectives. The first contribution of this paper is a flexible, fully probabilistic formulation of materials optimization problems that accounts for the uncertainty in the process-structure and structure-property linkages and enables the identification of optimal, high-dimensional process parameters. We employ a probabilistic, data-driven surrogate for the structure-property link which expedites computations and enables the handling of non-differentiable objectives. We couple this with a problem-tailored active learning strategy, i.e., a self-supervised selection of training data, which significantly improves accuracy while reducing the number of expensive model simulations. We demonstrate its efficacy in optimizing the mechanical and thermal properties of two-phase random media, but envision that its applicability encompasses a wide variety of microstructure-sensitive design problems.
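The active learning loop described above can be sketched generically: query the expensive model at the candidate where the surrogate is most uncertain, then refit. The quadratic "expensive model", the perturbed-fit ensemble used as an uncertainty proxy, and all sizes below are illustrative stand-ins for the paper's probabilistic surrogate and selection rule.

```python
import numpy as np

rng = np.random.default_rng(3)

expensive_model = lambda x: (x - 0.3) ** 2      # toy stand-in for a simulation
candidates = np.linspace(0.0, 1.0, 101)

# Small initial design, then uncertainty-driven queries.
X = list(rng.choice(candidates, size=6, replace=False))
y = [expensive_model(v) for v in X]

for _ in range(5):
    Xa, ya = np.array(X), np.array(y)
    # Ensemble of quadratic fits on noise-perturbed targets: the spread
    # of their predictions approximates predictive uncertainty.
    preds = [np.polyval(np.polyfit(Xa, ya + rng.normal(0, 1e-3, len(ya)), 2),
                        candidates) for _ in range(20)]
    spread = np.std(preds, axis=0)
    x_next = candidates[int(np.argmax(spread))]  # most uncertain candidate
    X.append(x_next)
    y.append(expensive_model(x_next))

print(f"{len(X)} expensive-model evaluations used")
```

The payoff claimed in the paper is exactly this trade: a few targeted queries instead of a dense design over the process-parameter space.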
Funding: Supported by the National Key R&D Program of China (Grant No. 2021YFA1000700) and the National Natural Science Foundation of China (Grant Nos. 12001314 and 12031008).
Abstract: In this paper, we prove extreme values of L-functions at the central point for almost-prime quadratic twists of an elliptic curve. As an application, we obtain extreme values for the Tate-Shafarevich groups in the quadratic twist family of an elliptic curve under the Birch and Swinnerton-Dyer conjecture.
Abstract: Let (λ_f(n))_{n≥1} be the Hecke eigenvalues of either a holomorphic Hecke eigencuspform or a Hecke-Maass cusp form f. We prove that, for any fixed η>0, under the Ramanujan-Petersson conjecture for GL(2) Maass forms, the Rankin-Selberg coefficients (λ_f(n)^2)_{n≥1} admit a level of distribution θ = 2/5 + 1/260 - η in arithmetic progressions.
Funding: National Key R&D Program of China (Grant No. 2021YFA1000700) and NSFC (Grant Nos. 12001314 and 12031008).
Abstract: In this paper, we prove the conjectured lower bound for the order of the k-th moment of central values of quadratic twists of self-dual GL(3) L-functions for all k≥1, based on our recent work on the twisted first moment of central values in this family of L-functions.