Abstract: BACKGROUND: Artificial intelligence (AI) has potential in the optical diagnosis of colorectal polyps. AIM: To evaluate the feasibility of real-time use of the computer-aided diagnosis (CADx) system AI for ColoRectal Polyps (AI4CRP) for the optical diagnosis of diminutive colorectal polyps, and to compare its performance with CAD EYE™ (Fujifilm, Tokyo, Japan). The influence of CADx on the optical diagnosis of an expert endoscopist was also investigated. METHODS: AI4CRP was developed in-house; CAD EYE is proprietary software provided by Fujifilm. Both CADx systems use convolutional neural networks. Colorectal polyps were characterized as benign or premalignant, with histopathology as the gold standard. AI4CRP provided an objective assessment of each characterization by presenting a calibrated confidence value (range 0.0-1.0). A predefined cut-off of 0.6 was set, with values < 0.6 indicating benign and values ≥ 0.6 indicating premalignant colorectal polyps. Low confidence characterizations were defined as values within 40% around the cut-off of 0.6 (> 0.36 and < 0.76). Self-critical AI4CRP excluded these low confidence characterizations from its diagnostic performance. RESULTS: Real-time use of AI4CRP was feasible and was performed in 30 patients with 51 colorectal polyps. Self-critical AI4CRP, excluding 14 low confidence characterizations [27.5% (14/51)], had a diagnostic accuracy of 89.2%, sensitivity of 89.7%, and specificity of 87.5%, higher than standard AI4CRP. CAD EYE had an 83.7% diagnostic accuracy, 74.2% sensitivity, and 100.0% specificity. The diagnostic performance of the endoscopist alone (before AI) increased nonsignificantly after reviewing the CADx characterizations of both AI4CRP and CAD EYE (AI-assisted endoscopist). The AI-assisted endoscopist outperformed both CADx systems, except on specificity, for which CAD EYE performed best. CONCLUSION: Real-time use of AI4CRP was feasible. Objective confidence values provided by a CADx system are novel, and self-critical AI4CRP showed higher diagnostic performance than standard AI4CRP.
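The cut-off and low-confidence logic described in this abstract can be made concrete with a short sketch. This is only an illustration of the decision rule as the abstract states it, not AI4CRP's actual code; the function and constant names are ours.

def characterize(confidence: float):
    """Map a calibrated confidence value (0.0-1.0) to a polyp
    characterization, following the decision rule in the abstract.
    Illustrative sketch only; not the AI4CRP implementation."""
    CUTOFF = 0.6
    LOW_BAND = (0.36, 0.76)  # "40% around the cut-off" per the abstract

    label = "premalignant" if confidence >= CUTOFF else "benign"
    # Self-critical mode abstains on characterizations whose confidence
    # falls close to the cut-off, deferring to histopathology instead.
    low_confidence = LOW_BAND[0] < confidence < LOW_BAND[1]
    return label, low_confidence

# A value just above the cut-off is premalignant but low confidence,
# so self-critical AI4CRP would exclude it from its performance figures.
print(characterize(0.62))  # ('premalignant', True)
print(characterize(0.91))  # ('premalignant', False)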
Abstract: Crossover designs are well known to have major advantages when comparing the effects of various non-curative treatments. We compare the efficiencies of several crossover designs, along with Balaam's design, against a parallel group design in longitudinal studies where event time can only be measured in discrete time intervals. With equally sized sequences, the parallel group design yields greater efficiency when the number of time periods is small. However, the crossover and Balaam's designs tend to become more efficient as the study duration increases. The degree to which these designs add efficiency depends on the baseline hazard function and the effect size. Additionally, we incorporate subject-level cost considerations when comparing the designs to determine the most cost-efficient design. Researchers may find the crossover or Balaam's design more efficient when the study duration is long enough, especially when the costs of applying the baseline treatment are high.
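The efficiency comparison rests on a discrete-time survival model. The abstract does not spell out the model, so the following is one common formulation, assumed here for illustration: the hazard in period j for subject i under a logistic link,

\[
  h_{ij} = \Pr\bigl(T_i = j \,\mid\, T_i \ge j,\, x_{ij}\bigr),
  \qquad
  \operatorname{logit}(h_{ij}) = \alpha_j + \beta\, x_{ij},
\]

where \alpha_j captures the baseline hazard in period j and x_{ij} indicates the treatment subject i receives in that period (constant across periods in a parallel group design, switching across periods in the crossover and Balaam designs). The treatment effect \beta is the quantity on whose estimation efficiency the designs are compared.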
Abstract: AIM: To examine performance in predicting polyp histology using high-definition (HD) i-scan in a group of endoscopists with varying levels of experience. METHODS: We used a digital library of HD i-scan still images, comprising twin pictures (surface enhancement and tone enhancement), collected at our university hospital. We defined endoscopic features of adenomatous and non-adenomatous polyps according to the following parameters: color, surface pattern, and vascular pattern. We familiarized the participating endoscopists with optical diagnosis of colorectal polyps using a 20-min didactic training session. All endoscopists were asked to evaluate an image set of 50 colorectal polyps with regard to polyp histology. We classified the diagnoses into high confidence (i.e., cases in which the endoscopist could assign a diagnosis with certainty) and low confidence diagnoses (i.e., cases in which the endoscopist preferred to send the polyp for formal histology). Mean sensitivity, specificity, and accuracy per endoscopist/image were computed, and differences between groups were tested using independent-samples t tests. High vs low confidence diagnoses were compared using the paired-samples t test. RESULTS: Eleven endoscopists without previous experience of optical diagnosis evaluated a total of 550 images (396 adenomatous, 154 non-adenomatous). Mean sensitivity, specificity, and accuracy for diagnosing adenomas were 79.3%, 85.7%, and 81.1%, respectively. No significant differences were found between gastroenterologists and trainees regarding optical diagnosis performance (mean accuracy 78.0% vs 82.9%, P = 0.098). Diminutive lesions were predicted with lower mean accuracy than non-diminutive lesions (74.2% vs 93.1%, P = 0.008). A total of 446 (81.1%) diagnoses were made with high confidence. High confidence diagnoses corresponded to a significantly higher mean accuracy than low confidence diagnoses (84.0% vs 64.3%, P = 0.008). A total of 319 (58.0%) images were evaluated as having excellent quality. Considering excellent quality images in conjunction with high confidence diagnoses, overall accuracy increased to 92.8%. CONCLUSION: After a single training session, endoscopists with varying levels of experience can already provide optical diagnosis with an accuracy of 84.0%.
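The per-endoscopist performance measures reported above are simple functions of the 2x2 agreement between the optical diagnosis and histology. A minimal sketch, with variable names that are ours rather than the study's:

def diagnostic_metrics(predicted, histology):
    """Sensitivity, specificity, and accuracy for optical diagnosis,
    treating 'adenoma' as the positive class. Illustrative sketch."""
    pairs = list(zip(predicted, histology))
    tp = sum(p == "adenoma" and h == "adenoma" for p, h in pairs)
    tn = sum(p == "non-adenoma" and h == "non-adenoma" for p, h in pairs)
    fp = sum(p == "adenoma" and h == "non-adenoma" for p, h in pairs)
    fn = sum(p == "non-adenoma" and h == "adenoma" for p, h in pairs)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
    }

metrics = diagnostic_metrics(
    predicted=["adenoma", "adenoma", "non-adenoma", "non-adenoma"],
    histology=["adenoma", "non-adenoma", "non-adenoma", "adenoma"],
)
print(metrics)  # {'sensitivity': 0.5, 'specificity': 0.5, 'accuracy': 0.5}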
Abstract: Background: For the treatment of chronic heart failure (HF), both pharmacological and non-pharmacological treatment should be employed in HF patients. Although HF is highly prevalent in nursing home residents, it is not clear whether the guideline recommendations for pharmacological therapy are also followed in nursing home residents. The aim of this study was to investigate how HF is treated in nursing home residents and to determine to what extent current treatment corresponds to the guidelines. Methods: Nursing home residents of five large nursing home care organizations in the southern part of the Netherlands with a previous diagnosis of HF based on medical records, irrespective of the left ventricular ejection fraction (LVEF), were included in this cross-sectional study. Data were gathered from the (medical) records, including clinical characteristics and pharmacological and non-pharmacological treatment. Echocardiography was used as part of the study to determine the LVEF. Results: Of 501 residents, 112 had a diagnosis of HF at inclusion. One-third of them received an ACE inhibitor and 40% used a β-blocker. In 66%, diuretics were prescribed, with a preference for a loop diuretic. Among the 22 residents with an LVEF ≤ 40%, only 46% used an ACE inhibitor and 64% a β-blocker. The median daily doses of prescribed medication were lower than those recommended by the guidelines. Non-pharmacological interventions were recorded in almost none of the residents with HF. Conclusions: The recommended medical therapy for HF was often not prescribed; if prescribed, the dosage was usually far below what was recommended. In addition, non-pharmacological interventions were mostly not used at all.
Abstract: Experimental studies are usually designed with specific expectations about the results in mind. However, most researchers apply some form of omnibus test for any differences, with follow-up tests such as pairwise comparisons or simple effects analyses for further investigation of the effects. The power to find full support for the theory with such an exploratory approach, which is usually based on multiple testing, is, however, rather disappointing. With the simulations in this paper we showed that many of the common choices in hypothesis testing led to a severely underpowered form of theory evaluation. Furthermore, some less commonly used approaches were presented and compared in terms of power to find support for the theory. We concluded that confirmatory methods are required in the context of theory evaluation and that the scientific literature would benefit from a clearer distinction between confirmatory and exploratory findings. We also emphasize the importance of reporting all tests, significant or not, including the appropriate sample statistics such as means and standard deviations. Another recommendation relates to the fact that researchers, when discussing the conclusions of their own study, seem to underestimate the role of sampling variability. The execution of more replication studies, combined with proper reporting of all results, provides insight into between-study variability and the number of chance findings.
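As a hedged illustration of why the multiple-testing route is underpowered for theory evaluation, the sketch below simulates three ordered group means and contrasts the power of requiring all pairwise t tests to be significant with that of a single directed contrast. The effect sizes, sample size, and tests chosen are arbitrary choices of ours, not taken from the paper's simulations.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, reps = 30, 2000                     # per-group size, simulation runs
means = [0.0, 0.3, 0.6]                # ordered means predicted by "theory"
hits_pairwise, hits_contrast = 0, 0

for _ in range(reps):
    g = [rng.normal(m, 1.0, n) for m in means]
    # Exploratory route: demand every pairwise t test be significant.
    pair_ps = [stats.ttest_ind(g[i], g[j]).pvalue
               for i in range(3) for j in range(i + 1, 3)]
    hits_pairwise += all(p < 0.05 for p in pair_ps)
    # Confirmatory route: one directed comparison of the extreme groups.
    contrast = g[2].mean() - g[0].mean()
    se = np.sqrt(g[2].var(ddof=1) / n + g[0].var(ddof=1) / n)
    hits_contrast += stats.t.sf(contrast / se, df=2 * n - 2) < 0.05

print("power, all pairwise significant:", hits_pairwise / reps)
print("power, single directed contrast:", hits_contrast / reps)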
Funding: Supported by the Netherlands eScience Center under grant number ODISSEI.2022.023.
Abstract: Active learning can be used to optimize and speed up the screening phase of systematic reviews. Simulation studies mimicking the screening process can be used to test the performance of different machine-learning models or to study the impact of different training data. This paper presents an architecture design with a multiprocessing computational strategy for running many such simulation studies in parallel, using the ASReview Makita workflow generator and Kubernetes software for deployment with cloud technologies. We provide a technical explanation of the proposed cloud architecture and its usage. In addition, we conducted 1140 simulations investigating computational time under various numbers of CPUs and RAM settings. Our analysis demonstrates the degree to which simulations can be accelerated with multiprocessing. The parallel computation strategy and architecture design developed in this paper can contribute to future research with more optimal simulation time while ensuring the safe completion of the needed processes.
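A minimal single-machine sketch of the multiprocessing idea at the heart of this design, assuming a hypothetical run_simulation worker. The real pipeline dispatches Makita-generated ASReview jobs on Kubernetes, which this simplified example does not reproduce.

import time
from multiprocessing import Pool

def run_simulation(config: dict) -> dict:
    """Hypothetical stand-in for one screening simulation; a real
    worker would invoke the simulation tooling for `config`."""
    time.sleep(0.1)  # placeholder for the actual screening work
    return {"dataset": config["dataset"], "model": config["model"], "ok": True}

if __name__ == "__main__":
    # One simulation per (dataset, model) combination, all independent.
    configs = [{"dataset": d, "model": m}
               for d in ("review_a", "review_b")
               for m in ("naive_bayes", "logistic")]
    # Fan the independent simulations out over worker processes.
    with Pool(processes=4) as pool:
        results = pool.map(run_simulation, configs)
    print(results)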
Abstract: Background: In discrete-time event history analysis, subjects are measured once each time period until they experience the event, prematurely drop out, or the study concludes. This implies that measuring a subject's event status in each time period determines whether (s)he should be measured in subsequent time periods. For that reason, intermittent missing event status causes a problem because, unlike in other repeated measurement designs, it does not make sense to simply ignore the corresponding missing event status in the analysis (as long as the dropout is ignorable). Method: We used Monte Carlo simulation to evaluate and compare various alternatives, including event occurrence recall, assumed event (non-)occurrence, case deletion, period deletion, and single and multiple imputation methods, to deal with missing event status. Moreover, we showed the methods' performance in the analysis of an empirical example on relapse to drug use. Result: The strategies assuming event (non-)occurrence and the recall strategy performed worst, with substantial parameter bias and a sharp decrease in coverage rate. Deletion methods suffered from either loss of power or undercoverage resulting from a biased standard error. Single imputation resolved the bias issue but showed undercoverage. Multiple imputation performed reasonably, with a negligible standard error bias leading to a gradual decrease in power. Conclusion: On the basis of the simulation results and the real example, we provide practical guidance to researchers on the best ways to deal with missing event history data.
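To see why an intermittently missing event status cannot simply be ignored, consider the person-period layout used in discrete-time analysis. The sketch below, our own illustration with made-up data, shows that a missing status in period 2 leaves it unknown whether the subject was still at risk in period 3.

# Person-period format for discrete-time event history data (illustrative).
# event: 0 = no event that period, 1 = event, None = intermittently missing.
records = [
    {"id": 1, "period": 1, "event": 0},
    {"id": 1, "period": 2, "event": 0},
    {"id": 1, "period": 3, "event": 1},    # event observed in period 3
    {"id": 2, "period": 1, "event": 0},
    {"id": 2, "period": 2, "event": None}, # missing: was subject 2 still
    {"id": 2, "period": 3, "event": 0},    # at risk in period 3?
]

# "Period deletion" drops only the missing rows, yet it keeps period 3 of
# subject 2 in the risk set, which presumes no event occurred in period 2.
period_deleted = [r for r in records if r["event"] is not None]

# "Case deletion" drops the whole subject instead, trading that problem
# for a loss of power.
case_deleted = [r for r in records if r["id"] != 2]
print(len(period_deleted), len(case_deleted))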
Funding: Supported by the National Science Foundation of China under Grant Nos. 71171164 and 70471057, and by the Doctorate Foundation of Northwestern Polytechnical University under Grant No. CX201235.
Abstract: This paper studies a nonlinear variational inequality with an integro-differential term arising from the valuation of American-style double barrier options. First, the authors use the penalty method to transform the variational inequality into a nonlinear parabolic initial-boundary value problem (i.e., the penalty problem). Second, the existence and uniqueness of the solution to the penalty problem are proved using the Schaefer fixed point theorem. Third, the authors prove the existence of a solution to the variational inequality by showing that the solution of the penalized PDE converges to that of the variational inequality. The uniqueness of the solution to the variational inequality is also proved by contradiction.
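A hedged sketch of the penalization step described above. The abstract does not give the paper's exact operator or penalty term, so the following is one standard formulation for an American-style obstacle problem with payoff \phi, where \mathcal{L} denotes the (integro-)differential pricing operator:

\[
  \min\bigl(-\partial_t u - \mathcal{L}u,\; u - \phi\bigr) = 0
  \qquad \text{(variational inequality in complementarity form)}
\]
\[
  \partial_t u_\varepsilon + \mathcal{L}u_\varepsilon
  + \tfrac{1}{\varepsilon}\,\bigl(\phi - u_\varepsilon\bigr)^{+} = 0,
  \qquad u_\varepsilon \longrightarrow u \ \text{as}\ \varepsilon \to 0,
\]

where (x)^+ = max(x, 0). The penalty term forces u_\varepsilon \ge \phi in the limit \varepsilon \to 0, so the penalized PDE, which is an ordinary nonlinear parabolic problem, converges to the solution of the variational inequality; this is the convergence argument the abstract refers to.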