The aim of this study was to verify the existence of business and strategic intelligence policies at the level of Congolese companies and at the state level, likely to foster progress and healthy development in the east of the DRC. The study was based on a mixed perspective combining objective analysis of quantitative data with interpretative analysis of qualitative data. The results showed that business and strategic intelligence policies have been established at neither company nor state level: this area of activity is unknown to the players in companies and public departments, and their organizational structures contain no units or offices responsible for managing strategic information for competitiveness on the international market. In addition, there is a real need to establish strategic information management units within companies, upstream, and to set up a national strategic information management department or agency to help local companies compete in the marketplace, downstream. This reflects the importance and timeliness of building business and strategic intelligence policies to ensure economic progress and development in the eastern DRC. Business and strategic intelligence provides companies with an appropriate tool for researching, collecting, processing and disseminating information useful for decision-making among stakeholders, in order to cope with a crisis or competitive situation. The study makes several key recommendations based on its findings. The government is advised to establish a national policy of business and strategic intelligence by setting up a national strategic intelligence agency serving local companies; companies are advised to establish business intelligence units in their organizational structures so that stakeholders can make advantageous decisions in the competitive market and achieve progress. Finally, the study suggests that further studies be carried out to fully understand the opportunities and impact of business and strategic intelligence in African countries, particularly in the DRC.
BACKGROUND Artificial intelligence (AI) has potential in the optical diagnosis of colorectal polyps. AIM To evaluate the feasibility of real-time use of the computer-aided diagnosis system (CADx) AI for ColoRectal Polyps (AI4CRP) for the optical diagnosis of diminutive colorectal polyps, and to compare its performance with CAD EYE^(TM) (Fujifilm, Tokyo, Japan). The influence of CADx on the optical diagnosis of an expert endoscopist was also investigated. METHODS AI4CRP was developed in-house, and CAD EYE was proprietary software provided by Fujifilm. Both CADx systems exploit convolutional neural networks. Colorectal polyps were characterized as benign or premalignant, with histopathology used as the gold standard. AI4CRP provided an objective assessment of its characterization by presenting a calibrated confidence characterization value (range 0.0–1.0). A predefined cut-off value of 0.6 was set, with values < 0.6 indicating benign and values ≥ 0.6 indicating premalignant colorectal polyps. Low-confidence characterizations were defined as values within a 40% range around the cut-off value of 0.6 (0.36–0.76). The diagnostic performance of self-critical AI4CRP excluded low-confidence characterizations. RESULTS AI4CRP use was feasible and was evaluated on 30 patients with 51 colorectal polyps. Self-critical AI4CRP, excluding 14 low-confidence characterizations [27.5% (14/51)], had a diagnostic accuracy of 89.2%, sensitivity of 89.7%, and specificity of 87.5%, which was higher than AI4CRP alone. CAD EYE had 83.7% diagnostic accuracy, 74.2% sensitivity, and 100.0% specificity. The diagnostic performance of the endoscopist alone (before AI) increased nonsignificantly after reviewing the CADx characterizations of both AI4CRP and CAD EYE (AI-assisted endoscopist). The diagnostic performance of the AI-assisted endoscopist was higher than that of both CADx systems, except for specificity, for which CAD EYE performed best. CONCLUSION Real-time use of AI4CRP was feasible. Objective confidence values provided by a CADx are novel, and self-critical AI4CRP showed higher diagnostic performance than AI4CRP.
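The cut-off and low-confidence band described above can be sketched as a simple decision rule. The function name `characterize`, the strict band boundaries, and the example confidence values are illustrative assumptions, not the actual AI4CRP implementation:

```python
def characterize(confidence, cutoff=0.6, low_band=(0.36, 0.76)):
    """Map a calibrated confidence value (0.0-1.0) to a polyp characterization.

    Values below the cutoff suggest a benign polyp, values at or above it a
    premalignant one; values inside the low-confidence band are flagged so a
    'self-critical' analysis can exclude them.
    """
    label = "premalignant" if confidence >= cutoff else "benign"
    low_confidence = low_band[0] < confidence < low_band[1]
    return label, low_confidence

# A self-critical pass keeps only high-confidence characterizations.
readings = [0.12, 0.41, 0.58, 0.66, 0.91]
kept = [characterize(c)[0] for c in readings if not characterize(c)[1]]
```

Excluding the band values (0.41, 0.58, 0.66) mirrors how self-critical AI4CRP dropped 14 of 51 characterizations before computing accuracy.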
Big data has had significant impacts on our lives, economies, academia and industries over the past decade. The current questions are: What is the future of big data? What era do we live in? This article addresses these questions by looking at meta as an operation and argues that we are living in the era of big intelligence, through analyzing the path from meta(big data) to big intelligence. More specifically, this article analyzes big data from an evolutionary perspective. The article overviews data, information, knowledge, and intelligence (DIKI) and reveals their relationships. After analyzing meta as an operation, this article explores meta(DIKI) and its relationships. It reveals 5 Bigs consisting of big data, big information, big knowledge, big intelligence and big analytics. Applying meta to the 5 Bigs, this article infers that Big Data 4.0 = meta(big data) = big intelligence. This article analyzes how intelligent big analytics supports big intelligence. The proposed approach might facilitate the research and development of big data, big data analytics, business intelligence, artificial intelligence, and data science.
Modern medicine relies on various medical imaging technologies for non-invasively observing patients' anatomy. However, the interpretation of medical images can be highly subjective and dependent on the expertise of clinicians. Moreover, some potentially useful quantitative information in medical images, especially that which is not visible to the naked eye, is often ignored during clinical practice. In contrast, radiomics performs high-throughput feature extraction from medical images, which enables quantitative analysis of medical images and prediction of various clinical endpoints. Studies have reported that radiomics exhibits promising performance in diagnosis and in predicting treatment responses and prognosis, demonstrating its potential to be a non-invasive auxiliary tool for personalized medicine. However, radiomics remains in a developmental phase, as numerous technical challenges have yet to be solved, especially in feature engineering and statistical modeling. In this review, we introduce the current utility of radiomics by summarizing research on its application in the diagnosis, prognosis, and prediction of treatment responses in patients with cancer. We focus on machine learning approaches: for feature extraction and selection during feature engineering, and for imbalanced datasets and multi-modality fusion during statistical modeling. Furthermore, we discuss the stability, reproducibility, and interpretability of features, and the generalizability and interpretability of models. Finally, we offer possible solutions to current challenges in radiomics research.
Artificial intelligence can be indirectly applied to the repair of peripheral nerve injury. Specifically, it can be used to analyze and process data regarding peripheral nerve injury and repair, while study findings on peripheral nerve injury and repair can provide valuable data to enrich artificial intelligence algorithms. To investigate advances in the use of artificial intelligence in the diagnosis, rehabilitation, and scientific examination of peripheral nerve injury, we used CiteSpace and VOSviewer software to analyze the relevant literature included in the Web of Science from 1994–2023. We identified the following research hotspots in peripheral nerve injury and repair: (1) diagnosis, classification, and prognostic assessment of peripheral nerve injury using neuroimaging and artificial intelligence techniques, such as corneal confocal microscopy and coherent anti-Stokes Raman spectroscopy; (2) motion control and rehabilitation following peripheral nerve injury using artificial neural networks and machine learning algorithms, such as wearable devices and assisted wheelchair systems; (3) improving the accuracy and effectiveness of peripheral nerve electrical stimulation therapy using artificial intelligence techniques combined with deep learning, such as implantable peripheral nerve interfaces; (4) the application of artificial intelligence technology to brain-machine interfaces for disabled patients and those with reduced mobility, enabling them to control devices such as networked hand prostheses; and (5) artificial intelligence robots that can replace doctors in certain procedures during surgery or rehabilitation, thereby reducing surgical risk and complications and facilitating postoperative recovery. Although artificial intelligence has shown many benefits and potential applications in peripheral nerve injury and repair, the technology has some limitations, such as the consequences of missing or imbalanced data, low data accuracy and reproducibility, and ethical issues (e.g., privacy, data security, research transparency). Future research should address the issue of data collection, as large-scale, high-quality clinical datasets are required to establish effective artificial intelligence models. Multimodal data processing is also necessary, along with interdisciplinary collaboration, medical-industrial integration, and multicenter, large-sample clinical studies.
In recent years, Artificial Intelligence (AI) has revolutionized people's lives. AI has long made breakthrough progress in the field of surgery. However, research on the application of AI in orthopedics is still at an exploratory stage. This paper first introduces the background of AI and orthopedic diseases, addresses the shortcomings of traditional methods in the detection of fractures and orthopedic diseases, highlights the advantages of deep learning and machine learning in image detection, and reviews the latest results of deep learning and machine learning applied to orthopedic image detection in recent years, describing the contributions, strengths and weaknesses, and directions for future improvement in each study. Next, the paper introduces the difficulties of traditional orthopedic surgery and the roles played by AI in preoperative, intraoperative, and postoperative orthopedic surgery, discussing the advantages and prospects of AI in orthopedic surgery. Finally, the article discusses the limitations of current research and technology in clinical applications, proposes solutions to these problems, and outlines possible future research directions. The main objective of this review is to inform future research and development of AI in orthopedics.
A large amount of mobile data from a growing number of high-speed train (HST) users is bringing intelligent HST communications into the era of big data, and artificial intelligence (AI) based HST channel modeling is becoming a trend. This paper provides an AI-based channel characteristic prediction and scenario classification model for millimeter wave (mmWave) HST communications. First, a ray tracing method verified by measurement data is applied to reconstruct four representative HST scenarios. By setting the positions of the transmitter (Tx), receiver (Rx), and other parameters, multi-scenario wireless channel big data is acquired. Then, based on the obtained channel database, a radial basis function neural network (RBF-NN) and a back propagation neural network (BP-NN) are trained for channel characteristic prediction and scenario classification. Finally, the channel characteristic prediction and scenario classification capabilities of the networks are evaluated by calculating the root mean square error (RMSE). The results show that the RBF-NN generally achieves better performance than the BP-NN and is more applicable to prediction in HST scenarios.
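The RMSE metric used above to compare the two networks can be sketched as follows; the `rmse` helper and the path-loss numbers are hypothetical illustrations, not measurement data from the paper:

```python
import math

def rmse(predicted, actual):
    """Root mean square error between predicted and reference channel characteristics."""
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

# Hypothetical path-loss values (dB) along a track segment
measured = [98.2, 101.5, 104.7, 108.0]
rbf_pred = [98.0, 101.9, 104.5, 107.6]   # predictions from a trained RBF-NN
bp_pred  = [97.1, 102.8, 103.5, 109.2]   # predictions from a trained BP-NN

better = "RBF-NN" if rmse(rbf_pred, measured) < rmse(bp_pred, measured) else "BP-NN"
```

A lower RMSE means the network's predicted channel characteristics track the reference data more closely, which is the criterion by which the paper ranks the two networks.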
In this paper, artificial intelligence technology is introduced into the simulation of the muzzle flow field to improve simulation efficiency, and a data-physical fusion driven framework is proposed. First, known flow field data are used to initialize the model parameters, so that the parameters to be trained start close to their optimal values. Then, physical prior knowledge is introduced into the training process, so that the predictions not only fit the known flow field information but also satisfy the physical conservation laws. Two examples demonstrate that a model under the fusion driven framework can solve strongly nonlinear flow field problems and has stronger generalization and extensibility. The proposed model is used to solve a muzzle flow field and to delimit the safety clearance behind the barrel side. The results show that the shape of the safety clearance is roughly the same under different launch speeds, and that the pressure disturbance in the area within 9.2 m behind the muzzle section exceeds the safety threshold, marking it as a dangerous area. Comparison with CFD results shows that, at the same accuracy, the computational efficiency of the proposed model is greatly improved. The proposed model can quickly and accurately simulate the muzzle flow field under various launch conditions.
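The fusion driven idea, fitting known flow-field data while penalizing violations of a physical law, can be sketched as a composite loss. The 1-D zero-gradient "conservation" constraint and the function names here are toy assumptions standing in for the paper's actual conservation laws:

```python
def data_loss(pred, obs):
    """Mean squared misfit to the known flow-field data."""
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

def physics_residual(pred, dx):
    """Toy physics penalty: a steady 1-D field should have zero spatial
    gradient, so penalize its finite differences (a stand-in for a real
    conservation-law residual)."""
    diffs = [(pred[i + 1] - pred[i]) / dx for i in range(len(pred) - 1)]
    return sum(d ** 2 for d in diffs) / len(diffs)

def fusion_loss(pred, obs, dx, lam=0.5):
    """Data-physical fusion objective: fit the data while also satisfying
    the (toy) physical law, as in physics-informed training."""
    return data_loss(pred, obs) + lam * physics_residual(pred, dx)
```

Minimizing `fusion_loss` instead of `data_loss` alone is what lets such models extrapolate beyond the training data, since predictions are also pulled toward physically consistent fields.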
Explainable Artificial Intelligence (XAI) can enhance decision-making and improve on rule-based techniques by using more advanced Machine Learning (ML) and Deep Learning (DL) based algorithms. In this paper, we focus on e-healthcare systems, where efficient decision-making and data classification matter most: in data security, data handling, diagnostics, laboratories, and decision support. Federated Machine Learning (FML) is a new and advanced technology that helps to maintain privacy for Personal Health Records (PHR) and handle large amounts of medical data effectively. In this context, XAI, along with FML, increases efficiency and improves the security of e-healthcare systems. The experiments show efficient system performance by implementing a federated averaging algorithm on an open-source Federated Learning (FL) platform. The experimental evaluation demonstrates the accuracy rate with an epoch size of 5, a batch size of 16, and 5 clients, which shows a higher accuracy rate (19,104). We conclude the paper by discussing the existing gaps and future work in e-healthcare systems.
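The federated averaging step at the heart of FML can be sketched as a size-weighted parameter average; the hospital names and weight values are hypothetical, and a real FL platform would add local training rounds and secure communication around this aggregation:

```python
def federated_average(client_weights, client_sizes):
    """FedAvg sketch: the server averages client model parameters, weighting
    each client by its number of local (PHR) training records, so raw patient
    data never leaves the client."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    return [
        sum(w[i] * size / total for w, size in zip(client_weights, client_sizes))
        for i in range(n_params)
    ]

# Two hypothetical hospital clients sharing a 2-parameter model;
# hospital_b holds three times as many records, so its update dominates.
hospital_a = [1.0, 3.0]
hospital_b = [3.0, 5.0]
global_model = federated_average([hospital_a, hospital_b], client_sizes=[100, 300])
```

Only the parameter vectors and record counts cross the network, which is the privacy property the paper relies on for Personal Health Records.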
DURING our discussions at workshops for writing “What Does ChatGPT Say: The DAO from Algorithmic Intelligence to Linguistic Intelligence” [1], we had expected that the next milestone for Artificial Intelligence (AI) would be in the direction of Imaginative Intelligence (II), i.e., something similar to automatic words-to-videos generation or intelligent digital movie/theater technology that could be used for conducting new “Artificiofactual Experiments” [2] to replace conventional “Counterfactual Experiments” in scientific research and technical development for both natural and social studies [2]-[6]. Now we have OpenAI's Sora, so soon; but this is not the end, which is actually still far away, and it is just the beginning.
Research on Chinese Sign Language (CSL) provides convenience and support for individuals with hearing impairments to communicate and integrate into society. This article reviews the relevant literature on Chinese Sign Language Recognition (CSLR) over the past 20 years. Hidden Markov Models (HMM), Support Vector Machines (SVM), and Dynamic Time Warping (DTW) were found to be the most commonly employed technologies among traditional recognition methods. Benefiting from the rapid development of computer vision and artificial intelligence technology, Convolutional Neural Networks (CNN), 3D-CNN, YOLO, Capsule Networks (CapsNet) and various other deep neural networks have sprung up; Deep Neural Networks (DNNs) and their derived models are integral to modern artificial intelligence recognition methods. In addition, technologies that were widely used in the early days have also been integrated and applied in specific hybrid models and customized recognition methods. Sign language data collection includes acquiring data from data gloves, data sensors (such as Kinect, Leap Motion, etc.), and high-definition photography. Meanwhile, facial expression recognition, complex background processing, and 3D sign language recognition have also attracted research interest among scholars. Due to the uniqueness and complexity of Chinese sign language, accuracy, robustness, real-time performance, and user independence are significant challenges for future sign language recognition research. Suitable datasets and evaluation criteria are also worth pursuing.
As big data becomes an apparent challenge when building a business intelligence (BI) system, there is a motivation to address this challenge in higher education institutions (HEIs). Monitoring quality in HEIs encompasses handling huge amounts of data coming from different sources. This paper reviews big data and analyses cases from the literature regarding quality assurance (QA) in HEIs. It also outlines a framework that can address the big data challenge in HEIs by handling QA monitoring with BI dashboards, and a prototype dashboard is presented. The dashboard was developed using a utilisation tool to monitor QA in HEIs and provide visual representations of big data. The prototype dashboard enables stakeholders to monitor compliance with QA standards while addressing the big data challenge associated with the substantial volume of data managed by HEIs' QA systems. The paper also outlines how the developed system integrates big data from social media into the monitoring dashboard.
While emerging technologies such as the Internet of Things (IoT) have many benefits, they also pose considerable security challenges that require innovative solutions, including those based on artificial intelligence (AI), given that these techniques are increasingly being used by malicious actors to compromise IoT systems. Although an ample body of research focusing on conventional AI methods exists, there is a paucity of studies on advanced statistical and optimization approaches aimed at enhancing security measures. To contribute to this nascent research stream, a novel AI-driven security system denoted "AI2AI" is presented in this work. AI2AI employs AI techniques to enhance the performance and optimize security mechanisms within the IoT framework. We also introduce the Genetic Algorithm Anomaly Detection and Prevention Deep Neural Network (GAADPSDNN) system, which can be implemented to effectively identify, detect, and prevent cyberattacks targeting IoT devices. Notably, this system is adaptable to both federated and centralized learning environments and accommodates a wide array of IoT devices. Our evaluation of the GAADPSDNN system on the recently compiled WUSTL-IIoT and Edge-IIoT datasets underscores its efficacy. Achieving an impressive overall accuracy of 98.18% on the Edge-IIoT dataset, the GAADPSDNN outperforms the standard deep neural network (DNN) classifier, which reaches 94.11% accuracy. Furthermore, with the proposed enhancements, the accuracy of the unoptimized random forest classifier (80.89%) is improved to 93.51%, while the overall accuracy (98.18%) surpasses the results (93.91%, 94.67%, 94.94%, and 94.96%) achieved by alternative systems based on diverse optimization techniques and the same dataset. The proposed optimization techniques increase the effectiveness of the anomaly detection system by efficiently achieving high accuracy and reducing the computational load on IoT devices through the adaptive selection of active features.
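The genetic-algorithm component, evolving a subset of active features that keeps accuracy high while reducing device load, can be sketched as follows. The fitness function, population sizes, and mutation rate are illustrative assumptions, not the GAADPSDNN configuration:

```python
import random

def ga_feature_select(fitness, n_features, pop=20, gens=30, seed=0):
    """Toy genetic-algorithm feature selection: evolve binary masks that
    maximize a fitness score (e.g., detection accuracy minus a penalty for
    the number of active features)."""
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n_features)] for _ in range(pop)]
    for _ in range(gens):
        # Elitism: keep the best half as parents, breed the rest by crossover.
        parents = sorted(population, key=fitness, reverse=True)[: pop // 2]
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_features)      # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                  # occasional bit-flip mutation
                i = rng.randrange(n_features)
                child[i] ^= 1
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

# Hypothetical fitness: features 0 and 2 are informative; every active
# feature costs computation, standing in for load on an IoT device.
def fitness(mask):
    return 2 * mask[0] + 2 * mask[2] - sum(mask)

best = ga_feature_select(fitness, n_features=6)
```

In the actual system the fitness would be derived from the DNN's detection accuracy on the IIoT datasets, but the evolve-select-crossover-mutate loop has the same shape.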
● AIM: To quantify the performance of artificial intelligence (AI) in detecting glaucoma with spectral-domain optical coherence tomography (SD-OCT) images. ● METHODS: Electronic databases including PubMed, Embase, Scopus, ScienceDirect, ProQuest and the Cochrane Library were searched before May 31, 2023 for studies that adopted AI for glaucoma detection with SD-OCT images. All literature was screened and extracted by two investigators. Meta-analysis, meta-regression, subgroup analysis, and publication-bias assessment were conducted in Stata 16.0. The risk-of-bias assessment was performed in RevMan 5.4 using the QUADAS-2 tool. ● RESULTS: Twenty studies and 51 models were selected for the systematic review and meta-analysis. The pooled sensitivity and specificity were 0.91 (95%CI: 0.86–0.94, I2=94.67%) and 0.90 (95%CI: 0.87–0.92, I2=89.24%). The pooled positive likelihood ratio (PLR) and negative likelihood ratio (NLR) were 8.79 (95%CI: 6.93–11.15, I2=89.31%) and 0.11 (95%CI: 0.07–0.16, I2=95.25%). The pooled diagnostic odds ratio (DOR) and area under the curve (AUC) were 83.58 (95%CI: 47.15–148.15, I2=100%) and 0.95 (95%CI: 0.93–0.97). There was no threshold effect (Spearman correlation coefficient=0.22, P>0.05). ● CONCLUSION: AI detects glaucoma with high accuracy from SD-OCT images. The application of AI-based algorithms in a “doctor + artificial intelligence” workflow can improve the diagnosis of glaucoma.
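The pooled measures above are built from per-study 2×2 confusion matrices. A sketch of the per-study calculations (the confusion-matrix counts are hypothetical) shows how PLR, NLR, and DOR relate to sensitivity and specificity:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Per-study diagnostic measures of the kind a meta-analysis pools:
    sensitivity, specificity, likelihood ratios, and the diagnostic odds
    ratio (DOR = PLR / NLR)."""
    sens = tp / (tp + fn)            # true-positive rate
    spec = tn / (tn + fp)            # true-negative rate
    plr = sens / (1 - spec)          # positive likelihood ratio
    nlr = (1 - sens) / spec          # negative likelihood ratio
    return {"sensitivity": sens, "specificity": spec,
            "PLR": plr, "NLR": nlr, "DOR": plr / nlr}

# Hypothetical confusion matrix for one glaucoma-detection model
m = diagnostic_metrics(tp=90, fp=10, fn=10, tn=90)
```

Note that the DOR also equals (tp·tn)/(fp·fn), so the pooled DOR of 83.58 with PLR 8.79 and NLR 0.11 is internally consistent (8.79 / 0.11 ≈ 80).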
In this editorial we comment on the article “Potential and limitations of ChatGPT and generative artificial intelligence in medical safety education”, published in a recent issue of the World Journal of Clinical Cases. That article described the usefulness of artificial intelligence (AI) in medical safety education. Herein, we focus specifically on the use of AI in the field of pain medicine. AI technology has emerged as a powerful tool; it is expected to play an important role in the healthcare sector and to contribute significantly to pain medicine as further developments are made. AI may have several applications in pain medicine. First, AI can assist in selecting testing methods to identify causes of pain and improve diagnostic accuracy. Entering a patient's symptoms into the algorithm can prompt it to suggest necessary tests and possible diagnoses. Based on the latest medical information and recent research results, AI can support doctors in making accurate diagnoses and setting up an effective treatment plan. Second, AI assists in interpreting medical images. For neural and musculoskeletal disorders, imaging tests are of vital importance. AI can analyze a variety of imaging data, including that from radiography, computed tomography, and magnetic resonance imaging, to identify specific patterns, allowing quick and accurate image interpretation. Third, AI can predict the outcomes of pain treatments, contributing to setting up the optimal treatment plan. By predicting individual patient responses to treatment, AI algorithms can assist doctors in establishing a treatment plan tailored to each patient, further enhancing treatment effectiveness. For efficient utilization of AI in pain medicine, it is crucial to enhance the accuracy of AI decision-making by using more medical data, while issues related to the protection of patients' personal information and responsibility for AI decisions will have to be addressed. In the future, AI technology is expected to be applied innovatively in the field of pain medicine. The advancement of AI is anticipated to have a positive impact on the entire medical field by providing patients with accurate and effective medical services.
Although pediatric perioperative pain management has improved in recent years, valid and reliable pain assessment in the perioperative period remains a challenging task in children. Pediatric perioperative pain management is intractable not only because children cannot express their emotions accurately and objectively, being unable to describe physiological characteristics of sensation that differ from those of adults, but also because there is a lack of effective and specific assessment tools for children. In addition, exposure to repeated painful stimuli early in life is known to have short- and long-term adverse sequelae. The short-term sequelae can induce a series of neurological, endocrine, and cardiovascular stress responses related to psychological trauma, while long-term sequelae may alter the brain maturation process, which can lead to impaired neurodevelopmental, behavioral, and cognitive function. Children's facial expressions largely reflect the degree of pain, which has led to the development of a number of pain scoring tools that, if studied continually and in depth, will help improve the quality of pain management in children. Artificial intelligence (AI) technology, represented by machine learning, has reached an unprecedented level in the image processing of deep facial models through deep convolutional neural networks, which can effectively identify and systematically analyze various subtle features of children's facial expressions. Based on the construction of a large database of images of facial expressions in children with perioperative pain, this study proposes to develop and apply automatic facial pain expression recognition software using AI technology. The study aims to improve postoperative pain management for the pediatric population and the short-term and long-term quality of life for pediatric patients after operational events.
Sleep and well-being are intricately linked, and sleep hygiene is paramount for developing mental well-being and resilience. Although widespread, sleep disorders require an elaborate polysomnography laboratory and a patient stay involving sleep in an unfamiliar environment. Current technologies have allowed various devices to diagnose sleep disorders at home. These devices are at various validation stages, with many already receiving approvals from competent authorities. This has captured vast patient-related physiologic data for advanced analytics using artificial intelligence through machine and deep learning applications. These data are expected to be integrated with patients' Electronic Health Records and to provide individualized prescriptive therapy for sleep disorders in the future.
Recent advancements in science and technology, coupled with the proliferation of data, have urged laboratory medicine to integrate with the era of artificial intelligence (AI) and machine learning (ML). In current evidence-based medicine, laboratory tests that analyse disease patterns through association rule mining (ARM) have emerged as a modern tool for risk assessment and disease stratification, with the potential to reduce cardiovascular disease (CVD) mortality. CVDs are the well-recognised leading global cause of mortality, with higher fatality rates in the Indian population due to associated factors like hypertension, diabetes, and lifestyle choices. AI-driven algorithms have offered deep insights in this field while addressing challenges such as healthcare systems grappling with physician shortages. Personalized medicine, driven by big data, necessitates the integration of ML techniques and high-quality electronic health records to produce meaningful outcomes. These technological advancements enhance computational analyses for both research and clinical practice. ARM plays a pivotal role by uncovering meaningful relationships within databases, aiding in patient survival prediction and risk factor identification. The potential of AI in laboratory medicine is vast, and it must be integrated cautiously, considering potential ethical, legal, and privacy concerns; an AI ethics framework is essential to guide its responsible use. Aligning AI algorithms with existing lab practices, promoting education among healthcare professionals, and fostering careful integration into clinical settings are imperative for harnessing the benefits of this transformative technology.
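A minimal sketch of the ARM idea, mining single-antecedent rules that clear support and confidence thresholds, is shown below; the patient records, item names, and thresholds are hypothetical illustrations, not data from the review:

```python
from itertools import combinations

def association_rules(records, min_support=0.4, min_confidence=0.7):
    """Mine single-antecedent rules A -> B: support is the fraction of
    records containing both items, confidence the fraction of records
    with A that also contain B."""
    n = len(records)
    items = sorted({item for r in records for item in r})
    rules = []
    for a, b in combinations(items, 2):
        for ante, cons in ((a, b), (b, a)):
            both = sum(1 for r in records if ante in r and cons in r)
            ante_n = sum(1 for r in records if ante in r)
            support = both / n
            confidence = both / ante_n if ante_n else 0.0
            if support >= min_support and confidence >= min_confidence:
                rules.append((ante, cons, support, confidence))
    return rules

# Hypothetical lab/risk-factor records for five patients
records = [
    {"hypertension", "high_ldl", "cvd"},
    {"hypertension", "cvd"},
    {"diabetes", "high_ldl", "cvd"},
    {"hypertension", "high_ldl", "cvd"},
    {"diabetes"},
]
rules = association_rules(records)
```

Rules like "hypertension → cvd" surfacing with high support and confidence is the kind of relationship ARM uses for risk-factor identification; production systems (e.g., Apriori or FP-growth) extend this to multi-item antecedents efficiently.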
In recent years, the global surge of High-speed Railway (HSR) has revolutionized ground transportation, providing secure, comfortable, and punctual services. The next-generation HSR, fueled by emerging services like video surveillance, emergency communication, and real-time scheduling, demands advanced capabilities in real-time perception, automated driving, and digitized services, which accelerate the integration and application of Artificial Intelligence (AI) in the HSR system. This paper first provides a brief overview of AI, covering its origin, evolution, and breakthrough applications. A comprehensive review is then given of the most advanced AI technologies and applications in three macro application domains of the HSR system: mechanical manufacturing and electrical control, communication and signal control, and transportation management. The literature is categorized and compared across nine application directions: intelligent manufacturing of trains and key components, forecasting of railroad maintenance, optimization of energy consumption in railroads and trains, communication security, communication dependability, channel modeling and estimation, passenger scheduling, traffic flow forecasting, and high-speed railway smart platforms. Finally, challenges associated with the application of AI are discussed, offering insights for future research directions.
BACKGROUND Diabetes, a globally escalating health concern, necessitates innovative solutions for efficient detection and management. Blood glucose control is an essential aspect of managing diabetes, as is finding the most effective ways to control it. The latest findings suggest that a basal insulin administration rate and a single, high-concentration injection before a meal may not be sufficient to maintain healthy blood glucose levels. While the basal insulin rate treatment can stabilize blood glucose levels over the long term, it may not be enough to bring the levels below the post-meal limit after 60 min. The short-term impacts of meals can be greatly reduced by high-concentration injections, which help stabilize blood glucose levels. Unfortunately, they cannot provide long-term stability that satisfies the post-meal or pre-meal restrictions. However, proportional-integral-derivative (PID) control with a basal dose maintains blood glucose levels within the target range for a longer period. AIM To develop a closed-loop electronic system that automatically pumps the required insulin into the patient's body in synchronization with glucose sensor readings. METHODS The proposed system integrates a glucose sensor, a decision unit, and a pumping module to specifically address the pumping of insulin and enhance system effectiveness. Serving as the intelligence hub, the decision unit analyzes data from the glucose sensor to determine the optimal insulin dosage, guided by a pre-existing glucose and insulin level table. The artificial intelligence detection block processes this information, providing decision instructions to the pumping module. Equipped with communication antennas, the glucose sensor and micropump operate in a feedback loop, creating a closed-loop system that eliminates the need for manual intervention. RESULTS The incorporation of a PID controller to assess and regulate blood glucose and insulin levels in individuals with diabetes introduces a sophisticated and dynamic element to diabetes management. The simulation not only allows visualization of how the body responds to different inputs but also offers a valuable tool for predicting and testing the effects of various interventions over time. The PID controller's role in adjusting insulin dosage based on the discrepancy between desired setpoints and actual measurements showcases a proactive strategy for maintaining blood glucose levels within a healthy range. This dynamic feedback loop not only delays the onset of steady-state conditions but also effectively counteracts post-meal spikes in blood glucose. CONCLUSION The WiFi-controlled voltage controller and the PID controller simulation collectively underscore the ongoing efforts to enhance efficiency, safety, and personalized care within the realm of diabetes management. These technological advancements not only contribute to the optimization of insulin delivery systems but also have the potential to reshape our understanding of glucose and insulin dynamics, fostering a new era of precision medicine in the treatment of diabetes.
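The setpoint-tracking logic this abstract describes can be sketched as a minimal discrete PID loop. The gains, setpoint, and one-line glucose response model below are illustrative assumptions, not the study's actual parameters:

```python
# Minimal discrete PID controller of the kind used for closed-loop insulin
# dosing. Gains, setpoint, and the toy glucose model are assumed values.

class PID:
    def __init__(self, kp, ki, kd, setpoint, dt=1.0):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement):
        # Positive error (glucose above setpoint) -> positive control action.
        error = measurement - self.setpoint
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy simulation: each unit of control action lowers glucose by 2 mg/dL;
# a negative action stands in for withholding basal insulin.
pid = PID(kp=0.05, ki=0.01, kd=0.0, setpoint=100.0)
glucose = 180.0
for _ in range(200):
    action = pid.update(glucose)
    glucose -= 2.0 * action
```

The integral term is what removes the steady-state offset the abstract attributes to fixed basal-plus-bolus dosing: the proportional term alone would settle above the setpoint, while the accumulated error keeps adjusting the dose until the discrepancy vanishes.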
Abstract: The aim of this study was to verify the existence of business and strategic intelligence policies at the level of Congolese companies and at the state level, likely to foster progress and healthy development in the east of the DRC. The study was based on a mixed perspective consisting of objective analysis of quantitative data and interpretative analysis of qualitative data. The results showed that business and strategic intelligence policies have not been established at either company or state level, as this is an area of activity that is not known to the players in companies and public departments, and there are no units or offices in their organizational structures responsible for managing strategic information for competitiveness on the international market. In addition, there is a real need to establish strategic information management units within companies, upstream, and to set up a national strategic information management department or agency to help local companies compete in the marketplace, downstream. This reflects the importance and timeliness of building business and strategic intelligence policies to ensure economic progress and development in the eastern DRC. Business and strategic intelligence provides companies with an appropriate tool for researching, collecting, processing and disseminating information useful for decision-making among stakeholders, in order to cope with a crisis or competitive situation. The study suggests a number of key recommendations based on its findings. To the government, it is recommended to establish a national policy of business and strategic intelligence by setting up a national agency of strategic intelligence in favor of local companies; and to companies, to establish business intelligence units in their organizational structures in favor of stakeholders, to foster advantageous decision-making in the competitive market and achieve progress.
Finally, the study suggests that studies be carried out to fully understand the opportunities and impact of business and strategic intelligence in African countries, particularly in the DRC.
Abstract: BACKGROUND Artificial intelligence (AI) has potential in the optical diagnosis of colorectal polyps. AIM To evaluate the feasibility of real-time use of the computer-aided diagnosis system (CADx) AI for ColoRectal Polyps (AI4CRP) for the optical diagnosis of diminutive colorectal polyps and to compare its performance with CAD EYE™ (Fujifilm, Tokyo, Japan). CADx influence on the optical diagnosis of an expert endoscopist was also investigated. METHODS AI4CRP was developed in-house, and CAD EYE was proprietary software provided by Fujifilm. Both CADx systems exploit convolutional neural networks. Colorectal polyps were characterized as benign or premalignant, and histopathology was used as the gold standard. AI4CRP provided an objective assessment of its characterization by presenting a calibrated confidence characterization value (range 0.0-1.0). A predefined cut-off value of 0.6 was set, with values <0.6 indicating benign and values ≥0.6 indicating premalignant colorectal polyps. Low-confidence characterizations were defined as values within 40% around the cut-off value of 0.6 (<0.36 and >0.76). Self-critical AI4CRP's diagnostic performances excluded low-confidence characterizations. RESULTS AI4CRP use was feasible and performed on 30 patients with 51 colorectal polyps. Self-critical AI4CRP, excluding 14 low-confidence characterizations [27.5% (14/51)], had a diagnostic accuracy of 89.2%, sensitivity of 89.7%, and specificity of 87.5%, which was higher compared to AI4CRP. CAD EYE had an 83.7% diagnostic accuracy, 74.2% sensitivity, and 100.0% specificity. Diagnostic performances of the endoscopist alone (before AI) increased nonsignificantly after reviewing the CADx characterizations of both AI4CRP and CAD EYE (AI-assisted endoscopist). Diagnostic performances of the AI-assisted endoscopist were higher compared to both CADx systems, except for specificity, for which CAD EYE performed best. CONCLUSION Real-time use of AI4CRP was feasible. The objective confidence values provided by a CADx are novel, and self-critical AI4CRP showed higher diagnostic performances compared to AI4CRP.
Funding: This research is supported partially by the Papua New Guinea Science and Technology Secretariat (PNGSTS) under project grant No. 1-3962 PNGSTS.
Abstract: Big data has had significant impacts on our lives, economies, academia and industries over the past decade. The current questions are: What is the future of big data? What era do we live in? This article addresses these questions by looking at meta as an operation and argues that we are living in the era of big intelligence, through analyzing the path from meta(big data) to big intelligence. More specifically, this article analyzes big data from an evolutionary perspective. The article overviews data, information, knowledge, and intelligence (DIKI) and reveals their relationships. After analyzing meta as an operation, this article explores meta(DIKI) and its relationships. It reveals 5 Bigs consisting of big data, big information, big knowledge, big intelligence and big analytics. Applying meta to the 5 Bigs, this article infers that Big Data 4.0 = meta(big data) = big intelligence. This article also analyzes how intelligent big analytics supports big intelligence. The proposed approach might facilitate the research and development of big data, big data analytics, business intelligence, artificial intelligence, and data science.
Funding: Supported in part by the National Natural Science Foundation of China (82072019); the Shenzhen Basic Research Program (JCYJ20210324130209023); the Shenzhen-Hong Kong-Macao S&T Program (Category C) (SGDX20201103095002019); the Mainland-Hong Kong Joint Funding Scheme (MHKJFS) (MHP/005/20); the Project of Strategic Importance Fund (P0035421) and the Projects of RISA (P0043001) from the Hong Kong Polytechnic University; the Natural Science Foundation of Jiangsu Province (BK20201441); the Provincial and Ministry Co-constructed Project of Henan Province Medical Science and Technology Research (SBGJ202103038, SBGJ202102056); the Henan Province Key R&D and Promotion Project (Science and Technology Research) (222102310015); the Natural Science Foundation of Henan Province (222300420575); and the Henan Province Science and Technology Research (222102310322).
Abstract: Modern medicine is reliant on various medical imaging technologies for non-invasively observing patients' anatomy. However, the interpretation of medical images can be highly subjective and dependent on the expertise of clinicians. Moreover, some potentially useful quantitative information in medical images, especially that which is not visible to the naked eye, is often ignored during clinical practice. In contrast, radiomics performs high-throughput feature extraction from medical images, which enables quantitative analysis of medical images and prediction of various clinical endpoints. Studies have reported that radiomics exhibits promising performance in diagnosis and in predicting treatment responses and prognosis, demonstrating its potential to be a non-invasive auxiliary tool for personalized medicine. However, radiomics remains in a developmental phase, as numerous technical challenges have yet to be solved, especially in feature engineering and statistical modeling. In this review, we introduce the current utility of radiomics by summarizing research on its application in the diagnosis, prognosis, and prediction of treatment responses in patients with cancer. We focus on machine learning approaches: for feature extraction and selection during feature engineering, and for imbalanced datasets and multi-modality fusion during statistical modeling. Furthermore, we discuss the stability, reproducibility, and interpretability of features, and the generalizability and interpretability of models. Finally, we offer possible solutions to current challenges in radiomics research.
Funding: Supported by the Capital's Funds for Health Improvement and Research, No. 2022-2-2072 (to YG).
Abstract: Artificial intelligence can be indirectly applied to the repair of peripheral nerve injury. Specifically, it can be used to analyze and process data regarding peripheral nerve injury and repair, while study findings on peripheral nerve injury and repair can provide valuable data to enrich artificial intelligence algorithms. To investigate advances in the use of artificial intelligence in the diagnosis, rehabilitation, and scientific examination of peripheral nerve injury, we used CiteSpace and VOSviewer software to analyze the relevant literature included in the Web of Science from 1994-2023. We identified the following research hotspots in peripheral nerve injury and repair: (1) diagnosis, classification, and prognostic assessment of peripheral nerve injury using neuroimaging and artificial intelligence techniques, such as corneal confocal microscopy and coherent anti-Stokes Raman spectroscopy; (2) motion control and rehabilitation following peripheral nerve injury using artificial neural networks and machine learning algorithms, such as wearable devices and assisted wheelchair systems; (3) improving the accuracy and effectiveness of peripheral nerve electrical stimulation therapy using artificial intelligence techniques combined with deep learning, such as implantable peripheral nerve interfaces; (4) the application of artificial intelligence technology to brain-machine interfaces for disabled patients and those with reduced mobility, enabling them to control devices such as networked hand prostheses; and (5) artificial intelligence robots that can replace doctors in certain procedures during surgery or rehabilitation, thereby reducing surgical risk and complications, and facilitating postoperative recovery. Although artificial intelligence has shown many benefits and potential applications in peripheral nerve injury and repair, there are some limitations to this technology, such as the consequences of missing or imbalanced data, low data accuracy and reproducibility, and ethical issues (e.g., privacy, data security, research transparency). Future research should address the issue of data collection, as large-scale, high-quality clinical datasets are required to establish effective artificial intelligence models. Multimodal data processing is also necessary, along with interdisciplinary collaboration, medical-industrial integration, and multicenter, large-sample clinical studies.
Funding: This work was supported in part by the National Natural Science Foundation of China under Grants 61861007 and 61640014; in part by the Guizhou Province Science and Technology Planning Project ZK[2021]303; in part by the Guizhou Province Science Technology Support Plan under Grants [2022]017, [2023]096 and [2022]264; in part by the Guizhou Education Department Innovation Group Project under Grant KY[2021]012; and in part by the Talent Introduction Project of Guizhou University (2014)-08.
Abstract: In recent years, Artificial Intelligence (AI) has revolutionized people's lives. AI has long made breakthrough progress in the field of surgery. However, research on the application of AI in orthopedics is still in the exploratory stage. This paper first introduces the background of AI and orthopedic diseases, addresses the shortcomings of traditional methods in the detection of fractures and orthopedic diseases, highlights the advantages of deep learning and machine learning in image detection, and reviews the latest results of deep learning and machine learning applied to orthopedic image detection in recent years, describing the contributions, strengths and weaknesses of each study and the directions for future improvement. Next, the paper introduces the difficulties of traditional orthopedic surgery and the roles played by AI in preoperative, intraoperative, and postoperative orthopedic surgery, discussing the advantages and prospects of AI in orthopedic surgery. Finally, the article discusses the limitations of current research and technology in clinical applications, proposes solutions to these problems, and outlines possible future research directions. The main objective of this review is to inform future research and development of AI in orthopedics.
Funding: Supported by the National Key R&D Program of China under Grant 2021YFB1407001; the National Natural Science Foundation of China (NSFC) under Grants 62001269 and 61960206006; the State Key Laboratory of Rail Traffic Control and Safety under Grant RCS2022K009, Beijing Jiaotong University; the Future Plan Program for Young Scholars of Shandong University; and the EU H2020 RISE TESTBED2 project under Grant 872172.
Abstract: A large amount of mobile data from growing numbers of high-speed train (HST) users has brought intelligent HST communications into the era of big data, and the corresponding artificial intelligence (AI) based HST channel modeling has become a trend. This paper provides AI-based channel characteristic prediction and scenario classification models for millimeter wave (mmWave) HST communications. First, a ray-tracing method verified by measurement data is applied to reconstruct four representative HST scenarios. By setting the positions of the transmitter (Tx), the receiver (Rx), and other parameters, multi-scenario wireless channel big data is acquired. Then, based on the obtained channel database, a radial basis function neural network (RBF-NN) and a back propagation neural network (BP-NN) are trained for channel characteristic prediction and scenario classification. Finally, the channel characteristic prediction and scenario classification capabilities of the networks are evaluated by calculating the root mean square error (RMSE). The results show that the RBF-NN generally achieves better performance than the BP-NN and is more applicable to prediction in HST scenarios.
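The RMSE criterion used to score the two networks' channel-characteristic predictions is a one-liner; the predicted and measured values below are made-up placeholders, not the paper's data:

```python
# Root mean square error between predicted and reference channel
# characteristics (e.g., delay spread per test position, in ns).
# All numbers are illustrative placeholders.
import math

def rmse(predicted, actual):
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual))

pred = [41.0, 38.5, 45.2, 40.1]   # hypothetical network outputs
true = [40.0, 39.0, 44.0, 41.5]   # hypothetical reference values
err = rmse(pred, true)
```

A lower RMSE on held-out positions is what "better performance" means in the comparison between the RBF-NN and BP-NN.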
Funding: Supported by the Natural Science Foundation of Jiangsu Province of China (Grant No. BK20210347) and the National Natural Science Foundation of China (Grant No. U2141246).
Abstract: Artificial intelligence technology is introduced into the simulation of the muzzle flow field to improve simulation efficiency in this paper. A data-physical fusion driven framework is proposed. First, known flow field data is used to initialize the model parameters, so that the parameters to be trained start close to their optimal values. Then, physical prior knowledge is introduced into the training process so that the prediction results not only fit the known flow field information but also satisfy the physical conservation laws. Through two examples, it is shown that the model under the fusion-driven framework can solve strongly nonlinear flow field problems and has stronger generalization and extensibility. The proposed model is used to solve a muzzle flow field, and the safety clearance behind the barrel side is delineated. It is found that the shape of the safety clearance under different launch speeds is roughly the same, and that the pressure disturbance in the area within 9.2 m behind the muzzle section exceeds the safety threshold, making it a dangerous area. Comparison with CFD results shows that the computational efficiency of the proposed model is greatly improved at the same accuracy. The proposed model can quickly and accurately simulate the muzzle flow field under various launch conditions.
Abstract: Explainable Artificial Intelligence (XAI) offers advanced features to enhance decision-making and improve rule-based techniques by using more advanced Machine Learning (ML) and Deep Learning (DL) based algorithms. In this paper, we chose e-healthcare systems for efficient decision-making and data classification, especially in data security, data handling, diagnostics, laboratories, and decision-making. Federated Machine Learning (FML) is a new and advanced technology that helps to maintain privacy for Personal Health Records (PHR) and handle a large amount of medical data effectively. In this context, XAI, along with FML, increases efficiency and improves the security of e-healthcare systems. The experiments show efficient system performance by implementing a federated averaging algorithm on an open-source Federated Learning (FL) platform. The experimental evaluation demonstrates the accuracy rate with an epoch size of 5, a batch size of 16, and 5 clients, which shows a higher accuracy rate (19,104). We conclude the paper by discussing the existing gaps and future work in e-healthcare systems.
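The federated averaging step the experiments implement can be sketched as a size-weighted average of per-client parameters: clients train locally on private records, and only parameters leave the device. The client parameter vectors and record counts below are toy values:

```python
# Sketch of one FedAvg aggregation round: the server averages each
# parameter across clients, weighted by local dataset size, so that
# raw health records never leave the clients. All numbers are toy values.

def federated_average(client_params, client_sizes):
    """Weighted average of per-client parameter vectors (FedAvg)."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [
        sum(w[i] * n for w, n in zip(client_params, client_sizes)) / total
        for i in range(dim)
    ]

clients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]   # 3 clients' trained weights
sizes = [10, 10, 20]                              # local record counts
global_w = federated_average(clients, sizes)
```

The size weighting is the design choice that keeps the global model unbiased when clients hold different amounts of data, which is typical across hospitals and clinics.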
Funding: Supported by the National Natural Science Foundation of China (62271485, 61903363, U1811463, 62103411, 62203250) and the Science and Technology Development Fund of Macao SAR (0093/2023/RIA2, 0050/2020/A1).
Abstract: DURING our discussion at workshops for writing "What Does ChatGPT Say: The DAO from Algorithmic Intelligence to Linguistic Intelligence" [1], we had expected that the next milestone for Artificial Intelligence (AI) would be in the direction of Imaginative Intelligence (II), i.e., something similar to automatic words-to-videos generation or intelligent digital movies/theater technology that could be used for conducting new "Artificiofactual Experiments" [2] to replace conventional "Counterfactual Experiments" in scientific research and technical development for both natural and social studies [2]-[6]. Now we have OpenAI's Sora, so soon; but this is not the final step, which is actually far away, and it is just the beginning.
Funding: Supported by the National Social Science Foundation Annual Project "Research on Evaluation and Improvement Paths of Integrated Development of Disabled Persons" (Grant No. 20BRK029); the National Language Commission's "14th Five-Year Plan" Scientific Research Plan 2023 Project "Domain Digital Language Service Resource Construction and Key Technology Research" (YB145-72); and the National Philosophy and Social Sciences Foundation (Grant No. 20BTQ065).
Abstract: Research on Chinese Sign Language (CSL) provides convenience and support for individuals with hearing impairments to communicate and integrate into society. This article reviews the relevant literature on Chinese Sign Language Recognition (CSLR) over the past 20 years. Hidden Markov Models (HMM), Support Vector Machines (SVM), and Dynamic Time Warping (DTW) were found to be the most commonly employed technologies among traditional identification methods. Benefiting from the rapid development of computer vision and artificial intelligence technology, Convolutional Neural Networks (CNN), 3D-CNN, YOLO, Capsule Networks (CapsNet) and various deep neural networks have sprung up. Deep Neural Networks (DNNs) and their derived models are integral to modern artificial intelligence recognition methods. In addition, technologies that were widely used in the early days have also been integrated and applied in specific hybrid models and customized identification methods. Sign language data collection includes acquiring data from data gloves, data sensors (such as Kinect, Leap Motion, etc.), and high-definition photography. Meanwhile, facial expression recognition, complex background processing, and 3D sign language recognition have also attracted research interest among scholars. Due to the uniqueness and complexity of Chinese sign language, accuracy, robustness, real-time performance, and user independence are significant challenges for future sign language recognition research. Additionally, suitable datasets and evaluation criteria are also worth pursuing.
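Of the traditional techniques the review lists, DTW is the most compact to illustrate: it aligns two gesture trajectories that differ in speed by warping the time axis. The two 1-D sequences below are toy stand-ins for real multi-dimensional sign trajectories:

```python
# Minimal dynamic time warping (DTW) distance, a classic matching
# technique for gesture trajectories of unequal length/speed.
# The sequences are toy 1-D examples, not real sign-language features.

def dtw(a, b):
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of: insertion, deletion, or match
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

# the same gesture shape, with one sample repeated at the start
dist = dtw([0, 1, 2, 1, 0], [0, 0, 1, 2, 1, 0])
```

Because the warping path may repeat samples, the slower rendition of the same shape aligns at zero cost, which is exactly why DTW suited early signer-dependent recognition.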
Abstract: As big data becomes an apparent challenge to handle when building a business intelligence (BI) system, there is a motivation to address this challenging issue in higher education institutions (HEIs). Monitoring quality in HEIs encompasses handling huge amounts of data coming from different sources. This paper reviews big data and analyses cases from the literature regarding quality assurance (QA) in HEIs. It also outlines a framework that can address the big data challenge in HEIs to handle QA monitoring using BI dashboards, and a prototype dashboard is presented in this paper. The dashboard was developed using a utilisation tool to monitor QA in HEIs and provide visual representations of big data. The prototype dashboard enables stakeholders to monitor compliance with QA standards while addressing the big data challenge associated with the substantial volume of data managed by HEIs' QA systems. This paper also outlines how the developed system integrates big data from social media into the monitoring dashboard.
Abstract: While emerging technologies such as the Internet of Things (IoT) have many benefits, they also pose considerable security challenges that require innovative solutions, including those based on artificial intelligence (AI), given that these techniques are increasingly being used by malicious actors to compromise IoT systems. Although an ample body of research focusing on conventional AI methods exists, there is a paucity of studies related to advanced statistical and optimization approaches aimed at enhancing security measures. To contribute to this nascent research stream, a novel AI-driven security system denoted as "AI2AI" is presented in this work. AI2AI employs AI techniques to enhance the performance and optimize security mechanisms within the IoT framework. We also introduce the Genetic Algorithm Anomaly Detection and Prevention Deep Neural Network (GAADPSDNN) system that can be implemented to effectively identify, detect, and prevent cyberattacks targeting IoT devices. Notably, this system demonstrates adaptability to both federated and centralized learning environments, accommodating a wide array of IoT devices. Our evaluation of the GAADPSDNN system using the recently compiled WUSTL-IIoT and Edge-IIoT datasets underscores its efficacy. Achieving an impressive overall accuracy of 98.18% on the Edge-IIoT dataset, the GAADPSDNN outperforms the standard deep neural network (DNN) classifier, which attains 94.11% accuracy. Furthermore, with the proposed enhancements, the accuracy of the unoptimized random forest classifier (80.89%) is improved to 93.51%, while the overall accuracy (98.18%) surpasses the results (93.91%, 94.67%, 94.94%, and 94.96%) achieved when alternative systems based on diverse optimization techniques and the same dataset are employed. The proposed optimization techniques increase the effectiveness of the anomaly detection system by efficiently achieving high accuracy and reducing the computational load on IoT devices through the adaptive selection of active features.
Abstract: ●AIM: To quantify the performance of artificial intelligence (AI) in detecting glaucoma with spectral-domain optical coherence tomography (SD-OCT) images. ●METHODS: Electronic databases including PubMed, Embase, Scopus, ScienceDirect, ProQuest and the Cochrane Library were searched before May 31, 2023 for studies that adopted AI for glaucoma detection with SD-OCT images. All of the literature was screened and extracted by two investigators. Meta-analysis, Meta-regression, subgroup analysis, and publication bias assessment were conducted in Stata 16.0. The risk of bias assessment was performed in RevMan 5.4 using the QUADAS-2 tool. ●RESULTS: Twenty studies and 51 models were selected for systematic review and Meta-analysis. The pooled sensitivity and specificity were 0.91 (95%CI: 0.86-0.94, I²=94.67%) and 0.90 (95%CI: 0.87-0.92, I²=89.24%). The pooled positive likelihood ratio (PLR) and negative likelihood ratio (NLR) were 8.79 (95%CI: 6.93-11.15, I²=89.31%) and 0.11 (95%CI: 0.07-0.16, I²=95.25%). The pooled diagnostic odds ratio (DOR) and area under the curve (AUC) were 83.58 (95%CI: 47.15-148.15, I²=100%) and 0.95 (95%CI: 0.93-0.97). There was no threshold effect (Spearman correlation coefficient=0.22, P>0.05). ●CONCLUSION: AI detects glaucoma with high accuracy in SD-OCT images. The application of AI-based algorithms in a "doctor + artificial intelligence" workflow can improve the diagnosis of glaucoma.
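The likelihood ratios and DOR reported above follow from sensitivity and specificity by standard formulas, which can be checked directly. Note that plugging in the pooled point estimates (0.91, 0.90) gives values close to, but not identical to, the abstract's pooled PLR/NLR/DOR, since a meta-analysis pools each metric across studies rather than deriving them from one another:

```python
# Standard diagnostic-accuracy identities:
#   PLR = sens / (1 - spec), NLR = (1 - sens) / spec, DOR = PLR / NLR.
# Inputs are the abstract's pooled point estimates; the outputs are a
# consistency check, not the abstract's separately pooled values.

def likelihood_ratios(sens, spec):
    plr = sens / (1 - spec)      # positive likelihood ratio
    nlr = (1 - sens) / spec      # negative likelihood ratio
    dor = plr / nlr              # diagnostic odds ratio
    return plr, nlr, dor

plr, nlr, dor = likelihood_ratios(0.91, 0.90)
```

The computed PLR ≈ 9.1 and NLR ≈ 0.1 sit inside the reported confidence intervals (6.93-11.15 and 0.07-0.16), which is the sanity check one would expect.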
Abstract: In this editorial we comment on the article "Potential and limitations of ChatGPT and generative artificial intelligence in medical safety education" published in the recent issue of the World Journal of Clinical Cases. That article described the usefulness of artificial intelligence (AI) in medical safety education. Herein, we focus specifically on the use of AI in the field of pain medicine. AI technology has emerged as a powerful tool and is expected to play an important role in the healthcare sector and significantly contribute to pain medicine as further developments are made. AI may have several applications in pain medicine. First, AI can assist in selecting testing methods to identify causes of pain and improve diagnostic accuracy. Entry of a patient's symptoms into the algorithm can prompt it to suggest necessary tests and possible diagnoses. Based on the latest medical information and recent research results, AI can support doctors in making accurate diagnoses and setting up an effective treatment plan. Second, AI assists in interpreting medical images. For neural and musculoskeletal disorders, imaging tests are of vital importance. AI can analyze a variety of imaging data, including that from radiography, computed tomography, and magnetic resonance imaging, to identify specific patterns, allowing quick and accurate image interpretation. Third, AI can predict the outcomes of pain treatments, contributing to setting up the optimal treatment plan. By predicting individual patient responses to treatment, AI algorithms can assist doctors in establishing a treatment plan tailored to each patient, further enhancing treatment effectiveness. For efficient utilization of AI in the pain medicine field, it is crucial to enhance the accuracy of AI decision-making by using more medical data, while issues related to the protection of patient personal information and responsibility for AI decisions will have to be addressed. In the future, AI technology is expected to be innovatively applied in the field of pain medicine. The advancement of AI is anticipated to have a positive impact on the entire medical field by providing patients with accurate and effective medical services.
Abstract: Although pediatric perioperative pain management has improved in recent years, valid and reliable pain assessment in the perioperative period of children remains a challenging task. Pediatric perioperative pain management is intractable not only because children cannot express their emotions accurately and objectively, owing to an inability to describe the physiological characteristics of what they feel, which differ from those of adults, but also because there is a lack of effective and specific assessment tools for children. In addition, exposure to repeated painful stimuli early in life is known to have short- and long-term adverse sequelae. The short-term sequelae can induce a series of neurological, endocrine, and cardiovascular system stress responses related to psychological trauma, while long-term sequelae may alter the brain maturation process, which can lead to impaired neurodevelopmental, behavioral, and cognitive function. Children's facial expressions largely reflect the degree of pain, which has led to the development of a number of pain scoring tools that will help improve the quality of pain management in children if they are continually studied in depth. Artificial intelligence (AI) technology, represented by machine learning, has reached an unprecedented level in image processing of deep facial models through deep convolutional neural networks, which can effectively identify and systematically analyze various subtle features of children's facial expressions. Based on the construction of a large database of images of facial expressions in children with perioperative pain, this study proposes to develop and apply automatic facial pain expression recognition software using AI technology. The study aims to improve postoperative pain management for the pediatric population and the short-term and long-term quality of life for pediatric patients after operative events.
Abstract: Sleep and well-being are intricately linked, and sleep hygiene is paramount for developing mental well-being and resilience. Although widespread, sleep disorders require an elaborate polysomnography laboratory and a patient stay, with sleep in unfamiliar environments. Current technologies have allowed various devices to diagnose sleep disorders at home. However, these devices are in various stages of validation, with many already receiving approvals from competent authorities. This has captured vast patient-related physiologic data for advanced analytics using artificial intelligence through machine and deep learning applications. This is expected to be integrated with patients' Electronic Health Records and to provide individualized prescriptive therapy for sleep disorders in the future.
Funding: Supported by the National Natural Science Foundation of China (62172033).
Abstract: In recent years, the global surge of High-speed Railway (HSR) has revolutionized ground transportation, providing secure, comfortable, and punctual services. The next generation of HSR, fueled by emerging services such as video surveillance, emergency communication, and real-time scheduling, demands advanced capabilities in real-time perception, automated driving, and digitized services, which accelerate the integration and application of Artificial Intelligence (AI) in the HSR system. This paper first provides a brief overview of AI, covering its origin, evolution, and breakthrough applications. A comprehensive review is then given of the most advanced AI technologies and applications in three macro application domains of the HSR system: mechanical manufacturing and electrical control, communication and signal control, and transportation management. The literature is categorized and compared across nine application directions: intelligent manufacturing of trains and key components, forecasting of railroad maintenance, optimization of energy consumption in railroads and trains, communication security, communication dependability, channel modeling and estimation, passenger scheduling, traffic flow forecasting, and high-speed railway smart platforms. Finally, challenges associated with the application of AI are discussed, offering insights for future research directions.
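Of the nine directions above, traffic flow forecasting is the most self-contained to sketch. A toy autoregressive forecast, with a hypothetical passenger-count series standing in for real HSR data, might look like:

```python
# Toy short-term traffic flow forecast via a least-squares AR(1) fit.
# The passenger-count series is hypothetical, for illustration only.

def fit_ar1(series):
    """Least-squares fit of x[t] = a * x[t-1] + b."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

def forecast(series, steps, a, b):
    """Roll the fitted AR(1) model forward `steps` intervals."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = a * last + b
        out.append(last)
    return out

# Hypothetical passengers per 15-minute interval at one station.
flow = [120, 135, 150, 168, 181, 197, 210, 224]
a, b = fit_ar1(flow)
print(forecast(flow, 3, a, b))
```

The surveyed literature uses far richer models (recurrent and graph neural networks, for example); this sketch only illustrates the shape of the forecasting problem.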
Abstract: BACKGROUND Diabetes, a globally escalating health concern, necessitates innovative solutions for efficient detection and management. Blood glucose control is an essential aspect of managing diabetes, and finding the most effective ways to achieve it remains a challenge. The latest findings suggest that a basal insulin administration rate and a single, high-concentration injection before a meal may not be sufficient to maintain healthy blood glucose levels. While basal-rate insulin treatment can stabilize blood glucose levels over the long term, it may not be enough to bring levels below the post-meal limit within 60 min. High-concentration injections can greatly reduce the short-term impact of meals and help stabilize blood glucose levels, but they cannot provide the long-term stability needed to satisfy post-meal or pre-meal restrictions. However, proportional-integral-derivative (PID) control combined with a basal dose maintains blood glucose levels within range for a longer period.
AIM To develop a closed-loop electronic system that automatically pumps the required insulin into the patient's body in synchronization with glucose sensor readings.
METHODS The proposed system integrates a glucose sensor, a decision unit, and a pumping module to deliver insulin and enhance system effectiveness. Serving as the intelligence hub, the decision unit analyzes data from the glucose sensor to determine the optimal insulin dosage, guided by a pre-existing glucose and insulin level table. The artificial intelligence detection block processes this information and provides decision instructions to the pumping module. Equipped with communication antennas, the glucose sensor and micropump operate in a feedback loop, creating a closed-loop system that eliminates the need for manual intervention.
RESULTS The incorporation of a PID controller to assess and regulate blood glucose and insulin levels in individuals with diabetes introduces a sophisticated and dynamic element to diabetes management. The simulation not only allows visualization of how the body responds to different inputs but also offers a valuable tool for predicting and testing the effects of various interventions over time. The PID controller's role in adjusting insulin dosage based on the discrepancy between desired setpoints and actual measurements demonstrates a proactive strategy for maintaining blood glucose levels within a healthy range. This dynamic feedback loop not only delays the onset of steady-state conditions but also effectively counteracts post-meal spikes in blood glucose.
CONCLUSION The WiFi-controlled voltage controller and the PID controller simulation collectively underscore ongoing efforts to enhance efficiency, safety, and personalized care in diabetes management. These technological advancements not only contribute to the optimization of insulin delivery systems but also have the potential to reshape our understanding of glucose and insulin dynamics, fostering a new era of precision medicine in the treatment of diabetes.
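The PID feedback loop described in the RESULTS section can be sketched as follows. The first-order glucose model, gains, and rate constants below are illustrative assumptions, not the study's clinical parameters.

```python
# Minimal PID glucose-regulation sketch on a toy first-order plant.
# All gains and dynamics are illustrative assumptions, not clinical values.

def simulate_pid(setpoint=100.0, glucose0=180.0, steps=240, dt=1.0,
                 kp=0.05, ki=0.001, kd=0.1):
    """Simulate glucose (mg/dL) driven toward setpoint by PID-dosed insulin."""
    glucose = glucose0
    integral = 0.0
    prev_error = glucose - setpoint
    trace = []
    for _ in range(steps):
        error = glucose - setpoint            # positive when glucose too high
        integral += error * dt
        derivative = (error - prev_error) / dt
        # Insulin dose cannot be negative, so clamp the controller output.
        dose = max(0.0, kp * error + ki * integral + kd * derivative)
        prev_error = error
        # Toy plant: insulin lowers glucose; a small basal drift raises it.
        glucose += (-0.8 * dose + 0.2) * dt
        trace.append(glucose)
    return trace

trace = simulate_pid()
print(f"start {trace[0]:.1f} -> end {trace[-1]:.1f} mg/dL")
```

The integral term is what removes the steady-state offset left by the basal drift, mirroring the abstract's point that PID control with a basal dose keeps glucose in range longer than either component alone.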