Objective: To examine the trajectory of psychosomatic symptoms and to explore the impact of psychosomatic symptoms on setup error in patients undergoing breast cancer radiotherapy. Methods: A total of 102 patients with early breast cancer who received initial radiotherapy were consecutively recruited. The M.D. Anderson Symptom Inventory (MDASI) and three anxiety scales, i.e., the Self-Rating Anxiety Scale (SAS), the State-Trait Anxiety Inventory (STAI), and the Anxiety Sensitivity Index (ASI), were used in this study. Radiotherapy setup errors were measured in millimetres by comparing the real-time isocentric verification film acquired during radiotherapy with the digitally reconstructed radiograph (DRR). Patients completed the assessments at three time points: before the initial radiotherapy (T1), before the middle radiotherapy (T2), and before the last radiotherapy (T3). Results: The SAS and STAI-State scores of breast cancer patients at T1 were significantly higher than those at T2 and T3 (F=24.44, P<0.001; F=30.25, P<0.001). The core symptoms of the MDASI were positively correlated with anxiety severity. The setup errors of patients with high SAS scores were greater than those of patients with low anxiety levels at T1 (Z=-2.01, P=0.044). Higher SAS scores were also associated with a higher risk of radiotherapy setup errors at T1 (B=0.458, P<0.05). Conclusions: Patients with early breast cancer experienced the highest level of anxiety before the initial radiotherapy, after which anxiety levels declined. Patients with pronounced somatic symptoms of anxiety may be at higher risk of radiotherapy setup errors; identifying treatment-related psychosomatic symptoms may help mitigate their impact on patients and treatment.
Objective: We aim to quantify the magnitude of setup errors in head and neck cancer patients treated with intensity-modulated radiotherapy (IMRT) and to recommend an appropriate PTV margin. Methods: 60 patients with head and neck cancer requiring bilateral neck irradiation were planned and treated with a simultaneous integrated boost IMRT technique, either radically or postoperatively. All patients underwent image-guided radiotherapy (IGRT) with once-weekly scheduled cone beam computed tomography (CBCT). The 3D displacements and the systematic and random errors were calculated, and the appropriate PTV expansion was determined using Van Herk's formula. Results: The mean 3D displacement was 0.16 cm in the vertical direction, 0.14 cm in the horizontal direction, and 0.16 cm in the longitudinal direction. Conclusion: Use of weekly CBCT allows the planning target volume (PTV) expansion to be reduced for our setup. The appropriate clinical target volume (CTV)-PTV margin for our institute is 0.30 cm, 0.38 cm, and 0.33 cm in the horizontal, vertical, and longitudinal directions, respectively.
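The Van Herk margin recipe used in the abstract above (margin ≈ 2.5Σ + 0.7σ, with Σ the population systematic error and σ the pooled random error) can be sketched in a few lines. The shift data below are synthetic, and the per-axis layout (patients × fractions, in mm) is an assumption for illustration, not the study's actual dataset:

```python
import numpy as np

def van_herk_margin(shifts):
    """CTV-PTV margin from per-patient setup shifts (patients x fractions, mm).

    Systematic error Sigma = SD of the per-patient mean shifts;
    random error sigma    = RMS of the per-patient SDs.
    Van Herk recipe: margin = 2.5*Sigma + 0.7*sigma.
    """
    shifts = np.asarray(shifts, dtype=float)
    patient_means = shifts.mean(axis=1)           # each patient's systematic shift
    sigma_sys = patient_means.std(ddof=1)         # population systematic error
    sigma_rand = np.sqrt((shifts.std(axis=1, ddof=1) ** 2).mean())  # pooled random error
    return 2.5 * sigma_sys + 0.7 * sigma_rand

# toy example: 5 patients x 6 weekly CBCT shifts along one axis (mm)
rng = np.random.default_rng(0)
shifts = rng.normal(loc=rng.normal(0, 1.0, size=(5, 1)), scale=1.5, size=(5, 6))
print(f"CTV-PTV margin: {van_herk_margin(shifts):.2f} mm")
```

The same routine would be run once per axis (vertical, horizontal, longitudinal) to obtain the three per-direction margins the abstract reports.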
Purpose: To investigate the feasibility of applying the ANOVA method newly proposed by Yukinori to verify setup errors, PTV (planning target volume) margins, and DVHs for lung cancer treated with SBRT. Methods: 20 patients receiving SBRT to 50 Gy in 5 fractions on a Varian iX linear accelerator were selected. Each patient was scanned with kV-CBCT before daily treatment to verify the setup position. Two other error calculation methods, proposed by Van Herk and Remeijer, were compared to assess the statistical differences in systematic errors (Σ), random errors (σ), PTV margins, and DVHs. Results: Using two PTV margin formulas (Stroom, Van Herk), the PTV margins calculated by the Yukinori method in the three directions were (5.89 and 3.95), (5.54 and 3.55), and (3.24 and 0.78) mm; by the Van Herk method, (6.10 and 4.25), (5.73 and 3.83), and (3.51 and 1.13) mm; and by the Remeijer method, (6.39 and 4.57), (5.98 and 4.10), and (3.69 and 1.33) mm. The PTV volumes obtained with the Yukinori method were significantly smaller (P < 0.05) than those of the Van Herk and Remeijer methods. However, the dosimetric indices of the PTV (D98, D50, D2) and of the OARs (mean dose, V20, V5) showed no significant difference (P > 0.05) among the three methods. Conclusions: In lung SBRT, owing to the reduced number of fractions and the high dose per fraction, ANOVA was able to offset the effect of random factors in systematic errors, reducing the PTV margins and volumes. However, no distinct improvement in dose distribution was found in the target volume or organs at risk.
In this paper, an antenna array composed of a circular array and an orthogonal linear array is proposed, using a long- and short-baseline “orthogonal linear array” design and a circular-array ambiguity resolution design based on multi-group baseline clustering. The effectiveness of the antenna array is verified by extensive simulation and experiment. After system deviation correction, it is found that in the L/S/C/X frequency bands the ambiguity resolution probability is high and the phase-difference system error between channels is essentially the same. The angle measurement error is less than 0.5°, and the positioning error is less than 2.5 km. Notably, as the center frequency increases, calibration consistency improves and the calibration frequency points become applicable over a wider frequency range. At a center frequency of 11.5 GHz, the calibration frequency point bandwidth extends to 1200 MHz. This combined antenna array holds significant promise for a wide range of applications in contemporary wireless communication systems.
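The long/short-baseline idea behind the array above can be illustrated with a minimal phase-interferometry sketch: a short baseline (shorter than half a wavelength) gives an unambiguous but coarse angle, and the long baseline's integer phase ambiguity is resolved by picking the candidate closest to that coarse estimate. The frequency, baseline lengths, and arrival angle below are illustrative assumptions, not the paper's array geometry:

```python
import numpy as np

C = 3e8  # speed of light (m/s)

def wrap_phase(phi):
    """Wrap a phase to the principal interval (-pi, pi]."""
    return (phi + np.pi) % (2 * np.pi) - np.pi

def resolve_angle(dphi_short, dphi_long, d_short, d_long, freq):
    """Long/short-baseline ambiguity resolution.

    The short baseline (d < lambda/2) yields a coarse, unambiguous sin(theta);
    the long baseline is precise but ambiguous, so we choose the integer
    ambiguity k whose implied sin(theta) is closest to the coarse value.
    """
    lam = C / freq
    sin_coarse = dphi_short * lam / (2 * np.pi * d_short)
    k_max = int(np.ceil(2 * d_long / lam))
    candidates = [(dphi_long + 2 * np.pi * k) * lam / (2 * np.pi * d_long)
                  for k in range(-k_max, k_max + 1)]
    candidates = [s for s in candidates if abs(s) <= 1]
    best = min(candidates, key=lambda s: abs(s - sin_coarse))
    return np.degrees(np.arcsin(best))

# simulate a 20-degree arrival at 3 GHz (lambda = 0.1 m)
freq, theta = 3e9, np.radians(20.0)
lam = C / freq
d_short, d_long = 0.04, 0.5  # short baseline < lambda/2 -> unambiguous
dphi_s = wrap_phase(2 * np.pi * d_short * np.sin(theta) / lam)
dphi_l = wrap_phase(2 * np.pi * d_long * np.sin(theta) / lam)
print(resolve_angle(dphi_s, dphi_l, d_short, d_long, freq))  # ~20.0
```

The long baseline's larger electrical length is what drives the angle error below the 0.5° figure quoted above, once its ambiguity is resolved.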
An externally generated resonant magnetic perturbation can induce complex non-ideal MHD responses at its resonant surfaces. We have studied these plasma responses using Fitzpatrick's improved two-fluid model and the program LAYER, and calculated the error field penetration threshold for J-TEXT. We find that the island width increases slightly as the error field amplitude increases while the amplitude is below the critical penetration value; once penetration occurs, however, the island width suddenly jumps to a large value because the shielding effect of the plasma against the error field disappears. By scanning the natural mode frequency, we find that the shielding effect of the plasma decreases as the natural mode frequency decreases. Finally, we obtain the scaling of the m/n=2/1 penetration threshold with density and temperature.
Timer error, as well as its convention, is important for dose accuracy during irradiation. This paper determines the timer error of the irradiators at the Secondary Standard Dosimetry Laboratory (SSDL) in Nigeria: a Cs-137 OB6 irradiator and an X-ray irradiator at the protection-level SSDL, and a Co-60 irradiator at the therapy-level SSDL. A PTW UNIDOS electrometer and an LS01 ionization chamber were used at the protection level to obtain doses for the Cs-137 OB6 and X-ray irradiators, while an IBA Farmer-type ionization chamber and an IBA DOSE 1 electrometer were used at the therapy-level SSDL. The single/multiple-exposure method and the graphical method were used to determine the timer error of the three irradiators. The timer error obtained was 0.48 ± 0.01 s for the Cs-137 OB6 irradiator, 0.09 ± 0.01 s for the X-ray irradiator, and 1.21 ± 0.04 s for the GammaBeam X200. Source-to-detector distance and field size were observed not to contribute to the timer error of the irradiators. The timer error of the Co-60 GammaBeam X200 irradiator (the only one of the three with a pneumatic source-transfer system) increases with the age of the machine.
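The graphical method mentioned above rests on a simple linear model: if the timer runs for a set time t but the source is actually exposed for (t + τ), the integrated reading M = R·(t + τ) is linear in t, and the timer error τ falls out of a straight-line fit. A minimal sketch with made-up readings (the dose rate and τ below are illustrative, not the SSDL's measured values):

```python
import numpy as np

def timer_error(set_times, readings):
    """Graphical method for timer error.

    The true irradiation time is (t + tau), so the reading M = R*(t + tau)
    is linear in the set time t. A least-squares fit M = a*t + b gives the
    dose rate a and the timer error tau = b/a (seconds).
    """
    a, b = np.polyfit(set_times, readings, 1)
    return b / a

# toy data: dose rate 2.0 nC/s with a 0.5 s timer error
t = np.array([10.0, 20.0, 30.0, 60.0])
m = 2.0 * (t + 0.5)
print(f"timer error = {timer_error(t, m):.2f} s")  # ~0.50
```

Because the fit uses only set time versus reading, source-to-detector distance and field size cancel out of τ, consistent with the observation in the abstract.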
AIM: To investigate the prevalence of visual impairment (VI) and provide an estimation of uncorrected refractive errors in school-aged children, conducted by optometry students as a community service. METHODS: This cross-sectional study included 3343 participants. The initial examination assessed uncorrected distance visual acuity (UDVA) and visual acuity (VA) through a +2.00 D lens. The inclusion criteria for a subsequent comprehensive cycloplegic eye examination, performed by an optometrist, were a UDVA <0.6 decimal (0.20 logMAR) and/or a VA with +2.00 D ≥0.8 decimal (0.10 logMAR). RESULTS: The sample had a mean age of 10.92±2.13y (range 4 to 17y), and 51.3% of the children were female (n=1715). The majority of the children (89.7%) were aged 8 to 14y. Among the ethnic groups, the highest representation was from the Luhya group (60.6%), followed by the Luo (20.4%). Mean logMAR UDVA, choosing the best eye for each student, was 0.29±0.17 (range 1.70 to 0.22). In total, 246 participants (7.4%) had a full eye examination. The estimated prevalence of myopia (defined as spherical equivalent ≤-0.5 D) was 1.45% of the total sample, while around 0.18% of the total sample had hyperopia exceeding +1.75 D. Refractive astigmatism (cylinder <-0.75 D) was found in 0.21% (7/3343) of the children. The VI prevalence was 1.26% of the total sample; among these cases, 76.2% could be attributed to uncorrected refractive error. Amblyopia was detected in 0.66% (22/3343) of the screened children. No statistically significant correlation was observed between age or gender and refractive values. CONCLUSION: The primary cause of VI is uncorrected refractive error, with myopia being the most prevalent refractive error observed. These findings underscore the significance of early identification and correction of refractive errors in school-aged children as a means to alleviate the impact of VI.
The widespread adoption of the Internet of Things (IoT) has transformed various sectors globally, making them more intelligent and connected. However, this advancement comes with challenges related to the effectiveness of IoT devices. These devices, present in offices, homes, industries, and more, need constant monitoring to ensure their proper functionality. The success of smart systems relies on their seamless operation and ability to handle faults. Sensors, crucial components of these systems, gather data and contribute to their functionality. Therefore, sensor faults can compromise the system's reliability and undermine the trustworthiness of smart environments. To address these concerns, various techniques and algorithms can be employed to enhance the performance of IoT devices through effective fault detection. This paper conducted a thorough review of the existing literature and a detailed analysis that links sensor errors with prominent fault detection techniques capable of addressing them. This study is innovative because it paves the way for future researchers to explore errors that have not yet been tackled by existing fault detection methods. Significantly, the paper also highlights essential factors for selecting and adopting fault detection techniques, as well as the characteristics of datasets and their corresponding recommended techniques. Additionally, the paper presents a methodical overview of fault detection techniques employed in smart devices, including the metrics used for evaluation, and examines the body of academic work related to sensor faults and fault detection techniques within the domain. This reflects the growing inclination and scholarly attention of researchers and academicians toward strategies for fault detection within the realm of the Internet of Things.
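Two of the sensor faults such surveys typically catalogue, spikes and stuck-at readings, admit very simple statistical detectors. The sketch below is a generic illustration of that idea (robust z-score for spikes, variance collapse for stuck-at), not a method taken from the surveyed literature; the thresholds and sample data are assumptions:

```python
import numpy as np

def detect_faults(x, window=10, spike_z=4.0, stuck_eps=1e-9):
    """Flag two common sensor faults in a 1-D reading stream:
    - spikes: points far from the series median (robust z-score via the MAD)
    - stuck-at: windows whose standard deviation collapses to ~zero.
    """
    x = np.asarray(x, float)
    med = np.median(x)
    mad = np.median(np.abs(x - med)) or 1.0      # guard against MAD = 0
    spikes = np.abs(x - med) / (1.4826 * mad) > spike_z
    stuck = np.zeros_like(spikes)
    for i in range(len(x) - window + 1):
        if x[i:i + window].std() < stuck_eps:    # no variation: sensor frozen
            stuck[i:i + window] = True
    return spikes, stuck

# toy stream: normal variation, one spike at index 8, then a frozen tail
x = [0, 1, 2, 1, 0, 1, 2, 1, 50, 1, 2, 1, 0] + [3.0] * 12
spikes, stuck = detect_faults(x, window=10)
print(spikes[8], stuck[-1])  # True True
```

Real deployments would tune the window and thresholds per sensor; the point here is only that the fault classes the review links to detection techniques map onto concrete, testable signal conditions.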
In this paper, let M_n denote the maximum of the logarithmic general error distribution with parameter v ≥ 1. Higher-order expansions for the distributions of the powered extremes M_n^p are derived under an optimal choice of normalizing constants. It is shown that when v = 1, M_n^p converges to the Fréchet extreme value distribution at the rate of 1/n, and if v > 1 then M_n^p converges to the Gumbel extreme value distribution at the rate of (log log n)^2/(log n)^(1-1/v).
Readout errors caused by measurement noise are a significant source of error in quantum circuits; they severely affect output results and are an urgent problem in noisy intermediate-scale quantum (NISQ) computing. In this paper, we use the bit-flip averaging (BFA) method to mitigate readout errors in quantum generative adversarial networks (QGAN) for image generation. The method simplifies the response matrix structure by averaging over random bit-flips applied to each qubit in advance, avoiding the high measurement cost of traditional error mitigation methods. Our experiments were simulated in Qiskit using the handwritten digit image recognition dataset. Under the BFA-based method, the Kullback-Leibler (KL) divergence of the generated images converges to 0.04, 0.05, and 0.1 for readout error probabilities of p=0.01, p=0.05, and p=0.1, respectively. Additionally, by evaluating the fidelity of the quantum states representing the images, we observe average fidelity values of 0.97, 0.96, and 0.95 for the three readout error probabilities. These results demonstrate the robustness of the model in mitigating readout errors and provide a highly fault-tolerant mechanism for image generation models.
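The response-matrix correction that underlies readout mitigation can be shown in miniature. The sketch below is a single-qubit illustration with a symmetric flip probability (the symmetrized structure that bit-flip averaging effectively produces), not the full BFA/QGAN pipeline from the paper; the probabilities are assumed values:

```python
import numpy as np

def response_matrix(p):
    """Symmetric single-qubit readout response: flip probability p for
    both 0->1 and 1->0 (the simplified form targeted by bit-flip averaging)."""
    return np.array([[1 - p, p], [p, 1 - p]])

def mitigate(measured, p):
    """Invert the response matrix to recover the ideal outcome distribution,
    then clip and renormalize so the result stays a valid probability vector."""
    est = np.linalg.solve(response_matrix(p), measured)
    est = np.clip(est, 0, None)
    return est / est.sum()

true = np.array([0.8, 0.2])
noisy = response_matrix(0.05) @ true   # simulate readout noise on the counts
print(mitigate(noisy, 0.05))           # ~[0.8, 0.2]
```

On n qubits the full response matrix is 2^n × 2^n, which is exactly the measurement/characterization cost that averaging-based schemes such as BFA aim to sidestep.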
This study explores the application of single-photon detection (SPD) technology in underwater wireless optical communication (UWOC) and analyzes the influence of different modulation modes and error correction coding types on communication performance. The study investigates the impact of on-off keying (OOK) and 2-pulse-position modulation (2-PPM) on the bit error rate (BER) in single-channel intensity and polarization multiplexing, and compares the error correction performance of low-density parity-check (LDPC) and Reed-Solomon (RS) codes. The effects of the unscattered photon ratio and the depolarization ratio on BER are also verified. Finally, a UWOC system based on SPD is constructed, achieving 14.58 Mbps with polarization OOK multiplexing modulation and 4.37 Mbps with polarization 2-PPM multiplexing modulation using LDPC error correction.
In this paper, an efficient unequal error protection (UEP) scheme for online fountain codes is proposed. In the build-up phase, a traversing-selection strategy is proposed to select the most important symbols (MIS). Then, in the completion phase, a weighted-selection strategy is applied to provide low overhead. The performance of the proposed scheme is analyzed and compared with the existing UEP online fountain scheme. Simulation results show that, for the MIS and the least important symbols (LIS), when the bit error ratio is 10^-4 the proposed scheme achieves 85% and 31.58% overhead reduction, respectively.
In existing landslide susceptibility prediction (LSP) models, the influence of random errors in landslide conditioning factors on LSP is not considered; instead, the original conditioning factors are directly taken as model inputs, which brings uncertainty to LSP results. This study aims to reveal how different proportions of random error in the conditioning factors affect LSP uncertainty, and further to explore a method that can effectively reduce such random errors. The original conditioning factors are first used to construct original-factors-based LSP models, and then random errors of 5%, 10%, 15% and 20% are added to these original factors to construct the corresponding errors-based LSP models. Next, low-pass-filter-based LSP models are constructed by eliminating the random errors with a low-pass filter. Ruijin County, China, with 370 landslides and 16 conditioning factors, is used as the study case, and three typical machine learning models, multilayer perceptron (MLP), support vector machine (SVM) and random forest (RF), are selected as LSP models. The resulting LSP uncertainties show that: (1) the low-pass filter can effectively reduce the random errors in the conditioning factors and thereby decrease LSP uncertainty; (2) as the proportion of random error increases from 5% to 20%, LSP uncertainty increases continuously; (3) the original-factors-based models are feasible for LSP in the absence of more accurate conditioning factors; (4) the influences of the two uncertainty sources, the choice of machine learning model and the proportion of random error, on LSP modeling are large and roughly equal; and (5) Shapley values effectively explain the internal mechanism by which the machine learning models predict landslide susceptibility. In conclusion, a greater proportion of random error in the conditioning factors results in higher LSP uncertainty, and a low-pass filter can effectively reduce these random errors.
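The core preprocessing step above, adding proportional random error to a conditioning factor and then suppressing it with a low-pass filter, can be sketched as follows. A simple moving average stands in for the study's filter (the actual filter design is not specified here, so this is an assumption), and the smooth sine "factor" is synthetic:

```python
import numpy as np

def add_random_error(factor, proportion, rng):
    """Add zero-mean random error scaled to a proportion of the factor's std."""
    return factor + rng.normal(0, proportion * factor.std(), size=factor.shape)

def low_pass(x, window=5):
    """Simple moving-average low-pass filter (edge-padded, same length out)."""
    pad = window // 2
    xp = np.pad(x, pad, mode="edge")
    kernel = np.ones(window) / window
    return np.convolve(xp, kernel, mode="valid")

rng = np.random.default_rng(1)
factor = np.sin(np.linspace(0, 4 * np.pi, 200))   # smooth "true" conditioning factor
noisy = add_random_error(factor, 0.20, rng)       # 20% proportional random error
rmse_noisy = np.sqrt(((noisy - factor) ** 2).mean())
rmse_filt = np.sqrt(((low_pass(noisy) - factor) ** 2).mean())
print(rmse_filt < rmse_noisy)  # filtering should reduce the error
```

The design assumption matches the paper's premise: the true conditioning factor varies slowly in space while the random error is high-frequency, so low-pass filtering removes mostly error and little signal.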
This paper investigates the anomaly-resistant decentralized state estimation (SE) problem for a class of wide-area power systems that are divided into several non-overlapping areas connected through transmission lines. Two classes of measurements (i.e., local measurements and edge measurements) are obtained from the individual areas and the transmission lines, respectively. A decentralized state estimator, whose performance is resistant against measurements with anomalies, is designed based on the minimum error entropy with fiducial points (MEEF) criterion. Specifically, 1) an augmented model, which incorporates the local prediction and the local measurement, is developed by resorting to the unscented transformation approach and the statistical linearization approach; 2) using the augmented model, an MEEF-based cost function is designed that reflects the local prediction errors of the state and the measurement; and 3) the local estimate is first obtained by minimizing the MEEF-based cost function through a fixed-point iteration and then updated using the edge measurement information. Finally, simulation experiments with three scenarios are carried out on the IEEE 14-bus system to illustrate the validity of the proposed anomaly-resistant decentralized SE scheme.
The assessment of the measurement error status of online capacitor voltage transformers (CVTs) in the power grid is of great significance to the fair trade of electric energy and the secure operation of the grid. This paper proposes an online CVT error-state evaluation method based on the in-phase relationship and outlier detection. First, the method uses the in-phase relationship to remove the influence of primary-side fluctuations in the grid on assessment accuracy. Next, principal component analysis (PCA) is employed to separate the error-change information of the CVT from the measured values and to compute statistics that characterize the error state. Finally, the local outlier factor (LOF) is used to detect outliers in these statistics, with thresholds serving to appraise the CVT error state. Experimental results demonstrate the efficacy of the method, which can track CVT error changes online and assess the error state with clear gains in reliability, accuracy, and sensitivity; the assessment accuracy reaches 0.01%.
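The statistical core of the scheme, using the in-phase relationship plus PCA to separate common primary-side fluctuations from per-CVT error drift, can be sketched on synthetic data. This is a NumPy illustration with a simple robust score standing in for the LOF step (the data, noise levels, and drift are all assumed, not the paper's measurements):

```python
import numpy as np

rng = np.random.default_rng(0)
# three CVTs on the same bus: identical primary voltage plus small individual noise
primary = 1.0 + 0.02 * rng.standard_normal(500)               # per-unit fluctuations
readings = primary[:, None] + 5e-4 * rng.standard_normal((500, 3))
readings[300:, 2] += np.linspace(0, 0.004, 200)               # CVT 3 slowly drifts

# PCA via SVD: the first principal component carries the common primary-side
# variation shared by the in-phase CVTs; the residual isolates per-CVT error changes
X = readings - readings.mean(axis=0)
_, _, vt = np.linalg.svd(X, full_matrices=False)
pc1 = vt[0]
resid = X - np.outer(X @ pc1, pc1)

# robust distance score on the residuals (a simple stand-in for LOF)
score = np.abs(resid[:, 2] - np.median(resid[:, 2])) / resid[:300, 2].std()
print("late-window mean score:", score[400:].mean())
```

Because the primary-side swings (std ~0.02) dwarf the drift (max 0.004), comparing raw readings against a fixed threshold would fail; projecting out the common component first is what makes the small error change detectable.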
In this paper, we propose a nonconforming virtual element method (NCVEM) discretization for the pointwise-control-constrained optimal control problem governed by elliptic equations. Based on the NCVEM approximation of the state equation and the variational discretization of the control variable, we construct a virtual element discrete scheme. For the state, adjoint state, and control variable, we obtain the corresponding a priori estimates in the H^1 and L^2 norms. Finally, some numerical experiments are carried out to support the theoretical results.
Chinese non-English majors form a large group of English learners. In the process of English pronunciation acquisition, issues such as incomplete phonological knowledge, mother-tongue transfer, and overgeneralization lead to confusion of phonemes and stress, misunderstanding of syllable structure, and errors of assimilation, elision, and epenthesis. The accuracy of English pronunciation can be improved only by knowing both the English and Chinese phonological systems, strengthening the teaching of English phonological knowledge, and adopting a variety of phonological training activities.
AIM: To determine the prevalence of refractive error in 5- to 17-year-old schoolchildren in Puerto Rico. METHODS: A quantitative descriptive study of 2867 children aged 5 to 17y from all seven educational regions of Puerto Rico was conducted from 2016 to 2019. Refractive error was determined via static and subjective refraction. Children with distance acuity ≤20/40 or near visual acuity ≤20/32 had a cycloplegic refraction. Data analysis included descriptive statistics, correlation coefficients, Kruskal-Wallis, Chi-square, and t-test calculations. RESULTS: Twenty percent of the children had a spherical equivalent refractive error ≤-0.50 D, 3.2% had a spherical equivalent ≥+2.00 D, and 10.4% had astigmatism ≥1 D. There was a statistically (but not clinically) significant myopic shift in spherical equivalent refractive error with age (P<0.001). The prevalence of myopia increased with age (P<0.001), but the prevalence of hyperopia (P=0.59) and astigmatism (P=0.51) did not. Males had a significantly higher hyperopic spherical equivalent than females (P<0.001). Females had a higher prevalence of myopia (P<0.001) than males, but there was no difference in the prevalence of hyperopia (P=0.74) or astigmatism (P=0.87). CONCLUSION: The prevalence of a spherical equivalent of -0.50 D or less (myopia, 20.7%) is one of the highest among similar-aged children worldwide. Further studies should explore the rate of myopia progression in children in Puerto Rico, and individual children should be monitored to assess the need for treatment of myopia progression.
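The spherical equivalent (SE) classification used in the refractive-error abstracts above reduces to one formula and three thresholds. A minimal sketch, using SE = sphere + cylinder/2 and this study's cutoffs (myopia SE ≤ -0.50 D, hyperopia SE ≥ +2.00 D, astigmatism cylinder magnitude ≥ 1 D); the example prescriptions are made up:

```python
def spherical_equivalent(sphere, cylinder):
    """Spherical equivalent in diopters: SE = sphere + cylinder/2."""
    return sphere + cylinder / 2.0

def classify(sphere, cylinder):
    """Classify a refraction with the thresholds used in the study above."""
    se = spherical_equivalent(sphere, cylinder)
    labels = []
    if se <= -0.50:
        labels.append("myopia")
    if se >= 2.00:
        labels.append("hyperopia")
    if abs(cylinder) >= 1.0:
        labels.append("astigmatism")
    return labels or ["none"]

print(classify(-1.00, -0.50))  # SE = -1.25 D -> ['myopia']
```

Note that the categories are not exclusive: a refraction such as (0.00, -1.50) is both myopic by SE and astigmatic by cylinder, which is why prevalence percentages for myopia and astigmatism can overlap.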
To solve the complex weight-matrix derivative problem that arises when using the weighted least squares method to estimate the parameters of the mixed additive and multiplicative random error model (MAM error model), we use an improved, derivative-free artificial bee colony algorithm together with the bootstrap method to estimate the parameters and evaluate the accuracy of the MAM error model. The improved artificial bee colony algorithm can update individuals in multiple dimensions and improves the cooperation between individuals by constructing a new search equation based on the idea of quasi-affine transformation. The experimental results show that, under the weighted least squares criterion, the algorithm obtains results consistent with the weighted least squares method without repeated formula derivation. The parameter estimation and accuracy evaluation method based on the bootstrap can produce better parameter estimates and more reasonable accuracy information than existing methods, providing a new approach to parameter estimation and accuracy evaluation for the MAM error model.
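The appeal of the bootstrap for accuracy evaluation is that it works for any derivative-free estimator: resample the observations with replacement, re-estimate, and take the spread of the replicates as the accuracy measure. A minimal sketch with the sample mean as the estimator (the MAM-model estimator itself, e.g. the bee-colony fit, would simply be substituted for `np.mean`); the data are synthetic:

```python
import numpy as np

def bootstrap_se(observations, estimator, n_boot=2000, seed=0):
    """Bootstrap standard error of an arbitrary (derivative-free) estimator:
    resample with replacement, re-estimate, and report the replicate spread."""
    rng = np.random.default_rng(seed)
    obs = np.asarray(observations)
    reps = [estimator(rng.choice(obs, size=obs.size, replace=True))
            for _ in range(n_boot)]
    return np.std(reps, ddof=1)

rng = np.random.default_rng(1)
data = 10.0 + 0.5 * rng.standard_normal(100)
print(f"bootstrap SE of the mean: {bootstrap_se(data, np.mean):.3f}")
# for comparison, the analytic SE of the mean is sigma/sqrt(n) = 0.5/10 = 0.05
```

This is exactly why it pairs naturally with the derivative-free bee colony algorithm above: neither step requires differentiating the weight matrix.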
Funding: Project supported by the National Natural Science Foundation of China (Grant No. 51821005).
Abstract: An externally generated resonant magnetic perturbation can induce complex non-ideal MHD responses at its resonant surface. We studied the plasma response using Fitzpatrick's improved two-fluid model and the LAYER code, and calculated the error field penetration threshold for J-TEXT. We find that the island width increases slightly as the error field amplitude increases while the amplitude remains below the critical penetration value; once penetration occurs, however, the island width jumps to a large value because the plasma's shielding of the error field disappears. By scanning the natural mode frequency, we find that the plasma's shielding effect weakens as the natural mode frequency decreases. Finally, we obtain the scaling of the m/n = 2/1 penetration threshold with density and temperature.
Abstract: Timer error, and the convention used for it, is important for dose accuracy during irradiation. This paper determines the timer errors of the irradiators at the Secondary Standard Dosimetry Laboratory (SSDL) in Nigeria: the Cs-137 OB6 and X-ray irradiators at the protection-level SSDL, and the Co-60 irradiator at the therapy-level SSDL. A PTW UNIDOS electrometer and an LS01 ionization chamber were used at the protection level to obtain doses for the Cs-137 OB6 and X-ray irradiators, while an IBA Farmer-type ionization chamber and an IBA DOSE 1 electrometer were used at the therapy-level SSDL. The single/multiple-exposure method and the graphical method were used to determine the timer errors of the three irradiators. The timer error obtained for the Cs-137 OB6 irradiator was 0.48 ± 0.01 s, that for the X-ray irradiator was 0.09 ± 0.01 s, and that for the GammaBeam X200 was 1.21 ± 0.04 s. Neither source-to-detector distance nor field size was observed to contribute to the timer error. The timer error of the Co-60 GammaBeam X200 irradiator (the only one of the three with a pneumatic source-transfer system) increases with the age of the machine.
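The graphical method mentioned above is commonly implemented as a straight-line fit: with a constant dose rate R and timer error τ, the charge collected for a set time t is M = R(t + τ), so τ is the fitted intercept divided by the slope. A minimal sketch with synthetic readings (function name and data are hypothetical, not the paper's measurements):

```python
# Graphical method for timer error: collect electrometer charge M at several
# set irradiation times t. With constant dose rate R and timer error tau,
# M = R*(t + tau), so an ordinary least-squares line M = a*t + b gives
# tau = b / a.

def timer_error(times, charges):
    n = len(times)
    mean_t = sum(times) / n
    mean_m = sum(charges) / n
    slope = sum((t - mean_t) * (m - mean_m) for t, m in zip(times, charges)) \
            / sum((t - mean_t) ** 2 for t in times)
    intercept = mean_m - slope * mean_t
    return intercept / slope  # seconds

# Synthetic data with a true timer error of 0.5 s and rate 2 nC/s:
t = [10, 20, 30, 60]
m = [2.0 * (ti + 0.5) for ti in t]
print(round(timer_error(t, m), 3))  # 0.5
```

Note that because τ enters only through the intercept, the fitted value is independent of the dose rate, which is consistent with the paper's observation that source-to-detector distance does not affect the timer error.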
Abstract: AIM: To investigate the prevalence of visual impairment (VI) and estimate uncorrected refractive errors in school-aged children, in a survey conducted by optometry students as a community service. METHODS: The study was cross-sectional, with 3343 participants. The initial examination assessed uncorrected distance visual acuity (UDVA) and visual acuity (VA) through a +2.00 D lens. The inclusion criteria for a subsequent comprehensive cycloplegic eye examination, performed by an optometrist, were a UDVA < 0.6 decimal (0.20 logMAR) and/or a VA with +2.00 D ≥ 0.8 decimal (0.10 logMAR). RESULTS: The sample had a mean age of 10.92±2.13 years (range 4 to 17), and 51.3% of the children were female (n=1715). The majority (89.7%) were aged 8 to 14 years. Among ethnic groups, the highest representation was Luhya (60.6%), followed by Luo (20.4%). Mean logMAR UDVA, taking the better eye of each student, was 0.29±0.17 (range 1.70 to 0.22). In total, 246 participants (7.4%) had a full eye examination. The estimated prevalence of myopia (spherical equivalent ≤ -0.5 D) was 1.45% of the total sample, while about 0.18% had hyperopia exceeding +1.75 D. Refractive astigmatism (cylinder < -0.75 D) was found in 0.21% (7/3343) of the children. The VI prevalence was 1.26% of the total sample, and 76.2% of the VI cases could be attributed to uncorrected refractive error. Amblyopia was detected in 0.66% (22/3343) of the screened children. No statistically significant correlation was observed between age or gender and refractive values. CONCLUSION: The primary cause of VI is uncorrected refractive error, with myopia the most prevalent refractive error observed. These findings underscore the importance of early identification and correction of refractive errors in school-aged children to alleviate the impact of VI.
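The classification thresholds quoted in the abstract (myopia if spherical equivalent SE ≤ −0.50 D, hyperopia if SE > +1.75 D, astigmatism if cylinder < −0.75 D, with SE = sphere + cylinder/2) can be sketched as a small classifier. Function names and example values are illustrative only:

```python
# Refractive-error classification using the thresholds from the abstract.
# SE (spherical equivalent) = sphere + cylinder / 2, in dioptres (D).

def spherical_equivalent(sphere: float, cylinder: float) -> float:
    return sphere + cylinder / 2.0

def classify(sphere: float, cylinder: float) -> list:
    se = spherical_equivalent(sphere, cylinder)
    labels = []
    if se <= -0.50:
        labels.append("myopia")
    if se > 1.75:
        labels.append("hyperopia")
    if cylinder < -0.75:
        labels.append("astigmatism")
    return labels or ["none"]

print(classify(-1.00, -1.00))  # ['myopia', 'astigmatism']
```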
Abstract: The widespread adoption of the Internet of Things (IoT) has transformed various sectors globally, making them more intelligent and connected. However, this advancement comes with challenges related to the effectiveness of IoT devices. These devices, present in offices, homes, industries, and more, need constant monitoring to ensure their proper functionality. The success of smart systems relies on their seamless operation and ability to handle faults. Sensors, crucial components of these systems, gather data and contribute to their functionality; sensor faults can therefore compromise system reliability and undermine the trustworthiness of smart environments. To address these concerns, various techniques and algorithms can be employed to enhance the performance of IoT devices through effective fault detection. This paper conducts a thorough review of the existing literature and a detailed analysis that links sensor errors with the prominent fault detection techniques capable of addressing them. The study is innovative in that it paves the way for future researchers to explore errors that have not yet been tackled by existing fault detection methods. Significantly, the paper also highlights essential factors for selecting and adopting fault detection techniques, as well as the characteristics of datasets and their corresponding recommended techniques. Additionally, it presents a methodical overview of fault detection techniques employed in smart devices, including the metrics used for evaluation, and examines the body of academic work on sensor faults and fault detection within the domain, reflecting the growing scholarly attention toward fault detection strategies in the Internet of Things.
Abstract: Let M_n denote the maximum of the logarithmic general error distribution with parameter v ≥ 1. Higher-order expansions for the distributions of the powered extremes M_n^p are derived under an optimal choice of normalizing constants. It is shown that, for v = 1, M_n^p converges to the Fréchet extreme value distribution at the rate of 1/n, while for v > 1, M_n^p converges to the Gumbel extreme value distribution at the rate of (log log n)^2/(log n)^(1-1/v).
Funding: Project supported by the Natural Science Foundation of Shandong Province, China (Grant No. ZR2021MF049) and the Joint Fund of the Natural Science Foundation of Shandong Province (Grant Nos. ZR2022LLZ012 and ZR2021LLZ001).
Abstract: Readout errors caused by measurement noise are a significant source of error in quantum circuits; they severely distort output results and are an urgent problem in noisy intermediate-scale quantum (NISQ) computing. In this paper, we use the bit-flip averaging (BFA) method to mitigate readout errors in quantum generative adversarial networks (QGANs) for image generation. By averaging over random bit-flips applied to each qubit in advance, BFA simplifies the response-matrix structure and avoids the high measurement cost of traditional error-mitigation methods. Our experiments were simulated in Qiskit on the handwritten-digit image recognition dataset. With the BFA-based method, the Kullback-Leibler (KL) divergence of the generated images converges to 0.04, 0.05, and 0.1 for readout error probabilities of p=0.01, p=0.05, and p=0.1, respectively. Additionally, evaluating the fidelity of the quantum states representing the images, we observe average fidelities of 0.97, 0.96, and 0.95 for the three readout error probabilities. These results demonstrate the robustness of the model in mitigating readout errors and provide a highly fault-tolerant mechanism for image generation models.
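The KL divergence used above to score generated images can be computed directly from two normalized histograms. This sketch uses tiny hypothetical distributions, not the QGAN outputs:

```python
# KL divergence D_KL(P || Q) = sum_i p_i * log(p_i / q_i) between a target
# distribution and a generated one; a small epsilon guards empty bins.
import math

def kl_divergence(p, q, eps=1e-12):
    return sum(pi * math.log((pi + eps) / (qi + eps)) for pi, qi in zip(p, q))

# Hypothetical 4-bin histograms (must each sum to 1):
target    = [0.25, 0.25, 0.25, 0.25]
generated = [0.30, 0.20, 0.30, 0.20]
print(round(kl_divergence(target, generated), 4))
```

A value of 0 means the generated distribution matches the target exactly; the convergence values quoted in the abstract (0.04 to 0.1) indicate a close but imperfect match that degrades as the readout error probability grows.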
Funding: Supported in part by the National Natural Science Foundation of China (Nos. 62071441 and 61701464) and in part by the Fundamental Research Funds for the Central Universities (No. 202151006).
Abstract: This study explores the application of single-photon detection (SPD) technology in underwater wireless optical communication (UWOC) and analyzes the influence of different modulation modes and error-correction coding types on communication performance. It investigates the impact of on-off keying (OOK) and 2-pulse-position modulation (2-PPM) on the bit error rate (BER) in single-channel intensity and polarization multiplexing, and compares the error-correction performance of low-density parity-check (LDPC) and Reed-Solomon (RS) codes. The effects of the unscattered-photon ratio and the depolarization ratio on BER are also verified. Finally, a UWOC system based on SPD is constructed, achieving 14.58 Mbps with polarization OOK multiplexing modulation and 4.37 Mbps with polarization 2-PPM multiplexing modulation using LDPC error correction.
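BER, the figure of merit compared above for OOK and 2-PPM, is simply the fraction of received bits that differ from the transmitted ones. A minimal sketch with hypothetical bit streams (not the SPD measurements themselves):

```python
# Bit error rate: count positions where the received bit differs from the
# transmitted bit, divided by the total number of bits.

def bit_error_rate(tx, rx):
    assert len(tx) == len(rx)
    errors = sum(a != b for a, b in zip(tx, rx))
    return errors / len(tx)

# Hypothetical transmitted/received streams with 2 flipped bits out of 8:
tx = [1, 0, 1, 1, 0, 0, 1, 0]
rx = [1, 0, 0, 1, 0, 1, 1, 0]
print(bit_error_rate(tx, rx))  # 0.25
```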
Funding: Supported by the National Natural Science Foundation of China (61601147) and the Beijing Natural Science Foundation (L182032).
Abstract: In this paper, an efficient unequal error protection (UEP) scheme for online fountain codes is proposed. In the build-up phase, a traversing-selection strategy is proposed to select the most important symbols (MIS). Then, in the completion phase, a weighted-selection strategy is applied to achieve low overhead. The performance of the proposed scheme is analyzed and compared with the existing UEP online fountain scheme. Simulation results show that, for the MIS and the least important symbols (LIS), when the bit error ratio is 10^(-4), the proposed scheme achieves 85% and 31.58% overhead reduction, respectively.
Funding: This work is funded by the National Natural Science Foundation of China (Grant Nos. 42377164 and 52079062) and the National Science Fund for Distinguished Young Scholars of China (Grant No. 52222905).
Abstract: Existing landslide susceptibility prediction (LSP) models do not consider the influence of random errors in landslide conditioning factors; instead, the original conditioning factors are taken directly as model inputs, which brings uncertainty to LSP results. This study aims to reveal how different proportions of random error in the conditioning factors affect LSP uncertainty, and to explore a method that can effectively reduce these random errors. The original conditioning factors are first used to construct original-factors-based LSP models, and random errors of 5%, 10%, 15%, and 20% are then added to these factors to construct corresponding errors-based LSP models. Second, low-pass-filter-based LSP models are constructed by removing the random errors with a low-pass filter. Third, Ruijin County, China, with 370 landslides and 16 conditioning factors, is used as the study case, and three typical machine learning models, multilayer perceptron (MLP), support vector machine (SVM), and random forest (RF), are selected as LSP models. Finally, the LSP uncertainties are discussed, and the results show that: (1) the low-pass filter can effectively reduce the random errors in the conditioning factors and thereby decrease LSP uncertainty; (2) as the proportion of random error increases from 5% to 20%, LSP uncertainty increases continuously; (3) the original-factors-based models are feasible for LSP in the absence of more accurate conditioning factors; (4) the two sources of uncertainty, the choice of machine learning model and the proportion of random error, influence LSP modeling to a large and roughly equal degree; and (5) Shapley values effectively explain the internal mechanism by which the machine learning models predict landslide susceptibility. In conclusion, a greater proportion of random error in the conditioning factors results in higher LSP uncertainty, and a low-pass filter can effectively reduce these random errors.
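The core idea, suppressing proportional random error in a conditioning factor with a low-pass filter, can be sketched with a simple centered moving average. The data and the 10% noise level here are illustrative, not the study's factors or its particular filter:

```python
# Add proportional random noise to a smooth "conditioning factor", then
# suppress it with a centered moving-average low-pass filter and compare
# mean squared errors against the noise-free values.
import random

def low_pass(values, window=5):
    # Centered moving average; edges average over the available neighbours.
    half = window // 2
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

random.seed(0)
factor = [float(i) for i in range(100)]                        # smooth factor
noisy = [v * (1 + random.uniform(-0.1, 0.1)) for v in factor]  # 10% random error

mse_noisy    = sum((a - b) ** 2 for a, b in zip(noisy, factor)) / len(factor)
mse_filtered = sum((a - b) ** 2 for a, b in zip(low_pass(noisy), factor)) / len(factor)
print(mse_filtered < mse_noisy)  # averaging suppresses the random component
```

Averaging over a window reduces the variance of independent noise roughly by the window size, which mirrors the study's finding that filtered factors yield lower LSP uncertainty than the raw noisy ones.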
Funding: Supported in part by the National Natural Science Foundation of China (61933007, U21A2019, 62273005, 62273088, 62303301), the Program of Shanghai Academic/Technology Research Leader of China (20XD1420100), the Hainan Province Science and Technology Special Fund of China (ZDYF2022SHFZ105), the Natural Science Foundation of Anhui Province of China (2108085MA07), and the Alexander von Humboldt Foundation of Germany.
Abstract: This paper investigates the anomaly-resistant decentralized state estimation (SE) problem for a class of wide-area power systems that are divided into several non-overlapping areas connected through transmission lines. Two classes of measurements (local measurements and edge measurements) are obtained from the individual areas and the transmission lines, respectively. A decentralized state estimator whose performance is resistant to measurements with anomalies is designed based on the minimum error entropy with fiducial points (MEEF) criterion. Specifically: 1) an augmented model incorporating the local prediction and the local measurement is developed using the unscented transformation and statistical linearization approaches; 2) using the augmented model, an MEEF-based cost function is designed that reflects the local prediction errors of the state and the measurement; and 3) the local estimate is first obtained by minimizing the MEEF-based cost function through a fixed-point iteration and then updated using the edge measurement information. Finally, simulation experiments with three scenarios are carried out on the IEEE 14-bus system to illustrate the validity of the proposed anomaly-resistant decentralized SE scheme.
Abstract: Assessing the measurement-error state of online capacitor voltage transformers (CVTs) is of profound significance to the equitable trading of electric energy and the secure operation of the power grid. This paper advances an online CVT error-state evaluation method anchored in the in-phase relationship and outlier detection. First, the method leverages the in-phase relationship to remove the influence of primary-side fluctuations in the grid on assessment accuracy. Next, principal component analysis (PCA) is employed to separate the CVT's error-change information from the measured values and to compute statistics that characterize the error state. Finally, the local outlier factor (LOF) is deployed to detect outliers in these statistics, with thresholds used to appraise the CVT error state. Experimental results demonstrate the efficacy of the method: it tracks CVT error changes online and assesses the error state with clear gains in reliability, accuracy, and sensitivity, the assessment accuracy reaching 0.01%.
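The in-phase idea above can be sketched compactly: the ratio of two same-phase CVT readings cancels primary-side fluctuations, so an outlier in that ratio signals an error-state change. This sketch substitutes a robust (median/MAD) z-score for the paper's PCA + LOF pipeline, and all readings are hypothetical:

```python
# Ratio of two in-phase CVT secondary readings cancels primary-side
# fluctuations; a sample whose ratio jumps is flagged as an outlier using a
# robust z-score (0.6745 * |r - median| / MAD, common cutoff 3.5).
import statistics

def flag_ratio_outliers(readings_a, readings_b, z_thresh=3.5):
    ratios = [a / b for a, b in zip(readings_a, readings_b)]
    med = statistics.median(ratios)
    mad = statistics.median(abs(r - med) for r in ratios)
    return [abs(0.6745 * (r - med) / mad) > z_thresh for r in ratios]

# Hypothetical in-phase secondary voltages; the seventh sample's ratio jumps.
a = [100.0, 100.1, 99.9, 100.0, 100.2, 99.8, 103.0, 100.1]
b = [100.0, 100.1, 99.9, 100.1, 100.2, 99.9, 100.0, 100.0]
print(flag_ratio_outliers(a, b))
```

The median/MAD statistic is used here instead of LOF purely to keep the sketch dependency-free; both serve the same role of flagging samples whose error statistic departs from the bulk.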
Abstract: In this paper, we propose a nonconforming virtual element method (NCVEM) discretization for the pointwise-control-constrained optimal control problem governed by elliptic equations. Based on the NCVEM approximation of the state equation and the variational discretization of the control variable, we construct a virtual element discrete scheme. For the state, adjoint state, and control variable, we obtain a priori error estimates in the H<sup>1</sup> and L<sup>2</sup> norms. Finally, some numerical experiments are carried out to support the theoretical results.
Abstract: Chinese non-English majors form a large group of English learners. In the process of English pronunciation acquisition, issues such as incomplete phonological knowledge, mother-tongue transfer, and overgeneralization lead to confusion of phonemes and stress, misunderstanding of syllable structure, and errors of assimilation, deletion, and epenthesis. The accuracy of English pronunciation can only be improved by understanding both the English and Chinese phonological systems, strengthening the teaching of English phonological knowledge, and adopting varied phonological training activities.
Funding: Supported by the Lions Clubs International Foundation (No. SF1757/UND).
Abstract: AIM: To determine the prevalence of refractive error in 5- to 17-year-old schoolchildren in Puerto Rico. METHODS: A quantitative descriptive study of 2867 children aged 5 to 17 years from all seven educational regions of Puerto Rico was conducted from 2016 to 2019. Refractive error was determined via static and subjective refraction. Children with distance acuity ≤ 20/40 or near visual acuity ≤ 20/32 had a cycloplegic refraction. Data analysis included descriptive statistics, correlation coefficients, Kruskal-Wallis, Chi-square, and t test calculations. RESULTS: Twenty percent of the children had a spherical equivalent refractive error ≤ -0.50 D, 3.2% had a spherical equivalent ≥ +2.00 D, and 10.4% had astigmatism ≥ 1 D. There was a statistically (but not clinically) significant myopic shift in spherical equivalent refractive error with age (P<0.001). The prevalence of myopia increased with age (P<0.001), but not that of hyperopia (P=0.59) or astigmatism (P=0.51). Males had a significantly more hyperopic spherical equivalent than females (P<0.001). Females had a higher prevalence of myopia (P<0.001) than males, but there was no difference in the prevalence of hyperopia (P=0.74) or astigmatism (P=0.87). CONCLUSION: The prevalence of a spherical equivalent of -0.50 D or less (myopia, 20.7%) is one of the highest among similar-aged children worldwide. Further studies should explore the rate of myopia progression in children in Puerto Rico, and individual children must be monitored to determine the need for treatment of myopia progression.
Funding: Supported by the National Natural Science Foundation of China (Nos. 42174011 and 41874001).
Abstract: To avoid the complex weight-matrix derivatives that arise when the weighted least squares method is used to estimate the parameters of the mixed additive and multiplicative random error model (MAM error model), we use a derivative-free improved artificial bee colony algorithm together with the bootstrap method to estimate the parameters of the MAM error model and evaluate their accuracy. The improved artificial bee colony algorithm updates individuals in multiple dimensions and improves cooperation between individuals by constructing a new search equation based on the idea of quasi-affine transformation. The experimental results show that, under the weighted least squares criterion, the algorithm obtains results consistent with the weighted least squares method without repeated formula derivation. The bootstrap-based parameter estimation and accuracy evaluation yield better parameter estimates and more reasonable accuracy information than existing methods, providing a new approach to parameter estimation and accuracy evaluation for the MAM error model.
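Bootstrap accuracy evaluation as described above can be sketched generically: resample the observations with replacement, re-estimate the parameter each time, and report the spread of the bootstrap estimates as its accuracy. The sample-mean estimator and data below are stand-ins, not the paper's MAM-model estimator:

```python
# Generic bootstrap standard error: resample the data with replacement
# n_boot times, apply the estimator to each resample, and take the standard
# deviation of the resulting estimates.
import random
import statistics

def bootstrap_std(data, estimator, n_boot=2000, seed=42):
    rng = random.Random(seed)
    estimates = [estimator([rng.choice(data) for _ in data])
                 for _ in range(n_boot)]
    return statistics.stdev(estimates)

# Hypothetical observations; the estimator here is just the sample mean.
data = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3, 10.0, 9.7, 10.4, 10.1]
print(round(bootstrap_std(data, statistics.mean), 3))
```

The appeal of this scheme, as in the paper, is that it needs only repeated evaluation of the estimator, never its derivatives, so it pairs naturally with derivative-free optimizers such as the artificial bee colony algorithm.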