Abstract: Finite-control-set model predictive control (FCS-MPC) offers fast dynamic response and requires no modulation stage, and it has been widely adopted in high-performance power converters. However, the technique depends heavily on modeling accuracy; in practice, model mismatch and parameter perturbations keep it from operating at optimal performance. To address this, a parameter-free predictive control method based on recursive least squares (RLS) estimation is proposed. Data-driven modeling replaces physical-parameter modeling: an equivalent model of the three-phase inverter is first established using the autoregressive-with-exogenous-input (ARX) technique, and the parameters of this equivalent model are estimated online with the RLS algorithm. Finally, the proposed method is validated and analyzed on a 22 kW test platform. The results show that the method is strongly robust to model and parameter variations, making it a viable general-purpose robust predictive control scheme.
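The estimation step the abstract describes reduces to the standard RLS recursion over an ARX regressor. The following is a minimal sketch of that recursion, not the paper's implementation; the first-order regressor layout, forgetting factor, and initial covariance are illustrative assumptions.

```python
import numpy as np

class RLSEstimator:
    """Recursive least squares for a linear-in-parameters model
    y(k) = theta^T phi(k), as used to identify an ARX model online.
    Hyperparameters here are illustrative, not the paper's values."""

    def __init__(self, n_params, lam=0.98, p0=1e3):
        self.theta = np.zeros(n_params)   # parameter estimate
        self.P = p0 * np.eye(n_params)    # inverse-correlation matrix
        self.lam = lam                    # forgetting factor (< 1 tracks drift)

    def update(self, phi, y):
        phi = np.asarray(phi, dtype=float)
        P_phi = self.P @ phi
        k = P_phi / (self.lam + phi @ P_phi)               # gain vector
        e = y - phi @ self.theta                           # a-priori prediction error
        self.theta = self.theta + k * e                    # parameter update
        self.P = (self.P - np.outer(k, P_phi)) / self.lam  # covariance update
        return self.theta

# Hypothetical first-order ARX regressor: y(k) = a*y(k-1) + b*u(k-1)
est = RLSEstimator(n_params=2)
y_prev, u_prev = 0.0, 1.0
y_now = 0.9 * y_prev + 0.5 * u_prev   # one synthetic plant sample
theta_hat = est.update([y_prev, u_prev], y_now)
```

A forgetting factor below one discounts old samples so the estimate can follow parameter perturbations, which is what lets the equivalent model stay matched to the drifting plant.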
Abstract: Sign language fills the communication gap for people with hearing and speech impairments. It includes both visual modalities: manual gestures consisting of hand movements, and non-manual gestures incorporating body movements of the head, facial expressions, eyes, shoulder shrugging, etc. Previous work has detected the two gesture types separately; recognizing them in isolation may yield better accuracy, but much communicational information is lost. A proper sign language mechanism is needed to detect manual and non-manual gestures together and convey the appropriate detailed message to others. Our proposed system contributes the Sign Language Action Transformer Network (SLATN), which localizes hand, body, and facial gestures in video sequences. We employ a Transformer-style structural design as a "base network" to extract features from the spatiotemporal domain. The model automatically learns to track individual persons and their action context across multiple frames. Furthermore, a "head network" emphasizes hand movement and facial expression simultaneously, which is often crucial to understanding sign language, using its attention mechanism to create tight bounding boxes around classified gestures. The model is compared with traditional activity-recognition methods: it not only runs faster but also achieves better accuracy. It achieves an overall testing accuracy of 82.66% with a very considerable computational performance of 94.13 Giga Floating-Point Operations per Second (G-FLOPS). Another contribution is a newly created dataset of Pakistan Sign Language Manual and Non-Manual (PkSLMNM) gestures.
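To make the base-network/head-network split concrete, here is a minimal PyTorch sketch of the general idea: a Transformer encoder over flattened spatiotemporal patch tokens, followed by an attention-pooling head that emits a gesture class and a bounding box. The layer sizes, class count, patch layout, and head outputs are illustrative assumptions, not the published SLATN configuration.

```python
import torch
import torch.nn as nn

class ActionTransformerSketch(nn.Module):
    """Sketch of a base/head action-transformer split. All dimensions
    and the gesture-class count are assumptions for illustration."""

    def __init__(self, dim=256, n_classes=40):
        super().__init__()
        self.embed = nn.Linear(3 * 16 * 16, dim)   # 16x16 RGB patch -> token
        enc_layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8,
                                               batch_first=True)
        self.base = nn.TransformerEncoder(enc_layer, num_layers=4)
        # Head network: learned query attends over all spatiotemporal tokens
        self.query = nn.Parameter(torch.randn(1, 1, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads=8, batch_first=True)
        self.cls_head = nn.Linear(dim, n_classes)  # gesture class logits
        self.box_head = nn.Linear(dim, 4)          # box (cx, cy, w, h) in [0, 1]

    def forward(self, patches):
        # patches: (batch, frames * patches_per_frame, 3*16*16)
        tokens = self.base(self.embed(patches))
        q = self.query.expand(tokens.size(0), -1, -1)
        pooled, _ = self.attn(q, tokens, tokens)   # attention pooling
        pooled = pooled.squeeze(1)
        return self.cls_head(pooled), self.box_head(pooled).sigmoid()

# Example forward pass: 16-frame clip, 7x7 patch grid per frame
x = torch.randn(2, 16 * 49, 3 * 16 * 16)
logits, boxes = ActionTransformerSketch()(x)
```

The single learned query here stands in for the head network's attention over hand and face regions; the real model would dedicate queries (and supervision) to those regions to produce the tight per-gesture boxes the abstract describes.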