Abstract: This article discusses the manifestation of the Arab Spring in Arab literature and sheds light on what has become known as "Video Clips Poems" or "Flash Poems" on YouTube. We attempt to answer a number of questions concerning Arab Spring literature in general and these video clips in particular: What role did this literature play in the political transformations witnessed by the Arab world in recent years? What means did it use to inflame the masses? What features distinguish the Arab Spring video clips as a new genre distinct from printed genres? Will the Arab Spring continue to be a literary theme after the end of these uprisings? As will be shown in this article, these clips possess rich and dense semantic content and meaning.
Abstract: Video-text retrieval, a multimodal retrieval technique widely used in everyday applications, has attracted growing attention from researchers. Recently, most video-text retrieval work has improved cross-modal retrieval between text and video by exploiting the vision-language alignment learned by large-scale pre-trained models. However, these methods overlook the fact that both videos and texts are composed of individual events. Capturing fine-grained similarity between video events and text events would help the model compute more accurate semantic similarity between text and video, and thus improve cross-modal retrieval. We therefore propose a CLIP-based multi-event representation generation method for video-text retrieval (CLIPMERG). First, the video encoder (ViT) and text encoder (Transformer) of the large-scale image-text pre-trained model CLIP convert the video and the text into a sequence of frame tokens and a sequence of word tokens, respectively. Next, a video event generator (and, analogously, a text event generator) transforms the frame token sequence (word token sequence) into k video event representations (k text event representations). Finally, fine-grained relations between the video and text event representations are mined to define the semantic similarity between video and text. Experiments on three widely used public video-text retrieval datasets, MSR-VTT, DiDeMo, and LSMDC, show that CLIPMERG outperforms existing video-text retrieval methods.
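The following PyTorch sketch illustrates the kind of pipeline described above: token sequences from CLIP's video and text encoders are compressed into k event representations, and fine-grained event-to-event similarities are aggregated into a video-text score. It is a minimal illustration, not the authors' implementation; the learnable-query attention design, the layer sizes, and the max-then-mean aggregation are assumptions made for demonstration.

```python
# Illustrative sketch (not the authors' code): a minimal multi-event
# representation generator and a fine-grained video-text similarity,
# following the pipeline described in the abstract above.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EventGenerator(nn.Module):
    """Maps a token sequence (frame or word tokens) to k event representations."""
    def __init__(self, dim: int = 512, k: int = 4, heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(k, dim))   # k learnable event queries
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (B, L, dim), e.g. frame tokens from CLIP's ViT or
        # word tokens from CLIP's text Transformer.
        B = tokens.size(0)
        q = self.queries.unsqueeze(0).expand(B, -1, -1)     # (B, k, dim)
        events, _ = self.attn(q, tokens, tokens)            # cross-attend to tokens
        return F.normalize(events, dim=-1)                  # (B, k, dim)

def fine_grained_similarity(v_events: torch.Tensor,
                            t_events: torch.Tensor) -> torch.Tensor:
    """Video-text similarity from pairwise event-level similarities.

    v_events: (Bv, k, dim), t_events: (Bt, k, dim); returns (Bv, Bt).
    """
    # All pairwise event similarities: (Bv, Bt, k_v, k_t).
    sim = torch.einsum('vkd,tqd->vtkq', v_events, t_events)
    # For each video event take its best-matching text event, then average
    # (one plausible fine-grained aggregation; the paper's may differ).
    return sim.max(dim=-1).values.mean(dim=-1)

if __name__ == "__main__":
    B, L_v, L_t, dim, k = 2, 12, 20, 512, 4
    frame_tokens = torch.randn(B, L_v, dim)   # stand-in for CLIP ViT outputs
    word_tokens = torch.randn(B, L_t, dim)    # stand-in for CLIP text outputs
    video_gen, text_gen = EventGenerator(dim, k), EventGenerator(dim, k)
    scores = fine_grained_similarity(video_gen(frame_tokens), text_gen(word_tokens))
    print(scores.shape)  # torch.Size([2, 2])
```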
Abstract: Computer vision (CV) and natural language processing (NLP) have gradually matured, and multimodal techniques that combine vision and language are becoming a research focus in both academia and industry. This paper uses the CLIP pre-trained model to combine image and language modalities, extends images to video, processes the video with FFmpeg, and performs embedding and cosine-similarity matching between video and text, thereby retrieving the video segments that match the semantics of a plain-text query.
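A minimal end-to-end sketch of such a pipeline is shown below, under the assumptions that frames are sampled at one frame per second with FFmpeg and that the Hugging Face "openai/clip-vit-base-patch32" checkpoint is used; the file and directory names are placeholders. It samples frames, embeds the frames and the text query with CLIP, and ranks frame timestamps by cosine similarity to locate matching segments.

```python
# Illustrative sketch of text-to-video-segment retrieval with FFmpeg + CLIP.
# Assumptions: fixed-rate frame sampling, Hugging Face CLIP checkpoint,
# placeholder file names; not a specific paper's released code.
import os, glob, subprocess
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

def extract_frames(video_path: str, out_dir: str, fps: int = 1) -> list[str]:
    """Sample frames from the video with FFmpeg at a fixed rate."""
    os.makedirs(out_dir, exist_ok=True)
    subprocess.run(
        ["ffmpeg", "-y", "-i", video_path, "-vf", f"fps={fps}", f"{out_dir}/%05d.jpg"],
        check=True,
    )
    return sorted(glob.glob(f"{out_dir}/*.jpg"))

def retrieve_segments(video_path: str, query: str, fps: int = 1, top_k: int = 3):
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    frame_paths = extract_frames(video_path, "frames", fps)
    images = [Image.open(p) for p in frame_paths]

    with torch.no_grad():
        img_inputs = processor(images=images, return_tensors="pt")
        txt_inputs = processor(text=[query], return_tensors="pt", padding=True)
        img_emb = model.get_image_features(**img_inputs)
        txt_emb = model.get_text_features(**txt_inputs)

    # Cosine similarity between the query and every sampled frame.
    img_emb = img_emb / img_emb.norm(dim=-1, keepdim=True)
    txt_emb = txt_emb / txt_emb.norm(dim=-1, keepdim=True)
    scores = (img_emb @ txt_emb.T).squeeze(-1)               # (num_frames,)

    # Frame index i corresponds to timestamp i / fps seconds.
    best = scores.topk(min(top_k, scores.numel())).indices.tolist()
    return [(i / fps, scores[i].item()) for i in best]

if __name__ == "__main__":
    print(retrieve_segments("example.mp4", "a dog catching a frisbee"))
```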
Abstract: Video-text retrieval (VTR) is an essential task in multimodal learning, aiming to bridge the semantic gap between visual and textual data. Effective video frame sampling plays a crucial role in improving retrieval performance, as it determines the quality of the visual content representation. Traditional sampling methods, such as uniform sampling and optical flow-based techniques, often fail to capture the full semantic range of videos, leading to redundancy and inefficiency. In this work, we propose CLIP4Video-Sampling, a global semantics-guided multi-granularity frame sampling strategy for video-text retrieval, designed to optimize both computational efficiency and retrieval accuracy. By integrating multi-scale global and local temporal sampling and leveraging the powerful feature extraction capabilities of the CLIP (Contrastive Language-Image Pre-training) model, our method significantly outperforms existing approaches on both zero-shot and fine-tuned video-text retrieval tasks on popular datasets. CLIP4Video-Sampling reduces redundancy, ensures keyframe coverage, and serves as an adaptable pre-processing module for multimodal models.
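As one plausible reading of a global semantics-guided sampler, the sketch below scores precomputed CLIP frame embeddings against a global video representation (here simply the mean frame embedding) and greedily selects frames that are relevant to it while penalizing redundancy with frames already chosen. This is an assumption-laden illustration, not the CLIP4Video-Sampling implementation, and it omits the multi-scale global/local sampling stages described in the abstract.

```python
# Minimal sketch of global-semantics-guided frame selection (assumptions:
# precomputed CLIP frame features, mean embedding as the global semantics,
# greedy relevance-minus-redundancy selection).
import torch
import torch.nn.functional as F

def sample_frames(frame_feats: torch.Tensor, num_keep: int,
                  redundancy_weight: float = 0.5) -> list[int]:
    """Select `num_keep` frame indices from (T, D) CLIP frame features."""
    feats = F.normalize(frame_feats, dim=-1)                 # (T, D)
    global_sem = F.normalize(feats.mean(dim=0), dim=-1)      # global video semantics
    relevance = feats @ global_sem                           # (T,) per-frame relevance

    selected: list[int] = []
    for _ in range(min(num_keep, feats.size(0))):
        if selected:
            # Redundancy: max similarity to any already-selected frame.
            redundancy = (feats @ feats[selected].T).max(dim=-1).values
        else:
            redundancy = torch.zeros_like(relevance)
        score = relevance - redundancy_weight * redundancy
        score[selected] = float("-inf")                      # never pick a frame twice
        selected.append(int(score.argmax()))
    return sorted(selected)

if __name__ == "__main__":
    feats = torch.randn(64, 512)          # stand-in for CLIP features of 64 frames
    print(sample_frames(feats, num_keep=8))
```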