Journal Articles
26 articles found
An Enhanced GAN for Image Generation
1
Authors: Chunwei Tian, Haoyang Gao, Pengwei Wang, Bob Zhang. Computers, Materials & Continua (SCIE, EI), 2024, Issue 7, pp. 105-118 (14 pages)
Generative adversarial networks (GANs) with gaming abilities have been widely applied in image generation. However, gamistic generators and discriminators may reduce the robustness of the obtained GANs in image generation under varying scenes. Enhancing the relation of hierarchical information in a generation network and enlarging differences of different network architectures can facilitate more structural information to improve the generation effect for image generation. In this paper, we propose an enhanced GAN via improving a generator for image generation (EIGGAN). EIGGAN applies spatial attention to a generator to extract salient information to enhance the truthfulness of the generated images. Taking the relation of context into account, parallel residual operations are fused into a generation network to extract more structural information from the different layers. Finally, a mixed loss function in a GAN is exploited to make a tradeoff between speed and accuracy to generate more realistic images. Experimental results show that the proposed method is superior to popular methods, i.e., Wasserstein GAN with gradient penalty (WGAN-GP), in terms of many indexes, i.e., Fréchet Inception Distance, Learned Perceptual Image Patch Similarity, Multi-Scale Structural Similarity Index Measure, Kernel Inception Distance, Number of Statistically-Different Bins, and Inception Score, as well as some visual images for image generation.
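The abstract does not specify EIGGAN's exact attention formulation; as a rough, hypothetical sketch of what a spatial-attention gate over a feature map does (channel-pooled descriptors standing in for a learned convolution, and `w_avg`/`w_max` being illustrative weights, not the paper's parameters):

```python
import numpy as np

def spatial_attention(feat, w_avg=0.5, w_max=0.5):
    """Gate a feature map of shape (C, H, W) by a per-pixel attention mask
    built from channel-wise average and max descriptors."""
    avg_desc = feat.mean(axis=0)                   # (H, W) average descriptor
    max_desc = feat.max(axis=0)                    # (H, W) max descriptor
    logits = w_avg * avg_desc + w_max * max_desc   # stand-in for a learned conv
    mask = 1.0 / (1.0 + np.exp(-logits))           # sigmoid -> values in (0, 1)
    return feat * mask                             # broadcast mask over channels

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 4, 4))
out = spatial_attention(feat)
print(out.shape)  # (8, 4, 4)
```

Because the mask lies in (0, 1), the gate can only attenuate features, emphasizing spatially salient regions relative to the rest.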
Keywords: generative adversarial networks; spatial attention; mixed loss; image generation
Controllable image generation based on causal representation learning (Cited by 1)
2
Authors: Shanshan HUANG, Yuanhao WANG, Zhili GONG, Jun LIAO, Shu WANG, Li LIU. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2024, Issue 1, pp. 135-148 (14 pages)
Artificial intelligence generated content (AIGC) has emerged as an indispensable tool for producing large-scale content in various forms, such as images, thanks to the significant role that AI plays in imitation and production. However, interpretability and controllability remain challenges. Existing AI methods often face challenges in producing images that are both flexible and controllable while considering causal relationships within the images. To address this issue, we have developed a novel method for causal controllable image generation (CCIG) that combines causal representation learning with bi-directional generative adversarial networks (GANs). This approach enables humans to control image attributes while considering the rationality and interpretability of the generated images, and also allows for the generation of counterfactual images. The key of our approach, CCIG, lies in the use of a causal structure learning module to learn the causal relationships between image attributes, and joint optimization with the encoder, generator, and joint discriminator in the image generation module. By doing so, we can learn causal representations in the image's latent space and use causal intervention operations to control image generation. We conduct extensive experiments on a real-world dataset, CelebA. The experimental results illustrate the effectiveness of CCIG.
Keywords: image generation; controllable image editing; causal structure learning; causal representation learning
Autoencoder-based conditional optimal transport generative adversarial network for medical image generation
3
Authors: Jun Wang, Bohan Lei, Liya Ding, Xiaoyin Xu, Xianfeng Gu, Min Zhang. Visual Informatics (EI), 2024, Issue 1, pp. 15-25 (11 pages)
Medical image generation has recently garnered significant interest among researchers. However, the primary generative models, such as Generative Adversarial Networks (GANs), often encounter challenges during training, including mode collapse. To address these issues, we proposed the AE-COT-GAN model (Autoencoder-based Conditional Optimal Transport Generative Adversarial Network) for the generation of medical images belonging to specific categories. The training process of our model comprises three fundamental components. First, we employ an autoencoder model to obtain a low-dimensional manifold representation of real images. Second, we apply extended semi-discrete optimal transport to map the Gaussian noise distribution to the latent space distribution and obtain corresponding labels effectively. This procedure leads to the generation of new latent codes with known labels. Finally, we integrate a GAN to train the decoder further to generate medical images. To evaluate the performance of the AE-COT-GAN model, we conducted experiments on two medical image datasets, namely DermaMNIST and BloodMNIST. The model's performance was compared with state-of-the-art generative models. Results show that the AE-COT-GAN model had excellent performance in generating medical images. Moreover, it effectively addressed the common issues associated with traditional GANs.
Keywords: medical image generation; mode collapse; mode mixing; optimal transport; generative adversarial networks
On generated artistic styles: Image generation experiments with GAN algorithms (Cited by 1)
4
Author: Jianheng Xiang. Visual Informatics (EI), 2023, Issue 4, pp. 36-40 (5 pages)
As computer graphics technology supports pursuing a photorealistic style, replicated artworks with a photorealistic style overwhelmingly predominate in the computer-generated art circle. Along with the progression of generative technology, this trend may turn generative art into a virtual world of photorealistic fakes, in which a single criterion of expressive style reduces art to a single boring stereotype. This article focuses on the issue of style diversity and its technical feasibility through artistic experiments generating flower images in StyleGAN. The author insists that neither the technology nor the artistic style should be confined merely to realistic purposes. This proposition was validated in the GAN generation experiment by changing the training materials.
Keywords: CG art; virtual realistic; image generation; deep learning; machine replication
Data augmentation via joint multi-scale CNN and multi-channel attention for bumblebee image generation (Cited by 1)
5
Authors: Du Rong, Chen Shudong, Li Weiwei, Zhang Xueting, Wang Xianhui, Ge Jin. The Journal of China Universities of Posts and Telecommunications (EI, CSCD), 2023, Issue 3, pp. 32-40, 98 (10 pages)
The difficulty of bumblebee data collection and the laborious nature of bumblebee data annotation sometimes result in a lack of training data, which impairs the effectiveness of deep learning based counting methods. Given that it is challenging to produce detailed background information in the generated bumblebee images using current data augmentation methods, in this paper a joint multi-scale convolutional neural network and multi-channel attention based generative adversarial network (MMGAN) is proposed. MMGAN generates the bumblebee image in accordance with the corresponding density map marking the bumblebee positions. Specifically, the multi-scale convolutional neural network (CNN) module utilizes multiple convolution kernels to fully extract features of different scales from the input bumblebee image and density map. To generate various targets in the generated image, the multi-channel attention module builds numerous intermediate generation layers and attention maps. These targets are then stacked to produce a bumblebee image with a specific number of bumblebees. The proposed model achieves the best performance in bumblebee image generation tasks, and the generated bumblebee images considerably improve the efficiency of deep learning based counting methods in bumblebee counting applications.
Keywords: data augmentation; image generation; attention mechanism
Restoration of the JPEG Maximum Lossy Compressed Face Images with Hourglass Block-GAN
6
Authors: Jongwook Si, Sungyoung Kim. Computers, Materials & Continua (SCIE, EI), 2024, Issue 3, pp. 2893-2908 (16 pages)
In the context of high compression rates applied to Joint Photographic Experts Group (JPEG) images through lossy compression techniques, image-blocking artifacts may manifest. This necessitates the restoration of the image to its original quality. The challenge lies in regenerating significantly compressed images into a state in which they become identifiable. Therefore, this study focuses on the restoration of JPEG images subjected to substantial degradation caused by maximum lossy compression using Generative Adversarial Networks (GAN). The generator in this network is based on the U-Net architecture. It features a new hourglass structure that preserves the characteristics of the deep layers. In addition, the network incorporates two loss functions to generate natural and high-quality images: Low Frequency (LF) loss and High Frequency (HF) loss. HF loss uses a pretrained VGG-16 network and is configured using a specific layer that best represents features. This can enhance the performance in the high-frequency region. In contrast, LF loss is used to handle the low-frequency region. The two loss functions facilitate the generation of images by the generator, which can mislead the discriminator while accurately generating high- and low-frequency regions. Consequently, by removing the blocking effects from maximum lossy compressed images, images in which identities can be recognized are generated. This study represents a significant improvement over previous research in terms of image resolution performance.
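The paper's LF loss and HF loss target different frequency bands of the image. As a hedged illustration only (the paper's HF loss actually uses VGG-16 features, not a filter), a minimal way to separate an image into the low- and high-frequency parts such losses address is a blur-based decomposition:

```python
import numpy as np

def split_bands(img, k=5):
    """Split a grayscale image into low- and high-frequency parts using a
    k x k box blur; a simple stand-in for the separate LF/HF targets."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    low = np.zeros((h, w), dtype=float)
    for dy in range(k):          # accumulate the k*k shifted copies
        for dx in range(k):
            low += padded[dy:dy + h, dx:dx + w]
    low /= k * k                 # box-blurred image = low-frequency band
    high = img - low             # residual = high-frequency band
    return low, high

rng = np.random.default_rng(2)
img = rng.standard_normal((16, 16))
low, high = split_bands(img)
print(np.allclose(low + high, img))  # True — the two bands reconstruct the image
```

By construction the two bands sum back to the original image, so a loss applied to each band penalizes complementary kinds of error.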
Keywords: JPEG; lossy compression; restoration; image generation; GAN
A Low Spectral Bias Generative Adversarial Model for Image Generation
7
Authors: Lei Xu, Zhentao Liu, Peng Liu, Liyan Cai. 《国际计算机前沿大会会议论文集》 (conference proceedings), 2022, Issue 1, pp. 354-362 (9 pages)
We propose a systematic analysis of the neglected spectral bias in the frequency domain in this paper. Traditional generative adversarial networks (GANs) try to fill in the details of images by designing specific network architectures or losses, focusing on generating visually qualitative images. The convolution theorem shows that image processing in the frequency domain is parallelizable and performs better and faster than in the spatial domain. However, there is little work discussing the bias of frequency features between the generated images and the real ones. In this paper, we first empirically demonstrate the general distribution bias across datasets and GANs with different sampling methods. Then, we explain the causes of the spectral bias through a deduction that reconsiders the sampling process of the GAN generator. Based on these studies, we provide a low-spectral-bias hybrid generative model to reduce the spectral bias and improve the quality of the generated images.
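Spectral bias between real and generated images is commonly measured by comparing azimuthally averaged power spectra. The paper's exact procedure is not given in the abstract; a minimal sketch of this standard measurement is:

```python
import numpy as np

def radial_power_spectrum(img):
    """Azimuthally averaged power spectrum of a grayscale image: mean
    Fourier power at each integer radius (frequency band) from the center."""
    f = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    sums = np.bincount(r.ravel(), weights=power.ravel())
    counts = np.maximum(np.bincount(r.ravel()), 1)  # avoid division by zero
    return sums / counts

rng = np.random.default_rng(1)
# double cumulative sum -> strongly low-frequency image; raw noise -> flat spectrum
smooth = rng.standard_normal((64, 64)).cumsum(axis=0).cumsum(axis=1)
noise = rng.standard_normal((64, 64))
ps_smooth = radial_power_spectrum(smooth)
ps_noise = radial_power_spectrum(noise)
# the smooth image concentrates far more power at low radii than white noise
print(ps_smooth[1] / ps_smooth[-5] > ps_noise[1] / ps_noise[-5])
```

Plotting such curves for real versus generated batches makes the high-frequency gap ("spectral bias") directly visible.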
Keywords: deep learning applications; image generation models; generative adversarial network
An Interactive Collaborative Creation System for Shadow Puppets Based on Smooth Generative Adversarial Networks
8
Authors: Cheng Yang, Miaojia Lou, Xiaoyu Chen, Zixuan Ren. Computers, Materials & Continua (SCIE, EI), 2024, Issue 6, pp. 4107-4126 (20 pages)
Chinese shadow puppetry has been recognized as a world intangible cultural heritage. However, it faces substantial challenges in its preservation and advancement due to the intricate and labor-intensive nature of crafting shadow puppets. To ensure the inheritance and development of this cultural heritage, it is imperative to enable traditional art to flourish in the digital era. This paper presents an Interactive Collaborative Creation System for shadow puppets, designed to facilitate the creation of high-quality shadow puppet images with greater ease. The system comprises four key functions: image contour extraction, intelligent reference recommendation, generation network, and color adjustment, all aimed at assisting users in various aspects of the creative process, including drawing, inspiration, and content generation. Additionally, we propose an enhanced algorithm called Smooth Generative Adversarial Networks (SmoothGAN), which exhibits more stable gradient training and a greater capacity for generating high-resolution shadow puppet images. Furthermore, we have built a new dataset comprising high-quality shadow puppet images to train the shadow puppet generation model. Both qualitative and quantitative experimental results demonstrate that SmoothGAN significantly improves the quality of image generation, while our system efficiently assists users in creating high-quality shadow puppet images, with a SUS scale score of 84.4. This study provides a valuable theoretical and practical reference for the digital creation of shadow puppet art.
Keywords: shadow puppets; deep learning; image generation; co-create
Comprehensive Relation Modelling for Image Paragraph Generation
9
Authors: Xianglu Zhu, Zhang Zhang, Wei Wang, Zilei Wang. Machine Intelligence Research (EI, CSCD), 2024, Issue 2, pp. 369-382 (14 pages)
Image paragraph generation aims to generate a long description composed of multiple sentences, which differs from traditional image captioning containing only one sentence. Most previous methods are dedicated to extracting rich features from image regions and ignore modelling the visual relationships. In this paper, we propose a novel method to generate a paragraph by modelling visual relationships comprehensively. First, we parse an image into a scene graph, where each node represents a specific object and each edge denotes the relationship between two objects. Second, we enrich the object features by implicitly encoding visual relationships through a graph convolutional network (GCN). We further explore high-order relations between different relation features using another graph convolutional network. In addition, we obtain the linguistic features by projecting the predicted object labels and their relationships into a semantic embedding space. With these features, we present an attention-based topic generation network to select relevant features and produce a set of topic vectors, which are then utilized to generate multiple sentences. We evaluate the proposed method on the Stanford image-paragraph dataset, which is currently the only available dataset for image paragraph generation, and our method achieves competitive performance in comparison with other state-of-the-art (SOTA) methods.
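The GCN propagation the abstract relies on can be sketched in a few lines. This is the standard normalized-adjacency formulation (Kipf-Welling style), shown here with a toy 3-node scene graph and made-up weights, not the paper's actual network:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One GCN propagation step: H' = ReLU(D^-1/2 (A + I) D^-1/2 · H · W),
    where A is the scene-graph adjacency and H the node (object) features."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)  # 3-node chain graph
H = np.eye(3)                 # one-hot object features
W = np.full((3, 2), 0.5)      # toy weight matrix
out = gcn_layer(A, H, W)
print(out.shape)  # (3, 2)
```

Each node's output mixes its own features with those of its graph neighbors, which is how relationship information flows into the object representations.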
Keywords: image paragraph generation; visual relationship; scene graph; graph convolutional network (GCN); long short-term memory
Dual Variational Generation Based ResNeSt for Near Infrared-Visible Face Recognition
10
Authors: DING Xiangwu, LIU Chao, QIN Yanxia. Journal of Donghua University (English Edition) (CAS), 2022, Issue 2, pp. 156-162 (7 pages)
Near infrared-visible (NIR-VIS) face recognition is to match an NIR face image to a VIS image. The main challenges of NIR-VIS face recognition are the gap caused by cross-modality and the lack of sufficient paired NIR-VIS face images to train models. This paper focuses on the generation of paired NIR-VIS face images and proposes a dual variational generator based on ResNeSt (RS-DVG). RS-DVG can generate a large number of paired NIR-VIS face images from noise, and these generated NIR-VIS face images can be used as the training set together with the real NIR-VIS face images. In addition, a triplet loss function is introduced and a novel triplet selection method is proposed specifically for the training of the current face recognition model, which maximizes the inter-class distance and minimizes the intra-class distance in the input face images. The method proposed in this paper was evaluated on the datasets CASIA NIR-VIS 2.0 and BUAA-VisNir, and relatively good results were obtained.
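The triplet loss mentioned above has a standard form (the paper's novelty is in triplet *selection*, which is not reproduced here). A minimal sketch with toy 2-D embeddings:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Standard triplet loss: pull the anchor toward the positive embedding
    and push it away from the negative by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2)  # squared distance to same identity
    d_neg = np.sum((anchor - negative) ** 2)  # squared distance to other identity
    return max(0.0, d_pos - d_neg + margin)

a = np.array([1.0, 0.0])   # anchor embedding
p = np.array([0.9, 0.1])   # same identity, close
n = np.array([-1.0, 0.0])  # different identity, far
print(triplet_loss(a, p, n))  # 0.0 — the margin constraint is already satisfied
```

When the positive is farther than the negative, the loss becomes positive and gradients push the embeddings apart, which is exactly the inter-class/intra-class objective the abstract describes.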
Keywords: near infrared-visible face recognition; face image generation; ResNeSt; triplet loss function; attention mechanism
Deep Learning for Distinguishing Computer Generated Images and Natural Images: A Survey (Cited by 4)
11
Authors: Bingtao Hu, Jinwei Wang. Journal of Information Hiding and Privacy Protection, 2020, Issue 2, pp. 95-105 (11 pages)
With the development of computer graphics, realistic computer graphics (CG) have become more and more common in our field of vision. Such rendered images are indistinguishable from natural images to the naked eye. How to effectively identify CG and natural images (NI) has become a new issue in the field of digital forensics. In recent years, a series of deep learning network frameworks have shown great advantages in the field of images, which provides a good choice for solving this problem. This paper aims to track the latest developments and applications of deep learning in the field of CG and NI forensics in a timely manner. First, it introduces the background of deep learning and the fundamentals of convolutional neural networks, in order to explain the basic model structures of deep learning applications in the image field, and then outlines the mainstream frameworks. Second, it briefly introduces the application of deep learning in CG and NI forensics. Finally, it points out the problems of deep learning in this field and the prospects for the future.
Keywords: deep learning; convolutional neural network; image forensics; computer generated image; natural image
A method to generate foggy optical images based on unsupervised depth estimation
12
Authors: WANG Xiangjun, LIU Linghao, NI Yubo, WANG Lin. Journal of Measurement Science and Instrumentation (CAS, CSCD), 2021, Issue 1, pp. 44-52 (9 pages)
For traffic object detection in foggy environments based on convolutional neural networks (CNN), data sets collected in fog-free environments are generally used to train the network directly. As a result, the network cannot learn the object characteristics of the foggy environment from the training set, and the detection effect is poor. To improve traffic object detection in foggy environments, we propose a method of generating foggy images from fog-free images from the perspective of data set construction. First, taking the KITTI object detection data set as the original fog-free images, we generate the depth image of each original image using an improved Monodepth unsupervised depth estimation method. Then, a geometric prior depth template is constructed, and the image entropy, taken as a weight, is fused with the depth image. After that, a foggy image is produced from the depth image based on the atmospheric scattering model. Finally, we take two typical object-detection frameworks, the two-stage Faster region-based convolutional neural network (Faster-RCNN) and the one-stage network YOLOv4, and train them on the original data set, the foggy data set and the mixed data set, respectively. According to the test results on the RESIDE-RTTS data set in an outdoor natural foggy environment, the model trained on the mixed data set shows the best effect. The mean average precision (mAP) values are increased by 5.6% and 5.0% under the YOLOv4 model and the Faster-RCNN network, respectively. This proves that the proposed method can effectively improve object identification ability in foggy environments.
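The atmospheric scattering model referenced above is a standard formula; a minimal numpy sketch of fog synthesis from a depth map (the `beta` and `airlight` values here are illustrative, not the paper's settings):

```python
import numpy as np

def add_fog(image, depth, beta=1.0, airlight=0.9):
    """Synthesize fog on a clean image via the atmospheric scattering model:
    I(x) = J(x) * t(x) + A * (1 - t(x)),  with  t(x) = exp(-beta * d(x))."""
    t = np.exp(-beta * depth)             # transmission decays with scene depth
    return image * t + airlight * (1.0 - t)

img = np.full((2, 2), 0.2)                # dark, clean pixel values
depth = np.array([[0.1, 1.0],
                  [5.0, 50.0]])           # per-pixel depth in arbitrary units
foggy = add_fog(img, depth)
print(foggy)  # near pixels keep their value; distant ones fade toward the airlight
```

Nearby pixels (small depth, transmission near 1) stay close to the clean image, while distant pixels converge to the airlight value, which is the visual signature of fog.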
Keywords: traffic object detection; foggy image generation; unsupervised depth estimation; YOLOv4 model; Faster region-based convolutional neural network (Faster-RCNN)
A Study on the Influence of Luminance L* in the L*a*b* Color Space during Color Segmentation (Cited by 1)
13
Authors: Rodolfo Alvarado-Cervantes, Edgardo M. Felipe-Riveron, Vladislav Khartchenko, Oleksiy Pogrebnyak. Journal of Computer and Communications, 2016, Issue 3, pp. 28-34 (7 pages)
In this paper, an evaluation of the influence of luminance L* in the L*a*b* color space during color segmentation is presented. A comparative study is made between the behavior of segmentation in color images using only the Euclidean metric of a* and b*, and an adaptive color similarity function defined as a product of Gaussian functions in a modified HSI color space. For the evaluation, synthetic images were specifically designed to accurately assess the performance of the color segmentation. The testing system can be used either to explore the behavior of a similarity function (or metric) in different color spaces or to explore different metrics (or similarity functions) in the same color space. The results show that the color parameters a* and b* are not independent of the luminance parameter L*, as one might initially assume.
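The contrast at the heart of this study is between a chromaticity-only metric over (a*, b*) and the full CIELAB distance. A minimal sketch of the two (Delta E 76 for the full distance; the color values are made-up examples):

```python
import numpy as np

def dist_ab(c1, c2):
    """Chromaticity-only distance: Euclidean metric over (a*, b*), ignoring L*."""
    return np.hypot(c1[1] - c2[1], c1[2] - c2[2])

def dist_lab(c1, c2):
    """Full CIELAB distance (Delta E 76) over (L*, a*, b*)."""
    return np.linalg.norm(np.asarray(c1, dtype=float) - np.asarray(c2, dtype=float))

light_red = (80.0, 60.0, 40.0)   # (L*, a*, b*): bright reddish color
dark_red  = (30.0, 60.0, 40.0)   # same chroma, much darker
print(dist_ab(light_red, dark_red))   # 0.0  — identical under the a*b*-only metric
print(dist_lab(light_red, dark_red))  # 50.0 — the luminance difference is visible
```

Two colors that any viewer would separate by brightness collapse to distance zero under the a*b*-only metric, which is why dropping L* can distort segmentation results.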
Keywords: color image segmentation; CIELAB color space; L*a*b* color space; color metrics; color segmentation evaluation; synthetic color image generation
Cascading Enhancement of Reflected Optical Third-Harmonic Imaging in Bio-Tissues
14
Authors: Yao Duan-zheng, Xiong Gui-guang (School of Physics and Technology, Wuhan University, Wuhan 430072, Hubei, China). Wuhan University Journal of Natural Sciences (CAS), 2003, Issue 01A, pp. 51-53 (3 pages)
A new nonlinear optical third-harmonic imaging technology in reflected fashion in bio-tissues using the cascading effect, a process whereby second-order effects combine to contribute to a third-order nonlinear process, has been analyzed. The performance of the reflected optical third-harmonic imaging enhanced by the cascading effect in bio-tissues is analyzed with semi-classical theory. The microscopic understanding of the enhancement of cascaded optical third-harmonic imaging in reflected manner in bio-tissues has been discussed. Some ideas for further enhancement are given.
Keywords: third harmonic generation imaging; cascading enhancement; bio-tissue
A Prototype Expert System for Automatic Generation of Image Processing Programs
15
Authors: Song Maoqiang, Felix Grimm, Horst Bunke. Journal of Computer Science & Technology (SCIE, EI, CSCD), 1991, Issue 3, pp. 296-300 (5 pages)
A prototype expert system for generating image processing programs using the subroutine package SPIDER is described in this paper. Based on an interactive dialog, the system can generate a complete application program using SPIDER routines.
Keywords: A Prototype Expert System for Automatic Generation of Image Processing Programs
Prompt learning in computer vision: a survey (Cited by 1)
16
Authors: Yiming LEI, Jingqi LI, Zilong LI, Yuan CAO, Hongming SHAN. Frontiers of Information Technology & Electronic Engineering (SCIE, EI, CSCD), 2024, Issue 1, pp. 42-63 (22 pages)
Prompt learning has attracted broad attention in computer vision since large pre-trained vision-language models (VLMs) exploded. Based on the close relationship between vision and language information built by VLMs, prompt learning has become a crucial technique in many important applications such as artificial intelligence generated content (AIGC). In this survey, we provide a progressive and comprehensive review of visual prompt learning as related to AIGC. We begin by introducing VLMs, the foundation of visual prompt learning. Then, we review vision prompt learning methods and prompt-guided generative models, and discuss how to improve the efficiency of adapting AIGC models to specific downstream tasks. Finally, we provide some promising research directions concerning prompt learning.
Keywords: prompt learning; visual prompt tuning (VPT); image generation; image classification; artificial intelligence generated content (AIGC)
Mask guided diverse face image synthesis
17
Authors: Song SUN, Bo ZHAO, Muhammad MATEEN, Xin CHEN, Junhao WEN. Frontiers of Computer Science (SCIE, EI, CSCD), 2022, Issue 3, pp. 67-75 (9 pages)
Recent studies have shown remarkable success in the face image generation task. However, existing approaches have limited diversity, quality and controllability in their generated results. To address these issues, we propose a novel end-to-end learning framework to generate diverse, realistic and controllable face images guided by face masks. The face mask provides a good geometric constraint for a face by specifying the size and location of different components of the face, such as the eyes, nose and mouth. The framework consists of four components: a style encoder, a style decoder, a generator and a discriminator. The style encoder generates a style code which represents the style of the resulting face; the generator translates the input face mask into a real face based on the style code; the style decoder learns to reconstruct the style code from the generated face image; and the discriminator classifies an input face image as real or fake. With the style code, the proposed model can generate different face images matching the input face mask, and by manipulating the face mask, we can finely control the generated face image. We empirically demonstrate the effectiveness of our approach on the mask guided face image synthesis task.
Keywords: face image generation; image translation; generative adversarial networks
Improving contrast and sectioning power in confocal imaging by third harmonic generation in SiOx nanocrystallites (Cited by 1)
18
Authors: Gilbert Boyer, Karsten Plamann. Chinese Optics Letters (SCIE, EI, CAS, CSCD), 2007, Issue 8, pp. 477-479 (3 pages)
We present a new optical microscope in which the light transmitted by a sample-scanned transmission confocal microscope is frequency-tripled by SiOx nanocrystallites in lieu of being transmitted by a confocal pinhole. This imaging technique offers increased contrast and high scattered-light rejection. It is demonstrated that the contrast close to the Sparrow resolution limit is enhanced and the sectioning power is increased with respect to the linear confocal detection mode. An experimental implementation is presented and compared with the conventional linear confocal mode.
Keywords: mode; Improving contrast and sectioning power in confocal imaging by third harmonic generation in SiO_x nanocrystallites; THG
Computer generated hologram from full-parallax 3D image data captured by scanning vertical camera array (Invited Paper) (Cited by 2)
19
Authors: Masahiro Yamaguchi, Koki Wakunami, Mamoru Inaniwa. Chinese Optics Letters (SCIE, EI, CAS, CSCD), 2014, Issue 6, pp. 80-85 (6 pages)
A full-parallax light field is captured by a small-scale 3D image scanning system and applied to holographic display. A vertical camera array is scanned horizontally to capture full-parallax imagery, and the vertical views between cameras are interpolated by a depth image-based rendering technique. An improved technique for depth estimation reduces the estimation error, and a high-density light field is obtained. The captured data are employed for the calculation of a computer hologram using a ray-sampling plane. This technique enables high-resolution display even in a deep 3D scene although the hologram is calculated from ray information, and thus it makes use of an important advantage of holographic 3D display.
Keywords: Computer generated hologram from full-parallax 3D image data captured by scanning vertical camera array; data