Repositório Institucional da UFRN

Browsing by Author "Curinga, Artur Maricato"

Now showing 1 - 1 of 1
TCC (Trabalho de Conclusão de Curso, undergraduate thesis)
    A Comparative Study of Image-to-Image Translation Techniques for Virtual Object Illumination in Augmented Images
(Universidade Federal do Rio Grande do Norte, 2022-07-14) Curinga, Artur Maricato (http://lattes.cnpq.br/6740272956017842); Santos, Selan Rodrigues dos (https://orcid.org/0000-0002-8056-1101; http://lattes.cnpq.br/4022950700003347); Thomé, Antônio Carlos Gay (http://lattes.cnpq.br/9282046098909851); Carvalho, Bruno Motta de (https://orcid.org/0000-0002-9122-0257; http://lattes.cnpq.br/0330924133337698); Campos, André Maurício Cunha (http://lattes.cnpq.br/7154508093406987)
This work is a comparative study of two Deep Neural Network (DNN) models in the augmented reality context, aiming to produce visually coherent augmented indoor images with a virtual object inserted. We trained DNN models to generate coherent shadows and illumination for an unlit object, given a computer-generated photorealistic indoor environment as a reference. The goal is to add the artificially lit object to the reference scene so that it blends in naturally when seen by a human viewer unaware of the interference. We developed a dataset, Indoor Shadows, with 4826 sets of images derived from the 3D-Front scene dataset, to serve as our benchmark. Pix2Pix and ShadowGAN were trained with the SGD and Adam optimizers, and their generated images were compared against the ground truth. We used the L1, L2, and MSSIM metrics to evaluate the results of the trained models. We found that ShadowGAN trained with Adam had the best results on the MSSIM metric, while Pix2Pix trained with SGD had the best results on L1 and L2. We concluded that both techniques are very limited, and the generated images are easily distinguishable from the ground truth.
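As a concrete illustration of the evaluation step described in the abstract, below is a minimal Python sketch of how per-image L1, L2, and structural-similarity scores between a generated image and its ground truth might be computed. It assumes same-shape uint8 NumPy arrays and uses scikit-image's single-scale structural_similarity as a stand-in for the multi-scale MSSIM the thesis reports (a package such as pytorch-msssim provides the multi-scale variant); the function name compare_images and the dataset_pairs iterable are illustrative, not taken from the thesis.

import numpy as np
from skimage.metrics import structural_similarity

def compare_images(generated: np.ndarray, ground_truth: np.ndarray) -> dict:
    """Return per-image L1, L2, and SSIM scores for two same-shape images."""
    gen = generated.astype(np.float64)
    gt = ground_truth.astype(np.float64)
    l1 = np.mean(np.abs(gen - gt))   # mean absolute error
    l2 = np.mean((gen - gt) ** 2)    # mean squared error
    # Single-scale SSIM over color images; data_range=255 matches uint8 input.
    ssim = structural_similarity(generated, ground_truth,
                                 channel_axis=-1, data_range=255)
    return {"L1": l1, "L2": l2, "SSIM": ssim}

# Illustrative usage: average each metric over a benchmark of image pairs.
# scores = [compare_images(g, t) for g, t in dataset_pairs]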