In his latest project, Akihiko Taniguchi conflates game engines, video game editors, and selfies:
This is software that can generate a 3D avatar from a single photo of a face and then take selfies in a virtual space. The part that generates the 3D face shape from a single photo uses the Avatar SDK, a tool based on deep learning. In other words, the 3D model is produced by inferring a 3D shape from the participant's face photo, drawing on a huge body of training data made up of other people's face photos and 3D models. The face of a 3D model generated this way is fragmentarily similar to countless faces in that past training data. Can I conclude that all of the generated 3D heads are mine?
Also, in a photograph taken in a virtual space, the subject has already been rendered every frame before the shot is taken, and is represented as a set of pixels on the screen. In other words, can you call it a picture when what you see and the picture you take are physically identical? And what does "shooting" reflect in a virtual photograph?
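That second question is easy to see in code: in a real-time 3D scene the "camera" samples no new light, and taking the shot amounts to copying a framebuffer that has already been rendered for display. The following is a minimal, hypothetical sketch of that idea, not Taniguchi's software or the Avatar SDK's actual API; the render_scene function, the resolution, and the gradient image it produces are all illustrative stand-ins.

```python
import numpy as np
from PIL import Image

WIDTH, HEIGHT = 640, 480  # illustrative resolution, not the project's

def render_scene(frame: int) -> np.ndarray:
    """Stand-in for a game engine's per-frame render: returns the RGB
    framebuffer that is already being shown on screen every frame."""
    x = np.linspace(0, 1, WIDTH)
    y = np.linspace(0, 1, HEIGHT)
    r = np.tile(x, (HEIGHT, 1))                      # horizontal gradient
    g = np.tile(y[:, None], (1, WIDTH))              # vertical gradient
    b = np.full((HEIGHT, WIDTH), (frame % 256) / 255.0)
    return (np.dstack([r, g, b]) * 255).astype(np.uint8)

def take_virtual_selfie(framebuffer: np.ndarray) -> Image.Image:
    """'Shooting' in virtual space: no lens, no exposure, just a copy of
    the pixels that were already rendered for display."""
    return Image.fromarray(framebuffer.copy())

framebuffer = render_scene(frame=0)       # what the viewer sees
photo = take_virtual_selfie(framebuffer)  # what the "camera" captures

# The capture is bit-for-bit identical to the displayed frame.
assert np.array_equal(np.asarray(photo), framebuffer)
photo.save("virtual_selfie.png")
```

The point the sketch makes is the one the artist raises: since the "photograph" and the on-screen image are the same pixel data, the act of shooting adds no new physical information, only a decision about when to copy it.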
Akihiko Taniguchi is an artist who lives and works in Japan. He is a full-time lecturer at Tama Art University and a part-time lecturer at Musashino Art University. His practice features installations, performances and video works using self-built devices and software. In recent years he has been focusing on net.art and, occasionally, VJing. His works include "dangling media" ("emergencies! 004" at "Open Space 2007," ICC, Tokyo, 2007), "Space of Imperception" (Radiator Festival, UK, 2009), "redundant web" (Internet, 2010) and "[Internet Art Future?]" (ICC, Tokyo, 2012), among others.
LINK: Akihiko Taniguchi