[self.] was a robotic art installation made by Øyvind Brandtsegg and Axel Tidemann in 2013. As an artwork, it reflects on the role of artificial intelligence in our society, and on human as well as machine learning processes. [self.] analyses sound through a system modelled on the human ear, and learns to recognize images using a digital model of how nerve cells in the brain handle sensory impressions. It was designed to learn entirely from sensory input, with no pre-defined knowledge database, so that its learning process resembles that of a human child in early life. Notably, it also associates each phrase or word with the person from whom it was learned, and recreates a video of that person when the term is recalled. The video synthesis is based on a neural network trained on the sound of the phrase or word: when fed the exact same sound, it recreates the video. If the sound is not an exact match (as when the same word is spoken by a different person), the recreated video will be fuzzier.
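As a rough illustration of this kind of associative recall (a minimal sketch, not the actual implementation used in [self.]), the toy code below fits a linear map from audio features to video frames: feeding back the exact sound reconstructs the memorized video, while a perturbed version of the same sound yields a fuzzier reconstruction. All dimensions and data are invented placeholders.

```python
# Minimal sketch, not the authors' code: a toy associative mapping from audio
# features to video frames, illustrating the recall behaviour described above.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "memory" of one spoken phrase: 50 time frames of audio features
# and the 50 matching (flattened) low-resolution video frames.
audio = rng.normal(size=(50, 60))        # 60 spectral coefficients per frame
video = rng.normal(size=(50, 64 * 64))   # 64x64 pixels per frame, flattened

# Fit a linear map from audio to video (a stand-in for the neural network
# used in the installation).
W, *_ = np.linalg.lstsq(audio, video, rcond=None)

# Feeding back the exact sound reconstructs the memorized video closely.
exact = audio @ W
print("relative error, same recording:",
      np.linalg.norm(exact - video) / np.linalg.norm(video))

# The "same phrase" from a different speaker is modelled as perturbed audio
# features; the reconstruction degrades, i.e. the recalled video gets fuzzier.
other = audio + rng.normal(scale=0.3, size=audio.shape)
fuzzy = other @ W
print("relative error, different speaker:",
      np.linalg.norm(fuzzy - video) / np.linalg.norm(video))
```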

Video with [self.] in Norwegian, with English captions

[self.] in action at TSSK from Oeyvind Brandtsegg on Vimeo.

Early testing while mounting the first exhibition of [self.]

We wrote some papers about the technology developed for the work: [self.]: an Interactive Art Installation that Embodies Artificial Intelligence and Creativity, in C&C ‘15: Proceedings of the 2015 ACM SIGCHI Conference on Creativity and Cognition
On Audio Processes in the artificial intelligence [self.], in the 3rd International Csound Conference
[Self.]: Realization / art installation / artificial intelligence: A demonstration, in Lecture Notes in Computer Science (LNCS), volume 9353.

After the initial exhibition in Trondheim, [self.] was also exhibited in Arendal, St. Petersburg and Glasgow.

Tags: soundart, writing
