Toward Artificial Synesthesia: Linking Images and Sounds via Words
We tackle the new challenge of modeling a perceptual experience in which a stimulus in one modality gives rise to an experience in a different sensory modality, termed synesthesia. To meet the challenge, we propose a probabilistic framework based on graphical models that links visual and auditory modalities via natural language text. An online prototype system is developed to allow human judges to evaluate the model's performance. Experimental results indicate the usefulness and applicability of the framework.
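The abstract's central idea, using words as a bridge between the visual and auditory modalities, can be illustrated with a toy marginalization over a shared vocabulary. This is a hypothetical sketch, not the paper's actual graphical model; the vocabulary, sound files, and probabilities below are invented for illustration.

```python
# Toy illustration of linking modalities via words (NOT the paper's model):
# given p(word | image) from an image-annotation model and p(sound | word)
# from a sound-tagging model, marginalize over the shared vocabulary to
# obtain p(sound | image).

def link_via_words(p_word_given_image, p_sound_given_word):
    """p(sound | image) = sum over words w of p(sound | w) * p(w | image)."""
    p_sound = {}
    for word, pw in p_word_given_image.items():
        for sound, ps in p_sound_given_word.get(word, {}).items():
            p_sound[sound] = p_sound.get(sound, 0.0) + ps * pw
    return p_sound

# Invented annotation distribution for one image (say, a beach photo).
p_word_given_image = {"ocean": 0.6, "sand": 0.3, "sky": 0.1}

# Invented sound distributions conditioned on each word.
p_sound_given_word = {
    "ocean": {"waves.wav": 0.9, "seagulls.wav": 0.1},
    "sand":  {"footsteps.wav": 1.0},
    "sky":   {"wind.wav": 1.0},
}

scores = link_via_words(p_word_given_image, p_sound_given_word)
best = max(scores, key=scores.get)  # the most plausible sound for the image
```

Because words act as a latent bridge, neither component model ever needs paired image-sound training data, which is the appeal of routing through text.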
Authors: | Han Xiao and Thomas Stibor |
Year/month: | 2010/ |
Booktitle: | NIPS Workshop on Machine Learning for Next Generation Computer Vision Challenges |
Fulltext: | Xiao_Stibor_NIPS2010_Workshop.pdf |
Bibtex:
@inproceedings{Xiao:Stibor:2010a,
  author    = {Han Xiao and Thomas Stibor},
  title     = {Toward Artificial Synesthesia: Linking Images and Sounds via Words},
  year      = {2010},
  booktitle = {NIPS Workshop on Machine Learning for Next Generation Computer Vision Challenges},
  url       = {https://www.sec.in.tum.de/i20/publications/toward-artificial-synesthesia-linking-images-and-sounds-via-words/@@download/file/Xiao_Stibor_NIPS2010_Workshop.pdf}
}