Home

Traditionally, the recognition of tangible properties of data, such as objects and scenes, has dominated the spectrum of applications in computer vision. More recently, the understanding of subjective attributes (SA) of data has attracted the attention of many researchers in vision. These subjective attributes include properties assessed by individuals, e.g.:

  • safety [1,2],
  • interestingness [3,4,5],
  • evoked emotions and sentiment [6,7,8,9,10],
  • memorability [11,12],
  • aesthetics [3,13,14,15,16],
  • or creativity [17],

as well as aggregated emergent properties such as popularity or virality [18,19,20].

Given the inherently abstract nature of such concepts, many new challenges arise when attempting to automatically detect these properties from visual data, or to perform SA-based large-scale retrieval, including:

  • Collecting large amounts of annotations reflecting subjective judgments
  • Learning visual representations specifically tailored for SA recognition
  • Reliably evaluating the accuracy of detectors of subjective properties
  • Translating (social) psychology theories into computational approaches to systematically understand human SA perception

We are currently organizing a CVPR 2019 workshop focused on fashion and subjective search (FFSS-USAD). The list of previous events can be found here.

References

  1. V. Ordonez and T. L. Berg, “Learning high-level judgments of urban perception,” in ECCV, 2014.
  2. A. Khosla, B. An, J. J. Lim, and A. Torralba, “Looking beyond the visible scene,” in IEEE CVPR, 2014.
  3. S. Dhar, V. Ordonez, and T. L. Berg, “High level describable attributes for predicting aesthetics and interestingness,” in IEEE CVPR, 2011.
  4. M. Gygli, H. Grabner, H. Riemenschneider, F. Nater, and L. Van Gool, “The interestingness of images,” in IEEE ICCV, 2013.
  5. Y. Fu, T. M. Hospedales, T. Xiang, S. Gong, and Y. Yao, “Interestingness prediction by robust learning to rank,” in ECCV, 2014.
  6. K. Peng, T. Chen, A. Sadovnik, and A. C. Gallagher, “A mixed bag of emotions: model, predict, and transfer emotion distributions,” in IEEE CVPR, 2015.
  7. R. Kosti, J. M. Alvarez, A. Recasens, and A. Lapedriza, “Emotion recognition in context,” in IEEE CVPR, 2017.
  8. Z. Hussain, M. Zhang, X. Zhang, K. Ye, C. Thomas, Z. Agha, N. Ong, and A. Kovashka, “Automatic understanding of image and video advertisements,” in IEEE CVPR, 2017.
  9. X. Alameda-Pineda, E. Ricci, Y. Yan, and N. Sebe, “Recognizing emotions from abstract paintings using non-linear matrix completion,” in IEEE CVPR, 2016.
  10. B. Jou, T. Chen, N. Pappas, M. Redi, M. Topkara, and S. Chang, “Visual affect around the world: a large-scale multilingual visual sentiment ontology,” in ACM Multimedia, 2015.
  11. A. Khosla, A. S. Raju, A. Torralba, and A. Oliva, “Understanding and predicting image memorability at a large scale,” in IEEE ICCV, 2015.
  12. A. Siarohin, G. Zen, C. Majtanovic, X. Alameda-Pineda, E. Ricci, and N. Sebe, “How to make an image more memorable? a deep style transfer approach,” in ACM ICMR, 2017.
  13. N. Murray, L. Marchesotti, and F. Perronnin, “AVA: a large-scale database for aesthetic visual analysis,” in IEEE CVPR, 2012.
  14. W. Luo, X. Wang, and X. Tang, “Content-based photo quality assessment,” in IEEE ICCV, 2011.
  15. E. Simo-Serra, S. Fidler, F. Moreno-Noguer, and R. Urtasun, “Neuroaesthetics in fashion: modeling the perception of fashionability,” in IEEE CVPR, 2015.
  16. R. Schifanella, M. Redi, and L. M. Aiello, “An image is worth more than a thousand favorites: surfacing the hidden beauty of Flickr pictures,” in ICWSM, 2015.
  17. M. Redi, N. O’Hare, R. Schifanella, M. Trevisiol, and A. Jaimes, “6 seconds of sound and vision: creativity in micro-videos,” in IEEE CVPR, 2014.
  18. X. Alameda-Pineda, A. Pilzer, D. Xu, N. Sebe, and E. Ricci, “Viraliency: pooling local virality,” in IEEE CVPR, 2017.
  19. A. Khosla, A. Das Sarma, and R. Hamid, “What makes an image popular?,” in WWW, 2014.
  20. A. Deza and D. Parikh, “Understanding image virality,” in IEEE CVPR, 2015.
