

{"id":4,"date":"2011-12-08T11:55:34","date_gmt":"2011-12-08T11:55:34","guid":{"rendered":"http:\/\/project.inria.fr\/template1\/?page_id=4"},"modified":"2019-01-02T14:06:20","modified_gmt":"2019-01-02T13:06:20","slug":"home","status":"publish","type":"page","link":"https:\/\/project.inria.fr\/usad\/","title":{"rendered":"Home"},"content":{"rendered":"<p>Traditionally, the recognition of <em>tangible<\/em> properties of data, such as objects and scenes, have overwhelmingly covered the spectra of applications in computer vision. In the recent past, the understanding of <em>subjective attributes<\/em> (SA) of data has attracted the attention of many researchers in vision. These subjective attributes include the ones assessed by individuals, e.g.:<\/p>\n<ul>\n<li>safety [1,2],<\/li>\n<li>interestingness [3,4,5],<\/li>\n<li>evoked emotions and sentiment [6,7,8,9,10],<\/li>\n<li>memorability [11,12],<\/li>\n<li>aesthetics [3,13,14,15,16],<\/li>\n<li>or creativity [17],<\/li>\n<\/ul>\n<p>as well as aggregated emergent properties such as popularity or virality [18,19,20].<\/p>\n<p>Given the inherent abstract nature of such concepts, many new challenges arise when attempting to automatically detect such properties from visual data, or to perform SA-based large-scale retrieval, including:<\/p>\n<ul>\n<li>Collecting huge amounts of annotation reflecting subjective judgements<\/li>\n<li>Learning visual representations specifically tailored for SA recognition<\/li>\n<li>Reliably evaluating the accuracy of detectors of subjective properties<\/li>\n<li>Translating (social) psychology theories into computational approaches to \u00a0systematically understand human SA perception<\/li>\n<\/ul>\n<p>Currently, we organize a CVPR 2019 workshop focused on fashion and subjective search (FFSS-USAD).\u00a0The list of previous events can be found <a href=\"https:\/\/project.inria.fr\/usad\/previous-events\/\">here<\/a>.<\/p>\n<p><strong>References<\/strong><\/p>\n<ol>\n<li id=\"paperkey_0\" 
class=\"papercite_entry\">V. Ordonez and T. L. Berg, \u201cLearning high-level judgments of urban perception,\u201d in\u00a0ECCV, 2014.<\/li>\n<li id=\"paperkey_1\" class=\"papercite_entry\">A. Khosla, B. An An, J. J. Lim, and A. Torralba, \u201cLooking beyond the visible scene,\u201d in\u00a0IEEE CVPR, 2014.<\/li>\n<li id=\"paperkey_2\" class=\"papercite_entry\">S. Dhar, V. Ordonez, and T. L. Berg, \u201cHigh level describable attributes for predicting aesthetics and interestingness,\u201d in\u00a0IEEE CVPR, 2011.<\/li>\n<li id=\"paperkey_3\" class=\"papercite_entry\">M. Gygli, H. Grabner, H. Riemenschneider, F. Nater, and L. Van Gool, \u201cThe interestingness of images,\u201d in\u00a0IEEE ICCV, 2013.<\/li>\n<li id=\"paperkey_4\" class=\"papercite_entry\">Y. Fu, T. M. Hospedales, T. Xiang, S. Gong, and Y. Yao, \u201cInterestingness prediction by robust learning to rank,\u201d in\u00a0ECCV, 2014.<\/li>\n<li id=\"paperkey_5\" class=\"papercite_entry\">K. Peng, T. Chen, A. Sadovnik, and A. C. Gallagher, \u201cA mixed bag of emotions: model, predict, and transfer emotion distributions,\u201d in\u00a0IEEE CVPR, 2015.<\/li>\n<li id=\"paperkey_6\" class=\"papercite_entry\">R. Kosti, J. M. Alvarez, A. Recasens, and A. Lapedriza, \u201cEmotion recognition in context,\u201d in\u00a0IEEE CVPR, 2017.<\/li>\n<li id=\"paperkey_7\" class=\"papercite_entry\">Z. Hussain, M. Zhang, X. Zhang, K. Ye, C. Thomas, Z. Agha, N. Ong, and A. Kovashka, \u201cAutomatic understanding of image and video advertisements,\u201d in\u00a0IEEE CVPR, 2017.<\/li>\n<li id=\"paperkey_8\" class=\"papercite_entry\">X. Alameda-Pineda, E. Ricci, Y. Yan, and N. Sebe, \u201cRecognizing emotions from abstract paintings using non-linear matrix completion,\u201d in\u00a0IEEE CVPR, 2016.<\/li>\n<li id=\"paperkey_9\" class=\"papercite_entry\">B. Jou, T. Chen, N. Pappas, M. Redi, M. Topkara, and S. 
Chang, \u201cVisual affect around the world: a large-scale multilingual visual sentiment ontology,\u201d in\u00a0ACM Multimedia, 2015.<\/li>\n<li id=\"paperkey_10\" class=\"papercite_entry\">A. Khosla, A. S. Raju, A. Torralba, and A. Oliva, \u201cUnderstanding and predicting image memorability at a large scale,\u201d in\u00a0IEEE ICCV, 2015.<\/li>\n<li id=\"paperkey_11\" class=\"papercite_entry\">A. Siarohin, G. Zen, C. Majtanovic, X. Alameda-Pineda, E. Ricci, and N. Sebe, \u201cHow to make an image more memorable? a deep style transfer approach,\u201d in\u00a0ACM ICMR, 2017.<\/li>\n<li id=\"paperkey_12\" class=\"papercite_entry\">N. Murray, L. Marchesotti, and F. Perronnin, \u201cAva: a large-scale database for aesthetic visual analysis,\u201d in\u00a0IEEE CVPR, 2012.<\/li>\n<li id=\"paperkey_13\" class=\"papercite_entry\">W. Luo, X. Wang, and X. Tang, \u201cContent-based photo quality assessment,\u201d in\u00a0IEEE ICCV, 2011.<\/li>\n<li id=\"paperkey_14\" class=\"papercite_entry\">E. Simo-Serra, S. Fidler, F. Moreno-Noguer, and R. Urtasun, \u201cNeuroaesthetics in fashion: modeling the perception of fashionability,\u201d in\u00a0IEEE CVPR, 2015.<\/li>\n<li id=\"paperkey_15\" class=\"papercite_entry\">R. Schifanella, M. Redi, and L. M. Aiello, \u201cAn image is worth more than a thousand favorites: surfacing the hidden beauty of flickr pictures.,\u201d in\u00a0ICWSM, 2015.<\/li>\n<li id=\"paperkey_16\" class=\"papercite_entry\">M. Redi, N. O\u2019Hare, R. Schifanella, M. Trevisiol, and A. Jaimes, \u201c6 seconds of sound and vision: creativity in micro-videos,\u201d in\u00a0IEEE CVPR, 2014.<\/li>\n<li id=\"paperkey_17\" class=\"papercite_entry\">X. Alameda-Pineda, A. Pilzer, D. Xu, N. Sebe, and E. Ricci, \u201cViraliency: pooling local virality,\u201d in\u00a0IEEE CVPR, 2017.<\/li>\n<li id=\"paperkey_18\" class=\"papercite_entry\">A. Khosla, A. Das Sarma, and R. Hamid, \u201cWhat makes an image popular?,\u201d in\u00a0Int. conf. 
on World Wide Web, 2014.<\/li>\n<li id=\"paperkey_19\" class=\"papercite_entry\">A. Deza and D. Parikh, \u201cUnderstanding image virality,\u201d in\u00a0IEEE CVPR, 2015.<\/li>\n<\/ol>\n<p><\/p>","protected":false},"excerpt":{"rendered":"<p>Traditionally, the recognition of tangible properties of data, such as objects and scenes, has overwhelmingly covered the spectrum of applications in computer vision. More recently, the understanding of subjective attributes (SA) of data has attracted the attention of many researchers in vision. These subjective attributes include those\u2026<\/p>\n<p> <a class=\"continue-reading-link\" href=\"https:\/\/project.inria.fr\/usad\/\"><span>Continue reading<\/span><i class=\"crycon-right-dir\"><\/i><\/a> <\/p>\n","protected":false},"author":1,"featured_media":0,"parent":0,"menu_order":0,"comment_status":"closed","ping_status":"open","template":"","meta":{"footnotes":""},"class_list":["post-4","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/project.inria.fr\/usad\/wp-json\/wp\/v2\/pages\/4","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/project.inria.fr\/usad\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/project.inria.fr\/usad\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/project.inria.fr\/usad\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/project.inria.fr\/usad\/wp-json\/wp\/v2\/comments?post=4"}],"version-history":[{"count":17,"href":"https:\/\/project.inria.fr\/usad\/wp-json\/wp\/v2\/pages\/4\/revisions"}],"predecessor-version":[{"id":235,"href":"https:\/\/project.inria.fr\/usad\/wp-json\/wp\/v2\/pages\/4\/revisions\/235"}],"wp:attachment":[{"href":"https:\/\/project.inria.fr\/usad\/wp-json\/wp\/v2\/media?parent=4"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}