dbjapan Mailing List Archive (2020)
[dbjapan] CFP: MIPR 2021 Special Session on Knowledge-Driven Multi-modal Deep Analysis for Multimedia
- To: dbjapan [at] dbsj.org
- Subject: [dbjapan] CFP: MIPR 2021 Special Session on Knowledge-Driven Multi-modal Deep Analysis for Multimedia
- From: Jianwei ZHANG <zhang [at] iwate-u.ac.jp>
- Date: Mon, 12 Oct 2020 14:51:22 +0900
Dear members of the Database Society of Japan (please excuse duplicate postings),

This is Zhang from Iwate University. At the international conference MIPR 2021, to be held March 22-24, 2021, we are organizing a Special Session on Knowledge-Driven Multi-modal Deep Analysis for Multimedia. Please find the CFP below; we would be grateful if those working in related fields would consider submitting. Thank you very much.

https://mipr2021.org/pages/ss_kmdam/

--------------------------------------------------------------------------
Special Session: Knowledge-Driven Multi-modal Deep Analysis for Multimedia
--------------------------------------------------------------------------

With the rapid development of the Internet and multimedia services over the past decade, a huge amount of user-generated and service-provider-generated multimedia data has become available. These data are heterogeneous and multi-modal in nature, posing great challenges for processing and analysis. Multi-modal data consist of a mixture of various types of data from different modalities, such as text, images, video, and audio. Data-driven correlational representation and knowledge-guided fusion are the main scientific problems in multimedia analysis. To gather and present innovative research on the following aspects: 1) multi-modal correlational representation: multi-modal fusion of data across different modalities, and 2) multi-modal data and knowledge fusion: multi-modal fusion of data with domain knowledge, we solicit submissions of high-quality manuscripts reporting state-of-the-art techniques and trends in this field.
** List of Topics
- Multi-modal representation learning with knowledge
- Multi-modal data fusion with knowledge
- Knowledge representation for multi-modal data
- Deep cross-modality alignment with knowledge
- Methodology and architectures to improve model explainability with knowledge
- Multi-modal deep analysis for innovative multimedia applications, such as person re-identification, social network analysis, cross-modal retrieval, recommendation systems, and so on
** Important Dates
- Paper Submission Deadline: November 20, 2020
- Notification of Acceptance: December 25, 2020
- Camera-Ready Deadline: January 8, 2021

** Paper Submission Instructions
Special session paper manuscripts must be written in English and be up to 6 pages excluding references, following the IEEE two-column template instructions. Submissions should include the title, author(s), affiliation(s), e-mail address(es), abstract, and postal address(es) on the first page. The templates in Word or LaTeX format are available here. To submit a paper to this session, please select "Special Session: Knowledge-Driven Multi-modal Deep Analysis for Multimedia" on the Microsoft CMT submission site.
** Special Session Organizers
- Jianwei Zhang (Iwate University, Japan) zhang [at] iwate-u.ac.jp
- Xiaohui Tao (University of Southern Queensland, Australia) Xiaohui.tao [at] usq.edu.au
---
Jianwei Zhang / Faculty of Science and Engineering, Iwate University
http://www.zl.cis.iwate-u.ac.jp/~zjw/wiki/