dbjapan Mailing List Archive (2022)
[dbjapan] [2nd CfP] Now accepting papers: Information Processing & Management (IP&M) (IF: 6.222) Special Issue: Science Behind Neural Language Models
- To: dbjapan [at] dbsj.org
- Subject: [dbjapan] [2nd CfP] Now accepting papers: Information Processing & Management (IP&M) (IF: 6.222) Special Issue: Science Behind Neural Language Models
- From: Ptaszynski Michal <michal.ptaszynski [at] gmail.com>
- Date: Thu, 31 Mar 2022 15:09:49 +0900
Dear members of the mailing list,

*Apologies for cross-posting*

My name is Ptaszynski, from the Kitami Institute of Technology.

We are currently accepting papers for the "Special Issue on Science Behind Neural Language Models" of the journal Information Processing & Management (IP&M) (IF: 6.222). The special issue is run jointly with a thematic track of the Information Processing & Management Conference 2022 (IP&MC2022); at least one author of each accepted manuscript is required to participate in the IP&MC2022 conference.

For details on IP&MC2022, please see the following website:
https://www.elsevier.com/events/conferences/information-processing-and-management-conference

The manuscript submission deadline is June 15, 2022, but papers are sent out for review as soon as they are submitted, and accepted papers are published immediately.

Please consider submitting a paper:
https://www.elsevier.com/events/conferences/information-processing-and-management-conference/author-submission/science-behind-neural-language-models

Best regards,

Michal Ptaszynski, Ph.D. (Information Science), Associate Professor
Department of Computer Science, Kitami Institute of Technology
Room 5103, Building 1
165 Koen-cho, Kitami 090-8507, Japan
TEL/FAX: 0157-26-9327
michal [at] mail.kitami-it.ac.jp

============================================

Information Processing & Management (IP&M) (IF: 6.222)
Special Issue on "Science Behind Neural Language Models"
&
Information Processing & Management Conference 2022 (IP&MC2022)
Thematic Track on "Science Behind Neural Language Models"

Motivation

The last several years have seen an explosive growth in the popularity of neural language models, especially large pretrained language models based on the transformer architecture. The fields of Natural Language Processing (NLP) and Computational Linguistics (CL) have experienced a shift from simple language models such as Bag-of-Words, and word representations such as word2vec or GloVe, to more contextually aware language models such as ELMo and, more recently, BERT and GPT, including their improvements and derivatives. The generally high performance achieved by BERT-based models across a variety of tasks even convinced Google to adopt BERT as the default backbone of its search engine's query expansion module, making BERT-based models mainstream and a strong baseline in NLP/CL research. The popularity of large pretrained language models has also enabled the major growth of companies providing freely available repositories of such models and, more recently, the founding of Stanford University's Center for Research on Foundation Models (CRFM).

However, despite the overwhelming popularity and undeniable performance of large pretrained language models, or "foundation models", the specific inner workings of these models have been notoriously difficult to analyze, and the causes of the (usually unexpected and unreasonable) errors they make have been difficult to untangle and mitigate. As neural language models continue to gain popularity while expanding into multimodality by incorporating visual and speech information, it has become all the more important to thoroughly analyze, fully explain, and understand their internal mechanisms. In other words, the science behind neural language models needs to be developed.

Aims and scope

With the above background in mind, we propose the following Information Processing & Management Conference 2022 (IP&MC2022) Thematic Track and Information Processing & Management journal Special Issue on Science Behind Neural Language Models. The TT/SI will focus on topics that deepen our knowledge of how neural language models work.
Therefore, instead of taking up basic topics from the fields of CL and NLP, such as improvements to part-of-speech tagging or standard sentiment analysis, regardless of whether they apply neural language models in practice, we will focus on promoting research that specifically aims at analyzing and understanding the "bells and whistles" of neural language models, for which a generally accepted science has not yet been established.

Target audience

The TT/SI is aimed at scientists, researchers, scholars, and students performing research on the analysis of pretrained language models, with a specific focus on explainable approaches to language models, analysis of the errors such models make, and methods for debiasing, detoxification, and other ways of improving pretrained language models. The TT/SI will not accept research on basic NLP/CL topics for which the field is already well established, such as improvements to part-of-speech tagging, sentiment analysis, etc., even if they apply neural language models, unless they directly contribute to furthering the understanding and explanation of the inner workings of large-scale pretrained language models.

List of Topics

The Thematic Track / Special Issue invites papers on topics including, but not limited to, the following:
- Neural language model architectures
- Improvement of the neural language model generation process
- Methods for fine-tuning and optimization of neural language models
- Debiasing neural language models
- Detoxification of neural language models
- Error analysis and probing of neural language models
- Explainable methods for neural language models
- Neural language models and linguistic phenomena
- Lottery Ticket Hypothesis for neural language models
- Multimodality in neural language models
- Generative neural language models
- Inferential neural language models
- Cross-lingual or multilingual neural language models
- Compression of neural language models
- Domain-specific neural language models
- Expansion of information embedded in neural language models

Important Dates:

Thematic track manuscript submission due date (authors are welcome to submit early, as reviews will be rolling): June 15, 2022
Author notification: July 31, 2022
IP&MC conference presentation and feedback: October 20-23, 2022
Post-conference revision due date: January 1, 2023

Submission Guidelines:

Submit your manuscript to the Special Issue category (VSI: IPMC2022 HCICTS) through the online submission system of Information Processing & Management:
https://www.editorialmanager.com/ipm/

Authors should prepare their submissions following the Guide for Authors of the IP&M journal (https://www.elsevier.com/journals/information-processing-and-management/0306-4573/guide-for-authors). All papers will be peer-reviewed following the IP&MC2022 reviewing procedures. The authors of accepted papers are obligated to participate in IP&MC2022 and present their paper to the community to receive feedback. Accepted papers will be invited for revision after receiving feedback at the IP&MC2022 conference. Submissions will be given premium handling at IP&M following its peer-review procedure and, if accepted, will be published in IP&M as full journal articles, with an additional option for a short conference version at IP&MC2022.
Please see this infographic for the manuscript flow:
https://www.elsevier.com/__data/assets/pdf_file/0003/1211934/IPMC2022Timeline10Oct2022.pdf

For more information about IP&MC2022, please visit https://www.elsevier.com/events/conferences/information-processing-and-management-conference.

Thematic Track / Special Issue Editors:

Managing Guest Editor:
Michal Ptaszynski (Kitami Institute of Technology)

Guest Editors:
Rafal Rzepka (Hokkaido University)
Anna Rogers (University of Copenhagen)
Karol Nowakowski (Tohoku University of Community Service and Science)

For further information, please feel free to contact Michal Ptaszynski directly.
- Prev by Date: [dbjapan] Call for Papers: IEEE BITS 2022
- Next by Date: [dbjapan] DBSJ Newsletter Vol. 15, No. 1: DEIM 2022, FY2021 Data Analysis Competition (DB Group), WSDM 2022, AAAI 2022