<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="description" content="SummDial">
<link rel="icon" type="image/png" href="images/elitr.png">
<title>SummDial @ SemDial 2022</title>
<!-- Bootstrap core CSS -->
<link href="./dist/css/bootstrap.min.css" rel="stylesheet">
<!-- Fira Sans font -->
<link href="https://fonts.googleapis.com/css?family=Fira+Sans&display=swap" rel="stylesheet">
<!--[if lt IE 9]><script src="../../assets/js/ie8-responsive-file-warning.js"></script><![endif]-->
<!-- HTML5 shim and Respond.js IE8 support of HTML5 elements and media queries -->
<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
<![endif]-->
<!-- Custom styles for this template -->
<link href="styles.css" rel="stylesheet">
<!-- icons -->
<link rel="stylesheet" href="./font-awesome-4.1.0/css/font-awesome.min.css">
<style>
div {
text-align: justify;
text-justify: inter-word;
}
</style>
</head>
<body>
<!-- NAVBAR ================================================== -->
<div class="navbar-wrapper">
<div class="container">
<div class="navbar navbar-inverse navbar-static-top" role="navigation">
<div class="container">
<!-- MENU BUTTON FOR SMALL SCREENS + LOGO ================================ -->
<div class="navbar-header">
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target=".navbar-collapse">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="header-logo-link" href="">
<div class="header-logo">
<span class="letter-highlight">2nd SummDial</span>
@ <span class="letter-highlight">SemDial 2022</span>
<span class="letter-highlight">Aug 24, 2022</span>
</div>
</a>
</div>
<!-- MENU OPTIONS ================================================== -->
<div class="navbar-collapse collapse pull-right">
<ul class="nav navbar-nav">
<!--<li class=""><a href="https://elitr.eu/"><img src="images/elitr.png" alt="logo" height="100"/></a></li>
<li class=""><a href="https://www.sigdial.org"><img src="images/sigdial.png" alt="logo" height="100"/></a></li>
<li class=""><a href="https://ufal.mff.cuni.cz"><img src="images/charles2.png" alt="logo" height="100"/></a></li>-->
<li class=""><a href="">Home</a></li>
<li class="dropdown">
<a href="#" class="dropdown-toggle" data-toggle="dropdown">SummDial<b class="caret"></b></a>
<ul class="dropdown-menu">
<!-- <li><a href="">CfP</a></li>
<li><a href="">Submission Information</a></li>
<li><a href="">Important Dates</a></li>
<li><a href="">Keynote Speaker</a></li>
<li><a href="">Panel</a></li>
<li><a href="">Accepted Submissions</a></li>
<li><a href="">Program Outline</a></li>
<li><a href="">Program Committee</a></li>
<li><a href="">Organizing Committee</a></li>
<li><a href="">Contact</a></li> -->
<li><a href="https://elitr.github.io/automatic-minuting/summdial.html">SummDial 2021</a></li>
<li><a href="https://semdial2022.github.io/">SemDial 2022</a></li>
<!--<li><a href="#program-schedule">SummDial Schedule</a></li>
<li><a href="summdial-cfp.html">Call for Papers</a></li>-->
</ul>
</li>
<li><a href="https://elitr.github.io/automatic-minuting/index.html">AutoMin 2021</a></li>
</ul>
</div>
<!-- MENU OPTIONS END ================================ -->
</div>
</div>
</div>
</div>
<!-- MAIN CONTENT ============================================= -->
<div class="container marketing navbar-spacing">
<div class="row">
<div class="col-md-12">
<h3 id="sdial"><b>2nd SummDial: A <a href="https://semdial2022.github.io/#">SemDial 2022</a> Special Session on Summarization of Dialogues and Multi-Party Meetings</b></h3>
<p>
With a sizeable share of the world's working population going virtual, and the resulting information overload from multiple online meetings, imagine how convenient it would be to simply hover over a past calendar invite and get a concise summary of the meeting proceedings. How about automatically minuting a multimodal, multi-party meeting? Are minutes and multi-party dialogue summaries the same? We believe Automatic Minuting is challenging: there are possibly no agreed-upon guidelines for taking minutes, and people adopt different styles to record them. The minutes also depend on the meeting's category, the intended audience, and the goal or objective of the meeting. We hosted the First SummDial Special Session at SIGDial 2021. The discussions there surfaced several significant problems and challenges in multi-party dialogue and meeting summarization, which we documented in our <a href="https://dl.acm.org/doi/10.1145/3527546.3527561">event report</a>. You can also read the report of the First SummDial @ SIGDial 2021 <a href="https://www.sigir.org/wp-content/uploads/2022/02/p12.pdf">here</a>.
</p>
<p>
Since we witnessed the enthusiastic participation of the dialogue and summarization communities in the first <a href="https://elitr.github.io/automatic-minuting/summdial.html">SummDial Special Session</a>, we are hosting the Second SummDial special session at <a href="https://semdial2022.github.io/#">SemDial 2022</a>. This year, we intend to continue the discussions on the challenges and lessons learned from the previous SummDial. Our goal for this special session is to stimulate intense discussion around this topic and to encourage further interest, research, and collaboration in both the Speech and Natural Language Processing communities. Our topics of interest cover Dialogue Summarization, including but not confined to Meeting Summarization, Chat Summarization, Email Thread Summarization, Customer Service Summarization, Medical Dialogue Summarization, and Multi-modal Dialogue Summarization.
Our shared task on <a href="https://elitr.github.io/automatic-minuting/">Automatic Minuting (AutoMin)</a> at <a href="https://www.interspeech2021.org">Interspeech 2021</a> was another community effort in this direction. We are pleased to announce that the <a href="https://twitter.com/TirthankarSlg/status/1539978910376439808?s=20&t=TvhSgKqMtlvA2aU21ooaLQ">second iteration of the Automatic Minuting (AutoMin) shared task</a> will be held with INLG 2023. More updates soon on the <a href="https://elitr.github.io/automatic-minuting/">AutoMin website</a>.
</p>
<hr class="featurette-divider">
<h1>Keynote Speaker</h1>
<div class="row featurette">
<div class="col-md-9 bio-text">
<h2><a href="https://sites.google.com/site/verenateresarieser/home">Verena Rieser</a></h2>
<a name="keynote"></a>
<p class="lead">Heriot-Watt University, Edinburgh, UK</p>
<p>Verena Rieser leads research on Conversational AI and Natural Language Generation. Verena is a full professor in Computer Science at Heriot-Watt University in Edinburgh, co-founder of ALANA AI, and Director of Ethics at the UK National Center for Robotics. She received her PhD from Saarland University in 2008 and then joined the University of Edinburgh as a postdoctoral research fellow, before taking up a faculty position at Heriot-Watt in 2011, where she was promoted to full professor in 2017. She is the PI of several UKRI-funded research projects and holds industry awards from Apple, Amazon, Google, and Adobe. Her team is a double prize winner of the Amazon Alexa Prize challenge, and they currently compete as a sponsored entry in the Amazon SimBot challenge. Verena was recently awarded a Leverhulme Senior Research Fellowship by the Royal Society in recognition of her work in developing multimodal conversational systems.</p>
<!-- <p>youngsr "at" ornl.gov</p> -->
<h2>Sources of Truth: Content-dependent Natural Language Generation from different source modalities</h2>
<h3><b>Abstract:</b></h3>
<p><i>Neural models for Natural Language Generation (NLG) are known to produce fluent output; however, the content is often bland, inconsistent, or inappropriate. In this talk, I will argue that one way to alleviate these issues is via 'grounding' or 'conditioning' the output on external sources -- i.e., encoding external knowledge which provides additional content at decoding time. I will illustrate this argument for four different tasks: data-to-text generation, document summarisation, visual dialogue, and open-domain dialogue systems.</i></p>
</div>
<div class="col-md-3 bio-photo">
<img class="featurette-image img-responsive imagedropshadow" src="images/verena-rieser.jpg" alt="Verena Rieser">
</div>
</div>
<hr class="featurette-divider">
<h1>Panel Discussion </h1>
<h2>Current Challenges and Advances in Multiparty Meeting and Dialogue Summarization</h2>
<a name="panel"></a>
<div class="row featurette">
<div class="col-md-9 bio-text">
<h2 class="featurette-name-heading"><a href="https://sites.google.com/site/nancyfchen/home">Nancy F. Chen</a></h2>
<p class="lead">Institute for Infocomm Research (I2R), Agency for Science, Technology, and Research (A*STAR), Singapore</p>
<p>
Dr. Nancy Chen is a laboratory head, principal investigator and senior scientist at the Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR), Singapore, where she leads research on conversational AI and language intelligence with applications in education, healthcare, journalism, and defense. Speech evaluation technology developed by her team is deployed at the Ministry of Education in Singapore to support home-based learning, and their low-resource spoken language processing system was one of the top performers in the NIST Open Keyword Search Evaluations (2013-2016). She has received numerous awards, including Singapore 100 Women in Tech (2021), Young Scientist Award at MICCAI 2021, Best Paper Award at SIGDIAL 2021, the 2020 P&G Connect + Develop Open Innovation Award, the 2019 L’Oréal Singapore For Women in Science National Fellowship, Best Paper at APSIPA ASC (2016), MOE Outstanding Mentor Award (2012), the Microsoft-sponsored IEEE Spoken Language Processing Grant (2011), and the NIH (National Institute of Health) Ruth L. Kirschstein National Research Award (2004-2008). Technology from her team has also resulted in spin-off companies such as nomopai to help engage customers with confidence and empathy. She received her Ph.D. from MIT and Harvard in 2011, and worked at MIT Lincoln Laboratory before joining I2R.
</p>
<!-- <p>pattonrm "at" ornl.gov</p> -->
</div>
<div class="col-md-3 bio-photo">
<img class="featurette-image img-responsive imagedropshadow" src="images/nancy_chen.jpg" alt="Nancy Chen">
</div>
</div>
<div class="row featurette">
<div class="col-md-9 bio-text">
<h2 class="featurette-name-heading"><a href="https://www.scss.tcd.ie/~ygraham/">Yvette Graham</a></h2>
<p class="lead">Trinity College Dublin, Ireland</p>
<p>Yvette Graham is a Natural Language Processing (NLP) researcher and Assistant Professor in AI at Trinity College Dublin, Ireland. Her work includes the development of systems for a wide range of AI/NLP tasks, including Machine Translation, Dialogue Systems, Sentiment Analysis, Video Captioning, and Lifelong Retrieval. Besides NLP, Dr. Graham is also widely known for her work on NLP evaluation, which has revealed misconceptions and bias in system evaluations and has been adopted by high-profile competitions including the Conference on Machine Translation and TRECVid. She has published upwards of 70 papers in venues such as EMNLP, ACL, and JNLE, and was awarded a best paper award at the Annual Conference of the Association for Computational Linguistics in 2015.</p>
<!-- <p>pattonrm "at" ornl.gov</p> -->
</div>
<div class="col-md-3 bio-photo">
<img class="featurette-image img-responsive imagedropshadow" src="images/yvette_graham.jpg" alt="Yvette Graham">
</div>
</div>
<div class="row featurette">
<div class="col-md-9 bio-text">
<h2 class="featurette-name-heading"><a href="http://www.ikonstas.net">Yannis Konstas</a></h2>
<p class="lead">Heriot-Watt University, UK</p>
<p>Yannis Konstas is an Assistant Professor of Computer Science at Heriot-Watt University and Head of Machine Learning at Alana AI. His research focuses on Natural Language Processing, and in particular Natural Language Generation, with an emphasis on scalable machine learning models.</p>
<!-- <p>pattonrm "at" ornl.gov</p> -->
</div>
<div class="col-md-3 bio-photo">
<img class="featurette-image img-responsive imagedropshadow" src="images/yannis_konstas.jpg" alt="Yannis Konstas">
</div>
</div>
<div class="row featurette">
<div class="col-md-9 bio-text">
<h2 class="featurette-name-heading"><a href="https://www.microsoft.com/en-us/research/people/budeb/">Budhaditya Deb</a></h2>
<p class="lead">Microsoft Search, Assistant and Intelligence (MSAI) group, US</p>
<p>Budhaditya Deb is a Principal Researcher in the Language, Learning and Privacy lab at Microsoft Research, Redmond. His current research interests are in Natural Language Generation, with a focus on learning from natural interactions and feedback in zero- and few-shot learning scenarios. Budhaditya has also led the research and development of several AI-based products for Microsoft; recent applications include Suggested Replies in Outlook and Teams conversations, and Meeting Insights and Summarization for Teams meetings. Prior to Microsoft, Budhaditya spent several years at GE Research and BBN Technologies as a researcher working on various industrial, academic, and government projects after receiving his Ph.D. from Rutgers University in 2005.</p>
<!-- <p>pattonrm "at" ornl.gov</p> -->
</div>
<div class="col-md-3 bio-photo">
<img class="featurette-image img-responsive imagedropshadow" src="images/budhaditya_deb.jpg" alt="Budhaditya Deb">
</div>
</div>
<div class="row featurette">
<div class="col-md-9 bio-text">
<h2 class="featurette-name-heading"><a href="https://tuetschek.github.io">Ondřej Dušek</a></h2>
<p class="lead">Charles University, CZ</p>
<p>Ondřej Dušek is an assistant professor at the Institute of Formal and
Applied Linguistics, Faculty of Mathematics and Physics, Charles
University. His research is in the areas of dialogue systems and
natural language generation, including summarization; he specifically
focuses on neural-networks-based approaches to these problems and
their evaluation. Ondřej got his PhD in 2017 at Charles University.
Between 2016 and 2018, he worked at Heriot-Watt University in
Edinburgh and co-supervised a two-time finalist team in the Amazon
Alexa Prize competition. There he also co-organized the E2E NLG text
generation challenge, and since then he has been involved in multiple
efforts around the evaluation of generated text. He is now in the
early stages of his ERC Starting Grant aiming to develop new, fluent
and accurate methods for language generation.</p>
<!-- <p>pattonrm "at" ornl.gov</p> -->
</div>
<div class="col-md-3 bio-photo">
<img class="featurette-image img-responsive imagedropshadow" src="images/ondrej_dusek.jpg" alt="Ondrej Dusek">
</div>
</div>
<hr class="featurette-divider">
<h1>Program Schedule</h1>
<a name="program-schedule"></a>
<table class="table program">
<thead>
<tr>
<th scope="col"></th>
<td class="tz">GMT+1 (Ireland Time)</td>
</tr>
</thead>
<tbody>
<tr>
<th scope="row">Opening</th>
<td class="tz">14:00-14:05</td>
</tr>
<tr>
<th scope="row">Keynote: <a href="#keynote">Verena Rieser</a></th>
<td class="tz">14:05-14:50</td>
</tr>
<tr>
<th scope="row">Break - 5 minutes</th>
<td class="tz">14:50-14:55</td>
</tr>
<tr>
<th scope="row"><a href="#panel">Panel Discussion</a></th>
<td class="tz">14:55-16:25</td>
</tr>
<tr>
<th scope="row">Break - 10 minutes</th>
<td class="tz">16:25-16:35</td>
</tr>
<tr>
<th scope="row">(Invited Talk 1) <a href="https://aclanthology.org/2021.emnlp-main.530/">Simple Conversational Data Augmentation for Semi-supervised Abstractive Conversation Summarization </a></th>
<td class="tz">16:35-16:55</td>
</tr>
<tr>
<th scope="row">(Invited Talk 2) <a href="https://aclanthology.org/2022.naacl-main.415/">CONFIT: Toward Faithful Dialogue Summarization with Linguistically-Informed Contrastive Fine-tuning</a></th>
<td class="tz">16:55-17:15</td>
</tr>
<tr>
<th scope="row">(Invited Talk 3) <a href="https://aclanthology.org/2022.naacl-main.418.pdf">DialSummEval: Revisiting Summarization Evaluation for Dialogues</a></th>
<td class="tz">17:15-17:35</td>
</tr>
<tr>
<th scope="row">(Invited Talk 4) <a href="https://aclanthology.org/2022.findings-acl.302.pdf">Using Dialogue Summarization for Few-shot Dialogue State Tracking</a></th>
<td class="tz">17:35-17:55</td>
</tr>
<tr>
<th scope="row">Closing</th>
<td class="tz">17:55-18:00</td>
</tr>
</tbody>
</table>
<h3 id="organizers"><b>Organizers</b></h3>
<ul>
<li><a href="https://elitr.eu/tirthankar-ghosal/">Tirthankar Ghosal</a>, Institute of Formal and Applied Linguistics, Charles University, Czech Republic </li>
<li><a href="">Xinnuo Xu</a>, University of Edinburgh, UK </li>
<li><a href="">Muskaan Singh</a>, IDIAP Research Institute, Switzerland </li>
<li><a href="https://ufal.mff.cuni.cz/ondrej-bojar">Ondřej Bojar</a>, Institute of Formal and Applied Linguistics, Charles University, Czech Republic </li>
</ul>
<h3 id="contact"><b>Contact</b></h3> <h4> <a href="mailto:[email protected]">[email protected]</a></h4>
<h3 id="ack"><b>Acknowledgement</b></h3> <h4> <a href="https://ufal.mff.cuni.cz/grants/neurem3">GAČR Grant id 19-26934X (NEUREM3)</a></h4>
<!-- FOOTER ========================================== -->
<hr><br />
<footer>
<div class="footer-wrapper">
<div class="footer-left">
<!-- <p>Follow us: <a href="https://twitter.com/elitrorg">https://twitter.com/elitrorg</a></p>
<p>© 2020 European Live Translator, A Horizon 2020 Project, <a href="https://elitr.eu/">https://elitr.eu/</a></p>
<p>Ondřej Bojar would also like to acknowledge the support from the grant <a href="https://ufal.mff.cuni.cz/grants/neurem3">19-26934X (NEUREM3)</a> of the Czech Science Foundation.</p> -->
<p>Website Template Acknowledgement: <a href="https://sdproc.org/2021/">SDP 2021</a></p>
<!-- <p>
<a href="https://www.pexels.com/photo/library-university-books-students-12064/">
Photo of Library Room by Tamas Meszaros (Free to use)
</a>
</p>
<p>
<a href="https://pxhere.com/en/photo/1575603">
Picture of a network by asawin form PxHere (Creative Commons CC0)
</a>
</p>-->
</div>
<div class="footer-right">
<a href="#">Back to top</a>
</div>
</div>
</footer>
</div>
<!-- Bootstrap core JavaScript ================================================== -->
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
<script src="./dist/js/bootstrap.min.js"></script>
<script src="./assets/js/docs.min.js"></script>
</body>
</html>