Accepted Tutorials

Scheduling Information

For up-to-date information on date, time, and location of the tutorials below, we refer to the IJCAI 2022 Schedule.

T1: A Tutorial on Domain Generalization

Jindong Wang (Microsoft Research); Haoliang Li (City University of Hong Kong); Sinno Pan (NTU, Singapore)

This tutorial is dedicated to introducing the latest advancements in Domain Generalization (DG).
Different from transfer learning and domain adaptation, which assume the availability of target-domain data, DG goes a step further and does not require access to target data. The purpose of DG is to learn, from one or several training domains with different probability distributions, a generalized model that achieves good out-of-distribution generalization. The intended audience is machine learning researchers and industry practitioners with a special interest in transfer learning, domain adaptation, and generalization. Our tutorial aims to make these techniques easier to learn and use in real applications.

https://dgresearch.github.io

T2: Pure Exploration in Multi-Armed Bandits

Zixin Zhong (National University of Singapore), Vincent Tan (National University of Singapore) 

This tutorial focuses on pure exploration of the multi-armed bandit (MAB) problem. It is intended to introduce the background, several state-of-the-art algorithms and their theoretical guarantees, and some fundamental analytical techniques in this area.
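
As a flavor of the algorithmic ideas in this area, the classical successive-elimination strategy for best-arm identification can be sketched as follows. This is an illustrative simplification, not code from the tutorial; the Bernoulli arm means, pull counts, and confidence radius below are invented for the example.

```python
import numpy as np

def successive_elimination(means, pulls_per_round=200, delta=0.05, rng=None):
    """Best-arm identification by successive elimination (fixed confidence).

    `means` are the true Bernoulli arm means, used only to simulate rewards.
    """
    rng = np.random.default_rng(rng)
    k = len(means)
    active = list(range(k))
    totals = np.zeros(k)   # cumulative reward per arm
    counts = np.zeros(k)   # number of pulls per arm
    while len(active) > 1:
        for a in active:   # pull every active arm equally often
            totals[a] += rng.binomial(pulls_per_round, means[a])
            counts[a] += pulls_per_round
        est = totals[np.array(active)] / counts[np.array(active)]
        # Hoeffding-style confidence radius (all active arms share one count)
        rad = np.sqrt(np.log(2 * k / delta) / (2 * counts[active[0]]))
        best = est.max()
        # keep only arms whose upper bound beats the leader's lower bound
        active = [a for a, e in zip(active, est) if e + rad > best - rad]
    return active[0]

arm = successive_elimination([0.2, 0.5, 0.8], rng=0)  # identifies arm 2
```

The loop pulls all surviving arms uniformly and discards an arm as soon as its confidence interval falls entirely below the empirical leader's, which is the core mechanism behind many fixed-confidence pure-exploration guarantees.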

https://zixinzh.github.io/homepage/conf_tutorial/

T3: Distortion in Social Choice & Beyond

Nisarg Shah (University of Toronto), Dominik Peters (CNRS, LAMSADE, Université Paris Dauphine)

The distortion framework offers a way to quantitatively evaluate economic efficiency of collective decision-making algorithms. Originally proposed in the context of voting, it has since been extended to fair division, matching, graph algorithms, and beyond.

This tutorial will begin by surveying distortion in voting theory. Optimal distortion bounds for deterministic and randomized voting rules for aggregating ranked ballots will be covered under both utilitarian and metric cost settings. This will be followed by a survey of the information-distortion tradeoff, where ranked ballots are replaced by more or less expressive ballot formats. Towards the end, the tutorial will present applications of the framework to other research areas. No prior background in social choice theory or the distortion framework is necessary.

https://www.cs.toronto.edu/~nisarg/tutorials/distortion.html

T4: Adversarial Sequential Decision-Making

Goran Radanovic (Max Planck Institute for Software Systems); Adish Singla (MPI-SWS); Wen Sun (Cornell University); Xiaojin Zhu (University of Wisconsin-Madison)

This tutorial will provide an overview of recent research on adversarial learning in sequential decision-making settings. In particular, the tutorial will focus on adversarial attacks and defense mechanisms in the context of agents based on multi-armed bandits, reinforcement learning, and multi-agent interactions.

https://adversarial-rl.org/ijcai2022/

T6: Deep Learning Methods for Query Auto Completion

Manish Gupta (Microsoft, India); Puneet Agrawal (Microsoft, India)

Query Auto Completion (QAC) aims to help users reach their search intent faster and is a gateway to search for users. Every day, billions of keystrokes across hundreds of languages are served by Bing Autosuggest in less than 100 ms. The expected suggestions may differ depending on user demography, previous search queries, and current trends. In general, the suggestions in the AutoSuggest block are expected to be relevant, personalized, fresh, and diverse, and need to be guarded against being defective, hateful, adult, or offensive in any way. In this tutorial, we will discuss how state-of-the-art deep learning models have been leveraged for ranking in QAC, personalization, spell correction, and natural language generation (NLG) for QAC.
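
For orientation, the classical non-neural baseline that deep QAC models improve upon is most-popular-completion lookup over a query log. A minimal sketch follows; the toy log and function names are illustrative, not part of the tutorial or of Bing's system.

```python
from collections import defaultdict

def build_index(query_log):
    """Map each prefix to its completions, scored by query frequency."""
    freq = defaultdict(int)
    for q in query_log:
        freq[q] += 1
    index = defaultdict(list)
    for q, n in freq.items():
        for i in range(1, len(q) + 1):
            index[q[:i]].append((n, q))
    return index

def suggest(index, prefix, k=3):
    """Return the k most popular completions of `prefix` (ties broken alphabetically)."""
    ranked = sorted(index.get(prefix, []), key=lambda t: (-t[0], t[1]))
    return [q for n, q in ranked][:k]

log = ["weather today", "weather tomorrow", "weather today", "web mail"]
index = build_index(log)
top = suggest(index, "we")  # most popular completions first
```

Deep QAC models replace the static popularity score with learned ranking, personalization, and generation, but the prefix-to-candidates retrieval step remains a useful mental model.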

https://aka.ms/dl4qac

T7: Constraints in Fair Division

Ayumi Igarashi (National Institute of Informatics); Warut Suksompong (National University of Singapore)

The fair allocation of resources to interested agents is a fundamental problem in society and has received significant attention from the game theory and AI communities in the past decade. In this tutorial, we will survey the active line of work on investigating common types of constraints in practical fair division problems, including connectivity, cardinality, matroid, geometric, separation, budget, and conflict constraints.

http://www.comp.nus.edu.sg/~warut/ijcai22-tutorial.html

T8: Tensor Computation for Data Processing

Yipeng Liu (University of Electronic Science and Technology of China)

Tensors are a natural representation for multi-dimensional data, and tensor-computation-based data processing methods can avoid the loss of multi-linear data structure incurred by classical matrix-based counterparts. In this tutorial, a series of tensor-based machine learning methods are presented as multi-linear extensions of classical sparse learning, missing component analysis, principal component analysis, subspace clustering, linear regression, support vector machines, deep neural networks, etc.
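
As a concrete building block, many of these multi-linear extensions rely on the mode-n unfolding, which rearranges a tensor's mode-n fibers into a matrix. A minimal NumPy sketch (illustrative, not from the tutorial):

```python
import numpy as np

def unfold(T, mode):
    """Mode-`mode` unfolding: arrange the mode-`mode` fibers as matrix columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

T = np.arange(24).reshape(2, 3, 4)   # a small 2 x 3 x 4 tensor
M0 = unfold(T, 0)                    # shape (2, 12)
M1 = unfold(T, 1)                    # shape (3, 8)
```

Matrix methods (SVD, sparse coding, regression) applied to each unfolding, rather than to one flattened matrix, are what preserves the multi-linear structure the abstract refers to.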

https://yipengliu.github.io/ijcai2022tutorial/

T9: Spoken Language Understanding: Recent Advances and New Frontiers

Libo Qin (Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology); Wanxiang Che (Research Center for Social Computing and Information Retrieval, Harbin Institute of Technology); Zhou Yu (Columbia University)

This tutorial presents a comprehensive and comparative review of spoken language understanding (SLU), which aims to extract semantic constituents from natural language utterances in task-oriented dialogue systems and serves as a major goal of artificial intelligence. The discussion covers background, development, influence, datasets, a new taxonomy of state-of-the-art techniques, and recent trends and challenges.

https://github.com/yizhen20133868/Awesome-SLU-Survey

T10: Science communication for AI researchers

Lucy Smith (AIhub.org)

Would you like to learn how to communicate your AI research to a general audience? In this tutorial you will learn how to turn your research articles into blog posts, how to use social media to promote your work, and how to avoid hype when writing about your research.

https://aihub.org/science-communication-for-ai-researchers/

T11: Opinion Formation in Social Networks: Models and Computational Problems

Aristides Gionis (KTH Royal Institute of Technology); Stefan Neumann (KTH Royal Institute of Technology); Bruno Ordozgoiti (Queen Mary University of London)

Social networks have become an integral part of modern societies, and over the last few years they have exhibited increasing degrees of polarization. In this tutorial we consider models from sociology that allow us to quantify opinion formation, and polarized behavior in particular, and we present computational problems that arise in this domain, discussing key contributions from the data science and machine learning communities.
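
One of the standard sociological models in this literature, the Friedkin–Johnsen model, can be simulated in a few lines; agents repeatedly average their innate opinion with their neighbours' expressed opinions. The toy network, weights, and opinions below are invented for illustration.

```python
import numpy as np

def friedkin_johnsen(W, s, iters=200):
    """Friedkin-Johnsen opinion dynamics.

    W[i, j] >= 0 is the influence weight of agent j on agent i, and s is the
    vector of innate opinions. Expressed opinions converge to the equilibrium
    z = (I + L)^{-1} s, where L is the graph Laplacian of W.
    """
    z = s.astype(float).copy()
    for _ in range(iters):
        z = (s + W @ z) / (1.0 + W.sum(axis=1))   # fixed-point iteration
    return z

# two tightly linked agents joined to a dissenter by one weak tie
W = np.array([[0.0, 1.0, 0.1],
              [1.0, 0.0, 0.0],
              [0.1, 0.0, 0.0]])
s = np.array([1.0, 1.0, -1.0])
z = friedkin_johnsen(W, s)   # the weak tie pulls agent 2 toward, not across
```

Quantities such as polarization and disagreement, which the tutorial studies computationally, are typically defined directly on this equilibrium vector.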

https://sites.google.com/view/tutorialopinionformation-ijcai

T12: Recent Advances in Bayesian Optimization

Janardhan Rao Doppa (Washington State University); Aryan Deshwal (Washington State University); Syrine Belakaria (Washington State University)

Many engineering and scientific applications, including automated machine learning (e.g., neural architecture search and hyper-parameter tuning), involve making design choices to optimize one or more expensive-to-evaluate objectives. Some examples include tuning the knobs of a compiler to optimize the performance and efficiency of a set of software programs; designing new materials to optimize strength, elasticity, and durability; and designing hardware to optimize performance, power, and area. Bayesian Optimization (BO) is an effective framework to solve black-box optimization problems with expensive function evaluations. The key idea behind BO is to build a cheap surrogate statistical model (e.g., a Gaussian process) using the real experimental data, and to employ it to intelligently select the sequence of experiments or function evaluations using an acquisition function, e.g., expected improvement (EI) or upper confidence bound (UCB).
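
To make the key idea concrete, here is a minimal, self-contained BO loop on [0, 1] with a tiny NumPy Gaussian-process surrogate and a UCB acquisition. Every hyperparameter here (length scale, grid resolution, beta, the quadratic test function) is invented for illustration and is not from the tutorial.

```python
import numpy as np

def rbf(A, B, ls=0.2):
    """Squared-exponential kernel with unit prior variance."""
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(Xt, yt, Xq, noise=1e-6):
    """GP posterior mean and variance at query points Xq given data (Xt, yt)."""
    K = rbf(Xt, Xt) + noise * np.eye(len(Xt))
    Ks = rbf(Xq, Xt)
    mu = Ks @ np.linalg.solve(K, yt)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.maximum(var, 1e-12)

def bayes_opt(f, iters=20, beta=2.0):
    """Maximize f on [0, 1]: fit surrogate, pick the UCB maximizer, evaluate, repeat."""
    grid = np.linspace(0.0, 1.0, 201)
    X, y = [0.0, 1.0], [f(0.0), f(1.0)]       # two initial evaluations
    for _ in range(iters):
        mu, var = gp_posterior(np.array(X), np.array(y), grid)
        ucb = mu + beta * np.sqrt(var)        # upper confidence bound acquisition
        x = grid[np.argmax(ucb)]
        X.append(float(x)); y.append(f(float(x)))
    return X[int(np.argmax(y))]

# expensive black-box stand-in with its maximum at x = 0.3
best = bayes_opt(lambda x: -(x - 0.3) ** 2)
```

The UCB rule trades off exploitation (high posterior mean) against exploration (high posterior uncertainty), which is exactly the surrogate-plus-acquisition pattern described above.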

There is a large body of work on BO for single-objective optimization in the single-fidelity setting (i.e., experiments are expensive and accurate) for continuous input spaces. However, BO work in recent years has focused on more challenging problem settings, including optimization of multiple objectives; optimization with multi-fidelity function evaluations (which vary in resource cost and accuracy); optimization with black-box constraints, with applications to safety; optimization over combinatorial spaces (e.g., sequences, trees, and graphs); and optimization over hybrid spaces (mixtures of discrete and continuous input variables). The goal of this tutorial is to present a comprehensive survey of BO, from its foundations to these recent advances, focusing on challenges, principles, algorithmic ideas, and their connections.

https://bayesopt-tutorial.github.io

T13: Recent Advances in Retrieval-Augmented Text Generation

Deng Cai (The Chinese University of Hong Kong); Yan Wang (Tencent AI Lab); Lemao Liu (Tencent AI Lab); Shuming Shi (Tencent AI Lab)

Recently, retrieval-augmented text generation has achieved state-of-the-art performance in many NLP tasks and has attracted increasing attention from the NLP and AI communities. This tutorial therefore aims to present recent advances in retrieval-augmented text generation comprehensively and comparatively. It first highlights the generic paradigm of retrieval-augmented text generation, then reviews notable works for different text generation tasks, including dialogue generation, machine translation, and other generation tasks, and finally points out limitations and shortcomings to facilitate future research.

https://github.com/lemaoliu/retrieval-generation-tutorial

T14: Automated Synthesis: Towards the Holy Grail of AI

Kuldeep S Meel (National University of Singapore); Supratik Chakraborty (IIT Bombay); S Akshay (IIT Bombay); Priyanka Golia (IIT Kanpur); Subhajit Roy (IIT Kanpur)

The seminal work of Freuder articulated the holy grail of Computer Science as: “the user states the problem and the computer solves it”. While machine intelligence powerful enough to achieve this general goal remains elusive, significant advances have been made in several sub-areas in the past few decades. One such sub-area is automated synthesis, wherein a machine automatically synthesizes programs (also represented as circuits) that provably meet the end-user’s functional requirements. We will present approaches that combine advances in automated reasoning, knowledge compilation, and machine learning to solve a wide variety of practical functional synthesis problems.

Given the fundamental importance of synthesis in computer science, recent developments, naturally, have been reported in several communities besides the core AI conferences such as formal methods, programming languages, and software engineering; as such, a typical IJCAI attendee may not be aware of all the recent advances. The goal of the tutorial is to remedy the situation by introducing the Artificial Intelligence researcher and practitioner to the theory and tools of this emerging area of importance. This tutorial is expected to benefit practitioners and researchers who want to automatically build provably correct (sub)-systems from logical specifications.

https://priyanka-golia.github.io/ijcai22-tutorial/index.html

T15: Deep Energy-Based Learning

Jianwen Xie (Baidu Research)

In recent years, there has been growing interest in ConvNet-parametrized energy-based generative models. The concomitant need for representation, generation, efficiency, and scalability in generative models is addressed by the framework of ConvNet-parametrized EBMs. Specifically, different from existing popular generative models such as GANs and VAEs, the energy-based generative model can unify bottom-up representation and top-down generation in a single framework, and can be trained by "analysis by synthesis" without recruiting an extra auxiliary model. Both model parameter updates and data synthesis can be efficiently computed by back-propagation, and the model can be easily designed and scaled up. The expressive power and advantages of this framework have launched a series of research works leading to significant theoretical and algorithmic maturity. Due to these major advantages over conventional models, energy-based generative models are now utilized in many computer vision tasks, including image, video, 3D volumetric shape, and point cloud modeling and synthesis.

This tutorial will provide a comprehensive introduction to recent advances in energy-based learning in computer vision. An intuitive and systematic understanding of the underlying learning objective and sampling strategy will be developed, and different types of computer vision tasks successfully solved by energy-based generative frameworks will be presented. Besides introducing the energy-based framework and its state-of-the-art applications, this tutorial aims to enable researchers to apply energy-based learning principles in other contexts of computer vision.

https://energy-based-models.github.io/ijcai2022-tutorial

T16: When Multiple Agents Care About More than One Objective

Diederik M Roijers (HU University of Applied Sciences); Roxana Rădulescu (Vrije Universiteit Brussel)

Many real-world decision problems have more than a single objective, and more than one agent. Mathematically, this means that agents receive a reward vector, rather than a scalar reward. This might seem like a minor modelling change; however, it changes the problem at its core, from its optimisation criterion to its solutions. For example, the well-known game-theory result that every (single-objective) normal-form game has a Nash equilibrium no longer holds when the agents care about more than one objective.

In this tutorial, we will start from what it means to care about more than one aspect of the solution, and why you should care about that when modelling multi-agent settings. Then we will go into what agents should optimise for in multi-objective settings, and discuss different assumptions, culminating in a taxonomy of multi-objective multi-agent settings and accompanying solution concepts. We will then follow up with a few recent and surprising results from the multi-objective decision making field. Finally, we will end with tips and tricks for identifying and dealing with multi-objective multi-agent problems. (They are everywhere!)

http://roijers.info/motutorial.html

T17: Mechanism Design without Money: Matching, Facility Locations, and Beyond

Haris Aziz (UNSW Sydney & Data61, CSIRO); Hau Chan (University of Nebraska-Lincoln); Hadi Hosseini (Penn State University); Chenhao Wang (Beijing Normal University)

This tutorial aims to introduce audiences to algorithmic mechanism design without money and its applications: strategic environments in which the mechanism designer must elicit private information from the agents in order to generate desirable outcomes and implement desirable mechanism properties when monetary transfers are not allowed. Audiences will be exposed to various classical mechanism design settings (e.g., matching and facility location), mechanisms' desired properties and solution concepts, and algorithmic tools and mechanisms. The tutorial will also cover some recent directions and applications of mechanism design without money.

https://sites.google.com/view/ijcaiecai-22-tutorialmdwomoney/home

T18: Disentangled Representation Learning: Approaches and Applications

Xin Wang (Tsinghua University); Hong Chen (Tsinghua University); Wenwu Zhu (Tsinghua University)

Discovering and recognizing the hidden factors behind observable data serves as one crucial step for machine learning algorithms to better understand the world.
However, it remains a challenging problem for current deep learning models, which heavily rely on data representations. To address this challenge, disentangled representation learning, a cutting-edge topic in both academia and industry, aims at learning a disentangled representation for each object, where different parts of the representation express different (disentangled) semantics, so as to improve the explainability and controllability of machine learning models. Notably, it has achieved great success in diverse fields, such as image/video generation, recommender systems, and graph neural networks, covering a variety of areas ranging from computer vision to data mining.

In this tutorial, we will disseminate and promote the recent research achievements on disentangled representation learning as well as its applications, which is an exciting and fast-growing research direction in the general field of machine learning.
We will also advocate novel, high-quality research findings, and innovative solutions to the challenging problems in disentangled representation learning.

This tutorial consists of five parts. We first give a brief introduction to the research and industrial motivation, followed by discussions of the basics, fundamentals, and applications of disentangled representation learning. We will also discuss some recent advances covering disentangled graph representation learning and disentangled representation for recommendation. We finally share some of our insights on trends in disentangled representation learning.

https://mn.cs.tsinghua.edu.cn/drl-ijcai2022.html

T19: Graph Neural Networks: Foundation, Frontiers and Applications

Lingfei Wu (JD.COM Silicon Valley Research Center); Peng Cui (Tsinghua University); Jian Pei (Simon Fraser University); Zhao Liang (Emory University); Xiaojie Guo (JD.COM Silicon Valley Research Center)

The field of graph neural networks (GNNs) has seen rapid and incredible strides in recent years. Graph neural networks, also known as deep learning on graphs, graph representation learning, or geometric deep learning, have become one of the fastest-growing research topics in machine learning, especially deep learning. This wave of research at the intersection of graph theory and deep learning has also influenced other fields of science, including recommendation systems, computer vision, natural language processing, inductive logic programming, program synthesis, software mining, automated planning, cybersecurity, and intelligent transportation. However, as the field rapidly grows, it has become extremely challenging to gain a global perspective of the developments of GNNs. Therefore, we feel the urgency to bridge this gap with a comprehensive tutorial on this fast-growing yet challenging topic.

This tutorial, Graph Neural Networks (GNNs): Foundation, Frontiers and Applications, will cover a broad range of topics in graph neural networks by reviewing and introducing the fundamental concepts and algorithms of GNNs, new research frontiers of GNNs, and broad and emerging applications of GNNs. In addition, rich tutorial materials will be included to help the audience gain a systematic understanding, drawing on our recently published book, Graph Neural Networks (GNNs): Foundation, Frontiers and Applications, one of the most comprehensive books on GNNs for researchers and practitioners.

https://graph-neural-networks.github.io/tutorial_ijcai22.html

T20: Hybrid Probabilistic Inference with Algebraic and Logical Constraints

Paolo Morettin (KU Leuven); Pedro Zuidberg Dos Martires (KU Leuven); Samuel M Kolb (KU Leuven); Andrea Passerini (University of Trento)

In this tutorial we study probabilistic inference in the presence of algebraic and logical constraints. We will cover the theoretical foundation and computational challenges related to probabilistic inference in constrained settings, while exploring at the same time highly relevant applications, such as probabilistic formal verification of hybrid systems and learning models that satisfy constraints by construction.
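
As a flavor of the problem setting, the probability of a query mixing logical structure (a disjunction) with algebraic constraints (linear inequalities) over continuous variables can be estimated by naive Monte Carlo, as in the purely illustrative sketch below; the tutorial covers exact and far more scalable inference techniques.

```python
import random

def mc_probability(event, n=200_000, seed=0):
    """Estimate P(event) for independent x, y ~ Uniform(0, 1) by sampling."""
    rng = random.Random(seed)
    hits = sum(event(rng.random(), rng.random()) for _ in range(n))
    return hits / n

# logical structure (a disjunction) over algebraic constraints (inequalities);
# the exact answer for this region is 1/4
p = mc_probability(lambda x, y: x + y < 1 and (x > 0.5 or y > 0.5))
```

Exact methods such as weighted model integration compute such probabilities symbolically rather than by sampling, which is what makes constrained probabilistic inference with guarantees possible.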

https://dtai.cs.kuleuven.be/tutorials/wmitutorial/

T21: Tableau methods for linear-time temporal logics

Luca Geatti (Free University of Bozen/Bolzano); Nicola Gigante (Free University of Bozen-Bolzano); Angelo Montanari (University of Udine)

Temporal logic is one of the most widely used formalisms to express properties of computations, plans, and processes in AI, formal verification, and other fields. Reasoning in temporal logic is one of the most studied tasks in the literature. Tableau methods are among the first reasoning techniques studied for this purpose, and they provide both interesting theoretical insights and practical algorithmic techniques. Despite decades of study, the development of tableau methods for temporal logics continues to this date. In particular, while classic tableau methods for linear-time temporal logics are graph-shaped, much recent work has focused on tree-shaped tableaux, which have proved to have many practical advantages.

This tutorial aims at providing a detailed overview of classical results and recent developments in the area of tableau methods for linear-time temporal logics. Focus will be given to both theoretical foundations and practical aspects, from classic graph-shaped tableaux to recent tree-shaped ones and their SAT encodings. All topics will be introduced together with the needed background, hence no particular prerequisites are required to attend the tutorial. The treated topics can be of interest to researchers and practitioners alike in the areas of automated reasoning, planning, (temporal) knowledge representation, and formal verification.

https://www.inf.unibz.it/~gigante/ijcai22-tutorial/

T22: Automated Verification of Multi-Agent Systems: Why, What, and Especially: How?

Wojciech Jamroga (University of Luxembourg); Wojciech Penczek (Institute of Computer Science, Polish Academy of Sciences); Catalin Dima (Université Paris-Est Créteil)

The course offers an introduction to some recent advances in formal verification of intelligent agents and multi-agent systems. The focus is on accessible presentation and simple examples, without going too deep into the involved mathematical machinery.

Automated verification of discrete-state systems has been a hot topic in computer science for over 35 years. The idea found its way into AI and multi-agent systems in the late 1990s, and techniques for verification of such systems have been in constant development since then. In this tutorial, we present a lightweight introduction to the topic and mention relevant properties that one might like to verify this way. Then, we describe some very recent results on incomplete model checking and model reductions, which can lead to practical solutions for this notoriously hard problem. We conclude with a presentation of an experimental tool for verification of strategic ability, being developed at the Polish Academy of Sciences.

https://home.ipipan.waw.pl/w.jamroga/courses/Verification2022IJCAI/

T23: Evidential Reasoning and Learning

Federico Cerutti (University of Brescia); Lance Kaplan (US DEVCOM Army Research Laboratory)

When collaborating with an AI system, we need to assess when to trust its recommendations. Suppose we mistakenly trust it in regions where it is likely to err. In that case, catastrophic failures may occur; hence the need for Bayesian approaches to reasoning and learning that determine the confidence (or epistemic uncertainty) in the probability of the queried outcome. Pure Bayesian methods, however, suffer from high computational costs. To overcome them, we resort to efficient and effective approximations. In this tutorial, PhD students and early-stage researchers will be introduced to techniques that go by the name of evidential reasoning and learning, based on the Bayesian update of given hypotheses using additionally collected evidence. The tutorial provides a gentle introduction to the area of investigation, up-to-date research outcomes, and the open questions still left unanswered.
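
The Bayesian-update idea can be illustrated with the simplest conjugate case, a Beta belief over a Bernoulli outcome, where the posterior variance plays the role of epistemic uncertainty. This is a toy stand-in for the machinery the tutorial develops; the counts below are invented.

```python
def beta_update(a, b, successes, failures):
    """Conjugate update of a Beta(a, b) belief over a Bernoulli parameter."""
    return a + successes, b + failures

def beta_mean_var(a, b):
    """Posterior mean and variance; the variance is the epistemic uncertainty."""
    mean = a / (a + b)
    var = a * b / ((a + b) ** 2 * (a + b + 1))
    return mean, var

a, b = beta_update(1, 1, successes=8, failures=2)  # start from a flat prior
mean, var = beta_mean_var(a, b)                    # mean = 0.75
```

Each new batch of evidence tightens the belief: the mean tracks the observed rate while the variance, and hence the epistemic uncertainty, shrinks as counts accumulate.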

https://federico-cerutti.unibs.it/tutorials/2022-ijcai-erl/

T24: Robust Time Series Analysis: from Theory to Applications in the AI Era

Qingsong Wen (Alibaba Group U.S.); Linxiao Yang (Machine Intelligence Technology, Alibaba Group, Hangzhou, China); Tian Zhou (Alibaba DAMO Academy); Liang Sun (Alibaba Group)

Time series analysis is ubiquitous and important in various areas, such as Artificial Intelligence for IT Operations (AIOps) in cloud computing, AI-powered Business Intelligence in E-commerce, Artificial Intelligence of Things (AIoT), etc. In real-world scenarios, time series data often exhibit complex patterns with trend, seasonality, outliers, and noise. In addition, as more time series data are collected and stored, how to handle the huge amount of data efficiently is crucial in many applications. We note that these significant challenges exist in various tasks like forecasting, anomaly detection, and classification. Therefore, how to design effective and efficient time series models for different tasks, which are robust to the aforementioned challenging patterns and noise in real scenarios, is of great theoretical and practical interest.

In this tutorial, we provide a comprehensive and organized survey of state-of-the-art algorithms for robust time series analysis, ranging from traditional statistical methods to the most recent deep learning based methods. We will not only introduce the principles of time series algorithms, but also provide insights into how to apply them effectively in practical real-world industrial applications. Specifically, we organize the tutorial in a bottom-up framework. We first present preliminaries from different disciplines, including robust statistics, signal processing, optimization, and deep learning. Then, we identify and discuss the most frequently used processing blocks in robust time series analysis, including periodicity detection, trend filtering, seasonal-trend decomposition, and time series similarity. Lastly, we discuss recent advances in multiple time series tasks, including forecasting, anomaly detection, classification, and so on, as well as practical lessons from large-scale time series applications from an industrial perspective.
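
As a minimal example of the detrending-plus-robustness theme, the sketch below removes a moving-average trend and flags residual outliers. The window size, threshold, and synthetic series are invented for illustration; the tutorial's methods are considerably more sophisticated.

```python
import numpy as np

def moving_average_trend(y, window=7):
    """Centered moving-average trend (edges fall back to shorter windows)."""
    k = window // 2
    return np.array([y[max(0, i - k): i + k + 1].mean() for i in range(len(y))])

def flag_anomalies(y, window=7, z=3.0):
    """Flag points whose detrended residual exceeds z standard deviations."""
    r = y - moving_average_trend(y, window)
    return np.abs(r - r.mean()) > z * r.std()

t = np.arange(100, dtype=float)
y = 0.1 * t + np.sin(2 * np.pi * t / 10)   # trend plus seasonality
y[50] += 8.0                                # inject a spike
flags = flag_anomalies(y)                   # index 50 stands out
```

Real pipelines replace each step with a robust counterpart (e.g., robust trend filtering and seasonal-trend decomposition instead of a plain moving average), which is precisely the processing-block view the tutorial organizes.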

https://sites.google.com/view/timeseries-tutorial-ijcai-2022

T25: Using constraint solvers as an oracle, with CPMpy

Tias Guns (KU Leuven); Emilio Gamba (Vrije Universiteit Brussel); Ignace Bleukx (KU Leuven)

Constraint solving is a well-established approach in AI for reasoning. It involves formulating and solving complex constraint satisfaction and optimisation problems. Highly efficient solvers exist for the various paradigms in constraint solving. These include Boolean satisfiability (SAT) problems, pseudo-Boolean optimization problems, and (mixed) integer linear programming (MILP) problems, as well as knowledge compilation approaches. For each paradigm, the user has to learn how to encode the problem using the supported constraints, although many paradigms can be translated into one another.

Furthermore, many methods for problems Beyond-NP use multiple solving paradigms at the same time, such as implicit hitting-set based approaches and methods for explaining constraint satisfaction and optimisation problems. This tutorial shows how the open-source CPMpy library can be used to automatically translate logical and mathematical expressions into different solving paradigms, and how this 1) enlarges the reach of low-level solvers as it takes away part of the modeling/encoding effort; and 2) eases the development of master/sub-problem style methods that require efficient, incremental solvers of different paradigms within the same programming environment.

https://github.com/CPMpy/cpmpy/tree/master/examples/tutorial_ijcai22/

T26: Designing Agents’ Preferences, Beliefs, and Identities

Vincent Conitzer (Duke University, Carnegie Mellon University)

The premise of this tutorial is that as we see AI deployed in an increasing number of real-world settings, the traditional model of the “agent” in AI is coming under significant pressure. Consider having Siri on your phone. Is that an agent? Do you have “your own personal” Siri agent, or is there just one single Siri agent across everyone’s phones? One could ask the same question about, for example, a well-connected network of self-driving cars. How should we think about the boundaries of agents? Also, traditionally in AI it is assumed that somehow an objective function to pursue is given to the agent, but these and other real-world examples make clear that determining what this objective function should be is not at all an easy problem, often involving an ethical component, and ideally the objective function is aggregated from the input of a variety of stakeholders.

The goal of the tutorial is to give participants a framework within which to conceptualize these questions, as well as to show them how specific insights and tools from decision theory, social choice theory, and game theory can help us address these questions. The target audience is anyone interested in these questions, which at IJCAI I expect to be anyone interested in the broader goals of AI rather than being completely focused on making progress on narrow applications.

https://users.cs.duke.edu/~conitzer/agentdesigntutorial.html

T27: Representation learning for acting and planning: A top-down approach

Blai Bonet (Universidad Simón Bolívar), Hector Geffner (Universitat Pompeu Fabra)

In bottom-up approaches to representation learning, the learned representations are those that result from a deep neural net after training. In top-down approaches, on the other hand, representations are learned over a domain-independent formal language with a known semantics, whether by deep learning or any other method. There is then a clean distinction between what representations need to be learned, e.g., in order to generalize, and how such representations are to be learned. The setting of acting and planning provides a rich, challenging, and crisp context for representation learning, involving three central problems: learning representations of dynamics that generalize, learning policies that are general and apply to many instances, and learning the common subgoal structure of families of problems, what in reinforcement learning are called intrinsic rewards. In this short tutorial, we will look at the languages developed to support these representations, the methods developed for learning them, and the challenges ahead.

https://www.dtic.upf.edu/~hgeffner/tutorial-2022.pdf

T28: Conversational Recommender Systems

Dietmar Jannach (Alpen-Adria-Universität Klagenfurt), Markus Zanker (Free University of Bozen-Bolzano)

Personalized recommendations have become a ubiquitous part of our online user experience. Today, recommendations are commonly implemented as one-directional communication from the system to the user. However, in recent years, we have observed increased interest in conversational recommender systems (CRS). These systems are able to sustain an interactive dialogue with users, often in natural language, with the goal of providing suitable recommendations based on the users' observed needs and preferences. While conversational recommendation is not a new field, recent developments in natural language processing technology and in deep learning have significantly spurred new research in this area.

In this tutorial, we will provide a multi-faceted survey of existing research in the area of conversational recommender systems. We will first discuss typical technical architectures and possible interaction modalities for CRS. Then, we will focus on the various types of knowledge these systems can rely on and elaborate on the computational tasks such systems usually have to support. In the final parts of the tutorial, we emphasize current approaches and open challenges in evaluating complex interactive software solutions like conversational recommender systems.

https://web-ainf.aau.at/pub/jannach/files/ijcai-2022/crs-tutorial-2022.htm

T29: Decision-Focused Learning

Kai Wang (Harvard University), Andrew Perrault (The Ohio State University), Brandon Amos (Facebook AI (FAIR))

Structural information and domain knowledge are two necessary components of training a good machine learning model that maximizes performance in the targeted application. This tutorial summarizes how to use optimization as a differentiable building block to incorporate non-trivial operational information from applications into machine learning models.

https://guaguakai.github.io/IJCAI22-differentiable-optimization/