Accepted Workshops
Trustworthy Federated Learning
Han Yu, Zehui Xiong, Lixin Fan, Qiang Yang, Boi Faltings
Federated learning (FL) addresses several important challenges in distributed machine learning and has thus become an important research area in machine learning and AI at large. Federated learning can be used when one wants to train a machine learning model on a dataset stored across multiple locations, without the ability to move the data to any central location. This seemingly mild restriction renders many state-of-the-art techniques in machine learning impractical. One class of applications arises when data is generated by different users of a smartphone app and stays on users' phones for privacy reasons. Another class involves data collected by different organizations that cannot be shared for confidentiality reasons. The same restrictions can also arise independently of privacy concerns, as with data streams collected by IoT devices or self-driving cars, which need to be processed on-device because it is infeasible to transmit and store the sheer amount of data. This workshop aims to bring together academic researchers and industry practitioners with common interests in this domain to address open issues. For industry participants, we intend to create a forum to communicate which problems are practically relevant. For academic participants, we hope to make it easier to become productive in this area. The workshop will focus on the theme of building trustworthiness into federated learning to enable FL solutions to be more readily applicable to real-world problems. The notion of trustworthiness will include, but is not limited to, interpretability, fairness, verifiability, transparency, auditability, privacy preservation, robustness to misbehaviour, and healthy market mechanisms that enable open, dynamic collaboration among data owners under the FL paradigm.
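The training setting described above can be sketched in a few lines. The following is a minimal, illustrative federated-averaging (FedAvg-style) sketch on a toy least-squares problem; the model, data, learning rate, and function names are our own assumptions for illustration, not part of the workshop text. Each client takes a gradient step on its private data, and the server only ever sees model weights, never raw data.

```python
import numpy as np

def local_step(w, X, y, lr=0.1):
    """One gradient step of least-squares regression on a client's private data."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fedavg_round(w, clients, lr=0.1):
    """Each client updates the shared model locally; only weights leave the client."""
    updates = [local_step(w.copy(), X, y, lr) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # Server aggregates: weighted average of client models by dataset size.
    return np.average(updates, axis=0, weights=sizes)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
# Three clients, each holding a private shard of data that never moves.
clients = []
for _ in range(3):
    X = rng.normal(size=(20, 2))
    clients.append((X, X @ true_w))

w = np.zeros(2)
for _ in range(200):
    w = fedavg_round(w, clients)
# w now approximates true_w although no client ever revealed its raw data.
```

Real FL systems add secure aggregation, differential privacy, and robustness to stragglers and misbehaving clients, which is precisely where the trustworthiness themes of this workshop enter.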
13th Multidisciplinary Workshop on Advances in Preference Handling (M-PREF22)
Meltem Ozturk, Paolo Viappiani, Destercke Sebastien, Christophe Labreuche
Human-centered AI requires that AI systems are able to adapt to humans, to understand the preferences underlying human choice behavior, and to take them into account when interacting with humans or when acting on their behalf. Preference models are needed in decision-support systems such as web-based recommender systems, in digital assistants and chatbots, in automated problem solvers such as configurators, and in autonomous systems such as Mars rovers. Nearly all areas of artificial intelligence deal with choice situations and can thus benefit from computational methods for handling preferences, while gaining new capabilities such as explainability and revisability of choices. Preference handling is also important for machine learning, as preferences may guide learning behaviour and be the subject of dedicated learning methods. Moreover, social choice methods are of key importance in computational domains such as multi-agent systems. Preferences are studied in many areas of artificial intelligence such as knowledge representation & reasoning, multi-agent systems, game theory, computational social choice, constraint satisfaction, logic programming and non-monotonic reasoning, decision making, decision-theoretic planning, and beyond. Preferences are inherently a multi-disciplinary topic, of interest to economists, computer scientists, operations researchers, mathematicians, and more. This broad set of application areas leads to new types of preference models, new problems for applying preference structures, and new kinds of benefits. The workshop on Advances in Preference Handling studies these questions and addresses all computational aspects of preference handling. It will be the 13th edition of this workshop; previous editions have been held at IJCAI, ECAI, AAAI, and VLD conferences. We expect 20-30 submissions and 30 attendees. Each submission will be reviewed by at least two referees, and we are considering a special issue in a journal with an open call.
The Fourth Workshop on Financial Technology and Natural Language Processing (FinNLP-2022)
Hsin-Hsi Chen, Hiroya Takamura, Hen-Hsen Huang, Chung-Chi Chen
We plan to organize a half-day workshop on natural language processing (NLP) for financial technology (FinTech), FinNLP-2022. Besides the regular paper track, we cooperate with Fortia Financial Solutions to extend the shared task from FinNLP-2019, FinNLP-2020, and FinNLP-2021, which were held in conjunction with IJCAI-2019, IJCAI-2020, and IJCAI-2021, respectively. If IJCAI-2022 is a virtual conference, the workshop program is expected to include two oral sessions, as in previous FinNLP workshops; that is, all accepted papers will be presented orally. However, if IJCAI-2022 is a physical conference, we will divide the accepted papers into oral and poster presentations based on reviewers' comments.
AIofAI 2022: 2nd Workshop on Adverse Impacts and Collateral Effects of AI Technologies
Esma Aimeur, Nicolás E Díaz Ferreyra, Hicham Hage
The role of Artificial Intelligence (AI) in people's everyday lives has grown exponentially over the last decade. Currently, individuals rely heavily on intelligent software applications across different domains including healthcare, logistics, defence, and governance. In particular, AI systems facilitate decision-making processes across these domains through the automatic analysis and classification of large data sets and the subsequent identification of relevant patterns. To a large extent, such an approach has contributed to the sustainable development of modern societies and remains a powerful instrument for social and economic growth. However, recent events related to the massive spread of misinformation and deepfakes, along with large privacy and security breaches, have raised concerns among AI practitioners and researchers about the negative and detrimental impacts of these technologies. Hence, there is an urgent call for guidelines, methods, and techniques to assess and mitigate the potentially adverse impacts and side effects of AI applications. This workshop explores how, and to what extent, AI technologies can serve deceptive and malicious purposes, whether intentionally or not. Furthermore, it seeks to elaborate on countermeasures and mitigation actions to prevent potential negative effects and collateral damages of AI systems.
Agents and Robots for reliable Engineered Autonomy (AREA)
Rafael C. Cardoso, Angelo Ferrando, Fabio Papacchini, Mehrnoosh Askarpour, Louise A Dennis
Autonomous agents are a well-established research area that has been studied for decades, from both a design and an implementation viewpoint. Nonetheless, agents have largely been adopted in applications that are primarily software-based, and their use remains limited in applications involving physical interaction. In parallel, robots are no longer used only in tightly constrained industrial applications, but are instead being applied to an increasing number of domains, ranging from robotic assistants to search and rescue, where the working environment is both dynamic and underspecified, and may involve interactions between multiple robots and humans. This presents significant challenges to traditional software engineering methodologies. Increased autonomy is an important route to enabling robotic applications to function in these environments, and autonomous agents and multi-agent systems are a promising approach to their engineering. However, as autonomy and interaction increase, the engineering of reliable behaviour becomes more challenging (both in robotic applications and in more traditional autonomous agent settings), so there is a need to research new approaches to verification and validation that can be integrated into the engineering lifecycle of these systems. This workshop aims to bring together researchers from the autonomous agents and the robotics communities, since combining knowledge from these two research areas may lead to innovative approaches that solve complex problems related to the verification and validation of autonomous robotic systems. Consequently, we encourage submissions that combine agents, robots, software engineering, and verification, but we also welcome papers focused on one of these areas, as long as their applicability to the other areas is clear.
Ad Hoc Teamwork
Muhammad A Rahman, Elliot Fosong, William Macke, Sam Devlin, Ignacio Carlucho, Reuth Mirsky
Ad hoc teamwork is the challenge of designing autonomous agents that can collaborate with new teammates without prior coordination. Related problems include zero-shot coordination, agent modeling, and human-agent collaboration. The aim of this workshop is to build a united, supportive research community around ad hoc teamwork and these related problems. It will facilitate discussions between different research labs in academia and industry, identify the main attributes that can vary between ad hoc teamwork tasks, and discuss the progress made in this field so far, while identifying the immediate and long-term open problems the community should address.
The 2nd International Workshop on Heuristic Search in Industry (HSI)
Shaowei Cai, Nathan R Sturtevant
We propose a full-day workshop (with 8 accepted papers, 1 keynote, 3 invited talks, 1 panel, and 1 poster) at IJCAI-ECAI 2022 for professionals, researchers, and practitioners who are interested in leveraging heuristic search to efficiently solve industrial problems. This workshop will be guided by leadership from steering committee members and program committee members in the heuristic search area of AI, from both academia and industry.
What can FCA do for Artificial Intelligence? (Tenth Workshop Edition)
Sergei Kuznetsov, Amedeo Napoli, Sebastian Rudolph
This is a proposal for organizing the tenth edition of the FCA4AI workshop (see http://www.fca4ai.hse.ru/). Formal Concept Analysis (FCA) is a mathematically well-founded theory aimed at classification and knowledge discovery that can be used for many purposes in Artificial Intelligence (AI). The objective of the workshop is to investigate several issues, such as: how can FCA support various AI activities (knowledge discovery, knowledge engineering, machine learning, data mining, information retrieval, recommendation…), how can FCA be extended to help AI researchers solve new and complex problems in their domains, and how can FCA play a role in current trends in AI, such as explainable AI and fairness of algorithms in decision making.
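To make the notion concrete: FCA starts from a binary context (objects × attributes) and derives formal concepts, i.e. pairs (extent, intent) closed under the two derivation operators. The following is a small self-contained sketch enumerating the concepts of a toy context; the example context and names are our own illustration, not taken from the workshop.

```python
from itertools import combinations

# Toy formal context: which object has which attribute.
objects = ["duck", "owl", "cat"]
attributes = ["flies", "hunts", "nocturnal"]
incidence = {
    ("duck", "flies"), ("owl", "flies"), ("owl", "hunts"),
    ("owl", "nocturnal"), ("cat", "hunts"), ("cat", "nocturnal"),
}

def common_attrs(objs):
    """Derivation operator A -> A': attributes shared by all objects in A."""
    return frozenset(a for a in attributes if all((o, a) in incidence for o in objs))

def common_objs(attrs):
    """Derivation operator B -> B': objects having all attributes in B."""
    return frozenset(o for o in objects if all((o, a) in incidence for a in attrs))

# A formal concept is a pair (A, B) with A' = B and B' = A.
# Enumerate by closing every subset of objects (fine for tiny contexts).
concepts = set()
for r in range(len(objects) + 1):
    for objs in combinations(objects, r):
        intent = common_attrs(frozenset(objs))
        extent = common_objs(intent)
        concepts.add((extent, intent))

for extent, intent in sorted(concepts, key=lambda c: (len(c[0]), sorted(c[0]))):
    print(sorted(extent), sorted(intent))
```

Ordered by extent inclusion, these concepts form the concept lattice that FCA uses for classification and knowledge discovery; practical tools use far more efficient algorithms (e.g. NextClosure) than this brute-force closure of all subsets.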
First International Workshop on Spatio-Temporal Reasoning and Learning
Michael Sioutis, Zhiguo Long, John G Stell, Jochen Renz
We propose to hold the first international workshop on Spatio-Temporal Reasoning and Learning, a cross-discipline workshop that aims to foster exchange of ideas between the Machine Learning and the Symbolic AI communities, especially in the context of handling spatio-temporal knowledge.
12th International Workshop on Agents in Traffic and Transportation (ATT 2022)
Giuseppe Vizzari, Ana Bazzan, Ivana Dusparic, Marin Lujak
Workshop proposal for the 12th International Workshop on Agents in Traffic and Transportation (ATT 2022), to be held in the context of IJCAI 2022.
The Eleventh International Workshop on Statistical Relational Artificial Intelligence
Sebastijan Dumancic, Angelika Kimmig, David Poole, Jay Pujara
The purpose of the Statistical Relational AI (StarAI) workshop is to bring together researchers and practitioners from three fields: logical (or relational) AI/learning, probabilistic (or statistical) AI/learning and neural approaches for AI/learning with knowledge graphs and other structured data. These fields share many key features and often solve similar problems and tasks. Until recently, however, research in them has progressed independently with little or no interaction. The fields often use different terminology for the same concepts and, as a result, keeping up and understanding the results in the other field is cumbersome, thus slowing down research. Our long term goal is to change this by achieving synergy between logical, statistical and neural AI.
Cognitive Aspects of Knowledge Representation
Jesse Heyninck, Gabriele Kern-Isberner, Thomas Meyer, Marco Ragni
Knowledge representation is a lively and well-established field of AI in which knowledge and belief are represented declaratively, in a form suitable for machine processing. It is often claimed that this declarative nature makes knowledge representation cognitively more adequate than, e.g., sub-symbolic approaches such as machine learning. This cognitive adequacy has important ramifications for the explainability of approaches in knowledge representation, which in turn is essential for the trustworthiness of these approaches. However, exactly how cognitive adequacy is ensured has often been left implicit, and connections with cognitive science and psychology are only recently being taken up. The goal of this workshop is to bring together experts from fields including artificial intelligence, psychology, cognitive science, and philosophy to discuss important questions related to cognitive aspects of knowledge representation.
Artificial Intelligence Safety (AISafety)
Gabriel Pedroza, Jose Hernandez-Orallo, Xin Chen, Xiaowei Huang, Huascar Espinoza, Mauricio Castillo-Effen, John McDermid
In the last decade, there has been growing concern about the risks of Artificial Intelligence (AI). Safety is becoming increasingly relevant as humans are progressively ruled out of the decision and control loops of intelligent systems. In particular, the technical foundations and assumptions on which traditional safety engineering principles are based are inadequate for systems in which AI algorithms, in particular Machine Learning (ML) algorithms, interact with the physical world at increasingly high levels of autonomy. We must also consider the connection between the safety challenges posed by present-day AI systems and more forward-looking research on more capable future AI systems, up to and including Artificial General Intelligence (AGI). This workshop seeks to explore new ideas on AI safety, with particular focus on the following questions:
* How can we engineer trustable AI software architectures?
* Do we need to specify and use bounded morality in system engineering to make AI-based systems more ethically aligned?
* What is the status of existing approaches to ensuring AI and ML safety, and what are the gaps?
* What safety engineering considerations are required to develop safe human-machine interaction in automated decision-making systems?
* What AI safety considerations and experiences from industry are relevant?
* How can we characterise or evaluate AI systems according to their potential risks and vulnerabilities?
* How can we develop solid technical visions and paradigm-shift articles about AI safety?
* How do metrics of capability and generality affect the level of risk of a system, and how can trade-offs with performance be found?
* How do AI system features such as ethics, explainability, transparency, and accountability relate to, or contribute to, safety?
* How should AI safety be evaluated?
35th International Workshop on Qualitative Reasoning
The Qualitative Reasoning (QR) community is involved with the development and application of qualitative representations to understand the world from incomplete, imprecise, or uncertain data. Qualitative representations have been used to model natural systems (e.g., physics, biology, ecology, geology), social systems (e.g., economics, cultural decision-making), cognitive systems (e.g., conceptual learning, spatial reasoning, intelligent tutors, robotics), technical systems (e.g., manufacturing, robotics), and more. QR connects to several AI subfields commonly represented at AI conferences such as IJCAI, AAAI, and ECAI. As QR strives to capture the everyday reasoning that comes naturally to humans, its methods contribute to explainable AI. QR has a long tradition as a workshop and has been co-located with major AI conferences over the last several years.
MRC 2022 – 13th International Workshop on Modelling and Reasoning in Context
Jörg Cassens, Rebekah Wegener, Anders Kofod-Petersen
We propose to organise MRC at IJCAI-ECAI 2022 as the thirteenth in a series of workshops that was started in 2004 at the German Conference on Artificial Intelligence and made its first international appearance in 2005 at IJCAI in Edinburgh, Scotland. MRC is an interdisciplinary and highly interactive workshop with a focus on applications within computer science. However, MRC has always had a strong interdisciplinary appeal and draws from fields such as linguistics, semiotics, philosophy, mathematics, cognitive science, social sciences, and psychology, as well as various sub-fields within computer science. MRC has traditionally been held at major AI conferences such as ECAI, IJCAI, and AAAI, or at conferences focusing on context from different perspectives, such as the International and Interdisciplinary Conference on Modeling and Using Context (CONTEXT). These workshops have been successful in raising awareness of context as a major issue for future intelligent systems, especially in ambient and ubiquitous computing and current research on autonomous systems. At the same time, advances in methodologies for modelling and retrieving context have been made, and MRC continues to provide a venue for discussing and furthering research into issues surrounding context. With the renewed interest in Artificial Intelligence in general and Machine Learning in particular, we think it is crucial to ensure a human-centric perspective, and a closer collaboration between the fields of Artificial Intelligence and Human-Computer Interaction is a necessity. With context being a central concept both in HCI and in AI, we think that the workshop is ideally suited to help further such a human-centric perspective.
Explainable AI (XAI) Workshop Proposal @ IJCAI-22
Tim Miller, Rosina Weber, Ofra Amir
We propose this Explainable AI workshop as a continuation of the highly successful workshops on Explainable AI held at IJCAI-17 (Melbourne), IJCAI-18 (Stockholm), IJCAI-19 (Macau), and IJCAI-20 (Tokyo/online), focusing on issues pertaining to explainability and interpretability in artificial intelligence (AI), with a particular focus on the interdisciplinary nature of this challenging area.
The 13th International Workshop on Agent-based Complex Automated Negotiations
Rafik Hadfi, Takayuki Ito, Reyhan Aydoğan, Ryuta Arisaka
Complex Automated Negotiation is one of the emerging areas of research in the field of Multiagent Systems. Future AI systems are poised to act in complex situations and need coordination mechanisms based on automated negotiation technologies. In this context, automated negotiation deals with encounters that may involve, for instance, a large number of agents, a large number of issues, real-time constraints, and concurrent and interdependent negotiations. Software agents can support the automation of complex negotiations by negotiating on behalf of their owners and finding adequate strategies to achieve realistic, win-win agreements. The proposed workshop, the "13th International Workshop on Agent-based Complex Automated Negotiations" (ACAN2022), will address key topics in automated negotiation research within the general field of multiagent systems. A considerable number of researchers in various communities of autonomous agents and multiagent systems are actively working on related issues, which are studied, for instance, in agent negotiations, multi-issue negotiations, auctions, mechanism design, electronic commerce, voting, secure protocols, matchmaking and brokering, argumentation, cooperation mechanisms, and distributed optimization. The goal of this workshop is to bring together researchers from these communities to learn about each other's work, encourage the exchange of ideas, and potentially foster long-term research collaborations. Automated negotiation has traditionally been one of the main topics within the IJCAI research community, so the workshop is highly relevant to the topics of the main conference. This workshop will additionally create opportunities for researchers and experts who cannot present their results or ongoing research in the main conference (for example, because their work is too specialized or preliminary in nature), and will thus complement the main conference.
Third International Workshop on Human Brain and Artificial Intelligence (HBAI 2022)
The quest of brain research is to uncover the nature of brain cognition, consciousness, and intelligence. Artificial Intelligence (AI) is committed to realizing machine-borne intelligence. The development of these two fields is undergoing a continuous trend of crossvergence and convergence. To bring together active researchers and practitioners at the frontiers of Artificial Intelligence (AI) and human brain research for the presentation of original research results, and to provide an opportunity for the exchange and dissemination of innovative research ideas relevant to both fields, we propose this workshop, called Human Brain and Artificial Intelligence (HBAI). HBAI will contribute to answering the following two questions: How can AI techniques help human brain research (AI-inspired/powered brain research)? And how can human brain research inspire the study of AI (brain-inspired computing)? The discussions at the workshop will be of great benefit to brain and cognitive science, neural computation and artificial general intelligence, brain-computer interfaces, data science, and their applications.
NASO 2022 – Workshop on New Architectures for Search and Optimization
Evaluation Beyond Metrics
José Hernández-Orallo, Lucy Cheke, Joshua Tenenbaum, Tomer Ullman, Fernando Martínez-Plume, Danaja Rutar, Ryan Burnell, John Burden, Wout Schellaert
This workshop welcomes formalizations, methodologies, and test benches for the evaluation of AI systems. More specifically, we are interested in theoretical or experimental research focused on developing concepts, tools, clear indicators, and evaluation methods to characterize and measure AI systems, and how these relate to, among other things, cognitive abilities and skills, as well as rates of development, progress, and impact. We consider regular papers, short papers, demo papers about benchmarks or tools, and position papers, and encourage discussion over a broad (non-exhaustive) list of topics:
– Evaluation methods founded on cognitive, developmental, or comparative psychology
– Measurement of skills, capabilities, or cognitive abilities
– Evaluation methods based on software testing or other engineering practices
– Meta-analysis or comparisons of evaluation instruments
– The role of evaluation in AI development, policy making, and modelling of social impact
– Measurements of generality or common sense
– Capture and use of evaluation data, e.g. for error analysis or calibration
– Analysis of the task space and its relation to corresponding capabilities
– The role of causality in evaluation
– Topics complementary to evaluation, such as documentation or auditing
– Alternative evaluation methods with added benefits, e.g. granularity, use of population data, black-box evaluation, capture of emergent behaviour, non-additive aggregation of results, interpretability of results, improved validity
– Discussion and progress in hard-to-evaluate scenarios, e.g. developmental robotics, multi-agent systems, tightly human-coupled systems, artificial social ecosystems, conversational bots, language models, (multimodal) generative systems, open-ended learning, lifelong learning
Safe RL Workshop
David Bossens, Bettina Könighofer, Roderick Bloem, Stephen Giguere
Reinforcement learning is the dominant paradigm for an agent to learn interactively from its environment. Despite impressive gains in areas such as gaming, RL agents are currently not deployed in many real-world applications, such as robotic systems or autonomous cars, because they are not safe. Learning by trial and error often comes with dangerous side effects, leading to the challenge of safe exploration. Moreover, learned RL policies must meet certain desirability criteria, where some behaviours need to be prevented at all costs. These and related issues are the primary concern of the field of safe reinforcement learning. A variety of angles on safe reinforcement learning have been taken recently. From a technical point of view, researchers have brought together insights from constrained optimisation, robust optimisation, model-based reinforcement learning, formal methods, control theory, statistical hypothesis testing, and more. From a more societal point of view, researchers have investigated how to adjust the environment, how to restrict the autonomy of the agent, and how humans can intervene for safe RL. Since IJCAI brings together a large audience within the AI community, with much prior interest demonstrated in safety, reinforcement learning, robustness, and robotics in the wild, we propose the Safe RL 2022 Workshop at IJCAI 2022. The workshop is proposed as a combination of invited talks and contributed talks, with opportunities for researchers to interact with the speakers, discuss novel and exciting research, and establish new and fruitful collaborations.
Scarce Data in Artificial Intelligence for Healthcare (SDAIH)
Simone Lionetti, Alexander Navarini, Marc Pouly, Philipp Tschandl
AI has the potential to revolutionize healthcare by enabling accurate, fast, and reliable analyses of data at an unprecedented scale, both in the clinic and in industry. Leveraged properly, AI can thus help better meet patient needs by supporting the development of new medical devices, drugs, and personalized treatments, while simultaneously freeing up time for clinical staff to nourish the profound human connection between caregivers and patients. Moreover, AI promises to democratize the healthcare system by extending basic services to low-income or remote areas through telemedicine. Notwithstanding the remarkable progress achieved in the last two decades, many AI projects in medicine struggle to make their way to deployment and sustainable productivity because of the limited availability of high-quality annotated data. The scarcity of useful information is often exacerbated in medicine, medical engineering, and healthcare in general, because labelling requires highly specialized staff, patient privacy must be respected, and ethnic differences and rare diseases must be adequately represented. Despite the incredible advances of the last few years in facilitating data collection and annotation, learning representations, and detecting different types of bias, basic observations on the implications for practitioners are often lacking, new ingenious ideas are flourishing, and recommendations for healthcare are far from established. The goal of this workshop is to exchange lessons learned and ongoing efforts on solving the issue of data scarcity for the practical deployment of AI in healthcare. We aim to bring together, from both academia and industry, researchers and data scientists who are confronted with challenges related to limited data availability for machine learning in medicine, medical engineering, biotechnology, pharmaceuticals, and medical services.
Communication in Human-AI Interactions
Jennifer Renoux, Antti Oulasvirta, Mohamed Chetouani, Andrew Howes
Human interactions with AI systems are becoming part of our everyday life. If designed and developed well, these interactions have great potential to enhance human work, abilities, and well-being. Communication, here the iterative process of establishing shared meaning, is a crucial aspect of successful interaction and has been studied for years from the AI, HCI, and cognitive science points of view, among others. Paradoxically, however, there has been relatively little interaction between these communities, even though collaboration would greatly benefit them all. The goal of this workshop is to bring together experts from these different communities to explore and understand the specificities and characteristics of communication in human-AI interactions, as well as the salient principles, methods, and theories one has to consider to build meaningful human-AI communication systems.
Process Management in the AI era (PMAI)
Antonella Guzzo, Giuseppe De Giacomo, Marco Montali, Tathagata Chakraborty, Fabiana Fournier, Lior Limonad
Process Management (PM) is a growing multidisciplinary field combining insights from operations management, computer science, and data science. The development of AI techniques is paving the way towards a new generation of information systems able to "augment" process management, making it more autonomous, adaptive, intelligent, and self-optimizing. To support these sophisticated forms of decision-making, such systems will need to extract useful insights from the execution logs of ongoing and past process executions, so as to strengthen, integrate, and continuously revise their domain knowledge. This poses foundational, conceptual, and technical challenges related to the integration of symbolic and sub-symbolic AI techniques, and to how to infuse them within PM. At the same time, delegating autonomy brings pressing requirements on the trustworthiness of such systems, and on their ability to interact with human experts and explain their own behavior. The workshop aims to bring together researchers from different disciplines with a strong interest in promoting the synergy between AI and PM to address the above frontier challenges. Although the AI community of IJCAI and AAAI hosted some of the first events dedicated to business processes, over the years the business process community has found its own place (see the BPM and ICPM conferences). Now, thanks to the advancement of artificial intelligence techniques and the need to use them in business applications, the two communities are getting closer. With this workshop, we intend to strengthen this trend and present to the AI audience the challenges and recent developments in the area. We welcome not only contributions that empower PM with AI techniques, but also contributions that exploit their combination to solve more general problems in AI.
CDCEO 2022 – Complex Data Challenges in Earth Observation
Aleksandra Gruca, Pedram Ghamisi, Fabio Pacifici, Naoto Yokoya, Sepp Hochreiter
The Big Data accumulating from remote sensing technology in ground, aerial, and satellite-based Earth Observation (EO) has radically changed how we monitor the state of our planet. Advanced EO sensors nowadays generate rich streams of data around the clock. Recent techniques from signal processing and machine learning allow for an effective interpretation of such complex datasets. In this workshop we will bring together leading researchers from both academia and industry across diverse domains, including experts in remote sensing, EO, computer vision, signal processing, pattern recognition, data mining, big data processing, and AI. We build on the success of CDCEO'21 at ACM CIKM, which attracted over 40 submissions from 17 countries. CDCEO is the first workshop fully dedicated to all relevant aspects of AI in the EO community whose scope is comprehensive and not bound to a specific application or a specific type of EO data. The ever-growing availability of high-resolution remote sensing data increasingly confronts researchers with the unique machine learning challenges posed by the characteristic heterogeneity and correlation structures in these data. Data collections are typically multi-source and multi-scale, and have isometric representations. The multi-dimensional measurements over time reflect dynamic states with complex interdependencies, and a better understanding of these will aid both short- and long-term progress in Earth system research. The latest generation of optical sensors features high spatial resolution and high temporal collection frequencies, allowing the application of modern data-hungry methods characteristic of AI. CDCEO'22 thus covers advances in both method development and applications in a wide range of related areas, including satellite image processing, super-resolution, gap-filling, high-resolution prediction of spatio-temporal features, and detection of rules underlying the observed state transitions and causal relationships.
Semantic Techniques for Narrative-based Understanding
Lise Stork, Katrien Beuls, Luc Steels
This workshop explores how AI systems can employ narratives in the realization of human-centric AI. Human-centric AI focuses on collaborating with humans, enhancing human capabilities, and empowering humans to better achieve their goals. This requires that we complement the reactive intelligence of today’s neuro-statistical machine learning with deliberative intelligence based on rich models of the problem situation: computational representations of narratives, elaborate ontologies, fine-grained semantic parsing inspired by cognitive linguistics and construction grammar, reasoning and inference, mental simulation, the consultation of semantic resources such as knowledge graphs and episodic memories of past situations, and a control architecture that can flexibly combine all these knowledge sources to arrive at coherent understanding. Besides technical contributions to realize the components needed for understanding, the workshop encourages the presentation of benchmarks as well as demonstrations of AI systems in which narrative-based understanding plays a critical role, such as the decoding and execution of actions needed for everyday activities (e.g. to cook a dish from a recipe), the reconstruction of the temporal and causal relations found in documents (e.g. between historical events leading up to a significant political change), or the formulation of ‘scientific storylines’ in knowledge discovery that combine information across different scientific papers to generate hypotheses or compare and validate experimental evidence.
Generalization in Planning
Pulkit Verma, Yuqian Jiang, Rushang Karia, Jendrik Seipp
Humans are good at solving sequential decision making problems, generalizing from a few examples, and learning and expressing generalized knowledge that can help solve new problems. Computing such knowledge remains a long-standing open problem for Artificial Intelligence (AI). Over the last two decades, there has been remarkable progress in the performance of automated planning systems. However, real-world scalability and skill/plan generalization for complex, long-horizon tasks remain open challenges. This workshop aims to build synergies across different AI communities in order to address all aspects of generalization of solutions for sequential decision making, including, but not limited to, representations of problems and solution concepts that enable efficient generalization and transfer of relevant knowledge, and algorithms for learning or synthesizing such generalized knowledge and solutions. We welcome contributions focusing on different formulations/representations for generalization, empirically validated methods, and theoretical analyses and foundations for generalization.
Artificial Intelligence for Time Series Analysis: Theory, Algorithms, and Applications
Dongjin Song, Fenglong Ma, Sanjay Purushotham, Themis Palpanas, Wei Cheng, Yifeng Gao, Yufei Han
Time series data are becoming ubiquitous in numerous real-world applications, e.g., IoT devices, healthcare, wearable devices, smart vehicles, financial markets, biological sciences, and environmental sciences. Given the availability of massive amounts of data, their complex underlying structures and distributions, and the advent of high-performance computing platforms, there is great demand for developing new theories and algorithms to tackle fundamental challenges (e.g., representation, classification, prediction, and causal analysis) in various types of applications. The goal of this workshop is to provide a platform for researchers and AI practitioners from both academia and industry to discuss potential research directions and key technical issues, and to present solutions that tackle related issues in practical applications. The workshop will focus on both the theoretical and practical aspects of time series data analysis and aims to trigger research innovations on theories, algorithms, and applications. We will invite researchers and AI practitioners from the related areas of machine learning, data science, statistics, econometrics, and many others to contribute to this workshop.
Interactions between Analogical Reasoning and Machine Learning
Miguel Couceiro, Pierre-Alexandre Murena
Analogical reasoning is a remarkable capability of human reasoning, used to solve hard reasoning tasks. It consists of transferring knowledge from a source domain to a different, but somewhat similar, target domain by relying simultaneously on similarities and dissimilarities. In particular, analogical proportions, i.e. statements of the form “A is to B as C is to D”, are the basis of analogical inference. Analogical inference has contributed to case-based reasoning and to multiple machine learning tasks such as classification, decision making, and automatic translation, with competitive results. Moreover, analogical extrapolation can support dataset augmentation (analogical extension) for model learning, especially in environments with few labeled examples. Conversely, advanced neural techniques, such as representation learning, have enabled efficient approaches to detecting and solving analogies in domains where symbolic approaches had shown their limits. However, recent approaches using deep learning architectures remain task- and domain-specific, and rely strongly on ad hoc representations of objects, i.e. tailor-made embeddings. The purpose of this workshop is to bring together AI researchers at the crossroads of machine learning and knowledge representation and reasoning who are interested in the various applications of analogical reasoning in machine learning or, conversely, of machine learning techniques to improve analogical reasoning. The workshop will focus on bridging gaps between multiple communities of AI researchers, including case-based reasoning, deep learning, and neuro-symbolic machine learning.
AI4AD: Artificial Intelligence for Autonomous Driving
Jonathan M Francis, Bingqing Chen, Xinshuo Weng, Siddha Ganju, Daniel Omeiza, Hitesh Arora, Eric Nyberg, Jean Oh, Sylvia Herbert
We propose a continuation of “Artificial Intelligence for Autonomous Driving” (AI4AD), as a venue for researchers in artificial intelligence to discuss research problems on autonomous driving (AD), with a specific focus on safe learning. While there have been significant advances in vehicle autonomy (e.g., perception, trajectory forecasting, and planning and control), it is of paramount importance for autonomous systems to adhere to safety specifications, as any safety infraction in urban driving, highway driving, or high-speed racing could lead to catastrophic failures. We envision the workshop bringing together researchers and industry practitioners from different AI subfields to work towards safer and more robust autonomous technology.
Fourth Workshop on Bridging the Gap Between AI Planning and Reinforcement Learning (PRL)
Michael Katz, Hector Palacios, Vicenç Gómez
We propose a fourth edition of the PRL workshop series that started at ICAPS 2020. The workshop aims to bridge the gap between the AI Planning and Reinforcement Learning communities, facilitate the discussion of differences and similarities in existing techniques, and encourage collaboration across the fields. In the fourth edition of PRL, at IJCAI, we want to focus on topics relevant to a broader audience than that of ICAPS. In particular, we will welcome relevant contributions to sequential decision-making beyond existing planning solvers and benchmarks. Meanwhile, ICAPS 2022 has accepted the third edition of PRL. If this proposal for IJCAI is accepted and we receive the number of submissions we expect at ICAPS, we might invite some authors to present their work at PRL at IJCAI instead of PRL at ICAPS.
The fifth Data Science Meets Optimization (DSO) Workshop
Tias Guns, Michele Lombardi, Neil Yorke-Smith, Yingqian Zhang
This workshop on the close relationship and interplay between data science and optimization continues the series of DSO@IJCAI2021, DSO@IJCAI2020, DSO@IJCAI2019, and the DSO@IJCAI-ECAI workshop in 2018. Invited are studies on how techniques from combinatorial optimization and mathematical programming can be enhanced by learning from historical data, and on how such advanced techniques can contribute to machine learning and data mining. The DSO workshop is closely related to the DSO working group of The Association of European Operational Research Societies (EURO), which runs yearly streams and workshops at major conferences such as EURO 2021 in Athens, IFORS 2021 Virtual, EURO 2019 in Dublin, EURO 2018 in Valencia, IFORS 2017 in Quebec, CPAIOR 2017 in Padua, and CEC 2017 in San Sebastian.
AI for understanding the Ocean and Climate Change
Nayat Sánchez Pi, Pablo Marquet, Luis Martí
The ocean plays a key role in the biosphere, regulating the carbon cycle: it absorbs emitted CO2 through the biological pump, as well as a large part of the heat retained in the atmosphere by the remaining CO2 and other greenhouse gases. Understanding the drivers of micro- and macroorganisms in the ocean is of paramount importance for understanding the functioning of ecosystems and the efficiency of the biological pump in sequestering carbon and thus abating climate change. AI, ML, and modeling tools are key to understanding oceans and climate change. In return, these problems also pose important challenges to the current state of the art. Consequently, the topics of interest of this workshop can be grouped into (but are not limited to) two sets: a) advancing the state of the art in areas like AI, ML, and modeling: handling of graph-based information; ML for small data (transfer, few-shot, active learning, etc.); causality, interpretability, and explainability in AI; data assimilation and physics-informed neural networks (PINNs); and the development, calibration, and validation of mechanistic models; b) answering the questions that emerge from the application domain: What are the patterns in plankton taxa and functional diversity? How will these patterns change under climate change? How will such changes affect the capacity of the ocean to sequester CO2 from the atmosphere? What relations bind communities and local conditions? What links biodiversity functioning and structure? How can AI be applied as a research-support tool to understand planktonic communities? How can new knowledge be derived from self-supervision, anomaly detection, causal learning, and/or explainable AI? The goal of this workshop is to bring together researchers who are interested in and/or applying AI and ML techniques to problems related to marine biology and climate change mitigation. We also expect to attract natural science researchers interested in learning about and applying modern AI and ML methods.