2nd Workshop on Deep Reinforcement Learning for Knowledge Discovery

April 2021
Ljubljana, Slovenia


While supervised and unsupervised learning have been used extensively for knowledge discovery for decades and have achieved immense success, reinforcement learning received far less attention in knowledge discovery until the recent emergence of deep reinforcement learning (DRL). By integrating deep learning into reinforcement learning, DRL is capable not only of continuously sensing and learning to act, but also of capturing complex patterns with the power of deep learning. Recent years have witnessed the enormous success of DRL in numerous domains, such as the game of Go, video games, and robotics, leading to increasing advances of DRL in knowledge discovery. For instance, RL-based recommender systems have been developed to produce recommendations that maximize long-run user utility (reward) in interactive systems, and RL-based traffic signal systems have been designed to control traffic lights in real time to improve traffic efficiency in urban computing. Similar excitement has been generated in other areas of knowledge discovery, such as graph optimization, interactive dialogue systems, and big data systems. While these successes show the promise of DRL, transferring lessons from game-based DRL to knowledge discovery raises unique challenges, including, but not limited to, extreme data sparsity, power-law distributed samples, and large state and action spaces. It is therefore timely and necessary to provide a venue that brings together academic researchers and industry practitioners (1) to discuss the principles, limitations, and applications of DRL for knowledge discovery, and (2) to foster research on innovative algorithms, novel techniques, and new applications of DRL to knowledge discovery.
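To make the recommendation example concrete, the following is a minimal, purely illustrative sketch (not part of the call, and far simpler than a full DRL recommender): an epsilon-greedy bandit that learns which item to recommend from simulated click feedback. The item names and click probabilities are hypothetical; a real DRL recommender would additionally model user state and long-horizon reward.

```python
import random

# Hypothetical ground-truth click-through rates, unknown to the agent.
TRUE_CTR = {"item_a": 0.1, "item_b": 0.5, "item_c": 0.3}

def run(steps=5000, epsilon=0.1, seed=0):
    """Epsilon-greedy recommendation loop: explore with prob. epsilon,
    otherwise recommend the item with the highest estimated click rate."""
    rng = random.Random(seed)
    counts = {k: 0 for k in TRUE_CTR}     # times each item was shown
    values = {k: 0.0 for k in TRUE_CTR}   # estimated click rate per item
    for _ in range(steps):
        if rng.random() < epsilon:                    # explore
            item = rng.choice(list(TRUE_CTR))
        else:                                         # exploit
            item = max(values, key=values.get)
        reward = 1.0 if rng.random() < TRUE_CTR[item] else 0.0
        counts[item] += 1
        # Incremental average of observed rewards for this item.
        values[item] += (reward - values[item]) / counts[item]
    return values

if __name__ == "__main__":
    estimates = run()
    print(max(estimates, key=estimates.get))
```

The sketch also hints at the challenges the paragraph lists: with sparse clicks and a large item catalog, such naive exploration becomes far too slow, which is exactly where deep function approximation comes in.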


We invite the submission of full papers (8+1 pages) and short papers (4+1 pages). Submissions must be in PDF format, written in English, and formatted according to the latest double-column ACM Conference Proceedings Template. All papers will be peer reviewed in a single-blind process and assessed on novelty, technical quality, potential impact, insightfulness, depth, clarity, and reproducibility. All papers must be submitted via the EasyChair system. Accepted papers will appear in the ACM proceedings. For questions about the workshop and submissions, please email zhaoxi35@msu.edu.

We encourage submissions on a broad range of DRL for knowledge discovery in various domains. Topics of interest include but are not limited to theoretical aspects, algorithms, methods, applications, and systems, such as:

  • Foundation:
    - Reinforcement Learning and Planning
    - Decision and Control
    - Exploration
    - Hierarchical RL
    - Markov Decision Processes
    - Model-Based RL
    - Multi-Agent RL
    - Inverse RL
    - Contextual Bandits
    - Navigation
  • Business:
    - Advertising and E-commerce
    - Finance
    - Marketing
    - Markets and Crowds
    - Recommender Systems
  • Urban Computing:
    - Smart Transportation
    - Intelligent Environment
    - Urban Planning
    - Urban Economy
    - Urban Energy
  • Computational Linguistics:
    - Dialogue and Interactive Systems
    - Semantic Parsing
    - Summarization
    - Machine Translation
    - Question Answering
  • Graph Mining:
    - Social and Network Sciences
    - Graph Modeling and Embedding
    - Graph Generation and Optimization
    - Combinatorial Optimization and Planning
  • Big Data Systems:
    - Systems for Large-Scale RL
    - Environments for Testing RL
    - RL to Improve Systems
  • Further target application areas:
    - Health Care
    - Computer Vision
    - Education
    - Security
    - Time Series
    - Multimedia


Feb 10, 2021: Workshop paper submission due (23:59, Pacific Standard Time)

Feb 20, 2021: Workshop paper notifications

Mar 1, 2021: Camera-ready deadline for workshop papers

April 2021: Workshop Date



Jiliang Tang, Michigan State University

Xiangyu Zhao, Michigan State University

Dawei Yin, Baidu

Long Xia, York University

Huiji Gao, LinkedIn

Rui Chen, Samsung Research America

Jason Gauci, Facebook AI