I am interested in what real-time machine learning techniques can do to leverage natural human pathways of communication and cooperation. My goal is to form strong collaborative partnerships between humans and AI-enabled robotics. I have used mixed-methods techniques to gather rich data from both the robot and human sides of the interactions to understand the impacts of real-time machine learning in interactions with humans. Using the understanding gained through such studies, I explore what properties of machine learning impact human interaction and how users' understanding of the interaction changes. I primarily explore temporal difference machine learning methods and the application of the joint action and communicative capital frameworks. While I have previously used robot arms in the context of prostheses, I am interested in human-robot-AI interactions of all kinds.
My research interests lie in the area of reinforcement learning, especially in the fields of exploration, representation learning, transfer learning, policy search, and multi-objective optimization. Recently, I have been working on methods that move away from the classic tabula-rasa paradigm towards a more practical transfer framework. More specifically, I am interested in (1) better understanding state representation learning from rich and diverse inputs, (2) developing task-agnostic exploration policies, and (3) developing policy architectures for transfer/continual learning of state representations and exploration policies.
Matthew Schlegel is a Postdoctoral Associate working with Mostafa Farrokhabadi and Matt Taylor. His research focuses on applying machine learning and reinforcement learning techniques to power systems. He is especially interested in explainable AI, human-in-the-loop reinforcement learning, data-driven solutions to partial differential equations, and the broader role of AI in enhancing communities and improving human well-being.
Laura's research interests include reinforcement learning, human-robot interaction, biomechatronics, and assistive robotics. Drawing inspiration from her anatomical studies, Laura's research aims to develop control methods for robotic manipulation with the goal of increased functionality, usability, reliability, and safety in the real world.
I specialize in applying reinforcement learning (RL) within electronic design automation (EDA), with a focus on optimizing placement in chip design. My research is centered on leveraging AI and RL to accelerate the chip design process and reduce excessive human involvement. Currently, I am developing model-based RL algorithms, inspired by approaches like MuZero, to enhance the quality of RL-driven placements. Additionally, I am exploring representation learning to further advance chip design methodologies.
Currently, my research interest lies mostly in designing algorithms around the exploration-exploitation dilemma in reinforcement learning.
My goal is to create AI agents that are aligned with human preferences. My research interests include reward alignment, deep reinforcement learning, and human-in-the-loop reinforcement learning.
Michael's research interests include deep learning, generative models, reinforcement learning, and robotics. His current research focuses on policy transfer between robotic embodiments, and he has previously published work on learning both temporal and spatial skill models.
Currently analyzing the different modalities of human-assisted reinforcement learning.
Currently investigating how RL can improve the decision-making and reasoning abilities of large language models. Also interested in human-computer interaction and human-robot interaction.
My research interests focus on developing intelligent, autonomous systems for robust, real-time decision-making in complex, multi-agent environments. Currently, I am working on controlling autonomous vehicles using reinforcement learning.
The human brain and body have evolved over millions of years to become incredibly complex, making it difficult for us to fully understand our own capabilities. However, the challenge of replicating human-level intelligence in computers is a goal that drives many areas of research, including Machine Learning, Deep Learning, Computer Vision, Robotics, and Reinforcement Learning, which is why they intrigue me as much as they do. My ultimate aim is to contribute to this goal and gain a deeper understanding of human behavior and decision-making.
My primary research interest lies in Reinforcement Learning, with a focus on its real-world applications. I also have a strong background in Computer Vision, where I've worked on various projects involving object detection and recognition. I am passionate about leveraging AI to address practical challenges.
Thesis: COVID-19 and Far-Right Engagement: An Interrupted Time Series Analysis To Investigate Online Behaviour, Fall 2024
Thesis: Continual Preference-based Reinforcement Learning with Hypernetworks, Fall 2024
Thesis: CANOR COACH: Towards Noise-Robust Human-in-the-Loop Reinforcement Learning, Summer 2024
Thesis: Iterative Large Language Models Evolution through Self-Critique, Summer 2024
Thesis: Automated Coordination of Distributed Energy Resources using Local Energy Markets and Reinforcement Learning, Spring 2024
Thesis: Human-AI Collaboration in Real-World Complex Environment With Reinforcement Learning, Winter 2023
Thesis: Ensembling Diverse Policies Improves Generalization of Deep Reinforcement Learning Algorithms to Environmental Changes in Continuous Control Tasks, Winter 2022
Thesis: Program Optimization with Local Search, Fall 2022
Thesis: Effective Transfer Learning with the Use of Distance Metrics, Winter 2021
Thesis: Methodical Advice Collection and Reuse in Deep Reinforcement Learning
Thesis: To Ask or Explore: A Systematic Approach to Advice
Thesis: Uncertainty Methods in Active Reinforcement Learning
Thesis: A Framework for Safe Evaluation of Offline Learning, Fall 2021
Thesis: The Impact of Different Summaries as Reinforcement Learning Explanations on Human Performance And Perception, Summer 2021
Thesis: Transfer in Deep Reinforcement Learning: How an Agent Can Leverage Knowledge From Another Agent, A Human, or Itself, Spring 2021
Thesis: Knowledge Transfer in Reinforcement Learning: How Agents Should Benefit from Prior Knowledge, Fall 2019
Thesis: Teaching Effectiveness of Intelligent Tutoring Systems, Spring 2019
Thesis: Learning from Human Teachers — Supporting How People Want to Teach in Interactive Machine Learning, Summer 2018
Thesis: TINGLE — Topic-Independent Gamification Learning Environment, Summer 2018
Thesis: Policy Advice, Non-convex and Distributed Optimization in Reinforcement Learning, Fall 2016
Thesis: Regret Minimization with Function Approximation in Extensive-Form Games, Summer 2020
Thesis: Useful Policy Invariant Shaping from Arbitrary Advice, Winter 2020
Thesis: Accelerate the Learning Speed of Deep Reinforcement Learning by Pre-training with Non-Expert Human Demonstrations, Spring 2019
Thesis: Engineering a Smart Scarecrow: Bird Deterrence with Drones, Spring 2017
Thesis: Development of the Baton: A Novel Precision Delivery Drone, Summer 2017
Thesis: Modifying Smart Home to Smart Phone Notifications using Reinforcement Learning Algorithms, Spring 2017