I am a Partner Research Area Manager at Microsoft Research. I oversee the research area on human-centered AI, where we advance the state of the art in Responsible AI, human-AI collaboration, sensing, signal processing, productivity, the future of work and mental well-being. The team is passionate about building real systems, tools and prototypes that address the challenges of deploying AI systems in the real world.
​
I am an Affiliate Faculty with the University of Washington.
​
I serve as Technical Advisor for Microsoft's Internal Committee on AI, Engineering and Ethics. I lead efforts at Microsoft on the reliability and safety of AI systems, in particular on developing tools, best practices and guidance for building Responsible AI. In this role, my investigations focus on the frontier of Responsible AI: understanding the new risks and opportunities that arise from developments in fundamental AI technologies.
​
My research focuses on developing AI systems that can function reliably in the open world in collaboration with people. I am particularly interested in the impact of AI on society and in building AI systems that are reliable, unbiased and trustworthy.
​
I have over 60 peer-reviewed publications at top Artificial Intelligence and HCI venues, including AAAI, IJCAI, AAMAS, CHI and CSCW. I have served on the program committees of AAAI, IJCAI, HCOMP, AAMAS, WWW, UAI and the Collective Intelligence Conference. I am a Senior Member of AAAI.
​
I served as a member of the first study panel of AI 100. Our report is available here.
​
I get asked a lot about how to pronounce my name. The easiest pronunciation for English speakers is calling me A.J.
​
The best way to contact me is emailing eckamar <at> microsoft <dot> com
Personal email: ecekamar <at> gmail <dot> com
​
If you are a PhD student working at the intersection of AI and Society (including human-machine collaboration and approaches for developing reliable, trustworthy, unbiased and explainable AI) and you are considering an internship, email me!
Backward Compatibility of AI Systems for Human-AI Teamwork
​
Many real-world applications of AI are deployed to support human decision-making in domains such as healthcare and criminal justice. In these settings, successful human-AI partnership requires the human to develop a mental model of the AI system's performance, including its failures. However, an update that improves the system's performance in isolation does not necessarily improve team performance: the update may break the user's mental model.
​
In our AAAI 2019 paper, we carried out human-subject studies of the impact of AI updates on team performance. The studies showed that an update that improves the performance of the system alone can hurt team performance if the update is not compatible with the user's mental model. We then redesigned the loss functions of popular ML models to take compatibility into account during retraining.
​
Link to the AAAI 2019 paper is available here.
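To give a flavor of the idea, below is a minimal sketch of a compatibility-aware loss: it up-weights mistakes on examples the previous model got right, since those are exactly the cases where the user has learned to trust the system. The setup (a multiclass classifier with predicted probabilities) and the names compatible_log_loss, lam and old_correct are illustrative assumptions, not the paper's notation.

import numpy as np

def compatible_log_loss(y_true, new_probs, old_correct, lam=1.0):
    # y_true: integer class labels, shape (n,)
    # new_probs: predicted class probabilities of the updated model, shape (n, k)
    # old_correct: 1 where the previous model was correct, else 0, shape (n,)
    eps = 1e-12
    n = len(y_true)
    # standard negative log-likelihood of the true class under the new model
    nll = -np.log(np.clip(new_probs[np.arange(n), y_true], eps, 1.0))
    # penalize new errors more heavily where the old model was already correct
    weights = 1.0 + lam * old_correct.astype(float)
    return float(np.mean(weights * nll))

Setting lam to zero recovers the standard loss; larger values trade some raw accuracy for compatibility with the user's existing mental model.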
Transfer from Simulation to the Real World: Blind Spots of Reinforcement Learning
​
Simulation is a common way to train agents before real-world deployment. While simulation provides a cost-effective way to learn, mismatches between simulation and the real world cause systematic errors, or blind spots, in agents. Such blind spots can lead to drastic mistakes when agents act in the world.
​
In our AAMAS 2018 paper, we formalized how blind spots emerge when agents are trained in incomplete simulations, which leads to representational incompleteness. We proposed a machine learning methodology for modeling an agent's blind spots from limited human feedback in the form of corrections and demonstrations.
​
Link to the AAMAS 2018 paper is available here.
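As a rough illustration of the modeling step, the sketch below fits a supervised predictor of blind-spot risk over the agent's state representation, treating human corrections as (noisy) labels. The featurization, the toy data and the random-forest choice are assumptions for illustration, not the paper's exact model.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# states observed during execution, featurized as the agent sees them
state_features = np.random.rand(500, 8)            # toy stand-in data
# 1 where a human issued a correction (evidence of a blind spot), else 0;
# in practice these labels are sparse and noisy
corrected = (state_features[:, 0] > 0.9).astype(int)

blindspot_model = RandomForestClassifier(n_estimators=100)
blindspot_model.fit(state_features, corrected)

# at execution time the agent can flag risky states and act conservatively
risk = blindspot_model.predict_proba(state_features[:5])[:, 1]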
​
In our AAAI 2019 paper, we removed the assumption that the human is optimal and extended the methodology to settings where both the human and the machine have non-overlapping blind spots. We proposed a learning methodology that combines data collected from simulation with human demonstrations to learn policies for joint execution.
​
Link to the AAAI 2019 paper is available here.
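One way to picture joint execution is a simple handoff rule: in each state, control goes to whichever party is less likely to be in a blind spot there. The sketch below illustrates that intuition and is not the paper's algorithm; agent_risk and human_risk stand in for learned blind-spot models of each party.

def act_jointly(state_features, agent_risk, human_risk,
                agent_policy, ask_human):
    # agent_risk / human_risk: callables estimating the probability that
    # the machine or the human errs in this state (assumed to be learned)
    if agent_risk(state_features) <= human_risk(state_features):
        return agent_policy(state_features)   # the machine acts
    return ask_human(state_features)          # defer to the human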
Debugging and Troubleshooting of AI Systems
​
Good results obtained in the lab on static datasets do not guarantee satisfactory performance in the real world. Across applications, the errors AI systems make in the real world raise bias, reliability and safety concerns. Our job as developers of AI systems should not end with training in the lab, so we are investigating debugging and troubleshooting methods for characterizing system performance in the real world.
​
In our AAAI 2017 paper, we studied the blind spots of supervised learning and proposed a human-in-the-loop approach for efficiently discovering blind spots and creating an interpretable representation of them.
​
Link to the AAAI 2017 paper is available here.
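For intuition, here is a simplified sketch of the discovery loop: a limited human (oracle) labeling budget is spread across partitions of the model's high-confidence predictions, with more queries going to partitions where errors keep turning up. The epsilon-greedy allocation and the oracle_is_error interface are illustrative simplifications, not the paper's exact algorithm.

import random

def discover_blind_spots(partitions, oracle_is_error, budget, eps=0.1):
    # partitions: dict mapping partition id -> list of unlabeled instances
    #             on which the model is highly confident
    # oracle_is_error(x): human check returning True when the confident
    #                     prediction on x is actually wrong
    hits = {p: 0 for p in partitions}    # errors discovered per partition
    pulls = {p: 1 for p in partitions}   # queries per partition (smoothed)
    found = []
    for _ in range(budget):
        if random.random() < eps:        # explore a random partition
            p = random.choice(list(partitions))
        else:                            # exploit the best error rate so far
            p = max(partitions, key=lambda q: hits[q] / pulls[q])
        if not partitions[p]:
            continue
        x = partitions[p].pop()
        pulls[p] += 1
        if oracle_is_error(x):
            hits[p] += 1
            found.append(x)              # a discovered blind-spot instance
    return found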
​
We are investigating algorithms, models and workflows for the effective use of human intelligence in debugging machine learning systems.
​
Link to our HCOMP 2018 paper: "Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure".
​
Link to our AAAI 2017 paper: "On Human Intellect and Machine Failures: Troubleshooting Integrative Machine Learning Systems".
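To give a flavor of this line of work, one way to characterize failures interpretably is to fit a shallow, human-readable model over human-annotated properties of each input and let it summarize where the system breaks. The decision tree and the toy annotations below are assumptions for illustration, not the papers' exact method.

import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["blurry", "crowded", "low_light"]   # human-reported properties
X = np.random.randint(0, 2, size=(300, 3))           # toy annotations
# 1 where the end-to-end system failed on the example, else 0 (synthetic)
failed = ((X[:, 0] & X[:, 2]) | (np.random.rand(300) < 0.05)).astype(int)

# a shallow tree yields readable rules such as "blurry and low_light -> fail"
tree = DecisionTreeClassifier(max_depth=3).fit(X, failed)
print(export_text(tree, feature_names=feature_names))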
SELECTED PRESS COVERAGE AND TALKS
Recording of our panel on AI and Automation from the Aspen Technology Forum 2017
Recording of my talk from Collective Intelligence 2017