Michael is currently pursuing his PhD in Computer Science. His research interests include automated reasoning, reasoning under uncertainty, and artificial general intelligence.
His current research focuses on uncertainty qualification (as contrasted with uncertainty quantification). That is, he is interested in creating AI agents that can qualify the uncertainty of their beliefs in a cognitively plausible way (e.g., “I believe it is overwhelmingly likely that formula phi holds.”) and revise those beliefs when they receive new information, which may be inconsistent with their prior beliefs.
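As a rough illustration of the idea (a hypothetical sketch, not Michael's actual system): an agent might attach qualitative confidence labels to beliefs and, when directly contradictory information arrives, retain whichever side has the stronger qualitative support. The confidence scale and revision rule below are invented for illustration only.

```python
from dataclasses import dataclass

# An assumed, illustrative ordering of qualitative confidence levels.
LEVELS = ["doubtful", "plausible", "likely", "overwhelmingly likely", "certain"]

@dataclass
class Belief:
    formula: str  # e.g. "phi"
    level: str    # one of LEVELS

class QualitativeAgent:
    """Toy agent holding qualitatively-qualified beliefs (hypothetical)."""

    def __init__(self):
        self.beliefs: dict[str, Belief] = {}

    def assert_belief(self, formula: str, level: str) -> None:
        """Adopt a belief; a direct contradiction (phi vs. not phi) is
        resolved by keeping the side with stronger qualitative support."""
        negation = formula[4:] if formula.startswith("not ") else "not " + formula
        prior = self.beliefs.get(negation)
        if prior is not None:
            if LEVELS.index(level) >= LEVELS.index(prior.level):
                del self.beliefs[negation]  # revise away the weaker prior
            else:
                return  # new information is weaker; retain the prior belief
        self.beliefs[formula] = Belief(formula, level)

    def report(self, formula: str) -> str:
        b = self.beliefs.get(formula)
        if b is None:
            return f"I have no belief about {formula}."
        return f"I believe it is {b.level} that {formula} holds."

agent = QualitativeAgent()
agent.assert_belief("phi", "overwhelmingly likely")
print(agent.report("phi"))  # I believe it is overwhelmingly likely that phi holds.
agent.assert_belief("not phi", "plausible")  # weaker contradiction: rejected
agent.assert_belief("not phi", "certain")    # stronger: prior belief is revised
print(agent.report("phi"))  # I have no belief about phi.
```

This toy resolution rule is one of many possibilities; real belief-revision frameworks (e.g. AGM-style revision or paraconsistent approaches) handle inconsistency far more carefully.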
His personal website (with a link to his CV) is here.