About me

I have worked at Amazon.com since 2022. My research includes developing reinforcement learning methods for industrial settings and the theoretical study of Markov decision processes and non-regular statistics. My CV is here.

Previously, I was a postdoctoral fellow at the University of North Carolina (UNC) at Chapel Hill, where I also did my Ph.D. in biostatistics. I am interested in developing statistical methods and theory. I like problem solving and building practical tools that enhance human life and science. My research interests fall into the following branches.

Reinforcement learning

Every decision-making problem, from deciding what to have for dinner to finding the best driving route to a destination, can be recast as a reinforcement learning (RL) problem, as formalized below. Many interesting and impactful applications lie out there: operational optimization in industry, the search for feasible drug molecules in pharmaceutical companies, and more. Why not use science and the growing amount of data to support optimal decisions? The field calls for a great deal of methodological development.
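
To make the recasting concrete: a decision problem is typically formalized as a Markov decision process with states \(s\), actions \(a\), rewards \(r\), and a discount factor \(\gamma\), and the optimal action-value function satisfies the standard Bellman optimality equation

\[
Q^*(s, a) = \mathbb{E}\!\left[\, r_{t+1} + \gamma \max_{a'} Q^*(s_{t+1}, a') \,\middle|\, s_t = s,\, a_t = a \right],
\]

so that acting greedily with respect to \(Q^*\) yields an optimal policy.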

Precision medicine, causal inference, and machine learning

Unlike traditional approaches in medicine, where drugs and treatments are developed in a one-size-fits-all spirit, precision medicine exploits patient heterogeneity to find the optimal treatment tailored to each individual. Such a regime is called an individualized treatment rule. These rules are often learned with machine learning techniques, such as random forests, kernel regression, and deep learning. The causal inference framework, in turn, provides the fundamental grounds for precision medicine.
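
In potential-outcome notation, with covariates \(X\), treatment \(A\), and outcome \(Y\) (larger is better), the optimal individualized treatment rule is

\[
d^*(x) = \arg\max_{a} \, \mathbb{E}\!\left[\, Y^*(a) \mid X = x \right],
\]

where \(Y^*(a)\) is the potential outcome under treatment \(a\). Under the usual causal assumptions (consistency, no unmeasured confounding, positivity), this equals \(\arg\max_a \mathbb{E}[Y \mid X = x, A = a]\), which machine learning methods can estimate from data.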

Identifying individualized treatment rules is itself a reinforcement learning problem, and interesting challenges arise especially when multiple stages of treatment are involved. Q-learning is one of the most popular reinforcement learning algorithms for this setting. More recently, V-learning (Luckett et al., 2017) was developed for a simple class of online Markov policies.
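
As a minimal sketch of the multi-stage idea, the Python code below fits a two-stage Q-learning estimator by backward induction on simulated data. The data-generating model, the linear Q-functions, and all variable names are illustrative assumptions, not the method of any particular paper.

```python
# A minimal sketch of two-stage Q-learning for an individualized
# treatment rule, using simulated data and linear Q-functions.
# The model and variable names here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Stage 1: baseline covariate X1, binary treatment A1.
X1 = rng.normal(size=n)
A1 = rng.integers(0, 2, size=n)

# Stage 2: covariate X2 depends on stage-1 history; treatment A2; final outcome Y.
X2 = 0.5 * X1 + A1 + rng.normal(size=n)
A2 = rng.integers(0, 2, size=n)
Y = X2 + A2 * (1.0 - X2) + rng.normal(size=n)  # A2 helps only when X2 < 1

def fit_linear_q(features, y):
    """Least-squares fit of a linear Q-function; returns the coefficient vector."""
    Z = np.column_stack([np.ones(len(y)), features])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return beta

def predict_q(beta, features):
    Z = np.column_stack([np.ones(features.shape[0]), features])
    return Z @ beta

# Backward induction. Stage 2: regress Y on (X2, A2, X2 * A2).
beta2 = fit_linear_q(np.column_stack([X2, A2, X2 * A2]), Y)

# Pseudo-outcome: the best achievable stage-2 value, maximized over a in {0, 1}.
q2 = np.stack([predict_q(beta2, np.column_stack([X2, np.full(n, a), X2 * a]))
               for a in (0, 1)])
pseudo = q2.max(axis=0)

# Stage 1: regress the pseudo-outcome on (X1, A1, X1 * A1).
beta1 = fit_linear_q(np.column_stack([X1, A1, X1 * A1]), pseudo)

def rule_stage2(x2):
    """Estimated stage-2 rule: pick the treatment with the larger fitted Q-value."""
    q = [predict_q(beta2, np.array([[x2, a, x2 * a]]))[0] for a in (0, 1)]
    return int(np.argmax(q))

print("recommended A2 at X2 = 0.0:", rule_stage2(0.0))  # expect 1
print("recommended A2 at X2 = 2.0:", rule_stage2(2.0))  # expect 0
```

The key step is the pseudo-outcome: the stage-1 regression targets the best achievable stage-2 value, so the stage-1 rule already accounts for optimal future treatment.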

Empirical processes and survival analysis

The empirical process framework is an incredibly useful and fundamental tool for investigating the theoretical behavior of statistical methods, including consistency, asymptotic normality, and rates of convergence. It extends the classical theory of asymptotics into a functional framework.
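
In standard notation, for i.i.d. observations \(X_1, \dots, X_n \sim P\) and a class of functions \(\mathcal{F}\), the empirical measure and empirical process are

\[
\mathbb{P}_n f = \frac{1}{n} \sum_{i=1}^{n} f(X_i), \qquad \mathbb{G}_n f = \sqrt{n}\,(\mathbb{P}_n - P)f,
\]

and when \(\mathcal{F}\) is a Donsker class, \(\mathbb{G}_n\) converges weakly in \(\ell^\infty(\mathcal{F})\) to a tight Gaussian process: the classical central limit theorem for a single \(f\) becomes a uniform statement over the whole class.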

Survival analysis is widely used in biomedical research. Censoring often occurs in survival data and brings unique challenges to the analysis. Many survival analysis problems can be analyzed well with empirical process theory.
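
The canonical example is the Kaplan–Meier estimator of the survival function under right censoring,

\[
\hat{S}(t) = \prod_{i:\, t_i \le t} \left( 1 - \frac{d_i}{n_i} \right),
\]

where the \(t_i\) are the distinct observed event times, \(d_i\) is the number of events at \(t_i\), and \(n_i\) is the number of subjects at risk just before \(t_i\); its uniform consistency and weak convergence are classical applications of empirical process theory.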

Genomics

Genomic information has become accessible at a reasonable cost, which means that more and more data are generated at an ever faster rate. This growing volume of data calls for new methodology in genomics. I develop methods for, and analyze, single-cell RNA sequencing data and microbiome data.
