Mark P. Van Oyen, Professor

Industrial and Operations Engineering, Rm 2853

University of Michigan

Ann Arbor, MI 48109-2117


Selected recent research:

  • Contextual online learning for medical decision making in chronic diseases: Chronic diseases are the leading cause of mortality and disability worldwide, requiring surveillance and monitoring of patients to assess disease progression and determine whether a treatment regime is warranted. Even when a suitable treatment is prescribed, dosing it correctly remains a significant challenge because the proper dosage varies widely among patients. Our approach adaptively learns a personalized disease-progression control model conditioned on patient-specific contextual information, which we formulate as a new contextual multi-armed bandit learning model.
    • Keyvanshokooh, E., Zhalechian, M., Shi, C., Van Oyen, M. Marrying Contextual Learning with Online Sub-gradient Descent. 
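The paper's own algorithm is not reproduced here, but the contextual-bandit setup it builds on can be sketched with a standard LinUCB-style learner (an illustrative sketch only; the class name, the `alpha` exploration parameter, and the per-arm ridge-regression design are generic textbook choices, not the method of the paper):

```python
import numpy as np

class LinUCB:
    """Minimal LinUCB-style contextual bandit (illustrative sketch only).

    Each arm keeps a ridge-regression estimate of reward as a linear
    function of the context; arms are chosen by an upper confidence bound.
    """

    def __init__(self, n_arms, dim, alpha=1.0):
        self.alpha = alpha
        # Per arm: A is the d x d regularized design matrix, b the response vector.
        self.A = [np.eye(dim) for _ in range(n_arms)]
        self.b = [np.zeros(dim) for _ in range(n_arms)]

    def choose(self, context):
        scores = []
        for A, b in zip(self.A, self.b):
            A_inv = np.linalg.inv(A)
            theta = A_inv @ b                                  # ridge estimate
            bonus = self.alpha * np.sqrt(context @ A_inv @ context)
            scores.append(context @ theta + bonus)             # UCB score
        return int(np.argmax(scores))

    def update(self, arm, context, reward):
        self.A[arm] += np.outer(context, context)
        self.b[arm] += reward * context
```

In a medical-decision reading, the context would be patient-specific covariates and each arm a candidate treatment or dosing action; the confidence bonus shrinks as an arm accumulates observations.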

  • Personalized recommendation system under limited resources and delayed feedback: Consider sequentially offering ads to a population of interested consumers. The marketing budget/resources available over time are limited. A special feature is that feedback on the success of an ad arrives only after a delay. We create a contextual bandit learning algorithm with theoretical guarantees for dynamically learning user-specific behavior and generating personalized recommendations in the presence of dynamic constraints and delayed feedback.
    • Zhalechian, M., Shi, C., Van Oyen, M. Personalized Recommendation System under Limited Resources and Delayed Feedback: A Contextual Learning Approach. 
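The interaction of a budget constraint with delayed feedback can be illustrated with a toy epsilon-greedy loop (a generic sketch under my own assumptions, not the paper's algorithm; the function name, the fixed `delay`, and the epsilon-greedy rule are all illustrative):

```python
import random
from collections import deque

def delayed_bandit(n_arms, horizon, budget, delay, reward_fn, eps=0.1, seed=0):
    """Toy epsilon-greedy bandit with a resource budget and delayed feedback.

    Each offer consumes one unit of budget; the reward for a pull only
    becomes visible `delay` rounds later, so estimates lag behind play.
    """
    rng = random.Random(seed)
    counts = [0] * n_arms
    means = [0.0] * n_arms
    pending = deque()            # (arrival_round, arm, reward)
    total = 0.0
    for t in range(horizon):
        # Incorporate feedback that has arrived by round t.
        while pending and pending[0][0] <= t:
            _, a, r = pending.popleft()
            counts[a] += 1
            means[a] += (r - means[a]) / counts[a]
        if budget <= 0:
            break                # resources exhausted: stop offering
        # Epsilon-greedy choice on the (lagged) reward estimates.
        if rng.random() < eps or all(c == 0 for c in counts):
            arm = rng.randrange(n_arms)
        else:
            arm = max(range(n_arms), key=lambda a: means[a])
        budget -= 1
        r = reward_fn(arm, rng)
        total += r
        pending.append((t + delay, arm, r))
    return total, means
```

The pending queue makes the core difficulty visible: during the delay window the learner keeps committing budget based on estimates that do not yet reflect its most recent choices.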

  • Reinforcement learning under non-stationary kernelized Markov decision processes: We develop a reinforcement learning algorithm that minimizes regret in a non-stationary Markov decision process (MDP) with continuous states. Past observations can become obsolete as the environment changes, so the algorithm must explore such changes and avoid exploiting possibly outdated observations. This challenge arises in many real-world applications, including medical decision making for chronic diseases: decisions made in the present are not revealed to be good or bad until a future time, so a reinforcement learning algorithm is needed to capture long-term value, and in practice a patient's behavior is non-stationary and can change over time. Our algorithm enjoys sub-linear regret and provides an online policy for the non-stationary kernelized MDP setting.
    • Zhalechian, M., Shi, C., Van Oyen, M. Non-Stationary Reinforcement Learning for Kernelized MDP. Working Paper.
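The "avoid exploiting outdated observations" idea can be illustrated, in its simplest form, by a sliding-window estimator that forgets old data (a minimal sketch of the forgetting principle only; the actual paper concerns kernelized MDPs, not this scalar estimator):

```python
from collections import deque

class SlidingWindowEstimator:
    """Sketch of the 'forgetting' idea behind non-stationary learning.

    Only the most recent `window` observations are kept, so the estimate
    tracks an environment whose reward function drifts over time, instead
    of averaging over obsolete history.
    """

    def __init__(self, window):
        self.buf = deque(maxlen=window)   # old observations fall out automatically

    def add(self, value):
        self.buf.append(value)

    def estimate(self):
        return sum(self.buf) / len(self.buf) if self.buf else 0.0
```

If the environment shifts (say, observations jump from around 0 to around 1), the windowed estimate follows the new regime while a full-history average would be stuck near the midpoint.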

Research Interests:

  • Healthcare Operations and Delivery; Operational systems engineering
  • Medical decision making: monitoring for progression and “control” of controllable risk factors (application to glaucoma) – machine learning approaches; Kalman filtering and control; clinical decision support tools
  • Hospital readmission prediction and reduction
  • Surgical planning and scheduling for access delay control; robustness approaches.
  • Mortality risk based admissions/routing control to ICUs and intermediate/progressive care units
  • Capacity management and planning and scheduling for outpatient care, particularly in integrated services networks
  • Planning and scheduling for Clinical Research and Clinical Research Units
  • Emergency Department redesign for improved patient flow
  • Resilient supply chains: design of flexibility in supply chains and operations; designing flexible backup supplier systems; flexible operations for shipbuilding and maintenance.

Research methodologies incorporated:

  • Stochastic processes, stochastic control, simulation, queueing networks, applied probability, Markov decision processes, sample path methods
  • Machine learning, online learning, online optimization
  • Medical decision making models based on state space modeling, Kalman filtering, Linear Quadratic Gaussian (LQG) control (a class of optimal stochastic estimation and control problems).
  • Mathematical programming:  emphasis on novel stochastic models of operations planning and control that are converted into math programs as a vehicle for numerical solution to otherwise intractable models.
  • Systems theory including inventory, supply chain, production, service science
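
To make the state-space/Kalman-filtering methodology above concrete, here is a minimal one-dimensional Kalman filter (an illustrative sketch, not a model from the papers above; the function name and the noise-variance parameters `q` and `r` are assumptions):

```python
def kalman_1d(measurements, q=1e-3, r=0.1, x0=0.0, p0=1.0):
    """Minimal scalar Kalman filter (illustrative sketch).

    Tracks a slowly drifting quantity (e.g., a disease-progression marker)
    from noisy measurements. q = process-noise variance, r = measurement-
    noise variance, x0/p0 = prior mean and variance.
    """
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                  # predict: uncertainty grows over time
        k = p / (p + r)            # Kalman gain: trust in the new measurement
        x = x + k * (z - x)        # update the state estimate with z
        p = (1 - k) * p            # posterior uncertainty shrinks
        estimates.append(x)
    return estimates
```

In the LQG setting, this estimator would be paired with a linear-quadratic controller acting on the filtered state; the sketch shows only the estimation half.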

My teaching emphasizes applied probability and its use in operations engineering and management (queueing networks and their optimization and application, IOE 545) and healthcare/hospital systems improvement (e.g., IOE 481 Practicum in Hospital Systems).

Awards & News

  • 2019 President-elect of the INFORMS Health Applications Society (HAS) & 2019 INFORMS Annual Conference Cluster Chair
  • Student Esmaeil Keyvanshokooh awarded U. Mich. Rackham Predoctoral Fellowship 2019-20.
  • Invited to present the 2019 Vinod Sahney Distinguished Lecture on Health Systems Innovation, Northeastern University
  • Honorable mention, IISE Transactions 2019 best applications paper in operations engineering and analytics, for “Surgery Scheduling with Recovery Resources,” Bam, M., B.T. Denton, and M.P. Van Oyen.
  • Research paper that (a) won the 2016 Manufacturing and Service Operations Management (MSOM) Best Paper award, (b) won the 2016 MSOM Service S.I.G. paper competition,  and (c) was unanimously selected by the journal’s editors as 1 of the 5 best papers through 2015 in MSOM for distribution to Deans and Department Heads internationally to promote the journal:  Saghafian, S., W. Hopp, M.P. Van Oyen, J.S. Desmond, and S. Kronick, Complexity-Augmented Triage: A Tool for Improving Patient Safety and Operational Efficiency, Manufacturing and Service Operations Management (MSOM), 16:3, (2014) 329-45.
  • First Place, Best Paper Award, 2016 College of Healthcare Operations Management of the Production and Operations Management Society (POMS), for “Dynamic Personalized Monitoring and Treatment Control of Glaucoma,” P. Kazemian, Jonathan Helm, Mariel Lavieri, Joshua Stein, Mark Van Oyen.
    • Advisee Pooyan Kazemian earned finalist standing in the 2015 IBM service science competition; an earlier version was also a finalist presentation for the INFORMS Decision Analysis Society (DAS) Practice Award at the 2014 INFORMS Conference.
  • My Ph.D. student, Pooyan Kazemian (now at Internal Medicine, Harvard Univ.), earned 2nd place in the INFORMS 2016 George B. Dantzig Dissertation Award. Mariel Lavieri and Jonathan Helm were also particularly important mentors on his dissertation.

Current and former PhD students include:

  • Eungab Kim (Professor and former Dean, Ewha University, Seoul, Korea)
  • Esma Gel (Professor and group lead, School of Computing, Informatics, and Decision Systems Engineering, Arizona State University)
  • Eylem Tekin (Rice University)
  • Damon Williams (multiple positions, including Adj. Prof. of Industrial and Systems Engineering, GA Tech)
  • Jonathan E. Helm (Assoc. Prof, Kelley School of Business, Indiana U.)    
  • Soroush Saghafian (Asst. Professor, Harvard Kennedy School)
  • Hoda Parvin (Oper. Res. Scientist at Amazon & formerly Mathematics & Statistics, Georgetown U.)
  • Fang Dong (Lead Statistician at Merkle, formerly Research analyst with Ford Credit)
  • Pooyan Kazemian, (research fellow at the Massachusetts General Hospital (MGH) and Harvard Medical School (HMS) – Medical Practice Evaluation Center)
  • Jivan Deglise-Hawkinson, Ph.D.  (Revenue Management-Operations Research, American Airlines)
  • Maya Bam, Ph.D. candidate, Co-Chair with B. Denton (GM Research)
  • Amirhossein Meisami, Ph.D. (Data Scientist at Adobe Research)
  • Esmaeil Keyvanshokooh, Ph.D. Candidate
  • Mohammad (Pedram) Zhalechian, Ph.D. Candidate
  • Isaac A. Jones, Ph.D. Candidate
  • Minsuk (John) Suh  – coadvisor (Professor, Graduate School of Technology and Innovation Management, Hanyang University, Seoul, Korea)
  • Luz Adriana Caudillo Fuentes – coadvisor  (Adjunct professor, Anahuac University; formerly The Walt Disney Company)
  • Shervin Beygi (AhmadBeygi), Ph.D. – post-doc (Data Scientist at Boeing, formerly Veterans Administration – VA)