In operating rooms, excessive cognitive stress can impair a surgeon's performance, while low engagement can lead to avoidable mistakes due to complacency. Consequently, there is strong interest in the surgical community in monitoring and quantifying a surgeon's cognitive stress during surgical procedures. Quantitative cognitive-load-based feedback can also provide valuable insights during surgical training, helping to optimize training efficiency and effectiveness. Various physiological measures have been evaluated for quantifying cognitive stress under different mental challenges. In this paper, we present a study that uses cognitive stress, measured via the task-evoked pupillary response extracted from time-series eye-tracking measurements, to predict task difficulty in a virtual-reality-based robotic surgery training environment. In particular, we propose a differential-task-difficulty scale, apply a comprehensive feature-extraction approach, implement a multitask learning framework, and compare the regression accuracy of the conventional single-task approach with three multitask approaches across subjects.
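To illustrate the kind of single-task versus multitask regression comparison described above, the following sketch contrasts per-subject single-task models with a hard-parameter-sharing multitask model (one shared trunk, one regression head per subject). This is only a minimal sketch under assumed names, shapes, and synthetic data; it is not the paper's actual feature set, architecture, or evaluation protocol.

```python
# Hedged sketch: hard-parameter-sharing multitask regression vs. per-subject
# single-task baselines. All sizes, the synthetic data, and the architecture
# below are illustrative assumptions, not the study's actual pipeline.
import torch
import torch.nn as nn

torch.manual_seed(0)

N_SUBJECTS, N_SAMPLES, N_FEATURES = 3, 200, 16   # assumed sizes

# Synthetic stand-in for pupillary-response features and difficulty labels.
X = {s: torch.randn(N_SAMPLES, N_FEATURES) for s in range(N_SUBJECTS)}
y = {s: X[s] @ torch.randn(N_FEATURES, 1) + 0.1 * torch.randn(N_SAMPLES, 1)
     for s in range(N_SUBJECTS)}

class MultiTaskNet(nn.Module):
    """Shared trunk with one regression head per subject (task)."""
    def __init__(self, n_features, n_tasks, hidden=32):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(hidden, 1) for _ in range(n_tasks))

    def forward(self, x, task):
        return self.heads[task](self.trunk(x))

def train(model, loss_fn, epochs=200, lr=1e-2):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn().backward()
        opt.step()

# Multitask: one model, loss summed over all subjects' tasks.
mt = MultiTaskNet(N_FEATURES, N_SUBJECTS)
train(mt, lambda: sum(nn.functional.mse_loss(mt(X[s], s), y[s])
                      for s in range(N_SUBJECTS)))

# Single-task baseline: an independent model per subject.
st = {s: MultiTaskNet(N_FEATURES, 1) for s in range(N_SUBJECTS)}
for s in range(N_SUBJECTS):
    train(st[s], lambda s=s: nn.functional.mse_loss(st[s](X[s], 0), y[s]))

# Compare (training) regression error per subject for the two settings.
with torch.no_grad():
    for s in range(N_SUBJECTS):
        mse_mt = nn.functional.mse_loss(mt(X[s], s), y[s]).item()
        mse_st = nn.functional.mse_loss(st[s](X[s], 0), y[s]).item()
        print(f"subject {s}: multitask MSE {mse_mt:.3f}, single-task MSE {mse_st:.3f}")
```

The design point the sketch is meant to convey: the multitask model pools information across subjects through the shared trunk while keeping subject-specific heads, whereas the single-task baseline fits each subject in isolation; a proper evaluation would of course use held-out data rather than training error.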