A Single Shot State Detection (SSSD) method is proposed to support a laparoscopic surgery skills training system, the Computer-Assisted Surgical Trainer (CAST). CAST actively assists a trainee with visual, audio, or force guidance during different surgical practice tasks. In each task, guidance is provided according to the state of the target object, so state detection is one of the key components of CAST. We propose SSSD, which uses deep neural networks to detect object states in a single image. We first model semantic objects that encode object states for a given training task, and then apply a deep learning algorithm, the Single Shot Detector (SSD), to detect these semantic objects. The contribution of this research is a unified object state model combined with a deep learning object detector, which can be applied to the surgical training simulator as well as to other visual sensing and automation systems.
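The mapping from single-image detections of semantic objects to a task state can be sketched as follows. This is a minimal illustration, not the paper's implementation: the labels, thresholds, and state rules below are hypothetical placeholders for whatever semantic objects a given CAST task defines.

```python
from typing import List, Tuple

# Each detection from the SSD-style detector: (semantic object label, confidence).
# Labels here are illustrative only.
Detection = Tuple[str, float]

def detect_state(detections: List[Detection], threshold: float = 0.5) -> str:
    """Infer a coarse task state from which semantic objects are visible
    in a single frame (hypothetical rules for a peg-transfer-style task)."""
    visible = {label for label, score in detections if score >= threshold}
    if "peg_on_tool" in visible:
        return "transferring"
    if "peg_on_board" in visible:
        return "idle"
    return "unknown"

print(detect_state([("peg_on_tool", 0.9), ("tool_tip", 0.8)]))  # transferring
```

Because each state is tied to the presence of specific semantic objects, a single detector pass per frame suffices; no temporal tracking is needed for this coarse state estimate.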