Semi-Automatic Video Segmentation to Support Eye Gaze Research in Autism
Analyzing the eye gaze pattern of a child looking at a video is helpful for studying preferential looking in children with ASD. However, such analysis requires manual annotation of the video, which is a labor-intensive and time-consuming process (several hours of human labor are typically needed to annotate a 1-2 minute video). In this work, we propose a technique to significantly speed up the video annotation process. Our approach requires only a few minutes of human interaction, and about two hours of computational time, to process a 1-2 minute video.
Arridhana Ciptadi, Agata Rozga, Gregory D. Abowd, and James M. Rehg. Semi-Automatic Video Segmentation to Support Eye Gaze Research in Autism. Presented at the Expeditions Research Annual Meeting. Boston, MA, September 2011.
The authors would like to thank the Marcus Autism Center for providing the videos used in this work. Portions of this work were supported in part by NSF Expedition Award number 1029679.
The documents contained in these directories are included by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without explicit permission of the copyright holder.