Minimizing Human Effort in Interactive Tracking by Incremental Learning of Model Parameters
The past decade has seen an explosive growth of video data. The ability to easily annotate and track objects in videos has the potential for tremendous impact across multiple application domains. For example, in computer vision, annotated video data is an extremely valuable source of information for training and evaluating object detectors (video provides a continuous view of how an object's appearance changes due to viewpoint effects). In sports, video-based analytics is becoming increasingly popular. In behavioral science, video has been used to assist the coding of children's behavior (e.g., for studying infant attachment, typical development, and autism).
Arridhana Ciptadi and James M. Rehg. Minimizing Human Effort in Interactive Tracking by Incremental Learning of Model Parameters. In Proc. IEEE Intl. Conf. on Computer Vision (ICCV 2015), Santiago, Chile, December 2015.
To be released
The authors would like to thank Dr. Daniel Messinger and The Early Play and Development Laboratory at the University of Miami for providing the videos used in the Infant-Mother Interaction Dataset. Portions of this work were supported in part by NSF Expedition Award number 1029679 and the Intel Science and Technology Center in Embedded Computing.
The documents contained in these directories are included by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a non-commercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without explicit permission of the copyright holder.