This paper is published in Volume-3, Issue-2, 2017
Area
Computer Science And Engineering
Author
Harun Adam Shaikh, Dhanashree Arote, Trupti Phatak, Prof. Nilima Nikam
Org/Univ
Yadavrao Tasgaonkar Institute of Engineering and Technology, Karjat, Maharashtra, India
Keywords
Segmentation, Foreground, Background.
Citations
IEEE
Harun Adam Shaikh, Dhanashree Arote, Trupti Phatak, Prof. Nilima Nikam, "Live Video Segmentation and Tracking Video Object," International Journal of Advance Research, Ideas and Innovations in Technology, vol. 3, no. 2, 2017, www.IJARIIT.com.
APA
Harun Adam Shaikh, Dhanashree Arote, Trupti Phatak, Prof. Nilima Nikam (2017). Live Video Segmentation and Tracking Video Object. International Journal of Advance Research, Ideas and Innovations in Technology, 3(2). www.IJARIIT.com.
MLA
Harun Adam Shaikh, Dhanashree Arote, Trupti Phatak, Prof. Nilima Nikam. "Live Video Segmentation and Tracking Video Object." International Journal of Advance Research, Ideas and Innovations in Technology 3.2 (2017). www.IJARIIT.com.
Abstract
This work focuses on segmenting the foreground from the background to extract the region of interest. The process not only segments the region of interest but also tracks the activity of the person. It is evaluated on standard datasets, such as the UCLA and VIRAT datasets, whose videos are processed for both segmentation and activity tracking. The model provides a generic representation of an activity sequence that can extend to any number of objects and interactions in a video. We show that the recognition of activities in a video can be posed as an inference problem on the graph. In this paper, rather than modeling activities in videos individually, we jointly model and recognize related activities in a scene using both motion and context features. This is motivated by the observation that activities related in space and time rarely occur independently and can serve as context for each other. We propose a two-layer conditional random field model that represents action segments and activities in a hierarchical manner. The model allows the integration of motion and various context features at different levels and automatically learns the statistics that capture the patterns of these features.
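As a minimal illustration of the foreground/background segmentation and per-object tracking stage the abstract describes, the sketch below uses OpenCV's Gaussian-mixture background subtractor to extract moving regions and box the resulting candidate objects. This is an assumed OpenCV-based pipeline, not the paper's actual method (the two-layer conditional random field model is not reproduced here), and the video path is a placeholder for a clip from the UCLA or VIRAT datasets.

```python
# Sketch: foreground/background segmentation via background subtraction,
# followed by simple per-frame detection of moving objects. Assumes OpenCV;
# the paper's two-layer CRF activity model is not shown here.
import cv2

# Placeholder path: substitute a clip from the UCLA or VIRAT datasets.
cap = cv2.VideoCapture("virat_clip.mp4")

# Gaussian-mixture background model; shadows are marked in gray (value 127).
back_sub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                              detectShadows=True)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Foreground mask: nonzero where pixels deviate from the background model.
    fg_mask = back_sub.apply(frame)

    # Drop shadow pixels (127) and suppress speckle noise with morphology.
    _, fg_mask = cv2.threshold(fg_mask, 200, 255, cv2.THRESH_BINARY)
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, kernel)

    # Each remaining connected component is a candidate moving object.
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 500:  # ignore tiny blobs
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("tracked foreground", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```

The per-frame bounding boxes produced this way would serve only as input observations; recognizing the activities themselves would then be posed, as the abstract states, as inference over a graph linking action segments and activities.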