Local Feature-Based Descriptors and Their Applications
This paper presents a study of SIFT (Scale-Invariant Feature Transform), a method for extracting distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene. The features are invariant to image scaling, translation, and rotation, and partially invariant to illumination changes and affine or 3D projection. The paper also studies SURF (Speeded-Up Robust Features), which speeds up SIFT's detection process without sacrificing the quality of the detected points. SURF approximates or even outperforms previously proposed schemes with respect to repeatability, distinctiveness, and robustness, yet can be computed and compared much faster.
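The matching step the abstract refers to is commonly done with nearest-neighbour search over descriptor vectors plus Lowe's ratio test. The following is a minimal pure-NumPy sketch of that matching stage only (it assumes 128-dimensional SIFT-style descriptors are already computed; the function name and toy data are illustrative, not from the paper):

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Match SIFT-style descriptor sets with Lowe's ratio test.

    desc_a, desc_b: (N, 128) arrays of feature descriptors.
    A match (i, j) is kept only when the nearest neighbour in
    desc_b is clearly closer than the second-nearest one.
    """
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)  # Euclidean distances
        j, k = np.argsort(dists)[:2]                # two nearest neighbours
        if dists[j] < ratio * dists[k]:             # ratio test
            matches.append((i, j))
    return matches

# Toy example: the second set is a slightly perturbed copy of the first,
# so each descriptor should match its own counterpart.
rng = np.random.default_rng(0)
a = rng.normal(size=(4, 128))
b = a + rng.normal(scale=0.01, size=(4, 128))
print(match_descriptors(a, b))  # -> [(0, 0), (1, 1), (2, 2), (3, 3)]
```

In practice a library implementation (e.g. OpenCV's feature matchers) would replace the brute-force loop, but the ratio-test logic is the same.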
Published by: Ridhi Jindal
Author: Ridhi Jindal
Paper ID: M1P3-1137
Paper Status: published
Published: January 3, 2015
A Beam Technology on the Pen
Today we see projectors everywhere, built on a variety of techniques, which has made them costly for the user. We therefore present work on a beam, or halo, technique applied to a pen. This technology projects information onto walls: the main idea behind the implementation is to apply a beam or halo technology to the pen, after which the pen can be used to project information onto a wall. The projector used is an LCD projector.
Published by: Pradeep M, B.C Arjun
Author: Pradeep M
Paper ID: M1P2-1145
Paper Status: published
Published: November 20, 2014
Classifications & Misclassifications of EEG Signals using Linear and AdaBoost Support Vector Machines
Epilepsy is one of the most frequent brain disorders, caused by transient and unexpected electrical disturbances of the brain. Electroencephalography (EEG) is one of the most clinically and scientifically exploited signals recorded from humans, and it is a very complex signal. EEG signals are non-stationary, as they change over time, so the discrete wavelet transform (DWT) is used for feature extraction. Classifications and misclassifications of EEG signals by a linearly separable support vector machine are shown using training and testing datasets. An AdaBoost support vector machine is then used to obtain a strong classifier.
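The DWT feature-extraction step described above can be sketched with a single-level Haar transform applied recursively; in practice a wavelet library (e.g. PyWavelets with a Daubechies wavelet) would be used, and the summary statistics chosen here (mean absolute value and standard deviation per sub-band) are one common but assumed choice, not necessarily the paper's:

```python
import numpy as np

def haar_dwt(signal):
    """One level of the discrete Haar wavelet transform.

    Returns (approximation, detail) coefficients; the detail
    coefficients capture fast transients such as epileptic spikes.
    """
    x = np.asarray(signal, dtype=float)
    even, odd = x[0::2], x[1::2]
    approx = (even + odd) / np.sqrt(2)  # low-pass sub-band
    detail = (even - odd) / np.sqrt(2)  # high-pass sub-band
    return approx, detail

def dwt_features(signal, levels=3):
    """Summarise each sub-band by mean absolute value and standard
    deviation -- a feature vector that can be fed to an SVM."""
    feats, current = [], np.asarray(signal, dtype=float)
    for _ in range(levels):
        current, detail = haar_dwt(current)
        feats += [np.mean(np.abs(detail)), np.std(detail)]
    feats += [np.mean(np.abs(current)), np.std(current)]
    return np.array(feats)

# A 256-sample synthetic "EEG" segment: a slow wave plus a sharp transient.
t = np.arange(256)
eeg = np.sin(2 * np.pi * t / 64.0)
eeg[100] += 5.0  # spike-like event
print(dwt_features(eeg))  # 8 features: 3 detail sub-bands + 1 approximation
```

Because the Haar transform is orthonormal, the energy of the input is preserved across the sub-bands, which is why sub-band statistics are informative features.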
Published by: Neelam Rout
Author: Neelam Rout
Paper ID: M1P2-1150
Paper Status: published
Published: November 20, 2014
Dynamic Load Calculation in a Distributed System using Centralized Approach
The building of networks and the establishment of communication protocols have led to distributed systems, in which computers linked in a network cooperate on a task. The master node divides the task into small parts (sub-problems) and distributes them to the nodes of the distributed system to solve, which yields better time performance than solving the problem on a single machine. Load balancing is the process of redistributing the workload among the nodes of a distributed system to improve both resource utilization and job response time, while avoiding a situation in which some nodes are heavily loaded while others are idle or doing little work. Before sending these parts of the problem to the nodes, the master node should therefore know the actual workload of every node. We try a dynamic approach in which the master finds out the workload of each participating node in the distributed system before sending out the parts of the problem. This paper describes an algorithm that runs on the master machine, collects information from the nodes of the distributed system (a client-server application), and calculates their current workloads. The algorithm is designed so that it can calculate the loads of the nodes dynamically; that is, the loads can be evaluated when nodes are added or removed, or while the nodes are working. The whole system is implemented on Linux machines over a local area network.
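The master's load-collection step can be sketched as follows. This is a minimal illustration under assumed conditions: each node reports CPU and memory utilisation in the range 0.0-1.0 (in the real system these reports would arrive over sockets on the LAN, and the weighted scoring below is an assumption, not the paper's formula):

```python
def load_score(cpu, mem, cpu_weight=0.6, mem_weight=0.4):
    """Combine CPU and memory utilisation into one load figure.
    The weights are illustrative; any monotone combination works."""
    return cpu_weight * cpu + mem_weight * mem

def collect_loads(reports):
    """Master-side step: compute the current load of every known node.
    `reports` maps node name -> (cpu, mem); because the mapping is
    re-read on every round, nodes may join or leave dynamically."""
    return {node: load_score(cpu, mem) for node, (cpu, mem) in reports.items()}

def pick_least_loaded(loads, count):
    """Choose the `count` least-loaded nodes to receive sub-problems."""
    return sorted(loads, key=loads.get)[:count]

# Simulated reports from three nodes on the LAN.
reports = {"node1": (0.9, 0.7), "node2": (0.2, 0.3), "node3": (0.5, 0.4)}
loads = collect_loads(reports)
print(pick_least_loaded(loads, 2))  # -> ['node2', 'node3']
```

Re-running `collect_loads` on every scheduling round is what makes the calculation dynamic: a newly added node simply appears in the next round's reports, and a removed node disappears from them.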
Published by: Biswajit Sarma, Srishti Dasgupta
Author: Biswajit Sarma
Paper ID: M1P2-1146
Paper Status: under-process
Submitted: November 20, 2014
A Survey Report on Data Mining Based on Web Research
Web data mining is an important area of data mining that deals with the extraction of interesting knowledge from the World Wide Web. It is defined as the application of data mining techniques to extract knowledge from web data, including web documents, hyperlinks between documents, usage logs of web sites, etc. Web content mining is the process of extracting useful information from the contents of web documents. Content data is the collection of facts a web page is designed to contain; it may consist of text, images, audio, video, or structured records such as lists and tables. The data used for web content mining includes both textual and graphical data. Content mining is divided into two parts: webpage content mining and search-result mining. This paper covers information retrieval and information extraction from the web and surveys research in data mining.
Published by: Gaurav Saini
Author: Gaurav Saini
Paper ID: M1P2-1140
Paper Status: published
Published: November 20, 2014
A Survey on Web Research for Data Mining
Web mining is the application of data mining techniques to extract knowledge from web data, including web documents, hyperlinks between documents, usage logs of web sites, etc. Extracting useful information from the contents of web documents is one such process. Content data is the collection of facts a web page is designed to contain; it may consist of text, images, audio, video, or structured records such as lists and tables. As a large, dynamic, structurally complex, and ever-growing information source, the World Wide Web is fertile ground for data mining principles, or web mining. This paper covers information retrieval and information extraction from the web and surveys research in data mining.
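The two raw inputs both abstracts mention, textual content and the hyperlinks between documents, can be harvested from a page with nothing more than the Python standard library. A minimal sketch (the class name and sample page are illustrative, not from either paper):

```python
from html.parser import HTMLParser

class ContentMiner(HTMLParser):
    """Collect the two basic inputs of web mining from one page:
    visible text (for content mining) and outgoing hyperlinks
    (for structure and usage mining)."""

    def __init__(self):
        super().__init__()
        self.links, self.text = [], []

    def handle_starttag(self, tag, attrs):
        if tag == "a":  # record every hyperlink target
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        if data.strip():  # keep only non-blank text runs
            self.text.append(data.strip())

page = ('<html><body><h1>Data Mining</h1>'
        '<p>See <a href="https://example.org/survey">the survey</a>.</p>'
        '</body></html>')
miner = ContentMiner()
miner.feed(page)
print(miner.links)  # hyperlink targets
print(miner.text)   # textual content runs
```

A real crawler would fetch pages with an HTTP client and follow the collected links, but the separation of content data from link structure shown here is the core of the division the abstracts describe.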
Published by: Gaurav Saini
Author: Gaurav Saini
Paper ID: M1P1-1145
Paper Status: published
Published: November 16, 2014