Software Defect Prediction Using Support Vector Machine
Developing a defect-free software system is very difficult, and most of the time there are unknown bugs or unforeseen deficiencies even in software projects where the principles of software development methodologies were applied carefully. Due to defective software modules, the maintenance phase of software projects can become painful for users and costly for enterprises. In previous work, the original data was taken with all 21 features, and this high dimensionality increases the complexity of processing. The boundary decision for the software defect predictor was ignored, because boundary conditions are not detected by the previously used classifier. Feature compaction was not considered, so information overlaps and the prediction error increases. Previous approaches were also unable to train a component-based classifier, which results in higher prediction error.
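The classifier named in the title can be illustrated with a minimal linear SVM trained by sub-gradient descent on the hinge loss. The toy two-metric dataset and all parameter values below are invented for illustration; they are not the paper's 21-feature data, and the paper's actual classifier and preprocessing are not reproduced here.

```python
def train_linear_svm(X, y, lr=0.05, lam=0.01, epochs=1000):
    """Train a linear SVM by sub-gradient descent on the regularized hinge loss.

    X: list of feature vectors; y: labels in {-1, +1} (+1 = defective module).
    Returns the weight vector w and bias b.
    """
    dim = len(X[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:
                # sample violates the margin: hinge-loss + regularization step
                w = [wj - lr * (lam * wj - yi * xj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:
                # correctly classified outside the margin: shrinkage only
                w = [wj - lr * lam * wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Hypothetical scaled code metrics (e.g. complexity, LOC) per module;
# modules with high values of both are labelled defective (+1).
X = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.25], [0.8, 0.9], [0.9, 0.8], [0.85, 0.75]]
y = [-1, -1, -1, 1, 1, 1]
w, b = train_linear_svm(X, y)
```

In practice a defect predictor would first reduce the 21 raw metrics (for instance via feature selection or compaction, which the paper notes was missing in prior work) before training the classifier.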
Published by: Ramandeep Kaur, Harpreet Kaur
Author: Ramandeep Kaur
Paper ID: V3I1-1197
Paper Status: published
Published: January 13, 2017
Measurement of Density and Specific Heat Capacity of Different Nanofluids
This paper presents measurements of the density and specific heat capacity of different nanofluids over a temperature range of 30 °C to 50 °C for several particle volume concentrations. Specific heat values are reported for three nanofluids containing aluminum oxide, zinc oxide, and silicon dioxide nanoparticles. The work was carried out over the temperature range 313 K to 353 K (40 °C to 80 °C) at varying volume concentrations (1%, 2%, 3%, 4%), which covers the typical operating range of automotive coolants and building heating fluids. The measured values are compared with existing correlations for the specific heat of nanofluids. A general correlation was then developed from the present set of measurements for the specific heat as a function of particle volume fraction, temperature, and the specific heats of both the particle and the base fluid. The effect of nanoparticle interaction was tested for CuO, Al2O3, and ZnO in a eutectic mixture of sodium and potassium nitrate. Results demonstrated an enhancement in specific heat capacity (Cp) for these nanoparticles.
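Comparisons with existing correlations usually start from the classical thermal-equilibrium mixing rules for nanofluid density and specific heat. The sketch below implements those standard rules only; the property values are rounded handbook figures for Al2O3 in water, not data from this paper, and the paper's newly developed correlation is not reproduced here.

```python
def nanofluid_density(phi, rho_p, rho_bf):
    """Mixture density (kg/m^3): rho_nf = phi*rho_p + (1 - phi)*rho_bf."""
    return phi * rho_p + (1 - phi) * rho_bf

def nanofluid_specific_heat(phi, rho_p, cp_p, rho_bf, cp_bf):
    """Thermal-equilibrium mixing rule for specific heat (J/(kg.K)):
    mass-weighted average of particle and base-fluid specific heats."""
    rho_nf = nanofluid_density(phi, rho_p, rho_bf)
    return (phi * rho_p * cp_p + (1 - phi) * rho_bf * cp_bf) / rho_nf

# Illustrative values: 2% Al2O3 (3970 kg/m^3, 765 J/(kg.K))
# in water (998 kg/m^3, 4182 J/(kg.K)) near room temperature.
rho_nf = nanofluid_density(0.02, 3970.0, 998.0)
cp_nf = nanofluid_specific_heat(0.02, 3970.0, 765.0, 998.0, 4182.0)
```

Note that the specific heat of the mixture falls below that of the base fluid even though the density rises, which is the qualitative trend such measurements are checked against.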
Published by: P. Madhu, P. G. Rajasekhar
Author: P. Madhu
Paper ID: V3I1-1196
Paper Status: published
Published: January 13, 2017
Adaptive Packet Filtering Techniques for Linux Firewall
Packet filtering techniques play an important role in many network devices such as firewalls and IPSec gateways. A firewall safeguards a system from external attacks and can protect both hosts and networks. This research studies the performance impact and sensitivity of the Linux firewall (iptables) and proposes improvements to make packet filtering faster. Because the Linux firewall is open source, the user can edit the source code and adapt it to the security requirements of the LAN. At any time, the firewall can be configured to encrypt, decrypt, accept, deny, or proxy all packets sent between any two systems, depending on the rules; on this basis, a user can be blocked or given access to a network using an efficient tree algorithm. Two approaches to filtering are used. The first performs early rejection of unwanted flows without significantly impacting other flows. The second is a new packet filtering optimization technique that uses adaptive statistical search trees to exploit important traffic characteristics and minimize the average packet matching time. The proposed techniques adapt in a timely manner to changes in traffic conditions by performing simple calculations to optimize the search data structure, and can significantly reduce packet filtering time with reasonable memory space requirements.
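The adaptation idea can be sketched in miniature: keep per-rule hit statistics and periodically reorder the match structure so that hot rules are reached sooner, reducing the average matching time. This toy version uses a flat rule list rather than the paper's statistical search trees, and the rule names, ports, and reorder interval are illustrative assumptions.

```python
class AdaptiveRuleList:
    """Toy packet filter: rules are checked in order, and the list is
    periodically reordered by observed hit frequency so that frequently
    matched rules incur fewer comparisons per packet."""

    def __init__(self, rules, reorder_every=10):
        # each rule is a (name, predicate, action) triple
        self.rules = list(rules)
        self.hits = {name: 0 for name, _, _ in rules}
        self.seen = 0
        self.reorder_every = reorder_every
        self.comparisons = 0  # total comparisons, to gauge matching cost

    def match(self, packet):
        self.seen += 1
        if self.seen % self.reorder_every == 0:
            # hot rules first: a flat-list stand-in for rebalancing the tree
            self.rules.sort(key=lambda r: -self.hits[r[0]])
        for name, pred, action in self.rules:
            self.comparisons += 1
            if pred(packet):
                self.hits[name] += 1
                return action
        return "DROP"  # default policy when no rule matches

rules = [
    ("ssh", lambda p: p["dport"] == 22, "ACCEPT"),
    ("web", lambda p: p["dport"] == 80, "ACCEPT"),
    ("dns", lambda p: p["dport"] == 53, "ACCEPT"),
]
fw = AdaptiveRuleList(rules)
# Traffic dominated by web packets: after reordering, "web" is checked first.
for _ in range(50):
    fw.match({"dport": 80})
```

A real filter must also respect rule-dependency constraints when reordering (two overlapping rules cannot be swapped freely), which is part of what the tree-based structure in the paper addresses.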
Published by: Atul J. Jayant, Prajakta S. Tambade, Sanjay Kadam
Author: Atul J. Jayant
Paper ID: V3I1-1194
Paper Status: published
Published: January 12, 2017
Short Comings of Present Education System and How to Make It Employment Oriented
The aim of any education system is to provide inclusive, quality education and learning opportunities for all, ensuring that a learner is eventually transformed into a good human being imbibed with moral and ethical values and equipped with adequate employment skills (self-employment or a job). In addition, the individual attains good communication skills along with logical reasoning and analytical powers, so that his intellectual ability is not confined to his own field but can be applied in any situation and any field. Thus, as a useful member of society, the graduating student is ready to contribute to Gross National Income through any sector: agriculture, manufacturing, services, or the education sector itself. However, the present system of education is not fulfilling this aim. Graduates of the present education system are not employment-ready, and the present examination system, which basically tests the cramming capabilities of students, is not appropriate for evaluating the skills and knowledge they have acquired. This paper discusses the shortcomings of the present education system and the changes that should be brought in to ensure that learners achieve the aim stated above.
Published by: Col H. R. Ruhil
Author: Col H. R. Ruhil
Paper ID: V3I1-1193
Paper Status: published
Published: January 11, 2017
Establish the Value Stream Mapping for Lead Time Evaluation by Lean Concept
Toyota has practiced TPS (the Toyota Production System) successfully since 1950, and due to its success many companies across the world adopted the same. Later, in the 1990s, Womack and Jones named it lean manufacturing. Lean manufacturing is now one of the most powerful manufacturing systems in the competitive world; numerous organizations around the world have implemented it to enhance their productivity through the reduction and elimination of waste. This project report covers the understanding and implementation of one of the lean tools, Value Stream Mapping, for a Directional Control Valve at Bosch Rexroth (India) Pvt. Ltd., Sanand Plant, Ahmedabad. Value Stream Mapping is a lean tool for identifying value-added and non-value-added activities during production. Using it, waste is identified and removed along the processes, based on the principles of the Bosch Production System (which derives from TPS, the Toyota Production System). This includes checking the inventory level during the manufacturing and assembly of the products.
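The lead-time evaluation in a value stream map reduces to simple arithmetic: sum the value-added processing times and the non-value-added waiting times per step, then compute the process cycle efficiency (value-added time over total lead time). The step durations below are invented illustrative numbers, not measurements from the Sanand plant.

```python
def process_cycle_efficiency(steps):
    """steps: list of (value_added_seconds, wait_seconds) per process step.
    Returns (PCE, total lead time in seconds)."""
    value_added = sum(v for v, _ in steps)
    lead_time = sum(v + w for v, w in steps)
    return value_added / lead_time, lead_time

# Hypothetical current-state map for a valve line:
# (machining, assembly, testing) with inventory waits between steps.
steps = [(30, 3600), (45, 7200), (60, 1800)]
pce, lead_time = process_cycle_efficiency(steps)
```

A low PCE (typically a few percent in current-state maps) points directly at the waiting and inventory waste that the future-state map targets.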
Published by: Nirali Pandya, Ketan Dhruv, Pratik Kikani, Dr. G. D Acharya
Author: Nirali Pandya
Paper ID: V3I1-1192
Paper Status: published
Published: January 11, 2017
An Intelligent Scientific Workflows Failure Prediction Model using Ensemble Learning Technique
Cloud computing is a distributed computing paradigm considered the pioneering computing platform for the next ten years. Apart from the many industrial and business applications already deployed, this paradigm is also attracting several scientific communities to utilize cloud services for running large-scale data- and computation-intensive applications such as Montage, which is employed in astronomy. A workflow is defined as a group of tasks and the dependencies between them, and is used to express numerous scientific applications. The main issue in running these workflow applications is mapping the tasks of the workflow to appropriate resources in the cloud environment, and scheduling these workflows in such an environment is prone to failures. To overcome these failures, the workflow scheduling system should be fault tolerant. Fault tolerance is achieved through replication and resubmission of tasks based on task priority; the replication of tasks depends on a heuristic metric calculated by finding the trade-off between the replication factor and the resubmission factor. As scientific workflows scale to many thousands of distinct tasks, failures due to software and hardware faults become progressively common. We study job failure models for data collected from different scientific applications using our proposed framework. In particular, we show that an ensemble learning classifier can accurately predict the failure probability of jobs. Failure prediction models were implemented through machine learning approaches and evaluated on performance metrics. The models allow us to predict job failures for a given execution resource and then use these failure predictions for two higher-level goals: (1) to suggest a better job assignment, and (2) to provide quantitative feedback to workflow component developers about the robustness of their application codes.
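An ensemble failure predictor of the kind described can be sketched with a bagged majority vote of simple base learners. The sketch uses decision stumps over two made-up job features (runtime, memory); the features, labels, and ensemble size are illustrative assumptions and do not reproduce the paper's framework or its evaluation data.

```python
import random

def train_stump(X, y):
    """Fit the best single-feature threshold classifier (decision stump)."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({x[f] for x in X}):
            for pol in (1, -1):
                preds = [pol if x[f] >= t else -pol for x in X]
                acc = sum(p == yi for p, yi in zip(preds, y)) / len(y)
                if best is None or acc > best[0]:
                    best = (acc, f, t, pol)
    _, f, t, pol = best
    return f, t, pol

def stump_predict(stump, x):
    f, t, pol = stump
    return pol if x[f] >= t else -pol

def bagged_ensemble(X, y, n_models=9, seed=0):
    """Train each stump on a bootstrap resample of the job records."""
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        idx = [rng.randrange(len(X)) for _ in X]
        models.append(train_stump([X[i] for i in idx], [y[i] for i in idx]))
    return models

def ensemble_predict(models, x):
    """Majority vote across the ensemble: +1 = job predicted to fail."""
    vote = sum(stump_predict(m, x) for m in models)
    return 1 if vote >= 0 else -1

# Toy job records: (runtime_hours, memory_gb); +1 = failed, -1 = succeeded.
X = [(0.5, 2), (0.7, 3), (1.0, 2), (6.0, 14), (7.5, 15), (8.0, 16)]
y = [-1, -1, -1, 1, 1, 1]
models = bagged_ensemble(X, y)
```

With such a predictor in place, the two goals in the abstract follow naturally: route jobs away from resources where their predicted failure probability is high, and report per-component failure rates back to developers.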
Published by: Parminderjeet Kaur
Author: Parminderjeet Kaur
Paper ID: V3I1-1189
Paper Status: published
Published: January 10, 2017