This paper is published in Volume-8, Issue-3, 2022
Area
Computer Science
Author
Gaurav Thakur
Org/Univ
Thapar Institute of Engineering and Technology, Patiala, Punjab, India
Keywords
Workload, Capacity Management, Sliding Window, Resource Pool.
Citations
IEEE
Gaurav Thakur. Resource demand prediction for minimizing power consumption at data centers, International Journal of Advance Research, Ideas and Innovations in Technology, www.IJARIIT.com.
APA
Gaurav Thakur (2022). Resource demand prediction for minimizing power consumption at data centers. International Journal of Advance Research, Ideas and Innovations in Technology, 8(3) www.IJARIIT.com.
MLA
Gaurav Thakur. "Resource demand prediction for minimizing power consumption at data centers." International Journal of Advance Research, Ideas and Innovations in Technology 8.3 (2022). www.IJARIIT.com.
Abstract
Technical progress in servers, networks, and virtualization is enabling the creation of resource pools of servers in which multiple application workloads share each server in the pool. This paper proposes and evaluates the components of a capacity management process for making efficient use of such pools while hosting a large number of business services. The objective is a capacity management procedure for resource pools that lets capacity planners match the supply of and demand for resource capacity over a given time interval. We characterize the workloads of enterprise applications to gain insight into their behaviour, and we follow a trace-based approach to capacity management that relies on a definition of required capacity and a characterization of workload demand patterns. The accuracy of capacity planning predictions depends on our ability to characterize workload demand patterns, to recognize trends for expected changes in future demand, and to reflect business forecasts for otherwise unexpected changes in future demand. A case study with six months of data representing the resource usage of 159 workloads in an enterprise data center demonstrates the effectiveness of the proposed capacity management process. The results show that, for 8-processor systems, future per-server required capacity is predicted to within 1 processor 98 percent of the time, and that the approach enables a 38 percent reduction in processor usage compared with current best practice for workload placement. This knowledge helps resource pool operators decide on the right capacity for their server pools.
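To make the trace-based idea above concrete, the following Python sketch illustrates one way a sliding-window notion of required capacity can be computed from a workload demand trace. It is not taken from the paper: the function name, the window length, and the per-window violation budget are illustrative assumptions. The sketch returns the smallest observed demand level such that, within any window of consecutive intervals, demand exceeds the chosen capacity at most a bounded number of times.

"""
Illustrative sketch (assumed, not the paper's implementation): estimate the
minimum "required capacity" for a per-interval CPU demand trace under a
sliding-window quality constraint.
"""

def required_capacity(demand, window, max_violations=0):
    """Smallest observed demand level c such that, in every window of
    `window` consecutive intervals, demand exceeds c in at most
    `max_violations` intervals."""
    if len(demand) < window:
        raise ValueError("trace shorter than the sliding window")

    def satisfies(capacity):
        # Count violations in the first window, then slide one interval at a time.
        violations = sum(1 for d in demand[:window] if d > capacity)
        if violations > max_violations:
            return False
        for i in range(window, len(demand)):
            violations += (demand[i] > capacity) - (demand[i - window] > capacity)
            if violations > max_violations:
                return False
        return True

    # Candidate capacities: the distinct observed demand levels, ascending.
    for capacity in sorted(set(demand)):
        if satisfies(capacity):
            return capacity
    # Defensive fallback: provisioning for the observed peak always satisfies
    # the constraint, so the loop above normally returns before reaching here.
    return max(demand)


if __name__ == "__main__":
    # Hypothetical 5-minute CPU demand trace (in processor shares) for one workload.
    trace = [1.2, 1.5, 2.8, 3.1, 1.9, 1.4, 6.5, 2.0, 1.8, 2.2, 2.1, 1.7]
    # Allow at most one over-capacity interval per window of four intervals.
    print(required_capacity(trace, window=4, max_violations=1))

In this sketch, tightening the violation budget toward zero drives the estimate toward the observed peak demand, while a looser budget trades occasional contention for a smaller, more power-efficient allocation, which is the trade-off the capacity management process in the paper is designed to manage.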