With the increased use of the internet, data volumes are also growing exponentially year on year. To handle such enormous data, a better platform for processing it was needed. A programming model called MapReduce was therefore introduced, which processes large amounts of data in parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner. Since HADOOP has emerged as a popular tool for BIG DATA implementation, the paper deals with the overall architecture of HADOOP along with the details of its various components.
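As an illustration of the MapReduce programming model mentioned in the abstract, the sketch below is a minimal word-count job written against the Hadoop Java API. It is not taken from the paper itself; the class name WordCount and the input/output paths passed on the command line are placeholders for illustration only. The mapper emits (word, 1) pairs, and the reducer sums the counts for each word across the cluster.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits (word, 1) for every word in its input split
  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer: sums the counts for each word gathered from all mappers
  public static class IntSumReducer
       extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);   // local aggregation before the shuffle
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory (placeholder)
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory (placeholder)
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

The job would typically be packaged as a jar and submitted with a command such as "hadoop jar wordcount.jar WordCount /input /output", after which the framework handles splitting the input, scheduling map and reduce tasks across the cluster, and re-running failed tasks.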
By Jagjit Kaur | Heena Girdher, "HADOOP: A Solution to Big Data Problems using Partitioning Mechanism Map-Reduce", Published in International Journal of Trend in Scientific Research and Development (ijtsrd), ISSN: 2456-6470, Volume-2 | Issue-4, June 2018.
URL: http://www.ijtsrd.com/papers/ijtsrd14374.pdf
Direct Link - http://www.ijtsrd.com/computer-science/database/14374/hadoop-a-solution-to-big-data-problems-using-partitioning-mechanism-map-reduce/jagjit-kaur