Hadoop: Software for Large Amounts of Data


Hadoop is an open-source platform maintained under the Apache Software Foundation. It deals with big data of any type, whether structured, semi-structured, or unstructured, and returns results in a very short time; it can work with data sets ranging from 10 to 100 GB and even more.

According to V. Burunova, Hadoop's structure consists of one or more clusters, each of which contains groups of nodes; a cluster is simply a set of servers working together. Each cluster has two types of node: the NameNode and the DataNodes. The NameNode is a single, unique node in the cluster that knows the location of every data block, while the DataNodes are the remaining nodes in the cluster.
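
To illustrate this split between the NameNode and the DataNodes, the sketch below (not part of the original essay) uses the HDFS Java API to ask where the blocks of a file live; the answer comes from the NameNode's block map, and the hostnames returned are the DataNodes holding each block. The file path /data/sample.txt is hypothetical, and the cluster address is assumed to come from the usual core-site.xml/hdfs-site.xml configuration.

// Minimal sketch: list the DataNodes that store each block of one HDFS file.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListBlockLocations {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();      // reads core-site.xml / hdfs-site.xml
        FileSystem fs = FileSystem.get(conf);          // handle to HDFS
        Path file = new Path("/data/sample.txt");      // hypothetical file
        FileStatus status = fs.getFileStatus(file);
        // The NameNode answers this query from its block map; no file data is read.
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
            System.out.printf("offset=%d length=%d hosts=%s%n",
                    block.getOffset(), block.getLength(), String.join(",", block.getHosts()));
        }
        fs.close();
    }
}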

Hadoop has two layers that cooperate. The first layer is MapReduce, whose task is to divide data processing across multiple servers. The second is the Hadoop Distributed File System (HDFS), whose task is to store data across the cluster, splitting it into a set of blocks. Hadoop makes sure the work on the cluster is correct: it can detect and recover from errors or failures on one or more of the connected nodes, and in this way it provides increased processing power, larger storage capacity, and high availability. Hadoop is usually used in large clusters or public cloud services such as those run by Yahoo, Facebook, Twitter, and Amazon (Hadeer Mahmoud, 2018).
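
As a concrete sketch of the MapReduce layer, the code below is essentially the standard word-count example from the Hadoop documentation: the map tasks run in parallel on the servers that hold the input blocks in HDFS, and the reduce tasks combine their partial counts. The input and output paths are supplied on the command line and are assumed to be HDFS directories.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {
  // Mapper: runs in parallel on the servers holding the input blocks,
  // emitting (word, 1) for every token it sees.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();
    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer: sums the per-word counts emitted by the mappers.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();
    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

A job like this is typically packaged as a JAR and submitted to the cluster, for example with: hadoop jar wordcount.jar WordCount /input /output.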

Hadoop's features:

Scalable: Hadoop is able to work with huge applications; it can analyze, store, process, and distribute large amounts of data across thousands of nodes and servers, handling thousands of terabytes of data or more. Additional nodes can be added to a cluster, and the servers work in parallel. In this respect Hadoop does better than traditional relational database systems, because an RDBMS cannot expand to deal with data at this scale.
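
One way to observe this growth in capacity is to query the file system status programmatically; the aggregate capacity reported below is summed over all DataNodes and increases as nodes are added to the cluster. This is only an illustrative sketch, assuming a reachable HDFS configured in core-site.xml.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsStatus;

public class ClusterCapacity {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        // Aggregate capacity of all DataNodes; this number grows as nodes are added.
        FsStatus status = fs.getStatus();
        System.out.printf("capacity=%d used=%d remaining=%d bytes%n",
                status.getCapacity(), status.getUsed(), status.getRemaining());
        fs.close();
    }
}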

Single write, multiple read: data on the cluster can be read by multiple readers at the same time.

Data availability: when data is sent to a DataNode, Hadoop creates multiple copies of it on other nodes in the cluster, so the data stays available if one of the nodes in the cluster fails.
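
Below is a small sketch of this write-once/read-many and replication behaviour, assuming a running HDFS and a hypothetical path /data/replicated.txt: the file is written once with three replicas per block, and afterwards any number of readers can fetch it from whichever replicas are healthy.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path file = new Path("/data/replicated.txt");   // hypothetical path

        // Write once: create the file with 3 replicas per block (a common default),
        // 128 MB blocks, and a 4 KB client buffer.
        try (FSDataOutputStream out = fs.create(file, true, 4096, (short) 3, 128 * 1024 * 1024L)) {
            out.writeUTF("written once, readable from many DataNodes");
        }

        // Read many: any client can open the file; HDFS serves each block from a
        // healthy replica, so the failure of one DataNode does not lose the data.
        FileStatus status = fs.getFileStatus(file);
        System.out.println("replication factor = " + status.getReplication());
        fs.close();
    }
}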
