What is Big Data, and how is it secured?


The volume of data and information has grown massively since the beginning of the computer age, and with it the ways of processing and handling that ever-growing data: the hardware, the software, and the ability to keep data secure have all evolved. Mobile devices, social media, and many other kinds of data sources have made data grow faster still, until the sheer volume exceeded the processing capacity of a single machine and of conventional computing mechanisms.

This led to the use of parallel and distributed processing mechanisms; but since data is expected to increase even more, these mechanisms and techniques, as well as the hardware and software, will need to keep improving.

Introduction

Since the beginning of computers, people used landline phones; now they have smartphones. They once used bulky desktops for processing data, and floppy disks, then hard disks, for storing it; nowadays they use the cloud. Even self-driving cars have arrived, one example of the Internet of Things (IoT).

Because of this advance in technology, we are generating a huge amount of data. Take IoT as an example: have you imagined how much data is generated by smart air conditioners? Such a device monitors the body temperature and the outside temperature and decides accordingly what the temperature of the room should be. So we can see that because of IoT we are generating a huge amount of data.


Smartphones are another example: every action, even a single video or image sent through a messenger app, generates data. The data generated from these various sources comes in structured, semi-structured, and unstructured formats. Much of this data is not in a format that a relational database can handle, and on top of that, the volume of data has increased exponentially.

We can define Big Data as a collection of data sets so large and complex that they are difficult to analyze using conventional data processing applications or database system tools. In this paper we will first define Big Data and how to classify data as big data. Then we will discuss privacy and security in Big Data and how infrastructure techniques can process, store, and often also analyze huge amounts of data in different formats.

Finally, we will see how Hadoop solves these problems and look at a few components of the Hadoop framework, as well as NoSQL and the cloud.

What is big data, and when is data considered big data?

A widely used definition of big data belongs to IDC: "big data technologies describe a new generation of technologies and architectures, designed to economically extract value from very large volumes of a wide variety of data, by enabling the high-velocity capture, discovery, and/or analysis" (Reinsel, 2011).

According to the 4 V's, we can classify data as big data. The 4 V's are:

  • Volume: the amount of data is tremendously large.
  • Variety: different kinds of data are being generated from various sources. Structured data has a proper schema in a tabular format, like a table; semi-structured data has no properly defined schema, like XML, e-mail, and CSV formats; unstructured data includes audio, video, and images.
  • Velocity: data is being generated at an alarming rate.

With the client-server model came web applications and the internet boom. Nowadays everyone uses these applications not only from their computers but also from their smartphones; more users and more devices mean even more data.

  • Value: the mechanism for bringing the correct meaning out of the data. We need to make sure that whatever analysis we have done has some value, that is, that it will help the business grow. (MATTURDI Bardi, 2014)

Infrastructure techniques

There are many tools and technologies used to deal with huge amounts of data (to manage, analyze, and organize them):

1. Hadoop: an open-source platform managed under the Apache Software Foundation, hence also called Apache Hadoop, for processing huge amounts of data. "It allows to work with structured and unstructured data arrays of dimension from 10 to 100 Gb and even more" (V. Burunova), and it does this by using a set of servers.

Hadoop consists of two modules: MapReduce, which distributes data processing among multiple servers, and the Hadoop Distributed File System (HDFS), which stores data on distributed clusters. Hadoop monitors the correct working of clusters and can detect and recover from the error or failure of one or more connected nodes; in this way Hadoop delivers increased processing power, storage size, and high availability.
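As a minimal sketch of how the two modules divide the work, the pair of scripts below implements the classic word-count job in the style of Hadoop Streaming, a Hadoop facility that lets any executable reading stdin and writing stdout act as a mapper or reducer. The file names and sample invocation are illustrative assumptions, not taken from the paper.

```python
# mapper.py -- emits one tab-separated (word, 1) pair per word,
# which is the record format Hadoop Streaming expects on stdout.
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")
```

```python
# reducer.py -- Hadoop Streaming delivers mapper output sorted by key,
# so equal words arrive on consecutive lines and can be summed in one pass.
import sys

current_word, count = None, 0
for line in sys.stdin:
    word, value = line.rstrip("\n").split("\t")
    if word == current_word:
        count += int(value)
    else:
        if current_word is not None:
            print(f"{current_word}\t{count}")
        current_word, count = word, int(value)
if current_word is not None:
    print(f"{current_word}\t{count}")
```

A typical invocation (the jar path varies by installation) hands both scripts to the streaming jar, with the input and output paths living on HDFS, for example: hadoop jar hadoop-streaming.jar -input /data/in -output /data/out -mapper mapper.py -reducer reducer.py.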

"Hadoop is usually used in a large cluster or a public cloud service such as Yahoo!, Facebook, Twitter, and Amazon" [ (Hadeer Mahmoud, 2018)]. NoSql:Nowaday, the global Internet is handled with many users and large data. To make large numbers of users use it simultaneously. To support this, we will use the NoSql database technology. NoSql: it is non-relational database starting in 2009 used for distributed data management system [ (Harrison, 2010)] Characteristics of NoSql :

  • Schema-less: data can be inserted into NoSQL without first defining a rigid database schema, which provides immense application flexibility.
  • Auto-sharding: data spreads across servers automatically, without requiring the application to participate.
  • Scalable replication and distribution: more machines can easily be added to the system according to the requirements of users and software.
  • Queries return answers quickly.
  • Open-source development.

The popular models of NoSQL are:

  • Key-value store
  • Column-oriented
  • Document store
  • Graph database (Abhishek Prasad, 2014)
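To make the schema-less point concrete, here is a small sketch using a document store through the pymongo driver. MongoDB is used only as a representative document store (the paper does not name a product), and the database name, collection name, and locally running server are assumptions for illustration.

```python
# Requires a MongoDB server (assumed at localhost:27017) and `pip install pymongo`.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
collection = client["demo_db"]["events"]  # hypothetical names

# No table definition or ALTER TABLE step: documents with different
# shapes can be inserted into the same collection.
collection.insert_one({"user": "alice", "action": "login"})
collection.insert_one({"user": "bob", "action": "upload",
                       "file": {"name": "clip.mp3", "size_mb": 42}})

# Queries still return answers quickly, especially with indexes on chosen fields.
print(collection.find_one({"user": "bob"}))
```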

3. MapReduce framework: an algorithm created by Google to handle and process massive amounts of data (Big Data) in reasonable time using parallel and distributed computing techniques. In other words, data is processed in a distributed way before transmission; the algorithm simply divides big volumes of data into many smaller chunks.

These chunks are mapped to many computers, and after the required calculations are done, the data is brought back together to reduce the resulting data set. So the MapReduce algorithm consists of two main functions:

User-defined Map function: takes an input pair and generates a set of key/value pairs; the MapReduce library groups all values with the same key and passes them to the Reduce function.

User-defined Reduce function: accepts a key and its related values from the Map function and combines those values to form a smaller set of values; it generally produces one or zero output values.

MapReduce programs can be run in three modes:

A. Stand-alone mode: runs in a single JVM (Java virtual machine) with no distributed components; it uses the Linux file system.
B. Pseudo-distributed mode: starts several JVM processes on the same machine.
C. Fully distributed mode: runs on multiple machines in distributed mode and uses HDFS. (Yang, 2012)
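The contract between the two user-defined functions can be shown without a cluster at all. The single-process sketch below imitates what the MapReduce library does in between: it calls map on each input record, groups the emitted pairs by key (the "shuffle"), and calls reduce once per key. The record format (year, temperature) is an invented example.

```python
from collections import defaultdict

# User-defined Map function: one input record -> list of (key, value) pairs.
def map_fn(record):
    year, temp = record.split(",")
    return [(year, int(temp))]

# User-defined Reduce function: a key plus all its values -> a smaller set
# (here a single output value, as the text notes is typical).
def reduce_fn(key, values):
    return (key, max(values))

def run_mapreduce(records):
    grouped = defaultdict(list)          # the "shuffle" step the library performs
    for record in records:
        for key, value in map_fn(record):
            grouped[key].append(value)
    return [reduce_fn(k, vs) for k, vs in grouped.items()]

print(run_mapreduce(["1949,78", "1950,0", "1950,22", "1949,111"]))
# -> [('1949', 111), ('1950', 22)]
```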

4. B2P2, the Scalable Big Bioacoustic Processing Platform: a scalable audio framework built to handle and process large audio files efficiently by converting acoustic recordings into spectrograms (visual representations of the sound) and then analyzing the recorded areas. The framework is implemented using Big Data platforms such as HDFS and Spark.

B2P2's main components are:

A. Master node: responsible for managing distribution and controlling all other nodes. Its main functions are:
1- File distributor / distribution manager: splits the file into smaller chunks to be distributed to the slave nodes.
2- Job distributor / process manager: assigns the processing tasks that run on each slave node and gathers the output files. (Srikanth Thudumu, 2016)
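The master/slave division of labour described above can be sketched against the PySpark API: the driver (playing the master's role) splits the list of recordings into chunks and assigns the per-file processing to workers. This is not B2P2's actual implementation; make_spectrogram is a hypothetical stand-in for the real signal-processing step, and the paths are invented.

```python
# Requires `pip install pyspark`; "local[*]" simulates cluster workers on one machine.
from pyspark import SparkContext

def make_spectrogram(path):
    # Hypothetical placeholder for reading an audio file and converting
    # it to a spectrogram; B2P2's real analysis is more involved.
    return (path, "spectrogram-bytes")

sc = SparkContext("local[*]", "bioacoustics-sketch")
recordings = ["/audio/site1.wav", "/audio/site2.wav", "/audio/site3.wav"]

# parallelize() plays the file-distributor role (splitting work into chunks);
# map() plays the job-distributor role (one task per chunk on the workers);
# collect() gathers the output back at the master.
results = sc.parallelize(recordings).map(make_spectrogram).collect()
print(results)
sc.stop()
```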

A Comprehensive Study on Big Data Security and Integrity over Cloud Storage

Big Data requires a tremendous amount of storage capacity. Data in Big Data may be in an unstructured format, without standard formatting, and data sources can lie beyond the conventional corporate database. For a small or medium-sized business organization, storing data in the cloud as Big Data is a better choice for data analysis work than storing it in Network-Attached Storage (NAS).

Big Data stored in the cloud can be analyzed using a programming technique called MapReduce, in which a query is passed and data is fetched; the extracted query results are then reduced to the data set relevant to the query. This query processing is done simultaneously using NAS devices. Although the use of the MapReduce algorithm for Big Data is well appreciated by many analysts, since it is schema-free and index-free, it requires parsing each record at read time.

This is the greatest drawback of using MapReduce for query processing in cloud computing.

Securing Big Data in the Cloud

There are a few techniques that can be used to secure big data in cloud environments. In this section, we will examine a few of them.

1- Source validation and filtering: data originates from various sources, with various formats and vendors. The storage authority ought to verify and validate a source before storing its data in cloud storage, and the data should be filtered at the entry point itself so that security can be maintained.
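A minimal sketch of validate-then-filter at the entry point, assuming each source signs its records with a pre-shared HMAC key; the key, the record layout, and the filter rule are all invented for illustration.

```python
import hmac
import hashlib

SHARED_KEY = b"per-source-secret"   # hypothetical pre-shared key

def source_is_valid(payload: bytes, signature_hex: str) -> bool:
    # Validate the source: only holders of the shared key can produce
    # a matching signature for the payload.
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

def passes_filter(payload: bytes) -> bool:
    # Filter at the entry point: drop empty or oversized records
    # before they ever reach cloud storage.
    return 0 < len(payload) <= 1_000_000

def ingest(payload: bytes, signature_hex: str, store: list) -> None:
    if source_is_valid(payload, signature_hex) and passes_filter(payload):
        store.append(payload)       # stand-in for the cloud storage write

store = []
msg = b'{"sensor": "ac-17", "temp": 22.5}'
sig = hmac.new(SHARED_KEY, msg, hashlib.sha256).hexdigest()
ingest(msg, sig, store)
print(len(store))  # 1 -- the validated, filtered record was stored
```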

2- Application software security: the essential concern of Big Data is to store a gigantic volume of data, not security. It is therefore prudent to use only secure versions of software to acquire the data; although open-source software and freeware may be cheap, they can bring about security breaches.

3- Access control and authentication: the cloud storage provider must implement secure access control and authentication mechanisms and must serve each client's requests according to the client's role. The difficulty in enforcing these mechanisms is that requests may come from various locations; few secure cloud service providers grant authentication and access control on anything beyond registered IP addresses, which leaves security vulnerabilities.
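One way to picture role-based access control that is not tied to source IP addresses: each request carries an authenticated user's role, and the provider checks that role against the requested operation. The roles and the permission table below are assumptions, not from the cited study.

```python
# Hypothetical role/permission table for a cloud storage provider.
PERMISSIONS = {
    "analyst":  {"read"},
    "engineer": {"read", "write"},
    "admin":    {"read", "write", "delete"},
}

def is_authorized(role: str, operation: str) -> bool:
    # The decision is based on the authenticated user's role, not on the
    # request's source IP, so legitimate users can connect from any location.
    return operation in PERMISSIONS.get(role, set())

print(is_authorized("analyst", "read"))    # True
print(is_authorized("analyst", "delete"))  # False
```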

Securing privileged user access requires well-defined security controls and policies. (Ramakrishnan, 2016)

References 

  • Abhishek Prasad, B. N. (2014). A Comparative Study of NoSQL Databases. India: National Institute of Technology.
  • Hadeer Mahmoud, A. H. (2018). An Approach for Big Data Security Based on Hadoop Distributed File System. Egypt: Aswan University.
  • Harrison, B. G. (2010). In Search of the Elastic Database. Information Today.
  • MATTURDI Bardi, Z. X. (2014). Big Data Security and Privacy: A Review. Beijing: University of Science and Technology.
  • Ramakrishnan, J. R. (2016). A Comprehensive Study on Big Data Security. Indian Journal of Science and Technology.
  • Reinsel, J. G. (2011). Extracting Value from Chaos. IDC Go-to-Market Services.
  • Srikanth Thudumu, S. G. (2016). A Scalable Big Bioacoustic Processing Platform. Sydney: IEEE.
  • V. Burunova, A. (n.d.). The Big Data Analysis. Russia: Saint-Petersburg Electrotechnical University.
  • Yang, G. (2012). The Application of MapReduce in the Cloud Computing. Hubei: IEEE.
