Big Data For Dummies

Take stock of the types of data you are dealing with in your big data project. Many organizations are recognizing that much of their internally generated data has never been used to its full potential.

By leveraging new tools, organizations are gaining new insight from previously untapped sources of unstructured data, including e-mails, customer service records, sensor data, and security logs. There is also growing interest in analyzing data that is primarily external to the organization, such as social media activity, mobile phone locations, traffic, and weather.

The exploratory stage for big data

In the early stages of your analysis, you will want to search for patterns in the data. New and unexpected relationships and correlations among elements often become apparent only when you examine very large volumes of data. These patterns can provide insight into customer preferences for a new product, for example. You need a platform for organizing your big data so that you can look for these patterns.

Hadoop is widely used as an underlying building block for capturing and processing big data. Hadoop is designed with capabilities that speed the processing of big data and make it possible to identify patterns in huge amounts of data in a relatively short time. The two primary components of Hadoop — Hadoop Distributed File System (HDFS) and MapReduce — are used to manage and process your big data.
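To make that division of labor concrete, here is a minimal MapReduce job in Java, sketched against the standard Hadoop API. It counts how often each product identifier appears in log files stored in HDFS; the class names and the assumption that each input line begins with a tab-separated product ID are illustrative, not from the book.

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Counts occurrences of each product ID in log lines stored in HDFS.
// Assumes each input line is tab-separated with the product ID first.
public class ProductCount {

    public static class ProductMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text productId = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Emit (productId, 1) for every log line.
            String[] fields = value.toString().split("\t");
            if (fields.length > 0 && !fields[0].isEmpty()) {
                productId.set(fields[0]);
                context.write(productId, ONE);
            }
        }
    }

    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values,
                Context context) throws IOException, InterruptedException {
            // Sum the counts emitted for each product ID.
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "product count");
        job.setJarByClass(ProductCount.class);
        job.setMapperClass(ProductMapper.class);
        job.setCombinerClass(SumReducer.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // input dir in HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // output dir in HDFS
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

HDFS spreads the input files across the cluster, the map tasks run in parallel where the data lives, and the reduce tasks aggregate the counts. That data-local parallelism is what lets Hadoop find patterns in huge amounts of data in a relatively short time.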

FlumeNG for big data integration

It is often necessary to collect, aggregate, and move extremely large amounts of streaming data to search for hidden patterns in big data. Traditional integration approaches, such as batch ETL (extract, transform, load), are not fast enough to move large streams of data in time to deliver results for analysis such as real-time fraud detection. FlumeNG closes this gap by streaming your data into Hadoop as it is generated.

Typically, Flume is used to collect large amounts of log data from distributed servers. It keeps track of all the physical and logical nodes in a Flume installation. Agent nodes are installed on the servers and are responsible for managing the way a single stream of data is transferred and processed from its point of origin to its destination.

In addition, collectors are used to group the streams of data into larger streams that can be written to a Hadoop file system or another big data storage container. Flume is designed for scalability: you can continually add resources to a system to handle extremely large amounts of data efficiently. Flume's output can be integrated with Hadoop and Hive for analysis of the data.

Flume also has transformation elements that can operate on the data in flight, and it can turn your Hadoop infrastructure into a streaming source of unstructured data.
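As a sketch of how a source, a transformation element, a channel, and a sink fit together, here is a minimal Flume NG agent configuration. The agent name, log path, and HDFS location are assumptions made for illustration; the interceptor shown is Flume's timestamp interceptor, which stamps each event so the HDFS sink can partition output by date.

```
# Illustrative Flume NG agent; names, paths, and host are assumptions.
agent1.sources = src1
agent1.channels = ch1
agent1.sinks = sink1

# Source: tail a server log as a single stream of events.
agent1.sources.src1.type = exec
agent1.sources.src1.command = tail -F /var/log/app/app.log
agent1.sources.src1.channels = ch1

# Interceptor: a transformation element that adds a timestamp header.
agent1.sources.src1.interceptors = ts
agent1.sources.src1.interceptors.ts.type = timestamp

# Channel: buffer events in memory between source and sink.
agent1.channels.ch1.type = memory
agent1.channels.ch1.capacity = 10000

# Sink: write the stream into HDFS, partitioned by date for Hive.
agent1.sinks.sink1.type = hdfs
agent1.sinks.sink1.channel = ch1
agent1.sinks.sink1.hdfs.path = hdfs://namenode:8020/flume/events/%Y-%m-%d
agent1.sinks.sink1.hdfs.fileType = DataStream
```

You would then start the agent with the flume-ng command-line tool, passing it the agent name and this configuration file.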

Patterns in big data

Many companies are beginning to realize competitive advantages from big data analytics. For many of them, social media data streams are increasingly becoming an integral component of a digital marketing strategy. In the exploratory stage, this technology can be used to rapidly search through huge amounts of streaming data and pull out the trending patterns that relate to specific products or customers.
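As a toy illustration of this kind of pattern extraction (assuming the messages have already been collected; no social media API is involved), the following Java sketch counts hashtag frequencies in a batch of messages and reports the most frequent terms. In production, the same logic would run continuously over a stream fed by a tool such as Flume.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Map;
import java.util.regex.MatchResult;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

// Toy example: find the most frequent hashtags in a batch of messages.
public class TrendingTerms {
    private static final Pattern HASHTAG = Pattern.compile("#\\w+");

    public static void main(String[] args) {
        // Hypothetical messages standing in for a collected social stream.
        List<String> messages = List.of(
                "Loving the new #widget from #acme",
                "#widget broke on day one",
                "#acme support was great #widget");

        // Count each hashtag, case-insensitively.
        Map<String, Long> counts = messages.stream()
                .flatMap(m -> HASHTAG.matcher(m).results()
                        .map(MatchResult::group))
                .collect(Collectors.groupingBy(
                        t -> t.toLowerCase(), Collectors.counting()));

        // Print the top three trending terms.
        counts.entrySet().stream()
                .sorted(Map.Entry.<String, Long>comparingByValue(
                        Comparator.reverseOrder()))
                .limit(3)
                .forEach(e -> System.out.println(e.getKey() + " " + e.getValue()));
    }
}
```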

The codifying stage for big data

Consider a retailer with hundreds of stores and many thousands of customers: it needs a repeatable process to make the leap from pattern identification to the implementation of new product selection and more targeted marketing. After you find something interesting in your big data analysis, codify it and make it a part of your business process.

To codify the relationship between your big data analytics and your operational data, you need to integrate the data.

Big data integration and incorporation stage

Big data is having a major impact on many aspects of data management, including data integration. Traditionally, data integration has focused on the movement of data through middleware, including specifications on message passing and requirements for application programming interfaces (APIs). These concepts of data integration are more appropriate for managing data at rest than data in motion.

The move into the new world of unstructured data and streaming data changes the conventional notion of data integration. If you want to incorporate your analysis of streaming data into your business process, you need advanced technology that is fast enough to enable you to make decisions in real time.

After your big data analysis is complete, you need an approach that allows you to incorporate the results of that analysis into your business processes and real-time business actions.

Companies have high expectations for gaining real business value from big data analysis. In fact, many companies would like to begin a deeper analysis of internally generated big data, such as security log data, that was not previously possible due to technology limitations.

Technologies for high-speed transport of very large, fast-moving data are a requirement for integrating across distributed big data sources and between big data and operational data. Unstructured data often needs to be moved quickly over large geographic distances for sharing and collaboration.

Linking traditional sources with big data is a multistage process that begins after you have looked at all the data from streaming big data sources and identified the relevant patterns. Once you have narrowed the amount of data you need to manage and analyze, you can turn to integration.

About This Article

This article is from the book Big Data For Dummies.

About the book authors:

Judith Hurwitz is an expert in cloud computing, information management, and business strategy. Alan Nugent has extensive experience in cloud-based big data solutions. Dr. Fern Halper specializes in big data and analytics. Marcia Kaufman specializes in cloud infrastructure, information management, and analytics.
