Experts from Intel and its partners gathered at The Big Data & Cloud Summit 2013 to discuss big data technologies and how businesses and government organizations can leverage the insights that predictive technologies bring to the table.
“The world generates one petabyte of data every 11 seconds, or the equivalent of 13 years of consecutive high-definition video,” says Intel. The company proposes that every individual and organization in the world should be able to unlock the intelligence available in big data, and it aims to address the cost, complexity, and confidentiality concerns associated with managing, storing, and securing these massive amounts of data.
Aziz Safa, GM of Intel’s internal IT group, explains that Intel is now starting to capture more data from its sensors to identify patterns that might help optimize the manufacturing process and save the company time and money. Machine learning makes root-cause analysis possible without a human, because the algorithms can sift through potentially thousands of data points about each chip to find the patterns common to the ones that failed.
Intel is able to store all this data because of Hadoop, and it has been doing so using its own distribution of Hadoop for about a year. With Hadoop, Safa said, Intel is able to dump everything in one place and then normalize it or analyze it however makes sense.
When asked about optimizing sales using predictive analytics, Safa said:
“Advanced analytics is not something new we’re doing for manufacturing; it’s something new we’re doing outside manufacturing.” He elaborated on how Intel is also using big data to inform better sales and marketing decisions. The idea there is to collect historical data about Intel’s 140,000 customers so sales reps can focus on the right ones (kind of like an internal version of Infer). One part of this process is a similarity analysis of sorts to find customers with the same types of buying patterns or perhaps similar needs, much like how Amazon recommends products that are often purchased together or viewed by the same people.
It seems that recommendation engine is built using Mahout, a Hadoop-based set of machine learning libraries, at least according to a July 2012 Intel whitepaper describing then-current big data proof-of-concept projects. By using Hadoop to process more unstructured data about accounts, which is presently a data-warehouse-driven process, Safa said Intel aims to give salespeople alerts about the best time to engage certain customers and what they might need. He wouldn’t go into details about just how much money the company’s big data efforts might make or save it, but he did say the company doesn’t devise proof-of-concept projects for problems worth less than $10 million a year, and that the work around keeping the manufacturing facilities operating optimally is definitely increasing profits. The whitepaper laying out Intel’s plans for improving sales says the pilot version of the system resulted in an estimated $3 million in incremental revenue and is expected to yield an additional $20 million when rolled out globally.
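The similarity analysis described above can be sketched by scoring how much two customers' purchase histories overlap, for example with Jaccard similarity. The customer names and product sets below are invented for illustration; Intel's Mahout-based system is not public and would use Mahout's own distributed similarity jobs.

```python
# Toy sketch of customer-similarity analysis via set overlap.
# All customer names and product sets are hypothetical.
def jaccard(a, b):
    """Jaccard similarity: shared items divided by total distinct items."""
    return len(a & b) / len(a | b)

purchases = {
    "acme": {"xeon", "ssd", "nic"},
    "globex": {"xeon", "ssd"},
    "initech": {"atom"},
}

def most_similar(customer):
    """Return the other customer whose purchases overlap the most."""
    others = [c for c in purchases if c != customer]
    return max(others, key=lambda c: jaccard(purchases[customer], purchases[c]))

print(most_similar("acme"))  # → globex (shares 2 of 3 product lines)
```

A sales rep could then pitch the top-scoring neighbor the products it hasn't bought yet, which is the "customers like this one also bought" logic the article compares to Amazon.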
Intel plans to make tens of millions with the help of Big Data
Intel is addressing these concerns by delivering open data management and analytics software platforms, including the Intel Distribution of Apache Hadoop software (Intel Distribution) and the Intel Enterprise Edition for Lustre software. Intel has identified multiple routes to market for its big data solutions: system integrators (SIs), independent software vendors (ISVs), original equipment manufacturers (OEMs), and training partners form the basis of this go-to-market plan. By partnering with all key segments of the technology ecosystem (SIs, ISVs, and OEMs), Intel aims to equip these partners with the capability to address their customers’ big data challenges and to create new revenue streams. By working with training partners, Intel plans to engage and educate customers about the use-case scenarios and opportunities big data holds for a range of businesses.