Big Data Consulting and Implementation Solutions

Today’s business organizations are collecting vast amounts of data, varying in structure, complexity and size. One thing all of these organizations are discovering, however, is that the wealth of strategic value lying in this data is difficult to extract using traditional relational database management tools. Thankfully, over roughly the same period, a new class of data-management tools, loosely termed “big data tools”, has emerged.

Besides strong technical competency, using big data tools requires a fundamental shift in how organizations view their data, its structure, usage scenarios and the roles & responsibilities of the IT and user organizations.

Accion Labs' Big Data practice assists our clients across the full lifecycle, from strategic upstream activities such as evaluating big data technologies and developing a big data road-map, through to the implementation and support of large environments.

Big Data Capabilities

Accion Labs offers a range of Big Data consulting and implementation services to develop your big data solutions. With a team of Big Data architects, data scientists, project managers and consultants trained at leading Big Data user firms such as Google, Yahoo and Facebook, we offer a balanced mix of strategy, design and implementation expertise.

We have delivered Big Data solutions across a range of industries, including media, telecom, manufacturing, pharmaceutical, chemical, aviation and retail.

Below are some of the technologies our Big Data practice works with:

  • Programming Languages: Java, Python, JavaScript (client-side as well as NodeJS)
  • Distributed File Systems: Apache Hadoop HDFS, Tachyon
  • Key/Value Data Stores: Apache Accumulo, BerkeleyDB, MemcacheDB, Redis, Amazon DynamoDB
  • Column-oriented Data Stores: Apache Hive, Apache HBase, Apache Cassandra, Amazon Redshift
  • Document-oriented Data Stores: MongoDB, CouchDB, Riak, RethinkDB
  • Graph-oriented Data Stores: Apache Giraph, Neo4J, Blueprints, OrientDB, GraphX
  • Relational Data Stores: Oracle, MySQL, PostgreSQL, MariaDB, Greenplum, Teradata, BlinkDB, Shark
  • Search Platforms: Apache Solr, Elasticsearch, GSA
  • Text Processing: Apache Tika, Apache Mahout, Apache Stanbol
  • In-memory/Realtime Processing: Apache Spark, Apache Spark Streaming, Apache Storm
  • Statistics & Visualization: R, Gnuplot, VizQL (Tableau), D3.js, Leaflet (maps)
  • Cloud Platforms: Amazon AWS, OpenStack

Big Data Case Stories

Customer: A large retail/supply chain firm
Developed an ETL and analytics environment to analyze POS data, with over 100TB of data collected from over 20 different channels, using Hadoop MapReduce programs over an HDFS repository
Environment: Hadoop, HDFS, Java, Linux
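
Below is a minimal sketch of the kind of MapReduce job used in this type of engagement. It assumes a hypothetical tab-delimited POS record layout (channel, store ID, SKU, sale amount) and totals sales per channel across the HDFS repository; the field positions, class names and paths are illustrative, not the client's actual schema.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

// Illustrative only: assumes tab-delimited POS records of the form
//   channel <TAB> storeId <TAB> sku <TAB> saleAmount
public class ChannelSalesJob {

    public static class SalesMapper extends Mapper<LongWritable, Text, Text, DoubleWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split("\t");
            if (fields.length < 4) {
                return; // skip malformed records
            }
            try {
                // emit (channel, saleAmount)
                context.write(new Text(fields[0]), new DoubleWritable(Double.parseDouble(fields[3])));
            } catch (NumberFormatException ignored) {
                // skip records with a non-numeric amount
            }
        }
    }

    public static class SumReducer extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {
        @Override
        protected void reduce(Text channel, Iterable<DoubleWritable> amounts, Context context)
                throws IOException, InterruptedException {
            double total = 0.0;
            for (DoubleWritable amount : amounts) {
                total += amount.get();
            }
            context.write(channel, new DoubleWritable(total)); // total sales per channel
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "pos-channel-sales");
        job.setJarByClass(ChannelSalesJob.class);
        job.setMapperClass(SalesMapper.class);
        job.setCombinerClass(SumReducer.class);      // safe: summing is associative
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(DoubleWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

The job is packaged into a jar and launched with the hadoop jar command, passing the HDFS input and output directories as arguments.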

Customer: An online publishing firm
Developed a set of log analysis tools that analyze website traffic across multiple web servers and consolidate the results
Environment: Hadoop, HDFS, Java, Flume, Linux
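
To give a flavour of how per-server logs are consolidated in this kind of setup, below is a minimal Apache Flume agent configuration sketch: each web server runs an agent that tails its local access log and ships the events into a shared, date-partitioned HDFS directory, where the Java analysis jobs pick them up. The agent name, log path, NameNode host and HDFS directory are illustrative assumptions, not the client's actual setup.

# One agent per web server: tail the local access log and ship events to HDFS
agent.sources = accessLog
agent.channels = memCh
agent.sinks = hdfsSink

agent.sources.accessLog.type = exec
agent.sources.accessLog.command = tail -F /var/log/httpd/access_log
agent.sources.accessLog.channels = memCh

agent.channels.memCh.type = memory
agent.channels.memCh.capacity = 10000

agent.sinks.hdfsSink.type = hdfs
agent.sinks.hdfsSink.channel = memCh
agent.sinks.hdfsSink.hdfs.path = hdfs://namenode:8020/logs/web/%Y-%m-%d
agent.sinks.hdfsSink.hdfs.useLocalTimeStamp = true
agent.sinks.hdfsSink.hdfs.fileType = DataStream
agent.sinks.hdfsSink.hdfs.rollInterval = 300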

Customer: A leading pharma firm
Developed a patent search technology that supports multiple persistence and indexing engines, with synchronous CRUD operations across all engines.
Environment: Hadoop, Flume, MongoDB, Neo4J, Java/J2EE
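
A simplified Java sketch of the synchronous multi-engine CRUD idea follows; the interface, class and method names are illustrative. Each engine (for example the MongoDB document store, the Neo4J graph and the search index) sits behind its own adapter implementing a common contract, and a composite store applies every write to all engines so they stay in step, while reads are served from a designated primary.

import java.util.List;

// Illustrative sketch only: a common CRUD contract that each storage or
// indexing engine implements with its own adapter.
interface PatentStore {
    void create(String patentId, String document);
    String read(String patentId);
    void update(String patentId, String document);
    void delete(String patentId);
}

// Fans every write out to all registered engines synchronously, so the
// document store, graph store and search index never drift out of step.
class CompositePatentStore implements PatentStore {
    private final PatentStore primary;      // engine used to serve reads
    private final List<PatentStore> engines; // all engines, including the primary

    CompositePatentStore(PatentStore primary, List<PatentStore> engines) {
        this.primary = primary;
        this.engines = engines;
    }

    @Override public void create(String id, String doc) { engines.forEach(e -> e.create(id, doc)); }
    @Override public String read(String id)             { return primary.read(id); }
    @Override public void update(String id, String doc) { engines.forEach(e -> e.update(id, doc)); }
    @Override public void delete(String id)             { engines.forEach(e -> e.delete(id)); }
}

In a production implementation each adapter would wrap its engine's own client library, and failure handling (retries or compensating deletes) would be needed to keep the engines consistent when one write fails.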

Customer: A leading telecom firm
Developed a CRM analytics tool that brings call-center logs, IVR logs and customer support emails into a combined repository, providing a 360-degree view of all customer support conversations
Environment: Hadoop, Hive, HBase, Java/J2EE, C, MQ Series
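
Once the conversations are consolidated, the 360-degree view can be queried directly from Hive. The following Java sketch uses the Hive JDBC driver to pull a single customer's full interaction timeline; the server host, table name (interactions) and column names are assumptions for illustration, not the client's actual schema.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Illustrative only: lists all support interactions (call-center, IVR, email)
// for one customer from a hypothetical consolidated Hive table.
public class CustomerTimeline {
    public static void main(String[] args) throws Exception {
        String customerId = args[0]; // customer id passed on the command line
        Class.forName("org.apache.hive.jdbc.HiveDriver");
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:hive2://hive-server:10000/default", "analyst", "");
             PreparedStatement stmt = conn.prepareStatement(
                 "SELECT interaction_ts, channel, summary "
               + "FROM interactions WHERE customer_id = ? "
               + "ORDER BY interaction_ts")) {
            stmt.setString(1, customerId);
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    // columns read by position: timestamp, channel, summary
                    System.out.printf("%s  %-12s  %s%n",
                        rs.getString(1), rs.getString(2), rs.getString(3));
                }
            }
        }
    }
}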

Big Data Services

Advisory services

Our Advisory services include:

  • Identifying/defining Big Data business/project initiatives
  • Developing a Big Data implementation road-map
  • Creating proofs of concept, white papers and technology/tool evaluations
  • Providing a road-map to help clients choose appropriate technologies/frameworks/tools
  • Implementing best practices and industry standards
  • Implementing new tools and technologies to provide innovative solutions

Execution services

Our execution services include:

  • Planning, design and implementation of Hadoop and other Big Data environments
  • Developing/enhancing Java, C++ or LAMP-based applications on existing or new Hadoop implementations
  • Troubleshooting/performance optimization of existing Hadoop implementations
  • Data quality management and data harmonization projects
  • Testing/QA of big data applications, automation of data validations and regression test scenarios
  • Documentation, programmer training, reverse-engineering, upgrade, maintenance, migration and other steady-state services