Archive for the 'Big Data' Category

Big Data Integration Best Practices

Last week I hosted a SnapLogic webinar that featured a great overview of how to approach the increasing requirements for big data integration by Mark Smith, CEO & Chief Research Officer of Ventana Research. As opposed to simply trying to re-purpose your old ETL tools or hiring developers to write custom code, Mark shared 5 best practices for attaining excellence in big data integration:

  1. Evaluate efficiency of processes
  2. Examine new approaches
  3. Evaluate technology needs
  4. Investigate dedicated technology
  5. Gain benefits that outweigh costs

I’ve embedded the slides below and you can watch the webinar here.

2015 Technology Predictions

I enjoy reviewing the predictions from technology pundits this time of year. I particularly appreciate it when industry analysts take the time to review their predictions from the previous year. Here are a few:

And kudos to Gartner’s Doug Laney for this post – A Look Back on My Information and Analytics Strategy Research from 2014, which also includes links to some of Gartner’s big data management predictions.

Speaking of predictions, here’s SnapLogic’s Gaurav Dhillon sharing a few of his predictions for 2015:

CIOs are Getting SMACT: #Social, #Mobile, #Analytics, #Cloud, #IoT

Check out this Infographic from SnapLogic – Why Are CIOs Getting SMACT?

Integration Innovation – Why Legacy ETL and EAI Vendors are Struggling

I wrote a post on the SnapLogic blog this week about the wave of innovation that is happening in the data and application integration market and introduced two new data management acronyms (like we need more, I know) – OETL and OEAI:

  • Old Extract, Transform, Load
  • Old Enterprise Application Integration

There’s no shortage of articles (and books) on disruptive innovation and why it’s so hard for on-premises software vendors to transition to the new era of social, mobile, analytics and big data, cloud computing, and the internet of things (SMACT). Here are 10 reasons (some unique and some applicable to all mature technology vendors) why legacy data integration and middleware vendors are struggling to re-invent themselves:

  • Cannibalization of the Core On-Premises Business
  • Heritage Matters in the Cloud
  • EAI without the ESB
  • Beyond ETL
  • Point to Point Misses the Point
  • Franken-tegration
  • Big Data Integration is not Core…or Cloud
  • An On-Ramp to On-Prem
  • Focus and DNA

You can read the entire post here. Let me know if you agree or disagree – I clearly have something of a bias.

Here’s a PowerPoint deck I worked on in 2007 that remains relevant today.

SnapReduce: Cloud Integration and Big Data

For 5+ years this blog has focused primarily on the topic of integrating cloud applications like Workday, ServiceNow, and Zuora with each other and with on-premises applications like SAP and Oracle. Occasionally I’ve written about the shift to cloud-based business intelligence tools and platforms, but it’s been mostly all things software as a service (SaaS) and integration platform as a service (iPaaS).

Thanks primarily to YARN and some of the advances in the Hadoop 2.0 platform, SnapLogic this week announced SnapReduce 2.0, expanding cloud application integration into big data integration. Here’s SnapLogic’s Chief Scientist Greg Benson discussing the news.
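
The announcement doesn’t cover SnapReduce’s internals, but to make the YARN point concrete, here’s a minimal, hypothetical sketch in the Hadoop Streaming style – the kind of data-transformation job that YARN can now schedule alongside other cluster workloads. The CSV input format (customer_id,amount), field names, and paths are invented for illustration; this is not SnapLogic code.

    #!/usr/bin/env python
    # mapper.py - read "customer_id,amount" CSV records from stdin
    # and emit tab-separated key/value pairs for the shuffle phase.
    import sys

    for line in sys.stdin:
        fields = line.strip().split(",")
        if len(fields) >= 2:
            print("%s\t%s" % (fields[0], fields[1]))

    #!/usr/bin/env python
    # reducer.py - input arrives sorted by key, so we accumulate a
    # running total per customer and flush it when the key changes.
    import sys

    current_key, total = None, 0.0
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        key, value = line.split("\t", 1)
        if key != current_key:
            if current_key is not None:
                print("%s\t%.2f" % (current_key, total))
            current_key, total = key, 0.0
        total += float(value)
    if current_key is not None:
        print("%s\t%.2f" % (current_key, total))

A pair of scripts like this can be submitted to a Hadoop 2.x cluster with the stock streaming jar (e.g. hadoop jar hadoop-streaming.jar -files mapper.py,reducer.py -mapper mapper.py -reducer reducer.py -input /data/orders -output /data/totals), and YARN handles the container scheduling – the architectural change that makes announcements like SnapReduce 2.0 possible.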

The Cloud Integration Dilemma for Enterprise IT

According to Gaurav Dhillon, CEO of SnapLogic, there’s not just an Innovator’s Dilemma hitting legacy technology vendors, there’s an Integrator’s Dilemma that the customers of traditional middleware and data integration providers are struggling with in the face of today’s industry mega-trends. As he states:

“The dilemma for enterprise IT organizations is that their legacy integration technologies were built before the era of big data, social, mobile and cloud computing and simply can’t keep up.”

Here’s a video of Gaurav talking about the Integrator’s Dilemma:

Ken Rudin Talks Big Data and Big Business Impact @ #strataconf


Ken Rudin runs the analytics team at Facebook. Today at Strata and Hadoop World he delivered a fantastic presentation on Big Data and how Facebook approaches analytics. Instead of adding another V to the Big Data pile, he focused on the most important I in BI – Impact. I had the pleasure of working with Ken for 2.5 years at an early-stage cloud analytics company called LucidEra, where he pioneered something called “The Pipeline Healthcheck.” Some of the messages he conveyed at #strataconf are ones he’s held for a long time:

  • Analytics is not just about getting the answers, it’s about knowing what questions to ask.
  • It’s not about insight, it’s about impact.

You can watch his presentation below. It’s clear, concise, and will have an impact – particularly if you’re a business analyst or planning to become one. I agree with this post from BI guru Cindi Howson:

“If I were to ask a biz sponsor or BI team to watch 1 keynote, it would be Ken Rudin, Facebook.”

Well done, Ken!
