Confluent’s user conference kicked off in Austin this week with two announcements aimed at democratizing data streaming in organizations. The company, which grew out of the Apache Kafka open source project, recognizes that as enterprises shift from batch processing and historical data use to real-time data streams (or ‘data in motion’), they will need tooling, guidance and support to extend these systems beyond the companies that can invest in expensive, highly technical engineering teams.
This was a key theme explored by Confluent CEO Jay Kreps this week during both his keynote and the press briefing. Kreps argued that Confluent – and data streaming more broadly – is at an inflection point of technology adoption, rising up the curve towards becoming widespread enough to disrupt industries and the economy.
He used the example of electricity to explain that there is an inherent tension between a disruptive new technology and the demand/supply dynamics needed to fuel further adoption and new use cases. Kreps said:
When electricity was invented, you might imagine we went from the steam-powered factory with the crankshaft, and then something happens with Benjamin Franklin and everything is electrified.
It’s actually a little more complicated than that, how a new technology is developed and how it rolls out. There’s actually a chicken-and-egg problem to solve. How do you get the egg if you don’t have the chicken? And there wouldn’t have been a chicken if you didn’t have an egg.
It turns out you have kind of a virtuous cycle here, where there’s some increase in supply, some early innovation starts to drive a little demand for electricity and a few use cases here and there. But it’s hard, it’s difficult.
Additional demand drives more supply, right? There’s more markets to sell electricity to, and this starts to kind of spin something up, and that’s how you go up that adoption curve – that’s how you start to hit that inflection point. And at a certain point it becomes reliable enough and plentiful enough and well known enough, and there’s enough skills around it, that it starts to really take off.
Kreps said that in many ways this is what data streaming is experiencing: companies have been running experimental projects, which in turn have created more opportunities for vendors such as Confluent. But this virtuous cycle of innovation stimulating supply and demand has now reached a “deployment period”, Kreps added, in which capabilities are expanding and compounding.
Making Apache Kafka approachable
Kreps believes that real-time data streaming and Confluent have an opportunity to impact economies in the same way that electricity did. Whether or not you buy into that idea, it’s hard to dispute that organizations are wising up to the fact that data is fundamental to their future success. And when you consider that historical data can often paint an inaccurate picture of what’s happening ‘now’, it’s not hard to understand why Confluent and this idea of ‘real-time data events’ is seeing success.
It’s not just the real-time nature of data streaming that’s appealing, it’s the opportunity to see every data event that has taken place at any moment in time – whereas traditional databases typically store only the current state, with errors and corrections overwriting the history that led to it. It’s no surprise that Confluent’s offering has been compared to distributed ledger technology (or blockchain) a number of times at the event this week (although they are distinctly different).
But Kreps is right to say that this will require effort on Confluent’s part, and on the part of the ecosystem, to make Apache Kafka and data streaming more accessible to a wider audience. Or, to put it another way, to make it easier to use. Many of the conversations I’ve had with customers since I started covering this area have focused on both the organizational shift required to adopt a real-time streaming operating model, and the investment in engineering skills required to make it successful.
This is the next stage of Confluent’s journey, following the successful launch of its cloud offering. I asked Kreps what his larger customers are asking of the company – what do they still want to see from Confluent? His response tied directly into this agenda. He said:
I think there’s probably two really big driving needs. One is very crisply stated and the other is less crisp. The first is around governance: these larger companies are incredibly diverse and they need automated ways of governing things, so that the right thing happens and the data is understandable.
The second one is more vague – how can we make it easy? How can we make it easy to adopt?
Early on it was really only approachable for the Silicon Valley tech companies who were going to hire a big team of engineers to run Kafka and then kind of build it by hand into each system they had. It was very valuable for them, but a large undertaking, hard to do. So if you think about it from Confluent’s point of view, what is it we need to do? We need to make it easy, make it approachable, and then that makes it applicable to the rest of the world.
I think that is the combination of cloud, making the operations problem go away. Also, helping to orchestrate that mindset shift for these bigger companies. And then tools, like the stream processing tools, KSQL, Stream Designer – I think that can make people much more productive in this area.
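For readers unfamiliar with the stream processing tools Kreps mentions, a short ksqlDB example illustrates the kind of productivity gain he is describing: a continuous transformation over a Kafka topic declared in a few lines of SQL, rather than hand-written consumer code. This is a minimal sketch – the topic and column names are hypothetical, and assume a running ksqlDB cluster attached to Kafka.

```sql
-- Hypothetical example: declare a stream over an existing Kafka topic...
CREATE STREAM orders (order_id VARCHAR, amount DOUBLE)
  WITH (KAFKA_TOPIC='orders', VALUE_FORMAT='JSON');

-- ...then derive a continuously updated stream of high-value orders.
-- New matching events flow into large_orders as they arrive.
CREATE STREAM large_orders AS
  SELECT order_id, amount
  FROM orders
  WHERE amount > 1000
  EMIT CHANGES;
```

The point is less the specific query than the shift in who can write it: a declarative layer like this is part of how Confluent hopes to open stream processing to teams without a dedicated Kafka engineering function.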
However, Kreps also was keen to point out that you don’t have to reinvent the wheel on your first attempt. Companies need time to adjust and part of that involves starting small, identifying quick wins, and then scaling up. He added:
This whole idea of real-time streaming data, it’s kind of obvious that [everything] should work that way – but it doesn’t today. So how do you get from here to there?
The first use case is the first use case, you don’t have to transform the whole organization to do that one thing. But we try to work with organizations throughout that whole journey. How can you do the first thing as easily as possible and how do you spread that across the organization? What do you have to have in place when data is exchanged broadly across the organization?
I think this frames Confluent’s ambitions nicely for the next few years. Data streaming is highly technical and complex, but if it’s going to change the way the world operates, Kreps knows that it needs to be made more accessible. We’ve seen this in other sectors – NoSQL databases, for example – where vendors that have built the technical foundation of their platform have then been able to move up the pyramid and get the attention of business leaders too. Confluent is on an interesting path and it’s building an impressive portfolio of customers. How it adapts and responds to reaching this ‘inflection point’ – and going beyond it – will be its defining moment.