Using any technology should allow a business to beat its competitors not just in the straightaways, but ‘in the turns’, where the difficulties that stymie a company’s future can be found.
This, at least, is the view of Rick McConnell, CEO of data observability specialist Dynatrace. In practice, the company has a range of data management tools covering Robotic Process Automation, Artificial Intelligence and Process Mining, but it is the way these can be combined, creating the ability to observe what data is doing and why, that helps users negotiate the inevitable turns of technical complexity that confront them all:
The challenge is an explosion of data. And this volume is overwhelming, especially when we add infrastructure and applications, real user data, logs, and more. And not only is it a huge volume of data, but it’s also expanding in its complexity. We need a way to manage that.
McConnell was speaking at the company’s recent London conference, where it introduced the next stage in pulling more of that available data together to build richer, more comprehensive observations of what is happening in the processes that lie at the heart of all business activity. The keyword here is ‘context’: being able to identify where and when data came from, what other data, applications and processes it interacts with and why, and a range of other contextual relationships, so that users can be presented with a full picture of what any data is about.
It also aims to provide this transparently and quickly in order to overcome the current position of users having to trawl through a growing number of data siloes with hand-crafted, single-function data extraction tools that have to be re-crafted every time there is a slight change to a particular process.
As Bernd Greifeneder, SVP, Chief Technology Officer and Founder of Dynatrace, told delegates, current extraction tools lose all the data’s context and semantics and exploiting it becomes a complicated task, not least because so much data is held in different siloes running different storage and database technologies. In addition, having the data in context is a great advantage when it comes to managing security issues.
Another key trend he highlights is the growth in automation, which in turn means there is a need to automate how the data is made available for use in automated systems. This needs good APIs, of course, but it also means having services designed to work in hyperscale environments as this is the easiest way to automate at scale.
That is the background to the conference announcement of a new core technology for the company, called Grail. Side-stepping any ‘holy’ allusions to legendary goblets, Greifeneder points out that the name derives directly from its core deliverables – GRaph, AI, and Limitless:
Because it’s foundational, we always wanted to build something that brings together a time series database, an event database, a user search database, a traces database – in the graph database. And let’s enrich this with domain experience and semantic knowledge, all in one. It takes the best from a data lake, the best from a data warehouse, but makes it different and even better. Think of it as a graph database kind of approach, but all in one, not separate.
It has also been designed from the ground up to be highly parallel in operation, capable of working across thousands of nodes, and to be easy to use, because it is fronted by a new query language that will be ubiquitous across all the tools in the Dynatrace platform.
Greifeneder is not against the notion of siloes, just the number of them, so Grail is effectively one overall storage silo where all the data is kept fully in context, allowing causal analytics to be run at the end. It also contains a data lake capable of working with the thousands of petabytes that make up modern massively large datasets, all managed on a schema-on-read basis so that data is instantly queryable.
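The schema-on-read idea mentioned above can be sketched in a few lines: raw records are stored untouched, and structure is only imposed when a query actually runs. This is a minimal illustrative sketch in Python (the article names no language), not Grail's implementation; the sample records and the in-memory "store" are invented for the example.

```python
# Minimal schema-on-read sketch (illustrative only, not Grail's design):
# raw log lines are stored as-is; a schema is applied only at query time.
import json

# Hypothetical raw store: records kept exactly as they were ingested.
RAW_STORE = [
    '{"ts": 1, "level": "ERROR", "msg": "db timeout"}',
    '{"ts": 2, "level": "INFO",  "msg": "ok"}',
]

def query(raw_store, predicate):
    """Parse each record lazily (schema applied on read) and filter it."""
    for line in raw_store:
        record = json.loads(line)   # structure imposed here, at read time
        if predicate(record):
            yield record

errors = list(query(RAW_STORE, lambda r: r["level"] == "ERROR"))
print(errors)  # one record: the ERROR-level "db timeout" line
```

Because nothing is parsed at write time, new fields can appear in later records without any upfront schema migration, which is the property the schema-on-read approach trades on.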
The query language has been built with the goal of being easy to read and use. It’s a step-by-step, piped data-processing language intended to eliminate the complexities of SQL’s inner and outer joins. It has also been designed for sharing queries with colleagues, he explains:
Anyone who has used the Splunk query language will find it familiar and very easy to move to. It also addresses productivity. We have even built parsing into the query language because we heard requests for it so often from customers.
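The piped, step-by-step style described above can be illustrated with a small Python sketch: each stage is a plain function applied to the output of the previous one, so there are no joins to reason about. The stage functions, sample log records, and the `pipe` helper are all invented for this example; this is the general pattern, not Dynatrace's actual query syntax.

```python
# Illustrative sketch of piped, stage-by-stage query processing.
# All names and data here are hypothetical, not Dynatrace's syntax.
from functools import reduce

LOGS = [
    {"service": "checkout", "status": 500, "ms": 120},
    {"service": "checkout", "status": 200, "ms": 35},
    {"service": "search",   "status": 500, "ms": 80},
]

def pipe(data, *stages):
    """Feed `data` through each stage in order, left to right."""
    return reduce(lambda d, stage: stage(d), stages, data)

def only_errors(rows):
    # filter stage: keep server errors
    return [r for r in rows if r["status"] >= 500]

def count_by_service(rows):
    # summarize stage: error count per service
    counts = {}
    for r in rows:
        counts[r["service"]] = counts.get(r["service"], 0) + 1
    return counts

errors_per_service = pipe(LOGS, only_errors, count_by_service)
print(errors_per_service)  # {'checkout': 1, 'search': 1}
```

Each stage can be read, tested, and shared on its own, which is the readability and shareability benefit the piped style is claimed to offer over nested SQL joins.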
Context equals history
A key part of building the contextual relationships between data comes from tying Grail into a close relationship with Smartscape, the company’s application mapping toolset. Through this, the interactions and billions of interdependencies between applications and the data they both work with and generate can be identified, outlining the contextual relationships between the data.
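The kind of dependency mapping described above amounts to walking a graph of which components depend on which. This toy sketch (the component names and graph are invented, and this is not Smartscape's implementation) shows the basic idea of tracing everything downstream of a given application:

```python
# Toy dependency graph (hypothetical names) illustrating how interdependencies
# between applications and data stores can be traced.
DEPENDS_ON = {
    "web-frontend": ["checkout-service"],
    "checkout-service": ["orders-db", "payment-gateway"],
}

def reachable(node, graph):
    """Return every downstream dependency of `node` (depth-first walk)."""
    seen, stack = set(), [node]
    while stack:
        current = stack.pop()
        for dep in graph.get(current, []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

print(sorted(reachable("web-frontend", DEPENDS_ON)))
# ['checkout-service', 'orders-db', 'payment-gateway']
```

At production scale the same traversal runs over billions of edges rather than three, which is why keeping the graph and the data in one place matters for query speed.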
This will also help users exploit historic data once it is part of that single Grail silo. Many parts of a business want to look back at what it was doing, and how, more than 12 months ago. Going back years can reveal long-term trends, or repeated errors of both human judgement and coding that only show up in analysis of long-term, historic data. It is easily spoken of, but by no means easy to achieve. This, according to Greifeneder, is where the contextual capabilities of Grail can add a significant bonus to the management of data needed for ever-deeper analysis.
The company now has over 600 extensions on the Dynatrace hub to bring all this data into the Dynatrace environment and eventually into Grail, so that it is all together, in context.
In addition, Grail can then provide the ‘fuel’ for the company’s AI service, Davis, in a form that helps queries to be formulated automatically. Greifeneder explains:
We don’t want users to sit there day in and day out thinking about what is the right query. Now we want Davis to help you get your answers quicker, as automatically as possible. We do all this to fuel scalability, security and, finally, automation, because there’s way too much data to not automate.
As a start, Dynatrace has taken what is seen as an urgent use case as a worked example for users – making log management and analytics integral to and powered by Grail so that logs no longer stand alone. Users on the Grail Beta program have already been regularly ingesting 10 terabytes of data a day, and Greifeneder confidently predicts that by the first quarter next year, this will pass the 100 terabytes per day mark.
Grail will be compatible with the three main hyperscale service providers, and has just launched on the first, AWS. Google Cloud and Microsoft Azure versions will become available during the first quarter of next year.
Speaking to diginomica after the event, Greifeneder talked more about some of his fundamental beliefs that contributed to Grail’s design.
The importance of context when it comes to data security is a case in point. He has long been aware that every security tool will do a scan and throw up several thousand vulnerabilities. But a list with no context is, at the very least, not a good way ‘to start each day’, he says:
“So now you need context in order to understand what is not relevant, what is the risk of them and which one of them has higher risk. One context of this is exposure to the public internet versus the internal, or whether this is even hooked up to a production database versus only a test database. Grail tells you okay, there’s the vulnerability, but we rank this up because we know this is publicly exposed, or this is connected to your production database. This one is not ever loaded. It’s just test code, or just some third party code that happens to be there, so we rank it down. We know which application, and even which end user, has triggered what.”
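Greifeneder's point about ranking vulnerabilities up or down by context can be sketched as a simple scoring function. To be clear, the weighting scheme below is invented purely for illustration; it is not Dynatrace's actual risk model, and the CVE names and scores are made up:

```python
# Illustrative only: how runtime context can re-rank a raw vulnerability list.
# The multipliers are invented for this example, not Dynatrace's scoring model.
def contextual_risk(base_score, internet_exposed, touches_prod_db, code_loaded):
    score = base_score
    if internet_exposed:
        score *= 2.0    # publicly reachable: rank up
    if touches_prod_db:
        score *= 1.5    # wired to a production database: rank up
    if not code_loaded:
        score *= 0.1    # never loaded (test or dormant third-party code): rank down
    return score

findings = [
    ("CVE-A", contextual_risk(7.0, internet_exposed=True,
                              touches_prod_db=True, code_loaded=True)),
    ("CVE-B", contextual_risk(9.8, internet_exposed=False,
                              touches_prod_db=False, code_loaded=False)),
]
findings.sort(key=lambda f: f[1], reverse=True)
print(findings)  # CVE-A now outranks the nominally "worse" CVE-B
```

The inversion is the point: a medium-severity flaw on an internet-facing service touching production data ends up ranked above a critical-severity flaw in code that is never loaded.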
Asked if he saw the development of far richer tools like Grail as a two-edged sword, their benefits countered by the possibility that the better they do their job, the more problems are created in terms of the increased granularity of the ‘problems’ identified, he countered with a clear alternative:
“I wouldn’t. I think the better we do our job, the closer we get to the precise answers, and automation. We provide answers through Intelligent Automation.”
He also deflected questions about what might come next: does the development of one sample use case solution suggest the development of more, including examples from customers and third parties, to be made available in a repository?
“You keep asking. There’s a line and you’re at the line.”
It is very easy to take that as a ‘yes’, and there would certainly be advantages for users in that being the case. But users will have to make up their own minds about the possibility.