Q. What were the technology challenges that you were trying to overcome at the start of ActiveViam?
In 2000, banks were still processing their data in overnight batches – all systems were end-of-day. All analysis was deferred to the next day, so the data could not inform today’s decisions. For example, to run Value-at-Risk (VaR), several steps had to happen: you extracted the data, ran a batch of calculations and wrote intermediate results to the database several times.
A new generation of risk managers had another vision. They wanted more interactive analytics, something they could use for decision-making, not just for reporting. They wanted to work on intraday data, and even with VaR – a non-linear aggregation – they wanted to run calculations on the raw data on the fly, so they could explore the data, slice and dice it to understand why a particular number came out the way it did, and run ‘What-If’ simulations.
We saw an opportunity to make it instantaneous so that it could be used for decision-making. There was no technology at the time capable of doing it, so we decided to write our own database designed for fast analytics, real-time data updates and sophisticated data models.
Q. How does ActiveViam compare to its competitors?
Even today, we are the fintech that solves its customers’ challenges from inside the database. We own the database, we have evolved and improved it, and we keep improving it to address new business challenges from inside the database – that is what makes our software unique. It sets ActiveViam apart from other fintechs, who have to reuse existing data technologies and so must compromise and do the best they can.
Building our own database was in fact an enormous amount of work – we have invested a full decade of R&D to make it truly great.
Q. You often mention the “three pillars” of ActiveViam technology. What are they and why are they so important in risk management technology?
- Performance – We had to build something that could do on the fly, in a few seconds, what other systems do in an hour-long overnight batch. We had to deliver a big performance jump, and that is why we pioneered in-memory computing, column stores and, of course, scale-out architecture – the ability to distribute data and calculations across a cluster – as well as scale-up – the ability to distribute calculations across the many cores of bigger servers.
- Real-time – We had to support continuously-streaming data: the ability to run aggregations and fast analytical queries on data that changes continuously – intraday data that is updated and modified, and real-time market data if needed. It was a lot of work; we had to write a dedicated multi-version concurrency control engine that supports real-time updates and fast queries at the same time. On top of that real-time capability we also built real-time push, which is very useful for front-office use cases: ‘What-If’ analysis, pre-deal checking and modern sign-off workflows where adjustments from analysts are integrated instantly.
- Risk-specific calculations – It’s not just sum, count and group by, the kinds of aggregates a standard database offers. We knew we had to support and optimize non-linear aggregations such as Value at Risk, estimating PnL in real time from sensitivity ladders, applying netting rules for credit risk, and performing user-based dynamic bucketing for asset management… Today that also includes KPIs such as the FRTB capital requirement, with its complicated cross-correlation sums.
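To make the third pillar concrete, here is a minimal sketch in plain Python (illustrative data and function names, not ActiveViam code) of why Value at Risk is a non-linear aggregate: the portfolio VaR must be computed from the summed scenario PnL vectors of the positions, not by summing each position’s own VaR.

```python
# Illustrative sketch: non-linear aggregation of historical VaR.
# All names and figures are hypothetical.

def historical_var(pnl_scenarios, confidence=0.95):
    """Historical VaR: the loss at the (1 - confidence) quantile."""
    ordered = sorted(pnl_scenarios)
    index = int((1 - confidence) * len(ordered))
    return -ordered[index]

# Two positions, each with PnL over the same 5 historical scenarios.
position_a = [-10.0, 2.0, 5.0, -3.0, 4.0]
position_b = [9.0, -4.0, -6.0, 2.0, 1.0]

# Linear (wrong) aggregation: sum the per-position VaRs.
naive_sum = historical_var(position_a) + historical_var(position_b)

# Non-linear (correct) aggregation: sum the PnL vectors, then take VaR.
portfolio = [a + b for a, b in zip(position_a, position_b)]
portfolio_var = historical_var(portfolio)

print(naive_sum, portfolio_var)  # the two numbers differ
```

Because losses offset each other across positions, the correct portfolio VaR is much smaller than the naive sum – which is exactly why such a measure cannot be precomputed once and simply rolled up like a standard SQL aggregate.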
Q. Why is multi-dimensional analysis so important, especially in the field of risk management?
The three pillars above explain why we decided to go with a multi-dimensional model – analytical cubes.
We chose to implement our technology as a cube because, on one hand, it offers an intuitive experience to front-end users, who think in hierarchies and metrics. That is how they think of the business – not in tables and rows. On the other hand, a multi-dimensional model is what allows us to isolate the aggregation business logic from the dashboarding and data exploration.
For instance, in an Atoti market risk application, Value at Risk is defined as a single formula. That gives end users complete freedom, because the multidimensional engine knows how to apply the formula on the fly, according to the hierarchies, groupings and filters the user selects.
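The idea of defining a measure once and letting the engine apply it to any grouping can be sketched in plain Python (hypothetical data and helper names, not the Atoti API):

```python
# Illustrative sketch: one VaR formula, applied on the fly to whatever
# grouping level the user picks, rather than precomputed per report.
from collections import defaultdict

def historical_var(pnl_vector, confidence=0.95):
    ordered = sorted(pnl_vector)
    return -ordered[int((1 - confidence) * len(ordered))]

# Fact table: one row per trade, with attributes and a scenario PnL vector.
trades = [
    {"desk": "Rates", "book": "Swaps",   "pnl": [-5.0, 1.0, 2.0, -1.0, 3.0]},
    {"desk": "Rates", "book": "Options", "pnl": [-2.0, -1.0, 4.0, 0.0, 1.0]},
    {"desk": "FX",    "book": "Spot",    "pnl": [3.0, -4.0, 1.0, 2.0, -1.0]},
]

def var_by(level, rows, confidence=0.95):
    """Sum PnL vectors along the chosen level, then apply the formula."""
    buckets = defaultdict(lambda: [0.0] * 5)
    for row in rows:
        buckets[row[level]] = [s + p for s, p in zip(buckets[row[level]], row["pnl"])]
    return {key: historical_var(vec, confidence) for key, vec in buckets.items()}

print(var_by("desk", trades))  # same formula, desk-level grouping
print(var_by("book", trades))  # same formula, book-level grouping
```

The aggregation logic lives in one place; the grouping is a query-time choice, which is the separation between business logic and exploration described above.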
Q. How does this vision translate today?
Our focus in 2021 is cost and simplicity, because that is where the market is going. We have reached the peak of performance possible on current hardware, so what matters now is making the technology more accessible, both in terms of cost and in terms of the skills and training it requires. We want the power of our data model to be available to every business and every team that needs it.
Over the past few months, we introduced several new features to work towards that goal:
- The atoti Python library enables banks to be more agile and make their analytics evolve much faster to fit the needs of the moment. Those needs are constantly changing, and the analytics must follow.
- ActiveViam’s Direct Query makes virtually all the data available at any time from the same point of access. Looking at longer histories or at a seldom-used but sometimes-relevant dataset is now not just possible but easy and cost-effective.
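The Direct Query idea – hot data served from memory, seldom-used history fetched on demand from external storage, all behind one point of access – can be sketched conceptually in plain Python (hypothetical names, not the actual Direct Query API):

```python
# Conceptual sketch: a single access point that serves hot data from
# memory and transparently queries an external store for cold history.
# All class and data names here are hypothetical.

class HybridStore:
    def __init__(self, hot, cold_fetch):
        self.hot = hot                # dict: date -> rows, kept in memory
        self.cold_fetch = cold_fetch  # callable querying the remote store

    def rows_for(self, date):
        # Serve from memory when possible, otherwise fetch on demand.
        if date in self.hot:
            return self.hot[date]
        return self.cold_fetch(date)

# Hypothetical remote archive standing in for a cloud data warehouse.
archive = {"2020-01-02": [{"trade": "T1", "pnl": 1.5}]}

store = HybridStore(
    hot={"2021-06-01": [{"trade": "T9", "pnl": -0.3}]},
    cold_fetch=lambda date: archive.get(date, []),
)

print(store.rows_for("2021-06-01"))  # answered from memory
print(store.rows_for("2020-01-02"))  # fetched from the archive on demand
```

The caller never needs to know where the data lives, which is what makes long histories and rarely-used datasets cheap to keep available.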
See Antoine present and demonstrate Direct Query:
Taken together, those features outline our vision for a unified risk analytics platform: traders and analysts from every desk have their own analytics applications and dashboards, perfectly fitted to their needs, but built on the same technological foundation and the same database, which can make any relevant piece of data available at any time. It facilitates collaboration to a truly transformative degree and virtually eliminates the many painful, slow reconciliation processes that plague risk professionals today.