By Martijn Groot, VP Marketing and Strategy, Alveo
The volume and diversity of data available to banks and other financial services firms has grown enormously and continues to do so. One of the drivers for this is that firms need to disclose more detail on transactions, investments and portfolios to comply with the ongoing push towards more transparency, including in the emerging environmental, social and governance (ESG) arena.
External reporting aside, data is being generated and collected in ever greater volumes through digitalisation simply as a by-product of business activities.
These growing volumes of data offer financial services firms a raft of potential benefits. But to capitalise fully, organisations need to find ways to harness this data effectively to deliver competitive edge and, to put it simply, to see the wood for the trees.
Opportunity and challenge
New content sets are packaged in different ways and made available via traditional, file-based delivery as well as via APIs. In addition, firms can use methods such as Natural Language Processing (NLP) to extract content directly from text-based data. Working with shopping lists of instrument or entity identifiers, or with keywords when analysing textual data, helps focus the extraction on the required content. The curation or quality control of data then requires the integration of multiple data sets from different sources to arrive at a composite picture.
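To illustrate the list-of-interest approach in its simplest form, the sketch below scans a text-based document for a set of instrument identifiers and ESG-related keywords. The identifiers, keywords and sample text are hypothetical, and a production pipeline would typically use a dedicated NLP library for entity recognition rather than plain pattern matching.

```python
import re

# Hypothetical lists of interest: instrument identifiers and ESG keywords.
# In practice these would come from a firm's security master or watch lists.
IDENTIFIERS = {"US0378331005", "GB0002374006"}            # example ISINs
KEYWORDS = {"emissions", "governance", "board diversity"}

def extract_mentions(document: str) -> dict:
    """Return the identifiers and keywords found in a text-based document."""
    found_ids = {i for i in IDENTIFIERS if i in document}
    found_keywords = {
        kw for kw in KEYWORDS
        if re.search(r"\b" + re.escape(kw) + r"\b", document, re.IGNORECASE)
    }
    return {"identifiers": found_ids, "keywords": found_keywords}

if __name__ == "__main__":
    sample = "The issuer (ISIN US0378331005) reported lower emissions and improved governance."
    print(extract_mentions(sample))
```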
A conservative approach to data acquisition is no longer a viable option. In the past, drawn-out data preparation processes were typically driven by monthly or quarterly reporting cycles, leading to insights that were inaccurate, dated or both. The combination of processing data over a long period of time and relying on data that is ultimately poor in quality to drive business decisions will not enable financial services organisations to keep pace with more nimble fintechs and challenger banks. Collection, aggregation, curation and analysis now form a continuous function.
To properly steer the data management function, firms must first decide its objectives. These can focus on the regular supply of data sets for BAU operations, or on enabling data scientists to self-serve data collection and analysis. That’s crucial, as there is little point collecting and scrutinising data unless there is a clear objective in place: a defined goal for the business to work towards.
Next, they need to make use of data scientists to gather the right data and ‘ask the right questions’. The ‘right’ data will depend on the clients, markets and geographies the firm works with, and can lead to lists of interest specifying what needs to be collected. Linked to that will be metadata requirements, e.g. SLAs that specify service windows, turnaround times and quality metrics. The cycle time required for data preparation and curation is shrinking all the time thanks to the advanced technologies now in place to harvest data, combine data sets and derive live insights from them. The questions that need answering and the use cases in scope will steer the data collection and curation process.
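A minimal sketch of what such a list of interest and its SLA metadata could look like follows; all field names, identifiers and thresholds are illustrative assumptions rather than a standard schema.

```python
# Illustrative only: a list of interest with associated SLA metadata.
# Field names, times and thresholds are assumptions, not a standard schema.
list_of_interest = {
    "name": "EMEA corporate bonds",
    "identifiers": ["XS0123456789", "XS9876543210"],   # example ISINs
    "fields": ["evaluated_price", "yield", "esg_score"],
    "sla": {
        "service_window": "06:00-08:00 CET",    # when the data must be available
        "turnaround_time_minutes": 30,          # max delay after source publication
        "quality_metrics": {
            "completeness_pct": 99.5,           # minimum proportion of populated fields
            "max_stale_days": 1,                # prices older than this are flagged
        },
    },
}
```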
Today, a skilled data analyst can do all this and translate the data into the big picture, bird’s-eye view that the C-Suite needs to base decisions on. Here we look at what’s making this possible and the benefits it brings.
Changing processes
Recent years have seen significant changes in the data management and analytics processes used by financial services firms and together these changes are helping to empower analysts, quants and data scientists as well as the firms they work for.
Historically the two disciplines have been largely separate. The data management process typically involves activities such as data sourcing, cross-referencing and ironing out any discrepancies via reconciliations and data cleansing processes.
Data analytics processes are typically carried out afterwards in a variety of desk-level tools and libraries, close to the users and typically operating on separately stored subsets of data. This divide has created problems for many financial institutions, with the separation lengthening both time to access and time to insight and acting as a brake on the decision-making processes that drive business success.
Today that’s rapidly changing. A shift to the cloud and the adoption of cloud-native technologies are helping firms transition to a more integrated approach to data management and analytics.
Apache Cassandra, for example, has emerged as a highly scalable, open-source, distributed database that makes it easier to securely store and manage large volumes of financial time series data. Apache Spark is a unified engine for big data processing. Taken together, the two, and other associated tools, are helping to facilitate the integration of data and analytics, which in turn can help support the decision-making process.
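As a simple illustration of that pairing, the sketch below reads a financial time series from Cassandra into Spark using the open-source Spark Cassandra Connector and computes a basic aggregate. The keyspace, table, column names and connection host are hypothetical, and the connector package is assumed to be available on the Spark classpath.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Assumes the Spark Cassandra Connector is on the classpath, e.g. submitted with
#   --packages com.datastax.spark:spark-cassandra-connector_2.12:3.4.1
spark = (
    SparkSession.builder
    .appName("timeseries-analytics")
    .config("spark.cassandra.connection.host", "cassandra.example.internal")  # hypothetical host
    .getOrCreate()
)

# Hypothetical keyspace/table holding end-of-day prices per instrument.
prices = (
    spark.read.format("org.apache.spark.sql.cassandra")
    .options(keyspace="marketdata", table="eod_prices")
    .load()
)

# Simple analytic on the curated series: average close per instrument over the last 30 days.
recent_avg = (
    prices
    .filter(F.col("price_date") >= F.date_sub(F.current_date(), 30))
    .groupBy("isin")
    .agg(F.avg("close_price").alias("avg_close_30d"))
)
recent_avg.show()
```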
Any data used to drive decision-making also needs to be of the highest quality, of course. Otherwise, the analytics will not work effectively and the intelligence on which senior decision-makers rely to drive the business strategy will not necessarily be accurate.
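A minimal sketch of the kind of automated checks that underpin this, assuming a simple price table with isin, price_date and close_price columns; the column names and thresholds are illustrative.

```python
import pandas as pd

def quality_report(prices: pd.DataFrame, max_stale_days: int = 1) -> dict:
    """Basic quality checks on a hypothetical price series.

    Expects columns: isin, price_date, close_price (illustrative assumptions).
    """
    prices = prices.copy()
    prices["price_date"] = pd.to_datetime(prices["price_date"])
    latest = prices.groupby("isin")["price_date"].max()
    stale_cutoff = pd.Timestamp.today().normalize() - pd.Timedelta(days=max_stale_days)
    return {
        "missing_prices": int(prices["close_price"].isna().sum()),
        "duplicate_rows": int(prices.duplicated(["isin", "price_date"]).sum()),
        "negative_prices": int((prices["close_price"] < 0).sum()),
        "stale_instruments": latest[latest < stale_cutoff].index.tolist(),
    }
```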
All the above explains how analytics has been empowered within financial services organisations today. But given all that, how do organisations get those analytics quickly and efficiently into the hands of senior decision-makers within the firm and ensure that they can be used to drive the execution of business strategy?
Driving the decision-making process
As the data management function expands to cover more data sources, becomes an increasingly real-time function and extends into analytics, it is positioned to empower staff working in different functions by providing them with self-service capabilities and easy access to the data they require to drive better-informed decision-making.
Suppliers of data management solutions have shifted their service model from software to managed services. Increasingly this is evolving further into a Data-as-a-Service (“DaaS”) model where suppliers not only host and run the data management infrastructure, but also perform checks on the data. A client can view the entire data set and will have dashboards covering the data preparation processes, but can also get different selections of data formatted in different ways for last-mile integration with their business applications.
This frees up staff on the client side to focus on value-added data analysis. Data-as-a-Service can cover any data set, but typically includes the processing of a range of third-party data sources, from pricing and reference data to curves and benchmark data, ESG and alternative data, and corporate actions. Providing a firm with cleansed and fully prepared data facilitates any consuming business process, including risk management, performance management and compliance.
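Purely as an illustration of last-mile integration, the sketch below requests a pre-defined selection of curated data over a REST API. The endpoint, parameters and authentication scheme are hypothetical and do not represent any particular vendor's interface.

```python
import requests

# Hypothetical DaaS endpoint and API key; not a real vendor interface.
BASE_URL = "https://daas.example.com/api/v1"
API_KEY = "replace-with-your-key"

def fetch_selection(selection_name: str, as_of: str, fmt: str = "csv") -> bytes:
    """Fetch a pre-agreed data selection, formatted for a downstream application."""
    response = requests.get(
        f"{BASE_URL}/selections/{selection_name}",
        params={"as_of": as_of, "format": fmt},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.content

# e.g. a cleansed ESG data selection for the risk system, delivered as CSV
payload = fetch_selection("esg-scores-risk", as_of="2024-06-28", fmt="csv")
```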
Quants and data analysts can then take these prepared data sets and use them to derive the key metrics that feed into senior decision-making processes. Data scientists are examining historical data across asset classes to distil information down into factors, including ESG criteria, that can be operationalised in their investment decision-making process. Increasingly, they are also incorporating innovative data science techniques, including AI and machine learning, into market analysis and investment processes.
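As a simplified sketch of that kind of factor work, the snippet below combines a curated return history with ESG scores into a single ranked view per instrument. The column names, the 12-month momentum definition and the equal weighting are illustrative assumptions, not a production model.

```python
import pandas as pd

def build_factor_view(returns: pd.DataFrame, esg: pd.DataFrame) -> pd.DataFrame:
    """Combine a trailing 12-month momentum factor with an ESG score per instrument.

    `returns` has columns: isin, month, monthly_return (one row per instrument/month).
    `esg` has columns: isin, esg_score. All names are illustrative.
    """
    momentum = (
        returns.sort_values("month")
        .groupby("isin")["monthly_return"]
        .apply(lambda r: (1 + r.tail(12)).prod() - 1)    # trailing 12-month return
        .rename("momentum_12m")
    )
    view = momentum.to_frame().join(esg.set_index("isin")["esg_score"], how="inner")
    # Rank-based scores put the two factors on a comparable scale before combining.
    view["momentum_rank"] = view["momentum_12m"].rank(pct=True)
    view["esg_rank"] = view["esg_score"].rank(pct=True)
    view["combined_score"] = 0.5 * view["momentum_rank"] + 0.5 * view["esg_rank"]
    return view.sort_values("combined_score", ascending=False)
```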
The new methodology enables the faster creation of proprietary analytics, with any combination of data types, to support activities including investment decisions, valuations, stress tests, performance analysis and risk management. By disseminating such information to C-Suite decision-makers and providing them with the necessary context and detail, data scientists can be instrumental in driving business strategy. Self-service capabilities to request new sources or review the lists of interest make for a much shorter change cycle in data supply.
Data-as-a-Service offerings can put both the operations and analytics of a firm on a solid, consistent data foundation. They shorten the change cycle and increase the quality of data provisioning to all business functions. Combined with quality metrics on the different data sets and sources, this can lead to ongoing improvement in the effectiveness of a data operation.