Traditionally (and slightly simplistically speaking), modelling has been a topic in which a few key variables (also called inputs, independent variables or “drivers”) are used to calculate the values of dependent variables, based on knowledge (or assumed knowledge, or hypotheses) about the behaviour of a system. Typically, a small number of input assumptions can result in large sets of calculations. The role of historical data would be to calibrate the model (e.g. to estimate the values of the input assumptions, such as the average growth rate achieved in the past). On the other hand, data analysis has traditionally involved using (possibly large) data sets to conduct a small set of calculations (e.g. calculating two linear regression coefficients based on thousands of data points).
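As a hedged illustration of this contrast (using invented data; the figures and variable names are assumptions, not taken from the text), the following sketch reduces thousands of data points to just two numbers, a slope and an intercept, via ordinary least squares:

```python
import random

# Hypothetical example: a large data set (5,000 points) yields a small
# set of calculated outputs (two regression coefficients).
random.seed(42)
xs = [i / 100 for i in range(5000)]
# Synthetic data with an assumed true slope of 3.0 and intercept of 1.5,
# plus random noise.
ys = [3.0 * x + 1.5 + random.gauss(0, 0.5) for x in xs]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# Ordinary least squares: slope = covariance(x, y) / variance(x).
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

print(round(slope, 2), round(intercept, 2))
```

Thousands of observations go in; only two calibrated parameters come out, which is the inverse of a traditional model, where a few calibrated inputs drive a large set of dependent calculations.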
In modern analysis, the boundaries between the two are disappearing: not only are larger data sets available for use in the traditional modelling sense, but the evolving capabilities of Excel to work with relational databases and large data sets (e.g. Power Query and Power Pivot) also mean that the tools now exist to implement some types of models that in the past could only have been conceived of, not readily implemented. Moreover, in some cases (such as some machine learning approaches), “the data is the model”: large data sets may be fed into algorithms which make predictions of the future, without any a priori definition of inputs/outputs or directionality of logic.
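One hedged sketch of the “data is the model” idea (the data and function names here are illustrative assumptions, not from the text) is a nearest-neighbour predictor: nothing is fitted and no input/output logic is defined in advance, since the stored data set itself produces each prediction.

```python
# Hypothetical (input, observed outcome) pairs standing in for a
# historical data set. In this approach there are no fitted
# coefficients: the data itself is the model.
history = [(1.0, 10.0), (2.0, 20.0), (3.0, 30.0), (4.0, 40.0)]

def predict(x):
    # Find the historical observation whose input is closest to x
    # and return its outcome directly.
    nearest = min(history, key=lambda pair: abs(pair[0] - x))
    return nearest[1]

print(predict(2.2))  # the closest historical input is 2.0
```

Adding more data changes the predictions without any re-specification of the model's structure, in contrast to a traditional model whose logic is defined up front and merely calibrated with data.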
Therefore, we believe that a professional financial modelling qualification needs to include data analytics if it is to be credible and sustainable, and to allow professional analysts to be versatile in their area of application and in their approach to problem-solving and decision support.