Fast prediction algorithms

The human race is producing ever larger amounts of data. Computers can learn from data, but dealing with huge volumes of it still poses a challenge. This project developed powerful, ultrafast algorithms for learning from Big Data.

  • Portrait / project description (completed research project)

    “Gaussian processes” (GPs) refer to a family of powerful algorithms used in machine learning. These algorithms have many virtues: they can learn from any data, no matter how complex; they have sound mathematical properties that make their predictions reliable; and they are clear and readily interpretable. But this power comes at a cost: the processing time of exact GPs grows cubically with the number of data points, which is incompatible with Big Data. In some cases, however, it is possible to approximate these algorithms so that they can process data of any size. This project aimed to extend this capability to all cases by developing algorithms for Big Data that have the potential to transform applications.
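
    To make the cost issue concrete, the following is a minimal sketch of exact GP regression in Python (the kernel, noise level and toy data are illustrative assumptions): the n × n kernel matrix over the training points must be factorised, so the computation scales cubically with n.

    ```python
    # Minimal sketch of exact Gaussian-process regression, illustrating why
    # the cost grows cubically with the number of training points n:
    # the n x n kernel matrix must be factorised (here via Cholesky).
    import numpy as np

    def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
        """Squared-exponential (RBF) kernel between two sets of 1-D inputs."""
        sqdist = (x1[:, None] - x2[None, :]) ** 2
        return variance * np.exp(-0.5 * sqdist / lengthscale**2)

    def gp_predict(x_train, y_train, x_test, noise=0.1):
        """Exact GP posterior mean and variance; O(n^3) in len(x_train)."""
        K = rbf_kernel(x_train, x_train) + noise**2 * np.eye(len(x_train))
        L = np.linalg.cholesky(K)                    # O(n^3) factorisation
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
        K_s = rbf_kernel(x_train, x_test)
        mean = K_s.T @ alpha
        v = np.linalg.solve(L, K_s)
        var = rbf_kernel(x_test, x_test).diagonal() - np.sum(v**2, axis=0)
        return mean, var

    # Toy data: fine for n in the thousands, infeasible for Big Data.
    rng = np.random.default_rng(0)
    x = rng.uniform(-3, 3, size=200)
    y = np.sin(x) + 0.1 * rng.standard_normal(200)
    mu, var = gp_predict(x, y, np.linspace(-3, 3, 50))
    ```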

  • Background

    Machine learning is a field of research that develops increasingly smart computer procedures (algorithms) for analysing Big Data. This combination of Big Data and intelligent algorithms offers an unprecedented opportunity to learn more about the world and thus to accelerate progress. But the sheer amount of data available also presents challenges: large amounts of complex data must be analysed in ways that are reliable and relevant, and the data must be processed efficiently.

  • Aim

    The usefulness of an algorithm for analysing data is limited by how quickly it can process that data. Much research in machine learning is therefore geared towards combining power and speed, but this combination has yet to be achieved with Big Data. This project pursued a new approach to creating algorithms that deliver both maximum power and maximum speed, enabling the promise of Big Data to be fulfilled.

  • Relevance/application

    The project involved two real-world applications of the new algorithms. The first was a collaboration with MeteoSwiss to improve the prediction of rainfall intensity from huge amounts of radar data for weather alerting.

    The second, a collaboration with Armasuisse, aimed to predict electrosmog from massive amounts of sensor data. Many more problems can be addressed using the same algorithms, for example in medicine, business and science.

  • Results

    The project advanced the state of the art in two families of methods for GP regression with Big Data: sparse inducing-point methods and local GP methods.

    Within the first family, sparse inducing-point methods, the project achieved significant advances in training such approximations in the mini-batch setting with stochastic gradient descent. In particular, a recursive method for training sparse GPs was developed that, by exploiting Kalman filter-like updates, reduced the number of parameters to be estimated and increased overall performance. Following a similar line of research, another method, based on information filter updates and independence assumptions, provided computational speed-ups of up to four times compared to the state of the art (see the sketch below).
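
    As an illustration of the recursive idea, here is a hedged Python sketch that maintains a Gaussian posterior over the outputs at m inducing points and updates it from streaming mini-batches with information-filter (natural-parameter) updates. This is a generic subset-of-regressors-style recursion, not the project's exact method; the inducing inputs, kernel and noise level are illustrative assumptions.

    ```python
    # Hedged sketch: recursive sparse GP with information-filter updates.
    # A posterior over m inducing outputs u ~ N(0, Kmm) is updated one
    # mini-batch at a time; each batch is modelled as y_b = A_b u + noise
    # with A_b = K_bm Kmm^{-1}. Cost is O(m^2) per data point, never O(n^3).
    import numpy as np

    def rbf(x1, x2, ls=1.0):
        return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / ls**2)

    class RecursiveSparseGP:
        def __init__(self, z, noise=0.1):
            self.z = z                                 # m inducing inputs (fixed)
            self.Kmm = rbf(z, z) + 1e-6 * np.eye(len(z))
            self.Kmm_inv = np.linalg.inv(self.Kmm)
            self.Lam = self.Kmm_inv.copy()             # prior precision over u
            self.eta = np.zeros(len(z))                # prior information vector
            self.noise = noise

        def update(self, x_batch, y_batch):
            """One information-filter step per mini-batch."""
            A = rbf(x_batch, self.z) @ self.Kmm_inv
            self.Lam += A.T @ A / self.noise**2
            self.eta += A.T @ y_batch / self.noise**2

        def predict_mean(self, x_test):
            u_mean = np.linalg.solve(self.Lam, self.eta)  # posterior mean of u
            return rbf(x_test, self.z) @ (self.Kmm_inv @ u_mean)

    rng = np.random.default_rng(1)
    model = RecursiveSparseGP(z=np.linspace(-3, 3, 15))
    for _ in range(50):                                # stream mini-batches
        xb = rng.uniform(-3, 3, 32)
        model.update(xb, np.sin(xb) + 0.1 * rng.standard_normal(32))
    mu = model.predict_mean(np.linspace(-3, 3, 50))
    ```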

    The project also developed a correlated method for local GP approximations. This method allows precise control over the level of sparsity and locality of the approximation, making it easier to tune to specific applications. Moreover, by properly accounting for correlations in a product-of-experts setting, the proposed method achieves better performance than state-of-the-art approximations (see the sketch below).
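
    For intuition, the sketch below shows a plain product-of-experts combination of local GPs: each expert is an exact GP on a small chunk of the data, and test predictions are fused by precision weighting. Modelling the correlations between experts, which the project's method adds, is deliberately omitted; the data, kernel and chunking are illustrative assumptions.

    ```python
    # Hedged sketch: plain product-of-experts (PoE) fusion of local GPs.
    # Each expert handles only a small chunk, so no large matrix is ever
    # factorised; the correlated variant developed in the project is omitted.
    import numpy as np

    def rbf(x1, x2, ls=1.0):
        return np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / ls**2)

    def expert_predict(x_tr, y_tr, x_te, noise=0.1):
        """Exact GP mean and variance on one small local chunk."""
        K = rbf(x_tr, x_tr) + noise**2 * np.eye(len(x_tr))
        K_s = rbf(x_tr, x_te)
        mean = K_s.T @ np.linalg.solve(K, y_tr)
        var = 1.0 - np.sum(K_s * np.linalg.solve(K, K_s), axis=0) + noise**2
        return mean, var

    def poe_predict(chunks, x_te):
        """Fuse experts: posterior precision is the sum of expert precisions."""
        prec = np.zeros(len(x_te))
        weighted_mean = np.zeros(len(x_te))
        for x_tr, y_tr in chunks:
            m, v = expert_predict(x_tr, y_tr, x_te)
            prec += 1.0 / v
            weighted_mean += m / v
        return weighted_mean / prec, 1.0 / prec

    rng = np.random.default_rng(2)
    x = np.sort(rng.uniform(-3, 3, 600))
    y = np.sin(x) + 0.1 * rng.standard_normal(600)
    chunks = [(x[i:i + 100], y[i:i + 100]) for i in range(0, 600, 100)]
    mu, var = poe_predict(chunks, np.linspace(-3, 3, 50))
    ```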

    The project produced several publications in top journals and conferences in machine learning, statistics and control theory.

  • Original title

    State space Gaussian processes for big data analytics