Machine learning models: robustness and generalisability

The project focused on developing machine learning theory and methodology for learning systems. It produced new methods for machine teaching, for enhancing robustness in learning systems, and for certifying neural network decisions.

  • Portrait / project description (completed research project)

    This project developed new joint optimisation and learning-theoretic foundations of neural networks, in order both to break the computational barriers to scalability and to certify the quality of the learning results. The basic plan of attack was supported by the research team's preliminary theoretical results on trading off data size and computation with guarantees, as well as its recent computational methods that already solve large-scale convex optimisation problems as relaxations of non-convex problems.

  • Background

    A joint study of optimisation and function representations is still in its infancy, even for basic minimisation formulations in machine learning. By taking a unified approach in the broader setting of non-convex optimisation, the project sought to develop new methods that apply to data generation, compression, domain adaptation, and control tasks, and beyond.

  • Aim

    Three interrelated research thrusts were investigated:

    (1) An accurate and scalable prediction framework with neural networks and Langevin dynamics

    (2) A flexible and robust decision framework via reinforcement learning

    (3) Generalisation and certification of neural networks

  • Relevance/application

    The theory and methodology of this project are applicable well beyond data generation, compression, domain adaptation, and control tasks, promising substantial learning capabilities in diverse domains.

  • Results

    The results of this project are:

    1. Joint optimisation and learning-theoretic foundations for neural networks with verifiable results
    2. Training of learning systems via iterative teaching
    3. New robust reinforcement learning techniques

    An optimisation framework was developed to estimate the Lipschitz constant of neural networks, a quantity that is key to both their generalisation and their verification. The approach finds a polynomial certificate via Krivine certificates and Lasserre hierarchies. The team also studied regularisation of neural networks with the 1-path-norm and developed computational tools for obtaining numerical solutions. In addition, new tools that quantify learning generalisation were derived.
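
    The project's own estimator relies on polynomial optimisation, which is beyond a short snippet; as a minimal sketch of the quantities involved, the Python below computes two standard upper bounds on a one-hidden-layer ReLU network's Lipschitz constant: the 1-path-norm mentioned above (a bound with respect to the l_inf norm) and the classical product of spectral norms (with respect to the l_2 norm). The function names and the random network are illustrative, not taken from the project's code.

    ```python
    import numpy as np

    def one_path_norm(W1, W2):
        """1-path-norm of the network f(x) = W2 @ relu(W1 @ x).

        Sums |W2[k, j]| * |W1[j, i]| over every input->hidden->output path;
        this upper-bounds the Lipschitz constant of f w.r.t. the l_inf norm,
        so it can serve both as a regulariser and as a cheap certificate.
        """
        return float((np.abs(W2) @ np.abs(W1)).sum())

    def spectral_bound(W1, W2):
        """Classical layer-wise bound on the l_2 Lipschitz constant:
        the product of the layers' spectral norms."""
        return float(np.linalg.norm(W2, 2) * np.linalg.norm(W1, 2))

    rng = np.random.default_rng(0)
    W1 = rng.normal(size=(64, 16)) / np.sqrt(16)   # hidden x input
    W2 = rng.normal(size=(1, 64)) / np.sqrt(64)    # output x hidden

    print("1-path-norm (l_inf) bound:", one_path_norm(W1, W2))
    print("spectral (l_2) bound:", spectral_bound(W1, W2))
    ```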

    Machine teaching problems were considered in which a teacher provides examples to accelerate learning, focusing on settings where the teacher is robust to the diversity of students' learning rates. The project also extended this problem to the inverse reinforcement learning setting and observed that learning progress can be sped up drastically.
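
    As a minimal sketch of the iterative machine teaching setting described above, the toy example below assumes a gradient-descent learner on a linear model and an omniscient teacher that knows the target hypothesis and, at each round, picks the pool example whose single gradient step brings the learner closest to it. The robustness to diverse learning rates studied in the project is not modelled here; all names and constants are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    d, n_pool = 5, 200
    w_star = rng.normal(size=d)        # target hypothesis, known to the teacher
    X = rng.normal(size=(n_pool, d))   # pool of candidate teaching examples
    y = X @ w_star                     # noiseless labels for this sketch

    def sgd_step(w, x, y_i, lr):
        """One learner update: gradient step on the squared loss 0.5*(w.x - y)^2."""
        return w - lr * (w @ x - y_i) * x

    w, lr = np.zeros(d), 0.05
    for t in range(50):
        # Omniscient teacher: choose the example whose single gradient step
        # moves the learner closest to the target hypothesis w_star.
        dists = [np.linalg.norm(sgd_step(w, X[i], y[i], lr) - w_star)
                 for i in range(n_pool)]
        i_best = int(np.argmin(dists))
        w = sgd_step(w, X[i_best], y[i_best], lr)

    print("distance to target after teaching:", np.linalg.norm(w - w_star))
    ```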

    Finally, a new mixed Nash equilibrium framework for robust reinforcement learning was developed. The key idea is to consider a lifted version of the robust reinforcement learning problem and then use the Langevin dynamics developed in the project to sample from the solution distributions. This approach improves not only robustness but also generalisation performance.
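
    The sketch below illustrates the lifting idea on a scalar zero-sum game, under the illustrative assumption that the adversary's strategy is represented by a set of particles updated with Langevin dynamics, so that the protagonist plays against a mixed strategy rather than a single worst-case opponent. The project applies this idea at the scale of reinforcement learning policies, which this toy does not attempt.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Toy zero-sum payoff f(x, y) = x*y + 0.1*x^2 - 0.1*y^2:
    # the protagonist minimises over x, the adversary maximises over y.
    def grad_x(x, y):
        return y + 0.2 * x      # d/dx of the payoff

    def grad_y(x, y):
        return x - 0.2 * y      # d/dy of the payoff

    x = 1.0
    ys = rng.normal(size=32)    # particles approximating the adversary's mixed strategy
    eta, tau = 0.05, 0.1        # step size and Langevin temperature

    for t in range(500):
        # Langevin ascent on each particle: for the current x, the particles
        # target the Gibbs distribution proportional to exp(f(x, y) / tau).
        noise = rng.normal(size=ys.shape)
        ys = ys + eta * grad_y(x, ys) + np.sqrt(2 * eta * tau) * noise
        # The protagonist descends against the particle average,
        # i.e. against the mixed strategy rather than a single opponent.
        x = x - eta * np.mean(grad_x(x, ys))

    print("protagonist x:", x, "| mean adversary y:", ys.mean())
    ```

    Averaging over the sampled particles is what distinguishes this mixed-strategy update from plain gradient descent-ascent, which can cycle or diverge on such games; the temperature tau controls how spread out the adversary's distribution is.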

  • Original title

    Theory and methods for accurate and scalable learning machines