Cognitive economy intelligence platform for the resilience of economic ecosystems


Natural disasters, pandemics, financial and political crises, supply shortages, and demand shocks propagate through hidden and intermediate linkages across the global economic system. This is a consequence of the ongoing international division of business and labour that lies at the heart of globalisation. The aim of the project is to provide a platform that maps these complex supply chains, reveals their linkages and compounded risks, and provides companies with predictions of their exposure at various granularities.

A supply chain can naturally be cast as a knowledge graph that is enriched with heterogeneous data sources, which help to describe various kinds of linkages, disambiguate relations, and track them over time. Using machine learning models, the inherently incomplete knowledge graph can be completed to expose salient dependencies and commonalities. The same models can further be used to identify clusters, trends, and contagion. To this end, both temporal and spatial methods, including flavours of (knowledge-)graph neural networks (GNNs), will be employed.
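
To make the graph-completion step concrete, the following minimal sketch trains TransE-style embeddings for link prediction on a toy supply-chain graph. This is an illustration only, not the project's actual pipeline: the entity and relation names are invented, and the production models (e.g. KG-GNNs) would be considerably more elaborate.

```python
# Hedged sketch: TransE-style link prediction on a toy supply-chain
# knowledge graph. All entities and relations below are hypothetical.
import torch
import torch.nn as nn

triples = [
    ("supplier_a", "supplies", "firm_b"),
    ("firm_b", "ships_via", "port_c"),
    ("supplier_a", "located_in", "region_d"),
    ("firm_b", "located_in", "region_d"),
]
entities = sorted({h for h, _, _ in triples} | {t for _, _, t in triples})
relations = sorted({r for _, r, _ in triples})
e2i = {e: i for i, e in enumerate(entities)}
r2i = {r: i for i, r in enumerate(relations)}

dim = 16
ent_emb = nn.Embedding(len(entities), dim)
rel_emb = nn.Embedding(len(relations), dim)
opt = torch.optim.Adam(
    list(ent_emb.parameters()) + list(rel_emb.parameters()), lr=0.01
)

def score(h, r, t):
    # TransE: a triple is plausible if head + relation ≈ tail in embedding space.
    return -(ent_emb(h) + rel_emb(r) - ent_emb(t)).norm(p=2, dim=-1)

heads = torch.tensor([e2i[h] for h, _, _ in triples])
rels = torch.tensor([r2i[r] for _, r, _ in triples])
tails = torch.tensor([e2i[t] for _, _, t in triples])

for _ in range(200):
    corrupted = torch.randint(len(entities), tails.shape)  # negative sampling
    # Margin ranking loss: true triples should outscore corrupted ones.
    loss = torch.relu(
        1.0 + score(heads, rels, corrupted) - score(heads, rels, tails)
    ).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Unseen triples can now be scored to propose missing links, e.g. ranking
# candidate suppliers for a given firm.
```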

A highly accurate predictive machinery that continuously adjusts to incoming data hinges on an adequate choice of hyperparameters and neural network architectures. With compute-intensive methods such as (KG-)GNNs, however, it is impractical to repeatedly re-tune the hyperparameter settings from scratch just to maintain performance. We therefore aim to develop a hyperparameter optimization scheme that harnesses the specific topology inherent to graphs and makes efficient use of the invested compute via Bayesian optimization, meta-learning, and multi-fidelity techniques. Understanding the hyperparameter optimization process itself is crucial to drastically reducing the required computation: for instance, if the performance a configuration achieves under a small budget correlates well with its final performance, we can probe configurations cheaply and terminate them early should they underperform.
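
The early-termination idea can be illustrated with plain successive halving, a standard multi-fidelity building block: every surviving configuration is probed with a small budget, and only the top fraction advances to the next, larger budget. The sketch below is a toy illustration; `train_and_eval` is a hypothetical stand-in for training a (KG-)GNN for a given budget, and the budgets and halving rate `eta` would be problem-specific.

```python
# Hedged sketch of multi-fidelity early termination via successive halving.
import random

def train_and_eval(config, budget):
    # Hypothetical objective: with a larger budget, the observed score
    # approaches the configuration's "true" quality (a mock learning curve).
    return config["quality"] + random.gauss(0.0, 1.0 / budget)

def successive_halving(configs, min_budget=1, eta=3, max_budget=27):
    budget, survivors = min_budget, list(configs)
    while budget <= max_budget and len(survivors) > 1:
        # Probe every surviving configuration with the current small budget.
        ranked = sorted(survivors, key=lambda c: train_and_eval(c, budget),
                        reverse=True)
        # Keep only the top 1/eta: the cheap low-budget score acts as a
        # proxy for final performance, so poor configurations stop early.
        survivors = ranked[: max(1, len(ranked) // eta)]
        budget *= eta
    return survivors[0]

configs = [{"id": i, "quality": random.random()} for i in range(27)]
print("selected configuration:", successive_halving(configs)["id"])
```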

Since predictive performance and a model’s “perceptual focus” hinge on the hyperparameter setting, it is crucial to go beyond mere predictive quality and gain insight into the inner workings of both the model and, crucially, the hyperparameter space.

Regarding a model’s hyperparameter-induced “perception”, i.e. its local functional approximation capabilities, it is sensible to examine the stability of the model with respect to its settings. Given the compute already invested in a multitude of hyperparameter configurations and the predictive differences they imply, we can exploit both to obtain an uncertainty estimate and to boost predictive performance via ensembling, as sketched below. Insight into how sensitively the model reacts to changes in its hyperparameters helps both in developing the model and in exploiting the relative importance of individual hyperparameters for performance, achieving further speed-ups.
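
A minimal sketch of this ensembling idea, assuming the models trained during the hyperparameter search have been kept: predictions from the individual configurations are averaged, and their disagreement serves as an uncertainty estimate. The toy "models" below merely mimic hyperparameter-induced predictive differences.

```python
# Hedged sketch: reuse models from the hyperparameter search as an ensemble.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)

def make_model(scale, shift):
    # Hypothetical stand-in for a (KG-)GNN trained under one configuration.
    return lambda inputs: np.sin(2.0 * np.pi * scale * inputs + shift)

# Slightly different "hyperparameters" yield slightly different predictors.
models = [make_model(1.0 + rng.normal(0.0, 0.05), rng.normal(0.0, 0.1))
          for _ in range(10)]

preds = np.stack([m(x) for m in models])  # shape: (n_models, n_points)
mean = preds.mean(axis=0)                 # ensemble prediction
std = preds.std(axis=0)                   # disagreement as uncertainty estimate

print("max predictive std across inputs:", std.max())
```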

Project partners: InfAI, DATEV eG, eccenca GmbH, Implisense GmbH, Deutsches Institut für Wirtschaftsforschung, Leibniz Informationszentrum Technik und Naturwissenschaften, Hamburger Informatik Technologie-Center e.V., Selbstregulierung Informationswirtschaft e.V., Infineon Technologies AG, Siemens AG, Forschungszentrum L3S

For more details, please visit https://coypu.org/

  • Technical Report
    René Sass, Eddie Bergman, André Biedenkapp, Frank Hutter, Marius Lindauer:
    DeepCAVE: An Interactive Analysis Tool for Automated Machine Learning.
    Workshop on Adaptive Experimental Design and Active Learning in the Real World (ReALML@ICML’22), p. 6, June 2022