Driving collective data governance through smart engagement platforms

The goal of this project is to bring data governance closer to the actual workplaces where the data is used by tapping into the many business applications that make up systems of engagement, ranging from physical boardrooms and design workshops to virtual team collaboration tools such as Confluence and Slack.

We will develop a platform incorporating Artificial Intelligence (AI) to learn the data usage context from the bottom up and provide reliable, accurate recommendations for the next best action to take, directly in the core business applications. Supporting the low-level decisions business users face when interacting with data will allow them to spend more time focusing on truly impactful problems. The rich data engagement context generated by the proposed approach will help users promote the data sets resulting from their engagements and ultimately build trust around them.

We also want to go beyond the level of reports and incorporate data mining models as well, so that new insights into the data can be gained. For a stakeholder of a model obtained through data mining, it is important that the model is interpretable. While the term interpretability has no single accepted definition and has been the subject of debate, in the context of this project we take interpretability to mean transparency: the model refers to terms that are familiar to the user, and the user can understand the reasoning used in the model. A typical example of an interpretable model is a decision tree. The reasoning of such a model is simple, and if the terms used in the tree, also referred to as attributes in the context of machine learning, are familiar to the user, the model can be considered transparent. Moreover, we will allow the user to interact with the model: the occurrence or absence of certain terms can, for example, be questioned by the user through a dialogue. Bearing in mind the EU regulation (Article 22 of the GDPR), which requires that automated decisions with a significant impact on people's lives be accountable, the interpretability of machine learning based models will be key to their adoption.
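To make the notion of transparency concrete, the sketch below shows a toy decision tree whose nodes use attribute names a business user would recognize, and a classifier that returns the chain of rules it applied alongside its prediction. The tree, the attribute names (`has_owner`, `quality_score_high`), and the trust labels are all illustrative assumptions for this example, not part of the project's actual platform.

```python
# Minimal sketch of a transparent model: a hand-written decision tree
# whose internal nodes test attributes familiar to the business user.
# All names and labels here are hypothetical, chosen for illustration.

TREE = {
    "attribute": "has_owner",  # does the data set have a named owner?
    "branches": {
        False: {"label": "low trust"},
        True: {
            "attribute": "quality_score_high",  # did quality checks pass?
            "branches": {
                True: {"label": "high trust"},
                False: {"label": "medium trust"},
            },
        },
    },
}

def classify(record, node=TREE, trace=None):
    """Return (label, trace), where trace lists each rule applied,
    so the user can inspect -- and question -- every term the model used."""
    if trace is None:
        trace = []
    if "label" in node:          # reached a leaf: emit the prediction
        return node["label"], trace
    attr = node["attribute"]
    value = record[attr]
    trace.append(f"{attr} = {value}")  # record the reasoning step
    return classify(record, node["branches"][value], trace)

label, trace = classify({"has_owner": True, "quality_score_high": False})
print(label)  # medium trust
print(trace)  # ['has_owner = True', 'quality_score_high = False']
```

Because the returned trace names each attribute tested, a dialogue interface could let the user challenge any individual step (for instance, why `has_owner` matters), which is exactly the kind of interaction described above.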

Consortium:

  • Collibra nv/sa
  • Artificial Intelligence Lab, Vrije Universiteit Brussel
  • Kenniscentrum Mobile & Wearable, Erasmus University College

Project Info

Start: 01/05/2018

End: 01/05/2021

Funding: INNOVIRIS TeamUp

Involved Members: