As more organizations adopt AI models for decisions that leave a lasting impact on society, it is essential that we understand the large-scale consequences of those decisions, are able to scrutinize them, and, most importantly, ensure that they neither create new biases nor propagate existing ones. Being able not only to map but also to quantify the strength of causal relationships is fundamental to understanding and certifying that decisions are fair, through assessment of both direct and potentially indirect forms of discrimination. This presentation discusses the importance of establishing comprehensive frameworks to ensure the responsible, fair, and accountable use of AI technologies.
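To make the direct/indirect distinction concrete, here is a minimal sketch under strong assumptions (synthetic data, a linear causal model, a single mediator; all variable names are hypothetical and not from the talk): the effect of a protected attribute `A` on a decision score `Y` is decomposed into a direct path and an indirect path through a mediator `M`.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical synthetic data: A = protected attribute,
# M = mediating variable (e.g., assigned role), Y = decision score.
A = rng.integers(0, 2, n).astype(float)
M = 1.5 * A + rng.normal(0.0, 1.0, n)            # A influences the mediator
Y = 0.5 * A + 2.0 * M + rng.normal(0.0, 1.0, n)  # A affects Y directly and via M

def ols(X, y):
    # Least-squares fit with an intercept column; returns coefficients.
    X = np.column_stack([np.ones(len(X)), X])
    return np.linalg.lstsq(X, y, rcond=None)[0]

total = ols(A.reshape(-1, 1), Y)[1]            # total effect of A on Y
direct = ols(np.column_stack([A, M]), Y)[1]    # direct effect, holding M fixed
indirect = total - direct                      # effect transmitted through M

print(f"total={total:.2f} direct={direct:.2f} indirect={indirect:.2f}")
```

In this linear setting the recovered direct effect is close to 0.5 and the indirect effect close to 3.0, showing that a small direct disparity can coexist with a much larger one routed through a seemingly neutral mediator; real audits require a justified causal graph and more robust estimators than plain regression.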