Explainability is one of the concepts that dominate debates about the regulation of machine learning algorithms. In my presentation I will argue that, in their current form, post-hoc explanation algorithms are unsuitable for achieving the law's objectives, for rather fundamental reasons. In particular, most situations in which explanations are requested are adversarial: the explanation provider and the explanation receiver have opposing interests and incentives, so the provider might manipulate the explanation to her own ends. I then discuss a theoretical analysis of Shapley-value-based explanation algorithms that opens the door to more formal guarantees for post-hoc explanations.
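For context, a brief sketch of the standard definition these methods build on (background from cooperative game theory, not part of the argument above): treating each feature as a player and a value function $v$ as the game, where $v(S)$ is, for instance, the model's expected output when only the features in coalition $S$ are known, the Shapley attribution of feature $i$ is

\[
\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N|-|S|-1\bigr)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr),
\]

where $N$ is the set of all features. The concrete choice of $v$ is left open by this definition and differs across Shapley-based explanation algorithms.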