AI is at the heart of modern autonomy. We increasingly delegate critical decisions, including ones that affect our health and lives, to AI systems; in such cases the margin for error is vanishingly small. Yet a model is only as good as its design, training and testing.
The widespread concern about deepfakes is their potential to spread misinformation and discredit individuals. We have long relied on video, photo and audio as evidence in our civil and criminal justice systems, but what happens when these can be manipulated easily and convincingly? Will we, as a society, have to stop relying on our sense of sight and observation to establish objective facts? While there are ways to detect deepfakes, recognising that a given piece of media is a deepfake may take us some time.