This paper investigates what artificial intelligence can contribute to moral progress. To this end, a working concept of moral progress is proposed, resting chiefly on the idea that the generally accepted set of moral beliefs is not the sole measure for assessing the progress of a moral community. In addition, the kind of AI applicable here must be specified. Since morality in many ways depends on complex thought and language, natural-language chatbots are identified as the appropriate candidates through which AI might aid us in being moral. Considering the requirements for chatbots and their current state of development, problems in applying this technology to moral progress can be anticipated: from early instructive examples of unguided moral bots turning malicious, to corruptible bots and privacy issues, the optimism of AI engineers towards this technology deserves skepticism. The paper proposes that the best way to integrate morality into chatbots is to conceptualize them as a digital Socrates that aims to confront us with inconsistencies in our own moral beliefs and thereby helps us work through our moral questions ourselves.