Since COVID-19, distrust in traditional healthcare systems and medical research has risen sharply, fueled by the spread of medical myths. Previous studies have shown the potential of LLMs to spread fake news, but ours is the first to focus specifically on medical myths and on the persuasiveness of LLM-generated content. To evaluate how persuasive LLMs are and their potential to spread medical myths, we conduct a large-scale experiment with human subjects: we present participants with LLM-generated and human-written medical myths and assess their reactions. To reduce possible bias, in both conditions we neither tell participants that they are reading myths nor specify the source (LLM or human) of the texts. We find that LLM-generated myths are rated 32% more plausible than human-written ones. Moreover, participants are more likely to share the generated medical myths, an effect that persists up to three months after the experiment.