➀ Researchers found that replacing as little as 0.001% of AI training data with misinformation can compromise the entire model.
➁ The study injected AI-generated medical misinformation into a commonly used LLM training dataset, leading to a significant increase in harmful output.
➂ The researchers emphasized the need for better safeguards and security research in the development of medical LLMs.