A Large Language Model for Medical Knowledge


  Antoine Bosselut
  Zeming Chen
  Martin Jaggi

Research Partners

International Committee of the Red Cross (ICRC)

Many efforts have already been made to harness and improve the medical knowledge and reasoning capabilities of LLMs. To date, however, the resulting models are either closed-source (e.g., MedPaLM and GPT-4) or limited in scale to around 13 billion parameters, which restricts their accessibility or their ability.

Seeking to improve access and representation, we have developed MEDITRON-7B and MEDITRON-70B, a pair of open-source LLMs with 7 and 70 billion parameters, respectively, adapted to the medical domain and described in our pre-print, MEDITRON-70B: Scaling Medical Pretraining for Large Language Models.

Building on the open-access Llama-2 model released by Meta, and with continual input from clinicians and biologists, MEDITRON was trained on carefully curated, high-quality medical data sources. These included peer-reviewed medical literature from open-access repositories such as PubMed, as well as a unique and diverse set of clinical practice guidelines covering multiple countries, regions, hospitals, and international organizations.
