BiomedCLIP

Path: /datasets/ai/biomed-clip
URL: https://huggingface.co/microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224
Downloaded: 10-16-2024
Cite: Zhang, Sheng, et al. “BiomedCLIP: a multimodal biomedical foundation model pretrained from fifteen million scientific image-text pairs.” arXiv preprint arXiv:2303.00915 (2023).
Variants:
  • BiomedCLIP-PubMedBERT_256-vit_base_patch16_224
  • BiomedNLP-BiomedBERT-base-uncased-abstract
Bibtex:
@misc{zhang2025biomedclipmultimodalbiomedicalfoundation,
  title={BiomedCLIP: a multimodal biomedical foundation model pretrained from fifteen million scientific image-text pairs},
  author={Sheng Zhang and Yanbo Xu and Naoto Usuyama and Hanwen Xu and Jaspreet Bagga and Robert Tinn and Sam Preston and Rajesh Rao and Mu Wei and Naveen Valluri and Cliff Wong and Andrea Tupini and Yu Wang and Matt Mazzola and Swadheen Shukla and Lars Liden and Jianfeng Gao and Angela Crabtree and Brian Piening and Carlo Bifulco and Matthew P. Lungren and Tristan Naumann and Sheng Wang and Hoifung Poon},
  year={2025},
  eprint={2303.00915},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2303.00915},
}
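Usage: BiomedCLIP is distributed in open_clip checkpoint format. A minimal loading sketch, assuming the `open_clip_torch` package is installed; the `hf-hub:` identifier comes from the URL above, and `load_biomedclip` is an illustrative helper, not part of the release:

```python
# Identifiers taken from this card (hf-hub ID from the URL, local mirror from Path).
MODEL_ID = "hf-hub:microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224"
LOCAL_PATH = "/datasets/ai/biomed-clip"


def load_biomedclip(source: str = MODEL_ID):
    """Load the BiomedCLIP model, its image preprocessor, and its tokenizer.

    Illustrative helper; deferred import so the constants above are usable
    even when open_clip_torch is not installed.
    """
    from open_clip import create_model_from_pretrained, get_tokenizer

    model, preprocess = create_model_from_pretrained(source)
    tokenizer = get_tokenizer(source)
    return model, preprocess, tokenizer
```

Calling `load_biomedclip()` downloads from the Hugging Face Hub; pointing `source` at the local mirror avoids the network round-trip when the snapshot layout matches what open_clip expects.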