⚠ Hosted on HuggingFace Free Tier — CPU Only, No GPU
This Space runs on CPU only. HuggingFace VLM models (Qwen2-VL, InternVL2, etc.) require a GPU and will not load here, and even SmolVLM runs extremely slowly on CPU. Ollama is also unavailable (there is no local server on hosted infrastructure).

For figure analysis, use API backends only: select API (Cloud) as the VLM provider and enter an OpenAI, Anthropic, or Google API key. The full NER pipeline works normally on CPU.
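As an illustration of what an API-backend figure-analysis call involves (a sketch, not the Space's actual code — the model name, prompt, and message layout here are assumptions), the figure image is typically base64-encoded into an OpenAI-style vision chat payload:

```python
import base64

def build_figure_request(image_bytes: bytes, prompt: str,
                         model: str = "gpt-4o-mini") -> dict:
    """Build an OpenAI-style vision chat payload for one figure.

    Illustrative only: the model name and prompt wording are
    assumptions, not this Space's implementation.
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    }
```

The payload would then be sent through the provider's SDK using the API key entered in the UI; Anthropic and Google use the same idea with their own message schemas.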

Want full GPU model support and Ollama? Clone the repo and run it locally →

🔬 BioMed Paper Information Extractor

End-to-end pipeline for automated biomedical literature analysis — figure digitization via VLM and named entity recognition via configurable NER models.

Single Paper Analysis

Enter a PMC ID, PMC URL, or any paper URL. Leave blank for a random open-access paper.

Example papers
Text Method

PMC only — HTML: scrape | JATS: structured XML
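One way the fetch step can resolve the PMC-only input (a sketch under assumptions — the regex and helper names are mine, though NCBI's efetch endpoint with `db=pmc` genuinely returns the article's JATS XML): normalize whatever the user typed to a PMCID, then build the E-utilities URL.

```python
import re
from typing import Optional

EFETCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"

def normalize_pmcid(text: str) -> Optional[str]:
    """Extract a PMCID from a bare ID ('PMC7096066', '7096066') or a PMC URL."""
    m = re.search(r"PMC(\d+)", text, re.IGNORECASE)
    if m:
        return f"PMC{m.group(1)}"
    if text.strip().isdigit():
        return f"PMC{text.strip()}"
    return None  # not recognizably a PMC identifier

def jats_url(pmcid: str) -> str:
    """efetch URL that returns the article's structured JATS XML."""
    return f"{EFETCH}?db=pmc&id={pmcid}"
```

The HTML "scrape" path would instead download and parse the rendered article page, which is why JATS is the more reliable option when available.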

Load a VLM and NER model below before analyzing
โ— VLM: Not loaded
โ— NER: Not loaded

VLM Provider
Vision Model

Inject chart-extracted table into VLM prompt
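The "inject chart-extracted table" toggle presumably appends a digitized data table to the vision prompt so the model can ground its description in extracted numbers rather than pixel estimates. A minimal sketch of that injection (the markdown-table formatting and function name are assumptions):

```python
def inject_table(prompt: str, rows: list) -> str:
    """Append a chart-extracted table (header row first) to a VLM prompt
    as a markdown table. Illustrative of the UI toggle, not its actual code."""
    if not rows:
        return prompt  # nothing extracted; prompt unchanged
    header, *body = rows
    lines = ["| " + " | ".join(header) + " |",
             "|" + "---|" * len(header)]
    lines += ["| " + " | ".join(r) + " |" for r in body]
    return prompt + "\n\nChart-extracted table:\n" + "\n".join(lines)
```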

VLM: Not loaded

NER Model

NER: Not loaded


○ Fetch
○ Image Analysis
○ Entity Extraction
○ Complete