EvolvingLMMs-Lab / lmms-eval (Public)
2.4k stars · 270 forks · 217 open issues · 6 open pull requests
Pinned issues
[Common Issue] Common issues you might encounter when using lmms-eval · #186 · opened by kcz358 on Aug 9, 2024
Discussion: Update GPT eval models · #294 · opened by zhijian-liu on Oct 4, 2024
New Task Guide · #299 · opened by Luodian on Oct 6, 2024
Open issues (is:issue state:open)
Expected all tensors to be on the same device · #654 · opened by Cuzyoung on Apr 26, 2025
Coco caption evaluation error · #653 · opened by lanyuki on Apr 26, 2025
Bug: The images blending within the same batch leads to the performance degradation of Qwen2.5_VL when the batch size increases · #648 · opened by ashun989 on Apr 22, 2025
How can I become the contributor? · #646 · opened by Hoantrbl on Apr 22, 2025
Big results gap for qwenvl2.5-7b · #645 · opened by Blissy-32 on Apr 21, 2025
API calls are Sync · #636 · opened by CLARKBENHAM on Apr 17, 2025
Is batch inference enabled now? · #635 · opened by Cola-any on Apr 15, 2025
How to eval llava by pruned · #634 · opened by Darryl-lilin on Apr 15, 2025
Evaluating llava Qwen · #627 · opened by mzamini92 on Apr 11, 2025
openai_compatible model not complatible with chat/completions endpoint due to max_output_tokens · #625 · opened by dtrawins on Apr 9, 2025
about llava-interleave model inconsistent performance · #623 · opened by lllxxzzzzz on Apr 8, 2025
Can't debug with only one process on one GPU. · #619 · opened by Creator404 on Apr 7, 2025