vllm-project / vllm (Public · 45.8k stars · 7.1k forks)

main / find_cuda_init.py
35 lines (25 loc) · 875 Bytes
# SPDX-License-Identifier: Apache-2.0
import importlib
import traceback
from typing import Callable
from unittest.mock import patch


def find_cuda_init(fn: Callable[[], object]) -> None:
    """
    Helper function to debug CUDA re-initialization errors.

    If `fn` initializes CUDA, prints the stack trace of how this happens.
    """
    from torch.cuda import _lazy_init
    stack = None

    def wrapper():
        nonlocal stack
        stack = traceback.extract_stack()
        return _lazy_init()

    with patch("torch.cuda._lazy_init", wrapper):
        fn()

    if stack is not None:
        print("==== CUDA Initialized ====")
        print("".join(traceback.format_list(stack)).strip())
        print("==========================")


if __name__ == "__main__":
    find_cuda_init(
        lambda: importlib.import_module("vllm.model_executor.models.llava"))
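The interception pattern above can be exercised without torch or a GPU: patch any lazy initializer with a wrapper that records the call stack before delegating to the original. A minimal, self-contained sketch of the same technique (the `fake_cuda` object and `find_init` helper are hypothetical stand-ins, not part of vLLM):

```python
import traceback
from types import SimpleNamespace
from unittest.mock import patch

# Hypothetical stand-in for torch.cuda: an object carrying a lazy
# initializer that some code path may or may not call.
fake_cuda = SimpleNamespace(initialized=False)
fake_cuda._lazy_init = lambda: setattr(fake_cuda, "initialized", True)


def find_init(fn):
    """Run fn; if it triggers fake_cuda._lazy_init, return the stack trace."""
    # Bind the real initializer *before* patching, just as the script
    # above does with `from torch.cuda import _lazy_init`, so the wrapper
    # does not recurse into itself.
    original = fake_cuda._lazy_init
    stack = None

    def wrapper():
        nonlocal stack
        stack = traceback.extract_stack()  # record who triggered init
        return original()                  # still run the real initializer

    with patch.object(fake_cuda, "_lazy_init", wrapper):
        fn()
    return stack


# A call path that initializes, and one that does not.
stack = find_init(lambda: fake_cuda._lazy_init())
print(stack is not None)                 # True: initialization was caught
print(fake_cuda.initialized)             # True: the real initializer ran
print(find_init(lambda: None) is None)   # True: no init, no stack recorded
```

The returned stack is a list of `traceback.FrameSummary` objects, so it can be rendered with `traceback.format_list` exactly as the script does.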