(GPU-accelerated) Multi-arch (linux/amd64, linux/arm64/v8) Julia Docker images. Please submit pull requests to the GitLab repository; this is a mirror.
Updated May 29, 2024 - Dockerfile
CUDA® is a parallel computing platform and programming model developed by NVIDIA for general-purpose computing on graphics processing units (GPUs). With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs.
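As a minimal sketch of that programming model (not drawn from any of the projects listed here), the example below launches a grid of GPU threads in which each thread adds one pair of array elements; `vecAdd` and the grid sizing are illustrative choices, not part of any listed library:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each thread computes one element of the output array.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified (managed) memory is accessible from both host and device.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Compiling with `nvcc vecadd.cu -o vecadd` and running on a CUDA-capable GPU should fill `c` with 3.0 in every slot.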
✨ Zero-code distributed tracing and profiling, observability via eBPF 🚀
FlashInfer: Kernel Library for LLM Serving
AI tool for finding the most aesthetic frames in a video. 🎞️➜🖼️
DaCe - Data Centric Parallel Programming
A high-throughput and memory-efficient inference and serving engine for LLMs
CUDA C++ Core Libraries
DBCSR: Distributed Block Compressed Sparse Row matrix library
Fpassword merges Hashcat's hash-cracking precision with Hydra's parallelized network login attacks, giving penetration testers a single tool for swift hash cracking and simultaneous login attempts across diverse protocols.
A fast, scalable, high performance Gradient Boosting on Decision Trees library, used for ranking, classification, regression and other machine learning tasks for Python, R, Java, C++. Supports computation on CPU and GPU.
Domain-specific library for electronic structure calculations
CUDA Templates for Linear Algebra Subroutines
Drop-in, local AI alternative to the OpenAI stack. Multi-engine (llama.cpp, TensorRT-LLM). Powers 👋 Jan
Homework assignments for C4AI Beginners in Research-Driven Studies
OneDiff: An out-of-the-box acceleration library for diffusion models.
Created by NVIDIA · Released June 23, 2007