Colossal-AI provides a collection of parallel components. Our goal is to let you write distributed deep learning models just as you would write a model on your laptop, with user-friendly tools to kickstart distributed training and inference in a few lines.
- Parallelism strategies
- Heterogeneous memory management
- User-friendly tools
- Inference
- ColossalChat: an open-source solution for cloning ChatGPT with a complete RLHF pipeline. [code] [blog] [demo] [tutorial]
- Acceleration of AIGC (AI-Generated Content) models such as Stable Diffusion v1 and Stable Diffusion v2.
- Acceleration of AlphaFold protein structure prediction: over 3x acceleration.
- 2x faster training, or 50% longer sequence length.
Please visit our documentation and examples for more details.
Requirements:
If you encounter any problems with installation, please raise an issue in this repository.
You can easily install Colossal-AI with the following command. By default, we do not build PyTorch extensions during installation.
pip install colossalai
Note: only Linux is supported for now.
However, if you want to build the PyTorch extensions during installation, you can set `CUDA_EXT=1`:
CUDA_EXT=1 pip install colossalai
Otherwise, CUDA kernels will be built at runtime, when you actually need them.
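The `CUDA_EXT` switch is a plain environment-variable check at install time. As a minimal sketch of that pattern (the helper name is hypothetical, not Colossal-AI's actual `setup.py` code):

```python
import os

def should_build_cuda_ext() -> bool:
    """Illustrative helper: build CUDA extensions ahead of time
    only when the CUDA_EXT environment variable is set to 1."""
    return os.environ.get("CUDA_EXT", "0") == "1"

# With CUDA_EXT unset, extensions are deferred to runtime.
os.environ.pop("CUDA_EXT", None)
print(should_build_cuda_ext())  # False

# CUDA_EXT=1 opts in to ahead-of-time compilation.
os.environ["CUDA_EXT"] = "1"
print(should_build_cuda_ext())  # True
```

A `setup.py` that follows this pattern would gate its list of CUDA extension modules on such a check, so `pip install colossalai` stays fast on machines without a CUDA toolchain.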
We also release a nightly version to PyPI every week, which gives you access to unreleased features and bug fixes from the main branch. It can be installed via
pip install colossalai-nightly
The nightly version tracks the main branch of the repository. Feel free to raise an issue if you encounter any problems. :)
Alternatively, you can install Colossal-AI from source:

git clone https://github.com/hpcaitech/ColossalAI.git
cd ColossalAI
# install colossalai
pip install .
By default, we do not compile CUDA/C++ kernels; Colossal-AI builds them at runtime. If you want to compile them at install time and enable CUDA kernel fusion (required when using a fused optimizer):
CUDA_EXT=1 pip install .
If you are using CUDA 10.2, you can still build Colossal-AI from source, but you need to manually download the cub library and copy it to the corresponding directory:
# clone the repository
git clone https://github.com/hpcaitech/ColossalAI.git
cd ColossalAI
# download the cub library
wget https://github.com/NVIDIA/cub/archive/refs/tags/1.8.0.zip
unzip 1.8.0.zip
cp -r cub-1.8.0/cub/ colossalai/kernel/cuda_native/csrc/kernels/include/
# install
CUDA_EXT=1 pip install .
You can pull the Docker image directly from our DockerHub page. The image is uploaded automatically upon each release.
Alternatively, run the following command to build a Docker image from the provided Dockerfile.
Building Colossal-AI from scratch requires GPU support, so you need to use the NVIDIA Docker Runtime as the default runtime when running `docker build`. More details can be found here. We recommend installing Colossal-AI from our project page directly.
cd ColossalAI
docker build -t colossalai ./docker
Run the following command to start the Docker container in interactive mode.
docker run -ti --gpus all --rm --ipc=host colossalai bash
Join the Colossal-AI community on Forum, Slack, and WeChat(微信) to share your suggestions, feedback, and questions with our engineering team.
Following the successful examples of BLOOM and Stable Diffusion, all developers and partners with computing power, datasets, or models are welcome to join and build the Colossal-AI community, working together toward the era of large AI models!
You may contact us or participate in the following ways:
Thanks so much to all of our amazing contributors!
We leverage GitHub Actions to automate our development, release, and deployment workflows. Please check out this documentation on how the automated workflows operate.
This project is inspired by several related projects (some by our team and some by other organizations). We would like to credit these amazing projects, as listed in the Reference List.
To cite this project, you can use the following BibTeX citation.
@article{bian2021colossal,
title={Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training},
author={Bian, Zhengda and Liu, Hongxin and Wang, Boxiang and Huang, Haichen and Li, Yongbin and Wang, Chuanrui and Cui, Fan and You, Yang},
journal={arXiv preprint arXiv:2110.14883},
year={2021}
}
Colossal-AI has been accepted as an official tutorial by top conferences including SC, AAAI, PPoPP, CVPR, and ISC.