This project is mirrored from https://github.com/hpcaitech/ColossalAI.git.
Mirror updates pulled on .
September 06, 2023

- Authored by Hongxin Liu
September 05, 2023

- Authored by Hongxin Liu
  [shardformer] update hybrid parallel plugin and fix bugs

- Authored by Hongxin Liu

- Authored by Hongxin Liu

- Authored by Hongxin Liu
  * [legacy] move engine to legacy
  * [example] fix seq parallel example
  * [test] test gemini pluging hang
  * [example] update seq parallel requirements
- Authored by Hongxin Liu
  * [legacy] move trainer to legacy
  * [doc] update docs related to trainer
  * [test] ignore legacy test
- Authored by Hongxin Liu

- Authored by Hongxin Liu
  * [zero] add method to update master params
  * [zero] update zero plugin
  * [plugin] update low level zero plugin
- Authored by Hongxin Liu

- Authored by flybird11111
  [shardformer] update shardformer readme

- Authored by Bin Jia
  * add optional overlap for plugin
  * remove fixed todo
September 04, 2023

- Authored by Hongxin Liu

- Authored by Baizhou Zhang

- Authored by flybird11111
  * [shardformer] fix opt test hanging
  * fix
  * test
  * fix test
  * remove print
  * add fix
  * [shardformer] add bert finetune example
  * [shardformer] fix epoch change
  * [shardformer] broadcast add pp group
  * [shardformer] fix opt test hanging
  * fix
  * test
  * [shardformer] zero1+pp and the corresponding tests (#4517)
  * pause
  * finish pp+zero1
  * Update test_shard_vit.py
  * [shardformer/fix overlap bug] fix overlap bug, add overlap as an option in shardco… (#4516)
  * fix overlap bug and support bert, add overlap as an option in shardconfig
  * support overlap for chatglm and bloom
  * [shardformer] fix emerged bugs after updating transformers (#4526)
  * test
  * fix test
  * remove print
  * add fix
  * [shardformer] add bert finetune example
  * [shardformer] Add overlap support for gpt2 (#4535)
  * add overlap support for gpt2
  * remove unused code
  * [shardformer] support pp+tp+zero1 tests (#4531)
  * [shardformer] fix opt test hanging
  * fix
  * test
  * fix test
  * remove print
  * add fix
  * [shardformer] pp+tp+zero1
  * [shardformer] fix submodule replacement bug when enabling pp (#4544)
  * [shardformer] support sharded optimizer checkpointIO of HybridParallelPlugin (#4540)
  * implement sharded optimizer saving
  * add more param info
  * finish implementation of sharded optimizer saving
  * fix bugs in optimizer sharded saving
  * add pp+zero test
  * param group loading
  * greedy loading of optimizer
  * fix bug when loading
  * implement optimizer sharded saving
  * add optimizer test & arrange checkpointIO utils
  * fix gemini sharding state_dict
  * add verbose option
  * add loading of master params
  * fix typehint
  * fix master/working mapping in fp16 amp
  * [shardformer] add bert finetune example
  * [shardformer] fix epoch change
  * [shardformer] broadcast add pp group
  * rebase feature/shardformer
  * update pipeline
  * [shardformer] fix
  * [shardformer] bert finetune fix
  * [shardformer] add all_reduce operation to loss
  * [shardformer] make compatible with pytree
  * [shardformer] disable tp
  * [shardformer] add 3d plugin to ci test
  * [shardformer] update num_microbatches to None
  * [shardformer] update microbatchsize
  * [shardformer] update assert
  * update scheduler
  Co-authored-by: Jianghai <72591262+CjhHa1@users.noreply.github.com>
  Co-authored-by: Bin Jia <45593998+FoolPlayer@users.noreply.github.com>
  Co-authored-by: Baizhou Zhang <eddiezhang@pku.edu.cn>
- Authored by Jianghai
  * pytree test
  * test bert
  * revise
  * add register

- Authored by yingliu-hpc
  [coati] Add chatglm in coati

- Authored by binmakeswell
  * [doc] add llama2 benchmark

- Authored by binmakeswell
  * [doc] add llama2 news

- Authored by Hongxin Liu
  * [zero] update checkpoint io to save memory
  * [checkpointio] add device map to save memory
September 01, 2023

- Authored by Hongxin Liu

- Authored by Mashiro

- Authored by 栾鹏
  fix dockerfile build

- Authored by LuGY
  * fix zero ckptio with offload
  * fix load device
  * saved tensors in ckpt should be on CPU
  * fix unit test
  * add clear cache
  * save memory for CI

- Authored by Baizhou Zhang
  * hybrid plugin support huggingface from_pretrained
  * add huggingface compatibility tests
  * add folder cleaning
  * fix bugs
August 31, 2023

- Authored by Baizhou Zhang
  * implement sharded optimizer saving
  * add more param info
  * finish implementation of sharded optimizer saving
  * fix bugs in optimizer sharded saving
  * add pp+zero test
  * param group loading
  * greedy loading of optimizer
  * fix bug when loading
  * implement optimizer sharded saving
  * add optimizer test & arrange checkpointIO utils
  * fix gemini sharding state_dict
  * add verbose option
  * add loading of master params
  * fix typehint
  * fix master/working mapping in fp16 amp

- Authored by Baizhou Zhang
August 30, 2023

- Authored by Hongxin Liu

- Authored by Tian Siyuan
  Co-authored-by: Siyuan Tian <siyuant@vmware.com>
  Co-authored-by: Hongxin Liu <lhx0217@gmail.com>

- Authored by ChengDaqi2023

- Authored by flybird11111
  * [shardformer] fix opt test hanging
  * fix
  * test
  * fix test
  * remove print
  * add fix
  * [shardformer] pp+tp+zero1

- Authored by Lufang Chen
  Co-authored-by: lufang.chen <lufang.chen@nio.com>

- Authored by Ying Liu

- Authored by flybird11111
  * [shardformer] fix opt test hanging
  * fix
  * test
  * fix test
  * remove print
  * add fix

- Authored by Ying Liu

- Authored by yingliu-hpc
  [coati] update ci

- Authored by ver217
August 29, 2023

- Authored by Bin Jia
  * add overlap support for gpt2
  * remove unused code

- Authored by yingliu-hpc
  * update configuration of chatglm and add support in coati
  * add unit test & update chatglm default config & fix bos index issue
  * remove chatglm due to oom
  * add dataset pkg in requirement-text
  * fix parameter issue in test_models
  * add ref in tokenize & rm unnessary parts
  * separate source & target tokenization in chatglm
  * add unit test to chatglm
  * fix test dataset issue
  * update truncation of chatglm
  * fix Colossalai version
  * fix colossal ai version in test

- Authored by Baizhou Zhang
August 28, 2023

- Authored by Hongxin Liu
  * [example] transfer llama-1 example
  * [example] fit llama-2
  * [example] refactor scripts folder
  * [example] fit new gemini plugin
  * [cli] fix multinode runner
  * [example] fit gemini optim checkpoint
  * [example] refactor scripts
  * [example] update requirements
  * [example] rename llama to llama2
  * [example] update readme and pretrain script
  * [example] refactor scripts