Releases: InternLM/xtuner

XTuner Release V0.1.19

11 May 09:50
aab528c

What's Changed

  • [Fix] LLaVA-v1.5 official settings by @LZHgrla in #594
  • [Feature] Release LLaVA-Llama-3-8B by @LZHgrla in #595
  • [Improve] Add single-gpu configs for LLaVA-Llama-3-8B by @LZHgrla in #596
  • [Docs] Add wisemodel badge by @LZHgrla in #597
  • [Feature] Support load_json_file with json.load by @HIT-cwh in #610
  • [Feature] Support Microsoft Phi-3 4K & 128K Instruct Models by @pppppM in #603
  • [Fix] Set dataloader_num_workers=4 for LLaVA training by @LZHgrla in #611
  • [Fix] Do not set attn_implementation to flash_attention_2 or sdpa if users already set it in XTuner configs. by @HIT-cwh in #609
  • [Release] LLaVA-Phi-3-mini by @LZHgrla in #615
  • Update README.md by @eltociear in #608
  • [Feature] Refine sequence parallel (SP) API by @HIT-cwh in #619
  • [Feature] Add conversion scripts for LLaVA-Llama-3-8B by @LZHgrla in #618
  • [Fix] Convert nan to 0 just for logging by @HIT-cwh in #625
  • [Docs] Delete colab and add speed benchmark by @HIT-cwh in #617
  • [Feature] Support DeepSpeed ZeRO-3 with QLoRA (dsz3+qlora) by @HIT-cwh in #600
  • [Feature] Add Qwen1.5 110B configs by @HIT-cwh in #632
  • Check transformers version before dispatch by @HIT-cwh in #672
  • [Fix] convert_xtuner_weights_to_hf with frozen ViT by @LZHgrla in #661
  • [Fix] Fix batch-size setting of single-card LLaVA-Llama-3-8B configs by @LZHgrla in #598
  • [Feature] Add HFCheckpointHook to auto-save the HF model after the whole training phase by @HIT-cwh in #621 (see the config sketch after this list)
  • Remove test info in DatasetInfoHook by @hhaAndroid in #622
  • [Improve] Support safe_serialization saving by @LZHgrla in #648
  • bump version to 0.1.19 by @HIT-cwh in #675
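
A minimal sketch of how the HFCheckpointHook from #621 might be registered in a config so the final checkpoint is also exported in Hugging Face format; the import path and the argument name are assumptions, not copied from a released config.

```python
# Hedged sketch: wiring HFCheckpointHook (#621) into an XTuner config so the
# trained model is exported in Hugging Face format once training finishes.
from xtuner.engine.hooks import HFCheckpointHook  # assumed module path

custom_hooks = [
    dict(type=HFCheckpointHook, out_dir='./work_dirs/hf_model'),  # assumed argument name
]
```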

Full Changelog: v0.1.18...v0.1.19

XTuner Release V0.1.18

19 Apr 11:21
ae1d981

What's Changed

  • set dev version by @LZHgrla in #537
  • [Fix] Fix typo by @KooSung in #547
  • [Feature] Support Mixtral varlen attn by @HIT-cwh in #564
  • [Feature] Support Qwen sequence parallel and varlen attn by @HIT-cwh in #565
  • [Fix] Fix attention mask in default_collate_fn by @pppppM in #567
  • Accept PyTorch 2.2 as the bugs in Triton 2.2 are fixed by @HIT-cwh in #548
  • [Feature] Refine Sequence Parallel API by @HIT-cwh in #555
  • [Fix] Enhance split_list to support value at the beginning by @LZHgrla in #568
  • [Feature] Support cohere by @HIT-cwh in #569
  • [Fix] Fix rotary_seq_len in varlen attn in Qwen by @HIT-cwh in #574
  • [Docs] Add sequence parallel docs to the README by @HIT-cwh in #578
  • [Bug] SUPPORT_FLASH1 = digit_version(torch.__version__) >= digit_version('2… by @HIT-cwh in #587
  • [Feature] Support Llama 3 by @LZHgrla in #585
  • [Docs] Add llama3 8B readme by @HIT-cwh in #588
  • [Bugs] Check whether CUDA is available when choosing torch_dtype in sft.py by @HIT-cwh in #577 (see the sketch after this list)
  • [Bugs] Fix bugs in tokenize_ftdp_datasets by @HIT-cwh in #581
  • [Feature] Support Qwen MoE by @HIT-cwh in #579
  • [Docs] Add tokenizer to sft in Case 2 by @HIT-cwh in #583
  • bump version to 0.1.18 by @HIT-cwh in #590
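
A rough illustration of the kind of guard #577 describes: only choose a half-precision training dtype after confirming CUDA is available. This is a generic sketch, not the code in sft.py; the function name and the exact fallback order are assumptions.

```python
import torch


def choose_torch_dtype(requested: str = 'auto') -> torch.dtype:
    """Pick a training dtype, falling back to fp32 when CUDA is unavailable.

    Illustrative only; the real check lives in xtuner's sft.py, and the
    function name and fallback order here are assumptions.
    """
    if not torch.cuda.is_available():
        # Half-precision kernels are a GPU feature; stay in fp32 on CPU.
        return torch.float32
    if requested == 'auto':
        # Prefer bf16 on GPUs that support it, otherwise fall back to fp16.
        return torch.bfloat16 if torch.cuda.is_bf16_supported() else torch.float16
    return {'fp16': torch.float16, 'bf16': torch.bfloat16, 'fp32': torch.float32}[requested]
```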

Full Changelog: v0.1.17...v0.1.18

XTuner Release V0.1.17

03 Apr 05:49
afc9e33

What's Changed

  • [Fix] Fix PyPI package by @LZHgrla in #540
  • [Improve] Add LoRA fine-tuning configs for LLaVA-v1.5 by @LZHgrla in #536
  • [Configs] Add sequence_parallel_size and SequenceParallelSampler to configs by @HIT-cwh in #538 (see the config sketch after this list)
  • Check shape of attn_mask during attn forward by @HIT-cwh in #543
  • bump version to v0.1.17 by @LZHgrla in #542
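
A hedged sketch of the two config pieces named in #538: a global sequence_parallel_size and a SequenceParallelSampler on the training dataloader. The import path and the surrounding dataloader fields are assumptions, not copied from a released config.

```python
from xtuner.parallel.sequence import SequenceParallelSampler  # assumed module path

sequence_parallel_size = 2  # split each sequence across 2 GPUs

train_dataloader = dict(
    batch_size=1,
    num_workers=4,
    # SequenceParallelSampler replaces the default sampler when SP is enabled.
    sampler=dict(type=SequenceParallelSampler, shuffle=True),
    dataset=...,     # dataset config omitted in this sketch
    collate_fn=...,  # collate_fn config omitted in this sketch
)
```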

Full Changelog: v0.1.16...v0.1.17

XTuner Release V0.1.16

29 Mar 10:32
0b5708c

Full Changelog: v0.1.15...v0.1.16

XTuner Release V0.1.15

18 Mar 09:42
e128b77

Full Changelog: v0.1.14...v0.1.15

XTuner Release V0.1.14

28 Feb 08:47
e6fcce1

What's Changed

  • set dev version by @LZHgrla in #341
  • [Feature] More flexible TrainLoop by @LZHgrla in #348
  • [Feature] Support CEPH by @pppppM in #266
  • [Improve] Add --repetition-penalty for xtuner chat by @LZHgrla in #351 (see the sketch after this list)
  • [Feature] Support MMBench DDP Evaluate by @pppppM in #300
  • [Fix] KeyError of encode_fn by @LZHgrla in #361
  • [Fix] Fix batch_size of full fine-tuning LLaVA-InternLM2 by @LZHgrla in #360
  • [Fix] Remove system for alpaca_map_fn by @LZHgrla in #363
  • [Fix] Use DEFAULT_IMAGE_TOKEN instead of '<image>' by @LZHgrla in #353
  • [Feature] Support internlm sft by @HIT-cwh in #302
  • [Fix] Add attention_mask for default_collate_fn by @LZHgrla in #371
  • [Fix] Update requirements by @LZHgrla in #369
  • [Fix] Fix rotary_base, add colors_map_fn to DATASET_FORMAT_MAPPING and rename 'internlm_repo' to 'intern_repo' by @HIT-cwh in #372
  • update by @HIT-cwh in #377
  • Delete useless codes and refactor process_untokenized_datasets by @HIT-cwh in #379
  • [Feature] Support Flash Attention 2 in InternLM1, InternLM2 and Llama by @HIT-cwh in #381
  • [Fix] Fix installation docs of mmengine in intern_repo_dataset.md by @LZHgrla in #384
  • [Fix] Update InternLM2 apply_rotary_pos_emb by @LZHgrla in #383
  • [Feature] support saving eval output before save checkpoint by @HIT-cwh in #385
  • Fix LR scheduler setting by @gzlong96 in #394
  • [Fix] Remove pre-defined system of alpaca_zh_map_fn by @LZHgrla in #395
  • [Feature] Support Qwen1.5 by @LZHgrla in #407
  • [Fix] Fix no space in chat output using InternLM2. (#357) by @KooSung in #404
  • [Fix] typo: --system-prompt to --system-template by @LZHgrla in #406
  • [Improve] Add output_with_loss for dataset process by @LZHgrla in #408
  • [Fix] Fix dispatch to support transformers>=4.36 & Add USE_TRITON_KERNEL environment variable by @HIT-cwh in #411
  • [Feature] Add InternLM2-Chat-1_8b full config by @KMnO4-zx in #396
  • [Fix] Fix extract_json_objects by @fanqiNO1 in #419
  • [Fix] Fix pth_to_hf error by @LZHgrla in #426
  • [Feature] Support Gemma by @PommesPeter in #429
  • Add RefCOCO to LLaVA by @LKJacky in #425
  • [Fix] Inconsistent BatchSize of LengthGroupedSampler by @LZHgrla in #436
  • bump version to v0.1.14 by @LZHgrla in #431
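
For context on #351, this is the standard (CTRL-style) repetition penalty that a --repetition-penalty flag typically maps to. xtuner chat most likely passes the value through to the Hugging Face generation utilities rather than implementing it itself, so treat this as an illustration only; the function name is made up for the sketch.

```python
import torch


def apply_repetition_penalty(logits: torch.Tensor,
                             generated_ids: torch.Tensor,
                             penalty: float = 1.2) -> torch.Tensor:
    """Penalize tokens that already appear in the generated sequence.

    CTRL-style rule: positive logits of previously generated tokens are divided
    by `penalty`, negative ones are multiplied by it, making repeats less likely.
    """
    scores = logits.clone()                     # (vocab_size,) next-token logits
    prev = torch.unique(generated_ids)          # token ids seen so far
    scores[prev] = torch.where(scores[prev] > 0,
                               scores[prev] / penalty,
                               scores[prev] * penalty)
    return scores
```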

Full Changelog: v0.1.13...v0.1.14

XTuner Release V0.1.13

19 Jan 10:33
0633939

Full Changelog: v0.1.12...v0.1.13

XTuner Release V0.1.12

17 Jan 02:51
6c4c73b

Full Changelog: v0.1.11...v0.1.12

XTuner Release V0.1.11

26 Dec 09:57
de4122f

Full Changelog: v0.1.10...v0.1.11

XTuner Release V0.1.10

11 Dec 09:43
1d10561

Full Changelog: v0.1.9...v0.1.10