Logical-Equivalence-driven-AMR-Data-Augmentation-for-Representation-Learning

We propose a new AMR-based logic-driven data augmentation method for contrastive-learning intermediate training, and then evaluate on downstream tasks that require logical reasoning, including logical reasoning reading comprehension tasks (ReClor and LogiQA) and natural language inference tasks (MNLI, MRPC, RTE, QNLI and QQP). Our AMR-LDA models (AMR-LDA Prompt Augmentation + GPT-4, and DeBERTa-v2-xxlarge-AMR-LDA-Cont) lead the ReClor leaderboard, and we are the first group worldwide to score above 90% on the hidden test set. Our paper has been accepted by the Findings of ACL 2024.

To replicate our experimental results, follow the steps below.

  1. Install all required packages from requirements_latest.txt: pip install -r requirements_latest.txt

Logical equivalence-driven data augmentation

Synthetic sentence generation

  1. Run logical_equivalence_synthetic_dataset.py to automatically generate sentences that are ready for stage-1 finetuning.
  2. All code for the logical equivalence data augmentation is in logical_equivalence_functions.py. You can run it with python logical_equivalence_functions.py. A minimal illustrative sketch of the kind of transformation it performs is shown after this list.
  3. To adjust the proportion of positive and negative samples for stage-1 finetuning, run negative_sample_extention.py.
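
As a rough illustration of the idea, the sketch below builds a logically equivalent sentence pair by contraposition ("If A, then B" is equivalent to "If not B, then not A"). This is only a hedged sketch with a hypothetical helper function and example sentence, not the actual implementation in logical_equivalence_functions.py.

# Minimal illustrative sketch (not the repository's implementation):
# build a logically equivalent positive pair via contraposition,
# i.e. "If A, then B"  <=>  "If not B, then not A".
import re

def contraposition(sentence: str):
    """Return a contrapositive paraphrase for sentences of the form
    'If A, then B.', or None if the pattern does not match."""
    match = re.match(r"^If (.+), then (.+)\.$", sentence.strip())
    if match is None:
        return None
    antecedent, consequent = match.group(1), match.group(2)
    return (f"If it is not the case that {consequent}, "
            f"then it is not the case that {antecedent}.")

if __name__ == "__main__":
    original = "If the driver is careful, then the road is safe."
    augmented = contraposition(original)
    # The original sentence and its contrapositive form a positive
    # (logically equivalent) pair for contrastive stage-1 finetuning.
    print(original)
    print(augmented)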

Logical equivalence-driven data augmentation for representation learning

You can follow the running notes in script_running_notes.txt and use the training commands there to conduct stage-1 and stage-2 finetuning. Note that stage-1 finetuning must be completed before stage-2 finetuning. The main entry point is BERT/run_glue_no_trainer.py. Here is an example of stage-1 finetuning.

python run_glue_no_trainer.py \
 --seed 2021 \
 --model_name_or_path roberta-large \
 --train_file ../output_result/Synthetic_xfm_t5wtense_logical_equivalence_train_v4.csv \
 --validation_file ../output_result/Synthetic_xfm_t5wtense_logical_equivalence_validation_v4.csv \
 --max_length 256 \
 --per_device_train_batch_size 32 \
 --learning_rate 2e-5 \
 --num_train_epochs 10 \
 --output_dir Transformers/roberta-large-our-model-v4/
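
The train and validation CSV files above are produced by the synthetic data generation step. As a rough, hedged sketch (the actual column layout is defined by the generation scripts and may differ), you can inspect them with pandas before launching stage-1 finetuning; the label column name below is an assumption.

# Hypothetical inspection snippet: check the stage-1 training CSV.
# We only assume sentence-pair rows with a binary label
# (e.g. 1 = logically equivalent, 0 = not equivalent).
import pandas as pd

df = pd.read_csv("../output_result/Synthetic_xfm_t5wtense_logical_equivalence_train_v4.csv")
print(df.columns.tolist())   # check the actual column names
print(df.head())
if "label" in df.columns:    # assumed label column name
    print(df["label"].value_counts())  # positive/negative proportion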

Here is an example of stage-2 finetuning on MRPC. Note that --model_name_or_path now points to the output directory produced by stage-1 finetuning.

python run_glue_no_trainer.py \
 --seed 42 \
 --model_name_or_path Transformers/roberta-large-our-model-v4/ \
 --task_name mrpc \
 --max_length 256 \
 --per_device_train_batch_size 32 \
 --learning_rate 2e-5 \
 --num_train_epochs 10 \
 --output_dir Transformers/mrpc/synthetic-logical-equivalence-finetuned-roberta-large-v4/

For stage-2 finetuning on ReClor and LogiQA, run the commands under BERT/scripts. Here is an example of stage-2 finetuning for ReClor.

export RECLOR_DIR=reclor_data
export TASK_NAME=reclor
export MODEL_NAME=microsoft/deberta-v2-xxlarge
export OUTPUT_NAME=deberta-v2-xxlarge

CUDA_VISIBLE_DEVICES=3 python run_multiple_choice.py \
   --model_type debertav2 \
   --model_name_or_path $MODEL_NAME \
   --task_name $TASK_NAME \
   --do_train \
   --evaluate_during_training \
   --do_test \
   --do_lower_case \
   --data_dir $RECLOR_DIR \
   --max_seq_length 256 \
   --per_gpu_eval_batch_size 4   \
   --per_gpu_train_batch_size 4   \
   --gradient_accumulation_steps 24 \
   --learning_rate 1e-05 \
   --num_train_epochs 10.0 \
   --output_dir Checkpoints/$TASK_NAME/${OUTPUT_NAME} \
   --logging_steps 200 \
   --save_steps 200 \
   --adam_betas "(0.9, 0.98)" \
   --adam_epsilon 1e-6 \
   --no_clip_grad_norm \
   --warmup_proportion 0.1 \
   --weight_decay 0.01

Here is an example of stage-2 finetuning for LogiQA.

export RECLOR_DIR=logiqa_data
export TASK_NAME=logiqa
export MODEL_NAME=microsoft/deberta-v2-xxlarge
export OUTPUT_NAME=deberta-v2-xxlarge

CUDA_VISIBLE_DEVICES=3 python run_multiple_choice.py \
  --model_type debertav2 \
  --model_name_or_path $MODEL_NAME \
  --task_name $TASK_NAME \
  --do_train \
  --evaluate_during_training \
  --do_test \
  --do_lower_case \
  --data_dir $RECLOR_DIR \
  --max_seq_length 256 \
  --per_gpu_eval_batch_size 4   \
  --per_gpu_train_batch_size 4   \
  --gradient_accumulation_steps 24 \
  --learning_rate 1e-05 \
  --num_train_epochs 10.0 \
  --output_dir Checkpoints/$TASK_NAME/${OUTPUT_NAME} \
  --logging_steps 200 \
  --save_steps 200 \
  --adam_betas "(0.9, 0.98)" \
  --adam_epsilon 1e-6 \
  --no_clip_grad_norm \
  --warmup_proportion 0.1 \
  --weight_decay 0.01

Citation

If you find the paper and code helpful, please cite our paper:

@inproceedings{Bao24amrlda,
  author    = {Qiming Bao and
               Alex Yuxuan Peng and
               Zhenyun Deng and
               Wanjun Zhong and
               Gaël Gendron and
               Neşet Tan and
               Nathan Young and
               Yang Chen and
               Yonghua Zhu and
               Michael Witbrock and
               Jiamou Liu},
  title     = {Abstract Meaning Representation-Based Logic-Driven Data Augmentation for Logical Reasoning},
  booktitle = {Findings of ACL},
  publisher = {{ACL}},
  year      = {2024}
}
