Add gpt-j-6b model into torchbench #1702

Open · wants to merge 1 commit into main

Conversation

@chuanqi129 (Contributor) commented on May 28, 2023

Add the HF model gpt-j-6b into torchbench.

Part of the roadmap work tracked in #1293.

$ python run.py hf_GPTJ --torchdynamo inductor
Running eval method from hf_GPTJ on cpu in dynamo inductor mode with input batch size 1 and precision fp32.
CPU Total Wall Time: 3891.941 milliseconds
CPU Peak Memory:               47.6953 GB
Correctness:                         True
PT2 Compilation time:      99.554 seconds
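
For orientation, new models in torchbench generally live in their own directory with an __init__.py, install.py, and metadata.yaml; the hunks quoted below come from such a module. A minimal sketch of what the module might look like, assuming the layout and HuggingFaceModel factory used by the other hf_* models (the import paths, the illustrative file path, and the super().__init__ keyword names are assumptions based on the repository layout, not part of this diff):

# torchbenchmark/models/hf_GPTJ/__init__.py  (illustrative path)
from torchbenchmark.tasks import NLP
from torchbenchmark.util.framework.huggingface.model_factory import HuggingFaceModel

class Model(HuggingFaceModel):
    task = NLP.LANGUAGE_MODELING
    DEFAULT_TRAIN_BSIZE = 1
    DEFAULT_EVAL_BSIZE = 1

    def __init__(self, test, device, jit=False, batch_size=None, extra_args=[]):
        # name="hf_GPTJ" is the key the HuggingFace factory uses to select the
        # gpt-j-6b checkpoint and tokenizer (name taken from the run command above).
        super().__init__(name="hf_GPTJ", test=test, device=device, jit=jit,
                         batch_size=batch_size, extra_args=extra_args)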


class Model(HuggingFaceModel):
    task = NLP.LANGUAGE_MODELING
    DEFAULT_TRAIN_BSIZE = 1
Review comment (Contributor):
Is this the correct batch size? We should use the default batch size in the upstream code if possible.
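
For context on why this default matters: when no batch size is passed explicitly, the harness falls back to DEFAULT_TRAIN_BSIZE or DEFAULT_EVAL_BSIZE depending on the test. Roughly (a simplified sketch of that fallback, not the exact torchbench code):

def resolve_batch_size(test, batch_size, default_train=1, default_eval=1):
    # An explicit batch size from the caller always wins; otherwise fall back
    # to the per-test default declared on the model class.
    if batch_size is not None:
        return batch_size
    return default_train if test == "train" else default_eval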

train_deterministic: false
not_implemented:
  # hf_GPTJ model doesn't support JIT
  - jit: true
Review comment (Contributor):
Let's remove the jit option, since it has been deprecated in the default CI.
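
If the entry is dropped as suggested, the touched part of metadata.yaml reduces to the line below (assuming jit was the only not_implemented item; any other keys in the file stay unchanged):

train_deterministic: false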

    DEFAULT_TRAIN_BSIZE = 1
    DEFAULT_EVAL_BSIZE = 1

    def __init__(self, test, device, jit=False, batch_size=None, extra_args=[]):
Review comment (Contributor):
Same here, remove the jit=False and jit=jit arguments.
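
With the jit arguments removed per this comment and the earlier one, the constructor would look roughly like this (a sketch of the suggested change, reusing the keyword names from the diff above):

    def __init__(self, test, device, batch_size=None, extra_args=[]):
        # The deprecated jit flag is no longer threaded through to the base class.
        super().__init__(name="hf_GPTJ", test=test, device=device,
                         batch_size=batch_size, extra_args=extra_args)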
