
How to comprehend the evaluation results? #7

Open
yaoyiyao-yao opened this issue Feb 7, 2021 · 1 comment

Comments

@yaoyiyao-yao
Hello,
When I get the evaluation results, I am not sure about the meaning of "predicted_parse_error" and "exact". I guess that when "predicted_parse_error" is true, it means the model can't produce a predicted SQL. Is that right? I also found that "exact" has three possible values: true, 0, and false. I guess that when "exact" is true, the predicted SQL is correct, but what do 0 and false mean?
Thank you.

@Impavidity
Contributor

You can use the official evaluation script at https://github.com/taoyds/spider to evaluate the outputs.
For the customized evaluation in this codebase:
"predicted_parse_error": true means the model could not produce SQL.
Both 0 and false mean the predicted SQL is wrong.
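To summarize the interpretation above, here is a minimal sketch of how one might tally per-example evaluation records. The field names "predicted_parse_error" and "exact" come from this thread; the record layout and the `summarize` helper are assumptions for illustration, not part of the actual evaluation script.

```python
def summarize(records):
    """Tally parse failures and exact matches in a list of evaluation records.

    Assumed record layout (illustrative only):
      - "predicted_parse_error": True means the model produced no SQL.
      - "exact": True means the predicted SQL is correct;
        both 0 and False mean it is wrong.
    """
    parse_errors = sum(1 for r in records if r.get("predicted_parse_error") is True)
    # Only a literal True counts as correct; 0 and False both count as wrong.
    exact_matches = sum(1 for r in records if r.get("exact") is True)
    return {
        "total": len(records),
        "parse_errors": parse_errors,
        "exact_matches": exact_matches,
    }


# Example with the three observed values of "exact":
records = [
    {"predicted_parse_error": True, "exact": False},  # no SQL produced
    {"predicted_parse_error": False, "exact": 0},     # wrong SQL
    {"predicted_parse_error": False, "exact": True},  # correct SQL
]
print(summarize(records))
# {'total': 3, 'parse_errors': 1, 'exact_matches': 1}
```

Note the use of `is True` rather than a truthiness check: since 0 and False are both "wrong", a plain `if r["exact"]` would behave the same here, but the explicit comparison makes the three-valued convention visible.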
