Tool sometimes misinterprets multiple-choice questions #15
Comments
Here's a partially successful example for this image: The different options for question 20 in the doc were correctly parsed, but the hint text was not.
Another mostly successful example from the same form as above: The options for question 23 in the form were correctly determined, as was the fact that only one response is allowed. The conditional date fields were not picked up, but this isn't surprising as the multiple-choice component doesn't support them. This is a good example of a question you might choose to structure differently in the web version anyway, using multiple pages and routing.
Here's an example of it getting it wrong, from the same form. It made two errors:
What's interesting (and frustrating) is that the question is nearly identical to this one, which was successfully parsed. It does occasionally get it right:
Here's another example of a mostly successful extraction, from question 42 of this image: The hint text isn't carried over as a hint; instead it's appended to the question title.
It's now getting an isolated version of this example right:
Another failure, from this image: It chose checkboxes instead of radios. I wonder if I can get it to understand the difference based on the hint text?
Yes, I can! This was fixed in this commit by adding the following to the description text for the
I'd tried a few other variants before finding one that worked, which is interesting. I think what made it work was the confidence of the statement: saying "if any part of the question", and stating that it *is* this component (rather than that it *probably* is). It also helped to express it as a standalone sentence, rather than appending it as a clause to another sentence. Notice that the question in the example doesn't contain the exact text I cite in the schema, but it still matches.
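To make the idea concrete, here is a hedged sketch of what such schema descriptions might look like. These field names and wordings are illustrative, not the project's actual schema; the point is the confidently worded, standalone sentence, which the model appears to match semantically rather than literally.

```python
# Hypothetical JSON-schema fragments for two list components.
# The key detail is the standalone, assertive sentence in each
# description ("it is this component", not "it probably is").
RADIOS_COMPONENT = {
    "name": "RadiosField",  # illustrative name, not the real one
    "description": (
        "A list of options from which the user selects exactly one. "
        "If any part of the question tells the user to select only "
        "one option, it is this component."
    ),
}

CHECKBOXES_COMPONENT = {
    "name": "CheckboxesField",  # illustrative name, not the real one
    "description": (
        "A list of options from which the user may select several. "
        "If any part of the question tells the user to select all "
        "that apply, it is this component."
    ),
}
```

The hedged "probably" variants reportedly did not work, so both descriptions here commit fully to the classification.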
Accurately interpreting multiple-choice questions (beyond simple yes/no) is a challenge. Let's capture examples of the tool doing this successfully and unsuccessfully, to determine how we might improve its performance.
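One way to make the captured examples useful is to score them systematically. This is a minimal sketch, with entirely hypothetical names, of tallying per-component-type accuracy over a list of (question, expected, extracted) records like the ones collected in this thread.

```python
from collections import Counter

def score_extractions(cases):
    """cases: iterable of (question_id, expected_type, extracted_type).

    Returns per-type accuracy plus the list of failing cases so the
    interesting misses (like radios vs checkboxes) can be inspected.
    """
    totals, correct = Counter(), Counter()
    failures = []
    for qid, expected, extracted in cases:
        totals[expected] += 1
        if expected == extracted:
            correct[expected] += 1
        else:
            failures.append((qid, expected, extracted))
    accuracy = {t: correct[t] / totals[t] for t in totals}
    return accuracy, failures

# Illustrative data echoing the examples above (not real results):
cases = [
    ("q20", "radios", "radios"),
    ("q23", "radios", "radios"),
    ("q42", "radios", "checkboxes"),  # the radios/checkboxes confusion
]
accuracy, failures = score_extractions(cases)
```

Rerunning this after each prompt or schema tweak would show whether a change like the description-text fix above actually moves the numbers, rather than just fixing one example.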