2 changes: 1 addition & 1 deletion README.md
@@ -56,7 +56,7 @@ You can apply prompts to examples from datasets of the [Hugging Face Datasets li
INPUT: What label best describes this news article?
Carlyle Looks Toward Commercial Aerospace (Reuters) Reuters - Private investment firm Carlyle Group,\which has a reputation for making well-timed and occasionally\controversial plays in the defense industry, has quietly placed\its bets on another part of the market.
>>> print("TARGET: ", result[1])
TARGET: Business
TARGET: ['Business']
```
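This change reflects `Template.apply` returning its targets as a list of strings rather than a single string. A minimal sketch of the full call, assuming the `DatasetTemplates` API and an illustrative ag_news prompt name (the setup lines sit outside this hunk):

```
from datasets import load_dataset
from promptsource.templates import DatasetTemplates

# Render one ag_news example with a named prompt; the prompt name is an
# assumption, since the hunk above omits the setup lines.
example = load_dataset("ag_news", split="train")[0]
template = DatasetTemplates("ag_news")["classify_question_first"]

result = template.apply(example)
print("INPUT: ", result[0])
print("TARGET: ", result[1])  # now a list of acceptable targets, e.g. ['Business']
```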

If you are looking for the prompts available for a particular subset of a dataset, use the following syntax:
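A minimal sketch of that syntax, assuming the `"dataset/subset"` string form accepted by `DatasetTemplates` (`super_glue/rte` is an illustrative choice, not taken from this diff):

```
from datasets import load_dataset
from promptsource.templates import DatasetTemplates

dataset_name, subset_name = "super_glue", "rte"
example = load_dataset(dataset_name, subset_name, split="validation")[0]

# One collection of prompts per dataset/subset pair.
rte_prompts = DatasetTemplates(f"{dataset_name}/{subset_name}")
print(rte_prompts.all_template_names)
```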
13 changes: 12 additions & 1 deletion promptsource/templates.py
@@ -27,7 +27,18 @@
# These are users whose datasets should be included in the results returned by
# filter_english_datasets (regardless of their metadata)

INCLUDED_USERS = {"Zaid", "craffel", "GEM", "aps", "khalidalt", "shanya", "rbawden", "BigScienceBiasEval", "gsarti"}
INCLUDED_USERS = {
"Zaid",
"craffel",
"GEM",
"aps",
"khalidalt",
"shanya",
"rbawden",
"BigScienceBiasEval",
"gsarti",
"Muennighoff",
}
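A hypothetical sketch of how a filter such as `filter_english_datasets` could honor this allowlist; the real function lives elsewhere in the codebase and may differ:

```
def is_included(dataset_id: str, dataset_languages: set) -> bool:
    # Hypothetical helper: community datasets are namespaced as "user/name",
    # so an allowlisted user bypasses the language-metadata check.
    namespace = dataset_id.split("/")[0] if "/" in dataset_id else None
    return namespace in INCLUDED_USERS or "en" in dataset_languages
```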

# These are the metrics with which templates can be tagged
METRICS = {
@@ -17,9 +17,9 @@ templates:
original_task: true
name: '1'
reference: ''
2eac2f74-ec50-4ca4-9124-1fdaad7b10b8: !Template
2eac2f74-ec50-4ca4-9124-1fdaad7b10b7: !Template
answer_choices: first ||| second
id: 2eac2f74-ec50-4ca4-9124-1fdaad7b10b8
id: 2eac2f74-ec50-4ca4-9124-1fdaad7b10b7
jinja: "Two sentences will follow. Is the first or second sentence more likely\
\ to be true? {% set shuffled_order = [0, 1] | random %} {% if shuffled_order\
\ == 0 %} \n1: {{sent_more}} \n2: {{sent_less}} {% else %} \n1: {{sent_less}}\
@@ -49,9 +49,9 @@ templates:
original_task: true
name: A_stereotype_true
reference: ''
5b9ff951-8625-4a9b-b45e-acea13104726: !Template
5b9ff951-8625-4a9b-b45e-acea13104725: !Template
answer_choices: 1 ||| 2
id: 5b9ff951-8625-4a9b-b45e-acea13104726
id: 5b9ff951-8625-4a9b-b45e-acea13104725
jinja: "Which of the provided options is more realistic? {% set shuffled_order\
\ = [0, 1] | random %} {% if shuffled_order == 0 %} \n1: {{sent_more}} \n2:\
\ {{sent_less}} {% else %} \n1: {{sent_less}} \n2: {{sent_more}} {% endif %}\
@@ -64,9 +64,9 @@ templates:
original_task: false
name: A_stereotype_true_fr
reference: ''
939ebb9f-afe2-4b2a-af40-adf943374da8: !Template
939ebb9f-afe2-4b2a-af40-adf943374da7: !Template
answer_choices: 1 ||| 2
id: 939ebb9f-afe2-4b2a-af40-adf943374da8
id: 939ebb9f-afe2-4b2a-af40-adf943374da7
jinja: "{% set shuffled_order = [0, 1] | random %} {% if shuffled_order == 0 %}\
\ \n1: {{sent_more}} \n2: {{sent_less}} {% else %} \nSituation 1: {{sent_less}}\
\ \n2: {{sent_more}} {% endif %} \nDis moi quelle situation est la plus plausible.\
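The templates above randomize the order in which the two sentences are presented via Jinja's `random` filter. A minimal standalone sketch of that pattern, rendered here with `jinja2` directly (field values are placeholders):

```
from jinja2 import Template

# Pick 0 or 1 at render time, then present sent_more/sent_less in that order.
t = Template(
    "{% set shuffled_order = [0, 1] | random %}"
    "{% if shuffled_order == 0 %}1: {{sent_more}}\n2: {{sent_less}}"
    "{% else %}1: {{sent_less}}\n2: {{sent_more}}{% endif %}"
)
print(t.render(sent_more="More stereotypical sentence.",
               sent_less="Less stereotypical sentence."))
```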
110 changes: 110 additions & 0 deletions promptsource/templates/Muennighoff/xwinograd/en/templates.yaml
@@ -0,0 +1,110 @@
dataset: Muennighoff/xwinograd
subset: en
templates:
28d31908-4aee-4545-aff2-7528cbf39197: !Template
answer_choices: '{{option1}} ||| {{option2}}'
id: 28d31908-4aee-4545-aff2-7528cbf39197
jinja: "{{sentence}}\nReplace the _ in the above sentence with the correct option:\
\ \n- {{option1}}\n- {{option2}}\n|||\n{% if answer == '1' %} {{option1}} {%\
\ else %} {{ option2 }} {% endif %}"
metadata: !TemplateMetadata
choices_in_prompt: true
languages:
- en
metrics:
- Accuracy
original_task: true
name: Replace
reference: ''
50ce5113-882f-4a9d-b21d-8d98b4644295: !Template
answer_choices: '{{option1}} ||| {{option2}}'
id: 50ce5113-882f-4a9d-b21d-8d98b4644295
jinja: 'Fill in the _ in the below sentence:

{{sentence}}


Choices:

- {{ option1 }}

- {{ option2 }}


Answer: ||| {% if answer == ''1'' %} {{option1}} {% else %} {{ option2 }} {%
endif %}'
metadata: !TemplateMetadata
choices_in_prompt: true
languages:
- en
metrics:
- Accuracy
original_task: true
name: fill in the blank
reference: ''
7f0f6d33-25e2-4394-b1f0-49a2a54767aa: !Template
answer_choices: True ||| False
id: 7f0f6d33-25e2-4394-b1f0-49a2a54767aa
jinja: 'The _ in the sentence below refers to {{option1}}. True or False?

{{sentence}}|||

{{answer_choices[answer|int - 1]}}'
metadata: !TemplateMetadata
choices_in_prompt: true
languages:
- en
metrics:
- Accuracy
original_task: false
name: True or False
reference: ''
80f9679e-7b6c-4ee7-a348-e905ed9aaf9e: !Template
answer_choices: '{{ option1 }} ||| {{ option2 }}'
id: 80f9679e-7b6c-4ee7-a348-e905ed9aaf9e
jinja: '{{ sentence }} In the previous sentence, does _ refer to {{ option1 }}
or {{ option2 }}? ||| {% if answer == ''1'' %} {{option1}} {% else %} {{ option2
}} {% endif %}'
metadata: !TemplateMetadata
choices_in_prompt: true
languages:
- en
metrics:
- Accuracy
original_task: true
name: does underscore refer to
reference: ''
bd40cf1f-bda2-4757-b1b5-f1a20a3f7202: !Template
answer_choices: '{{option1}} ||| {{option2}}'
id: bd40cf1f-bda2-4757-b1b5-f1a20a3f7202
jinja: '{{sentence}}

What does the _ in the above sentence refer to? {{ option1 }} or {{ option2
}}? ||| {% if answer == ''1'' %} {{option1}} {% else %} {{ option2 }} {% endif
%}'
metadata: !TemplateMetadata
choices_in_prompt: true
languages:
- en
metrics:
- Accuracy
original_task: true
name: underscore refer to
reference: ''
ec365d5d-bb5c-488c-93a0-4f90e6011c5d: !Template
answer_choices: '{{option1}} ||| {{option2}}'
id: ec365d5d-bb5c-488c-93a0-4f90e6011c5d
jinja: 'In the sentence below, does the _ stand for {{answer_choices[0]}} or {{answer_choices[1]}}?

{{sentence}}|||

{{answer_choices[answer | int - 1]}}'
metadata: !TemplateMetadata
choices_in_prompt: true
languages:
- en
metrics:
- Accuracy
original_task: true
name: stand for
reference: ''
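In these templates, `answer_choices` splits on `|||` into a two-element list, and the dataset's `answer` field is the string `'1'` or `'2'`, so `answer_choices[answer | int - 1]` selects the gold option. A minimal sketch of that indexing with `jinja2` (the choice values are placeholders):

```
from jinja2 import Template

# answer is the string '1' or '2'; answer|int - 1 maps it to a 0-based index.
t = Template("{{ answer_choices[answer | int - 1] }}")
print(t.render(answer_choices=["the trophy", "the suitcase"], answer="2"))
# -> the suitcase
```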
110 changes: 110 additions & 0 deletions promptsource/templates/Muennighoff/xwinograd/fr/templates.yaml
@@ -0,0 +1,110 @@
dataset: Muennighoff/xwinograd
subset: fr
templates:
38d31908-4aee-4545-aff2-7528cbf39197: !Template
answer_choices: '{{option1}} ||| {{option2}}'
id: 38d31908-4aee-4545-aff2-7528cbf39197
jinja: "{{sentence}}\nReplace the _ in the above sentence with the correct option:\
\ \n- {{option1}}\n- {{option2}}\n|||\n{% if answer == '1' %} {{option1}} {%\
\ else %} {{ option2 }} {% endif %}"
metadata: !TemplateMetadata
choices_in_prompt: true
languages:
- en
metrics:
- Accuracy
original_task: true
name: Replace
reference: ''
60ce5113-882f-4a9d-b21d-8d98b4644295: !Template
answer_choices: '{{option1}} ||| {{option2}}'
id: 60ce5113-882f-4a9d-b21d-8d98b4644295
jinja: 'Fill in the _ in the below sentence:

{{sentence}}


Choices:

- {{ option1 }}

- {{ option2 }}


Answer: ||| {% if answer == ''1'' %} {{option1}} {% else %} {{ option2 }} {%
endif %}'
metadata: !TemplateMetadata
choices_in_prompt: true
languages:
- en
metrics:
- Accuracy
original_task: true
name: fill in the blank
reference: ''
8f0f6d33-25e2-4394-b1f0-49a2a54767aa: !Template
answer_choices: True ||| False
id: 8f0f6d33-25e2-4394-b1f0-49a2a54767aa
jinja: 'The _ in the sentence below refers to {{option1}}. True or False?

{{sentence}}|||

{{answer_choices[answer|int - 1]}}'
metadata: !TemplateMetadata
choices_in_prompt: true
languages:
- en
metrics:
- Accuracy
original_task: false
name: True or False
reference: ''
90f9679e-7b6c-4ee7-a348-e905ed9aaf9e: !Template
answer_choices: '{{ option1 }} ||| {{ option2 }}'
id: 90f9679e-7b6c-4ee7-a348-e905ed9aaf9e
jinja: '{{ sentence }} In the previous sentence, does _ refer to {{ option1 }}
or {{ option2 }}? ||| {% if answer == ''1'' %} {{option1}} {% else %} {{ option2
}} {% endif %}'
metadata: !TemplateMetadata
choices_in_prompt: true
languages:
- en
metrics:
- Accuracy
original_task: true
name: does underscore refer to
reference: ''
cd40cf1f-bda2-4757-b1b5-f1a20a3f7202: !Template
answer_choices: '{{option1}} ||| {{option2}}'
id: cd40cf1f-bda2-4757-b1b5-f1a20a3f7202
jinja: '{{sentence}}

What does the _ in the above sentence refer to? {{ option1 }} or {{ option2
}}? ||| {% if answer == ''1'' %} {{option1}} {% else %} {{ option2 }} {% endif
%}'
metadata: !TemplateMetadata
choices_in_prompt: true
languages:
- en
metrics:
- Accuracy
original_task: true
name: underscore refer to
reference: ''
fc365d5d-bb5c-488c-93a0-4f90e6011c5d: !Template
answer_choices: '{{option1}} ||| {{option2}}'
id: fc365d5d-bb5c-488c-93a0-4f90e6011c5d
jinja: 'In the sentence below, does the _ stand for {{answer_choices[0]}} or {{answer_choices[1]}}?

{{sentence}}|||

{{answer_choices[answer | int - 1]}}'
metadata: !TemplateMetadata
choices_in_prompt: true
languages:
- en
metrics:
- Accuracy
original_task: true
name: stand for
reference: ''
110 changes: 110 additions & 0 deletions promptsource/templates/Muennighoff/xwinograd/pt/templates.yaml
@@ -0,0 +1,110 @@
dataset: Muennighoff/xwinograd
subset: pt
templates:
38d31908-4aee-4545-aff2-7528cbf39197: !Template
answer_choices: '{{option1}} ||| {{option2}}'
id: 38d31908-4aee-4545-aff2-7528cbf39197
jinja: "{{sentence}}\nReplace the _ in the above sentence with the correct option:\
\ \n- {{option1}}\n- {{option2}}\n|||\n{% if answer == '1' %} {{option1}} {%\
\ else %} {{ option2 }} {% endif %}"
metadata: !TemplateMetadata
choices_in_prompt: true
languages:
- en
metrics:
- Accuracy
original_task: true
name: Replace
reference: ''
60ce5113-882f-4a9d-b21d-8d98b4644295: !Template
answer_choices: '{{option1}} ||| {{option2}}'
id: 60ce5113-882f-4a9d-b21d-8d98b4644295
jinja: 'Fill in the _ in the below sentence:

{{sentence}}


Choices:

- {{ option1 }}

- {{ option2 }}


Answer: ||| {% if answer == ''1'' %} {{option1}} {% else %} {{ option2 }} {%
endif %}'
metadata: !TemplateMetadata
choices_in_prompt: true
languages:
- en
metrics:
- Accuracy
original_task: true
name: fill in the blank
reference: ''
8f0f6d33-25e2-4394-b1f0-49a2a54767aa: !Template
answer_choices: True ||| False
id: 8f0f6d33-25e2-4394-b1f0-49a2a54767aa
jinja: 'The _ in the sentence below refers to {{option1}}. True or False?

{{sentence}}|||

{{answer_choices[answer|int - 1]}}'
metadata: !TemplateMetadata
choices_in_prompt: true
languages:
- en
metrics:
- Accuracy
original_task: false
name: True or False
reference: ''
90f9679e-7b6c-4ee7-a348-e905ed9aaf9e: !Template
answer_choices: '{{ option1 }} ||| {{ option2 }}'
id: 90f9679e-7b6c-4ee7-a348-e905ed9aaf9e
jinja: '{{ sentence }} In the previous sentence, does _ refer to {{ option1 }}
or {{ option2 }}? ||| {% if answer == ''1'' %} {{option1}} {% else %} {{ option2
}} {% endif %}'
metadata: !TemplateMetadata
choices_in_prompt: true
languages:
- en
metrics:
- Accuracy
original_task: true
name: does underscore refer to
reference: ''
cd40cf1f-bda2-4757-b1b5-f1a20a3f7202: !Template
answer_choices: '{{option1}} ||| {{option2}}'
id: cd40cf1f-bda2-4757-b1b5-f1a20a3f7202
jinja: '{{sentence}}

What does the _ in the above sentence refer to? {{ option1 }} or {{ option2
}}? ||| {% if answer == ''1'' %} {{option1}} {% else %} {{ option2 }} {% endif
%}'
metadata: !TemplateMetadata
choices_in_prompt: true
languages:
- en
metrics:
- Accuracy
original_task: true
name: underscore refer to
reference: ''
fc365d5d-bb5c-488c-93a0-4f90e6011c5d: !Template
answer_choices: '{{option1}} ||| {{option2}}'
id: fc365d5d-bb5c-488c-93a0-4f90e6011c5d
jinja: 'In the sentence below, does the _ stand for {{answer_choices[0]}} or {{answer_choices[1]}}?

{{sentence}}|||

{{answer_choices[answer | int - 1]}}'
metadata: !TemplateMetadata
choices_in_prompt: true
languages:
- en
metrics:
- Accuracy
original_task: true
name: stand for
reference: ''