❓ Questions & Help
SYSTEM
OS: Linux pop-os 5.0.0
Python version: 3.6.8
Torch version: 1.3.0
Transformers version: 2.1.1
I am running a Linux VM with the above software versions on a Windows 10 laptop.
I am running the following code:
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

sentence = 'Natural language processing tasks are typically approached with'

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
context_tokens = tokenizer.encode(sentence, add_special_tokens=False)
context = torch.tensor(context_tokens, dtype=torch.long)

num_samples = 1
context = context.unsqueeze(0).repeat(num_samples, 1)
generated = context

model = GPT2LMHeadModel.from_pretrained('gpt2')
model.eval()

length = 20
with torch.no_grad():
    for jj in range(5):
        for _ in range(length):
            # Greedy decoding: always append the single highest-probability next token
            outputs = model(generated)
            next_token_logits = outputs[0][:, -1, :]
            next_token = torch.argmax(next_token_logits, dim=-1).unsqueeze(-1)
            generated = torch.cat((generated, next_token), dim=1)

# Strip the prompt tokens and decode only the generated continuation
out = generated
out = out[:, len(context_tokens):].tolist()
for o in out:
    text = tokenizer.decode(o, clean_up_tokenization_spaces=True)
    print(text)
What I noticed is that GPT-2 starts to produce repetitive text (see the output below) with this greedy decoding approach. I am not sure of the best way to prevent this from happening and was wondering if others had any ideas. Thank you in advance!
OUTPUT
a single task, such as a word search, and the task is then repeated. The task is then repeated for each word in the search.
The task is then repeated for each word in the search. The task is then repeated for each word in the search. The task is then repeated for each word in the search. The task is then repeated for each word in the search. The task is then repeated for each word in the search. The task is then repeated for each word in
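In case it helps frame the question: one direction I have been considering (but have not verified fixes this) is to replace the greedy argmax with sampling from the top-k logits, which I believe is roughly what the run_generation.py example script in this repo does. A rough sketch of how the body of the inner loop would change, where top_k and temperature are placeholder values I picked for illustration:

temperature = 0.7  # placeholder; softens the distribution before sampling
top_k = 40         # placeholder; number of candidate tokens to sample from

outputs = model(generated)
next_token_logits = outputs[0][:, -1, :] / temperature
# Keep only the top_k highest-scoring tokens and sample among them
top_values, top_indices = torch.topk(next_token_logits, top_k, dim=-1)
probs = torch.softmax(top_values, dim=-1)
sampled = torch.multinomial(probs, 1)
next_token = top_indices.gather(-1, sampled)
generated = torch.cat((generated, next_token), dim=1)

Whether sampling like this (or something like a repetition penalty) is the recommended way to avoid the repetition is exactly what I am hoping to learn.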