ALBERT pretrained question-answering example

from transformers import AlbertTokenizer, AlbertForQuestionAnswering
import torch

tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
model = AlbertForQuestionAnswering.from_pretrained('albert-base-v2')

question, text = "Who was Jim Henson?", "Jim Henson was a nice puppet"
input_dict = tokenizer(question, text, return_tensors='pt')

# The model returns an output object with start and end logits
outputs = model(**input_dict)
start_scores, end_scores = outputs.start_logits, outputs.end_logits

# Pick the highest-scoring start and end positions of the answer span
start_index = torch.argmax(start_scores)
end_index = torch.argmax(end_scores)

# Convert the token IDs in that span back into a readable string
answer_tokens = tokenizer.convert_ids_to_tokens(input_dict['input_ids'][0][start_index:end_index + 1])
print(tokenizer.convert_tokens_to_string(answer_tokens))

Here is what the above code is doing:
1. We load the pre-trained model and tokenizer.
2. We tokenize the question and context together into a single dictionary of model inputs (input IDs, token type IDs, attention mask) as PyTorch tensors.
3. We pass those inputs to the model and read the start and end logits from its output.
4. We use torch.argmax() to find the indices of the highest-scoring start and end positions of the answer span.
5. We use tokenizer.convert_ids_to_tokens() to turn the token IDs in that span back into tokens, then join them into a string.
6. We print the answer.
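The argmax span selection in steps 4 and 5 can be sketched with plain tensors, without loading the model; the scores and token strings below are made-up placeholders, not real model outputs:

```python
import torch

# Toy start/end scores standing in for the model's logits (values are made up)
start_scores = torch.tensor([0.1, 0.2, 3.0, 0.1])
end_scores = torch.tensor([0.0, 0.1, 0.2, 2.5])

# Highest-scoring start and end positions of the answer span
start_index = torch.argmax(start_scores).item()
end_index = torch.argmax(end_scores).item()

# Hypothetical token strings at the same positions
tokens = ["jim", "henson", "was", "puppet"]
answer = " ".join(tokens[start_index:end_index + 1])
print(answer)  # → was puppet
```

With a real model, start_scores and end_scores would be the logits returned by AlbertForQuestionAnswering, and the tokens would come from the tokenizer.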