Yes, you are getting the correct output.
import torch
Input = torch.tensor([[1.,0.0,0.0],[1.,0.0,0.0]])
label = torch.tensor([0, 0])
print(torch.nn.functional.cross_entropy(Input,label))
# tensor(0.5514)
The torch.nn.functional.cross_entropy function combines log_softmax and nll_loss in a single call. It is equivalent to:
torch.nn.functional.nll_loss(torch.nn.functional.log_softmax(Input, 1), label)
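To make that decomposition concrete, here is a minimal sketch (using the same Input and label as above) that reproduces the loss step by step; the intermediate variable names are mine, not part of the PyTorch API:

import torch

Input = torch.tensor([[1., 0., 0.], [1., 0., 0.]])
label = torch.tensor([0, 0])

# Step 1: softmax over the class dimension
probs = torch.softmax(Input, dim=1)
# Step 2: log of the probabilities (same result as log_softmax, up to numerics)
log_probs = probs.log()
# Step 3: pick the log-probability of the target class in each row, negate it,
# and average over the batch (this is what nll_loss does with its default
# reduction='mean')
loss = -log_probs[torch.arange(len(label)), label].mean()
print(loss)  # tensor(0.5514)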
Code for reference:
print(torch.nn.functional.softmax(Input, 1).log())
# tensor([[-0.5514, -1.5514, -1.5514],
#         [-0.5514, -1.5514, -1.5514]])
print(torch.nn.functional.log_softmax(Input, 1))
# tensor([[-0.5514, -1.5514, -1.5514],
#         [-0.5514, -1.5514, -1.5514]])
print(torch.nn.functional.nll_loss(torch.nn.functional.log_softmax(Input, 1), label))
# tensor(0.5514)
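The value -0.5514 itself can also be checked by hand: for the row [1., 0., 0.], softmax assigns probability e^1 / (e^1 + e^0 + e^0) to class 0, and the loss is the negative log of that probability. A quick sanity check (not part of the code above):

import math

p0 = math.exp(1.0) / (math.exp(1.0) + 2 * math.exp(0.0))
print(-math.log(p0))  # ≈ 0.5514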
As you can see, torch.nn.functional.cross_entropy(Input, label) gives the same result as torch.nn.functional.nll_loss(torch.nn.functional.log_softmax(Input, 1), label).
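If you want to confirm the equivalence numerically rather than by eye, a small check with torch.allclose works (just a sanity-check snippet, not required):

import torch
import torch.nn.functional as F

Input = torch.tensor([[1., 0., 0.], [1., 0., 0.]])
label = torch.tensor([0, 0])

ce = F.cross_entropy(Input, label)
nll = F.nll_loss(F.log_softmax(Input, dim=1), label)
print(torch.allclose(ce, nll))  # True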