
All Questions

0 votes
1 answer
45 views

Why don't I get the same result as with TensorFlow's method when I write my own expression?

I'm learning logistic regression and I want to calculate the value of the cross-entropy loss function while minimizing it via gradient descent, but when I use TensorFlow's ...
Ereghard • 124
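A common source of the discrepancy asked about here is taking log(0), or averaging differently than the library does. A minimal NumPy sketch of the binary cross-entropy formula, with clipping to avoid log(0) (the function name and epsilon are illustrative, not TensorFlow's API):

```python
import numpy as np

def binary_cross_entropy(y, p, eps=1e-12):
    # mean binary cross-entropy; clipping keeps log() away from 0 and 1
    p = np.clip(p, eps, 1 - eps)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

y = np.array([1.0, 0.0, 1.0, 0.0])
p = np.array([0.9, 0.2, 0.8, 0.1])
loss = binary_cross_entropy(y, p)
```

Library implementations typically apply similar stabilization internally, so an unclipped hand-rolled version can diverge from them near p = 0 or p = 1.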
1 vote
0 answers
52 views

How is error calculated in a simple logistic regression neural network?

Below is a dataframe reporting the results of training on a binary classification problem: columns a and b represent x and y respectively, and the structure of the neural network ...
Sunny • 302
-1 votes
1 answer
2k views

Difference between Logistic Loss and Cross Entropy Loss

I'm confused about logistic loss and cross-entropy loss in the binary classification scenario. According to Wikipedia (https://en.wikipedia.org/wiki/Loss_functions_for_classification), the logistic loss ...
YQ.Wang • 1,177
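For the binary case, the two losses in this question are the same function written in two label conventions: logistic loss uses y ∈ {−1, +1} applied to the raw score, while cross-entropy uses y ∈ {0, 1} applied to the sigmoid probability. A small NumPy check (function names are illustrative):

```python
import numpy as np

def logistic_loss(y_pm, f):
    # logistic loss with labels y in {-1, +1}; f is the raw model score
    return np.log1p(np.exp(-y_pm * f))

def cross_entropy_sigmoid(y01, f):
    # binary cross-entropy with labels y in {0, 1}; p = sigmoid(f)
    p = 1.0 / (1.0 + np.exp(-f))
    return -(y01 * np.log(p) + (1 - y01) * np.log(1 - p))
```

Evaluating both at the same score, e.g. f = 1.3, gives identical values for corresponding labels (+1 vs 1, and −1 vs 0), so the two names describe one loss.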
1 vote
2 answers
2k views

How to write custom CrossEntropyLoss

I am learning logistic regression with PyTorch, and to understand it better I am defining a custom CrossEntropyLoss as below: def softmax(x): exp_x = torch.exp(x) sum_x = torch.sum(exp_x, dim=1,...
A.E • 1,023
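The usual pitfall in a hand-written softmax like the one sketched in this question is overflow in exp() for large logits. A NumPy sketch of the numerically stable version, which subtracts the row max before exponentiating (the PyTorch version is analogous; names are illustrative):

```python
import numpy as np

def softmax(x):
    # subtracting the row max leaves the result unchanged but avoids overflow
    z = x - x.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(logits, targets):
    # targets: integer class indices, shape (N,)
    p = softmax(logits)
    n = logits.shape[0]
    return -np.mean(np.log(p[np.arange(n), targets]))
```

With uniform logits over two classes the loss is log 2, a handy sanity check for any custom implementation.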
0 votes
1 answer
206 views

How to make my logistic regression faster

I have to do simple logistic regression (only in NumPy; I can't use PyTorch or TensorFlow). Data: part of MNIST. Goal: I should have accuracy of about 86%. Unfortunately I only get about 70%, and my ...
Janek Podwysocki
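For the speed question above, the usual fix is to replace any per-sample Python loop with one matrix product per gradient step. A sketch of fully vectorized batch gradient descent in NumPy (hyperparameters are illustrative):

```python
import numpy as np

def sigmoid(z):
    # clip the argument so exp() never overflows
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def fit_logistic(X, y, lr=0.1, epochs=200):
    # fully vectorized batch gradient descent: no per-sample Python loops
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        grad_w = X.T @ (p - y) / n   # one matrix product per step
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

On MNIST-sized data the `X.T @ (p - y)` product dominates the per-epoch cost, so epochs become cheap; an accuracy gap like 70% vs 86% is more often a learning-rate or feature-scaling issue than a speed issue.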
5 votes
5 answers
11k views

Comparing MSE loss and cross-entropy loss in terms of convergence

For a very simple classification problem where I have a target vector [0,0,0,....0] and a prediction vector [0,0.1,0.2,....1] would cross-entropy loss converge better/faster or would MSE loss? When I ...
aa1 • 803
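One concrete reason cross-entropy often converges faster with a sigmoid output: its gradient with respect to the logit simplifies to (p − y), while the MSE gradient carries an extra p(1 − p) factor that vanishes when the unit saturates. A small NumPy illustration (function names are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_ce(z, y):
    # d/dz of cross-entropy(sigmoid(z), y) simplifies to p - y
    return sigmoid(z) - y

def grad_mse(z, y):
    # d/dz of 0.5 * (sigmoid(z) - y)^2 carries an extra p * (1 - p) factor
    p = sigmoid(z)
    return (p - y) * p * (1 - p)
```

At z = −8 with true label 1 (a saturated, badly wrong prediction), the cross-entropy gradient is close to −1 while the MSE gradient is on the order of 3e-4, so the unit barely moves under MSE.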
101 votes
3 answers
67k views

How to choose cross-entropy loss in TensorFlow?

Classification problems, such as logistic regression or multinomial logistic regression, optimize a cross-entropy loss. Normally, the cross-entropy layer follows the softmax layer, which produces ...
Maxim • 53.8k
0 votes
1 answer
145 views

Does binary log loss exclude one part of equation based on y?

Assuming the log loss equation to be: logLoss = −(1/N) ∑_{i=1}^{N} [y_i log(p_i) + (1−y_i) log(1−p_i)], where N is the number of samples, y_i is the actual value of the dependent variable, and p_i is the ...
Liam Hanninen
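Yes: because each y_i is 0 or 1, exactly one of the two terms survives per sample; the other is multiplied by zero. A tiny check (the helper is illustrative):

```python
import numpy as np

def log_loss_terms(y, p):
    # the two per-sample terms of binary log loss; y is 0 or 1
    return y * np.log(p), (1 - y) * np.log(1 - p)
```

For y = 1 the second term is exactly 0 and only log(p) contributes; for y = 0 only log(1 − p) contributes.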
1 vote
1 answer
551 views

Need help understanding the Caffe code for SigmoidCrossEntropyLossLayer for multi-label loss

I need help understanding the Caffe function SigmoidCrossEntropyLossLayer, which is the cross-entropy error with logistic activation. Basically, the cross-entropy error for a single example with ...
auro • 1,089