1,528 questions
-1
votes
0
answers
13
views
Designing a model with a one-dimensional input and a higher-dimensional output
I have a dataset where each sample has 70 features and one target. I want to design a model that takes the target values as input and outputs the 70 features.
It's somewhat of a conditional data ...
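A minimal PyTorch sketch of such a conditional mapping (layer widths are illustrative assumptions, not the asker's design):
import torch
import torch.nn as nn

# map a single scalar target to a 70-dimensional feature vector
model = nn.Sequential(
    nn.Linear(1, 64), nn.ReLU(),
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 70),
)

y = torch.randn(32, 1)   # batch of 32 scalar targets
x_hat = model(y)         # predicted features, shape (32, 70)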
0
votes
0
answers
17
views
AutoEncoder reconstruction error is not decreasing as training data increases
I'm using an AE to compress 58-dimensional data into 8 dimensions. I have used the same AE architecture with different numbers of data points. All the data points are independent of each other and ...
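For reference, a minimal PyTorch sketch of a 58-to-8 autoencoder of the kind described (layer sizes are assumptions):
import torch
import torch.nn as nn

class AE(nn.Module):
    def __init__(self, in_dim=58, latent_dim=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU(),
                                     nn.Linear(32, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                     nn.Linear(32, in_dim))
    def forward(self, x):
        return self.decoder(self.encoder(x))

x = torch.randn(16, 58)
recon_error = nn.functional.mse_loss(AE()(x), x)  # per-batch reconstruction error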
0
votes
1
answer
71
views
CNN Autoencoder takes a very long time to train
I have been training a CNN Autoencoder on binary images (pixels are either 0 or 1) of size 64x64. The model is shown below:
import torch
import torch.nn as nn
import torch.nn.functional as F
class ...
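The class definition is truncated above; a plausible sketch of such a model, reusing the imports shown (an assumed architecture, not the asker's exact one):
class ConvAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid())  # binary pixels pair naturally with BCE loss
    def forward(self, x):
        return self.decoder(self.encoder(x))
For slow training, the usual first checks are moving the model and batches to the GPU and using a DataLoader with num_workers > 0.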
1
vote
1
answer
47
views
LogVar layer of a VAE only returns zeros
I'm building a variational autoencoder (VAE) with tfjs.
For now I'm only exploring the fashionMNIST dataset with a simple model as follows:
input layer (28*28*1)
flatten
intermediate_1 (dense 50 ...
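The question uses tfjs; for illustration, here is the same two-headed encoder idea sketched in PyTorch (dimensions follow the description above):
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_dim=28 * 28, hidden=50, latent=10):
        super().__init__()
        self.hidden = nn.Linear(in_dim, hidden)
        self.mu = nn.Linear(hidden, latent)      # one head for the mean
        self.logvar = nn.Linear(hidden, latent)  # one head for the log-variance
    def forward(self, x):
        h = torch.relu(self.hidden(x.flatten(1)))
        return self.mu(h), self.logvar(h)
Note that the Gaussian KL term is minimized exactly at mu = 0 and logvar = 0, so an all-zero logvar output can simply mean the KL term is dominating the reconstruction term (posterior collapse) rather than indicating a wiring bug.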
0
votes
0
answers
56
views
Autoencoder for multi-label classification task
I'm working on a multi-label classification problem using an autoencoder-based neural network built in PyTorch. The overall idea of my approach is as follows:
I load my dataset from a CSV file, ...
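One common shape for this approach, sketched under assumed dimensions (not the asker's actual code): share an encoder between a reconstruction decoder and a classification head, with one independent sigmoid per label.
import torch
import torch.nn as nn

class AEClassifier(nn.Module):
    def __init__(self, in_dim=100, latent=16, n_labels=5):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent))
        self.decoder = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                     nn.Linear(64, in_dim))
        self.head = nn.Linear(latent, n_labels)  # one logit per label
    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.head(z)

# multi-label targets use BCEWithLogitsLoss, not softmax cross-entropy
criterion = nn.BCEWithLogitsLoss()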
0
votes
1
answer
65
views
Gradient tape for a custom loss function
I'm currently working with an autoencoder in the hope of testing its accuracy vs. PCA. My tutor asked me to add a custom loss function that involves the derivatives of the decoder output with respect to the ...
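Derivatives of the decoder output with respect to its input can be taken with a nested tf.GradientTape; a minimal sketch where the decoder, latent batch, and penalty are all placeholder assumptions:
import tensorflow as tf

decoder = tf.keras.Sequential([tf.keras.layers.Dense(70)])  # placeholder decoder
z = tf.random.normal((16, 8))                               # placeholder latent batch

with tf.GradientTape() as outer:
    with tf.GradientTape() as inner:
        inner.watch(z)
        x_hat = decoder(z)
    dx_dz = inner.gradient(x_hat, z)          # derivative of output w.r.t. input
    loss = tf.reduce_mean(tf.square(dx_dz))   # example derivative-based penalty
grads = outer.gradient(loss, decoder.trainable_variables)  # for the optimizer step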
0
votes
0
answers
40
views
PyTorch LSTM-VAE not able to learn
I'm having trouble getting an LSTM-VAE to work for anomaly detection on multivariate signals (of non-constant duration). I found some information in this forum and in the original papers on applying good practices. Even, ...
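For comparison, a minimal PyTorch sketch of the encoder half of an LSTM-VAE (dimensions are illustrative; variable-duration batches would additionally need padding and packing):
import torch
import torch.nn as nn

class LSTMVAEEncoder(nn.Module):
    def __init__(self, n_features=6, hidden=64, latent=16):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.mu = nn.Linear(hidden, latent)
        self.logvar = nn.Linear(hidden, latent)
    def forward(self, x):              # x: (batch, time, n_features)
        _, (h, _) = self.lstm(x)       # h: last hidden state, (1, batch, hidden)
        h = h.squeeze(0)
        return self.mu(h), self.logvar(h)

mu, logvar = LSTMVAEEncoder()(torch.randn(4, 100, 6))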
0
votes
0
answers
47
views
Is it possible to feed embeddings generated by BERT to an LSTM-based autoencoder to get the latent space?
I've just learned about how BERT produces embeddings. I might not understand it fully.
I was thinking of doing a project that leverages those embeddings, feeding them to an autoencoder to generate latent ...
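In principle yes: BERT's last hidden states are a sequence of vectors that an LSTM autoencoder can consume. A hedged sketch with Hugging Face transformers (model name and sizes are examples):
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer(["an example sentence"], return_tensors="pt")
with torch.no_grad():
    emb = bert(**inputs).last_hidden_state    # (batch, seq_len, 768)

# the embedding sequence feeds an LSTM encoder whose final hidden
# state serves as the latent space
encoder = torch.nn.LSTM(input_size=768, hidden_size=32, batch_first=True)
_, (latent, _) = encoder(emb)                 # latent: (1, batch, 32)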
0
votes
0
answers
22
views
LSTM autoencoder very poor results
I am working on a blockchain transaction anomaly detection system and testing various models. Currently I am stuck on an LSTM autoencoder. I have preprocessed transaction data from the Ethereum network (used ...
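The excerpt is cut off, but the usual scoring step for this kind of setup is to threshold per-transaction reconstruction error; a sketch on placeholder values:
import numpy as np

recon_errors = np.random.rand(1000)          # placeholder per-sample errors
threshold = np.percentile(recon_errors, 99)  # e.g. flag the top 1 percent
anomalies = recon_errors > threshold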
0
votes
0
answers
9
views
Facing ResourceExhaustedError while training an autoencoder using k-fold cross validation
I'm trying to train a stacked denoising autoencoder. Also, since I have a small dataset, I implement 10-fold cross-validation to find the best hyperparameters. In every fold I build a new model and ...
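In Keras, memory commonly accumulates across folds because every new model adds to the global state; clearing the session between folds is the usual remedy. A sketch where build_model stands in for the asker's model factory:
import gc
import tensorflow as tf

def build_model():
    return tf.keras.Sequential([tf.keras.layers.Dense(8, input_shape=(58,))])

for fold in range(10):
    tf.keras.backend.clear_session()  # release graph/GPU memory from the last fold
    gc.collect()
    model = build_model()
    # ... compile and fit on this fold's split ...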
0
votes
0
answers
22
views
VAE with Gumbel softmax on MNIST dataset
What could be causing the KL loss to go to 0? The reconstruction loss is small, but every image is the same and does not represent any digit.
Here is the encoder/decoder architecture I used; I think the ...
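A KL that sinks to zero while samples all look alike suggests the posterior has collapsed onto the prior; common mitigations are annealing the KL weight and the Gumbel temperature. A sketch using PyTorch's built-in relaxation (schedule values are illustrative):
import torch
import torch.nn.functional as F

logits = torch.randn(16, 10)                       # placeholder categorical logits
z = F.gumbel_softmax(logits, tau=1.0, hard=False)  # differentiable relaxed sample

# ramp the KL weight from 0 to 1 so reconstruction learns first
step, warmup_steps = 100, 1000
beta = min(1.0, step / warmup_steps)
# total_loss = recon_loss + beta * kl_loss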
-1
votes
1
answer
35
views
How do I display the images generated by an autoencoder?
I created an autoencoder using Python, with no errors. However, I do not know how to display the images generated by the autoencoder. The code of the autoencoder is shown below:
import ...
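The code is truncated above; displaying reconstructions is typically a matplotlib call on the decoder output. A sketch assuming outputs already converted to a NumPy array of shape (n, 28, 28):
import numpy as np
import matplotlib.pyplot as plt

recon = np.random.rand(8, 28, 28)  # placeholder: replace with the autoencoder output
fig, axes = plt.subplots(1, 8, figsize=(12, 2))
for ax, img in zip(axes, recon):
    ax.imshow(img, cmap="gray")
    ax.axis("off")
plt.show()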
0
votes
0
answers
27
views
Autoencoder Training Loss Doesn't Decrease Despite Cloned Code and Dataset
This is my first post on Stack Overflow, so thanks for any responses!
I'm learning about anomaly detection using autoencoders and found a useful-looking GitHub repo notebook (link here).
I've cloned the repo to my ...
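A common first sanity check in this situation, sketched on a placeholder model: confirm the network can overfit a single fixed batch; if it cannot, the bug is in the model or loss wiring rather than the data or schedule.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 8), nn.ReLU(), nn.Linear(8, 20))  # placeholder AE
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
xb = torch.randn(32, 20)  # one fixed batch

for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(xb), xb)
    loss.backward()
    opt.step()
print(loss.item())  # should approach zero on a healthy model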
0
votes
0
answers
17
views
Precision-Recall curve for LSTM Autoencoder results
I am using an LSTM autoencoder to detect anomalies in a time series. I use the reconstruction error to check if a point is anomalous or not. However, my question is: can I use the ...
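If labeled anomalies are available, the reconstruction error can serve directly as the score for a precision-recall curve; a sketch with scikit-learn on placeholder arrays:
import numpy as np
from sklearn.metrics import auc, precision_recall_curve

y_true = np.random.randint(0, 2, size=200)  # placeholder: 1 marks an anomaly
errors = np.random.rand(200)                # placeholder reconstruction errors

precision, recall, thresholds = precision_recall_curve(y_true, errors)
pr_auc = auc(recall, precision)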
0
votes
0
answers
88
views
Learning rate for Autoencoder
If I set the lower bound to zero, it looks linear; otherwise it looks a little better. Is this a good/high/low learning rate?
Autoencoder, batch size 16, learning rate 3e-5, and loss function root ...
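One empirical way to judge a learning rate is a small sweep around the current value and a comparison of the final losses; a sketch on placeholder data and model:
import torch
import torch.nn as nn

x = torch.randn(256, 58)  # placeholder data
for lr in (3e-6, 3e-5, 3e-4, 3e-3):
    model = nn.Sequential(nn.Linear(58, 8), nn.ReLU(), nn.Linear(8, 58))
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), x)
        loss.backward()
        opt.step()
    print(f"lr={lr:.0e}  final loss={loss.item():.4f}")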