Positional embeddings are introduced into a transformer in order to add positional information to a word embedding.
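For concreteness, here is a minimal sketch of that additive scheme, using the sinusoidal encoding from the original Transformer paper ("Attention Is All You Need") in NumPy; the sequence length and model dimension below are illustrative assumptions, not values from any particular model:

```python
import numpy as np

def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
    """Standard sinusoidal positional encoding (Vaswani et al., 2017)."""
    positions = np.arange(seq_len)[:, None]      # shape (seq_len, 1)
    dims = np.arange(d_model)[None, :]           # shape (1, d_model)
    angle_rates = 1.0 / np.power(10000.0, (2 * (dims // 2)) / d_model)
    angles = positions * angle_rates             # shape (seq_len, d_model)
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles[:, 0::2])        # even dimensions: sine
    pe[:, 1::2] = np.cos(angles[:, 1::2])        # odd dimensions: cosine
    return pe

# Hypothetical word embeddings for a 5-token sequence with model dimension 16.
word_emb = np.random.randn(5, 16)
pe = sinusoidal_positional_encoding(seq_len=5, d_model=16)

# The transformer input is the element-wise sum; once this sum has passed
# through the model's layers, the two components are mixed together.
transformer_input = word_emb + pe
```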
Now, suppose we have an existing data embedding, which could come from any domain (text or images). We no longer have the original text/image that was encoded, only the final embedding. Is it possible to extract the positional embedding from that final embedding, or does positional embedding have to be applied to the raw input (text/image)? Is there any way around this when we only have the embedding and not the original raw input?
