The problem likely arises from meta-text at the top of the .csv or .txt file: lines that are not part of the data itself but get copied along when the content is loaded.
I think it is better to first read your text into a list (or a string), clean it up, and only then build the DataFrame, especially when your data is not too large.
import csv
import pandas as pd

arrays = []
path = "C:\\Users\\Souro\\Downloads\\AXISBANK.csv"
with open(path, 'r') as f:
    reader = csv.reader(f)
    for row in reader:
        row = str(row).replace('\\', '')  # delete backslashes
        arrays.append(row)
Then take a look at arrays[:10] to find where the metadata ends, delete the unwanted rows, and convert the arrays list into a DataFrame. For instance:
arrays = arrays[9:]
df = pd.DataFrame(arrays[1:], columns=arrays[0])  # arrays[0] holds the column names
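As an alternative to the manual cleanup above: if you know in advance how many metadata lines the file starts with, pandas can skip them directly with the skiprows parameter of read_csv. A self-contained sketch (sample.csv and the two metadata lines are made up here; for your file you would use the real path and e.g. skiprows=9):

```python
import pandas as pd

# Build a tiny CSV with two fake metadata lines, then skip them on read.
with open('sample.csv', 'w') as f:
    f.write('Downloaded from example.com\n')  # metadata line 1
    f.write('Symbol: AXISBANK\n')             # metadata line 2
    f.write('Date,Close\n')                   # real header
    f.write('2020-01-01,100.5\n')             # real data

df = pd.read_csv('sample.csv', skiprows=2)
print(df.columns.tolist())  # ['Date', 'Close']
```

This avoids the intermediate list entirely, at the cost of having to know the number of metadata lines beforehand.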
About your comments:
If you look at the text in each row (print each row), you will see a backslash at the end of each row, so replace('\\', '') substitutes each backslash with nothing (''). Why two \? That is how a backslash is written in a string literal; otherwise it won't be recognized.
row = str(row).replace('\\', '')
We are substituting each backslash with nothing (''), effectively deleting it. Why '\\'? The backslash usually introduces an escape sequence (e.g. you can write '\n' for a newline character), so to mean a literal backslash you have to escape it (raw strings like r'a\b' also work, but r'\' does not: the creator of Python chose to make that a syntax error instead).
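A quick demonstration of the escaping rules described above (the string values are made up for illustration):

```python
s = 'a\\b'       # the escape sequence '\\' produces a single backslash
print(len(s))    # 3 -> the characters 'a', '\', 'b'

# replace('\\', '') removes that backslash, exactly as in the answer's code
print(s.replace('\\', ''))  # ab

raw = r'a\b'     # raw string: the backslash is kept literally, no escaping
print(raw == s)  # True

# r'\' would be a SyntaxError: even a raw string cannot end in a lone backslash
```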
And
open('text.txt', 'r')
It opens the file text.txt in read-only mode ('r').
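A minimal sketch of the common modes (text.txt is just an example file name, created here so the snippet is runnable):

```python
# 'w' creates the file (or overwrites it) for writing
with open('text.txt', 'w') as f:
    f.write('hello\n')

# 'r' opens it for reading only; this is also the default mode,
# so open('text.txt') is equivalent. A missing file raises FileNotFoundError.
with open('text.txt', 'r') as f:
    content = f.read()

print(content)  # hello
```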