I have a CSV file with 73 rows of data and 16 columns, and I want to read it into a pandas DataFrame. But when I do
data_dataframe = pd.read_csv(csv_file, sep=',')
I get a DataFrame with 3152 rows and 22 columns, where only 73 rows and 16 columns contain actual data and the rest are pure NaN values. How can I tell pandas to read only the valid rows and columns and skip all these extra NaN ones?
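For context, this is roughly the after-the-fact cleanup I could apply (dropping rows and columns that are entirely NaN), but I'd prefer if read_csv returned only the valid data in the first place. The file path here is just a placeholder for my actual file:

import pandas as pd

csv_file = 'data.csv'  # placeholder path for my actual file

# Read the file as before; the extra all-NaN rows and columns come along with it.
data_dataframe = pd.read_csv(csv_file, sep=',')

# Drop columns, then rows, that contain nothing but NaN values.
data_dataframe = data_dataframe.dropna(axis=1, how='all')
data_dataframe = data_dataframe.dropna(axis=0, how='all')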