
For example, I have a dataframe

df = pd.DataFrame({'key': ['one', 'three', 'two', 'one'], 'B': [1, 2, 3, 4], 'C': [-1, 8, 9, 11]})

I want the Excel output to be sorted by the key column, with each unique key value printed only on its first row, an empty row between different key values, and an empty column between the key column and B.

This is what I want to have in the output:

Key     B  C
One     1  -1
        4  11

Three   2  8

Two     3  9

What would be the most compact way to accomplish this? Thanks.


1 Answer


Got your point.

This is a little odd, though: why not first rearrange your collection into the right order?

Assuming you are using the xlwt module, first adjust your collection, then merge the duplicate key cells to get your .xls file right:

#!/usr/bin/env python
# encoding: utf-8

import xlwt

df = {'key': ['one', 'three', 'two', 'one'], 'B': [1, 2, 3, 4], 'C': [-1, 8, 9, 11]}

# Rearrange the collection first: sort every column by 'key' so that
# rows with the same key end up next to each other, ready for merging.
# (Sorting all columns with one index permutation keeps the rows aligned;
# swapping individual elements by hand is easy to get wrong.)
order = sorted(range(len(df['key'])), key=lambda i: df['key'][i])
df = {column: [values[i] for i in order] for column, values in df.items()}

current_file = xlwt.Workbook()
table = current_file.add_sheet('sheet1', cell_overwrite_ok=True)

# write the 'key' column first
table.write(0, 0, 'key')
for row, text in enumerate(df['key']):
    table.write(row + 1, 0, text)

# then the remaining columns
df.pop('key')
for col, letter in enumerate(df.keys()):
    table.write(0, col + 1, letter)
    for row, content in enumerate(df[letter]):
        table.write(row + 1, col + 1, content)

# merge the two duplicate 'one' cells (rows 1-2 of column 0); if you have
# more duplicate keys, merge their row ranges the same way
# (do not forget to note their indexes)
table.merge(1, 2, 0, 0)

current_file.save('/tmp/test.xls')

Then check your file /tmp/test.xls.

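Since the question starts from a pandas DataFrame, a more compact sketch is to stay in pandas: group the sorted rows by key, blank out the repeated key values, and insert an empty separator row between groups. This is only a sketch of that approach (it assumes pandas is installed; the final `to_excel` call also needs an Excel engine such as openpyxl, so it is left commented out):

```python
import pandas as pd

df = pd.DataFrame({'key': ['one', 'three', 'two', 'one'],
                   'B': [1, 2, 3, 4],
                   'C': [-1, 8, 9, 11]})

blocks = []
for key, group in df.sort_values('key').groupby('key'):
    group = group.copy()
    # show the key only on the first row of each group
    group.loc[group.index[1:], 'key'] = ''
    blocks.append(group)
    # blank separator row between groups
    blocks.append(pd.DataFrame({'key': [''], 'B': [''], 'C': ['']}))

out = pd.concat(blocks[:-1], ignore_index=True)  # drop the trailing blank row
print(out)
# out.to_excel('/tmp/test.xlsx', index=False)  # needs an engine such as openpyxl
```

This does not merge cells the way xlwt's `merge` does, but it reproduces the requested layout (first-row-only keys and blank rows between groups) with much less bookkeeping.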
