
Reading through PEP 3101, I see it discusses how to subclass string.Formatter to define your own formats, but then you use it via myformatter.format(string). Is there a way to just make it work with f-strings?

E.g. I'm looking to do something like f"height = {height:.2f}" but I want my own float formatter that handles certain special cases.

  • F-strings are actually processed at compile time, not runtime. You can't override the default f-string formatter because of how Python implements f-strings at a fundamental level.
    – Bhargav
    Commented Dec 23, 2024 at 19:11
  • Here is something. Well, a lot actually xD
    – LMC
    Commented Dec 23, 2024 at 19:34
  • Egads. When a solution has "evil" in the first sentence, it's probably a good idea to avoid it. It was educational though. Commented Dec 23, 2024 at 19:43

1 Answer


As I mentioned in the comments, f-strings are processed at compile time, not runtime. You can't override the default f-string formatter because of how Python implements f-strings at a fundamental level.

For example, if you run

f"height = {height:.2f}"

it is effectively equivalent to

"height = {}".format(format(height, '.2f'))

The conversion happens during compilation, so there's no way to "hook into" or modify that step at runtime. What does happen at runtime is the format(height, '.2f') call, and format(obj, spec) dispatches to the object's __format__ method.
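You can see that runtime dispatch to __format__ with a small probe (a hypothetical class, purely for illustration):

```python
# Probe is a made-up class that records the format spec it receives.
# f"{x:.2f}" compiles to a call equivalent to format(x, '.2f'),
# which in turn calls type(x).__format__(x, '.2f') at runtime.
class Probe:
    def __format__(self, spec):
        return f"<got spec {spec!r}>"

print(f"value = {Probe():.2f}")  # value = <got spec '.2f'>
```

So while the f-string syntax itself is fixed at compile time, the formatting of each value is still delegated to that value's type at runtime.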

What you can do is use the decimal module for precision control.

Reference - https://realpython.com/how-to-python-f-string-format-float/

from decimal import Decimal

value = Decimal('1.23456789')
print(f"Value: {value:.2f}")

Output:

Value: 1.23

Decimal implements the __format__ method and handles the standard format specifiers like .2f.

Side Note:

We aren't really "overriding" the default formatter here; we're implementing the existing format specification protocol.

Since your goal is to handle special cases for float formatting, using Decimal could solve your problem...
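If Decimal alone isn't flexible enough, the same __format__ protocol lets you build your own float formatter. Here's a minimal sketch (SpecialFloat is a made-up name, and NaN handling is just one example of a "special case"); it intercepts the case it cares about and defers everything else to normal float formatting:

```python
import math

class SpecialFloat(float):
    """A float subclass with custom f-string formatting (illustrative only)."""

    def __format__(self, spec):
        # Example special case: render NaN as a dash instead of "nan".
        if math.isnan(self):
            return "-"
        # Fall back to the standard float formatting for everything else.
        return super().__format__(spec)

height = SpecialFloat(1.83)
print(f"height = {height:.2f}")              # height = 1.83
print(f"height = {SpecialFloat('nan'):.2f}") # height = -
```

The f-string syntax stays exactly as in your example; only the value's type changes, which is the one hook Python gives you.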

  • What's the point of Decimal there? You get the same output with the float value = 1.23456789. Commented Dec 23, 2024 at 19:56
    @KellyBundy Decimal is for precision control and deterministic behavior, which floats may lack due to their binary floating-point representation. Decimal allows arbitrary-precision arithmetic, which is particularly useful in financial or scientific calculations where exact decimal representation is crucial, so I took the example with Decimal. Whenever there is a need, I definitely go for Decimal rather than float. Floats are susceptible to rounding errors because they use a binary representation.
    – Bhargav
    Commented Dec 23, 2024 at 20:01
