
I have a Postgres database inside a Docker container. I bring it up with this docker-compose.yaml:

services:
  
  db:
    container_name: postgres
    image: postgres:latest
    restart: always
    volumes:
      - db-data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_HOST: ${POSTGRES_HOST}
      POSTGRES_PORT: ${POSTGRES_PORT}

    healthcheck:
      test: [ "CMD", "pg_isready" , "-d", "${POSTGRES_DB}", "-U", "${POSTGRES_USER}"]
      # The `interval` option specifies how often to run the health check.
      interval: 10s
      timeout: 5s
      retries: 5

volumes:
  db-data:

I get the environment variables from an .env file. Then I have a Python script that uses SQLAlchemy to define the table schema, plus a function to initialize the table, like this:

from sqlalchemy import Integer, String, inspect
from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column
from pgvector.sqlalchemy import Vector

class Base(DeclarativeBase):
    pass

class CallData(Base):
    __tablename__ = "call_data"

    id: Mapped[int] = mapped_column(Integer, primary_key=True)
    nombre: Mapped[str] = mapped_column(String(255), nullable=False)
    entidad: Mapped[str] = mapped_column(String(255), nullable=False)
    descripcion = mapped_column(Vector(300), nullable=False)

    def __repr__(self):
        return f"""
        <CallData(id={self.id}, titulo={self.nombre}, entidad_convocante={self.entidad}, 
        descripcion={self.descripcion})>
        """
    
    @classmethod
    def init_table(cls, engine):
        """Initialize the table in the database."""
        if not inspect(engine).has_table(cls.__tablename__):
            cls.metadata.create_all(engine)
            print(f"Table {cls.__tablename__} created.")
        else:
            print(f"Table {cls.__tablename__} already exists.")

Then I have the following function to send data to the database:

import os

from dotenv import load_dotenv
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker

def send_to_db(contenido):
    load_dotenv()
    engine = create_engine(
        f"postgresql+psycopg://{os.getenv('POSTGRES_USER')}:{os.getenv('POSTGRES_PASSWORD')}"
        f"@{os.getenv('POSTGRES_HOST')}:{os.getenv('POSTGRES_PORT')}/{os.getenv('POSTGRES_DB')}"
    )
    Session = sessionmaker(bind=engine)
    CallData.init_table(engine)
    with Session() as session:
        # `contenido` is a list of dictionaries with the data to be inserted
        for entry in contenido:
            call_data = CallData(
                nombre=entry["convocatoria"],
                entidad=entry["entidad"],
                descripcion=entry["descripcion"],
            )
            session.add(call_data)
        session.commit()
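
For context, a call looks roughly like this (the data here is just illustrative, not my real input; in practice descripcion is a 300-dimensional embedding):

# Illustrative input only; real entries come from elsewhere in the pipeline.
contenido = [
    {
        "convocatoria": "Example call 2025",
        "entidad": "Example agency",
        "descripcion": [0.0] * 300,  # 300-dimensional vector, matching Vector(300)
    },
]
send_to_db(contenido)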

However, I am getting the error below when CallData.init_table() runs, specifically on the line if not inspect(engine).has_table(cls.__tablename__):

(psycopg.OperationalError) connection failed: connection to server at ${POSTGRES_HOST}, port ${POSTGRES_PORT} failed: FATAL:  password authentication failed for user ${POSTGRES_USER}
connection to server at ${POSTGRES_HOST}, port ${POSTGRES_PORT} failed: FATAL:  password authentication failed for user ${POSTGRES_USER}
(Background on this error at: https://sqlalche.me/e/20/e3q8)

I have run docker compose down and up again, and I get the same error. I think it's because I somehow have to pass the password through to inspect. How can I solve it?

  • It's not related to inspect; the error happens there because it's the first time the engine tries to connect to the database. Try printing the environment variables before you create the engine and see if they are being populated correctly. Commented Apr 10 at 14:13
  • I have printed the environment variables while debugging. They have mapped correctly into my uri. Commented Apr 10 at 14:24
  • The error message suggests that the calls to os.getenv are not returning the correct values. Try printing that complete f-string at the point that you call create_engine (see the sketch after these comments for a way to do that without exposing the password).
    – OldBoy
    Commented Apr 10 at 14:32
  • The credentials are not correct, however. This is not a SQLAlchemy issue; it is a local setup issue with your Docker Compose configuration. We can't help here because we don't even know if you are running the Python code from your host system or from inside a container, and if that container is inside or outside the compose cluster. Commented Apr 10 at 14:34
  • Thanks @MartijnPieters, I'll delete the question. Fyi, I am running the code from my host system. With respect to the second question, I can't answer that. Commented Apr 10 at 14:37
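
As an aside, a way to follow the suggestion above without printing the raw password is to build the URL from its parts with SQLAlchemy's URL helper; a minimal sketch using the same environment variables as in the question:

import os

from dotenv import load_dotenv
from sqlalchemy import create_engine
from sqlalchemy.engine import URL

load_dotenv()

# Assemble the URL from its components instead of interpolating an f-string.
url = URL.create(
    drivername="postgresql+psycopg",
    username=os.getenv("POSTGRES_USER"),
    password=os.getenv("POSTGRES_PASSWORD"),
    host=os.getenv("POSTGRES_HOST"),
    port=int(os.getenv("POSTGRES_PORT", "5432")),
    database=os.getenv("POSTGRES_DB"),
)

# hide_password=True masks the password, so this is safe to print while debugging.
print(url.render_as_string(hide_password=True))

engine = create_engine(url)

If any component prints as None or an unexpected value, the .env file is not being loaded from where the script actually runs.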

1 Answer


After two days (no kidding) trying to solve the issue, I've managed to. FYI, I have been running the Python script outside of a Docker container, and it tried to connect to the Postgres Docker container. Also, I am running the code inside WSL2. The solution was rather weird, but I'll leave it here in case someone stumbles on the same exact issue. I had PostgreSQL installed natively on my WSL distro, so I uninstalled and purged it:

sudo apt-get --purge remove postgresql
sudo apt-get purge postgresql*

This is the explanation I got from a friend: PostgreSQL installed natively on Ubuntu was likely binding to localhost:5432, blocking the Docker container (which was also trying to expose its PostgreSQL service on the same port). So when Python tried to connect to localhost:5432, it hit the local PostgreSQL server, not the one running in Docker, and the credentials didn't match.
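
If you want to confirm which server is actually answering on port 5432 before removing anything, checking the listener should make it obvious (a quick sketch; the exact output depends on your setup):

# Shows which process is bound to port 5432; a native install typically appears
# as a postgres process, while a published container port shows docker-proxy.
sudo ss -ltnp | grep 5432

# On Debian/Ubuntu, lists any native PostgreSQL clusters and the ports they use.
pg_lsclusters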

I hope this helps anyone who runs into a similar issue.

EDIT 2: Technically speaking, changing the port to something else or disabling native Postgres should also have solved it, as explained by @snakecharmerb. But in my case, I don't need native Postgres.
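
For reference, the port-change alternative would look something like this in the compose file (5433 is an arbitrary free port; the host-side POSTGRES_PORT would then be 5433, while the container keeps listening on 5432 internally):

services:
  db:
    # ... rest of the service definition as in the question ...
    ports:
      - "5433:5432"  # host port 5433 -> container port 5432, avoiding the native server

Alternatively, stopping and disabling the native service (on Ubuntu, sudo systemctl disable --now postgresql) frees port 5432 without uninstalling anything.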
