
Problem Summary

I'm experiencing permission issues when deploying a PostgreSQL 9.6 container in Docker Swarm that uses an NFS-mounted volume for data storage. The container fails to start with the error:

FATAL: data directory "/var/lib/postgresql/data/pg_data" has wrong ownership
HINT: The server must be started by the user that owns the data directory.
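This FATAL comes from a startup check in PostgreSQL itself: the server's effective UID must equal the owner UID of the data directory. A rough shell equivalent of that check (the function name is mine, the path is the one from the error message):

```shell
#!/bin/sh
# Rough equivalent of PostgreSQL's startup ownership check: the server's
# effective UID must equal the owner UID of the data directory.
check_pgdata_ownership() {
    pgdata=$1
    dir_uid=$(stat -c '%u' "$pgdata") || return 2
    my_uid=$(id -u)
    if [ "$dir_uid" != "$my_uid" ]; then
        echo "FATAL: \"$pgdata\" owned by uid $dir_uid, but running as uid $my_uid"
        return 1
    fi
    echo "ownership OK (uid $my_uid)"
}

# Inside the failing container this would be:
# check_pgdata_ownership /var/lib/postgresql/data/pg_data
```

Running this inside the container as the postgres user shows what PostgreSQL sees; on an NFS mount, the UID reported by stat on the client can differ from what ls shows on the server.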

Environment Details

  • Platform: Docker Swarm
  • PostgreSQL Version: 9.6.24
  • NFS Server: re0srt10003.eresz03.com
  • NFS Mount: /vol/re0srt10003_vol011/NFS_customer_data_02/Production_data/nfs01/
  • Note: only valid AD users (Unix/Windows) can access the file share; local users (accounts that exist only on the local Linux system) are not permitted. New file shares are hosted on the new server re0srt10003.eresz03.com and, as mentioned in our Docupedia page, local users (e.g. UID 1000 / GID 1000) do not work; no exceptions are possible. You must use your domain users and groups (UID 188044 / GID 806642981) to connect to the share.
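Since the share rejects local users outright, a quick sanity check of the IDs a process actually runs with can rule out one failure mode early. A minimal sketch (the function name is mine; the required IDs are the domain ones from the note above):

```shell
#!/bin/sh
# Illustrative pre-check: the share rejects local users, so processes
# touching it must run with the domain IDs (188044 / 806642981).
REQUIRED_UID=188044
REQUIRED_GID=806642981

check_ids() {
    if [ "$(id -u)" = "$REQUIRED_UID" ] && [ "$(id -g)" = "$REQUIRED_GID" ]; then
        echo "IDs match the domain account"
    else
        echo "IDs $(id -u):$(id -g) are local and will be rejected by the share"
    fi
}

check_ids
```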

Current Configuration

Dockerfile used to create image

FROM postgres:9.6.24

ENV http_proxy=http://proxy.com:8686
ENV https_proxy=http://proxy.com:8686

RUN sed -i '/stretch-updates/d' /etc/apt/sources.list && \
    sed -i 's|http://deb.debian.org/debian|http://archive.debian.org/debian|g' /etc/apt/sources.list && \
    sed -i 's|http://security.debian.org/debian-security|http://archive.debian.org/debian-security|g' /etc/apt/sources.list && \
    rm -f /etc/apt/sources.list.d/pgdg.list && \
    echo 'Acquire::Check-Valid-Until "false";' > /etc/apt/apt.conf.d/10-no-check-valid-until && \
    apt-get update && \
    apt-get install -y --allow-unauthenticated postgresql-contrib && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

RUN usermod -u 188044 postgres
RUN groupmod -g 806642981 postgres

RUN su root -c "chown -R postgres:postgres /var/lib/postgresql"

USER postgres
STOPSIGNAL SIGINT

Docker Compose Service

version: '3.7'
services:
  fossology-scheduler:
    command: scheduler
    image: fossology:$VERSION
    environment:
      - FOSSOLOGY_DB_HOST=fossology-db
      - FOSSOLOGY_DB_NAME=fossology
      - FOSSOLOGY_DB_USER=fossy
      - FOSSOLOGY_INSTANCE=$INSTANCE
    networks:
      - fossology-net
    volumes:
      - fossy_repo:/srv/fossology/repository
    deploy:
      placement:
        constraints:
          - node.labels.fossology == true
    secrets:
      - source: fossology-db-pwd
        target: fossology.pwd
        uid: '188044'
        gid: '806642981'
        mode: 0400
    hostname: fossology-scheduler.localhost

  fossology-web:
    command: web
    image: fossology:$VERSION
    environment:
      - FOSSOLOGY_DB_HOST=fossology
      - FOSSOLOGY_DB_NAME=fossology
      - FOSSOLOGY_DB_USER=fossui
      - FOSSOLOGY_SCHEDULER_HOST=fossology-scheduler
      - FOSSOLOGY_INSTANCE=$INSTANCE
    user: fossui
    networks:
      - fossology-net
      - nginx-net
    volumes:
      - fossy_repo:/srv/fossology/repository
    deploy:
      placement:
        constraints:
          - node.labels.fossology == true
    secrets:
      - source: fossology-db-pwd
        target: fossology.pwd
        uid: '188044'
        gid: '806642981'
        mode: 0400

  fossology-db:
    image: postgres_9.6:01
    environment:
      - POSTGRES_DB=fossology
      - POSTGRES_USER=fossui
      - POSTGRES_PASSWORD=password
      - POSTGRES_INITDB_ARGS=-E UTF8
      - PGDATA=/var/lib/postgresql/data
    ports:
      - target: 5432
        published: 9999
        protocol: tcp
        mode: ingress
    networks:
      - fossology-net
    volumes:
      - fossy_pg_data:/var/lib/postgresql/data
    deploy:
      placement:
        constraints:
          - node.labels.fossology == true

networks:
  fossology-net:
    name: fossology-net-$INSTANCE
    driver: overlay
    external: true
  nginx-net:
    external: true

volumes:
  fossy_repo:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=re0srt10003.eresz03.com,rw,nfsvers=4"
      device: ":/vol/re0srt10003_vol011/NFS_customer_data_02/Production_data/nfs01/repo"
  fossy_pg_data:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=re0srt10003.eresz03.com,rw,nfsvers=4"
      device: ":/vol/re0srt10003_vol011/NFS_customer_data_02/Production_data/nfs01/db"
secrets:
  fossology-db-pwd:
    external: true
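To take Swarm out of the picture, the fossy_pg_data volume definition above can be reproduced as a standalone volume and inspected directly. A hedged sketch (the volume name pg_data_test is illustrative; the image and NFS options are the ones from the compose file):

```shell
# Hypothetical repro outside Swarm: create an equivalent NFS-backed volume
# and check the ownership the client kernel actually reports in a container.
docker volume create --driver local \
  --opt type=nfs \
  --opt o=addr=re0srt10003.eresz03.com,rw,nfsvers=4 \
  --opt device=:/vol/re0srt10003_vol011/NFS_customer_data_02/Production_data/nfs01/db \
  pg_data_test

# Numeric listing: does the client see 188044:806642981,
# or nobody / 4294967294?
docker run --rm -v pg_data_test:/mnt postgres_9.6:01 ls -ln /mnt

docker volume rm pg_data_test
```

If the numeric owner shown here differs from 188044, the problem is in the NFS client's view of the mount, not in the image or the compose file.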

What I've Tried

  • Modified the Dockerfile to change postgres user UID/GID to match domain requirements
  • Verified the user mapping inside the container shows the correct IDs: id postgres returns uid=188044(postgres) gid=806642981(postgres) groups=806642981(postgres),101(ssl-cert)
  • Confirmed the NFS mount is accessible; ownership of the data directory shows as: drwx--S--- 8 188044 806642981 db
  • Ran the service without the NFS volume; it works as expected.
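One possibility consistent with these observations: with nfsvers=4, ownership on the client goes through NFSv4 ID mapping. If the idmap domain on the Swarm node does not match the server's, files can appear as nobody (uid 4294967294) inside the mount even though the server reports 188044:806642981, which would trigger exactly this FATAL. Worth checking /etc/idmapd.conf on the nodes; the domain value below is only an assumption based on the server's hostname:

```
# /etc/idmapd.conf on each Swarm node (illustrative value)
[General]
Domain = eresz03.com
```

It is also worth checking whether the client falls back to numeric IDs: cat /sys/module/nfs/parameters/nfs4_disable_idmapping (Y means numeric UIDs/GIDs are sent as-is, which is usually what you want with plain uid/gid matching).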

Additional Context

This setup previously worked with local users (including root), but fails after migrating to the new NFS-based storage system, which enforces domain users.

Any suggestions regarding possible root causes, or alternative ways to address this issue, would be highly appreciated!
