I can't get an elastic beanstalk app with a public ELB but private EC2 instances to work.
I created a basic eb config with eb init. This results in the following config:
branch-defaults:
  default:
    environment: test3
    group_suffix: null
global:
  application_name: test
  branch: null
  default_ec2_keyname: null
  default_platform: Node.js
  default_region: us-east-1
  include_git_submodules: true
  instance_profile: null
  platform_name: null
  platform_version: null
  profile: null
  repository: null
  sc: null
  workspace_type: Application
I use the default VPC, but with 2 custom public subnets (to prevent peering CIDR conflicts with another account and an external DB).
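For context, this is roughly how I checked that the two subnets really are public; the subnet IDs are the same placeholders I use in the create command below.
# powershell
# Look for a 0.0.0.0/0 route to an internet gateway on the subnets' route tables
# (subnets that fall back to the main route table won't show an explicit association here)
aws ec2 describe-route-tables --profile dev --region us-east-1 `
    --filters "Name=association.subnet-id,Values=subnet-123,subnet-456" `
    --query "RouteTables[].Routes[?DestinationCidrBlock=='0.0.0.0/0'].GatewayId"

# Check whether the subnets auto-assign public IPs at launch
aws ec2 describe-subnets --profile dev --region us-east-1 `
    --subnet-ids subnet-123 subnet-456 `
    --query "Subnets[].{Id:SubnetId,AutoAssignPublicIp:MapPublicIpOnLaunch}"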
I now try to deploy the eb app with the following options:
# powershell
eb create --profile dev `
--sample `
--vpc.id vpc-123abc `
--vpc.ec2subnets "subnet-123,subnet-456" `
--vpc.elbsubnets "subnet-123,subnet-456" `
-sr arn:aws:iam::<account>:role/service-role/aws-elasticbeanstalk-service-role `
--vpc.elbpublic `
test8
This does not work: the initial instance never passes its health checks, the environment creation is marked as failed after roughly 15 minutes, and the app is never accessible from the internet.
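This is roughly how I watched it fail (environment name test8 as above, run from the same project directory):
# powershell
# Stream environment events while the create is in progress
eb events test8 --follow --profile dev

# Per-instance health once the instance exists
eb health test8 --profile dev

# Pull recent instance logs after the create fails
eb logs test8 --profile dev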
However, adding the option --vpc.publicip makes this work: I can access the webpage just fine from the internet via the EB environment URL. But the whole point of a public ELB with private instances is that the instances themselves shouldn't need public IPs. So what am I missing?
I ran both commands (with different environment names) and compared their security groups, ELB settings, and so on, but I can't find any meaningful difference. Why do these instances need public IPs to pass health checks and receive traffic from the ELB?
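For reference, this is roughly how I diffed the two environments. The second environment name (test9) and the security group IDs are placeholders for whatever eb create actually produced in my account.
# powershell
# Dump the resolved option settings for both environments and diff them
aws elasticbeanstalk describe-configuration-settings --profile dev --region us-east-1 `
    --application-name test --environment-name test8 > test8.json
aws elasticbeanstalk describe-configuration-settings --profile dev --region us-east-1 `
    --application-name test --environment-name test9 > test9.json
Compare-Object (Get-Content test8.json) (Get-Content test9.json)

# Inspect the security groups attached to the instances and the ELB (placeholder IDs)
aws ec2 describe-security-groups --profile dev --region us-east-1 `
    --group-ids sg-instances sg-elb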