
Network selection #48

Open
scaleoutsean opened this issue Aug 31, 2019 · 1 comment
Labels
enhancement New feature or request

Comments

@scaleoutsean
Contributor

Is this a bug report or feature request?

  • Feature Request

How to reproduce it (minimal and precise):

  • If you want containers to access some external network, it becomes quite hard to accommodate that after the K8S cluster has been set up.

Environment:

  • Bionic 18.04
  • 4.15.0-54
  • make versions output:
=== BEGIN Version Info ===
Repo state: 62d403b37d433db7d3eed6d8a98136837441aadb (dirty? NO)
make: /usr/bin/make
kubectl: /usr/bin/kubectl
grep: /bin/grep
cut: /usr/bin/cut
rsync: /usr/bin/rsync
openssl: /usr/bin/openssl
/dev/urandom: OK
Vagrant version:
Vagrant 2.2.5
vboxmanage version:
6.0.10r132072
=== END Version Info ===

Feature Request

Consider either allowing additional NICs, or letting the user choose the type of the second network (eth1), to make it easier to reach other networks from the VMs (and the Pod network).
Currently MASTER_IP uses eth1 and NODE_IP_NW can't "collide" with the host network (although it wouldn't necessarily collide if it were a bridged network), so one can't have Pods on the default network.

Are there any similar features already existing:

Manual tinkering with the Vagrantfile.

What should the feature do:

One of the following with the option to use NODE_IP_NW on that network:

  1. Allow the selection of eth1 NIC type (intnet or bridged, for example)
  2. Add the third NIC (eth2) using bridged adapter (maybe make that default VM route)
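To illustrate, a minimal sketch of how the two options could look in a Vagrantfile. This is not existing project code: the NIC_TYPE and BRIDGE_IF environment variables and the example IP are assumptions for illustration only.

```ruby
# Sketch only: NIC_TYPE and BRIDGE_IF are hypothetical settings,
# not options this project currently provides.
nic_type  = ENV.fetch("NIC_TYPE", "private")  # "private" or "bridged"
bridge_if = ENV["BRIDGE_IF"]                  # e.g. the host's "eth0"

Vagrant.configure("2") do |config|
  # Option 1: choose the type of the second NIC (eth1).
  if nic_type == "bridged"
    config.vm.network "public_network", bridge: bridge_if
  else
    config.vm.network "private_network", ip: "192.168.26.10"
  end

  # Option 2: keep eth1 private and add a bridged third NIC (eth2),
  # optionally making it the VM's default route via a provision script.
  config.vm.network "public_network", bridge: bridge_if
end
```

Vagrant's "public_network" maps to a VirtualBox bridged adapter, and "private_network" to a host-only/internal one, so either option stays within stock Vagrant networking.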

What would be solved through this feature:

Access to and from other networks, both on the hypervisor and external. Currently, if one has existing services on intnet, or on vboxnet15 and vboxnet37, and this project has to pick one vboxnet, it becomes necessary to install multiple clusters or to edit Vagrant network settings, either in this project's VMs or in the existing VMs.

Does this have an impact on existing features:

I can't think of anything that stands out. If the Pod network were bridged, we'd have to ask for a range of unallocated IPs (NODE_IP_NW, documentation) and maybe ping-probe the range for availability before deployment.
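The ping-probe idea could be sketched like this in Ruby. The function names and the NODE_IP_NW-style prefix/range are assumptions for illustration, not project code; the ping flags are GNU iputils flags.

```ruby
# Hypothetical pre-flight check: build candidate addresses under a
# NODE_IP_NW-style prefix and keep only those that do not answer a ping.

def candidate_ips(prefix, range)
  # e.g. candidate_ips("192.168.26.", 10..12)
  #   => ["192.168.26.10", "192.168.26.11", "192.168.26.12"]
  range.map { |host| "#{prefix}#{host}" }
end

def unallocated_ips(prefix, range)
  candidate_ips(prefix, range).reject do |ip|
    # -c 1: single probe; -W 1: one-second timeout (GNU ping)
    system("ping", "-c", "1", "-W", "1", ip,
           out: File::NULL, err: File::NULL)
  end
end

# Usage (not run here, since it probes the live network):
# puts unallocated_ips("192.168.26.", 2..20)
```

Anything that answers a ping is treated as allocated; hosts that drop ICMP would still slip through, so this would only be a best-effort check before deployment.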

@galexrt
Owner

galexrt commented Oct 14, 2019

@scaleoutsean I'll look into adding option 2 you mentioned soon.
