Private isolated subnets are common practice in datacenters and server deployments, and they make sense for both performance and security. Server-to-server traffic within an isolated group cannot be degraded by outside traffic. Traffic on an isolated subnet within a physically secure datacenter can pass unencrypted with minimal security concern. And two networks simply have more bandwidth than one. So private subnets improve both performance and security.
This has been standard practice in production network deployments since at least the 1980s. Yet many developers are unfamiliar with it. Folk from the desktop world often design applications assuming a single network, and a single IP address / hostname / network interface per machine. (Thus the elaboration above.)
In my current exercise I am building a backend service for an OpenStack cloud. Connectivity in the cloud backend is a bit complex (as it must be). I want to be certain my code does not accidentally assume connectivity that is not present, so I want my network topology in testing to emulate a production environment.
Thus the present exercise.
My present work is for a very large company. We have quite a lot of hardware in the software development lab, but little expertise in production deployments. This means a lot of lab irregularities to work around.
As a developer, I just want to get my job done. I need to build a test / development setup out of our existing lab environment. If I had an excess of time and resources, I would re-run wires and reconfigure fancy Cisco switches. Becoming expert on quirky gear I might never see again is simply not on my list of priorities. This means I want to route a private subnet - somehow - over the existing networks.
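One plausible way to carry a private subnet over an existing routed network - a sketch only, not necessarily what I ended up doing - is a GRE tunnel between two hosts. All host addresses, interface names, and the 192.168.100.0/24 subnet below are invented for illustration:

```shell
# On the first host (existing lab address 10.1.1.10, peer at 10.1.2.20):
# create a GRE tunnel and put one end of the private subnet on it.
ip tunnel add priv0 mode gre local 10.1.1.10 remote 10.1.2.20 ttl 64
ip addr add 192.168.100.1/24 dev priv0
ip link set priv0 up

# On the peer host, the mirror-image configuration:
ip tunnel add priv0 mode gre local 10.1.2.20 remote 10.1.1.10 ttl 64
ip addr add 192.168.100.2/24 dev priv0
ip link set priv0 up
```

The private traffic rides inside GRE packets over the lab network, so the existing switches and routers only ever see the outer (lab) addresses - no rewiring or switch reconfiguration required.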
For OpenStack development / testing the host/network configuration I want looks like this:
- Host A runs all the OpenStack services, with the exception of nova-compute and cinder-block, in a VM. The use of a VM allows me to develop with more than one OpenStack version (or variant) with identical network access.
- Host B runs just cinder-block, again in a VM so I can test with different OpenStack versions. Access to the underlying storage pool managed by cinder-block is only guaranteed on this host.
- In addition there are four hosts that run only nova-compute. These are not VMs as I want realistic performance for the running OpenStack instances. (The four hosts can be partitioned between any OpenStack versions under test.)
Only host A has access to both the private and public subnets. The remaining hosts are connected only to the private subnet(s). With six hosts I believe this gives a realistic and very flexible development setup.
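Concretely, the addressing could look something like the sketch below. The addresses, interface names, and the choice of host A as the private subnet's gateway are all assumptions for illustration:

```shell
# Host A: dual-homed - one leg on the public lab network, one on the private subnet.
ip addr add 10.1.1.10/24 dev eth0        # public / lab network
ip addr add 192.168.100.1/24 dev eth1    # private subnet

# Host B and the four nova-compute hosts: private subnet only.
# If they ever need to reach outside, host A would have to route for them.
ip addr add 192.168.100.2/24 dev eth0
ip route add default via 192.168.100.1
```

The point of the exercise is that the code under test, running on hosts B through F, genuinely cannot reach anything except the private subnet - any accidental assumption of public connectivity fails immediately.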
Thus my first attempt