k8s middleware add tests and docs update (#501)

* add cidrs opt

* remove state data from middleware object

* update k8s docs

* Add integration tests

* add unit tests for cidr and pods config

* more README fixes, separate dev notes

* adjust section headers

* fix typo
Chris O'Haver 2017-02-02 16:51:42 -05:00 committed by John Belamaric
parent 8beb1b2166
commit 77f957d443
5 changed files with 512 additions and 322 deletions


@ -0,0 +1,159 @@
# Basic Setup for Development and Testing
## Launch Kubernetes
Kubernetes is launched using the commands in the `.travis/kubernetes/00_run_k8s.sh` script.
## Configure kubectl and Test
The kubernetes control client can be downloaded from the generic URL:
`http://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/${GOOS}/${GOARCH}/${K8S_BINARY}`
For example, the kubectl client for Linux can be downloaded using the command:
`curl -sSL "http://storage.googleapis.com/kubernetes-release/release/v1.2.4/bin/linux/amd64/kubectl"`
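As a sanity check, the generic URL template can be expanded in the shell with example values (the expansion below reproduces the v1.2.4 Linux/amd64 curl URL above):

```shell
# Expand the generic release-URL template with example values.
K8S_VERSION="v1.2.4"
GOOS="linux"
GOARCH="amd64"
K8S_BINARY="kubectl"
URL="http://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/${GOOS}/${GOARCH}/${K8S_BINARY}"
echo "$URL"
```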
The `contrib/kubernetes/testscripts/10_setup_kubectl.sh` script can be stored in the same directory as
kubectl to set up kubectl to communicate with kubernetes running on the localhost.
## Launch a kubernetes service and expose the service
The following commands will create a kubernetes namespace "demo",
launch an nginx service in the namespace, and expose the service on port 80:
~~~
$ ./kubectl create namespace demo
$ ./kubectl get namespace
$ ./kubectl run mynginx --namespace=demo --image=nginx
$ ./kubectl get deployment --namespace=demo
$ ./kubectl expose deployment mynginx --namespace=demo --port=80
$ ./kubectl get service --namespace=demo
~~~
The script `.travis/kubernetes/20_setup_k8s_services.sh` creates a couple of sample namespaces
with services running in those namespaces. The automated kubernetes integration tests in
`test/kubernetes_test.go` depend on these services and namespaces to exist in kubernetes.
## Launch CoreDNS
Build CoreDNS and launch using this configuration file:
~~~ txt
# Serve on port 53
.:53 {
kubernetes coredns.local {
resyncperiod 5m
endpoint http://localhost:8080
namespaces demo
# Only expose the records for kubernetes objects
# that match this label selector.
# See http://kubernetes.io/docs/user-guide/labels/
# Example selector below only exposes objects tagged as
# "application=nginx" in the staging or qa environments.
#labels environment in (staging, qa),application=nginx
}
#cache 180 coredns.local # optionally enable caching
}
~~~
Put it in `~/k8sCorefile` for instance. This configuration file sets up CoreDNS to use the zone
`coredns.local` for the kubernetes services.
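As a plain illustration of the zone suffixing above, the query name used in the dig example below is just the service name, namespace, and configured zone joined with dots:

```shell
# Build the query name for a service record: <service>.<namespace>.<zone>
SERVICE="mynginx"
NAMESPACE="demo"
ZONE="coredns.local"
QNAME="${SERVICE}.${NAMESPACE}.${ZONE}"
echo "$QNAME"   # mynginx.demo.coredns.local
```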
The command to launch CoreDNS is:
~~~
$ ./coredns -conf ~/k8sCorefile
~~~
In a separate terminal a DNS query can be issued using dig:
~~~
$ dig @localhost mynginx.demo.coredns.local
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 47614
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;mynginx.demo.coredns.local. IN A
;; ANSWER SECTION:
mynginx.demo.coredns.local. 0 IN A 10.0.0.10
;; Query time: 2 msec
;; SERVER: ::1#53(::1)
;; WHEN: Thu Jun 02 11:07:18 PDT 2016
;; MSG SIZE rcvd: 71
~~~
# Implementation Notes/Ideas
## Internal IP or External IP?
* Should the Corefile configuration allow control over whether the internal IP or external IP is exposed?
* If the Corefile configuration allows control over internal IP or external IP, then the config should allow users to control the precedence.
For example, consider a service "myservice" running in namespace "mynamespace" with internal IP "10.0.0.100" and external IP "1.2.3.4".
This service could be published as:
| Corefile directive | Result |
|------------------------------|---------------------|
| iporder = internal | 10.0.0.100 |
| iporder = external | 1.2.3.4 |
| iporder = external, internal | 1.2.3.4, 10.0.0.100 |
| iporder = internal, external | 10.0.0.100, 1.2.3.4 |
| _no directive_ | 10.0.0.100, 1.2.3.4 |
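A minimal shell sketch of how an `iporder` directive could behave, assuming the listed order is the precedence order (note that `iporder` is only a proposal here, not an implemented option):

```shell
# Hypothetical iporder precedence: map a directive value to an
# ordered list of the service's IPs. Example IPs from the table above.
INTERNAL_IP="10.0.0.100"
EXTERNAL_IP="1.2.3.4"

iporder() {
  case "$1" in
    internal)             echo "$INTERNAL_IP" ;;
    external)             echo "$EXTERNAL_IP" ;;
    "internal, external") echo "$INTERNAL_IP, $EXTERNAL_IP" ;;
    "external, internal") echo "$EXTERNAL_IP, $INTERNAL_IP" ;;
    *)                    echo "$INTERNAL_IP, $EXTERNAL_IP" ;;  # no directive
  esac
}

iporder "internal, external"
```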
## TODO
* SkyDNS compatibility/equivalency:
* Kubernetes packaging and execution
* Automate packaging to allow executing in Kubernetes. That is, add Docker
container build as target in Makefile. Also include anything else needed
to simplify launch as the k8s DNS service.
Note: Dockerfile already exists in coredns repo to build the docker image.
This work item should identify how to pass configuration and run as a SkyDNS
replacement.
* Identify any kubernetes changes necessary to use coredns as k8s DNS server. That is,
how do we consume the "--cluster-dns=" and "--cluster-domain=" arguments.
* Work out how to pass CoreDNS configuration via kubectl command line and yaml
service definition file.
* Ensure that resolver in each kubernetes container is configured to use
coredns instance.
* Update kubernetes middleware documentation to describe running CoreDNS as a
SkyDNS replacement. (Include descriptions of different ways to pass CoreFile
to coredns command.)
* Remove dependency on healthz for health checking in
`kubernetes-rc.yaml` file.
* Functional work
* Calculate SRV priority based on number of instances running.
(See SkyDNS README.md)
* Performance
* Improve lookup to reduce size of query result obtained from k8s API.
(namespace-based?, other ideas?)
* reduce cache size by caching data into custom structs, instead of caching whole API objects
* add (and use) indexes on the caches that support indexing
* Additional features:
* Implement IP selection and ordering (internal/external). Related to
wildcards and SkyDNS use of CNAMES.
* Expose arbitrary kubernetes repository data as TXT records?
* DNS Correctness
* Do we need to generate synthetic zone records for namespaces?
* Do we need to generate synthetic zone records for the skydns synthetic zones?
* Test cases
* Implement test cases for SkyDNS equivalent functionality.
* Add test cases for labels-based filtering
* Test with CoreDNS caching. CoreDNS caching for DNS response is working
using the `cache` directive. Tested working using 20s cache timeout
and A-record queries. Automate testing with cache in place.
* Automate CoreDNS performance tests. Initially for zone files, and for
pre-loaded k8s API cache. With and without CoreDNS response caching.
* Try to get rid of kubernetes launch scripts by moving operations into
.travis.yml file.
* Find root cause of timing condition that results in no data returned to
test client when running k8s integration tests. Current work-around is a
nasty hack of waiting 5 seconds after setting up test server before performing
client calls. (See hack in test/kubernetes_test.go)


@ -1,339 +1,131 @@
# kubernetes
*kubernetes* enables reading zone data from a kubernetes cluster. Record names
are constructed as "myservice.mynamespace.type.coredns.local" where:
*kubernetes* enables reading zone data from a kubernetes cluster.
It implements the spec defined for kubernetes DNS-Based service discovery:
https://github.com/kubernetes/dns/blob/master/docs/specification.md
* "myservice" is the name of the k8s service (this may include multiple DNS labels,
such as "c1.myservice"),
Examples:
Service `A` records are constructed as "myservice.mynamespace.svc.coredns.local" where:
* "myservice" is the name of the k8s service
* "mynamespace" is the k8s namespace for the service, and
* "type" is svc or pod
* "coredns.local" is the zone configured for `kubernetes`.
* "svc" indicates this is a service
* "coredns.local" is the zone
## Syntax
Pod `A` records are constructed as "1-2-3-4.mynamespace.pod.coredns.local" where:
~~~
kubernetes [ZONES...]
~~~
* "1-2-3-4" is derived from the ip address of the pod (1.2.3.4 in this example)
* "mynamespace" is the k8s namespace for the service, and
* "pod" indicates this is a pod
* "coredns.local" is the zone
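The dashed IP label is purely mechanical; a minimal shell sketch of the derivation (values are illustrative):

```shell
# Derive the pod A record owner name from the pod IP:
# replace dots with dashes, then append namespace, "pod", and the zone.
POD_IP="1.2.3.4"
NAMESPACE="mynamespace"
ZONE="coredns.local"
NAME="$(echo "$POD_IP" | tr '.' '-').${NAMESPACE}.pod.${ZONE}"
echo "$NAME"   # 1-2-3-4.mynamespace.pod.coredns.local
```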
* `ZONES` zones kubernetes should be authoritative for. Overlapping zones are ignored.
Endpoint `A` records are constructed as "epname.myservice.mynamespace.svc.coredns.local" where:
* "epname" is the hostname (or name constructed from IP) of the endpoint
* "myservice" is the name of the k8s service that the endpoint serves
* "mynamespace" is the k8s namespace for the service, and
* "svc" indicates this is a service
* "coredns.local" is the zone
Or if you want to specify an endpoint:
Also supported are PTR and SRV records for services/endpoints.
~~~
kubernetes [ZONES...] {
endpoint ENDPOINT
}
~~~
## Configuration Syntax
* **ENDPOINT** the kubernetes API endpoint, defaults to http://localhost:8080
This is an example kubernetes middleware configuration block, with all options described:
TODO(...): Add all the other options.
## Examples
This is the default kubernetes setup, with everything specified in full:
~~~
# Serve on port 53
.:53 {
# use kubernetes middleware for domain "coredns.local"
```
# kubernetes <zone> [<zone>] ...
#
# Use kubernetes middleware for domain "coredns.local"
# Reverse domain zones can be defined here (e.g. 0.0.10.in-addr.arpa),
# or instead with the "cidrs" option.
#
kubernetes coredns.local {
# Kubernetes data API resync period
# resyncperiod <period>
#
# Kubernetes data API resync period. Default is 5m
# Example values: 60s, 5m, 1h
#
resyncperiod 5m
# Use url for k8s API endpoint
endpoint https://k8sendpoint:8080
# endpoint <url>
#
# Use url for a remote k8s API endpoint. If omitted, it will connect to
# k8s in-cluster using the cluster service account.
#
endpoint https://k8s-endpoint:8080
# The tls cert, key and the CA cert filenames
# tls <cert-filename> <key-filename> <cacert-filename>
#
# The tls cert, key and the CA cert filenames for remote k8s connection.
# This option is ignored if connecting in-cluster (i.e. endpoint is not
# specified).
#
tls cert key cacert
# Only expose the k8s namespace "demo"
# namespaces <namespace> [<namespace>] ...
#
# Only expose the k8s namespaces listed. If this option is omitted
# all namespaces are exposed
#
namespaces demo
# labels <expression> [,<expression>] ...
#
# Only expose the records for kubernetes objects
# that match this label selector. The label
# selector syntax is described in the kubernetes
# API documentation: http://kubernetes.io/docs/user-guide/labels/
# Example selector below only exposes objects tagged as
# "application=nginx" in the staging or qa environments.
#labels environment in (staging, qa),application=nginx
#
labels environment in (staging, qa),application=nginx
# The mode of responding to pod A record requests.
# pods <disabled|insecure|verified>
#
# Set the mode of responding to pod A record requests.
# e.g. 1-2-3-4.ns.pod.zone. This option is provided to allow use of
# SSL certs when connecting directly to pods.
# Valid values: disabled, verified, insecure
# disabled: default. ignore pod requests, always returning NXDOMAIN
# disabled: Do not process pod requests, always returning NXDOMAIN
# insecure: Always return an A record with IP from request (without
# checking k8s). This option is vulnerable to abuse if
# used maliciously in conjunction with wildcard SSL certs.
# verified: Return an A record if there exists a pod in same
# namespace with matching IP. This option requires
# substantially more memory than in insecure mode, since it
# will maintain a watch on all pods.
# Default value is "disabled".
#
pods disabled
}
# Perform DNS response caching for the coredns.local zone
# Cache timeout is specified by an integer in seconds
#cache 180 coredns.local
}
~~~
Defaults:
* If the `namespaces` keyword is omitted, all kubernetes namespaces are exposed.
* If the `resyncperiod` keyword is omitted, the default resync period is 5 minutes.
* The `labels` keyword is only needed when filtering results using kubernetes label selector syntax.
The label selector syntax is described in the kubernetes API documentation at:
http://kubernetes.io/docs/user-guide/labels/
* If the `pods` keyword is omitted, all pod type requests will result in NXDOMAIN
# cidrs <cidr> [<cidr>] ...
#
# Expose cidr ranges to reverse lookups. Include any number of
# space-delimited cidrs, and/or multiple cidrs options on separate lines.
# kubernetes middleware will respond to PTR requests for ip addresses
# that fall within these ranges.
#
cidrs 10.0.0.0/24 10.0.10.0/25
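The PTR owner name for an address in one of these ranges is formed by reversing the IPv4 octets under `in-addr.arpa`; a small shell sketch:

```shell
# Build the in-addr.arpa owner name for an IPv4 address by
# reversing its octets (example address from the cidrs above).
IP="10.0.0.100"
PTR_NAME="$(echo "$IP" | awk -F. '{print $4"."$3"."$2"."$1}').in-addr.arpa."
echo "$PTR_NAME"   # 100.0.0.10.in-addr.arpa.
```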
### Basic Setup
#### Launch Kubernetes
Kubernetes is launched using the commands in the `.travis/kubernetes/00_run_k8s.sh` script.
#### Configure kubectl and Test
The kubernetes control client can be downloaded from the generic URL:
`http://storage.googleapis.com/kubernetes-release/release/${K8S_VERSION}/bin/${GOOS}/${GOARCH}/${K8S_BINARY}`
For example, the kubectl client for Linux can be downloaded using the command:
`curl -sSL "http://storage.googleapis.com/kubernetes-release/release/v1.2.4/bin/linux/amd64/kubectl"`
The `contrib/kubernetes/testscripts/10_setup_kubectl.sh` script can be stored in the same directory as
kubectl to set up kubectl to communicate with kubernetes running on the localhost.
#### Launch a kubernetes service and expose the service
The following commands will create a kubernetes namespace "demo",
launch an nginx service in the namespace, and expose the service on port 80:
~~~
$ ./kubectl create namespace demo
$ ./kubectl get namespace
$ ./kubectl run mynginx --namespace=demo --image=nginx
$ ./kubectl get deployment --namespace=demo
$ ./kubectl expose deployment mynginx --namespace=demo --port=80
$ ./kubectl get service --namespace=demo
~~~
The script `.travis/kubernetes/20_setup_k8s_services.sh` creates a couple of sample namespaces
with services running in those namespaces. The automated kubernetes integration tests in
`test/kubernetes_test.go` depend on these services and namespaces to exist in kubernetes.
#### Launch CoreDNS
Build CoreDNS and launch using this configuration file:
~~~ txt
# Serve on port 53
.:53 {
kubernetes coredns.local {
resyncperiod 5m
endpoint http://localhost:8080
namespaces demo
# Only expose the records for kubernetes objects
# that match this label selector.
# See http://kubernetes.io/docs/user-guide/labels/
# Example selector below only exposes objects tagged as
# "application=nginx" in the staging or qa environments.
#labels environment in (staging, qa),application=nginx
}
#cache 180 coredns.local # optionally enable caching
}
~~~
Put it in `~/k8sCorefile` for instance. This configuration file sets up CoreDNS to use the zone
`coredns.local` for the kubernetes services.
The command to launch CoreDNS is:
~~~
$ ./coredns -conf ~/k8sCorefile
~~~
In a separate terminal a DNS query can be issued using dig:
~~~
$ dig @localhost mynginx.demo.coredns.local
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 47614
;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;mynginx.demo.coredns.local. IN A
;; ANSWER SECTION:
mynginx.demo.coredns.local. 0 IN A 10.0.0.10
;; Query time: 2 msec
;; SERVER: ::1#53(::1)
;; WHEN: Thu Jun 02 11:07:18 PDT 2016
;; MSG SIZE rcvd: 71
~~~
TODO(miek|...): below this line file bugs or issues and cleanup:
## Implementation Notes/Ideas
### Basic Zone Mapping
The middleware is configured with a "zone" string. For
example: "zone = coredns.local".
The Kubernetes service "myservice" running in "mynamespace" would map
to: "myservice.mynamespace.coredns.local".
The middleware should publish an A record for that service and a service record.
If multiple zone names are specified, the records for kubernetes objects are
exposed in all listed zones.
For example:
```
# Serve on port 53
.:53 {
# use kubernetes middleware for domain "coredns.local"
kubernetes coredns.local {
# Use url for k8s API endpoint
endpoint http://localhost:8080
}
# Perform DNS response caching for the coredns.local zone
# Cache timeout is specified by an integer argument in seconds
# (This works for the kubernetes middleware.)
#cache 20 coredns.local
#cache 160 coredns.local
}
```
### Internal IP or External IP?
* Should the Corefile configuration allow control over whether the internal IP or external IP is exposed?
* If the Corefile configuration allows control over internal IP or external IP, then the config should allow users to control the precedence.
## Wildcards
For example, consider a service "myservice" running in namespace "mynamespace" with internal IP "10.0.0.100" and external IP "1.2.3.4".
This service could be published as:
| Corefile directive | Result |
|------------------------------|---------------------|
| iporder = internal | 10.0.0.100 |
| iporder = external | 1.2.3.4 |
| iporder = external, internal | 1.2.3.4, 10.0.0.100 |
| iporder = internal, external | 10.0.0.100, 1.2.3.4 |
| _no directive_ | 10.0.0.100, 1.2.3.4 |
Some query labels accept a wildcard value to match any value.
If a label is a valid wildcard (\*, or the word "any"), then that label will match
all values. The labels that accept wildcards are:
* _service_ in an `A` record request: _service_.namespace.svc.zone.
* e.g. `*.ns.svc.myzone.local`
* _namespace_ in an `A` record request: service._namespace_.svc.zone.
* e.g. `nginx.*.svc.myzone.local`
* _port_ and/or _protocol_ in an `SRV` record request: `_port._protocol`.service.namespace.svc.zone.
* e.g. `_http.*.service.ns.svc.`
* multiple wildcards are allowed in a single query.
* e.g. `A` Request `*.*.svc.zone.` or `SRV` request `*.*.*.*.svc.zone.`
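A tiny shell sketch of the wildcard test described above (the function name is illustrative, not from the middleware code):

```shell
# A label is treated as a wildcard if it is "*" or the word "any".
is_wildcard() {
  case "$1" in
    "*"|any) return 0 ;;
    *)       return 1 ;;
  esac
}

is_wildcard "*" && echo "wildcard"
is_wildcard "nginx" || echo "literal"
```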
### Wildcards
Publishing DNS records for singleton services isn't very interesting. Service
names are unique within a k8s namespace, so multiple services will commonly
be run with a structured naming scheme.
For example, running multiple nginx services under the names:
| Service name |
|--------------|
| c1.nginx |
| c2.nginx |
or:
| Service name |
|--------------|
| nginx.c3 |
| nginx.c4 |
A DNS query with wildcard support for "nginx" in these examples should
return the IP addresses for all services with "nginx" in the service name.
TBD:
* How does this relate to the k8s load-balancer configuration?
## TODO
* SkyDNS compatibility/equivalency:
* Kubernetes packaging and execution
* Automate packaging to allow executing in Kubernetes. That is, add Docker
container build as target in Makefile. Also include anything else needed
to simplify launch as the k8s DNS service.
Note: Dockerfile already exists in coredns repo to build the docker image.
This work item should identify how to pass configuration and run as a SkyDNS
replacement.
* Identify any kubernetes changes necessary to use coredns as k8s DNS server. That is,
how do we consume the "--cluster-dns=" and "--cluster-domain=" arguments.
* Work out how to pass CoreDNS configuration via kubectl command line and yaml
service definition file.
* Ensure that resolver in each kubernetes container is configured to use
coredns instance.
* Update kubernetes middleware documentation to describe running CoreDNS as a
SkyDNS replacement. (Include descriptions of different ways to pass CoreFile
to coredns command.)
* Remove dependency on healthz for health checking in
`kubernetes-rc.yaml` file.
* Expose load-balancer IP addresses.
* Calculate SRV priority based on number of instances running.
(See SkyDNS README.md)
* Functional work
* (done. '?' not supported yet) ~~Implement wildcard-based lookup. Minimally support `*`, consider `?` as well.~~
* (done) ~~Note from Miek on PR 181: "SkyDNS also supports the word `any`.~~
* Implement SkyDNS-style synthetic zones such as "svc" to group k8s objects. (This
should be optional behavior.) Also look at "pod" synthetic zones.
* Implement test cases for SkyDNS equivalent functionality.
* SkyDNS functionality, as listed in SkyDNS README: https://github.com/kubernetes/kubernetes/blob/release-1.2/cluster/addons/dns/README.md
* Expose pods and srv objects.
* A records in form of `pod-ip-address.my-namespace.cluster.local`.
For example, a pod with ip `1.2.3.4` in the namespace `default`
with a dns name of `cluster.local` would have an entry:
`1-2-3-4.default.pod.cluster.local`.
* SRV records in form of
`_my-port-name._my-port-protocol.my-namespace.svc.cluster.local`.
* CNAME records for both regular services and headless services.
See SkyDNS README.
* A Records and hostname Based on Pod Annotations (k8s beta 1.2 feature).
See SkyDNS README.
* Note: the embedded IP and embedded port record names are weird. I
would need to know the IP/port in order to create the query to lookup
the name. Presumably these are intended for wildcard queries.
* Performance
* Improve lookup to reduce size of query result obtained from k8s API.
(namespace-based?, other ideas?)
* Additional features:
* Reverse IN-ADDR entries for services. (Is there any value in supporting
reverse lookup records?) (need tests, functionality should work based on @aledbf's code.)
* (done) ~~How to support label specification in Corefile to allow use of labels to
indicate zone? For example, the following
configuration exposes all services labeled for the "staging" environment
and tenant "customerB" in the zone "customerB.stage.local":
kubernetes customerB.stage.local {
# Use url for k8s API endpoint
endpoint http://localhost:8080
labels environment in (staging),tenant=customerB
}
Note: label specification/selection is a killer feature for segmenting
test vs staging vs prod environments.~~ Need label testing.
* Implement IP selection and ordering (internal/external). Related to
wildcards and SkyDNS use of CNAMES.
* Flatten service and namespace names to valid DNS characters. (service names
and namespace names in k8s may use uppercase and non-DNS characters. Implement
flattening to lower case and mapping of non-DNS characters to DNS characters
in a standard way.)
* Expose arbitrary kubernetes repository data as TXT records?
* DNS Correctness
* Do we need to generate synthetic zone records for namespaces?
* Do we need to generate synthetic zone records for the skydns synthetic zones?
* Test cases
* Test with CoreDNS caching. CoreDNS caching for DNS response is working
using the `cache` directive. Tested working using 20s cache timeout
and A-record queries. Automate testing with cache in place.
* Automate CoreDNS performance tests. Initially for zone files, and for
pre-loaded k8s API cache. With and without CoreDNS response caching.
* Try to get rid of kubernetes launch scripts by moving operations into
.travis.yml file.
* Find root cause of timing condition that results in no data returned to
test client when running k8s integration tests. Current work-around is a
nasty hack of waiting 5 seconds after setting up test server before performing
client calls. (See hack in test/kubernetes_test.go)


@ -91,7 +91,7 @@ func kubernetesParse(c *caddy.Controller) (*Kubernetes, error) {
for _, cidrStr := range args {
_, cidr, err := net.ParseCIDR(cidrStr)
if err != nil {
return nil, errors.New(c.Val() + " contains an invalid cidr: " + cidrStr)
return nil, errors.New("Invalid cidr: " + cidrStr)
}
k8s.ReverseCidrs = append(k8s.ReverseCidrs, *cidr)
@ -106,7 +106,7 @@ func kubernetesParse(c *caddy.Controller) (*Kubernetes, error) {
case PodModeDisabled, PodModeInsecure, PodModeVerified:
k8s.PodMode = args[0]
default:
return nil, errors.New("pods must be one of: disabled, verified, insecure")
return nil, errors.New("Value for pods must be one of: disabled, verified, insecure")
}
continue
}


@ -1,6 +1,7 @@
package kubernetes
import (
"net"
"strings"
"testing"
"time"
@ -9,6 +10,11 @@ import (
unversionedapi "k8s.io/client-go/1.5/pkg/api/unversioned"
)
func parseCidr(cidr string) net.IPNet {
_, ipnet, _ := net.ParseCIDR(cidr)
return *ipnet
}
func TestKubernetesParse(t *testing.T) {
tests := []struct {
description string // Human-facing description of test case
@ -19,6 +25,8 @@ func TestKubernetesParse(t *testing.T) {
expectedNSCount int // expected count of namespaces.
expectedResyncPeriod time.Duration // expected resync period value
expectedLabelSelector string // expected label selector value
expectedPodMode string
expectedCidrs []net.IPNet
}{
// positive
{
@ -30,6 +38,8 @@ func TestKubernetesParse(t *testing.T) {
0,
defaultResyncPeriod,
"",
defaultPodMode,
nil,
},
{
"kubernetes keyword with multiple zones",
@ -40,6 +50,8 @@ func TestKubernetesParse(t *testing.T) {
0,
defaultResyncPeriod,
"",
defaultPodMode,
nil,
},
{
"kubernetes keyword with zone and empty braces",
@ -51,6 +63,8 @@ func TestKubernetesParse(t *testing.T) {
0,
defaultResyncPeriod,
"",
defaultPodMode,
nil,
},
{
"endpoint keyword with url",
@ -63,6 +77,8 @@ func TestKubernetesParse(t *testing.T) {
0,
defaultResyncPeriod,
"",
defaultPodMode,
nil,
},
{
"namespaces keyword with one namespace",
@ -75,6 +91,8 @@ func TestKubernetesParse(t *testing.T) {
1,
defaultResyncPeriod,
"",
defaultPodMode,
nil,
},
{
"namespaces keyword with multiple namespaces",
@ -87,6 +105,8 @@ func TestKubernetesParse(t *testing.T) {
2,
defaultResyncPeriod,
"",
defaultPodMode,
nil,
},
{
"resync period in seconds",
@ -99,6 +119,8 @@ func TestKubernetesParse(t *testing.T) {
0,
30 * time.Second,
"",
defaultPodMode,
nil,
},
{
"resync period in minutes",
@ -111,6 +133,8 @@ func TestKubernetesParse(t *testing.T) {
0,
15 * time.Minute,
"",
defaultPodMode,
nil,
},
{
"basic label selector",
@ -123,6 +147,8 @@ func TestKubernetesParse(t *testing.T) {
0,
defaultResyncPeriod,
"environment=prod",
defaultPodMode,
nil,
},
{
"multi-label selector",
@ -135,6 +161,8 @@ func TestKubernetesParse(t *testing.T) {
0,
defaultResyncPeriod,
"application=nginx,environment in (production,qa,staging)",
defaultPodMode,
nil,
},
{
"fully specified valid config",
@ -150,6 +178,8 @@ func TestKubernetesParse(t *testing.T) {
2,
15 * time.Minute,
"application=nginx,environment in (production,qa,staging)",
defaultPodMode,
nil,
},
// negative
{
@ -161,6 +191,8 @@ func TestKubernetesParse(t *testing.T) {
-1,
defaultResyncPeriod,
"",
defaultPodMode,
nil,
},
{
"kubernetes keyword without a zone",
@ -171,6 +203,8 @@ func TestKubernetesParse(t *testing.T) {
0,
defaultResyncPeriod,
"",
defaultPodMode,
nil,
},
{
"endpoint keyword without an endpoint value",
@ -183,6 +217,8 @@ func TestKubernetesParse(t *testing.T) {
-1,
defaultResyncPeriod,
"",
defaultPodMode,
nil,
},
{
"namespace keyword without a namespace value",
@ -195,6 +231,8 @@ func TestKubernetesParse(t *testing.T) {
-1,
defaultResyncPeriod,
"",
defaultPodMode,
nil,
},
{
"resyncperiod keyword without a duration value",
@ -207,6 +245,8 @@ func TestKubernetesParse(t *testing.T) {
0,
0 * time.Minute,
"",
defaultPodMode,
nil,
},
{
"resync period no units",
@ -219,6 +259,8 @@ func TestKubernetesParse(t *testing.T) {
0,
0 * time.Second,
"",
defaultPodMode,
nil,
},
{
"resync period invalid",
@ -231,6 +273,8 @@ func TestKubernetesParse(t *testing.T) {
0,
0 * time.Second,
"",
defaultPodMode,
nil,
},
{
"labels with no selector value",
@ -243,6 +287,8 @@ func TestKubernetesParse(t *testing.T) {
0,
0 * time.Second,
"",
defaultPodMode,
nil,
},
{
"labels with invalid selector value",
@ -255,6 +301,98 @@ func TestKubernetesParse(t *testing.T) {
0,
0 * time.Second,
"",
defaultPodMode,
nil,
},
// pods disabled
{
"pods disabled",
`kubernetes coredns.local {
pods disabled
}`,
false,
"",
1,
0,
defaultResyncPeriod,
"",
PodModeDisabled,
nil,
},
// pods insecure
{
"pods insecure",
`kubernetes coredns.local {
pods insecure
}`,
false,
"",
1,
0,
defaultResyncPeriod,
"",
PodModeInsecure,
nil,
},
// pods verified
{
"pods verified",
`kubernetes coredns.local {
pods verified
}`,
false,
"",
1,
0,
defaultResyncPeriod,
"",
PodModeVerified,
nil,
},
// pods invalid
{
"invalid pods mode",
`kubernetes coredns.local {
pods giant_seed
}`,
true,
"Value for pods must be one of: disabled, verified, insecure",
-1,
0,
defaultResyncPeriod,
"",
PodModeVerified,
nil,
},
// cidrs ok
{
"valid cidrs",
`kubernetes coredns.local {
cidrs 10.0.0.0/24 10.0.1.0/24
}`,
false,
"",
1,
0,
defaultResyncPeriod,
"",
defaultPodMode,
[]net.IPNet{parseCidr("10.0.0.0/24"), parseCidr("10.0.1.0/24")},
},
// cidrs ok
{
"Invalid cidr: hard",
`kubernetes coredns.local {
cidrs hard dry
}`,
true,
"Invalid cidr: hard",
-1,
0,
defaultResyncPeriod,
"",
defaultPodMode,
nil,
},
}
@ -312,5 +450,22 @@ func TestKubernetesParse(t *testing.T) {
t.Errorf("Test %d: Expected kubernetes controller to be initialized with label selector '%s'. Instead found selector '%s' for input '%s'", i, test.expectedLabelSelector, foundLabelSelectorString, test.input)
}
}
// Pods
foundPodMode := k8sController.PodMode
if foundPodMode != test.expectedPodMode {
t.Errorf("Test %d: Expected kubernetes controller to be initialized with pod mode '%s'. Instead found pod mode '%s' for input '%s'", i, test.expectedPodMode, foundPodMode, test.input)
}
// Cidrs
foundCidrs := k8sController.ReverseCidrs
if len(foundCidrs) != len(test.expectedCidrs) {
t.Errorf("Test %d: Expected kubernetes controller to be initialized with %d cidrs. Instead found %d cidrs for input '%s'", i, len(test.expectedCidrs), len(foundCidrs), test.input)
}
for j, cidr := range test.expectedCidrs {
if cidr.String() != foundCidrs[j].String() {
t.Errorf("Test %d: Expected kubernetes controller to be initialized with cidr '%s'. Instead found cidr '%s' for input '%s'", i, test.expectedCidrs[j].String(), foundCidrs[j].String(), test.input)
}
}
}
}


@ -255,6 +255,66 @@ var dnsTestCasesPodsVerified = []test.Case{
},
}
var dnsTestCasesCidrReverseZone = []test.Case{
{
Qname: "123.0.0.10.in-addr.arpa.", Qtype: dns.TypePTR,
Rcode: dns.RcodeSuccess,
Answer: []dns.RR{},
},
{
Qname: "100.0.0.10.in-addr.arpa.", Qtype: dns.TypePTR,
Rcode: dns.RcodeSuccess,
Answer: []dns.RR{
test.PTR("100.0.0.10.in-addr.arpa. 303 IN PTR svc-1-a.test-1.svc.cluster.local."),
},
},
{
Qname: "110.0.0.10.in-addr.arpa.", Qtype: dns.TypePTR,
Rcode: dns.RcodeSuccess,
Answer: []dns.RR{
test.PTR("115.0.0.10.in-addr.arpa. 303 IN PTR svc-1-b.test-1.svc.cluster.local."),
},
},
{
Qname: "115.0.0.10.in-addr.arpa.", Qtype: dns.TypePTR,
Rcode: dns.RcodeSuccess,
Answer: []dns.RR{
test.PTR("115.0.0.10.in-addr.arpa. 303 IN PTR svc-c.test-1.svc.cluster.local."),
},
},
}
var dnsTestCasesPartialCidrReverseZone = []test.Case{
{
// In exposed range, record not present = OK + No data
Qname: "99.0.0.10.in-addr.arpa.", Qtype: dns.TypePTR,
Rcode: dns.RcodeSuccess,
Answer: []dns.RR{},
},
{
// In exposed range, record present = OK + Data
Qname: "100.0.0.10.in-addr.arpa.", Qtype: dns.TypePTR,
Rcode: dns.RcodeSuccess,
Answer: []dns.RR{
test.PTR("100.0.0.10.in-addr.arpa. 303 IN PTR svc-1-a.test-1.svc.cluster.local."),
},
},
{
// In exposed range, record present = OK + Data
Qname: "110.0.0.10.in-addr.arpa.", Qtype: dns.TypePTR,
Rcode: dns.RcodeSuccess,
Answer: []dns.RR{
test.PTR("115.0.0.10.in-addr.arpa. 303 IN PTR svc-1-b.test-1.svc.cluster.local."),
},
},
{
// Out of exposed range, record present = pass to next middleware (not existing in test) = FAIL
Qname: "115.0.0.10.in-addr.arpa.", Qtype: dns.TypePTR,
Rcode: dns.RcodeServerFailure,
Answer: []dns.RR{},
},
}
func createTestServer(t *testing.T, corefile string) (*caddy.Instance, string) {
server, err := CoreDNSServer(corefile)
if err != nil {
@ -275,7 +335,7 @@ func doIntegrationTests(t *testing.T, corefile string, testCases []test.Case) {
// Work-around for timing condition that results in no-data being returned in
// test environment.
time.Sleep(5 * time.Second)
time.Sleep(1 * time.Second)
for _, tc := range testCases {
@ -340,3 +400,27 @@ func TestKubernetesIntegrationPodsVerified(t *testing.T) {
`
doIntegrationTests(t, corefile, dnsTestCasesPodsVerified)
}
func TestKubernetesIntegrationCidrReverseZone(t *testing.T) {
corefile :=
`.:0 {
kubernetes cluster.local {
endpoint http://localhost:8080
namespaces test-1
cidrs 10.0.0.0/24
}
`
doIntegrationTests(t, corefile, dnsTestCasesCidrReverseZone)
}
func TestKubernetesIntegrationPartialCidrReverseZone(t *testing.T) {
corefile :=
`.:0 {
kubernetes cluster.local {
endpoint http://localhost:8080
namespaces test-1
cidrs 10.0.0.96/28 10.0.0.120/32
}
`
doIntegrationTests(t, corefile, dnsTestCasesPartialCidrReverseZone)
}