Chapter 7. Integrating with others
Until now, we’ve looked at Kong’s powerful functionality. Kong clearly excels at managing APIs, but it can’t cover all of DevOps by itself. For that reason, in this chapter we’ll introduce some integrations that help Kong fit into a DevOps workflow efficiently.
Additionally, we’ll cover some integrations that help you configure Kong without typing commands in the terminal.
All of the code for the sample project in this book can be found here.
Deployment
So far we’ve used Kong by simply installing it on the host, but in some cases that isn’t the best way.
For example, say you’re running lots of instances and you need to install and configure Kong on every host. Those tasks are repeated over and over and consume a lot of time.
Or say you have to scale out your servers, and each one has to run in an identical environment. Reproducing the same environment is pretty hard because the setup depends on the host operating system.
In situations like this, there are better ways to deploy, and two candidates are Docker and Kubernetes. In this section, we’ll look at how to deploy our services and Kong with these options, and how to configure them.
Docker
Docker is an open source virtualization platform based on Linux containers. A container is an isolated process that encapsulates an application and its dependencies. Because of this, Docker brings lots of benefits to DevOps.
- Containers share resources with the host OS - Compared to a virtual machine, resources are used much more efficiently.
- Isolated from the host OS - A container doesn’t affect the host, which gives it great portability.
- Containers share the host’s Linux kernel - Thanks to this lightweight architecture, they start blazingly fast.
- Easy to build, ship, and run distributed applications - Dependencies and configuration are contained in one image, so no extra installation or configuration is needed.
Installation
Issue the following command to install Docker.
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
sudo apt-get install -y docker-ce
The docker command needs root privileges by default. But if you don’t want to type sudo with each command, adding your user to the docker group allows you to issue docker commands without sudo.
sudo usermod -aG docker ${USER}
su - ${USER}
To verify that docker is installed properly, run a simple Docker image, hello-world.
docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
ca4f61b1923c: Pull complete
Digest: sha256:ca0eeb6fb05351dfc8759c20733c91def84cb8007aa89a5bf606bc8b315b9fc7
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
...
Deploy services
To deploy a container with Docker, we first have to build a container image with a Dockerfile. A Dockerfile describes how to build a container image, including the details of setting up the application. Below is an example of a Dockerfile.
FROM python:alpine
WORKDIR /usr/src/app
COPY . .
RUN pip install --no-cache-dir -r requirements.txt
ENTRYPOINT ["python"]
CMD ["app.py"]
We’ve created a Dockerfile for you in each of cinema’s microservice folders. Move into the movies microservice folder, and you can start building the container image.
# from your cinema working directory, change directory to each microservice
cd movies
docker build -t "cinema:movies" .
Sending build context to Docker daemon 7.168kB
Step 1/6 : FROM python:alpine
---> 29b5ce58cfbc
Step 2/6 : WORKDIR /usr/src/app
---> 5640e81529c0
Step 3/6 : COPY . .
---> 5c62e72f7d3d
Step 4/6 : RUN pip install --no-cache-dir -r requirements.txt
---> Running in 5a7d78e988af
Collecting flask==0.12 (from -r requirements.txt (line 1))
...... output truncated ......
Removing intermediate container 5a7d78e988af
---> 83cc65606f99
Step 5/6 : ENTRYPOINT ["python"]
---> Running in 48ad756f3ff8
Removing intermediate container 48ad756f3ff8
---> 0732b6c1df93
Step 6/6 : CMD ["app.py"]
---> Running in f6babb5dbbbe
Removing intermediate container f6babb5dbbbe
---> c78beadd2f69
Successfully built c78beadd2f69
Successfully tagged cinema:movies
You’ve just created a Docker image tagged cinema:movies. To start the Docker image:
docker run -d --name cinema_movies \
-p <movies_port>:5000 \
cinema:movies
By issuing the command below, verify that the cinema movies microservice is running properly.
curl localhost:<movies_port>
{
  "subresource_uris": {
    "movie": "/movies/<id>",
    "movies": "/movies"
  },
  "uri": "/"
}
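If you script these checks (say, in CI), you may prefer to assert on the response instead of reading it by eye. Below is a minimal sketch; the response is inlined from the output above so it runs without the container, whereas a real check would capture curl localhost:<movies_port> instead.

```shell
# Sample response from the movies service, inlined for illustration;
# in practice: response=$(curl -s localhost:<movies_port>)
response='{"subresource_uris":{"movie":"/movies/<id>","movies":"/movies"},"uri":"/"}'

# Assert the root document advertises the movies collection endpoint.
if echo "$response" | grep -q '"movies":"/movies"'; then
  echo "movies service OK"
else
  echo "movies service NOT OK"
fi
```

A crude string match like this is enough for a smoke test; a stricter check would parse the JSON.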
Let’s run all the other microservices. Be aware that the users container runs a bit differently from the others, since the users microservice depends on the movies and bookings services.
# from your cinema working directory
cd bookings
docker build -t "cinema:bookings" .
docker run -d --name cinema_bookings \
  -p <bookings_port>:5000 \
  cinema:bookings

# from your cinema working directory
cd showtimes
docker build -t "cinema:showtimes" .
docker run -d --name cinema_showtimes \
  -p <showtimes_port>:5000 \
  cinema:showtimes

# from your cinema working directory
cd users
docker build -t "cinema:users" .
docker run -d --name cinema_users \
  --link cinema_bookings:bookings \
  --link cinema_movies:movies \
  -p <users_port>:5000 \
  cinema:users
By default, containers are isolated from the host and from each other, so they can’t communicate. To make containers communicate, use the --link <name or id>:alias option for the target container when you run it. The --link cinema_bookings:bookings option links the cinema_bookings container with the current container and makes it reachable under the bookings alias.
More details about docker commands can be found here.
Deploy Kong
Before you run Kong, prepare your database with Docker or create your own. Here, we’ll provision a database with Docker.
docker run -d --name kong-database \
  -p 5432:5432 \
  -e "POSTGRES_USER=kong" \
  -e "POSTGRES_DB=kong" \
  postgres:9.5

docker run --rm \
  --link kong-database:kong-database \
  -e "KONG_DATABASE=postgres" \
  -e "KONG_PG_HOST=kong-database" \
  kong kong migrations up
migrating core for database kong
core migrated up to: 2015-01-12-175310_skeleton
...... output truncated ......
oauth2 migrated up to: 2017-10-11-oauth2_new_refresh_token_ttl_config_value
58 migrations ran
After provisioning your database, run the Kong Docker container with each microservice linked to it.
docker run -d --name kong \
--link kong-database:kong-database \
--link cinema_showtimes:showtimes \
--link cinema_bookings:bookings \
--link cinema_movies:movies \
--link cinema_users:users \
-e "KONG_DATABASE=postgres" \
-e "KONG_PG_HOST=kong-database" \
-e "KONG_PROXY_ACCESS_LOG=/dev/stdout" \
-e "KONG_ADMIN_ACCESS_LOG=/dev/stdout" \
-e "KONG_PROXY_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_ERROR_LOG=/dev/stderr" \
-e "KONG_ADMIN_LISTEN=0.0.0.0:8001" \
-e "KONG_ADMIN_LISTEN_SSL=0.0.0.0:8444" \
-p 8000:8000 \
-p 8443:8443 \
-p 8001:8001 \
-p 8444:8444 \
kong
To check that Kong has deployed successfully, issue the command below.
> curl localhost:8000
{"message":"no API found with those values"}
Creating an API
From here you can use Kong through ports 8000 (proxy) and 8001 (admin). As we linked each microservice with an alias, each microservice can be reached at <alias>:5000. To verify it, run curl with docker exec to pass the command into the container.
docker exec -it kong curl movies:5000
{
  "subresource_uris": {
    "movie": "/movies/<id>",
    "movies": "/movies"
  },
  "uri": "/"
}
Each of the services is accessible through <alias>:5000. For this reason, when adding an API to Kong in Docker, we must specify each upstream URL as <alias>:5000.
curl -i -X POST http://localhost:8001/apis/ \
  --data 'name=movies' \
  --data 'hosts=cinema.com' \
  --data 'uris=/movies' \
  --data 'upstream_url=http://movies:5000'
Let’s check whether it works correctly with the upstream_url.
curl -X GET http://localhost:8000/movies \
  --header 'Host: cinema.com'
{
  "subresource_uris": {
    "movie": "/movies/<id>",
    "movies": "/movies"
  },
  "uri": "/"
}
Congratulations! We’ve just created the first API with Kong in the Docker environment!
Kubernetes
Kubernetes is a container orchestration tool for automating the deployment, scaling, and management of containerized applications. By grouping containers into logical units, managing them becomes much easier. Kubernetes is built upon Google’s 15 years of experience running production workloads.
Kubernetes can help you in several ways.
To get started, you first need to create a Kubernetes cluster. Options include using a cloud provider’s Kubernetes service, creating your own cluster, or launching a pre-configured cluster with minikube.
In this section, we’ll use minikube for the Kubernetes cluster and launch our microservices and Kong on it.
Installation
To run commands against Kubernetes, the command-line tool kubectl is required. Issue the commands below to install kubectl.
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.9.3/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
To verify that kubectl is installed correctly, issue kubectl with no arguments. It’ll describe the commands available.
kubectl controls the Kubernetes cluster manager.
Find more information at https://github.com/kubernetes/kubernetes.
Basic Commands (Beginner):
  create         Create a resource from a file or from stdin.
  expose         Take a replication controller, service, deployment or pod and expose it as a new Kubernetes Service
  run            Run a particular image on the cluster
  set            Set specific features on objects
...... output truncated ......
Use "kubectl <command> --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all commands).
Since we don’t have a Kubernetes cluster yet, install minikube before sending commands with kubectl.
Before installing minikube, you need a hypervisor on your host.
Available options are VirtualBox, VMware, etc.
curl -Lo minikube https://storage.googleapis.com/minikube/releases/v0.25.0/minikube-linux-amd64
chmod +x minikube
sudo mv minikube /usr/local/bin/
To start the Kubernetes cluster, issue minikube start.
Starting local Kubernetes v1.9.0 cluster...
Starting VM...
Downloading Minikube ISO
 142.22 MB / 142.22 MB [============================================] 100.00% 0s
Getting VM IP address...
Moving files into cluster...
Downloading localkube binary
 162.41 MB / 162.41 MB [============================================] 100.00% 0s
Setting up certs...
Connecting to cluster...
Setting up kubeconfig...
Starting cluster components...
Kubectl is now configured to use the cluster.
Loading cached images from config file.
The command downloads a new virtual machine image through the hypervisor and creates the Kubernetes host. To use the minikube Kubernetes cluster and verify it, issue the commands below.
> kubectl config use-context minikube
Switched to context "minikube".

> kubectl cluster-info
Kubernetes master is running at https://192.168.99.101:8443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Deploy your services
Before we deploy, let’s briefly look at how Kubernetes works.
A Kubernetes cluster is made up of multiple nodes, each of which is a separate host, and nodes run pods, which are logical groups of application containers. Pods are deployed through a Deployment, and the deployed containers are made accessible through a Service, which exposes them publicly.
Here, we’ll deploy our cinema microservices with Kubernetes and access them through a Service. Since minikube can’t fetch images from the host’s Docker daemon, we’ll rebuild the image inside minikube’s Docker environment and deploy it there.
# from your cinema working directory, change directory to each microservice
cd movies
eval $(minikube docker-env)
docker build -t cinema:movies .
We’ve just built an image for minikube. Now to deploy our service, create a file movies.yaml and fill in the content like this.
# movies.yaml
apiVersion: v1
kind: Service
metadata:
  name: movies
spec:
  type: LoadBalancer
  ports:
  - port: 5000
  selector:
    app: movies
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cinema-movies
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: movies
        app: movies
    spec:
      containers:
      - name: cinema-movies
        image: cinema:movies
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
This creates a Deployment named cinema-movies and exposes its container through the movies Service. To launch the Deployment, issue kubectl create.
kubectl create -f movies.yaml
service "movies" created
deployment "cinema-movies" created
To verify that our service has been deployed correctly, issue kubectl get all to check all of the deployments and services.
kubectl get all
NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/cinema-movies   1         1         1            1           6s
...... output truncated ......
NAME                              READY     STATUS    RESTARTS   AGE
po/cinema-movies-dc8fd748-2bqbk   1/1       Running   0          6s
...... output truncated ......
NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
svc/movies   LoadBalancer   10.99.224.193   <pending>     5000:32498/TCP   16m
...... output truncated ......
To access our movies microservice container, issue a command with minikube service <svc_name> --url.
curl $(minikube service movies --url)
{
  "subresource_uris": {
    "movie": "/movies/<id>",
    "movies": "/movies"
  },
  "uri": "/"
}
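The other microservices follow the same pattern. As a sketch (assuming you’ve built the image inside minikube with eval $(minikube docker-env) and docker build -t cinema:bookings ., just as with movies), a hypothetical bookings.yaml mirroring movies.yaml might look like:

```yaml
# bookings.yaml -- hypothetical manifest mirroring movies.yaml
apiVersion: v1
kind: Service
metadata:
  name: bookings
spec:
  type: LoadBalancer
  ports:
  - port: 5000
  selector:
    app: bookings
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: cinema-bookings
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: bookings
        app: bookings
    spec:
      containers:
      - name: cinema-bookings
        image: cinema:bookings
        imagePullPolicy: Never
        ports:
        - containerPort: 5000
```

Apply it with kubectl create -f bookings.yaml; showtimes and users follow the same shape.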
Deploy Kong
Follow the instructions here to start the Kong Kubernetes deployment.
git clone git@github.com:Kong/kong-dist-kubernetes.git
cd kong-dist-kubernetes
kubectl create -f postgres.yaml
kubectl create -f kong_migration_postgres.yaml
kubectl create -f kong_postgres.yaml
This creates a postgres Kubernetes deployment, prepares the datastore with kong_migration_postgres, and finally deploys Kong. To check whether Kong is deployed correctly, issue kubectl get all.
kubectl get all
NAME             DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/kong-rc   3         3         3            3           25s
...... output truncated ......
NAME                          READY     STATUS    RESTARTS   AGE
po/kong-rc-6bc75bc959-n224l   1/1       Running   0          25s
po/kong-rc-6bc75bc959-pkr9q   1/1       Running   0          25s
po/kong-rc-6bc75bc959-vwgs4   1/1       Running   0          25s
po/postgres-z9nnp             1/1       Running   0          46m
NAME                 TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
svc/kong-admin       LoadBalancer   10.97.39.131     <pending>     8001:32292/TCP   25s
svc/kong-admin-ssl   LoadBalancer   10.109.226.134   <pending>     8444:30999/TCP   25s
svc/kong-proxy       LoadBalancer   10.100.167.232   <pending>     8000:31927/TCP   25s
svc/kong-proxy-ssl   LoadBalancer   10.106.129.233   <pending>     8443:31133/TCP   25s
...... output truncated ......
You can see that three instances of Kong are running, one in each pod. By issuing the commands below, you can access kong-proxy and kong-admin.
> curl $(minikube service kong-proxy --url)
{"message":"no API found with those values"}

> curl $(minikube service kong-admin --url)
{
  "version": "0.12.1",
  "plugins": {
    ...... output truncated ......
  },
  "tagline": "Welcome to kong",
  "configuration": {
    ...... output truncated ......
  },
  "lua_version": "LuaJIT 2.1.0-beta3",
  "hostname": "Kong"
}
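In scripts it’s handy to pull individual fields out of that response. Below is a minimal sketch; a trimmed sample of the response above is inlined so it runs without a cluster, whereas a live check would capture curl -s $(minikube service kong-admin --url) instead.

```shell
# Trimmed sample of the kong-admin response, inlined for illustration;
# in practice: response=$(curl -s $(minikube service kong-admin --url))
response='{"version":"0.12.1","tagline":"Welcome to kong","hostname":"Kong"}'

# Extract the version field using python3's json module.
version=$(echo "$response" | python3 -c 'import sys, json; print(json.load(sys.stdin)["version"])')
echo "Kong version: $version"
```

Proper JSON parsing is safer than grepping here, since field order in the response is not guaranteed.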
Creating an API
From now on you can access Kong through $(minikube service <svc_name> --url). Similar to Docker, each microservice is accessible through <svc_name>:5000. For this reason, when adding an API to Kong in Kubernetes, we must specify each upstream_url as <svc_name>:5000.
curl -i -X POST $(minikube service kong-admin --url)/apis/ \
  --data 'name=movies' \
  --data 'hosts=cinema.com' \
  --data 'uris=/movies' \
  --data 'upstream_url=http://movies:5000'
Let’s check whether it works correctly with the upstream_url.
curl -X GET $(minikube service kong-proxy --url)/movies \
  --header 'Host: cinema.com'
{
  "subresource_uris": {
    "movie": "/movies/<id>",
    "movies": "/movies"
  },
  "uri": "/"
}
Congratulations! We’ve just created the first API with Kong in a Kubernetes environment!
Monitoring & analysis
Downtime of an API has a far-reaching impact on your business. Users will drop from your service, and after an uncomfortable experience your service’s reputation might take a hit. Resolving a service defect is a time-critical job, and without any visibility into your APIs, diagnosing and solving an issue can take a long time. By monitoring APIs in real time, issues surface quickly and can easily be taken care of. Beyond resolving problems, analyzing the service logs lets you identify and prevent errors before they break out.
With Kong managing your APIs, logging becomes much easier. Without it, users have to add a logging module to each service and monitor them separately to check their status. By putting Kong in front of each service, you only have to monitor this single entry point. It collects all of the traffic to your services, so you only have to care about Kong.
In this section, we’ll find out how to monitor Kong in real time and how to visualize and analyze with these logs.
Datadog (not open source)
Datadog is a monitoring and analytics service for cloud applications. By collecting all of the metrics from servers, databases, tools, and services, it provides real-time visibility into your entire stack with a unified view. Datadog offers powerful monitoring & analytics with various integrations. For a full list of integrations, go here.
Datadog provides a 14-day free trial for new users with as many servers as you want. Or, if you’re a student, with the Github Student pack you can claim two years of a free Pro account through the Datadog Student pack.
In this section, we’ll look at monitoring Kong with Datadog. Here, we’ll use the latest version of the Datadog agent, v5.21.2.
Installation
To monitor Kong, you need to install the Datadog Agent first. Replace <API_KEY> with your Datadog API key.
# Linux (Debian / Redhat)
export DD_API_KEY=<API_KEY>; bash -c "$(curl -L https://raw.githubusercontent.com/DataDog/dd-agent/master/packaging/datadog-agent/source/install_agent.sh)"
Configuration
After installing it, you need to create a configuration file for Kong.
In the Datadog Agent’s conf.d directory (/etc/dd-agent/conf.d), copy kong.yaml.example and configure it with your Kong instance.
init_config:

instances:
  - kong_status_url: http://localhost:8001/status/
    # For every instance, you need a `kong_status_url` and can optionally supply a list of tags.
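As that comment says, tags are optional per instance. A sketch of a tagged configuration, with tag values made up for illustration:

```yaml
init_config:

instances:
  - kong_status_url: http://localhost:8001/status/
    tags:
      - env:production
      - service:cinema
```

Tags like these let you filter and group the Kong metrics in Datadog’s dashboards.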
After configuring, restart the agent to send Kong metrics to Datadog. To validate your config, run the info command with datadog-agent. It’ll show the result of the Kong check in the Checks section.
sudo /etc/init.d/datadog-agent restart
[ ok ] Restarting datadog-agent (via systemctl): datadog-agent.service.
sudo /etc/init.d/datadog-agent info

  Checks
  ======
    kong
    -------
      - instance #0 [OK]
      - Collected 26 metrics, 0 events & 1 service check
By configuring the Datadog agent, metrics from /status of the Kong admin API will flow into Datadog. But to collect the full list of Kong metrics, you need to configure the Kong Datadog Plugin.
curl -X POST http://localhost:8001/apis/{api}/plugins \
--data "name=datadog" \
--data "config.host=127.0.0.1" \
--data "config.port=8125"
This will collect metrics from each API, such as request.count, http_status_code, latency, etc.
Setting up dashboard
To visualize your metrics, you can use Datadog’s preset Kong dashboard or create your own.
From the Integrations tab on Datadog, search for Kong. Enabling the Kong integration makes the preset Kong dashboard show up in your dashboard list.
You can also customize the dashboard with each API’s collected metrics. Through this, you can visualize your API usage and analyze your APIs.
ELK Stack
ELK is the acronym for three open source projects:
- Elasticsearch: an open source, distributed, RESTful, JSON-based search engine.
- Logstash: a data processing pipeline that ingests data from multiple sources and sends it to Elasticsearch.
- Kibana: a tool to visualize Elasticsearch data with charts and graphs.
The basic workflow of the ELK stack is this:
- Users collect their logs (e.g. service, DB, infrastructure) with logstash and send them to elasticsearch.
- With this collected data, users can search and analyze specific data with elasticsearch.
- With Kibana as a visualization tool, users can visualize their data and monitor it in real time.
Installation
Here, we’ll skip everything related to X-Pack, since explaining it is beyond the scope of monitoring and analyzing Kong.
More details about installing the ELK Stack with its full functionality are here.
Since elasticsearch works as the hub of this stack, we have to install it first.
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
sudo apt-get install apt-transport-https
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-6.x.list
sudo apt-get update && sudo apt-get install elasticsearch
Installing elasticsearch doesn’t start it automatically, so we have to enable and start it:
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
sudo systemctl start elasticsearch.service
To verify it’s running correctly, issue the below command:
curl localhost:9200
{
  "name": "TXEVdGI",
  "cluster_name": "elasticsearch",
  ...... output truncated ......
  "tagline": "You Know, for Search"
}
Elasticsearch is running properly! The next step is Kibana, the visualization tool.
sudo apt-get install kibana
Installing kibana doesn’t start it automatically either, so we have to enable and start it:
sudo systemctl daemon-reload
sudo systemctl enable kibana.service
sudo systemctl start kibana.service
To verify it’s running correctly, open localhost:5601 in your browser.
If you are not running Kibana on your host, you can port forward to access the remote host’s port, e.g. ssh <kibana_host> -L 5601:localhost:5601. Details about port forwarding are here.
Lastly, for collecting data from remote services, install logstash.
sudo apt-get install logstash
Installing logstash doesn’t start it automatically, so we have to enable and start it:
sudo systemctl daemon-reload
sudo systemctl enable logstash.service
sudo systemctl start logstash.service
We’ll skip verifying that logstash is running, since it must be configured before it can run. You can check its status with systemd.
sudo systemctl status logstash.service
Configuration
To start collecting data, we need to create a pipeline for logstash. By default, logstash reads configuration files from the conf.d folder located in the logstash home folder, /etc/logstash.
Create a file named kong.conf in /etc/logstash/conf.d and fill it in like below.
# kong.conf
input {
tcp {
port => 12345
codec => json
}
}
filter {
date {
match => [ "timeMillis", "UNIX_MS" ]
}
}
output {
elasticsearch { hosts => ["localhost:9200"] }
stdout { codec => rubydebug }
}
This configuration makes logstash work in this sequence:
- Listen for TCP requests on port 12345 and parse them with the json codec.
- Modify or format the input in the filter section; here, the timestamp format.
- Send the resulting data to elasticsearch, located at localhost:9200, and print the status of logstash to stdout in rubydebug format.
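The filter section is where a pipeline usually grows. As one sketch of an extension (an assumption for illustration, not part of the book’s project: it supposes Kong’s log events carry a latencies.request field in milliseconds), slow requests could be tagged for easier querying in Kibana:

```conf
filter {
  date {
    match => [ "timeMillis", "UNIX_MS" ]
  }
  # Tag events whose total request latency exceeds one second.
  if [latencies][request] and [latencies][request] > 1000 {
    mutate { add_tag => [ "slow_request" ] }
  }
}
```

With such a tag in place, a Kibana search like tags:slow_request would surface only the slow traffic.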
We’ve just configured logstash, so let’s restart it for the changes to take effect.
sudo systemctl restart logstash.service
And now it’s time to configure Kong. To send Kong logs to logstash, we need to configure Kong with a TCP/UDP logging plugin. Since we’ve configured logstash to collect data from TCP requests, we’ll add the TCP Log plugin here.
curl -X POST http://localhost:8001/plugins \
  --data "name=tcp-log" \
  --data "config.host=localhost" \
  --data "config.port=12345"
{
  "created_at": 1518266093000,
  "config": {
    "host": "localhost",
    "port": 12345,
    "timeout": 10000,
    "keepalive": 60000,
    "tls": false
  },
  "id": "2a984c58-dbc8-4c35-a813-516158437c06",
  "enabled": true,
  "name": "tcp-log"
}
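The id in that response identifies the plugin instance, and it is worth keeping around if you later want to delete or update the plugin. Below is a minimal sketch with a trimmed sample of the response inlined; in practice you’d capture the output of the curl command above.

```shell
# Sample plugin-creation response, trimmed and inlined for illustration.
response='{"id":"2a984c58-dbc8-4c35-a813-516158437c06","enabled":true,"name":"tcp-log"}'

# Pull the id out with sed.
plugin_id=$(echo "$response" | sed -n 's/.*"id":"\([^"]*\)".*/\1/p')
echo "$plugin_id"

# The plugin could later be removed with:
#   curl -X DELETE http://localhost:8001/plugins/$plugin_id
```
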
Setting up dashboard
To monitor and analyze the logs from Kong, we have to query elasticsearch for our data using Kibana.
In order to visualize and monitor data in Kibana, we need to create an index pattern to retrieve data from elasticsearch.
By typing the right index pattern (here, logstash-*) and time filter, you can look up your data in the Discover tab in Kibana.
And with the collected data, you can visualize it with charts, tables, maps, etc.
Configuration
Configuring Kong through the command line is a pretty tough job.
For example, say you are adding a new API. You have to look up the required attributes in the Kong documentation. Next, you have to type them on the command line while paying attention not to misspell any of the words.
Preparing all of the parameters with the appropriate attributes or headers takes time, and a typo in the command means re-typing it all over again. To keep configuration simple, there are some open source integrations to help you. There is a list of options for this, such as ‘Konga’, ‘Kong-Dashboard’, ‘KongDash’, etc., but they basically do the same thing.
In this section, we’ll only look at ‘Konga’, which has richer functionality than the other integrations.
Konga
The description of Konga on its Github repository is: “More than just another GUI to KONG Admin API.”
It helps you configure the Kong Admin API through a GUI dashboard, and it has much richer functionality:
- Backup, restore and migrate Kong Nodes using Snapshots.
- Monitor Node and API states using health checks.
- Email/Application notifications.†
- Multiple users.
(from the official repository of Konga)
†: Slack Notification
Installation
The prerequisites for installation are:
- A running Kong instance
- NodeJS / NPM
Once the prerequisites are met, installing Konga is pretty simple.
git clone https://github.com/pantsel/konga.git
cd konga
npm install
Or you can run Konga with Docker.
docker run -p 1337:1337 \
  --link kong:kong \
  --name konga \
  pantsel/konga
To verify it’s running correctly, open a browser and go to http://localhost:1337. If the installation succeeded, you’ll see the login page for the Konga GUI.
The default login credentials are:
- Admin login: admin | password: adminadminadmin
- Demo user login: demo | password: demodemodemo
After login succeeds, you’ll see a Welcome message with Kong Admin configuration.
Fill in the form to configure with your own Kong Admin URL.
Configure Kong
Entering the right Kong Admin URL and creating a connection will show you the main dashboard, which describes the status of your Kong node. It provides the node information that can be accessed through http://<Kong_Admin_URL>/status.
Now we can add our API through the GUI dashboard. By clicking the API tab and the Add New API button, you’ll see a form with plenty of hints for the parameters to fill in.
By clicking Submit API, you’ve created your own API without typing any parameters or headers in the terminal. Konga does all the configuration work and sends it to the Admin API for you.
To add a plugin to your API, click Add Plugin on your API’s Plugins tab. You can choose from the plugin list and easily add one to your API.
Likewise for consumers: you can easily add a new consumer and configure credentials, ACLs, APIs, etc.
Additional Functionalities
Without using any integration, backing up your Kong settings means accessing your datastore and creating a dump file for that database. If your database serves multiple purposes and is configured in a complex way, it’ll be hard to back up just your Kong settings. And if you are using different datastores (Cassandra, Postgresql) for different Kong nodes, backup becomes a tough process because the two datastores aren’t compatible with each other.
Konga’s snapshot function helps with this by abstracting the datastore. Konga has its own persistence mechanism for storing users and configuration, so users can back up, restore, and migrate Kong nodes easily.
By clicking Instant Snapshot on the snapshot tab, a backup of the Kong node is created in a short amount of time.
On each snapshot, you can check each category (API, Plugin, Consumer) details and choose which one to restore or export.
When your service is in operation, your servers must stay healthy, and any incidents must be dealt with quickly. To stay informed about the status of your resources, Konga can help with health notifications.
If your Kong node is down, or one of your APIs is not responding, Konga will send you email or an application notification.
After configuring a health check for a Node or API, notifications will be sent according to the Notification section of the settings tab.
If you are using Slack with your team, by adding the Slack webhook URL on Konga, your team can be notified of the health status.
Summary
You are now prepared to use Kong with Docker and Kubernetes and other tools that make it easier to run and monitor Kong.