Chapter 6. Extending Kong
Kong has a bunch of features out of the box, including different authentication strategies, request/response transformations, traffic control, support for logging systems and more (the full list of available plugins, including Kong Enterprise plugins, is available in the documentation). But it is also very flexible when it comes to adding custom plugins: the Lua language, the OpenResty platform, the Lapis framework and the Kong DAO factory allow you to build a plugin of any complexity, add new endpoints to the Kong Admin API, work with databases, and more. The power of the plugin system is limited only by system resources and a developer’s imagination.
In this chapter we’ll cover how to create a custom plugin from scratch, how to organize modules, and how to test and install it. And we’ll start with the latter, since it’s convenient to see how the plugin works during development.
To create a Kong plugin you need a running Kong instance. A local Kong installation is the easiest way to get started, and for the examples covered in this chapter we assume you have a default Kong setup without any plugins configured. Of course, it’s possible to use a Docker container for our cinema microservices example, although installing and testing the plugin gets more complicated there. In the real world, building a Kong plugin in Docker is only required for unsupported systems (e.g. Windows).
So, let’s start with plugin installation.
All of the code for the sample project in this book can be found here.
Installing custom plugins
It’s possible to install a plugin from any source, but the fastest and easiest way is to use luarocks, a package manager for Lua that is already included in Kong’s official installation package. Just search for the desired plugin either on the website or locally with the luarocks search command and run the install:

luarocks install <your-plugin>

Here <your-plugin> is the name of the rock, usually hosted at luarocks.org, but it’s also possible to use a direct URL. This installs the latest version of the plugin into the rocks tree, a path in your system from which Lua is able to load modules. You can see detailed information about the plugin (or any Lua library) with the show command:

luarocks show <your-plugin>

Besides the description, it will show the physical path of the installed modules.
It’s also possible to install a specific version (in case the latest doesn’t work for you or is broken):

luarocks install <your-plugin> <version>
Manually
While developing a plugin locally, you can’t install it from luarocks, because it simply doesn’t exist there yet. For that case there are a couple more ways to install a plugin:
- Modify the lua_package_path configuration option so that it points to your plugin path.
- Use the luarocks make command.
- Use Docker.
Before using any of them, you need to upload the plugin to the target machine (you can skip this step for local development). It’s totally up to you how to deliver the plugin code to the server; either an archive or git will work.
To change the Kong lua_package_path configuration, you can either edit the lua_package_path variable in nginx-kong.conf or set the KONG_LUA_PACKAGE_PATH environment variable. The value of this variable is a standard Lua path, with ; as a separator. ;; refers to the original search path, so make sure not to remove it. The full description of lua_package_path is available in the Lua nginx module documentation. So, if you have the plugin in the /kong/plugins/your-plugin directory, your lua_package_path may look like this:

lua_package_path ./?.lua;./?/init.lua;/kong/plugins/your-plugin/?.lua;;
With this method you won’t pollute the Lua rocks tree, and you will just need to restart the server when the plugin code changes.
To use the luarocks make command, you need the source code of the plugin somewhere on your machine; then run:

cd <path to your plugin>
luarocks make

It will install the plugin to kong/plugins/<your-plugin> in your rocks tree path. After each code change, luarocks make should be run again, followed by a Kong restart, to apply the changes.
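luarocks make expects a rockspec file in the directory you run it from. If your plugin doesn’t ship one yet, a minimal sketch might look like the following; the package name, version and source URL are illustrative, not taken from a real plugin:

```lua
-- kong-notification-plugin-0.1.0-1.rockspec (illustrative)
package = "kong-notification-plugin"
version = "0.1.0-1"
source = {
  url = "git://github.com/your-account/kong-notification-plugin"
}
description = {
  summary = "A Kong plugin that notifies an external service on each request"
}
dependencies = {
  "lua ~> 5.1"
}
build = {
  type = "builtin",
  modules = {
    -- map module names in the kong.plugins namespace to the source files
    ["kong.plugins.notification.handler"] = "kong/plugins/notification/handler.lua",
    ["kong.plugins.notification.schema"]  = "kong/plugins/notification/schema.lua"
  }
}
```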
To develop plugins in Docker, you need a Kong image, you must mount a volume with your plugin code into a directory where Kong will find it (which may vary depending on your Kong installation), and you have to reload the Kong server after each code change. Don’t worry if it sounds complex; we will walk through a detailed example of installing a custom plugin into our cinema Docker Compose configuration.
Loading plugins
Once your plugin is installed, you need to load it. Again, you have a couple of ways to do so: via the custom_plugins configuration attribute in nginx-kong.conf or with the KONG_CUSTOM_PLUGINS environment variable. Multiple plugins should be separated by commas:

custom_plugins = your-plugin1,your-plugin2

After a Kong reload, you will be able to use your plugin. In a cluster you need to repeat the installation process on each node. To verify that the plugin was installed correctly, hit the Kong API and check the plugins field under configuration. Assuming the Kong API is listening on port 8001 locally (Python is used for pretty printing here):
curl -XGET "localhost:8001" | python -mjson.tool
{
    "configuration": {
        ...
        "plugins": {
            "acl": true,
            "aws-lambda": true,
            ...
            "your-plugin": true,
            ...
Once loading is done, we are ready to apply the plugin to APIs and consumers.
Building your own plugin
Integrating with a third-party service is a typical task developers solve with Kong. Since each service has its own API, business logic and purpose, writing a custom plugin is a good way to make such an integration work. It’s impossible to write a universal plugin, but we will make an attempt to create a general-purpose notification plugin, which may be used for notifying different services when specified Kong endpoints are hit.
Kong plugins are written in the Lua language, but if you’re not familiar with it, it’s not a problem. Lua is a simple scripting language that supports different programming paradigms, and Kong provides you with a clean, understandable interface to work with, so you can develop plugins quickly even without prior Lua knowledge.
So, our plugin will send an asynchronous request to an external server on each request to the Kong API it is applied to. Optionally, it will send a notification only for specified consumers. It should allow you to configure the target server and the data you send. Furthermore, it will save the history of requests into the database. We will cover: the Kong plugin interface, how to create a database migration, how to query a Kong database, and how to send a separate request from a Kong plugin (and we will write tests for that).
Kong has beautiful and very detailed documentation about how to create a plugin. Let’s start with a basic code structure. Kong plugins live in the kong/plugins namespace. The kong-notification-plugin directory itself can be anywhere, because it’s the root directory of the plugin. Also, here is a link to the final plugin source code. The path looks like this:

kong-notification-plugin
└── kong
    └── plugins
        └── notification
Okay, now we can get started:

mkdir kong-notification-plugin
cd kong-notification-plugin
mkdir -p kong/plugins/notification
touch kong/plugins/notification/handler.lua
touch kong/plugins/notification/schema.lua
Here we created a basic plugin structure that now looks like this:
kong-notification-plugin/kong/plugins/notification
├── handler.lua
└── schema.lua
Now, we can install it into our cinema Docker Compose configuration. First, we need to mount a volume into both the kong-migration and kong services:

services:
  ...
  kong-migration:
    build: ./kong
    volumes:
      - ../kong-notification-plugin/kong/plugins/notification:/usr/local/share/lua/5.1/kong/plugins/notification:ro
  ...
  kong:
    build: ./kong
    volumes:
      - ../kong-notification-plugin/kong/plugins/notification:/usr/local/share/lua/5.1/kong/plugins/notification:ro
This setup expects you to have the kong-notification-plugin and kong-book-example directories at the same level in your directory structure. The container path /usr/local/share/lua/5.1/kong/plugins/notification is special: it allows Kong to find your plugin. Using a different path and changing the KONG_LUA_PACKAGE_PATH environment variable will not work, and your plugin migrations will not be loaded (there are a couple of open GitHub issues about this problem at the moment of writing this book). The last step is to load the plugin into Kong; for that, simply add the KONG_CUSTOM_PLUGINS environment variable:
services:
  ...
  kong-migration:
    ...
    environment:
      ...
      - KONG_CUSTOM_PLUGINS=notification
  kong:
    ...
    environment:
      ...
      - KONG_CUSTOM_PLUGINS=notification
That’s it. Running docker-compose up -d will run the container with the plugin enabled. After a code change we need to reload the server. One way is to run docker-compose stop and docker-compose up, but there is a faster way: exec into the running Kong container and reload Kong directly:

docker exec -it 545018ec05dd sh
/ # kong reload
Kong reloaded

where 545018ec05dd is the Kong container id (you will have a different one), which you can get with a one-liner:

docker ps | grep "kongbookexample_kong" | head -1 | cut -d' ' -f1
545018ec05dd
But notice that a Kong reload will not run any migrations we add in the future. We still need to run the kong-migration container to do that.
If you develop your own plugin and don’t use Docker, we recommend running the Kong server locally with a modified lua_package_path variable that includes the path to your plugin root directory. That way you will be able to test the plugin immediately after a code change and a Kong reload.
Plugin configuration
Plugin configuration is a way to change how a plugin behaves for different APIs or consumers. In the datastore, all plugins are saved in the plugins table with a name, api_id, consumer_id and a config, which is a JSON field received by the Kong API when the plugin is added.
We want to be able to send notifications to different servers and with different methods for different APIs and consumers. So, the notification plugin will have two main configuration parameters: the url where the request will be sent and the method. We will also have timeout and keepalive, which will be used later for the connection configuration.
In the code, the plugin configuration is defined in the schema.lua file, which should return a Lua table with these keys:
- no_consumer - when false (the default), allows you to apply the plugin to specific consumers
- fields - the actual configuration table
- self_check - a custom validation function
-- schema.lua
return {
  fields = {
    url = { type = "string", required = true },
    method = { type = "string", required = true },
    timeout = { default = 10000, type = "number" },
    keepalive = { default = 60000, type = "number" }
  }
}
In the snippet above we added two required fields, url and method, both of string type, and two optional fields with defaults: timeout and keepalive. Besides setting default values, it’s possible to make a configuration field immutable (not allowed to change after it is set the first time) or unique.
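For example, a schema using these options might look like this (the field names below are illustrative, not part of the notification plugin):

```lua
-- schema.lua (illustrative fragment)
return {
  fields = {
    api_key = { type = "string", unique = true },     -- no two configs may share this value
    prefix  = { type = "string", immutable = true },  -- cannot be changed once set
    retries = { type = "number", default = 3 }        -- used when the client omits the field
  }
}
```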
Having such a configuration schema prevents a client from applying an invalid one, with missing fields or wrong values. It’s also a good idea to add some validations, like restricting the allowed methods and validating the URL with a regex:
-- schema.lua
...
url = { type = "string", required = true,
        regex = "^((http[s]?):\/)?\/?([^:\/\s]+)((\/\w+)*\/)([\w\-\.]+[^#?\s]+)(.*)?(#[\w\-]+)?$" },
method = { type = "string", required = true, enum = { "GET", "POST", "PUT" } }
...
Fortunately, Kong already has a url type defined, so the regex validator is not needed; just change the type to url:
-- schema.lua
url = { type = "url", required = true },
Kong supports a custom validator via the func parameter. It receives a value and should return either true or false, <error message>, where <error message> is your validation error. The difference between func and self_check is that the latter is applied to the whole schema after all other validations have passed, so it’s possible to validate rules that span multiple configuration fields (e.g. a whitelist or blacklist set).
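As a sketch, a field-level func validator and a schema-level self_check for our plugin could look like the following. This is illustrative: in real Kong code self_check is expected to wrap its error via kong.dao.errors, which is omitted here for brevity, so treat the return values as an approximation of the interface:

```lua
-- schema.lua (illustrative sketch, not the final plugin schema)
local function check_method(value)
  -- field-level validator: receives just the value of `method`
  if value == "GET" or value == "POST" or value == "PUT" then
    return true
  end
  return false, "method must be one of GET, POST, PUT"
end

return {
  fields = {
    url = { type = "url", required = true },
    method = { type = "string", required = true, func = check_method },
    timeout = { default = 10000, type = "number" },
    keepalive = { default = 60000, type = "number" }
  },
  -- schema-level validator: runs after all field validations have passed,
  -- so it can compare several configuration fields at once
  self_check = function(schema, plugin_t, dao, is_update)
    if plugin_t.timeout and plugin_t.keepalive and plugin_t.timeout > plugin_t.keepalive then
      return false, "timeout should not exceed keepalive"
    end
    return true
  end
}
```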
Kong supports many other configuration types, including complex ones (table, array, etc.). For example, if you need a nested configuration, you can add a schema parameter to validate nested fields:
-- schema.lua
local nested_schema = { fields = {...} }
...
nested_field = { type = "table", schema = nested_schema }
...
The full list of available types and validators is in the Kong documentation. That’s it for configuration for now; next we have to implement the logic of the plugin.
The core of the plugin
The execution process consists of several phases, and there are some limitations on what is possible in each phase and callback. The table below displays the Lua Nginx module directives and the corresponding Kong callbacks in execution order:
Order | Lua nginx module function | Kong callback    | Phase
1     | init_worker_by_lua        | :init_worker()   | initialization
2     | ssl_certificate_by_lua    | :certificate()   | rewrite/access
3     | rewrite_by_lua            | :rewrite()       | rewrite/access
4     | access_by_lua             | :access()        | rewrite/access
5     | header_filter_by_lua      | :header_filter() | content
6     | body_filter_by_lua        | :body_filter()   | content
7     | log_by_lua                | :log()           | log
The complete description of execution phases and Lua Nginx module directives is available in the github repository.
For the notification plugin we will implement the :access() callback, since it gets executed for each request before it is proxied to the upstream. Kong provides a base_plugin module that is convenient (but not required) to inherit from; it logs each hit of a callback of our plugin:
-- handler.lua
local BasePlugin = require "kong.plugins.base_plugin"  -- import Kong base plugin

local NotificationHandler = BasePlugin:extend()  -- inherit from Kong base plugin

function NotificationHandler:new()
  -- call the super constructor with a plugin name (used for logging)
  NotificationHandler.super.new(self, "notification")
end

function NotificationHandler:access(config)
  -- implement the access() callback
  NotificationHandler.super.access(self)
  -- Custom logic goes here
end

return NotificationHandler
Inheriting from the Kong base plugin allows you to log each callback call with the given plugin name. Here is an excerpt showing what happens for the access() callback:
-- kong/plugins/base_plugin.lua
function BasePlugin:new(name)
  self._name = name
end
...
function BasePlugin:access()
  ngx_log(DEBUG, "executing plugin \"", self._name, "\": access")
end
When we call a function like NotificationHandler.super.access(self), the code from the super module is executed and a new DEBUG-level log entry with the message executing plugin "notification": access is added.
It’s possible not to inherit from base_plugin, but instead inherit from Object:
-- handler.lua
local Object = require "kong.vendor.classic"

local NotificationHandler = Object:extend()

function NotificationHandler:access(config)
  -- implement the access() callback
  -- Custom logic goes here
end

return NotificationHandler
In this case there is no need to set a name or call super functions, and of course nothing will be logged under the hood.
So, the access callback receives a single parameter, config, which is a table described in schema.lua. It is the config for the exact api and consumer, stored in the config column of the plugins table in the datastore. The idea is to send an asynchronous request to config.url with the method config.method. The algorithm is described next.
The first step is to parse the URL and get the protocol, domain, port, and path. For that task we can use the socket.url module included in Kong’s dependencies:
-- handler.lua
local socket_url = require "socket.url"

local function parse_url(url)
  local parsed_url = socket_url.parse(url)
  if not parsed_url.port then
    if parsed_url.scheme == "http" then
      parsed_url.port = 80
    elseif parsed_url.scheme == "https" then
      parsed_url.port = 443
    end
  end
  if not parsed_url.path then
    parsed_url.path = "/"
  end
  return parsed_url
end
To be able to send an external HTTP request, we need some kind of HTTP client. Kong already has tools (actually they come from the Lua nginx module) for sending requests over the UDP and TCP protocols:
- ngx.socket.tcp for TCP connections
- ngx.socket.udp for UDP connections
Create a socket and connect it to the remote server. It’s also a good idea to add logging in case of a failed connection:
--handler.lua
...
function NotificationHandler:access(config)
  NotificationHandler.super.access(self)

  local parsed_url = parse_url(config.url)
  local host = parsed_url.host
  local port = tonumber(parsed_url.port)

  -- Create and return a TCP or stream-oriented unix domain socket object
  local sock = ngx.socket.tcp()
  -- Set the timeout value in milliseconds for subsequent socket operations
  sock:settimeout(config.timeout)

  local ok, err = sock:connect(host, port)
  if not ok then
    ngx.log(ngx.ERR, "[notification-log] failed to connect to " .. host .. ":" .. tostring(port) .. ": ", err)
    return
  end
end
Notice that the config variable is a table whose keys are defined in the plugin’s schema.lua module and whose values are set for the requested API and consumer.
In case the remote server uses the https protocol, we need to add the SSL handshake:
--handler.lua
...
if parsed_url.scheme == "https" then
  local _, err = sock:sslhandshake(true, host, false)
  if err then
    ngx.log(ngx.ERR, "[notification-log] failed to do SSL handshake with " .. host .. ":" .. tostring(port) .. ": ", err)
  end
end
Before sending a message we need to create it:
--handler.lua
local
cjson
=
require
"cjson"
local
string_format
=
string.format
local
cjson_encode
=
cjson
.
encode
local
function
get_message
(
config
,
parsed_url
)
local
url
if
parsed_url
.
query
then
url
=
parsed_url
.
path
..
"?"
..
parsed_url
.
query
else
url
=
parsed_url
.
path
end
local
body
=
cjson_encode
(
{
consumer
=
ngx
.
ctx
.
authenticated_consumer
,
api
=
ngx
.
ctx
.
api
}
)
local
headers
=
string_format
(
"%s %s HTTP/1.1
\r\n
Host: %s
\r\n
Connection: Keep-Alive
\r\n
Content-Type: application/json
\r\n
Content-Length: %s
\r\n
"
,
config
.
method
:
upper
(),
url
,
parsed_url
.
host
,
#
body
)
return
string_format
(
"%s
\r\n
%s"
,
headers
,
body
)
end
ngx.ctx is a table that stores per-request Lua context data and has the same lifetime as the current request. It’s possible to add keys to this table (in order to share request data between plugins).
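To illustrate how two plugins can share data through ngx.ctx, here is a minimal sketch that stubs out the ngx table (outside of Kong, ngx is provided by OpenResty and does not exist in plain Lua; the plugin names and values are made up):

```lua
-- Stub of the ngx table for illustration only; inside Kong it is provided by OpenResty
local ngx = { ctx = {} }

-- plugin A (e.g. an auth plugin) stores the consumer it resolved for this request
ngx.ctx.authenticated_consumer = { id = "c-1", username = "peter" }

-- plugin B (e.g. our notification plugin) reads it later in the same request
local consumer = ngx.ctx.authenticated_consumer
print(consumer.username)  -- peter
```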
The message consists of two parts: headers and a body. The headers contain the request line (method, path, protocol), the host, a Content-Type set to application/json (since we send JSON), the Content-Length, and a keep-alive Connection header. The body is just the JSON-encoded consumer (if it exists) and API tables.
Finally, we can send a message:
-- handler.lua
...
ok, err = sock:send(get_message(config, parsed_url))
if not ok then
  ngx.log(ngx.ERR, "[notification-log] failed to send data to " .. host .. ":" .. tostring(port) .. ": ", err)
end
When the message is sent, it’s a good idea to call setkeepalive. It puts the current cosocket connection back into the pool so it can be reused by another request:
--handler.lua
...
ok, err = sock:setkeepalive(config.keepalive)
if not ok then
  ngx.log(ngx.ERR, "[notification-log] failed to set keepalive for " .. host .. ":" .. tostring(port) .. ": ", err)
  return
end
It’s worth mentioning that you can use functions from other plugins and, in fact, from any Lua module in the Kong core and the libraries it depends upon. The list of Kong dependencies is available in the kong rockspec file.
The code logic is done. We parse the URL from the config, generate a message to send, create a TCP connection, and send that message to the remote server. We log errors at each possible point of failure. We use the keepalive configuration to return the connection to the pool for better performance, and finally we use timeout to limit the waiting time for each of the socket operations.
Although this works, the code has a significant issue: it’s synchronous. That means that until we get a response from the remote server we notify, Kong won’t send a response back to the client. The solution is to use ngx.timer.at with zero delay. It runs code asynchronously to the context that created the timer.
-- handler.lua
function NotificationHandler:access(config)
  NotificationHandler.super.access(self)

  local ok, err = ngx.timer.at(0, send, config, ngx.ctx)
  if not ok then
    ngx.log(ngx.ERR, "failed to create timer: ", err)
    return
  end
end
Now, all of the core logic is in the send function:
-- handler.lua
function send(premature, config, ctx)
  -- the timer callback runs detached from the request, like a scheduled job
  if premature then
    return
  end
  ...
end
We moved ngx.ctx into a separate argument, since it isn’t available directly inside the asynchronous function. The rest of the logic doesn’t change. Now we return upstream responses to the client immediately and make the notification request asynchronously.
Plugin migrations and the DAO factory
Our plugin can send a notification request to the configured server with the API and consumer information. But we don’t store anything about that in our system. We could add more logs, but it would not be easy to find them later without an external logging tool. Instead, this plugin will save the events it sends into the database, so that we can later extend the Kong Admin API to search them.
Working with a database from a custom plugin starts with creating a migration for it. We will support both databases, postgres and cassandra, but you can create a migration only for the database you will actually use (the double square brackets in Lua enclose literal strings that span several lines):
kong-notification-plugin/kong/plugins/notification/migrations
├── postgres.lua
└── cassandra.lua
-- cassandra.lua
return {
  {
    name = "2018-01-27-841841_init_notification",
    up = [[
      CREATE TABLE IF NOT EXISTS notifications(
        id uuid,
        api_id uuid,
        consumer_id uuid,
        params text,
        timeout int,
        keepalive int,
        created_at timestamp,
        PRIMARY KEY (id)
      );
    ]],
    down = [[
      DROP TABLE notifications;
    ]]
  }
}
-- postgres.lua
return {
  {
    name = "2018-01-27-841841_init_notification",
    up = [[
      CREATE TABLE IF NOT EXISTS notifications(
        id uuid,
        api_id uuid REFERENCES apis (id),
        consumer_id uuid REFERENCES consumers (id),
        timeout integer,
        keepalive integer,
        params text,
        created_at timestamp without time zone default (CURRENT_TIMESTAMP(0) at time zone 'utc'),
        PRIMARY KEY (id)
      );
    ]],
    down = [[
      DROP TABLE notifications;
    ]]
  }
}
Migrations are simply tables with name, up, and down keys:
- name - used to track which migrations Kong has already applied (executed migrations are saved into the schema_migrations table).
- up - the query for the forward change.
- down - the query for the rollback change.
There is a different type of migration for when you need to update the plugin config everywhere it’s applied. It’s possible to write such migrations manually by manipulating the JSON, but there is a better way: use the Kong migrations helper:
-- migrations (postgres.lua or cassandra.lua)
local plugin_config_iterator = require("kong.dao.migrations.helpers").plugin_config_iterator

return {
  ...
  {
    name = "2018-01-28-841841_notification",
    up = function(_, _, dao)
      for ok, config, update in plugin_config_iterator(dao, "notification") do
        if not ok then
          return config
        end
        config.keepalive = 60000
        local ok, err = update(config)
        if not ok then
          return err
        end
      end
    end,
    down = function(_, _, dao)
    end  -- not implemented
  }
}
In the example above we set the keepalive config parameter to 60000 everywhere the plugin is applied.
Now that the database is ready, we need to create a so-called DAO object, which provides a layer of abstraction over the database table. In the code we work with the DAO object; on save or update it is transformed into the appropriate database row, and vice versa: a row fetched from the database is transformed into a DAO object. It’s possible to work with many tables from the same plugin, so the daos.lua module should return a table of DAO schemas:
-- daos.lua
local SCHEMA = {
  primary_key = { "id" },
  table = "notifications",
  fields = {
    id = { type = "id", dao_insert_value = true },
    created_at = { type = "timestamp", immutable = true, dao_insert_value = true },
    api_id = { type = "id", required = true, foreign = "apis:id" },
    consumer_id = { type = "id", required = true, foreign = "consumers:id" },
    params = { type = "string", required = true }
  }
}

-- this plugin only results in one custom DAO, named `notifications`
return { notifications = SCHEMA }
- dao_insert_value - means that the value will be inserted by the DAO itself.
- immutable - doesn’t allow you to update the field.
- foreign - a foreign key (postgres only).
In the same way we handled the plugin configuration, we defined fields with their types and additional options. Each field can have the same options as a field in schema.lua, plus some new options (e.g. dao_insert_value, foreign, and primary_key).
The notification plugin needs to save the message it sent to the remote server into the database. We load the DAO factory and get notifications, the DAO object, as its property. The DAO object has an insert function to create a new row in the database:
-- handler.lua
local singletons = require "kong.singletons"

local dao_factory = singletons.dao
local notifications_dao = dao_factory.notifications

local notification, err = notifications_dao:insert({
  api_id = ngx.ctx.api.id,
  consumer_id = ngx.ctx.authenticated_credential.consumer_id,
  params = message
})
It’s also possible to work with the core DAO objects (APIs, consumers, plugins), so we can query those tables from the plugin:
local singletons = require "kong.singletons"

local dao_factory = singletons.dao
local apis_dao = singletons.dao.apis
local consumers_dao = singletons.dao.consumers
local plugins_dao = singletons.dao.plugins
To do that, just get the required DAO from the factory and use one of the existing query functions:
Function                                    | Description
DAO:count (tbl)                             | Count the number of rows.
DAO:delete (tbl)                            | Delete a row.
DAO:find (tbl)                              | Find a row.
DAO:find_all (tbl)                          | Find all rows.
DAO:find_page (tbl, page_offset, page_size) | Find a paginated set of rows.
DAO:insert (tbl, options)                   | Insert a row.
DAO:new (db, model_mt, schema, constraints) | Instantiate a DAO.
DAO:update (tbl, filter_keys, options)      | Update a row.
We have already learned how to create database migrations for the plugin tables and plugin configuration. Also, we used the DAO factory to save data into the plugin-related table. One more extendable part of Kong we haven’t touched yet is the Admin API.
Extending the Admin API
Now that we write notifications to the Kong database, we should be able to read them back. Of course, it’s possible to query the database directly, but Kong provides a convenient way to implement custom API endpoints, so we can create a basic set of CRUD operations around notifications.
Kong will load admin endpoints from the api.lua file, so let’s create it. The plugin structure should now look like this:
kong-notification-plugin/kong/plugins/notification
├── migrations
├── api.lua
├── daos.lua
├── handler.lua
└── schema.lua
The API module should return a Lua table with endpoint paths as keys and tables describing HTTP methods as values. You can think of it as executing a function (let’s call it an action) for the given endpoint(s) on the given method. Each action receives these arguments:
- self - the Lapis request object. We can get different useful information from it, like request params, cookies, headers, session, etc. The full list of supported parameters and methods is available in the Lapis documentation.
- dao_factory - the object we used to get the DAO for our plugin.
- helpers - a table with responses and yield_error keys. The full description is available in Kong’s documentation.
For the notification plugin it might look like this:
-- api.lua
return {
  ["/notification"] = {
    GET = function(self, dao_factory, helpers)
      -- ...
    end
  }
}
Since we don’t want to modify existing notifications in the database, we will only implement the GET method. The desired response is a paginated JSON list of notifications. For that we can use the Kong CRUD helpers module, kong.api.crud_helpers. It has a list of convenient functions for working with the Kong database and is completely compatible with the action’s arguments:
-- api.lua
local crud = require "kong.api.crud_helpers"
Now, use the CRUD helpers function:
-- api.lua
...
GET = function(self, dao_factory, helpers)
  crud.paginated_set(self, dao_factory.notifications)
end
Wow, that looks very simple! And it is, since all of the work is done under the hood by Kong and the Lapis framework. It will process the request, get pagination params, create a database query to the notifications table, and return a JSON response to the client, all in a single line.
In response Kong will send a JSON with two fields:
{
"data": [
{
"api_id": "adedc2c2-7e38-4327-ab15-448c511d705e",
"created_at": 1517609518000,
"id": "898b864c-c5fb-46dc-a003-837ac02fef86",
"params": ...
},
...
],
"total": 15
}
- data - holds the list of notifications
- total - the total number of items for the given filter
By default Kong returns the first 100 rows from the database. To change the page size, set the size query parameter:
curl -XGET "http://localhost:8001/notification?size=5"
That will return only five items:
{
"data": [
{
"api_id": "adedc2c2-7e38-4327-ab15-448c511d705e",
"created_at": 1517609518000,
"id": "898b864c-c5fb-46dc-a003-837ac02fef86",
"params": ...
},
...
],
"next": "http://localhost:8001/notification?offset=Mg%3D%3D&size=5",
"offset": "Mg==",
"total": 15
}
The response has a slightly different structure, with a couple of new fields:
- next - the link to the next page
- offset - the Base64-encoded offset of the next page
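The offset value is just a Base64-encoded string. Inside Kong/OpenResty you would decode it with ngx.decode_base64; outside of Kong, a tiny pure-Lua decoder (for illustration only) shows what it contains:

```lua
-- Minimal Base64 decoder, for illustration only
local b64chars = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

local function b64decode(s)
  s = s:gsub("=", "")
  -- turn every Base64 character into its 6-bit binary representation
  local bits = s:gsub(".", function(c)
    local index = b64chars:find(c, 1, true) - 1
    local out = ""
    for b = 5, 0, -1 do
      out = out .. tostring(math.floor(index / 2 ^ b) % 2)
    end
    return out
  end)
  -- regroup the bits into 8-bit bytes, dropping the padding remainder
  local decoded = ""
  for chunk in bits:gmatch("%d%d%d%d%d%d%d%d") do
    local byte = 0
    for b = 1, 8 do
      byte = byte * 2 + tonumber(chunk:sub(b, b))
    end
    decoded = decoded .. string.char(byte)
  end
  return decoded
end

print(b64decode("Mg=="))  -- the offset "Mg==" is simply the string "2"
```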
But what if we want to do some filtered queries, like getting notifications by consumer or API? Kong has helpers for that too. Let’s add another endpoint to get notifications by api_id:
-- api.lua
...
["/notification/api/:api_name_or_id"] = {
  GET = function(self, dao_factory, helpers)
    crud.paginated_set(self, dao_factory.notifications)
  end
}
It looks the same except for the URL, and it won’t work as expected yet: we need to tell Kong to apply the api_id from the request to the database query. To accomplish that we will use the Lapis before callback. It runs for each request just before the action is executed, so we can do some extra validations and modify self.params.
-- api.lua
...
["/notification/api/:api_name_or_id"] = {
  before = function(self, dao_factory, helpers)
    crud.find_api_by_name_or_id(self, dao_factory, helpers)
    self.params.api_id = self.api.id
  end,
Notice the api_name_or_id parameter, which is special and will be processed by the Kong CRUD helper. Once we set api_id in the params, the paginated_set helper runs a filter query to get notifications by the api_id from params.
In the same fashion we can implement a filter by consumer_id; the only difference is that api_name_or_id changes to username_or_id:
-- api.lua
...
["/notification/consumer/:username_or_id"] = {
  before = function(self, dao_factory, helpers)
    crud.find_consumer_by_username_or_id(self, dao_factory, helpers)
    self.params.consumer_id = self.consumer.id
  end,
  GET = function(self, dao_factory, helpers)
    crud.paginated_set(self, dao_factory.notifications)
  end
}
Similarly, it’s easy to add a filter by both api_id and consumer_id; the main change is in the routing:
-- api.lua
["/notification/api/:api_name_or_id/consumer/:username_or_id"] = {
And in the before callback (it retrieves both the API and the consumer from the database):
-- api.lua
before = function(self, dao_factory, helpers)
  crud.find_api_by_name_or_id(self, dao_factory, helpers)
  crud.find_consumer_by_username_or_id(self, dao_factory, helpers)
  self.params.api_id = self.api.id
  self.params.consumer_id = self.consumer.id
end
If the API or consumer is not found in the database, the Kong helper will return a 404 status code.
And don't forget we can still filter notifications by api_id or consumer_id at the notification URL just by sending the appropriate query params:
curl -XGET "http://localhost:8001/notification?size=2&api_id=adedc2c2-7e38-4327-ab15-448c511d705e"
The Lapis framework allows you to render HTML, not only JSON, and again we have several options for how to do that. We can render HTML directly in the action, such as:
-- api.lua
GET = function(self, dao_factory, helpers)
  return self:html(function()
    h2("Notifications")
    element("table", {}, function()
      thead(function()
        return tr(function()
          th("Id")
          th("Api ID")
          th("Consumer ID")
          th("Created")
          return th("Params")
        end)
      end)
    end)
  end)
end
Or we can use moonscript and generate a Lua template from it. Detailed information is available on the moonscript website and in the Lapis documentation. Once the template is generated, we can use it in the action (the Lapis layout is used as an example):
-- api.lua
local template = require("lapis.views.layout")
GET = function(self, dao_factory, helpers)
  return { render = template }
end
Another way to render HTML is with etlua. It's a template language that compiles to Lua. To use it with Kong we need to enable it in the Lapis app and set a views_prefix to the directory where we will store templates:
-- api.lua
GET = function(self, dao_factory, helpers)
  self.app:enable("etlua")
  self.app.views_prefix = "kong.plugins.notification.views"
  return { render = "index" }
end
Then a simple template can look like this:
-- views/index.etlua
<div class="my_page">
  Here is a random number: <%= math.random() %>
</div>
We will use only JSON responses in the notification plugin, but it’s good to know that any Kong plugin can be turned into a full HTML application.
Kong gives developers a lot for building a custom Admin API: using the powerful Lapis framework with convenient helpers for database queries, you can build paginated responses and even return HTML pages.
Execution order
Your plugin may depend on context set by other plugins, especially if you depend on Kong core plugins. For that reason Kong has a plugin priority, which defines the plugin execution order. It means that each plugin callback (access, log, body_filter, etc.) is executed across plugins in the order defined by the priority. For example, according to the Kong documentation, the access callback is executed in the bot-detection plugin first, then in cors, and so on.
To set the custom plugin priority, just set a PRIORITY value for the plugin handler:
-- handler.lua
NotificationHandler.PRIORITY = 0
The greater the priority, the earlier the plugin will be executed. For our notification plugin it’s fine to set the lowest priority.
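To make the ordering rule concrete, here is a toy Lua sketch (not Kong code; the bot-detection and cors priority values are assumptions based on the Kong documentation and may differ between versions):

```lua
-- Toy sketch: Kong runs each callback across plugins sorted by PRIORITY, descending.
local handlers = {
  { name = "notification",  PRIORITY = 0    },
  { name = "cors",          PRIORITY = 2000 },  -- assumed value
  { name = "bot-detection", PRIORITY = 2500 },  -- assumed value
}

table.sort(handlers, function(a, b) return a.PRIORITY > b.PRIORITY end)

for _, h in ipairs(handlers) do
  print(h.name)  -- bot-detection first, then cors, then notification
end
```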
It's also possible to modify the priority of other plugins, but this should be done extremely carefully and only in rare cases, since it will affect the whole execution order and may break things. For example, the code below changes the priority of the correlation-id plugin:
-- handler.lua
CorrelationIdHandler.PRIORITY = 1501
Plugin testing
Lua is a great programming language, but like any programming language it's not perfect. Since it's a scripting language, all exceptions happen at runtime. Furthermore, it's absolutely required to understand whether the plugin we wrote does the task it's supposed to do. The first step to make sure a plugin actually works is to load it into Kong: in the case of syntax errors, Kong won't start. The next step is to add the plugin to the API and try to hit it under different conditions. If everything works fine, then we're lucky and wrote perfect code on the first try. Otherwise, we will get a 500 status code response from Kong and an exception stacktrace in the logs. Usually the code we write is not perfect, and testing different conditions manually takes time and is not convenient. It may take a while to make sure that the plugin works correctly for all cases, but there is a better solution: writing tests.
We may write two basic types of Kong plugin tests: unit and integration.
Unit tests
Unit tests are usually used for business logic that has no Kong context, and pure functions are a perfect example. Each test is just a call of a function we want to test under some condition. We check that it returns an expected result; otherwise the test will fail. Lua has many different testing frameworks, but the Kong documentation suggests using busted, which we will use for both unit and integration tests. It's easier to run tests locally, but we will consider testing in our Docker environment as well.
First, we need to install busted locally:
luarocks install busted
Second, copy bin/busted from the Kong repository to the plugin directory. It will allow you to test the plugin code with all of its dependencies. To test the parse_url function we need to make it callable from outside handler.lua:
-- handler.lua
function NotificationHandler:parse_url(raw_url)
  ...
end
That way we can call it in the test module directly. By default, busted will run test files with the _spec suffix, so we can create a handler_spec.lua inside the spec directory:
kong-notification-plugin
├── bin
│   └── busted
├── kong
│   └── plugins
│       └── notification
│           ├── migrations
│           ├── api.lua
│           ├── daos.lua
│           ├── handler.lua
│           └── schema.lua
└── spec
    └── handler_spec.lua
Let's write a few tests for parsing a regular URL, an HTTPS URL, and an invalid URL. Testing with busted is simple: the describe function defines a context and takes a name and a function as arguments; the it function defines a test and receives a test name and a function to execute. A context allows you to group tests, add tags, insulate or expose the environment, etc.:
-- handler_spec.lua
local handler = require("kong.plugins.notification.handler")

describe("testing parse_url #unit", function()
  it("test success http", function()
    local parsed_url = handler:parse_url("http://localhost/test?query=param")
    assert.are.same({
      host = "localhost",
      port = 80,
      path = "/test",
      query = "query=param",
      scheme = "http",
      authority = "localhost"
    }, parsed_url)
  end)
  it("test success https", function()
    local parsed_url = handler:parse_url("https://localhost/test")
    assert.are.same({
      host = "localhost",
      port = 443,
      path = "/test",
      scheme = "https",
      authority = "localhost"
    }, parsed_url)
  end)
  it("test parse invalid url", function()
    local parsed_url = handler:parse_url("invalid")
    assert.are.same({ path = "invalid" }, parsed_url)
  end)
end)
In each of the tests above we call the handler's parse_url method with a URL and assert that the result equals the expected value. The full list of assertions is available in the busted documentation. We used the unit tag, defined in the describe title as #unit, for later usage.
To run the tests just call bin/busted with the path to the test file as an argument:
$ bin/busted spec/handler_spec.lua
***
3 successes / 0 failures / 0 errors / 0 pending : 0.002079 seconds
The tests pass, so now it's time to test the get_message function, but first we have to make it callable from the outside:
-- handler.lua
function NotificationHandler:get_message(config, parsed_url)
  ...
end
Let's add a success get_message test:
-- handler_spec.lua
describe("testing get_message #unit", function()
  it("test success get_message no query", function()
    local config = { method = "GET" }
    local parsed_url = { host = "localhost", path = "/test" }
    ngx.ctx = { authenticated_consumer = { id = 1 }, api = { id = 1 }}
    local message = handler:get_message(config, parsed_url)
    local body = [[{"consumer":{"id":1},"api":{"id":1}}]]
    local expected_message = string_format(
      "%s %s HTTP/1.1\r\nHost: %s\r\nConnection: Keep-Alive\r\nContent-Type: application/json\r\nContent-Length: %s\r\n\r\n%s",
      "GET", "/test", "localhost", #body, body)
    assert.are.same(expected_message, message)
  end)
end)
Notice the usage of ngx.ctx. We can set values that will be used in the handler. We can also test that it fails to get a message if ngx.ctx is nil or config is nil, etc.:
-- handler_spec.lua
describe("testing get_message #unit", function()
  ...
  it("test fail get_message ctx is nil", function()
    ngx.ctx = nil
    assert.has.errors(function()
      handler:get_message({}, {})
    end)
  end)
  it("test fail get_message config is nil", function()
    local config = nil
    ngx.ctx = { authenticated_consumer = { id = 1 }, api = { id = 1 }}
    assert.has.errors(function()
      handler:get_message(nil, {})
    end)
  end)
end
To run tests with specified tags use the --tags option:
$ bin/busted --tags=unit
******
6 successes / 0 failures / 0 errors / 0 pending : 0.043482 seconds
To run tests in Docker we need to do some preparations:
RUN luarocks install busted
RUN ln -s /usr/local/openresty/bin/resty /usr/local/bin/resty
RUN mkdir -p /home/kong/bin
RUN mkdir -p /home/kong/spec
COPY busted /home/kong/bin/busted
RUN chmod 755 /home/kong/bin/busted
Here we install busted from luarocks, add resty to $PATH, create directories to store the busted script from Kong and our tests, copy busted into the image, and finally set permissions.
Also, we can add a volume with the tests from our plugin directory:
services:
  ...
  kong:
    build: ./kong
    volumes:
      - ../kong-notification-plugin/kong/plugins/notification:/usr/local/share/lua/5.1/kong/plugins/notification:ro
      - ../kong-notification-plugin/spec/handler_spec.lua:/home/kong/spec/handler_spec.lua:ro
  ...
To run the tests we need to start the container, exec into it, and run bin/busted from the /home/kong directory. /home/kong is not special; you can choose any directory, you only need to be sure it has the bin/busted script and the spec directory with tests:
docker ps | grep "kongbookexample_kong" | head -1 | cut -d' ' -f1
docker exec -it 70f93e33186f sh
/ # cd home/kong/
/home/kong # bin/busted --tags=unit
******
6 successes / 0 failures / 0 errors / 0 pending : 0.005295 seconds
Integration tests
The next step is to create integration tests that will actually start Kong and hit its endpoints. To achieve that locally, we need to copy the spec/helpers.lua and spec/kong_tests.conf files from the Kong source into the plugin root directory. In helpers.lua we may change the BIN_PATH variable to the path of your Kong executable (it depends on your Kong installation). For example, you may have kong locally in the path, in which case bin/kong won't work, so set it to just kong:
-- helpers.lua
local BIN_PATH = "kong"
...
You may also change kong_tests.conf to match your desired testing configuration, e.g. set the database, hosts, and ports. Just disable SSL support and the custom DNS hostsfile for simplicity by commenting the appropriate lines out, and add the notification plugin to custom_plugins:
-- kong_tests.conf
# ssl_cert = spec/fixtures/kong_spec.crt
# ssl_cert_key = spec/fixtures/kong_spec.key
# admin_ssl_cert = spec/fixtures/kong_spec.crt
# admin_ssl_cert_key = spec/fixtures/kong_spec.key
...
# dns_hostsfile = spec/fixtures/hosts
custom_plugins = notification
The next step is to create the test database migrations. We run the Kong migrations command with our test configuration and the notification plugin enabled:
KONG_CUSTOM_PLUGINS=notification kong migrations up -c spec/kong_tests.conf
Finally, we can write an integration test. The first test simply checks that we can apply the plugin to an API and hit that API without errors. As a test template, we will use one from the official Kong documentation and modify it a bit:
-- 01-access_spec.lua
local helpers = require "spec.helpers"
local cjson = require "cjson"

describe("notification plugin #integration", function()
  local proxy_client
  local admin_client
  local api

  setup(function()
    api = assert(helpers.dao.apis:insert {
      name = "test-api",
      method = "GET",
      upstream_url = "https://www.google.com"
    })
    -- start Kong with your testing Kong configuration (defined in "spec.helpers")
    assert(helpers.start_kong({ custom_plugins = "notification" }))
    admin_client = helpers.admin_client()
  end)

  teardown(function()
    if admin_client then
      admin_client:close()
    end
    helpers.stop_kong()
  end)

  before_each(function()
    proxy_client = helpers.proxy_client()
  end)

  after_each(function()
    if proxy_client then
      proxy_client:close()
    end
  end)
In this test we insert a single API into the database and start Kong with the notification plugin enabled. We also create an Admin API client.
In the test we need to make sure we can add our plugin to the endpoint, hit that endpoint, and check that the request was written to the notifications table by accessing the Admin API:
-- 01-access_spec.lua
...
describe("add notification plugin", function()
  it("success add notification plugin to api", function()
    -- add notification plugin to api
    local res = assert(admin_client:send {
      method = "POST",
      path = "/apis/" .. api.id .. "/plugins/",
      body = {
        name = "notification",
        config = {
          url = "http://127.0.0.1:9001/",
          method = "GET"
        }
      },
      headers = {
        ["Content-Type"] = "application/json"
      }
    })
    assert.res_status(201, res)
    -- hit api
    local res = assert(proxy_client:send {
      method = "GET",
      path = "/test"
    })
    assert.res_status(200, res)
    -- get notifications from the database
    local res = admin_client:send {
      method = "GET",
      path = "/notification",
      headers = {
        ["Content-Type"] = "application/json"
      }
    }
    local body = cjson.decode(assert.res_status(200, res))
    assert.equal(1, body.total)
  end)
end)
The test looks complex, but basically it just makes three requests with assertions, and the last response has a single notification in it. It's worth noticing that Kong cleans the test database after each run, so once the tests have passed we won't see any APIs or notifications in the database. Testing an invalid config is simpler:
-- 01-access_spec.lua
...
it("fail to add notification plugin without url", function()
  local res = assert(admin_client:send {
    method = "POST",
    path = "/apis/" .. api.id .. "/plugins/",
    body = {
      name = "notification",
      config = {
        method = "GET"
      }
    },
    headers = {
      ["Content-Type"] = "application/json"
    }
  })
  local body = assert.res_status(400, res)
  local json = cjson.decode(body)
  assert.same({ ["config.url"] = "url is required" }, json)
end)

it("fail to add notification plugin without method", function()
  local res = assert(admin_client:send {
    method = "POST",
    path = "/apis/" .. api.id .. "/plugins/",
    body = {
      name = "notification",
      config = {
        url = "http://127.0.0.1:9001/"
      }
    },
    headers = {
      ["Content-Type"] = "application/json"
    }
  })
  local body = assert.res_status(400, res)
  local json = cjson.decode(body)
  assert.same({ ["config.method"] = "method is required" }, json)
end)
end)
...
We send a single request and assert the appropriate error message in the response. Now it's time to run the tests:
$ bin/busted --tags=integration
***
3 successes / 0 failures / 0 errors / 0 pending : 4.76591 seconds
Integration testing in Docker is more complicated, but still possible. First, we need to set up a test database so we don't pollute our kong database:
services:
  ...
  kong-postgres-test:
    image: postgres:alpine
    environment:
      - POSTGRES_DB=kong_tests
      - POSTGRES_USER=kong
    ports:
      - "5434:5432"
It's a copy of our database service, but it exposes a different port: 5434. Next we need to set up a separate migration service:
services:
  ...
  kong-migration-test:
    build: ./kong
    volumes:
      - ../kong-notification-plugin/kong/plugins/notification:/usr/local/share/lua/5.1/kong/plugins/notification:ro
    links:
      - kong-postgres-test
    environment:
      - KONG_DATABASE=postgres
      - KONG_PG_HOST=kong-postgres-test
      - KONG_PG_DATABASE=kong_tests
      - KONG_CUSTOM_PLUGINS=notification
    command: kong migrations up
It's similar to our migration service, but it uses our new kong_tests database. Next, we need to update the kong service, adding a volume with the integration tests, a link to the test database, and a dependency on the kong-migration-test service:
services:
  ...
  kong:
    volumes:
      ...
      - ../kong-notification-plugin/spec/01-access_spec.lua:/home/kong/spec/01-access_spec.lua:ro
    links:
      ...
      - kong-postgres-test
    depends_on:
      - kong-migration
      - kong-migration-test
Finally, we need to add helpers.lua and kong_tests.conf to the /home/kong/spec directory in our image:
COPY helpers.lua /home/kong/spec/helpers.lua
COPY kong_tests.conf /home/kong/spec/kong_tests.conf
helpers.lua is the same one we used for local testing, but kong_tests.conf has different admin_listen_ssl and dns_resolver values:
# kong_tests.conf
admin_listen_ssl = 127.0.0.1:8445
dns_resolver = 127.0.0.11
Notice the dns_resolver value: it's the address of the Docker embedded DNS server, which you can find by running a simple command inside the running container:
/ # grep nameserver /etc/resolv.conf | cut -d ' ' -f 2
127.0.0.11
Once all the preparations are done, we can run the tests:
/home/kong # bin/busted
*********
9 successes / 0 failures / 0 errors / 0 pending : 0.958038 seconds
Everything works! That's it for testing. We wrote unit and integration tests and covered success and failure cases, both locally and in the Docker environment. Now it will be much easier to change the plugin in the future without worrying about breaking anything. Although integration tests require some preparation work, it's worth it: manual testing is much slower and not as reliable. Also, the way to set up integration tests may change in the future. For instance, a suite of testing tools independent of the main Kong repository may be released, but until that happens a custom solution is required.
Share it with luarocks
Once the plugin is done and tested, it is time to share it with the world. For that task we will use luarocks. The process consists of three steps:
- Creating a rockspec file
- Creating an api-key at https://luarocks.org
- Running luarocks upload to create a rock file based on the rockspec and upload it to the luarocks repository
A rockspec file has the description of the package, its source, and its dependencies, along with information about which files to compile:
-- kong-plugin-notification-0.1.0-1.rockspec
package = "kong-plugin-notification"
version = "0.1.0-1"
supported_platforms = { "linux", "macosx" }
source = {
  url = "git://github.com/backstopmedia/kong-book-example",
  tag = "0.1.0",
  branch = "notification"
}
description = {
  summary = "A simple notification plugin",
  homepage = "http://getkong.org",
  license = "MIT"
}
dependencies = {}
local pluginName = "notification"
local prefix = "kong.plugins." .. pluginName
build = {
  type = "builtin",
  modules = {
    [prefix .. ".migrations.cassandra"] = "kong/plugins/" .. pluginName .. "/migrations/cassandra.lua",
    [prefix .. ".migrations.postgres"] = "kong/plugins/" .. pluginName .. "/migrations/postgres.lua",
    [prefix .. ".handler"] = "kong/plugins/" .. pluginName .. "/handler.lua",
    [prefix .. ".schema"] = "kong/plugins/" .. pluginName .. "/schema.lua",
    [prefix .. ".api"] = "kong/plugins/" .. pluginName .. "/api.lua",
    [prefix .. ".daos"] = "kong/plugins/" .. pluginName .. "/daos.lua",
  }
}
There are some naming requirements:
- The package name should match the prefix of the rockspec filename; the kong-plugin prefix is used as a convention.
- The version should match the version in the rockspec filename too. The trailing 1 is the version of the rockspec file itself.
We should enumerate all modules that are part of the plugin in the build section, since modules not included in that table won't be available after the plugin is published. Notice the source field: it is required, and we will use a GitHub repository as the source. The full description of the rockspec format is available in the luarocks documentation.
Once the rockspec is ready, we need an api-key to upload it to the luarocks repository. To get one, sign up at https://luarocks.org and generate a key at https://luarocks.org/settings/api-keys. The last step is to upload the plugin:
luarocks upload kong-plugin-notification-0.1.0-1.rockspec --api-key=<your api key>
If you uploaded the wrong code, it's possible to overwrite an existing version in the luarocks repository; just use the --force key:
luarocks upload kong-plugin-notification-0.1.0-1.rockspec --api-key=<your api key> --force
Summary
In this chapter we discussed how to install and enable Kong plugins in the system, built a custom notification plugin from scratch, and shared it with the community via luarocks. The final source code is available at
github
. The plugin sends an asynchronous request to a remote server, saves data in its own database table, and extends the Kong Admin API by adding new endpoints to search notifications in a convenient way. Kong supports building HTML pages in the Admin API, and we discussed several options for how to do that. Unit and integration tests helped to make sure everything works as expected, and we covered the basic plugin use cases with them.
So, Kong provides a very powerful, flexible, and convenient interface for writing custom plugins of any complexity, and even without prior Lua knowledge it's possible to write one in a reasonable amount of time.