After installing Filebeat, we need to configure it to send log data to Elasticsearch or Logstash.
In this chapter, we will skip Logstash and send the log data directly to the Elasticsearch cluster. To configure Filebeat, we need to open the filebeat.yml file, which we can do using this command:
sudo vim /etc/filebeat/filebeat.yml
The preceding command will open the filebeat.yml file for editing. Filebeat ships with sensible default values for its different configuration options, which we can change as per our requirements. To configure Filebeat to send log data, we need to define a prospector with one or more paths; for example, to read the various logs under the /var/log location, we can define the following prospector:
filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
In the preceding expression, we are defining a Filebeat prospector: we set the type to log, enable it, and provide the path of the log directory. This path tells Filebeat to fetch all files with the .log extension inside the /var/log directory.
In case we want to read log files from any location other than /var/log/, we have to create a new prospector and provide the path of that location. If we want to target the files in all the different directories inside the /var/log/ directory, we need to give the following path:
  paths:
    - /var/log/*/*.log
The preceding path tells Filebeat to look inside each directory under /var/log/ and pick up the files with a .log extension; in this way, we can target the log files in all subdirectories of /var/log/.
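As a sketch, a configuration combining both patterns with an additional prospector for an application log directory might look like the following (the /opt/myapp/logs path is a hypothetical example, not a Filebeat default):

```yaml
filebeat.prospectors:
# Prospector for log files in /var/log and its immediate subdirectories.
- type: log
  enabled: true
  paths:
    - /var/log/*.log
    - /var/log/*/*.log
# Separate prospector for a hypothetical application log directory.
- type: log
  enabled: true
  paths:
    - /opt/myapp/logs/*.log
```

Each `- type: log` entry is an independent prospector, so different locations can be enabled or tuned separately.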
There are different ways to enable the modules in Filebeat:
- We can enable Filebeat modules using the filebeat.yml file. We need to make the following changes in the filebeat.yml file:
filebeat.modules:
- module: nginx
- module: mysql
- module: system
- We can enable modules through the modules.d directory by running this command:
sudo filebeat modules enable apache2
If we want to see enabled and disabled modules, we can run the following command:
sudo filebeat modules list
- We can enable the module along with the filebeat command execution. When we want to run a specific module, we can do it by passing the --modules flag. This module-enabling process is session-specific and quite handy when we want to run a specific module for a particular session only. The following command shows the module-enabling process:
sudo filebeat -e --modules nginx,mysql,system
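A module enabled through the modules.d directory can be switched off again with the corresponding disable subcommand; for example, to disable the apache2 module enabled earlier:

```shell
sudo filebeat modules disable apache2
```

Running `sudo filebeat modules list` afterwards shows the module back under the disabled section.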
After that, we need to provide the Elasticsearch connection details inside the file. The following section shows the Elasticsearch output block in the filebeat.yml file:
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  username: "elastic"
  password: "yourpassword"
Here, we need to provide the host and port of the Elasticsearch server.
We also need to provide the username and password in case authentication is enabled for Elasticsearch; if it is not, we can leave these lines commented out. Once this is done, we can start the Filebeat service.
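On a systemd-based distribution (the assumption here), the Filebeat service can be started, and enabled so that it survives a reboot, with commands along these lines:

```shell
# Start the Filebeat service now.
sudo systemctl start filebeat
# Make it start automatically at boot.
sudo systemctl enable filebeat
# Verify that it is running.
sudo systemctl status filebeat
```
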
In case we want to set up the default Filebeat dashboards for Kibana, we need to configure the setup.kibana endpoint:
setup.kibana:
  host: "localhost:5601"
In the preceding expression, we are providing the host URL of Kibana. I have configured Kibana on localhost, but if you have configured it on another server, you need to provide the Kibana URL of that server.
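With the setup.kibana endpoint in place, the dashboards bundled with Filebeat can be loaded into Kibana using Filebeat's setup command:

```shell
sudo filebeat setup --dashboards
```

This imports the predefined dashboards for the enabled modules into the Kibana instance configured above.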
Once we have completed these configurations, Filebeat will start sending data to Elasticsearch in an index whose name matches the default pattern filebeat-*; for example, filebeat-6.2.3-2018.06.07. We can verify the index name by listing the indices in Elasticsearch using the following command:
curl -XGET "http://localhost:9200/_cat/indices"
Once we get the Filebeat index in the listing, we can get the details of that index by typing this command:
curl -XGET "http://localhost:9200/filebeat-6.2.3-2018.06.07/_search"
After running the preceding command, we get a response similar to the following:
{
  "took": 5,
  "timed_out": false,
  "_shards": {
    "total": 3,
    "successful": 3,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": 91285,
    "max_score": 1,
    "hits": [
      {
        "_index": "filebeat-6.2.3-2018.06.07",
        "_type": "doc",
        "_id": "Dr4i22MByi9VfmHrn07x",
        "_score": 1,
        "_source": {
          "@timestamp": "2018-06-07T04:39:24.855Z",
          "offset": 32411,
          "beat": {
            "hostname": "ANUGGNLPTP0184",
            "name": "ANUGGNLPTP0184",
            "version": "6.2.3"
          },
          "prospector": {
            "type": "log"
          },
          "mysql": {
            "error": {
              "thread_id": "0",
              "level": "Note",
              "message": "Shutting down plugin 'INNODB_FT_INDEX_TABLE'",
              "timestamp": "2018-06-07T04:39:24.855267Z"
            }
          },
          "source": "/var/log/mysql/error.log",
          "fileset": {
            "module": "mysql",
            "name": "error"
          }
        }
      }
    ]
  }
}
This way, we have successfully configured Filebeat to send data to the Elasticsearch cluster. Once the data is in Elasticsearch, we can configure the index pattern in Kibana, as I explained in the previous chapters.