2. Deployment Architecture
2.1 Creating Filebeat ServiceAccount and ClusterRole
2.2 Creating Filebeat ConfigMap
2.3 Deploying Filebeat DaemonSet
In this tutorial we will learn how to configure Filebeat to run as a DaemonSet in our Kubernetes cluster and ship logs to an Elasticsearch backend. We are using Filebeat instead of Fluentd or Fluent Bit because it is an extremely lightweight utility, has first-class support for Kubernetes, and is well suited to production setups. This blog post is the second in a two-part series; the first post runs through the deployment architecture for the nodes and walks through deploying Kibana and ES-HQ.
Filebeat will run as a DaemonSet in our Kubernetes cluster, which means one Filebeat pod will be scheduled on every node, collecting that node’s container logs and shipping them to Elasticsearch.
Deploy the following manifest to create the required permissions for Filebeat pods.
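What follows is a minimal sketch modeled on Elastic’s reference `filebeat-kubernetes.yaml`; the `filebeat` name and the `kube-system` namespace are assumptions and can be changed to suit your cluster:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: filebeat            # name and namespace are assumptions; adjust as needed
  namespace: kube-system
  labels:
    k8s-app: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: filebeat
  labels:
    k8s-app: filebeat
rules:
- apiGroups: [""]
  # Read-only access to the resources Filebeat needs for metadata enrichment
  resources: ["namespaces", "pods", "nodes"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: filebeat
subjects:
- kind: ServiceAccount
  name: filebeat
  namespace: kube-system
roleRef:
  kind: ClusterRole
  name: filebeat
  apiGroup: rbac.authorization.k8s.io
```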
From a security point of view, we should keep the ClusterRole permissions as limited as possible: read-only access to pods, nodes, and namespaces is all Filebeat needs. If any of the pods bound to this service account is compromised, the attacker will not be able to gain access to the entire cluster or to the applications running in it.
Use the following manifest to create a ConfigMap which will be used by Filebeat pods.
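Again, a minimal sketch modeled on Elastic’s reference manifest; it assumes the `ELASTICSEARCH_HOST`, `ELASTICSEARCH_PORT`, and `NODE_NAME` environment variables are injected by the DaemonSet in the next step:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config     # the DaemonSet below refers to this name
  namespace: kube-system
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.inputs:
    - type: container
      paths:
        - /var/log/containers/*.log
      processors:
        # Enrich every event with pod, namespace, node, and label metadata
        - add_kubernetes_metadata:
            host: ${NODE_NAME}
            matchers:
            - logs_path:
                logs_path: "/var/log/containers/"

    output.elasticsearch:
      # Host and port come from the DaemonSet's environment; add
      # username/password here if your cluster has security enabled
      hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
```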
Important concepts for the Filebeat ConfigMap:

- The `container` input tails every file matching `/var/log/containers/*.log`, the directory where the kubelet symlinks each container’s log file.
- The `add_kubernetes_metadata` processor enriches every event with pod, namespace, node, and label metadata, so logs can be filtered per workload in Kibana.
- The Elasticsearch output reads its host and port from environment variables, so the same ConfigMap can be reused across environments.
Use the manifest below to deploy the Filebeat DaemonSet.
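The sketch below follows Elastic’s reference DaemonSet; the image tag and the `elasticsearch` Service name are assumptions, so adjust them to your environment:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      tolerations:
      # Let the DaemonSet place pods on master/control-plane nodes too
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      - key: node-role.kubernetes.io/control-plane
        effect: NoSchedule
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.17.0  # pin to the version you run
        args: ["-c", "/etc/filebeat.yml", "-e"]
        env:
        - name: ELASTICSEARCH_HOST
          value: elasticsearch    # assumption: the Elasticsearch Service name
        - name: ELASTICSEARCH_PORT
          value: "9200"
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0            # needs root to read the host's log files
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            memory: 200Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          subPath: filebeat.yml
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data  # persists the registry across restarts
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlog
        hostPath:
          path: /var/log
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
```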
Let’s see what is going on here:

- The toleration for the `node-role.kubernetes.io/master` taint (and its newer `control-plane` equivalent) makes sure that our Filebeat DaemonSet schedules a pod on the master nodes as well as the workers.
- The `hostPath` volumes mount the node’s `/var/log` and `/var/lib/docker/containers` directories into the pod, which is where the container log files live.
- A small `hostPath` data directory persists Filebeat’s registry, so a restarted pod does not re-ship logs it has already sent.

Once the Filebeat DaemonSet is deployed we can check if our pods get scheduled properly.
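One way to check, assuming the `kube-system` namespace and `k8s-app: filebeat` label from the sketch above:

```bash
kubectl get pods -n kube-system -l k8s-app=filebeat -o wide
```

Every node, including the masters, should be running exactly one Filebeat pod.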
If we tail the logs for one of the pods with `kubectl logs -f <pod-name> -n kube-system`, we can clearly see that it has connected to Elasticsearch and started harvesters for the log files.
Once we have all our pods running, we can create an index pattern of the form filebeat-* in Kibana (Filebeat indexes are generally timestamped, one per day by default). As soon as we create the index pattern, all of the available searchable fields are detected and can be imported. Finally, we can search through our application logs and create dashboards as needed. It is highly recommended to use a JSON logger in our applications, because it makes log processing much easier: messages can be parsed into structured fields automatically.
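For example, if the applications write JSON to stdout, a `decode_json_fields` processor can be added to the `filebeat.yml` above to expand each raw line into structured fields (a sketch; `message` is the field Filebeat stores the raw log line in):

```yaml
processors:
  - decode_json_fields:
      fields: ["message"]    # the raw log line captured by Filebeat
      target: ""             # merge decoded keys into the event root
      overwrite_keys: true   # decoded fields replace existing ones
```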
This concludes our logging setup. All of the provided configuration files have been tested in production environments and are readily deployable. Feel free to reach out if you have any questions about it.
This article was written by our guest blogger Vaibhav Thakur. If you liked this article, check out his LinkedIn for more.