How to see if filebeat data is being sent to logstash
When I open up the Kibana interface, I get an error telling me to configure an index pattern when logstash-* is entered as a query:
kibana error: please specify a default index pattern
How can I see if Filebeat is sending logs to Logstash? I followed the Filebeat and ELK stack tutorials exactly. I can see data when I enter filebeat-* into Kibana, but nothing when I enter logstash-*.
Solution 1:
If you followed the official Filebeat getting started guide and are routing data from Filebeat -> Logstash -> Elasticsearch, then the data produced by Filebeat is supposed to be contained in a filebeat-YYYY.MM.dd index. Filebeat uses the filebeat-* index instead of the logstash-* index so that it can use its own index template and have exclusive control over the data in that index.
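For reference, the Logstash pipeline in that guide sets the index name from event metadata. A minimal sketch of such a pipeline is shown below; the port, hosts, and config file location (for example /etc/logstash/conf.d/) are assumptions that may differ in your setup:

input {
  beats {
    # Filebeat ships events to this port
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # %{[@metadata][beat]} resolves to "filebeat", which is why the data
    # ends up in filebeat-YYYY.MM.dd indices rather than logstash-*
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}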
So in Kibana you should configure a time-based index pattern based on the filebeat-* index pattern instead of logstash-*. Alternatively you could run the import_dashboards script provided with Filebeat, and it will install an index pattern into Kibana for you. The path to the import_dashboards script may vary based on how you installed Filebeat; this is the path on Linux when installed via RPM or deb.
/usr/share/filebeat/scripts/import_dashboards -es http://localhost:9200
You can check whether data is contained in a filebeat-YYYY.MM.dd index in Elasticsearch using a curl command that will print the event count.
curl http://localhost:9200/filebeat-*/_count?pretty
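If events are arriving, the count should be non-zero. The response looks roughly like the following (the numbers here are only illustrative):

{
  "count" : 10542,
  "_shards" : {
    "total" : 5,
    "successful" : 5,
    "failed" : 0
  }
}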
And you can check the Filebeat logs for errors if you have no events in Elasticsearch. The logs are located at /var/log/filebeat/filebeat by default on Linux. You can increase verbosity by setting logging.level: debug in your config file.
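For example, a minimal logging section in filebeat.yml might look like the sketch below; the file paths and keepfiles value are assumed defaults, not required settings:

# in filebeat.yml (typically /etc/filebeat/filebeat.yml on Linux)
logging:
  # debug is very verbose; switch back to info once troubleshooting is done
  level: debug
  to_files: true
  files:
    path: /var/log/filebeat
    name: filebeat
    keepfiles: 7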