Hello, fellow Splunkers.
I am currently trying to create a stacked column chart from a timechart using a simple search query: timechart count by type limit=0
Since Splunk orders the series lexicographically by default, the end result was not what I wanted.
In this particular situation, I have several types (values of a single field) that I would like to display next to each other in the column chart. To do that, I tried assigning numerical values with eval/case and then sorting on them, like this:
timechart count by type limit=0 | eval sort_field=case(type="type1",1, type="type2",2, type="type3",3, type="type4",4) | sort sort_field
This approach had no effect whatsoever: no values were re-ordered and, as far as I can tell, the sort_field was not even created. I suspect there is some specific behavior when using this method with timechart/chart, but I have not yet figured out a working solution. I managed to debug it a little further by reconstructing the search bit by bit, and when I removed the timechart:
... | eval sort_field=case(type="type1",1, type="type2",2, type="type3",3, type="type4",4) | sort sort_field
I could see that the field (sort_field) had only one value (the first of the order values).
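For what it's worth, one sanity check I have been considering (untested, using the same type names as in my case() above) is tallying the case() output directly, to see which types actually get a sort value:

... | eval sort_field=case(type="type1",1, type="type2",2, type="type3",3, type="type4",4) | stats count by type, sort_field

If I remember right, eval comparisons are case-sensitive (unlike bare search terms), so any type missing from that output would mean case() never matched it.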
I do believe that there is a major syntax error on my part or something else entirely that I don't fundamentally understand yet.
To sum it up, I am trying to create a column chart showing the count of events by type over a period of time. The problem is that the types (the chart legend values) are ordered alphabetically, and I would like them to appear in a custom order on the chart.
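My current working theory (which I have not been able to confirm) is that after timechart count by type, each type value becomes its own column, so there is no type field left for eval/case to act on. If that is right, then an explicit table listing the columns in the desired order might be the way to go, e.g. with my placeholder type names:

timechart count by type limit=0 | table _time, type3, type1, type4, type2

I am guessing the legend would then follow the column order, but I have not verified this.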
Perhaps there is someone with more charting experience willing to lend a helping hand? It would be most appreciated.
I am currently working on a solution that sends logs from an analytics system over to Splunk for visualisation. I have, however, hit a roadblock when trying to properly format and display critical events for usability purposes.
What I would like to know is whether there is a way to highlight newly received or otherwise specific events in a dashboard. This is critical from the user's perspective: if the solution is horizontally scaled, there are going to be a lot of events populating the dashboards, and missing a potential incident is not an option.
I have already created a dashboard and visually formatted it, with the current search string for the dashboard being: sourcetype=test host=xxxx string | fields _time, host, customfield | fields - _raw
The current structure of the dashboard is the following: Statistics table, Wrap results: false.
The ideal end result would be either highlighting certain events based on a specific string (for example "Persons" in the provided picture) or some mechanism where the user could "acknowledge" events, marking them as "Seen" or similar.
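To make the first idea concrete, this is roughly what I am imagining in Simple XML; I have not tested it, and the status field, the color, and the 15-minute "new" window are placeholders on my part:

<table>
  <search>
    <query>
      sourcetype=test host=xxxx string
      | eval status=if(_time > relative_time(now(), "-15m"), "NEW", "Seen")
      | table _time, host, customfield, status
    </query>
  </search>
  <format type="color" field="status">
    <colorPalette type="map">{"NEW": #DC4E41}</colorPalette>
  </format>
</table>

The acknowledge/"Seen" part would presumably need something stateful on top of this (for example a lookup the user writes to), which is beyond what I know how to do today.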
I have read through a lot of the documentation already, but I haven't been able to find any solid information on the implementation of my desired result yet. Since I still consider myself to be rather new to Splunk, I was hoping that some of the more advanced users here would have a suggestion on how to proceed.
Thanks in advance!
In my environment, I have set up a Universal Forwarder that is monitoring a single server .log file, which is then forwarded to a Splunk indexer instance for parsing as a specific sourcetype (log4j). My Universal Forwarder configuration is as follows:
host = 1
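For context, the monitor stanza follows the usual shape, something like the below, where the path and index are placeholders rather than my real values:

[monitor:///var/log/app/server.log]
sourcetype = log4j
index = main
disabled = false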
On the indexer, I have noticed several issues with both timestamp parsing and event breaking. As you can see in the following image, there are events mixed in whose local timestamps are from 3 hours ago, but Splunk has assigned the current time to them. On top of that, Splunk has made a separate event out of the Headers: and Payload: entries, which should have been part of the event below them. Note that these events all come from the same host and all have the same sourcetype.
For additional context, the following image visualizes the format of the .log file as seen on the forwarding instance. Note how there is a slight gap between the second event's Content-Type and Headers fields, which, I believe, is what is causing Splunk to break it out into a separate event.
Here is the props.conf that I currently have set on my indexer instance:
As well as the limits.conf, although, to my understanding, it shouldn't affect the parsing behavior:
max_rawsize_perchunk = 0
To summarize, there are two issues:
Splunk is unexpectedly breaking up events;
There are events dated back exactly 3 hours mixed in with current events.
Could the second issue be caused by a timezone mismatch? Both of the instances seem to have the same timezone (EEST); since EEST is UTC+3, a 3-hour offset would be consistent with the local timestamps being interpreted as UTC. What could be the possible cause of this?
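For the event-breaking side, the kind of props.conf stanza I am planning to try next looks roughly like this; the timestamp regex, the format string, and the TZ value are guesses on my part based on how the log lines look, not something I have verified:

[log4j]
SHOULD_LINEMERGE = false
# break only before lines that start with a timestamp, e.g. 2023-05-01 12:34:56,789
LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 25
# EEST is UTC+3; pinning TZ explicitly should rule out a timezone mismatch
TZ = Europe/Tallinn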
Thanks in advance!