All Posts



There are several solutions; I usually prefer the one described in the Splunk Dashboard Examples App (https://splunkbase.splunk.com/app/1603) in the "link switch" example. This way you can have, within the same dashboard, multiple sub-dashboards, each with a limited set of panels, and move from one to another using the link switch in the header. Ciao. Giuseppe
Hi @Zane, no, this parameter applies when you rotate a file within the same folder, but, if I understood correctly, you move it to another folder. The solution is to also monitor the destination folder, so that Splunk takes only the events between the last read and the rotation, remembering that you cannot use crcSalt = <SOURCE> in your inputs.conf. Ciao. Giuseppe
Hello Team, we are trying to develop a function to validate whether a user id and password are valid (in an artifact). We thought of using an LDAP BIND, but unfortunately we are getting an error that we cannot import ldap3. Has anyone developed an app or function to test this?
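One way to structure such a check is a minimal sketch like the one below, which injects the bind function so the validation logic can be exercised even where the ldap3 package cannot be imported. The `Server`/`Connection` calls shown in the docstring are ldap3's documented API; the server URL and DN are hypothetical placeholders.

```python
from typing import Callable

def validate_credentials(user_dn: str, password: str,
                         bind_fn: Callable[[str, str], bool]) -> bool:
    """Return True if the directory accepts a simple bind with these credentials.

    bind_fn performs the actual LDAP bind, e.g. with the ldap3 package:
        lambda dn, pwd: Connection(Server("ldaps://dc.example.com"),
                                   user=dn, password=pwd).bind()
    (server name and DN above are hypothetical placeholders).
    """
    # An empty password triggers an anonymous bind on many LDAP servers,
    # which would wrongly "succeed" -- reject it explicitly.
    if not password:
        return False
    try:
        return bool(bind_fn(user_dn, password))
    except Exception:
        # Network errors, invalid DN syntax, etc. count as "not valid".
        return False
```

Injecting `bind_fn` also sidesteps the import problem: if ldap3 cannot be installed in your environment, the same wrapper could call a different credential check (for example a REST endpoint) without changing the surrounding logic.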
| makeresults format=csv data="process,message
A,message 0
B,message 0
A,message 1
B,message 1
A,message 2
B,message 2
A,message 1
B,message 3
A,message 2
A,message 1
A,message 2"
| eventstats count as repeats by process message
| where repeats > 1

As you can see, only messages that are repeated are shown.
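Outside SPL, the same "count by (process, message), then keep only the repeats" idea can be sketched in Python (field names mirror the SPL above; the sample rows are made up):

```python
from collections import Counter

def repeated_messages(rows):
    """Keep only rows whose (process, message) pair occurs more than once,
    mirroring `eventstats count as repeats by process message | where repeats > 1`."""
    counts = Counter((r["process"], r["message"]) for r in rows)
    return [dict(r, repeats=counts[(r["process"], r["message"])])
            for r in rows
            if counts[(r["process"], r["message"])] > 1]
```

Like `eventstats`, this keeps every original row (annotated with the count) rather than collapsing them the way `stats` would.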
Your input setup seems a little unconventional; please confirm how you have actually set up the inputs and where all the tokens are coming from.
Hi Splunkers, I have a huge report with 15 to 20 pages' worth of information which I need to show in a dashboard panel. Is there any way I can add an "expand"/"collapse" option to present my data in a better way, especially for non-Splunk users to understand it better? Thanks
Sorry - but how does this pick up a set of messages on the same process repeating?
Hi @gcusello, thanks for your answer, but I can't delay the rotation, because those log files are not managed by us, so if possible, adjusting things from the Splunk side would be great. As you said, "you can delay the rotation (30/60 seconds are sufficient) so Splunk will read also the last event in the file". Following that, I found there is a parameter in inputs.conf, time_before_close, which is 3 by default. Can I adjust this value to delay the UF closing monitored files, for example setting it to 30? Thanks so much. \Zane
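For reference, an inputs.conf sketch of that idea (the path is a placeholder): `time_before_close` defaults to 3 seconds and controls how long the file handle stays open after reaching EOF, so raising it gives the forwarder extra time to read trailing events before the file is moved.

```
[monitor:///var/log/myapp/app.log]
time_before_close = 30
```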
| eventstats count as repeats by process message | where repeats > 1
The messages are valid, but once they start looping it indicates an issue with the process. Messages can come from different processes, but I am only interested in messages repeating on the same process.
You could put your collect in an appendpipe in your alert search, something like this:

<your search>
| appendpipe
    [| rename x as y
    | table y
    | collect index=other
    | where false()]
| fields - y
It is possible, but it depends on your messages. Are the messages from the process unique apart from when they are repeated? Can you correlate messages from the same instance of the process without confusing them with messages from another instance? Are the loops ever bigger or smaller than two messages? What needs to be kept in the report, e.g. all messages, just the process id, just the time of the first duplicated message, or just the fact that a process has looped?
Hi @Manish_Sharma, you can grant a role access to an app, and you can give the role access to one or more indexes, but once a role has access to an index, you cannot restrict access to only a part of it. If you need to do this, you have to apply one of the following workarounds:
- in the app, create distinct dashboards for each role, displaying only the permitted events, and disable access to the direct search form;
- schedule a search that exports the data for the limited users into a summary index (with no additional license cost) and give the restricted users access only to the summary index.
I prefer the second one, which is easier. Ciao. Giuseppe
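The second workaround can be sketched in savedsearches.conf like this (the stanza name, index names, and the search filter are hypothetical; `action.summary_index` and its `_name` setting are the standard summary-indexing knobs):

```
[export_app_events_for_restricted_role]
search = index=main sourcetype=myapp:logs | fields _time host source message
enableSched = 1
cron_schedule = */15 * * * *
dispatch.earliest_time = -15m@m
dispatch.latest_time = now
action.summary_index = 1
action.summary_index._name = myapp_summary
```

You would then give the restricted role access only to the `myapp_summary` index, not to `main`.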
Hi @bobmccoy, this surely is a very slow search: avoid the join command, Splunk isn't a DB! Let me understand: you want all the notables for the users in the user_watchlist lookup, is that correct? If this is your requirement, you could try something like this:

index=notable
    [ | inputlookup user_watchlist WHERE _key=*
      | rename _key as user, asset AS src
      | fields user src
      | dedup user src ]
| where isnotnull(src)
| mvexpand src
| mvexpand user
| dedup src user
| eval user=mvindex(split(user,"@"),0)
| rename src as asset
| eval asset=lower(asset)

Ciao. Giuseppe
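The two eval/rename steps at the end just normalize the fields before comparison. In Python terms, a sketch (the sample values are made up):

```python
def normalize(user: str, asset: str) -> tuple:
    """Mirror the SPL normalization:
    eval user=mvindex(split(user,"@"),0)  -> strip the @domain suffix
    eval asset=lower(asset)               -> case-insensitive asset names
    """
    return user.split("@")[0], asset.lower()
```

Normalizing both sides consistently is what lets watchlist entries and notable events match even when one records `alice@corp.example.com` and the other just `alice`.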
Hi @the_sigma, if the timestamp is at the beginning of the event, you could try:

TIME_PREFIX = ^\[
TIME_FORMAT = %Y%m%d:%H%M%S.%3N

If it isn't at the beginning of the event, please share a sample of your events, masked if necessary, but with the same structure. Ciao. Giuseppe
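As a sanity check, that TIME_FORMAT corresponds to events whose header looks like the made-up line below. Python's strptime can verify the date/time portion (Python has no `%3N` directive, so the milliseconds are split off and checked separately):

```python
from datetime import datetime

# Hypothetical event matching TIME_PREFIX = ^\[ and TIME_FORMAT = %Y%m%d:%H%M%S.%3N
event = "[20240131:093015.250] service started"

# Strip the leading "[", take everything up to "]", split off the milliseconds.
stamp, millis = event[1:].split("]")[0].split(".")
parsed = datetime.strptime(stamp, "%Y%m%d:%H%M%S")
```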
Hi @josephjohn2211, I suppose that you have this information in an index, and when you say "table" you mean an index; if not, please correct me. Anyway, if you have already extracted the fields (called timestamp, InProgress and NotYetStarted), you have to create a search that checks for the presence of values in the three fields and triggers when they are empty, something like this:

index=ACTUAL_END_TIME NOT (InProgress=* NotYetStarted=*)

If there are results, the alert triggers. The alert must start triggering at 7:00, but at what hour must it stop? In my sample I use 18:00, so you can schedule the alert using this cron expression:

*/30 7-18 * * *

Please, if possible, avoid spaces, dots, or special characters (such as "-") in your field names; otherwise you have to quote those fields. If instead you didn't extract the fields, you should share some samples (both rows with the three fields and rows without them) so I can help you. Ciao. Giuseppe
We have a job that occasionally loops around the same code, spewing out the same set of messages [2 different messages from the same job]. Is it possible to identify processes where the last 2 messages match the previous 2 messages?

. . .
message1
message2
message1 <-- starts repeating/looping here
message2
message1
message2
message1
message2
. .

Any help appreciated. Mick
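One way to phrase the check described above, as a sketch: a process is "looping" when its last two messages equal the two immediately before them. The function name and the fixed window of 2 are assumptions based on the description.

```python
def is_looping(messages, period=2):
    """True if the trailing `period` messages repeat the `period` messages
    immediately before them, e.g. [..., m1, m2, m1, m2]."""
    if len(messages) < 2 * period:
        return False
    return messages[-period:] == messages[-2 * period:-period]
```

Applied per process (e.g. on the list of that process's messages in time order), this flags exactly the m1/m2/m1/m2 tail shown in the example; `period` can be raised if the loop ever spans more than two messages.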
Hi @tr_newman, why don't you use two different alerts, one for each system with its own field names? Ciao. Giuseppe
Hi Team,  I am reaching out to seek your valuable inputs regarding setting up restrictions on app-level logs under a particular index in Splunk. The use case is as follows: We have multiple application logs that fall under a single index. However, we would like to set up restrictions for a specific app name within that index. While we are aware of setting up restrictions at the index level, we are wondering if there is a way to further restrict access to logs at the app level. Our goal is to ensure that only authorized users have access to the logs of the specific app within the designated index. Thank you in advance for your assistance and expertise. We look forward to your valuable inputs
Hi @Zane, you could put both folders under monitoring. If you don't use the crcSalt = <SOURCE> option, Splunk reads only the last events in the rotated file and doesn't index the log events twice even if they have a different filename (remember that this option must not be present!). Otherwise, if you rename the file before rotating (adding e.g. the new date to the file name), you can delay the rotation (30/60 seconds are sufficient) so Splunk will also read the last events in the file before it's moved to the new folder. Ciao. Giuseppe
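An inputs.conf sketch of the "monitor both folders" approach (the paths and sourcetype are placeholders). Note the absence of crcSalt = <SOURCE>: the rotated file is then recognized by its content checksum rather than its path, so only the unread tail gets indexed.

```
[monitor:///var/log/myapp/]
sourcetype = myapp:logs

[monitor:///var/log/myapp/archive/]
sourcetype = myapp:logs
```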