All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Yes, you can use different column names. No, I don't think this method can be done in a loop; however, there may be other ways to solve this that could be applied in a loop, but it may depend on the column names and the relationship between the pairs of column names, e.g. A and A1, B and B1, etc.
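For what it's worth, a generic sketch of pairing columns by a shared suffix with foreach and <<MATCHSTR>> (the field names A1/B1 etc. and the subtraction are only illustrative, and as noted above, the appendpipe-based method itself cannot be embedded in foreach):

```spl
| foreach A* [ eval diff<<MATCHSTR>> = <<FIELD>> - 'B<<MATCHSTR>>' ]
```

Here <<MATCHSTR>> expands to whatever the * matched (1, 2, 3, ...), so each A field is paired with the B field sharing its suffix.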
Hi @Vishal2, you already have the solution: you have to create a lookup containing the list of monitored hosts and run the above search. What's your doubt? Ciao. Giuseppe P.S.: Karma Points are appreciated by all the contributors
PowerShell for Linux? cat filename | grep or awk?
Hi Team, I want to create a Splunk dashboard with the average response time taken by all the APIs which follow this condition. Example: I have the APIs below:
/api/cvraman/book
/api/apj/book
/api/nehru/book
/api/cvraman/collections
/api/apj/collections
/api/indira/collections
/api/rahul/notes
/api/rajiv/notes
/api/modi/notes
Now I will check the average for the API patterns /api/*/book, /api/*/collections, /api/*/notes. The dashboard should chart only these response times: /api/*/book, /api/*/collections, /api/*/notes. I tried the query below, but the dashboard shows the combined average across all three. Can someone please help?
index=your_index (URI="/api/*/book" OR URI="/api/*/collections" OR URI="/api/*/notes")
| stats avg(duration) as avg_time
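One possible sketch of getting per-pattern averages (field names taken from the post; untested against real data) is to derive a grouping field and add it to stats:

```spl
index=your_index (URI="/api/*/book" OR URI="/api/*/collections" OR URI="/api/*/notes")
| eval api_pattern=replace(URI, "^/api/[^/]+/", "/api/*/")
| stats avg(duration) as avg_time by api_pattern
```

The `by api_pattern` clause is what splits the single combined average into one row per pattern.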
@tscroggins  Thank you but is there way we can customize the colors Categorically based like i have 3 Grades A,B,C in legend label and i want to color the tile like(A=Green,B=Orange,C=red)
Hi @bowesmana, This is the event that occurred. The time the event occurred is 2023-11-30 00:00:33.789. I have set the cron expression to run every 15 minutes and the Time Range to the last 15 minutes. I am getting incidents and alerts 30 minutes, or sometimes 45 minutes, after the event triggered in Splunk. This is my query:
index="abc" "pace api iaCode - YYY no valid pace arrangementId as response!!!" OR "pace api iaCode - ZZZ no valid pace arrangementId as response!!!" source!="/var/log/messages" sourcetype=600000304_gg_abs_ipc2
| rex "-\s+(?<Exception>.*)"
| table Exception source host sourcetype _time
Hi Splunkers, This problem is occurring on Splunk_TA_paloalto app panels. Does anyone know how to handle this problem? I understand it has no effect on any search, but it's still annoying. Thanks in advance.
Hi @_pravin, probably the eval doesn't always match, so all the other commands that use the Module field don't work. Check if you really need this condition and try replacing it with the isnull() function. Ciao. Giuseppe
@rivars  You are a lifesaver!  
Thanks a lot @ITWhisperer. Works like a charm. Sorry, I am a newbie in Splunk and did not even know of the existence of the makeresults command. One short question: could this be done more flexibly? I mean, if columns are named like A1 B1 A2 B2 A3 B3 instead of A B C D E F, is there a way to do this in a "loop"? I tried foreach, but appendpipe cannot be used within a foreach statement. Thanks a lot for your time.
This is an example based on your example dataset. It assumes that there is a lookup file requests.csv (which I generated using the second code snippet below). The makeresults stuff just sets up your data, so assume your search runs up to the inputlookup statement below.

| makeresults
| eval _raw=split(replace("time,os,host,user
1/10/2023 9:00,Linux,Server1,UserA
1/10/2023 11:00,Linux,Server1,UserA
1/10/2023 12:00,Linux,Server2,UserA
1/10/2023 9:00,Linux,Server2,UserB
1/10/2023 14:00,Linux,Server1,UserA","\n","###"),"###")
| multikv forceheader=1
| eval _time=strptime(time, "%d/%m/%Y %k:%M")
| table _time,os,host,user
| inputlookup append=t requests.csv
| eval user=coalesce(user, reporterName)
| foreach change* [ eval <<FIELD>>=strptime('<<FIELD>>', "%d/%m/%Y %k:%M") ]
| stats list(_time) as _time values(key) as key values(reporterEmail) as reporterEmail values(summary) as summary values(changeStartDate) as changeStartDate values(changeEndDate) as changeEndDate by user host
| eval isInside=mvmap(_time, if(_time>=changeStartDate AND _time<changeEndDate, _time.":1", _time.":0"))
| mvexpand isInside
| rex field=isInside "(?<_time>[^:]*):(?<isInside>\d)"

The logic is that it appends the contents of the lookup file to the end of the data, makes the common name (user or reporterName), and then converts the change time fields to epoch. Then the stats function joins all the items together - there is an assumption that there is only one request in requests.csv for each user/server - if there are more, the logic will need to change. After the stats, the mvmap just compares the times and then expands out the results, with isInside showing whether the event is inside the request period.

Here's the csv generation so you can test if needed.
| makeresults
| eval _raw=split(replace("key,host,reporterName,reporterEmail,summary,changeStartDate,changeEndDate
REQ-1000,Server1,UserA,UserA@dummy.com,Investigate error,1/10/2023 8:00,1/10/2023 13:00
REA-1001,Server2,UserB,UserB@dummy.com,Reset service,1/10/2023 8:00,1/10/2023 10:00","\n","###"),"###")
| multikv forceheader=1
| table key,host,reporterName,reporterEmail,summary,changeStartDate,changeEndDate
| outputlookup requests.csv
Hi, I need to write a query to find the time remaining to consume events. I have these three searches:

index=x message.message="Response sent" message.feedId="v1" | stats count as Produced

index=y | spath RenderedMessage | search RenderedMessage="*/v1/xyz*StatusCode*2*" | stats count as Processed

index=z message.feedId="v1" | stats avg("message.durationMs") as AverageResponseTime

So I want to basically perform: Average Time Left = (Produced - Processed) / AverageResponseTime. How can I go about doing this? Thank you so much
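One way to combine the three searches into a single result (a sketch assuming the field names above; untested) is appendcols, which lays the single-row stats outputs side by side:

```spl
index=x message.message="Response sent" message.feedId="v1"
| stats count as Produced
| appendcols
    [ search index=y
      | spath RenderedMessage
      | search RenderedMessage="*/v1/xyz*StatusCode*2*"
      | stats count as Processed ]
| appendcols
    [ search index=z message.feedId="v1"
      | stats avg("message.durationMs") as AverageResponseTime ]
| eval TimeLeft = (Produced - Processed) / AverageResponseTime
```

Note that if the goal is the total remaining time, multiplying the backlog (Produced - Processed) by the average duration may be the intended calculation rather than dividing; the eval follows the formula as written in the post.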
Thank you very much, Sir. Finally, I got the solution based on your suggestion. I put the filename column and value in the csv file. That is an easy way to get the lookup filename in a search.
Hello @PickleRick, Thank you for the detailed information. I have gone through the shortcomings and I guess I'll work through them. However, could you please guide me on the inputs.conf and outputs.conf for cloud? Is there a way to validate whether the UF is receiving logs?
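For illustration only, a minimal sketch of the two files on a universal forwarder (all paths, names, and the server value are placeholders; Splunk Cloud normally supplies the real outputs.conf and certificates via its downloadable UF credentials app):

```ini
# inputs.conf (sketch - path, index, and sourcetype are placeholders)
[monitor:///var/log/myapp/app.log]
index = main
sourcetype = myapp:log

# outputs.conf (sketch - normally delivered by the Splunk Cloud credentials app)
[tcpout]
defaultGroup = cloud_indexers

[tcpout:cloud_indexers]
server = inputs.<your-stack>.splunkcloud.com:9997
```

To check whether the forwarder is actually sending data, one common approach is to search the forwarder's internal logs on the cloud stack, e.g. index=_internal host=<uf_hostname>, or to review splunkd.log on the forwarder itself.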
@bowesmana, thanks for your response, but I am still having trouble with how to 'now use eval+stats to join and collapse the data events and the lookup events together'. I don't see how I can apply these rules:
Optional joining, i.e. I want every record in the events list
Event date is between the request changeStartDate and changeEndDate
For example, I could have these events (some columns removed for brevity):

_time, os, host, user
1/10/2023 9:00, Linux, Server1, UserA
1/10/2023 11:00, Linux, Server1, UserA
1/10/2023 12:00, Linux, Server2, UserA
1/10/2023 9:00, Linux, Server2, UserB
1/10/2023 14:00, Linux, Server1, UserA

and these requests:

key, host, reporterName, reporterEmail, summary, changeStartDate, changeEndDate
REQ-1000, Server1, UserA, UserA@dummy.com, Investigate error, 1/10/2023 8:00, 1/10/2023 13:00
REA-1001, Server2, UserB, UserB@dummy.com, Reset service, 1/10/2023 8:00, 1/10/2023 10:00

and I would like this result:

_time, os, host, user, key, reporterName, reporterEmail, summary, changeStartDate, changeEndDate
1/10/2023 9:00, Linux, Server1, UserA, REQ-1000, UserA, UserA@dummy.com, Investigate error, 1/10/2023 8:00, 1/10/2023 13:00
1/10/2023 11:00, Linux, Server1, UserA, REQ-1000, UserA, UserA@dummy.com, Investigate error, 1/10/2023 8:00, 1/10/2023 13:00
1/10/2023 12:00, Linux, Server2, UserA, -, -, -, -, -, -
1/10/2023 9:00, Linux, Server2, UserB, REA-1001, UserB, UserB@dummy.com, Reset service, 1/10/2023 8:00, 1/10/2023 10:00
1/10/2023 14:00, Linux, Server1, UserA, -, -, -, -, -, -

So, UserA raised 1 request that matches 2 of the events, but the last event does not match as it's outside the date/time range. Thanks in advance
Hi @Splunkerninja, In classic dashboards, you can install and use the Treemap visualization <https://splunkbase.splunk.com/app/3118>.
There are several patterns illustrated for use with renderXml = true and $XmlRegex:

<Provider[^>]+Name=["']Microsoft-Windows-Security-Auditing["']
<EventID>4688<\/EventID>
<Data Name=["']NewProcessName["']>C:\\Program Files \(x86\)\\Tanium\\Tanium Client\\TaniumClient\.exe<\/Data>
<Data Name=["']ParentProcessName["']>C:\\Program Files \(x86\)\\Tanium\\Tanium Client\\TaniumClient\.exe<\/Data>

Recall that % was used as a start and end delimiter and is not part of the pattern.
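As a sketch of where such a pattern lands in inputs.conf (the stanza name and setting shown are illustrative; verify against the inputs.conf spec for your version):

```ini
# inputs.conf (illustrative sketch)
[WinEventLog://Security]
renderXml = true
blacklist1 = $XmlRegex=%<EventID>4688<\/EventID>%
```

Here the % characters delimit the regex, as noted above, and are not part of the pattern itself.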
Hi @AL3Z, Please read <https://docs.splunk.com/Documentation/Splunk/9.1.2/Admin/Inputsconf#Event_Log_allow_list_and_deny_list_formats> carefully. If renderXml = false, yes, you can use EventCode an... See more...
Hi @AL3Z, Please read <https://docs.splunk.com/Documentation/Splunk/9.1.2/Admin/Inputsconf#Event_Log_allow_list_and_deny_list_formats> carefully. If renderXml = false, yes, you can use EventCode and Message in your blacklist settings. It appears you have set the suppress_* settings to true. You should only set those to true if either (a) renderXml = true or (b) you want to exclude the fields from your events as illustrated by your image.
Interesting, it sounds like you have the energy to dig a little deeper. Take a look at these links:
https://www.splunk.com/en_us/blog/tips-and-tricks/splunk-clara-fication-job-inspector.html
https://conf.splunk.com/files/2020/slides/TRU1143C.pdf
which show how you can dive into debug logging and the search log - maybe that will throw up something useful.
I am not sure what else to suggest - for some reason you have gone back to a 5 minute cron window with a 15 minute time range, which is something I earlier suggested you change. I also suggested using a specific earliest/latest time window, which you do not appear to be doing. It is also not clear what you meant in your original post about incidents coming in at 9:16 with events at 8:20.
Unless you are able to give detail about events/times and specific detail of the problem, it is impossible for anyone to offer concrete advice that will help you. You would need to provide an example showing:
the time at which the events are visible in Splunk
the cron schedule for the alert
the time window for the alert search
a run of the alert that does not show the expected data