All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Just use a line chart or column chart as your visualisation.  
Not exactly like the above one. But the output should be a chart with x-axis /api/*/book, /api/*/collections and /api/*/notes, and the y-axis should be the response time.
Do you mean something like this?

index=your_index (URI="/api/*/book" OR URI="/api/*/collections" OR URI="/api/*/notes")
| stats avg(duration) as avg_time by URI
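Since the requirement is a chart with only three x-axis values (the wildcard patterns, not each concrete URI), grouping by URI would still give one bar per endpoint. A sketch that collapses the URIs into their pattern first - index and field names (your_index, URI, duration) are placeholders taken from the query above:

```spl
index=your_index (URI="/api/*/book" OR URI="/api/*/collections" OR URI="/api/*/notes")
| eval api_group=case(match(URI, "/book$"), "/api/*/book",
                      match(URI, "/collections$"), "/api/*/collections",
                      match(URI, "/notes$"), "/api/*/notes")
| stats avg(duration) as avg_time by api_group
```

With a column chart visualization, api_group becomes the x-axis with exactly the three requested values and avg_time the y-axis.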
You could try something like this:

index=x message.message="Response sent" message.feedId="v1"
| stats count as Produced
| appendcols [ search index=y | spath RenderedMessage | search RenderedMessage="*/v1/xyz*StatusCode*2*" | stats count as Processed ]
| appendcols [ search index=z message.feedId="v1" | stats avg("message.durationMs") as AverageResponseTime ]
| eval AverageTimeLeft = (Produced - Processed) * AverageResponseTime

Note that I think your calculation should be a multiplication, not a division.
Yes, you can use different column names.
No, I don't think this method can be done in a loop. However, there may be other ways to solve this which might be applied in a loop, but it may depend on the column names and the relationship between the pairs of column names, e.g. A and A1, B and B1, etc.
Hi @Vishal2 , you already have the solution: you have to create a lookup containing the list of monitored hosts and run the above search. What's your doubt?
Ciao.
Giuseppe
P.S.: Karma Points are appreciated by all the contributors.
PowerShell for Linux? Or cat filename | grep, or awk?
Hi Team, I want to create a Splunk dashboard with the average response time taken by all the APIs which follow this condition. Example: I have the below APIs:

/api/cvraman/book
/api/apj/book
/api/nehru/book
/api/cvraman/collections
/api/apj/collections
/api/indira/collections
/api/rahul/notes
/api/rajiv/notes
/api/modi/notes

Now I will check for the average of the APIs /api/*/book, /api/*/collections, /api/*/notes. The dashboard should have only these response times in the chart: /api/*/book, /api/*/collections, /api/*/notes. I tried the below query, but the dashboard shows the combined average of all three. Can someone please help on this?

index=your_index (URI = /api/*/book OR URI = /api/*/collections OR /api/*/notes. ) | stats avg(duration) as avg_time
@tscroggins Thank you, but is there a way we can customize the colors categorically? E.g. I have 3 grades A, B, C in the legend label and I want to color the tiles like A=Green, B=Orange, C=Red.
Hi @bowesmana
This is the event that occurred. The time the event occurred is 2023-11-30 00:00:33.789. I have set the cron expression to run every 15 minutes and the Time Range to the last 15 minutes. I am getting incidents and alerts 30 minutes, or sometimes 45 minutes, after the event triggered in Splunk. This is my query:

index="abc" "pace api iaCode - YYY no valid pace arrangementId as response!!!" OR "pace api iaCode - ZZZ no valid pace arrangementId as response!!!" source!="/var/log/messages" sourcetype=600000304_gg_abs_ipc2
| rex "-\s+(?<Exception>.*)"
| table Exception source host sourcetype _time
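A common cause of alerts firing well after the event time is indexing lag: the event's _time is earlier than its _indextime, so a search over "last 15 minutes" misses events that have not been indexed yet. A sketch to measure the lag on this data, reusing the index and sourcetype from the query above:

```spl
index="abc" sourcetype=600000304_gg_abs_ipc2
| eval lag_seconds=_indextime-_time
| stats avg(lag_seconds) as avg_lag max(lag_seconds) as max_lag
```

If the lag is large, one common mitigation is to offset the scheduled window so events have time to arrive, e.g. cron */15 * * * * with earliest=-30m@m latest=-15m@m (the exact offset depends on the measured lag).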
Hi Splunkers, this problem is occurring on Splunk_TA_paloalto app panels. Does someone know how to handle this problem? I understand it has no effect on any search, but it's still annoying. Thanks in advance.
Hi @_pravin , probably the eval doesn't always match, so all the other commands that use the Module field don't work. Check if you really need this condition and try replacing it with the isnull() function.
Ciao.
Giuseppe
@rivars  You are a lifesaver!  
Thanks a lot @ITWhisperer . Works like a charm. Sorry, I am a newbie in Splunk and did not even know of the existence of the makeresults command. One short question: could this be made more flexible? I mean, if columns are named like A1 B1 A2 B2 A3 B3 instead of A B C D E F, is there a way to do this in a "loop"? I tried foreach, but appendpipe cannot be used within a foreach statement. Thanks a lot for your time.
This is an example based on your example dataset. It assumes that there is a lookup file requests.csv (which I generated using the second code snippet below). The makeresults stuff just sets up your data, so assume your search runs up to the inputlookup statement below.

| makeresults
| eval _raw=split(replace("time,os,host,user
1/10/2023 9:00,Linux,Server1,UserA
1/10/2023 11:00,Linux,Server1,UserA
1/10/2023 12:00,Linux,Server2,UserA
1/10/2023 9:00,Linux,Server2,UserB
1/10/2023 14:00,Linux,Server1,UserA","\n","###"),"###")
| multikv forceheader=1
| eval _time=strptime(time, "%d/%m/%Y %k:%M")
| table _time,os,host,user
| inputlookup append=t requests.csv
| eval user=coalesce(user, reporterName)
| foreach change* [ eval <<FIELD>>=strptime('<<FIELD>>', "%d/%m/%Y %k:%M") ]
| stats list(_time) as _time values(key) as key values(reporterEmail) as reporterEmail values(summary) as summary values(changeStartDate) as changeStartDate values(changeEndDate) as changeEndDate by user host
| eval isInside=mvmap(_time, if(_time>=changeStartDate AND _time<changeEndDate, _time.":1", _time.":0"))
| mvexpand isInside
| rex field=isInside "(?<_time>[^:]*):(?<isInside>\d)"

The logic is that it appends the contents of the lookup file to the end of the data, makes the common name (user or reporterName), and then converts the change time fields to epoch. Then the stats function joins all the items together - there is an assumption that there is only one request in requests.csv for each user/server; if there are more, the logic will need to change. After the stats, the mvmap just compares the times and then expands out the results, with isInside showing if the event is inside the request period.

Here's the csv generation so you can test if needed.
| makeresults
| eval _raw=split(replace("key,host,reporterName,reporterEmail,summary,changeStartDate,changeEndDate
REQ-1000,Server1,UserA,UserA@dummy.com,Investigate error,1/10/2023 8:00,1/10/2023 13:00
REA-1001,Server2,UserB,UserB@dummy.com,Reset service,1/10/2023 8:00,1/10/2023 10:00","\n","###"),"###")
| multikv forceheader=1
| table key,host,reporterName,reporterEmail,summary,changeStartDate,changeEndDate
| outputlookup requests.csv
Hi, I need to write a query to find the time remaining to consume events.

index=x message.message="Response sent" message.feedId="v1" | stats count as Produced
index=y | spath RenderedMessage | search RenderedMessage="*/v1/xyz*StatusCode*2*" | stats count as Processed
index=z message.feedId="v1" | stats avg("message.durationMs") as AverageResponseTime

So I want to basically perform: Average Time Left = Produced - Processed / AverageResponseTime. How can I go about doing this? Thank you so much.
Thank you very much, sir. Finally, I got the solution based on your suggestion. I put the filename column and value in the CSV file. That is an easy way to get the lookup filename in a search.
Hello @PickleRick , thank you for the detailed information. I have gone through the shortcomings and I guess I'll work through that. However, could you please guide me on the inputs.conf and outputs.conf for cloud? Is there a way to validate if the UF is receiving logs?
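For Splunk Cloud, the usual route is to install the universal forwarder credentials app downloaded from your cloud stack, which supplies outputs.conf (server addresses and certificates) for you; you then only define inputs yourself. A minimal inputs.conf sketch, where the monitored path, index, and sourcetype are placeholders for illustration:

```
# inputs.conf - monitor a log directory (path, index, sourcetype are placeholders)
[monitor:///var/log/myapp/*.log]
index = main
sourcetype = myapp:log
disabled = false
```

To validate that the UF is forwarding, one common check is to search its internal logs from the search head, with the host name as a placeholder:

```spl
index=_internal host=<your_uf_host> sourcetype=splunkd
```

If that returns events, the forwarder is connected and sending; you can then search your target index for the monitored data itself.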
@bowesmana , thanks for your response, but I am still having trouble with how to 'now use eval+stats to join and collapse the data events and the lookup events together'. I don't see how I can apply these rules:

Optional joining, i.e. I want every record in the events list
Event date is between the request changeStartDate and changeEndDate

For example, I could have these events (some columns removed for brevity):

_time,os,host,user
1/10/2023 9:00,Linux,Server1,UserA
1/10/2023 11:00,Linux,Server1,UserA
1/10/2023 12:00,Linux,Server2,UserA
1/10/2023 9:00,Linux,Server2,UserB
1/10/2023 14:00,Linux,Server1,UserA

and these requests:

key,host,reporterName,reporterEmail,summary,changeStartDate,changeEndDate
REQ-1000,Server1,UserA,UserA@dummy.com,Investigate error,1/10/2023 8:00,1/10/2023 13:00
REA-1001,Server2,UserB,UserB@dummy.com,Reset service,1/10/2023 8:00,1/10/2023 10:00

and I would like this result:

_time,os,host,user,key,reporterName,reporterEmail,summary,changeStartDate,changeEndDate
1/10/2023 9:00,Linux,Server1,UserA,REQ-1000,UserA,UserA@dummy.com,Investigate error,1/10/2023 8:00,1/10/2023 13:00
1/10/2023 11:00,Linux,Server1,UserA,REQ-1000,UserA,UserA@dummy.com,Investigate error,1/10/2023 8:00,1/10/2023 13:00
1/10/2023 12:00,Linux,Server2,UserA,,,,,,
1/10/2023 9:00,Linux,Server2,UserB,REA-1001,UserB,UserB@dummy.com,Reset service,1/10/2023 8:00,1/10/2023 10:00
1/10/2023 14:00,Linux,Server1,UserA,,,,,,

So, UserA raised 1 request and matches 2 of the events, but the last event does not match as it's outside the date/time range. Thanks in advance.