All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello, the query below calculates some fields for the period set in the time picker. We also have an alert that runs every 6 minutes and calculates the values for the last 2 minutes. I want to save the results of each alert run and, at the end, calculate the results for a whole week. I saw that there is an option to use a table dataset. My question is whether a table dataset is the right option; if yes, how can I do it, and if not, what is the best way to achieve my goal?

| stats count as Total_Requests count(eval(Request_Status=500 OR Request_Status=501 OR Request_Status=502 OR Request_Status=503 OR Request_Status=599 OR F5_statusCode=0 OR F5_statusCode="connection limit")) as Requests_Returned_Errors count(eval(Request_Status=504 OR F5_serverTime>20000)) as Requests_Returned_Timeouts by API
| fields API Total_Requests Requests_Returned_Errors Requests_Returned_Timeouts
| lookup APIs_Owners.csv API OUTPUT Owner
| eval TotalErrors=Requests_Returned_Errors+Requests_Returned_Timeouts, SLOTotal=round((Total_Requests-TotalErrors)/Total_Requests*100,2), Owner=if(isnotnull(Owner), Owner, "null - edit lookup")
| fields API Total_Requests TotalErrors Requests_Returned_Errors Requests_Returned_Timeouts SLO* Owner

Thanks
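For saving each alert run and rolling it up weekly, a summary index is the usual alternative to a table dataset (a hedged sketch; slo_summary is a hypothetical index name you would create first). Append collect to the end of the alert search:

| collect index=slo_summary

Then the weekly rollup would look something like:

index=slo_summary earliest=-7d@d latest=@d
| stats sum(Total_Requests) as Total_Requests sum(TotalErrors) as TotalErrors by API
| eval SLOTotal=round((Total_Requests-TotalErrors)/Total_Requests*100,2)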
Hi, I'd really appreciate some advice on this. I have a data set looking at users and the apps they have access to. There are 3 apps in total. I want to be able to produce a pie chart or visualisation showing the combinations of apps that users have access to. Can anyone give any suggestions on the type of query I need to write? Sample data attached:

UserID  App1  App2  App3
1       0     1     0
2       0     1     0
3       0     1     0
4       0     1     0
5       0     0     0
6       0     0     0
7       0     0     0
8       0     0     0
9       1     1     1
10      1     1     1
11      1     1     1
12      1     1     0
13      1     1     0
14      1     1     0
15      0     0     0
16      0     0     0

Many thanks, Tim
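A hedged sketch of one approach, assuming the fields are named UserID, App1, App2, App3 as in the sample: build a label describing each combination, then count users per label and render the result as a pie chart.

| eval combination=case(
    App1=1 AND App2=1 AND App3=1, "All three",
    App1=1 AND App2=1, "App1+App2",
    App1=1 AND App3=1, "App1+App3",
    App2=1 AND App3=1, "App2+App3",
    App1=1, "App1 only",
    App2=1, "App2 only",
    App3=1, "App3 only",
    true(), "None")
| stats count by combination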
Dear Splunk community, how do I use a variable inside a colorPalette expression in SimpleXML? I have the following:

mySearch | eval myField = 100

If I have a table that returns rows with numbers in them, I can change the color of a row like so:

<format type="color" field="sourceField">
<colorPalette type="expression">if (value > myField ,"#df5065","#00FF00")</colorPalette>
</format>

Expectation: any row above 100 is red (#df5065) and all other rows are green (#00FF00). Result: all rows are green. I need to use myField in the comparison. How do I do that? I have tried:

$myField$
'myField'
(myField)

None work. Thanks in advance.
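As far as I know, a colorPalette expression can only reference the literal cell value via value, not other fields or tokens, which is why every variant above falls back to green. A hedged workaround: do the comparison in SPL, emit a label field, and color that column with a map palette (a sketch; the status field name is illustrative):

mySearch
| eval myField = 100
| eval status = if(sourceField > myField, "above", "below")

<format type="color" field="status">
  <colorPalette type="map">{"above":#DF5065,"below":#00FF00}</colorPalette>
</format>

The tradeoff is that the color lands on the status column rather than on sourceField itself.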
I have a time range picker <input type="time" token="myTime"></input> in my dashboard. When I choose "Date & Time Range" -> Between 10/17/2021 14:08:46.000 (HH:MM:SS.SSS) and 10/18/2021 00:00:00.000 (HH:MM:SS.SSS), the dropdown menu "crashes" and is no longer visible. The same thing happens if I have "form.myTime.latest=1634508000" in the URL (1634508000 corresponds to 10/18/2021 00:00:00.000): it hangs at "loading". This seems to happen when the start/finish time is 00:00:00.000. Any help?
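A minimal sanity check on the epoch value (a sketch; the rendered time depends on the timezone of the user running the search):

| makeresults
| eval picked=strftime(1634508000, "%m/%d/%Y %H:%M:%S.%3N")
| table picked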
Hi, I have the following table:

ts                           action     file_name  source_ip
2021-10-12T09:34:08.910998Z  File Open  test       10.0.0.14

I would like to add a column to this table with the last username who logged in from this IP address, using the timestamp from this event as the latest filter and that timestamp minus 24h as the earliest.

Search 1 on the dashboard:

index=files sourcetype=test_files
| search src_ip="$src_ip$" action="$action$" file_name="$name_substr$"
| table ts, action, file_name, src_ip

Search 2, which currently relies on two tokens from the first table (ts and src_ip):

index="windows" EventCode=4624 src_ip="$ip$"
| eval time="$ts$"
| eval ts_u=strptime(time, "%Y-%m-%dT%H:%M:%S.%6NZ")
| eval start=relative_time(ts_u,"-24h")
| where _time>$$start$$ AND _time<$$ts_u$$
| stats latest(_time) AS Latest, latest(TargetUserName) AS LastUser
| eval LastEvent=strftime(Latest,"%+")
| table LastEvent,LastUser

I would like to merge these two searches into one table, but if I use the join or append command I can't use the ts/_time and src_ip field values from the first search (the result is empty). Do you have any idea how I can merge events from two independent sources? Thank you in advance.
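One hedged way to merge them without join/append is the map command, which runs its search once per outer row and can see the outer fields as $tokens$ (a sketch, untested; map is slow and capped by maxsearches, so it suits small result sets, and inside a dashboard these $...$ would need $$ escaping as in the original):

index=files sourcetype=test_files
| table ts, action, file_name, src_ip
| eval ts_u=strptime(ts, "%Y-%m-%dT%H:%M:%S.%6NZ"), start=relative_time(ts_u, "-24h")
| map maxsearches=100 search="search index=windows EventCode=4624 src_ip=$src_ip$ earliest=$start$ latest=$ts_u$ | stats latest(TargetUserName) AS LastUser | eval ts=\"$ts$\", action=\"$action$\", file_name=\"$file_name$\", src_ip=\"$src_ip$\""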
Can anyone share the procedure to generate a self-signed certificate on a Windows universal forwarder?
Hi, I would like some advice on how best to resolve a current issue with logs being delayed when sent from a syslog server via a universal forwarder.

Situation details: we have a central syslog server (Linux based) that handles all network device logs and stores them locally. The average daily log volume on the syslog server is about 800 GB to 1 TB. We have configured retention policies so that only 2 days of logs are kept locally on the syslog server. To onboard the logs into Splunk we have deployed a universal forwarder on the syslog server and configured the following to optimize it.

server.conf:

[general]
parallelIngestionPipelines = 2

[queue=parsingQueue]
maxSize = 10MB

limits.conf:

[thruput]
maxKBps = 0

However, even now we are getting many events in splunkd.log of:

WARN Tail Reader - Enqueuing a very large file=/logs/.... in the batch reader, with bytes_to_read=xxxxxxx, reading of other large files could be delayed

The effect is that although logs are being forwarded to Splunk, there is a large delay between when the logs are received on the syslog server and when they are indexed/searchable in Splunk. There are many dips in logs received in Splunk; usually these dips recover as soon as we start receiving the logs sent by the Splunk UF. In some cases, however, logs are missed: by the time the Splunk agent reaches a log file, 2 days have passed and the file is no longer available per the retention policies. Would appreciate any helpful advice on how to go about addressing this issue. Thank you.
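One knob that is sometimes relevant here (a hedged suggestion; test in your environment): that warning is emitted when a file exceeds the batch-reader threshold, and the batch reader works through large files one at a time. Raising min_batch_size_bytes in limits.conf on the forwarder keeps more files with the tailing processor, which reads files in parallel:

limits.conf (on the universal forwarder):

[inputproc]
# Files below this size stay with the (parallel) tailing processor.
# Default is 20 MB; the value is in bytes, and 100 MB here is illustrative.
min_batch_size_bytes = 104857600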
Hi experts, as part of a new initiative looking at SLO metrics, I have created the query below, which nicely counts the number of errors per day over a 30-day window and also provides an average level on the same graph using an overlay for easy viewing.

earliest=-30d@d index=fx ERROR sourcetype=mysourcetype source="mysource.log"
| rex field=source "temp(?<instance>.*?)\/"
| stats count by _time instance
| timechart span=1d max(count) by instance
| appendcols [search earliest=-30d@d index=fx ERROR sourcetype=mysourcetype source="mysource.log" | rex field=source "temp(?<instance>.*?)\/" | stats count by _time instance | stats avg(count) AS 30d_average]
| filldown 30d_average

I want to work out the percentage of good results (anything lower than the average value) and the percentage of bad results (above the average) and show them in a stats table for each instance. Help needed! Thanks in advance, Theo
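A hedged sketch of one way to get per-instance good/bad percentages (untested; this counts errors per day directly rather than taking max-of-counts as the original does, and uses a per-instance average; drop the by clause on eventstats for a global average):

earliest=-30d@d index=fx ERROR sourcetype=mysourcetype source="mysource.log"
| rex field=source "temp(?<instance>.*?)\/"
| bin _time span=1d
| stats count by _time instance
| eventstats avg(count) as avg_count by instance
| eval verdict=if(count <= avg_count, "good", "bad")
| stats count(eval(verdict="good")) as good_days count as total_days by instance
| eval good_pct=round(good_days/total_days*100, 2), bad_pct=round(100-good_pct, 2)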
Hello, I need to configure a search using an if-else condition, but the search outputs in a table format. Can someone please assist? Original snippet of the search:

index=index1 sourcetype=index1:xyz eventName IN(GetAction, PutAction, ModifyAction) | [Search...]

Requirement: separate using if-else as follows:

index=index1 sourcetype=xyz
IF eventName in (PutAction, GetAction) then [run search 1]
else if eventName in (ModifyAction) then [run search 2]

Please note both search 1 and search 2 return results in table format.
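SPL has no procedural if/else across whole searches, but a common pattern is to run each branch on its own filtered subset and combine the tables with append (a sketch; <search 1 pipeline> and <search 2 pipeline> are placeholders for the existing table-producing logic):

index=index1 sourcetype=index1:xyz eventName IN(GetAction, PutAction)
    <search 1 pipeline>
| append
    [ search index=index1 sourcetype=index1:xyz eventName IN(ModifyAction)
        <search 2 pipeline> ]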
Very new to Splunk here. I would like to group each HTTP request based on its top-level directory, produce a count for each, and plot it in a pie chart: GET /vendor, GET /Services, GET /config, GET /About. For example, GET /vendor/vendor/auth/signin and GET /vendor/vendor/browse should both be classified under /vendor in a table. My current query is wrong and doesn't show anything; I modified it based on a GIAC paper.

index="apache_logs"
| stats count by request
| eval request=case(
    request="GET /config*", "/config",
    request="GET /vendor*", "/vendor",
    request="GET /Services*", "/Services",
    request="GET /About*", "/About")
    request="GET /about*", "/about")
| top request limit=0 useother=f
| eval request=request." (".count." events, ".round(percent,2)."%)"

I would also like to differentiate requests to /about and /About. I hope this made sense.
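Two things stand out: the = comparison in case() is an exact match and does not expand wildcards, and there is a stray line after the closing parenthesis. Using match() with anchored regexes is one fix, and regex matching is case-sensitive by default, which also keeps /about and /About distinct (a sketch, assuming the field is named request):

index="apache_logs"
| eval directory=case(
    match(request, "^GET /config"),   "/config",
    match(request, "^GET /vendor"),   "/vendor",
    match(request, "^GET /Services"), "/Services",
    match(request, "^GET /About"),    "/About",
    match(request, "^GET /about"),    "/about")
| stats count by directory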
Hi experts, I am new to Splunk. I need to extract the fields from MSGTXT shown in the mapping below, but only when MSGTXT is in this format ("SZ5114RA 00 1045 .06 .0 165K 2% 9728K 3% 400M"), as there are other types of message text in the logs as well.

Example mapping:
SZ5114RA as A
00 as B
1045 as C
.06 as D
.0 as E
165K as F
2% as G
9728K as H
3% as I
400M as J

Please help! Thank you. Below are the sample logs.

{"MFSOURCETYPE":"SYSLOG","DATETIME":"2021-10-16 02:24:47.53 +1100","SYSLOGSYSTEMNAME":"P01","JOBID":"SZ04","JOBNAME":"SZ04","SYSPLEX":"SYPLX1A","ACTION":"INFORMATIONAL","MSGNUM":"SZ5114RA","MSGTXT":"SZ5114RA 00 1045 .06 .0 165K 2% 9728K 3% 400M","MSGREQTYPE":""}

{"MFSOURCETYPE":"SYSLOG","DATETIME":"2021-10-16 02:24:47.54 +1100","SYSLOGSYSTEMNAME":"P01","JOBID":"SZ04","JOBNAME":"SZ04","SYSPLEX":"SYPLX1A","ACTION":"INFORMATIONAL","MSGNUM":"SZ04","MSGTXT":"SZ04 ENDED -SYS=P01 NAME=LIVE$SZ TOTAL CPU TIME= 12.4 TOTAL ELAPSED TIME= 47.2","MSGREQTYPE":""}
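A hedged sketch using an anchored rex so that only ten-token messages of this shape match (the two percentage slots make the pattern fairly selective; field names A-J follow the mapping above, and the per-token patterns may need tuning to your data):

| rex field=MSGTXT "^(?<A>\S+)\s+(?<B>\d+)\s+(?<C>\d+)\s+(?<D>\S+)\s+(?<E>\S+)\s+(?<F>\S+)\s+(?<G>\d+%)\s+(?<H>\S+)\s+(?<I>\d+%)\s+(?<J>\S+)$"
| where isnotnull(A)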
Hello team, we want to integrate Imperva Cloud with Splunk. We have installed TA-Incapsula-cef in Splunk; however, we are unable to complete the integration. Kindly provide your support.
I want to list all the 'Authentication' related content we have created in the ES app. Is there an SPL query to get this? I need to list all the dashboards, notable events, etc. of the Authentication type. I would really appreciate any help.
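A hedged starting point via the REST endpoints, run from the ES search head (the title filter is a naive substring match you would refine):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search title="*Authentication*" OR search="*Authentication*"
| table title eai:acl.app description

and similarly for dashboards:

| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search label="*Authentication*"
| table label eai:acl.app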
Hi, we have created alerts for Windows server availability (server status is shutting down) using event codes (EventCode=41 OR EventCode=6006 OR EventCode=6008). The new requirement is to create a dashboard for server status (running and shutting down). Can you give me suggestions for showing both statuses (running and shutting down) on one dashboard?
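A hedged sketch of one approach (it assumes the Windows System log is onboarded into index=wineventlog and uses EventCode 6005, "the event log service was started", as the startup signal; adjust names to your environment):

index=wineventlog EventCode IN (41, 6005, 6006, 6008)
| stats latest(EventCode) as last_code by host
| eval status=if(last_code=6005, "Running", "Shut down")
| stats count by status

The intermediate host/status table can also feed a second panel listing the affected servers.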
Hi everyone, I'm looking for a search that shows me when the health status of splunkd changes from green to yellow or red. Would that be possible?
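A hedged starting point, with the caveat that these surfaces vary by Splunk version: the current health tree is exposed over REST, so a scheduled search can snapshot it, and the health reporter also writes to _internal, which could be alerted on for transitions. Both lines below are sketches to explore rather than a confirmed recipe:

| rest /services/server/health/splunkd/details splunk_server=local

index=_internal sourcetype=splunkd component=*Health* (log_level=WARN OR log_level=ERROR)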
Hi, I have a checkbox input with 3 choices (A, B, and C). I want to display panels based on the checkboxes selected. For example, if A & B are selected, the corresponding panel should be displayed; similarly, for any other combination such as B & C or A & C, the corresponding panel should be displayed. Can you please help me with the code? P.S.: Currently I am able to display a panel when only one checkbox is selected, or when all three checkboxes are selected, using condition-based XML code.
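A hedged SimpleXML sketch of one way to cover the pairwise combinations: give the checkbox a comma delimiter, and have each <condition match> set a show token that the matching panel depends on. The match syntax and the value ordering (whether A & B arrives as "A,B") should be verified on your Splunk version; this is a sketch, not a tested dashboard:

<input type="checkbox" token="chk">
  <choice value="A">A</choice>
  <choice value="B">B</choice>
  <choice value="C">C</choice>
  <delimiter>,</delimiter>
  <change>
    <condition match="$value$ == &quot;A,B&quot;">
      <set token="showAB">true</set>
      <unset token="showBC"></unset>
      <unset token="showAC"></unset>
    </condition>
    <condition match="$value$ == &quot;B,C&quot;">
      <set token="showBC">true</set>
      <unset token="showAB"></unset>
      <unset token="showAC"></unset>
    </condition>
    <condition match="$value$ == &quot;A,C&quot;">
      <set token="showAC">true</set>
      <unset token="showAB"></unset>
      <unset token="showBC"></unset>
    </condition>
    <condition>
      <unset token="showAB"></unset>
      <unset token="showBC"></unset>
      <unset token="showAC"></unset>
    </condition>
  </change>
</input>

<panel depends="$showAB$"><!-- existing A & B panel content --></panel>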
Hi, I'm trying to create a single value with trendline visualisation, where I want to compare today's result with yesterday's result. The trendline should be the difference between yesterday's and today's results. I have tried several solutions, but the total number does not tally with today's result. My base query is:

index=emailgateway action=* from!="" to!="" | stats count

which returns today's result (227,019).

Solution 1, using trendline wma2:

index=emailgateway action=* from!="" to!="" | timechart span=1d count as Total | trendline wma2("x") as Trend | sort - _time

The result shows the trendline, but the total number (90,702) does not tally with today's result (227,019).

Solution 2, using the delta command:

index=emailgateway action=* from!="" to!="" | timechart span=1d count as Total | delta Total p=1 as diference

The total number is different here too (including the trendline number).

Solution 3, using the tstats command (from Enterprise Security):

| tstats summariesonly=true allow_old_summaries=true count from datamodel=Email where (All_Email.action=* AND All_Email.orig_dest!="" OR All_Email.orig_src!="") earliest=-48h latest=-24h
| append [| tstats summariesonly=true allow_old_summaries=true count from datamodel=Email where (All_Email.action=* AND All_Email.orig_dest!="" OR All_Email.orig_src!="") earliest=-24h latest=now]
| appendcols [| makeresults | eval time=now() ]
| rename time AS _time

which also did not work. Can anyone help? Did I miss anything? Please.
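A hedged sketch of the usual single-value pattern: give the Single Value visualization a per-day timechart and let it show the last bucket with the previous bucket as the built-in trend/delta (same base search; the time range must cover at least two full days):

index=emailgateway action=* from!="" to!="" earliest=-1d@d
| timechart span=1d count

With the Single Value visualization, the displayed number is the latest bucket (today) and the trend indicator is computed against the previous bucket (yesterday). Note that today's bucket is partial until midnight, which is one common reason the totals don't tally.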
I'm using mstats earliest_time(metric) to find the earliest time for a metric. If I use

|mstats prestats=false earliest_time("http_req_duration_value") as "Start Time" where index=au_cpe_common_metrics

it returns a "Start Time" like 1633986822.000000. I want to be able to display this time in a human readable format on a dashboard, however when I try

|mstats prestats=false earliest_time("http_req_duration_value") as "Start Time" where index=au_cpe_common_metrics | eval STime2=strftime("Start Time", "%d/%m/%Y %H:%M:%S") | table STime2

I get no results. I'd also like to be able to subtract earliest_time from latest_time to get the duration of the event based on other dimensions. I also tried prestats=true but it returned no time values in the events. What format is earliest_time in, and why can't I format it or do calculations with the value? What is happening here? I'm new to working with metric indexes.
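The value is epoch seconds; the catch is quoting. In eval, double quotes make "Start Time" a string literal (which strftime cannot format), while single quotes reference a field whose name contains a space. A sketch covering both the formatting and the duration calculation (untested against your index):

|mstats prestats=false earliest_time("http_req_duration_value") as "Start Time" latest_time("http_req_duration_value") as "End Time" where index=au_cpe_common_metrics
| eval STime2=strftime('Start Time', "%d/%m/%Y %H:%M:%S")
| eval duration_sec='End Time' - 'Start Time'
| table STime2 duration_sec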
Hi, I'm searching an internet usage index for events that contain a particular word somewhere in the domain, for example "edublogs". From the result, I need to sum the total bytes downloaded for all events returned. The problem I'm having is needing to calculate a distinct count of centres and a distinct count of userids where the domain name begins with "edublogs" (e.g. "edublogs.com") but not when it doesn't (e.g. "accountsedublogs.com"). I know this example is wrong, but can someone please help me with how I would change it to achieve the outcome below?

index=searchdata domain="*edublogs*"
| stats count dc(centre) AS distinctCenters
| stats count dc(userid) AS distinctUserids
| stats sum(bytes) AS totalBytes by domain
| table domain totalBytes, distinctCenters, distinctUserids

My desired results would look something like:

domain  totalBytes  distinctCenters  distinctUserids
Senior  50000       15               321

Many thanks
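A hedged sketch that keeps everything in one stats pass, since chaining stats commands discards the fields the earlier ones produced, which is why the original returns nothing useful (field names as in the original; untested):

index=searchdata domain="*edublogs*"
| eval c_match=if(match(domain, "^edublogs"), centre, null()), u_match=if(match(domain, "^edublogs"), userid, null())
| stats sum(bytes) AS totalBytes, dc(c_match) AS distinctCenters, dc(u_match) AS distinctUserids by domain

If the distinct counts should be across all matching events rather than per domain, drop the by clause or compute them with eventstats before the stats.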
msg: INFO | 2021-10-14 10:38 PM | Message consumed: {"InputAmountToCredit":"22.67","CurrencyCode":"AUD","Buid":"1401","OrderNumber":"877118406","ID":"58916"}

In the above sample data, I want to extract "InputAmountToCredit". I also want to extract "ID" and perform another search to get its status (the status is in another event and needs to be extracted by mapping the ID). This is for logs, and I have lots of events to search over to generate a report. Any help would be appreciated.
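A hedged sketch of the extraction: pull the JSON payload out of the message with rex, then expand it with spath (index=yourindex is a placeholder, and the status-event shape below is an assumption since that event wasn't shown):

index=yourindex "Message consumed:"
| rex "Message consumed: (?<payload>\{.+\})"
| spath input=payload
| table _time ID InputAmountToCredit CurrencyCode OrderNumber
| join type=left ID
    [ search index=yourindex <status events>
      | table ID status ]

Because join has subsearch limits, searching both event types together and using stats values(...) by ID is often the more scalable pattern for large reports.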