All Topics


Hello, I am trying to split the system IDs from a single event into multiple events, i.e. so that each SID ends up on a separate line. I tried to write a regex for this, without success. Could anyone perhaps help with the below? Kind Regards, Kamil

```
| makeresults
| eval SID="I32 DYR DZ1 MHW DYN I58 ICZ ICN I69 I8Y IAE I6J I71 SLG I9Z I7T I7Z I5Y I5U I5T I3I I3G TCX I5O DZX DZC DYQ DYO DYM OGO OJ8 OK8 OKQ OKX DXF DYE DYF SS4 QMW I24 R9H O67 OP0 SP9 I4I I4M"
| rex field=SID "^(?<SID2>[^\r\n].+)"
```
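Since the SIDs are space-delimited rather than newline-delimited, a makemv plus mvexpand approach is usually simpler than rex for this kind of split. A minimal sketch (untested, shown with a shortened version of the sample string):

```
| makeresults
| eval SID="I32 DYR DZ1 MHW DYN I58 ICZ ICN I69 I8Y"
| makemv delim=" " SID
| mvexpand SID
```

makemv turns the space-delimited string into a multivalue field, and mvexpand then fans each value out into its own result row.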
Hi Team, I've created a Splunk dashboard and I'm able to see the data. I have also created a few users and given them (admin) permissions to see the dashboard data. However, those users can see the dashboard but not the data inside it. Other users are able to see the data from the index when searching manually, but are unable to see it in the dashboard created by the admin user. Can you please guide me if I'm missing anything?
Hey Splunkers, I am quite new to Splunk and want to create a heat map that displays average values per hour, grouped per day, over a week. Below you can see what I have so far. My problem is that the columns and rows seem to be inverted, and that the current y-axis shows values from 6 to "other" instead of 1 to 24 hours. Can anyone lend me a hand with this? Thanks in advance, Nico. EDIT: What I am looking for should look somewhat like this:
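For an hour-by-day layout, one commonly used pattern is to extract the hour and weekday with strftime and then chart one over the other. A minimal sketch (untested; the index, sourcetype, and my_value field are placeholders to substitute with your own):

```
index=my_index sourcetype=my_sourcetype earliest=-7d@d
| eval Hour=strftime(_time, "%H"), Day=strftime(_time, "%a")
| chart avg(my_value) over Hour by Day
```

With Hour as the row dimension and Day as the columns, the axes are no longer inverted, and the resulting table can be given color formatting to get the heat-map effect.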
Hi All, We have configured multiple inputs in our Splunk IDM layer, and a few of them are Microsoft add-ons. Each of these add-ons has multiple inputs with varied polling frequencies to pull data from Azure resources. Hence, we would like to know the limit on configuring inputs, and the limit on the number of apps & add-ons that can be installed in the Splunk IDM layer. Also, we would like to know whether we can monitor the performance ourselves through some means? Thanks!
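On the monitoring side, per-input throughput can usually be inspected from the internal metrics that Splunk emits; a minimal sketch (untested, and note the assumption that _internal is searchable from your Splunk Cloud/IDM search head):

```
index=_internal source=*metrics.log group=per_sourcetype_thruput
| timechart span=1h sum(kb) by series
```

This charts ingested KB per hour per sourcetype, which gives a rough view of how each add-on's inputs are performing over time.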
Good Morning, I am using the http_poller Logstash input filter to connect to the AppDynamics OAuth API, which I will then use with the http filter plugin to retrieve AppDynamics REST API metric data to insert into Elasticsearch. My Logstash http_poller configuration is:

```
input {
  http_poller {
    urls => {
      AppDynamics => {
        method => post
        url => "https://*appdynamics-test-account*/controller/api/oauth/access_token"
        headers => {
          "Accept" => "application/json"
          "Content-Type" => "application/vnd.appd.cntrl+protobuf;v=1"
          "Authorizations" => "*AppDynamics bearer token*"
          "grant_type" => "client_credentials"
          "client_id" => "Stage-CurlTest"
          "client_secret" => "*AppDynamics Client Secret*"
        }
      }
    }
    request_timeout => 60
    schedule => { every => "1h" }
    codec => "json"
    metadata_target => "appd-token"
    type => "AppDynamics"
  }
}
```

I am getting the following error response when the poller tries to connect to AppDynamics:

```
{
  "type": "AppDynamics",
  "@timestamp": "2021-10-18T14:02:11.191Z",
  "appd-token": {
    "request": {
      "method": "post",
      "url": "https://*appdynamics-test-account*/controller/api/oauth/access_token",
      "headers": {
        "Authorizations": "AppDynamics Bearer Token",
        "grant_type": "client_credentials",
        "client_secret": "`AppDynamics Client Secret`",
        "client_id": "Stage-CurlTest",
        "Content-Type": "application/vnd.appd.cntrl+protobuf;v=1",
        "Accept": "application/json"
      }
    },
    "response_headers": {
      "x-content-type-options": "nosniff",
      "x-frame-options": "SAMEORIGIN",
      "x-xss-protection": "1; mode=block",
      "connection": "keep-alive",
      "server": "AppDynamics",
      "date": "Mon, 18 Oct 2021 14:02:10 GMT"
    },
    "runtime_seconds": 0.36,
    "host": "SAPPLOG03",
    "name": "AppDynamics",
    "response_message": "Not Acceptable",
    "code": 406,
    "times_retried": 0
  },
  "@version": "1",
  "tags": [
    "_httprequestfailure"
  ]
}
```

I am new to both the http_poller filter and the AppDynamics metric REST API, so I am not sure what I have configured wrong. I have also posted to the Elastic Community forum. I have removed any account-specific information for security reasons, so if any additional information would be helpful, let me know and I'll add it to this post. Thanks, Bill
Hi, I created an app using the Add-on Builder v4.0.0 to use custom alert actions in Splunk. I created two Add-on Setup parameters: username and password. When I enter the information in these fields and click the "Save" button, it seems that the information is not saved (no UI changes), and if I close and re-open the UI some fields are not filled. This same app was working on the previous version of the Add-on Builder, where the fields were being filled. However, it seems that this info is being saved in the .conf files. In the attachment there is a screenshot with the "password" field not filled when I re-opened the UI screen. Could someone help me? I am using Splunk version 8.2.1 and have tried different web browsers. Thanks.
In the output I get a result with users. The username is the same as the local part of the mail address. How do I reference the username variable in the sendemail command?

```
username  abc  abc1  abc2
John      1    2     3
Smith     3    1     2
Georgy    2    3     1
```

```
| sendemail to="$username$@gmail.com" sendresults=true subject="Test sub" message="message"
```

Error:

```
command="sendemail", {u'@gmail.com': (501, '5.1.3 Invalid address')} while sending mail to: @gmail.com
```

In the output, the variable takes the username, gets everyone's name, and sends a message to everyone. And is it possible to make everyone get only their own line of output? Thanks!!!
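One pattern that is often suggested for per-row emails is to drive sendemail from map, so that each row's username is substituted into its own sendemail invocation. A rough sketch (untested; the base search placeholder and the maxsearches cap are assumptions):

```
your_search_producing_username_rows
| map maxsearches=50 search="| makeresults | eval username=\"$username$\" | sendemail to=\"$username$@gmail.com\" sendresults=true subject=\"Test sub\" message=\"message\""
```

Each row then triggers one inner search that mails only that user, which also addresses the "everyone gets everything" problem.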
Hello, I don't succeed in rounding the field ResponseTime, which is a decimal field with a point instead of a comma.

```
index=tutu
| eval web_duration_ms=round('web_duration_ms', 0)
| timechart avg(web_duration_ms) as ResponseTime by Url
```

What is wrong, please?
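Note that even though the eval rounds each event's value, timechart's avg() runs afterwards and the average of integers can still be fractional. One way around this (untested sketch) is to round after the timechart, using foreach across the per-Url columns:

```
index=tutu
| timechart avg(web_duration_ms) as ResponseTime by Url
| foreach * [eval <<FIELD>>=round('<<FIELD>>', 0)]
```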
I would appreciate any help in preventing license usage warnings. One item I thought of was to create a dashboard of index data & license utilization. What other items do I need to watch to prevent license usage warnings, please? Thank you in advance.
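For the dashboard idea, daily usage per index is available in the license usage log on the license master; a minimal sketch (untested):

```
index=_internal source=*license_usage.log type=Usage
| timechart span=1d sum(b) as bytes by idx
| foreach * [eval <<FIELD>>=round('<<FIELD>>'/1024/1024/1024, 2)]
```

This charts daily GB per index, which can be compared against the license quota to catch growth before warnings fire.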
I have a field named Sec_field. I want to get the list of dashboards that are using this field <Sec_field>. Is it possible to get this using SPL?
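One approach that is often suggested is to pull every dashboard's XML over the REST endpoint and search it for the field name; a minimal sketch (untested):

```
| rest /servicesNS/-/-/data/ui/views splunk_server=local
| search eai:data="*Sec_field*"
| table title eai:acl.app eai:acl.owner
```

This only finds literal occurrences of Sec_field in the dashboard source, so fields referenced indirectly (through macros or saved searches) would be missed.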
Hello, The query below calculates some fields for the period of time set in the time picker. We also have an alert which runs every 6 minutes over the last 2 minutes of values. I want to save the results of the alert and, at the end, calculate the results for a whole week. I saw that there is an option to use a table dataset. My question is whether a table dataset is the right option; if yes, how can I do it, and if not, what is the best way to achieve my goal?

```
| stats count as Total_Requests
        count(eval(Request_Status=500 OR Request_Status=501 OR Request_Status=502 OR Request_Status=503 OR Request_Status=599 OR F5_statusCode=0 OR F5_statusCode="connection limit")) as Requests_Returned_Errors
        count(eval(Request_Status=504 OR F5_serverTime>20000)) as Requests_Returned_Timeouts
        by API
| fields API Total_Requests Requests_Returned_Errors Requests_Returned_Timeouts
| lookup APIs_Owners.csv API OUTPUT Owner
| eval TotalErrors=Requests_Returned_Errors+Requests_Returned_Timeouts,
       SLOTotal=round((Total_Requests-TotalErrors)/Total_Requests*100,2),
       Owner=if(isnotnull(Owner), Owner, "null - edit lookup")
| fields API Total_Requests TotalErrors Requests_Returned_Errors Requests_Returned_Timeouts SLO* Owner
```

Thanks
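A summary index is the more common option than a table dataset for this pattern: let the 6-minute alert collect its results, then aggregate the summary over a week. A rough sketch (untested; the summary index name is an assumption):

```
... existing alert search ...
| collect index=summary_api_slo
```

and for the weekly rollup:

```
index=summary_api_slo earliest=-7d@d
| stats sum(Total_Requests) as Total_Requests sum(TotalErrors) as TotalErrors by API
| eval SLOTotal=round((Total_Requests-TotalErrors)/Total_Requests*100, 2)
```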
Hi, I'd really appreciate some advice on this. I have a data set looking at users and the apps they have access to. There are 3 apps in total. I want to produce a pie chart or visualisation showing the combinations of apps that users have access to. Can anyone give any suggestions on the type of query I need to write? Sample data attached:

```
UserID  App1  App2  App3
1       0     1     0
2       0     1     0
3       0     1     0
4       0     1     0
5       0     0     0
6       0     0     0
7       0     0     0
8       0     0     0
9       1     1     1
10      1     1     1
11      1     1     1
12      1     1     0
13      1     1     0
14      1     1     0
15      0     0     0
16      0     0     0
```

Many thanks, Tim
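One way to get the combinations is to build a label per row and count by it; a minimal sketch (untested) assuming the table above is already loaded with fields UserID, App1, App2, App3:

```
| eval combo=coalesce(mvjoin(mvappend(
      if(App1=1, "App1", null()),
      if(App2=1, "App2", null()),
      if(App3=1, "App3", null())), "+"), "None")
| stats count by combo
```

Rows 9-11 would land in "App1+App2+App3", rows 12-14 in "App1+App2", and users with no access in "None"; the result feeds a pie chart directly.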
Dear Splunk community, How do I use a variable inside a colorPalette expression using SimpleXML? I have the following:

```
mySearch | eval myField = 100
```

If I have a table that returns rows with numbers in them, I can change the color of a row like so:

```
<format type="color" field="sourceField">
  <colorPalette type="expression">if (value > myField, "#df5065", "#00FF00")</colorPalette>
</format>
```

Expectation: any row above 100 is red (#df5065) and all other rows are green (#00FF00). Result: all rows are green. I need to use myField in the comparison. How do I do that? I have tried:

```
$myField$
'myField'
(myField)
```

None work. Thanks in advance.
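A frequently suggested workaround is to hoist the threshold into a dashboard token, since tokens are substituted into the XML before the expression is evaluated, whereas other result fields are not resolvable inside colorPalette. A sketch (untested assumption that the token route works in your Splunk version; the threshold token name is made up):

```
<search>
  <query>mySearch | stats max(myField) as myField</query>
  <done>
    <set token="threshold">$result.myField$</set>
  </done>
</search>
...
<format type="color" field="sourceField">
  <colorPalette type="expression">if (value > $threshold$, "#df5065", "#00FF00")</colorPalette>
</format>
```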
I have a time range picker <input type="time" token="myTime"></input> in my dashboard. When I choose "Date & Time Range" -> Between 10/17/2021 14:08:46.000 (HH:MM:SS.SSS) and 10/18/2021 00:00:00.000 (HH:MM:SS.SSS), the dropdown menu "crashes" and is no longer visible. The same thing happens if I have "form.myTime.latest=1634508000" in the URL (1634508000 corresponds to 10/18/2021 00:00:00.000); it hangs at "loading". This seems to happen when the start/finish time is 00:00:00.000. Any help?
Hi, I have the following table:

```
ts                           action     file_name  source_ip
2021-10-12T09:34:08.910998Z  File Open  test       10.0.0.14
```

I would like to add to this table a column with the last username who logged in from this IP address, using the timestamp from this event as the latest filter and that timestamp - 24h as the earliest. Search 1 on the dashboard:

```
index=files sourcetype=test_files
| search src_ip="$src_ip$" action="$action$" file_name="$name_substr$"
| table ts, action, file_name, src_ip
```

Search 2, which currently relies on two tokens from the first table (ts and src_ip):

```
index="windows" EventCode=4624 src_ip="$ip$"
| eval time="$ts$"
| eval ts_u=strptime(time, "%Y-%m-%dT%H:%M:%S.%6NZ")
| eval start=relative_time(ts_u,"-24h")
| where _time>$$start$$ AND _time<$$ts_u$$
| stats latest(_time) AS Latest, latest(TargetUserName) AS LastUser
| eval LastEvent=strftime(Latest,"%+")
| table LastEvent,LastUser
```

I would like to merge these two searches into one table, but if I use the join or append command I can't use the ts/_time and src_ip field values from the first search (the result is empty). Do you have any idea how I can merge events from two independent sources? Thank you in advance.
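One way to avoid the join/append token problem is to drive the second search from the first with map, which substitutes each outer row's field values into the inner search. A rough sketch (untested; maxsearches is an assumption, and the inner $...$ references are map's per-row substitutions rather than dashboard tokens):

```
index=files sourcetype=test_files src_ip="$src_ip$" action="$action$" file_name="$name_substr$"
| eval ts_u=strptime(ts, "%Y-%m-%dT%H:%M:%S.%6NZ"), start=relative_time(ts_u, "-24h")
| map maxsearches=100 search="search index=windows EventCode=4624 src_ip=\"$src_ip$\" earliest=$start$ latest=$ts_u$
    | stats latest(_time) as Latest latest(TargetUserName) as LastUser
    | eval ts=\"$ts$\", action=\"$action$\", file_name=\"$file_name$\", src_ip=\"$src_ip$\""
```

Each outer row then produces one inner result carrying both the original event fields and the LastUser, giving a single merged table.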
I need help with the procedure to generate a self-signed certificate on a Windows universal forwarder.
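For reference, Splunk ships a CLI helper for this; a minimal sketch on Windows (untested, and the paths and certificate name assume a default UF install):

```
cd "C:\Program Files\SplunkUniversalForwarder\bin"
splunk createssl server-cert -d "C:\Program Files\SplunkUniversalForwarder\etc\auth" -n myForwarderCert
```

The generated certificate can then be referenced from the relevant server.conf or outputs.conf stanza on the forwarder.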
Hi, I would like some advice on how to best resolve my current issue with logs being delayed that are sent from a syslog server via a universal forwarder. Situation details: We have a central syslog server (Linux based) that handles all network device logs and stores them locally. The average daily log volume on the syslog server is about 800 GB to 1 TB. We have configured retention policies such that only 2 days of logs are kept locally on the syslog server. To onboard the logs into Splunk we have deployed a universal forwarder on the syslog server and configured the below to optimize it:

server.conf

```
[general]
parallelIngestionPipelines = 2

[queue=parsingQueue]
maxSize = 10MB
```

limits.conf

```
[thruput]
maxKBps = 0
```

However, even now we are getting many events in splunkd.log of:

```
WARN Tail Reader - Enqueuing a very large file=/logs/.... in the batch reader, with bytes_to_read=xxxxxxx, reading of other large files could be delayed
```

The effect of this is that even though logs are being forwarded to Splunk, there is a large delay between when the logs are received on the syslog server and when they are indexed/searchable in Splunk. When investigating, there are many dips in logs received in Splunk, and usually these dips recover as soon as we start receiving the logs sent by the Splunk UF. However, in some cases the logs are missed, because it seems that by the time the Splunk agent reaches the log file to read it, 2 days have passed and the file is no longer available as per the retention policies. Would appreciate any helpful advice on how to go about addressing this issue. Thank you.
Hi Experts, as part of a new initiative looking at SLO metrics, I have created the below query, which nicely counts the number of errors per day over a 30-day window and also provides an average level on the same graph using an overlay for easy viewing.

```
earliest=-30d@d index=fx ERROR sourcetype=mysourcetype source="mysource.log"
| rex field=source "temp(?<instance>.*?)\/"
| stats count by _time instance
| timechart span=1d max(count) by instance
| appendcols
    [ search earliest=-30d@d index=fx ERROR sourcetype=mysourcetype source="mysource.log"
      | rex field=source "temp(?<instance>.*?)\/"
      | stats count by _time instance
      | stats avg(count) AS 30d_average ]
| filldown 30d_average
```

I wanted to somehow work out the percentage of good results (anything lower than the average value) and the percentage of bad results (above the average) and show them in a stats table for each instance. Help needed! Thanks in advance, Theo
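One way to get the good/bad percentages per instance (untested sketch, using a per-instance average to match the "for each instance" requirement) is to keep the daily counts in long form with untable, attach the average with eventstats, and then count days on either side:

```
earliest=-30d@d index=fx ERROR sourcetype=mysourcetype source="mysource.log"
| rex field=source "temp(?<instance>.*?)\/"
| timechart span=1d count by instance
| untable _time instance daily_count
| eventstats avg(daily_count) as avg_count by instance
| eval good=if(daily_count<=avg_count, 1, 0)
| stats sum(good) as good_days count as total_days by instance
| eval pct_good=round(good_days/total_days*100, 2), pct_bad=round(100-pct_good, 2)
```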
Hello, I need to configure a search using an if-else condition, but the search outputs in a table format. Can someone please assist? Original snippet of the search:

```
index=index1 sourcetype=index1:xyz eventName IN( GetAction, PutAction,ModifyAction) | [Search...]
```

Requirement: separate using if-else as follows:

```
index=index1 sourcetype=xyz
IF eventName in (PutAction, GetAction) then [run search 1]
else if eventName in (ModifyAction) then [run search 2]
```

Please note that both search 1 and search 2 return results in table format.
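SPL has no procedural if/else across whole searches, so the usual workaround is to run the two branches as separate searches over disjoint event sets and combine them, e.g. with append. A rough sketch (untested; the stats lines are stand-ins for the two table-producing searches, which were elided in the question):

```
index=index1 sourcetype=index1:xyz eventName IN(PutAction, GetAction)
| stats count by eventName
| append
    [ search index=index1 sourcetype=index1:xyz eventName=ModifyAction
      | stats count by eventName ]
```

Because the two event sets are disjoint, this behaves like the requested if/else branching while still returning one combined table.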
Very new to Splunk here. I would like to group each HTTP request by its top-level directory, produce a count for each, and plot it in a pie chart: GET /vendor, GET /Services, GET /config, GET /About. For example, GET /vendor/vendor/auth/signin and GET /vendor/vendor/browse should be classified under /vendor in a table. My current query is wrong and doesn't show anything; I modified it based on a GIAC paper.

```
index="apache_logs"
| stats count by request
| eval request=case(
    request="GET /config*", "/config",
    request="GET /vendor*", "/vendor",
    request="GET /Services*", "/Services",
    request="GET /About*", "/About",
    request="GET /about*", "/about")
| top request limit=0 useother=f
| eval request=request." (".count." events, ".round(percent,2)."%)"
```

I would also like to differentiate requests to /about and /About. I hope this made sense.
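For what it's worth, eval's case() does equality comparison, so wildcards like "GET /config*" never match; match() with an anchored regex is the usual fix, and regex matching is case-sensitive by default, which also separates /about from /About. A reworked sketch (untested):

```
index="apache_logs"
| eval dir=case(
    match(request, "^GET /config"),   "/config",
    match(request, "^GET /vendor"),   "/vendor",
    match(request, "^GET /Services"), "/Services",
    match(request, "^GET /About"),    "/About",
    match(request, "^GET /about"),    "/about")
| stats count as events by dir
| eventstats sum(events) as total
| eval dir=dir." (".events." events, ".round(events/total*100, 2)."%)"
| fields dir events
```

Classifying before the stats (rather than after counting by the full request string) is what makes the grouping work.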