All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi All, I have a dashboard with a dropdown of about 40 choices that come from a search query (not static). Each choice should "unhide" the corresponding panel on the dashboard. I do not think I am grasping the <done>, <change>, <finalized>, <condition match>, etc. elements I need to figure this out. I have created a dummy version of my actual dashboard below:

<form>
  <fieldset submitButton="true" autoRun="false">
    <input type="dropdown" token="dashboard">
      <label>Dashboard Selection</label>
      <fieldForLabel>Picker</fieldForLabel>
      <fieldForValue>Picker</fieldForValue>
      <search>
        <query>| makeresults | eval Picker = "Apple,Orange" | eval Picker = split(Picker,",") | stats count by Picker</query>
      </search>
    </input>
  </fieldset>
  <row>
    <panel>
      <html>DEBUG TOKENS : : : showapple - $showapple$ || dashboard - $dashboard$</html>
    </panel>
  </row>
  <row>
    <panel depends="$showapple$">
      <title>Apple Dashboard</title>
      <table>
        <search>
          <query>| makeresults | eval Events = "10.10.10.10" | table Events</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
  <row>
    <panel depends="$showorange$">
      <title>Orange Dashboard</title>
      <table>
        <search>
          <query>| makeresults | eval Events = "192.168.1.1" | table Events</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>

You will notice I do not have any code to decide when showapple or showorange is true. I basically tried iterations of this code to make it work:

<done>
  <condition>
    <eval token="showapple">if($dashboard$="Apple","TRUE",null())</eval>
    <eval token="showorange">if($dashboard$="Orange","TRUE",null())</eval>
  </condition>
</done>

I have tried using the above in different areas, and I always get null().
I also tried <change> instead of <done> so the panels would switch instantly with the dropdown value, but I did not have success there either.
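One pattern that often works for this (a sketch against the dummy dashboard above, not a tested fix) is to put a <change> handler with value-matched <condition> elements inside the <input> itself, setting one token and unsetting the other on each selection:

```xml
<input type="dropdown" token="dashboard">
  <label>Dashboard Selection</label>
  <fieldForLabel>Picker</fieldForLabel>
  <fieldForValue>Picker</fieldForValue>
  <search>
    <query>| makeresults | eval Picker = "Apple,Orange" | eval Picker = split(Picker,",") | stats count by Picker</query>
  </search>
  <change>
    <!-- each condition fires when the selected value equals the given string -->
    <condition value="Apple">
      <set token="showapple">true</set>
      <unset token="showorange"></unset>
    </condition>
    <condition value="Orange">
      <set token="showorange">true</set>
      <unset token="showapple"></unset>
    </condition>
  </change>
</input>
```

Enumerating ~40 conditions by hand does not scale for dynamically generated choices; in that case a common trick is a single `<change><set token="show_$value$">true</set></change>` with each panel depending on its own `show_<name>` token, though tokens set by previous selections would still need to be unset.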
I am looking to create a Splunk query but am finding it complex to start with.

Use case: Index 1 has two logs like:

Log 1: Received from client C for user Y and request id: X
Log 2: request id: X completed

Index 2 looks like:

User Y has total sent items count : Z

I want to output each user's items count from a particular client (say, D) for which the request is completed. Basically, if there was a request from client D and that request is completed, give me the user and the items count for that user.
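One way to sketch this (field names and index names here are assumptions extracted with rex; adjust the patterns to your real log format) is to pair the two Index 1 lines by request id, keep only completed requests from client D, then pull the counts from Index 2 per user:

```
index=index1
| rex "Received from client (?<client>\S+) for user (?<user>\S+) and request id:\s*(?<request_id>\S+)"
| rex "request id:\s*(?<request_id2>\S+) completed"
| eval request_id=coalesce(request_id, request_id2)
| stats values(client) as client values(user) as user count as events by request_id
| where events=2 AND client="D"
| join type=inner user
    [ search index=index2 "total sent items count"
      | rex "User (?<user>\S+) has total sent items count :\s*(?<items_count>\d+)"
      | stats latest(items_count) as items_count by user ]
| table user items_count
```

events=2 means both the "Received" and the "completed" line were seen for that request id; stats-based correlation over a shared key is usually preferred over join for large result sets.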
I have the below JSON event, where errors are present in a field which is a list. I want to extract the values in the list and group them with another field which is part of an object in the same event. After grouping, I want to count them as in the output below. I am using the query below but not getting the expected output. Any help on this will be highly appreciated.

Sample JSON Event1

{ "errorList": ["There is an ErrorA", "There is some other ErrorB", "Ohh another ErrorC"], "Details": { "type": "ABC" } }

Sample JSON Event2

{ "errorList": ["There is some other ErrorB", "Ohh another ErrorC"], "Details": { "type": "XYZ" } }

Expected Output

Type  Error                       Count
ABC   There is some other ErrorB  3
ABC   There is an ErrorA          4
XYZ   Ohh another ErrorC          2

Query I am trying

BASE_SEARCH
| rex field=MESSAGE "(?<JSON>\{.*\})"
| spath input=JSON
| rename Details{}.type as "Type"
| rename errorList{} as "Error"
| stats count as Count by "Type" "Error"
| table Type, Error, Count
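Two things stand out in the query above: Details is a JSON object, not an array, so spath names the field Details.type (no {}); and errorList{} comes back as a multivalue field, so it usually needs mvexpand before stats so each error is counted separately. A hedged sketch of the corrected shape:

```
BASE_SEARCH
| rex field=MESSAGE "(?<JSON>\{.*\})"
| spath input=JSON
| rename Details.type as Type, errorList{} as Error
| mvexpand Error
| stats count as Count by Type Error
| table Type Error Count
```

Without mvexpand, stats would group on the whole multivalue list rather than the individual error strings.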
I have a search which triggers an alert if an event hasn't been received by 6:20 am. The alert works fine, but it needs to send data into another system. That system needs a unique id within its Key field. key=$result._time$ won't work, as the event doesn't exist. Is there a way to add a unique value to that key field for an event that doesn't exist? The search is: sourcetype=Batch OR sourcetype=ManualBatch "Step 'CleanupOldRunlogs' finished with status SUCCESS"
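One approach (a sketch; the key format is an arbitrary choice) is to have the search itself emit a synthetic result row with a generated key whenever the expected event is missing, so $result.key$ always has a value:

```
sourcetype=Batch OR sourcetype=ManualBatch "Step 'CleanupOldRunlogs' finished with status SUCCESS"
| stats count
| where count=0
| eval key="CleanupOldRunlogs-missing-" . strftime(now(), "%Y%m%d%H%M%S")
```

The alert then triggers on "number of results > 0" (the row only exists when the event was not received), and the downstream action can reference $result.key$. Using now() in the key makes each firing unique per run.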
Hello everyone, Have you ever wondered why Microsoft does not document the Operation types (the %%-prefixed codes) with their meaning? You don't need to anymore. I have done the needed research (anyone can) and here are the results:

%%2458 = Read
%%2459 = Write
%%2457 = Delete
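For anyone wanting to use this mapping in a search, a small sketch (the field name OperationType is an assumption; substitute whatever field carries the %%-code in your Windows event data):

```
index=wineventlog
| eval Operation=case(OperationType=="%%2458", "Read",
                      OperationType=="%%2459", "Write",
                      OperationType=="%%2457", "Delete",
                      true(), "Unknown")
| stats count by Operation
```

A lookup table would be the more maintainable option if the list of codes grows.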
Hi, We designed a new custom model using the TensorFlow library to do predictive analysis for our use case. We have installed DLTK (container-based) and the DLTK environment setup is done. We are looking for the steps/video to upload our custom ML model, which was trained outside of the Splunk environment. Is it possible to upload a custom ML model into Splunk? If yes, how can we call the custom ML model based on the application logs? Please help.
I have an alert with a "Send email" trigger action when the number of results is greater than zero. The aim is to send a table of results inline in the email. This isn't currently working: no email is received even when there are valid qualifying events in the search period (the previous day). The alert was deployed using the SHC deployer and is owned by "nobody". If I "Open in Search" I see results. If I clone the alert so a local version runs under my user context, the alert email is sent. Looking into _internal events, I can see that when the "nobody" search runs, no results are returned, and hence no email is sent, so this isn't an issue with email configuration. Why does this search in this context give me no results?
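A useful first diagnostic (a sketch; replace the savedsearch name with your alert's name) is to compare the scheduler's view of the two runs side by side, since differences usually come down to the owning user's role, app context, or search filters:

```
index=_internal sourcetype=scheduler savedsearch_name="<your alert name>"
| table _time user app savedsearch_name status result_count run_time
```

If the "nobody"-owned run consistently shows result_count=0 while your cloned copy does not, the likely suspects are role-based search filters or index restrictions applied to the effective user, or knowledge objects (lookups, macros, eventtypes) not shared at the app/global level.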
I have a correlation search running every 5 minutes over the last 15 minutes of data. The requirement: when an event arrives for the first time and creates an episode, it should not create an incident at that time; it should wait 15 minutes and, if the issue is still there, then create an incident. How do we know the issue is still there? We also have auto-closure: if an auto-closure event for the same issue arrives within 15 minutes, it is correlated into the same episode and the episode is closed. If the auto-closure arrives after 15 minutes, the incident is closed instead. The action for raising the incident is based on status=ACTION; for auto-closure, the status is OK.

To implement this, we created these action rules in the NEAP policy:

1. To stop the episode creating an incident for 15 minutes: if the episode has >= 1 event with status ACTIVE and the episode has existed for 900 seconds --> create the SNOW incident.
2. To close the episode after receiving status OK within 15 minutes: if the episode has >= 2 events and status is OK --> close the episode.
3. To close the incident after receiving status OK after 15 minutes: if the episode has >= 2 events, status is OK, and the incident_status for that episode is not resolved or closed --> close the incident in SNOW.

But these use cases are not working as tried. Can anyone help me with how to implement this?
Can anyone assist with this? I see quite a few people have successfully got the logs working by following this workaround --> https://support.umbrella.com/hc/en-us/articles/360001388406-Configuring-Splunk-with-a-Cisco-managed-S3-Bucket However, we get the following error when trying to run the shell script:

fatal error: SSL validation failed for <link> EOF occurred in violation of protocol (_ssl.c:1129)
While pushing the application from the deployer to search head 1, I get this error after entering the command below. Help me sort out this issue.

[root@ip-172-31-3-3 bin]# ./splunk apply shcluster-bundle -target https://172.31.14.82:8089
Warning: Depending on the configuration changes being pushed, this command might initiate a rolling restart of the cluster members. Please refer to the documentation for the details.
Do you wish to continue? [y/n]: y
WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Your session is invalid. Please login.
Splunk username: admin
Password:
Error in parsing pass4SymmKey under shclustering stanza.
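The last line points at the [shclustering] stanza in server.conf on the instance the command runs from. One thing worth checking (a sketch; the exact file location depends on where the key was configured) is whether the encrypted pass4SymmKey was copied from another instance with a different splunk.secret, in which case it cannot be decrypted locally. Re-entering the key in plain text and restarting usually resolves it:

```
# $SPLUNK_HOME/etc/system/local/server.conf
[shclustering]
pass4SymmKey = <your shared key, entered in plain text; Splunk re-encrypts it on restart>
```

The key must match the one configured on the search head cluster members. Note also that apply shcluster-bundle is normally run from the SHC deployer, which is a different role from a deployment server.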
Hi all, I have a correlation search that passes alerts from another system into ES, and I need to prevent the urgency of the alert from being changed by ES. Essentially, I (think I) need ES to ignore the priority of any asset or identity associated with the incident so that the urgency doesn't change. Can anyone offer any advice on how to do this? Thanks very much. Edit: I should add, I didn't create the original correlation search and I don't have much experience in this area, hence the question. Thanks again!
I am testing PAVO Getwatchlist Add-on 1.1.7 on Splunk Enterprise 9.0.0. It seems to be working almost fine. I need to use additional columns and set the configuration in getwatchlist.conf like the following:

1=additional1
2=additional2
3=additional3
...

I expected the field names of the additional columns to become "additional1", "additional2", ... But they became "1", "2", ... I tried modifying getwatchlist.py as follows:

$ diff getwatchlist.py getwatchlist_fix.py
388c388
<     row_holder[add_col] = self.format_value(row[int(add_col)])
---
>     row_holder[add_cols[add_col]] = self.format_value(row[int(add_col)])

After that, the field names became "additional1", "additional2", ... as expected. I am not sure which behavior is correct, but I feel "additional1", "additional2", ... are better.
I've tried to explore every link and doc on the AppDynamics website, but I still can't find any related information.

1. Is there any user limit in AppDynamics, or do we have the freedom to create as many users as we want per controller? And what does the term "controller" actually mean? Is it counted as an account, or as a license that I've purchased?
2. Is there any data-ingest or data-usage limit on the AppDynamics Pro plan, e.g. limited to 300GB each month with extra cost to increase it, or is what I found in your docs correct (100GB/Day/User)?

Regards, Yohan
Bear with me, as this is the first time I'm doing this. I configured a VMware host to send its events via syslog to Splunk. It is working: raw logs are stored in /opt/syslog/192.168.x.x in four different types (local, daemon logs, etc.). Now, how do I index these logs? How do I create a new index=vmware which will start indexing the raw logs so I can start searching? I Googled a bit, but I can't find a step-by-step tutorial.
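A common pattern (a sketch; the index settings and sourcetype name here are assumptions, not VMware-app conventions) is to define the index and a file monitor in local .conf files, then restart Splunk:

```
# indexes.conf on the indexer (or create the index in the UI under Settings > Indexes)
[vmware]
homePath   = $SPLUNK_DB/vmware/db
coldPath   = $SPLUNK_DB/vmware/colddb
thawedPath = $SPLUNK_DB/vmware/thaweddb

# inputs.conf on the instance that can read the files
[monitor:///opt/syslog/192.168.x.x]
index = vmware
sourcetype = vmware:syslog
disabled = false
```

After a restart, the files under that directory are tailed continuously and you can search with index=vmware. For production, syslog is often better received by a dedicated syslog server feeding a forwarder rather than files on the indexer itself.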
Hi all, I have a dashboard made with Dashboard Studio with multiple tables and graphs. To make it more interactive, I would like to be able to click in my visualizations and update a token that is shared across the dashboard. I have created a dynamically populated multiselect and drilldowns in my visualizations. The drilldown recognizes the token of the multiselect, but when I click, the visualizations update for a second and then reset. The token seems to update and then gets immediately reset by the multiselect field, which does not update. Similar behaviour occurs for the dropdown input. Does anyone know why this happens and how to fix it?
Hi, I get data from a DB using dbxquery. I set the time filter with:

WHERE time BETWEEN DATE_TRUNC('hour',NOW()) - INTERVAL '4 HOURS' AND DATE_TRUNC('hour',NOW()) - INTERVAL '2 HOURS'

I use DATE_TRUNC in order to get data from the exact hour (7:00-9:00 instead of 7:10-9:10, for example). After that, in Splunk, I use span=2h. I want the alert to be sent every 2 hours. There was a problem from 4:00-6:00, but at 9:30 I don't receive any alert (because nothing is returned from the search). However, now, at 10:10, when I run the search, it shows the result that I want:

_time             id   count
2022-10-14 04:00  123  0
2022-10-14 06:00  123  0

Effectively, there is no data for id "123" in the filtered period of the SQL query. Do you have any idea how I can do this more generally, without filtering time in SQL the way I do now, to avoid this problem? Or a way to filter time in Splunk rather than in SQL? Here is my search:

| dbxquery connection="database" query="
    SELECT id as id, time as time, count(*) as count
    FROM table
    WHERE time BETWEEN DATE_TRUNC('hour',NOW()) - INTERVAL '4 HOURS' AND DATE_TRUNC('hour',NOW()) - INTERVAL '2 HOURS'
    GROUP BY id, time"
| lookup lookup.csv id OUTPUT id
| eval list_id = "123,466,233,111"
| eval split_list_id = split(list_id, ",")
| mvexpand split_list_id
| where id=split_list_id
| eval _time=strptime(time,"%Y-%m-%dT%H:%M:%S.%N")
| timechart span=2h count by id
| untable _time id count
| makecontinuous
| where count = 0
| stats max(_time) as date_time by id
| eval date_time=strftime(date_time,"%Y-%m-%dT%H:%M:%S")
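One way to make the zero case fire even when the SQL query returns nothing (a sketch; field names follow the search above, and the elided SQL stands in for your existing query) is to generate the expected ids in Splunk first and append the DB results to them, so every id always has at least one zero-count row:

```
| makeresults
| eval id=split("123,466,233,111", ",")
| mvexpand id
| eval count=0, _time=relative_time(now(), "-4h@h")
| append
    [ | dbxquery connection="database" query="SELECT id, time, count(*) as count FROM table WHERE ... GROUP BY id, time"
      | eval _time=strptime(time, "%Y-%m-%dT%H:%M:%S.%N") ]
| timechart span=2h sum(count) as count by id
```

Because the seeded rows exist regardless of what the DB returns, the later "| where count = 0" has rows to match and the alert can still trigger when a period is entirely empty.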
Hello everyone, I am trying to install the forwarder on Linux:

chown -R splunk:splunk /opt/splunkforwarder
sudo -u splunk sh -c "/opt/splunkforwarder/bin/splunk set deploy-poll deployment_address:8089"

and I am getting this error:

sh: /opt/splunkforwarder/bin/splunk: Permission denied

What should I look at? What could the problem be?
Please help: I need to compare and display the last 30 days' data against the last 15 minutes' data.
Hello, kindly assist me with this query/solution. I have a long list of IPs that logged in. Out of this list, I want to know the percentage of only 5 IPs. When I use this query:

---My base query----
| search NOT IPs IN ("IP.A", "IP.B", "IP.C", "IP.D", "IP.E")
| stats count by IP
| eventstats sum(count) as perc
| eval percentage= round(count*100/perc,2)
| fields - perc

it gives me a table like this:

IP    Count  Percentage
IP.A  52     37
IP.B  35     26
IP.C  22     18
IP.D  44     17
IP.E  11     2

The total percentage = 100%. But when I use this query:

---My base query----
ip=*
| stats count by IP
| eventstats sum(count) as perc
| eval percentage= round(count*100/perc,2)
| fields - perc

I get about 5 pages listing all the IPs and their respective percentages, including IP.A to IP.E, in a table that altogether totals 100%, but the percentages of IP.A to IP.E change completely. The 5 IPs shouldn't give me 100%; they should be a percentage fraction of the whole. Please help.
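One way to get each of the 5 IPs as a fraction of the overall total (a sketch against the queries above) is to compute the grand total over all IPs first, and only then filter down to the IPs of interest:

```
---My base query----
| stats count by IP
| eventstats sum(count) as total
| eval percentage=round(count*100/total, 2)
| search IP IN ("IP.A", "IP.B", "IP.C", "IP.D", "IP.E")
| fields - total
```

Because eventstats runs before the filter, "total" covers the whole population, so the five percentages stay relative to all IPs and will no longer sum to 100%.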
I'm trying to do something pretty straightforward and have looked at practically every "average" answer on Splunk Community, but no dice. I want to compare total and average webpage hits on a line chart. I calculated and confirmed the standard (fillnull value=0) and cumulative (fillnull value=null) averages with the following:

host....
| bin _time span=1h
| eval date_hour=strftime(_time, "%H")
| stats count as hits by date, date_hour
| xyseries date, date_hour, hits
| fillnull value=0
| appendpipe
    [| untable date, date_hour, hits
     | eventstats avg(hits) as avg_events by date_hour
     | eval "Average Events" = avg_events
     | xyseries date date_hour avg_events
     | head 1
     | eval date="Average Events"]

How do I plot hits and avg_events on a line chart by date_hour? Also, if there is less convoluted SPL to get the same results, I'd love to know that as well, because I think I found where Google ends. Thanks!
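A less convoluted shape (a sketch; "host...." stands for the original base search) that puts date_hour on the x-axis with total and average hits as two series:

```
host....
| bin _time span=1h
| eval date_hour=strftime(_time, "%H")
| stats count as hits by _time date_hour
| stats sum(hits) as total_hits avg(hits) as avg_hits by date_hour
| sort date_hour
```

The first stats produces one row per hourly bucket, so the second stats' avg(hits) is the average over days for that hour of day while sum(hits) is the total. Rendered as a line chart, total_hits and avg_hits become the two lines; no xyseries/untable round-trip is needed.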