All Topics

I have a scheduled report that I have set up as a dashboard panel to display the results of the report alongside other important information. However, when I open the dashboard, it appears that the search runs from the beginning instead of just displaying the results of the last report run. Here is the code from the dashboard source:

<panel>
  <chart>
    <title>PanelTitle</title>
    <search ref="ReportName"></search>
    <option name="charting.chart">column</option>
    <option name="charting.chart.showDataLabels">all</option>
    <option name="charting.drilldown">all</option>
  </chart>
</panel>

Details of the report that might help: this scheduled report runs daily at 0:00. Its time range is the last 7 days. It is shared at the app level, it was created by running a search and choosing "Save As Report", and it is owned by the same account that was used to create the dashboard.
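One approach worth trying (a sketch, not from the original post; "admin" and "search" below are hypothetical owner and app names) is to reference the scheduled report's cached artifacts explicitly with loadjob, so the panel reuses the last scheduled run instead of dispatching a new search:

```xml
<panel>
  <chart>
    <title>PanelTitle</title>
    <search>
      <!-- loadjob reuses the results of the most recent scheduled run -->
      <query>| loadjob savedsearch="admin:search:ReportName"</query>
    </search>
    <option name="charting.chart">column</option>
  </chart>
</panel>
```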
I have the following query:

index="main"
| rex field=_raw "node '(?<Falling_Node>[^']*)"
| eval Rising_Node=case(<.....>)
| sort +_time
| filldown Rising_Node Falling_Node
| where (host==Rising_Node OR host==Falling_Node)
| sort -_time
| transaction startswith="<start event>" endswith="<end event>"

I look at all events and then use rex and eval to evaluate the necessary rising and falling nodes of each transaction. Then I sort +_time so that I can filldown those fields to all the events, use | where to filter out any events that aren't coming from either the rising or falling nodes, and finally sort -_time so that I can form the transaction. This works perfectly for all transactions where the start and end events come from hosts that are the Rising_Node or the Falling_Node. However, some of the events that I want to turn into transactions aren't like this. They have the same necessary start and end events, but those start and end events are NOT on the Rising_Node or the Falling_Node. Therefore, my query does not work, because the "| where (host==Rising_Node OR host==Falling_Node)" clause always filters out those start and end events, so the transaction cannot be formed. Does anyone have a workaround for these peculiar events where the start and end events are not on the rising or falling nodes?
Here is what the scrambled data looks like for some of the transactions that have start events that are not on the rising or falling nodes:

2020/08/11 11:40:18.473, NOT_RISING/FALLING_NODE , Requested Falling node 'Falling_Node' by user '....'" (Start Event)
2020/08/11 11:40:44.512, Falling_Node, , <....information....>
2020/08/11 11:40:45.512, Rising_Node, , <....information....>
2020/08/11 11:40:49.512, NOT_RISING/FALLING_NODE, , <....information....>
2020/08/11 11:40:49.889, NOT_RISING/FALLING_NODE, , <....information....>
2020/08/11 11:40:50.512, Rising_Node, , <....information....>
2020/08/11 11:40:51.512, Rising_Node, , <....information....>
2020/08/11 11:40:55.889, NOT_RISING/FALLING_NODE, , <....information....>
2020/08/11 11:40:59.512, Rising_Node, , <....information....>
2020/08/11 11:40:59.889, NOT_RISING/FALLING_NODE, , <....information....>
2020/08/11 11:41:13.915, NOT_RISING/FALLING_NODE, Completed Transaction (End Event)

This is a sample of events for which my query doesn't work correctly, because the Start and End Events are NOT on the Rising or Falling Node. However, I still want to filter out all the other NOT_RISING/FALLING_NODE events within the transaction.
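One possible workaround (a sketch reusing the placeholders from the question, not a tested solution) is to tag the start and end events before the host filter so that they survive it, for example with searchmatch. Only the boundary events are exempted; other events from non-rising/falling hosts are still dropped:

```spl
index="main"
| rex field=_raw "node '(?<Falling_Node>[^']*)"
| eval Rising_Node=case(<.....>)
| sort +_time
| filldown Rising_Node Falling_Node
| eval is_boundary=if(searchmatch("<start event>") OR searchmatch("<end event>"), 1, 0)
| where host==Rising_Node OR host==Falling_Node OR is_boundary==1
| sort -_time
| transaction startswith="<start event>" endswith="<end event>"
```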
Hi, I have a dashboard with 3 panels. The 2nd and 3rd will be invisible by default. The first panel has some data like shown below:

app | fail | success
--------------------
A   |  10  |   20
B   |  30  |   40

If I click on A, the corresponding panel (say the 2nd, only) should be visible. If I click on B, the corresponding panel (say the 3rd, only) should be visible. Could someone please help me with the condition I have to put in <drilldown></drilldown> in the first panel? Currently I am trying like this:

<panel id="first"> ....... </panel>
<panel depends="$second$"> ...... </panel>
<panel depends="$third$"> ...... </panel>
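One common pattern for this (a sketch; the token names match the depends attributes above, and the exact match-expression syntax can vary slightly between Splunk versions) is to set and unset the tokens from <condition> elements inside the first panel's drilldown:

```xml
<panel id="first">
  ...
  <drilldown>
    <condition match="'click.value'==&quot;A&quot;">
      <set token="second">true</set>
      <unset token="third"></unset>
    </condition>
    <condition match="'click.value'==&quot;B&quot;">
      <set token="third">true</set>
      <unset token="second"></unset>
    </condition>
  </drilldown>
</panel>
```

Unsetting the other token on each click ensures only one of the hidden panels is visible at a time.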
We recently updated our Splunk deployment from 6.5 to 7.3.3. We redeployed our server and indexed everything exactly the same as before: same inputs, props, transforms. But we are having a weird issue where some of our reports are identical to the old ones, but anywhere from 15-30% of the results are all jumbled up. I put an example of what we are seeing below:

source  host  testtype  results  measurement
c://    1     test1     PASS     00.000125
c://    2
c://    UUT SER:   TUE OP:

The logs haven't changed, but it's as if Splunk is not recognizing props.conf and transforms.conf for some of them and pushes the information around in our results. I'm still pretty new to Splunk and have tried to find a similar question, but haven't found an answer. Thanks for the help.
I have a service that is dropping a JSON object every 5 minutes. These objects contain multiple KeyValuePair categories with multiple KVP properties within them. If I use the following query:

index= ****Query ****
| spath input=Properties.Data path=Items{} output=Items
| stats count by Items
| spath input=Items path=Props{} output=Props
| mvexpand Props
| spath input=Props
| spath input=Items
| fields - Items count Props*
| where CN="ClientId" AND PN="Client_authentication_success"

it generates a table: CN | CV | PN | PV (CategoryName | CategoryValue | PropertyName | PropertyValue). The problem is that I cannot seem to generate a timeline from that data. I think this is because it's pulling ALL the data from the entire duration, and none of it contains any time information for the individual log each item belongs to. So I may need to inject the time information early on, before it aggregates all the data into one report. I'm assuming the table would need to look more like: _time | CN | CV | PN | PV. From there, I'm assuming it would be something like:

| timechart span=5m sum(PV) by CV where sum in top10

to get a timeline going. Here is an example of one record:

{"Items":[{"CN":"ClientId","CV":"ABC0001","Props":[{"PN":"Client_authentication_success","PV":10}]},{"CN":"ClientId","CV":"CDE0001","Props":[{"PN":"Client_authentication_success","PV":754}]},{"CN":"ClientId","CV":"ABC0002","Props":[{"PN":"Client_authentication_success","PV":33}]}]}
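One way to keep per-event timestamps (a sketch against the sample record above, not a tested rewrite): the | stats count by Items step is what discards _time, so replacing it with mvexpand keeps the original event time on every row, after which timechart works directly:

```spl
index= ****Query ****
| spath input=Properties.Data path=Items{} output=Items
| mvexpand Items
| spath input=Items
| spath input=Items path=Props{} output=Props
| mvexpand Props
| spath input=Props
| where CN=="ClientId" AND PN=="Client_authentication_success"
| timechart span=5m sum(PV) by CV
```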
Running a sample search suggested by "Add sparklines to search results" in the Splunk documentation for the latest version (8.0.5 as of this writing):

index=_internal | chart sparkline count by sourcetype

results in the following "green" job notification:

This search uses deprecated 'stats' command syntax. This syntax implicitly translates '<function>' or '<function>()' to '<function>(*)', except for cases where the function is 'count'. Use '<function>(*)' instead.

(This is reproduced in Splunk Enterprise versions 8.0.2, 8.0.4.1, and 8.0.5.) Questions: Does this happen for you, too (in Splunk Enterprise versions 8.x)? Why does Splunk use deprecated syntax in their docs? (Is there a good reason for it, or is this simply a benign case of slightly stale documentation that's just not worth an update?) How would one modify the above search to (1) eliminate the "deprecated syntax" message while (2) retaining both the sparkline and the count fields, and optionally (3) making it at least as performant as the "deprecated" one? Thank you! (This question is a more generic follow-up to "This search uses deprecated 'stats' command syntax." that was answered by @richgalloway, thank you.)
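One rewrite that typically clears the warning (a sketch; verify the output fields match the original) is to make the sparkline's aggregate explicit instead of relying on the implicit translation:

```spl
index=_internal
| chart count sparkline(count) by sourcetype
```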
Hello, I was able to set up the app to monitor my local Windows machine's network. All dashboards in the app display events except Geographical Information. Any suggestions on how I can fix the issue? Thanks.
Hello, I have log events with the time format "2020-08-13 15:50:20 UTC+0000" and I have defined TIME_FORMAT as %Y-%m-%d %H:%M:%S %Z, but I am still seeing "Failed to parse timestamp" warnings. Can someone point out where I am making a mistake?
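%Z generally matches a zone abbreviation on its own, not the combined "UTC+0000" form, so one thing worth trying (a sketch; [your_sourcetype] is a placeholder, and exact strptime behavior can vary by Splunk version) is to treat "UTC" as a literal and parse the numeric offset with %z:

```ini
[your_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S UTC%z
MAX_TIMESTAMP_LOOKAHEAD = 30
```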
Is there an easy way to get the first and last event by a unique ID? "transaction" seems to be the way to go but I haven't been able to figure out how to remove the in-between events / values then split them back out into individual events.
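If only the first and last events per ID are needed, stats is usually cheaper than transaction (a sketch; "id" is a hypothetical field name for the unique ID, and the base search is elided):

```spl
index=...
| stats earliest(_time) as first_time latest(_time) as last_time
        earliest(_raw) as first_event latest(_raw) as last_event by id
```

A design note on the trade-off: stats aggregates in a distributed fashion on the indexers, while transaction pulls all events to the search head, which is why it tends to be slower and memory-hungry on large result sets.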
I'm kind of a newbie here in Splunk. What's the difference between the multivalue and transpose commands? How can I convert this input to the desired output? (See the tables below.) Can the multivalue or transpose command manage this? For a newbie like me those commands are kind of tricky. Many thanks in advance.

Input:

Field1 | Field2 | Country     | 1-Jun | 1-Jul | 1-Aug | 1-Sep | 1-Oct
X      | AB     | Thailand    | 23    | 2343  | 31    | 1     | 1
Y      | CD     | Singapore   | 2     | 321   | 2     | 52    | 22
Z      | EF     | Philippines | 31    | 56    | 43    | 84    | 23

Output:

Field1 | Field2 | Country     | Month | Value
X      | AB     | Thailand    | 1-Jun | 23
X      | AB     | Thailand    | 1-Jul | 2343
X      | AB     | Thailand    | 1-Aug | 31
X      | AB     | Thailand    | 1-Sep | 1
X      | AB     | Thailand    | 1-Oct | 1
Y      | CD     | Singapore   | 1-Jun | 2
Y      | CD     | Singapore   | 1-Jul | 321
Y      | CD     | Singapore   | 1-Aug | 2
Y      | CD     | Singapore   | 1-Sep | 52
Y      | CD     | Singapore   | 1-Oct | 22
Z      | EF     | Philippines | 1-Jun | 31
Z      | EF     | Philippines | 1-Jul | 56
Z      | EF     | Philippines | 1-Aug | 43
Z      | EF     | Philippines | 1-Sep | 84
Z      | EF     | Philippines | 1-Oct | 23
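This wide-to-long reshape is what the untable command does. Since untable takes a single key column, one approach (a sketch, untested, appended after whatever search produces the wide table) is to combine the three key fields first and split them back afterwards:

```spl
| eval key=Field1."|".Field2."|".Country
| fields - Field1 Field2 Country
| untable key Month Value
| eval Field1=mvindex(split(key,"|"),0),
       Field2=mvindex(split(key,"|"),1),
       Country=mvindex(split(key,"|"),2)
| fields - key
| table Field1 Field2 Country Month Value
```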
Hello, I have a table that looks similar to the example below. There are some values in columns that do not need to be color coded at all, while other values in the same column need to be color coded based on a value range. Is it possible to color code cells based on row rather than column? For example, below I would only need the "% Completed" row's values to be color coded, but not the other rows' values. Thanks in advance.

                  | Team 1 | Team 2 | Team 3
Total Stories     | 5      | 10     | 20
Stories Completed | 5      | 0      | 10
% Completed       | 100%   | 0%     | 50%
Hello, recently I've been testing Splunk Enterprise on a trial license. Today I installed the Splunk App for Microsoft Exchange and it took over the trial license. As a result, Splunk Enterprise switched to the free license and lost many features I've been using. I uninstalled the Splunk App for Microsoft Exchange and tried to install the trial license file back, but I got this error:

"Bad Request — enttrial.lic: failed to add, stackid=download-trial maps to invalid gid"

Could you help with this issue? I need to test Splunk for a couple more weeks. The trial hasn't expired; I installed Splunk 2 weeks ago and the trial should last 60 days.
I would like to set up an identity lookup for Azure AD user accounts in Splunk ES. It looks like the Microsoft Azure Add-on collects the user data using the Microsoft Azure Active Directory Users input. How can I tell, and configure, which user attributes will be collected? @sleclerc1 I saw you had success with getting this going; can you share what attributes you got back for each user? Thanks.
Hi all, I am trying to test DLTK. When I run a search to send data to the container, it shows these errors:

unable to read JSON response from http://localhost:32775/fit. Either you have no MLTK Container running or you probably face a network or connection issue. Returned with exception (Expecting value: line 1 column 1 (char 0))

The Docker container is running and I implemented the sample code in a Jupyter notebook. What is wrong with this? The SPL I run is this:

| inputlookup server_power.csv
| fit MLTKContainer mode=stage algo=linear_regressor epochs=10 batch_size=32 ac_power from total* into app:server_power_regression

Does anyone know about this?
We are currently running a single instance of Splunk (1 search head, 1 indexer) on a c5.4xlarge AWS EC2 instance in a POC environment. The single-instance specs are as follows: 16 vCPU, 32 GB memory, network bandwidth 10 Gbps, EBS bandwidth 4.8 Gbps. We have about 5 users concurrently accessing Splunk and executing search queries. Additionally, our Splunk instance is integrated with ServiceNOW for ticket generation, and we're also leveraging the AWS Add-on for Splunk to ingest AWS logs. We are also ingesting MuleSoft logs via an API connector, but shall eventually transition to the Splunk HTTP Event Collector (HEC) to receive the logs through either a push/pull mechanism. Currently, we are not experiencing any performance issues and are planning to integrate additional apps/add-ons in the future. We recently heard about the Splunk Dashboards App (Beta): https://splunkbase.splunk.com/app/4710/ , which looks like a great app for dashboarding and meets some of our major requirements for visualizing large and complex data sets. Currently, we have 4 dashboards that leverage both SPL and Simple XML/in-line CSS and JavaScript, which we plan to carry over to the Splunk Dashboards App framework after installing it. While we understand that the App's Splunkbase page states that there is no support for converting dashboards with Simple XML/in-line CSS/JS, we still want to proceed with the conversion as a POC to understand what the impacts would be. Based on the specifications mentioned above, does anyone see potential concerns from a performance/bandwidth standpoint should we proceed with installing and using the Splunk Dashboards App? Thanks!
Hello, we are not receiving the alerts which were created by particular users in Splunk Cloud. They were working before, but suddenly stopped sending the alert emails. Is it due to a permission issue with the users? Please help.
Hi, I am getting the following errors from a failed |mcatalog search against a metrics index using a power user role. The |mcatalog search runs OK with the admin role. The search was running OK in Splunk 7.3.3 but gets an error in Splunk 8.0.2. The power user role has the metric_alerts and list_metrics_catalog capabilities.

Search:

|mcatalog values(metric_name) where index=os_metrics

causing:

ERROR SearchPhaseGenerator - Fallback to two phase search failed:Term based search is not supported
ERROR SearchOrchestrator - Term based search is not supported
ERROR SearchStatusEnforcer - sid:1597278941.14635_FE8E9C88-7BFB-466C-BA64-C8544C690C88 Term based search is not supported
ERROR dispatchRunner - RunDispatch::runDispatchThread threw error: Term based search is not supported

Any suggestions on the cause of the "Term based search is not supported" error and a solution would be welcome. Thanks.
Hi, I have created a lookup and opened it using the search option. I saved the results of this search into a dashboard, but I am not able to find this dashboard in the dashboards list, and neither does it appear in the Lookup Editor app. Can anyone let me know what I am missing here? Thanks, shari
Hi team, we are unable to get the alert emails even when events matching the alert condition are present in Splunk Cloud. It was working until 2 days ago. This issue is only for alerts configured recently (3 weeks back); for the others we are getting the emails. Please help with how we can resolve this, or whom to contact for Splunk Cloud. Thanks.
Our company policy requires an SSL private key to be encrypted. Unfortunately, my script is using the requests library, which doesn't support this, as written in its documentation (feature request: https://github.com/psf/requests/issues/1573). There seems to be a workaround using a custom adapter (https://github.com/m-click/requests_pkcs12), which uses PKCS#12 package files (which contain both the certificate and the key, and can be encrypted). I've installed this adapter under "lib" in my app directory (and I added the line of code to my script to append this path to the library path). But this adapter requires the pyopenssl library (shipped with Splunk, installed in the default Python library path), and that one, in turn, requires ndg.httpsclient.ssl_peer_verification (not shipped with Splunk!). So the adapter fails to load. So in the end, 2 questions: 1) Is anybody having to use client-side SSL authentication with an encrypted key to query an API? How did you make it work? 2) Is there a clean way to add ndg.httpsclient.ssl_peer_verification to the libraries available to Splunk so that pyopenssl can be loaded? I've tried to add it to my app's "lib" directory, but it seems that pyopenssl cannot find it (maybe it expects to find it in the "default" directory?)
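One alternative worth considering (a sketch, not a drop-in fix for the requests-based script; the file paths, password, and URL below are hypothetical) is to sidestep the third-party adapter entirely and use the standard library, since ssl.SSLContext.load_cert_chain accepts a password for an encrypted PEM key:

```python
import ssl
import urllib.request


def make_opener(cert_path, key_path, key_password, ca_path=None):
    """Build a urllib opener that presents a client certificate whose
    private key is an encrypted PEM file (paths are hypothetical)."""
    ctx = ssl.create_default_context(cafile=ca_path)
    # Unlike the plain `cert=` tuple in requests, the stdlib ssl module
    # can decrypt the key at load time via the `password` argument.
    ctx.load_cert_chain(certfile=cert_path, keyfile=key_path,
                        password=key_password)
    return urllib.request.build_opener(
        urllib.request.HTTPSHandler(context=ctx))


# Usage (hypothetical endpoint and files):
# opener = make_opener("client.pem", "client.key", "s3cret")
# with opener.open("https://api.example.com/v1/data") as resp:
#     print(resp.read())
```

This avoids both the requests limitation and the pyopenssl/ndg dependency chain, at the cost of losing requests' higher-level conveniences.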