All Topics


I have several groups with access to the same index. In authorize.conf these groups all either have access to wildcard (*) indexes or have been explicitly given access to this index. The strange thing that is throwing me for a loop: two of the three groups with srchIndexesAllowed=* can access and see the data in this index, while one of the three groups with srchIndexesAllowed=* cannot. The only group of these four sample groups that explicitly spells out the index in srchIndexesAllowed also cannot access or see the data. I have tried restarting the search head to see if that would cause it to reload these settings, but that has not helped. Does anyone have any suggestions for how to troubleshoot this?
Dear Splunkers, I have a flow of events and need to raise an alarm when some value, e.g. metricValue, is greater than a threshold, with the state level and last level fields calculated as follows: on the first event, or when the value is less than the threshold, stateLevel=0; when the value is greater than the threshold, stateLevel = lastLevel+1, up to a maximum level (a custom value provided by the client); when the value drops back below the threshold, stateLevel = lastLevel-1. With my current search, lastLevel never exceeds 1 and stateLevel never exceeds 2. What is wrong with my eval commands?

maxLevel = 3
| streamstats current=f window=1 last(dl_dmax) as lastDmax, last(stateLevel) as lastStateLevel by _time
| eval stateLevel = if(isnull(lastStateLevel), 0, lastStateLevel)
| eval lastLevel = if(lastDmax>threshold, case(stateLevel<maxLevel, stateLevel+1, stateLevel==maxLevel, maxLevel), case(stateLevel!=0, stateLevel-1, stateLevel=0, 0))
| eval stateLevel = if(metricValue>threshold, case(lastLevel<maxLevel, lastLevel+1, lastLevel==maxLevel, maxLevel), case(lastLevel!=0, lastLevel-1, lastLevel=0, 0))
| table threshold, metricValue, maxLevel, alertLevel, clearLevel, lastLevel, stateLevel
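The intended behaviour (a counter that steps up toward maxLevel while the value is above the threshold and steps back down toward 0 otherwise) can be sketched outside SPL to pin down the target logic. This is a Python illustration only, with sample values invented for the walk-through; metricValue, threshold and maxLevel correspond to the fields in the post:

```python
def next_level(last_level, value, threshold, max_level=3):
    """Step the level up toward max_level while the value exceeds the
    threshold, and back down toward 0 once it drops below it."""
    if value > threshold:
        return min(last_level + 1, max_level)
    return max(last_level - 1, 0)

# Walk a stream of metric values; the first event starts from level 0.
threshold = 10
level = 0
levels = []
for value in [5, 12, 14, 13, 9, 8, 15]:
    level = next_level(level, value, threshold)
    levels.append(level)
print(levels)  # [0, 1, 2, 3, 2, 1, 2]
```

Each step depends only on the previous level, which is the invariant the streamstats pipeline needs to reproduce.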
Hi, I have a column chart trellis split into two parts based on status: Delivered and Not Delivered. How do I color the two charts differently based on status? I have attached a snapshot of the trellis. Regards
Hello, I am trying to get hold of an Enterprise License free trial in order to run Boss of the SOC on my VM. When I navigate to https://www.splunk.com/en_us/download/splunk-enterprise.html the page keeps reloading. I made a new account to check that I hadn't already used my free trial, but clicking the Enterprise 60-day trial just gives me a free trial of Splunk Cloud. Has the Enterprise free trial gone? How can I go about getting a license? Thanks,
Hi, we are looking to pull the list of applications that were added today and were not part of AppDynamics yesterday. Is there any way to pull this other than from the Controller audit logs?
Hello dear community, I have a Splunk search that looks for all the events that occur over a specific period of time. This period is from Monday at 5 am until Friday at 10 pm. I wish to calculate the amount of time within this period independently of whether there are events in it (it will depend on my choice in the time-range picker). If I choose the last month and the month is made up of 5 weeks, I must calculate the time over all of my Monday-5-am-to-Friday-10-pm periods. I have this:

...| eval date_wday = strftime(_time, "%w%H") | search date_wday>=106 AND date_wday<=523

Could you help me with this?
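The length of one such window is plain date arithmetic, which can then be multiplied by the number of weeks in the selected range. A Python sketch (the specific dates are arbitrary examples of a Monday/Friday pair):

```python
from datetime import datetime, timedelta

# One weekly window: Monday 05:00 through Friday 22:00.
monday_5am = datetime(2021, 6, 7, 5, 0)     # a Monday
friday_10pm = datetime(2021, 6, 11, 22, 0)  # the following Friday
window = friday_10pm - monday_5am
print(window.total_seconds() / 3600)  # 113 hours per weekly window

# Over a 5-week month the in-window time is simply five such windows.
total = 5 * window
print(total.total_seconds() / 3600)  # 565 hours
```

In SPL the equivalent would be counting the whole weeks covered by the time picker and multiplying by the fixed window length, rather than depending on events existing in the window.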
How do I use a metrics index to store metrics data derived from events on the search head? Is it possible to have multiple values and multiple metric names in the same index from the same query? For example, I have a query whose stats produces multiple fields:

| stats sum(leadTime) as lead_time count as user_count sum(failure) as failure_count values(number) as user_number by group _time

How can we store all of these fields as metrics?
When I run the Python script on the Splunk Linux server with the command

python controlFlow IT-PTE-TEST '[{"rtfEnv": "IT-PTE-TEST","appId": "exx-xx-xx","appName": " app-two","flowName": "xyz::api-main"}]'

it gives the expected result:

Environment   GroupName   ApplicationName   FlowName
IT-PTE-TEST   exx-xx-xx   app-two           xyz::api-main

When I call the same script from the Splunk UI with

|script controlFlow IT-PTE-TEST "[{\"rtfEnv\": \"IT-PTE-TEST \",\"appId\": \"exx-xx-xx \",\"appName\": \"app-two\",\"flowName\": \"xyz::api-main"}]"

the passed argument instead comes back as extra output fields:

check [{"rtfEnv": "IT-PTE-TEST" appId appName flowName Environment GroupName ApplicationName FlowName IT-PTE-TEST exx-xx-xx app-two xyz::api-main

Can you help resolve this issue? I want to get the same result as when running the script on the server.
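One thing worth ruling out: in the UI invocation above, the closing quote of the flowName value is not escaped, so the JSON the script receives is malformed. A quick Python check of the two argument strings as they appear in the post (trimmed for illustration; this is not the script itself):

```python
import json

# The argument as the server-side shell delivers it (valid JSON):
good = '[{"rtfEnv": "IT-PTE-TEST","appId": "exx-xx-xx","appName": " app-two","flowName": "xyz::api-main"}]'
records = json.loads(good)
print(records[0]["flowName"])  # xyz::api-main

# The argument as it survives the UI escaping in the post: the final
# string is left unterminated, so parsing fails.
bad = '[{"rtfEnv": "IT-PTE-TEST ","appId": "exx-xx-xx ","appName": "app-two","flowName": "xyz::api-main'
try:
    json.loads(bad)
    parsed = True
except json.JSONDecodeError:
    parsed = False
print(parsed)  # False
```

If the script falls back to splitting its raw arguments when JSON parsing fails, that would explain the argument fragments showing up as fields.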
Hi, is there a way to limit or restrict the view of our custom navigation menu? We want to hide some reports from general viewers. Thanks and regards,
I'm retrieving data from a database using DB Connect. I want to hash some sensitive data with SHA-256 before indexing it. Is there any way or method to do this?
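Worth noting that SHA-256 is a one-way hash rather than encryption, but if a hash is what's wanted, the transformation itself is small. A minimal Python sketch of hashing one column value before it reaches the indexer (the row and the ssn field name are invented examples):

```python
import hashlib

def sha256_hex(value: str) -> str:
    """Return the SHA-256 digest of a field value as a hex string."""
    return hashlib.sha256(value.encode("utf-8")).hexdigest()

# Hash a sensitive column value before handing the row onward.
row = {"user": "alice", "ssn": "123-45-6789"}
row["ssn"] = sha256_hex(row["ssn"])
print(row["ssn"])  # 64 hex characters
```

Depending on the database, the same hashing can often be pushed into the DB Connect SQL query itself so the sensitive value never leaves the database unhashed.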
We have a new requirement to send reports generated by Splunk to our EDX SFTP server inbox location. How can we send reports using SFTP or FTPS?
Hi all, I am trying to install the splunkforwarder 7.3.8 Windows 64-bit version on a Windows 2012 R2 server, but I get a rollback error screen and the installation ends with a "UniversalForwarder Setup Wizard ended prematurely" window. I was running the installer as an Admin user. On checking the installation directory, I found that the files were only partially copied during installation, and no service was created (checked in services.msc). Looking forward to suggestions.
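When an MSI installation rolls back like this, a verbose installer log usually shows the failing action. A hedged sketch of generating one (the .msi filename is a placeholder for the actual downloaded installer):

```
msiexec /i splunkforwarder-7.3.8-x64.msi /L*v uf_install.log
```

Searching the resulting log for "return value 3" typically locates the step that triggered the rollback.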
In general terms, I've been trying to create a search that performs a subsearch using a few fields present in one collection of related events in order to find a unique uuid field. I then use that uuid in another search to retrieve all events with that uuid, which includes the events found by the prior subsearch, but also other events that do not have the fields used in the original subsearch yet do have this unique uuid (they're sourced from a different application, but the uuid is passed through).

The issue is that the uuid is a relatively new addition. I'd like to fall back to essentially the original subsearch (without filtering on the uuid field), and so at least be able to display some of the events when the user views the logs of a task run on an older version of the application that predates the uuid. My current implementation returns no events at all when targeting old versions of the task, because the first subsearch returns the uuid field, which in those older events is null/empty. I can't find a way to do a conditional: if the subsearch returns null/empty, re-run the subsearch but return whatever it comes back with instead of the uuid (or just the result of another search that could be a copy of the original subsearch minus the filtering).

Example: I have a task that logs two shapes of events, because there are two source applications involved.

The first set of events related to a particular task have message shapes of the form:
old: { "name": "some_name", "count": "3" }
new: { "name": "some_name", "count": "3", "uuid": "some-uuid-that-is-current-task-invocation-specific" }

The second set of events related to that same task have message shapes of the form:
old: { "some_detail": "some_value", "another_detail": "another_value" }
new: { "some_detail": "some_value", "another_detail": "another_value", "uuid": "some-uuid-that-is-current-task-invocation-specific" }

My first subsearch uses the name and count fields, with specific values selected via an input dropdown in a dashboard, to find the first set of events related to this particular run of the task. It then returns the uuid field directly to the main search, which retrieves all events with that uuid (common to both sets of events and unique per task invocation). However, if we're looking at an old version of the task, it finds nothing, as there's no uuid field present.

I'd like to somehow check the result of that subsearch: if not null, pass it to the main search as usual and retrieve all the related events; if null, return the results of another search instead (or the original subsearch, but without trying to filter on a uuid). I've tried things like where, appendpipe, if, isnotnull etc. but with no success so far, though that may be more to do with my lack of understanding than with them not being the right tools for the job! Many thanks for any help you can give!
Hi all, I have the following situation: a lookup with these values:

value
1
2
3

and a table with name and value:

name value
a    1
b    2
a    1
b    2
b    2
a    2
b    1

On a dashboard, I would like to view a table that counts how many times each lookup value occurs for each name:

name 1 2 3
a    2 1 0
b    0 4 0
c    0 0 0

If I add the value 4 to the lookup, the table automatically becomes:

name 1 2 3 4
a    2 1 0 0
b    0 4 0 0
c    0 0 0 0

Thanks for any help, Simone
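The pivot being described can be sketched outside SPL to pin down the logic: one row per name, one column per lookup value, zeros filled in for missing combinations, and a new lookup value automatically becoming a new zero column. This Python sketch computes the counts from the sample rows listed above (so the numbers reflect those rows; names here come from the data):

```python
from collections import Counter

lookup_values = ["1", "2", "3"]  # the rows of the lookup
rows = [("a", "1"), ("b", "2"), ("a", "1"), ("b", "2"),
        ("b", "2"), ("a", "2"), ("b", "1")]  # the name/value table

counts = Counter(rows)  # (name, value) -> occurrences
names = sorted({name for name, _ in rows})

# One output row per name, one column per lookup value; combinations
# that never occur show 0, values outside the lookup are dropped.
table = {name: {v: counts.get((name, v), 0) for v in lookup_values}
         for name in names}
print(table)
```

Extending lookup_values with "4" adds a zero-filled column without touching the counting logic, which is the behaviour asked for.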
Hi. I would like to know if anybody else has had this issue. We upgraded our UF on AIX from 8.0.4 to 8.1.3, following the guidelines from Splunk. We have set outputs.conf to use indexer discovery. After the upgrade we saw this message:

ERROR IndexerDiscoveryHeartbeatThread - Error in Indexer Discovery communication. Verify that the pass4SymmKey set under [indexer_discovery:prod] in 'outputs.conf' matches the same setting under [indexer_discovery] in 'server.conf' on the Cluster Master. [uri=https://xxxx:8089/services/indexer_discovery http_code=502 http_response="OK"]

The pass4SymmKey did not change during the upgrade. We changed the configuration to bypass indexer discovery, and that got data flowing into the system again. Kind regards, Lars Søndergaard
I am just starting off with configuring some alerts in my Splunk environment. One of the alerts I have configured as a test runs a scheduled search once a day, looking to see whether any of the Cisco switches in my environment has restarted. I've configured the following search:

index=<my_index> "%SYS-5-RESTART" | stats count

As a simple search this seems to work well, accurately letting me know if a switch has rebooted within the search time window. However, the alert I created from this search seems to send an email regardless of the search result. The alert configuration I used is as follows:

Alert type: scheduled (run every day at 5 pm)
Expires: 24 hours
Trigger alert when: number of results is greater than 0
Trigger: once
Trigger actions: send email

Even today, when I ran the above search over the last 24 hours, it came up with a count of 0, and yet Splunk still sent an email at 5 pm. Is there something I am missing with the alert syntax? Thanks,
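One detail worth checking in a setup like this: a search ending in | stats count always returns exactly one result row, even when the count is 0, so a trigger condition of "number of results greater than 0" will always fire. A sketch of a variant that only returns a row when there are matches (untested here, offered as a direction rather than a confirmed fix):

```
index=<my_index> "%SYS-5-RESTART"
| stats count
| where count > 0
```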
Hello team, I have to collect and import data into Splunk, provided by a REST API. How can this be done? Scenario: as soon as I receive a notification that data is ready, I need to execute the REST call to get the data. The data is in JSON format and has to end up in Splunk. Do I need to implement something custom to save the response into a file for a Splunk forwarder to read, or does Splunk offer something that works out of the box? Thank you in advance, chtamp
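One out-of-the-box option for this pattern is the HTTP Event Collector (HEC): the notification handler can POST the fetched JSON straight to Splunk instead of writing a file for a forwarder. A minimal Python sketch of building the envelope HEC expects (the index and sourcetype values are assumptions; the actual HTTP POST and token handling are omitted):

```python
import json

def hec_payload(event: dict, index: str = "main", sourcetype: str = "_json") -> str:
    """Wrap a JSON event in the envelope Splunk HEC expects."""
    return json.dumps({
        "event": event,
        "index": index,
        "sourcetype": sourcetype,
    })

payload = hec_payload({"status": "ready", "items": 3})
print(payload)
# POST this body to https://<splunk-host>:8088/services/collector/event
# with the header  Authorization: Splunk <hec-token>
# (host, port, and token are placeholders to fill in).
```

The REST call that fetches the data and the POST to HEC can live in the same small script triggered by the notification.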
Hi all, I need to create an alert based on a success rate less than a specific value. My data is as follows:

store = "store1" result= "success"
store = "store1" result= "success"
store = "store1" result= "success-with-warnings"
store = "store1" result= "failed"
store = "store2" result= "success-with-warnings"
store = "store2" result= "failed"
store = "store3" result= "success-with-warnings"
store = "store3" result= "success"

I need to calculate the success rate for each store. result = "success" or "success-with-warnings" count as a success; all other result values count as failed. For example, using the above data, the search result should be something like this:

store1 75% success
store2 50% success
store3 100% success

Then I need to use those values to create an alert, triggered daily, that checks for stores with a success rate of less than 70%. So in this case, I would get an alert for store2.
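The calculation being asked for can be pinned down outside SPL first. This Python sketch reproduces the rates from the sample events above, with the 70% threshold from the post:

```python
events = [
    ("store1", "success"), ("store1", "success"),
    ("store1", "success-with-warnings"), ("store1", "failed"),
    ("store2", "success-with-warnings"), ("store2", "failed"),
    ("store3", "success-with-warnings"), ("store3", "success"),
]

# Both plain and with-warnings results count as a success.
GOOD = {"success", "success-with-warnings"}

rates = {}
for store in {s for s, _ in events}:
    results = [r for s, r in events if s == store]
    rates[store] = 100 * sum(r in GOOD for r in results) / len(results)

# Stores below the 70% threshold are the ones that should alert.
alerts = sorted(s for s, rate in rates.items() if rate < 70)
print(rates, alerts)
```

In SPL terms this maps onto marking each event good or bad with eval, aggregating per store, and alerting on the rows under the threshold.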
Brand new Splunk user here. As a learning experience, I want to index all URLs requested by users/computers (Mac, Linux & Windows) on my network. I've found lots of good info on how to use Splunk to search that data. But thus far I've been unable to figure out how you actually get it into Splunk to begin with. I'm using the free version, so I want to minimize the data flow so that I don't breach the max. amount you're allowed as a free user. As far as I can tell, I'm supposed to install the universal forwarder on the various machines on my network. But won't that forward all syslog/Windows events, which is way more than I need? And in any case, are URL requests even stored by syslog/Windows event system? As you can tell, I'm kind of stuck! Can anyone point me in the right direction?
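For completeness, URL requests are generally not in syslog or the Windows event log; one common pattern is to route browsers through a proxy (such as Squid) and have the universal forwarder monitor only the proxy's access log, which keeps the indexed volume small. A hedged inputs.conf sketch for the forwarder (the path, index name, and sourcetype are illustrative placeholders, not confirmed values for any particular setup):

```
# inputs.conf on the forwarder: monitor only the proxy access log,
# not all syslog/Windows events
[monitor:///var/log/squid/access.log]
index = web_urls
sourcetype = squid:access
disabled = 0
```

The forwarder only sends what its inputs.conf stanzas select, so installing it does not by itself forward all system logs.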