All Posts


7 is a non-standard day number.  Try 0 6,12,20,22 * * 0,6
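For reference, Splunk expects a standard 5-field cron expression; a quick breakdown of the one suggested above:

# minute  hours        day-of-month  month  day-of-week
  0       6,12,20,22   *             *      0,6
# day-of-week 0 = Sunday and 6 = Saturday, so this fires at 06:00, 12:00, 20:00 and 22:00 on weekends only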
Another update: my csv lookup in this example has only 2 rows, but it could have many more. Also, I am not planning to use the other fields Product and Feature - I just need FailureMsg.
Based on the docs this should work. BUT in the example section, those platform selections are done at the app level, not the serverclass level. Maybe you should try that? Btw, have you configured this through the GUI or manually with a text editor?
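If it helps, a minimal serverclass.conf sketch with the platform filter applied at the app level rather than the serverclass level - the class, app and machine-type values here are only placeholders:

[serverClass:my_class]
whitelist.0 = *

[serverClass:my_class:app:my_app]
machineTypesFilter = linux-x86_64, windows-x64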
This statement

| eval IP_ADDRESS=if(index=index1, interfaces.address, PRIMARY_IP_ADDRESS)

will need to have single quotes round the interfaces.address, as eval statements need fields with non-simple characters to be single quoted, in this case the full-stop (.)

| eval IP_ADDRESS=if(index=index1, 'interfaces.address', PRIMARY_IP_ADDRESS)

Note also that index=index1 would need to be index="index1" as you are looking for the value of index to be the string index1 rather than comparing field index to field index1. As for debugging queries, if you just remove the 'where' clause, you can see what you are getting and what the value of indexes is. Hope this helps
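Putting both corrections together (field and index names are taken from the question, so adjust as needed), and with the where clause dropped so you can inspect the values while debugging:

| eval IP_ADDRESS=if(index="index1", 'interfaces.address', PRIMARY_IP_ADDRESS)
| table index, interfaces.address, PRIMARY_IP_ADDRESS, IP_ADDRESS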
Unfortunately, I don't know if you can make a <select> input take custom values - that's more of an HTML question if you are doing it inside the HTML panels. I am guessing you can probably write some JS to make this work, but it's a guess.
Yes @naveenalagu, you are right re count=1. In this type of solution, you normally set an indicator in each part of the search (outer + append), as @ITWhisperer has shown, and then the final stats does the evaluation to work out where the data came from.
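As a rough sketch of that pattern, with placeholder index and field names:

index=outer_index sourcetype=typeA
| eval part="outer"
| append
    [ search index=appended_index sourcetype=typeB
      | eval part="appended" ]
| stats values(part) AS parts count BY common_key
| eval in_both=if(mvcount(parts)==2, "yes", "no")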
Would it fit your use case to set inputs.conf and outputs.conf such that the UF forwards the same logs to two different indexer servers, then those indexer servers have different props.conf which can mask and not mask the fields?  It seems like props.conf on the UF won't solve your problem.
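If that route fits, a rough sketch of the UF's outputs.conf cloning the same data to two indexer groups (host names and ports are placeholders), plus a props.conf that would live only on the masking indexers:

# outputs.conf on the UF - listing both groups in defaultGroup clones the data to each
[tcpout]
defaultGroup = masked_indexers, unmasked_indexers

[tcpout:masked_indexers]
server = masked-idx.example.com:9997

[tcpout:unmasked_indexers]
server = unmasked-idx.example.com:9997

# props.conf on the masking indexers only - example SEDCMD that hides SSN-like values
[my_sourcetype]
SEDCMD-mask_ssn = s/\d{3}-\d{2}-\d{4}/XXX-XX-XXXX/g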
For people finding this question in the years after 2016, you can set the max_upload_size setting in web.conf:

[settings]
# set to the max size in MB
max_upload_size = 500
# you can also set a larger splunkdConnectionTimeout value so it won't time out when uploading
splunkdConnectionTimeout = 600

ref: https://docs.splunk.com/Documentation/Splunk/9.2.0/Admin/Webconf
If you know which error to look for, or can make a good guess that it includes the word "ingestion", then you could search in the internal logs:

index=_internal log_level=error ingestion

You could also make a "maintenance alert" which looks for a drop in logs for an index, source, sourcetype, or some other field. If you expect logs at a certain time but there are zero, then it could be because of a log ingestion error.
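As a rough sketch of such a maintenance alert, assuming the offenses land in a hypothetical index named qradar - schedule it over the last few hours and trigger when events is 0:

| tstats count as events where index=qradar
| where events = 0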
We are trying to set up a cron schedule on an alert to run only on weekends (Sat and Sun) at 6am, 12pm, 8pm, 10pm. I tried giving the below cron in Splunk and it is saying invalid cron. Can anyone help with this?
Sorry for the late reply... Just started back working on this. For anyone who is curious, the answer was that the port we were using had fewer attributes.
And is it strictly necessary to sandwich one function in the middle of the other? Can the functions not be shrunk into smaller modules and then arranged as desired in a playbook?
Can anyone help on this?
UFs are independent so it is possible to have different configurations on each.  If the UFs are managed by a Deployment Server, however, you cannot have different props.conf files in the same app.  You would have to create separate apps and put them in different server classes for the UFs to have different props for the same sourcetype. To answer the second part of the question, you *should* be able to put force_local_processing = true in the props.conf file to have the UF perform masking.  Of course, you would also need SEDCMD settings to define the maskings themselves.  I say "should" because I don't have experience with this and the documentation isn't clear about what the UF will do locally.
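As a rough props.conf sketch for the UF under that assumption (the sourcetype name and mask pattern are placeholders, so test before relying on it):

[my_sourcetype]
# forces the UF to parse and transform this sourcetype locally
force_local_processing = true
# example masking rule: replace anything after "password=" with ####
SEDCMD-mask_password = s/password=\S+/password=####/g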
That isn't specifically a HEC functionality, but Splunk can be configured with props and transforms to discard unwanted data by sending it to the nullQueue before indexing. This will still consume network bandwidth, since the data is sent from the cloud to Splunk, but the discarded logs will not count against your Splunk license.
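As a minimal sketch of that nullQueue routing on the indexing tier - the sourcetype and regex are placeholders for whatever identifies the unwanted events:

# props.conf
[my_hec_sourcetype]
TRANSFORMS-drop_unwanted = drop_debug_events

# transforms.conf
[drop_debug_events]
REGEX = severity=DEBUG
DEST_KEY = queue
FORMAT = nullQueue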
Indeed. You could try the workaround. Perhaps it still works.
Hi, I have an app that ingests offenses from a SIEM system (QRadar). One time there were a few thousand offenses to ingest at the same time, and it caused an error in the app's ingestion, and none of the offenses were ingested for a few hours. Is there a way to alert when there is an ingestion error for an app, and maybe a way to fix it?
Thank you!
Any update on this issue?  
I have a dashboard where I have 4 multi-select boxes and an input file with all possible results for each app. When there are no results for an app it is sent as a 100%. The problem is that the results have all apps and ignore the multi-select because of the input file. Below are the lookup and the code.

data.environment.application  data.environment.environment  data.environment.stack  data.componentId
app1                          prod                          AZ                      Acomp
app1                          prod                          AZ                      Bcomp
app2                          uat                           AW                      Zcomp
app2                          uat                           AW                      Ycomp
app2                          uat                           AW                      Xcomp
app3                          prod                          GC                      Mcomp

index=MINE data.environment.application="app2" data.environment.environment="uat"
| eval estack="AW"
| fillnull value="uat" estack data.environment.stack
| where 'data.environment.stack'=estack
| streamstats window=1 current=False global=False values(data.result) AS nextResult BY data.componentId
| eval failureStart=if((nextResult="FAILURE" AND 'data.result'="SUCCESS"), "True", "False"), failureEnd=if((nextResult="SUCCESS" AND 'data.result'="FAILURE"), "True", "False")
| transaction data.componentId, data.environment.application, data.environment.stack startswith="failureStart=True" endswith="failureEnd=True" maxpause=15m
| stats sum(duration) as downtime by data.componentId
| inputlookup append=true all_env_component.csv
| fillnull value=0
| addinfo
| eval uptime=(info_max_time - info_min_time)-downtime, avail=(uptime/(info_max_time - info_min_time))*100, downMins=round(downtime/60, 0)
| rename data.componentId AS Component, avail AS Availability
| fillnull value=100 Availability
| dedup Component
| table Component, Availability

Thank you in advance for the help.