All Posts


Can anyone help with this?
UFs are independent, so it is possible to have different configurations on each. If the UFs are managed by a Deployment Server, however, you cannot have different props.conf files in the same app. You would have to create separate apps and put them in different server classes for the UFs to have different props for the same sourcetype. To answer the second part of the question, you *should* be able to put force_local_processing = true in the props.conf file to have the UF perform masking. Of course, you would also need SEDCMD settings to define the masking rules themselves. I say "should" because I don't have experience with this and the documentation isn't clear about what the UF will do locally.
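For illustration, a minimal props.conf sketch of that combination, assuming a hypothetical sourcetype my_sourcetype and SSN-style values to mask (the stanza name and regex are placeholders, adjust to your data):

[my_sourcetype]
force_local_processing = true
# Mask anything that looks like an SSN before the event leaves the UF
SEDCMD-mask_ssn = s/\d{3}-\d{2}-\d{4}/XXX-XX-XXXX/g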
That isn't specifically a HEC functionality, but Splunk can be configured with props and transforms to discard unwanted data by sending it to the nullQueue before indexing. This will still consume network bandwidth from sending the data from the cloud to Splunk, but the discarded logs will not count against your Splunk license.
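As a hedged sketch of that approach (the sourcetype name and the regex are placeholders, not from the original question):

props.conf:
[my_hec_sourcetype]
TRANSFORMS-drop_noise = drop_noise

transforms.conf:
[drop_noise]
# Any event whose raw text matches REGEX is routed to the nullQueue
REGEX = healthcheck
DEST_KEY = queue
FORMAT = nullQueue

Events routed to the nullQueue are dropped before indexing, so they never count against the license.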
Indeed. You could try the workaround. Perhaps it still works.
Hi, I have an app that ingests offenses from a SIEM system (QRadar). At one point there were a few thousand offenses to ingest at the same time, and it caused an error in the app's ingestion; none of the offenses were ingested for a few hours. Is there a way to alert when there is an ingestion error for an app, and maybe a way to fix it?
Thank you!
Any update on this issue?  
I have a dashboard where I have 4 multiselect boxes and an input lookup file with all possible results for each app. When there are no results for an app it is reported as 100%. The problem is that the results include all apps and ignore the multiselect because of the input file. Below are the lookup contents and the code.

data.environment.application  data.environment.environment  data.environment.stack  data.componentId
app1                          prod                          AZ                      Acomp
app1                          prod                          AZ                      Bcomp
app2                          uat                           AW                      Zcomp
app2                          uat                           AW                      Ycomp
app2                          uat                           AW                      Xcomp
app3                          prod                          GC                      Mcomp

index=MINE data.environment.application="app2" data.environment.environment="uat"
| eval estack="AW"
| fillnull value="uat" estack data.environment.stack
| where 'data.environment.stack'=estack
| streamstats window=1 current=False global=False values(data.result) AS nextResult BY data.componentId
| eval failureStart=if((nextResult="FAILURE" AND 'data.result'="SUCCESS"), "True", "False"), failureEnd=if((nextResult="SUCCESS" AND 'data.result'="FAILURE"), "True", "False")
| transaction data.componentId, data.environment.application, data.environment.stack startswith="failureStart=True" endswith="failureEnd=True" maxpause=15m
| stats sum(duration) as downtime by data.componentId
| inputlookup append=true all_env_component.csv
| fillnull value=0
| addinfo
| eval uptime=(info_max_time - info_min_time)-downtime, avail=(uptime/(info_max_time - info_min_time))*100, downMins=round(downtime/60, 0)
| rename data.componentId AS Component, avail AS Availability
| fillnull value=100 Availability
| dedup Component
| table Component, Availability

Thank you in advance for the help.
Are those logs deliberately put in a file, or can they be viewed in the Windows Event Log? If they are in the Windows Event Logs, then you can use a WinEventLog stanza:

[WinEventLog://Microsoft-Windows-DriverFrameworks-UserMode/Operational]
index=<your index>
sourcetype=<your sourcetype>
#etc

ref: https://docs.splunk.com/Documentation/Splunk/9.2.0/admin/Inputsconf
Great point, and something I did not know beforehand. While troubleshooting, I stumbled onto the documentation stating what you are pointing out: the new _ds* indexes. So yes, the _ds* indexes are local to the DS.
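For anyone else checking, a quick sketch to confirm those events are landing locally on the DS (the _ds* index names come from the 9.2 deployment server docs; exact fields may vary by version):

| eventcount summarize=false index=_ds*
| table index server count

Run it on the DS itself; if the counts are non-zero there and zero on your indexers, the data is local to the DS.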
I want to add C:\windows\system32\winevt\logs\Microsoft-Windows-DriverFrameworks-UserMode/Operational  as a stanza in my inputs.conf. How do I write the stanza? Thank you
Is it possible in Splunk to have one props.conf file on one server's Universal Forwarder (UF) for a specific app, and another props.conf file on a different server for the same app, but with one file masking a certain field and the other not?
I opened a P2 3 days ago... still waiting.  Typical
Do you have local indexes on the DS, or are you sending logs to your real indexers? This has changed in 9.2.x and it could cause something weird.
Everything is shiny "new". This is a satellite to our full implementation, hosted in AWS. Splunk 9.2.0.1 on both agents and the DS (which doubles as an HF), running on AWS RHEL 8.9. UFs are all running 9.2.0. Fewer than 40 total agents (14 Windows, 26 *nix). The DS was acting up, so I destroyed it and built it new. Instantly, the same problem. I even tried adding hostnames to the filter instead of using a wildcard. Same. The odd thing: the DS reports that Windows hosts are running the Linux TA, but when you check the Windows hosts, they are running the Windows TA as they should be.
I'm trying to achieve the following and hoped someone could help? I have a multivalue field that contains values that are colors, and would like to know how many fields contain duplicate colors, and what the value of those colors are.

e.g. my data:

colors
blue blue red yellow
red blue red blue red red
green green

Would return something like:

duplicate_color  duplicate_count
blue             2
red              1
green            1

Because 'blue' is present as a duplicate in two entries, 'red' in one entry, and 'green' in one entry. 'yellow' is omitted because it is not a duplicate.

Thank you very much for any help
Steve
Seems like fairly simple / basic configurations. I would suggest raising a Support case to get this troubleshot and fixed. @isoutamo thoughts?
Hi, can you tell us the basic information about your environment (OS version, Splunk version, TA versions, UF versions, etc.)? Have you updated anything lately? r. Ismo
Hi @meetmshah I have added sample _raw events from the original query:

[test_field_name=test_field_name_1]: Hello This is event0 no_failure_msg some other message0 id_num { data: 000 }}
[test_field_name=test_field_name_1]: Hello This is event1 fail_msg1 some other message1 id_num { data: 111 }}
[test_field_name=test_field_name_1]: Hello This is event2 fail_msg2 some other message2 id_num { data: 999 }}
[test_field_name=test_field_name_1]: Hello This is event3 no_failure_msg some other message3 id_num { data: 222 }}

From these events I want to return the 2 events where fail_msg1 or fail_msg2 are present:

[test_field_name=test_field_name_1]: Hello This is event1 fail_msg1 some other message1 id_num { data: 111 }}
[test_field_name=test_field_name_1]: Hello This is event2 fail_msg2 some other message2 id_num { data: 999 }}
Pretty simple....

[serverClass:All:app:all_outputs]
restartSplunkWeb = 0
restartSplunkd = 1
stateOnClient = enabled

[serverClass:All]
whitelist.0 = *

[serverClass:Windows:app:Splunk_TA_windows]
restartSplunkWeb = 0
restartSplunkd = 1
stateOnClient = enabled

[serverClass:Linux:app:Splunk_TA_nix]
restartSplunkWeb = 0
restartSplunkd = 1
stateOnClient = enabled

[serverClass:All:app:all_deploymentclient]
restartSplunkWeb = 0
restartSplunkd = 1
stateOnClient = enabled

[serverClass:Linux]
machineTypesFilter = linux-x86_64
whitelist.0 = *

[serverClass:Windows]
machineTypesFilter = windows-x64
whitelist.0 = *
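Given the symptom above (the DS showing the wrong TA per host), one thing worth cross-checking is what machine type the DS has recorded for each client. A sketch using the standard CLI on the DS (output fields can vary by version):

$ splunk list deploy-clients
# Compare each client's reported utsname / machine type with the
# machineTypesFilter values above (linux-x86_64 vs. windows-x64).

If a Windows UF phones home with an unexpected machine type string, it would match the wrong server class even though the whitelist is correct.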