All Posts

In terms of pathing to the config file, you can think of _cluster as an app. So you can do either:

master-apps/_cluster/local
-or-
master-apps/<yourapp>/local

But it makes no sense to combine the two into master-apps/_cluster/<yourapp>/local.

ref: https://docs.splunk.com/Documentation/Splunk/9.3.2/Indexer/Updatepeerconfigurations

One other thing came to mind: you are using the old name "master-apps" rather than the new name "manager-apps". This is fine as long as all your apps are placed either in master-apps or manager-apps, but your apps should not be split across both folders.
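For illustration, a valid on-disk layout would look like this (the app name http_input is just an example taken from the question):

/opt/splunk/etc/master-apps/
    _cluster/
        local/
            inputs.conf     <- cluster-wide settings
    http_input/
        local/
            inputs.conf     <- app-scoped settings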
Kindly, let me know why you need to skip _cluster path?
Yes, I know the WebUI should be disabled for the indexers, but it's test environment so it's enabled.
You can have separate _meta entries for different input stanzas. If you have two heavy forwarders handling different inputs, then this should be doable. I've not tried it in a generalized input stanza, but if the number of input stanzas is low, it is feasible to add a _meta entry to each, as in the sketch below.
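A minimal inputs.conf sketch; the monitor paths, sourcetypes, and field name here are hypothetical:

[monitor:///var/log/app_a.log]
sourcetype = app_a
# indexed field marking where this input came from
_meta = input_origin::app_a

[monitor:///var/log/app_b.log]
sourcetype = app_b
_meta = input_origin::app_b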
What do you mean by "we go to the Data inputs => HTTP Event Collector at indexer Side"? You have WebUI enabled on your clustered indexers? It's a big no-no.
Are you sure the user for which the search is scheduled has appropriate capabilities to run mcollect and access to the destination index?
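As a sketch of what to check: recent Splunk versions gate the command behind a run_mcollect capability in authorize.conf (the role name below is hypothetical, and whether your version has this capability depends on the release):

[role_metrics_collector]
run_mcollect = enabled

You can also check whether the scheduler actually ran the alert and with what outcome:

index=_internal sourcetype=scheduler savedsearch_name="your_alert_name"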
Unfortunately, there is just one "instance" of a _meta entry in effect for a given input. So you can't "merge" separate _meta settings - one will overwrite the other. That's why TRANSFORMS is a better approach (see the sketch below). I'm also not sure what _meta will do on a splunktcp input, especially when handling an input stream that already contains metadata fields.
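A minimal sketch of the TRANSFORMS approach for writing an indexed field at parse time; the stanza, field, and value names are made up:

props.conf:
[my_sourcetype]
TRANSFORMS-add_origin = add_origin_field

transforms.conf:
[add_origin_field]
# match every event and write an indexed field origin::hf1
REGEX = .
FORMAT = origin::hf1
WRITE_META = true

fields.conf (on the search head, so origin=hf1 works as a search term):
[origin]
INDEXED = true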
Can you post how your _meta field was configured? It should be in inputs.conf and have the format:

_meta = fieldname::fieldvalue

So if you have two heavy forwarders, one can have an input with:

_meta = meta_hfnum::1

and the other:

_meta = meta_hfnum::2
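As a follow-up sketch: _meta creates an indexed field, so you can search it directly with the :: syntax (index name is hypothetical):

index=your_index meta_hfnum::1

or, if you declare it in fields.conf on the search head, the usual = syntax works too:

[meta_hfnum]
INDEXED = true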
Hi folks, I've been using mcollect to collect metrics from the events in my indexes, and I thought that if I set up an alert with the mcollect part in the search, it would automatically collect the metrics every X minutes. That doesn't seem to be working; the metrics are only collected when I run the search manually.

Any suggestions as to how I can make mcollect collect the metrics I am looking for automatically?

Thanks
You shouldn't need to put inputs.conf into master-apps/_cluster/http_input/local; it should go into either master-apps/_cluster/local or master-apps/http_input/local. Try moving it into _cluster/local or http_input/local, as sketched below.
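A sketch of the move on the manager node, assuming the paths from the question, followed by pushing the bundle to the peers (apply cluster-bundle may prompt for confirmation and credentials):

mv /opt/splunk/etc/master-apps/_cluster/http_input/local/inputs.conf \
   /opt/splunk/etc/master-apps/_cluster/local/inputs.conf
/opt/splunk/bin/splunk apply cluster-bundle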
To use the API to create access tokens, you need to use the management port (8089), not the web interface port (8000). You also need to remove the localization (en-US) part of your path. It should be:

curl -k -u admin:Password -X POST http://127.0.0.1:8089/services/authorization/tokens?output_mode=json --data name=admin --data audience=Users --data-urlencode expires_on=+30d

I also suggest using https:// instead of http://. You don't want your token to be visible in plaintext over the network.
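Once you have the token, a sketch of using it: pass it as a Bearer header on any REST call instead of a username and password (the endpoint here is just an example):

curl -k -H "Authorization: Bearer <your-token>" https://127.0.0.1:8089/services/server/info?output_mode=json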
Hi, I think the problem is fixed. Thanks for your support.
Assuming that field1 and field3 are always at the beginning and end of the line respectively, that their values do not contain spaces, and that the fields are separated by spaces, you could use this:

^(?<field1>\S+)\s*(?<field2>\S+)?\s(?<field3>\S+)$
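For example, a sketch of using it with rex in a search (the index name is hypothetical; rex works against _raw by default):

index=your_index
| rex "^(?<field1>\S+)\s*(?<field2>\S+)?\s(?<field3>\S+)$"
| table field1 field2 field3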
Probably an easy one. I have two events as follows:

thisisfield1 thisisfield2 mynextfield3
thisisfield1 mynextfield3

Meaning, in some events field2 exists and in some it doesn't. When it does, I want the value; when it doesn't, I want it to be blank. All records have mynextfield3 and I always want that as field3. I want to rex these lines and end up with:

field1          field2          field3
thisisfield1    thisisfield2    mynextfield3
thisisfield1                    mynextfield3
Hello,

I want to create an HEC input on the indexers (indexer cluster).

I created inputs.conf under /opt/splunk/etc/master-apps/_cluster/http_input/local:

[http]
disabled=0
enableSSL=0

[http://hec-input]
disabled=0
enableSSL=0
#useACK=true
index=HEC
source=HEC_Source
sourcetype=_json
token=2f5c143f-b777-4777-b2cc-ea45a4288677

Then I pushed this configuration to the peer apps (indexers).

But when we go to Data inputs => HTTP Event Collector on the indexer side, we still find it as below:
@Vamsikrishna This is a rather old thread and the thread author's last activity on the forum was about 3 years ago, so it's relatively unlikely you'll get an answer from them. To the main point - I'd guess that for one reason or another the forwarder fails to break the input stream into small enough chunks before sending it downstream; one thing to check is sketched below.
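If the data goes through a universal forwarder, a hedged sketch of one thing to try is an event breaker in props.conf on the forwarder (the sourcetype name is hypothetical, and the regex must be adjusted to match your data):

[your_sourcetype]
EVENT_BREAKER_ENABLE = true
# break the stream at newlines followed by a date-like prefix
EVENT_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2}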
Hi @kundanshekhx, did you fix this issue? If yes, please let me know how you fixed it.
receivers:
  # Apache Cary
  apache:
    endpoint: "http://localhost:80/server-status?auto"
    collection_interval: 10s
  # Apache Cary

service:
  telemetry:
    metrics:
      address: "${SPLUNK_LISTEN_INTERFACE}:8888"
  extensions: [health_check, http_forwarder, zpages, smartagent]
  pipelines:
    traces:
      receivers: [jaeger, otlp, zipkin]
      processors:
        - memory_limiter
        - batch
        - resourcedetection
        #- resource/add_environment
      exporters: [sapm, signalfx]
      # Use instead when sending to gateway
      #exporters: [otlp, signalfx]
    metrics:
      ## Apache Cary
      receivers: [hostmetrics, otlp, signalfx]
      receivers: [hostmetrics, otlp, signalfx, apache]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]
      # Use instead when sending to gateway
      #exporters: [otlp]
Hi, Can you share the part of your yaml where you define the apache receiver and also share the part that contains your service pipeline? 
Hi, thanks for your reply.

1. Did you restart the collector after changing agent_config.yaml?
Answer: Yes, I restarted the collector after changing agent_config.yaml.

2. Did you add the new apache receiver to the metrics pipeline?
I think I have added it, as in the documentation.

3. Did you check for apache.* metrics using the metric finder? Or check for data in the apache built-in dashboard?
I have checked for apache.* metrics, but found nothing.
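One low-level check, sketched under the assumption that the collector's internal telemetry is exposed on port 8888 as in the config above: ask the collector itself whether the apache receiver is accepting data points.

curl -s http://localhost:8888/metrics | grep -i apache

If otelcol_receiver_accepted_metric_points for the apache receiver stays at zero, the receiver is configured but not scraping; in that case, also check the collector log for scrape errors against http://localhost:80/server-status?auto.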