All Posts

Can you post how your _meta field was configured? It should be in inputs.conf and have the format:

_meta = fieldname::fieldvalue

So if you have two heavy forwarders, one can have an input with:

_meta = meta_hfnum::1

and the other:

_meta = meta_hfnum::2
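For illustration, a minimal inputs.conf sketch for each heavy forwarder could look like this (the monitor stanza and path are just examples, not the original config):

# inputs.conf on heavy forwarder 1
[monitor:///var/log/example.log]
_meta = meta_hfnum::1

# inputs.conf on heavy forwarder 2
[monitor:///var/log/example.log]
_meta = meta_hfnum::2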
Hi folks, I've been using mcollect to collect metrics from the events in my indexes, and I thought that if I set up an alert with the mcollect part in the search, it would automatically collect the metrics every X minutes. That doesn't seem to be working: the metrics are only collected when I run the search manually. Any suggestions on how I can make mcollect collect the metrics I'm looking for automatically? Thanks
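For reference, the kind of scheduled search being described would look roughly like this (index names and the metric are placeholders, not the poster's actual search):

index=my_events sourcetype=my_sourcetype
| stats count AS _value BY host
| eval metric_name="event_count"
| mcollect index=my_metrics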
You shouldn't need to put inputs.conf into master-apps/_cluster/http_input/local; it should go into either master-apps/_cluster/local or master-apps/http_input/local. Try moving it into _cluster/local or http_input/local.
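For example, assuming a default install path on the cluster manager, the move and bundle push might look something like this:

mkdir -p /opt/splunk/etc/master-apps/http_input/local
mv /opt/splunk/etc/master-apps/_cluster/http_input/local/inputs.conf \
   /opt/splunk/etc/master-apps/http_input/local/inputs.conf
/opt/splunk/bin/splunk apply cluster-bundle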
To use the API to create access tokens, you need to use the management port (8089), not the web interface port (8000). You also need to remove the localization (en-US) part of your path. It should be:

curl -k -u admin:Password -X POST http://127.0.0.1:8089/services/authorization/tokens?output_mode=json --data name=admin --data audience=Users --data-urlencode expires_on=+30d

I also suggest using https:// instead of http://. You don't want your token to be visible in plaintext over the network.
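Following that suggestion, the same call over HTTPS would look like this (credentials and expiry are placeholders):

curl -k -u admin:Password -X POST "https://127.0.0.1:8089/services/authorization/tokens?output_mode=json" --data name=admin --data audience=Users --data-urlencode expires_on=+30d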
Hi, I think the problem is fixed. Thanks for your support.
Assuming that field1 and field3 are always at the beginning and end of the line respectively, that their values do not contain spaces, and that the fields are separated by spaces, you could use this:

^(?<field1>\S+)\s*(?<field2>\S+)?\s(?<field3>\S+)$
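In SPL that could be applied roughly like this (the base search is just a placeholder):

index=my_index sourcetype=my_sourcetype
| rex "^(?<field1>\S+)\s*(?<field2>\S+)?\s(?<field3>\S+)$"
| table field1 field2 field3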
Probably an easy one. I have two events as follows:

thisisfield1 thisisfield2 mynextfield3
thisisfield1 mynextfield3

Meaning in some events field2 exists and in some it doesn't. When it does I want the value, and when it doesn't I want it to be blank. All records have mynextfield3 and I always want that as field3. I want to rex these lines and end up with:

field1          field2          field3
thisisfield1    thisisfield2    mynextfield3
thisisfield1                    mynextfield3
Hello,

I want to create an input: HEC on the indexers => indexer cluster.

I created inputs.conf under /opt/splunk/etc/master-apps/_cluster/http_input/local:

[http]
disabled=0
enableSSL=0

[http://hec-input]
disabled=0
enableSSL=0
#useACK=true
index=HEC
source=HEC_Source
sourcetype=_json
token=2f5c143f-b777-4777-b2cc-ea45a4288677

and pushed this configuration to the peer apps (indexers). But when we go to Data inputs => HTTP Event Collector on the indexer side, we still see it as below:
@Vamsikrishna This is a rather old thread and the thread author's last activity on the forum was about 3 years ago, so it's relatively unlikely you'll get an answer from them. To the main point - I'd guess that for one reason or another the forwarder fails to break the input stream into small enough chunks before sending it downstream.
Hi @kundanshekhx, did you fix this issue? If yes, please let me know how you fixed it.
receivers:
  #Apache Cary
  apache:
    endpoint: "http://localhost:80/server-status?auto"
    collection_interval: 10s
  #Apache Cary

service:
  telemetry:
    metrics:
      address: "${SPLUNK_LISTEN_INTERFACE}:8888"
  extensions: [health_check, http_forwarder, zpages, smartagent]
  pipelines:
    traces:
      receivers: [jaeger, otlp, zipkin]
      processors:
        - memory_limiter
        - batch
        - resourcedetection
        #- resource/add_environment
      exporters: [sapm, signalfx]
      # Use instead when sending to gateway
      #exporters: [otlp, signalfx]
    metrics:
      ##Apache Cary
      receivers: [hostmetrics, otlp, signalfx]
      receivers: [hostmetrics, otlp, signalfx, apache]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]
      # Use instead when sending to gateway
      #exporters: [otlp]
Hi, Can you share the part of your yaml where you define the apache receiver and also share the part that contains your service pipeline? 
Hi, thanks for your reply.

1. Did you restart the collector after changing agent_config.yaml?
Ans: Yes, I restarted the collector after changing agent_config.yaml.

2. Did you add the new apache receiver to the metrics pipeline?
I think I have added it, as in the document.

3. Did you check for apache.* metrics using the metric finder? Or check for data in the apache built-in dashboard?
I have checked for apache.* metrics, but nothing shows up.
Why would you do that if you have perfectly good, working answers above? Also, this thread is several years old...
The fact that it's syslog is irrelevant here. You define transforms on a per-sourcetype, per-source, or per-host basis. I assume you're trying to do this: https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad Just remember that the order of transforms is important. If you want to index only selected events and get rid of the rest, you must first send them all to nullQueue and then rewrite the queue to indexQueue for the selected ones.
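As a rough sketch of that nullQueue-then-indexQueue pattern (the sourcetype name and the keep regex are placeholders, not a known-good config for this data):

# props.conf
[my_syslog_sourcetype]
TRANSFORMS-routing = setnull, setparsing

# transforms.conf
# first send everything to the nullQueue...
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

# ...then route the events you want to keep back to the indexQueue
[setparsing]
REGEX = ERROR|CRITICAL
DEST_KEY = queue
FORMAT = indexQueue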
You could try cutting your TIME_FORMAT off before the %7N and then checking if it works. If it does, it means that the %7N is the culprit. The problem is that you lose the timezone info and would have to set it manually. To be fully honest, I'd check if the source can actually just send the timestamp in epoch format along with the event and forget about time parsing altogether.
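For illustration, the truncated props.conf might look something like this (the sourcetype name, format string, and timezone are assumptions, not the original values):

[my_sourcetype]
# format cut off before the fractional seconds / timezone part
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
# timezone now has to be set manually because it was dropped from the format
TZ = UTC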
Some more insight: if I remove the Python files, it deploys successfully.
This worked instantly!  I appreciate you!  Thanks,  JJJ 
I have read other forums and posts, but they did not help with my issue. I am getting the typical "Error while creating deployable apps" message:

Error copying src="/opt/splunk/etc/shcluster/my_app" to staging area="/opt/splunk/var/run/splunk/deply.#######.tmp/apps/my_app" - 180 errors

They are all Python files (.py), and this happens for every app that has Python files. I have checked the permissions and the splunk user owns the files. I am so confused as to what is happening. I understand this is a permission issue, but splunk owns the files (rambling), with permissions set to rw on all Python files on the deployer. Also, the splunk user owns all files under /opt/splunk.

Any help greatly appreciated!
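A couple of generic ownership checks on the deployer might help narrow it down (paths are taken from the error message above; adjust them to your layout):

# confirm who owns the app source and the staging area
ls -ld /opt/splunk/var/run/splunk
ls -lR /opt/splunk/etc/shcluster/my_app | head
# if anything under the staging area ended up owned by root (e.g. after a root-run restart), fix it
chown -R splunk:splunk /opt/splunk/var/run/splunk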
They are probably talking about this: https://www.sentinelone.com/partners/featured-partner-splunk, but that resource and what it links to haven't answered my questions about getting the SentinelOne Splunk app working.