All Posts

Hi, I think the problem is fixed. Thanks for your support.
Assuming that field1 and field3 are always at the beginning and end of the line respectively, that their values do not contain spaces, and that the fields are separated by spaces, you could use this: ^(?<field1>\S+)\s*(?<field2>\S+)?\s(?<field3>\S+)$
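To show why this pattern handles the optional middle field, here is the same regex checked outside Splunk with Python (Python uses ?P<name> for named groups; the sample events are the ones from the question):

```python
import re

# The rex pattern from the answer above, translated to Python's
# named-group syntax so it can be tested standalone.
pattern = re.compile(r"^(?P<field1>\S+)\s*(?P<field2>\S+)?\s(?P<field3>\S+)$")

for event in ["thisisfield1 thisisfield2 mynextfield3",
              "thisisfield1 mynextfield3"]:
    m = pattern.match(event)
    # When field2 is absent, the optional group backtracks to empty
    # and field3 still anchors to the end of the line.
    print(m.group("field1"), m.group("field2"), m.group("field3"))
```

When field2 is missing, the engine first lets the optional group grab mynextfield3, fails at the trailing \s, and backtracks until field2 is empty and field3 matches.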
Probably an easy one. I have two events as follows:

thisisfield1 thisisfield2 mynextfield3
thisisfield1 mynextfield3

Meaning in some events field2 exists and in some it doesn't. When it does, I want the value; when it doesn't, I want it to be blank. All records have mynextfield3 and I always want that as field3. I want to rex these lines and end up with:

field1          field2          field3
thisisfield1    thisisfield2    mynextfield3
thisisfield1                    mynextfield3
Hello,

I want to create an HTTP Event Collector (HEC) input on the indexers in an indexer cluster.

I created inputs.conf under /opt/splunk/etc/master-apps/_cluster/http_input/local:

[http]
disabled = 0
enableSSL = 0

[http://hec-input]
disabled = 0
enableSSL = 0
#useACK = true
index = HEC
source = HEC_Source
sourcetype = _json
token = 2f5c143f-b777-4777-b2cc-ea45a4288677

and pushed the configuration to the peer apps (indexers). But when we go to Data inputs => HTTP Event Collector on the indexer side, we still see it as below:
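Not part of the original post, but a quick sanity check once the input exists is to see what a HEC client would send. This sketch builds the request a client would POST to a peer; the host name is hypothetical, port 8088 is HEC's default, and the token is the one from the inputs.conf above:

```python
import json

# Hypothetical peer host; 8088 is the default HEC port.
HEC_URL = "http://indexer.example.com:8088/services/collector/event"
# Token from the inputs.conf stanza in the post.
HEC_TOKEN = "2f5c143f-b777-4777-b2cc-ea45a4288677"

# HEC authenticates with a "Splunk <token>" Authorization header.
headers = {"Authorization": f"Splunk {HEC_TOKEN}"}

# A minimal event payload matching the stanza's index and sourcetype.
payload = json.dumps({
    "event": {"message": "hello"},
    "sourcetype": "_json",
    "index": "HEC",
})

print(HEC_URL)
print(headers["Authorization"])
print(payload)
```

Sending this payload with any HTTP client (curl, requests, etc.) and getting a {"text":"Success","code":0} reply would confirm the input is live on that peer regardless of what the UI shows.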
@Vamsikrishna This is a rather old thread and the thread author's last activity on the forum was about 3 years ago, so it's relatively unlikely you'll get an answer from them. To the main point - I'd guess that for one reason or another the forwarder fails to break the input stream into small enough chunks before sending it downstream.
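If forwarder-side chunking is indeed the issue, one common approach (an assumption on my part - the thread doesn't confirm the sourcetype or data shape) is to give the universal forwarder an event-breaking hint in props.conf. The sourcetype name here is a placeholder:

```
[my_sourcetype]
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)
```

With this, the forwarder breaks the stream at newlines instead of shipping arbitrary 64KB chunks, which keeps events from being split mid-line downstream.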
Hi @kundanshekhx, did you fix this issue? If yes, please let me know how you fixed it.
receivers:
  #Apache Cary
  apache:
    endpoint: "http://localhost:80/server-status?auto"
    collection_interval: 10s
  #Apache Cary

service:
  telemetry:
    metrics:
      address: "${SPLUNK_LISTEN_INTERFACE}:8888"
  extensions: [health_check, http_forwarder, zpages, smartagent]
  pipelines:
    traces:
      receivers: [jaeger, otlp, zipkin]
      processors:
        - memory_limiter
        - batch
        - resourcedetection
        #- resource/add_environment
      exporters: [sapm, signalfx]
      # Use instead when sending to gateway
      #exporters: [otlp, signalfx]
    metrics:
      ##Apache Cary
      receivers: [hostmetrics, otlp, signalfx]
      receivers: [hostmetrics, otlp, signalfx, apache]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]
      # Use instead when sending to gateway
      #exporters: [otlp]
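One thing that stands out in the config above (an observation, not a confirmed fix): the metrics pipeline contains two receivers: lines. A YAML mapping cannot have duplicate keys, so depending on the parser either the file is rejected or one of the lines is silently dropped - and if the first one wins, apache never makes it into the pipeline. A sketch of the metrics pipeline with a single receivers line:

```yaml
    metrics:
      receivers: [hostmetrics, otlp, signalfx, apache]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]
```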
Hi, Can you share the part of your yaml where you define the apache receiver and also share the part that contains your service pipeline? 
Hi, thanks for your reply.

1. Did you restart the collector after changing agent_config.yaml? Ans: Yes, I restarted the collector after changing agent_config.yaml.
2. Did you add the new apache receiver to the metrics pipeline? I think I have added it, as in the document.
3. Did you check for apache.* metrics using the metric finder, or check for data in the apache built-in dashboard? I have checked for apache.* metrics, but there is nothing.
Why would you do that if you have perfectly good, working answers above? Also, this thread is several years old...
The fact that it's syslog is irrelevant here. You define transforms on a per-sourcetype, per-source, or per-host basis. I assume you're trying to do this - https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad Just remember that the order of transforms is important. If you want to index only selected events and get rid of the rest, you must first send them all to nullQueue and then rewrite the queue to indexQueue for the selected ones.
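A minimal sketch of that ordering - the sourcetype name and the keep-pattern are placeholders, not from the thread:

props.conf:

```
[my_sourcetype]
TRANSFORMS-routing = drop_all, keep_selected
```

transforms.conf:

```
[drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_selected]
REGEX = pattern_to_keep
DEST_KEY = queue
FORMAT = indexQueue
```

Because transforms run in the listed order, drop_all first routes every event to nullQueue, then keep_selected rewrites the queue back to indexQueue for events matching the pattern.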
You could try cutting your TIME_FORMAT before %7N and then checking if it works. If it does, it means that the %7N is the culprit. The problem is that you lose timezone info and would have to set it manually. To be fully honest, I'd check if the source can actually just send the timestamp in epoch format along with the event and forget about time parsing altogether.
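To illustrate the suggestion: assuming the timestamp looked something like 2024-01-15T10:23:45.1234567+01:00 (hypothetical - the thread doesn't show the original format), the truncated props.conf would be along these lines, with TZ set manually since the offset is no longer parsed:

```
[my_sourcetype]
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
TZ = Europe/Warsaw
```

Both the sourcetype name and the TZ value here are placeholders; the point is only that everything from the fractional seconds onward is dropped from TIME_FORMAT.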
Some more insight: if I remove the Python files, it deploys successfully.
This worked instantly!  I appreciate you!  Thanks,  JJJ 
I have read other forums and posts, but they did not help with my issue. I am getting the typical error while creating deployable apps:

Error copying src="/opt/splunk/etc/shcluster/my_app" to staging area="/opt/splunk/var/run/splunk/deply.#######.tmp/apps/my_app" (180 errors)

They are all Python files (.py), and this happens for every app that has Python files. I have checked the permissions and the splunk user owns the files, with permissions set to rw on all Python files on the deployer. The splunk user also owns all files under /opt/splunk. I understand this is a permission issue, but Splunk owns the files, so I am confused as to what is happening.

Any help greatly appreciated!
They are probably talking about this: https://www.sentinelone.com/partners/featured-partner-splunk, but that resource and what it links to haven't answered my questions about getting the SentinelOne Splunk app working.
I'm having a similar problem. The SentinelOne recording where Kyle shows how easy it is to set up was missing something, because in the video he pretty much just drops the API token in there and BAM! everything works. I wish there were some setup documentation or guides that show you how to configure these integrations.
And how is this supposed to work? There is no property called splunk_forwarder in any props stanza. Also, Splunk does variable expansion on a very limited set of settings.
Thanks for your feedback. The Dashboards team continuously seeks to simplify the admin experience for managing dashboards. This is as intended. Dashboards are a bit different than most other webpages because they are specifically configured by a dashboard builder to look a certain way, so allowing end users to change the theme conflicts with the idea that a dashboard builder can build a dashboard to be viewed the way they built it. On top of that, due to the high level of customization and specific color settings, we can't guarantee that bulk theme changes will maintain readability or visual consistency across all dashboards.
For anyone else that stumbles upon this: I ended up working with Infoblox support on this. The issue was in the way the Infoblox Data Connector was writing the timestamp in the logs prior to sending to Splunk. Depending on your grid timezone settings, the IB Data Connector was actually offsetting the epoch time instead of leaving it as epoch (don't ask me why). They pushed a patch down a few days ago that fixed it for us.