receivers:
  #Apache Cary
  apache:
    endpoint: "http://localhost:80/server-status?auto"
    collection_interval: 10s
  #Apache Cary

service:
  telemetry:
    metrics:
      address: "${SPLUNK_LISTEN_INTERFACE}:8888"
  extensions: [health_check, http_forwarder, zpages, smartagent]
  pipelines:
    traces:
      receivers: [jaeger, otlp, zipkin]
      processors:
        - memory_limiter
        - batch
        - resourcedetection
        #- resource/add_environment
      exporters: [sapm, signalfx]
      # Use instead when sending to gateway
      #exporters: [otlp, signalfx]
    metrics:
      ##Apache Cary receivers: [hostmetrics, otlp, signalfx]
      receivers: [hostmetrics, otlp, signalfx, apache]
      processors: [memory_limiter, batch, resourcedetection]
      exporters: [signalfx]
      # Use instead when sending to gateway
      #exporters: [otlp]
Hi, Can you share the part of your yaml where you define the apache receiver and also share the part that contains your service pipeline? 
Hi, thanks for your reply. 1. Did you restart the collector after changing agent_config.yaml? Answer: YES, I restarted the collector after changing agent_config.yaml. 2. Did you add the new apache receiver to the metrics pipeline? I think I have added it, as in the documentation. 3. Did you check for apache.* metrics using the metric finder, or check for data in the built-in Apache dashboard? I have checked for apache.* metrics, but there is nothing.
Why would you do that if you have perfectly good working answers above? Also, this thread is several years old...
The fact that it's syslog is irrelevant here. You define transforms on a per-sourcetype, per-source or per-host basis. I assume you're trying to do this - https://docs.splunk.com/Documentation/Splunk/latest/Forwarding/Routeandfilterdatad Just remember that order of transforms is important. If you want to only index selected events and get rid of the rest you must first send them all to nullQueue and then rewrite the queue to indexQueue for the selected ones.
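A minimal sketch of that ordering, assuming a hypothetical sourcetype my_syslog and a placeholder keyword regex (adjust both to your data) -- the first transform sends everything to nullQueue, and the second rewrites the queue back to indexQueue for events matching your keywords:

props.conf:

```
[my_syslog]
TRANSFORMS-filter = drop_all, keep_wanted
```

transforms.conf:

```
[drop_all]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[keep_wanted]
REGEX = ERROR|CRITICAL
DEST_KEY = queue
FORMAT = indexQueue
```

The order in the TRANSFORMS- setting matters: if keep_wanted ran first, drop_all would discard everything afterwards.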
You could try cutting your TIME_FORMAT before %7N and then check if it works. If it does, it means that the %7N is the culprit. The problem is that you lose timezone info and would have to set it manually. To be fully honest, I'd check if the source can actually just send the timestamp in the epoch format along the event and forget about time parsing altogether.
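A sketch of that experiment, assuming the original format looked something like %Y-%m-%dT%H:%M:%S.%7N%z (your actual format string and sourcetype will differ):

props.conf:

```
[my_sourcetype]
# Cut the format before the 7-digit fractional seconds (%7N)
TIME_FORMAT = %Y-%m-%dT%H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
# Dropping %z loses the timezone, so set it explicitly
TZ = UTC
```

If timestamps parse cleanly with this, %7N was the culprit.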
Some more insight, if I remove the python files it will deploy successfully. 
This worked instantly!  I appreciate you!  Thanks,  JJJ 
I have read other forums and posts, but they did not help with my issue. I am getting the typical error while creating deployable apps: Error coping src'" /opt/splunk/etc/shcluster/my_app" to staging area=" /opt/splunk/var/run/splunk/deply.#######.tmp/apps/my_app." 180 errors. They are all python files (.py), and this happens for every app that has python files. I have checked the permissions and Splunk owns the files, with permissions set to rw on all python files on the deployer. The splunk user also owns all files under /opt/splunk. I am so confused as to what is happening; I understand this is a permission issue, but Splunk owns the files. Any help greatly appreciated!
They are probably talking about this: https://www.sentinelone.com/partners/featured-partner-splunk , but that resource and what it links to haven't answered my questions about getting the SentinelOne Splunk app working.
I'm having a similar problem. The SentinelOne recording where Kyle shows how easy it is to set up was missing something, because in the video he pretty much just drops the API token in there and BAM! everything works. I wish there were some setup documentation or guides that show you how to configure these integrations.
And how is this supposed to work? There is no property called splunk_forwarder in any props stanza. Also, Splunk does variable expansion on a very limited set of settings.
Thanks for your feedback. The Dashboards team continuously seeks to simplify the admin experience for managing dashboards. This is as intended. Dashboards are a bit different from most other webpages because they are specifically configured by a dashboard builder to look a certain way, so allowing end users to change the theme conflicts with the idea that a dashboard builder can build a dashboard to be viewed the way they built it. On top of that, due to the high level of customization and specific color settings, we can't guarantee that bulk theme changes will maintain readability or visual consistency across all dashboards.
For anyone else that stumbles upon this, I ended up working with infoblox support on this. The issue was in the way the Infoblox Data Connector was writing the timestamp in the logs prior to sending to Splunk. Depending on your grid timezone settings, the IB Data Connector was actually offsetting the epoch time... instead of leaving it epoch. (don't ask me why). They pushed a patch down a few days ago that fixed it for us.
Hello Team, I have forwarded syslogs to Splunk Enterprise. I am trying to create props.conf and transforms.conf in such a way that Splunk ingests all messages matching the keywords I have defined in a regex in transforms.conf and drops all non-matching messages, but I am not able to do so. Is there a way to do that, or do transforms.conf and props.conf only work to drop the messages matched by the regex? Currently, when I try this, Splunk drops only the events with the keywords I defined and ingests everything else. I am new to Splunk, so I am requesting some input on this. Thanks in advance!!
What have you tried so far?  Was it anything like this? | rex "\}\s*-\s*(?<field>.*)"
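As a quick sanity check of that pattern outside Splunk, the same regex can be exercised with Python's re module against the sample event from this thread:

```python
import re

# Sample event from the question in this thread
event = (
    "CommonEndpointLoggingAspect "
    "{requestId=94f2a697-3c0d-4835-b96a-42be3d2426e2, serviceName=getCart} - REPLY"
)

# Same pattern as the SPL rex: capture everything after "} - "
match = re.search(r"\}\s*-\s*(?P<field>.*)", event)
print(match.group("field"))  # → REPLY
```

In SPL the named group becomes the extracted field, so `| rex "\}\s*-\s*(?<field>.*)"` would populate field=REPLY for this event.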
I'm trying to regex the field that has "REPLY" CommonEndpointLoggingAspect {requestId=94f2a697-3c0d-4835-b96a-42be3d2426e2, serviceName=getCart} - REPLY 
curl command: curl -k -u admin:Password -X POST http://127.0.0.1:8000/en-US/services/authorization/tokens?output_mode=json --data name=admin --data audience=Users --data-urlencode expires_on=+30d   But I am able to log in via the UI and create an access token. If I try to do the same using the curl command, I get the response below. Note: the response has been trimmed.     <div class="error-message"> <h1 data-role="error-title">Oops.</h1> <p data-role="error-message">Page not found! Click <a href="/" data-role="return-to-splunk-home">here</a> to return to Splunk homepage.</p> </div> </div> </div> <div class="message-wrapper"> <div class="message-container fixed-width" data-role="more-results"><a href="/en-US/app/search/search?q=index%3D_internal%20host%3D%22f6xffpvw93.corp.com%2A%22%20source%3D%2Aweb_service.log%20log_level%3DERROR%20requestid%3D6740cfffb611125b5e0" target="_blank">View more information about your request (request ID = 6740cfffb611125b5e0) in Search</a></div> <div class="message-container fixed-width" data-role="crashes"></div> <div class="message-container fixed-width" data-role="refferer"></div> <div class="message-container fixed-width" data-role="debug"></div> <div class="message-container fixed-width" data-role="byline"> <p class="byline">.</p> </div> </div> </body>
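One observation (not verified against the poster's environment): the command above targets port 8000, which is Splunk Web, so it returns the web UI's "Page not found" HTML. The REST endpoints such as /services/authorization/tokens are normally served by splunkd on the management port (8089 by default, over HTTPS). A sketch of the same call against the management port might look like:

```
curl -k -u admin:Password -X POST \
  "https://127.0.0.1:8089/services/authorization/tokens?output_mode=json" \
  --data name=admin --data audience=Users --data-urlencode expires_on=+30d
```

The host, credentials, and expiry here are taken from the original command; adjust them to your deployment.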
I had done something like this in a previous life. Each HF should get an app which has a props definition under the default stanza. For a small number of HFs you can do this manually; for a large group managed from something like a DS, reference the Splunk environment variables.

props.conf:

[default]
splunk_forwarder = <HOSTNAME>

It has been a while, so play around with this. I seem to remember it was a props.conf mapped to a transforms.conf which inserted the hostname, so find what works best for you.