All Posts


We don't know your data, we don't know what you're getting, we don't know whether you match your data properly or extract the fields properly. We don't know anything except a search and an Excel table.
It's a bit more complicated than that.

1. As @richgalloway pointed out, the UF is by default capped with maxKBps (which is a rough value - there is no guarantee that Splunk will _always_ process no more than that value per second).
2. Even if you set the limit to 0 (no limit at all), the back pressure from the output will make the forwarder stop reading the file until the queue empties a bit.

Generally, the "speed" at which Splunk reads files depends mostly on non-Splunk limits (like the output rate, which might be limited by the receiving instance's performance or the network bandwidth, or the input rate if the file is placed on a network share). Also, since the limit applies to the overall size of the data regardless of how big the events are, the EPS value isn't that important here - the same limit applies whether you send just a few big events or many small ones. But there is one more thing worth pointing out - the UF doesn't (typically, unless you use indexed extractions on structured data) deal with events as such; it reads chunks of data and sends them to an output, to be broken into events "further down the road" (on indexers or heavy forwarders). With a sufficiently modern UF and a configured EVENT_BREAKER, you should be sending chunks of data ending on an event boundary, but you typically don't send single events (unless they are huge).
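For reference, a minimal sketch of the two settings mentioned above, as they would look on the forwarder (the sourcetype name my:sourcetype is just a placeholder for your own):

# limits.conf on the UF - the throughput cap discussed above
[thruput]
# default is 256; 0 removes the Splunk-side cap entirely
maxKBps = 0

# props.conf on the UF - lets the UF end each chunk on an event boundary
[my:sourcetype]
EVENT_BREAKER_ENABLE = true
# the first capture group marks the boundary between events
EVENT_BREAKER = ([\r\n]+)

Even with maxKBps = 0, the back pressure described in point 2 still applies, so this is not a guarantee of unlimited throughput.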
index=test-index (data loaded) OR ("GET data published/data/ui" OR "GET /v8/wi/data/*" OR "GET data/ui/wi/load/success")
| rex field=_raw "DIP:\s+\[(?<dip>[^\]]+)."
| rex field=_raw "ACTION:\s+(?<actions>\w+)"
| rex field=_raw "SERVICE:\s+(?<services>\S+)"
| search actions=start OR actions=done NOT service="null"
| eval split=services.":".actions
| timechart span=1d count by split
| eval _time=strftime(_time, "%d/%m/%Y")
| table _time *start *done

When we run the above query, not all services are getting captured, even though we have the data; see the attached screenshot (the highlighted ones are missing). Can anyone let me know what the issue with the query is?
I am reading the host from a log file and have a query that returns all the hosts.

index=aaa source="/var/log/test1.log" | stats count by host

Can we include a step to categorize test/qa/prod in the drop-down list from the list of hosts returned by the query itself (using a wildcard - e.g. if the host has "t" it's test, if the host has "q" it's a qa server, etc.)? For now I am using static options:

test - testhost
qa - qahost
prod - prodhost
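A minimal sketch of a populating search for such a dropdown, assuming the environment can be inferred from a substring of the host name (the patterns in case() are placeholders - adjust them to your naming convention):

index=aaa source="/var/log/test1.log"
| stats count by host
| eval env=case(match(host, "test"), "test", match(host, "qa"), "qa", match(host, "prod"), "prod", true(), "other")
| stats count by env

The resulting env field can then back the dropdown's dynamic options, with env as both the label and the value.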
It depends on the size of an event.  The UF is rate-limited by the maxKBps setting in limits.conf.
Hi Team, our Docker build is failing with this error:

=> ERROR [15/16] RUN sed -i 's/<AppenderRef ref=\"Console\"\/>/<!-- <AppenderRef ref=\"Console\"\/> -->/g' /usr/local/lib/python3.10/site-packages/appdynamics_bindeps/proxy/conf/logging/log4j2.xml

The package version is appdynamics==23.8.0.6197; appdynamics-bindeps is not getting pulled/installed. We tried the latest version of the appdynamics package (appdynamics==24.2.0.6567) with the same result. This is happening only on a Mac M1 Pro. Explicitly adding "appdynamics-bindeps-linux-x64==23.8.0" to requirements.txt gives the error below:

File "/tmp/appd/lib/cp310-cp310-ffd7b4d13d09a0572eb0f3d85bb006d0043821e28e0e1e2c12f81995da1bd796/site-packages/appdynamics_bindeps/zmq/backend/cython/__init__.py", line 6, in <module>
2024-04-10 11:14:44 from . import (constants, error, message, context,
2024-04-10 11:14:44 ImportError: cannot import name 'constants' from partially initialized module 'appdynamics_bindeps.zmq.backend.cython' (most likely due to a circular import) (/tmp/appd/lib/cp310-cp310-ffd7b4d13d09a0572eb0f3d85bb006d0043821e28e0e1e2c12f81995da1bd796/site-packages/appdynamics_bindeps/zmq/backend/cython/__init__.py)
One way around this is to use a small (﹠) or fullwidth (&) ampersand.
Did you ever figure this out?
Hi Team, what is the events-per-second (EPS) rate when reading a flat file with a universal forwarder?
Thank you for the response. After a lot of digging and looking through py files and scripts, I did manage to find those two conf files, and I was able to successfully disable SSL and access via HTTP. The weird thing is that on some machines I am unable to log in and receive a 403 error (SOAR outputs "Login Prevented. Please close your browser and try again"), while on others I can log in with no issue. I cannot find anything in the nginx confs that would cause this issue. It is presumably local to those machines, but I would like to track it down so I can ensure it won't be a problem in the environment we intend to use. I know Django and uWSGI also play roles in this configuration, but I am not sure what those roles are.
Hi @Marcie.Sirbaugh, I just looked into it but didn't get any information from it, because the default namespace is already there. I don't know what I'm missing.
Hi Team, our Splunk ITSI default scheduled backup is taking more than 10 hours to complete. Could you please assist us with this? Thanks
I opened a case with Splunk support; they stated that the problem was that "the latest version of those apps / addons were not updated in the Upgrade Readiness App database file". They updated the database file, and after that all apps and add-ons on my (Splunk Cloud) search head passed the Python scan under the Upgrade Readiness App.
I put a ticket in to Splunk and found that it's a "known" bug that is not in their normal KBDB, but they will work to get it there. In the meantime, per support and @SierraX's confirmation, upgrading to 9.1.3 resolved the issue. I have asked whether Splunk would be able to divulge what the bug was and am waiting for a response. Thanks @SierraX for your response... funny, I got your response and Splunk support's response at the same time... (Scary... LOL)
Hey, I am having issues with changing the region color based on values; could you help me with it? I have to show the status of the HK, FR, and TYO regions, and I just want each country to be highlighted on the world map based on its value. How can this be done?
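For context, a minimal sketch of the kind of search that can drive a choropleth world map, assuming a hypothetical status field and a region-to-country mapping (HK/FR/TYO are not country names, so they have to be translated into names that Splunk's built-in geo_countries lookup knows):

index=my_index
| eval country=case(region=="HK", "Hong Kong", region=="FR", "France", region=="TYO", "Japan")
| stats latest(status) as value by country
| geom geo_countries featureIdField=country

The map's color settings (e.g. the mapping.choroplethLayer.* options in Simple XML) then control how each value is translated into a highlight color.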
Does anything special need to be done when installing Splunk 9.1.1 on RHEL 9.3, or can I just follow the standard steps and be good to go? Thanks -David
Not sure if you've discovered RHONDOS yet... They sell PowerConnect, which is certified by both SAP and Splunk. It has many out-of-the-box dashboards for SAP's SaaS cloud offerings which will help you!
PowerConnect from RHONDOS includes tons of out-of-the-box dashboards for SAP's SaaS offerings. You should reach out to them. You can see more information HERE
PowerConnect for SAP is certified by SAP. It contains hundreds of out-of-the-box extractors and dashboards. It works with ABAP, Cloud, and JAVA systems. You can find out more information HERE
PowerConnect is a third-party application certified by both SAP and Splunk. It provides deep insights into many of SAP's SaaS offerings, including CPQ. You can find more information HERE