All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi Team, We are currently using Splunk UF agents, installed on all infra servers, which receive their configuration from deployment servers; both are running version 9.1.2. These logs are forwarded to the Splunk Cloud console via Cribl workers, and the Splunk Cloud indexers and search head are running version 9.2.2. Our question: if we upgrade the Splunk UF agents and the Splunk Enterprise deployment servers from 9.1.2 to 9.3.0, will it impact the cloud components (due to compatibility issues), or will there be no impact since the cloud components receive the logs indirectly via Cribl? Could you please clarify?
Hi everyone! I'm trying to figure out how to map a field name dynamically to a column of a table. As it stands, the table looks like this:

twomonth_value  onemonth_value  current_value
5               3               1

I want the output to be this instead:

july_value  august_value  september_value
5           3             1

I am able to get the correct dynamic value for each month via

| eval current_value = strftime(relative_time(now(), "@mon"), "%B")."_value"

However, I'm unsure how to change the field name directly in the table. Thanks in advance!
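A minimal sketch of one way to do this, using eval's {field} syntax to create fields named after another field's value; the month arithmetic and field names here are assumptions to adapt:

| eval n0 = lower(strftime(relative_time(now(), "@mon"), "%B"))."_value"
| eval n1 = lower(strftime(relative_time(now(), "-1mon@mon"), "%B"))."_value"
| eval n2 = lower(strftime(relative_time(now(), "-2mon@mon"), "%B"))."_value"
| eval {n0} = current_value, {n1} = onemonth_value, {n2} = twomonth_value
| fields - n0 n1 n2 current_value onemonth_value twomonth_value

The {n0} syntax tells eval to use the value of n0 as the name of the field being created, which is what makes the column headers dynamic.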
Hi Everyone, I am not a Splunk engineer but I have a task to do. The sc4s.service has failed and I can't get the logs; it was working before. The error says 'Unauthorized access', but I don't have any credentials for that. Environment="SC4S_IMAGE=docker.io/splunk/scs:latest" Could you help me figure out how to fix it? Thanks,
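For anyone triaging the same symptom, a first step is usually to read the unit's own log output and to test the image pull separately, since an 'Unauthorized access' error often comes from the container registry rather than from Splunk itself. A sketch, assuming a systemd-based SC4S install using podman or docker:

journalctl -u sc4s.service --no-pager -n 100
podman pull docker.io/splunk/scs:latest    # or: docker pull docker.io/splunk/scs:latest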
I am trying to write an eval expression to translate a few different languages into English. One of the languages is Hebrew, which is a right-to-left language, and when I use the Hebrew text in my query, my cursor location is no longer predictable and I cannot copy/paste the Hebrew into an otherwise left-to-right query expression. I then tried to create a macro to do the evaluation, but I ran into the same issue. I even tried a different browser (Firefox vs. Brave) and a different program (Notepad++), but I always encounter the cursor/keyboard anomalies after pasting the text into my query. I need to translate a few different strings within a case eval expression. Is anyone aware of similar issues being encountered and/or of any potential workarounds? Does someone have an alternate suggestion as to how I can accomplish the translations? Here is an example of what I am trying to do:

| eval appName = case(appName="플레이어","player",appName="티빙","Tving",appName=...

This Hebrew text is an example of where I run into issues: כאן ארכיון
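One possible workaround, sketched as an untested idea: keep the query itself ASCII-only by URL-encoding the Hebrew string and decoding it at search time with eval's urldecode(), which sidesteps the RTL cursor behavior entirely. The encoded string and the English target below are illustrative and should be verified against the real data:

| eval appName = case(
    appName="플레이어", "player",
    appName="티빙", "Tving",
    appName=urldecode("%D7%9B%D7%90%D7%9F%20%D7%90%D7%A8%D7%9B%D7%99%D7%95%D7%9F"), "archive")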
Hi, We use the Splunk forwarder to monitor application data. There are multiple folders on a given server, each with the same set of log files, but since the folder names are a distinguishing factor, we are using crcSalt=<SOURCE> so that Splunk treats all log files differently. We also make sure to lock the stanza to a specific extension as needed, e.g. logname.log or log*.txt, so that rotated files are ignored. That being said, I still want to find out if there are any situations where Splunk could be re-indexing files multiple times, which might warrant the use of initCrcLen instead. Is this something that's discoverable via search? Does the Splunk forwarder keep some type of internal record/tracker showing that it is re-indexing a previously seen file? Thanks,
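A sketch of one place to look: the forwarder logs its file-tracking decisions to _internal under the WatchedFile component, so a search along these lines can surface files being read again from the start (the exact message text varies by version, so treat the quoted string as an assumption to verify):

index=_internal sourcetype=splunkd component=WatchedFile "will re-read entire file"
| stats count latest(_time) as last_seen by host
| convert ctime(last_seen)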
Hi! Is there any way to make the data retrieval rate slower? Something like 1 hour's worth of data every 1 minute. When we try to pull 30 days of data (about 4.4M events) from our Elastic server, it creates a huge network load spike and then the server stops responding.
Hi fellows, It's time to migrate our good old standalone Splunk Enterprise to a 'small enterprise deployment'. I read through tons of docs; unfortunately, I didn't find any step-by-step guide, so I have many questions. Maybe some of you can help.

- The existing server is CentOS 7; the new servers will be Ubuntu 22.04. Just before the migration, I plan to upgrade Splunk on it from 9.1.5 to the latest 9.3.1 (it wasn't updated earlier because CentOS 7 is not supported by 9.3.1). Or do I set up the new servers with 9.1.5 and upgrade them after the migration?
- Our daily volume is 300-400 GB/day and will not grow drastically in the medium term. What are your recommended hardware specs for the indexers? Can we use the "Mid-range indexer specification", or should we go for the "High-performance indexer specification" (as described at https://docs.splunk.com/Documentation/Splunk/9.3.1/Capacity/Referencehardware)?
- If I understand correctly, I can copy /etc/apps/ from the old server to the new search head, so I will have all the apps, saved searches, etc. But which config files must be modified to send the data to the new indexers (see the sketch after this post)? For forwarders this is clear, but we are using a lot of other inputs (REST API, HEC, scripted).
- Do I configure our existing server as part of the indexer cluster (third member), then remove it from the cluster once all the replication to the new servers is done? Or do I copy the index data to one of the new indexers, rename the buckets (adding the indexer's unique ID), and let the cluster manager do the job? (Do I need a separate cluster manager, or could the SH do that job?)

And here comes the big twist...

- Currently we are using S3 storage via NAS Bridge for the cold buckets. This solution is not recommended and we are already experiencing fallbacks, so we planned to change the configuration to use SmartStore. How can I move the current cold buckets there? (It is a lot of data, and because of the NAS Bridge it is very, very slow to copy.)

Thanks in advance, Norbert
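On the "which config files" question, the piece that usually has to change for non-forwarder inputs is outputs.conf on whatever instance hosts those inputs, so the data they generate is load-balanced to the new indexers. A minimal sketch, with placeholder hostnames:

# outputs.conf on the search head / input-hosting instance
[tcpout]
defaultGroup = new_indexers

[tcpout:new_indexers]
server = idx1.example.com:9997, idx2.example.com:9997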
How can I get an output containing all hosts along with their last update times, across all time? The search below is taking a huge amount of time; how can it be optimized to run faster?

index=* | fields host, _time
| stats max(_time) as last_update_time by host
| eval t=now()
| eval days_since_last_update=tonumber(strftime((t-last_update_time),"%d"))-1
| where days_since_last_update>30
| eval last_update_time=strftime(last_update_time, "%Y-%m-%d %H:%M:%S")
| table last_update_time host days_since_last_update
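A sketch of a faster equivalent: tstats reads indexed metadata instead of scanning raw events, and plain epoch arithmetic replaces the strftime-on-a-duration step (strftime expects a timestamp, not a time difference, so the "%d" trick does not reliably count days):

| tstats max(_time) as last_update_time where index=* by host
| eval days_since_last_update = floor((now() - last_update_time) / 86400)
| where days_since_last_update > 30
| eval last_update_time = strftime(last_update_time, "%Y-%m-%d %H:%M:%S")
| table last_update_time host days_since_last_update

The metadata command (| metadata type=hosts index=*) is an even cheaper alternative, with the caveat that its recentTime values can be approximate.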
Hello Splunkers!! I have ingested data into Splunk from the source system using the URI "https://localhost:8088/services/collector" along with the HEC token. However, the data is not being displayed in Splunk with the appropriate sourcetype parsing, which is affecting the timestamp settings for the events. My actual props.conf settings are as below:

[agv_voot]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
category = Custom
KV_MODE = json
pulldown_type = 1
TIME_PREFIX = ^\@timestamp
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N
TIMESTAMP_FIELDS = @timestamp
TRANSFORMS-trim_timestamp = trim_long_timestamp

transforms.conf:

[trim_long_timestamp]
REGEX = (\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3})\d+(-\d{2}:\d{2})
FORMAT = $1

Please help me fix the parsing so the correct sourcetype and timestamp are applied.
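One thing worth checking, stated as an assumption to verify against the HEC docs for your version: the /services/collector event endpoint delivers pre-formed events that largely bypass index-time timestamp extraction, whereas the /services/collector/raw endpoint runs data through the normal parsing pipeline, so props.conf settings such as TIME_PREFIX and TIME_FORMAT can take effect. A sketch of a test send (token and payload are placeholders):

curl -k "https://localhost:8088/services/collector/raw?sourcetype=agv_voot" \
  -H "Authorization: Splunk <token>" \
  -d '{"@timestamp":"2024-09-01T12:34:56.789012-05:00","message":"test event"}'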
Hi, can we force the default expiration of all scheduled searches to 24 hours in Splunk Cloud? I came across a few posts/docs which state that this can be done, but it was unclear which configuration needs to be changed, and where.
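For reference, the setting usually involved is dispatch.ttl in savedsearches.conf, which controls how long the artifacts of a scheduled search are kept. A minimal sketch; note that on Splunk Cloud, conf changes generally have to be shipped in a private app or requested through support rather than edited on disk:

# savedsearches.conf
[default]
dispatch.ttl = 86400    # 24 hours, in seconds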
Splunk HEC was configured as defined in the documentation. I can send data using the HTTPS URL, but when sending the same data using the HTTP URL, the request fails with the error "curl: (56) Recv failure: Connection reset by peer".

curl https://<host>:<port>/services/collector -H 'Authorisation: Splunk <token>' -d '{"sourcetype": "demo", "event": "Test data!"}'
Output/Response: {"text":"Success","code":0}

curl http://<host>:<port>/services/collector -H 'Authorisation: Splunk <token>' -d '{"sourcetype": "demo", "event": "Test data!"}'
curl: (56) Recv failure: Connection reset by peer

This is the command used to enable the token, which worked perfectly fine:

/opt/splunk/bin/splunk http-event-collector enable -name <hec_name> -uri https://localhost:8089

Then, since I had to enable the HTTP URL, I executed the command below:

/opt/splunk/bin/splunk http-event-collector enable -name catania-app-stat -uri http://localhost:8089

Error/Output: Cannot connect Splunk server

What am I missing here? How do I get the source to send data over HTTP?
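A sketch of one likely explanation, to verify: the -uri flag on the splunk CLI points at the management port (8089), which always speaks HTTPS, so http://localhost:8089 cannot connect; it does not control the protocol of the HEC listener itself. Whether HEC serves HTTP or HTTPS is governed by the enableSSL setting in inputs.conf on the receiving instance (restart required):

# inputs.conf
[http]
disabled = 0
enableSSL = 0    # serve HEC over plain HTTP (default port 8088)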
We are looking into building our own AI chatbot that integrates the Splunk AI Assistant. Can the Splunk AI Assistant be called through API calls from our application so we can get the responses? If that's possible, could you provide further details?
Working on a query to generate an alert when a field value changes. The requirement is to detect a change in IP for an FQDN. Currently I'm trying to use a lookup file which has the current value of the IP for two FQDNs per host.

Columns: Host | FQDN | Current_IP

Host1 fqdn1 IP1
Host2 fqdn1 IP1
Host1 fqdn2 IP2
Host2 fqdn2 IP2

I followed an approach suggested in another thread using inputlookup. My current query looks like this:

... | stats latest(IP) as Latest_IP
| inputlookup append=true myfile.csv
| stats first(Latest_IP) as Latest_IP, first(Current_IP) as Previous_IP
| where Latest_IP!=Previous_IP

This gives me a result with the latest and previous IP whenever the IP changes, but I'm looking to add more details to the result: the FQDN, and the time when the IP changed.
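A sketch of one way to keep the Host/FQDN context and capture when the change happened; the base search is a placeholder, and this assumes myfile.csv is configured as a lookup:

<your base search>
| stats latest(IP) as Latest_IP latest(_time) as change_time by Host FQDN
| lookup myfile.csv Host, FQDN OUTPUT Current_IP as Previous_IP
| where Latest_IP != Previous_IP
| eval change_time = strftime(change_time, "%Y-%m-%d %H:%M:%S")
| table Host FQDN Previous_IP Latest_IP change_time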
We've configured -Dappagent.start.timeout=30000 for the Java agent on our webapps after Pods failed to start because AppD was taking a lot of time initially, which was delaying the liveness probe in EKS. With the timeout added, per the docs, the AppD agent starts in parallel with the application startup, reducing the overall startup time. A few days ago we started seeing the issue below in webapps running on a WildFly server, where it reports java.lang.NoClassDefFoundError: com/singularity/ee/agent/appagent/entrypoint/bciengine/FastMethodInterceptorDelegatorBoot; surprisingly, it gives this error when we remove the timeout configuration. Can anyone confirm whether they have come across this issue?

9:17:46,228 INFO [stdout] (AD Agent init) Agent will mark node historical at normal shutdown of JVM
09:17:50,323 INFO [stdout] (AD Agent init) Registered app server agent with Node ID[455861] Component ID[6467] Application ID [553]
09:17:56,727 ERROR [stderr] (Reference Reaper #2) Exception in thread "Reference Reaper #2" java.lang.NoClassDefFoundError: com/singularity/ee/agent/appagent/entrypoint/bciengine/FastMethodInterceptorDelegatorBoot
09:17:56,729 ERROR [stderr] (Reference Reaper #2) at org.wildfly.common.ref.References$ReaperThread.run(References.java)
09:17:56,822 ERROR [stderr] (Reference Reaper #1) Exception in thread "Reference Reaper #3" Exception in thread "Reference Reaper #1" java.lang.NoClassDefFoundError: com/singularity/ee/agent/appagent/entrypoint/bciengine/FastMethodInterceptorDelegatorBoot
09:17:56,823 ERROR [stderr] (Reference Reaper #1) at org.wildfly.common.ref.References$ReaperThread.run(References.java)
09:17:56,823 ERROR [stderr] (Reference Reaper #1) Caused by: java.lang.ClassNotFoundException: com.singularity.ee.agent.appagent.entrypoint.bciengine.FastMethodInterceptorDelegatorBoot from [Module "org.wildfly.common" version 1.6.0.Final from local module loader @7a30d1e6 (finder: local module finder @5891e32e (roots: /opt/jboss/modules,/opt/jboss/modules/system/layers/base))]
09:17:56,824 ERROR [stderr] (Reference Reaper #1) at org.jboss.modules.ModuleClassLoader.findClass(ModuleClassLoader.java:200)
09:17:56,824 ERROR [stderr] (Reference Reaper #1) at org.jboss.modules.ConcurrentClassLoader.performLoadClassUnchecked(ConcurrentClassLoader.java:410)
09:17:56,824 ERROR [stderr] (Reference Reaper #1) at org.jboss.modules.ConcurrentClassLoader.performLoadClass(ConcurrentClassLoader.java:398)
09:17:56,825 ERROR [stderr] (Reference Reaper #1) at org.jboss.modules.ConcurrentClassLoader.loadClass(ConcurrentClassLoader.java:116)
09:17:56,825 ERROR [stderr] (Reference Reaper #1) ... 1 more
09:17:56,826 ERROR [stderr] (Reference Reaper #3) java.lang.NoClassDefFoundError: com/singularity/ee/agent/appagent/entrypoint/bciengine/FastMethodInterceptorDelegatorBoot
09:17:56,827 ERROR [stderr] (Reference Reaper #3) at org.wildfly.common.ref.References$ReaperThread.run(References.java)
09:18:05,627 INFO [stdout] (AD Agent init) Started AppDynamics Java Agent Successfully.
09:18:41,038 ERROR [org.xnio.nio] (default I/O-2) XNIO000011: Task org.xnio.nio.WorkerThread$SynchTask@191c0b4b failed with an exception: java.lang.NoClassDefFoundError: com/singularity/ee/agent/appagent/entrypoint/bciengine/FastMethodInterceptorDelegatorBoot
Hello Everyone, I have two individual systems from which I am getting API (GET) responses. I have a requirement to compare the JSON responses we get from these two different systems: if the payloads match, mark the result as 'SUCCESS', else 'FAILURE'. I want to build a report based on these results. Can anyone please check and let me know a possible solution in Splunk? Also, please let me know what Splunk skills we need to achieve this requirement. Thanks.
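A sketch of one Splunk-side approach, assuming both responses are ingested as events and share a correlation key (the index names and the request_id field are hypothetical placeholders): hash each payload and compare the hashes per key.

(index=system_a) OR (index=system_b)
| eval payload_hash = sha256(_raw)
| stats dc(payload_hash) as distinct_hashes dc(index) as sources by request_id
| eval result = if(distinct_hashes == 1 AND sources == 2, "SUCCESS", "FAILURE")
| stats count by result

Skill-wise, this mostly calls for data onboarding (getting the API responses in, e.g. via HEC or a scripted input) plus SPL for the comparison and reporting.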
Hi, I have 2 panels for which the event volume is high, so I am trying to include the stats command along with the fields command in the base query. I have a field TotalTransaction for which I need stats values as

| stats count(TotalTransaction) as tottrans by Tier — for one panel
| stats count(TotalTransaction) as tottrans by Tier Proxy method — for the other panel

How do I get both stats values included in the same base query?
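A sketch of the usual base-search pattern: aggregate at the finest granularity both panels need in the base search, then re-aggregate in the post-process search of the coarser panel:

Base search:
... | stats count(TotalTransaction) as tottrans by Tier Proxy method

Panel split by Tier only (post-process):
| stats sum(tottrans) as tottrans by Tier

Panel split by Tier, Proxy, method: no post-process needed; it consumes the base result directly.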
I have a basic timechart query that graphs the number of queries per second (QPS) for several hosts. I need to filter the results to only show hosts that had a change in QPS of +/- 50% at any point, and drop the others.

index=metrics host=*
| timechart span=5m partial=f limit=0 per_second(Query) as QPS by Site
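A sketch of one way to do the filtering, assuming Site is the split-by field: flatten the timechart, measure the bin-to-bin change per series, keep only series that ever moved 50% or more, then pivot back:

index=metrics host=*
| timechart span=5m partial=f limit=0 per_second(Query) as QPS by Site
| untable _time Site QPS
| streamstats current=f window=1 last(QPS) as prev_QPS by Site
| eval pct_change = if(prev_QPS > 0, abs(QPS - prev_QPS) / prev_QPS * 100, null())
| eventstats max(pct_change) as max_change by Site
| where max_change >= 50
| fields - prev_QPS pct_change max_change
| xyseries _time Site QPS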
I am getting an integrity check error on /opt/splunk/bin/python2.7 that says present_but_shouldnt_be. I can find the write-protected file python2.7 at that path. Is this as simple as deleting it, or is there some uninstall I need to run?
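A sketch of a cautious cleanup, on the assumption that the file is a leftover from an older (pre-Python-3) release that the upgrade did not remove: move it out of $SPLUNK_HOME rather than deleting it outright, then re-run the integrity check to confirm the warning clears.

mv /opt/splunk/bin/python2.7 /tmp/python2.7.bak
/opt/splunk/bin/splunk validate files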
I am using a statistics table for the visualization of some data. Is there a way to colorize cells based on partial text values? For example, I want a different cell color based on the first 3 letters of the values: one cell color for all cells that start with BAX, a different color for cells that start with CAR, etc.

BAX11ROW3
BAX12ROW5
CAR01ROW4
CAR05ROW3
DOR01ROW6
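A sketch of one approach in Simple XML, using an expression-based color palette on the table column; the field name and hex colors are placeholders, and if match() is not available in your version, computing a 3-letter prefix field in SPL and coloring on that is an alternative:

<format type="color" field="location">
  <colorPalette type="expression">case(match(value, "^BAX"), "#53A051", match(value, "^CAR"), "#F8BE34", match(value, "^DOR"), "#DC4E41", 1==1, "#FFFFFF")</colorPalette>
</format>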
Hi everyone! Splunk Observability Cloud provides end-to-end visibility and insights into the performance of applications and infrastructure. Splunk offers a series of Observability Cloud education courses that help users and professionals learn how to monitor, troubleshoot, and optimize the performance of their IT environments using real-time observability tools. These courses cover everything from the basics of infrastructure monitoring to advanced application performance management (APM) and real-user monitoring. Check out some of the free courses: registration is open for Visualizing and Alerting in Splunk Observability Cloud, Fundamentals of Metrics Monitoring in Splunk Observability Cloud, Getting Data into Splunk Observability Cloud, and Intro to Splunk Observability Cloud! Post a comment if you have any questions! Splunk Community Team