All Posts



Is your data already in Splunk? Have the fields already been extracted? Do you know how to write SPL? Do you know how to create dashboards?
| eval average=floor(average)
Hi, can we force the default expiration of all scheduled searches to 24 hours in Splunk Cloud? I came across a few posts/docs stating that this can be done, but it was unclear which configuration we need to change.
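A sketch of one possible approach (setting names per standard savedsearches.conf; in Splunk Cloud you typically cannot edit this file directly and may need to go through an app, the ACS API, or Support — verify against your version's docs): the lifetime of a scheduled search's artifacts is governed by dispatch.ttl, so a 24-hour default could look like:

```
[default]
# 86400 seconds = 24 hours; inherited by searches that do not override it
dispatch.ttl = 86400
```

Individual saved searches that set their own dispatch.ttl would still override this default.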
Splunk HEC was configured as defined in the documentation. I can send data using the https URL, but when sending the same data using the http URL, the request fails with the error "curl: (56) Recv failure: Connection reset by peer".

curl https://<host>:<port>/services/collector -H 'Authorisation: Splunk <token>' -d '{"sourcetype": "demo", "event": "Test data!"}'
Output/Response: {"text":"Success","code":0}

curl http://<host>:<port>/services/collector -H 'Authorisation: Splunk <token>' -d '{"sourcetype": "demo", "event": "Test data!"}'
curl: (56) Recv failure: Connection reset by peer

This is the command used to enable the token, which worked perfectly fine:
/opt/splunk/bin/splunk http-event-collector enable -name <hec_name> -uri https://localhost:8089

I thought I had to enable the http URL as well, and executed the command below:
/opt/splunk/bin/splunk http-event-collector enable -name catania-app-stat -uri http://localhost:8089
Error/Output: Cannot connect Splunk server

What am I missing here? How do I get the source to send data over the HTTP protocol?
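A likely explanation (a sketch, not a confirmed diagnosis): the -uri flag on the splunk CLI points at splunkd's management port (8089) and only controls how the CLI itself connects to splunkd; it does not switch HEC between HTTP and HTTPS. Whether HEC accepts plain HTTP is governed by the HEC input's own enableSSL setting. If plain HTTP is genuinely required (HTTPS is the default and is strongly recommended), one way is to disable SSL on the HEC input, e.g. in $SPLUNK_HOME/etc/apps/splunk_httpinput/local/inputs.conf:

```
[http]
disabled = 0
# 0 = accept plain HTTP on the HEC port instead of HTTPS
enableSSL = 0
```

A splunkd restart is needed for the change to take effect; verify the stanza and path against the inputs.conf documentation for your Splunk version.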
Maybe something like

| mstats rate(Query) as QPS where index=metrics host=* by Site span=5m
| streamstats window=2 global=false current=false stdev(QPS) as devF by Site
| sort Site, - _time
| streamstats window=2 global=false current=false stdev(QPS) as devB by Site
| where 4*devF > QPS OR 4*devB > QPS
| timechart span=5m values(QPS) by Site
Hi @tsocyberoperati, you have to follow the instructions at https://docs.splunk.com/Documentation/Splunk/9.3.1/Forwarding/Forwarddatatothird-partysystemsd#Forward_a_subset_of_data

In props.conf:
[host::hostA]
TRANSFORMS-hostA = send_to_syslog

In transforms.conf:
[send_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = my_syslog_group

where my_syslog_group is the stanza in outputs.conf. Ciao. Giuseppe
We are looking into building our own AI chatbot that integrates the Splunk AI Assistant. Can the Splunk AI Assistant be called via API from our application to get responses? If so, can you provide further details?
Hi @arjun_ananth, I don't like the lookup method; I'd rather use a summary index. Schedule a search every night (if the change frequency you want to monitor is one day), e.g.:

index=your_index | dedup ip | table _time host ip | collect index=your_summary

and then run a search on the summary index:

index=your_summary | stats dc(ip) AS ip_count BY host | where ip_count>1

This way you don't have the problem of managing the timestamp and lookup updates, and at the same time you have a quick search. Ciao. Giuseppe
Working on a query to generate an alert when a field value changes. The requirement is to detect a change in the IP for an FQDN. Currently I'm trying to use a lookup file which has the current value of the IP for two FQDNs per host.

Columns: Host | FQDN | Current_IP

Host1 fqdn1 IP1
Host2 fqdn1 IP1
Host1 fqdn2 IP2
Host2 fqdn2 IP2

I followed an approach suggested in another thread to use inputlookup. My current query looks like:

| stats latest(IP) as Latest_IP
| inputlookup append=true myfile.csv
| stats first(Latest_IP) as Latest_IP, first(Current_IP) as Previous_IP
| where Latest_IP!=Previous_IP

This gives me a result with the latest and previous IP whenever the IP changes, but I'm looking to add more details to the result, which should also list the FQDN and the time when the IP changed.
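To carry the FQDN and the change time through to the result, one approach (a sketch; the index, sourcetype, and field names here are placeholders for your environment) is to aggregate by Host and FQDN so those columns survive, join the lookup with the lookup command instead of inputlookup, and derive the change time from the latest matching event:

```
index=your_index sourcetype=your_sourcetype
| stats latest(IP) as Latest_IP latest(_time) as last_seen by Host FQDN
| lookup myfile.csv Host, FQDN OUTPUT Current_IP AS Previous_IP
| where Latest_IP!=Previous_IP
| eval Changed_At=strftime(last_seen, "%Y-%m-%d %H:%M:%S")
| table Host FQDN Previous_IP Latest_IP Changed_At
```

This assumes myfile.csv is defined as a lookup and keyed on Host and FQDN; because the stats runs by Host FQDN, both fields are available for the lookup match and in the final table.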
We've configured -Dappagent.start.timeout=30000 for the Java agent on our webapps after Pods failed to start because the AppDynamics agent took a long time to initialize, which was delaying the liveness probe in EKS. With the timeout added, per the docs, the AppD agent starts in parallel with the application, reducing overall startup time. A few days ago we started seeing the issue below in webapps running on a WildFly server, where it reports java.lang.NoClassDefFoundError: com/singularity/ee/agent/appagent/entrypoint/bciengine/FastMethodInterceptorDelegatorBoot, and surprisingly it gives this error when we remove the timeout configuration. Can anyone confirm if they have come across this issue?

09:17:46,228 INFO [stdout] (AD Agent init) Agent will mark node historical at normal shutdown of JVM
09:17:50,323 INFO [stdout] (AD Agent init) Registered app server agent with Node ID[455861] Component ID[6467] Application ID [553]
09:17:56,727 ERROR [stderr] (Reference Reaper #2) Exception in thread "Reference Reaper #2" java.lang.NoClassDefFoundError: com/singularity/ee/agent/appagent/entrypoint/bciengine/FastMethodInterceptorDelegatorBoot
09:17:56,729 ERROR [stderr] (Reference Reaper #2) at org.wildfly.common.ref.References$ReaperThread.run(References.java)
09:17:56,822 ERROR [stderr] (Reference Reaper #1) Exception in thread "Reference Reaper #3" Exception in thread "Reference Reaper #1" java.lang.NoClassDefFoundError: com/singularity/ee/agent/appagent/entrypoint/bciengine/FastMethodInterceptorDelegatorBoot
09:17:56,823 ERROR [stderr] (Reference Reaper #1) at org.wildfly.common.ref.References$ReaperThread.run(References.java)
09:17:56,823 ERROR [stderr] (Reference Reaper #1) Caused by: java.lang.ClassNotFoundException: com.singularity.ee.agent.appagent.entrypoint.bciengine.FastMethodInterceptorDelegatorBoot from [Module "org.wildfly.common" version 1.6.0.Final from local module loader @7a30d1e6 (finder: local module finder @5891e32e (roots: /opt/jboss/modules,/opt/jboss/modules/system/layers/base))]
09:17:56,824 ERROR [stderr] (Reference Reaper #1) at org.jboss.modules.ModuleClassLoader.findClass(ModuleClassLoader.java:200)
09:17:56,824 ERROR [stderr] (Reference Reaper #1) at org.jboss.modules.ConcurrentClassLoader.performLoadClassUnchecked(ConcurrentClassLoader.java:410)
09:17:56,824 ERROR [stderr] (Reference Reaper #1) at org.jboss.modules.ConcurrentClassLoader.performLoadClass(ConcurrentClassLoader.java:398)
09:17:56,825 ERROR [stderr] (Reference Reaper #1) at org.jboss.modules.ConcurrentClassLoader.loadClass(ConcurrentClassLoader.java:116)
09:17:56,825 ERROR [stderr] (Reference Reaper #1) ... 1 more
09:17:56,826 ERROR [stderr] (Reference Reaper #3) java.lang.NoClassDefFoundError: com/singularity/ee/agent/appagent/entrypoint/bciengine/FastMethodInterceptorDelegatorBoot
09:17:56,827 ERROR [stderr] (Reference Reaper #3) at org.wildfly.common.ref.References$ReaperThread.run(References.java)
09:18:05,627 INFO [stdout] (AD Agent init) Started AppDynamics Java Agent Successfully.
09:18:41,038 ERROR [org.xnio.nio] (default I/O-2) XNIO000011: Task org.xnio.nio.WorkerThread$SynchTask@191c0b4b failed with an exception: java.lang.NoClassDefFoundError: com/singularity/ee/agent/appagent/entrypoint/bciengine/FastMethodInterceptorDelegatorBoot
Hi Ryan, I was able to get this by using the sum of calls/min for a particular transaction. This gives the exact call counts, so ADQL was not required. Thanks, Fadil
Hello everyone, I have two individual systems from which I am getting API (GET) responses. I have a requirement to compare the JSON responses we are getting from these two different systems: if the payloads match, mark the result as 'SUCCESS', else 'FAILURE'. I want to build a report based on these results. Can anyone please suggest a possible solution in Splunk, and also let me know what Splunk skills are needed to achieve this requirement? Thanks.
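One common pattern (a sketch under assumptions, not a prescribed Splunk feature) is to run the comparison outside the search pipeline, e.g. in a scripted/modular input or an external script that fetches both endpoints, writes the verdict as an event, and lets Splunk report on it. The comparison step itself might look like this; compare_payloads is a hypothetical helper name:

```python
import json

def compare_payloads(raw_a, raw_b):
    """Parse two JSON strings and compare them structurally.

    Returns 'SUCCESS' if the parsed payloads are equal (key order in
    objects does not matter, since dict equality ignores order),
    else 'FAILURE'. Unparseable input also yields 'FAILURE'.
    """
    try:
        a = json.loads(raw_a)
        b = json.loads(raw_b)
    except (ValueError, TypeError):
        return "FAILURE"
    return "SUCCESS" if a == b else "FAILURE"

# Key order differs, but the payloads are structurally equal
print(compare_payloads('{"id": 1, "name": "x"}', '{"name": "x", "id": 1}'))  # SUCCESS
print(compare_payloads('{"id": 1}', '{"id": 2}'))  # FAILURE
```

Note that list order still matters with this equality check; if the two systems may return arrays in different orders, the lists would need to be normalized (e.g. sorted) before comparing. The resulting SUCCESS/FAILURE events can then be charted with a simple stats count by result search.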
Hey! Drop me a message I will explain You how to achieve that.
The console lines just print diagnostic messages. All modern browsers have a console object, so you can leave them if you want. If you do remove them, an empty catch block won't cause any problems. The rest of the sample code just responds to click events. Looking at the custom_token_links dashboard example, you can see how this is done:   <a href="#" class="btn-pill" data-set-token="show_chart" data-value="show" data-unset-token="show_table"> Show Chart </a>   When you click on Show Chart, the click event handler is called, explicitly passing the data-set-token and data-unset-token values. In this example, data-unset-token is set to show_table, and the click handler sets the show_table token value to undefined. Because the table has a dependency on $show_table$, the table is hidden when show_table is undefined. If you can post a representative copy of the broken dashboard, we can probably fix it.
It worked! How do I remove the decimal (floating point) part?
| stats count(TotalTransaction) as tottrans by Tier Proxy method
| eventstats sum(tottrans) as totaltrans by Tier
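If the goal is one base search feeding both panels, a common alternative pattern (a sketch, assuming SimpleXML post-process searches) is to aggregate at the finest granularity in the base search and re-aggregate per panel, since a post-process search can only see fields the base search returns:

```
Base search:
index=your_index | stats count(TotalTransaction) as tottrans by Tier Proxy method

Panel 1 post-process (roll up to Tier):
| stats sum(tottrans) as tottrans by Tier

Panel 2 post-process: none needed; it uses the base results as-is
```

Summing the per-(Tier, Proxy, method) counts per Tier yields the same number as counting by Tier directly, which is what makes the shared base search work.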
Not visible is not a solution. There is a vulnerability listed on Splunk's site.
Hi, I have 2 panels for which the event flow is high, so I am trying to include the stats command along with the fields command in the base query. I have a field TotalTransaction for which I need these stats values:

| stats count(TotalTransaction) as tottrans by Tier   (for one panel)
| stats count(TotalTransaction) as tottrans by Tier Proxy method   (for the other panel)

How do I get both stats values included in the same query?
I have a basic timechart query that graphs the number of Queries per second (QPS) for several hosts. I need to filter the results to only show hosts that had a change in QPS of + or - 50% at any point. (Show only these two results and drop the others.)

index=metrics host=* | timechart span=5m partial=f limit=0 per_second(Query) as QPS by Site
I am getting an integrity check error on /opt/splunk/bin/python2.7 that says present_but_shouldnt_be. I can find the write-protected file python2.7 at that path. Is this as simple as deleting it, or is there some uninstall step I need to run?