All Posts


Hi @jroedel, are you sure about the number of spaces? Please try this:

TIME_FORMAT=%s,\n\s*"nanoOfSecond"\s*:\s*%9N
TIME_PREFIX="epochSecond"\s*:\s*
MAX_TIMESTAMP_LOOKAHEAD=500

Ciao. Giuseppe
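For reference, those settings pulled together into a complete props.conf stanza might look like the sketch below. The stanza name my_json_sourcetype is a placeholder for your actual sourcetype, and timestamp settings like these take effect on the first full parsing tier (indexers or a heavy forwarder), not on a universal forwarder.

[my_json_sourcetype]
TIME_PREFIX="epochSecond"\s*:\s*
TIME_FORMAT=%s,\n\s*"nanoOfSecond"\s*:\s*%9N
MAX_TIMESTAMP_LOOKAHEAD=500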
After upgrading Splunk from version 8 to version 9, I've started to receive messages: "The Upgrade Readiness App detected 1 app with deprecated Python: splunk-rolling-upgrade". I can't find this app on Splunkbase | apps. As far as I understand it's a Splunk built-in app? Should I delete it, or how can I resolve this issue? Please help.
I have to parse the timestamp of JSON logs and I would like to include subsecond precision. My JSON events start like this:

{ "instant" : { "epochSecond" : 1727189281, "nanoOfSecond" : 202684061 }, ...

Thus I tried this config in props.conf:

TIME_FORMAT=%s,\n "nanoOfSecond" : %9N
TIME_PREFIX="epochSecond" :\s
MAX_TIMESTAMP_LOOKAHEAD=500

Unfortunately, that did not work. What is the right way to parse this timestamp with subsecond precision?
How can we send a file as input to an API endpoint from custom SPL commands developed for both Splunk Enterprise and Splunk Cloud, ensuring the API endpoint returns the desired enrichment details?
I agree with what @KendallW shared; it's hard to comment without checking the actual data, but this type of error mainly happens due to a mismatch in timestamps.
Hi, regarding test 1 your assumption is correct. Regarding test 2: if the test is executed at 11:00 am, for example, and fails at that time, the alert will be triggered immediately after the failed execution, once the configured trigger threshold is reached at that time. If the test is successful at 11:00 am and the next execution of the test fails at 11:30 am, the alert will be triggered immediately after that failed execution, once the configured trigger threshold is reached.
I have provided the sample data. I have huge data, a few thousand lines, which is pushed to Splunk. The query should be generic to accept any data size; it's not just 10 values.
I faced this issue and found that server.pem under /etc/auth had expired.
1) Renamed server.pem
2) Ran splunk restart
3) A new cert got generated with the expiry date extended by 3 years
Do not change any Java settings if it was working before and suddenly stopped working; check the cert expiry first.
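If it helps, one quick way to check the expiry before renaming anything (a sketch assuming a default install layout; adjust $SPLUNK_HOME for your environment) is to read the notAfter date with the bundled OpenSSL:

$SPLUNK_HOME/bin/splunk cmd openssl x509 -enddate -noout -in $SPLUNK_HOME/etc/auth/server.pem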
You need to go back to the four golden rules of asking an answerable analytical question, which I call the 4 Commandments:
1. Illustrate the data input (in raw text, anonymized as needed), whether it is raw events or the output from a search that volunteers here do not have to look at.
2. Illustrate the desired output from the illustrated data.
3. Explain the logic between the illustrated data and the desired output without SPL.
4. If you also illustrate attempted SPL, illustrate its actual output, compare it with the desired output, and explain why they look different to you if that is not painfully obvious.
Hi Akshay.Nimbal, thank you for posting to the community. It looks like you're encountering a similar issue ("Exception in thread 'Reference Reaper #2' java.lang.NoClassDefFoundError: com/singularity/ee/agent/appagent/entrypoint/bciengine/FastMethodInterceptorDelegatorBoot") to one that's been discussed in a related community post. You can check out the troubleshooting steps here: java.lang.NoClassDefFoundError: com/singularity/ee/agent/appagent/entrypoint/bciengine/FastMethodInterceptorDelegatorBoot

Also, just a heads-up: depending on your framework, there are some startup settings required. For JBoss or WildFly, you need to ensure that the Java Agent and the log manager packages are included in the server startup routine. This is documented here: https://docs.appdynamics.com/appd/onprem/24.x/24.9/en/application-monitoring/install-app-server-agents/java-agent/install-the-java-agent/agent-installation-by-java-framework/jboss-and-wildfly-startup-settings

I hope this reference helps. However, let me know if the issue persists. I'd be happy to assist further. Martina
@sainag_splunk's solution should work. A less literal, but more traditional way to do this is

| stats dc(ServerName) as count by UpgradeStatus
| eventstats sum(count) as total
| eval count = count . " (" . round(count / total * 100) . "%)"
| fields - total
| transpose header_field=UpgradeStatus
| fields - column

Here is an emulation:

| makeresults format=csv data="ServerName, UpgradeStatus
Server1, Completed
Server2, Completed
Server3, Completed
Server4, Completed
Server5, Completed
Server6, Completed
Server7, Pending
Server8, Pending
Server9, Pending
Server10, Pending"
| stats dc(ServerName) as count by UpgradeStatus
| eventstats sum(count) as total
| eval count = count . " (" . round(count / total * 100) . "%)"
| fields - total
| transpose header_field=UpgradeStatus
| fields - column
Sorry for not providing enough information earlier. We are running 5 jobs daily in our system, but we are seeing that some Jenkins job data is not getting reported back to Splunk. Out of the 5, Splunk shows only 3 jobs if we use a query like

index=jenkins_statistics (host=abc.com/*) event_tag=job_event type=completed job_name="*abc/develop*"
| stats count by job_name, type

If we remove the type from the above query, we get more data, which tells us that some jobs are marked as started but Splunk is not getting the completed event for the same job, hence the data discrepancies. So I just wanted to check: do we have guaranteed delivery for events from Jenkins to Splunk? As per my understanding, events are sent to a Splunk HTTP Event Collector (HEC) endpoint and they are sent fire-and-forget. Either some of the events are getting dropped at that level, or there is a bug somewhere in the https://plugins.jenkins.io/splunk-devops plugin that is causing events to get missed.
https://docs.appdynamics.com/appd/24.x/latest/en/database-visibility/administer-the-database-agent/install-the-database-agent

java -Djava.library.path="<db_agent_home>\auth\x64" -Ddbagent.name="Scarborough Network Database Agent" -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector -jar <db_agent_home>\db-agent.jar
We have a Microsoft SQL database SP (stored procedure) with a few hundred lines. When we tried to analyze the content of this SP under the controller tab Databases | Queries | Query Details | Query, the query text got truncated. Is there a setting that can increase the captured SQL text size? The controller build is 24.6.3, and the DB Agent version is 23.6.0.0.
How did you correct it?  Please share to help others.
Hi @timothylindt  My two cents here:
1) If the Salesforce Add-on does not support FIPS, then the Splunk doc team should have noted it in the add-on docs (the opposite is what happened in this case - if it is supported, they often forget to mention it).
2) If this issue had impacted previous Splunk admins, they would have asked this question already (as the Salesforce add-on is popular).
3) If FIPS is not supported by the add-on, you can contact Splunk Support, as the add-on is a Splunk-supported add-on.

Some details for other Splunk newbies about FIPS:
FIPS - Federal Information Processing Standards
THP - Transparent Huge Pages (as some users may confuse FIPS with THP)
How to enable FIPS, how to verify it, etc.: https://docs.splunk.com/Documentation/Splunk/9.3.1/Security/SecuringSplunkEnterprisewithFIPS
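As a quick illustration of that last doc link, enabling FIPS comes down to one setting (a sketch only - per the docs, FIPS has to be set in splunk-launch.conf before Splunk starts for the first time, so this is not something to flip on an existing installation):

# $SPLUNK_HOME/etc/splunk-launch.conf
SPLUNK_FIPS=1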
I finally identified the mistake I was making, and the issue has been resolved. Thank you for your response!
I finally identified the mistake I was making, and the issue has been resolved. Thank you so much for your response!
Either edit the search from the panel in dashboard edit mode or use &amp; in the XML source
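For example, in the Simple XML source an ampersand inside a search has to be written as &amp; (this is a minimal, made-up panel just to show the escaping; the dashboard editor handles it for you when you edit the search from the panel):

<panel>
  <table>
    <search>
      <query>index=web uri="/checkout?step=2&amp;user=*" | stats count</query>
    </search>
  </table>
</panel>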