All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I faced this issue and found that server.pem under /etc/auth had expired. 1) Renamed server.pem, 2) ran splunk restart, 3) a new cert was generated with the expiry date extended by three years. Do not change any Java settings if it was working before and suddenly stopped working; check the cert expiry first.
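The expiry check itself can be scripted. A minimal sketch, assuming a default install under /opt/splunk (adjust SPLUNK_HOME to match your environment):

```shell
# Print the expiry date of Splunk's default server certificate.
SPLUNK_HOME="${SPLUNK_HOME:-/opt/splunk}"
CERT="$SPLUNK_HOME/etc/auth/server.pem"
openssl x509 -enddate -noout -in "$CERT"

# openssl's -checkend flag exits non-zero if the cert expires within
# the given number of seconds (0 = already expired right now).
if openssl x509 -checkend 0 -noout -in "$CERT"; then
  echo "certificate still valid"
else
  echo "certificate expired - rename server.pem and restart splunkd"
fi
```

This only inspects the default self-signed cert; if you have replaced it with your own certificates, point the check at those instead.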
You need to go back to the four golden rules of asking an answerable analytical question, which I call the 4 Commandments: 1) Illustrate the data input (in raw text, anonymized as needed), whether it is raw events or output from a search that volunteers here do not have to look at. 2) Illustrate the desired output from the illustrated data. 3) Explain the logic connecting the illustrated data and the desired output, without SPL. 4) If you also illustrate attempted SPL, illustrate its actual output, compare it with the desired output, and explain why they look different to you if that is not painfully obvious.
Hi Akshay.Nimbal, Thank you for posting to the community. It looks like you're encountering a similar issue ("Exception in thread 'Reference Reaper #2' java.lang.NoClassDefFoundError: com/singularity/ee/agent/appagent/entrypoint/bciengine/FastMethodInterceptorDelegatorBoot") to one discussed in a related community post. You can check out the troubleshooting steps here: java.lang.NoClassDefFoundError: com/singularity/ee/agent/appagent/entrypoint/bciengine/FastMethodInt‌ Also, just a heads-up: depending on your framework, there are some startup settings required. For JBoss or WildFly, you need to ensure that the Java Agent and the log manager packages are included in the server startup routine. This is documented here: https://docs.appdynamics.com/appd/onprem/24.x/24.9/en/application-monitoring/install-app-server-agents/java-agent/install-the-java-agent/agent-installation-by-java-framework/jboss-and-wildfly-startup-settings‌ I hope this reference helps. If the issue persists, let me know; I'd be happy to assist further. Martina
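As a rough illustration of what those JBoss/WildFly startup settings look like, here is a sketch based on the linked doc; the agent path is a placeholder you would replace with your own, and you should follow the documented steps for your exact version:

```
# bin/standalone.conf (Linux) - illustrative additions only
JAVA_OPTS="$JAVA_OPTS -javaagent:/opt/appdynamics/javaagent/javaagent.jar"
JAVA_OPTS="$JAVA_OPTS -Djboss.modules.system.pkgs=org.jboss.byteman,com.singularity"
JAVA_OPTS="$JAVA_OPTS -Djava.util.logging.manager=org.jboss.logmanager.LogManager"
```

The key points are that the agent jar is attached with -javaagent and that com.singularity is added to the JBoss module system packages so the agent's classes stay visible to the container's classloaders.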
@sainag_splunk's solution should work. A less literal, but more traditional, way to do this is

| stats dc(ServerName) as count by UpgradeStatus
| eventstats sum(count) as total
| eval count = count . " (" . round(count / total * 100) . "%)"
| fields - total
| transpose header_field=UpgradeStatus
| fields - column

Here is an emulation:

| makeresults format=csv data="ServerName, UpgradeStatus
Server1, Completed
Server2, Completed
Server3, Completed
Server4, Completed
Server5, Completed
Server6, Completed
Server7, Pending
Server8, Pending
Server9, Pending
Server10, Pending"
| stats dc(ServerName) as count by UpgradeStatus
| eventstats sum(count) as total
| eval count = count . " (" . round(count / total * 100) . "%)"
| fields - total
| transpose header_field=UpgradeStatus
| fields - column
Sorry for not providing enough information earlier. We are running 5 jobs daily in our system, but some Jenkins job data is not getting reported back to Splunk. Out of 5, Splunk shows only 3 jobs with a query like

index=jenkins_statistics (host=abc.com/*) event_tag=job_event type=completed job_name="*abc/develop*" | stats count by job_name, type

If we remove type from the above query, we get more data, which tells us that some jobs are marked as started but Splunk never receives the completed event for the same job; hence the discrepancy. So I just wanted to check: is there guaranteed delivery for Splunk events from Jenkins to Splunk? As I understand it, events are sent to a Splunk HTTP Event Collector (HEC) endpoint fire-and-forget. Either some of the events are getting dropped at that level, or there is a bug somewhere in the https://plugins.jenkins.io/splunk-devops plugin that is causing events to get missed.
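On the delivery question: plain HEC traffic is indeed fire-and-forget from the sender's side unless indexer acknowledgment is enabled on the token. A sketch of the relevant inputs.conf stanza on the HEC-receiving instance (the stanza name and token value here are placeholders):

```
# inputs.conf - hypothetical HEC token for Jenkins, with acknowledgment
[http://jenkins]
token = <your-token-guid>
useACK = 1
```

With useACK enabled, each request returns an ackId that the client can poll to confirm the events were actually indexed rather than just received; whether the splunk-devops plugin makes use of this is worth verifying in its documentation.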
 https://docs.appdynamics.com/appd/24.x/latest/en/database-visibility/administer-the-database-agent/install-the-database-agent java -Djava.library.path="<db_agent_home>\auth\x64" -Ddbagent.name="Scarborough Network Database Agent" -DLog4jContextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector -jar <db_agent_home>\db-agent.jar
We have a Microsoft SQL database stored procedure (SP) with a few hundred lines. When we tried to analyze the content of this SP under the controller tab Databases | Queries | Query Details | Query, the query text got truncated. Is there a setting that can increase the captured SQL text size? The controller build is 24.6.3; the DB Agent version is 23.6.0.0.
How did you correct it?  Please share to help others.
Hi @timothylindt  My two cents here: 1) If the Salesforce Add-on does not support FIPS, the Splunk docs team should have noted that in the add-on docs (the opposite is what usually happens: when it is supported, they often forget to mention it). 2) If this issue had impacted previous Splunk admins, they would likely have asked this question already, as Salesforce is a popular app. 3) If FIPS is not supported by the add-on, you can contact Splunk Support, since this is a Splunk-supported add-on. Some details for other Splunk newbies about FIPS: FIPS - Federal Information Processing Standards; THP - Transparent Huge Pages (mentioned because some users confuse FIPS and THP). How to enable FIPS, how to verify it, etc.: https://docs.splunk.com/Documentation/Splunk/9.3.1/Security/SecuringSplunkEnterprisewithFIPS
I finally identified the mistake I was making, and the issue has been resolved. Thank you so much for your response!
Either edit the search from the panel in dashboard edit mode, or use XML entity escapes (e.g. &lt; for <) in the XML source.
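For example, a sed-style rex like the one in this thread would look like this once escaped in the dashboard's XML source (the query body here is just an illustration):

```
<query>| rex mode=sed field=origData "s/&lt;(?&lt;!)/ &lt;/g s/&gt;(?=&lt;)/&gt; /g"</query>
```

Alternatively, the whole search can be wrapped in a CDATA section (<query><![CDATA[ ... ]]></query>) so the angle brackets need no escaping at all.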
It's impossible to answer such a question without knowing your data and your environment. You can start debugging by checking which jobs were started and verifying whether you can find a corresponding job-completed event for them. If so, check whether that data is in a different format or whether your extractions properly match the fields. If not, check your ingestion pipeline to see why events are missing.
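A sketch of that first debugging step in SPL, assuming the field names from the thread (job_name, type) plus a hypothetical build_number field to pair up individual runs:

```
index=jenkins_statistics event_tag=job_event job_name="*abc/develop*" (type=started OR type=completed)
| stats values(type) as seen_types by job_name, build_number
| where isnull(mvfind(seen_types, "completed"))
```

This lists runs that produced a started event but no matching completed event, which is the set worth tracing through the ingestion pipeline.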
You can do more or less the same thing with rex alone: https://regex101.com/r/KFMlCd/1

EDIT: Ready Splunk version:

| makeresults
| eval origData="<?xml version='1.0' encoding='UTF-8'?><BAD_MSG><violation_masks><block>58f7c3e96a0c279b-7e3f5f28b0000040</block><alarm>5cf7c3e97b0c6fdb-7e3f5f28b0000040</alarm><learn>5cf2c1e9730c2f5b-3d3c000830000000</learn><staging>0-0</staging></violation_masks><response_violations><violation><viol_index>56</viol_index><viol_name>VIOL_HTTP_RESPONSE_STATUS</viol_name><response_code>500</response_code></violation></response_violations></BAD_MSG>"
| rex field=origData mode=sed "s/<([^\\/][^>]+)>(?=.*<\\/\\1>)/\n<\1>/g"
| rex field=origData mode=sed "s/><\//>\n<\//g"

It doesn't indent, though.
Hi @marnall , I had set web.conf to   [settings] cacheEntriesLimit = 0 cacheBytesLimit = 0 js_no_cache = 1   .. no difference. 
We are using a Splunk forwarder to forward the Jenkins data to Splunk, and noticed that Splunk does not display all the data. Here is an example:

index=jenkins_statistics (host=abc.com/*) event_tag=job_event job_name="*abc/develop*" | stats count by job_name, type

returns completed = 74 and started = 118. Ideally, whatever is started should also be completed, so can you help me figure out what the problem could be?
Hi @ITWhisperer, the request is great! It works fine in the search, indeed. Unfortunately, it doesn't work within a dashboard's source code: the first line is highlighted with the message "unencoded <".

| rex mode=sed field=origData "s/<(?<!)/ </g s/>(?=<)/> /g"

Is there a way to make the request understandable in the dashboard UI?
It is a Splunk Supported add-on, so if you have a support contract then you could ask them.
In web.conf, there are some cache-related settings that might work to disable either the caching of views, or the cache entirely. https://docs.splunk.com/Documentation/Splunk/latest/Admin/Webconf max_view_cache_size = <integer> * The maximum number of views to cache in the appserver. * Default: 1000 cacheBytesLimit = <integer> * Splunkd can keep a small cache of static web assets in memory. When the total size of the objects in cache grows larger than this setting, in bytes, splunkd begins ageing entries out of the cache. * If set to zero, disables the cache. * Default: 4194304  
This might work: <yoursearch> | eval <yourdisplayedtimefield> = strftime(<youroriginaltimefield>, "%B %e, %Y") And here is a good reference website for picking the string format characters: https://strftime.net/
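The same conversion specifiers behave like GNU date's, so you can preview a format string on the command line before putting it in the eval. A quick sketch (assuming GNU date and an arbitrary example epoch timestamp):

```shell
# Preview the strftime format "%B %e, %Y" (full month name, day, year)
# against an example epoch time, pinned to UTC for reproducibility.
TZ=UTC date -d @1700000000 +"%B %e, %Y"
# -> November 14, 2023
```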