All Posts

Hi @msalghamdi , could you describe your requirement in more detail, ideally with an example? Ciao. Giuseppe
Hi @jroedel , if the Add Data feature doesn't let you use this option, I suppose it isn't possible, even if that's strange. I tried but I get the same result. Ciao. Giuseppe
Hi @Alex_Rus , let me understand: you want to filter events on the Universal Forwarder, is that correct? See the blacklist and whitelist settings in the Splunk_TA_Windows / inputs.conf documentation, which guide you: https://docs.splunk.com/Documentation/Splunk/latest/Admin/Inputsconf#Event_Log_filtering Ciao. Giuseppe
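To make that concrete, here is a minimal sketch of Event Log filtering in inputs.conf on the Universal Forwarder; the stanza, EventCode and regex below are purely illustrative and not from this thread:

[WinEventLog://Security]
disabled = 0
# Drop matching events on the forwarder before they are sent to the indexers
blacklist1 = EventCode="4688" Message="splunkd\.exe"
# blacklist2, blacklist3, ... can filter on further key="regex" pairs

The linked inputs.conf documentation describes the full key="regex" syntax for whitelists and blacklists.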
Hello Splunkers, how can I use a lookup in a correlation search so that the detected keyword is shown in the search result? It's a requirement that the analyst shouldn't have the capability to view lookups. Thanks in advance.
Thanks for your second attempt. I tried, but still no luck. Could it be that the "Add Data" web UI wizard does not support this correctly?
Hi Splunk community! I need to filter events from the Splunk_TA_Windows application by the EventCode, Account_Name and Source_Network_Address fields. Tell me, in what form should props.conf and transforms.conf be written, and in what folder should they be located?
Hi @jroedel , please try this:
TIME_FORMAT=%s,\n\s*\"nanoOfSecond\"\s*:\s*%9N
TIME_PREFIX=\"epochSecond\"\s*:\s*
MAX_TIMESTAMP_LOOKAHEAD=500
Ciao. Giuseppe
I tried, but still no luck  
Hi @jroedel , are you sure about the number of spaces? Please try this:
TIME_FORMAT=%s,\n\s*"nanoOfSecond"\s*:\s*%9N
TIME_PREFIX="epochSecond"\s*:\s*
MAX_TIMESTAMP_LOOKAHEAD=500
Ciao. Giuseppe
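For reference, a minimal sketch of where those settings could sit in a complete props.conf stanza on the parsing tier; the sourcetype name your_json_sourcetype and the app path are assumptions for illustration:

[your_json_sourcetype]
# Read epochSecond plus nanoOfSecond for subsecond timestamp precision
TIME_PREFIX = "epochSecond"\s*:\s*
TIME_FORMAT = %s,\n\s*"nanoOfSecond"\s*:\s*%9N
MAX_TIMESTAMP_LOOKAHEAD = 500

This would typically live in $SPLUNK_HOME/etc/apps/<your_app>/local/props.conf on the indexer or heavy forwarder that parses the data.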
After upgrading Splunk from version 8 to 9 I've started to receive messages: "The Upgrade Readiness App detected 1 app with deprecated Python: splunk-rolling-upgrade". I can't find this app on Splunkbase. As far as I understand it's a Splunk built-in app? Should I delete it, or how else can I resolve this issue? Please help.
I have to parse the timestamp of JSON logs and I would like to include subsecond precision. My JSON events start like this:

{ "instant" : { "epochSecond" : 1727189281, "nanoOfSecond" : 202684061 }, ...

Thus I tried this config in props.conf:

TIME_FORMAT=%s,\n "nanoOfSecond" : %9N
TIME_PREFIX="epochSecond" :\s
MAX_TIMESTAMP_LOOKAHEAD=500

Unfortunately, that did not work. What is the right way to parse this timestamp with subsecond precision?
How can we send a file as input to an API endpoint from custom SPL commands developed for both Splunk Enterprise and Splunk Cloud, ensuring the API endpoint returns the desired enrichment details?
I agree with what @KendallW shared; it's hard to comment without checking the actual data, but this type of ERROR mainly happens due to a mismatch in timestamps.
Hi, regarding test 1, your assumption is correct. Regarding test 2: if the test is executed at 11:00 am, for example, and fails at that time, the alert is triggered immediately after the failed execution, once the configured trigger threshold is reached. If the test is successful at 11:00 am and the next execution of the test fails at 11:30 am, the alert is again triggered immediately after that failed execution, once the configured trigger threshold is reached.
I have provided the sample data. I have huge data, a few thousand lines, which is pushed to Splunk. The query should be generic enough to accept any data size. It's not just 10 values.
I faced this issue and found that server.pem under /etc/auth had expired.
1) Renamed server.pem
2) Ran splunk restart
3) A new cert was generated with a 3-year extension on the expiry date.
Do not change any Java settings if it was working before and suddenly stopped working; check the cert expiry first.
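As a rough sketch of those steps on the command line (paths assume a default $SPLUNK_HOME, and the backup file name is just an example):

# Keep the expired certificate as a backup; Splunk regenerates server.pem on restart
mv $SPLUNK_HOME/etc/auth/server.pem $SPLUNK_HOME/etc/auth/server.pem.expired
# Restart Splunk so a new self-signed certificate is created
$SPLUNK_HOME/bin/splunk restart
# Check the new expiry date
openssl x509 -noout -enddate -in $SPLUNK_HOME/etc/auth/server.pem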
You need to go back to the four golden rules of asking an answerable analytical question that I call the 4 Commandments:
1. Illustrate data input (in raw text, anonymize as needed), whether they are raw events or output from a search that volunteers here do not have to look at.
2. Illustrate the desired output from the illustrated data.
3. Explain the logic between the illustrated data and the desired output without SPL.
4. If you also illustrate attempted SPL, illustrate the actual output and compare it with the desired output; explain why they look different to you if that is not painfully obvious.
Hi Akshay.Nimbal, thank you for posting to the community. It looks like you're encountering a similar issue ("Exception in thread 'Reference Reaper #2' java.lang.NoClassDefFoundError: com/singularity/ee/agent/appagent/entrypoint/bciengine/FastMethodInterceptorDelegatorBoot") to one that's been discussed in a related community post. You can check out the troubleshooting steps there: java.lang.NoClassDefFoundError: com/singularity/ee/agent/appagent/entrypoint/bciengine/FastMethodInt
Also, just a heads-up: depending on your framework, there are some startup settings required. For JBoss or WildFly, you need to ensure that the Java Agent and the log manager packages are included in the server startup routine. This is documented here: https://docs.appdynamics.com/appd/onprem/24.x/24.9/en/application-monitoring/install-app-server-agents/java-agent/install-the-java-agent/agent-installation-by-java-framework/jboss-and-wildfly-startup-settings
I hope this reference helps. However, let me know if the issue persists. I'd be happy to assist further. Martina
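As an illustrative sketch only (the agent path is made up, and the exact properties depend on your JBoss/WildFly version, so follow the linked documentation), the startup additions usually end up in standalone.conf along these lines:

# Attach the AppDynamics Java Agent
JAVA_OPTS="$JAVA_OPTS -javaagent:/opt/appdynamics/javaagent.jar"
# Make the agent and JBoss log manager packages visible to the module class loader
JAVA_OPTS="$JAVA_OPTS -Djboss.modules.system.pkgs=org.jboss.logmanager,com.singularity"
JAVA_OPTS="$JAVA_OPTS -Djava.util.logging.manager=org.jboss.logmanager.LogManager"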
@sainag_splunk's solution should work.  A less literal, but more traditional way to do this is

| stats dc(ServerName) as count by UpgradeStatus
| eventstats sum(count) as total
| eval count = count . " (" . round(count / total * 100) . "%)"
| fields - total
| transpose header_field=UpgradeStatus
| fields - column

Here is an emulation

| makeresults format=csv data="ServerName, UpgradeStatus
Server1, Completed
Server2, Completed
Server3, Completed
Server4, Completed
Server5, Completed
Server6, Completed
Server7, Pending
Server8, Pending
Server9, Pending
Server10, Pending"
| stats dc(ServerName) as count by UpgradeStatus
| eventstats sum(count) as total
| eval count = count . " (" . round(count / total * 100) . "%)"
| fields - total
| transpose header_field=UpgradeStatus
| fields - column