All Posts

Hi, it's like @VatsalJagani said: when you do not set an explicit end time for your search but you do set earliest, Splunk uses latest=now. You can look up the earliest and latest values your alerts actually used in the _audit index. But I suppose that even then you will not always get exactly the same results. Why does this happen? When you ingest data there are always some delays — it could be less than a second, several minutes, or even longer, depending on your environment, your log sources, and how they are integrated into Splunk. For that reason you should always use suitable earliest and latest values with suitable buffers on every alert, for example earliest=-65m@m latest=-5m@m to leave a five-minute buffer for late-arriving events. And if there are inputs whose latency varies too much, you probably need to create two series of alerts for them: one that tries to catch events as close to real time as possible, and a second one that takes care of the late-arriving events the near-real-time alert missed. r. Ismo
Hi, as @VatsalJagani already said, that error message is not related to your login issue. It just tells you that DB Connect did not work because the KV store is somehow broken or stopped. splunkd.log should contain some lines that could help us see what the real issue was. But let's start with the migration part, as it is quite obvious that it has something to do with this issue. From where did you migrate, and what is the target environment? How did you do the migration? Were there any issues before the migration? Anything else we should know? r. Ismo
@Sathish28
1. Check the status of the KV Store.
2. Verify the status of the KV Store service:
   ./splunk show kvstore-status
3. Check mongod.log:
   less /opt/splunk/var/log/splunk/mongod.log
4. Verify that the permissions for the KV Store directories and files are set correctly. Incorrect permissions can prevent the KV Store from initializing. Set splunk.key to the default file permission:
   chmod 600 $SPLUNK_HOME/var/lib/splunk/kvstore/mongo/splunk.key
5. Restart Splunk.
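As a quick sanity check, the permission test in step 4 can be scripted. A minimal sketch in Python — the path shown is the default $SPLUNK_HOME location and is an assumption about your install:

```python
import os
import stat

def splunk_key_mode_ok(path):
    """Return True if the file is readable/writable by its owner only
    (mode 0600), the permission the KV Store expects on splunk.key."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return mode == 0o600

# Example usage: check the default location (adjust for your install)
# print(splunk_key_mode_ok("/opt/splunk/var/lib/splunk/kvstore/mongo/splunk.key"))
```

If this returns False, apply the chmod from step 4 and restart Splunk.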
@Sathish28 - A few things I want to bring to your attention. The error you are seeing is not related to the login issue you are having at all.

For the login issue: are you using LDAP credentials? Log in first with a native Splunk admin account, then fix the LDAP-related issue; check Splunk's internal logs and the LDAP configuration page. Is it Splunk native authentication? Then you might need to reset the credentials.

For the mongod-related errors you are seeing in the logs: as suggested by @splunkreal, please check Splunk's internal logs to find the details on why the mongod service is unable to start.

I hope this helps! Kindly upvote if it does!
@AShwin1119 - I think it's the same question I have answered here: https://community.splunk.com/t5/Monitoring-Splunk/indexer-cluster-to-SH-cluster-replication-issue/m-p/709746/highlight/true#M10687

I think you are not forwarding the SH data to the indexers, which is mandatory when you are using a SHC, and a best practice on all SHs anyway.
https://docs.splunk.com/Documentation/Splunk/9.4.0/DistSearch/Forwardsearchheaddata

I hope this helps! Kindly upvote if it does!
@CrossWordKnower - When you say earliest=-8h@h, latest becomes now because you are not providing it. So the number of results differs even when you re-run the search manually, because each run picks up the new events coming in.

Try using static values for earliest and latest, for example earliest=01/22/2025:00:00:00 latest=01/23/2025:00:00:00. In that scenario it should give exactly the same count, regardless of whether it is a manual search or an alert, and whenever you run it.

I hope this is understandable. Kindly upvote if it helps!
@welcomerrr - As @PickleRick described, you cannot create a custom function for the stats command, but you can create a whole new custom search command that implements the functionality in Python.

That said, most requirements can be fulfilled with the existing stats command functions. Kindly describe your exact use case, and the community should be able to help you write the query without a custom command or function.
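If you do go the custom-command route, it can help to prototype the aggregation logic in plain Python first, before wiring it into Splunk's custom search command framework. A hypothetical example — a "midrange" aggregation (the midpoint between min and max) that stats does not offer directly:

```python
def midrange(values):
    """Hypothetical custom aggregation: the midpoint between the
    minimum and maximum of a list of numeric field values."""
    nums = [float(v) for v in values]
    return (min(nums) + max(nums)) / 2

# Grouped like "| stats <agg> by host" — a dict of host -> field values
events = {"web-01": [10, 30, 50], "web-02": [5, 7]}
per_host = {host: midrange(vals) for host, vals in events.items()}
# per_host -> {"web-01": 30.0, "web-02": 6.0}
```

Once the logic is right, it can be dropped into a reporting-style custom command.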
@karn - As @dolezelk answered your question, if that sounds okay, kindly accept the answer by clicking "Accept as Solution" so that future community users can benefit from it as well.

Community Moderator, Vatsal
@AShwin1119 - I think you are not forwarding the SH data to the indexers, which is mandatory when you are using a SHC, and a best practice on all SHs anyway.
https://docs.splunk.com/Documentation/Splunk/9.4.0/DistSearch/Forwardsearchheaddata

I hope this helps! Please upvote if it helps!
@htidore
1. Try clearing all *.splunk.com cookies.
2. Since it works from your office network but not from your desktop, there might be a network-related issue. Check whether any firewall or proxy settings on your desktop might be blocking access to Splunk.
If possible, try using a VPN to connect to your office network and see if that resolves the issue. This can help determine whether the problem is related to your network configuration.
@AShwin1119 First, it's important to understand that the data needs to replicate properly across the indexers. When you search from a search head, it doesn't query the indexers blindly: the search head first contacts the cluster master, which knows which indexers and bucket copies are available, and then retrieves results from them. If the replication and search factors are correctly configured on the cluster master, your environment should be functioning properly.

The data may be indexed on one indexer but not yet fully replicated across all indexers in the cluster. If buckets are not replicated and made searchable in a timely manner, you may see discrepancies in event counts between searches run on different search heads.

Please monitor your environment using the Monitoring Console, including the search heads, indexers, and other components. How can you ensure that the same notable events are visible across all search heads? If possible, could you provide a screenshot?
I always get 403 Forbidden when logging in to www.splunk.com. However, when I log in from the office network, it is OK. This is very frustrating: I cannot access the UF and the latest Splunk Enterprise downloads from my desktop. The funny thing is, after I get the 403 Forbidden, if I go to docs.splunk.com I can see that I am actually logged in. But when I try to go to other pages, I get 403 Forbidden again.
We have a SH cluster with 3 SHs searching an indexer cluster with 3 indexers. The problem is that the data present on the indexers does not show up consistently on all 3 SHs: for example, if we check the last 15 minutes of _internal data on each SH, the number of events differs by 1k to 5k. Dashboards created on one SH do replicate properly between the SHs. Because of this issue, Enterprise Security notables show up differently on each SH.
Hi,
If you send an OpenTelemetry-formatted span that contains span links, it should show up in Observability Cloud. To get this to work in the real world, you'll probably need to manually instrument your source code.

Here is an example JSON payload, saved to a file span.otel.json and sent to a local OTel Collector with:

curl -X POST http://localhost:4318/v1/traces -H "Content-Type: application/json" -d @span.otel.json

{
  "resourceSpans": [
    {
      "resource": {
        "attributes": [
          { "key": "service.name", "value": { "stringValue": "example-service" } },
          { "key": "service.instance.id", "value": { "stringValue": "instance-12345" } }
        ]
      },
      "scopeSpans": [
        {
          "scope": { "name": "example-tracer", "version": "1.0.0" },
          "spans": [
            {
              "traceId": "1e223ff1f80f1c69f8f0b81c1a2d32ad",
              "spanId": "6f9b3f4d1de5bf5e",
              "parentSpanId": "",
              "name": "example-operation",
              "kind": "SPAN_KIND_SERVER",
              "startTimeUnixNano": 1706100243452000000,
              "endTimeUnixNano": 1706100244952000000,
              "attributes": [
                { "key": "http.method", "value": { "stringValue": "GET" } },
                { "key": "http.url", "value": { "stringValue": "http://example.com/api" } }
              ],
              "links": [
                {
                  "traceId": "1e223ff1f80f1c69f8f0b81c1a2d32ae",
                  "spanId": "9a7b3e4f1dceaa56",
                  "attributes": [
                    { "key": "link-type", "value": { "stringValue": "related" } }
                  ],
                  "droppedAttributesCount": 0
                }
              ]
            }
          ]
        }
      ]
    }
  ]
}
You're an absolute genius! Thank you so much. I knew there had to be something I was missing, and it was as simple as you say. I made the transform using the regex I already knew worked and then referenced it in a field extraction. Worked like a charm.
Thank you @kgorzynski! The updated information provides much-needed clarity.
You haven't answered key questions from me and @bowesmana. Without SPL, what do you use to count the number of sensors per host (if the total number of events is not the answer)? Let me repeat the four commandments of asking answerable questions in this forum:
1. Illustrate the data input (in raw text, anonymized as needed), whether it is raw events or the output from a search (SPL that volunteers here do not have to look at).
2. Illustrate the desired output from the illustrated data.
3. Explain the logic between the illustrated data and the desired output without SPL.
4. If you also illustrate attempted SPL, illustrate its actual output, compare it with the desired output, and explain why they look different to you if that is not painfully obvious.
It goes back to the community license. Tested that.
It is very unpleasant that it officially does not support RHEL 9. It has been 3 years since RHEL 9 was released, and still nothing. This might be a reason why customers are discouraged from using Splunk's SOAR solution.
You can't create a custom aggregation function for stats. You can create your own command though. https://dev.splunk.com/enterprise/docs/devtools/customsearchcommands/