All Posts

@welcomerrr - As @PickleRick described, you cannot create a custom function for the stats command, but you can create a whole new custom search command that implements the functionality in Python. That said, most requirements you might have should be achievable with the existing stats functions. Please describe your exact use case and the community should be able to help you write the query without a custom command or function.
@karn - @dolezelk has answered your question. If that works for you, please accept the answer by clicking "Accept as Solution" so future community users can benefit from it as well. Community Moderator, Vatsal
@AShwin1119 - I think you are not forwarding the search head (SH) data to the indexers.
* This is compulsory when you are using a SHC.
* It is also a best practice for all SHs.
https://docs.splunk.com/Documentation/Splunk/9.4.0/DistSearch/Forwardsearchheaddata
I hope this helps! Please upvote if it does!
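For reference, a minimal sketch of the outputs.conf that forwards a search head's own data to the indexers, following the documented stanzas from the link above (hostnames and ports are placeholders):

# outputs.conf on each search head
[indexAndForward]
index = false

[tcpout]
defaultGroup = primary_indexers
forwardedindex.filter.disable = true
indexAndForward = false

[tcpout:primary_indexers]
server = idx1.example.com:9997,idx2.example.com:9997,idx3.example.com:9997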
@htidore
1. Try clearing all *.splunk.com cookies.
2. Since it works from your office network but not from your desktop, there might be a network-related issue. Check if there are any firewall or proxy settings on your desktop that might be blocking access to Splunk. If possible, try using a VPN to connect to your office network and see if that resolves the issue. This can help determine if the problem is related to your network configuration.
@AShwin1119 First, it's important to understand that the data needs to replicate properly across the indexers. When you search for data from the search head, it doesn't directly query the indexers. Instead, the search head first contacts the cluster master, which checks which indexers are available and retrieves the results from them. If the replication and search factors are correctly configured on the cluster master, your environment should be functioning properly.

The data may be indexed on one indexer but not fully replicated across all indexers in the cluster or between the SHs. If the indexers are not properly replicating data to all search heads in a timely manner, you may see discrepancies in event counts when searching.

Please monitor your environment using the Monitoring Console, including the search heads, indexers, and other components. How can you ensure that the same notable events are visible across all search heads? If possible, could you provide a screenshot?
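For reference, the replication and search factors mentioned above are set on the cluster manager in server.conf; a minimal sketch (the values here are illustrative, not a recommendation):

# server.conf on the cluster manager
[clustering]
mode = manager
replication_factor = 3
search_factor = 2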
I always get 403 Forbidden when logging in to www.splunk.com. However, when I log in from the office network, it is OK. This is very frustrating. I cannot access the UF and the latest Splunk Enterprise from my desktop. The funny thing is, after I get the 403 Forbidden, I tried going to docs.splunk.com and I can see that I am actually logged in. But when I try to go to other pages, I get 403 Forbidden.
We have a SH cluster with 3 SHs, which searches an indexer cluster with 3 indexers. The problem is that the data on each indexer is not showing up consistently on all 3 SHs; for example, if we check the last 15 minutes of _internal data on each SH, the event counts differ by 1k to 5k. And if I create a dashboard on one SH, it replicates properly between the SHs. Because of this issue, Enterprise Security notables show differently on each SH.
Hi, If you send an OpenTelemetry-formatted span that contains span links, it should show up in Observability Cloud. To get this to work in the real world, you'll probably need to manually instrument your source code. Here is an example JSON payload, saved to a file "span.otel.json" and sent to a local OTel collector with:

curl -X POST http://localhost:4318/v1/traces -H "Content-Type: application/json" -d @span.otel.json

{
  "resourceSpans": [
    {
      "resource": {
        "attributes": [
          { "key": "service.name", "value": { "stringValue": "example-service" } },
          { "key": "service.instance.id", "value": { "stringValue": "instance-12345" } }
        ]
      },
      "scopeSpans": [
        {
          "scope": { "name": "example-tracer", "version": "1.0.0" },
          "spans": [
            {
              "traceId": "1e223ff1f80f1c69f8f0b81c1a2d32ad",
              "spanId": "6f9b3f4d1de5bf5e",
              "parentSpanId": "",
              "name": "example-operation",
              "kind": "SPAN_KIND_SERVER",
              "startTimeUnixNano": 1706100243452000000,
              "endTimeUnixNano": 1706100244952000000,
              "attributes": [
                { "key": "http.method", "value": { "stringValue": "GET" } },
                { "key": "http.url", "value": { "stringValue": "http://example.com/api" } }
              ],
              "links": [
                {
                  "traceId": "1e223ff1f80f1c69f8f0b81c1a2d32ae",
                  "spanId": "9a7b3e4f1dceaa56",
                  "attributes": [
                    { "key": "link-type", "value": { "stringValue": "related" } }
                  ],
                  "droppedAttributesCount": 0
                }
              ]
            }
          ]
        }
      ]
    }
  ]
}
You're an absolute genius! Thank you so much. I knew there had to be something I was missing. It was as simple as you say. I made the transform using the regex I already knew worked and then referenced it in a field extraction. Worked like a charm.
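For anyone landing here later, the transform-plus-field-extraction pattern looks roughly like this (the stanza names, the regex, and the sourcetype are placeholders, not from the thread):

# transforms.conf
[my_extraction]
REGEX = user=(?<user>\S+)

# props.conf
[my_sourcetype]
REPORT-my_extraction = my_extraction

The REPORT- setting applies the transform as a search-time field extraction; the named capture group becomes the extracted field.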
Thank you @kgorzynski! Updated information provides much needed clarity.
You haven't answered key questions from me and @bowesmana. Without SPL, what do you use to count the number of sensors per host (if the total number of events is not the answer)? Let me repeat the four commandments of asking answerable questions in this forum:
1. Illustrate the data input (in raw text, anonymized as needed), whether it is raw events or output from a search (SPL that volunteers here do not have to look at).
2. Illustrate the desired output from the illustrated data.
3. Explain the logic between the illustrated data and the desired output without SPL.
4. If you also illustrate attempted SPL, illustrate its actual output, compare it with the desired output, and explain why they look different to you if that is not painfully obvious.
It goes back to the community license. Tested that ...
It is very unpleasant that it officially does not support RHEL 9. It is 3 years since RHEL 9 was released and still nothing... This might be a reason why customers are discouraged from using the Splunk solution for SOAR.
You can't create a custom aggregation function for stats. You can create your own command though. https://dev.splunk.com/enterprise/docs/devtools/customsearchcommands/
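To make that concrete, here is a minimal sketch of such a custom reporting command built with the Splunk Python SDK (splunklib, assumed to be bundled in the app's bin directory; the command name mysum and the output field total_count are placeholders, not an official API beyond the SDK classes shown):

#!/usr/bin/env python
# bin/mysum.py - sketch of a summing custom reporting command
import sys
from splunklib.searchcommands import dispatch, ReportingCommand, Configuration, Option, validators

@Configuration(requires_preop=False)
class MySumCommand(ReportingCommand):
    fieldname = Option(require=True, validate=validators.Fieldname())

    @Configuration()
    def map(self, records):
        # Pass events through unchanged; reduce does the aggregation.
        return records

    def reduce(self, records):
        total = 0.0
        for record in records:
            value = record.get(self.fieldname)
            try:
                total += float(value)
            except (TypeError, ValueError):
                pass  # skip missing or non-numeric values
        yield {'total_count': total}

dispatch(MySumCommand, sys.argv, sys.stdin, sys.stdout, __name__)

It would be registered in the app's commands.conf ([mysum] with filename = mysum.py, chunked = true, python.version = python3) and invoked as its own pipeline stage, e.g. | makeresults count=10 | eval event_count=random()%10 | mysum fieldname=event_count, rather than as a function inside stats.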
Hello, I am building a Splunk app where I want to have my own custom aggregate function for the stats command. Below is my use case, let's say:

| makeresults count=10
| eval event_count=random()%10
| stats mysum("event_count") as total_count

Does anyone know how my Python code should look, if it is feasible to create a mysum function? Thanks!
The search appears to be working a treat. Now I just need to understand why, so lots to learn. Thank you very much for your help. Kind regards, Chris
Hi Splunkers! The issue I am having is that an alert returns different results from a manual search on the same query and time frame, when some condition is met. I am having a repeated issue across different search queries using different functions, where an alert is triggered, and when I view the results of the alert, it shows for example 3000 events scanned and 2 results in the statistics section, while when I manually run this search it shows 3500 events scanned and 0 results in the statistics section. I can't find any solution online, and this issue is causing several of my alerts to fire falsely. Here is an example query that is giving me this issue, in case that is helpful:

index="index" <search> earliest=-8h@h
| stats count(Field) as Counter earliest(Field) as DataOld by FieldA, FieldB
| where DataNew!=DataOld OR isnull(DataOld)
| table Counter, DataOld, FieldA, FieldB

Any help is very appreciated!
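One thing worth checking here (an assumption, since the query above is anonymized): with only a relative earliest=-8h@h and no latest, the alert's scheduled run and a later manual run cover different time windows, so the scanned-event counts will naturally differ. Pinning both boundaries makes the two runs comparable, e.g.:

index="index" <search> earliest=-8h@h latest=@h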
Hi, I think I was not able to explain it properly; if you can have a look now and tell me, it will be helpful, as I am already getting the data in JSON format. I am getting cloud Logstash data and its sourcetype is httpevent. Below is the output I am already getting, in JSON format, in the search logs of Splunk:

@timestamp: 2025T19:31:30.615Z
environment: dev
event: { [+] }
host: { [+] }
input: { [+] }
kubernetes: { [+] }
message: +0000 FM [com.abc.cmp.event.message.base.Abs] DEBUG Receiver not ready, try other receivers or try later audit is disabled

So in message I sometimes get the above data, and some logs have some JSON data as well; the JSON fields above are also coming through for those. Now I want to know: I have to use this data in Splunk, so how do I get structured data out of it for my use?
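If the payload inside message is itself JSON, one common approach (a sketch; the index name is a placeholder) is to let spath parse that field at search time:

index=your_index sourcetype=httpevent
| spath input=message

For the plain-text messages, rex can pull pieces out into fields; the field names below are made up for illustration:

index=your_index sourcetype=httpevent
| rex field=message "\[(?<logger>[^\]]+)\]\s+(?<level>\w+)\s+(?<msg_text>.+)"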
I suppose that you should try to move those timestamp extractions under each source:: definition. Then they should work. Anyhow, the definitions which you have put on that new sourcetype definition work at search time, if they are settings that can apply at search time. But, for example, those _time settings work only in the indexing phase.
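To illustrate, index-time timestamp settings go under source:: stanzas in props.conf on the instance that parses the data (the path and format here are placeholders):

[source::/var/log/myapp/*.log]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
MAX_TIMESTAMP_LOOKAHEAD = 40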
Thanks @isoutamo. As I understand it: since these definitions are used only at search time, I only need the add-on installed on the search head. On the HEC instance I will put the props.conf with the TIME_PREFIX-related regex, so the time will be extracted from the incoming logs and sent on to the indexers.