All Posts

I have two searches and I only want to find the rows which have a common MessageID. Currently the query returns extra rows because of the second search: the query before the OR (the first search) returns 100 records, and with the second one it returns 110 rows; for those extra 10 rows the messageID from the first search is NULL, so I want to drop those messages. Please help me change this query to make it work. I am trying to find the count of matched IDs and the list of all such IDs.

```query for apigateway call```
(index=aws_np earliest="03/28/2025:13:30:00" latest="03/28/2025:14:35:00" Method response body after transformations: sourcetype="aws:apigateway" business_unit=XX aws_account_alias="XXXX" network_environment=xxXXX source="API-Gateway-Execution-Logs*" (application="xXXXXX" OR application="xXXXX-xXX")
| rex field=_raw "Method response body after transformations: (?<json>[^$]+)"
| spath input=json path="header.messageID" output=messageID
| spath input=json path="payload.statusType.code" output=status
| spath input=json path="payload.statusType.text" output=text
| spath input=json path="header.action" output=action
| where status=200 and action="Create" `
| rename _time as request_time
| table messageID, request_time)
| append ```query for 2nd query call```
[ search kubernetes_cluster="eks-XXX*" index="aws_XXX" sourcetype="kubernetes_logs" source=*XXXX* "sendData"
| rex field=_raw "sendData: (?<json>[^$]+)"
| spath input=json path="header.messageID" output=messageID
| rename _time as pubsub_time
| table messageID, pubsub_time ]
| stats values(request_time) as request_time values(pubsub_time) as pubsub_time by messageID
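One possible fix, sketched under the assumption that the existing stats already groups by messageID: keep only the messageIDs that received a value from both searches, then count the matches (the matched_id_count field name is made up for the example):

| stats values(request_time) as request_time values(pubsub_time) as pubsub_time by messageID
| where isnotnull(request_time) AND isnotnull(pubsub_time)
| eventstats dc(messageID) as matched_id_count

The where clause drops the rows that only came from the appended search, and eventstats keeps the per-messageID rows while adding the overall count of matched IDs.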
Hi everyone, I'm seeking advice on the best way to send application logs from our client's Docker containers into a Splunk Cloud instance, and I’d appreciate your input and experiences. Currently, my leading approach involves using Docker’s "Splunk logging driver" to forward data via the HEC. However, my understanding is that this method primarily sends container-level data rather than detailed application logs. Another method I came across involves deploying Splunk's Docker image to create a standalone Enterprise container alongside the Universal Forwarder. The idea here is to set up monitors in the forwarder's inputs.conf to send data to the Enterprise instance and then route it via a Heavy Forwarder to Splunk Cloud. Has anyone successfully implemented either of these approaches—or perhaps a different method—to ingest application logs from Docker containers into Splunk Cloud? Any insights, tips, or shared experiences would be greatly appreciated. Thanks in advance for your help! Cheers,
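For the logging-driver route, a minimal sketch of what that could look like (the HEC token, Splunk Cloud URL, sourcetype, and image name are all placeholders):

docker run --log-driver=splunk \
  --log-opt splunk-token=<HEC_TOKEN> \
  --log-opt splunk-url=https://http-inputs-<your-stack>.splunkcloud.com:443 \
  --log-opt splunk-sourcetype=docker:app \
  --log-opt splunk-format=raw \
  my-app-image

Note that the driver forwards whatever the container writes to stdout/stderr, so if the application logs to stdout rather than to files inside the container, this does capture the application logs and not just container metadata.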
Like the others, I'm not totally sure what you're after, but given your example data, this SPL

| stats latest(version) as last_version max(_time) as last_time count(version) as count_version dc(version) as dc_version by hostname model system

will tell you the count of version records, the last version/date, and the distinct version count for each host/system/model. However, if you want to detect "newer" versions vs. going backwards in versions, you'll need to define that rule for the version data. Also, this will only tell you about events within the search range; how far back do you want to go?
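As an example of one way to count actual version changes, a rough sketch that compares each event with the previous one per host (it treats any change as an upgrade, since version strings like 15.2(8) don't compare cleanly as numbers):

| sort 0 hostname _time
| streamstats current=f window=1 last(version) as prev_version by hostname
| where isnotnull(prev_version) AND version!=prev_version
| stats count as upgrade_count list(prev_version) as from_version list(version) as to_version by hostname

The streamstats call carries the previous event's version forward, the where clause keeps only events where the version changed, and the final stats counts those changes per host.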
It is not clear what you are expecting your result to be. You mention devices, but these are not mentioned in your data. Since each event appears to represent a different version, can you not just count the events? Please clarify what you are trying to do, show more of your data, and describe what your expected results would be.
Do you want to explain what you mean?
If I have understood correctly, Django is no longer supported on recent Splunk versions? https://docs.splunk.com/Documentation/Splunk/9.4.0/Installation/ChangesforSplunkappdevelopers
Unnecessary comment. I expect better from a Trust member.
Hi. It's obvious that without replication=true it only exists on the SH side and the indexers cannot use it. Can you tell us more about that later error report? r. Ismo
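If this is about a lookup definition, a minimal sketch of what the replication flag could look like in transforms.conf (the stanza and file names are made up; the setting is called replicate and controls whether the lookup is shipped to the indexers in the knowledge bundle):

[my_lookup]
filename = my_lookup.csv
replicate = true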
Hello Team - I have a strange use case: while invoking Splunk Cloud REST APIs via the Python SDK, only one endpoint, /services/apps/local, returns a 200 response; for any other endpoint, such as /services/server/info or /services/search/jobs, I get a connection timeout. While debugging, I looked at Splunk's internal logs (index=_internal) and found that for the request made through the client there is an entry in the access logs with a 200/201 HTTP code, but I am not sure why it would then result in a connection timeout [Err 110], as if the client kept waiting for the response from the server and in the end gave up. I tried increasing the timeout value on the client side as well, yet no luck. I don't think reachability is an issue here, since the /services/apps/local endpoint on port 8089 is accessible, and for the other endpoints there are also log traces on the Splunk Cloud side as described above, so what could the issue be? The search query is also extremely simple: search index=_internal | stats count by sourcetype. Please help.
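For reference, a minimal sketch of the calls involved, using the Splunk Python SDK (host and credentials are placeholders; the same pattern applies with token authentication):

import splunklib.client as client

# Connect to the Splunk Cloud management port (8089); credentials are placeholders
service = client.connect(
    host="your-stack.splunkcloud.com",
    port=8089,
    username="your_user",
    password="your_password",
)

# Corresponds to /services/apps/local, which returns 200 for you
for app in service.apps:
    print(app.name)

# Correspond to the endpoints that time out for you
print(service.info["version"])  # /services/server/info
service.jobs.oneshot("search index=_internal | stats count by sourcetype")  # /services/search/jobs (oneshot)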
In a cluster you should also change the internal indexes to auto! Otherwise Splunk doesn't replicate those buckets!
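For example, a sketch of the indexes.conf overrides pushed from the cluster manager to the peers (the path is manager-apps/_cluster/local on recent versions, master-apps on older ones):

[_internal]
repFactor = auto

[_audit]
repFactor = auto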
I think this is a duplicate question? But here is an old post which could help you: https://community.splunk.com/t5/Installation/EC2-from-AMI-having-splunk-installed-stops-working/m-p/669633#M13418
Probably the best way to do this is to create a lookup file / KV store collection where you store the current/earlier versions. Then just create an SPL query which builds the current status and checks what the differences are between those two. If you need exact SPL, please share what you have currently, with some sample data with masked identifiers.
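A rough sketch of that pattern, assuming a CSV lookup file named device_versions.csv with hostname and version columns (the index, sourcetype, and lookup names are all placeholders):

index=your_index sourcetype=your_sourcetype
| stats latest(version) as current_version by hostname
| lookup device_versions.csv hostname OUTPUT version as previous_version
| where isnotnull(previous_version) AND current_version!=previous_version
| table hostname previous_version current_version

After reviewing the differences, a second search ending in | outputlookup device_versions.csv can refresh the lookup with the latest versions so the next comparison starts from the new baseline.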
But the indexes that were missing the specified repFactor were the ones that had thousands of events. The other indexes that already had repFactor set to auto only had a few events with that error. So I think you may be on to something.
Thanks, yeah, I was planning on giving that a shot, but I'm mostly interested in how to replace those certs on the UFs before they expire. Ansible / MS tools could be a backup, but I'd really like to implement it and have it fully controlled from the deployment server. So, just thinking through it: updating the cert(s) in this app would push out the updated certs to the UFs, but then all of the UFs would fail to phone home until I update the certs on the deployment server? Then I'd have to hope that everything works from a fully broken state? It just seems risky. Open to suggestions, and maybe I'm overthinking some of it. Wondering if anyone's accomplished this in a safe, practical way? @krusovice, what did you wind up doing?
I've seen repFactor set to auto or 0. I'm changing all the non-internal indexes to auto (adding the repFactor line to the stanzas that are missing it). RF and SF are 2. I have a single-site cluster with 6 indexers.
Hello Splunk Community, I need to find out how many upgrades were performed on systems and I am unsure how best to proceed. The data is similar to what is listed below:

_time        hostname   system   model   version
2025-01-01   a          x        x       15.2(8)
2025-01-01   b          y        y       15.3(5)
2025-01-02   a          x        x       15.3(5)

There are thousands of systems with various versions. I am trying to find a way to capture devices that have gone from one version to a newer one, indicating that an upgrade took place. Multiple upgrades could have occurred over time for a single device, and those need to be accounted for as well. Any help suggesting where to start and what to use would be greatly appreciated. Thanks. -E
You should create your own app which contains all the needed certs. If you have Splunk Cloud in use, you can copy the idea from its Universal Forwarder app. Of course, it requires that you have added your own private CA.pem into Splunk's CA certs file, if you have one in use.
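As a sketch, such an app could be laid out roughly like this (the app name, file names, and stanza name are just examples; the exact SSL settings depend on how your forwarding and CA are configured):

myorg_forwarder_certs/
  auth/
    myorg_client.pem
    myorg_cacert.pem
  default/
    outputs.conf

# default/outputs.conf (example only)
[tcpout:primary_indexers]
clientCert = $SPLUNK_HOME/etc/apps/myorg_forwarder_certs/auth/myorg_client.pem
sslVerifyServerCert = true
# the CA bundle is typically referenced via sslRootCAPath in server.conf [sslConfig]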
Here is one docs page which tells how those steps are done and in what order: https://docs.splunk.com/Documentation/Splunk/latest/Knowledge/Searchtimeoperationssequence. You can see this after you have run your search by clicking the Jobs link -> Inspect Job and then opening search.log. There are several .conf presentations and Splunk blogs on how to use this information.
Did you wind up getting a good solution in place for pushing new certs from the deployment server?
If you have a so-called golden image which was created incorrectly (it contains those critical configurations), you could read this old post: https://community.splunk.com/t5/Installation/EC2-from-AMI-having-splunk-installed-stops-working/m-p/669633#M13418 r. Ismo