All Topics


Hi All,

I have created a query that works fine in search, and I am sharing part of the dashboard code below. In the first part of the search I have hardcoded the earliest and latest times, but I want to pass those in from the time input provided on the dashboard. The remaining part of the query (the appended search) should run over the whole day, or some other time range, because a request received during the selected window might only get processed later in the day. How can I achieve this? I also want to hide a few columns at the end, such as messageGUID, request_time, and output_time.

<panel>
  <table>
    <title>Contact -Timings</title>
    <search>
      <query>```query for apigateway call```
index=aws* earliest="03/28/2025:13:30:00" latest="03/28/2025:14:35:00" Method response body after transformations: sourcetype="aws:apigateway"
| rex field=_raw "Method response body after transformations: (?&lt;json&gt;[^$]+)"
| spath input=json path="header.messageGUID" output=messageGUID
| spath input=json path="payload.statusType.code" output=status
| spath input=json path="payload.statusType.text" output=text
| spath input=json path="header.action" output=action
| where status=200 and action="Create"
| rename _time as request_time
```dedup is added to remove duplicates```
| dedup messageGUID
| append ```query for event bridge```
    [ search index="aws_np"
    | rex field=_raw "messageGUID\": String\(\"(?&lt;messageGUID&gt;[^\"]+)"
    | rex field=_raw "source\": String\(\"(?&lt;source&gt;[^\"]+)"
    | rex field=_raw "type\": String\(\"(?&lt;type&gt;[^\"]+)"
    | where source="MDM" and type="Contact" ```and messageGUID="0461870f-ee8a-96cd-3db6-1ca1f6dbeb30"```
    | rename _time as output_time
    | dedup messageGUID ]
| stats values(request_time) as request_time values(output_time) as output_time by messageGUID
| where isnotnull(output_time) and isnotnull(request_time)
| eval timeTaken=(output_time-request_time)/60
| convert ctime(output_time)
| convert ctime(request_time)
| eventstats avg(timeTaken) min(timeTaken) max(timeTaken) count(messageGUID)
| head 1</query>
      <earliest>$field1.earliest$</earliest>
      <latest>$field1.latest$</latest>
    </search>
    <option name="drilldown">none</option>
  </table>
</panel>
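One possible direction, sketched under the assumption that the dashboard keeps its existing time input (token field1) for the outer search and adds a second, hypothetical time input (token field2) for the appended search. Time modifiers written inside a subsearch override the panel's time range, and a trailing fields command drops the columns to be hidden:

<input type="time" token="field1"></input>
<input type="time" token="field2"></input>

<query>```apigateway part as above, with the hardcoded earliest/latest removed so it follows $field1$```
index=aws* sourcetype="aws:apigateway" "Method response body after transformations:"
| rename _time as request_time
| dedup messageGUID
| append
    [ search index="aws_np" earliest=$field2.earliest$ latest=$field2.latest$
      ```event bridge extraction as above```
    | rename _time as output_time
    | dedup messageGUID ]
| stats values(request_time) as request_time values(output_time) as output_time by messageGUID
```drop the columns that should not be displayed```
| fields - messageGUID request_time output_time</query>
<earliest>$field1.earliest$</earliest>
<latest>$field1.latest$</latest>

With the hardcoded earliest/latest removed from the first part of the query, the outer search inherits the panel's <earliest>/<latest> tokens, while the appended search carries its own, wider range.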
./splunk cmd mongod -version
mongod: /splunk/lib/libcrypto.so.10: no version information available (required by mongod)
mongod: /splunk/lib/libcrypto.so.10: no version information available (required by mongod)
mongod: /splunk/lib/libcrypto.so.10: no version information available (required by mongod)
mongod: /splunk/lib/libssl.so.10: no version information available (required by mongod)
db version v7.0.14
Build Info: { "version": "7.0.14", "gitVersion": "ce59cfc6a3c5e5c067dca0d30697edd68d4f5188", "openSSLVersion": "OpenSSL 1.0.2zk-fips 3 Sep 2024", "modules": [ "enterprise" ], "allocator": "tcmalloc", "environment": { "distmod": "rhel70", "distarch": "x86_64", "target_arch": "x86_64" } }

Why am I getting these mongod warnings about /splunk/lib/libcrypto.so.10 and /splunk/lib/libssl.so.10 having "no version information available (required by mongod)"?
Hi, how can I query all dashboards that have not been accessed in the last 60 days?
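A rough sketch of one way to approach this, assuming dashboard views show up in index=_internal under the Splunk Web access log (sourcetype splunk_web_access) and that the dashboard name appears in uri_path; the REST call lists all dashboards and the subsearch removes those viewed in the last 60 days:

| rest /servicesNS/-/-/data/ui/views splunk_server=local
| rename eai:acl.app as app
| table title app
| search NOT
    [ search index=_internal sourcetype=splunk_web_access earliest=-60d
    | rex field=uri_path "/app/[^/]+/(?<title>[^/?\s]+)"
    | stats count by title
    | fields title ]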
I have been using the Splunk Add-on for Salesforce for a while now, and I want to know if anyone else using it has noticed the number of ingested events decreasing. When I look back to December I can see Splunk would ingest multiple UserLicense events per day, but now it is one event every 4 days.
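For comparison, a simple sketch for trending the UserLicense event volume over time; the index name salesforce and sourcetype sfdc:userlicense are assumptions and should be replaced with whatever your input actually writes:

index=salesforce sourcetype="sfdc:userlicense" earliest=-120d
| timechart span=1d count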
Hi, how can I query saved searches and alerts that are not scheduled?
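A sketch using the saved/searches REST endpoint, where is_scheduled distinguishes scheduled entries from unscheduled ones:

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search is_scheduled=0
| table title eai:acl.app eai:acl.owner disabled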
Hi, how can I query alerts that have no alert actions, and also see their status?
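A sketch along the same lines, again via the saved/searches REST endpoint; actions lists the configured alert actions and disabled reflects the status (filtering on is_scheduled=1 to narrow down to alert-style searches is an assumption and may need adjusting):

| rest /servicesNS/-/-/saved/searches splunk_server=local
| search is_scheduled=1
| where actions="" OR isnull(actions)
| table title eai:acl.app actions disabled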
Hello Team, we have been using the Corelight App for Splunk to ingest Corelight IDS events into our distributed Splunk environment. The app works absolutely fine on the indexers, but we are unable to launch it on the search head. Can you please assist us with the details needed to get the app working on the search head?

Regards, Prathamesh
I have the following simplified version of the query, where for each caller I need all_calls (from sourcetype=x) and messagebank_calls (from sourcetype=y).

index=sample1 sourcetype=x host=host1
| stats values(caller) as caller by callid
| stats count as all_calls by caller
| rename caller as caller_party
| appendcols
    [ search index=sample1 AND sourcetype=y
    | stats count as messagebank_calls by caller_party ]
| search all_calls=*

The messagebank_calls value is incorrect, and I'm guessing it's because of the subsearch/appendcols. How do I increase the limit, or rewrite this so I get the same result columns: caller, all_calls, messagebank_calls?
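One possible rewrite, sketched under the assumption that caller and callid exist on sourcetype=x events and caller_party exists on sourcetype=y events. A single pass avoids appendcols, which pastes result rows together by position rather than matching them on caller:

index=sample1 ((sourcetype=x host=host1) OR sourcetype=y)
```treat caller (sourcetype=x) and caller_party (sourcetype=y) as the same key```
| eval caller_party=coalesce(caller_party, caller)
| stats dc(eval(if(sourcetype=="x", callid, null()))) as all_calls
        count(eval(sourcetype=="y")) as messagebank_calls
        by caller_party
| where all_calls > 0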
Please find the attached screenshot and the data sample below; I need to create 5 fields.

Problem statement: the old Splunk query no longer works because the logging pattern changed.

3/28/25 10:04:25.685 PM   2025-03-28T22:04:25.685Z INFO 1 --- [ool-1-thread-11] c.d.t.l.s.s.e.e.NoopLoggingEtlEndpoint : Completed generation for [DE, 2025-03-28, LOAN_EVENT_SDP, 1]. Number of records: 186
host = lonhybridapp03.uk.db.com
source = /var/log/pods/ls2_ls2-intraday-sdp-86854ff574-48dgp_830e2ef9-56be-4996-ae21-127366a78515/ls2-intraday-sdp/0.log
sourcetype = kube:container:ls2-intraday-sdp

These are the fields I need (expected values in parentheses), shown in the old SPL:

index=*1644* container_name="ls2-sdp-java" $selected_countries$
| rex field=_raw "country=(?P<country>\w+)"    (DE)
| rex field=_raw "sdpType=(?P<sdpType>\w+)"    (LOAN_EVENT_SDP)
| rex field=_raw "cobDate=(?P<cobDate>\w+)"    (2025-03-28)
| rex field=_raw "record-count: (?P<Recordcount>\w+)"    (186)
| rex field=_raw "\[(?<dateTime>.*)\] \{Thread"    (2025-03-28T22:04)
| eval DateTime=strptime(dateTime, "%Y-%m-%dT%H:%M:%S,%N")
| eval CreatedTime=strftime(DateTime, "%H:%M")
| eval CreatedDate=strftime(DateTime, "%Y-%m-%d")

The SPL above uses the old patterns; can you please help me with new rex patterns to extract these fields? For clear understanding, I have attached the required fields in the screenshot.
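A sketch of new extractions against the sample line shown above; the index and container_name filters are carried over from the old query and may need to match the new source, and the timestamp parsing assumes ISO-8601 with milliseconds and a trailing Z:

index=*1644* container_name="ls2-intraday-sdp" $selected_countries$ "Completed generation for"
```leading ISO-8601 timestamp of the log line```
| rex field=_raw "^(?<dateTime>\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d+Z)"
```[DE, 2025-03-28, LOAN_EVENT_SDP, 1]. Number of records: 186```
| rex field=_raw "Completed generation for \[(?<country>[^,\]]+),\s*(?<cobDate>[^,\]]+),\s*(?<sdpType>[^,\]]+),\s*\d+\]\.\s*Number of records:\s*(?<Recordcount>\d+)"
| eval DateTime=strptime(dateTime, "%Y-%m-%dT%H:%M:%S.%3NZ")
| eval CreatedTime=strftime(DateTime, "%H:%M")
| eval CreatedDate=strftime(DateTime, "%Y-%m-%d")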
We got a requirement to onboard new platform logs to Splunk; they will have 1.8 TB/day of data to be ingested. Our license is currently 2 TB/day and we already have other platform data onboarded. The new team agreed to uplift our license by another 2 TB/day, so the total becomes 4 TB/day. However, they also said that while their normal ingestion is 1.8 TB/day, during a DDoS attack it can go into double-digit TB. This surprised us: the total license is only 4 TB/day, so how can we handle double-digit TB of data? In turn, this project might impact the onboarding of other projects.

My manager asked me to investigate whether we can accommodate this requirement. If yes, he wants an action plan; if not, he wants the justification to share with them. I am not very familiar with Splunk licensing, but as far as I know this is very risky, because 4 TB/day versus 10-20 TB/day is a huge difference. My understanding is that if we breach 4 TB/day (maybe by 200 GB or more), new indexing stops but old data can still be searched.

Our infrastructure: a multisite cluster with 3 sites, 2 indexers per site (6 total), 3 SHs (one per site), 1 deployment server, 2 CMs (active and standby), and 1 deployer (which is also the license master).

Can anyone please help me with how to proceed on this topic?
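As a starting point for the investigation, a commonly used sketch for trending daily license consumption from the license manager's own logs (run on or against the license master; the 120-day window is arbitrary):

index=_internal source=*license_usage.log type=Usage earliest=-120d
| timechart span=1d sum(b) as bytes
| eval GB=round(bytes/1024/1024/1024, 2)
| fields _time GB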
I'm using a forecast graph for my data and it shows the graph in the below format:
1. Existing data is shown as a solid blue line
2. Forecast data is shown as a dotted blue line
3. Confidence level (upper limit and lower limit, also in blue)
I want to change the forecast data color (refer to the attached image).
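If this is a Simple XML chart driven by the predict command (an assumption), series colors can be overridden per field with charting.fieldColors; the series names below ("prediction(count)" and so on) are examples and must match the actual field names your chart produces:

<chart>
  <search>
    <query>index=main | timechart span=1d count | predict count future_timespan=14</query>
  </search>
  <option name="charting.fieldColors">{"count": 0x1E93C6, "prediction(count)": 0xF8BE34, "upper95(prediction(count))": 0xCCCCCC, "lower95(prediction(count))": 0xCCCCCC}</option>
</chart>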
Hello, we have Windows servers in two environments. We want the WinEventLog source (Windows event logs) from the main environment to go to the "windows" index and from the secondary environment to go to "sec_windows". On the UFs in the secondary environment we have set up inputs.conf with index = sec_windows, but this doesn't work: everything goes to the windows index. Could you help? Thank you very much.

props.conf

[source::WinEventLog:*]
TRANSFORMS-set_index_sec_windows = set_index_sec_windows
TRANSFORMS-set_index_windows_wineventlog = set_index_windows_wineventlog

transforms.conf

# Windows
[set_index_windows_wineventlog]
SOURCE_KEY = MetaData:Source
REGEX = WinEventLog
DEST_KEY = _MetaData:Index
FORMAT = windows

[set_index_sec_windows]
SOURCE_KEY = _MetaData:Index
REGEX = sec_windows
DEST_KEY = _MetaData:Index
FORMAT = sec_windows
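One possible direction, sketched on the assumption that the indexer/heavy-forwarder-side set_index_windows_wineventlog transform is rewriting the index to "windows" for every WinEventLog event, overriding the index the secondary UFs already set: either drop that blanket rewrite and rely on inputs.conf on each UF, or make the rewrite conditional so it leaves events already routed to sec_windows alone (the negative-lookahead pattern below is an untested assumption):

# transforms.conf (sketch)
[set_index_windows_wineventlog]
# only rewrite to "windows" when the event is not already destined for sec_windows
SOURCE_KEY = _MetaData:Index
REGEX = ^(?!sec_windows$).*
DEST_KEY = _MetaData:Index
FORMAT = windows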
We recently upgraded our add-on to use TLS 1.2 and Python 3 by following this blog post: Link. After upgrading, the Splunk server asks for a restart during a first-time installation. Previously it never asked for a restart on first install, only when we upgraded the app. Also, I'm not using an inputs.conf file in my add-on.
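One thing worth checking (a sketch, not a diagnosis): whether every custom .conf file the add-on now ships is covered by a reload trigger in app.conf, since conf files without a registered reload handler can cause Splunk to request a restart at install time. The name my_custom_conf below is hypothetical and stands in for whatever custom .conf files the add-on includes:

# app.conf (sketch)
[triggers]
# tells Splunk this conf file can be reloaded without a restart
reload.my_custom_conf = simple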
I had a Generic S3 input from the AWS add-on disabled for about a year. After I enabled it, it started ingesting old data, so I disabled it again and changed the initial date in inputs.conf. After restarting the Splunk service I re-enabled it, but no data is coming in. I also tried cloning the input and changing only the name; no data comes in there either. I don't know how to check this. Is it a checkpoint issue or something else? Please help me check it. Thanks in advance.
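A rough sketch for checking what the input itself is reporting, assuming the add-on's Generic S3 modular input writes its logs to index=_internal under a splunk_ta_aws* source (the exact log file name varies by add-on version, so the wildcard is deliberately broad):

index=_internal source=*splunk_ta_aws*s3* (ERROR OR WARN OR "checkpoint")
| table _time source _raw
| sort - _time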
We have security logs coming into Splunk via a data input configured in Splunk. The logs have a field called security configuration ID; each ID is unique and belongs to one app (sometimes two or three IDs belong to one app). There are roughly 200 config IDs, and the requirement is to restrict users from seeing logs for other config IDs. They are therefore asking us to create 200 indexes, with the config ID in the index name, and restrict access on that basis. But to my knowledge, having that many indexes is not a good idea; it needs more maintenance and so on.

What I'm thinking instead: when configuring the data input, I can name it after the config ID so that it shows up in the Source field, and keep a single index for all of them. When creating a role, I will assign that index and add a search filter restriction for the config ID that belongs to the individual user.

My question is: will this work as expected? If anyone is already doing this, please confirm. Even if we restrict a user to index=X and source=123456 (config ID) and save it, when he searches index=X will he see all config ID logs, or only the 123456 logs? Please confirm. Any alternative ideas would also help.
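A minimal sketch of the role-based restriction described above, assuming a single shared index (the name app_security is hypothetical) and that the config ID is carried in the source field. srchFilter is appended automatically to every search the role runs, so a bare index=app_security search from this role would only return the listed sources:

# authorize.conf (sketch)
[role_app_team_123456]
importRoles = user
srchIndexesAllowed = app_security
srchFilter = source="123456" OR source="123457"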
Hi, I am seeking recommendations on optimizing the most resource-intensive saved searches in my Splunk Cloud instance to reduce indexer CPU utilization, which is consistently at 99%. We are using Splunk ES and the SA-NetworkProtection app. According to the CMC, these are the most expensive ones, taking around 30-40 minutes to complete:

_ACCELERATE_DM_Splunk_SA_CIM_Authentication_ACCELERATE_
_ACCELERATE_DM_Splunk_SA_CIM_Network_Traffic_ACCELERATE_
_ACCELERATE_DM_Splunk_SA_CIM_Vulnerabilities_ACCELERATE_
_ACCELERATE_DM_Splunk_SA_CIM_Endpoint.Services_ACCELERATE
_ACCELERATE_DM_Splunk_SA_CIM_Network_Sessions_ACCELERATE_
_ACCELERATE_DM_Splunk_SA_CIM_Change_ACCELERATE_
_ACCELERATE_DM_SA-NetworkProtection_Domain_Analysis_ACCELERATE_
_ACCELERATE_DM_DA-ESS-ThreatIntelligence_Threat_Intelligence_ACCELERATE_

Any recommendations on how I can optimize these without disabling them? Thank you.
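One common optimization (a sketch, not a full tuning plan): constrain each CIM data model to only the indexes that actually feed it, so the acceleration searches stop scanning every index. This is done through the cim_<DataModel>_indexes macros shipped with Splunk_SA_CIM, editable via the CIM Setup page of the Splunk Common Information Model app or directly in macros.conf; the index names below are placeholders:

# macros.conf (sketch, in Splunk_SA_CIM local)
[cim_Authentication_indexes]
definition = (index=wineventlog OR index=linux_secure)

[cim_Network_Traffic_indexes]
definition = (index=pan_logs OR index=firewall)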
Can a service account be used as the owner of knowledge objects (saved searches, transforms/lookups, props/extracts, macros, and views)? Please share the pros and cons.
Is there any query to check whether there are fixup tasks pending on the cluster master, and that also shows the SF, RF, and whether data is searchable? We can check this in the cluster master UI, but is this logged or stored anywhere so we can fetch it without going there? I need to create a query that shows the SF, RF, and searchability status on the cluster master, and also whether any fixups are pending.
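A sketch using the cluster manager's REST endpoints (run from the CM itself, or point splunk_server at the CM from a search head that can reach it); the endpoints and field names below are the ones the CM's own dashboards rely on, but they are worth verifying against the REST API reference for your version:

```SF / RF status```
| rest splunk_server=local /services/cluster/master/generation
| table search_factor_met replication_factor_met pending_last_reason

```pending fixup tasks```
| rest splunk_server=local /services/cluster/master/fixup level=generation
| stats count as pending_fixup_tasks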
Hi,

I have onboarded Palo Alto traffic and threat logs via HEC and SLS (Strata Logging Service). These are JSON logs and, per the documentation, they should come in under sourcetype=pan:firewall_cloud. All our dashboards are set up expecting traffic logs under pan:traffic and threat logs under pan:threat.

Having checked props.conf and transforms.conf for sourcetype=pan:firewall_cloud, there is no rule to route the logs to pan:threat or pan:traffic. How is everyone dealing with this situation? I'd appreciate any workarounds or suggestions in general; this seems to be a big issue for anyone using SLS (Strata Logging Service). Thanks.
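One possible workaround (a sketch, assuming the JSON payload carries a log-type field whose value contains "traffic" or "threat" in _raw, which you should confirm against your actual events): index-time sourcetype rewriting on the tier that parses the HEC data.

# props.conf (sketch)
[pan:firewall_cloud]
TRANSFORMS-route_pan_sourcetype = pan_route_traffic, pan_route_threat

# transforms.conf (sketch)
[pan_route_traffic]
# rewrite sourcetype when the JSON log-type field indicates traffic
REGEX = "(?:log_type|type)"\s*:\s*"traffic"
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::pan:traffic

[pan_route_threat]
# rewrite sourcetype when the JSON log-type field indicates threat
REGEX = "(?:log_type|type)"\s*:\s*"threat"
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::pan:threat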