All Topics



Hello, I currently have Sentinel logs forwarded to Splunk directly via HEC. Is there any other way to get these logs? I receive the Sentinel logs directly in Splunk and have no access to Azure. I do not want to use HEC because of the huge volume of unfiltered data. Is there any way to resolve this, or can I ask the Azure team to do something that gives me filtered data, even if I have to use HEC in the end?
After upgrading to version 9.4, I have attempted to configure a list of acceptable domains in alert_actions.conf. My environment has a wide variety of acceptable email sub-domains which share the same base domain. However, the domain matching appears to be strict and wildcards are not matched. For example, users may have emails like a@temp.mydomain.com and b@perm.mydomain.com. Setting an allowed domain like *.mydomain.com does not match these users, and they are removed from alerts and reports. Does anyone have a workaround other than adding every possible sub-domain?
Hi, we've encountered some unusual behaviour when ingesting data and are at a loss as to what might be causing it. We have two presumably identical indexes ingesting identically structured messages from different regions, via a separate HEC input for each. Index 1 has no issues; all messages are ingested without problems. On Index 2, events only appear if the source attribute of the message is 14 or more characters long, e.g.: {    other_data: ...    source: 12345678901234 }   Any ideas?
I have an audit table with before and after records of changes made to a user table. Every time an update is made to the user table, one record is logged to the audit table with the current value for each field, and a second record is logged with the new value for each field. So if a record was re-enabled and that was the only change made, the two records would look like this:

Mod_type | user ID | Email          | change # | Active
OLD      | 123     | Me@hotmail.com | 152      | No
NEW      | 123     | Me@hotmail.com | 152      | Yes

I need to match the two records by user ID and change # so I can find all the records where specific changes were made, such as going from inactive to active, or where the email address changed, etc. I've looked into selfjoin, appendpipe, etc., but none of them seem to be what I need. I'm trying to say "give me all the records where the Active field was changed from "No" to "Yes" and the Mod_type is "NEW"". Thanks for any help.
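A minimal SPL sketch of one way to pair the OLD and NEW rows, assuming the events carry fields named Mod_type, user_id, change_num, and Active, and that the data lives in a hypothetical index called your_audit_index (substitute your actual names):

index=your_audit_index
| stats values(eval(if(Mod_type="OLD", Active, null()))) as old_active
        values(eval(if(Mod_type="NEW", Active, null()))) as new_active
        by user_id change_num
| where old_active="No" AND new_active="Yes"

Grouping by user_id and change_num puts the two audit rows for the same change onto a single result row, so the old and new values can be compared directly without selfjoin.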
I have configured syslog-ng to listen on multiple ports and save logs in folders named after the source IP, with a heavy forwarder sending the logs to the indexers. In one case, 127.0.0.1 is sending as loopback to the syslog-ng server, and I now want to exclude this IP from my input configs. Suppose I have the folder /opt/syslog/Fortigate; under Fortigate there are multiple FortiGates sending logs, and new FortiGates may be added there in the future. One host is 127.0.0.1, and I want to exclude it from my inputs. What should I do?
Hello, our company has gone through an audit, and one of the auditors has asked us to monitor attempts to delete records in Splunk. I did some research and found the search below, which would do the trick. The issue is that if I set up an alert with this, the alert is triggered by its own previous run: the earlier search for this alert is recorded in the audit log, and we get alerted on it because the word delete is in that search.

index=_audit action=search | regex search="\\|(\\s|\\n|\\r|([\\s\\S]*))*delete"

Is there a way to ignore this search string when searching? Or has anybody been able to set up an alert for attempts to delete records? We only have 4 admins with the can_delete role, but the auditors want to be sure that if an admin tries to delete records, there will be an alert.
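As a rough sketch of one way to keep the alert from matching its own runs: audit events for scheduled searches carry a savedsearch_name field, so the alert's own saved search can be excluded by name. The alert name below is hypothetical; substitute whatever the alert is actually saved as.

index=_audit action=search savedsearch_name!="Detect Delete Attempts"
| regex search="\\|(\\s|\\n|\\r|([\\s\\S]*))*delete"

This assumes the alert runs as a saved scheduled search; ad-hoc runs of the same SPL would still need a different exclusion.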
Hi Team, On May 20th, we successfully migrated from Splunk On-Prem to Splunk Cloud. We have a scheduled search that runs every 31 minutes, which was functioning correctly in the on-prem environment. However, after the migration, the same search query is no longer working in the cloud environment.

On-prem:
index=proofpoint earliest=-32m@m latest=-1m@m
| transaction x, qid keepevicted=true
| search action=* cmd=env_from cmd=env_rcpt
| addinfo
| fields action country delay dest duration file_hash file_name file_size internal_message_id message_id message_info orig_dest orig_recipient orig_src process process_id protocol recipient recipient_count recipient_status reply response_time retries return_addr size src src_user status_code subject url user vendor_product xdelay xref filter_action filter_score signature signature signature_extra signature_id
| fields - _raw
| join type=outer internal_message_id [search index=summary sourcetype=proofpoint_stash earliest=-48m | fields internal_message_id | dedup internal_message_id | eval inSummary="T"]
| search NOT inSummary="T"
| collect index=summary addtime=true source=proofpoint sourcetype=proofpoint_stash

Cloud:
index=proofpoint earliest=-32m@m latest=-1m@m
| transaction x, qid keepevicted=true
| search action=* cmd=env_from cmd=env_rcpt
| addinfo
| fields action country delay dest duration file_hash file_name file_size internal_message_id message_id message_info orig_dest orig_recipient orig_src process process_id protocol recipient recipient_count recipient_status reply response_time retries return_addr size src src_user status_code subject url user vendor_product xdelay xref filter_action filter_score signature signature signature_extra signature_id
| fields - _raw
| join type=outer internal_message_id [search index=summary sourcetype=stash earliest=-48m | fields internal_message_id | dedup internal_message_id | eval inSummary="T"]
| search NOT inSummary="T"
| collect index=proofpoint_summary addtime=true source=proofpoint sourcetype=stash

Thanks
The Splunk index is receiving data, but it is not reflected in the application. We are currently investigating an issue where logs stop appearing in the UI after a short period of time. For example, in the apps_log, logs are visible for a few minutes but then stop showing up. This behavior is inconsistent across environments: in some, logs are visible as expected, while in others they're missing entirely. The Splunk index appears to be receiving the data, but it's not being reflected in the application UI. We're not yet sure what's causing this discrepancy and would appreciate any insights or assistance you can provide.
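One common cause of events being indexed but not appearing in a default time-range search is a mismatch between the parsed event time and the indexing time. A minimal SPL sketch for checking this, assuming apps_log is the index name (it may be a source or sourcetype in your environment; adjust accordingly):

index=apps_log earliest=-24h
| eval lag_seconds = _indextime - _time
| stats count min(lag_seconds) max(lag_seconds) avg(lag_seconds)

Events with large or negative lag_seconds values were indexed well after (or before) their parsed event time, which can make them fall outside the time window the UI is searching.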
Hopefully I've only got a small problem this time, but I've had no luck fixing it despite hours of trying. All I'm trying to do is convert a string time field to Unix seconds using strptime. This is my time field:

Ended: 0d1h55m0s

I've been trying to convert it using the following command:

| eval time_sec = strptime('Time', "Ended: %dd%Hh%Mm%Ss")

For clarity, this is the full search:

| inputlookup metrics.csv
| eval occurred=strftime(strptime(occurred,"%a, %d %b %Y %T %Z"), "%F %T %Z")
| eval closed=strftime(strptime(closed,"%a, %d %b %Y %T %Z"), "%F %T %Z")
| eval time_sec = strptime('Time', "Ended: %dd%Hh%Mm")
| where strptime(occurred, "%F %T") >= strptime("2025-05-01 00:00:00", "%F %T") AND (isnull(closeReason) OR closeReason="Resolved")
| fillnull value=Resolved closeReason

The example time I've posted above, 0d1h55m0s, should ideally convert to 6900 (seconds).
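Note that strptime parses points in time rather than durations, so a duration string like 0d1h55m0s generally needs to be taken apart and summed instead. A minimal sketch, assuming the field is named Time and always follows the NdNhNmNs pattern:

| rex field=Time "Ended:\s*(?<dur_d>\d+)d(?<dur_h>\d+)h(?<dur_m>\d+)m(?<dur_s>\d+)s"
| eval time_sec = tonumber(dur_d)*86400 + tonumber(dur_h)*3600 + tonumber(dur_m)*60 + tonumber(dur_s)

For Ended: 0d1h55m0s this yields 0 + 3600 + 3300 + 0 = 6900 seconds.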
I am upgrading from RHEL 7 to RHEL 8 in light of the end of support for Red Hat 7. We have a clustered environment, with two sites per cluster for both the search head and indexer clusters. All Splunk servers are on 9.2.0.1. My question is: can we run a RHEL 8 cluster master and have a mixed environment of RHEL 8 and RHEL 7 servers within the cluster? I know there is a hierarchy for the servers, but I wasn't sure to what extent the OS affects the application. With the upgrade, I might have a RHEL 8 indexer cluster manager while the indexers themselves are on RHEL 7, and a RHEL 8 SH cluster manager while the search heads may be on RHEL 7. How the in-place upgrade goes will determine how many servers I upgrade at once. These are all Azure or VMware servers. Would any search functionality for any of the search peers be affected by differing OS versions? Thank you for any clarity.
Hello, colleagues. I'm using an independent stream forwarder installed on Ubuntu 22.04.05 as a service. After updating to 8.1.5, bytes_in, bytes_out, packets_in, and packets_out are always equal to zero. If I stop the service, change /opt/streamfwd/bin/streamfwd from 8.1.5 back to 8.1.3, and start the service again, everything is OK. Has anybody run into this? Thanks.

{
  app_tag: PANA-L7-PEN : xxxxxxxxx
  bytes_in: 0
  bytes_out: 0
  dest_ip: x.x.x.x
  dest_port: 55438
  endtime: 2025-05-28T15:01:26Z
  event_name: netFlowData
  exporter_ip: x.x.x.x
  exporter_time: 2025-May-28 15:01:26
  exporter_uptime: 3148584010
  flow_end_reason: 3
  flow_end_rel: 0
  flow_start_rel: 0
  fwd_status: 64
  input_snmpidx: 168
  netflow_elements: [ ... ]
  netflow_version: 9
  observation_domain_id: 1
  output_snmpidx: 127
  packets_in: 0
  packets_out: 0
  protoid: 6
  selector_id: 0
  seqnumber: 2278842767
  src_ip: x.x.x.x
  src_port: 9997
  timestamp: 2025-05-28T15:01:26Z
  tos: 0
}
Hello, colleagues. I am using an independent streamfwd installed as a service on Ubuntu 22.04.05. Streamfwd gets its settings from the Stream app and receives the list of indexers. Everything is OK and streamfwd balances data across all indexers, but if I push a bundle from the master node to the indexer cluster and the indexers restart, load balancing breaks: after that, streamfwd sends data to just one indexer. I can't find how to fix this. Please help. Thanks.
Hi, I have a scenario where I am getting data from one index with two other filters, like: index=index_logs_App989 customer="*ABC*" org IN ("Provider1","Provider2"). I have one field with date values like the following: Tue 27 May 2025 15:26:23:702 EDT. From this I have to drop the time part and convert it into a date like 05/27/2025, so that I can aggregate by date/day only. Any guidance please?
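A minimal sketch of one way to do this, assuming the field is called event_time (substitute your actual field name); the %3N for the millisecond part and %Z for the timezone abbreviation are assumptions about how strptime handles this exact layout:

| eval event_date = strftime(strptime(event_time, "%a %d %b %Y %H:%M:%S:%3N %Z"), "%m/%d/%Y")
| stats count by event_date

For Tue 27 May 2025 15:26:23:702 EDT this would produce event_date = 05/27/2025, which can then be used as the grouping field.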
Hello, colleagues. After upgrading to Splunk Stream 8.1.5, it stopped parsing bytes_in, bytes_out, packets_in, and packets_out; they are always equal to zero.

{
  app_tag: PANA-L7-PEN : xxxxxxxxxxxxx
  bytes_in: 0
  bytes_out: 0
  dest_ip: x.x.x.x
  dest_port: xxx
  endtime: 2025-05-28T15:01:26Z
  event_name: netFlowData
  exporter_ip: x.x.x.x
  exporter_time: 2025-May-28 15:01:26
  exporter_uptime: 3148584010
  flow_end_reason: 3
  flow_end_rel: 0
  flow_start_rel: 0
  fwd_status: xx
  input_snmpidx: xx
  netflow_elements: [ ... ]
  netflow_version: 9
  observation_domain_id: 1
  output_snmpidx: xxx
  packets_in: 0
  packets_out: 0
  protoid: 6
  selector_id: 0
  seqnumber: 2278842767
  src_ip: x.x.x.x
  src_port: 9997
  timestamp: 2025-05-28T15:01:26Z
  tos: 0
}

I am using an independent stream forwarder with streamfwd installed as a service on Ubuntu 22.04.5. If I stop the service, replace the streamfwd binary with the old 8.1.3 version, and start the service again, everything is OK. Has anybody run into this? Thanks!
Dear everyone, I have a Splunk cluster (2 indexers) with replication factor = 2 and search factor = 2. I need to size an index A in indexes.conf, and I found this useful website: https://splunk-sizing.soclib.net/ My concern with this website is how to calculate the "Daily Data Volume" (average uncompressed raw data). How can I calculate this? Can I use an SPL command on the Search Head to calculate it? Thanks & best regards.
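One common approach is to read the license usage log, since it records the uncompressed raw volume ingested per index per day. A minimal sketch, assuming the index is named A and that the _internal license_usage.log data is searchable from your search head:

index=_internal source=*license_usage.log* type=Usage idx=A
| eval GB = b/1024/1024/1024
| timechart span=1d sum(GB) as daily_GB
| stats avg(daily_GB) as avg_daily_GB max(daily_GB) as peak_daily_GB

daily_GB is the uncompressed raw volume indexed per day for that index, and avg_daily_GB is the figure the sizing calculator asks for.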
Hello, I have an index (A) on an indexer and another index (B) on a search head (we are making it standalone). I want to send data from index A to index B. How do I proceed? I have admin rights.
Apigee API Management Monitoring App for Splunk | Splunkbase. The "Visit Site" link gives this message: "404 Sorry this page is not available. The link you followed may be broken, or the page may have been removed." Are there any alternative ways to download it?
I need to write a regex for the time and for the event shown in the image below.
Context: We have Splunk ES set up on-prem. We want to extract the required payloads through queries, generate scheduled reports (e.g., daily), and export these to a cloud location for ingestion by Snowflake. Requirements: 1. Is there any way we can have an API connection with Snowflake where it calls an API to extract specific logs from a specific index in Splunk? 2. If #1 is not possible, can we at least run queries and send that report to a cloud repository for Snowflake to extract from? TIA
Hi everyone, I'm developing an app that uses a custom configuration file, which I'm updating using the Splunk JavaScript SDK and REST API calls. In my lab setup (Splunk 9.4.1), everything works as expected: the custom config file is replicated correctly. However, when I deploy the app in our production environment (also running Splunk 9.4.1), changes to the configuration file do not replicate to the other search heads. I used btool to verify that the file is not excluded from replication. Has anyone encountered a similar issue? What steps can I take to investigate and debug this? What specific logs or configurations should I check?
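As a starting point for debugging, search head cluster configuration replication activity is logged in splunkd.log, so one first check is to look for replication warnings or errors there. A minimal sketch; the component name wildcard and the host placeholder are assumptions, so adjust them to your environment:

index=_internal sourcetype=splunkd host=<your_search_head> component=ConfReplication* (log_level=ERROR OR log_level=WARN)
| stats count by component log_level

Comparing the results between the lab and production search heads may show whether the replication subsystem is rejecting or skipping the custom configuration file.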