All Topics


I have an index with 7 sources, of which I use 4. The alert writes its results to a lookup file as its alert action, and the search is written something like this:

index=my_index source=source1 OR source=source2 OR source=source3 OR source=source4
| stats ... | eval ... | table ... etc.

I want to configure the alert to run only when all four sources are present. I tried doing this, but the alert isn't triggering even when all 4 sources are present. Please help me with how to configure this.
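A common pattern for this is to count the distinct sources in the result set and gate the rest of the search on that count; a minimal sketch, where the final stats is a placeholder for the commands in the question:

index=my_index source=source1 OR source=source2 OR source=source3 OR source=source4
| eventstats dc(source) AS source_count
| where source_count = 4
| stats count by source

eventstats attaches the distinct count to every event without discarding any, so the where clause empties the results whenever a source is missing and the alert's default "number of results > 0" trigger condition never fires.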
Hi Team, I'm trying to assign a custom event timestamp extracted from the raw data instead of using the current time as the event time. To achieve this I created a sourcetype with the following settings in Splunk Web after testing it in a lower environment, but in production it is not functioning as expected.

Raw data:

2024-11-18 09:20:10.187, STAGE_INV_TXNS_ID="xxxxxxxxx", LOC="xxxxxxx", STORE_NAME="xxxxxxx", STORE_PCODE="xxxxxxxxx", TRAN_CODE="xxxx", TRANS_TYPE="xxxxxxx", TRAN_DATE_TIME="2024-11-18 09:09:27", LAST_UPDATE_USER="xxxxxx"
2024-11-18 09:20:10.187, STAGE_INV_TXNS_ID="xxxxxxxxx", LOC="xxxxxxx", STORE_NAME="xxxxxxx", STORE_PCODE="xxxxxxxxx", TRAN_CODE="xxxx", TRANS_TYPE="xxxxxxx", TRAN_DATE_TIME="2024-11-18 09:09:27", LAST_UPDATE_USER="xxxxxx"
2024-11-18 09:20:10.187, STAGE_INV_TXNS_ID="xxxxxxxxx", LOC="xxxxxxx", STORE_NAME="xxxxxxx", STORE_PCODE="xxxxxxxxx", TRAN_CODE="xxxx", TRANS_TYPE="xxxxxxx", TRAN_DATE_TIME="2024-11-18 09:09:28", LAST_UPDATE_USER="xxxxxxx"
2024-11-18 09:20:10.187, STAGE_INV_TXNS_ID="xxxxxxxxx", LOC="xxxxxxx", STORE_NAME="xxxxxxx", STORE_PCODE="xxxxxxxxx", TRAN_CODE="xxxx", TRANS_TYPE="xxxxxxx", TRAN_DATE_TIME="2024-11-18 09:09:30", LAST_UPDATE_USER="xxxxx"

I want the timestamp in the TRAN_DATE_TIME field to be the event timestamp. We are pulling this data from a database using DB Connect. Could you please help us understand what's going wrong and how it can be corrected?
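Assuming the events arrive as the raw text shown above, a props.conf sketch that anchors the timestamp on the TRAN_DATE_TIME field instead of the leading load time would look like this (the sourcetype name is hypothetical):

[stage_inv_txns]
TIME_PREFIX = TRAN_DATE_TIME="
MAX_TIMESTAMP_LOOKAHEAD = 19
TIME_FORMAT = %Y-%m-%d %H:%M:%S

One caveat: DB Connect inputs can also designate a timestamp column directly in the input configuration, and a timestamp set there is applied before any props-based extraction, so it is worth checking which layer is stamping the events in production.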
Dear Splunkers, while tuning Splunk Enterprise we are required to change every connection between Splunk instances from IP address to domain name. Everything in server.conf is done except this. So, is it possible to change these peer URIs from IP address to domain name, and where can we find this configuration? Thanks & best regards, Benny
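If the peer URIs in question are the addresses the indexer peers advertise to the cluster manager, each peer can register explicit names in the [clustering] stanza of its own server.conf; a sketch, assuming hypothetical host names:

[clustering]
mode = peer
manager_uri = https://cm.example.com:8089
register_replication_address = idx1.example.com
register_search_address = idx1.example.com
register_forwarder_address = idx1.example.com

The register_* settings control the address other components use to reach the peer; without them, a peer typically advertises the IP address it bound to, which is why the manager keeps showing IPs.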
I want to import ADAudit logs into Splunk but I don't know how. The important thing is that I want to start from the oldest logs, not just from now on.
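Assuming the ADAudit logs are files on disk, a monitor input reads every existing file from the beginning by default, which covers the oldest-first requirement as long as nothing excludes older files; a sketch with hypothetical path, index, and sourcetype:

[monitor://D:\ADAuditPlus\logs\*.log]
disabled = 0
index = adaudit
sourcetype = adaudit:log
# Do not set ignoreOlderThan, or files older than that window will never be read.

Create the target index first, then restart the forwarder; it will work through the backlog before tailing new writes.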
Background: the designed Windows log flow is Splunk Universal Forwarder agent -> Splunk Heavy Forwarder -> Splunk Indexer. The paths are monitored with inputs.conf on the Universal Forwarder like this:

[monitor://D:\test\*.csv]
disabled=0
index=asr_info
sourcetype=csv
source=asr:report
crcSalt=<SOURCE>

The example content for one of the CSV files is below:

cn,comment_id,asr_number,created_by,created_date
zhy,15,2024-10-12-1,cc,2024-10-28 18:10
bj,10,2024-09-12-1,cc,2024-09-12 13:55

For the 2 indexed rows, the field extractions are good except _time: for the first row _time is 10/12/24 6:10:00.000 PM, and for the second row _time is 9/12/24 1:55:00.000 PM.

Question: how do I make _time the real ingestion time instead of a value guessed from the row content? (I tried DATETIME_CONFIG = CURRENT on both the HF and the indexers, in props like this:

[source::asr:report]
DATATIME_CONFIG = CURRENT

but it does not work.)
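Two details may matter here; this is a hedged pointer, not a confirmed diagnosis. First, the setting name is DATETIME_CONFIG, while the stanza quoted above reads DATATIME_CONFIG, which Splunk would simply ignore. Second, because sourcetype=csv is a structured sourcetype with INDEXED_EXTRACTIONS, parsing and timestamping happen on the Universal Forwarder itself, so the override belongs in props.conf on the UF rather than on the HF or indexers:

# props.conf on the Universal Forwarder
[source::asr:report]
DATETIME_CONFIG = CURRENT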
Hey, I am facing the following issue when sending data using a HEC token. The connection is established with no issue, but I get the following error message from HEC. Any recommendations to resolve this issue will be highly appreciated. Thank you!

[http]
disabled = 0
enableSSL = 0

is also there.
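When a token misbehaves, a quick way to isolate the problem is to post a test event directly; a sketch assuming the default collector port 8088 and SSL disabled, matching the [http] stanza above (host and token are placeholders):

curl http://splunk-host:8088/services/collector/event \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": "hec smoke test", "sourcetype": "manual"}'

A healthy endpoint answers {"text":"Success","code":0}; any other code in the response body usually names the failing piece, such as an invalid or disabled token or a missing target index.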
Hello Splunkers!! I want my _time to be extracted so that it matches the time field in the events. This is token-based data: we are using an HTTP token to fetch the data from Kafka into Splunk, and all the default settings are under the search app (including inputs.conf and props.conf). I have tried the props below under the search app, but nothing works. Please help me with what to do to get _time to match the time field. I have applied the settings below, but nothing works for me:

CHARSET = UTF-8
AUTO_KV_JSON = false
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6NZ
TIME_PREFIX = \"time\"\:\"
category = Custom
pulldown_type = true
TIMESTAMP_FIELDS = time
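One hedged observation on the stanza above: once INDEXED_EXTRACTIONS = json is set, structured parsing drives timestamp assignment, so TIMESTAMP_FIELDS and TIME_FORMAT are the operative pair and TIME_PREFIX is not used; a trimmed sketch with a hypothetical sourcetype name:

[kafka_json]
INDEXED_EXTRACTIONS = json
KV_MODE = none
TIMESTAMP_FIELDS = time
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6NZ

It is also worth checking which HEC endpoint the Kafka producer uses: events sent to /services/collector/event carry their timestamp in the envelope's own time field and largely bypass props-based timestamp extraction, while /services/collector/raw goes through normal parsing.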
Hi Guys, I want to provide support for both Python 3.11 and Python 3.9 for my Splunk app on Splunk Enterprise and Splunk Cloud. I don't want to publish multiple versions of the same app, one packaged with py3.9-compatible libraries and another with py3.11-compatible libraries. I can include my dependencies in two folders, lib3.7 and lib3.11. Then, at installation time, is there any way to check which Python version is available and set which lib folder the app uses? Has anyone done something similar before? Will this be achievable? Regards, Anmol Batra
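One approach that avoids any install-time step entirely is to pick the folder at import time inside each script entry point; a minimal sketch, assuming the bundled folders are named lib3.7 and lib3.11 as in the question (adjust to whatever the app actually ships):

import os
import sys

# Resolve the app root relative to this script (e.g. <app>/bin/<script>.py)
APP_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

# Choose the dependency folder matching the interpreter Splunk launched us with
lib_dir = 'lib3.11' if sys.version_info >= (3, 11) else 'lib3.7'
sys.path.insert(0, os.path.join(APP_ROOT, lib_dir))

Because the check runs on every invocation, a single package works on any instance, whichever Python runtime it ships.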
Hello members, I'm trying to integrate Splunk with the Group-IB DRP product, but I'm facing issues with the application. I entered my API key and the username from the SSO dashboard, and after the redirect there are no results from the index or any information related to the Group-IB product. I installed this app: https://splunkbase.splunk.com/app/7124. I need to fix the problem as soon as possible.
Hi Splunk Community, I need advice on the best approach for streaming logs from the Splunk Cloud Platform to an external platform. The logs are already being ingested into Splunk Cloud from various applications used by my client's organization. Now the requirement is to forward or stream these logs to an external system for additional processing and analytics. #Splunk cloud Thank you, Nav
Hello, let me explain my architecture. It is a multisite cluster (3 sites): 2 indexers, 1 SH, and 2 syslog servers (with UF installed) in each site, plus 1 deployment server, 1 deployer overall, and 2 cluster managers (1 standby). As of now, network logs are sent to our syslog servers and the UF forwards the data to the indexers. We route logs with the help of the FQDN. For example, we have application X, whose events may or may not contain the FQDN: if an event contains it, it goes to that app's index, otherwise it goes to a different index. (I wrote these props and transforms on the cluster manager.) In the deployment server's inputs.conf we just give the log path along with a default index (which the transforms pushed from the cluster manager can override). So all the logs flow through, and the props and transforms filter the data. Is there any other way to write these configurations? The props and transforms from the cluster manager:

cat props.conf

[f5_waf]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %b %d %H:%M:%S
SEDCMD-newline_remove = s/\\r\\n/\n/g
LINE_BREAKER = ([\r\n]+)[A-Z][a-z]{2}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s
SHOULD_LINEMERGE = False
TRUNCATE = 10000
# Leaving PUNCT enabled can impact indexing performance. Customers can
# comment this line if they need to use PUNCT (e.g. security use cases)
ANNOTATE_PUNCT = false
TRANSFORMS-0_fix_hostname = syslog-host
TRANSFORMS-1_extract_fqdn = f5_waf-extract_fqdn
TRANSFORMS-2_fix_index = f5_waf-route_to_index

cat transforms.conf

# FIELD EXTRACTION USING A REGEX
[f5_waf-extract_fqdn]
SOURCE_KEY = _raw
REGEX = Host:\s(.+)\n
FORMAT = fqdn::$1
WRITE_META = true

# Routes the data to a different index -- this must be listed in a TRANSFORMS-<name> entry.
[f5_waf-route_to_index]
INGEST_EVAL = indexname=json_extract(lookup("fqdn_indexname_mapping.csv", json_object("fqdn", fqdn), json_array("indexname")), "indexname"), index=if(isnotnull(indexname), indexname, index), fqdn:=null(), indexname:=null()

cat fqdn_indexname_mapping.csv

fqdn,indexname
selenium.systems.us.fed,xxx_app_selenium1
v-testlab-service1.systems.us.fed,xxx_app_testlab_service1

I have gone through the documentation, but I am asking whether there are any better alternatives.
What exactly do false positives, false negatives, true positives, and true negatives mean? How do we identify them in Splunk, can we trigger on them, and how are they useful to us in monitoring with Splunk? Please explain.
The scenario: there are 100 endpoints sending logs to their internal in-house syslog server. We need to deploy Splunk here so that an admin can monitor the logs in Splunk Enterprise. Both the Universal Forwarder and Splunk Enterprise should be present on the same syslog server. I am here for the steps I need to follow for this deployment. Below are the steps I am planning to take:

1.) First, install Splunk Enterprise on the server, then install the Universal Forwarder.
2.) During installation of the Universal Forwarder, choose "local system" rather than domain deployment; leave the deployment server field blank, and for the receiving server enter the syslog server's IP address (which I can get by running ipconfig in cmd) and port number.
3.) Download the Microsoft add-on from Splunkbase on the same server.
4.) Extract the Splunkbase file, create a local folder under SplunkForwarder > etc, paste the inputs.conf file there, and make the required changes.
5.) Then all of the syslog server's logs should appear in Splunk Enterprise.

Please correct me, or add any other steps I need to follow.
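For step 4, a minimal inputs.conf sketch on the forwarder side might look like this (path, index, and sourcetype are hypothetical and depend on where the syslog daemon writes its files):

[monitor://C:\syslog\logs\*.log]
disabled = 0
index = syslog
sourcetype = syslog

One caveat worth adding to the plan: Splunk Enterprise and the Universal Forwarder on the same host will both want the default management port 8089, so give one of them a different management port during installation.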
Hello Splunkers, I'm working on integrating a Microsoft Office 365 tenant hosted in China (managed by 21Vianet) with Splunk Cloud. I am using the Splunk Add-on for Microsoft Office 365 but need help configuring it specifically for the China tenant. I understand that the endpoints for China are different from the global Microsoft 365 environment. For instance:

Graph API endpoint: https://microsoftgraph.chinacloudapi.cn
AAD authorization endpoint: https://login.partner.microsoftonline.cn

Could someone provide step-by-step instructions, or point me to the necessary configuration files (like inputs.conf) or documentation, to correctly set this up for:

Subscription to O365 audit logs
Graph API integration
Event collection

Additionally, if there are any known challenges or limitations specific to the China tenant setup, I'd appreciate insights on those as well. Thank you in advance for your guidance! Tilakram
How do I set up Splunk DB Connect so that each run of a query only fetches new log records instead of pulling the whole database every time? I've got the connection working and I'm getting data in, but every time the input runs it pulls the entire database again instead of just the newest rows. How do I limit what it pulls?
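This is what a rising-column input in DB Connect is for: the input remembers a checkpoint from a monotonically increasing column and substitutes it for the ? placeholder on every run, so only rows beyond the checkpoint are fetched. A sketch, assuming a hypothetical table with an auto-incrementing id column:

SELECT * FROM app_logs
WHERE id > ?
ORDER BY id ASC

In the input definition, set the input type to Rising, pick id as the rising column, and supply an initial checkpoint value; a strictly increasing timestamp column works just as well.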
Hi Everyone, the issue with the code below appears to be with the value of the {report_id} variable not being passed correctly to the download_report function, in particular this line:

url = f"https://example_url/{report_id}/download"

If I hardcode the URL with a valid token instead of the {report_id} variable, the report gets downloaded as expected. Any help would be much appreciated! Full code below:

import requests


def collect_events(helper, ew):
    """
    Main function to authenticate, generate report ID, and download the report.
    """
    username = helper.get_arg('username')
    password = helper.get_arg('password')

    auth_url = "https://example_url/auth"
    headers = {
        'Content-Type': 'application/x-www-form-urlencoded',
    }
    data = {
        'username': username,
        'password': password,
        'token': 'true',
        'permissions': 'true',
    }

    try:
        # Step 1: Authenticate to get the JWT token
        auth_response = requests.post(auth_url, headers=headers, data=data)
        if auth_response.status_code == 201:
            jwt_token = auth_response.text.strip()  # Extract and clean the token
            if jwt_token:
                # Log and create an event for the JWT token
                event = helper.new_event(data=f"JWT Token: {jwt_token}")
                ew.write_event(event)

                # Step 2: Generate the report ID
                report_id = generate_report_id(jwt_token, helper)
                if report_id:
                    # Log and create an event for the report ID
                    event = helper.new_event(data=f"Report ID: {report_id}")
                    ew.write_event(event)

                    # Step 3: Download the report
                    file_path = download_report(jwt_token, report_id, helper)
                    if file_path:
                        helper.log_info(f"Report successfully downloaded to: {file_path}")
                    else:
                        raise ValueError("Failed to download the report.")
                else:
                    raise ValueError("Failed to generate report ID.")
            else:
                raise ValueError("JWT token not found in response.")
        else:
            raise ValueError(f"Failed to get JWT: {auth_response.status_code}, {auth_response.text}")
    except Exception as e:
        helper.log_error(f"Error in collect_events: {e}")


def generate_report_id(jwt_token, helper):
    url = "https://example_url"
    headers = {
        "accept": "application/json",
        "Authorization": f"Bearer {jwt_token}"
    }
    params = {
        "havingQuery": "isSecurity: true",
        "platform": "Windows"
    }
    try:
        response = requests.get(url, headers=headers, params=params)
        if response.status_code in (200, 201):
            report_data = response.json()
            report_id = report_data.get('reportId')
            if report_id:
                return report_id
            else:
                raise ValueError("Report ID not found in response.")
        else:
            raise ValueError(f"Failed to generate report ID: {response.status_code}, {response.text}")
    except Exception as e:
        helper.log_error(f"Error while generating report ID: {e}")
        raise ValueError(f"Error while generating report ID: {e}")


def download_report(jwt_token, report_id, helper):
    """
    Downloads the report using the JWT token and report ID.
    """
    url = f"https://example_url/{report_id}/download"
    headers = {
        "accept": "application/json",
        "Authorization": f"Bearer {jwt_token}",
    }
    try:
        # Make the request to download the report
        response = helper.send_http_request(url, method="GET", headers=headers, verify=True)
        if response.status_code in (200, 201):
            # Save the report content to a file
            sanitized_report_id = "".join(c if c.isalnum() else "_" for c in report_id)
            file_path = f"C:\\Program Files\\Splunk\\etc\\apps\\splunk_app_addon-builder\\local\\temp\\{sanitized_report_id}.csv.gz"
            with open(file_path, "wb") as file:
                file.write(response.content)
            helper.log_info(f"Report downloaded successfully to: {file_path}")
            return file_path
        else:
            raise ValueError(f"Failed to download report: {response.status_code}, {response.text}")
    except Exception as e:
        helper.log_error(f"Error while downloading report: {e}")
        raise ValueError(f"Error while downloading report: {e}")
We have a user ID and we are looking to find out what Splunk has collected for it. What is the search that I should use?
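A broad first pass is to search for the literal user ID across every index you can read and summarize where it shows up; a sketch with a hypothetical user ID (run it over a narrow time range first, since index=* is expensive):

index=* "jdoe123"
| stats count by index, sourcetype

Once you know which indexes and sourcetypes mention the user, tighten the search to the extracted field those sources actually use (user, userid, account_name, and so on).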
I installed the app yesterday on our cloud instance (Victoria) and I can't figure out which index it sends data to or where that is configured; the setup UI never asks for an index. Also, I can't find any internal logs for the app to understand what may be going on. It feels like this was created as an app, whereas maybe it should have been an add-on built with the Add-on Builder? Any help would be greatly appreciated. Josh
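When an app's own log files can't be located, the output of its scripted or modular inputs usually still lands in Splunk's internal index; a hedged sketch, with the app name as a placeholder:

index=_internal sourcetype=splunkd component=ExecProcessor "<app_name>"

Anything the input writes to stderr shows up under that component, which often reveals both whether the input is running at all and which index it is trying to write to.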
Hi all, let me explain my infrastructure here. We have 6 dedicated syslog servers which forward data from network devices to a Splunk indexer cluster (6 indexers), plus a cluster manager and 3 search heads. It's a multisite cluster (2 indexers, 1 SH, and 2 syslog servers receiving network data in each site), with 1 deployment server and 1 deployer overall. An application team will provide an FQDN, and we need to map it to a new index by creating the index and assigning it to that application team. Can you please let me know how to proceed with this data ingestion?
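One common way to implement the mapping is index-time routing on the indexer tier: match the FQDN in the event and rewrite the target index with a transform. A minimal sketch in which the sourcetype, FQDN pattern, and index name are all hypothetical:

# props.conf
[netdev_syslog]
TRANSFORMS-route_by_fqdn = route_app_x

# transforms.conf
[route_app_x]
SOURCE_KEY = _raw
REGEX = app-x\.example\.com
DEST_KEY = _MetaData:Index
FORMAT = app_x_index

Create app_x_index on the cluster (via the manager's indexes.conf bundle) before enabling the transform, or events routed to the missing index will be rejected with indexing errors.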
Dear Splunkers, I am running into an issue with the Splunk bar being empty in some views. As long as I am navigating within my app ([splunk-address]/app/myapp), everything is normal: the Splunk bar appears at the top of my view, and disappears when I use hideSplunkBar=true. My problem is that when I click on any element of the settings page in the Settings > Knowledge category (red square on the picture), the bar is totally empty and I get the following error in the console: Uncaught TypeError: Splunk.Module is undefined. <anonymous> [splunk-address]/en-US/manager/system/advancedsearch. The problem does not appear in the other categories of Settings (green square on the picture). I tried adding hideChrome=false and hideSplunkBar=false at the end of the URL, but it didn't do anything. I tried searching for the advancedsearch folder but didn't manage to find it. Has anyone already encountered this problem or knows how to solve it?

[Update]: After more investigation I found that the problem also occurred on Splunk version 9.1.0.1 and occurs on views that use the template [splunk_home]/.../templates/layout/base.html

Thank you in advance,