Hey, I am facing the following issue when sending data using an HEC token. The connection has been established with no issue, but I am getting the following error message with HEC. Any recommendations to resolve this issue would be highly appreciated. Thank you!

In inputs.conf the following is also there:

[http]
disabled = 0
enableSSL = 0
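For reference, a minimal HEC setup in inputs.conf on the receiving instance (indexer or heavy forwarder) usually looks something like the sketch below; the token name, token value, index, and sourcetype here are placeholders, not taken from the post above.

# inputs.conf -- global HEC settings plus one hypothetical token stanza
[http]
disabled = 0
enableSSL = 0

[http://my_hec_token]
disabled = 0
token = 11111111-2222-3333-4444-555555555555
index = main
sourcetype = my_sourcetype

One thing worth checking: with enableSSL = 0 the endpoint has to be called over http:// (not https://), on port 8088 by default, which is a common cause of "connection works but sends fail" symptoms.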
Hello Splunkers!! I want my _time to be extracted so that it matches the time field in the events. This is token-based data: we are using an HTTP token to fetch the data from Kafka into Splunk, and all the default settings are under the search app, including inputs.conf and props.conf. I have tried the props in the second screenshot under the search app, but nothing works. Please help me with what to do to get _time to match the time field. I have applied the settings below, but nothing works for me.

CHARSET = UTF-8
AUTO_KV_JSON = false
DATETIME_CONFIG =
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6NZ
TIME_PREFIX = \"time\"\:\"
category = Custom
pulldown_type = true
TIMESTAMP_FIELDS = time
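For comparison, here is a minimal sketch of the same idea, assuming a placeholder sourcetype my_kafka_json and timestamps like 2024-01-01T12:00:00.123456Z. With INDEXED_EXTRACTIONS = json, TIMESTAMP_FIELDS (rather than TIME_PREFIX) drives the timestamp, and the stanza has to be deployed to the instance that actually parses the data (the indexer or heavy forwarder receiving the HEC traffic), not only under the search app on the search head. Note that if the data arrives via the HEC event endpoint, the timestamp is normally taken from the event's time metadata instead of props parsing.

# props.conf -- placeholder sourcetype; deploy where the data is parsed
[my_kafka_json]
INDEXED_EXTRACTIONS = json
KV_MODE = none
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false
TIMESTAMP_FIELDS = time
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6NZ
# TZ is an assumption -- the trailing Z suggests the timestamps are in UTC
TZ = UTC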
Hi Guys, I want to provide support for Python 3.11 and Python 3.9 for my Splunk app on Splunk Enterprise and Splunk Cloud. I don't want to publish multiple versions of the same app, one packaged with py3.9-compatible libraries and the other with py3.11-compatible libraries. I can include my dependencies in two folders, lib3.7 and lib3.11. And then, during installation, is there any way I can check which Python version is available and set which lib folder the app should use? Has anyone done something similar before? Will this be achievable? Regards, Anmol Batra
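One common pattern (a sketch, not an official Splunk mechanism) is to pick the lib folder at import time inside the app's bin scripts rather than at installation time; the folder names below mirror the ones mentioned above and are assumptions.

# bin/select_libs.py -- pick the vendored lib folder matching the running interpreter
import os
import sys

# App root is one level above bin/
APP_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))

def add_lib_path():
    """Prepend the lib folder matching the running Python version to sys.path."""
    # Folder names are assumptions taken from the post (lib3.7 / lib3.11).
    if sys.version_info >= (3, 11):
        lib_dir = os.path.join(APP_ROOT, "lib3.11")
    else:
        lib_dir = os.path.join(APP_ROOT, "lib3.7")
    if lib_dir not in sys.path:
        sys.path.insert(0, lib_dir)

add_lib_path()

Each modular input or script under bin/ would then import select_libs before importing the vendored dependencies.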
Hello members, I'm trying to integrate Splunk with the Group-IB DRP product, but I'm facing issues with the application. I entered my API key and the username of the SSO dashboard, and after redirection there are no results from the index or any information related to the Group-IB product. I installed this app: https://splunkbase.splunk.com/app/7124 I need to fix the problem as soon as possible.
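As a first troubleshooting step, it can help to check whether the app's inputs are logging errors and whether any events reached an index at all; the source pattern below is only a guess, since the app's actual log file name is not known.

index=_internal log_level=ERROR source=*group*ib*
| stats count by source

and, to confirm whether anything was indexed in the last day:

| tstats count where index=* earliest=-24h by index, sourcetype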
Hi Splunk Community, I need advice on the best approach for streaming logs from Splunk Cloud Platform to an external platform. The logs are already being ingested into Splunk Cloud from various applications used by my client's organization. Now, the requirement is to forward or stream these logs to an external system for additional processing and analytics. #Splunk cloud Thank you  Nav
Hello, let me explain my architecture. It is a multisite cluster (3 sites), with 2 indexers, 1 SH, and 2 syslog servers (UF installed) in each site; 1 deployment server and 1 deployer overall; and 2 cluster managers (1 standby). As of now, network logs are sent to our syslog servers and the UF forwards the data to the indexers. We will configure the logs with the help of the FQDN. For example, we have an application X which may or may not contain an FQDN. If it contains the FQDN, it will go to that app's index; otherwise it will go to a different index. (I wrote these props and transforms on the cluster manager.) In the deployment server's inputs.conf we just gave the log path along with a different index (the one specified in the transforms on the cluster manager). So all the logs flow in as configured, and the props and transforms pushed from the cluster manager filter the data. Is there any other way to write these configurations other than this? Here are the props and transforms from the cluster manager:

cat props.conf
[f5_waf]
TIME_PREFIX = ^
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %b %d %H:%M:%S
SEDCMD-newline_remove = s/\\r\\n/\n/g
LINE_BREAKER = ([\r\n]+)[A-Z][a-z]{2}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s
SHOULD_LINEMERGE = False
TRUNCATE = 10000
# Leaving PUNCT enabled can impact indexing performance. Customers can
# comment this line if they need to use PUNCT (e.g. security use cases)
ANNOTATE_PUNCT = false
TRANSFORMS-0_fix_hostname = syslog-host
TRANSFORMS-1_extract_fqdn = f5_waf-extract_fqdn
TRANSFORMS-2_fix_index = f5_waf-route_to_index

cat transforms.conf
# FIELD EXTRACTION USING A REGEX
[f5_waf-extract_fqdn]
SOURCE_KEY = _raw
REGEX = Host:\s(.+)\n
FORMAT = fqdn::$1
WRITE_META = true

# Routes the data to a different index -- this must be listed in a TRANSFORMS-<name> entry.
[f5_waf-route_to_index]
INGEST_EVAL = indexname=json_extract(lookup("fqdn_indexname_mapping.csv", json_object("fqdn", fqdn), json_array("indexname")), "indexname"), index=if(isnotnull(indexname), indexname, index), fqdn:=null(), indexname:=null()

cat fqdn_indexname_mapping.csv
fqdn                                indexname
selenium.systems.us.fed             xxx_app_selenium1
v-testlab-service1.systems.us.fed   xxx_app_testlab_service1

I have gone through the documentation, but I am just asking whether there are any better alternatives.
What exactly do false positives, false negatives, true positives, and true negatives mean? How do we identify them in Splunk, can we trigger on them, and how are they useful to us when monitoring with Splunk? Please explain.
The scenario is that there are 100 endpoints sending logs to their internal in-house syslog server. We need to deploy Splunk here so that the admin will be able to monitor logs in Splunk Enterprise. Both the Universal Forwarder and Splunk Enterprise should be present on the same syslog server. I am here for the steps I need to follow for this deployment. Below are the steps I am planning to take (a configuration sketch follows after the list):
1.) First, install Splunk Enterprise on the server, and then install the Universal Forwarder.
2.) During the installation of the Universal Forwarder, choose local system rather than domain deployment; leave the deployment server blank, and for the receiving server enter the syslog server's IP address and port number, which I can get by running ipconfig in cmd.
3.) Download the Microsoft add-on from Splunkbase on the same server.
4.) Extract the Splunkbase file, create a local folder under SplunkForwarder > etc, paste the inputs.conf file there, and make the required changes.
5.) Then I will be able to get all the syslog server's logs in Splunk Enterprise.
Please correct me, or add any other steps I need to follow.
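For step 4, a minimal inputs.conf monitor stanza on the Universal Forwarder might look like the sketch below; the app folder name, log path, index, and sourcetype are placeholders, since the actual syslog file locations were not given.

# etc\apps\my_syslog_inputs\local\inputs.conf on the Universal Forwarder (placeholder app name)
[monitor://C:\SyslogData\*.log]
disabled = 0
index = syslog
sourcetype = syslog

An outputs.conf pointing at the Splunk Enterprise receiving port (9997 by default) is also needed on the forwarder if it was not set during installation, and the matching index has to exist on the Splunk Enterprise side.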
Hello Splunkers, I'm working on integrating a Microsoft Office 365 tenant hosted in China (managed by 21Vianet) with Splunk Cloud. I am using the Splunk Add-on for Microsoft Office 365 but need help configuring it specifically for the China tenant. I understand that the endpoints for China are different from the global Microsoft 365 environment. For instance:
Graph API Endpoint: https://microsoftgraph.chinacloudapi.cn
AAD Authorization Endpoint: https://login.partner.microsoftonline.cn
Could someone provide step-by-step instructions or point me to the necessary configuration files (like inputs.conf) or documentation to correctly set this up for:
Subscription to O365 audit logs
Graph API integration
Event collection
Additionally, if there are any known challenges or limitations specific to the China tenant setup, I'd appreciate insights on those as well. Thank you in advance for your guidance! Tilakram
How do I set up Splunk DB Connect so I only get new log information every time I do a query instead of pulling the whole database each time? I've got the connection working and I'm getting data in, but every time the input runs it pulls the entire database again instead of just pulling in the newest data. How do I limit what it pulls?
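What you are describing is usually handled with a rising-column input in DB Connect (as opposed to a batch input): the input stores a checkpoint value after each run, and the query only returns rows above it. Below is a sketch of the query pattern, assuming a table named event_log with an auto-incrementing id column; both names are placeholders.

-- DB Connect substitutes ? with the stored checkpoint value on each run
SELECT id, event_time, message
FROM event_log
WHERE id > ?
ORDER BY id ASC

In the input definition you would then set the input type to Rising and choose id as the rising column; a strictly increasing timestamp column can be used instead if there is no numeric key.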
Hi Everyone, the issue with the code below appears to be that the value of the {report_id} variable is not being passed correctly to the download_report function, in particular this line:

url = f"https://example_url/{report_id}/download"

If I hardcode the URL with a valid token instead of the {report_id} variable, the report gets downloaded as expected. Any help would be much appreciated! Full code below:

import requests

def collect_events(helper, ew):
    """
    Main function to authenticate, generate report ID, and download the report.
    """
    username = helper.get_arg('username')
    password = helper.get_arg('password')
    auth_url = "https://example_url/auth"
    headers = {
        'Content-Type': 'application/x-www-form-urlencoded',
    }
    data = {
        'username': username,
        'password': password,
        'token': 'true',
        'permissions': 'true',
    }
    try:
        # Step 1: Authenticate to get the JWT token
        auth_response = requests.post(auth_url, headers=headers, data=data)
        if auth_response.status_code == 201:
            jwt_token = auth_response.text.strip()  # Extract and clean the token
            if jwt_token:
                # Log and create an event for the JWT token
                event = helper.new_event(data=f"JWT Token: {jwt_token}")
                ew.write_event(event)
                # Step 2: Generate the report ID
                report_id = generate_report_id(jwt_token, helper)
                if report_id:
                    # Log and create an event for the report ID
                    event = helper.new_event(data=f"Report ID: {report_id}")
                    ew.write_event(event)
                    # Step 3: Download the report
                    file_path = download_report(jwt_token, report_id, helper)
                    if file_path:
                        helper.log_info(f"Report successfully downloaded to: {file_path}")
                    else:
                        raise ValueError("Failed to download the report.")
                else:
                    raise ValueError("Failed to generate report ID.")
            else:
                raise ValueError("JWT token not found in response.")
        else:
            raise ValueError(f"Failed to get JWT: {auth_response.status_code}, {auth_response.text}")
    except Exception as e:
        helper.log_error(f"Error in collect_events: {e}")


def generate_report_id(jwt_token, helper):
    url = "https://example_url"
    headers = {
        "accept": "application/json",
        "Authorization": f"Bearer {jwt_token}"
    }
    params = {
        "havingQuery": "isSecurity: true",
        "platform": "Windows"
    }
    try:
        response = requests.get(url, headers=headers, params=params)
        if response.status_code in (200, 201):
            report_data = response.json()
            report_id = report_data.get('reportId')
            if report_id:
                return report_id
            else:
                raise ValueError("Report ID not found in response.")
        else:
            raise ValueError(f"Failed to generate report ID: {response.status_code}, {response.text}")
    except Exception as e:
        helper.log_error(f"Error while generating report ID: {e}")
        raise ValueError(f"Error while generating report ID: {e}")


def download_report(jwt_token, report_id, helper):
    """
    Downloads the report using the JWT token and report ID.
    """
    url = f"https://example_url/{report_id}/download"
    headers = {
        "accept": "application/json",
        "Authorization": f"Bearer {jwt_token}",
    }
    try:
        # Make the request to download the report
        response = helper.send_http_request(url, method="GET", headers=headers, verify=True)
        if response.status_code in (200, 201):
            # Save the report content to a file
            sanitized_report_id = "".join(c if c.isalnum() else "_" for c in report_id)
            file_path = f"C:\\Program Files\\Splunk\\etc\\apps\\splunk_app_addon-builder\\local\\temp\\{sanitized_report_id}.csv.gz"
            with open(file_path, "wb") as file:
                file.write(response.content)
            helper.log_info(f"Report downloaded successfully to: {file_path}")
            return file_path
        else:
            raise ValueError(f"Failed to download report: {response.status_code}, {response.text}")
    except Exception as e:
        helper.log_error(f"Error while downloading report: {e}")
        raise ValueError(f"Error while downloading report: {e}")
We have a user ID and we are looking to find out what Splunk has collected for it. What is the search that I should use?
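A simple starting point is to search all indexes you can read for the raw value over a limited time range; jsmith below is a placeholder for the actual user ID.

index=* earliest=-24h "jsmith"
| stats count by index, sourcetype

If your data has an extracted field (for example user, src_user, or Account_Name), searching on that field instead of the raw string is usually faster and more precise.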
Installed the app yesterday on our cloud instance (Victoria), and I can't figure out what index it sends data to or where that is configured. The setup UI never asks for the index. Also, I can't find any internal logs for the app to understand what may be going on. It feels like this was created as an app, whereas maybe it should have been an add-on built in the Add-on Builder? Any help would be greatly appreciated. Josh
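To see where (or whether) the app's data is landing, and whether it is logging errors internally, something like the following can help; the time range is arbitrary and there is no source filter because the app's log file name is not known.

| tstats count where index=* earliest=-24h by index, sourcetype

and, for internal errors around the same time:

index=_internal log_level=ERROR earliest=-24h
| stats count by source, component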
Hi all, let me explain my infrastructure here. We have 6 dedicated syslog servers which forward data from network devices to a Splunk indexer cluster (6 indexers), plus a cluster manager and 3 search heads. It's a multisite cluster (2 indexers, 1 SH, and 2 syslog servers per site to receive network data), with 1 deployment server and 1 deployer overall. The application team will provide an FQDN, and we need to map it to a new index by creating that index and assigning it to the application team. Can you please let me know how to proceed with this data ingestion?
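One common pattern is index-time routing: a transform matches the FQDN in the event and overrides the target index, and the config is pushed to the indexers via the cluster manager (or applied on a heavy forwarder). The stanza names, sourcetype, regex, and index below are placeholders.

# props.conf -- placeholder sourcetype
[network:syslog]
TRANSFORMS-route_by_fqdn = route_app_fqdn_to_index

# transforms.conf -- route events whose raw text contains the application's FQDN
[route_app_fqdn_to_index]
REGEX = app1\.example\.com
DEST_KEY = _MetaData:Index
FORMAT = app1_index

The target index (app1_index here) must already exist on the indexers, and the application team can then be given access to it via a role.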
Dear Splunkers, I am running into an issue where the Splunk bar is empty in some views. As long as I am navigating in my app ([splunk-address]/app/myapp), everything is normal: the Splunk bar appears on top of my view, and disappears when I use hideSplunkBar=true. My problem is that when I click on any element of the settings page in the Settings > Knowledge category (red square in the picture), the bar is totally empty and I have the following error in the console: Uncaught TypeError: Splunk.Module is undefined. <anonymous> [splunk address]/en-US/manager/system/advancedsearch. The problem does not appear in the other categories of Settings (green square in the picture). I tried adding hideChrome=false and hideSplunkBar=false at the end of the URL, but it didn't do anything. I tried searching for the advancedsearch folder but didn't manage to find it. Has anyone already encountered this problem or knows how to solve it? [Update]: After more investigation I found out that the problem also occurred on Splunk version 9.1.0.1 and occurs on the views that use the template [splunk_home]/.../templates/layout/base.html. Thank you in advance,
Hello, I am facing a strange issue with the Splunk Forwarder where on some servers of the same role the CPU usage is 0-3% while on the others it is around 15%. It doesn't sound bad at first glance, but it did cause us issues with deployment, and such behavior is dangerous for live services if it grows. It started around 3 weeks ago with 9.3.0 installed on Windows Server 2019 VMs with 8 CPU cores and 24 GB RAM. I updated the forwarder to 9.3.1 and the behavior is the same. For example, we have 8 servers with the same setup and apps running on them; traffic to them is load balanced and very similar, and the number and size of log files are also very similar. 5 servers are affected, 3 are not. All of them have 10 inputs configured, of which 4 are perfmon inputs (CPU, RAM, disk space, and Web Services) and 6 are inputs monitoring around 40 log files. Any suggestion what to check to understand what is happening?
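One place to start is the forwarder's own metrics.log (forwarded to _internal by default), which breaks CPU time down by pipeline and processor, so an affected host can be compared against an unaffected one; hostA and hostB below are placeholders.

index=_internal source=*metrics.log group=pipeline (host=hostA OR host=hostB)
| stats sum(cpu_seconds) AS cpu_seconds by host, name, processor
| sort - cpu_seconds

If one processor (for example a specific monitor or perfmon input) dominates on the affected hosts only, that narrows down which input or file set to investigate.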
Hi, I'm new to Splunk DB Connect. We have Splunk on-prem and are trying to pull data from Snowflake audit logs and push it to Cribl.io (for log optimization and reducing log size). As Cribl.io doesn't have a connector for Snowflake (and it is not on the near-term roadmap), I am wondering if I can use Splunk DB Connect to read the data from Snowflake and send it to Cribl.io, which would then send it on to the destination, i.e. Splunk (for log monitoring and alerting). Question: would this be a "double hop" to Splunk, and if yes, would any Splunk charges apply while Splunk DB Connect reads from Snowflake and sends to Cribl.io? Thank you! Avi
I'm trying to transform an error log. Below is a sample log (nginx_error):

2024/11/15 13:10:11 [error] 4080#4080: *260309 connect() failed (111: Connection refused) while connecting to upstream, client: 210.54.88.72, server: mpos.mintpayments.com, request: "GET /payment-mint/cnpPayments/v1/publicKeys?callback=jQuery360014295356911736334_1731369073329&X-Signature=plkb810sFSSSIbASLb818BMXxgtUM76QNvhI%252FBA%253D&X-Timestamp=1731368881376&X-ApiKey=CSSSAPXXXXXXPxmO7kjMi&X-CompanyToken=d1111e8lV1mpvljiCD2zRgEEU121p&_=1731369073330 HTTP/1.1", upstream: "https://10.20.3.59:28076//cnpPayments/v1/publicKeys?callback=jQuery360014295356911736334_1731369073329&X-Signature=plkb810sFY3jmET4IbASLb818BMXxgtUM76QNvhI%252FBA%253D&X-Timestamp=1731368881376&X-ApiKey=CNPAPIIk7elIMDTunrIGMuXPxmO7kjMi&X-CompanyToken=dX6E3yDe8lV1mpvljiCD2zRgEEU121p&_=173123073330", host: "test.mintpayments.com", referrer: "https://vicky9.mintpayments.com/testing??asd

We are trying to ensure that:
1) GET query parameters are not logged
2) The referrer does not contain the query string

I have updated my config as below.

[root@dev-web01 splunkforwarder]# cat ./etc/system/local/props.conf
[source::///var/log/devops/nginx_error.log]
TRANSFORMS-sanitize_referer = remove_get_query_params, remove_referer_query

[root@dev-web01 splunkforwarder]# cat ./etc/system/local/transforms.conf
[remove_get_query_params]
REGEX = (GET|POST|HEAD) ([^? ]+)\?.*
FORMAT = $1 $2
DEST_KEY = _raw
REPEAT_MATCH = true

[remove_referer_query]
REGEX = referrer: "(.*?)\?.*"
FORMAT = referrer: "$1"
DEST_KEY = _raw
REPEAT_MATCH = true

I verified that the regex is correct, and when I run the commands below to list the configuration, the stanzas are present:

/opt/splunkforwarder/bin/splunk btool transforms list --debug
/opt/splunkforwarder/bin/splunk btool props list --debug

Still, I can see no transformation in the logs. What could be the issue here? We are using a custom splunkforwarder in our environment.
Hello, there is an app for Aruba EdgeConnect - https://splunkbase.splunk.com/app/6302. Is there any documentation on how to get the logs ingested into Splunk from Aruba EdgeConnect?
Hello, I need some help with an Imperva to Splunk Cloud integration. I am using the Splunk Add-on for AWS on my cloud SH, and from there I configured the Imperva account using the Key ID and Secret ID with the Imperva S3 bucket. For the inputs I am using Incremental S3. The logs are coming in to Splunk Cloud, but there are some gaps: I can see that some logs are available in AWS S3, but somehow those are not being ingested into Splunk Cloud. I am not getting any help online, which is the reason I am posting the question here. Please advise. Thank you.