All Posts


inputs.conf is part of the Splunk Universal Forwarder configuration and is sent out by the Splunk Deployment Server. I don't understand the second question.  The UF does not write to log paths, except for its own (internal) logs.
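For context, a deployment app pushed by the DS usually just carries an inputs.conf like the sketch below (the path, index, and sourcetype are placeholders, not from this thread):

# inputs.conf inside a deployment app delivered to the UF (placeholder values)
[monitor:///var/log/myapp/app.log]
index = myapp
sourcetype = myapp:log
disabled = false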
Hello Splunkers, I’m working on integrating a Microsoft Office 365 tenant hosted in China (managed by 21Vianet) with Splunk Cloud. I am using the Splunk Add-on for Microsoft Office 365 but need help configuring it specifically for the China tenant. I understand that the endpoints for China are different from the global Microsoft 365 environment. For instance:
Graph API Endpoint: https://microsoftgraph.chinacloudapi.cn
AAD Authorization Endpoint: https://login.partner.microsoftonline.cn
Could someone provide step-by-step instructions or point me to the necessary configuration files (like inputs.conf) or documentation to correctly set this up for:
- Subscription to O365 audit logs
- Graph API integration
- Event collection
Additionally, if there are any known challenges or limitations specific to the China tenant setup, I’d appreciate insights on those as well. Thank you in advance for your guidance! Tilakram
edit_tcp_stream
edit_upload_and_index
input_file
search
were the needed capabilities to show the "Add Data" button under Settings in Splunk Cloud, specifically. I think the only ones you really need, though, are "edit_upload_and_index" and "search" - specifying the indexes available at the role level did limit the indexes shown in the drop-down when going through the Add Data workflow.
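For reference, a minimal sketch of how such a role could look in authorize.conf (the role name and index are hypothetical):

# authorize.conf (sketch; role and index names are placeholders)
[role_data_uploader]
edit_upload_and_index = enabled
search = enabled
srchIndexesAllowed = my_upload_index
srchIndexesDefault = my_upload_index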
Where do I need to put this inputs.conf? Are you talking about the log paths that we configure in the UF?
I'm sure the syslog server has the ability to segregate traffic by a number of factors - including IP address and perhaps FQDN.  The segregated data should be written to separate files to be monitored by separate inputs.conf stanzas.  Each monitored file can have a different destination index.
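As a rough sketch (paths, index names, and sourcetypes below are invented for illustration), the UF on the syslog server would then carry stanzas like:

# inputs.conf on the UF that monitors the syslog server's output files
[monitor:///var/log/remote/firewall-a/*.log]
index = net_fw_a
sourcetype = syslog

[monitor:///var/log/remote/firewall-b/*.log]
index = net_fw_b
sourcetype = syslog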
Use a rising column input.  This is where DBX keeps track of the last value seen for a specific field (the rising column) so subsequent queries fetch only newer values. See https://docs.splunk.com/Documentation/DBX/3.18.1/DeployDBX/Createandmanagedatabaseinputs#Choose_input_type for details.
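As an illustration (the table and column names are made up), the SQL for a rising column input is typically written so DB Connect can substitute its stored checkpoint for the ? placeholder:

-- sketch of a rising column query; event_id is the assumed rising column
SELECT * FROM application_logs
WHERE event_id > ?
ORDER BY event_id ASC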
I figured this out.  I had 2 datasets in my model, and when I was specifying datamodel=XXX I didn't pass a dataset after it.  By default this assumes the first listed dataset, so that one worked.  When I was trying to run the query associated with the other dataset, it wouldn't.  Simply adding the dataset name as an argument got it working.
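In other words (model and dataset names here are placeholders): | tstats count from datamodel=My_Model runs against the first listed dataset, while the second one has to be named explicitly, e.g.:

| tstats count from datamodel=My_Model.Second_Dataset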
I can't tell very easily from the Aruba documentation, but I would hazard a guess from what I do see: the Aruba devices likely forward logs via syslog or HEC. In either case, sort out which it is and then follow your Splunk instance's current ingestion methods for that transport mechanism.
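To make that concrete, the receiving side is usually one of the two inputs.conf sketches below (port, token name, index, and sourcetypes are placeholders - check what the Aruba side actually supports):

# Option 1: a network/syslog input (or, better, a syslog server writing files that a UF monitors)
[udp://514]
index = network
sourcetype = aruba:syslog

# Option 2: an HTTP Event Collector token, if the devices can send HEC
[http://aruba_hec]
token = <generate-in-splunk>
index = network
disabled = 0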
Sorry if this is troubling everyone... I am new to Splunk admin and still learning. We have network logs coming in; they will be collected by a dedicated syslog server (configured using FQDN) and forwarded to our indexers via a UF installed on that server. Currently we have a deployment server which forwards all the logs to the indexers into a created index, and then on the cluster manager we write props.conf and transforms.conf so that a specific FQDN goes to a specific index name which is already mentioned in the logs (we will give them the index name). Where else can we write this rule, I mean the props and transforms? Can we write it on the deployment server? Can we do this in any easier or faster way? If yes, please help me with the exact approach - it would be really helpful for me...
Logs are now coming in as expected. A couple of things that threw me off:
- Besides adding the index to the dashboard portlet searches, I had to examine the XML to modify (add the index to) the base search at the top so the associated drop-downs and results portlet at the bottom of the dashboard worked.
- Changing the data input's source type from 'Automatic' to 'From list' -> 'terraform_cloud' didn't take. It would revert back to 'Automatic', but in the end the source type is still correctly attached to the logs and the fields are extracted.
- Lack of documentation. Wasn't sure of the index, source, host, source type, polling interval, log level, etc. Could maybe be added to the setup page? Appreciate just having the app though.
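For anyone else editing the XML, the base-search change described above looks roughly like this in Simple XML (index and query values are placeholders):

<dashboard>
  <search id="base">
    <query>index=my_terraform_index sourcetype=terraform_cloud</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <table>
        <search base="base">
          <query>| stats count by source</query>
        </search>
      </table>
    </panel>
  </row>
</dashboard>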
Start in the DMC to do a CPU performance comparison of the various instances, or try this search:

index=_introspection host=<replace-with-hostname> sourcetype=splunk_resource_usage component=PerProcess "data.pct_cpu"="*"
| rename data.* as *
| eval processes=process_type.":".process.":".args
| timechart span=10s max(pct_cpu) as pct_cpu by processes

This assumes an HF - you didn't specify, but if it's a UF there is something similar, just a bit different.
How do I set up Splunk DB Connect so I only get new log information every time I do a query instead of pulling the whole database each time? I've got the connection working and I'm getting data in, but every time the input runs it pulls the entire database again instead of just pulling in the newest data. How do I limit what it pulls?
Good call on the props - honestly, a wild guess is that the month number is somehow being inserted as the minute.  Running the dashboard for October would be a good litmus test for that.  But I didn't see anything in the original to make me think that was a real possibility.
FWIW, REPEAT_MATCH is ignored when DEST_KEY=_raw.  I believe DEST_KEY is not needed here since FORMAT says where the capture groups go.
Is this custom forwarder a Heavy Forwarder instead of a Universal Forwarder? You can use transforms.conf only on an HF. Your sample didn't contain the ending " which you are expecting in your REGEX. Should those regexes be like https://regex101.com/r/iDjLlJ/1 and https://regex101.com/r/kuIxoI/1, since you are basically replacing _raw in both cases with your matching groups?
(.*)(GET|POST|HEAD) ([^? ]+)\?([^\"]+)(\".*) => $1$2 $3$5
(.*referrer: ")([^\?]+\?)\?([^"]+)(") => $1$2$4
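For reference, if you go the transforms route on the HF, the commonly documented pattern for rewriting _raw at index time looks roughly like this (stanza and sourcetype names are placeholders; see the note above about REPEAT_MATCH/DEST_KEY):

# transforms.conf (sketch)
[strip_uri_query]
REGEX = (.*)(GET|POST|HEAD) ([^? ]+)\?([^"]+)(".*)
FORMAT = $1$2 $3$5
DEST_KEY = _raw

# props.conf (sketch)
[my:web:sourcetype]
TRANSFORMS-strip_query = strip_uri_query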
I totally agree with others that you are trying to shoot yourself in the foot. Try to keep things as simple as possible. Why don't you want to use your DS with correctly defined server classes? Just put index=xxxx in those inputs and deploy them to the correct nodes. It's much easier to create and debug, and it's also much lighter and faster at the indexing phase. r. Ismo
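As a sketch of what that looks like on the DS (server class, app, host pattern, and index names below are placeholders):

# serverclass.conf on the deployment server
[serverClass:app_a_hosts]
whitelist.0 = appserver-a*.example.com

[serverClass:app_a_hosts:app:inputs_app_a]
restartSplunkd = true

# deployment-apps/inputs_app_a/local/inputs.conf
[monitor:///var/log/app_a/*.log]
index = app_a
sourcetype = app_a:log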
Hi @mg99, your request isn't very clear - could you describe it in more detail? If you want to know which information you have, you could run a search that extracts the list of sourcetypes:
index=* | stats values(host) AS host values(index) AS index count BY sourcetype
Ciao. Giuseppe
I still think you're making things harder for yourself.  The DS should be able to deploy an app with inputs.conf stanzas for each application.  Or are all applications writing to the same file?  That would explain the requirement, but having such a file would seem to be a security concern as much as having a common index. I believe index=if... needs to be index:=if... in f5_waf-route_to_index
Hi Everyone,

The issue with the code below appears to be with the values of the {report_id} variable not being passed correctly to the download_report function, in particular this line:

url = f"https://example_url/{report_id}/download"

If I hardcode the url with a valid token, instead of the {report_id} variable, the report gets downloaded, as expected. Any help would be much appreciated! Full code below:

import requests


def collect_events(helper, ew):
    """
    Main function to authenticate, generate report ID, and download the report.
    """
    username = helper.get_arg('username')
    password = helper.get_arg('password')

    auth_url = "https://example_url/auth"
    headers = {
        'Content-Type': 'application/x-www-form-urlencoded',
    }
    data = {
        'username': username,
        'password': password,
        'token': 'true',
        'permissions': 'true',
    }

    try:
        # Step 1: Authenticate to get the JWT token
        auth_response = requests.post(auth_url, headers=headers, data=data)
        if auth_response.status_code == 201:
            jwt_token = auth_response.text.strip()  # Extract and clean the token
            if jwt_token:
                # Log and create an event for the JWT token
                event = helper.new_event(
                    data=f"JWT Token: {jwt_token}"
                )
                ew.write_event(event)

                # Step 2: Generate the report ID
                report_id = generate_report_id(jwt_token, helper)
                if report_id:
                    # Log and create an event for the report ID
                    event = helper.new_event(
                        data=f"Report ID: {report_id}"
                    )
                    ew.write_event(event)

                    # Step 3: Download the report
                    file_path = download_report(jwt_token, report_id, helper)
                    if file_path:
                        helper.log_info(f"Report successfully downloaded to: {file_path}")
                    else:
                        raise ValueError("Failed to download the report.")
                else:
                    raise ValueError("Failed to generate report ID.")
            else:
                raise ValueError("JWT token not found in response.")
        else:
            raise ValueError(f"Failed to get JWT: {auth_response.status_code}, {auth_response.text}")
    except Exception as e:
        helper.log_error(f"Error in collect_events: {e}")


def generate_report_id(jwt_token, helper):
    url = "https://example_url"
    headers = {
        "accept": "application/json",
        "Authorization": f"Bearer {jwt_token}"
    }
    params = {
        "havingQuery": "isSecurity: true",
        "platform": "Windows"
    }

    try:
        response = requests.get(url, headers=headers, params=params)
        if response.status_code in (200, 201):
            report_data = response.json()
            report_id = report_data.get('reportId')
            if report_id:
                return report_id
            else:
                raise ValueError("Report ID not found in response.")
        else:
            raise ValueError(f"Failed to generate report ID: {response.status_code}, {response.text}")
    except Exception as e:
        helper.log_error(f"Error while generating report ID: {e}")
        raise ValueError(f"Error while generating report ID: {e}")


def download_report(jwt_token, report_id, helper):
    """
    Downloads the report using the JWT token and report ID.
    """
    url = f"https://example_url/{report_id}/download"
    headers = {
        "accept": "application/json",
        "Authorization": f"Bearer {jwt_token}",
    }

    try:
        # Make the request to download the report
        response = helper.send_http_request(url, method="GET", headers=headers, verify=True)
        if response.status_code in (200, 201):
            # Save the report content to a file
            sanitized_report_id = "".join(c if c.isalnum() else "_" for c in report_id)
            file_path = f"C:\\Program Files\\Splunk\\etc\\apps\\splunk_app_addon-builder\\local\\temp\\{sanitized_report_id}.csv.gz"
            with open(file_path, "wb") as file:
                file.write(response.content)
            helper.log_info(f"Report downloaded successfully to: {file_path}")
            return file_path
        else:
            raise ValueError(f"Failed to download report: {response.status_code}, {response.text}")
    except Exception as e:
        helper.log_error(f"Error while downloading report: {e}")
        raise ValueError(f"Error while downloading report: {e}")
We have a user ID and we are trying to find out what Splunk has collected for it. What is the search that I should use?