All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi All, please help me. I am trying to upgrade the Splunk Universal Forwarder to the most recent version, i.e. 9.0.3. I have stopped the Splunk service and used the commands below.

Downloaded the tar file as the root user:

wget -O splunkforwarder-9.0.3-dd0128b1f8cd-Linux-x86_64.tgz "https://download.splunk.com/products/universalforwarder/releases/9.0.3/linux/splunkforwarder-9.0.3-dd0128b1f8cd-Linux-x86_64.tgz"

Unpacked it as the splunk user:

tar xvfz splunkforwarder-9.0.3-dd0128b1f8cd-Linux-x86_64.tgz -C /opt

Ran this as the splunk user (tried as the root user as well):

./splunk start -accept-license

But splunk start stops here:

Error calling execve(): No such file or directory
Error launching command: Invalid argument

I have attached a screenshot of what is happening; please help me with a resolution. I really appreciate your help. Regards, PNV
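Not an official fix, just a minimal sketch of an upgrade flow that usually avoids this class of error, assuming the forwarder lives in /opt/splunkforwarder and runs as the splunk user (the paths and user name here are assumptions):

# stop the old forwarder first
sudo -u splunk /opt/splunkforwarder/bin/splunk stop

# unpack the new release over the existing install, then fix ownership;
# untarring as root can leave files the splunk user cannot read or execute
sudo tar xzf splunkforwarder-9.0.3-dd0128b1f8cd-Linux-x86_64.tgz -C /opt
sudo chown -R splunk:splunk /opt/splunkforwarder

# start as the splunk user from the install's bin directory
sudo -u splunk /opt/splunkforwarder/bin/splunk start --accept-license

If the error persists after a clean re-extract, it may indicate an incomplete download or an architecture mismatch between the package and the host, so verifying the tarball is worth a try.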
I have a UF set to send logs to both Splunk IDX and a SIEM, using the tcpout settings in outputs.conf, but this sends via TCP and we want it to use UDP (due to the high log rate). Can it be done? There is no option in the tcpout stanza to set a protocol, so it is TCP only. I found there is a syslog output stanza for outputs.conf, which can use UDP or TCP, but that one also says it can't be used on UFs. Am I stuck with TCP, or is there another way? Thanks for any responses, Rod.
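For reference, a hedged sketch of what the syslog output stanza looks like in outputs.conf on a heavy forwarder (the stanza name, host, and port are assumptions); as noted above, the docs say it is not supported on a Universal Forwarder, so this route would mean swapping the UF for a heavy forwarder or routing through an intermediate one:

# outputs.conf on a heavy forwarder (not a UF)
[syslog:my_siem]
server = siem.example.com:514
type = udp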
I'm consuming data from Splunk REST API endpoints for other purposes. However, it is throwing this error because I used the "lookup" command in the query. Could anyone assist me in resolving this issue? If the "lookup" command is not used, the query works properly.

Error:

<?xml version="1.0" encoding="UTF-8"?>
<response>
  <messages>
    <msg type="FATAL">Error in 'lookup' command: Could not construct lookup 'master_sheet.csv, host, as, host, OUTPUT, LOB, Region, Application, Environment'. See search.log for more details.</msg>
  </messages>
</response>

Query:

curl -k -u user:pass https://localhost:8089/services/search/jobs --data-urlencode search='search index=foo sourcetype=abc source=*fs.log | rex "(?<Date>.*)\|(?<Mounted>.*)\|(?<Size>.*)\|(?<Used>.*)\|(?<Avail>.*)\|(?<Used_PCT>.*)\|(?<Filesystem>.*)" | eval Used_PCT=replace(Used_PCT,"%","") | search Filesystem IN (/apps, /logs) | stats latest(*) as * by host,Filesystem | where Used_PCT>=80 | sort -Used_PCT | rename Used_PCT as "Use%" | table host,Filesystem,Size,Used,Avail,Use% | lookup master_sheet.csv host as host OUTPUT LOB,Region,Application,Environment | table host,LOB,Region,Application,Environment,Filesystem,Size,Used,Avail,"Use%"' -d id=mysearch_1234567

curl -u user:pass -k https://localhost:8089/services/search/jobs/mysearch_1234567/results --get -d output_mode=csv
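One hedged guess: the error message shows the arguments collapsed into a single comma-separated string, which often means the lookup could not be resolved in the search's app context (for example, master_sheet.csv not shared with the app the REST search runs in). A sketch of making it explicit with a lookup definition (the stanza name is an assumption):

# transforms.conf in the app the search runs in, shared appropriately
[master_sheet]
filename = master_sheet.csv

Then reference the definition rather than the file in the search:

... | lookup master_sheet host OUTPUT LOB, Region, Application, Environment | ...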
Hello Splunkers, I have the following raw events:

2023-01-20 18:45:59.000, mod_time="1674240490", job_id="79", time_submit="2023-01-20 10:04:55", time_eligible="2023-01-20 10:04:56", time_start="2023-01-20 10:45:59", time_end="2023-01-20 10:48:10", state="COMPLETED", exit_code="0", nodes_alloc="2", nodelist="abc[0002,0006]", submit_to_start_time="00:41:04", eligible_to_start_time="00:41:03", start_to_end_time="00:02:11"

2023-01-20 18:45:59.000, mod_time="1674240490", job_id="79", time_submit="2023-01-20 10:04:55", time_eligible="2023-01-20 10:04:56", time_start="2023-01-20 10:45:59", time_end="2023-01-20 10:48:10", state="COMPLETED", exit_code="0", nodelist="ABC[0002-0004,0006-0008,0073,0081,0085-0086,0089-0090,0094-0095,0097-0098]", submit_to_start_time="00:41:04", eligible_to_start_time="00:41:03", start_to_end_time="00:02:11"

How do I extract or parse the nodelist value, e.g. nodelist="ABC[0002-0004,0006-0008,0073,0081,0085-0086,0089-0090,0094-0095,0097-0098]", into a new field called host? For the first event the host values would be host=abc0002 and host=abc0006; similarly, for the second event it should be host=abc0002, host=abc0003, host=abc0004, host=abc0006, host=abc0007, host=abc0008, host=abc0073, host=abc0081, ..., host=abc0095, host=abc0097, host=abc0098.

Thanks in advance.
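A hedged SPL sketch of one way to do this: split the bracketed list, expand each comma-separated item, then expand ranges with mvrange (the intermediate field names are my own):

... | rex "nodelist=\"(?<node_prefix>[A-Za-z]+)\[(?<node_list>[^\]]+)\]\""
| eval node_item=split(node_list, ",")
| mvexpand node_item
| eval range_start=tonumber(mvindex(split(node_item, "-"), 0)),
       range_end=tonumber(coalesce(mvindex(split(node_item, "-"), 1), mvindex(split(node_item, "-"), 0)))
| eval host_num=mvrange(range_start, range_end + 1)
| mvexpand host_num
| eval host=lower(node_prefix) . printf("%04d", host_num)

Each raw event is expanded into one result per host; if host is needed as a multivalue field on a single event instead, a final stats values(host) as host by job_id, _time should get there.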
I am in an air-gapped environment and would like to take a screenshot of a dashboard at a regular interval. I prefer not to install an additional app or Python Selenium. Dashboard Studio has Action -> "Download PNG", which can be performed manually. Is there a way to use this feature and put it on a schedule or in a Python script? Or is there a Splunk API that can take a screenshot of a specified link? I am currently using Splunk Enterprise 9.0.
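I am not aware of a documented API that reproduces Dashboard Studio's Download PNG action, but for classic Simple XML dashboards there is a PDF render endpoint that can be driven from cron. A hedged sketch (the dashboard name, app namespace, and credentials are assumptions):

# render a classic Simple XML dashboard to PDF on a schedule
curl -k -u admin:changeme \
  "https://localhost:8089/services/pdfgen/render?input-dashboard=my_dashboard&namespace=search" \
  -o /tmp/my_dashboard.pdf

Another built-in route is scheduled PDF delivery on the dashboard itself (Edit > Schedule PDF Delivery), though again that applies to classic dashboards rather than Dashboard Studio.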
Hello All, I am running Splunk 9.0.2 on Oracle Linux 8.6. We monitor Cisco devices, which require using port 514 to forward their syslog to Splunk. We are running Splunk as a non-root user. How can we configure Splunk to allow access to port 514? eholz1
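A common pattern here, sketched under the assumption that firewalld is available: have Splunk listen on an unprivileged port and redirect 514 to it at the OS level (1514 is a convention, not a requirement):

# inputs.conf: listen on an unprivileged port as the splunk user
[udp://1514]
sourcetype = cisco:syslog

# as root: redirect 514 -> 1514 with firewalld
firewall-cmd --permanent --add-forward-port=port=514:proto=udp:toport=1514
firewall-cmd --reload

The same redirect can be done with iptables or nftables if firewalld is not in use; many sites also prefer a syslog server (rsyslog/syslog-ng) writing to files that a forwarder monitors.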
Performing the following search, I get this result. I need to parse this information and build an Excel-type table. The information is in JSON format, uploaded into Splunk. Like this:
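Hard to be specific without the screenshots, but if the events are valid JSON, a minimal sketch is to let spath extract the fields and then table them (the field names below are placeholders):

... | spath
| table field1, field2, field3

From there, the results table can be exported to CSV from the search UI and opened in Excel.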
From my research it looks like base searches increase the performance of dashboards: a dashboard with several views loads faster if the query of each view reuses a pre-existing base search. However, my friend is convinced that's not the case, and that using base searches does the opposite: it prolongs the loading time of the dashboard. Has anyone else had such experience?
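For reference, a minimal Simple XML sketch of the pattern under discussion: one base search runs once and each panel post-processes its results (the index and field names are assumptions). The usual caveat, which may explain the conflicting experiences, is that a base search should be a transforming search (for example, ending in stats) with a limited field set; a non-transforming base search can ship huge raw result sets to every panel and genuinely slow the dashboard down:

<dashboard>
  <search id="base">
    <query>index=web | stats count by status, host</query>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </search>
  <row>
    <panel>
      <chart>
        <search base="base">
          <query>stats sum(count) by status</query>
        </search>
      </chart>
    </panel>
  </row>
</dashboard>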
Given the below scenario:

base search | table service_name, status, count

service_name  status              count
serviceA      500_INTERNAL_ERROR  10
serviceA      404_NOT_FOUND       4
serviceB      404_NOT_FOUND       1
serviceC      500_INTERNAL_ERROR  2
serviceC      404_NOT_FOUND       5
serviceD      206_PARTIAL_ERROR   1

How can I display the results grouped by service_name, with the result as in the table below?

service_name  status                             count
serviceA      500_INTERNAL_ERROR, 404_NOT_FOUND  14
serviceB      404_NOT_FOUND                      1
serviceC      500_INTERNAL_ERROR, 404_NOT_FOUND  7
serviceD      206_PARTIAL_ERROR                  1
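A hedged sketch of one way to get that output, assuming status and count are the literal field names (note that values() sorts and dedupes, so the order of the combined statuses may differ from the example):

base search
| stats sum(count) as count, values(status) as status by service_name
| eval status=mvjoin(status, ", ")
| table service_name, status, count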
I have a field which contains the HTTP status code. I want to create a single alert query with multiple conditions. Example:

Condition 1) status code is 500 and greater than 10%: alert should be triggered.
Condition 2) status code is 403 and greater than 20%: alert should be triggered.
Condition 3) status code is 503 and greater than 20%: alert should be triggered.

Also, is it possible to have a different time range for each condition? For example, conditions 1 and 2 should search the last 15 minutes, whereas condition 3 should search the last 30 minutes. How do I form the query?
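A hedged sketch of one way to fold both the percentage thresholds and the two windows into a single search: run over the last 30 minutes, flag events inside the last 15, and compute each rate against the matching window (the index and field names are assumptions):

index=web earliest=-30m
| eval in_15m=if(_time >= relative_time(now(), "-15m"), 1, 0)
| stats sum(in_15m) as total_15m,
        count as total_30m,
        sum(eval(if(status=500 AND in_15m=1, 1, 0))) as c500,
        sum(eval(if(status=403 AND in_15m=1, 1, 0))) as c403,
        count(eval(status=503)) as c503
| eval pct500=round(c500 / total_15m * 100, 2),
       pct403=round(c403 / total_15m * 100, 2),
       pct503=round(c503 / total_30m * 100, 2)
| where pct500 > 10 OR pct403 > 20 OR pct503 > 20

With the alert condition set to trigger when the number of results is greater than zero, any one of the three conditions firing will trigger the alert.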
Hi, I have an application hosted on a vendor's GCP, and the application's logs are stored in BigQuery on GCP. I need to set up Splunk in my infrastructure to monitor the application hosted outside my infra (the vendor's GCP). Has anyone done something like this? Do you know how I can ingest the logs into Splunk Enterprise?
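One hedged approach, assuming you can run the Google Cloud SDK with access to the vendor project and have an HEC token on your Splunk side (the project, dataset, token, and host below are all assumptions): pull rows from BigQuery on a schedule and post them to HTTP Event Collector. More robust patterns route BigQuery data through Pub/Sub into Splunk, but as a sketch:

# query recent rows as JSON, then post them to Splunk HEC
bq query --format=json --use_legacy_sql=false \
  'SELECT * FROM `vendor-project.app_logs.events` WHERE ts > TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 1 HOUR)' \
| curl -k https://splunk.example.com:8088/services/collector/raw \
    -H "Authorization: Splunk 00000000-0000-0000-0000-000000000000" \
    --data-binary @-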
Hello, I am trying to add a search peer to our existing environment in order to scale it up a bit. The main instance is Splunk Enterprise, which acts as the search head, indexer, and pretty much everything else. When I add the second Splunk Enterprise server that I set up as a peer under Distributed Search > Search Peers, essentially everything stops working on the main instance: searches never load and everything is extremely slow. This happens when I add the second new server as a peer on the main instance; I've tried adding it both ways and enabling it on both, but nothing seems to work. My initial thought is that maybe it's because the main instance isn't divided into multiple parts, like a separate server for a search head with the two indexers under it, but that seems much more complicated to set up than I want. I'm just looking to add a peer as another indexer-type server to expand a bit. Any thoughts are appreciated. Thanks
Hi All, we have been having issues with this app since the installation: we are not getting any relevant Salesforce data. After installing the app, we created the configuration and data inputs, but we couldn't find any useful information or events. Splunkbase: https://splunkbase.splunk.com/app/5689. I have attached a screenshot of the events we are receiving now, but those are not actual events from Salesforce. Kindly let me know if someone can help me solve this issue. Any suggestions would be appreciated. Thanks in advance.
Hello guys, I'd like to create a search based on business hours, using a field with a value like this: "2023/01/20 08:52:58". The hour (08) is the interesting part, and I'd like to search with multiple values, for example 08-18h [08,09,10,11,12,13,14,15,16,17,18]. How could I find a regex to extract these numbers? Thanks a lot!
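A hedged sketch, assuming the field is called timestamp_field: a regex may not be needed at all, since eval can parse the string, but both routes are shown:

... | eval hour=tonumber(strftime(strptime(timestamp_field, "%Y/%m/%d %H:%M:%S"), "%H"))
| where hour >= 8 AND hour <= 18

or, with a regex:

... | rex field=timestamp_field "^\d{4}/\d{2}/\d{2}\s(?<hour>\d{2})"
| where tonumber(hour) >= 8 AND tonumber(hour) <= 18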
Hi, do you have a tentative timeline of when Splunk will deprecate Python 2 on the Splunk Cloud platform? Thanks.
Could you provide me with links for learning Splunk Enterprise Security from scratch (zero-to-hero classes)? Thanks
Hi guys, Happy New Year. I am doing some code testing with the Splunk HEC, and now I need to transfer some large-volume data, gzip-compressed.

1. First, I found one limit in $SPLUNK_HOME$/etc/system/default/limits.conf:

[http_input]
max_content_length = <integer>
* The maximum length, in bytes, of HTTP request content that is accepted by the HTTP Event Collector server.
* Default: 838860800 (~ 800 MB)

However, this value seems to be checked against the size after decompression: I have a test file of about 50 MiB, far less than 800 MB, but when I send the request, Splunk raises:

<!doctype html><html><head><meta http-equiv="content-type" content="text/html; charset=UTF-8"><title>413 Content-Length of 838889996 too large (maximum is 838860800)</title></head><body><h1>Content-Length of 838889996 too large (maximum is 838860800)</h1><p>The request your client sent was too large.</p></body></html>

2. The second limit I found is in $SPLUNK_HOME$/etc/apps/splunk_httpinput/local/inputs.conf:

[http]
maxEventSize = <positive integer>[KB|MB|GB]
* The maximum size of a single HEC (HTTP Event Collector) event.
* HEC disregards and triggers a parsing error for events whose size is greater than 'maxEventSize'.
* Default: 5MB

I think this limit applies to the size of a single event? If I send batch events in one request via "/services/collector", will this limit apply to every event in the batch individually? Can any relevant experts help confirm this behavior? If more details are needed, feel free to let me know. Many thanks!
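For what it's worth, the request itself can be sketched like this (the token and host are assumptions):

# send a pre-gzipped batch of events to HEC
gzip -k events.json
curl -k https://localhost:8088/services/collector \
  -H "Authorization: Splunk <your-hec-token>" \
  -H "Content-Encoding: gzip" \
  --data-binary @events.json.gz

On the second question, a plain reading of the quoted spec ("the maximum size of a single HEC event") suggests maxEventSize is checked per event in a batch, not against the whole request.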
Hello, my events contain strings such as:

notification that user "mydomain\bob" has
notification that user "fred" has
notification that user "01\ralph2" has

I'm trying to write a conditional EXTRACT in props.conf, so that a new field 'domain' is assigned the domain name (i.e. mydomain, 01) where specified, else is assigned NULL, and a new field 'user' is assigned the user name (i.e. bob, fred, ralph2). This works well enough when there is both a domain and a user, but obviously not when there isn't a domain:

EXTRACT-domain_user = notification\sthat\suser\s\"(?<domain>[\w\d]+)\\(?<user>[\w\d]+)\"\shas

I'd be grateful for some assistance.
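A hedged sketch of making the domain part optional with a non-capturing group, keeping the same field names (when the group does not match, domain is simply absent, which Splunk treats as null):

EXTRACT-domain_user = notification\sthat\suser\s\"(?:(?<domain>[^\\\"]+)\\)?(?<user>[^\"]+)\"\shas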
Is it possible to assign a value to a different field? I am trying to combine two different events from the same index. One has the field I need (an IP address), while the other doesn't have it in the raw logs. Is it possible to assign/pass the value from one to the other?

date             name  description     ip
1/15/2023 12:05  xxx   this is test 1  192.x.x.x
1/15/2023 12:06  xxx   this is test 2
1/15/2023 12:06  xxx   this is test 1  192.x.x.x

I tried using eval and passing the data, but it fails. Using fillnull and assigning a fixed value doesn't fix it; the value should be based on the IP above or within that same date. Thank you in advance for any advice.
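A hedged sketch using eventstats to copy the ip across events that share the same name (the grouping fields are assumptions; add date or a bucketed time to the by clause if the match needs to be tighter):

index=your_index
| eventstats values(ip) as ip_seen by name
| eval ip=coalesce(ip, mvindex(ip_seen, 0))
| fields - ip_seen
| table date, name, description, ip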