All Topics

Hi, I am trying to implement a custom app using Add-on Builder. I am running a REST call and getting this error:

Error: python ERROR HTTPSConnectionPool(host='*', port=*): Max retries exceeded with url: /* (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at *>: Failed to establish a new connection: [WinError 10013] An attempt was made to access a socket in a way forbidden by its access permissions'))

I have tried adding "verify=False" in the Python script, but it is not helping:

response = str((requests.get(url, data = body, auth=(user, password))).text, verify=False)

Any idea what else could be the issue and how to fix it?
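A side note on the quoted line: verify is a keyword argument of requests.get() itself, not of str(), so it has to go inside the call. A minimal sketch of the corrected call follows; the endpoint and credentials are hypothetical placeholders, and note that WinError 10013 is a local socket-permission problem (typically a firewall or antivirus block) that no requests argument can fix:

```python
import requests

# Hypothetical placeholders -- substitute your real endpoint and credentials.
url = "https://example.com/api"
body = {"key": "value"}
user, password = "admin", "changeme"

def fetch(session=None):
    # verify=False disables TLS certificate checking and belongs inside
    # requests.get(); passing it to str() raises a TypeError instead.
    s = session or requests.Session()
    resp = s.get(url, data=body, auth=(user, password), verify=False)
    return resp.text
```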
I have a use case where I'm trying to collect events from a federated search. I can run the search and see results using the federated index, but when I try to add a collect command to write the results to a local index, I get the following error: "No results to summary index." The search works, but it returns no results as soon as I add collect. I've found a workaround using makeresults with dummy data, followed by an append whose subsearch contains my federated search; that collects fine, but now I'm limited by subsearch constraints. Has anyone run into this issue?

Workaround:

| makeresults
| eval test="a"
| fields - _time
| append [ index=federated:testindex | head 1 ]
| collect index=mysummaryindex
Hello, I have an alert that runs every 2 minutes over the last 40 hours of data. I use five different logs to retrieve the result I need, using the join command. Throttling is enabled, suppressing results for 40 hours in order to suppress repeated alerts. My alert runs perfectly and triggers on time, but once every three to four months the alert is delayed by some hours. Since this issue kept recurring every three to four months, I set up an alternative alert. Recently one of the alerts was delayed by 4 hours while the other was on time, which makes the alerts less reliable. I have started monitoring the triggered alerts in the Triggered Alerts section. Note: it is a very big query that takes 30 seconds to run. May I know the possible reasons for this, how to identify the issue, and best practices to avoid this error in the future?
My KV store has failed and I am trying to renew my certificate; my Splunk server runs on Windows Server. I have tried removing server.pem and server_pkcs1.pem from ..\Splunk\etc\auth\ as well as deleting the expired SplunkServerDefaultCert certificate from certlm. This method worked for me in four other deployments; however, in this one deployment, when I start my Splunk services, splunkd fails to start with the error message: "Unable to generate certificate for SSL. Splunkd port communication may not work (Child failed to start: FormatMessage was unable to decode error (193), (0xc1)) SSL certificate generation failed."
Is there a way to create a Splunk query to show the errors from a Splunk TA and the KV store?
Good morning! Is there a way to update the display message on the Warm Standby instance when users navigate to it?
I'm a bit stumped as to why I cannot tokenize $result.<field>$ from a dynamic search that generates dropdown values in an input. Below is an example dashboard I was testing this on. Of note, I cannot wildcard "*" the All choice and use the wildcard in the token, because the underlying search narrowly limits the data returned and a wildcard would expand the follow-on search beyond the results of the dynamic input search. I've validated that the conditionals work in the test dashboard with static_tok, which changes with each selection. I am attempting to use the dynamic search to create a result that builds OR statements between each value. It would look something like this if the "All" dropdown option were selected (any other dropdown selection should set that value in the token, i.e. dropdown_tok=buttercup, which works):

test="buttercup" OR test="dash" OR test="fleetfoot" OR test="mcintosh" OR test="mistmane" OR test="rarity" OR test="tenderhoof"

This is a dashboard that tests this method:

<form version="1.1" theme="light">
  <label>test</label>
  <fieldset submitButton="false">
    <input type="time" token="timerange1">
      <label></label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <input type="dropdown" token="dropdown_tok" searchWhenChanged="true">
        <label>dropdown test</label>
        <choice value="all">All</choice>
        <fieldForLabel>test</fieldForLabel>
        <fieldForValue>test</fieldForValue>
        <search>
          <query>| makeresults
| eval test="buttercup rarity tenderhoof dash mcintosh fleetfoot mistmane"
| makemv delim=" " test
| mvexpand test
| eventstats values(test) as mv_test
| eval all_test="test=\"".mvjoin(mv_test, "\" OR test=\"")."\""
| table test,mv_test,all_test</query>
          <earliest>$timerange1.earliest$</earliest>
          <latest>$timerange1.latest$</latest>
        </search>
        <change>
          <condition match="$dropdown_tok$ == &quot;all&quot;">
            <set token="dropdown_tok">$result.all_test$</set>
            <set token="all_test">$result.all_test$</set>
            <set token="static_tok">"all_condition"</set>
          </condition>
          <condition match="$dropdown_tok$!=&quot;all&quot;">
            <set token="dropdown_tok">$dropdown_tok|s$</set>
            <set token="static_tok">"not_all_condition"</set>
          </condition>
        </change>
        <default>all</default>
        <initialValue>all</initialValue>
      </input>
    </panel>
  </row>
  <row>
    <panel>
      <html>
        <p>dropdown_tok: $dropdown_tok$</p>
        <p>all_test: $all_test$</p>
        <p>static_tok: $static_tok$</p>
      </html>
    </panel>
    <panel>
      <table>
        <search>
          <query>| makeresults
| eval dropdown_tok=$dropdown_tok$
| eval static_tok=$static_tok$
| table dropdown_tok,static_tok</query>
          <earliest>$timerange1.earliest$</earliest>
          <latest>$timerange1.latest$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>
Hello, I have a dashboard with the following inputs:

<fieldset submitButton="true" autoRun="false">
  <input type="dropdown" token="tok1" searchWhenChanged="false">
    <label>Tok1</label>
    <choice value="All">*</choice>
    <choice value="Active">Y</choice>
    <choice value="Inactive">N</choice>
    <prefix>Status="</prefix>
    <suffix>"</suffix>
    <default>*</default>
    <change>
      <condition value="All">
        <set token="tok1"></set>
      </condition>
      <condition>
        <eval token="tok1">"\" AND upper(STATUS)=upper(\'" + $value$ + "\')\""</eval>
      </condition>
    </change>
  </input>
  <input type="text" token="tok2" searchWhenChanged="false">
    <label>UserID</label>
    <default></default>
    <change>
      <condition>
        <eval token="tok2">if(match($value$,"\\w")," AND UserID=\"*" + upper($value$) + "*\"", "")</eval>
      </condition>
    </change>
  </input>
</fieldset>

These two example tokens are used in a panel where the query is just

| search * $tok1$ $tok2$

because it refers to another search as its base query. The problem is the following: I have a submit button, so I expect that changes in the fields do not trigger the search until I press the Submit button. What happens instead is that if I change the value of tok1, the search starts; and if I change the value of tok2 and then click outside of the text box, the search starts. In both cases, the submit button is bypassed. If I remove the tok1 and tok2 manipulations in the <change> tag, everything works as expected, so I guess the issue is caused by this tag, but I cannot understand the flow Splunk goes through to decide to bypass the submit button. Thank you very much to anyone who can help me. Have a nice day!
Hi! I am facing the following problem: I need to filter the logs that I receive from a source. I get the logs via a heavy forwarder, using the following config:

inputs.conf

[udp://4514]
index = mylogs
sourcetype = mylogs:leef

Before writing regexes I tried the following configuration:

props.conf

[mylogs:leef]
TRANSFORMS-null = setnull_mylogs

transforms.conf

[setnull_mylogs]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

But it is not working; I am still receiving all events in the index. These conf files are stored in <heavy_folder>/etc/apps/<app_name>/local. Maybe I need to use another stanza name in props.conf; I tried [source::udp://4514], but that is not working either. Any ideas? My goal is then to write a few regexes and keep only the useful logs from this source. Thank you.
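For reference, the documented "discard everything, then keep what matches" pattern looks like the sketch below. It assumes the events really arrive with sourcetype mylogs:leef and that splunkd on the heavy forwarder is restarted after the change; useful_pattern is a placeholder for a real regex:

```
# props.conf
[mylogs:leef]
TRANSFORMS-filter = setnull_mylogs, setparsing_mylogs

# transforms.conf
[setnull_mylogs]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

# Transforms run in order, so events matching this regex are routed
# back to the index queue after the blanket nullQueue rule above.
[setparsing_mylogs]
REGEX = useful_pattern
DEST_KEY = queue
FORMAT = indexQueue
```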
Hi, we are getting the below error on the machines running the Network Toolkit app. It is affecting data forwarding to Splunk Cloud. Please help.

0000 ERROR ExecProcessor [5441 ExecProcessorSchedulerThread] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/network_tools/bin/ping.py"
self.logger.warn("Thread limit has been reached and thus this execution will be skipped for stanza=%s, thread_count=%i", stanza, len(self.threads))

Thanks!
Hi all, I recently updated the NetSkope Add-on For Splunk (TA-NetSkopeAppForSplunk) from version 3.1.2 to version 3.6.0 in my Splunk Cloud environment (version 9.1.2308.203). I followed the steps outlined in the Splunkbase upgrade guide, but I'm having trouble getting my data into Splunk for Web Transactions V2. I had the V2 token set up by the Netskope administrator with the proper permissions, but I get the following error when setting up the data input: Error occurred while validating Token V2 parameter: 'status''. Did anyone have the same issue? Thanks in advance.
Hi guys, I am trying to fetch details using stats. In this query I am trying to derive a status from the conditions below and populate it in a table. ProcessMsg has some values, but for failure conditions I add a message to the result, so I used coalesce to map both into one field for the table. But I am not able to populate the result. What mistake did I make here?

index="mulesoft" applicationName="ext" environment=DEV (*End of GL-import flow*) OR (message="GLImport Job Already Running, Please wait for the job to complete*") OR (message="process - No files found for import to ISG")
| rename content.File.fstatus as Status
| eval Status=case(like('Status',"%SUCCESS%"),"SUCCESS", like('Status',"%ERROR%"),"ERROR", like('message',"%process - No files found for import to ISG%"),"ERROR", like('message',"GLImport Job Already Running, Please wait for the job to complete"),"WARN")
| eval ProcessMsg=coalesce(ProcessMsg, message)
| stats values(content.File.fid) as "TransferBatch/OnDemand" values(content.File.fname) as "BatchName/FileName" values(content.File.fprocess_message) as ProcessMsg values(Status) as Status values(content.File.isg_file_batch_id) as OracleBatchID values(content.File.total_rec_count) as "Total Record Count" by correlationId
| table Status Start_Time "TransferBatch/OnDemand" "BatchName/FileName" ProcessMsg OracleBatchID "Total Record Count" ElapsedTimeInSecs "Total Elapsed Time" correlationId
How do I write a query to get data like this?

Branch 1 🟢 🟢
Branch 2 🟢🟢🟢
Branch 3 🟢 🟢
Branch 4 🟢🟢🟢
...

Here each branch is an actual branch; green represents a successful build, red a failed build, and black an aborted build (the most recent 5 build statuses).
We have multiple firewalls at different locations; each location has a syslog collector server that forwards the logs to the Splunk indexer. Over the last hour:

pan:traffic count 27,644,629 (83%)
pan:threat count 3,224,543 (9.77%)
pan:firewall_cloud count 2,034,183 (6.18%)

It looks like over-utilization, so we want to validate whether the logs we are receiving are legitimate. We are planning to reduce the consumption of firewall logs. Please guide me: how can I validate that we are receiving the correct firewall logs, and identify any excessive or unneeded ones?
Hello, I'm having some problems filtering standard Windows events. My goal is to send the events coming from my UFs to two different indexes based on the user: if the user ends with ".adm", the index should be index1, otherwise index2. Here is my regex for filtering: https://regex101.com/r/PsEHIp/1. I put it in inputs.conf:

###### OS Logs ######
[WinEventLog://Security]
disabled = 0
index = index1
followTail = true
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
blacklist = (?ms)EventCode=(4624|4634|4625)\s+.*\.adm
renderXml = false
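One mechanical note: a blacklist in a UF's inputs.conf can only discard events; it cannot send them to a second index. Per-event index routing is done with props/transforms on the indexers or a heavy forwarder. A sketch, under the assumption that the classic (non-XML) Security events arrive with source WinEventLog:Security, reusing the regex above and with index = index2 as the default in inputs.conf:

```
# props.conf (on the indexers or a heavy forwarder)
[source::WinEventLog:Security]
TRANSFORMS-route_adm = route_adm_users

# transforms.conf -- events matching the regex get their index
# overridden to index1; all other events keep the default index
# set in inputs.conf (index2 in this scenario).
[route_adm_users]
REGEX = (?ms)EventCode=(4624|4634|4625)\s+.*\.adm
DEST_KEY = _MetaData:Index
FORMAT = index1
```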
Dear Splunkers, I need to ingest some Apache log files. Those log files are first sent to a syslog server by rsyslog, and rsyslog adds its own information to each line of the log file. A UF is installed on this syslog server; it monitors the log file and sends it to the indexers.

Each line of the log file looks like this:

2024-02-16T00:00:00.129824+01:00 website-webserver /var/log/apache2/website/access.log 10.0.0.1 - - [16/Feb/2024:00:00:00 +0100] "GET /" 200 10701 "-" "-" 228

As you can see, the first part of the log, up to "/access.log ", was added by rsyslog, so this is something I want Splunk to filter out. So far, I am able to monitor the file and strip the rsyslog layer from the events with a SEDCMD parameter, and with a TIME_PREFIX parameter Splunk automatically detects the timestamp:

SEDCMD-1=s/^.*\.log //g
TIME_PREFIX=- - \[

I created a custom sourcetype accordingly. The issue is that field extraction is not working properly: almost no fields besides the _time-related fields are being extracted. I guess it's because I'm using a custom sourcetype, so Splunk does not extract the fields automatically as it would for a known one, but I'm not really sure. I'm a bit lost. Thanks a lot for your kind help.
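One possible way to get the standard Apache fields back is to reuse the access-log extractions that ship with Splunk for the built-in access_combined sourcetype. A sketch, assuming a hypothetical custom sourcetype name apache:rsyslog and that the stripped events are in the standard combined log format (the index-time SEDCMD/TIME_PREFIX settings must live on the first full Splunk instance in the path, while the search-time REPORT must be visible to the search head):

```
# props.conf
[apache:rsyslog]
SEDCMD-strip_rsyslog = s/^.*\.log //g
TIME_PREFIX = - - \[
# Reuse the field extractions defined for access_combined in
# $SPLUNK_HOME/etc/system/default/transforms.conf
REPORT-access = access-extractions
```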
Hi guys, I am trying to exclude a field value: I need to exclude message="API: START: /v1/Journals_outbound".

index="mulesoft" applicationName="ext" environment=DEV (message="API: START: /v1/Journals_outbound") OR (message="API: START: /v1/revpro-to-oracle/onDemand") OR (message="API: START: /v1/fin_Zuora_GL_Revpro_JournalImport") OR (message="API: START: /v1/revproGLImport/onDemand*")
| search NOT message IN ("API: START: /v1/Journals_outbound")
Hi, I'm trying to set up the AP4S app in our non-prod environment, and it seems it will be beneficial to our Splunk admins. I just wanted to ask about an error we're seeing in SH-14, the dashboard about KO changes, particularly in Panel 2 - List: the 'ia4s_ko_changes_csv_lookup' lookup file doesn't exist. I've checked the corresponding job, and it seems fine to me, so maybe I'm missing something. I also noticed that the corresponding search "IA4S-013" works fine, although its query is different from what's in SH-14, so I'm not really sure. Please advise.
Hi all. I am ingesting data into Splunk Enterprise from a file. The file contains a lot of information, and I would like Splunk to make each event start at ##start_string and end at the following ##end_string line. Within these blocks there are different fields of the form ##key = value. Here is an example of the file:

…..
##start_string
##Field = 1
##Field2 = 12
##Field3 = 1
##Field4 =
##end_string
.......
##start_string
##Field = 22
##Field2 = 12
##Field3 = field_value
##Field4 =
##Field8 = 1
##Field7 = 12
##Field6 = 1
##Field5 =
##end_string
……

I have tried to create this sourcetype (with different regular expressions), but it creates only one event with all the lines:

DATETIME_CONFIG =
LINE_BREAKER = ([\n\r]+)##start_string
##LINE_BREAKER = ([\n\r]+##start_string\s+(?<block>.*?)\s+## end_string
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = true
category = Custom
description = Format custom logs
pulldown_type = 1
disabled = false

How should I approach this case? Any ideas or help would be welcome. Thanks in advance.
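For context, with SHOULD_LINEMERGE = true Splunk re-merges the broken lines back into one event, so the usual fix is to turn line merging off and let LINE_BREAKER alone define the event boundaries. A sketch (the sourcetype name custom:blocks is a placeholder, and it assumes every block really begins with ##start_string):

```
# props.conf
[custom:blocks]
# The first capture group is the text discarded between events; breaking
# immediately before each ##start_string keeps a whole block together.
LINE_BREAKER = ([\r\n]+)##start_string
# Without this, Splunk merges the broken pieces back into one event.
SHOULD_LINEMERGE = false
NO_BINARY_CHECK = true
```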
Hi, can someone help me with the following questions?
1) My current setup is on-premises and I plan to migrate to Splunk Cloud. What should I know? I don't want historical data to be transferred to the cloud.
2) Suppose I have 1000 UFs and 5 syslog servers; how should I send this data?
3) Should I install the Splunk Cloud credentials package on all of these 1000 + 5 machines, or should I deploy a HF in front and then send to Splunk Cloud?
4) Is there any encryption or compression of data that I have to do before sending to the cloud, or is that taken care of by Splunk?