All Posts


$SPLUNK_HOME/etc/system/local/web.conf is the one you would want to adjust the splunkdConnectionTimeout in. 
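For reference, splunkdConnectionTimeout lives in the [settings] stanza; a minimal sketch (the 300-second value is only an illustration):

[settings]
# Timeout, in seconds, for Splunk Web's connections to splunkd
splunkdConnectionTimeout = 300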
I have a use case where I'm trying to collect events from a federated search. I can run the search and get results using the federated index, but when I try to add a collect command to write the results to a local index I get the following error: "No results to summary index." The search works but returns no results as soon as I add the collect. I've used a workaround: a makeresults with dummy data followed by an append whose subsearch contains my federated search, and that collects fine, but now I'm limited by subsearch constraints. Has anyone run into this issue?

Workaround:

| makeresults
| eval test="a"
| fields - _time
| append [ index=federated:testindex | head 1 ]
| collect index=mysummaryindex
Hello, I have an alert that runs every 2 minutes over the last 40 hours of data. I use five different logs, combined with the join command, to retrieve the result I need. Throttling is enabled, suppressing results for 40 hours to stop the alert repeating. The alert normally runs perfectly and triggers on time, but every three to four months it fires several hours late. Because this kept recurring, I set up an alternative alert as a backup; recently one of the two alerts was delayed by 4 hours while the other was on time, which makes the alerting unreliable. I have started monitoring the Triggered Alerts section. Note: it's a very big query that takes about 30 seconds to run. What are the possible reasons for this, what are the best practices to avoid it in future, and how can I identify the cause?
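One way to start identifying the cause (a sketch; "your_alert_name" is a placeholder) is to compare when the scheduler dispatched the alert against when it was scheduled, using the scheduler's own logs:

index=_internal sourcetype=scheduler savedsearch_name="your_alert_name"
| eval delay_sec = dispatch_time - scheduled_time
| stats count max(delay_sec) avg(run_time) by status

Large delay_sec values, or runs with a skipped or deferred status, usually point at scheduler contention rather than at the alert itself.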
My KV store has failed and I am trying to renew my certificate; my Splunk server is on a Windows server. I have tried the documented steps: removing server.pem and server_pkcs1.pem from ..\Splunk\etc\auth\ and deleting the expired SplunkServerDefaultCert certificate from certlm. This method worked for me in four other deployments, but in this one deployment, when I go to start my Splunk services, splunkd fails to start with the error message: "Unable to generate certificate for SSL. Splunkd port communication may not work (Child failed to start: FormatMessage was unable to decode error (193), (0xc1)) SSL certificate generation failed."
The search is the following:

index=index1 sourcetype=sourcetype1 hostname=* software!=""
| rex field=software "cpe:\/a:(?<Vendor>[^:]+):(?<Product>[^:]+):(?<Version>.*)"
| table hostname, Vendor, Product, Version
| dedup hostname, Vendor, Product, Version
@marnall - No external library. Splunk ships this Python module built in.
Is there a way to create a Splunk query to show the errors from a Splunk TA and the KV store?
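As a rough starting point (a sketch; the component filter is an assumption and may need widening for your particular TA), splunkd's internal logs can be searched for both:

index=_internal sourcetype=splunkd log_level=ERROR (component=KVStore* OR component=ExecProcessor)
| stats count by component

ExecProcessor tends to carry errors from a TA's scripted and modular inputs, while the KVStore* components cover the KV store.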
Hi @debjit_k, I don't know how this add-on works and I don't know Python, but priority is defined in the code of one of the Python scripts (snow_incident_base.py) that you can find in the bin folder. Ciao. Giuseppe
Hello, here's the image. I want the time range to change based on the selected Grade. For example, if I select Kindergarten, the time range will change to "last 24 hours". Thank you
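In Simple XML this is usually done with a <change> handler on the Grade dropdown that sets time tokens, which the panel searches then reference; a minimal sketch (the token names, Grade values, and panel query are assumptions):

<input type="dropdown" token="grade">
  <label>Grade</label>
  <choice value="kindergarten">Kindergarten</choice>
  <choice value="grade1">Grade 1</choice>
  <change>
    <!-- Kindergarten switches the range to the last 24 hours -->
    <condition value="kindergarten">
      <set token="tr.earliest">-24h@h</set>
      <set token="tr.latest">now</set>
    </condition>
    <!-- every other grade falls back to the last 7 days -->
    <condition>
      <set token="tr.earliest">-7d@d</set>
      <set token="tr.latest">now</set>
    </condition>
  </change>
</input>
...
<search>
  <query>index=school_data grade=$grade$</query>
  <earliest>$tr.earliest$</earliest>
  <latest>$tr.latest$</latest>
</search>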
Gotcha, and no worries, that's what I am here for. If you have an LB that can do some kind of probing for the active box, then it shouldn't matter if DNS has an issue: the LB decides which box is "healthy" and the DNS record just points to the LB, if that makes sense? I haven't done it myself, but I've definitely heard it come up a few times in Warm/Standby conversations around automating the failover.
Great point about the DNS. Our concern, raised by our networking team, was that there might be a lag before the DNS record gets updated during the manual switch, and during that time we wanted to direct our users to the right URL, if possible. I'll reach out to support! It's not a high priority, but I didn't see anything in the UI. Thanks for replying.
@catherinelam I highly doubt it, although it's probably somewhere on the system. My first question would have to be: why? No one should be going to the standby, as it should sit behind some DNS and/or LB that only sends traffic to the "active" box. If in doubt I would ask support, as changing this, if it's even possible, may cause issues with the support agreement.
Good morning! Is there a way to update the display message on the Warm Standby instance when users navigate to it?
I'm a bit stumped as to why I cannot tokenize $result.<field>$ from a dynamic search that generates dropdown values in an input. Below is an example dashboard I was testing this on. Of note, I cannot wildcard "*" the All choice and use the wildcard in the token, because the underlying search narrowly limits the data returned and a wildcard would expand the follow-on search beyond the results of the dynamic input search. I've validated that the conditionals work in the test dashboard with static_tok, which changes with each selection. I am attempting to use the dynamic search to create a result that builds OR statements between each value. If the "All" dropdown option were selected, it would look something like the line below; any other dropdown selection should set that value in the token (e.g. dropdown_tok=buttercup, which works).

test="buttercup" OR test="dash" OR test="fleetfoot" OR test="mcintosh" OR test="mistmane" OR test="rarity" OR test="tenderhoof"

This is a dashboard that tests this method:

<form version="1.1" theme="light">
  <label>test</label>
  <fieldset submitButton="false">
    <input type="time" token="timerange1">
      <label></label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <input type="dropdown" token="dropdown_tok" searchWhenChanged="true">
        <label>dropdown test</label>
        <choice value="all">All</choice>
        <fieldForLabel>test</fieldForLabel>
        <fieldForValue>test</fieldForValue>
        <search>
          <query>| makeresults
| eval test="buttercup rarity tenderhoof dash mcintosh fleetfoot mistmane"
| makemv delim=" " test
| mvexpand test
| eventstats values(test) as mv_test
| eval all_test="test=\"".mvjoin(mv_test, "\" OR test=\"")."\""
| table test,mv_test,all_test</query>
          <earliest>$timerange1.earliest$</earliest>
          <latest>$timerange1.latest$</latest>
        </search>
        <change>
          <condition match="$dropdown_tok$ == &quot;all&quot;">
            <set token="dropdown_tok">$result.all_test$</set>
            <set token="all_test">$result.all_test$</set>
            <set token="static_tok">"all_condition"</set>
          </condition>
          <condition match="$dropdown_tok$!=&quot;all&quot;">
            <set token="dropdown_tok">$dropdown_tok|s$</set>
            <set token="static_tok">"not_all_condition"</set>
          </condition>
        </change>
        <default>all</default>
        <initialValue>all</initialValue>
      </input>
    </panel>
  </row>
  <row>
    <panel>
      <html>
        <p>dropdown_tok: $dropdown_tok$</p>
        <p>all_test: $all_test$</p>
        <p>static_tok: $static_tok$</p>
      </html>
    </panel>
    <panel>
      <table>
        <search>
          <query>| makeresults
| eval dropdown_tok=$dropdown_tok$
| eval static_tok=$static_tok$
| table dropdown_tok,static_tok</query>
          <earliest>$timerange1.earliest$</earliest>
          <latest>$timerange1.latest$</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>
You are correct in saying that Splunk no longer automatically extracts the fields with a new custom source type. Do you know if there is a way to activate this option?
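If what's being asked about is search-time key=value extraction, it can typically be enabled per source type in props.conf on the search head; a minimal sketch (the source type name is a placeholder):

[my:custom:sourcetype]
# Enable automatic search-time key=value field extraction
KV_MODE = auto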
Check your local.meta file at the following path: /opt/splunk/etc/apps/Splunk_TA_cisco-asa/metadata and look for this stanza:

[lookups]
access = read : [ power, sc_admin ], write : [ ess_analyst, power, sc_admin ]
export = system
version = 9.1.2308.201
modtime = 1710775209.916764000

then add the role to the access line like so:

access = read : [ user, power, sc_admin ]

If this answer helped, let me know.
Hello, I have a dashboard with the following inputs:

<fieldset submitButton="true" autoRun="false">
  <input type="dropdown" token="tok1" searchWhenChanged="false">
    <label>Tok1</label>
    <choice value="All">*</choice>
    <choice value="Active">Y</choice>
    <choice value="Inactive">N</choice>
    <prefix>Status="</prefix>
    <suffix>"</suffix>
    <default>*</default>
    <change>
      <condition value="All">
        <set token="tok1"></set>
      </condition>
      <condition>
        <eval token="tok1">"\" AND upper(STATUS)=upper(\'" + $value$ + "\')\""</eval>
      </condition>
    </change>
  </input>
  <input type="text" token="tok2" searchWhenChanged="false">
    <label>UserID</label>
    <default></default>
    <change>
      <condition>
        <eval token="tok2">if(match($value$,"\\w")," AND UserID=\"*" + upper($value$) + "*\"", "")</eval>
      </condition>
    </change>
  </input>
</fieldset>

These two example tokens are used in a panel where the query is just

| search * $tok1$ $tok2$

because it refers to another search as its base query. The problem is the following: I have a submit button, so I expect that changes to the fields do not trigger the search until I press Submit. What happens instead is that if I change the value of tok1, the search starts, and if I change the value of tok2 and then click outside the text box, the search starts. In both cases the submit button is bypassed. If I remove the tok1 and tok2 manipulations in the <change> tag, everything works as expected, so I guess the issue is caused by this tag, but I cannot understand the flow Splunk goes through when it decides to bypass the submit button. Thank you very much to anyone who can help me. Have a nice day!
Hi! I am faced with the following problem: I need to filter the logs that I receive from a source. I get the logs via a heavy forwarder, using the following config:

inputs.conf

[udp://4514]
index = mylogs
sourcetype = mylogs:leef

Before writing regexes I tried the following configuration:

props.conf

[mylogs:leef]
TRANSFORMS-null = setnull_mylogs

transforms.conf

[setnull_mylogs]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

But it is not working; I am still receiving all events in the index. These conf files are stored in <heavy_folder>/etc/apps/<app_name>/local. Maybe I need to use another stanza name in props? I tried [source::udp://4514] and it is not working either. Any ideas? My goal is then to write a few regexes and receive only useful logs from this source. Thank you.
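For the stated goal (keep only the useful events), the usual pattern is a null-queue transform followed by a second transform that routes matching events back to the index queue, since the last matching transform in the list wins; a sketch, where useful_pattern is a placeholder for the keep regex:

props.conf

[mylogs:leef]
TRANSFORMS-filter = setnull_mylogs, setparsing_mylogs

transforms.conf

[setnull_mylogs]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing_mylogs]
# Placeholder regex: match only the events to keep
REGEX = useful_pattern
DEST_KEY = queue
FORMAT = indexQueue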
Hi, we are getting the below error on the machines running the Network Toolkit app. It's affecting data forwarding to Splunk Cloud. Please help.

0000 ERROR ExecProcessor [5441 ExecProcessorSchedulerThread] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/network_tools/bin/ping.py"
self.logger.warn("Thread limit has been reached and thus this execution will be skipped for stanza=%s, thread_count=%i", stanza, len(self.threads))

Thanks!
Hi all, I recently updated the NetSkope Add-on For Splunk (TA-NetSkopeAppForSplunk) from version 3.1.2 to version 3.6.0 in my Splunk Cloud environment (version 9.1.2308.203). I followed the steps outlined in the Splunkbase upgrade guide, but I'm having trouble getting my data into Splunk for Web Transactions V2. I had the Netskope administrator set up the V2 token with the proper permissions, but I get the following error when setting up the data input: Error occurred while validating Token V2 parameter: 'status''. Did anyone have the same issue? Thanks in advance.