I've noticed a ton of "Unable to read in product version information" and "[HTTP 401] Client is not authenticated" errors lately in the splunk _internal logs. Has anyone else seen the same problem? Is this something that should be ignored? Thanks
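A quick way to scope these before deciding whether to ignore them (a sketch; widen or narrow the time range as needed):

index=_internal ("Unable to read in product version information" OR "Client is not authenticated")
| stats count by host, component

That at least shows whether the errors come from one component on one instance or are spread across the deployment.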
We are getting hundreds of these errors a day in the internal logs for orig_component="SearchOperator:rest" and app="website_monitoring": Failed to fetch REST endpoint uri=https://127.0.0.1:8089/services/data/inputs/web_ping?count=0 from server https://127.0.0.1:8089. Check that the URI path provided exists in the REST API. I could not find anything pointing to that IP in our website_monitoring app. Could it be something configured to point to a local endpoint? Is anyone else coming across this issue? Thanks
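You can check whether the endpoint actually exists on that instance with the rest command (run from the affected node; splunk_server=local keeps the call on the local node, matching the 127.0.0.1 in the error):

| rest splunk_server=local /services/data/inputs/web_ping count=0

If this errors out too, the Website Monitoring app's web_ping input endpoint is likely missing on that node.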
Recently, I observed a message in Splunk Cloud (version 9.2.2403.105) stating, "Found an empty value in 'allowedDomainList' in alert_actions.conf." However, when I check the "Allowed Domain" setting in the UI by navigating to "Settings > Server settings > Email," it indicates "Leave empty for no restrictions." Despite this, I am still seeing the warning message. #splunkcloud #splunk
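For reference, that warning usually points at an explicitly empty key somewhere in the merged configuration rather than an absent one. Illustratively, something like this in an alert_actions.conf on disk would trigger it even though the UI treats an empty field as "no restrictions":

[email]
allowedDomainList =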
Hello Everyone! I just installed the Splunk ES trial on EC2 and also tried a DigitalOcean instance. All goes well, but when I try to sign in after typing my creds, it shows a server error. Read multiple...
Sorry, I don't understand this. What is the intent of the appendpipe and xyseries? The end result should be a timechart containing the average of some measurement and a count of distinct "nodes".
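For reference, that stated end result usually doesn't need appendpipe or xyseries at all. A sketch, assuming the measurement field is called value and the node field is called node (both names are illustrative):

| timechart avg(value) AS avg_value, dc(node) AS distinct_nodes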
If you have multiple panels, you are probably going to have to use multiple tokens.

For the first panel:

<html>
  <style>
    #single1 text { fill: $colour1$ !important; }
  </style>
</html>

<query>... | eval _colour=if(final_status="OK","Green","Red")
| fields final_status _colour</query>
<earliest>-15m</earliest>
<latest>now</latest>
<done>
  <set token="colour1">$result._colour$</set>
</done>

And for the second:

<html>
  <style>
    #single2 text { fill: $colour2$ !important; }
  </style>
</html>

<query>... | table status _colour</query>
<earliest>@d</earliest>
<latest>now</latest>
<done>
  <set token="colour2">$result._colour$</set>
</done>
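For context, here is a minimal sketch of how those fragments fit together in one panel. The base search, panel id, and field values are illustrative (the original query was truncated), but the pattern is the one described above: an <html> style block keyed to the visualization's id, and a <done> handler that sets the token:

<panel>
  <html>
    <style>
      #single1 text { fill: $colour1$ !important; }
    </style>
  </html>
  <single id="single1">
    <search>
      <query>index=main sourcetype=app_status
| eval _colour=if(final_status="OK","Green","Red")
| fields final_status _colour</query>
      <earliest>-15m</earliest>
      <latest>now</latest>
      <done>
        <set token="colour1">$result._colour$</set>
      </done>
    </search>
  </single>
</panel>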
Hi @vid1,
you have to configure three items in /etc/rsyslog.conf:

In the MODULES section (each needs to be done just once; pick one depending on the protocol you're using):

module(load="imudp")
or
module(load="imtcp")

Then, in the TEMPLATES section:

template(name="tmpl-paloalto" type="string"
         string="/var/log/remote/%fromhost%/paloalto/%HOSTNAME%/paloalto_%$YEAR%-%$MONTH%-%$DAY%_%$HOUR%.log")

This string must be modified based on the path and the name of the files that must be written.

Finally, the rule to implement:

ruleset(name="writeRemoteData"
        queue.type="fixedArray"
        queue.size="250000"
        queue.dequeueBatchSize="4096"
        queue.workerThreads="4"
        queue.workerThreadMinimumMessages="60000")
{
    # network - paloalto
    if $HOSTNAME == "10.10.10.10" then {
        action(type="omfile"
               ioBufferSize="64k"
               flushOnTXEnd="off"
               asyncWriting="on"
               dynafile="tmpl-paloalto"
               DirCreateMode="0770"
               FileCreateMode="0660"
               template="fmt_default")
        stop
    }
}

This is the most important and most difficult part to implement, because you have to implement all your rules here.

Ciao.
Giuseppe
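One thing the snippet above does not show is binding the ruleset to a listener; without it, nothing reaches writeRemoteData. A minimal sketch, assuming UDP on the standard syslog port (adjust the port, and the type, to match the module you loaded):

# in /etc/rsyslog.conf, after the module() and ruleset() definitions
input(type="imudp" port="514" ruleset="writeRemoteData")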
1. This search is not proper SPL. The quotes don't add up, so it's not obvious whether you're quoting the whole search or indeed have unneeded quotes in it.
2. Are you sure you're not forgetting about escaping quotes in your string containing the search?
3. On Splunk's side, back around 8.0 and even a bit after that, the order of arguments to bin and timechart was important: you needed to put "span=12h" as the first parameter, immediately after the command. A sufficiently modern Splunk version is more lenient and accepts the span parameter almost anywhere (see the sketch below).
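A minimal sketch of the difference, assuming a numeric field named latency (the field and the aggregation are illustrative):

| timechart span=12h avg(latency)

works on old and new versions alike, while

| timechart avg(latency) span=12h

relies on the newer, more lenient argument parsing.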
Hi @vid1, are you speaking of the output configuration on the NAS or the syslog input configuration on SC4S? About the NAS, I cannot help you; you should search in the NAS management menu. About SC4S, I don't like it: I prefer to configure rsyslog (or syslog-ng) for receiving and then inputs on a UF. Ciao. Giuseppe
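If you go the rsyslog + UF route, the UF side is just a monitor stanza in inputs.conf. A sketch, assuming files land under /var/log/remote as in the template from the earlier reply (the index and sourcetype here are placeholders to adapt):

# inputs.conf on the UF
[monitor:///var/log/remote/*/paloalto/*/paloalto_*.log]
index = network
sourcetype = pan:log
host_segment = 4
# host_segment = 4 takes the fourth path segment (the sending-host directory) as the event host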
It depends on what you mean by latency. If you mean pure network-level latency, then it's up to you to verify what latency you have between those environments, and no architecting can overcome that. But in terms of egress data: if you set up many different environments in different clouds as peers for a single SH(C), you'll get a lot of traffic, since each time your search hits a centralizing command, all results gathered so far have to be sent to the SH layer.
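To make that concrete (a sketch; the index and fields are made up):

index=web | stats count by host

ships only pre-aggregated partial results from each peer to the search head, while a non-streaming command such as

index=web | sort - _time

forces every matching event across the WAN to the SH layer before it can run.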
Hi @vid1 , check if the Dell PowerScale Add-On for Splunk (https://splunkbase.splunk.com/app/2689) is the correct one for you. Otherwise you have to create your own custom add-on. Ciao. Giuseppe
Hi @vid1, what's your NAS technology? Is there an Add-On for it on apps.splunk.com? If yes, install it on the Forwarder and on the Search Head. Ciao. Giuseppe
Hello Everyone, I have a requirement that data should be searchable only up to the last 30 days on the search page, while the index retention period is 90 days. Basically, it should allow the user to search only within the last 30 days of events and, if required, allow the user to search the full 90 days. Is there any configuration available in Splunk to make data searchable or not searchable like this? Thanks in advance
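One setting that maps to this requirement (a sketch, not a confirmed fit for this environment) is the per-role srchTimeWin in authorize.conf, which caps how far back a role can search; a second, unrestricted role could then cover the 90-day case:

# authorize.conf
[role_limited_search]
srchTimeWin = 2592000
# 2592000 seconds = 30 days; searches from this role cannot reach further back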
Hi, if the issue exists on Windows, then it sounds like the general memory-leak problem we have had since February this year, which Splunk doesn't seem to acknowledge. See here: https://community.splunk.com/t5/Splunk-Enterprise/Memory-leak-in-Windows-Versions-of-Splunk-Enterprise/m-p/696849#M20010