All Posts

That should be a new question.
I have tried the steps and created the certificates using these three documents:
https://docs.splunk.com/Documentation/Splunk/9.1.2/Security/Howtoself-signcertificates
https://docs.splunk.com/Documentation/Splunk/9.1.2/Security/HowtoprepareyoursignedcertificatesforSplunk
https://docs.splunk.com/Documentation/Splunk/9.1.2/Security/ConfigureSplunkforwardingtousesignedcertificates

web.conf
[settings]
enableSplunkWebSSL = 1
privKeyPath = /opt/splunk/etc/auth/mycerts/myServerPrivateKey.key
serverCert = /opt/splunk/etc/auth/mycerts/myServerCertificate.pem

server.conf
[sslConfig]
enableSplunkdSSL = true
sslRootCAPath = /opt/splunk/etc/auth/mycerts/myCertAuthCertificate.pem
sslPassword = <encrypted>

inputs.conf
[default]
host = splunkpoc2.company.com

[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/myServerCertificate.pem
sslPassword = $7$UeC5PhW3bITaydrFnqv0+iOwOC+ItQN/CDEZcvtLovDBwTJt
requireClientCert = true
sslVersions = *,-ssl2
sslCommonNameToCheck = indexerpoc1.company.com,indexerpoc2.company.com

Splunk Web comes up correctly over HTTPS, but HTTP requests are still not being redirected to HTTPS.
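As far as I know, enableSplunkWebSSL only switches the Splunk Web port to HTTPS; Splunk Web itself has no setting to answer plain HTTP with a redirect. A common workaround is a small reverse proxy in front. This is only a sketch under assumptions: it presumes nginx is available on the host, that Splunk Web serves HTTPS on the default port 8000, and it reuses your hostname splunkpoc2.company.com; adjust names and ports to your environment.

# Hypothetical nginx server block: answer port 80 with a permanent
# redirect to the HTTPS Splunk Web port (assumed to be 8000 here).
server {
    listen 80;
    server_name splunkpoc2.company.com;
    return 301 https://$host:8000$request_uri;
}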
Hi, have you tried running "splunk btool indexes list --debug" and checking whether the index is listed there, and where (in which file) index_a is defined? I suppose there isn't any index_b definition. r. Ismo
Here are some comments about metrics.log. By default, metrics.log reports only the top 10 results for each type; see more at https://docs.splunk.com/Documentation/Splunk/latest/Troubleshooting/Aboutmetricslog As you can see, metrics.log doesn't contain all metrics, just "samples" of them. As @richgalloway already said, you must use license_usage.log or calculate the volume from _raw.
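The license_usage.log approach mentioned above can be sketched as the following SPL (the index, source, and field names are the standard ones written by the license manager; adjust the split-by field to your needs):

index=_internal source=*license_usage.log type="Usage"
| stats sum(b) AS bytes BY idx
| eval GB = round(bytes/1024/1024/1024, 3)

This sums the reported bytes per target index (idx) rather than sampling, so it reflects actual indexed volume.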
Scenario: I have a search head and two indexers in a cluster, with an index (index_a) defined in the cluster. Until now I have always deployed a copy of indexes.conf with a mock index definition on the SH, for example to manage role permissions for it. This was helpful to show the index in the role definition. However, in this deployment there is no indexes.conf file on the SH where index_a is defined, yet the index still shows up in the configuration UI. All instances run Splunk Enterprise 9.0.5.1.

Problem: I have a new index, index_b, that I defined after index_a. For some reason, index_b doesn't show up in the role definitions.

What I tried: I looked up the name of index_a in the config files of the search head; the only appearance is in system/local/authorize.conf. I also compared the index definitions on the CM, including file permission settings; the two configurations differ only in index name and app. I also set up a test environment with one indexer and one search head, created an index on the indexer, and it appeared in the SH role definition some time later without me configuring anything. Again I verified whether the name of the index appears anywhere in the SH's configs, but it didn't.

Question: Is there a new feature that makes the mock definitions on the SH obsolete? I am aware that I can solve this with the mock-index approach, but it would be nicer to do it the way it already works for index_a.
Hi @roopeshetty, in the stanza header of props.conf, try not to use the "sourcetype::" prefix; use the name directly:

[cato_source]
TRANSFORMS-filter_logs = cloudparsing

Ciao. Giuseppe
@isoutamo @richgalloway, I am unable to access the Splunk backend through PuTTY; the network is not allowing me to connect. What could be the cause?
Hi, the input is also located on the same server, on the same path.
Hi @gcusello,

I tried the inverse condition as well, but it didn't work.

Regards, Pravin
Hi @roopeshetty, yes, but where is the input for this data flow: on the same server or on a different Heavy Forwarder? If on a different Heavy Forwarder, you have to put these props.conf and transforms.conf on it. Ciao. Giuseppe
Hello, I have the following issue; do you know any solution or workaround? (Or maybe I declared something wrongly...) When passing comma-separated field values from the outer search into an IN clause via a MAP token, it does not work. But when I write out the literal value of that outer field, it is recognized.

| makeresults
| eval ips="a,c,x"
| map [
    | makeresults
    | append [ makeresults | eval ips="a", label="aaa" ]
    | append [ makeresults | eval ips="b", label="bbb" ]
    | append [ makeresults | eval ips="c", label="ccc" ]
    | append [ makeresults | eval ips="d", label="ddd" ]
    ```| search ips IN ($ips$)``` ```NOT WORKING```
    | search ips IN (a,c,x) ```WORKING```
    | eval outer_ips=$ips$
] maxsearches=10
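One possible workaround, offered only as a sketch: since the $ips$ token is substituted as a single string rather than as a literal value list, capture it into a field with eval and test membership with split and mvfind instead of IN. The field names below mirror the example above; the regex anchoring is an assumption about exact-match semantics.

| makeresults
| eval ips="a,c,x"
| map [
    | makeresults
    | append [ makeresults | eval ips="a", label="aaa" ]
    | append [ makeresults | eval ips="b", label="bbb" ]
    | append [ makeresults | eval ips="c", label="ccc" ]
    | append [ makeresults | eval ips="d", label="ddd" ]
    | eval outer_ips="$ips$"
    ``` keep rows whose ips value appears in the comma-separated outer list ```
    | where isnotnull(mvfind(split(outer_ips, ","), "^".ips."$"))
] maxsearches=10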
Hi @_pravin, did you try the inverse condition:
| eval Module=if(Module="*",Module,"UNDEFINED")
Ciao. Giuseppe
Hi, props.conf and transforms.conf are located on our Splunk Enterprise server under the "Splunk Add-on for AWS" app path, that is "D:\Program Files\Splunk\etc\apps\Splunk_TA_aws\local".
Hi @gcusello,

The empty values are present in the interesting fields.

Regards, Pravin
Hi @_pravin, sorry, I wasn't clear: in the Events tab, check in the interesting fields whether the empty value is present in the Module field or not. Ciao. Giuseppe
Hi @roopeshetty, where did you locate props.conf and transforms.conf? They must be located on the first full Splunk instance the logs pass through, in other words on the Indexers or (if present) on an intermediate Heavy Forwarder. Ciao. Giuseppe
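For context, the props.conf/transforms.conf pairing being discussed usually looks like the sketch below. The stanza names come from this thread; the REGEX pattern, DEST_KEY, and FORMAT values are assumptions illustrating the standard null-queue filtering pattern, where matching events are dropped at parse time.

props.conf
[cato_source]
TRANSFORMS-filter_logs = cloudparsing

transforms.conf
[cloudparsing]
REGEX = <pattern matching the events you want to discard>
DEST_KEY = queue
FORMAT = nullQueue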
Hi @vishenps, good for you, see you next time! Ciao and happy splunking, Giuseppe. P.S.: Karma Points are appreciated.
Hello, do you mean the 200 GB/day is for a 12 vCPU / 12 GB RAM / 900 IOPS Heavy Forwarder that is indexing locally and also forwarding to Indexers, but not performing local searches? Does this 200 GB/day also include logs from internal indexes (index=_*)? If so, what about a Heavy Forwarder with the same specs that is not indexing locally: how many GB/day can it process (internal and non-internal logs)? Thanks a lot, Edoardo
Splunk doesn't really offer a means to convert time zones, since each user has the ability to set their own preferred time zone. If you really want to do it, then your lookup table will need to provide the offset from the default time zone to the local country time zone. Then you should be able to pass that value to the relative_time function.
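The lookup-plus-relative_time approach above could be sketched like this. The lookup name country_tz and its fields country and tz_offset are hypothetical; tz_offset is assumed to hold a relative-time specifier string such as "+5h30m" or "-8h".

... your base search ...
| lookup country_tz country OUTPUT tz_offset
| eval local_epoch = relative_time(_time, tz_offset)
| eval local_time = strftime(local_epoch, "%Y-%m-%d %H:%M:%S")

Note this shifts the epoch value by a fixed offset per country; it will not account for daylight-saving changes unless the lookup encodes them.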
Thank you @gcusello !!