All Posts

Then what should I do to get the server up on HTTP?
Were you ever able to figure this out? I am facing the same issue.
I'm trying to understand the API usage (internal and public, basic auth vs. token-based) in our controllers so they are appropriately sized and there are no performance bottlenecks. How do I get these stats? I want to filter the internal API volume out from the public one and explore the possibility of moving some of these APIs to APIM. Also, is it possible to move all the external/public APIs to a different port to manage them better?
Thank you very much, that explains it! I was able to complete my little proof of concept. This is my complete search:

sourcetype=nftemp
| top 100 SRC
| eval ip_address = SRC
| eval ip_dot_decimal_split=split(ip_address,".")
| eval first=mvindex(ip_dot_decimal_split,0), second=mvindex(ip_dot_decimal_split,1), third=mvindex(ip_dot_decimal_split,2), fourth=mvindex(ip_dot_decimal_split,3)
| fields - ip_dot_decimal_split
| eval first=first*pow(256,3), second=second*pow(256,2), third=third*256
| eval ip_address_integer=first+second+third+fourth
| map search=" | inputlookup geobeta | where endIPNum >= $ip_address_integer$ AND startIPNum <= $ip_address_integer$ | eval ip=$ip_address$ | eval mapcount=$count$ | sort mapcount | table mapcount,ip,country_iso_code,latitude,longitude,ASName,ASNumber" maxsearches=20000

The sourcetype is a randomly generated nftables log with a few IPs in it; the search converts the IPs to decimal and runs them against the geobeta lookup. The source of the geobeta lookup also contains only a few records. I'm not sure how it will perform once the geobeta lookup has millions of records in it, let's see ... geobeta comes from MaxMind, by the way.
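For anyone following along, the dotted-decimal-to-integer arithmetic in the eval/pow() steps above can be sketched in plain Python as a quick sanity check (a minimal illustration only, not Splunk code):

```python
def ip_to_int(ip: str) -> int:
    """Convert a dotted-decimal IPv4 address to its integer form,
    mirroring the split/mvindex/pow(256, n) steps in the SPL above."""
    first, second, third, fourth = (int(octet) for octet in ip.split("."))
    # first*256^3 + second*256^2 + third*256 + fourth
    return first * 256**3 + second * 256**2 + third * 256 + fourth

print(ip_to_int("1.2.3.4"))  # -> 16909060
```

The resulting integer is what gets compared against the startIPNum/endIPNum range columns of the MaxMind-derived lookup.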
Hi Giuseppe, thanks for your response. I would like to expand the IPv4 addresses and am looking for lat/lon, country, and AS information. As I already have the MaxMind database in my own MySQL, I thought I would convert the data into a KV store and get my data from there. None of the apps on the market was ready-suited for me, unless I missed one, so I thought I would do this on my own ... it's just a hobby ... Cheers, Matthias
Hi @himaniarora20, this is for using SSL in the connection between UFs and IDXs; you don't need to do anything to use self-signed certificates for internal connections. Ciao. Giuseppe
That should be a new question.
I have tried the steps and created the certificate using these three documents:
https://docs.splunk.com/Documentation/Splunk/9.1.2/Security/Howtoself-signcertificates
https://docs.splunk.com/Documentation/Splunk/9.1.2/Security/HowtoprepareyoursignedcertificatesforSplunk
https://docs.splunk.com/Documentation/Splunk/9.1.2/Security/ConfigureSplunkforwardingtousesignedcertificates

web.conf
[settings]
enableSplunkWebSSL = 1
privKeyPath = /opt/splunk/etc/auth/mycerts/myServerPrivateKey.key
serverCert = /opt/splunk/etc/auth/mycerts/myServerCertificate.pem

server.conf
[sslConfig]
enableSplunkdSSL = true
sslRootCAPath = /opt/splunk/etc/auth/mycerts/myCertAuthCertificate.pem
sslPassword = <encrypted>

inputs.conf
[default]
host = splunkpoc2.company.com

[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = /opt/splunk/etc/auth/mycerts/myServerCertificate.pem
sslPassword = $7$UeC5PhW3bITaydrFnqv0+iOwOC+ItQN/CDEZcvtLovDBwTJt
requireClientCert = true
sslVersions = *,-ssl2
sslCommonNameToCheck = indexerpoc1.company.com,indexerpoc2.company.com

Splunk Web comes up correctly, but again HTTP is not getting redirected to HTTPS.
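As far as I know, enableSplunkWebSSL makes Splunk Web serve HTTPS on its port but does not add an automatic HTTP-to-HTTPS redirect, so plain http:// requests simply fail. One common workaround is a reverse proxy in front of Splunk Web. A minimal nginx sketch, where the hostname is taken from the post and the default Splunk Web port 8000 is an assumption:

```nginx
# Hypothetical redirect: send plain-HTTP requests on port 80
# to the HTTPS Splunk Web listener (assumed to be on port 8000).
server {
    listen 80;
    server_name splunkpoc2.company.com;
    return 301 https://$host:8000$request_uri;
}
```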
Hi, have you tried splunk btool indexes list --debug to look whether it's there and where/in which file it (index_a) has been defined? I suppose that there aren't any index_b definitions. r. Ismo
Here are some comments about metrics.log. By default, metrics.log reports the top 10 results for each type; see more at https://docs.splunk.com/Documentation/Splunk/latest/Troubleshooting/Aboutmetricslog As you can see, metrics.log doesn't contain all metrics, just "samples" of them. As @richgalloway already said, you must use license_usage or calculate it from _raw.
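For reference, a sketch of the kind of search commonly used for license usage, run on the license manager. The field names (b, st, idx) follow the standard license_usage.log "Usage" format; verify them against your own environment:

index=_internal source=*license_usage.log type="Usage"
| stats sum(b) AS bytes BY st, idx
| eval GB=round(bytes/1024/1024/1024, 3)
| sort - GB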
Scenario: I have a search head and two indexers in a cluster. There is an index (index_a) defined in the cluster. Until now I have always deployed a copy of indexes.conf with a mock index on the SH, for example to manage role permissions for it. This was helpful to show the index in the role definition. However, in this deployment there is no such indexes.conf file where index_a is defined on the SH, but the index still shows up in the configuration UI. All instances have Splunk Enterprise 9.0.5.1 installed.

Problem: I have a new index that I defined after index_a, called index_b. index_b doesn't show up in the role definition for some reason.

What I tried: I looked up the name of index_a in the config files of the search head. The only appearance is in system/local/authorize.conf. I also compared the index definitions on the CM, including file permission settings. The two configurations differ only in index name and app. I also set up a test environment with one indexer and one search head. I created one index on the IX and it appeared in the SH role definition some time later without me configuring anything. Again I verified whether the name of the index appears anywhere in the SH's configs, but it didn't.

Question: Is there a new feature that makes the mock definitions on the SH obsolete? I am aware that I can solve this with that approach, but there appears to be a nicer way to do it, like it is done with index_a.
Hi @roopeshetty, in the header of props.conf, try not to use the "sourcetype::" prefix:

[cato_source]
TRANSFORMS-filter_logs = cloudparsing

Ciao. Giuseppe
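For context, a minimal sketch of how a TRANSFORMS-based filter is usually wired up across the two files, assuming the goal is to drop events matching a pattern (the regex below is a placeholder, not from the original post):

props.conf
[cato_source]
TRANSFORMS-filter_logs = cloudparsing

transforms.conf
[cloudparsing]
REGEX = pattern_to_drop
DEST_KEY = queue
FORMAT = nullQueue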
@isoutamo @richgalloway, I am unable to access the Splunk backend through PuTTY; the network is not allowing me to connect. What could be the cause?
Hi, the input is also located on the same server, on the same path.
Hi @gcusello, I tried the inverse condition as well, but it didn't work. Regards, Pravin
Hi @roopeshetty, yes, but where is the input for this data flow: on the same server or on a different Heavy Forwarder? If on a different Heavy Forwarder, you have to put these props.conf and transforms.conf on it. Ciao. Giuseppe
Hello, I have the following issue; do you know any solution or workaround? (Or maybe I declared something wrongly...) When using comma-separated field values in map within the IN operator, it does not work from the outer search. But when I write out the value of that outer field literally, it is recognized.

| makeresults
| eval ips="a,c,x"
| map [
  | makeresults
  | append [ makeresults | eval ips="a", label="aaa" ]
  | append [ makeresults | eval ips="b", label="bbb" ]
  | append [ makeresults | eval ips="c", label="ccc" ]
  | append [ makeresults | eval ips="d", label="ddd" ]
  ```| search ips IN ($ips$)``` ```NOT WORKING```
  | search ips IN (a,c,x) ```WORKING```
  | eval outer_ips=$ips$
] maxsearches=10
Hi @_pravin, did you try the inverse condition:
| eval Module=if(Module="*",Module,"UNDEFINED")
Ciao. Giuseppe
Hi, props.conf and transforms.conf are located on our Splunk Enterprise server in the "Splunk Add-on for AWS" app path, that is D:\Program Files\Splunk\etc\apps\Splunk_TA_aws\local
Hi @gcusello, the empty values are present in the interesting fields. Regards, Pravin