All Posts

Hang on, one of the questions is suddenly visible again. I don't know what this forum is doing at all: https://community.splunk.com/t5/Getting-Data-In/How-to-connect-OpenTelemetry-Java-agent-to-Splunk-Collector-in/m-p/689993#M114821
In Splunk Answers. I can't link to the question because both questions have been removed. I don't know what else to do.
Sure! I am using Splunk Enterprise. This is my first time trying to install an add-on (Cisco WSA). I go to Apps > Find More Apps and hit Install, which takes me to a login page. I enter my username and password, but it says they are incorrect. I have tried changing my password and retrying, but I get the same result.
So I figured out how to move past this. I had to find the MSI product code for each of the previous three versions and then search the registry for them one by one. If they exist, they will be in the registry under Windows Installer\"theproductcode". Those entries contain neither "splunk" nor "universal forwarder" in the string, which is why they weren't found before. Once I deleted the previous entries, I was able to install the new version.
Hi @Cyner__  From the UF, are you able to ping the indexer? From the UF to the indexer, is telnet working fine? e.g. telnet <indexer> 9997 .. does the connection succeed or not?
Hi @thatusername  Where? Here in the Splunk community only? Is it about a Splunk app/add-on? Did you include any links/URLs? Give us more details about what you posted; only then can we understand why it got denied.
Building on what @inventsekar said, it is strongly recommended that every sourcetype have a props.conf stanza.  Splunk can guess at how to interpret your data, but explicit instructions via props.conf are more performant.  If you need to override default behavior, such as specifying a different time zone, props.conf is required.
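For illustration, here is a minimal sketch of such a stanza, with a hypothetical sourcetype name and placeholder values (not from this thread); the timestamp and line-breaking attributes are the ones most worth setting explicitly:

# props.conf - on the first full (heavy) Splunk instance that parses this data
[my_custom:sourcetype]
# tell Splunk exactly where and how to read the timestamp
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 19
# override the default time zone, per the note above
TZ = UTC
# one event per line; skip the line-merge guesswork
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)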
Hi @Théophane_GUE .. please tell us the sourcetype name.  From the UF, how do you send the logs? Through any apps/add-ons, or just inputs.conf?
Hi @tdavison76  Some more details please: is it cloud or on-prem? Where do you see that red status ("we have a Red status for Ingestion Latency")? Is it on a dashboard, or on the DMC?
Please note, we do not have any "props.conf" file available or configured on the server.  We are maintaining the Splunk configuration in only the "inputs.conf" file.

Hi @shashankk .. more details please: is it a dev/test environment or prod? Do you have a deployment server or not? Any reason for not having a props.conf and only having inputs.conf? Is that inputs.conf on the HF or the indexer? Do you use UFs, or do some applications send the logs to the monitored folders directly?
Hi All, I have a report running every 6 hours with the search query below. It fetches the hourly availability of haproxy backends based on HTTP response code, as shown below. I need to accelerate this report, but I think the bucket section of the search is disqualifying it for report acceleration. Can someone help with modifying this search so that it can be accelerated, or are there other workarounds to get the exact same table as shown?

index=haproxy (backend="backend1" OR backend="backend2")
| bucket _time span=1h
| eval result=if(status >= 500, "Failure", "Success")
| stats count(result) as totalcount, count(eval(result="Success")) as success, count(eval(result="Failure")) as failure by backend, _time
| eval availability=tostring(round((success/totalcount)*100,3)) + "%"
| fields _time, backend, success, failure, totalcount, availability

_time             backend   success  failure  totalcount  availability
2024-06-07 04:00  backend1  28666    0        28666       100.000%
2024-06-07 05:00  backend1  28666    0        28666       100.000%
2024-06-07 06:00  backend1  28712    0        28712       100.000%
2024-06-07 07:00  backend1  28697    0        28697       100.000%
2024-06-07 08:00  backend1  28678    0        28678       100.000%
2024-06-07 09:00  backend1  28714    0        28714       100.000%
2024-06-07 04:00  backend2  618      0        618         100.000%
2024-06-07 05:00  backend2  179      0        179         100.000%
2024-06-07 06:00  backend2  555      0        555         100.000%
2024-06-07 07:00  backend2  103      0        103         100.000%
2024-06-07 08:00  backend2  1039     0        1039        100.000%
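(Not an answer from this thread, just a sketch of one common workaround.) If report acceleration keeps refusing the search, summary indexing usually achieves the same result: schedule the stats portion hourly and write it to a summary index with collect, then run the 6-hour report against the summary. Assuming a summary index named haproxy_summary already exists:

index=haproxy (backend="backend1" OR backend="backend2") earliest=-1h@h latest=@h
| bucket _time span=1h
| eval result=if(status >= 500, "Failure", "Success")
| stats count(result) as totalcount, count(eval(result="Success")) as success, count(eval(result="Failure")) as failure by backend, _time
| collect index=haproxy_summary

The report then becomes a cheap search over pre-aggregated rows: index=haproxy_summary | eval availability=tostring(round((success/totalcount)*100,3)) + "%" | fields _time, backend, success, failure, totalcount, availability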
Hi @reza ... more details please: 1) Splunk Cloud or on-prem? 2) Was it working fine previously and stopped working recently, or is this your first time seeing this error? 3) Screenshots of the error would be a great help.
When I have seen something similar with others, we ended up leveraging CNAME records within DNS to set aliases - similar to a macro that you would only have to change once, at the source.  Do you think that would be a viable solution within your environment?
Hello all, I have tried many times to install add-ons, but it does not accept my password, which I know for sure is correct. I tried resetting the password and trying again, but I get the same result. Does anybody have an idea how to fix this?
Hello, I have 2 files that contain the path of the root Certificate Authority that issued my server certificate. Not sure if the above is logically correct, but I do have 2 files that hold the certificate-authority chain, and I must use both of them somehow/someway in web.conf. My web.conf settings are similar to the below:

### START SPLUNK WEB USING HTTPS:8443 ###
enableSplunkWebSSL = 1
httpport = 8443

### SSL CERTIFICATE FILES ###
privKeyPath = $SPLUNK_HOME\etc\auth\DOD.web.certificates\privkey.pem
serverCert = $SPLUNK_HOME\etc\auth\DOD.web.certificates\cert.pem
sslRootCAPath =

However, for "sslRootCAPath =" I need to add both of the files I have. The Splunk documentation does not specify how I would add these files, whether I can separate them with ",", or whether there is another property that would allow additional certificate-authority files. The files and samples of their content are below. Basically they are made up of "noise" except for the header and footer sections of the file, so I am not sure how I would combine them into a single file, if that is what Splunk expects.

ca-200.pem
-----BEGIN CERTIFICATE-----
MIIEjzCCA3egAwIBAgICAwMwDQYJKoZIhvcNAQELBQAwWzELMAkGA1UEBhMCVVMxGDAWBgNVBAoT
-----END CERTIFICATE-----

MyCompanyRootCA3.pem
-----BEGIN CERTIFICATE-----
MIIDczCCAlugAwIBAgIBATANBgkqhkiG9w0BAQsFADBbMQswCQYDVQQGEwJVUzEY
-----END CERTIFICATE-----

Any help is appreciated.
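(A sketch based on general PEM handling, not something confirmed in this thread.) sslRootCAPath points to a single file, and PEM files can simply be concatenated: copy both certificate blocks, one after the other, into one file and reference that. The combined file name below is hypothetical:

### ca-combined.pem - both CA certs, one BEGIN/END block after the other ###
-----BEGIN CERTIFICATE-----
(contents of ca-200.pem)
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
(contents of MyCompanyRootCA3.pem)
-----END CERTIFICATE-----

### web.conf ###
sslRootCAPath = $SPLUNK_HOME\etc\auth\DOD.web.certificates\ca-combined.pem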
Hi! Thank you! UUID did not work, but Group ID did. That was my oversight, thank you!
SEDCMD is mainly used to transform the raw data (mask or delete lines, etc.), not to drop the event, so I don't think it would work (I can't test), and I don't think you can use SEDCMD at the UF level.  The option you may have is to send events with no data to the null queue, so it's worth a go. Configure the below and deploy it on the indexer.  This would send any blank events to the null queue (again, I don't have a test environment, so you will have to trial-and-error it).

props.conf (add your sourcetype here):

[wineventlog]
TRANSFORMS-my_windows_blank_events = setnull

transforms.conf:

[setnull]
REGEX = ^\s*$
DEST_KEY = queue
FORMAT = nullQueue
I just realized I was way, way overthinking this.  BY DEFINITION, a p99 value is one that 99% of the events are at or below, thus leaving ONE PERCENT of the total number of events higher.  All I need is:

index=foo message="magic string"
| stats count as total
| eval over_p99=round(total / 100, 0)
| fields - total

which is fast.  If I wanted the number over p95, I would just use round(total / 100 * 5, 0), etc.  I will be keeping your approach in mind, however, as that's still really good to know.  Ty!
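As a cross-check (just a sketch; duration is an assumed field name here, swap in whatever numeric field the p99 is measured on), the shortcut above should roughly agree with counting events above the computed p99 directly:

index=foo message="magic string"
| eventstats perc99(duration) as p99
| where duration > p99
| stats count as over_p99

The two can differ slightly because perc99 is an estimate and ties at the threshold are excluded.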
Yes, you are on the right path; search-time extraction is done via the SH, even after the data is cooked and sent to the indexers.  Here are some concepts:

Search-time extractions: This is where the raw data extractions happen dynamically, done on the SH when the SPL is run. This is called "schema on the fly": fields are extracted when the search runs, not when the data is ingested, and it is designed to be fast, unlike traditional SQL databases. The SHs have the TAs installed to do on-the-fly extractions via props/transforms, which is the right way to do field extractions.

Forwarders: UF or HF, designed to forward data and attach metadata like sourcetype, host, and index. The HF can further parse data and extract fields (cooked data), meaning less work for the indexers.

Indexers: Store the raw data and can also parse the data if they are configured to do so (i.e. have the TAs installed), mainly when data is sent directly from UFs.

Forwarders such as the HF affect how the data is pre-processed before it reaches the indexers; they apply sourcetype, host, and index metadata. The UFs also apply metadata, but do not parse, except for some structured data types.

HF = By performing parsing and field extraction at index time, the data arrives pre-formatted; this offloads work from the indexer. Applies metadata.

UF = Sends the raw data to the indexer, which means the indexer must handle the processing of the data and can sometimes become overloaded, so it is better to do it at the HF level if you can. Applies metadata.

These are possible reasons for it not working; the upgrade should not do this, unless there was some config in the default props.conf file that has been overwritten (any custom config should be placed in the local props.conf file):

- props/transforms changes (on the SH/indexers)
- changes to the TA; it may have been modified
- inputs changes to the sourcetype metadata name
- overwritten props in default

This may further help you understand the index-time vs search-time process: https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/Indextimeversussearchtime

It might be worth running btool on the search head and checking where the custom props is configured:

/opt/splunk/bin/splunk btool props list --debug my:sourcetype

Typically it should look like one of the examples below:

/opt/splunk/etc/apps/My_Custom_app_props/local/props.conf
OR
/opt/splunk/etc/system/local/props.conf
Hi @marco_massari11 , you should run something like this (if the search is only on index, host, source and sourcetype):

| tstats count WHERE index=your_index BY host
| append [ | inputlookup your_lookup.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

If you have to use a more complex search, you can use:

<your_search>
| stats count BY host
| append [ | inputlookup your_lookup.csv | eval count=0 | fields host count ]
| stats sum(count) AS total BY host
| where total=0

Ciao. Giuseppe