All Posts

Has httpsCode been extracted OK? Please share a sample event, anonymised of course.
As you are talking about Windows, it might be more complicated than that. By default TA_windows contains transforms which extract the host field from the event itself, so even if you set it to something in the UF's configuration, it will be overwritten by the value of the ComputerName or Computer field from the event. (And that makes sense, because often Windows events are not generated on the host they are ingested from - WEF is a commonly used mechanism to forward events within a Windows environment to a single collector node, from which they are pulled by the UF.)
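By way of illustration, this is the general shape of a host-overriding transform in Splunk (the stanza name, regex, and sourcetype below are made up for the sketch; the real ones ship inside the Splunk Add-on for Windows):

```
# transforms.conf -- illustrative stanza, not the add-on's actual one
[set_host_from_event]
REGEX    = ComputerName=(\S+)
FORMAT   = host::$1
DEST_KEY = MetaData:Host

# props.conf -- example sourcetype binding
[WinEventLog]
TRANSFORMS-sethost = set_host_from_event
```

Any transform writing to `MetaData:Host` at index time wins over the host value the forwarder supplied, which is why changing the UF config alone is not enough here.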
It's not a question about the Splunk Connector for Kafka as such. It's more a question about how to manage your Kubernetes cluster and the Kafka containers there. And those are questions which are definitely out of scope for this forum.
The proper order for the PEM file is:
1. Subject's certificate
2. Subject's private key
3. Issuing CA certificate chain (unless you explicitly trust the issuer of the subject's certificate)
The location of the file is tricky because the settings can either be inherited from the default server-wide settings which you set up in server.conf - https://docs.splunk.com/Documentation/Splunk/latest/admin/serverconf#SSL.2FTLS_Configuration_details - or can be overridden at the specific input level. As a side note, certificates for the web interface are configured differently.
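For reference, a sketch of the concatenated PEM layout described above (placeholder contents only, not a real key pair):

```
-----BEGIN CERTIFICATE-----
... subject's (server) certificate ...
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
... subject's private key ...
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
... intermediate CA certificate(s), then the root CA ...
-----END CERTIFICATE-----
```

The whole file is just the three parts concatenated in that order into one `.pem`.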
Remember that Docker containers are volatile (except for the non-volatile space you "attach" to them) and Docker images are "as is" after build, so you'd have to either create a new image based on the ready-made Splunk Docker image or modify the Dockerfile to build a custom image from scratch. Also, the whole idea of running Splunk in a Docker environment is that you upgrade by pulling a newer version of the whole image, so you'd need to customize your image each time a new version is released.
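As a sketch of the first approach (the base-image tag and the app path here are assumptions for illustration, not tested values):

```
# Dockerfile -- layer your customization on top of the official image
FROM splunk/splunk:9.2.0

# Bake the custom app into the image so it survives container recreation;
# anything written only inside a running container is lost on replacement.
COPY myapp/ /opt/splunk/etc/apps/myapp/
```

On each Splunk release you would bump the `FROM` tag and rebuild, which is the per-release customization step mentioned above.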
When I go to the search head to change the configuration of TA_vectra_detect_json, I get this: "You do not have permissions to edit this configuration."
Try this: Boolean operators must be used in UPPERCASE; in addition, the AND operator is mandatory only in eval. This means that you're currently searching with the additional conditions action="blocked" plus the literal word "and". Ciao.
I have around 25 events with httpsCode = "200 OK", but when I use the above function it returns 0 in the success column.
We don't have any "unofficial" release dates. And even if we had we probably couldn't share them with you. You need to check the download page to see when it becomes available.
| stats avg(timetaken) as avg_response_time count(eval(httpsCode == 200)) as success count(eval(httpsCode != 200)) as failure
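If the httpsCode values include the status text (e.g. "200 OK", as reported elsewhere in the thread), the numeric comparisons above never match and success stays at 0. A variant that matches on the string prefix instead, assuming httpsCode is extracted as a string field:

```
| stats avg(timetaken) as avg_response_time
        count(eval(like(httpsCode, "200%"))) as success
        count(eval(NOT like(httpsCode, "200%"))) as failure
```

Alternatively, extract only the numeric part first (e.g. with `rex field=httpsCode "(?<status>\d+)"`) and keep the numeric comparison.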
Hi @scelikok, understood, thank you. I would love to hear how you recommend monitoring system health. I thought of calling the REST "health" endpoint in a playbook that runs every few minutes. If you have another idea, please do tell.
I have a requirement where I need to fetch the success count, failure count, and average response time. In the events I have fields like httpsCode and timetaken, where timetaken returns values like 628, 484, etc. If httpsCode is 200, it should be treated as a success; anything else should be treated as a failure. Finally, the statistics table should show the success, failure, and average response time values.
Hi @meshorer, Since your events could not be written to the DB, the service would stop. That is why you should monitor system health.
Hi @scelikok, so what happens when I reach the size limit on my server?
Any update from Splunk on the issue? Do we have to upgrade to 9.1.2 to view the monitoring console, or do we have a workaround? Please advise. Thanks!
Hi @RS, I suppose that the total execution time is always displayed in minutes; otherwise you have to convert it based on the format. So please try something like this: index = XXXXXX1 host = hostname.com source = artifactory-service sourcetype = artifactory-service "Storage TRASH_AND_BINARIES garbage collector report" | rex "Total\s+execution\s+time:\s+(?<exec_minutes>\d+(?:\.\d+)?)\s+minutes" | eval Total_execution_time=exec_minutes*60 | timechart sum(Total_execution_time) AS Total_execution_time BY host  Note that "15.25" is decimal minutes, so multiplying by 60 converts it to seconds (treating ".25" as 25 seconds would overcount). Ciao. Giuseppe
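The minutes-to-seconds conversion can be sanity-checked outside Splunk. A minimal Python sketch, using the same regex idea and the log line quoted in the question:

```python
import re

LINE = ("Storage TRASH_AND_BINARIES garbage collector report: "
        "Total execution time:    15.25 minutes")

def exec_time_seconds(line: str) -> float:
    """Extract 'Total execution time' and convert decimal minutes to seconds."""
    m = re.search(r"Total\s+execution\s+time:\s+(\d+(?:\.\d+)?)\s+minutes", line)
    if m is None:
        raise ValueError("no execution-time field in line")
    # 15.25 decimal minutes -> 915.0 seconds
    return float(m.group(1)) * 60

print(exec_time_seconds(LINE))  # -> 915.0
```

The same pattern also matches whole-minute values like "2 minutes" thanks to the optional fractional group.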
Hi @Roy_9, if it's a script from the Universal Forwarder, you can open a case with Splunk Support. Some years ago I had an issue with a UF installed on a server that wasn't able to connect to the DNS server and used too much memory trying to resolve addresses, and Support solved the issue, so call them. Ciao. Giuseppe
Hello, I am currently using Splunk version 9.1.0.2, which is affected by several newly reported CVEs, so I need to upgrade it to the latest version. In the following link it is mentioned that "Splunk recommends that customers use version 9.2.0.1 instead of version 9.2.0." (Release Notes). However, on the download page (Splunk Enterprise Download Page), the latest version available is 9.2.0. Could you please inform us when Splunk Enterprise 9.2.0.1 will be released?
Hi, I have the following log data in Splunk. Below is example data taken from Splunk:

2024-02-04T00:15:15.209Z [jfrt ] [INFO ] [64920151065ecdd9] [.s.b.i.GarbageCollectorInfo:81] [cdd9|art-exec-153205] - Storage TRASH_AND_BINARIES garbage collector report:
Total execution time:    15.25 minutes
Candidates for deletion: 4,960
Checksums deleted:       4,582
Binaries deleted:        4,582

host = hostname.com index = XXXXXX1 source = artifactory-service sourcetype = artifactory-service

How can I display a trend/timechart of "Total execution time" using a Splunk query, grouped by timestamp and host name, for the Storage TRASH_AND_BINARIES garbage collector report? I appreciate any help. Thanks, Rahul
@gcusello It came with the Splunk forwarder package: "C:\Program Files\SplunkUniversalForwarder\bin\splunk-powershell.ps1". Thanks