All Posts


It is exactly this way. The key question here is fixed window versus sliding window. With a fixed window, timechart is the correct way to do it, but if you need to look at those events in a sliding window (whose start and end times change continuously based on the current event), then you must use streamstats.
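A minimal SPL sketch of the sliding-window case, assuming hypothetical index and field names (web, status); streamstats keeps a rolling 3-minute count over the events as they stream through:

index=web status=404
| streamstats time_window=3m count AS rolling_count
| where rolling_count > 50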
Based on that document, DS + CM is not an allowed (supported) combination on one server instance.
I expect that this is your server.conf file? As you are using your private CA, you must add those chains into the serverCert pem file. You can read more about it from https://docs.splunk.com/Documentation/Splunk/latest/Security/HowtoprepareyoursignedcertificatesforSplunk, that conf presentation, or any other TLS certificate documentation. Based on your description, you haven't done this for your serverCert pem file. E.g. I have this in one of my conf files (maybe not exactly the same as what you will need):

-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----

You should also have that RSA PRIVATE KEY in your pem file, and you should add a parameter for its password into your server.conf.
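A minimal server.conf sketch of what that password parameter could look like, reusing the paths from the configuration posted further down; the sslPassword value is a placeholder, and Splunk hashes it on the next restart:

[sslConfig]
enableSplunkdSSL = true
sslRootCAPath = /opt/splunk/etc/auth/cert/CA.pem
serverCert = /opt/splunk/etc/auth/cert/srv.pem
# password protecting the RSA private key inside srv.pem
sslPassword = <your_private_key_password>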
It still works.
Hi All, I am using the base search and post-process searches outlined below, along with additional post-process searches in my Splunk dashboard. The index name and fields are consistent across all the panels. I have explicitly included a fields command to specify the list of fields required for the post-process searches. However, I am observing a discrepancy: the result count in the Splunk search is higher than the result count displayed on the Splunk dashboard. Could you help me understand why this is happening?

Base search:
index=myindex TERM(keyword) fieldname1="EXIT"
| bin _time span=1d
| fields _time, httpStatusCde, statusCde, respTime, EId

Post-process search 1:
| search EId="5eb2aee9"
| stats count as Total, count(eval(httpStatusCde!="200" OR statusCde!="0000")) as failures, exactperc95(respTime) as p95RespTime by _time
| eval "FailureRate"= round((failures/Total)*100,2)
| table _time, Total, FailureRate, p95RespTime
| sort -_time

Post-process search 2:
| search EId="5eb2aee8"
| stats count as Total, count(eval(httpStatusCde!="200" OR statusCde!="0000")) as failures, exactperc95(respTime) as p95RespTime by _time
| eval "FailureRate"= round((failures/Total)*100,2)
| table _time, Total, FailureRate, p95RespTime
| sort -_time
Thanks for the additional validation on my initial search.
It may be because my DS and CM are installed together. I need to test it further.
Please explain what you are doing (the search), what results you are getting, and, most importantly, why this is not what you expected.
What you are doing will work fine, assuming your alert triggers when there are 3 results, i.e. all of the 3-minute slots in your 9-minute period have counts greater than 50. Using streamstats would give you something different and doesn't quite fit with your stated requirement.
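For comparison, a minimal sketch of a streamstats variant built on the same search as in the question below; it flags the point where the last three consecutive 3-minute buckets all exceeded 50, which behaves like a rolling check rather than a single fixed 9-minute run:

index="xxxx" "httpMessage.status"=404 url="xxxx/1" OR url="xxxx/2" OR url="xxxx/3"
| timechart span=3m count AS HTTPStatusCount
| streamstats window=3 count(eval(HTTPStatusCount>50)) AS breaches
| where breaches=3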
Hi, I have been asked to create an alert that must trigger if there are more than 50 '404' status codes in a 3-minute period. This window must repeat three times in a row, e.g. 9:00 - 9:03, 9:03 - 9:06, 9:06 - 9:09. The count should only include requests with a 404 status code and certain urls. The alert must only trigger if there are three values over 50 in consecutive 3-minute windows. I have some initial SPL not using streamstats, but was wondering if streamstats would be better? Initial SPL - run over a 9-minute time range:

index="xxxx" "httpMessage.status"=404 url="xxxx/1" OR url="xxxx/2" OR url="xxxx/3"
| timechart span=3m count(httpMessage.status) AS HTTPStatusCount
| where HTTPStatusCount>50
| table _time HTTPStatusCount

Thanks.
Here is the configuration file.

[sslConfig]
enableSplunkdSSL = true
sslRootCAPath = /opt/splunk/etc/auth/cert/CA.pem
serverCert = /opt/splunk/etc/auth/cert/srv.pem

For the PEM files, as mentioned earlier, they contain the 'BEGIN CERTIFICATE' and 'END CERTIFICATE' sections.
I'm trying to estimate how much my Splunk Cloud setup would cost me given my ingestion and searches. I'm currently using Splunk with a free license (Docker) and I'd like to get a number that represents either the price or some sort of credits. How can I do that? Thanks.
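One possible sizing input is the instance's own license usage log; a minimal SPL sketch, assuming the _internal index is available on the existing instance (the b field holds ingested bytes per Usage event):

index=_internal source=*license_usage.log* type=Usage
| eval GB = b/1024/1024/1024
| timechart span=1d sum(GB) AS daily_ingest_GB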
Can you show your conf files and explain what you have in which pem files? Please hide real passwords etc.
Hi, here is one conf talk about how to find ingestion issues: https://conf.splunk.com/files/2019/slides/FN1570.pdf. There are many apps on splunkbase which help you to find that kind of issue. There are also some other conf presentations about this, but I cannot find those now. r. Ismo
From what I understand, I need to combine the .pem file with the private key, and this combined file is what I should use in the configuration, correct ? 
Have you read and understood what this presentation says: https://conf.splunk.com/files/2023/slides/SEC1936B.pdf ? There is also a video presentation of it. Those should explain how this should be done.
I'm quite sure that it helps to update an empty DM more quickly than looking through all indexes. You could also adjust the schedule it uses to look for updates, and one optimization could be to change that update frequency, but as you have no data and are just adding an individual empty index, those changes are not needed to get the DM updated quickly enough. You should also check in MC that you don't have any skipped searches due to this DM update: MC -> Search -> Scheduler Activity (or something similar, I can't remember the exact names).
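For reference, a minimal datamodels.conf sketch of the acceleration-schedule settings mentioned above, using the Web data model and example values only:

# datamodels.conf (local override)
[Web]
acceleration = 1
# how far back the acceleration summary is built
acceleration.earliest_time = -7d
# how often the summary-building search looks for new events
acceleration.cron_schedule = */15 * * * *
# cap, in seconds, on how long each summary-building run may take
acceleration.max_time = 3600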
The format of a .pem file is as follows:

-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----
Hi, have you tried something like this?

index="*wfd-rpt-app*" ("*504 Gateway Time-out*" AND "*Error code: 6039*") OR "*ExecuteFactoryJob: Caught soap exception*"
| rex field=_raw "\*{4}(?<thread_id>\d+)\*"
| stats values(_raw) as raw_messages by _time, thread_id
| table _time, thread_id, raw_messages

Are you sure that you want/can use _time inside by? It means that those events must have exactly the same time, even down to the millisecond level or deeper. If this doesn't work for you, then you should give some sample data which we can use to get a better understanding of your case. Example output for that data would also be valuable for us. r. Ismo
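If the intent is to group events that happen close together rather than at exactly the same timestamp, a minimal variant is to bucket _time before the stats; the 1-second span here is an assumption, not part of the original answer:

index="*wfd-rpt-app*" ("*504 Gateway Time-out*" AND "*Error code: 6039*") OR "*ExecuteFactoryJob: Caught soap exception*"
| rex field=_raw "\*{4}(?<thread_id>\d+)\*"
| bin _time span=1s
| stats values(_raw) as raw_messages by _time, thread_id
| table _time, thread_id, raw_messages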
Thank you for your reply. I am doing what you mentioned: "You can restrict basic searches by whitelisting individual indices. This makes updating DM more efficient as there is no need to look through all indices to find the desired events". I've added index whitelists for some of the data models. However, for some of them, I have no data ingested, so I thought maybe I should use a dummy index for those data models that I don't have data for, so that splunkd doesn't need to search all indexes with certain tags and return nothing.
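A minimal sketch of that whitelisting via the CIM index-constraint macros, assuming a local macros.conf override of the Splunk_SA_CIM app; the index names are placeholders:

# macros.conf (local override in Splunk_SA_CIM)
[cim_Web_indexes]
definition = (index=proxy_logs)

[cim_Intrusion_Detection_indexes]
# no data onboarded yet: point the constraint at an empty placeholder
# index so the data model search returns quickly instead of scanning everything
definition = (index=dm_empty_placeholder)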