Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hi. How many events does the base search return, and how long does it take to finish? There are limits for both, and quite probably you have hit one of them. Looking at your base and post-process searches, you could modify your base search to include the stats there, which is the recommended way to use it:

index=myindex TERM(keyword) fieldname1="EXIT"
| bin _time span=1d
| stats count as Total, count(eval(httpStatusCde!="200" OR statusCde!="0000")) as failures, exactperc95(respTime) as p95RespTime by _time EId

Then both post-process searches become something like this:

| search EId="5eb2aee9"
| stats sum(Total) as Total, sum(failures) as failures, first(p95RespTime) as p95RespTime by _time
| eval FailureRate = round((failures/Total)*100,2)
| table _time, Total, FailureRate, p95RespTime
| sort -_time

r. Ismo
I'm not sure whether this also works on a free license, but you could try Settings - Licensing - Usage report. There should be statistics of license usage there. But if you are using the free license and it isn't locked, then you are using less than its maximum, which is 500 MB/day.
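If that report isn't available, roughly the same information can be pulled from the internal license usage log. A minimal sketch, assuming the _internal index is retained on your instance and you have permission to search it:

index=_internal source=*license_usage.log* type=Usage
| timechart span=1d sum(b) as bytes
| eval GB = round(bytes/1024/1024/1024, 3)
| fields _time GB

That gives daily ingested volume, which is what an ingest-based license is measured against.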
Try changing your base search so that it ends with a table command rather than a fields command. Also, your EId is different in your two post-processing searches.
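For example, a sketch of the adjusted base search, reusing the field list from your post (the field names are assumptions taken from your original search):

index=myindex TERM(keyword) fieldname1="EXIT"
| bin _time span=1d
| table _time, httpStatusCde, statusCde, respTime, EId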
Is there somewhere I can see how much I am spending on my searches (compute) rather than on data (storage)?
The minimum SCP license size is 5 GB per day. You can get its price from your local Splunk partner or directly from Splunk.
Using wildcards at the beginning and end of search strings is not necessary (or advised), and if you can narrow your search of indexes, that might improve matters. As @isoutamo says, using _time in the by clause may not give you what you expect, as you will get a different result event (row) for each _time, thread id combination. Also, AND is implied in searches and therefore unnecessary in this instance. Try something like this:

index="wfd-rpt-app" ("504 Gateway Time-out" "Error code: 6039") OR "ExecuteFactoryJob: Caught soap exception"
| rex field=_raw "\*{4}(?<thread_id>\d+)\*"
| stats values(_raw) as raw_messages by thread_id
| table thread_id, raw_messages

The time of each of the events is likely to be in the _raw message, but if you want that broken out in some way, please provide some sample raw event data (anonymised appropriately) and a description / example of your expected results.
It works exactly this way. The key distinction here is a fixed window versus a sliding window. With a fixed window, timechart is the correct way to do it, but if you need to look at those events in a sliding window (its start and end times change continuously based on the current event), then you must use streamstats.
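As a rough sketch of the sliding-window variant (the index, field names, and threshold are only assumptions carried over from the question):

index="xxxx" "httpMessage.status"=404 url="xxxx/1" OR url="xxxx/2" OR url="xxxx/3"
| streamstats time_window=3m count as rolling_count
| where rolling_count > 50

The time_window option needs the events in time order, which the default (newest-first) search output satisfies.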
Based on that document, DS + CM is not an allowed (supported) combination in one server instance.
I expect that this is your server.conf file? As you are using your private CA, you must add those chains into the serverCert pem file. You can read more about it from https://docs.splunk.com/Documentation/Splunk/latest/Security/HowtoprepareyoursignedcertificatesforSplunk or that conf presentation or any other TLS cert documentation. Based on your description, you haven't done this for your serverCert pem file. E.g. I have this in one of my pem files (maybe not exactly the same as what you will need):

-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
-----END RSA PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----

You should also have that RSA PRIVATE KEY in your pem file, and also add the parameter for its password into your server.conf.
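For reference, a minimal server.conf sketch under that assumption (the paths are taken from the sslConfig stanza you posted; the password value is just a placeholder for whatever protects your private key):

[sslConfig]
enableSplunkdSSL = true
sslRootCAPath = /opt/splunk/etc/auth/cert/CA.pem
serverCert = /opt/splunk/etc/auth/cert/srv.pem
sslPassword = <private key password>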
It still works.
Hi All, I am using the base search and post-process searches outlined below, along with additional post-process searches in my Splunk dashboard. The index name and fields are consistent across all the panels. I have explicitly included a fields command to specify the list of fields required for the post-process searches. However, I am observing a discrepancy: the result count in the Splunk search is higher than the result count displayed on the Splunk dashboard. Could you help me understand why this is happening?

Base search:
index=myindex TERM(keyword) fieldname1="EXIT"
| bin _time span=1d
| fields _time, httpStatusCde, statusCde, respTime, EId

Post-process search 1:
| search EId="5eb2aee9"
| stats count as Total, count(eval(httpStatusCde!="200" OR statusCde!="0000")) as failures, exactperc95(respTime) as p95RespTime by _time
| eval FailureRate = round((failures/Total)*100,2)
| table _time, Total, FailureRate, p95RespTime
| sort -_time

Post-process search 2:
| search EId="5eb2aee8"
| stats count as Total, count(eval(httpStatusCde!="200" OR statusCde!="0000")) as failures, exactperc95(respTime) as p95RespTime by _time
| eval FailureRate = round((failures/Total)*100,2)
| table _time, Total, FailureRate, p95RespTime
| sort -_time
Thanks for the additional validation on my initial search.
It may be because my DS and CM are installed together. I need to test it further.
Please explain what you are doing (the search), what results you are getting, and, most importantly, why this is not what you expected.
What you are doing will work fine, assuming your alert triggers when there are 3 results, i.e. all of the 3-minute slots in your 9-minute period have counts greater than 50. Using streamstats would give you something different and doesn't quite fit with your stated requirement.
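A rough sketch of the full alert search under that assumption, reusing the index and URL placeholders from your post; the final where clause leaves a result only when all three windows exceed 50, so the alert trigger can simply be "number of results greater than 0":

index="xxxx" "httpMessage.status"=404 url="xxxx/1" OR url="xxxx/2" OR url="xxxx/3"
| timechart span=3m count(httpMessage.status) AS HTTPStatusCount
| where HTTPStatusCount>50
| stats count as windows_over_50
| where windows_over_50 = 3

Alternatively, keep your original search as-is and set the alert's trigger condition to fire when the number of results equals 3.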
Hi, I have been asked to create an alert that must trigger if there are more than 50 '404' status codes in a 3-minute period. This window must repeat three times in a row - e.g. 9:00 - 9:03, 9:03 - 9:06, 9:06 - 9:09. The count should only include requests with a 404 status code and only for certain URLs. The alert must only trigger if there are three values over 50 in consecutive 3-minute windows. I have some initial SPL not using streamstats, but was wondering if streamstats would be better? Initial SPL - run over a 9-minute time range:

index="xxxx" "httpMessage.status"=404 url="xxxx/1" OR url="xxxx/2" OR url="xxxx/3"
| timechart span=3m count(httpMessage.status) AS HTTPStatusCount
| where HTTPStatusCount>50
| table _time HTTPStatusCount

thanks.
Here is the configuration file.

[sslConfig]
enableSplunkdSSL = true
sslRootCAPath = /opt/splunk/etc/auth/cert/CA.pem
serverCert = /opt/splunk/etc/auth/cert/srv.pem

For the PEM files, as mentioned earlier, they contain the 'BEGIN CERTIFICATE' and 'END CERTIFICATE' sections.
I'm trying to estimate how much my Splunk Cloud setup would cost me given my ingestion and searches. I'm currently using Splunk with a free license (Docker), and I'd like to get a number that represents either the price or some sort of credits. How can I do that? Thanks.
Can you show your conf files and explain what you have in which pem files? Please hide real passwords etc.
Hi, here is one conf talk, "How to find ingesting issues": https://conf.splunk.com/files/2019/slides/FN1570.pdf. There are many apps on Splunkbase which help you find that kind of issue. There are also some other conf presentations about this, but I cannot find those now. r. Ismo
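As a starting point, a rough sketch of an ingestion-volume check against the internal metrics log (assuming the _internal index is available to you and retained long enough):

index=_internal source=*metrics.log* group=per_sourcetype_thruput
| timechart span=1h sum(kb) as kb by series

This shows hourly indexed volume per sourcetype, which makes gaps or sudden drops in ingestion easy to spot.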