All Posts



What you are doing will work fine, assuming your alert triggers when there are 3 results, i.e. all of the 3-minute slots in your 9-minute period have counts greater than 50. Using streamstats would give you something different and doesn't quite fit your stated requirement.
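That said, if you did want a streamstats variant, a sketch (untested; index, url, and field names taken from the question) that flags three consecutive over-50 windows might look like:

```
index="xxxx" "httpMessage.status"=404 (url="xxxx/1" OR url="xxxx/2" OR url="xxxx/3")
| timechart span=3m count AS HTTPStatusCount
| streamstats window=3 count(eval(HTTPStatusCount>50)) AS over_threshold
| where over_threshold=3
```

Run over a 9-minute range, the alert condition would then simply be "number of results > 0".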
Hi, I have an ask to create an alert that must trigger if there are more than 50 '404' status codes in a 3-minute period. This window must repeat three times in a row, e.g. 9:00 - 9:03, 9:03 - 9:06, 9:06 - 9:09. The count should only include requests with a 404 status code and certain urls. The alert must only trigger if there are three values over 50 in consecutive 3-minute windows. I have some initial SPL not using streamstats, but was wondering if streamstats would be better? Initial SPL, run over a 9-minute time range:

index="xxxx" "httpMessage.status"=404 url="xxxx/1" OR url="xxxx/2" OR url="xxxx/3"
| timechart span=3m count(httpMessage.status) AS HTTPStatusCount
| where HTTPStatusCount>50
| table _time HTTPStatusCount

Thanks.
Here is the configuration file.

[sslConfig]
enableSplunkdSSL = true
sslRootCAPath = /opt/splunk/etc/auth/cert/CA.pem
serverCert = /opt/splunk/etc/auth/cert/srv.pem

For the PEM files, as mentioned earlier, they contain the 'BEGIN CERTIFICATE' and 'END CERTIFICATE' sections.
I'm trying to estimate how much my Splunk Cloud setup would cost given my ingestion and searches. I'm currently using Splunk with a free license (Docker) and I'd like to get a number that represents either the price or some sort of credits. How can I do that? Thanks.
Can you show your conf files and explain what you have in which pem files? Please hide real passwords etc.
Hi, here is one conf talk on how to find ingestion issues: https://conf.splunk.com/files/2019/slides/FN1570.pdf. There are many apps on splunkbase which help you find that kind of issue. There are also some other conf presentations about this, but I cannot find them now. r. Ismo
From what I understand, I need to combine the .pem file with the private key, and this combined file is what I should use in the configuration, correct?
Have you read and understood what this presentation says: https://conf.splunk.com/files/2023/slides/SEC1936B.pdf ? There is also a video of the presentation. Those should explain how this should be done.
I'm quite sure that it helps to update an empty DM quicker than looking through all indexes. You could also adjust the times when it looks for updates; one optimization could be to change the update frequency, but as you have no data and are just adding an individual empty index, those aren't needed to get the DM updated quickly enough. You should also check from MC that you don't have any skipped searches due to this DM update: MC -> Search -> Scheduler (or something similar, I can't remember the exact names).
The format of a .pem file is as follows:

-----BEGIN CERTIFICATE-----
-----END CERTIFICATE-----
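For the file referenced by serverCert, Splunk expects the server certificate and its private key in the same PEM file (a chain of CA certificates may also follow). A sketch of the layout, with the section contents elided and the "PRIVATE KEY" header depending on how your key was generated:

```
-----BEGIN CERTIFICATE-----
(server certificate)
-----END CERTIFICATE-----
-----BEGIN PRIVATE KEY-----
(server private key)
-----END PRIVATE KEY-----
```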
Hi, have you tried something like this?

index="*wfd-rpt-app*" ("*504 Gateway Time-out*" AND "*Error code: 6039*") OR "*ExecuteFactoryJob: Caught soap exception*"
| rex field=_raw "\*{4}(?<thread_id>\d+)\*"
| stats values(_raw) as raw_messages by _time, thread_id
| table _time, thread_id, raw_messages

Are you sure that you want/can use _time inside by? That means those events must have exactly the same timestamp, even at the millisecond level or finer. If this didn't work for you, then please give some sample data which we can use to get a better understanding of your case. Example output for that data is also valuable for us. r. Ismo
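If grouping by _time is the problem, a sketch (untested; index and message strings taken from the question) that groups by thread_id alone and keeps only threads that have both kinds of message:

```
index="*wfd-rpt-app*" ("*504 Gateway Time-out*" AND "*Error code: 6039*") OR "*ExecuteFactoryJob: Caught soap exception*"
| rex field=_raw "\*{4}(?<thread_id>\d+)\*"
| eval msg_type=case(match(_raw, "504 Gateway Time-out"), "timeout", match(_raw, "ExecuteFactoryJob"), "soap")
| stats min(_time) AS _time values(_raw) AS raw_messages dc(msg_type) AS msg_types by thread_id
| where msg_types=2
| table _time, thread_id, raw_messages
```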
Thank you for your reply. I am doing what you mentioned: "You can restrict basic searches by whitelisting individual indices. This makes updating DM more efficient as there is no need to look through all indices to find the desired events". I've added index whitelists for some of the data models. However, for some of them I have no data ingested, so I thought maybe I should use a dummy index for those data models, so that splunkd doesn't need to search all indexes with certain tags and return nothing.
Wait a minute, are we talking about the server side, not the UF side? And you have several server roles in one Splunk instance? If so, you must read https://docs.splunk.com/Documentation/Splunk/latest/Deploy/Manageyourdeployment and follow the restrictions it describes!
Below are 2 queries which return different events but have a common field, thread_id, which can be extracted using the rex below. The raw message logs are different for both queries. I want a list of events with raw message logs from both queries, but only if each thread_id has both kinds of raw message. I have tried multiple things like join, append, map, and GitHub Copilot as well, but I am not getting the desired results. Can somebody please help on how to achieve this?

rex field=_raw "\*{4}(?<thread_id>\d+)\*"

index="*sample-app*" ("*504 Gateway Time-out*" AND "*Error code: 6039*")
index="*sample-app*" "*ExecuteFactoryJob: Caught soap exception*"

index="*wfd-rpt-app*" ("*504 Gateway Time-out*" AND "*Error code: 6039*")
| rex field=_raw "\*{4}(?<thread_id>\d+)\*"
| append [ search index="*wfd-rpt-app*" "*ExecuteFactoryJob: Caught soap exception*" | rex field=_raw "\*{4}(?<thread_id>\d+)\*" ]
| stats values(_raw) as raw_messages by _time, thread_id
| table _time, thread_id, raw_messages

I tried the above query. Some results are correct and contain raw messages from both queries, but other results contain a thread_id and only the 504 gateway message, even though that thread_id has both types of message when I checked separately. I'm new to Splunk; any help is really appreciated.
I'm not sure what caused it. Normally, it shouldn't be caused by the inputs.conf file. The previous MC/DS was a distributed indexer cluster management node, and after the restart, it became a single deployment server.
Hi @Naa_Win, in all my projects I create a custom app containing dashboards to monitor infrastructure, with special attention to: missing data sources, missing hosts, queue issues. Ciao. Giuseppe
If you are talking about CIM DMs, then there are tags it uses to select events into a specific DM. You can restrict the base search by a separate whitelist of indexes. This makes updating the DM more efficient, as it doesn't need to look through all indexes to find the needed events. Usually there is no need / sense to create an empty / dummy index for that; you should just add your current indexes, where that data is, into this field.
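For CIM data models this whitelist is typically a per-model macro in the Splunk_SA_CIM app, usually set through the CIM Setup page rather than by hand. As a purely illustrative local override (the macro name follows the cim_<DataModel>_indexes pattern and the index name is assumed; verify both against your CIM version):

```
# Splunk_SA_CIM/local/macros.conf (illustrative)
[cim_Vulnerabilities_indexes]
definition = (index=vuln_index)
```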
This is interesting! There should be $decideOnStartup$ (or something similar) as the default, which gives you the current hostname when the node / UF service has started. Is this a multi-interface node, are there any issues with the hostname, or are there any inputs which set a host name / IP?
This should work correctly. When you say "restart UF", do you mean the Splunk UF process or the whole Windows node? Any reason why you are using a separate ntpd instead of domain time? Have you checked how big the time difference is after hibernation? Are you aware that there are limits on how big a time difference ntpd can manage by itself without additional synchronization?
Hello everyone, I'm trying to add index filtering for the data models in my setup. I found that for some data models, such as Vulnerabilities, there's no matching data at all. In this case, should I create an empty index for these data models, so that Splunk won't do a useless search for them? Please also let me know if there is a better solution for this case. Thanks & Regards, Iris