Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Posts

Hello Splunkers, is there anyone in the community who would be willing to talk with me about Splunk use cases for measuring, analyzing and communicating large amounts of data for carbon impact, similar to the SAP/NHL Venue Metrics Platform? Mick11
So this is a new install and a new source. On the Splunk server there is no props.conf file. I assume I have to create it?
At first glance it looks relatively OK. You have your inputs matching your outputs. Check your splunkd.log on the sending UF and the receiving HF. There should be hints as to the reason for lack of connectivity. If nothing else helps - try to tcpdump the traffic and see what's going on there. EDIT: OK, your initial post says that you get "Connection reset by peer" but it's a bit unclear which side this is from.
Please share the props.conf stanza for that sourcetype.  It looks like the TIME_FORMAT string may be incorrect.
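For reference, a minimal props.conf sketch for an ISO 8601 timestamp like the one shown in the question might look like the following. This is only a sketch: the sourcetype name is a placeholder, and TIME_PREFIX assumes the timestamp sits at the very start of the event.

[router_syslog]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%dT%H:%M:%S%z
MAX_TIMESTAMP_LOOKAHEAD = 30

The %z token is assumed to pick up the +00:00 offset; if the offset itself is wrong at the source, TIME_FORMAT alone will not fix that.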
Not sure if this is all of them, but I think it should cover any command defined in a searchbnf.conf file. | rest splunk_server=local /servicesNS/-/-/configs/conf-searchbnf | fields + title, shortdesc, description, eai:acl.app, eai:acl.sharing, usage | search title="*-command" | eval command=replace(title, "-command$", "") | fields + command, shortdesc, description
And that seems about right. Your router reports 13:35 GMT, so Splunk parses it as 13:35 GMT and shows it to you in your local time zone. Your data quality is poor - configure your router to either report the proper time zone or the proper time (or, even better, to report the proper time in the proper time zone).
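If the router itself cannot be fixed, one workaround is a per-sourcetype time zone override in props.conf on the parsing tier. This is only a sketch and assumes the device actually logs local Eastern time while labelling it +00:00; the sourcetype name is a placeholder:

[router_syslog]
TZ = US/Eastern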
While there might be some theoretical technical limit to the number of correlation searches you can configure in Splunk (if nothing else, you can't create more searches than would fill your whole disk with savedsearches.conf - but that's a ridiculous idea for a limit), you'll most likely run into problems much earlier. Whether a given number of searches is workable depends heavily on the quantity and quality of your data and on the searches themselves (how they are written, how much time they search over and so on). And of course how many searches you can run (both in parallel and in total across the whole day/week/month/whatever) depends on your environment specs. So the only reasonable answer is "it depends".
Hi, Rather new to Splunk. I have some logs ingested but they are showing the time incorrectly. I have my TZ set to EST on the UF server, the Splunk server and in my preferences, but I am getting this: 12/11/23 8:35:24.000 AM   2023-12-11T13:35:24+00:00 routerXXXXXX   If I look at the _time field I have: 2023-12-11T08:35:24.000-05:00 I suspect it's the source host, or do I need a props.conf to fix it?
How can you determine the number of correlation searches that instances of Splunk Security can handle? For both Splunk Enterprise on prem and Splunk Enterprise in the Cloud. And yes, I am talking about active and non-active correlation searches in Splunk. Please also let me know if you want to contact me with further questions! Thanks!
How does Splunk know what interval between ingested events should be deemed intolerable for each index?
Changing repFactor does not remove the index from the cluster.  It just means there is only one copy of the data for that index.
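For context, that setting lives in indexes.conf on the manager node. A rough sketch, with placeholder index name and standard paths:

[my_test_index]
homePath = $SPLUNK_DB/my_test_index/db
coldPath = $SPLUNK_DB/my_test_index/colddb
thawedPath = $SPLUNK_DB/my_test_index/thaweddb
# auto = replicate per the cluster's replication factor; 0 = keep a single, unreplicated copy
repFactor = auto

Setting repFactor = 0 (or omitting it) only stops replication; removing the index from the cluster is a separate step.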
Good afternoon, I want to generate an alert to monitor for loss of event ingestion in the different indexes, but I want the allowed ingestion gap to vary according to the index. That is to say, the Windows servers send events almost every minute, whereas the antivirus only sends events if it detects something, which can mean as little as one event every 5 days. So it does not make sense to check every minute, because the antivirus would generate a lot of noise, and not every 2 days either, because if communication with the forwarder were lost I would only notice 2 days later, and the service would not work efficiently. Does anyone know if it is possible to generate this alert without having to create a separate alert per index? Thank you very much in advance!
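One common pattern is a single scheduled alert driven by a lookup of per-index thresholds rather than one alert per index. A rough sketch, assuming a hypothetical lookup file index_thresholds.csv with columns index and max_minutes_silent:

| tstats max(_time) as last_event where index=* by index
| eval minutes_since = round((now() - last_event) / 60)
| lookup index_thresholds.csv index OUTPUT max_minutes_silent
| where minutes_since > max_minutes_silent

With that, a Windows index might get max_minutes_silent=15 while the antivirus index gets something like 7200 (5 days), all within one alert.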
| bin _time span=1d | stats count by user file _time | eval days_ago = ((relative_time(now(), "@d") - _time) / 86400) + 1 | stats sum(days_ago) as day_flag by user file | where day_flag < 3 This will give you day_flag = 1 if the file was missing yesterday and day_flag = 2 if the file was missing today.
Sure, thanks for the note. Is it possible to find the missing file? Any reference?
There is no problem with removing files from the directory. Other files are being removed using batch. This appears to be a regular expression processing issue. The splunkd log shows the watch being put on the path, and it processes the stanzas that relate to the files in question. The file I want to monitor has the filename DefaultAuditRecorder.log. The files I want to use batch on have the form DefaultAuditRecorder.############.log. The automatic Splunk conversion to a regular expression can't differentiate between these two filename formats, and defaults to monitor. I have made several attempts at a whitelist regular expression for the monitor and batch stanzas, but it still doesn't work.
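For what it's worth, a whitelist pair along these lines might separate the two patterns. This is only a sketch with placeholder paths, and whether monitor and batch behave well against the same directory is exactly what would need testing:

[monitor:///path/to/logs]
whitelist = DefaultAuditRecorder\.log$

[batch:///path/to/logs]
move_policy = sinkhole
whitelist = DefaultAuditRecorder\.\d+\.log$

The \d+ segment assumes the rotated files always carry a purely numeric middle part.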
appendcols is not often the way to go, and that is probably the case here too. The reason is that the events which are appended are not correlated with the first set of results, e.g. by user. You could try using chart   basesearch (including both days) | bin _time span=1d | chart count by user _time   This will at least give you the counts so you can subtract one day's count from the other. However, finding out which file or files are missing is trickier.
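As a rough sketch of that idea, assuming basesearch covers yesterday and today and the results carry a user field:

basesearch
| eval day = if(_time >= relative_time(now(), "@d"), "today", "yesterday")
| chart count over user by day
| eval Missing = coalesce(yesterday, 0) - coalesce(today, 0)

This only gives the per-user difference in counts; it still won't name the missing file.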
Hi I'm not sure what you are trying to do, but here are some comments which may help.

1. If you have created indexes on the SHC side, that does not affect the indexer cluster. Those are totally different entities, and all indexes for the indexer cluster must be created via the MN (manager node). What do you mean by "I want to write data to the index of this test through the Splunk API and obtain the written data from other search header nodes"? Usually data is written to indexes by the normal ingestion process. I suppose that you have set the SHC side outputs.conf to send all logs to the indexer cluster, and as you have added your MN as a search target on your SHC side this should work. Of course it depends on how you have configured your SHC whether this is needed on every SHC node or whether it is enough to do it on only one node (if I recall correctly). There are instructions for this in the docs.

2. KV store data is synchronised all the time on a SHC. If not, then you must fix your SHC cluster configuration. You can see this from the internal logs, MC or CLI. If you are using your own KV store collections, you could clean those. But if you clean the SHC's own collections then you mess up the SHC itself and you must create it again. Look for more in the docs.

3. What do you mean by this? How a SHC manages its internal communication, synchronisation etc. is described in the docs. Just read more here: https://docs.splunk.com/Documentation/Splunk/9.1.2/DistSearch/SHCarchitecture

r. Ismo
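For reference, the SHC-to-indexer forwarding mentioned in point 1 is usually just a small outputs.conf on each search head; a minimal sketch with placeholder host names and port:

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997

The indexes themselves would still be defined on the manager node and pushed to the peers.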
Hi Yuanliu, The query field is the domain visited (e.g. www.youtube.com); I am just renaming that field to "Domains". The Workstation is the hostname of the user's computer (e.g. ABC193423). I am ultimately looking to return unique values for the domains seen, but only the highest count (e.g. in the current results, if a user goes to www.youtube.com 3 times, it will show up 3 times in the search as 1 :, 2 :, 3 :, etc.). The intention here is to see how many times a domain was visited on a given Workstation, and include the User that accessed the domains. So the search can either return only the highest count seen for a domain, OR I can sort the counts descending to get the highest hits on top (this is meant to be a workaround to get the search working until I can figure out how to remove the "duplicated" lower counts). I thank you for your suggestion, but unfortunately it did not change how the data was populated; it still sorted the counts ascending.
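If I follow that correctly, collapsing to one row per Workstation/domain pair with a total count might look something like this sketch (field names are taken from your description, so treat them as assumptions):

basesearch
| stats count as Visits values(User) as User by Workstation query
| rename query as Domains
| sort - Visits

stats here replaces the running 1 :, 2 :, 3 : style counts with a single total per pair, and sort puts the highest hits on top.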
Found the solution. I had created the forwarding profile for the traffic and threat logs and set the forwarding to the Splunk server, but I didn't attach it to the security policy I wanted to monitor, so I was only getting the standard config and system logs that monitor the FW itself, not the data that is getting passed through it. Here is the knowledge article I found that helped me resolve my issue, if anyone has a similar problem in the future: Tips & Tricks: Forward traffic logs to a syslog server - Knowledge Base - Palo Alto Networks
Hi, I want to create a panel (table) to monitor today's data vs yesterday's log data as below. Please could you help? How do I get the missed data?

Current SPL:

basesearch | stats count as Count_Today by User | appendcols [ basesearch | stats count as Count_Yesterday by User] | eval Missing=abs(round(Count_Yesterday-Count_Today)) | table User Count_Today Count_Yesterday Missing

Expected Result:

User    Count_Today    Count_Yesterday    Missing    Missed File Name
ABC     5              4                  1          abc*
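One way to get at the "Missed File Name" column, assuming the file name is available in a field called file and the search time range spans yesterday through now, is a sketch like this:

basesearch
| eval day = if(_time >= relative_time(now(), "@d"), "today", "yesterday")
| stats values(day) as seen dc(day) as days by User file
| where days = 1 AND seen = "yesterday"
| stats values(file) as "Missed File Name" dc(file) as Missing by User

That lists, per user, the files seen yesterday but not today; joining it back to the count columns would be the next step.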