All Posts


If your data is always in the same order, as others already suggested, it's just a matter of setting up either a regex-based or delimiter-based extraction to find the value in a given position. But if the problem lies in the fact that the column order can change (and is always determined by a header row in each file), only INDEXED_EXTRACTIONS can help, because Splunk processes each event separately and has no way of knowing which "format" a particular row belongs to if different files had different header rows.
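For reference, indexed extractions are enabled in props.conf on whichever instance first reads the file (UF/HF). A minimal sketch, assuming a hypothetical sourcetype name and a comma-delimited file whose first line is the header:

[my_csv_with_header]
INDEXED_EXTRACTIONS = csv
FIELD_DELIMITER = ,
HEADER_FIELD_LINE_NUMBER = 1

With this, field names come from each file's own header row, so files with differing column orders still get extracted correctly.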
You can fire an alert either once per whole result set or separately for each result row. So if you want three alerts from six rows, you have to adjust your search to "squeeze" multiple results into one row.
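As a rough sketch of that "squeezing", assuming a hypothetical field called title is what the three alerts should correspond to:

<your base search>
| stats values(message) AS messages count AS row_count BY title

This collapses the six rows into one row per title; with the alert's trigger condition set to fire for each result, you then get one notification per title.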
I can't think of a practical way to make an alert that will alert once per title, but also have many separate rows per title. You may be trying to do too much with one module. You could set up the alert to use multi-value fields as per my previous suggestion, but then include a link in the alert to a separate search where each title is separate.
There is no one general formula to find such things. And there cannot be because it depends on your needs. It's like asking "I'm gonna start a company, how big a warehouse should I rent?". It depends. You might not even need to rent any warehouse if you're just gonna do accounting or IT consulting.
Do you mean to say that each event contains a row of headers and another row of values like the following?

Test1 Test2 Test3 Test4 Test5 Test6 Test7 Test8 Test9 Test10
101 102 103. 104. 105. 106. 107. 108. 109. 110

The easiest option is, as @gcusello suggested, to create a form to match this format and then use kvform. No matter which method you use, you have to answer one question: what is the delimiter? Obviously there is no comma. But it is totally unclear whether the delimiter is one space character, one tab character, or whether any run of whitespace characters should be interpreted as a single delimiter. The suitable solution differs depending on the delimiter. Here I illustrate a solution without kvform that works with any number of whitespace characters between fields:

| rex mode=sed "s/\n/::/ s/\s+/,/g s/::/ /"
| multikv

Your sample data will give you

Test1 Test10 Test2 Test3 Test4 Test5 Test6 Test7 Test8 Test9
101 110 102 103. 104. 105. 106. 107. 108. 109.

As I said, this is just one possible solution, and it is most suitable if the number of whitespace characters (and even the type of whitespace) between fields cannot be predetermined AND field names and values do not contain any whitespace. Here is an emulation that you can play with and compare with real data:

| makeresults
| eval _raw = "Test1 Test2 Test3 Test4 Test5 Test6 Test7 Test8 Test9 Test10
101 102 103. 104. 105. 106. 107. 108. 109. 110"
``` data emulation above ```
Hi @karthi2809,
let me understand: the issue is that your search runs if you choose a value but it doesn't run if you choose the "All" value ("*"), is that correct?
I don't see a big problem in your search; I'd only use search instead of where in the last condition, and I'd add parentheses in the main search:

index=mulesoft environment=PRD ($BankApp$ OR priority IN ("ERROR", "WARN"))
| stats values(*) AS * BY correlationId
| rename content.InterfaceName AS InterfaceName content.FileList{} AS FileList content.Filename as FileName content.ErrorMsg as ErrorMsg
| eval Status=case(priority="ERROR","ERROR", priority="WARN","WARN", priority!="ERROR","SUCCESS")
| fields Status InterfaceName applicationName FileList FileName correlationId ErrorMsg message
| search $interface$ FileList=*
| sort -timestamp

Ciao.
Giuseppe
Hi All.. Good news - I created a bug with Splunk Support and the Support team is working with the Dev team. One more small issue..

source="kuralOnly.txt" host="kuralonly" sourcetype="kural"
| rex max_match=0 "(?<char>(?=\\S)\\X)"
| eval KuralLen=mvcount(char)
| table _raw KuralLen

This SPL works, but the newline character is also getting counted. Please refer to the image; the kural

நாணாமை நாடாமை நாரின்மை யாதொன்றும்
பேணாமை பேதை தொழில்

has only 23 characters, but the SPL reports it as 24 characters, so I think the "newline" is also getting counted. Could you please suggest how to resolve this? Thanks.
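One possible workaround, if the extra count really does come from the line break (a sketch only, untested against this data): strip carriage returns and line feeds before counting:

source="kuralOnly.txt" host="kuralonly" sourcetype="kural"
| eval clean=replace(_raw, "[\r\n]+", "")
| rex field=clean max_match=0 "(?<char>(?=\\S)\\X)"
| eval KuralLen=mvcount(char)
| table _raw KuralLen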
You are avoiding answering the question everybody asks: what is your actual data structure? Is it close to what I speculated? Further, you have never illustrated the expected output. So, the two screenshots with only timestamps mean nothing to volunteers. How can we help further? Specifically, WHY should the output NOT have multiple timestamps after deduping application, name and target url? In your original SPL, the only dedup is on x, which is Application+Action+Target_URL. How is this different? Anything after mvexpand in my search is based on my reading of your intent, based only on that complex SPL sample. Instead of making volunteers read your mind, how about expressing the actual dataset, the result you are trying to get from the data, and the logic to derive the desired result from the dataset in plain language (without SPL)?
First, some quick comments. Judging from your attempted SPL and your tendency to think "join", Splunk is not laughing. It is just a stranger to you, and strangers can be intimidating. I often keep a browser tab open with https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/ so I can easily look up what's available and what syntax any command or function requires. For example, Time modifiers describes in much detail what you can give as earliest and latest. MaxTime is just not a thing in SPL. If MaxTime is a field you calculated previously, you cannot easily pass it into a subsearch (which is what the join command must call). For this latter point to strike home, you will need some more familiarity with how Splunk works; Splunk's Search Manual can be a good starting point.

Back to your actual use case: technically you can make Splunk do exactly what you wanted. But as I hinted above, Splunk - and Splunk practitioners like me - intimidate, nay bully, those who dare to join. Unless absolutely necessary, just use a Splunk-friendly query to achieve the same goal. It will benefit you in the short term as well as the long term.

You mention the time periods you tried to connect the two searches with, but give no indication of what the link between the two searches is. It seems obvious that you are not trying to "join" the two searches by _time. So there must be some logic other than just wanting to set the time interval differently. Can you describe the actual logic in your use case? What is the output you are trying to get? What are some data characteristics that help you arrive at your output? Illustrate in concrete terms or with mockup data.
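To make "Splunk-friendly" a bit more concrete: for something like comparing current and previous patch versions, a single pass with stats often does the job without any join. A sketch only, with assumed index, sourcetype and field names:

index=linux_patching sourcetype=package_update earliest=-60d
| stats latest(version) AS current_version earliest(version) AS previous_version BY host package
| where current_version != previous_version

One search over the data, no subsearch limits, and the "previous" value comes along for free.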
Hello Paul, Thank you for the quick response. It's direct from Palo Alto to Splunk; I am using the Palo Alto App and Add-on. I am not seeing the indexes growing at all. I tried looking at the data from the Search option and tried to match with various filters. Regards Rabab
Hi Rabab,
A few more details will be needed to help here.
Is your Palo Alto setup sending directly to Splunk, via a syslog server, or via an HF/UF?
Where have you tried looking for the data?
Have you looked to see if any of your indexes are growing?
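One quick way to check whether any index is receiving events at all (assuming access to the relevant indexes) is a tstats count over a recent window, for example:

| tstats count where index=* earliest=-1h by index sourcetype

If the Palo Alto data is arriving but landing in an unexpected index or sourcetype, it will show up here even when the app's dashboards stay empty.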
I have Splunk installed on a Windows machine and configured the Palo Alto app along with the add-on. I have done the configuration on Palo Alto, and I can see from a packet capture that Palo Alto is sending logs successfully to the Windows machine where Splunk is installed, but I cannot see anything in Splunk itself. Can anyone help? Regards Rabab
Hi @yuanliu, Thanks much for your response. I tried the SPL query you shared. If I dedup only application, name and target url, then I can see duplicates in the _time field (refer to screenshot 1); the real data's timestamps are in screenshot 2, which in turn gives an incorrect count between the query output and the real data logs (count 12).
Screenshot 1:
Screenshot 2:
It's been over a week now and I still don't have access to Splunk Cloud. Customer Support hasn't been able to help me so far.
Is Splunk laughing even harder? Does the second query (the one inside the join) run before the query outside the join?
There are no miracles. So if you delete files or directories and they reappear after some time, there must be something responsible for redeploying them onto your server. There are three different internal Splunk mechanisms that can cause that:
1) Indexer cluster config bundle management - as this is not an indexer, it doesn't apply here.
2) Search head cluster deployer config push - you're saying it's a standalone search head, so it wouldn't apply either.
3) Deployment from a Deployment Server - that's a possible scenario.
I suppose the easiest way to verify whether it is configured to pull apps from a DS is to either run
splunk btool deploymentclient list target-broker:deploymentServer
or verify the existence of the $SPLUNK_HOME/var/run/serverclass.xml file.
Of course there is also the possibility that your configs are managed by some external provisioning tool like Ansible, Puppet, Chef or any kind of in-house built script. But this is something we cannot know.
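For reference, a deployment-client configuration usually looks something like the sketch below (hypothetical host name). If a deploymentclient.conf with a stanza like this exists anywhere under $SPLUNK_HOME/etc, the instance will keep pulling apps from that deployment server and re-create whatever you delete locally:

[deployment-client]

[target-broker:deploymentServer]
targetUri = deploymentserver.example.com:8089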
Yes, thank you. I got so focused on explaining why that I forgot to write what to do more explicitly.
A few additional remarks to an otherwise quite good explanation.

1. Cold buckets are rolled to frozen. By default that means they just get deleted, but they might instead be archived to yet another storage location or processed by an external script (for example, compressed and encrypted for long-term storage). But yes, by default "freezing" equals deleting the bucket.

2. As you mentioned when listing the parameters affecting the bucket lifecycle, there are also limits regarding volume size. So a bucket rolls from cold to frozen if any of these conditions is met:
1) The bucket is older than the retention limit (the _newest_ event in the bucket is _older_ than the limit - in other words, the whole bucket contains only events older than the retention limit).
2) The index has grown beyond its size limit.
3) The volume has grown beyond its size limit.
Obviously condition 3) can only be met if your index directories are defined using volumes. Under condition 2) Splunk freezes the oldest bucket for that index (again, the one whose newest event is oldest), but under condition 3) Splunk freezes the oldest bucket from any of the indexes contained on the volume.

You can find the actual reason a bucket was frozen by searching your _internal index for AsyncFreezer. Typically, if just one index freezes before reaching its retention period you'd suspect that index is running out of space, but if buckets from many indexes get prematurely frozen it might be a volume size issue. That said, you can see the volume size limit affecting just one index if your indexes have significantly differing retention periods, so that one of them contains much older events than the others.
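For orientation, these limits map onto indexes.conf settings roughly as in the sketch below (placeholder paths and values, not recommendations):

# Volume-level size cap - condition 3)
[volume:cold]
path = /data/splunk/cold
maxVolumeDataSizeMB = 500000

# Per-index settings - conditions 1) and 2)
[my_index]
homePath = $SPLUNK_DB/my_index/db
coldPath = volume:cold/my_index/colddb
thawedPath = $SPLUNK_DB/my_index/thaweddb
# Retention: 90 days in seconds - condition 1)
frozenTimePeriodInSecs = 7776000
# Total index size cap in MB - condition 2)
maxTotalDataSizeMB = 100000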
I need to find the earlier version number of Linux patches. I have to compare many patches, so I wanted to use a join for two queries (assuming patching happens once a month, but not all packages have an update every month). The first query would get the latest packages patched (within the last 30 days). Depending on what day of the month the patching occurred, I would like to pass the earliest datetime stamp found, minus X seconds (as MaxTime), to the second query. So the second query could use the same index, source and sourcetype, but with latest=MaxTime. Don't try this at home: putting latest=MaxTime-10 in the second query caused Splunk to laugh at me and return "Invalid value 'MaxTime-10' for time term 'latest'"... no hard feelings, Splunk laughs at me often. Thanks for any assistance in advance. JLund
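For what it's worth, the usual join-free way to feed a computed timestamp into another search's time range is a subsearch that returns a latest term. A sketch only, with made-up index and sourcetype names:

index=linux sourcetype=package_update
    [ search index=linux sourcetype=package_update earliest=-30d
      | stats min(_time) AS MaxTime
      | eval latest=MaxTime-10
      | return latest ]
| stats latest(version) AS previous_version BY package

The subsearch runs first, computes MaxTime-10, and hands the value back to the outer search as latest=<epoch>, which Splunk accepts as a time term (unlike the literal latest=MaxTime-10).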
@deepakc Is there a formula I can use to determine the right diskSize, maxTotalDataSize, maxWarmDBCount? I think that will help me set the right values for these parameters.