All Posts


That doesn't appear to be what I recommended - perhaps that's why you are not getting any results? It would help if you could share some sample anonymised events so we can see what you are dealing with and try to figure out a search that will work for you, because discussing searches without knowing the data they apply to is often fruitless.
Hi Team, I need to merge three queries that run against the same index and sourcetypes, but each query requires its own extraction and manipulation of its output.

Query 1: a single index is linked to three unique sourcetypes: index=abc sourcetype=def, sourcetype=ghi & sourcetype=jkl
Query 2: the same as Query 1.
Query 3: the same as Queries 1 and 2.

The index and sourcetype details remain consistent across all three queries, but the keywords differ. Thus, I aim to merge the three queries, compare them, and extract the desired output.

For instance, in the first query the "Step" field is extracted during the search and contains diverse data such as computer names and OS information. In the second query, the aim is to ascertain the count of successful occurrences in the "Step" field, specifically the count of computer names indicating success. Likewise, the third query retrieves information regarding failures.

Query 1:
index="abc" ("Restart transaction item" NOT "Pending : transaction item:") | rex field=_raw "Restart transaction item: (?<Step>.*?) \(WorkId:" | table Step | stats count by Step
Query 2:
index="abc" ("Error restart workflow item:") | rex field=_raw "Error restart workflow item: (?<Success>.*?) \(WorkId:" | table Success | stats count by Success
Query 3:
index="abc" "Restart Pending event from command," | rex field=_raw "Restart Pending event from command, (?<Failure>.*?) \Workid" | table Failure | stats count by Failure

Thus, from the first query the Step field is extracted, and my objective is to extract both success and failure data from this field and present it in a tabular format. I attempted a join, but it was unsuccessful. Any assistance would be greatly appreciated.
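One join-free way to approach this - a minimal sketch, untested, and note that it assumes the third extraction should also end at "(WorkId:" (the "\Workid" in Query 3 looks like a typo, so adjust the rex to your real events) - is to search all three patterns at once, tag each event, and aggregate once:

index="abc" (("Restart transaction item" NOT "Pending : transaction item:") OR "Error restart workflow item:" OR "Restart Pending event from command,")
| rex field=_raw "Restart transaction item: (?<Step>.*?) \(WorkId:"
| rex field=_raw "Error restart workflow item: (?<Success>.*?) \(WorkId:"
| rex field=_raw "Restart Pending event from command, (?<Failure>.*?) \(WorkId:"
| eval Value=coalesce(Step, Success, Failure)
| eval Type=case(isnotnull(Step), "Step", isnotnull(Success), "Success", isnotnull(Failure), "Failure")
| stats count BY Type Value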
Wait a second. Nine out of ten indexers have roughly the same number of buckets and one has just a third of that? And this one has significantly larger buckets? That is strange. With ingestion imbalance as the primary factor you should have one or a few indexers with a bigger bucket count, not a smaller one. Since you have larger buckets, I'd hazard a guess that:
1) You have primary buckets on that indexer (so you have some imbalance if this indexer receives all the primaries there).
2) The summaries are generated on that indexer (hence the increased size).
3) The summaries are not replicated between peers (if I remember correctly, replicating summaries must be explicitly enabled).
So your indexer is overused because it has all the primaries and all summary-generating searches hit just this indexer. And probably, due to the size of the index(es) or the volume(s), your buckets might get frozen earlier than on other indexers.
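If summary replication does turn out to be disabled, enabling it is a one-line change; a minimal sketch, assuming a clustered deployment where the setting goes in server.conf on the cluster manager (verify the stanza against your version's docs):

# server.conf on the cluster manager
[clustering]
# replicate report-acceleration summaries to peers along with the buckets they cover
summary_replication = true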
I assume the Palo Alto is sending events as syslog data. As you're using Windows, I suspect you're not using any additional syslog receiver but want to receive syslog directly on your Splunk instance (which is not the best idea, but let's leave that for now). Have you configured any inputs on your Splunk instance to receive the syslog events? Do you have proper rules in your server's firewall to allow this traffic?
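For reference, a direct syslog input looks roughly like this; a minimal sketch assuming UDP 514 and the Palo Alto Add-on's pan:log sourcetype - adjust the port, index, and sourcetype to your setup:

# inputs.conf on the receiving Splunk instance
[udp://514]
# the Palo Alto Add-on expects raw firewall syslog as pan:log
sourcetype = pan:log
index = pan_logs
# do not append a timestamp and host to each received event
no_appending_timestamp = true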
Ok, you probably come to Splunk from a different background, so it's only natural that you don't have Splunk habits (yet) and try to solve problems the way you know. Don't worry, it will come with time. One thing worth remembering is that join is very rarely the way to go in Splunk. Tell us what you have in your data (sample events, obfuscated if needed), explain whether there are any relationships between different events and what you want to get from that data (not how you're trying to get it), and we'll see what can be done.
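As a generic illustration of the usual join-free pattern (the index, sourcetypes, and field names here are hypothetical): search both event types at once and let stats bring related events together by their shared key.

index=myindex (sourcetype=orders OR sourcetype=shipments)
| stats values(order_status) AS order_status values(ship_date) AS ship_date BY order_id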
If your data is always in the same order, as others have already suggested, it's just a matter of setting up either a regex-based or a delimiter-based extraction to find the value in a given position. But if the problem lies in the fact that the column order can change (and is always determined by a header row in the file), only INDEXED_EXTRACTIONS can help, because Splunk otherwise processes each event separately and has no way of knowing which "format" a particular row belongs to if different files have different header rows.
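For the header-driven case, the index-time setup looks roughly like this; a minimal sketch for props.conf on the instance that first parses the file, assuming a comma-delimited file whose first line is the header (the sourcetype name is hypothetical):

# props.conf
[my_csv_sourcetype]
# read each file's own header row at index time, so column order may vary per file
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1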
You can fire an alert either once per whole result set or separately for each result row. So if you want three alerts from six rows, you have to adjust your search to "squeeze" multiple results into one row.
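A minimal sketch of that "squeeze", assuming the six rows group into three by a field such as title (both field names are hypothetical); values() collapses each group into one multi-value row, so an alert that triggers for each result fires once per title:

... | stats values(detail) AS details count AS rows BY title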
I can't think of a practical way to make an alert that triggers once per title but also keeps many separate rows per title. You may be trying to do too much with one module. You could set up the alert to use multi-value fields as per my previous suggestion, but then include a link in the alert to a separate search where each title is shown separately.
There is no one general formula for such things. And there cannot be, because it depends on your needs. It's like asking, "I'm going to start a company, how big a warehouse should I rent?" It depends. You might not even need a warehouse at all if you're just going to do accounting or IT consulting.
Do you mean to say that each event contains a row of headers and another row of values, like the following?

Test1 Test2 Test3 Test4 Test5 Test6 Test7 Test8 Test9 Test10
101 102 103. 104. 105. 106. 107. 108. 109. 110

The easiest is, as @gcusello suggested, to create a form to match this format, then use kvform. No matter which method you use, you have to answer one question: what is the delimiter? Obviously there is no comma. But it is totally unclear whether the delimiter is one space character, one tab character, or whether any run of whitespace characters should be interpreted as one delimiter. The suitable solution differs depending on the delimiter. Here I illustrate a solution without kvform that works with any number of whitespace characters between fields.

| rex mode=sed "s/\n/::/ s/\s+/,/g s/::/ /"
| multikv

Your sample data will give you

Test1 Test10 Test2 Test3 Test4 Test5 Test6 Test7 Test8 Test9
101 110 102 103. 104. 105. 106. 107. 108. 109.

As I said, this is just one possible solution, and it is most suitable if the number (and even the type) of whitespace characters between fields cannot be predetermined AND field names and values do not contain any whitespace. Here is an emulation that you can play with and compare with real data:

| makeresults
| eval _raw = "Test1 Test2 Test3 Test4 Test5 Test6 Test7 Test8 Test9 Test10
101 102 103. 104. 105. 106. 107. 108. 109. 110"
``` data emulation above ```
Hi @karthi2809 ,
let me understand: the issue is that your search runs if you choose a value but it doesn't run if you choose the "All" value ("*"), is that correct? I don't see big problems in your search; I'd only use search instead of where in the last condition, and I'd add parentheses in the main search:

index=mulesoft environment=PRD ($BankApp$ OR priority IN ("ERROR", "WARN"))
| stats values(*) AS * BY correlationId
| rename content.InterfaceName AS InterfaceName content.FileList{} AS FileList content.Filename AS FileName content.ErrorMsg AS ErrorMsg
| eval Status=case(priority="ERROR","ERROR", priority="WARN","WARN", priority!="ERROR","SUCCESS")
| fields Status InterfaceName applicationName FileList FileName correlationId ErrorMsg message
| search $interface$ FileList=*
| sort -timestamp

Ciao.
Giuseppe
Hi All.
Good news - I created a bug with Splunk Support and the Support team is working with the Dev team.
One more small issue:

source="kuralOnly.txt" host="kuralonly" sourcetype="kural"
| rex max_match=0 "(?<char>(?=\S)\X)"
| eval KuralLen=mvcount(char)
| table _raw KuralLen

This SPL works, but the newline character is also getting counted. Please refer to the image: the kural

நாணாமை நாடாமை நாரின்மை யாதொன்றும்
பேணாமை பேதை தொழில்

has only 23 characters, but the SPL reports it as 24, so I think the newline is also getting counted. Could you please suggest how to resolve this? Thanks.
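One way to test that theory is to strip line breaks before counting; a minimal sketch, untested:

source="kuralOnly.txt" host="kuralonly" sourcetype="kural"
| eval oneline=replace(_raw, "[\r\n]+", "")
| rex field=oneline max_match=0 "(?<char>(?=\S)\X)"
| eval KuralLen=mvcount(char)
| table _raw KuralLen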
You are avoiding answering the question everybody asks: what is your actual data structure? Is it close to what I speculated? Further, you have never illustrated the expected output. So, the two screenshots with only timestamps mean nothing to volunteers. How can we help further? Specifically, WHY should the output NOT have multiple timestamps after deduping application, name, and target URL? In your original SPL, the only dedup is on x, which is Application+Action+Target_URL. How is this different? Everything after mvexpand in my search is based on my reading of your intent from that complex SPL sample alone. Instead of making volunteers read your mind, how about expressing the actual dataset, the result you are trying to get from the data, and the logic to derive the desired result, in plain language (without SPL)?
First, a quick comment. Judging from your attempted SPL and your tendency to think "join", Splunk is not laughing. It is just a stranger to you. And strangers can be intimidating. I often keep a browser tab open with https://docs.splunk.com/Documentation/Splunk/latest/SearchReference/ so I can easily look up what's available and what syntax any command or function requires. For example, Time modifiers describes in much detail what you can give as earliest and latest. MaxTime is just not a thing in SPL. If MaxTime is a field you calculated previously, you cannot easily pass it into a subsearch (which is what the join command must call). For this latter point to strike home, you will need some more familiarity with how Splunk works. Splunk's Search Manual can be a good starting point.

Back to your actual use case: technically you can make Splunk do exactly what you want. But as I hinted above, Splunk - and Splunk practitioners like me - intimidate, nay bully, those who dare to join. Unless absolutely necessary, just use a Splunk-friendly query to achieve the same goal. It will benefit you in the short term as well as the long.

You mention the time periods you tried to connect the two searches with, but give no indication of what the link between the two searches is. It seems obvious that you are not trying to "join" the two searches by _time. So there must be some logic other than just wanting to set the time intervals differently. Can you describe the actual logic in your use case? What output are you trying to get? What data characteristics help you arrive at that output? Illustrate in concrete terms or with mockup data.
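If the link between the searches really is time-based, the Splunk-friendly pattern is to let a subsearch compute the window and hand it to the outer search as earliest/latest; a minimal sketch with hypothetical index, sourcetype, and window:

index=main sourcetype=access_combined
    [ search index=main sourcetype=deploy_log
      | stats max(_time) AS latest
      | eval earliest=latest-3600
      | return earliest latest ]
| stats count BY status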
Hello Paul,
Thank you for the quick response. It's direct from Palo Alto to Splunk; I am using the Palo Alto App and Add-on. I am not seeing the indexes growing at all. I tried looking at the data from the Search option and tried to match it with various filters.
Regards,
Rabab
Hi Rabab,
A few more details will be needed to help here.
Is your Palo Alto setup sending directly to Splunk, via a syslog server, or via an HF/UF?
Where have you tried looking for the data?
Have you looked to see whether any of your indexes are growing?
I have Splunk installed on a Windows machine and configured the Palo Alto app along with the Add-on. I have done the configuration on Palo Alto. I can see from a packet capture that Palo Alto is sending logs successfully to the Windows machine where Splunk is installed, but I cannot see anything in Splunk itself. Can anyone help?
Regards,
Rabab
Hi @yuanliu ,
Thanks very much for your response. I tried the SPL query you shared. If I dedup only application, name, and target URL, I see duplicates in the _time field (refer to screenshot 1); the real data's timestamps are in screenshot 2. This in turn gives a count mismatch between the query output and the real data logs (count 12).
Screenshot 1
Screenshot 2:
It's been over a week now and I still don't have access to Splunk Cloud. Customer Support hasn't been able to help me so far.
Is Splunk laughing even harder? Does the second query - the one inside the join - run before the query outside the join?