All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


What happens if you manually use the sendemail command?

| makeresults | sendemail to="it-security@durr.com" subject="Test mail" message="Test mail message"
+1 with @TheLawsOfChaos. It's a common practice to create a role with "Read Only" permission. Do you have any further questions or issues with respect to this, @treven?
@gaurav10 Note that in @ITWhisperer 's solution, Event_Time is handled in 2 steps, with the binning in between: First, convert SUBMIT_TIME to a time field using strptime. Now you can bin based on a time span. Do your binning in this in-between phase. Second, convert the new Event_Time to a string using strftime.
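A minimal sketch of those two steps with the binning in between (field names SUBMIT_TIME and Event_Time are from the thread; the timestamp format, 1h span, and the stats step are assumptions for illustration):

```
| eval Event_Time = strptime(SUBMIT_TIME, "%Y-%m-%d %H:%M:%S")
| bin Event_Time span=1h
| stats count by Event_Time
| eval Event_Time = strftime(Event_Time, "%Y-%m-%d %H:%M")
```

The key point is that bin operates on the numeric (epoch) value, so it has to happen after strptime and before strftime turns the field back into a display string.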
It seems that splunklib is trying to get the obsolete pycrypto library. Ideally you should configure it to use the cryptography or pycryptodome library. There is an answer here on how to change splunklib to use pycryptodome: https://stackoverflow.com/questions/59104347/how-do-i-install-splunklib-for-python-3-7-on-windows In case that link goes down, here are the instructions (by Chris, Dec 2 2019, 8:42): I finally found the way to install it: 1) Uninstall pycrypto: pip uninstall pycrypto 2) Install pycryptodome as a replacement for pycrypto: pip install pycryptodome 3) Install splunklib without dependencies: pip install splunklib --no-deps 4) Edit "pythonlib"\splunklib-1.0.0.dist-info\METADATA and replace "Requires-Dist: pycrypto" with "Requires-Dist: pycryptodome" 5) Install splunk-sdk: pip install splunk-sdk 6) Check that everything is OK: pip install splunklib
Hi @bowesmana, thanks for the answer! I checked the balance with the SPL you gave me. The balance doesn't look bad. I confirmed that the major indexes have buckets spread across all of the indexers.
Hi @PickleRick! Thanks for the answer. I didn't know about the primary and non-primary searchable copy terms until you mentioned them. In our operating environment, summaries are rarely used. So I think we need to collect information about the primary copies and find the cause. Thank you again!
Hello PickleRick, I have created a data input to allow udp14 traffic, and the index as well. Please check these screenshots for clarity: 192.168.3.5 is the Palo device and 192.168.3.1 is the Windows machine where Splunk is installed.
Premium apps are associated with a specific account provided on a purchase order, as far as I remember. Find the person responsible for this order in your company or contact your local Splunk sales team.
Hi! I want to try Splunk UBA on a single Linux machine. But on the app download page, I'm seeing an error saying app installation is restricted to certain users and my user profile is not in that list. Any suggestions to resolve this? Thanks, Abhishek
While @ITWhisperer 's response about the original data (and desired result) is valid, there is one important thing worth noting - with Splunk often the approach of "joining" separate searches is not the best idea. The typical Splunk approach would be to search for all events in the initial search and then subsequently filter and split into separate categories further down the search pipeline.
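As an illustration of that pattern (the index, sourcetypes, and category logic here are hypothetical, not from the original question):

```
index=web sourcetype=access_combined OR sourcetype=error_log
| eval category=if(sourcetype=="error_log", "error", "access")
| stats count by category
```

One pass over the data retrieves everything, and the eval/stats stages split it into categories — no join, no subsearches, no subsearch result limits to worry about.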
It would help if you could share some sample anonymised events so we can see what it is that you are dealing with and try to figure a search that will work for you, because just discussing searches without knowing what they apply to is often fruitless.
That doesn't appear to be what I recommended - perhaps that's why you are not getting any results? It would help if you could share some sample anonymised events so we can see what it is that you are dealing with and try to figure a search that will work for you, because just discussing searches without knowing what they apply to is often fruitless.
Hi Team, I need to merge three queries originating from the same index and sourcetypes, yet each query requires extraction and manipulation of its output.

Query 1: A single index is linked to three unique sourcetypes. index=abc sourcetype=def, sourcetype=ghi & sourcetype=jkl

Query 2: It's the same as Query 1. index=abc sourcetype=def, sourcetype=ghi & sourcetype=jkl

Query 3: It's the same as Queries 1 & 2. index=abc sourcetype=def, sourcetype=ghi & sourcetype=jkl

The index and sourcetype details remain consistent across all three queries, but the keywords differ. Thus, I aim to merge the three queries, compare them, and extract the desired output.

For instance, in the first query, the "Step" field is extracted during the search process, containing diverse data such as computer names and OS information. In the second query, our aim is to ascertain the count of successful occurrences in the "Step" field, specifically the count of computer names indicating success. Likewise, in the third query, we intend to retrieve information regarding failures.

Query 1: index="abc" ("Restart transaction item" NOT "Pending : transaction item:") | rex field=_raw "Restart transaction item: (?<Step>.*?) \(WorkId:" | table Step | stats Count by Step

Query 2: index="abc" ("Error restart workflow item:") | rex field=_raw "Error restart workflow item: (?<Success>.*?) \(WorkId:" | table Success | stats Count by Success

Query 3: index="abc" "Restart Pending event from command," | rex field=_raw "Restart Pending event from command, (?<Failure>.*?) \Workid" | table Failure | stats Count by Failure

Thus, in the first query, the Step field is extracted, and our objective is to extract both success and failure data from this field, presenting it in a tabular format. We attempted a join query, but it was unsuccessful. Assistance in this matter would be greatly appreciated.
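One hedged sketch of combining the three extractions in a single search rather than a join (the rex patterns are copied verbatim from the question; the OR-ed keyword filter, the coalesce, and the Status labels are assumptions about the desired output):

```
index="abc" ("Restart transaction item" NOT "Pending : transaction item:") OR "Error restart workflow item:" OR "Restart Pending event from command,"
| rex field=_raw "Restart transaction item: (?<Step>.*?) \(WorkId:"
| rex field=_raw "Error restart workflow item: (?<Success>.*?) \(WorkId:"
| rex field=_raw "Restart Pending event from command, (?<Failure>.*?) \Workid"
| eval Item=coalesce(Step, Success, Failure)
| eval Status=case(isnotnull(Step), "restarted", isnotnull(Success), "success", isnotnull(Failure), "failure")
| stats count by Item, Status
```

Each event matches at most one rex, so coalesce picks whichever field was extracted, and a single stats then counts per item and status — no join needed.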
Wait a second. 9 out of 10 indexers have roughly the same number of buckets and 1 has just 1/3 of those? And this one has significantly larger buckets? That is strange. With ingestion imbalance as a primary factor you should have one or a few indexers with a bigger bucket count, not a smaller one. If you have larger buckets, I'd hazard a guess that: 1) You have primary buckets on that indexer (so you have some imbalance if this indexer receives all the primaries there) 2) The summaries are generated on that indexer (hence the increased size) 3) The summaries are not replicated between peers (if I remember correctly, replicating summaries must be explicitly enabled) So your indexer is overused because it has all the primaries and all summary-generating searches hit just this indexer. And probably due to the size of the index(es) or the volume(s), your buckets might get frozen earlier than on other indexers.
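If summary replication turns out to be the culprit, it is enabled on the cluster manager; a minimal sketch, assuming indexer clustering is in use (summary_replication is a real server.conf clustering setting, but verify it against your Splunk version's server.conf documentation before applying):

```
# server.conf on the cluster manager
[clustering]
summary_replication = true
```

With this enabled, summaries are replicated alongside the buckets they summarize, so summary-generating searches stop being pinned to one peer.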
I assume the Palo Alto is sending events as syslog data. As you're using Windows, I suspect you're not using any additional syslog receiver but want to receive syslog directly on your Splunk instance (which is not the best idea, but let's leave it for now). Have you configured any inputs on your Splunk instance to receive the syslog events? Do you have proper rules in your server's firewall to allow this traffic?
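For reference, a minimal UDP syslog input stanza looks roughly like this (the port, index, and sourcetype here are assumptions — adjust to whatever your Palo device actually sends to and whatever index you created):

```
# inputs.conf on the receiving Splunk instance
[udp://514]
index = pan_logs
sourcetype = pan:log
connection_host = ip
```

After adding it (and restarting or reloading), check the firewall allows inbound UDP on that port before looking anywhere else.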
OK, you probably come to Splunk from a different background, so it's only natural that you don't have Splunk habits (yet) and try to solve problems the way you know. Don't worry, it will come with time. One thing worth remembering is that join is very rarely the way to go with Splunk. Tell us what you have in your data (sample events, obfuscated if needed), explaining if there are any relationships between different events, and what you want to get from that data (not how you're trying to get it), and we'll see what can be done.
If your data is always in the same order, as others already suggested, it's just a matter of setting up either a regex-based or delimiter-based extraction to find a value in a given position. But if the problem lies in the fact that the column order can change (and is always determined by a header row in a file), only INDEXED_EXTRACTIONS can help, because Splunk processes each event separately, so it has no way of knowing which "format" a particular row belongs to if different files had different header rows.
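A sketch of such a sourcetype definition (the sourcetype name is hypothetical; note that INDEXED_EXTRACTIONS must be configured where the file is read, i.e. on the universal forwarder, not on the indexers):

```
# props.conf on the forwarder monitoring the files
[my_csv_sourcetype]
INDEXED_EXTRACTIONS = csv
HEADER_FIELD_LINE_NUMBER = 1
```

Because the header is parsed at input time, each file's own header row determines the field names for the rows in that file, which is exactly what per-event search-time extraction cannot do.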
You can fire an alert either once per whole result set or separately per each result row. So if you want three alerts from six rows, you have to adjust your search to "squeeze" multiple results into one row.
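A common way to do that squeezing is to collapse rows into multi-value fields, one row per trigger (the field names title and message here are hypothetical):

```
... | stats values(message) as messages, count by title
```

With "trigger for each result" selected, this fires once per title, and the messages field carries all the individual rows' values inside that single alert.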
I can't think of a practical way to make an alert that will alert once per title, but also have many separate rows per title. You may be trying to do too much with one module. You could set up the alert to use multi-value fields as per my previous suggestion, but then include a link in the alert to a separate search where each title is separate.
There is no one general formula to find such things. And there cannot be because it depends on your needs. It's like asking "I'm gonna start a company, how big a warehouse should I rent?". It depends. You might not even need to rent any warehouse if you're just gonna do accounting or IT consulting.