All Posts


Please do not use screenshots to show text data. Use a text box instead. Show example(s) of complete events, anonymized as needed. If a complete event is too long with lots of repetitive structures, show one complete structure.

"events like Authentication failed, such as exceeding X number of failed logins over Y time." Is this condition per user or in total? What counts as one? One event? Each entry in an event? Each user failure in an event?
"In the end the problem I have is that the eventID is far from unique for reasonable search windows"

Yes, you mentioned this earlier and I forgot. So, it sounds like starttimeu and endtimeu are not preexisting fields that you can search on, but they signal a desire to limit each subsearch's search window. Is this correct? (In the other thread, the purpose was also to limit the search window, using simply earliest and latest.) If earliest and latest are what you want, you can do this and it should work the same way as your original search:

search index=ix1
    [search index=ix2 eventStatus="Successful" | return 1000 eventID]
    [search index=ix2 eventStatus="Successful"
     | localize timeafter=0m timebefore=1m
     | fields starttime endtime
     | rename starttime as earliest, endtime as latest
     | format]
| stats values(client) values(port) values(target) by eventID
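For illustration only (not part of the reply above, and the epoch values are made up): with format's default settings, the second subsearch hands the outer search a time-window filter that expands to roughly this, one OR'ed group per localized range:

( ( earliest="1736239200" AND latest="1736239260" ) OR ( earliest="1736242800" AND latest="1736242860" ) )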
No. Your understanding is wrong. At each step of the search pipeline, Splunk only has the data from the previous step in the pipe. So when you filter your data with | search, it only retains the results matched by your command. The data from before that command is no longer there, so stats cannot use it to group by. How should it know what the possible values are? Splunk cannot find what is not there, so it cannot generate a zero result for a potential non-existing by-field value, because it doesn't know what that value would be. The only case where stats does return a zero is when you do an overall stats count and have no results at all.
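To make the pipeline point concrete, a tiny sketch with hypothetical index and field names: after the | search command only the failure events remain, so the final stats can only group by users that actually had a failure; it has no way to emit count=0 for a user it never saw:

index=my_index sourcetype=my_auth
| search status=failure
| stats count by user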
Hello @bishida, Thank you for taking the time to look into it and for all your help and support. It's truly appreciated.
Hi, My understanding is that the data is there somewhere and Splunk decided not to display 0 when using stats count. Thanks
And how is Splunk supposed to know that there should be something there if there isn't? You have to tell it somehow. If you know that there is always something with the 2 at the end and that it is to be the base of your count, that's a completely different use case and can be done with some grouping and filtering.
Hi all, I have this use case: I need to create a Splunk alert for this scenario. Detections will be created from Splunk logs for specific events like Authentication failed, such as exceeding X number of failed logins over Y time. Below is the Splunk search I am using:

index=nprod_database sourcetype=tigergraph:app:auditlog:8542 host=VCAUSC11EUAT*
| search userAgent OR "actionName":"login" "timestamp":"2025-01-07T*"
| sort -_time

I am not able to write the correct search query to find Authentication failed exceeding, for example, 3 times. Attached screenshot. Thanks for your help. Dieudonne.
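A minimal sketch of the usual threshold pattern, assuming the failed logins can be matched by a raw term and that a user field identifies the account (both are assumptions, adjust to your actual data):

index=nprod_database sourcetype=tigergraph:app:auditlog:8542 host=VCAUSC11EUAT* "Authentication failed"
| bin _time span=10m
| stats count by user, _time
| where count > 3

The where clause keeps only users exceeding the threshold within each 10-minute bucket, so the alert can simply trigger whenever the search returns results.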
OK. The important thing here is _why_ you're not getting results, so this is a learning experience. Let's dig into your search (which is actually pretty close to being OK for the task).

index=my_index app=a

That's apparently your initial search, nothing to see here. Move along.

| table app action user

Now you're making a table containing just the fields app, action and user. This is the wrong approach, on two levels.

One - table is a transforming command. That might not make much difference in a small-scale scenario, especially when everything is done on a single server, but, very importantly, in a bigger environment with several indexers and a separate search head your search would at this point be moved from the indexer tier (where the same operation can be run in parallel by each indexer on its own part of the data) to the search head. So avoid using the table command early in your pipeline. If you want to limit the processed fields, use the fields command. And remember that in fast search mode only the fields used in the search are extracted and displayed, so this intermediate limiting of processed fields might be useful in more complicated searches, but in this case (and we'll come back to this shortly) it doesn't make much sense.

Two - and that will be important soon - after this command you are left with just those three explicitly requested fields. While the fields command by default drags along the internal fields unless you explicitly exclude them (most importantly _time and _raw), the table command only returns the explicitly specified fields.

| eval Month=strftime(_time,"%m")

The overall idea is sound - you want a field called Month containing a string representation of your month. The problem is that in the previous step you removed the _time field from your data. So you don't have the timestamp anymore and you have nothing to render into this month string. BTW, when manipulating timestamps in the middle of a search I prefer to use fieldformat instead of eval, so that you still have your original timestamps and can do arithmetic on them or sort them without needing to parse them again, but that's a story for another time.

| stats count by user Month

And that would be a good thing, but in the previous step you didn't get any values for Month. So Splunk cannot group by Month because there is no Month, and you're not getting any results at all. Since you're doing the counting quite early, you could simply have omitted the table command, because stats limits the output to the count value and the by-fields anyway. And you're missing the action part - you want the actions separately, right? So you can't aggregate them for a single user; you must add the action field to the by-fields. So your search could finally look like this:

index=my_index app=a
| eval Month=strftime(_time,"%m")
| stats count by user action Month
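A small aside on the fieldformat remark (the field names are illustrative, not from the original thread): fieldformat only changes how a value is rendered, so the underlying epoch value is still there for sorting and arithmetic:

index=my_index app=a
| stats latest(_time) as last_seen by user action
| fieldformat last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")
| sort - last_seen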
Hello, We are Splunk Cloud subscribers. We want to utilize the NetApp for Splunk Add-On. We have two on-site deployment servers, one Windows, one Linux, and an on-site heavy forwarder. My interpretation of the instructions is that we install the NetApp Add-Ons (ONTAP Indexes & Extractions) on the cloud-hosted search head. The Cloud instructions leave me with the impression that we may also need to utilize the heavy forwarder as a data collection node for the NetApp Add-Ons; there we would manually install the app components within the Splunk home /etc/apps directory. Looking at the deployment server and the heavy forwarder, both Splunk home directories have directory permissions set to 700. We're hoping this method of installation does not apply to us and that the cloud installation process automated much of this, obviating the need to manually configure the heavy forwarder. Upon completing these Add-On installations via the cloud-hosted search head, are there any additional steps or actions we will need to take to complete the installation, aside from the NetApp appliance configurations? Thank you, Terry
Hello, Is there any other way to do this? The data is dynamic. If I do it this way, I have to have another process to dump a CSV file. Thanks
Typically you don't have logs from the client's side. It's a far more common use case to have web server or proxy server logs. And yes, part of normalizing your data to make it CIM-compliant is making sure the fields are properly mapped, or calculated if they're not there in the original data. You can help yourself with this task, at least partially, with the Add-On Builder.
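As a rough sketch only (the sourcetype and field names here are assumptions, not taken from this thread), the mapping and calculation can be prototyped at search time before persisting it as field aliases and calculated fields in the add-on:

index=web sourcetype=access_combined
| rename clientip AS src, uri AS url
| eval action=if(status < 400, "success", "failure")
| table _time src url status action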
IMO, "correct" SPL is whatever produces the desired results. Some guidelines:
- filter events as early as possible
- use non-streaming commands as late as possible
- use fields instead of table until the end
- avoid expensive commands like join, transaction, and append
- avoid leading wildcards in the search command
I'm sure others will have more guidelines.
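A small sketch (hypothetical index and field names) applying a couple of those guidelines - filter in the initial search, keep the field set narrow with fields rather than table, and leave presentation to the very end:

index=web sourcetype=access_combined status>=500 earliest=-24h
| fields _time, status, uri_path, clientip
| stats count by status, uri_path
| sort - count
| table status, uri_path, count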
If this is email-related (I didn't know there is a way to send to Teams using email), I'd search _internal for anything related to sendemail.py and the destination address. I don't remember, though, whether this data is available on Cloud. If it is not, it's probably something you'd need to go over with support.
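Something along these lines, assuming you have the usual access to _internal (the address is obviously a placeholder):

index=_internal sendemail "teams-channel-address@example.com"
| sort - _time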
There is no single best practice for writing SPL. Every use case is different, and in order to write effective SPL you must:
1) Know what you want to achieve
2) Know what you have
3) Know how to "splunkify" your problem
And in order to write good and effective SPL it's good to know what each command does, how and where it works (especially considering the type of the command), and what limitations it has. Writing effective SPL is a bit of an art, and it's not unusual to go back to your search after a year or two and discover that you simply didn't know a command or two back when you originally wrote it, and that it can be rewritten much more cleanly and effectively.
There are some general rules which are _usually_ true in typical cases, like "filter out as much as you can before doing anything serious with your data", "use as little data as you need", "don't overuse wildcards, especially at the beginning of a search term", and "use distributable streaming commands as early as you can", but as with everything there can be exceptions to those rules, simply because your particular use case might call for them. It's just good to know the pros and cons of breaking them.
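A tiny illustration of the wildcard rule (hypothetical index and terms): a leading wildcard prevents Splunk from using the index efficiently, while anchoring on full indexed terms lets the indexers discard most events up front.

Slower, leading wildcard:

index=app_logs *timeout*

Usually faster, anchored on indexed terms:

index=app_logs "connection timeout" sourcetype=app:server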
I'm thinking the Hipster Shop demo environment may be a lot of extra work to get running on macOS/arm64. In the following docs, they mention that the preferred option is to run it in GKE. If you run locally, they did it with Ubuntu Linux. https://github.com/GoogleCloudPlatform/microservices-demo/blob/main/docs/development-guide.md
Update: We did get this resolved earlier today. The cause was a port conflict, as 8125 was already in use. With statsd this can be tricky to catch because it's UDP, so normal testing methods for TCP ports don't work. We found that 8127 was available and used that to get it working. If anyone else encounters this, be sure to check logs (e.g., /var/log/messages or /var/log/syslog) for port conflict error messages.
I'm currently going over our alerts, cleaning them up and optimizing them. However, I recall there being a "best practice" when it comes to writing SPL. Obviously, there may be caveats to it, but what is the usual best practice when structuring your SPL commands? Is this correct or not?

search, index, source, sourcetype | where, filter, regex | rex, replace, eval | stats, chart, timechart | sort, sortby | table, fields, transpose | dedup, head | eventstats, streamstats | map, lookup
Hello, I have a .NET Transaction Rule named "/ws/rest/api". The matching rule is a regex: /ws/rest/api/V[0-9].[0-9]/pthru
A couple of examples of the URLs that would match this rule are:
/ws/rest/api/V3.0/pthru/workingorders
/ws/rest/api/V4.0/pthru/cart
/ws/rest/api/V4.0/pthru/cart/items
I am splitting the rule by URI segments 4, 5, 6, but the resulting name is: /ws/rest/api.V4.0pthruCart
Is there a way to add "/" between each segment, or is there a better way to do this that gives us a better-looking transaction name? Thanks for your help, Tom
Not sure if it helps as I have not tested it, but I found the error message under:
/opt/splunk/lib/python3.7/site-packages/splunk/appserver/mrsparkle/lib/error.py
There is a section for 404 in the script which contains:
kwargs['message']='Page not found!'
This is the typical "proving the negative" case. https://www.duanewaddle.com/proving-a-negative/ Append (from a lookup or a here-document), sum and you're set.
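A minimal sketch of that pattern, assuming a lookup expected_users.csv that lists every user you expect to see (the lookup, index, and field names are hypothetical):

index=my_index sourcetype=my_auth action=failure
| stats count by user
| append [| inputlookup expected_users.csv | fields user | eval count=0]
| stats sum(count) as count by user

The appended rows contribute a 0 for every expected user, so users with no matching events still show up with count=0 after the final sum.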