All Posts

Hi, I'm not 100% sure, but my understanding is that alerts etc. are configured in the user's local time, not server time. So could you check who configured that alert and what TZ he/she has configured in the browser? r. Ismo
Hi, as this is a generating command https://docs.splunk.com/Documentation/SplunkCloud/latest/SearchReference/Commandsbytype#Generating_commands you must add "|" in front of it. It also must be the first command in your SPL (or inside a subsearch).
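For illustration, a minimal sketch assuming the tstats generating command (the index name here is just an example); note the leading pipe and that nothing comes before it:

| tstats count where index=_internal by sourcetype
| sort - count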
Please accept that solution as it works.
You should also look at the timewrap command, which can help you with this kind of comparison.
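A rough sketch of the idea, assuming a web-access index and sourcetype as placeholders; timewrap overlays the hourly counts day over day so the periods can be compared side by side:

index=web sourcetype=access_combined
| timechart span=1h count
| timewrap 1d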
Or someone has added more servers under the Linux audit log collection. Then the best option is to look at when the volume increased and whether the node count also increased on the Splunk side. If not, check whether the content on any individual node has increased or changed. Based on that, you have more to discuss with your Linux and/or Splunk DS admins.
Hi, one thing you should do is check how the events look in the raw data. Probably the easiest way is to check via "Event Actions -> Show Source". That way you will see how it really is. After that you know (especially with JSON) whether there are any spaces or other characters that you need to take care of in your strings. r. Ismo
Hi @raculim .. @PickleRick's suggestion works fine; tested on 9.3.0.
My apologies. I was switching between two different approaches and the filters got crossed. To use the subsearch method above, modify that line to | where isnotnull(OS), so the full search becomes:

index=A sourcetype="Any"
| stats values("IP address") as "IP address" by Hostname OS
| append [search index=B sourcetype="foo" | stats values(Reporting_Host) as Reporting_Host]
| eventstats values(eval(lower(Reporting_Host))) as Reporting_Host
| where isnotnull(OS)
| mvexpand "IP address"
| eval match = if(lower(Hostname) IN (Reporting_Host) OR 'IP address' IN (Reporting_Host), "ok", null())
| stats values("IP address") as "IP address" values(match) as match by Hostname OS
| fillnull match value="missing"

Depending on your deployment, combining the two index searches could improve performance, like this:

(index=A sourcetype="Any") OR (index=B sourcetype="foo")
| eventstats values(eval(lower(Reporting_Host))) as Reporting_Host
| where index != "B"
| mvexpand "IP address"
| eval match = if(lower(Hostname) IN (Reporting_Host) OR 'IP address' IN (Reporting_Host), "ok", null())
| stats values("IP address") as "IP address" values(match) as match by Hostname OS
| fillnull match value="missing"

But eventstats and mvexpand could be bigger performance hindrances. There could be ways to avoid mvexpand; there could be ways to improve eventstats. But unless you can isolate the main contributor to slowness, they are not worth exploring. Performance is a complex subject with any query language. You can start by doing some basic tests. For example, run those two subsearches separately and compare with the combined search. If the total time is comparable, the index search is the main hindrance, and that will be very difficult to improve. Another test could be to add dedup before stats. And so on.
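As a sketch of that dedup test (the field names follow the search above; whether it helps depends on how much duplication your data contains):

index=A sourcetype="Any"
| dedup Hostname "IP address"
| stats values("IP address") as "IP address" by Hostname OS
(rest of the search unchanged)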
The "Expires" setting doesn't stop your alert from running after 24 hours. Your alert will continue to run daily indefinitely; the expiration only prevents repeated triggering within 24 hours of the ... See more...
The "Expires" setting doesn't stop your alert from running after 24 hours. Your alert will continue to run daily indefinitely; the expiration only prevents repeated triggering within 24 hours of the last trigger, helping to avoid alert fatigue for ongoing issues.   Hope this helps. 
OK, it's the right time for me to work with these Slack add-ons/apps and Splunk Enterprise.
Hi @qs_chuy .. good catch, let me check this and revert back. My mind-voice to me... some more "detailed understanding" is needed of tstats, data models, accelerated vs. non-accelerated. Thx.
You make it sound so easy, but I should say that I'm a Splunk Observability newbie. If I add an APM Detector it doesn't give me many avenues to customise it, and if I create a Custom Detector I seem to be in the area where newbies shouldn't be. However, I tried adding "errors_sudden_static_v2" for the "A" signal, and beside it there is an Add Filter button. Is this where I need to "filter for the errors, extract the customerid and count by customerid"? My use case sounds like it should be a fairly common one, so is there an explanatory guide somewhere on doing things like this? I haven't found one yet. If I show the SignalFlow for my APM Detector, this is what it looks like:

from signalfx.detectors.apm.errors.static_v2 import static as errors_sudden_static_v2
errors_sudden_static_v2.detector(
    attempt_threshold=1,
    clear_rate_threshold=0.01,
    current_window='5m',
    filter_=(
        filter('sf_environment', 'prod') and (
            filter('sf_service', 'my-service-name') and
            filter('sf_operation', 'POST /api/{userId}/endpointPath')
        )
    ),
    fire_rate_threshold=0.02,
    resource_type='service_operation'
).publish('TeamPrefix my-service-name /endpointPath errors')

The {userId} in the sf_operation is what I want to group the results on, and only alert if a particular userId is generating a high number of errors compared to everybody else. Thank you.
I got around this by installing the Slack Add-on for Splunk.
I believe using parameters with ds.savedsearch is not supported. You can use parameters with a regular search using the savedsearch command. Hope this helps.
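A minimal sketch of what that can look like (the report name and argument names are placeholders); inside the saved search itself, the arguments would be referenced as $env$ and $host_filter$ tokens:

| savedsearch "My Parameterized Report" env="prod" host_filter="web*"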
I was working with data models and I came across something strange about them when they are accelerated vs. when they are not.

I created 2 data models, TestAccelerated and TestNotAccelerated. They are copies of each other with a few differences: the name/id, and one is accelerated while the other is not.

When I run a query to get the count of "MyValue" inside of field "MyID", I get different results. The accelerated data model returns fewer records, with different grouping of _time, than the non-accelerated data model.

I'm curious if anyone knows what the search difference really is between accelerated and non-accelerated data models. The count ends up being the same, so there is no issue finding the count of "MyValue". I do see an issue if we are piping the output into a different command that uses the rows for information rather than the count in each row, such as `| geostats`.

Query to a non-accelerated data model:
Query to an accelerated data model:
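The screenshots of the two queries are not included above; purely as an illustration of what such a comparison often looks like (the datamodel names below match the post, but the exact form of the original queries is an assumption), the two variants are frequently just a summariesonly flag apart:

| tstats summariesonly=false count from datamodel=TestNotAccelerated by _time span=1h

| tstats summariesonly=true count from datamodel=TestAccelerated by _time span=1h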
Try enclosing your search term with quotes. "\"TOPIC_COMPLETION\""
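For example (the index name is a placeholder):

index=your_index "\"TOPIC_COMPLETION\""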
Hi @PickleRick
First of all, thanks for the reply. Let me try to give you a more concrete example:
1. One search example that returns a single result (this works as expected)
2. Adding the TOPIC_COMPLETION string to the search (this works as expected)
3. Adding the "TOPIC_COMPLETION" string to the search (this doesn't return any results; I was expecting the same results as in 1 and 2)
Version 9.2.2406.107
At the moment there is apparently no such input type. You can always check if someone already had that idea on https://ideas.splunk.com and back it up. If there isn't one, create a new one.
As already stated, splitting inputs into separate apps and associating them with different serverclasses is the way to go. An input is a relatively "simple" idea. It might have features letting you filter _what_ you're ingesting (like particular files or Windows event IDs) but not _where_ it runs.
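A rough serverclass.conf sketch of that pattern on the deployment server (the class, host patterns, and app names are made up for illustration):

[serverClass:linux_audit_hosts]
whitelist.0 = linuxhost*.example.com

[serverClass:linux_audit_hosts:app:ta_inputs_linux_audit]
restartSplunkd = true

[serverClass:windows_hosts]
whitelist.0 = winhost*.example.com

[serverClass:windows_hosts:app:ta_inputs_windows]
restartSplunkd = true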
Answering myself: version 9.3.1 had python.version=force.python39. Changing it to python.version=python3.9 resolved the issue.
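For reference, a sketch of the corrected setting; the scripted-input stanza here is hypothetical, and the same key applies in the other .conf files that accept python.version:

# inputs.conf
[script://./bin/example_input.py]
interval = 300
python.version = python3.9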