Hi @qs_chuy .. good catch. Let me check this and get back to you. Note to self: some more "detailed understanding" is required of tstats, data models, and accelerated vs. non-accelerated behaviour. Thanks!
You make it sound so easy, but I should say that I'm a Splunk Observability newbie. If I add an APM Detector it doesn't give me many avenues to customise it, and if I create a Custom Detector I seem to be in an area where newbies shouldn't be. However, I tried adding "errors_sudden_static_v2" for the "A" signal, beside which there is an Add Filter button. Is this where I need to "filter for the errors, extract the customerid and count by customerid"? My use case sounds like it should be a fairly common one, so is there an explanatory guide somewhere on doing things like this? I haven't found one yet.

If I show the SignalFlow for my APM Detector, this is what it looks like:

from signalfx.detectors.apm.errors.static_v2 import static as errors_sudden_static_v2
errors_sudden_static_v2.detector(
    attempt_threshold=1,
    clear_rate_threshold=0.01,
    current_window='5m',
    filter_=(
        filter('sf_environment', 'prod') and (
            filter('sf_service', 'my-service-name') and
            filter('sf_operation', 'POST /api/{userId}/endpointPath')
        )
    ),
    fire_rate_threshold=0.02,
    resource_type='service_operation'
).publish('TeamPrefix my-service-name /endpointPath errors')

The {userId} in the sf_operation is what I want to group the results on, and I only want to alert if a particular userId is generating a high number of errors compared to everybody else. Thank you.
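For what it's worth, here is a very rough Custom Detector sketch of the "count errors per customer" idea in SignalFlow. It assumes the customer id is already indexed as a span tag / dimension called customerid on the APM error metric; the metric name, dimension name, and thresholds are illustrative assumptions, not a documented recipe:

# Hypothetical sketch: count error spans per customerid and fire when one
# customer's error count crosses a static threshold. All names and numbers
# below are assumptions.
errors_by_customer = data('spans.count',
    filter=filter('sf_environment', 'prod')
       and filter('sf_service', 'my-service-name')
       and filter('sf_error', 'true')
).sum(by=['customerid']).publish(label='A')

detect(when(errors_by_customer > 50, lasting='5m')).publish('High error count for a single customerid')

As far as I know, SignalFlow cannot pull the {userId} value out of the sf_operation dimension itself, so the id has to exist as its own dimension/span tag for a group-by like this to work.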
I got around this by installing the Slack Add-on for Splunk.
I believe using parameters with ds.savedSearch is not supported. You can use parameters with a regular search (ds.search) using the savedsearch command. Hope this helps.
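As a sketch of what that can look like in Dashboard Studio source (untested; my_report and region_token are hypothetical names), the data source wraps the report with the savedsearch command and passes a dashboard token as an argument:

"ds_wrapped_saved_search": {
    "type": "ds.search",
    "options": {
        "query": "| savedsearch my_report region=\"$region_token$\""
    },
    "name": "Saved Search With Parameters"
}

This only works if the saved search itself is written to substitute $region$ - the savedsearch command's key=value arguments fill in those placeholders.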
I was working with DataModels and I came across something strange about them when they are accelerated vs when they are not.

I created 2 DataModels, TestAccelerated and TestNotAccelerated. They are a copy of each other with a few differences: the name/id, and one is accelerated while the other is not.

When I run a query to get the count of "MyValue" inside of field "MyID", I get different results. The accelerated data model returns fewer records, with a different grouping of _time, than the non-accelerated data model.

I'm curious if anyone knows what the search difference really is between accelerated and non-accelerated data models. The count ends up being the same, so there is no issue finding out the count of "MyValue". I do see an issue if we are piping the output into a different command that uses the rows for information and not the count in each row, such as `| geostats`.

Query to a non-accelerated data model:
Query to an accelerated data model:
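To make the comparison concrete, here is a sketch of the kind of query pair involved, using the names from the post (TestAccelerated, TestNotAccelerated, MyID, MyValue) and assuming the root event dataset shares the model's name; adjust the prefixes to your actual dataset names.

Accelerated model, reading only the acceleration summaries:

| tstats summariesonly=true count FROM datamodel=TestAccelerated WHERE TestAccelerated.MyID="MyValue" BY _time span=1h TestAccelerated.MyID

Non-accelerated model (summariesonly=false is the default, shown here only for contrast):

| tstats summariesonly=false count FROM datamodel=TestNotAccelerated WHERE TestNotAccelerated.MyID="MyValue" BY _time span=1h TestNotAccelerated.MyID

One possible explanation for the different _time grouping: without an explicit span, tstats over accelerated summaries tends to bucket _time at the summary's granularity, while the non-accelerated search effectively runs over raw events, so pinning the span (as above) is often enough to make the row shapes match. The totals agreeing, as observed in the post, would be consistent with that.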
Try enclosing your search term in quotes: "\"TOPIC_COMPLETION\""
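In context that would look something like this (index and sourcetype are placeholders):

index=my_index sourcetype=my_sourcetype "\"TOPIC_COMPLETION\""

The outer quotes make it a single search term, and the escaped inner quotes make Splunk look for the literal quoted string.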
Hi @PickleRick

First of all, thanks for the reply. Let me try to give you a more concrete example:

1. One search example that returns a single result (this works as expected)
2. Adding the TOPIC_COMPLETION string to the search (this works as expected)
3. Adding the "TOPIC_COMPLETION" string to the search (this doesn't return any results; I was expecting the same results as in 1 and 2)

Version 9.2.2406.107
At the moment there is apparently no such input type. You can always check if someone already had that idea on https://ideas.splunk.com and back it up. If there isn't one, create a new one.
As already stated, splitting inputs into separate apps and associating them with different serverclasses is the way to go. An input is a relatively "simple" idea. It might have features letting you filter _what_ you're ingesting (like particular files or Windows event IDs) but not _where_ it runs.
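As a rough illustration of that layout in serverclass.conf on the deployment server (class names, app names, and whitelist patterns below are made up):

# serverclass.conf - one app per input set, mapped to different host groups
[serverClass:linux_audit_hosts]
whitelist.0 = linux-prod-*

[serverClass:linux_audit_hosts:app:TA-nix-audit-inputs]
restartSplunkd = true

[serverClass:web_hosts]
whitelist.0 = web-*

[serverClass:web_hosts:app:TA-web-access-inputs]
restartSplunkd = true

Each app carries only its own inputs.conf, so which inputs run where is controlled entirely by server class membership.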
Answering myself: version 9.3.1 had python.version=force.python39. Changing it to python.version=python3.9 resolved the issue.
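For reference, that setting normally lives under the [general] stanza in server.conf; a minimal sketch (check with btool where it is actually set in your environment):

# $SPLUNK_HOME/etc/system/local/server.conf
[general]
python.version = python3.9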
Firstly, some general remarks about your search. You don't have to use multisearch.

1. You can specify two time ranges within a single search:

NOT status IN (200, 203, 204, 302, 201, 202, 206, 301, 304, 404, 500, 400, 401, 403, 502, 504) (earliest=-4h@m latest=@m) OR (earliest=-1w-4h@m latest=-1w@m)

and classify by time difference:

| eval date=if(now()-_time<20000,"today","last week")

2. Inclusion is better than exclusion. I understand that sometimes exclusion might be the only way to specify the search conditions, but if you can use (or at least add) something like status IN (412, 403, 40* ...), do so. It will be quicker than having to parse every single event. Even

status IN (10*, 21*, 4*) NOT status IN (200, 203, 204 ...)

should be faster than pure exclusion.

3. If you have your data classified (each event has that additional field saying "today" or "last week"), you can do

| stats count by status date

which will give you almost what you want. The data is already there; now you need to display it properly:

| xyseries status date count

or

| xyseries date status count

depending on which way you want it presented.

You can also do it a completely different way. Do your initial search:

status IN (something) NOT status IN (something) (earliest=long_ago latest=long_ago) OR (earliest=lately latest=lately)

Do your timechart:

| timechart span=1d count by status

But now you have many unneeded days in the middle (actually your multisearch will _not_ yield that result - it will be limited to just two days, so you can do the multisearch and skip the following steps). Now we need to filter out the days for which we have no results at all:

| eval filter="y"
| foreach * [ eval filter=if('<<FIELD>>'!=0,"n",filter) ]
| where filter="n"
| fields - filter

OK. So now you have the data (as you had at the very beginning of this thread), but it's transposed. So what can you do? Surprise, surprise...

| transpose 0 header_field=_time

Now your data is the other way around. But you'll notice that two fields show up as "values" - _span and _spandays - they are hidden internal Splunk fields which should be filtered out before the transpose command with

| fields - _span _spandays

And of course the timestamp was an epoch-based Unix timestamp, so you got an integer after transposing. You need to use strftime or another method to format _time into a human-readable format before transposing.
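Putting those pieces together, a rough end-to-end sketch of the second approach (the index name is a placeholder and this is untested; it uses addtotals instead of the foreach trick to drop the all-zero days):

index=my_index NOT status IN (200, 203, 204, 302, 201, 202, 206, 301, 304, 404, 500, 400, 401, 403, 502, 504)
    ((earliest=-4h@m latest=@m) OR (earliest=-1w-4h@m latest=-1w@m))
| timechart span=1d count by status
| fields - _span _spandays
| addtotals fieldname=row_total
| where row_total > 0
| fields - row_total
| eval _time=strftime(_time, "%Y-%m-%d")
| transpose 0 header_field=_time
| rename column AS status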
Is there a way we can also update the timerange on the saved searches or reports?

"ds_saved_search_from_sr": {
    "type": "ds.savedSearch",
    "options": {
        "ref": "<your data source name>"
    },
    "name": "Saved Search Data Source From S&R"
}

How can I implement this with the one below?

{
    "type": "input.timerange",
    "options": {
        "token": "global",
        "defaultValue": "-15m,now"
    },
    "title": "Global Time Range"
}
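Not an authoritative answer, but in Dashboard Studio a data source can usually pick up a time range input via queryParameters referencing the input's token - something along these lines (sketch, untested):

"ds_saved_search_from_sr": {
    "type": "ds.savedSearch",
    "options": {
        "ref": "<your data source name>",
        "queryParameters": {
            "earliest": "$global.earliest$",
            "latest": "$global.latest$"
        }
    },
    "name": "Saved Search Data Source From S&R"
}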
This is not strictly a Splunk question. If your systems started producing more audit events, something must have changed. Probably either the audit rules defined on your systems changed or the systems' behaviour changed so they report more events. It's something you need to resolve with your Linux admins. You could compare old data with new data to see what changed - whether there are more messages of some particular types, or maybe new processes started getting "caught" by audit.
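A rough SPL sketch of that comparison (index, sourcetype, and the week-over-week windows are placeholders; type is the usual auditd record type field, assuming your add-on extracts it):

index=os sourcetype=linux_audit ((earliest=-14d@d latest=-7d@d) OR (earliest=-7d@d latest=@d))
| eval period=if(_time < relative_time(now(), "-7d@d"), "previous_week", "current_week")
| stats count by type period
| xyseries type period count
| sort - current_week

Record types that exploded in the current week are usually the quickest pointer to which rule or process changed.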
Seems to work for me.   9.3.0
Go to one of the Linux servers that is reporting audit logs and run btool on the CLI:

splunk btool --debug inputs list | grep audit

The output will include the name of the inputs.conf file where the input is defined. Edit that file (or its peer in /local) to disable the input.
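Once btool has told you which file defines it, disabling is just a flag on that stanza - for example (the stanza path below is only illustrative; use whatever btool actually reports):

# local/inputs.conf in the app that defines the input
[monitor:///var/log/audit/audit.log]
disabled = 1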
Hi, I'm having a hard time trying to narrow down my search results. I would like to return only the results that contain the following string in the message:

"progress":"COMPLETED","subtopics":"COMPLETED"

The text must be all together, in the sequence above. I tried to add a string like the one below in my search but it didn't work:

message="*\"progress\":\"COMPLETED\",\"subtopics\":\"COMPLETED\"*"

Does anyone have suggestions on how to do that? I appreciate any help you can provide.
This thread is more than a year old so you are more likely to get responses by submitting a new question.
My linux_audit logs increased after updating apps, causing the license manager to go over its limit. Does anyone know a fix for this? I have looked for the stanzas on the backend but have not been able to find where these logs are coming from.
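Before hunting for the stanza, it can help to see where the extra volume is actually coming from. A sketch against the license manager's internal logs (st=linux_audit is an assumption - use your actual sourcetype name):

index=_internal source=*license_usage.log* type="Usage" st=linux_audit
| stats sum(b) AS bytes BY h, s
| sort - bytes

Note that h and s can show up empty when there are many distinct hosts/sources (the license log squashes them); in that case a tstats count by host, source over the event index gives a similar picture.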
I ran into the same thing, but I noticed that when I run my queries against a data model that is not accelerated, I get the right results. Is there a reason why running against a data model that is accelerated and the same data model that is not accelerated yields different results?
I would like to compare specific response status stats vertically and not horizontally, so that the values line up and do not rely on the appendcols command.

My search:

| multisearch
    [search NOT status IN (200, 203, 204, 302, 201, 202, 206, 301, 304, 404, 500, 400, 401, 403, 502, 504) earliest=-4h@m latest=@m
    | eval date="Today"]
    [search NOT status IN (200, 203, 204, 302, 201, 202, 206, 301, 304, 404, 500, 400, 401, 403, 502, 504) earliest=-4h@m-1w latest=@m-1w
    | eval date="LastWeek"]
| timechart span=1d count by status

Example display of current results:

Desired results:

Status  Today  LastWeek
412     1      0
413     1      0
415     0      1
418     0      2
422     6      7