All Posts

I got around this by installing the Slack Add-on for Splunk.
I believe using parameters with ds.savedSearch is not supported. You can use parameters with a regular search via the savedsearch command. Hope this helps.
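As a minimal sketch (the saved search name and the argument name are assumptions): if the saved search's SPL contains a token such as $env$, you can pass a value for it when calling it with the savedsearch command:

| savedsearch my_saved_search env="prod"

The command substitutes the supplied value for $env$ before the saved search runs.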
I was working with data models and I came across something strange about them when they are accelerated vs. when they are not.

I created 2 data models, TestAccelerated and TestNotAccelerated. They are copies of each other with a few differences: the name/id, and one is accelerated while the other is not.

When I run a query to get the count of "MyValue" inside of field "MyID", I get different results. The accelerated data model returns fewer records, with different grouping of _time, than the non-accelerated data model.

I'm curious if anyone knows what the search difference really is between accelerated and non-accelerated data models. The count ends up being the same, so there is no issue finding the count of "MyValue". I do see an issue if we are piping the output into a different command that uses the rows for information rather than the count in each row, such as `| geostats`.

Query to a non-accelerated data model:
Query to an accelerated data model:
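The queries referenced at the end of the post are not included above. Purely as an illustration of the two query shapes (Root_Object stands in for the root dataset name, and the exact field prefixes depend on the dataset hierarchy), they would look roughly like this:

| datamodel TestNotAccelerated Root_Object search
| search Root_Object.MyID="MyValue"
| stats count by _time

| tstats summariesonly=true count from datamodel=TestAccelerated where Root_Object.MyID="MyValue" by _time span=1h

Note that the tstats form buckets _time by the given span, which is one way the row grouping can differ from the raw-event form even when the totals agree.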
Try enclosing your search term in quotes: "\"TOPIC_COMPLETION\""
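Applied to the string from the original question, that suggestion would look something like this (a sketch; the index name is an assumption):

index=my_index "\"progress\":\"COMPLETED\",\"subtopics\":\"COMPLETED\""

The outer quotes make the whole thing a single search phrase and the backslashes keep the embedded double quotes literal.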
Hi @PickleRick

First of all, thanks for the reply. Let me try to give you a more concrete example:
1. One search example that returns a single result (this works as expected)
2. Adding the TOPIC_COMPLETION string to the search (this works as expected)
3. Adding the "TOPIC_COMPLETION" string to the search (this doesn't return any results; I was expecting the same results as in 1 and 2)

Version 9.2.2406.107
At the moment there is apparently no such input type. You can always check if someone already had that idea on https://ideas.splunk.com and back it up. If there isn't one, create a new one.
As already stated, splitting inputs into separate apps and associating them with different serverclasses is the way to go. An input is a relatively "simple" concept. It may have features that let you filter _what_ you're ingesting (such as particular files or Windows event IDs), but not _where_ it runs.
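As an illustration, a minimal serverclass.conf sketch on the deployment server (the class names, app names and whitelist patterns are all hypothetical):

# serverclass.conf on the deployment server - names are examples only
[serverClass:web_servers]
whitelist.0 = web-*

[serverClass:web_servers:app:TA_inputs_web]
restartSplunkd = true

[serverClass:db_servers]
whitelist.0 = db-*

[serverClass:db_servers:app:TA_inputs_db]
restartSplunkd = true

Each app then carries only the inputs.conf stanzas intended for that group of forwarders.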
Answering myself: version 9.3.1 had python.version=force.python39. Changing it to python.version=python3.9 resolved the issue.
Firstly, some general remarks about your search. You don't have to use multisearch.

1. You can specify two time ranges within a single search:

NOT status IN (200, 203, 204, 302, 201, 202, 206, 301, 304, 404, 500, 400, 401, 403, 502, 504) (earliest=-4h@m latest=@m) OR (earliest=-1w-4h@m latest=-1w@m)

and classify by time difference:

| eval date=if(now()-_time<20000,"today","last week")

2. Inclusion is better than exclusion. I understand that sometimes exclusion might be the only way to specify the search conditions, but if you can just use (or at least add) something like

status IN (412, 403, 40* ...)

do so. It will be quicker than having to parse every single event. Even if you end up with

status IN (10*, 21*, 4*) NOT status IN (200, 203, 204, ...)

it should be faster than pure exclusion.

3. If you have your data classified (each event has that additional field saying "today" or "last week"), you can do

| stats count by status date

which will give you almost what you want. The data is already there; now you need to display it properly:

| xyseries status date count

or

| xyseries date status count

depending on which way you want it presented.

You can also do it a completely different way. Do your initial search:

status IN (something) NOT status IN (something) (earliest=long_ago latest=long_ago) OR (earliest=lately latest=lately)

Do your timechart:

| timechart span=1d count by status

But now you have many unneeded days in the middle (actually your multisearch will _not_ yield that result - it will be limited to just two days, so you could do the multisearch and skip the following steps). Now we need to filter out the days for which we have no results at all:

| eval filter="y"
| foreach * [ eval filter=if('<<FIELD>>'!=0,"n",filter) ]
| where filter="n"
| fields - filter

OK. So now you have the data (as you had at the very beginning of this thread), but it's transposed. So what can you do? Surprise, surprise...

| transpose 0 header_field=_time

Now your data is the other way around. But you'll notice that two fields showed up as "values" - _span and _spandays - these are hidden internal Splunk fields which should be filtered out before the transpose command with

| fields - _span _spandays

And of course the timestamp was an epoch-based Unix timestamp, so you got some integer after transposing. You need to use strftime or some other method to format _time into a human-readable format before transposing.
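Putting those pieces together, a minimal end-to-end sketch of the single-search approach (the index name is assumed; the status list and the 20000-second cut-off come from your search and the classification above):

index=web_logs NOT status IN (200, 203, 204, 302, 201, 202, 206, 301, 304, 404, 500, 400, 401, 403, 502, 504) ((earliest=-4h@m latest=@m) OR (earliest=-1w-4h@m latest=-1w@m))
| eval date=if(now()-_time<20000,"Today","LastWeek")
| stats count by status date
| xyseries status date count
| fillnull value=0

The fillnull at the end turns the missing status/date combinations into the zeroes shown in the desired output.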
Is there a way we can also update the time range on saved searches or reports?

"ds_saved_search_from_sr": {
  "type": "ds.savedSearch",
  "options": {
    "ref": "<your data source name>"
  },
  "name": "Saved Search Data Source From S&R"
}

How can I combine the time range input below with the data source above?

{
  "type": "input.timerange",
  "options": {
    "token": "global",
    "defaultValue": "-15m,now"
  },
  "title": "Global Time Range"
}
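Building on the reply above about using the savedsearch command, one possible sketch is to define the data source as a ds.search that calls the saved search and binds the time range token through queryParameters (the saved search name is a placeholder; treat the whole snippet as an assumption about your setup):

"ds_saved_search_from_sr": {
  "type": "ds.search",
  "options": {
    "query": "| savedsearch \"<your saved search name>\"",
    "queryParameters": {
      "earliest": "$global.earliest$",
      "latest": "$global.latest$"
    }
  },
  "name": "Saved Search Data Source From S&R"
}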
This is not strictly a Splunk question. If your systems started producing more audit events, something must have changed. Probably either the audit rules defined on your systems changed or the systems' behaviour changed so that they report more events. It's something you need to resolve with your Linux admins. You could compare old data with new data to see what changed - whether there are more messages of some particular types or maybe new processes started getting "caught" by audit.
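As a rough starting point for that comparison (the index and sourcetype are assumptions, and the type field is the auditd record type if it is extracted in your data):

index=os sourcetype=linux_audit earliest=-30d@d latest=now
| timechart span=1d count by type

A sudden jump in one record type, or a type that only appears after the change, points at what to take to the Linux admins.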
Seems to work for me.   9.3.0
Go to one of the Linux servers that is reporting audit logs and run btool on the CLI:

splunk btool --debug inputs list | grep audit

The output will include the name of the inputs.conf file where the input is defined. Edit that file (or its peer in /local) to disable the input.
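For example, if the input turns out to be a file monitor on the audit log, the stanza could be disabled like this (the stanza path is hypothetical - use whatever btool actually reports):

# local/inputs.conf on the forwarder - stanza name is an assumption
[monitor:///var/log/audit/audit.log]
disabled = 1

Then restart (or reload) the forwarder so the change takes effect.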
Hi,

I'm having a hard time trying to narrow down my search results. I would like to return only the results whose message contains the following string:

"progress":"COMPLETED","subtopics":"COMPLETED"

The text must be all together, in the sequence above. I tried adding a string like the one below to my search, but it didn't work:

message="*\"progress\":\"COMPLETED\",\"subtopics\":\"COMPLETED\"*"

Does anyone have suggestions on how to do that? I appreciate any help you can provide.
This thread is more than a year old so you are more likely to get responses by submitting a new question.
My linux_audit logs increased after updating apps, causing the license manager to go over its limit. Does anyone know a fix for this? I have looked for the stanzas on the back end but have not been able to find out where these logs are coming from.
I ran into the same thing, but I noticed that when I run my queries against a data model that is not accelerated, I get the right results. Is there a reason why running against a data model that is accelerated and the same data model that is not accelerated yields different results?
I would like to compare specific response status stats vertically rather than horizontally, so that the values line up without relying on the appendcols command.

My search:

| multisearch
    [search NOT status IN (200, 203, 204, 302, 201, 202, 206, 301, 304, 404, 500, 400, 401, 403, 502, 504) earliest=-4h@m latest=@m | eval date="Today"]
    [search NOT status IN (200, 203, 204, 302, 201, 202, 206, 301, 304, 404, 500, 400, 401, 403, 502, 504) earliest=-4h@m-1w latest=@m-1w | eval date="LastWeek"]
| timechart span=1d count by status

Example display of current results:

Desired results:

Status  Today  LastWeek
412     1      0
413     1      0
415     0      1
418     0      2
422     6      7
The default value of the product selection should be 'latest'. The token for the default value is determined by a hidden search for the latest product. This is dependent on the selected device. If the device selection changes, the product selection should revert to the default value, which is the latest product ID for the newly selected device. Currently, setting the latest product ID upon device change is not functioning. How can I resolve this issue?

<search id="base_search">
  <query>| mpreview index="my_index" | search key IN $token_device$</query>
  <earliest>$token_time.earliest$</earliest>
  <latest>$token_time.latest$</latest>
  <refresh>300</refresh>
</search>

<input id="select_device" type="dropdown" token="token_device" searchWhenChanged="true">
  <label>Device</label>
  <selectFirstChoice>true</selectFirstChoice>
  <prefix>(</prefix>
  <suffix>)</suffix>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <search>
    <query>| mpreview index="my_index" | stats count by key | fields key | lookup device-mapping.csv ... | fields key full_name</query>
  </search>
  <fieldForLabel>full_name</fieldForLabel>
  <fieldForValue>key</fieldForValue>
  <delimiter>,</delimiter>
  <change>
    <unset token="token_product"></unset>
    <unset token="form.token_product"></unset>
  </change>
</input>

<search>
  <query>| mpreview index="my_index" | search key IN $token_device$ | stats latest(_time) as latest_time by product_id | sort -latest_time | head 1 | fields product_id</query>
  <earliest>-24h@h</earliest>
  <latest>now</latest>
  <done>
    <condition match="$job.resultCount$ != 0">
      <set token="latest_product_id">$result.product_id$</set>
    </condition>
    <condition match="$job.resultCount$ == 0">
      <set token="latest_product_id">*</set>
    </condition>
  </done>
</search>

<input id="select_product" type="multiselect" token="token_product" searchWhenChanged="true">
  <label>Product</label>
  <default>$latest_product_id$</default>
  <prefix>(</prefix>
  <suffix>)</suffix>
  <valuePrefix>"</valuePrefix>
  <valueSuffix>"</valueSuffix>
  <choice value="*">All</choice>
  <search base="base_search">
    <query>| stats latest(_time) as latest_time by product_id | eventstats max(latest_time) as max_time | eval label=if(latest_time == max_time, "latest", product_id) | sort - latest_time | fields label, product_id</query>
  </search>
  <fieldForLabel>label</fieldForLabel>
  <fieldForValue>product_id</fieldForValue>
  <delimiter>,</delimiter>
  <change>
    <condition label="All">
      <set token="token_product">("*") AND product_id != "LoremIpsum"</set>
    </condition>
  </change>
</input>
Thanks! Works like a charm!