
All Posts

Hi @sairajkiran  Try checking the values from the job inspector for your event/search. I'm not sure if it will fulfil your needs. The field you can use is search_id -- it is present in the _introspection and _audit indexes. For _internal, you'll need to extract this value from the job URI, which looks something like search/search/jobs/1710936732.74/control, so the search_id field value is 1710936732.74. If the reply helps, a Karma vote would be appreciated.
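For illustration, here is a minimal sketch of pulling search_id out of _internal by extracting it from the raw job URI; the sourcetypes and the regex are assumptions and may need adjusting for your environment.

index=_internal (sourcetype=splunkd_access OR sourcetype=splunkd_ui_access) "search/jobs/"
| rex "search/jobs/(?<search_id>[^/\s]+)"
| stats count by search_id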
Hello, Thank you so much for your response. The query that contains the search is actually in the statistics table, but the condition is based on the dropdown token. This is the main question: how do I dynamically search/where based on a variable, like below? | search day_no_each_timestamp = day_in_week OR | where day_no_each_timestamp = day_in_week
Hi, We are getting the below error on the machines running the Network Toolkit app. It's affecting the data forwarding to Splunk Cloud. Please help.

0000 ERROR ExecProcessor [5441 ExecProcessorSchedulerThread] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/network_tools/bin/ping.py"   self.logger.warn("Thread limit has been reached and thus this execution will be skipped for stanza=%s, thread_count=%i", stanza, len(self.threads))

Thanks!
Hi @LearningGuy  I'm not sure if I understand your requirement correctly, but below may be something you can use.

<form version="1.1">
  <label>Dropdown-token-condition</label>
  <fieldset submitButton="false" autoRun="true">
    <input type="dropdown" token="token_week_or_day" searchWhenChanged="true">
      <label>Week Or Day</label>
      <choice value="w">Week</choice>
      <choice value="d">Day</choice>
    </input>
    <input type="dropdown" token="token_day" searchWhenChanged="true">
      <label>Day Number</label>
      <choice value="0">Sunday</choice>
      <choice value="1">Monday</choice>
      <choice value="2">Tuesday</choice>
      <choice value="3">Wednesday</choice>
      <choice value="4">Thursday</choice>
      <choice value="5">Friday</choice>
      <choice value="6">Saturday</choice>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults
| eval selected_week_or_day_option="$token_week_or_day$"
| eval selected_day=$token_day$
| table _time selected_week_or_day_option selected_day date_day
| eval day_no_each_timestamp=strftime(_time,"%w")
| eval day_in_week = if(selected_week_or_day_option="w", $token_day$, "*")</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
  </row>
</form>

If the reply helps, a Karma vote would be appreciated.
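To tie this back to the original question, one possible way to make the filter itself conditional on the dropdown is to let the "Week" choice bypass the day filter entirely. This is only an illustrative sketch reusing the tokens above, and the exact logic (which option should skip the filter) may need to be flipped for your case:

| eval day_no_each_timestamp=strftime(_time,"%w")
| where "$token_week_or_day$"="w" OR day_no_each_timestamp="$token_day$"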
format can take up to 6 parameters - these default so that the values are put in quotes, there are ANDs between field/value pairs from the same row, rows are enclosed in brackets, there are ORs between rows, and the whole thing is enclosed in brackets. For example:

( ( a="11" AND b="21" AND c="31" ) OR ( a="12" AND b="22" AND c="32" ) OR ( a="13" AND b="23" AND c="33" ) )

This is how the parameter positions map to the formatted result:

1
  2 a="11" 3 b="21" 3 c="31" 4
  5
  2 a="12" 3 b="22" 3 c="32" 4
  5
  2 a="13" 3 b="23" 3 c="33" 4
6

You can test this with this run-anywhere example:

| makeresults count=3
| streamstats count as a
| eval a=a+10, b=a+10, c=a+20
| format "1" "2" "3" "4" "5" "6"
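And as a quick sketch of overriding the defaults (the replacement separators here are arbitrary strings chosen only to make the six positions visible), something like this should wrap the rows in square brackets and swap the AND/OR keywords:

| makeresults count=3
| streamstats count as a
| eval a=a+10, b=a+10, c=a+20
| format "[" "[" "&&" "]" "||" "]"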
Hi, I set up the HTTP Splunk integration with https://splunkbase.splunk.com/app/5904  I have a problem sending a PUT request to Splunk ES according to https://docs.splunk.com/Documentation/ES/latest/API/NotableEventAPIreference#.2Fservices.2Fnotable_update  I don't know what the body and headers should look like. The connection works for me, because it returns that I have access, but I get Status Code: 400 Data from server: "ValueError: One of comment, newOwner, status, urgency, disposition is required." Has anyone encountered this problem?
Then you should check this troubleshooting guide... https://docs.splunk.com/Documentation/Splunk/9.2.0/Admin/TroubleshootKVstore
Hi, We need to get a list of all the knowledge objects owned by a user in our SplunkCloud instance via APIs. I have access to SplunkCloud ACS and am able to get a list of all users, but it does not have details of the knowledge objects owned by the user. Is it possible to get this data via ACS? How can I get this data?

Details of our SplunkCloud instance
Version: 9.1.2308.203
Experience: Victoria
I'm trying to connect my SOAR cluster (primary & warm backup) to an external PostgreSQL database. The docs seem to only have information on backing up a SOAR node into another, or on backing up an existing SOAR cluster that already has an external PostgreSQL DB server. How can I connect my Splunk SOAR nodes to an external DB? To be specific: if I have already backed up the phantom DB and restored it onto the external server (using the Postgres docs), how can I "tell" Splunk SOAR to start using the external DB instead of the internal socket?
Just to add to this: I doubled the RAM for the VM in my dev environment to 24GB and still get the same error.
As my colleague used to say - "Try and see". Set up a HEC input and try to push a few requests using different compression methods. As far as I remember, there are no settings for selectively enabling/disabling compression (methods) at the HTTP level, so you'll either hit something that Splunk can process or you'll get an error.
As the guys already pointed out, there is more to bucket lifecycle than meets the eye. And that's why you might want to involve PS or your local friendly Splunk Partner for assistance.

The first and most obvious thing is that data is not rolled to the "next" lifecycle stage per event but as full buckets. This has its consequences. While the hot->warm and warm->cold roll is based on either _bucket_ size or number of buckets, rolling to frozen is based on the _latest_ event in the bucket (unless of course you hit the size limit). So the bucket will not be rolled until all events in the bucket are past the retention period set for the index. That's the first thing which makes managing retention unintuitive (especially if you have strict compliance requirements not only regarding how long you should retain your events but also when you should delete them).

Another more subtle thing is that Splunk creates so-called "quarantine buckets" into which it inserts events which are relatively "out of order" - way too old or coming supposedly from the future. The idea is that all those events are put into a separate bucket so they don't impact performance of "normal" searches. So it's not unusual, especially if you had some issues with data quality, to have buckets covering quite big time spans.

You can list buckets with their metadata using the dbinspect command. (Most of this should be covered by the presentation @isoutamo pointed you to). And unless you have very strict and unusual compliance requirements you should not fiddle with the index bucket settings.
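As an illustration of that (the index name is a placeholder), a quick dbinspect run can show each bucket's state and the time span it covers, which helps spot buckets that will not freeze when you expect them to:

| dbinspect index=your_index
| eval span_days=round((endEpoch - startEpoch) / 86400, 1)
| table bucketId state startEpoch endEpoch span_days
| sort - span_days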
@ITWhisperer  With 'format' at the end it worked - thank you very much. I just checked the documentation, which indicates [to me] that the returned string has the input search results separated by the 'OR' operator - do I understand correctly?

format - Splunk Documentation
This command is used implicitly by subsearches. This command takes the results of a subsearch, formats the results into a single result and places that result into a new field called search. The format command performs similar functions as the return command. . . .

mvsep
Syntax: mvsep="<string>"
Description: The separator to use for multivalue fields.
Default: OR
Thanks for the reply, but I would prefer checking through the CLI - maybe there is a command to achieve that?
As I understand it, you're searching for events which mean that you have an outage/failure. Your 100% uptime would mean that you have no events at all, right? Well, you can't find something that isn't there, so you have to "cheat" a little. See https://www.duanewaddle.com/proving-a-negative/
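As a rough sketch of that "cheat" (the index, sourcetype and span are placeholders for whatever matches your data), you can bucket time and treat the intervals with no failure events as up:

index=your_index sourcetype=your_outage_events
| timechart span=1h count AS failures
| eval state=if(failures > 0, "down", "up")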
Try with format (I thought this was no longer necessary but it looks like it is!) index=blah [search index=blah "BAD_REQUEST" | rex "(?i) requestId (?P<search>[^:]+)" | table search | dedup search | format]
So I spun up a new Splunk instance in Podman (completely clean) and ingested the same file, and the behaviour is the same with no line breaking! This is with UTF-8 encoding and CRLF or LF endings. So I went into the UI and created a sourcetype for it:

[netlogon]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
NO_BINARY_CHECK = true
SHOULD_LINEMERGE = false
category = Custom
pulldown_type = 1

Now working on 9.2.0.1 but not on 9.1.2
You haven't provided a description/sample of your data, so we don't know - for example - how many events you can have per each contextId, but I suppose you're simply looking for something like

index="log-3258-prod-c"
| stats values(user_name) as user_name values(Flow) as Flow [... more aggregations here ...] by contextId

If you want to list all fields, you can simply shorthand the stats to

| stats values(*) as * by contextId
Hello dear Splunk experts, I'd like to understand a "re-connection hiccup". One of my indexers from the indexer cluster needed a timeout, so I used

~/bin/splunk offline --decommission_node_force_timeout 3600

to take it (kind of gracefully) offline. The cluster master showed "Restarting" (~/bin/splunk show cluster-status --verbose), so I started working on it and later rebooted the machine. Splunk started on the indexer and then... nothing - the cluster master still showed "Restarting" for over 10 minutes. So I decided to log in to the web view of the indexer and navigated to the peer settings, and about 20 seconds later the cluster master cluster-status showed it as "status UP" without me changing anything.

Some days later I did the same with another indexer and it was the same story - I needed to log in to the web view in order to have the peer shown as UP in the cluster. Is there anything in the docs that I have missed? Is this normal? (How can I trust the self-healing capabilities of the cluster, if one needs to manually log in to the peer after a downtime?)

Thanks a lot + kind regards, Triv
Firstly, if your subsearch uses the same source index as the outer search, it's more often than not that the search can be written without using the subsearch.

Secondly, subsearches have their limitations (for execution time and number of returned results). Their most confusing and annoying "feature", however, is that if the subsearch hits such a limit, it gets silently finalized and you're only getting partial (possibly empty) results from the subsearch _with no warning about that whatsoever_. So if your subsearch run on its own produces proper results, and your "outer search" with the results from the subsearch manually copy-pasted produces proper results as well, it's highly probable that this is the issue you're hitting. Check your job log to see what your main search is rendered into in the end (after the subsearch is run).

(Of course @ITWhisperer 's point of field extraction is still valid).
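As an illustrative sketch of the first point (the index, keyword and regex are reused from the earlier post and may not match your data), the subsearch can often be replaced by tagging the interesting IDs in a single pass and then filtering on the tag:

index=blah
| rex "(?i) requestId (?P<requestId>[^:]+)"
| eventstats max(eval(if(searchmatch("BAD_REQUEST"), 1, 0))) AS has_bad_request BY requestId
| where has_bad_request=1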