All Posts

You can read the comments on a container by using the REST API in a code or custom function block.

comment_url = phantom.build_phantom_rest_url('container', container_id, 'comments')
comment_resp_json = phantom.requests.get(comment_url, verify=False).json()
if comment_resp_json.get('count', 0) > 0:
    phantom.debug(comment_resp_json)

You can then parse the comments to your heart's content.
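As a follow-on, here is a minimal sketch of pulling the comment text out of that response. It assumes the matching records come back in a 'data' list whose entries carry a 'comment' field; verify the actual response shape on your instance.

# Hypothetical continuation: collect each comment's text from the REST response.
# Assumes entries in 'data' have a 'comment' key; check your Phantom version's output.
comments = [entry.get('comment', '') for entry in comment_resp_json.get('data', [])]
for idx, text in enumerate(comments):
    phantom.debug('comment {}: {}'.format(idx, text))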
/opt/splunkforwarder/etc/system/default/props.conf
  [_json]
  INDEXED_EXTRACTIONS = json
  description = JavaScript Object Notation format. For more information, visit http://json.org/

  [json_no_timestamp]
  INDEXED_EXTRACTIONS = json

  (several more built-in stanzas in this file also set INDEXED_EXTRACTIONS = json)

/opt/splunkforwarder/etc/apps/armor/local/props.conf
  [armor_json_02]
  INDEXED_EXTRACTIONS = json
  description = JavaScript Object Notation format. For more information, visit http://json.org/

/opt/splunkforwarder/etc/apps/splunk_internal_metrics/default/props.conf
  [log2metrics_json]
  INDEXED_EXTRACTIONS = json
  METRIC-SCHEMA-TRANSFORMS = metric-schema:log2metrics_default_json
  description = JSON-formatted data. Log-to-metrics processing converts the numeric values in json keys into metric data points.

That is the JSON-only extraction; the full all-props.txt is far too long to post.
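For what it's worth, Splunk's btool can produce this kind of listing directly, since --debug prefixes every effective setting with the file it comes from (the grep pattern is just illustrative):

$SPLUNK_HOME/bin/splunk btool props list --debug | grep INDEXED_EXTRACTIONS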
Does it leverage an API call directly to the data sources, or does it use data already indexed in Splunk?
Seeing the same after an upgrade from v8 to v9.0.6. I suspect something went wrong during the upgrade but don't have any solid evidence yet. Did anyone manage to get to the bottom of this?
Hi There!

I have a case where, if the present day is Monday and the user selects the "Exclude weekends" option, the time range picker should look for Friday's data; if the user selects "Include weekends", the time range picker should cover yesterday.

<input type="radio" token="weekends" searchWhenChanged="true">
  <label>Weekends</label>
  <choice value="exclude">Exclude Weekends</choice>
  <choice value="include">Include Weekends</choice>
  <default>exclude</default>
  <initialValue>exclude</initialValue>
</input>

Thanks!
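One possible approach (a sketch, not tested): compute the range inside the input's <change> handler. The $range_earliest$/$range_latest$ token names are invented for illustration, and this assumes token <eval> supports now() and strftime() on your version; the panel search would then use earliest=$range_earliest$ latest=$range_latest$.

<change>
  <eval token="range_earliest">if(value=="exclude" AND strftime(now(),"%w")=="1", "-3d@d", "-1d@d")</eval>
  <eval token="range_latest">if(value=="exclude" AND strftime(now(),"%w")=="1", "-2d@d", "@d")</eval>
</change>

On a Monday with weekends excluded, -3d@d/-2d@d spans Friday; in every other case -1d@d/@d spans yesterday.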
The fill_summary_index.py script referenced in the above link merely runs your saved searches that populate a summary index.  You can use the same script to run other saved searches that populate/update a KVStore.
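For instance, each run could drive a saved search of roughly this shape, where the index, fields, and KV Store-backed lookup name are hypothetical; outputlookup append=true does the incremental update.

index=my_index earliest=-10d@d latest=@d
| stats latest(status) AS status BY host
| outputlookup append=true my_kvstore_lookup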
Hi @ITWhisperer

I tried a few ways, but didn't get it to work.

<condition match="isnull($office_filter$) == &quot;Front_Office*&quot;">
  <set token="office_filter_drilldown">form.office_filter=Front%20Office</set>
</condition>
<condition match="isnull($office_filter$) == &quot;Back_Office*&quot;">
  <eval token="office_filter_drilldown">form.office_filter=Back%20Office</eval>
</condition>
<condition match="isnull($office_filter$) == &quot;Front_Office*&quot; AND == &quot;Back_Office*&quot;">
  <eval token="office_filter_drilldown">form.office_filter=Front%20Office&amp;form.office_filter=Back%20Office</eval>
</condition>

Can you please share that as well? Thanks in advance!
It's purposely never set. The JavaScript is responsible for displaying the panel instead. However, I believe the placement in my example was incorrect and the depends clause should be bound to the panel rather than the row.
I don't see how to edit my post, so I'll make a correction here. The "$HIDEME$" token appears to need to be at the panel level, not the row:

<row>
  <panel id="help" depends="$HIDEME$">

There is no issue with using that token otherwise, and I have dashboards where this will work until it just stops working. My feeling is that it functions properly early in the dashboard loading process, but then stops once it's complete. My troubleshooting leads me to believe that using <done> conditions to set tokens on search completion might be the culprit, e.g.

...
<query> ... </query>
<earliest>0</earliest>
<latest></latest>
<done>
  <condition match="$job.resultCount$ &gt; 0">
    <set token="has_notables">true</set>
  </condition>
  <condition>
    <unset token="has_notables"></unset>
  </condition>
</done>
</search>
I ended up using the chart command instead of stats and got it to come out correctly.  Thanks again!!
Hi @_JP

There are two automatic lookups (for the two CSVs) under the Splunk Add-on for Sysmon. Both are enabled. The one I am interested in looks like this:
Hi @richgalloway,

Thank you for your response. Similar to the summary index, I have KV Stores where I am pushing data in the same manner, in 10-day batches, appending to the KV Store. Can you please suggest a workaround for KV Stores as well, for pushing 2 years of data in batches without manual intervention?
Try something like this

index=*
| fields - _time _raw
| foreach * [| eval <<FIELD>>=if("<<FIELD>>"=="index",index,if("<<FIELD>>"=="source",source,sourcetype))]
| table *
| fillnull value="N/A"
| foreach * [eval sourcetype=if("<<FIELD>>"!="sourcetype" AND "<<FIELD>>"!="source" AND "<<FIELD>>"!="index",if('<<FIELD>>'!="N/A",mvappend(sourcetype,"<<FIELD>>"),sourcetype),sourcetype)]
| dedup sourcetype
| table index source sourcetype
Thank you very much! After 8 years this script is still relevant and working correctly! Karma is given!
Try something like this

| where strftime(_time, "%H") != "22"
That's my filter now and it seems to be working:

index=nessus Risk=Critical
| transaction CVE, extracted_Host
| table CVE, extracted_Host
Thanks for the reply @ITWhisperer. Would it be possible to help me create something similar that includes both source and sourcetype, please? Thank you!
Hello,

Upon attempting to execute the command $SPLUNK_HOME/bin/splunk reload deploy-server following the update of app inputs, a warning message is generated, which states:

"Could not look up HOME variable. Auth tokens cannot be cached.
WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Reloading serverclass(es)."

Could you please suggest a solution to address this problem, as the changes do not appear to be taking effect?

Thanks
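Not a confirmed fix, but the HOME warning usually means the CLI is running under an account with no HOME environment variable set, so it cannot cache auth tokens. A sketch of a workaround, with the path and credentials as placeholders:

export HOME=/opt/splunk    # the splunk user's actual home directory
$SPLUNK_HOME/bin/splunk reload deploy-server -auth admin:changeme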
I tried to build the below SPL so far:

| inputlookup table1.csv
| table index, sourcetype
| eval key="index=custom_index orig_index=".index." orig_sourcetype=".sourcetype." | timechart span=1d avg(event_count) AS avg_event_count | predict avg_event_count future_timespan=1 | tail 1 | fields prediction(avg_event_count)"

With the above SPL, I get three columns: index, sourcetype, and key. The key column holds the corresponding SPL for each row. I now need help executing those searches and generating results for each row.

Thank you
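A sketch of one way to run a search per lookup row, using the map command instead of building the SPL as a string (untested here; map substitutes $index$ and $sourcetype$ from each input row, and maxsearches must be at least the number of rows):

| inputlookup table1.csv
| map maxsearches=100 search="search index=custom_index orig_index=$index$ orig_sourcetype=$sourcetype$ | timechart span=1d avg(event_count) AS avg_event_count | predict avg_event_count future_timespan=1 | tail 1 | eval orig_index=\"$index$\", orig_sourcetype=\"$sourcetype$\""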
Hello All,

I have a lookup file, table1.csv, with two columns: index and sourcetype. I have a custom index which has the fields orig_index and orig_sourcetype. I need to build and execute an SPL search for each row of the lookup file, and would appreciate your input on how to do so.

Thank you
Taruchit