All Posts


Any progress here?
Hi! Thank you for your response. When I take out the table command, only the _time, host, Level, and RuleTitle fields show up. The fields I have included in <fields></fields> don't all show up.
You used httpout, which doesn't use this option at all, so I completely missed that.
Well, I was using this already, as mentioned in my original post.
For anyone who would like to see this work better, please consider voting for my idea to support long query URLs: https://ideas.splunk.com/ideas/EID-I-2569
To me, this is not uncommon at all; it's a daily problem that I have to work around. (I'm aware of the current solutions and already use them.)
Hi @gheller
The latest docs are at https://docs.tenable.com/integrations/Splunk/Content/Welcome.htm, which Tenable has recently updated; there is a helpful diagram showing where things should be installed.
In short, the Tenable Add-On for Splunk should be installed on your SH and HF (with inputs created on the HF, or pushed out via your deployment server to the HF if appropriate), and the Tenable App for Splunk should be installed on just the SH.
Did this answer help you? If so, please consider: adding karma to show it was useful, marking it as the solution if it resolved your issue, or commenting if you need any clarification. Your feedback encourages the volunteers in this community to continue contributing.
Good find, @PickleRick! The docs do imply one should set sendCookedData=false when sending to third-party systems. @vikas_gopal Please try that and report the results.
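For anyone following along, the setting being discussed lives in outputs.conf on the forwarder. A minimal sketch (the group name and destination host:port are placeholders, and whether the receiver can then parse the stream is a separate question):

```conf
# outputs.conf on the forwarder (group name and target are placeholders)
[tcpout:thirdparty]
server = receiver.example.com:5514
sendCookedData = false
```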
CLONE_SOURCETYPE works on all events on which it is fired, regardless of the REGEX value. In other words - you cannot limit its scope. If you assign a transform with CLONE_SOURCETYPE to a sourcetype, source, or host, it will clone your events without any filtering. And yes, the docs on CLONE_SOURCETYPE are a bit misleading and confusing.
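As a minimal illustration of that scope point (all names here are hypothetical), assigning a clone transform looks like this, and per the behavior described above the clone is produced for every event of the sourcetype, not only those matching the REGEX:

```conf
# transforms.conf (hypothetical names)
[clone_pktlog]
REGEX = FIREWALL-PKTLOG:
CLONE_SOURCETYPE = pktlog_clone

# props.conf
[my_sourcetype]
TRANSFORMS-clone = clone_pktlog
# Every event of my_sourcetype gets cloned to pktlog_clone,
# not just events containing FIREWALL-PKTLOG:.
```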
It depends on what you mean by "require HF". Modular inputs must run on a "full" Splunk Enterprise instance, so in that sense it requires an HF because it won't run on a UF. Technically you can run the modular input on an all-in-one instance without spinning up a separate HF. While you could also run it directly on an indexer or SH, that's not a recommended architecture - those roles are best left alone to do what they do.
It is based on a very simple search: index=<index_name> sourcetype=<blaahaa> field2. After this, a number of fields are extracted using rex. I would like to include in the search, as a new constraint, a very simple dedup clause: | dedup _raw. Is this advisable?
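Written out (keeping the index/sourcetype placeholders from the post, and with no trailing pipe), the dedup would sit immediately after the base search and before the rex extractions:

```
index=<index_name> sourcetype=<blaahaa> field2
| dedup _raw
```

One caveat: dedup on _raw can be memory-hungry on large result sets, since every distinct raw event must be tracked for comparison.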
1. The parameter is indeed serverPort, not Port (and I'm a bit surprised that you didn't get an "unknown option" error for that action).
2. useHttps="on" is indeed the way to go to enable TLS on this module.
3. With HTTPS you will either need to disable peer certificate verification (strongly discouraged) or provide a proper CA.
4. For HEC you will need to provide the token: httpheaderkey="Authorization" httpheadervalue="Splunk your-token-value"
5. For performance, you will most probably want to send events in batches. For example: batch="on" batch.format="newline" batch.maxsize="256"
6. For the /event endpoint you will need another template. Posting raw syslog to the /event endpoint will yield format errors; you need to render your event as JSON containing the raw message in the "event" field.
7. omhttp uses the restpath and checkpath options, not uri: restpath="services/collector/event" checkpath="services/collector/health"
8. Remember that if you're posting to the /event endpoint you're skipping timestamp recognition, so if you're not providing the timestamp explicitly in the "time" field of your posted data, the event will be assigned the time of ingestion.
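Pulling points 1-8 together, a minimal, untested sketch of the omhttp action (the host, token, and sourcetype values are placeholders, and the template is a bare-bones illustration of rendering the message as a HEC /event payload):

```conf
module(load="omhttp")

# Render each message as JSON for the HEC /event endpoint (points 6 and 8):
# "time" carries the original timestamp, "event" the JSON-escaped message.
template(name="hecEvent" type="list") {
  constant(value="{\"time\":")
  property(name="timereported" dateFormat="unixtimestamp")
  constant(value=",\"sourcetype\":\"syslog\",\"event\":\"")
  property(name="msg" format="json")
  constant(value="\"}")
}

# Destination host and token below are placeholders.
action(
  type="omhttp"
  server="splunk.example.com"
  serverPort="8088"
  useHttps="on"
  httpheaderkey="Authorization"
  httpheadervalue="Splunk your-token-value"
  restpath="services/collector/event"
  checkpath="services/collector/health"
  batch="on"
  batch.format="newline"
  batch.maxsize="256"
  template="hecEvent"
)
```

With useHttps="on" you will also need to point the action (or rsyslog's global TLS settings) at a CA bundle the Splunk certificate chains to, per point 3.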
Update to the included classic dashboard code taking care of new framework and handling "capability_group" extraction in lines 42 and 91 related to unescaped HTML tags. <form version="1.1" theme="light"> <label>Native Role Capabilities (not inherited)</label> <description>(select roles and capabilities to compare)</description> <fieldset submitButton="false"> <input type="checkbox" token="role" searchWhenChanged="true"> <label>Roles</label> <fieldForLabel>role</fieldForLabel> <fieldForValue>role</fieldForValue> <search> <query>| rest /services/authentication/users splunk_server=local | table roles | mvexpand roles | dedup roles | table roles | sort roles | rename roles as role</query> <earliest>-24h@h</earliest> <latest>now</latest> </search> <prefix>(</prefix> <suffix>)</suffix> <valuePrefix>role="</valuePrefix> <valueSuffix>"</valueSuffix> <delimiter> OR </delimiter> <choice value="*">All</choice> <default>admin,power,sc_admin,user</default> </input> <input type="dropdown" token="capability_group" searchWhenChanged="true"> <label>Capability Group</label> <choice value="*">All</choice> <default>*</default> <prefix>capability_group="</prefix> <suffix>"</suffix> <fieldForLabel>capability_group</fieldForLabel> <fieldForValue>capability_group</fieldForValue> <search> <query>| rest /services/authorization/roles splunk_server=local | table capabilities | mvexpand capabilities | dedup capabilities | sort capabilities | rex field=capabilities "^(?&lt;capability_group&gt;[^_]+)" | table capability_group | dedup capability_group</query> <earliest>-24h@h</earliest> <latest>now</latest> </search> </input> <input type="dropdown" token="capabilities" searchWhenChanged="true"> <label>Capabilities</label> <choice value="*">All</choice> <default>*</default> <prefix>capabilities="</prefix> <suffix>"</suffix> <fieldForLabel>capabilities</fieldForLabel> <fieldForValue>capabilities</fieldForValue> <search> <query>| rest /services/authorization/roles splunk_server=local | table 
capabilities | mvexpand capabilities | dedup capabilities | sort capabilities</query> <earliest>-24h@h</earliest> <latest>now</latest> </search> </input> </fieldset> <row> <panel> <title>Capabilities by Role</title> <table> <search> <query>| rest /services/authorization/roles splunk_server=local | table capabilities | dedup capabilities | sort capabilities | eval role="Capabilities List" | table capabilities | stats count by role capabilities | appendcols [| rest /services/authorization/roles | table title capabilities | dedup title | rename title as role | table role capabilities | stats count by role capabilities] | eval _time=now() | search $role$ | stats count(eval(capabilities="accelerate_datamodel")) as accelerate_datamodel count(eval(capabilities="accelerate_search")) as accelerate_search count(eval(capabilities="admin_all_objects")) as admin_all_objects count(eval(capabilities="change_authentication")) as change_authentication count(eval(capabilities="change_own_password")) as change_own_password count(eval(capabilities="delete_by_keyword")) as delete_by_keyword count(eval(capabilities="dispatch_rest_to_indexers")) as dispatch_rest_to_indexers count(eval(capabilities="dmc_deploy_apps")) as dmc_deploy_apps count(eval(capabilities="dmc_deploy_token_http")) as dmc_deploy_token_http count(eval(capabilities="edit_cmd")) as edit_cmd count(eval(capabilities="edit_deployment_client")) as edit_deployment_client count(eval(capabilities="edit_deployment_server")) as edit_deployment_server count(eval(capabilities="edit_dist_peer")) as edit_dist_peer count(eval(capabilities="edit_encryption_key_provider")) as edit_encryption_key_provider count(eval(capabilities="edit_forwarders")) as edit_forwarders count(eval(capabilities="edit_httpauths")) as edit_httpauths count(eval(capabilities="edit_indexer_cluster")) as edit_indexer_cluster count(eval(capabilities="edit_indexerdiscovery")) as edit_indexerdiscovery count(eval(capabilities="edit_input_defaults")) as 
edit_input_defaults count(eval(capabilities="edit_local_apps")) as edit_local_apps count(eval(capabilities="edit_monitor")) as edit_monitor count(eval(capabilities="edit_restmap")) as edit_restmap count(eval(capabilities="edit_roles")) as edit_roles count(eval(capabilities="edit_roles_grantable")) as edit_roles_grantable count(eval(capabilities="edit_scripted")) as edit_scripted count(eval(capabilities="edit_search_head_clustering")) as edit_search_head_clustering count(eval(capabilities="edit_search_schedule_priority")) as edit_search_schedule_priority count(eval(capabilities="edit_search_schedule_window")) as edit_search_schedule_window count(eval(capabilities="edit_search_scheduler")) as edit_search_scheduler count(eval(capabilities="edit_search_server")) as edit_search_server count(eval(capabilities="edit_server")) as edit_server count(eval(capabilities="edit_server_crl")) as edit_server_crl count(eval(capabilities="edit_sourcetypes")) as edit_sourcetypes count(eval(capabilities="edit_splunktcp")) as edit_splunktcp count(eval(capabilities="edit_splunktcp_ssl")) as edit_splunktcp_ssl count(eval(capabilities="edit_splunktcp_token")) as edit_splunktcp_token count(eval(capabilities="edit_statsd_transforms")) as edit_statsd_transforms count(eval(capabilities="edit_tcp")) as edit_tcp count(eval(capabilities="edit_tcp_stream")) as edit_tcp_stream count(eval(capabilities="edit_telemetry_settings")) as edit_telemetry_settings count(eval(capabilities="edit_token_http")) as edit_token_http count(eval(capabilities="edit_udp")) as edit_udp count(eval(capabilities="edit_upload_and_index")) as edit_upload_and_index count(eval(capabilities="edit_user")) as edit_user count(eval(capabilities="edit_view_html")) as edit_view_html count(eval(capabilities="edit_web_settings")) as edit_web_settings count(eval(capabilities="embed_report")) as embed_report count(eval(capabilities="export_results_is_visible")) as export_results_is_visible count(eval(capabilities="get_diag")) as get_diag 
count(eval(capabilities="get_metadata")) as get_metadata count(eval(capabilities="get_typeahead")) as get_typeahead count(eval(capabilities="indexes_edit")) as indexes_edit count(eval(capabilities="indexes_list_all")) as indexes_list_all count(eval(capabilities="input_file")) as input_file count(eval(capabilities="license_edit")) as license_edit count(eval(capabilities="license_tab")) as license_tab count(eval(capabilities="license_view_warnings")) as license_view_warnings count(eval(capabilities="list_deployment_client")) as list_deployment_client count(eval(capabilities="list_deployment_server")) as list_deployment_server count(eval(capabilities="list_forwarders")) as list_forwarders count(eval(capabilities="list_httpauths")) as list_httpauths count(eval(capabilities="list_indexer_cluster")) as list_indexer_cluster count(eval(capabilities="list_indexerdiscovery")) as list_indexerdiscovery count(eval(capabilities="list_inputs")) as list_inputs count(eval(capabilities="list_introspection")) as list_introspection count(eval(capabilities="list_metrics_catalog")) as list_metrics_catalog count(eval(capabilities="list_search_head_clustering")) as list_search_head_clustering count(eval(capabilities="list_search_scheduler")) as list_search_scheduler count(eval(capabilities="list_settings")) as list_settings count(eval(capabilities="list_storage_passwords")) as list_storage_passwords count(eval(capabilities="output_file")) as output_file count(eval(capabilities="pattern_detect")) as pattern_detect count(eval(capabilities="refresh_application_licenses")) as refresh_application_licenses count(eval(capabilities="request_remote_tok")) as request_remote_tok count(eval(capabilities="rest_apps_management")) as rest_apps_management count(eval(capabilities="rest_apps_view")) as rest_apps_view count(eval(capabilities="rest_properties_get")) as rest_properties_get count(eval(capabilities="rest_properties_set")) as rest_properties_set count(eval(capabilities="restart_reason")) as 
restart_reason count(eval(capabilities="restart_splunkd")) as restart_splunkd count(eval(capabilities="rtsearch")) as rtsearch count(eval(capabilities="run_debug_commands")) as run_debug_commands count(eval(capabilities="schedule_rtsearch")) as schedule_rtsearch count(eval(capabilities="schedule_search")) as schedule_search count(eval(capabilities="search")) as search count(eval(capabilities="search_process_config_refresh")) as search_process_config_refresh count(eval(capabilities="web_debug")) as web_debug by role | transpose 1000 column_name=capabilities header_field=role | rex field=capabilities "^(?&lt;capability_group&gt;[^_]+)" | search $capabilities$ $capability_group$</query> <earliest>-24h@h</earliest> <latest>now</latest> <sampleRatio>1</sampleRatio> </search> <option name="count">100</option> <option name="dataOverlayMode">none</option> <option name="drilldown">none</option> <option name="percentagesRow">false</option> <option name="refresh.display">progressbar</option> <option name="rowNumbers">true</option> <option name="totalsRow">false</option> <option name="wrap">false</option> <format type="color" field="admin"> <colorPalette type="map">{"0":#555555,"1":#A2CC3E}</colorPalette> </format> <format type="color" field="apps"> <colorPalette type="map">{"0":#555555,"1":#A2CC3E}</colorPalette> </format> <format type="color" field="capability_group"> <colorPalette type="sharedList"></colorPalette> <scale type="sharedCategory"></scale> </format> <format type="color" field="power"> <colorPalette type="map">{"0":#555555,"1":#A2CC3E}</colorPalette> </format> <format type="color" field="sc_admin"> <colorPalette type="map">{"0":#555555,"1":#A2CC3E}</colorPalette> </format> <format type="color" field="user"> <colorPalette type="map">{"0":#555555,"1":#A2CC3E}</colorPalette> </format> <format type="number" field="internal_automation_role"> <option name="precision">0</option> </format> <format type="color" field="internal_automation_role"> <colorPalette 
type="map">{"0":#555555,"1":#A2CC3E}</colorPalette> </format> </table> </panel> </row> </form>  
@richgalloway Shouldn't the UF send raw data when sendCookedData=false is set on tcpout? I've never tried it myself, but the docs say so.
@gheller Inputs must be configured to run from the Heavy Forwarder. The Search Head is used for dashboards and adaptive response actions, but it relies on data collected and forwarded by the Heavy Forwarder. It's important to enable the KV Store on the Heavy Forwarder to support the add-on's functionality; see the Tenable and Splunk Integration Guide.
The Tenable components have specific purposes for each Splunk role. Install the Tenable Add-on for Splunk (https://splunkbase.splunk.com/app/4060) on both the Heavy Forwarder and the Search Head, but create data inputs only on the Heavy Forwarder. Install the Tenable App for Splunk (https://splunkbase.splunk.com/app/4061) exclusively on the Search Head.
This seemed like the solution at first, but there's a little quirk.
foo | eval hasFoo = if(searchmatch("\"foo\"]"), "YES", "NO") | table _raw hasFoo
In the case where _raw is like ... ["foo", "bar"] ... hasFoo evaluates to "YES".
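If the intent is a literal substring test rather than searchmatch's token-based matching, one alternative sketch (verify against your own data) is a regex via match(), escaping the quotes and the closing bracket:

```
foo
| eval hasFoo = if(match(_raw, "\"foo\"\]"), "YES", "NO")
| table _raw hasFoo
```

Here ["foo", "bar"] gives NO, because the literal character sequence "foo"] does not occur in it, while an event containing ..., "foo"] gives YES.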
I am trying to set up the Tenable App for Splunk, and the documentation is a bit vague about whether it requires a Heavy Forwarder to operate. I found an old post from 2017 that mentioned it did, but it was referencing older versions of Nessus than what is used in my environment. Does anyone know if a Heavy Forwarder is still required for the Tenable App for Splunk?
Whether via HTTP or TCP, the UF only sends data using the Splunk-to-Splunk protocol, so it cannot send successfully to Logstash. I suggest using a Logstash agent instead.
The sending of UF internal logs is a setting in an inputs.conf file. Turning that off will not solve the above problem, however.
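For reference, the inputs.conf setting mentioned above is the monitor stanza that ships in the UF's default inputs.conf. A sketch of overriding it locally (verify the stanza path against your own defaults, and note again that this does not make the UF speak anything other than S2S):

```conf
# $SPLUNK_HOME/etc/system/local/inputs.conf on the UF
# Stops the UF from forwarding its own splunkd logs; it does NOT
# change the wire protocol the UF uses.
[monitor://$SPLUNK_HOME/var/log/splunk]
disabled = true
```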
@NoSpaces Yes, the behavior you're observing with CLONE_SOURCETYPE is expected. When you use CLONE_SOURCETYPE in Splunk, it creates a duplicate of every event that matches the props.conf stanza, regardless of the REGEX specified in the corresponding transforms.conf stanza. The REGEX is applied to the cloned event, not used to determine whether an event should be cloned in the first place. This means that all events are cloned, and the REGEX is then used to modify or route the cloned events as specified.
https://community.splunk.com/t5/Getting-Data-In/Priority-precedence-in-props-conf/m-p/669047
https://community.splunk.com/t5/Getting-Data-In/Can-I-use-CLONE-SOURCETYPE-to-send-events-to-multiple-indexes/td-p/300277
To clone only the events matching the REGEX to the new sourcetype and redirect them to the general index, while keeping all other events in the original index under the original sourcetype, you need to filter events around the cloning step. Unfortunately, Splunk's CLONE_SOURCETYPE doesn't natively support filtering during cloning. You can use two transforms on the original sourcetype: one to clone the events, and another to send the matching originals to nullQueue. The result: events matching FIREWALL-PKTLOG: are cloned and routed to the general index, and the same matching events are dropped from the original index using nullQueue.
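One possible arrangement of that approach (an untested sketch; sourcetype, transform, and index names are placeholders, and the regexes should be adjusted to your data):

```conf
# props.conf -- original data (names are placeholders)
[original_sourcetype]
TRANSFORMS-pktlog = clone_pktlog, drop_pktlog_from_original

# transforms.conf
# Per the behavior described above, the clone fires for every event
# of original_sourcetype, regardless of REGEX.
[clone_pktlog]
REGEX = .
CLONE_SOURCETYPE = pktlog_clone

# Drop the matching events from the original sourcetype/index.
[drop_pktlog_from_original]
REGEX = FIREWALL-PKTLOG:
DEST_KEY = queue
FORMAT = nullQueue

# props.conf -- cloned events get their own round of transforms
[pktlog_clone]
TRANSFORMS-route = drop_nonmatching_clones, route_pktlog_to_general

# transforms.conf
# Discard clones that do not contain the marker (negative lookahead).
[drop_nonmatching_clones]
REGEX = ^(?!.*FIREWALL-PKTLOG:)
DEST_KEY = queue
FORMAT = nullQueue

# Send the surviving clones to the general index.
[route_pktlog_to_general]
REGEX = FIREWALL-PKTLOG:
DEST_KEY = _MetaData:Index
FORMAT = general
```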
Hi all, I've recently encountered several challenges since migrating to Splunk Mission Control (MC) and would appreciate any guidance or insights.
Summary of Issues:
We had a dashboard set up to pull all the data needed for our monthly report. Since switching to MC, all those dashboards are broken with errors like "Could not find object id="*". I recreated the dashboard with new searches, which initially worked fine and allowed report creation. However, when revisiting the new dashboard, most searches now fail or return no results within the expected time frame, despite previously working and being used in the latest report. Several items, such as the "top hosts (consolidated)" and "top hosts" charts that were available under Security Domain > Network > Exec View, are now missing post-migration.
Search Aborts and Resource Issues:
One major problem is searches being aborted with SVC errors. After contacting the customer, workload restrictions on my account were lifted, but searches still fail due to resource usage. Even limiting searches to a single day results in failures, and this has become quite frustrating.
Example Problem with Macros and Searches:
The macro sim_licensing_summary_base appears to be missing since moving to MC, and even the customer cannot locate it. The following search, intended to replicate the macro's function, returns incomplete results after 2025-04-10 without any errors in the job manager:
(host=*.*splunk*.* NOT host=sh*.*splunk*.* index=_telemetry source=*license_usage_summary.log* type="RolloverSummary") | bin _time span=1d | stats latest(b) AS b by slave, pool, _time | timechart span=1d sum(b) AS "volume" fixedrange=true | eval GB=round((volume / 1073741824),3) | fields - b, volume | stats avg/max(GB)
Additional Notes:
We've also noticed missing dashboards and objects that were previously part of Enterprise Security views. Searches aborting due to resource limits remain an issue despite workload adjustments.
Has anyone else experienced similar problems after switching to Mission Control? Any advice on troubleshooting these dashboard errors, missing macros, or search aborts would be greatly appreciated.