All Topics


Hi, we have successfully configured a dashboard for UPS monitoring, and it was working fine with no issues. Since 01/08/2024, however, no data has been showing up in the tiles. Checked the UF and its services: all working with no issues. Restarted the service, but the issue was not resolved. Checked the Splunk index and found that the latest event is 17 days old. Not sure what the problem is. Could you please advise?
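A quick way to narrow down where ingestion stopped is to compare the latest event per host and sourcetype; a minimal sketch, with ups standing in for the actual index name:

| tstats latest(_time) as last_event where index=ups by host, sourcetype
| eval lag_hours = round((now() - last_event) / 3600, 1)
| convert ctime(last_event)

If last_event stops advancing for a single host, look at that forwarder; if it stops for the whole index, look at the indexing pipeline or any intermediate forwarders.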
Hi all, index=sky sourcetype=sky_trade_wss_timestamp | rex field=_raw "trade_id=\"(?<trade_id>\X+)\", event_id" | rex field=_raw "mx_status=\"(?<status>\X+)\", operation" | rex field=_raw "operation=\"(?<operation>\X+)\", action" | rex field=_raw " action=\"(?<action>\X+)\", tradebooking_sgp" | rex field=_raw " eventtime_sgp=\"(?<distributor_timestamp>\X+)\", sky_to_mq" | rex field=_raw "distributor_latency=\"(?<distributor_latency>[^\"]+)\".*\bportfolio_name=\"(?<portfolio_name>[^\"]+)\".*\bportfolio_entity=\"(?<portfolio_entity>[^\"]+)\".*\btrade_type=\"(?<trade_type>[^\"]+)" | join event_id [ search index=sky sourcetype=Sky_WSS_EP_Logs "Successfully processed event" | rex field=_raw "INFO: (?<ep_timestamp>\d{8} \d{2}:\d{2}:\d{2}.\d{3})" | rex field=_raw "Successfully processed event: (?<event_id>\X+), action" | eval ep_timestamp = strftime(strptime(ep_timestamp."+0800", "%Y%d%m %H:%M:%S.%N%z"), "%Y-%m-%d %H:%M:%S.%3N")] | join type=left sky_id [ search index=sky "New trades in amendment" "*pnl*" | rex "Trade Events (?<trades>.*)" | rex max_match=0 field=trades "(?<both_id>\d+:\d+)" | mvexpand both_id | rex field=both_id ":(?<sky_id>\d+)" | rex max_match=1 field=_raw "(?<booking_pnl_timestamp>\d{4}+-\d{2}+-\d{2} \d{2}:\d{2}:\d{2},\d{3})"] | rex field=tradebooking_sgp "(?<booking_timestamp>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2})" | rex field=distributor_timestamp "(?<distributor_timestamp>\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}\.\d{3})" | eval booking_pnl_timestamp = booking_pnl_timestamp."+0800" | eval ep_latency = strptime(ep_timestamp, "%Y-%m-%d %H:%M:%S.%N%z") - strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S.%N%z") | eval pnl_latency = strptime(booking_pnl_timestamp, "%Y-%m-%d %H:%M:%S,%N%z") - strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S.%N%z") | search trade_id = "*" | search sky_id = "*" | search event_id = "*" | search action = "*" | search mx_status = "live" | search operation = "*" | table trade_id, sky_id, event_id, booking_timestamp, distributor_timestamp, ep_timestamp, mx_status, operation, action, distributor_latency, ep_latency, portfolio_name, portfolio_entity | sort ep_timestamp | join type=left sky_id [ search index=sky sourcetype=sky_cashfx_catchup_logs "[WSS] - Done incremental update" | rex field=_raw "Max Skylib TradeID: (?<sky_id>\d+)" | rex field=_raw "^(?<catchup_updated_time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3})" | eval catchup_updated_time = strftime(strptime(catchup_updated_time, "%Y-%m-%d %H:%M:%S,%3N"), "%Y/%m/%d %H:%M:%S.%3N") | table sky_id, catchup_updated_time, _raw, ] | eval booking_timestamp_unix = strptime(booking_timestamp, "%Y/%m/%d %H:%M:%S") | eval catchup_unix_time = strptime(catchup_updated_time, "%Y/%m/%d %H:%M:%S.%3N") | eval wss_to_sky_latency = catchup_unix_time - booking_timestamp_unix | eval distributor_timestamp_unix = strptime(distributor_timestamp, "%Y/%m/%d %H:%M:%S.%3N") | eval distributor_latency = distributor_timestamp_unix - booking_timestamp_unix | eval ep_timestamp_unix = strptime(ep_timestamp, "%Y/%m/%d %H:%M:%S.%3N") | eval ep_latency = ep_timestamp_unix - distributor_timestamp_unix | eval mq_to_sky_update_latency = catchup_unix_time - distributor_timestamp_unix | eval ep_timestamp = strftime(strptime(ep_timestamp, "%Y-%m-%d %H:%M:%S.%3N"), "%Y/%m/%d %H:%M:%S.%3N") | eval distributor_timestamp = strftime(strptime(distributor_timestamp, "%Y/%m/%d %H:%M:%S.%4N"), "%Y/%m/%d %H:%M:%S.%4N") | eval ep_timestamp_unix = strptime(ep_timestamp, "%Y/%m/%d %H:%M:%S.%3N") | eval distributor_timestamp_unix = strptime(distributor_timestamp, 
"%Y/%m/%d %H:%M:%S.%4N") | eval ep_latency = ep_timestamp_unix - distributor_timestamp_unix | table trade_id, portfolio_name, portfolio_entity, sky_id, event_id, booking_timestamp, booking_timestamp_unix, distributor_timestamp, distributor_timestamp_unix, ep_timestamp, distributor_latency, ep_latency, catchup_updated_time, wss_to_sky_latency, mq_to_sky_update_latency, mx_status, operation, action, catchup_unix_time | dedup sky_id | sort booking_timestamp | rex field=trade_id "^\w+ (?<dealnumber>\d+)$" | join type=left dealnumber [ search index=wss "Sending message" source="/proj/flowfx/wss/FFXWS01P/log/MQ1.log" ```Exclude Far Legs of Swap Trades for first Iteration of Dash``` NOT "<swap_leg>2</swap_leg>" ```Exclude Cancels, Amends, Auxiliaries, Allocations, Blocks - allocated ``` NOT "<status>" ```Exclude MM Deals ``` NOT "<WSSMMTRADE>" | rex "\<transaction\>(?P<tid>.*?)\<\/transaction\>" | rex "\<deal_number\>(?P<dealnumber>.*?)\<\/deal_number\>" | rex "\<external_deal\>(?P<sourcesystemid>.*?)\<\/external_deal\>" | rex "\<cust_type\>(?P<custType>.*?)\<\/cust_type\>" | eval region=case(host == "pffxa01z", "Global", host == "pffxa02z", "China") | eval wssSendingTime=strftime(_time,"%Y-%m-%d %H:%M:%S.%Q") | table tid,dealnumber,region,custType,sourcesystemid,wssSendingTime, booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix, catchup_unix_time | join type=left tid [ search index=wss source="/proj/flowfx/ffx/log/flowfx-processor.log" "INFO SLA FFX-Processor received" | rex "transactionId\=(?P<tid>.*?)\," | eval flowfxincomingtime=strftime(_time,"%Y-%m-%d %H:%M:%S.%Q") | table flowfxincomingtime,tid, booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix, catchup_unix_time ] | table tid,dealnumber,region,custType,sourcesystemid,wssSendingTime,flowfxincomingtime | eval wssSendingTimeUnix=strptime(wssSendingTime,"%Y-%m-%d %H:%M:%S.%Q") | eval flowfxincomingtimeUnix=strptime(flowfxincomingtime,"%Y-%m-%d %H:%M:%S.%Q") | eval timebetweenWssFlowfx = flowfxincomingtimeUnix - wssSendingTimeUnix | table tid,dealnumber,region,custType,sourcesystemid,wssSendingTime,flowfxincomingtime,timebetweenWssFlowfx,wssSendingTimeUnix,flowfxincomingtimeUnix, booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix, catchup_unix_time | join type=left tid [ search index=wss source="/proj/flowfx/ffx/log/flowfx-trade-sender-cim.log" "INFO SLA FFX-Trade-Sender sent" | rex "nearTransactionId\=(?P<tid>.*?)\," | eval CIMsendingTime=strftime(_time,"%Y/%m/%d %H:%M:%S.%Q") | eval MQ_available_time=strftime(_time - 7200, "%Y-%m-%d %H:%M:%S.%Q") | table CIMsendingTime,tid,MQ_available_time,booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix ] | table tid,dealnumber,region,custType,sourcesystemid,wssSendingTime,flowfxincomingtime,timebetweenWssFlowfx,wssSendingTimeUnix,flowfxincomingtimeUnix,CIMsendingTime, MQ_available_time, booking_timestamp, booking_timestamp_unix, distributor_timestamp_unix, catchup_unix_time ] | eval CIMsendingTime_unix = strptime(CIMsendingTime, "%Y/%m/%d %H:%M:%S.%3Q") | eval cim_latency = CIMsendingTime_unix - booking_timestamp_unix | eval distributor_latency = distributor_timestamp_unix - CIMsendingTime_unix | eval distributor_to_sky_latency = catchup_unix_time - CIMsendingTime_unix | where len(CIMsendingTime) > 0 | eval cim_latency = round(cim_latency * 1000,0) | eval distributor_latency = round(distributor_latency * 1000,0) | eval ep_latency = round(ep_latency * 1000,0) | eval wss_to_sky_latency = round(wss_to_sky_latency * 1000,0) | eval 
mq_to_sky_update_latency = round(mq_to_sky_update_latency * 1000,0) | eval distributor_to_sky_latency = round(distributor_to_sky_latency * 1000,0) | table trade_id, portfolio_name, sky_id, booking_timestamp,CIMsendingTime, distributor_timestamp, ep_timestamp, catchup_updated_time, wss_to_sky_latency, cim_latency, distributor_latency, ep_latency, mq_to_sky_update_latency, distributor_to_sky_latency, mx_status, operation, action The above is my current search query, but over the last 24 hours I get more events yet fewer statistics rows than over the last 4 hours.
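A likely cause of that behaviour: each join runs its inner search as a subsearch, and subsearches are silently truncated once they hit the row or runtime limits, so widening the time range can return fewer joined rows even though there are more events. A hedged sketch of the usual workaround, collapsing one of the joins into a single stats pass over both sourcetypes (extractions abbreviated):

index=sky (sourcetype=sky_trade_wss_timestamp OR (sourcetype=Sky_WSS_EP_Logs "Successfully processed event"))
```rex extractions for event_id and the per-sourcetype fields go here, as in the original query```
| stats values(trade_id) as trade_id values(mx_status) as mx_status values(ep_timestamp) as ep_timestamp by event_id

Because stats runs over the full event set rather than a capped subsearch, it does not drop rows as the time range grows.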
Hi, let's say I have the sample data below, all being ingested into index="characters". How do I create two separate sub-indexes, "superheroes" and "villains", so that the "superheroes" index contains only the events with archetype="superhero" (id=superman, batman) and the "villains" index contains only the event with archetype="villain" (id=joker)? The reasoning is that I want to set permissions on the sub-indexes so only specific users can see each index (e.g. only people with the role "good guys" can see superhero data). I have tried summary indexing with the following query, scheduled the search, and enabled summary indexing, but it doesn't capture the original fields in the data.

index=characters | fields id, strengths, archetype | where archetype="superhero" | eventstats count as total_superheroes | table id, strengths, archetype

Sample JSON data: [ { "id": "superman", "strengths": "super strength, flight, and heat vision", "archetype": "superhero" }, { "id": "batman", "strengths": "exceptional martial arts skills, detective abilities, and psychic abilities", "archetype": "superhero" }, { "id": "joker", "strengths": "cunning and unpredictable personality", "archetype": "villain" } ]
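If the goal is per-index permissions rather than summaries, index-time routing sidesteps the summary-indexing field loss entirely. A minimal sketch, assuming a sourcetype name of characters_json (all names here are placeholders to adjust to your setup):

# props.conf
[characters_json]
TRANSFORMS-route_archetype = route_superheroes, route_villains

# transforms.conf
[route_superheroes]
REGEX = "archetype"\s*:\s*"superhero"
DEST_KEY = _MetaData:Index
FORMAT = superheroes

[route_villains]
REGEX = "archetype"\s*:\s*"villain"
DEST_KEY = _MetaData:Index
FORMAT = villains

Routed events keep their original _raw, and therefore all search-time fields, and you can then restrict each destination index by role.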
Hi all: I'm a rookie user asking for help. I want to extract all the values from one _raw event (a CLI command log, as in the photo below). I want all the values for these fields: Location, Card, Type, Mnemonic, Part Number, Serial Number, CLEI, Pmax(W), Imax(A). Can someone help me please? Thank you very much.
Does anyone know how the Cluster Manager populates the dmc_forwarder_assets input lookup CSV table? I have an issue where my UF forwarder reports show hosts whose os field contains repeated entries of Windows, hundreds and even thousands of times. I'd like to check how this data table is being populated by the CM.
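To gauge how inflated the os field actually is before digging into the populating search, one small sketch, run on the Monitoring Console instance where the lookup lives (the threshold here is arbitrary):

| inputlookup dmc_forwarder_assets
| eval os_count = mvcount(os)
| where os_count > 10
| table hostname, os_count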
I can see the statuses below for scheduled savedsearches: status="deferred" and status="continued". What is the difference between the two, and which one ends up skipped later (status="skipped")? Is there a "failed" status as well?
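For reference, the set of statuses your deployment actually emits can be enumerated straight from the scheduler logs; a small sketch (time range left to the picker):

index=_internal sourcetype=scheduler
| stats count by status

This lists every status value that occurs (e.g. success, skipped, deferred, continued) along with how often, which also shows whether anything failure-like appears in your environment.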
Hi, are there plans to upgrade the HTML dashboards to be compatible with Splunk 9.1? https://lantern.splunk.com/Splunk_Platform/Product_Tips/Searching_and_Reporting/Updating_deprecated_HTML_dashboards
I have a search query; if the Status field stays Down for more than 5 minutes, I need to trigger an alert, no matter the event count result. If it recovers within that timeframe, it should not fire. Maybe even have it search every 1 minute.

For example, this should not fire an alert, because it recovered within the 5 minutes:
1:00 Status = Down (event result count X5)
1:03 Status = up
1:07 Status = Down (event count X3)
1:10 Status = up
1:13 Status = up
1:16 Status = up

For example, this should fire an alert:
1:00 Status = Down (event result count X1)
1:03 Status = Down (event result count X1)
1:07 Status = Down (event result count X1)
1:10 Status = up
1:13 Status = up
1:16 Status = up
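One way to express "down for the whole threshold" is to schedule the search every minute with a lookback longer than the outage threshold and measure how long the current Down streak has lasted; a hedged sketch, assuming the index name is a placeholder and the Status values are literally "Down" and "up":

index=your_index earliest=-15m@m latest=@m
| eventstats max(eval(if(Status=="up", _time, null()))) as last_up
| where Status=="Down" AND (isnull(last_up) OR _time > last_up)
| stats min(_time) as down_since
| where now() - down_since >= 300

Set the alert to trigger when the number of results is greater than 0. Any up event clears the streak, so the first example never fires, while the second fires once Down has persisted for the full 5 minutes.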
I am not seeing counts for each of the fields in the two different searches below. The first one shows the (let's say 3) storefront names with no counts. If I just run | stats count by Storefront on its own, it returns the correct counts. The fields appear in the statistics tab with no counts and no names for the NetScalers, site, or user. The second search does not return any statistical results at all. I'm hoping to see the count of connections to each Storefront and its correlating NetScaler in a Sankey diagram.

| stats count by Storefront | rename Storefront as source | appendpipe [ stats count by Netscaler | rename Netscaler as source, count as count_Netscaler ] | appendpipe [ stats count by site | rename site as source, count as count_site ] | appendpipe [ stats count by UserName | rename UserName as source, count as count_UserName ] | fields source, count_Netscaler, count_site, count_UserName | search source=*

| stats count by Storefront | rename Storefront as source | appendpipe [ stats count by Netscaler | rename Netscaler as source, Storefront as target ] | appendpipe [ stats count by site | rename site as source, Netscaler as target ] | appendpipe [ stats count by UserName | rename UserName as source, site as target ] | search source=* AND target=* | stats sum(count) as count by source, target | fields source, target, count
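A likely reason both fail: appendpipe runs its subpipeline over the current result set, and after | stats count by Storefront the results no longer contain Netscaler, site, or UserName, so the inner stats commands have nothing to count. For a Sankey, each hop can instead be built from the raw events and appended; a hedged sketch (the base search is a placeholder):

index=your_index sourcetype=your_sourcetype
| stats count by Netscaler, Storefront
| rename Netscaler as source, Storefront as target
| append
    [ search index=your_index sourcetype=your_sourcetype
    | stats count by site, Netscaler
    | rename site as source, Netscaler as target ]
| append
    [ search index=your_index sourcetype=your_sourcetype
    | stats count by UserName, site
    | rename UserName as source, site as target ]
| table source, target, count

Each append re-searches the events, so every hop still has its original fields available when it is counted.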
Hi there, I have a small lab at home on which I am running Splunk Enterprise 9.0.0 (build 6818ac46f2ec) with a developer license. The Licensing » Installed licenses page shows 3 valid licenses with the following information:

Splunk Enterprise Term Non-Production License
creation_time: 2024-08-11 07:00:00+00:00
expiration_time: 2025-02-11 07:59:59+00:00
features: Acceleration, AdvancedSearchCommands, AdvancedXML, Alerting, ArchiveToHdfs, Auth, ConditionalLicensingEnforcement, CustomRoles, DeployClient, DeployServer, FwdData, GuestPass, KVStore, LocalSearch, MultifactorAuth, NontableLookups, RcvData, RollingWindowAlerts, SAMLAuth, ScheduledAlerts, ScheduledReports, ScheduledSearch, ScriptedAuth, SigningProcessor, SplunkWeb, SubgroupId, SyslogOutputProcessor
is_unlimited: False
label: Splunk Enterprise Term Non-Production License
max_violations: 5
notes: None
payload: None
quota_bytes: 53687091200.0
sourcetypes:
stack_name: enterprise
status: VALID
type: enterprise
window_period: 30

Splunk Forwarder
creation_time: 2010-06-20 07:00:00+00:00
expiration_time: 2038-01-19 03:14:07+00:00
features: Auth, DeployClient, FwdData, RcvData, SigningProcessor, SplunkWeb, SyslogOutputProcessor
hash: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFD
is_unlimited: False
label: Splunk Forwarder
max_violations: 5
notes: None
payload: None
quota_bytes: 1048576.0
sourcetypes:
stack_name: forwarder
status: VALID
type: forwarder
window_period: 30

Splunk Free
creation_time: 2010-06-20 07:00:00+00:00
expiration_time: 2038-01-19 03:14:07+00:00
features: FwdData, KVStore, LocalSearch, RcvData, ScheduledSearch, SigningProcessor, SplunkWeb
hash: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
is_unlimited: False
label: Splunk Free
max_violations: 3
notes: None
payload: None
quota_bytes: 524288000.0
sourcetypes:
stack_name: free
status: VALID
type: free
window_period: 30

I would like to experiment with Splunk Stream for capturing DNS records before implementing it in our production environment. I have installed Splunk Stream 8.1.3 and most of the menus within the app work; however, when I go to Configuration > Distributed Forwarder Management it just displays a blank page. When I look at splunk_app_stream.log I can see the following error:

2024-08-15 14:51:58,543 ERROR rest_indexers:62 - failed to get indexers peer
Traceback (most recent call last):
File "/opt/splunk/etc/apps/splunk_app_stream/bin/rest_indexers.py", line 55, in handle_GET
timeout=splunk.rest.SPLUNKD_CONNECTION_TIMEOUT
File "/opt/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 612, in simpleRequest
raise splunk.LicenseRestriction
splunk.LicenseRestriction: [HTTP 402] Current license does not allow the requested action
2024-08-15 14:51:58,580 ERROR indexer:52 - failed to list indexers
Traceback (most recent call last):
File "/opt/splunk/etc/apps/splunk_app_stream/bin/splunk_app_stream/models/indexer.py", line 43, in get_indexers
timeout=splunk.rest.SPLUNKD_CONNECTION_TIMEOUT
File "/opt/splunk/lib/python3.7/site-packages/splunk/rest/__init__.py", line 669, in simpleRequest
raise splunk.InternalServerError(None, serverResponse.messages)
splunk.InternalServerError: [HTTP 500] Splunkd internal error; []

Does this mean that the Splunk dev license does not support the Splunk Stream app?
I have a dropdown where I select the event name, and that event name value is passed as a token to the variable search. That search backs a multiselect input. One issue I've noticed is that the multiselect values stay populated when a different event is selected, even though the search for the variable input does update the available choices. Is there a way to reset the selected variables when a different event is selected? I have seen the Simple XML versions of this but haven't seen any information on how to do this in Dashboard Studio. Any help is greatly appreciated. { "visualizations": { "viz_Visualization": { "type": "splunk.line", "dataSources": { "primary": "ds_mainSearch" }, "options": { "overlayFields": [], "y": "> primary | frameBySeriesNames($dd2|s$)", "y2": "> primary | frameBySeriesNames('')", "lineWidth": 3, "showLineSmoothing": true, "xAxisMaxLabelParts": 2, "showRoundedY2AxisLabels": false, "x": "> primary | seriesByName('_time')" }, "title": "Visualization", "containerOptions": { "visibility": {} }, "eventHandlers": [ { "type": "drilldown.linkToSearch", "options": { "type": "auto", "newTab": false } } ] } }, "dataSources": { "ds_dd1": { "type": "ds.search", "options": { "query": "index=index source=source sourcetype=sourcetype |dedup EventName \n| sort str(EventName)" }, "name": "dd1Search" }, "ds_mainSearch": { "type": "ds.search", "options": { "query": "index=index source=source sourcetype=sourcetype EventName IN (\"$dd1$\") VariableName IN ($dd2|s$) \n| timechart span=5m max(Value) by VariableName", "enableSmartSources": true }, "name": "mainSearch" }, "ds_dd2": { "type": "ds.search", "options": { "enableSmartSources": true, "query": "index=index source=source sourcetype=sourcetype EventName = \"$dd1$\" |dedup VariableName \n| sort str(VariableName)" }, "name": "dd2Search" } }, "defaults": { "dataSources": { "ds.search": { "options": { "queryParameters": { "latest": "$global_time.latest$", "earliest": "$global_time.earliest$" } } } } }, "inputs": { "input_global_trp": { "type": "input.timerange", "options": { "token": "global_time", "defaultValue": "-24h@h,now" }, "title": "Global Time Range" }, "input_dd1": { "options": { "items": ">frame(label, value) | prepend(formattedStatics) | objects()", "token": "dd1" }, "encoding": { "label": "primary[0]", "value": "primary[0]" }, "dataSources": { "primary": "ds_dd1" }, "title": "Event Name", "type": "input.dropdown", "context": { "formattedConfig": { "number": { "prefix": "" } }, "formattedStatics": ">statics | formatByType(formattedConfig)", "statics": [], "label": ">primary | seriesByName(\"EventName\") | renameSeries(\"label\") | formatByType(formattedConfig)", "value": ">primary | seriesByName(\"EventName\") | renameSeries(\"value\") | formatByType(formattedConfig)" } }, "input_dd2": { "options": { "items": ">frame(label, value) | prepend(formattedStatics) | objects()", "token": "dd2" }, "encoding": { "label": "primary[0]", "value": "primary[0]" }, "dataSources": { "primary": "ds_dd2" }, "title": "Variable(s)", "type": "input.multiselect", "context": { "formattedConfig": { "number": { "prefix": "" } }, "formattedStatics": ">statics | formatByType(formattedConfig)", "statics": [], "label": ">primary | seriesByName(\"VariableName\") | renameSeries(\"label\") | formatByType(formattedConfig)", "value": ">primary | seriesByName(\"VariableName\") | renameSeries(\"value\") | formatByType(formattedConfig)" } } }, "layout": { "type": "grid", "options": { "width": 1440, "height": 960 }, "structure": [ { "item": "viz_Visualization", "type": "block",
"position": { "x": 0, "y": 0, "w": 1440, "h": 653 } } ], "globalInputs": [ "input_global_trp", "input_dd1", "input_dd2" ] }, "description": "", "title": "Test" }  
Hello, I'm struggling to make a base search that uses a data model with the tstats command. My objective is to build an easily maintained dashboard with tstats data-model base searches and a chained (post-process) search for each panel. This is my sample:

| tstats summariesonly=true values(Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.dest) as dest values(Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.hostname) as hostname values(Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.os_type) as os_type values(Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.exploit_title) as exploit_title values(Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.malware_title) as malware_title from datamodel=Vulnerabilities_Custom.Vulnerabilities_Non_Remediation where nodename IN ("Vulnerabilities_Custom.Vulnerabilities_Non_Remediation", "Vulnerabilities_Custom.High_Or_Critical_Vulnerabilities_Non_Remediation", "Vulnerabilities_Custom.Medium_Vulnerabilities_Non_Remediation", "Vulnerabilities_Custom.Low_Or_Informational_Vulnerabilities_Non_Remediation") by Vulnerabilities_Custom.Vulnerabilities_Non_Remediation._time, Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.dest | table event_time dest hostname os_type exploit_title malware_title

Has anyone got any clues about this?
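For reference, chaining works the same with tstats as with any other generating search: declare the tstats query once with an id, then post-process it per panel. A minimal Simple XML sketch, with the base query abbreviated from the sample above:

<search id="vuln_base">
  <query>| tstats summariesonly=true count from datamodel=Vulnerabilities_Custom.Vulnerabilities_Non_Remediation by Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.dest
| rename Vulnerabilities_Custom.Vulnerabilities_Non_Remediation.dest as dest</query>
</search>

<table>
  <search base="vuln_base">
    <query>| sort - count | head 10</query>
  </search>
</table>

One caveat worth knowing: the post-process query starts from the base results, so it must begin with a pipe and can only reference fields the base search actually returns.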
How can I regularly hit an HTTP endpoint on a remote server to collect useful metrics, then import them into Splunk (hourly, for example) and use them for useful visualisations?
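One common pattern is a scripted input: a small script that polls the endpoint, with Splunk running it on a schedule and indexing whatever it prints to stdout. A minimal inputs.conf sketch (app, script name, index, and sourcetype are all placeholders):

[script://$SPLUNK_HOME/etc/apps/my_app/bin/poll_metrics.sh]
interval = 3600
index = remote_metrics
sourcetype = http_metrics
disabled = false

The interval is in seconds, so 3600 gives hourly collection. Alternatives include a cron job posting to an HTTP Event Collector token, or the REST-polling inputs that some add-ons provide.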
Is this on a per-profile basis? A per-cluster basis? And how does this restart back?
I've got 2 base searches: <search id="Night"> and <search id="Day"> And a dropdown input:

<input type="dropdown" token="shift_tok" searchWhenChanged="true">
<label>Shift:</label>
<choice value="Day">Day</choice>
<choice value="Night">Night</choice>
<default>Day</default>
<initialValue>Day</initialValue>
</input>

I need to find a way to reference the base searches depending on the input provided by the user. I was hoping to use a token to reference the base searches, but it doesn't seem to be working:

<row>
<panel>
<title>Timeline</title>
<table>
<title>$shift_tok$</title>
<search base="$Shift_tok$">
<query>| table Date Shift Timeline "Hourly details of shift"</query>
</search>
<option name="count">13</option>
<option name="drilldown">none</option>
</table>
</panel>
</row>
</form>
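Worth noting: the base attribute of <search> is resolved when the dashboard loads and does not accept token substitution, so base="$Shift_tok$" will not work regardless of the token's value. One workaround sketch, assuming the two base searches differ only by a shift filter, is a single base search that consumes the token directly:

<search id="Shift">
  <query>index=your_index Shift="$shift_tok$" | table Date Shift Timeline "Hourly details of shift"</query>
</search>

Each panel then uses <search base="Shift">, and changing the dropdown reruns the one base search.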
Hi, I'm unable to launch the Splunk Add-on for AWS page in the admin console; the page shows "Loading" but no output at all. No abnormalities are seen in splunkd.log, only some checksum mismatch errors. My Splunk was recently upgraded to 9.2.2; the last time I tried on the earlier version it was working. The Splunk Add-on for AWS version is 5.1.0. Can I check if anyone has come across the same issue and managed to resolve it?
This is my current search query:

index=abc sourcetype = example_sourcetype | transaction startswith="Saved messages to DB" endswith="Done bulk saving messages" keepevicted=t | eval no_msg_wait_time = mvcount(noMessageHandleCounter) * 1000 | fillnull no_msg_wait_time | rename duration as processing_time | eval _raw = mvindex(split(_raw, " "), -1) | rex "Done Bulk saving .+ used (?<db_bulk_write_time>\w+)" | eval processing_time = processing_time * 1000 | eval mq_read_time = processing_time - db_bulk_write_time - no_msg_wait_time | where db_bulk_write_time > 0 | rename processing_time as "processing_time(ms)", db_bulk_write_time as "db_bulk_write_time(ms)", no_msg_wait_time as "no_msg_wait_time(ms)", mq_read_time as "mq_read_time(ms)" | table _time, processing_time(ms), db_bulk_write_time(ms), no_msg_wait_time(ms), mq_read_time(ms), Count, _raw

Now, for the processing_time(ms) column, the calculation should instead run from the second-to-last occurrence of "All Read threads finished flush the messages" up to "Done bulk saving messages". So in the example below, the event at 2024-08-12 10:02:20,542 will have a processing_time from 10:02:19,417 to 10:02:20,542:

2024-08-12 10:02:19,417 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0
2024-08-12 10:02:20,526 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 1  Count=1
2024-08-12 10:02:20,542 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Done Bulk saving messages, Count=1, used 6 ms

How can I also create a time series graph on the same chart, where the x axis is time and the y axis shows a bar chart of the Count column plus a line chart of the new processing_time(ms)?

Raw log data looks something like: | makeresults | eval data = split("2024-08-07 21:13:07,710 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:07,710 [39] INFO DistributorCommon.WMQClient [(null)] - No message to handle (noMessageHandleCounter=4), retry in 1000 ms 2024-08-07 21:13:08,742 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:08,742 [39] INFO DistributorCommon.WMQClient [(null)] - No message to handle (noMessageHandleCounter=5), retry in 1000 ms 2024-08-07 21:13:09,757 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:09,757 [39] INFO DistributorCommon.WMQClient [(null)] - No message to handle (noMessageHandleCounter=6), retry in 1000 ms 2024-08-07 21:13:10,773 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:10,773 [39] INFO DistributorCommon.WMQClient [(null)] - No message to handle (noMessageHandleCounter=7), retry in 1000 ms 2024-08-07 21:13:11,007 [15] INFO DistributorCommon.WMQClient [(null)] - Message Read from Queue, Message Length:4504 2024-08-07 21:13:11,132 [15] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=1), retry in 10 ms. 2024-08-07 21:13:11,257 [15] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=2), retry in 10 ms. 2024-08-07 21:13:11,382 [15] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=3), retry in 10 ms. 2024-08-07 21:13:11,507 [15] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=4), retry in 10 ms. 
2024-08-07 21:13:11,632 [15] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=5), retry in 10 ms. 2024-08-07 21:13:11,757 [15] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=6), retry in 10 ms. 2024-08-07 21:13:11,882 [15] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=7), retry in 10 ms. 2024-08-07 21:13:11,882 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 1 2024-08-07 21:13:11,882 [39] INFO DistributorCommon.WMQClient [(null)] - Processing messages, Count=1 2024-08-07 21:13:11,882 [39] INFO DistributorCommon.WMQClient [(null)] - Done Processing messages, Count=1, IsBufferedEvent=True 2024-08-07 21:13:11,882 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Bulk saving messages, Count=1 2024-08-07 21:13:12,007 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Done Bulk saving messages, Count=1, used 113 ms 2024-08-07 21:13:12,007 [15] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=8), retry in 10 ms. 2024-08-07 21:13:12,054 [39] INFO DistributorCommon.WMQClient [(null)] - Saved messages to DB, Q Manager to Commit (Remove messages from Queue) 2024-08-07 21:13:12,132 [15] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=9), retry in 10 ms. 2024-08-07 21:13:12,179 [39] INFO DistributorCommon.WMQClient [(null)] - Clear Write Buffer 2024-08-07 21:13:12,257 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:12,398 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:12,528 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:12,778 [33] INFO DistributorCommon.WMQClient [(null)] - Message Read from Queue, Message Length:4668 2024-08-07 21:13:12,809 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 1 2024-08-07 21:13:12,809 [39] INFO DistributorCommon.WMQClient [(null)] - Processing messages, Count=1 2024-08-07 21:13:12,809 [39] INFO DistributorCommon.WMQClient [(null)] - Done Processing messages, Count=1, IsBufferedEvent=True 2024-08-07 21:13:12,809 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Bulk saving messages, Count=1 2024-08-07 21:13:12,825 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Done Bulk saving messages, Count=1, used 24 ms 2024-08-07 21:13:12,841 [39] INFO DistributorCommon.WMQClient [(null)] - Saved messages to DB, Q Manager to Commit (Remove messages from Queue) 2024-08-07 21:13:12,934 [33] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=1), retry in 10 ms. 2024-08-07 21:13:12,966 [39] INFO DistributorCommon.WMQClient [(null)] - Clear Write Buffer 2024-08-07 21:13:13,059 [33] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=2), retry in 10 ms. 2024-08-07 21:13:13,059 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:13,184 [33] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=3), retry in 10 ms. 
2024-08-07 21:13:13,200 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:13,325 [33] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=4), retry in 10 ms. 2024-08-07 21:13:13,341 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:13,466 [33] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=5), retry in 10 ms. 2024-08-07 21:13:13,466 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:13,466 [39] INFO DistributorCommon.WMQClient [(null)] - No message to handle (noMessageHandleCounter=4), retry in 1000 ms 2024-08-07 21:13:13,591 [33] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=6), retry in 10 ms. 2024-08-07 21:13:13,716 [33] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=7), retry in 10 ms. 2024-08-07 21:13:13,841 [33] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=8), retry in 10 ms. 2024-08-07 21:13:13,966 [33] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=9), retry in 10 ms. 2024-08-07 21:13:14,481 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:14,481 [39] INFO DistributorCommon.WMQClient [(null)] - No message to handle (noMessageHandleCounter=5), retry in 1000 ms 2024-08-07 21:13:15,497 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:15,497 [39] INFO DistributorCommon.WMQClient [(null)] - No message to handle (noMessageHandleCounter=6), retry in 1000 ms 2024-08-07 21:13:15,731 [20] INFO DistributorCommon.WMQClient [(null)] - Message Read from Queue, Message Length:7648 2024-08-07 21:13:15,856 [20] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=1), retry in 10 ms. 2024-08-07 21:13:15,981 [20] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=2), retry in 10 ms. 2024-08-07 21:13:16,106 [20] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=3), retry in 10 ms. 2024-08-07 21:13:16,231 [20] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=4), retry in 10 ms. 2024-08-07 21:13:16,356 [20] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=5), retry in 10 ms. 2024-08-07 21:13:16,481 [20] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=6), retry in 10 ms. 2024-08-07 21:13:16,606 [20] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=7), retry in 10 ms. 
2024-08-07 21:13:16,606 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 1 2024-08-07 21:13:16,606 [39] INFO DistributorCommon.WMQClient [(null)] - Processing messages, Count=1 2024-08-07 21:13:16,606 [39] INFO DistributorCommon.WMQClient [(null)] - Done Processing messages, Count=1, IsBufferedEvent=True 2024-08-07 21:13:16,606 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Bulk saving messages, Count=1 2024-08-07 21:13:16,622 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Done Bulk saving messages, Count=1, used 11 ms 2024-08-07 21:13:16,637 [39] INFO DistributorCommon.WMQClient [(null)] - Saved messages to DB, Q Manager to Commit (Remove messages from Queue) 2024-08-07 21:13:16,731 [20] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=8), retry in 10 ms. 2024-08-07 21:13:16,762 [39] INFO DistributorCommon.WMQClient [(null)] - Clear Write Buffer 2024-08-07 21:13:16,856 [20] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=9), retry in 10 ms. 2024-08-07 21:13:16,856 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:16,997 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:17,137 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:17,278 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:17,278 [39] INFO DistributorCommon.WMQClient [(null)] - No message to handle (noMessageHandleCounter=4), retry in 1000 ms 2024-08-07 21:13:18,294 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:18,294 [39] INFO DistributorCommon.WMQClient [(null)] - No message to handle (noMessageHandleCounter=5), retry in 1000 ms 2024-08-07 21:13:19,309 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 0 2024-08-07 21:13:19,309 [39] INFO DistributorCommon.WMQClient [(null)] - No message to handle (noMessageHandleCounter=6), retry in 1000 ms 2024-08-07 21:13:19,544 [28] INFO DistributorCommon.WMQClient [(null)] - Message Read from Queue, Message Length:13568 2024-08-07 21:13:19,669 [28] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=1), retry in 10 ms. 2024-08-07 21:13:19,794 [28] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=2), retry in 10 ms. 2024-08-07 21:13:19,919 [28] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=3), retry in 10 ms. 2024-08-07 21:13:20,044 [28] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=4), retry in 10 ms. 2024-08-07 21:13:20,169 [28] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=5), retry in 10 ms. 2024-08-07 21:13:20,294 [28] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=6), retry in 10 ms. 2024-08-07 21:13:20,419 [28] INFO DistributorCommon.WMQClient [(null)] - No msg in the queue (NoMessageCounter=7), retry in 10 ms. 
2024-08-07 21:13:20,419 [39] INFO DistributorCommon.WMQClient [(null)] - All Read threads finished flush the messages, total messages: 1 2024-08-07 21:13:20,419 [39] INFO DistributorCommon.WMQClient [(null)] - Processing messages, Count=1 2024-08-07 21:13:20,419 [39] INFO DistributorCommon.WMQClient [(null)] - Done Processing messages, Count=1, IsBufferedEvent=True 2024-08-07 21:13:20,419 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Bulk saving messages, Count=1 2024-08-07 21:13:20,434 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Done Bulk saving messages, Count=1, used 12 ms"

It looks something like this now:

_time | processing_time | Count | db_bulk_write_time | no_msg_wait_time | _raw
2024-08-07 21:13:16.637 | 3.797 | 1 | 12 | 3000 | 2024-08-07 21:13:20,434 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Done Bulk saving messages, Count=1, used 12 ms
2024-08-07 21:13:12.841 | 3.781 | 1 | 11 | 3000 | 2024-08-07 21:13:16,622 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Done Bulk saving messages, Count=1, used 11 ms
2024-08-07 21:13:12.054 | 0.771 | 1 | 24 | 0 | 2024-08-07 21:13:12,825 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Done Bulk saving messages, Count=1, used 24 ms
2024-08-07 21:13:07.710 | 4.297 | 1 | 113 | 4000 | 2024-08-07 21:13:12,007 [39] INFO DistributorCommon.DBHandlerBase [(null)] - Done Bulk saving messages, Count=1, used 113 ms
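On the charting question: Splunk column charts support drawing one series as a line overlay, so a hedged sketch is to aggregate both series with timechart (renaming the field to drop the parentheses is an assumption about naming, not a requirement):

... | timechart span=1m sum(Count) as Count avg("processing_time(ms)") as processing_time_ms

with these Simple XML options on the panel:

<option name="charting.chart">column</option>
<option name="charting.chart.overlayFields">processing_time_ms</option>
<option name="charting.axisY2.enabled">1</option>

Count then renders as columns while processing_time_ms is drawn as a line against the second y-axis.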
Hello, I'm wondering if we can send Palo Alto firewall logs to Splunk *Cloud* via HEC. We did that once when evaluating another SIEM solution (CrowdStrike NG-SIEM). As for Splunk, the documents I can find on the internet all recommend this flow: PaloAlto -> syslog-ng + universal forwarder -> Splunk Cloud. Does anyone know why HEC is not the preferred option in this case? Any potential issues here? Regards, Iris
I have arguments for my macro that contain other tokens, e.g. $env:user$ and $timepicker.earliest$/$timepicker.latest$. How do I include these in my macro definition? It doesn't let me, since macro arguments must only contain alphanumeric, '_' and '-' characters.
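One detail that may help: the alphanumeric restriction applies to the argument names in the macro definition, not to the values passed at call time, so dashboard tokens can be handed in as ordinary arguments. A hedged macros.conf sketch (the macro, index, and field names are placeholders):

[user_activity(3)]
args = user, earliest_t, latest_t
definition = search index=app_logs user="$user$" earliest=$earliest_t$ latest=$latest_t$

Called from a dashboard query as `user_activity($env:user$, $timepicker.earliest$, $timepicker.latest$)`, the tokens are expanded by the dashboard before the macro ever sees them.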
Hello, I am trying to display my Splunk dashboard on a TV 24/7 at the front of my shop, showing a running count of customers who support our store and analysis of their feedback. The issue I am having: my dashboard is not updating correctly. It is set to refresh every 15 minutes, but the refresh takes the dashboard out of full screen, which I do not want (it shows my tabs and apps rather than just the dashboard). Question: how can I ensure that when the Splunk web page refreshes through the browser, the dashboard is refreshed/reset in full screen? Thank you
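One approach worth trying is a kiosk-style URL rather than the browser's full-screen mode, so the chrome is hidden by the dashboard itself and survives refreshes; a sketch, with host, app, and dashboard names as placeholders:

https://your-splunk-host:8000/app/your_app/your_dashboard?hideSplunkBar=true&hideAppBar=true&hideFooter=true&hideEdit=true

These URL parameters suppress the Splunk bar, app bar, footer, and edit controls, so a browser-level refresh reloads the same stripped-down view instead of dropping back to the full UI.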