All Posts


I made a typo in my post, so here is the actual version. Thank you in advance for your help! <row> <panel> <html><div> <a target="_blank" href="/app/app/operating_system?form.case_token=$case_token$&amp;form.host_name=$host_name$">Operating System Artifacts</a> </div></html> </panel> </row>
I have a Classic Dashboard where I have an HTML panel. I am trying to link to another dashboard with tokens that the user can select via multiselects. However, it isn't working. This is my HTML panel: <row> <panel> <html><div> <a target="_blank" href=/app/app/operating_system?form.case_token=$case_token$&amp;form.host_name=$host_name$">Operating System Artifacts</a> </div></html> </panel> </row>
Hi, I'm installing Splunk UBA and I got the following error:

waiting on impala containerized service to come up
Running CaspidaCleanup, resetting rules
Cleaning up node domain.com
checking if zookeeper is reachable at: domain.com:2181
zookeeper reachable at: domain.com:2181
checking if postgres is reachable at: domain.com:5432
postgres server reachable at: domain.com:5432
checking if impala is reachable at: jdbc:impala://domain.com:21050/;auth=noSasl
impala jdbc server at:jdbc:impala://domain.com:21050/;auth=noSasl not reachable, aborting
required services not up, aborting cleanup
CaspidaCleanup failed, exiting

There are no logs from impala:

[caspida@ubasplunk ~]$ ls /var/log/impala/
[caspida@ubasplunk ~]$ ls /var/log/impala/

The docker container is running, the ports are mapped, and port 21050 is open:

[caspida@ubasplunk ~]$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7d198d890b13 domain.com:5000/impala:latest "/bin/bash -c './imp…" 4 minutes ago Up 4 minutes 0.0.0.0:21000->21000/tcp, :::21000->21000/tcp, 0.0.0.0:21050->21050/tcp, :::21050->21050/tcp, 0.0.0.0:24000->24000/tcp, :::24000->24000/tcp, 0.0.0.0:25000->25000/tcp, :::25000->25000/tcp, 0.0.0.0:25010->25010/tcp, :::25010->25010/tcp, 0.0.0.0:25020->25020/tcp, :::25020->25020/tcp, 0.0.0.0:26000->26000/tcp, :::26000->26000/tcp impala
15d899b79ed2 07655ddf2eeb "/dashboard --insecu…" 6 minutes ago Up 6 minutes

Can you help me resolve the issue?
Splunk Enterprise ships with a copy of PostgreSQL. The latest Splunk installer, v9.4.1, however, still ships with PostgreSQL 16.0, which has several security vulnerabilities. Is there a documented way to upgrade the version to 16.7? Information on the PostgreSQL CVEs: https://www.postgresql.org/about/news/postgresql-173-167-1511-1416-and-1319-released-3015/
I'll give that a try, Will. Thanks for the suggestion!
Hi @moumoutaru  I did something similar last week for someone else who wanted to derive multiple tokens from a single dropdown. I ended up creating a table search off to the side of the visible dashboard using the token from the dropdown, which then allows them to use $searchName.result.<field>$ in their other searches. I think this same approach could work for what you are trying to achieve, as the problem with a search on a hidden dropdown which selects the first result is that if you change the value of the visible dropdown, whilst the search does re-run for the hidden one, it doesn't change the selected value! Check out this link for an example (https://community.splunk.com/t5/Dashboards-Visualizations/Dropdown-filter-in-Splunk-dashboard-studio/m-p/741817/highlight/true#M58408) or have a play around with the sample dashboard below to see what I mean: { "title": "Answers - testing", "description": "", "inputs": { "input_mwKjgBTB": { "options": { "items": [ { "label": "useast01", "value": "useast01" }, { "label": "uswest01", "value": "uswest01" }, { "label": "usqaf01", "value": "usqaf01" } ], "selectFirstSearchResult": true, "token": "cluster" }, "title": "Cluster", "type": "input.dropdown" } }, "defaults": { "dataSources": { "ds.search": { "options": { "queryParameters": { "earliest": "$global_time.earliest$", "latest": "$global_time.latest$" } } } } }, "visualizations": { "viz_2uQp06o5": { "dataSources": { "primary": "ds_c8AfQapt" }, "type": "splunk.table" }, "viz_JQgp3c9c": { "containerOptions": {}, "dataSources": { "primary": "ds_eGOCKAoM_ds_qBGlESX2" }, "eventHandlers": [ { "options": { "tokens": [ { "key": "name", "token": "method" } ] }, "type": "drilldown.setToken" } ], "options": { "majorValue": "> sparklineValues | lastPoint()", "trendValue": "> sparklineValues | delta(-2)" }, "showLastUpdated": false, "showProgressBar": false, "title": "Cluster", "type": "splunk.singlevalue" }, "viz_bxuHwYLl": { "containerOptions": {}, "dataSources": { 
"primary": "ds_FKvGrvZ4_ds_qBGlESX2" }, "eventHandlers": [ { "options": { "tokens": [ { "key": "name", "token": "method" } ] }, "type": "drilldown.setToken" } ], "options": { "majorValue": "> sparklineValues | lastPoint()", "trendValue": "> sparklineValues | delta(-2)" }, "showLastUpdated": false, "showProgressBar": false, "title": "Space", "type": "splunk.singlevalue" }, "viz_column_chart": { "containerOptions": {}, "dataSources": { "primary": "ds_qBGlESX2" }, "eventHandlers": [ { "options": { "tokens": [ { "key": "name", "token": "method" } ] }, "type": "drilldown.setToken" } ], "options": { "majorValue": "> sparklineValues | lastPoint()", "trendValue": "> sparklineValues | delta(-2)" }, "showLastUpdated": false, "showProgressBar": false, "title": "Region", "type": "splunk.singlevalue" } }, "dataSources": { "ds_FKvGrvZ4_ds_qBGlESX2": { "name": "get_space", "options": { "enableSmartSources": true, "query": "| makeresults\n| eval space=\"$myvars:result.space$\"" }, "type": "ds.search" }, "ds_c8AfQapt": { "name": "myvars", "options": { "enableSmartSources": true, "query": "| makeresults\n| eval cluster=\"$cluster$\"\n| eval space=CASE(cluster==\"usqaf01\", \"abs-qff\", cluster==\"uswest01\",\"abs-qpp1\",cluster==\"useast01\",\"abs-qpp1\")\n| eval region=CASE(cluster==\"usqaf01\", \"QAF\", cluster==\"uswest01\",\"QAWEST\",cluster==\"useast01\",\"QAEAST\")", "queryParameters": { "earliest": "-24h@h", "latest": "now" } }, "type": "ds.search" }, "ds_eGOCKAoM_ds_qBGlESX2": { "name": "get_cluster", "options": { "enableSmartSources": true, "query": "| makeresults\n| eval cluster=\"$myvars:result.cluster$\"" }, "type": "ds.search" }, "ds_qBGlESX2": { "name": "Search_1", "options": { "enableSmartSources": true, "query": "| makeresults\n| eval region=\"$myvars:result.region$\"" }, "type": "ds.search" } }, "layout": { "globalInputs": [ "input_mwKjgBTB" ], "layoutDefinitions": { "layout_1": { "options": { "display": "auto", "height": 960, "width": 1440 }, "structure": [ { 
"item": "viz_column_chart", "position": { "h": 250, "w": 350, "x": 370, "y": 0 }, "type": "block" }, { "item": "viz_2uQp06o5", "position": { "h": 100, "w": 580, "x": 1480, "y": 70 }, "type": "block" }, { "item": "viz_JQgp3c9c", "position": { "h": 250, "w": 350, "x": 0, "y": 0 }, "type": "block" }, { "item": "viz_bxuHwYLl", "position": { "h": 250, "w": 350, "x": 740, "y": 0 }, "type": "block" } ], "type": "absolute" } }, "tabs": { "items": [ { "label": "New tab", "layoutId": "layout_1" } ] } } } Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
@lcguilfoil  Did this work for appending to your dropdown? @livehybrid wrote: Hi @lcguilfoil  You need to use "form.rule_token" in the set token like this: <set token="form.rule_token">$click.value$</set>   Updated:  If you want to append to existing selections then use: <eval token="form.rule_token">mvappend($form.rule_token$,$click.value$)</eval> Here is a full example to demonstrate if it helps <form version="1.1" theme="light"> <label>AnswersTesting</label> <fieldset submitButton="false"> <input type="multiselect" token="rule_token" searchWhenChanged="true"> <label>Rule</label> <choice value="*">All Rules</choice> <default>*</default> <fieldForLabel>host</fieldForLabel> <fieldForValue>host</fieldForValue> <search> <query>| tstats count where index=_internal by host</query> <earliest>-7d@h</earliest> <latest>now</latest> </search> <prefix>host IN (</prefix> <delimiter>,</delimiter> <valuePrefix>"</valuePrefix> <valueSuffix>"</valueSuffix> </input> </fieldset> <row> <panel> <table> <search> <query>|tstats count where index=_internal by host</query> <earliest>-24h@h</earliest> <latest>now</latest> </search> <option name="drilldown">cell</option> <drilldown> <eval token="form.rule_token">mvappend($form.rule_token$,$click.value$)</eval> </drilldown> </table> </panel> </row> </form> Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will  
Hi @hema_5757  Did you see my response with other options under the other reply? Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
The search processes around 400M events and returns about 8K results.
Hello, I am running into an issue with using multiple dropdowns. What I am trying to achieve is dynamic index selection via a hidden Splunk dropdown filter that gets auto-populated with the first result value from the data source search on the hidden dropdown. The hidden filter's data source query for populating the dropdown uses the token from the first dropdown.

What seems to be working:
- The hidden dropdown successfully lists the correct index based on the selection from the first dropdown.

What isn't working:
- The result from the hidden index data source search isn't selected, despite it being the only result returned and the default selected value being set to the first value.

Any thoughts or recommendations for how to handle this problem?
Thanks to yuanliu! The code below worked. Sorry about the typos... that was just some fat-fingering trying to post the query. I was not aware of the true() function, but you learn something new every day. where if ("$PhoneNumber$" ="*", true(), like('Wireless number and descriptions',"%$PhoneNumber$%"))  
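For anyone finding this thread later, the same pass-through pattern can be tried in isolation with a self-contained makeresults harness; the field name and sample values below are illustrative, not from the original data:

```spl
| makeresults count=3
| streamstats count as n
| eval description=case(n=1, "Wireless 555-0100", n=2, "Landline 555-0200", n=3, "Wireless 555-0300")
| where if("$PhoneNumber$"="*", true(), like(description, "%$PhoneNumber$%"))
```

When the $PhoneNumber$ token is "*", the if() returns true() and every row passes; otherwise only rows whose description contains the token value survive the like() match.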
Do you know how to do that? I just know that I can; I don't know how.
Hi @hema_5757, your search is very long, so the only way to avoid timeouts like yours is to send the job to the background [Job > Send Job to Background], optionally adding an email notification for when the job completes. Then remember that you have a limit of 10,000 results, so it may be better to use more filters if you have too many results. Ciao. Giuseppe
Hi @Skinny  What does your search look like so far? If you're doing  | stats sum(All_Email.size) as size by All_Email.src_user, All_Email.recipient Then I think it should already be grouping it like this? Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
Hi @hema_5757  There could be a number of reasons your search is auto-cancelling:

1) The SH does not have enough RAM. Can you confirm how much RAM the SH has, and how much is free during the search?

2) Certain savedsearches.conf properties can affect the amount of time and/or the number of results that can return (https://docs.splunk.com/Documentation/Splunk/latest/Admin/Savedsearchesconf), such as:

dispatch.max_count = <integer>
* The maximum number of results before finalizing the search.
* Defaults to 500000.

dispatch.max_time = <integer>
* Indicates the maximum amount of time (in seconds) before finalizing the search.
* Defaults to 0.

dispatch.auto_cancel = <integer>
* Specifies the amount of inactive time, in seconds, after which the job is automatically canceled.
* 0 means to never auto-cancel the job.
* Default: 0

Please review these in your environment to see if they could be a factor.

3) Workload management (WLM): are your searches subject to WLM policies?

4) Check the job inspector: look at the search.log from within the job inspector for things like cancel/fail/error etc., and if there is more information you can share with us, it might help investigate further.

Please let me know how you get on and consider adding karma to this or any other answer if it has helped. Regards Will
This search can be used to display the data correctly at search time: SPL:- index= <index_name> source=<source> host="<host>" sourcetype="XmlWinEventLog" | rex mode=sed field=_raw "s/&lt;/</g" | rex mode=sed field=_raw "s/&gt;/>/g"
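A quick way to sanity-check the two sed-mode replacements without searching real data is a makeresults harness; the sample event below is made up for illustration:

```spl
| makeresults
| eval _raw="&lt;Event&gt;&lt;System&gt;example&lt;/System&gt;&lt;/Event&gt;"
| rex mode=sed field=_raw "s/&lt;/</g"
| rex mode=sed field=_raw "s/&gt;/>/g"
```

After both replacements, _raw should read <Event><System>example</System></Event>.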
Does anyone know of a risk assessment done for apps like the Cisco SNA add-on (Cisco Secure Network Analytics (Stealthwatch) App for Splunk Enterprise on Splunkbase) that require all users to have the list_storage_passwords capability? Does this capability mean that users (once authenticated) could craft a request that would provide them with a sensitive password in plaintext? Thanks
Hi All, I have the following query:

index=wineventlog |eval _time = strftime(_time,"%Y-%m-%d %H:%M:%S") |eval device_name = lower(Workstation_Name)|dedup device_name | table _time user device_name src_nt_host action ComputerName host SourceName Account_Name Security_ID Logon_Type TaskCategory Type app eventtype product vendor vendor_product Account_Domain dest dest_nt_domain dest_nt_host Error_Code EventCode EventType name source SourceName sourcetype src src_domain src_ip src_nt_domain src_port Virtual_Account LogName Logon_GUID Impersonation_Level

with a "Yesterday" time filter. This search takes more than one hour, and when I use this query to export the results it processes to about 60% and then gives an error that the search was auto-cancelled. Is there any way to handle the processing time for this query, or how else can I get the data? If I use a shorter timeframe, like the last 60 minutes, it takes almost 5 minutes and I can get the data. Please suggest.
Thanks @ITWhisperer, it worked!
| timechart span=mon avg(properties.elapsed) as AverageResponseTime | eval AverageResponseTime=round(AverageResponseTime,2)