All Topics
I want to store the Splunk dashboard code in GitLab or Bitbucket so I do not lose the dashboard. Any idea if it's possible?
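Dashboards are stored server-side as Simple XML objects, so one approach is to pull the XML over the REST API and commit it to Git. A minimal sketch, assuming the management port (8089) is reachable and using a hypothetical dashboard named my_dashboard in the search app; jq is only used to pull the XML out of the JSON response:

# Hypothetical export of one dashboard's source for version control
curl -k -u admin:changeme -G \
  "https://localhost:8089/servicesNS/-/search/data/ui/views/my_dashboard" \
  --data-urlencode output_mode=json \
  | jq -r '.entry[0].content."eai:data"' > my_dashboard.xml

git add my_dashboard.xml
git commit -m "Track my_dashboard source"

Running this on a schedule or from a CI job gives you a change history for every dashboard you care about.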
Hi all, I have created a dashboard incorporating a few external domains, and I am receiving an error message saying the dashboard is attempting to receive content from outside of Splunk and that the content URLs are not in the dashboard's trusted domains list. Thanks.
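For on-prem deployments the trusted domains list lives in web.conf (on Splunk Cloud the same change typically goes through a support ticket). A minimal sketch, where external_images is an arbitrary label and the domain is a placeholder for your real content URL:

# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
dashboards_trusted_domain.external_images = https://example.com

A restart is typically needed for the setting to take effect.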
Hi all, recently my customer asked me to integrate different JSON log sources (VPN concentrator, WAF and load balancers) coming from a single Azure Event Hub. I onboarded it using the Splunk Add-on for Microsoft Cloud Services (https://splunkbase.splunk.com/app/3110) from the Inputs Data Manager (IDM) instance and selected the default sourcetype "mscs:azure:eventhub". At this point I need to split this sourcetype into three new ones, one for each log type (VPN concentrator, WAF and load balancers), so I can distinguish them and create custom field extractions and so on for the data models. I found a field "category" within the JSON logs which can be used as the splitting criterion. Any idea how to do that? Thanks in advance!
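The usual approach is an index-time sourcetype rewrite keyed on the raw category value. A minimal sketch, assuming the hypothetical category values "VpnGateway" and "ApplicationGateway" (substitute the values you actually see, and add a third pair for the load balancers); this goes on the first full parsing tier, which is the IDM in this setup:

# transforms.conf
[set_st_vpn]
REGEX = "category"\s*:\s*"VpnGateway"
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::mscs:azure:eventhub:vpn

[set_st_waf]
REGEX = "category"\s*:\s*"ApplicationGateway"
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::mscs:azure:eventhub:waf

# props.conf
[mscs:azure:eventhub]
TRANSFORMS-split_by_category = set_st_vpn, set_st_waf

Events whose category matches neither regex keep the original sourcetype, which is a useful safety net while testing.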
I am trying to create a custom metric on the database. I have selected the database, but when I add a query it throws the error ORA-00942: table or view does not exist. The query returns just one value when it is run directly against the database. Can you advise why this error is being thrown?
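ORA-00942 from a monitoring connection usually means the monitoring user either lacks SELECT privileges on the object or resolves the name in a different default schema than the one you tested with. A hedged sketch of both fixes; app_schema, my_table and monitor_user are placeholders:

-- run as the schema owner or a DBA
GRANT SELECT ON app_schema.my_table TO monitor_user;

-- in the custom metric, schema-qualify the object instead of relying
-- on the monitoring user's default schema
SELECT COUNT(*) AS metric_value FROM app_schema.my_table;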
Hello Experts, I am trying to delete the fishbucket, but I want to delete only the entries for one index (index=syslog). Is there a command I can run that only deletes for a particular index? Thanks in advance.
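One caveat: the fishbucket tracks checkpoints per source (file), not per index, so the reset has to be done file by file for the sources feeding that index. A sketch, assuming a *nix forwarder and a hypothetical monitored file; stop the forwarder before running it:

# reset the checkpoint for a single monitored file (path is an example)
$SPLUNK_HOME/bin/splunk cmd btprobe \
  -d $SPLUNK_HOME/var/lib/splunk/fishbucket/splunk_private_db \
  --file /var/log/syslog/app.log --reset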
Hello All, I recently started ingesting VPC flow logs from my AWS environment using the Data Manager app, and everything works fine in terms of getting the logs into Splunk. There is, however, one issue: when creating the VPC flow logs on AWS, we opted for a custom format to be able to glean additional fields like "pkt-srcaddr" and "pkt-dstaddr". As a result, Splunk does not correctly interpret the logs in the console. I believe that Splunk is reading the logs using the default log format detailed below:

Default format:
${version} ${account-id} ${interface-id} ${srcaddr} ${dstaddr} ${srcport} ${dstport} ${protocol} ${packets} ${bytes} ${start} ${end} ${action} ${log-status}

How do I get it to read the logs using the custom format detailed below?

Custom format:
${version} ${account-id} ${vpc-id} ${subnet-id} ${interface-id} ${instance-id} ${flow-direction} ${srcaddr} ${dstaddr} ${srcport} ${dstport} ${pkt-srcaddr} ${pkt-dstaddr} ${protocol} ${packets} ${bytes} ${start} ${end} ${action} ${log-status}
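One way to handle a custom format is a search-time delimiter extraction that maps the space-separated columns to your field order. A minimal sketch, assuming the events carry whatever sourcetype Data Manager assigned (shown as a placeholder; check yours) and using underscored field names:

# transforms.conf
[vpcflow_custom_fields]
DELIMS = " "
FIELDS = version, account_id, vpc_id, subnet_id, interface_id, instance_id, flow_direction, srcaddr, dstaddr, srcport, dstport, pkt_srcaddr, pkt_dstaddr, protocol, packets, bytes, start, end, vpcflow_action, log_status

# props.conf
[<your_vpcflow_sourcetype>]
REPORT-vpcflow_custom = vpcflow_custom_fields

Since this is a search-time extraction, it can be deployed to the search tier without re-ingesting anything.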
I have a table with 3 columns: _time, type and action.

| makeresults count=10 
| eval type = "typeA" 
| eval action = if((random()%2) == 1, "open", "close") 
| union [| makeresults count=10 
  | eval type = "typeB" 
  | eval action = if((random()%2) == 1, "open", "close")]

I need to create a column for each type that identifies changes in the action column and counts the consecutive actions in ascending order, like this:

_time                typeA  typeB  typeA_count  typeB_count
2022-01-01 05:00:00  open   close  1            1
2022-01-01 05:00:01  open   open   2            1
2022-01-01 05:00:02  close  close  1            1
2022-01-01 05:00:03  open   open   1            1
2022-01-01 05:00:04  close  open   1            2
2022-01-01 05:00:05  open   close  1            1
2022-01-01 05:00:06  open   close  2            2
2022-01-01 05:00:07  open   close  3            3
2022-01-01 05:00:08  close  open   1            1
2022-01-01 05:00:09  open   close  1            1

Thanks
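A sketch of one way to get those run counters, using streamstats to detect changes, a second streamstats to number each run, and dynamic field names to pivot per type (untested against your data):

| sort 0 _time
| streamstats current=f last(action) as prev_action by type
| eval changed=if(isnull(prev_action) OR action!=prev_action, 1, 0)
| streamstats sum(changed) as run_id by type
| streamstats count as run_count by type run_id
| eval {type}=action, {type}_count=run_count
| stats values(typeA) as typeA values(typeB) as typeB values(typeA_count) as typeA_count values(typeB_count) as typeB_count by _time

The {type} syntax in eval creates a field named after the value of type, which is what lets the final stats fan the two types out into separate columns.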
Hi, I need to extract the value FISOBPIT10101 from lines like the one below.

message:PSUS7|8897|FISOBPIT10101|OWA|8897|8897|SignOnID|SPT|adding routing key in producer
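Assuming the target is always the third pipe-delimited token of the message field, a minimal rex sketch (the capture name extracted_id is made up):

... | rex field=message "^(?:[^|]+\|){2}(?<extracted_id>[^|]+)"

If the value lives in _raw rather than an extracted message field, run the same rex against _raw with a "message:" prefix in the pattern.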
Hi, I have a multivalue field with values A B C, and I want it rendered as 'A','B','C'. I tried eval with mvjoin, but the first and last values are missing their quotes, like this: A','B','C. The SPL used is:

index=* sourcetype=* host=abc NAME IN ("*A*","*B*","*C*") 
| stats values(NAME) as NAME by host 
| eval NAME = mvjoin(NAME,"','")

Any help would be appreciated, thank you.
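mvjoin only inserts the delimiter between values, so the outer quotes have to be concatenated on afterwards; a one-line sketch:

| eval NAME = "'" . mvjoin(NAME, "','") . "'"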
I am working on Splunk Cloud, so I don't have access to the server, and I am using Dashboard Studio. This is my table code (I have also attached a screenshot of the table); I just want to know how I can add a tooltip to each column header of my table.

"viz_zfv78G8Y": {
    "type": "splunk.table",
    "title": "GP Metrics",
    "options": {
        "tableFormat": {
            "align": "> table |pick(alignment)"
        },
        "columnFormat": {
            "messegetype": {
                "data": "> table | seriesByName(\"messegetype\") | formatByType(messegetypeColumnFormatEditorConfig)"
            }
        }
    },
    "dataSources": {
        "primary": "ds_f9ztfdW3"
    }
}
Hi All, I am looking for a dashboard that can show current active sessions and user details for F5 VPN. I checked a few apps, but they are old and not supported on Splunk Cloud. Does anyone know how we can achieve this requirement? Currently the logs are being pushed via syslog to Splunk Cloud. Any recommendations and help are highly appreciated.
The Splunk Add-on for Citrix NetScaler is continuously logging the following ERRORs in splunkd.log:

12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py" splunklib.binding.HTTPError: HTTP 404 Not Found -- citrix_netscaler_templates does not exist
12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py" During handling of the above exception, another exception occurred:
12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py" Traceback (most recent call last):
12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py"   File "/opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/lib/solnlib/conf_manager.py", line 457, in get_conf
12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py"     conf = self._confs[name]
12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py"   File "/opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/lib/splunklib/client.py", line 1816, in __getitem__
12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py"     raise KeyError(key)
12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py" KeyError: 'citrix_netscaler_templates'
12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py" During handling of the above exception, another exception occurred:
12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py" Traceback (most recent call last):
12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py"   File "/opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/lib/solnlib/utils.py", line 153, in wrapper
12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py"     return func(*args, **kwargs)
12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py"   File "/opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/lib/solnlib/conf_manager.py", line 459, in get_conf
12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py"     raise ConfManagerException(f"Config file: {name} does not exist.")
12-08-2022 10:55:55.677 +0000 ERROR ExecProcessor [20336 ExecProcessor] - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/Splunk_TA_citrix-netscaler/bin/citrix_netscaler.py" solnlib.conf_manager.ConfManagerException: Config file: citrix_netscaler_templates does not exist.

Any solution?
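The traceback boils down to the add-on failing to find any citrix_netscaler_templates.conf in its configuration layers. As a first diagnostic you could confirm that with btool (a check, not a fix; re-creating the templates via the add-on's configuration UI may regenerate the file):

# lists every citrix_netscaler_templates.conf stanza Splunk can resolve;
# empty output means no such file exists in any app's default/ or local/
$SPLUNK_HOME/bin/splunk btool citrix_netscaler_templates list --debug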
The result should look like the table given below. I need to find the matching product numbers within customers, and the result should be the matching product number, or null if there is none. Can you please suggest an approach?

Customer_name  Product_number  Result
Customer1      P1              P1, P2
               P2
               P3
               P4
Customer 2     P2              P2
               P5
Customer 3     P3              Null
               P6
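If the goal is, per customer, to list their product numbers that also appear under other customers, one sketch (assuming the fields are literally Customer_name and Product_number):

| stats dc(Customer_name) as customer_count values(Customer_name) as customers by Product_number
| where customer_count > 1
| mvexpand customers
| stats values(Product_number) as Result by customers
| rename customers as Customer_name

Customers that drop out of this result have no shared products; joining back to the full customer list plus fillnull would reintroduce them with Result=Null.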
We have a distributed environment with 4 Splunk indexers which are consuming high memory. Usage reaches 100% and the indexer remains unreachable until we restart the splunkd service. Once restarted, memory comes down, and the same pattern repeats on other indexers within a span of a couple of hours. 64 GB of physical memory is available on each indexer, and saved/scheduled searches are not consuming high memory. We are unable to understand why there is a spike in memory usage. In the DMC it shows that splunkd is the process class with the highest physical memory usage, and its PID keeps increasing, as below. Please suggest how I can find the root cause of this issue and how to fix it.
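One place to start is the resource-usage introspection data, which breaks splunkd memory out per process type over time; a sketch (the host filter is a placeholder):

index=_introspection host=<your_indexer> sourcetype=splunk_resource_usage component=PerProcess data.process=splunkd
| timechart span=5m max(data.mem_used) as mem_used_mb by data.process_type

Correlating the spikes with search activity (index=_audit action=search) usually narrows it down to a search workload versus indexing-side pressure.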
Hi, I'm looking for how to make a conditional stats aggregation according to a form input "With users" (values: Yes or No). I have a list of events per user.

When the form input With users equals "Yes", I'd like to present a table like this:

User  URI  avgNb  avgDuration
A     A    1      1
A     B    2      5
A     C    3      1
B     A    5      9
B     C    6      10
C     A    4      11
C     B    6      8

Query: | stats count as avgNb avg(DUR) as avgDuration by USR

And when the form input With users equals "No", I'd like to present this one:

URI  avgNb       avgDuration
A    3,33333333  7
B    4           6,5
C    4,5         5,5

Query: | stats count as avgNb avg(DUR) as avgDuration

How can I build my query according to this form input condition? Thanks
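A common pattern is to drive the split-by clause from the input's token, so one search covers both cases. A Simple XML sketch (token name and choice values assumed; the "Yes" value includes URI as well so the output matches your first table):

<input type="dropdown" token="groupby">
  <label>With users</label>
  <choice value="by USR, URI">Yes</choice>
  <choice value="by URI">No</choice>
  <default>by URI</default>
</input>

<query>index=... | stats count as avgNb avg(DUR) as avgDuration $groupby$</query>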
Hi Guys,

I am comparing values from a CSV with those returned in JSON format by a Splunk search. At the moment the search works as I want it. But I noticed that in some instances the Splunk search does not bring back all the entries, simply because they do not exist for that customer.

The CSV has all the entries that should exist and match, and if one doesn't match it is returned as a result. Where I am struggling is getting the search to also output: hold on, this entry with this value is in the CSV but it's not in the search. The entries which are not returned by the search are important to us, because they mean something isn't turned on, so we need to go to that customer and rectify it.

The search at the moment looks like this:

index=main sourcetype="my_stats" type="add-ons" 
| spath config{} 
| mvexpand config{} 
| spath input=config{} 
| lookup add-ons.csv "Configuration Item" as displayName OUTPUTNEW "Configuration Setting" as "default" 
| stats list(type) as type list(displayName) as item list(name) as value list(default) as default list(owner) as owner by company

Thanks, Greg
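One way to surface CSV rows that never appear in the events is to append the lookup with a zero-count marker and keep only the rows whose total stays at zero; a sketch reusing your lookup (to do it per customer, add company to the by clauses):

index=main sourcetype="my_stats" type="add-ons"
| spath config{} | mvexpand config{} | spath input=config{}
| stats count by displayName
| append [| inputlookup add-ons.csv | eval displayName='Configuration Item', count=0 | fields displayName count]
| stats sum(count) as event_count by displayName
| where event_count=0

Anything left after the where is present in add-ons.csv but absent from the search results.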
Hi Guys. I have a distributed setup consisting of 1 search head, 1 deployment/license server, 1 indexer, and a whole bunch of universal forwarders. I am trying to filter out some of the data coming in with transforms.conf:

[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[setparsing]
REGEX = "name":"System availability"
DEST_KEY = queue
FORMAT = indexQueue

and a props.conf:

[Zabbix-history]
SHOULD_LINEMERGE = false
MAX_TIMESTAMP_LOOKAHEAD = 300
detect_trailing_nulls = auto
TIME_PREFIX = \"clock\":
KV_MODE = json
AUTO_KV_JSON = true
TRANSFORMS-set = setnull,setparsing

A log example that I would like to index, matching the regex in transforms.conf:

{"host":{"host":"xxxxx","name":"xxxx"},"groups":["xxxx Prod","xxxx","Windows servers"],"item_tags":[{"tag":"SAP Basis","value":""},{"tag":"System availability","value":""},{"tag":"SID1","value":""},{"tag":"Product","value":"Web Server"},{"tag":"SID","value":"WSP"}],"itemid":900162,"name":"System availability","clock":1670486400,"count":13,"min":1,"avg":1,"max":1,"type":3}

Currently the props.conf and transforms.conf are on the indexer in the designated app. It is filtering out all the logs with the sourcetype Zabbix-history, including the "name":"System availability" events that should be kept. Do the props/transforms also need to be on the search head, or pushed to the universal forwarder with the deployment server?
Hello, in older dashboard versions I was able to create multi-line strings in table cells. However, in Dashboard Studio it doesn't work. Is there a workaround?

Example for Dashboard Studio:

{
    "dataSources": {
        "ds_search_1": {
            "type": "ds.search",
            "options": {
                "query": "| makeresults | eval value=\"line1\r\nline2\r\nline3\" \r\n| table value",
                "queryParameters": {
                    "earliest": "-24h@h",
                    "latest": "now"
                }
            }
        }
    },
    "visualizations": {
        "viz_table_1": {
            "type": "splunk.table",
            "options": {
                "count": 100,
                "dataOverlayMode": "none",
                "drilldown": "none",
                "percentagesRow": false,
                "rowNumbers": false,
                "totalsRow": false,
                "wrap": true
            },
            "dataSources": {
                "primary": "ds_search_1"
            }
        }
    },
    "inputs": {
        "input_global_trp": {
            "type": "input.timerange",
            "options": {
                "token": "global_time",
                "defaultValue": "-24h@h,now"
            },
            "title": "Global Time Range"
        }
    },
    "layout": {
        "type": "grid",
        "options": {},
        "structure": [
            {
                "item": "viz_table_1",
                "type": "block",
                "position": {
                    "x": 0,
                    "y": 0,
                    "w": 1200,
                    "h": 250
                }
            }
        ],
        "globalInputs": [
            "input_global_trp"
        ]
    },
    "title": "dashboardstudio",
    "defaults": {
        "dataSources": {
            "ds.search": {
                "options": {
                    "queryParameters": {
                        "latest": "$global_time.latest$",
                        "earliest": "$global_time.earliest$"
                    }
                }
            }
        }
    }
}

Example for an XML dashboard:

<dashboard version="1.1">
  <label>delete_me_8</label>
  <row>
    <panel>
      <table>
        <search>
          <query>| makeresults | eval value="line1
line2
line3" | table value</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</dashboard>
Hello Experts! This is a question about the FIFO queue type in SQS-based S3 (Splunk Add-on for AWS).
- Is there a setting that must be enabled in the Splunk Add-on for AWS when using SQS-based S3 with the queue type set to FIFO?
- Does the add-on automatically recognize the queue type?
Also, I know that if we use FIFO in SQS, we cannot use event notifications (SNS) from S3 to SQS. Does that mean Splunk only supports S3 -> SNS -> SQS (standard queue)? I just want to know how to use a FIFO queue with the Add-on for AWS. Thanks in advance!
Hi, I have a use case to add panel details to one of my dashboards. After the panel name I need to add a hyperlink; when I click on it, it should pop up a window with static information that I provide. Any suggestions?
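If the dashboard is Simple XML, one common approximation is an HTML link that sets a token, plus a hidden panel that appears while the token is set (Simple XML supports data-set-token/data-unset-token attributes inside html panels). A sketch; all names and the static text are made up:

<row>
  <panel>
    <title>My Panel</title>
    <html>
      <a href="#" data-set-token="show_info" data-value="true">panel details</a>
    </html>
  </panel>
  <panel depends="$show_info$">
    <html>
      <p>Static information about this panel goes here.</p>
      <a href="#" data-unset-token="show_info">close</a>
    </html>
  </panel>
</row>

It shows the panel in place rather than a true popup window, but it avoids any custom JavaScript.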