All Topics


I am trying to set some token values when a dashboard loads or when the page is refreshed. The documentation gives the following example:

"defaults": {
    "dataSources": {
        "ds.search": {
            "options": {
                "queryParameters": {
                    "latest": "$global_time.latest$",
                    "earliest": "$global_time.earliest$"
                }
            }
        }
    },
    "tokens": {
        "default": {
            "tokenName": {
                "value": "1986"
            }
        }
    }
},

This is my code:

"defaults": {
    "dataSources": {
        "ds.search": {
            "options": {
                "queryParameters": {
                    "latest": "$global_time.latest$",
                    "earliest": "$global_time.earliest$"
                }
            }
        }
    },
    "tokens": {
        "default": {
            "Slot1_TailNum": {
                "value": "false"
            }
        }
    }
},

which is not working. I am using the "Interactions" Set tokens feature to set the "Slot1_TailNum" token to something other than false to hide/show a table, and that works fine. However, when reloading the dashboard or refreshing the page, the table is still displayed; the token does not seem to be set to false on load. Any help would be greatly appreciated. I can run a Zoom call if you want/need to see it.

Thanks,
David
We want to add a host drop-down to a dashboard. Please find the host details below:

dev1: appdev1host, logdev1host, cordev1host
dev2: appdev2host, logdev2host, cordev2host
dev3: appdev3host, logdev3host, cordev4host
dev4: appdev4host, logdev4host, cordev4host
sit1: appsit1host, logsit1host, corsit1host
sit2: appsit2host, logsit2host, corsit2host
sit3: appsit3host, logsit3host, corsit3host
sit4: appsit4host, logsit4host, corsit4host

The drop-down in the dashboard should have only 8 entries: dev1, dev2, dev3, dev4, sit1, sit2, sit3, sit4. For example, if I choose dev1 it should capture all the hosts listed for dev1 (appdev1host, logdev1host, cordev1host).
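One plausible approach, sketched under the assumption that this is a Simple XML dashboard: give each choice a value that is itself a host filter over the three hosts listed for that environment (token and index names below are illustrative):

<input type="dropdown" token="env_hosts">
  <label>Environment</label>
  <choice value="host IN (appdev1host, logdev1host, cordev1host)">dev1</choice>
  <choice value="host IN (appdev2host, logdev2host, cordev2host)">dev2</choice>
  <choice value="host IN (appdev3host, logdev3host, cordev4host)">dev3</choice>
  <choice value="host IN (appdev4host, logdev4host, cordev4host)">dev4</choice>
  <choice value="host IN (appsit1host, logsit1host, corsit1host)">sit1</choice>
  <choice value="host IN (appsit2host, logsit2host, corsit2host)">sit2</choice>
  <choice value="host IN (appsit3host, logsit3host, corsit3host)">sit3</choice>
  <choice value="host IN (appsit4host, logsit4host, corsit4host)">sit4</choice>
</input>

The panel search would then reference the token directly, for example: index=my_index $env_hosts$ | ... — selecting dev1 expands to host IN (appdev1host, logdev1host, cordev1host).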
I have some JSON output in key-value structure (protobuf3-formatted; this is OTLP data going into Splunk Enterprise events), and it has multiple values in each field. There are multiple key-value attributes stored under an attributes parent, and the metric fields are under a metrics parent. I want to take the host.name attribute and map it to every metric value I see. Here is a working example of the raw JSON:

{
  "resourceMetrics": [
    {
      "resource": {
        "attributes": [
          { "key": "host.name", "value": { "stringValue": "myname1" } },
          { "key": "telemetry.sdk.name", "value": { "stringValue": "my_sdk" } }
        ]
      },
      "scopeMetrics": [
        {
          "metrics": [
            {
              "name": "hw.host.energy",
              "gauge": { "dataPoints": [ { "timeUnixNano": "1712951030986039000", "asDouble": 359 } ] }
            },
            {
              "name": "hw.host.power",
              "gauge": { "dataPoints": [ { "timeUnixNano": "1712951030986039000", "asDouble": 26 } ] }
            }
          ]
        }
      ]
    },
    {
      "resource": {
        "attributes": [
          { "key": "host.name", "value": { "stringValue": "myname2" } },
          { "key": "telemetry.sdk.name", "value": { "stringValue": "my_sdk" } }
        ]
      },
      "scopeMetrics": [
        {
          "metrics": [
            {
              "name": "hw.host.energy",
              "gauge": { "dataPoints": [ { "timeUnixNano": "1712951030987780000", "asDouble": 211 } ] }
            }
          ]
        }
      ]
    }
  ]
}

There may be multiple attributes, in various orders, but I am only interested in grabbing the host.name value from there, and then associating host.name with all metrics under the metrics parent within the same resource. The metrics array may contain multiple metrics, and each new resource (with a new host.name and new metrics) shows up as the next entry in the resourceMetrics array. So what I want is something like this, in a row-based format of host.name value > metric:

host.name            metric
host.name,myname1    hw.host.energy,359
host.name,myname1    hw.host.power,26
host.name,myname2    hw.host.energy,211

The problem I am having is that I don't want the other attributes from the attributes parent, which in this example are the telemetry.sdk.name key and value. But since they are there, I can't figure out how to zip and expand properly: the telemetry.sdk.name value gets associated with legitimate metrics, looking something like the table below, which means that if I drop row 2 I lose the power metric = 26 for myname1. Parsing some spaths, the structure looks like this:

attr_zip                     metric_zip
host.name,myname1            hw.host.energy,359
telemetry.sdk.name,my_sdk    hw.host.power,26
host.name,myname2            hw.host.energy,211
telemetry.sdk.name,my_sdk

I looked at mvfilter but can't seem to find a way to handle a variable number of attributes showing up in the left column attr_zip: it seems I need to know how many values to fill down in that field, and I am not sure how to get a count of the values from the right column metric_zip to know how far down in attr_zip to fill. In the JSON, all the metric values share the same resource, so I should logically be able to reference the parent resource's attributes host.name value and concatenate that to every metric value.
Here's my current SPL, where I can get the columns concatenated properly, but would need to drop the rows in attr_zip that don't match the key of host.name:

| spath output=host_name path=resourceMetrics{}.resource.attributes{}
| mvexpand host_name
| spath output=attribute path=resourceMetrics{}.resource.attributes{}.key
| spath output=attribute_value path=resourceMetrics{}.resource.attributes{}.value.stringValue
| spath output=time resourceMetrics{}.scopeMetrics{}.metrics{}.gauge.dataPoints{}.timeUnixNano
| spath output=metric_name resourceMetrics{}.scopeMetrics{}.metrics{}.name
| spath output=metric_value resourceMetrics{}.scopeMetrics{}.metrics{}.gauge.dataPoints{}.asDouble
| eval attr_zip=mvzip(attribute, attribute_value)
| eval metric_zip=mvzip(metric_name, metric_value)
| table attribute, attribute_value, attr_zip, metric_zip

Anyone able to offer some guidance?
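For what it's worth, a sketch of one way around this, assuming each event holds one OTLP payload as above: expand per resource element first, so host.name can be picked out of the attribute list with mvfind before the metrics are zipped (field names are illustrative):

| spath path=resourceMetrics{} output=resource
| mvexpand resource
| eval _raw=resource
| spath path=resource.attributes{}.key output=attr_key
| spath path=resource.attributes{}.value.stringValue output=attr_val
| eval host_name=mvindex(attr_val, mvfind(attr_key, "^host\.name$"))
| spath path=scopeMetrics{}.metrics{}.name output=metric_name
| spath path=scopeMetrics{}.metrics{}.gauge.dataPoints{}.asDouble output=metric_value
| eval metric_zip=mvzip(metric_name, metric_value)
| mvexpand metric_zip
| table host_name metric_zip

Because the expansion happens per resource, every metric_zip row inherits the single host_name extracted for that resource, and the telemetry.sdk.name attribute never enters the zip.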
I have signed up and validated my account, but I cannot get access to the free trial. When I click the free trial button, it says an email will be sent to me, but I am not receiving anything. I have checked my spam folder.
Hi, I was trying the token logic below to get the result counts from two different panels and find the variance between the results. However, it gives the error message shown in the snapshot. Note: on Panel A and Panel B I have enabled set tokens (ticked "Use search results or job status as tokens"). Also, please suggest how to draw a line not just horizontally but also vertically or along a custom path.

SPL:

| makeresults
| eval variance=$A:result.count$ - $B:result.count$
| table variance

Error: (see attached snapshot)

Thanks,
Selvam.
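As a point of comparison, a minimal sketch that computes the same variance inside a single search instead of via panel tokens (the index names and search terms are placeholders for whatever Panel A and Panel B actually run):

index=panel_a_index search_terms_for_A
| stats count as A_count
| appendcols
    [ search index=panel_b_index search_terms_for_B
      | stats count as B_count ]
| eval variance = A_count - B_count
| table variance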
Hi All, I have data with three fields: srcip, dstip, and title. When I execute the query below:

......... | stats count by srcip, dstip, title

Result:

srcip     dstip     title
srcip1    dstip1    title
srcip1    dstip2    title
srcip2    dstip2    title1
srcip2    dstip3    title1
srcip1    dstip2    title2

We need to alert separately based on the title values: for all events of one title, there should be one alert. In the example above, three separate alerts should trigger.

Thank you in advance!
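A plausible pattern for this, sketched on the assumption that one alert per distinct title is the goal: aggregate down to one row per title and let the alert fire once per result:

... base search ...
| stats count values(srcip) as srcip values(dstip) as dstip by title

Then, in the alert configuration, set the trigger condition to "For each result" so that each title row raises its own alert.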
Hi, if I run this query in the plain search bar it works fine. However, when I create a panel and add the query below, I get an error saying it is waiting for input. Could you please advise?

index=hello sourcetype=welcome
| stats max(DATETIME) as LatestTime
| map search="search index=hello sourcetype=welcome DATETIME=$LatestTime$"
| stats sum(HOUSE_TRADE_COUNT) as HOUSE_Trade_Count

Thanks,
Selvam.
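A likely cause, offered as a hedge rather than a certainty: in a Simple XML dashboard, $LatestTime$ is parsed as a dashboard input token, so the panel waits for it. The usual workaround is to escape the dollar signs in the panel search so the token reaches the map command untouched:

index=hello sourcetype=welcome
| stats max(DATETIME) as LatestTime
| map search="search index=hello sourcetype=welcome DATETIME=$$LatestTime$$"
| stats sum(HOUSE_TRADE_COUNT) as HOUSE_Trade_Count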
Hi All, I have output from a lookup table in Splunk where the team work timings field comes through as:

TeamWorkTimings
09:00:00-18:00:00

I want the output separated into two fields, like:

TeamStart    TeamEnd
09:00:00     18:00:00

Please help me get this output in Splunk.
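A minimal sketch of one way to do this, assuming the field always contains exactly one hyphen between the two times:

| eval TeamStart=mvindex(split(TeamWorkTimings, "-"), 0),
       TeamEnd=mvindex(split(TeamWorkTimings, "-"), 1)

An equivalent rex-based variant: | rex field=TeamWorkTimings "(?<TeamStart>[^-]+)-(?<TeamEnd>.+)"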
Hello, I have the dataset below from a Splunk search.

Name    percentage
A       71%
B       90%
C       44%
D       88%
E       78%

I need to change the color of the percentage field values in the email alert according to the following rule: 95+ green, 80-94 amber, <80 red. My requirement is to achieve this by updating sendemail.py. @tscroggins @ITWhisperer @yuanliu @bowesmana
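For illustration only, a minimal Python sketch of the thresholding logic such a patch would need. This is not the sendemail.py API: where to inject the color into the rendered HTML table is version-specific, and edits to shipped scripts are overwritten on upgrade.

def percent_color(value):
    """Map a percentage string like '88%' to a display color per the rule above."""
    pct = int(value.rstrip("%"))
    if pct >= 95:
        return "green"
    if pct >= 80:
        return "amber"
    return "red"

# Example: 90% falls in the 80-94 band, 44% is below 80.
assert percent_color("90%") == "amber"
assert percent_color("44%") == "red"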
Hi, I am getting Axios 500 errors after installing the Salesforce Streaming API add-on on my Splunk Cloud Trial (Classic). I can't configure the Inputs or Configuration tabs at all. I have a feeling that this add-on isn't properly supported on trial Cloud instances. Has anyone had any luck getting it to work on Cloud Classic? Am I missing an additional configuration or app that I need to install? Any help would be greatly appreciated. P.S.: I was able to install the Salesforce add-on, configure it, and connect it to my sandbox just fine; it is this Streaming API add-on that seems to be the issue.
I am trying to create a report that pulls a version, but shows each version only once and then lists all the hosts on that version.
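A minimal sketch of the usual pattern, assuming fields named version and host exist in the events (index and sourcetype are placeholders):

index=my_index sourcetype=my_sourcetype
| stats values(host) as hosts dc(host) as host_count by version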
Hello, my lookup table has the fields src_ip, dst_ip, and description:

src_ip=192.168.1.1 dst_ip=192.168.1.100 description="internal IP"

I want to convert the src_ip and dst_ip fields to decimal. If you know how to convert them, please add a reply.

Thank you
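A sketch of the standard arithmetic, assuming IPv4 dotted-quad values: each octet is weighted by a power of 256, so 192.168.1.1 becomes 192*16777216 + 168*65536 + 1*256 + 1 = 3232235777. In SPL:

| eval o=split(src_ip, ".")
| eval src_ip_dec = tonumber(mvindex(o,0))*16777216 + tonumber(mvindex(o,1))*65536 + tonumber(mvindex(o,2))*256 + tonumber(mvindex(o,3))
| fields - o

The same eval applied to dst_ip yields dst_ip_dec.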
Hi, can you please let me know how I can display the 3 rows below in a single row?

Query:

index=events_prod_cdp_penalty_esa source="SYSLOG" sourcetype=zOS-SYSLOG-Console (TERM(VVF119P)) ("- ENDED" OR "- STARTED" OR "PURGED --")
| rex field=TEXT "(VVF119P -)(?<Function>[^\-]+)"
| fillnull Function value=" PURGED"
| eval DAT = strftime(relative_time(_time, "+0h"), "%Y/%m/%d")
| rename DAT as Date_of_reception
| table JOBNAME, Date_of_reception, Function, _time
| sort _time

I want to display the result in the following format:

JOBNAME  | Date_of_reception | STARTED_TIME | ENDED_TIME | PURGED_TIME
$VVF119P | 2024/04/17        | 02:12:37     | 02:12:46   | 02:12:50

Thanks in advance.
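One hedged way to pivot this, assuming Function reliably contains STARTED, ENDED, or PURGED: turn each event's time into the matching column, then collapse the rows with stats:

...
| eval t=strftime(_time, "%H:%M:%S")
| eval STARTED_TIME=if(match(Function, "STARTED"), t, null()),
       ENDED_TIME=if(match(Function, "ENDED"), t, null()),
       PURGED_TIME=if(match(Function, "PURGED"), t, null())
| stats values(STARTED_TIME) as STARTED_TIME
        values(ENDED_TIME) as ENDED_TIME
        values(PURGED_TIME) as PURGED_TIME
        by JOBNAME, Date_of_reception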
Hi everyone, I have a line chart that works perfectly, but only for a single value:

index=events ComputerName=* Account_Name=*** EventCode=$event_code_input$
| timechart count by EventCode

As you can see, it reads EventCode from a user input, which is a multiselect box. If the user selects 4624, it plots the line with no issue. But if they select 4624 AND 4625, it produces an error. I've tried many different variations and chart types but with no success.

Thanks
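A common fix, sketched on the assumption that this is a Simple XML multiselect: let the input render the OR expression itself, and drop the EventCode= prefix from the search (choice values here are illustrative):

<input type="multiselect" token="event_code_input">
  <label>Event Code</label>
  <valuePrefix>EventCode=</valuePrefix>
  <delimiter> OR </delimiter>
  <choice value="4624">4624</choice>
  <choice value="4625">4625</choice>
</input>

The panel search then becomes: index=events ComputerName=* Account_Name=*** ($event_code_input$) | timechart count by EventCode — selecting both codes expands the token to EventCode=4624 OR EventCode=4625.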
Hello Splunkers!! I want to achieve the visualization shown in the screenshot below. [screenshot]

Below is my current query:

index=ABC sourcetype=ReplenishmentOrderAssign OR sourcetype=ReplenishmentOrderCompleted OR sourcetype=ReplenishmentOrderStarted OR sourcetype=ReplenishmentOrderCancel
| rex field=_raw "SenderFmInstanceName\>(?P<Workstation>[A-Za-z0-9]+\/[A-Za-z0-9]+)\<\/SenderFmInstanceName"
| rename ReplenishmentOrderAssign.OrderId as OrderId
| eval TimeAssigned=if(like(sourcetype,"%Assign"),_time,null()), TimeStarted=if(like(sourcetype,"%Started"),_time,null()), TimeCompleted=if(like(sourcetype,"%Completed"),_time,null())
| eventstats count(OrderId) as CountOrderTypes by OrderId
| timechart span=5m count(TimeAssigned) as Assigned count(TimeStarted) as Started count(TimeCompleted) as Completed by Workstation
| streamstats sum(*)
| foreach "sum(Assigned:*)" [| eval <<MATCHSEG1>>Assigned='<<FIELD>>'-'sum(Completed:<<MATCHSEG1>>)']
| foreach "sum(Started:*)" [| eval <<MATCHSEG1>>Started='<<FIELD>>'-'sum(Completed:<<MATCHSEG1>>)']
| fields _time DEP*
| foreach "DEP/*" [| eval <<MATCHSEG1>>=if('<<FIELD>>'>0,1,0)]
| fields - DEP/*
| foreach "*Assigned" [| eval <<FIELD>>='<<FIELD>>'-'<<MATCHSEG1>>Started']
| foreach "*Assigned" [| eval <<MATCHSEG1>>Idle=1-'<<FIELD>>'-'<<MATCHSEG1>>Started']
| addtotals *Started fieldname=Active
| addtotals *Assigned fieldname=Assigned
| addtotals *Idle fieldname=Idle
| fields _time Idle Assigned Active
| bin span=$span$ _time
| eventstats sum(*) as * by _time
| dedup _time

This query currently gives me the visualization below. [screenshot] Please help me understand what I need to change in the query to get the visualization shown above.
When we start the official Docker container image splunk/splunk:9.2.1 with the extra variable SPLUNK_DISABLE_POPUPS=true:

docker run -d -p 8000:8000 -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_PASSWORD=OUR_PASS" -e "SPLUNK_DISABLE_POPUPS=true" --name splunk splunk/splunk:9.2.1

the Ansible task "Disable Popups" fails with this error message:

TASK [splunk_common : Disable Popups] ******************************************
changed: [localhost] => (item={'key': '/servicesNS/admin/user-prefs/data/user-prefs/general', 'value': 'hideInstrumentationOptInModal=1&notification_python_3_impact=false&showWhatsNew=0'})
failed: [localhost] (item={'key': '/servicesNS/nobody/splunk_instrumentation/admin/telemetry/general', 'value': 'showOptInModal=0&optInVersionAcknowledged=4'})

The failure detail shows that the value string appears to be split at the first "=": the POST to https://127.0.0.1:8089/servicesNS/nobody/splunk_instrumentation/admin/telemetry/general sends the data {'showOptInModal': '0&optInVersionAcknowledged=4'}, which Splunk rejects with status code 400:

<?xml version="1.0" encoding="UTF-8"?>
<response>
  <messages>
    <msg type="ERROR">Argument "{"showOptInModal": "0" is not supported by this handler.</msg>
  </messages>
</response>

The item {'key': '/servicesNS/admin/search/data/ui/ui-tour/search-tour', 'value': 'tourPage=search&viewed=1'} fails the same way: the POST sends {'tourPage': 'search&viewed=1'} and gets status code 400 with the message: Argument "{"tourPage": "search" is not supported by this handler.

Because of this, the container fails to start. When the Disable Popups variable is not given, Splunk starts without issue. Other Docker image versions, like splunk/splunk:9.2, don't have this issue.

Any help is appreciated.
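Given the last observation above, a stopgap sketch rather than a fix: pin the tag that is reported to work so SPLUNK_DISABLE_POPUPS stays usable until the 9.2.1 image is repaired.

docker run -d -p 8000:8000 \
  -e "SPLUNK_START_ARGS=--accept-license" \
  -e "SPLUNK_PASSWORD=OUR_PASS" \
  -e "SPLUNK_DISABLE_POPUPS=true" \
  --name splunk splunk/splunk:9.2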
Hello, I am building a custom alert action for advanced webhook functionality (allowing header values, removing some data from the payload, etc.), and I want to validate that the URL provided in the alert's configuration matches one of the entries in the webhook allowed URLs. There is a standard list of allowed URLs for Splunk's built-in Webhook action, which is the list I want to use. Do you know how I can pull the list of allowed webhook URL patterns from my Python code? I want to reuse the existing configuration instead of creating a custom list of allowed patterns: only an admin should be able to modify this list, whereas the URL for each alert is created by the user. Thanks!
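A hedged sketch of one way to read that configuration, assuming the Splunk SDK for Python (splunklib) is bundled with the app and that the built-in action's allow list lives in the [webhook] stanza of alert_actions.conf. The exact key name holding the patterns varies by Splunk version, so verify it (for example with `splunk btool alert_actions list webhook`) before relying on this:

import splunklib.client as client

def webhook_allowed_patterns(session_key):
    # In a custom alert action, session_key arrives in the JSON payload
    # that Splunk writes to the script's stdin.
    service = client.connect(token=session_key, owner="nobody", app="search")
    stanza = service.confs["alert_actions"]["webhook"]
    # Collect any settings that look like allow-list entries; the key
    # name is an assumption to verify against your Splunk version.
    return {k: v for k, v in stanza.content.items() if "allow" in k.lower()}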
I need to bring events related to creating and changing a user in the application into this CIM data model (Change > Account Management). To do this, the action field needs to carry one of the following values: acl_modified, cleared, created, deleted, modified, stopped, lockout, read, logoff, updated, started, restarted, unlocked, according to this documentation. The problem is that the action field already exists in the events with the values create and delete, and it is used to describe actions not only on users but also on other objects. What method can you recommend to make the field CIM-compliant? Event example (nested objects collapsed):

{
  "action": "delete",
  "actor_details": {...},
  "actor_uuid": 11111111,
  "location": {...},
  "object_details": {...},
  "object_type": "user",
  "object_uuid": 333333333,
  "session": {...},
  "timestamp": 33213123,
  "uuid": 4444444
}

Note that object_type can also be item, vault, etc.
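One common approach, sketched with an assumed sourcetype name: leave the indexed field alone and override it at search time with a calculated field in props.conf, mapping the raw values into the CIM vocabulary only when the object is a user:

[your:sourcetype]
EVAL-action = case(object_type=="user" AND action=="create", "created", object_type=="user" AND action=="delete", "deleted", true(), action)

Scoping such an eval to a CIM-mapped eventtype instead of the raw sourcetype would keep any existing searches that rely on the original create/delete values intact.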
Hi Splunkers, we have a Windows log source with a UF installed on it. We have no access to this log source: we only know that we collect Windows logs via the UF and that it works properly. The collected logs are the usual ones: Security, Application, and so on. Starting from today, we need to add a monitor input: some files are stored in a folder and we need to collect them. So, on our DS, we created another app inside the deployment-apps folder, with a proper inputs.conf and props.conf, and then we deployed it. Why did we create another app instead of simply adding a monitor stanza to the Windows add-on's inputs.conf? Simply because the Windows add-on is deployed on many hosts, while we need to monitor this path on only one specific host, so we preferred to deploy a dedicated app, with its own server class and so on. The DS gives no errors; the app is shown as deployed with no issues. At the same time, we get no errors looking at splunkd.log and/or the _internal index. Nevertheless, the logs are not being collected. Of course, we are going to reach out to the host owner and perform basic checks, such as:

1. Is the provided path the right one?
2. Does the user running the UF have read permission on that folder?
3. Is the app we deployed visible in the UF's apps folder?

But before this, there is a doubt I have: regarding point 2 above, in case of a permission denial I should see some error message in the _internal logs, right? Because currently I don't see any error message related to this issue. The behavior is as if the inputs.conf we set in the deployment app were totally ignored: searching _internal and/or splunkd.log, I cannot see anything related to the path we have to monitor.
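For reference, a hedged sketch of the kind of stanza and checks involved (the path, index, and sourcetype below are placeholders, not the real configuration):

# inputs.conf in the deployed app
[monitor://D:\data\myfolder\*.log]
disabled = false
index = my_index
sourcetype = my:sourcetype

On the forwarder, `splunk btool inputs list --debug` shows whether the stanza was merged into the running configuration, and a search such as

index=_internal host=<uf_host> source=*splunkd.log* (TailingProcessor OR TailReader) "myfolder"

surfaces whether the tailing processor ever looked at the path.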
Hi all, I am trying to set up Splunk Security Essentials (v3.8.0), but all the searches that use sseanalytics are failing:

ERROR SearchMessages - orig_component="script" app="Splunk_Security_Essentials" sid="1713345298.75124_FB5E91CC-FD94-432D-8605-815038CDF897" message_key="EXTERN:SCRIPT_NONZERO_RETURN" message=External search command 'sseanalytics' returned error code 1.