All Posts


Hi Rick, same user. I did use the earliest and latest in the search query itself as filters. The API is using the services/export endpoint.
Ah, there's your problem. You assign the variable "extracted_ip_1", which then works fine within the function, but the following phantom.save_run_data function call does not actually dump the value of the "extracted_ip_1" variable into the output; it dumps the "code_3__extracted_ip_1" variable, which is previously set to None.

You should change the phantom.save_run_data command to use the correct variable name in the value parameter:

phantom.save_run_data(key="code_3:extracted_ip_1", value=json.dumps(extracted_ip_1))

Or, if you want to constrain all custom code between the "custom code" comment blocks, you can change the variable name:

code_3__extracted_ip_1 = regex_extract_ipv4_3_data_extracted_ipv4[0]

Also, you mentioned the data path on the input to the following block is "code_3:customer_function:extraced_ip_1", which has "customer_function" but it should have "custom_function". Not sure if this is just a typo in your post, but if it also exists in your SOAR instance it can cause problems.
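For clarity, here is a minimal sketch of what the two options could look like inside the custom code section of that playbook function. This assumes the surrounding auto-generated playbook code and the regex_extract_ipv4_3_data_extracted_ipv4 list exist exactly as in your current block; only the lines shown here would change.

import json
import phantom.rules as phantom  # imports normally already present at the top of the generated playbook

# regex_extract_ipv4_3_data_extracted_ipv4 comes from the upstream block, as in your existing code

# Option 1: keep your own variable name and pass its value explicitly
extracted_ip_1 = regex_extract_ipv4_3_data_extracted_ipv4[0]
phantom.save_run_data(key="code_3:extracted_ip_1", value=json.dumps(extracted_ip_1))

# Option 2: assign to the auto-generated output variable instead, so the
# save_run_data call that follows the custom code block (which was previously
# dumping None) picks up the value on its own
code_3__extracted_ip_1 = regex_extract_ipv4_3_data_extracted_ipv4[0]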
Archive/live links for conf files:

2016 talk by David Veuve: https://web.archive.org/web/20161205164708/http://conf.splunk.com/sessions/2016-sessions.html#search=David%20Veuve&
Video recording: https://conf.splunk.com/files/2016/recordings/how-to-scale-from-raw-to-tstats.mp4
Video recording archive: https://web.archive.org/web/20250601131324/https://conf.splunk.com/files/2016/recordings/how-to-scale-from-raw-to-tstats.mp4
Slides: https://conf.splunk.com/files/2016/slides/how-to-scale-from-raw-to-tstats.pdf
Slides archive: https://web.archive.org/web/20250601130416/https://conf.splunk.com/files/2016/slides/how-to-scale-from-raw-to-tstats.pdf

2017 talk, again by David Veuve: https://web.archive.org/web/20171220012042/http://conf.splunk.com/sessions/2017-sessions.html#search=David%20Veuve&
Video recording: https://conf.splunk.com/files/2017/recordings/searching-fast-how-to-start-using-tstats-and-other-acceleration-techniques.mp4
Video recording archive: https://web.archive.org/web/20171220012042/http://conf.splunk.com/files/2017/recordings/searching-fast-how-to-start-using-tstats-and-other-acceleration-techniques.mp4
Slides: https://conf.splunk.com/files/2017/slides/searching-fast-how-to-start-using-tstats-and-other-acceleration-techniques.pdf
Slides archive: https://web.archive.org/web/20211202200036/http://conf.splunk.com/files/2017/slides/searching-fast-how-to-start-using-tstats-and-other-acceleration-techniques.pdf

2017 talk by Satoshi Kawasaki: https://web.archive.org/web/20171220012042/http://conf.splunk.com/sessions/2017-sessions.html#search=speed%20up&
Recording: https://conf.splunk.com/files/2017/recordings/speed-up-your-searches.mp4
Recording archive: https://web.archive.org/web/20240122110515/https://conf.splunk.com/files/2017/recordings/speed-up-your-searches.mp4
Slides: https://conf.splunk.com/files/2017/slides/speed-up-your-searches.pdf
Slides archive: https://web.archive.org/web/20250601130246/https://conf.splunk.com/files/2017/slides/speed-up-your-searches.pdf
@Amira Have you verified this?  https://splunkbase.splunk.com/app/6657 
I'm experiencing an issue with the Cisco SD-WAN application in Splunk where the dashboards are not displaying the expected data. We have followed the official documentation step by step and are successfully receiving both syslog and NetFlow data. However, it seems that the data model "Cisco_SDWAN" associated with the syslog data is not functioning correctly, which is likely causing the dashboards to fail. We've already performed extensive troubleshooting without success. Has anyone encountered a similar issue, or can anyone offer guidance on resolving the data model problem? Environment: Splunk Enterprise Security, Cisco Catalyst SD-WAN App for Splunk, and Cisco Catalyst SD-WAN Add-on for Splunk.
I don't think it is possible to constrain a dataset to "only intake 1 event containing each value of EventId and then exclude the rest of the events with the same EventId value." This would require the dataset to check every new event it intakes against a list of already-included EventId values. It would be better to do this another way. Ideally you could change the events themselves so that there is only one event per EventId, but there are other tricks you could try, like creating a search that writes summary-indexed events once per EventId while excluding all EventIds that already exist in the destination index. Then you could point the data model dataset at the index of summary-indexed events.
If you suspect there's some time range discrepancy between those two searches, check their job logs. After the search is expanded as it's being dispatched to be executed, if I remember correctly it should have the earliest and latest as epoch-based timestamps. Check if they differ. I assume you're spawning the searches from the same user, aren't you?
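If it helps to double-check outside the UI, a rough sketch along these lines can pull a job's resolved time bounds so the two searches can be compared side by side. The host, credentials, and SID below are placeholders, and this assumes the standard search jobs REST endpoint.

import requests

BASE = "https://splunk.example.com:8089"   # placeholder search head
AUTH = ("admin", "changeme")               # placeholder credentials
SID = "1748599810.12345"                   # SID shown in the Job Inspector

# Ask the jobs endpoint for the job's properties and print the time bounds
# the search was actually dispatched with.
resp = requests.get(
    f"{BASE}/services/search/jobs/{SID}",
    params={"output_mode": "json"},
    auth=AUTH,
    verify=False,  # lab/self-signed certs only
)
resp.raise_for_status()
content = resp.json()["entry"][0]["content"]
print("earliestTime:", content.get("earliestTime"))
print("latestTime:", content.get("latestTime"))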
When I look at the _time values pulled through the API, they look like below:
_time
2025-05-30 10:28:06.234 UTC
2025-05-30 04:48:45.178 UTC
2025-05-30 16:33:09.755 UTC
2025-05-30 14:20:23.054 UTC
When I look at the last row/record, its _time value is 2025-05-30 23:30:28.314; there is no record after this.
Hi, I have this very simple Splunk search query that I was able to run in the Splunk search portal/UI, and I am running the same search query via the API (the same query, just in the form of an encoded URL). What is the issue? I get a total of 164 events in the Splunk portal, but when I run the same query (translated into an encoded URL) through a Python script, I get only 157 records/rows. Since this search is only for yesterday, I am using earliest=-1d@d latest=-0d@d.

index=App001_logs sourcetype="App001_logs_st" earliest=-1d@d latest=-0d@d organization IN ("InternalApps","ExternalApps") AppclientId="ABC123" status_code=200 environment="UAT" | table _time, AppclientId,organization,environment,proxyBasePath,api_name

The exact same query is translated into an encoded URL like https:// [whole search query], and when I run the Python script on my desktop (my time zone is CST), I get only 157 records/rows. I think there is something going on between UTC and CST. This is what I see in the Splunk portal: 164 events (5/30/25 12:00:00.000 AM to 5/31/25 12:00:00.000 AM). Any guidance please?
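One way to rule out the timezone difference is to pass explicit epoch values for earliest_time and latest_time to the export endpoint instead of putting -1d@d in the query string, since relative modifiers are resolved against the calling user's timezone. A minimal sketch, assuming a requests-based Python script; the host and credentials are placeholders, and the SPL is taken from the post above.

import requests
from datetime import datetime, timedelta, timezone

BASE = "https://splunk.example.com:8089"   # placeholder search head
AUTH = ("api_user", "changeme")            # placeholder credentials

# Compute "yesterday" explicitly in UTC (or in whatever timezone the UI user
# has configured), so the UI search and the API search cover the same window.
today = datetime.now(timezone.utc).replace(hour=0, minute=0, second=0, microsecond=0)
earliest = int((today - timedelta(days=1)).timestamp())
latest = int(today.timestamp())

spl = (
    'search index=App001_logs sourcetype="App001_logs_st" '
    'organization IN ("InternalApps","ExternalApps") AppclientId="ABC123" '
    'status_code=200 environment="UAT" '
    '| table _time, AppclientId, organization, environment, proxyBasePath, api_name'
)

resp = requests.post(
    f"{BASE}/services/search/jobs/export",
    data={
        "search": spl,
        "earliest_time": earliest,
        "latest_time": latest,
        "output_mode": "json",
    },
    auth=AUTH,
    verify=False,  # lab/self-signed certs only
    stream=True,
)
resp.raise_for_status()
print(sum(1 for line in resp.iter_lines() if line))  # rough count of returned rows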
{ "visualizations": { "viz_gsqlcpsd": { "type": "splunk.line", "dataSources": { "primary": "ds_xcdWhjuu" }, "title": "${sel... See more...
{ "visualizations": { "viz_gsqlcpsd": { "type": "splunk.line", "dataSources": { "primary": "ds_xcdWhjuu" }, "title": "${selected_server:-All Servers} - CPU Usage %" } }, "inputs": { "input_IAwTOhNf": { "options": { "items": [], "token": "selected_server", "defaultValue": "" }, "title": "Server Name", "type": "input.multiselect", "dataSources": { "primary": "ds_dIoNDOrf" }, "showProgressBar": true, "showLastUpdated": true, "context": {} }, "input_mj9iUMvw": { "options": { "defaultValue": "-15m,now", "token": "tr_hMOOrvcD" }, "title": "Time Range Input Title", "type": "input.timerange" } }, "layout": { "type": "grid", "globalInputs": [ "input_VtWuBSik", "input_mj9iUMvw" ], "options": { "backgroundColor": "transparent" }, "structure": [ { "item": "viz_gsqlcpsd", "type": "repeating", "repeatFor": { "input": "input_VtWuBSik" }, "position": { "x": 0, "y": 0, "w": 1200, "h": 400 } } ] }, "dataSources": { "ds_xcdWhjuu": { "type": "ds.search", "options": { "queryParameters": { "earliest": "-24h@h", "latest": "now" }, "query": "index=cto_epe_observability sourcetype=otel_host_metrics measurement=otel_system_cpu_time \r\n| search url IN($selected_server$) OR url=\"default_server\"\r\n| eval state_filter=if(match(state, \"^(idle|interrupt|nice|softirq|steal|system|user|wait)$\"), 1, 0)\r\n| where state_filter = 1\r\n| sort 0 _time url cpu state\r\n| streamstats current=f last(counter) as prev by url cpu state\r\n| eval delta = counter - prev\r\n| where delta >= 0\r\n| bin _time span=1m\r\n| eventstats sum(delta) as total by _time, url, cpu\r\n| eval percent = round((delta / total) * 100, 2)\r\n| eval url_state = url . \"_\" . state \r\n| timechart span=1m avg(percent) by url_state\r\n| foreach * [eval <<FIELD>> = round('<<FIELD>>', 2)]" }, "name": "CPU_Util_Search_1" } }, "ds_dIoNDOrf": { "type": "ds.search", "options": { "query": "index=server | dedup server|table server", "queryParameters": { "earliest": "$global_time.earliest$", "latest": "$global_time.latest$" } }, "name": "Server_Search_1" }, "title": "Test_Multi Line chart" } @kiran_panchavat Thanks for the quick response. Your understanding is right. I believe your code is static , but I want dynamic according to the query results in multi select. Here's my full code
@Sudhagar  Are you looking something like this? Attached image. I created using some dummy data with static values.  {     "title": "Static CPU Usage Charts per Host",     "visualizations": {         "viz_host123": {             "dataSources": {                 "primary": "ds_host123"             },             "options": {                 "legendPlacement": "right",                 "xAxisTitle": "Time",                 "yAxisTitle": "CPU Usage (%)"             },             "title": "host123 - CPU Usage %",             "type": "splunk.line"         },         "viz_host456": {             "dataSources": {                 "primary": "ds_host456"             },             "options": {                 "legendPlacement": "right",                 "xAxisTitle": "Time",                 "yAxisTitle": "CPU Usage (%)"             },             "title": "host456 - CPU Usage %",             "type": "splunk.line"         },         "viz_host789": {             "dataSources": {                 "primary": "ds_host789"             },             "options": {                 "legendPlacement": "right",                 "xAxisTitle": "Time",                 "yAxisTitle": "CPU Usage (%)"             },             "title": "host789 - CPU Usage %",             "type": "splunk.line"         }     },     "dataSources": {         "ds_host123": {             "options": {                 "query": "| makeresults count=10\n| streamstats count as row\n| eval _time = relative_time(now(), \"-\" . (10 - row) . \"m\")\n| eval host=\"host123\"\n| eval state_list=split(\"user,system,idle\", \",\")\n| mvexpand state_list\n| eval state=state_list\n| eval percent=case(state==\"user\",20+random()%10,state==\"system\",10+random()%5,state==\"idle\",70+random()%10)\n| eval host_state=host.\"_\".state\n| timechart span=1m avg(percent) by host_state",                 "queryParameters": {                     "earliest": "-30m",                     "latest": "now"                 }             },             "type": "ds.search"         },         "ds_host456": {             "options": {                 "query": "| makeresults count=10\n| streamstats count as row\n| eval _time = relative_time(now(), \"-\" . (10 - row) . \"m\")\n| eval host=\"host456\"\n| eval state_list=split(\"user,system,idle\", \",\")\n| mvexpand state_list\n| eval state=state_list\n| eval percent=case(state==\"user\",20+random()%10,state==\"system\",10+random()%5,state==\"idle\",70+random()%10)\n| eval host_state=host.\"_\".state\n| timechart span=1m avg(percent) by host_state",                 "queryParameters": {                     "earliest": "-30m",                     "latest": "now"                 }             },             "type": "ds.search"         },         "ds_host789": {             "options": {                 "query": "| makeresults count=10\n| streamstats count as row\n| eval _time = relative_time(now(), \"-\" . (10 - row) . 
\"m\")\n| eval host=\"host789\"\n| eval state_list=split(\"user,system,idle\", \",\")\n| mvexpand state_list\n| eval state=state_list\n| eval percent=case(state==\"user\",20+random()%10,state==\"system\",10+random()%5,state==\"idle\",70+random()%10)\n| eval host_state=host.\"_\".state\n| timechart span=1m avg(percent) by host_state",                 "queryParameters": {                     "earliest": "-30m",                     "latest": "now"                 }             },             "type": "ds.search"         }     },     "layout": {         "layoutDefinitions": {             "layout_1": {                 "options": {                     "backgroundColor": "transparent"                 },                 "structure": [                     {                         "item": "viz_host123",                         "position": {                             "h": 400,                             "w": 1200,                             "x": 0,                             "y": 0                         },                         "type": "block"                     },                     {                         "item": "viz_host456",                         "position": {                             "h": 400,                             "w": 1200,                             "x": 0,                             "y": 400                         },                         "type": "block"                     },                     {                         "item": "viz_host789",                         "position": {                             "h": 400,                             "w": 1200,                             "x": 0,                             "y": 800                         },                         "type": "block"                     }                 ],                 "type": "grid"             }         },         "tabs": {             "items": [                 {                     "label": "New tab",                     "layoutId": "layout_1"                 }             ]         }     } }  
I am trying to repeat line chart for multiple host selection. Each line chart should display the cpu usage for each selected hosts separately. Here is my full source code in Dashboard studio. { "visualizations": { "viz_gsqlcpsd": { "type": "splunk.line", "dataSources": { "primary": "ds_xcdWhjuu" }, "title": "${selected_server:-All Servers} - CPU Usage %" } }, "inputs": { "input_VtWuBSik": { "options": { "items": [ { "label": "All", "value": "*" }, { "label": "host123", "value": "host123" }, { "label": "host1234", "value": "host1234" } ], "defaultValue": [ "*" ], "token": "selected_server" }, "title": "server", "type": "input.multiselect" }, "input_mj9iUMvw": { "options": { "defaultValue": "-15m,now", "token": "tr_hMOOrvcD" }, "title": "Time Range Input Title", "type": "input.timerange" } }, "layout": { "type": "grid", "globalInputs": [ "input_VtWuBSik", "input_mj9iUMvw" ], "options": { "backgroundColor": "transparent" }, "structure": [ { "item": "viz_gsqlcpsd", "type": "repeating", "repeatFor": { "input": "input_VtWuBSik" }, "position": { "x": 0, "y": 0, "w": 1200, "h": 400 } } ] }, "dataSources": { "ds_xcdWhjuu": { "type": "ds.search", "options": { "queryParameters": { "earliest": "-24h@h", "latest": "now" }, "query": "index=host_metrics measurement=cpu_time \r\n| search url IN($selected_server$) OR url=\"default_server\"\r\n| eval state_filter=if(match(state, \"^(idle|interrupt|nice|softirq|steal|system|user|wait)$\"), 1, 0)\r\n| where state_filter = 1\r\n| sort 0 _time url cpu state\r\n| streamstats current=f last(counter) as prev by url cpu state\r\n| eval delta = counter - prev\r\n| where delta >= 0\r\n| bin _time span=1m\r\n| eventstats sum(delta) as total by _time, url, cpu\r\n| eval percent = round((delta / total) * 100, 2)\r\n| eval url_state = url . \"_\" . state \r\n| timechart span=1m avg(percent) by url_state\r\n| foreach * [eval <<FIELD>> = round('<<FIELD>>', 2)]" }, "name": "CPU_Util_Search_1" } }, "title": "Test_Multi Line chart" }  
Added a note to the original post that the indexers have no I/O issues and plenty of idle CPU. This post is for the scenario where the replication queue is full, causing the pipeline queues to fill as well, but plenty of resources (CPU/I/O) are still available.
One question though - won't parallelIngestionPipelines starve the searches of CPU cores?
Added a note to the original post that the indexers have no I/O issues and plenty of idle CPU.
@hrawat Further insights on the suggestion shared by @gcusello:

It is recommended that indexers are provisioned with 12 to 48 CPU cores, each running at 2 GHz or higher, to ensure optimal performance. The disk subsystem should support at least 800 IOPS, ideally using SSDs for hot and warm buckets, to handle the indexing workload efficiently. https://docs.splunk.com/Documentation/Splunk/latest/Capacity/Referencehardware

For environments still using traditional hard drives, prioritize models with higher rotational speeds and lower average latency and seek times to maximize IOPS. For further insights, refer to this guide on Analyzing I/O Performance in Linux. Note that insufficient disk I/O is one of the most common performance bottlenecks in Splunk deployments, so it is crucial to thoroughly review disk subsystem requirements during hardware planning.

If the indexer's CPU resources exceed those of the standard reference architecture, it may be beneficial to tune parallelization settings to further enhance performance for specific workloads.
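If you do experiment with parallelization, the setting usually involved is parallelIngestionPipelines in server.conf on the indexers. A minimal sketch follows; the value of 2 is only an illustration, the change typically requires a splunkd restart, and you should confirm enough spare cores remain for search workloads, which is exactly the concern about starving searches raised earlier in this thread.

[general]
parallelIngestionPipelines = 2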
Wait a second. Something doesn't add up here. Even ignoring the syntax of that 200MB cold volume limit, if you set hot/warm to 100GB and cold to 200GB, you'll get at most 300GB of space. In ideal conditions that's 30 days at 10GB/day (in reality you need some buffer for acceleration summaries, and pushing a filesystem to 100% usage is not a healthy practice anyway), but for the one index whose config you've shown, you have a 90-day retention policy. OK, you wrote that you have multiple indexes with different retention requirements, but remember to take them all into account.
Hi @hrawat, two little questions:
how many CPUs do you have on your indexers?
what's the throughput of the storage on your indexers; in other words, do you have iowait and delayed-search issues?
Probably the problem is related to insufficient processing capacity, so the easiest solution is adding some CPUs. If instead the problem is the second one, the only solution is changing the storage, because it doesn't have sufficient IOPS: Splunk requires at least 800 IOPS.
Ciao.
Giuseppe
I enabled additional logging on the production setup and updated the passwords.conf and customfile.conf files, first on the search head captain (sh01) and then on another member (sh03). In both cases, logs were generated for the passwords.conf updates. However, there were no logs related to the customfile.conf file. The first set of logs corresponds to the update on the captain (sh01), and the second set corresponds to the update on the member (sh03). Sensitive fields have been redacted or anonymized.

05-30-2025 10:10:10.185 +0000 DEBUG ConfReplication [1692624 TcpChannelThread] - addCommit: to_repo=https://sh01.acme.com:8089, op_id=1252dcef9d0f33386e7feab562eba92d424515ea, applied_at=1748599810, asset_id=c922db4bf111d426f1e8eb78181cb8f43b185f52, asset_uri=/nobody/custom-app/passwords/credential:custom-app_realm:password:, optype=WRITE_STANZA, payload={ password = REDACTED [ { }, removable: yes ]\n }, extra_payload=
05-30-2025 10:10:13.591 +0000 DEBUG ConfReplication [2010047 ConfReplicationThread] - pullFrom_Locked: status=handling, from_repo=https://sh01.acme.com:8089, to_repo=https://sh03.acme.com:8089, op_id=1252dcef9d0f33386e7feab562eba92d424515ea, applied_at=1748599810, asset_id=c922db4bf111d426f1e8eb78181cb8f43b185f52, asset_uri=/nobody/custom-app/passwords/credential:custom-app_realm:password:, optype=WRITE_STANZA, payload={ password = REDACTED [ { }, removable: yes ]\n }, extra_payload=
05-30-2025 10:10:13.591 +0000 DEBUG ConfReplication [2010047 ConfReplicationThread] - pullFrom_Locked: status=applied, reason="", from_repo=https://sh01.acme.com:8089, to_repo=https://sh03.acme.com:8089, op_id=1252dcef9d0f33386e7feab562eba92d424515ea, applied_at=1748599813, asset_id=c922db4bf111d426f1e8eb78181cb8f43b185f52, asset_uri=/nobody/custom-app/passwords/credential:custom-app_realm:password:, optype=WRITE_STANZA, payload={ password = REDACTED [ { }, removable: yes ]\n }, extra_payload=

05-30-2025 10:10:10.497 +0000 DEBUG ConfReplication [3612371 TcpChannelThread] - addCommit: to_repo=https://sh03.acme.com:8089, op_id=481af55d46acfb6f4da973c3aac4af9e8ab2e0e6, applied_at=1748599810, asset_id=c922db4bf111d426f1e8eb78181cb8f43b185f52, asset_uri=/nobody/custom-app/passwords/credential:custom-app_realm:password:, optype=WRITE_STANZA, payload={ password = REDACTED [ { }, removable: yes ]\n }, extra_payload=
05-30-2025 10:10:10.501 +0000 DEBUG ConfReplication [2010047 ConfReplicationThread] - ConfOpStorage: toPush ptr=0x7ff55ebfcd50, pos=0, repo=https://sh03.acme.com:8089, op_id=481af55d46acfb6f4da973c3aac4af9e8ab2e0e6, applied_at=1748599810, asset_id=c922db4bf111d426f1e8eb78181cb8f43b185f52, asset_uri=/nobody/custom-app/passwords/credential:custom-app_realm:password:, optype=WRITE_STANZA, payload={ password = REDACTED [ { }, removable: yes ]\n }, extra_payload=
05-30-2025 10:10:10.507 +0000 DEBUG ConfReplication [1993289 TcpChannelThread] - acceptPush_Locked: status=handling, from_repo=https://sh03.acme.com:8089, to_repo=https://sh01.acme.com:8089, op_id=481af55d46acfb6f4da973c3aac4af9e8ab2e0e6, applied_at=1748599810, asset_id=c922db4bf111d426f1e8eb78181cb8f43b185f52, asset_uri=/nobody/custom-app/passwords/credential:custom-app_realm:password:, optype=WRITE_STANZA, payload={ password = REDACTED [ { }, removable: yes ]\n }, extra_payload=
05-30-2025 10:10:10.511 +0000 DEBUG ConfReplication [1993289 TcpChannelThread] - acceptPush_Locked: status=applied, reason="", from_repo=https://sh03.acme.com:8089, to_repo=https://sh01.acme.com:8089, op_id=481af55d46acfb6f4da973c3aac4af9e8ab2e0e6, applied_at=1748599810, asset_id=c922db4bf111d426f1e8eb78181cb8f43b185f52, asset_uri=/nobody/custom-app/passwords/credential:custom-app_realm:password:, optype=WRITE_STANZA, payload={ password = REDACTED [ { }, removable: yes ]\n }, extra_payload=