{   "abcdxyz" : {     "transaction" : "abcdxyz",     "sampleCount" : 60,     "errorCount" : 13,     "errorPct" : 21.666666,     "meanResTime" : 418.71666666666664,     "medianResTime" : 264.5... See more...
{   "abcdxyz" : {     "transaction" : "abcdxyz",     "sampleCount" : 60,     "errorCount" : 13,     "errorPct" : 21.666666,     "meanResTime" : 418.71666666666664,     "medianResTime" : 264.5,     "minResTime" : 0.0,     "maxResTime" : 4418.0,     "pct1ResTime" : 368.4,     "pct2ResTime" : 3728.049999999985,     "pct3ResTime" : 4418.0,     "throughput" : 0.25086548592644625,     "receivedKBytesPerSec" : 0.16945669591340123,     "sentKBytesPerSec" : 0.3197146692547623   },   "efghxyz" : {     "transaction" : "efghxyz",     "sampleCount" : 60,     "errorCount" : 13,     "errorPct" : 21.666666,     "meanResTime" : 421.8,     "medianResTime" : 32.0,     "minResTime" : 0.0,     "maxResTime" : 3566.0,     "pct1ResTime" : 3258.5,     "pct2ResTime" : 3497.6,     "pct3ResTime" : 3566.0,     "throughput" : 0.24752066797577596,     "receivedKBytesPerSec" : 0.34477244084256037,     "sentKBytesPerSec" : 0.08463804872238082   },   "ijklxyz" : {     "transaction" : "ijklxyz",     "sampleCount" : 60,     "errorCount" : 13,     "errorPct" : 21.666666,     "meanResTime" : 27.733333333333338,     "medianResTime" : 27.5,     "minResTime" : 0.0,     "maxResTime" : 241.0,     "pct1ResTime" : 41.599999999999994,     "pct2ResTime" : 52.699999999999974,     "pct3ResTime" : 241.0,     "throughput" : 0.25115636576738737,     "receivedKBytesPerSec" : 0.3331214746541367,     "sentKBytesPerSec" : 0.08588125143891667   },   "mnopxyz" : {     "transaction" : "mnopxyz",     "sampleCount" : 60,     "errorCount" : 13,     "errorPct" : 21.666666,     "meanResTime" : 491.74999999999994,     "medianResTime" : 279.5,     "minResTime" : 0.0,     "maxResTime" : 4270.0,     "pct1ResTime" : 381.29999999999995,     "pct2ResTime" : 4076.55,     "pct3ResTime" : 4270.0,     "throughput" : 0.2440254437195985,     "receivedKBytesPerSec" : 0.16483632755942018,     "sentKBytesPerSec" : 0.2839297997262848   } } I need to create a table view from the above log event which was captured as a single event, like the below table format: samples abcdxyz efghxyz ijklxyz mnopxyz     "transaction" :             "sampleCount"                                                     "errorCount"              "errorPct"                                                "meanResTime"                                    "medianResTime"                                          "minResTime"                                               "maxResTime"                                               "pct1ResTime"                                          "pct2ResTime"                                        "pct3ResTime"                                           "throughput"                                        "receivedKBytesPerSec"                                      "sentKBytesPerSec"                                
Hi @zaks191,

Please consider the points below for better performance in your environment.
1. Be specific in searches: always use index= and sourcetype=, and add unique terms early in your search string to narrow down data quickly.
2. Filter early, transform late: place filtering commands (like where, search) at the beginning and transforming commands (stats, chart) at the end of your SPL.
3. Leverage index-time extractions: ensure critical fields are extracted at index time for faster searching, especially with JSON data.
4. Utilize tstats: for numeric or indexed data, tstats is highly efficient because it operates directly on pre-indexed data (.tsidx files), making it much faster than search | stats (see the sketch after this list).
5. Accelerate data models: define and accelerate data models for frequently accessed structured data. This pre-computes summaries, allowing tstats searches to run extremely fast.
6. Accelerate reports: for specific, repetitive transforming reports, enable report acceleration to store pre-computed results.
7. Minimize wildcards and regex: avoid leading wildcards (*term) and complex, unanchored regular expressions, as they are resource-intensive.
8. Optimize lookups: for large lookups, consider KV Store lookups or pre-generate summaries via scheduled searches.
9. Use the Job Inspector: regularly analyze slow searches with the Job Inspector to pinpoint bottlenecks (e.g., search head vs. indexer processing).
10. Review limits.conf (carefully): while not a primary fix, review settings like max_mem_usage_mb or max_keymap_rows in limits.conf after monitoring resource usage, but proceed with caution and thorough testing.
11. Set up alerts for expensive searches: use internal metrics to detect problematic searches.
12. Monitor and limit user search concurrency: users running unbounded or wide time-range ad hoc searches can harm performance.

Happy Splunking
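As a quick illustration of point 4, a minimal sketch; index=web and sourcetype=access_combined are placeholder values to replace with your own:

``` classic event search: scans raw events ```
index=web sourcetype=access_combined | stats count by host

``` tstats equivalent: reads only the .tsidx files, typically much faster ```
| tstats count where index=web sourcetype=access_combined by host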
Hi @Sahansral,

Please try executing the following btool command on the SH:

$SPLUNK_HOME/bin/splunk cmd btool user-prefs list --debug

The output should look like this:

/opt/splunk/etc/users/someuser/user-prefs/local/user-prefs.conf [general] lang = de

If you find lang = de, that's the actual problem: Splunk will try to redirect to /de/ even though it should be /de-DE/, and /de/ doesn't exist.

If the issue persists for all users, update the following in the local config:

$SPLUNK_HOME/etc/system/local/user-prefs.conf
[general]
lang = de-DE

and restart Splunk once.

Note: if the issue affects only a specific user, make the change at the user level instead.

Please let me know if it worked.

Happy Splunking!!
Hi @verbal_666,

If the indexer resource usage is stable and this happens periodically, it points to a network issue. Try capturing a pcap during the delay window and check for dropped ACKs, then engage the network or firewall team to analyze the traffic and session timeouts; they could be affecting Splunk traffic.
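Alongside the pcap, a quick indexer-side sanity check is to look for blocked queues in the internal metrics during the delay window. A minimal sketch; host=idx* is a placeholder for your indexer hostnames:

index=_internal source=*metrics.log* group=queue blocked=true host=idx*
| timechart span=1m count by name

If the indexer queues are not blocking while forwarders still wait on ACKs, that strengthens the case for a network-level problem.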
I agree with you on that, if your CPU, IOPS, and searches all seem steady. Some network appliances have a default TCP session timeout; if forwarder/indexer sessions sit idle or ACKs are delayed just enough, the connection may be dropped, forcing re-establishment and buffering. Network switches/routers might also prune idle TCP flows, which affects forwarders that don't constantly send.

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
The strange thing is that the resource usage is quite even the whole time from 09:00 to 17:00, with some "normal" CPU peaks (I have to add some indexers ASAP), and the same number and quality of searches (none of them seems to create a loop or resource bottleneck!!!). I was also wondering if some network device does a "refresh" (every hour), maybe breaking the indexers' ACK responses 🤷‍ 🤷‍ 🤷‍ Quite strange...
@verbal_666 Splunk doesn't offer a built-in scheduler for bucket management tasks like rolling or retention. I would focus on resource monitoring, and possibly scaling your indexer infrastructure, rather than manipulating Splunk's internal maintenance timing. That said, you can consider the tuning below, though it is not a recommended approach (a sketch follows this list):
- Tune max_peer_rep_load and max_peer_build_load in server.conf; reducing these values throttles replication.
- Adjust forwarder behavior by editing autoLBFrequency; this reduces how often forwarders switch indexers, lowering the channel creation rate.

https://community.splunk.com/t5/Getting-Data-In/Why-did-ingestion-slow-way-down-after-I-added-thousands-of-new/m-p/465796

Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
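For illustration only, a sketch of where those settings live; the stanza name primary_indexers and all values are made-up examples, not recommendations (max_peer_rep_load and max_peer_build_load are set under [clustering] on the cluster manager):

# server.conf (cluster manager) - example values only
[clustering]
max_peer_rep_load = 3
max_peer_build_load = 1

# outputs.conf (forwarders) - example value only
[tcpout:primary_indexers]
autoLBFrequency = 120

Test any change like this in a non-production environment first.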
And what if your stats wants a _time group?

index=* source=/my.log
| bin span=5m _time
| stats count by _time[,source]

With input like:
00:00 5
00:05 (no records, skipped)
00:10 10
00:15 20

the result will be:
00:00 5
00:10 10
00:15 20

The only way in this case is to use a timechart:

index=* source=/my.log
| timechart span=5m count by source

00:00 5
00:05 0
00:10 10
00:15 20
Hi. During the day, some of my indexers completely stop sending back ACKs, so many agents keep data queued until the ACK arrives and the flow restarts (in some cases 15/20 minutes pass!!!). Meanwhile, obviously, I have many data delays and ACK errors. This happens at certain hours, from 09:00 to 17:00; during very high data ingestion the issue is clearly visible, while during the other hours it is transparent, no issue (little data flowing and little user interaction). I'm wondering: could this be an indexer internal task that manages indexes/buckets, to optimize the system and manage retention? If so, is this task "editable" to run "once per day only" (in night hours)? Thanks.
Hi livehybrid, I checked our test and production environments. Neither of them has a lang setting.
This should be accepted as the solution. This workaround works; not sure why Splunk hasn't put it in the known issues section of the docs.
Were you able to find a solution for this?
Just use the <fields> element in your <table> to restrict which fields are shown in the table. All other fields are still available for drilldown with $row.x$. https://docs.splunk.com/Documentation/Splunk/latest/Viz/PanelreferenceforSimplifiedXML#table
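A minimal sketch, with hypothetical field names (see the linked panel reference for the exact <fields> syntax in your version): the search returns four fields, the table shows three, and the hidden details field is still usable in the drilldown:

<table>
  <search>
    <query>index=main | table user action status details</query>
  </search>
  <!-- show only these three columns -->
  <fields>["user","action","status"]</fields>
  <drilldown>
    <!-- the hidden field is still addressable via $row.details$ -->
    <set token="details_tok">$row.details$</set>
  </drilldown>
</table>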
Please advise.
I am learning to build Splunk (9.4.3) on Linux.
In the Web UI's Forwarder Management, the agent is not displayed.
Does anyone know a solution?
The Deployment Server is co-located with the License Manager and Monitoring Server. (XX.XX.XX.10)
The client is a Heavy Forwarder. (XX.XX.XX.8)
In the Monitoring Console, both the DS and the client show Status=Up.

- Each conf is configured as follows:
-------[Client(XX.XX.XX.8)]-------
$ sudo cat /opt/splunk/etc/system/local/deploymentclient.conf
[deployment-client]
disabled = false
clientName = HF1
[target-broker:deploymentServer]
targetUri = XX.XX.XX.10:8089

$ sudo /opt/splunk/bin/splunk show deploy-poll
WARNING: Server Certificate Hostname Validation is disabled. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Your session is invalid. Please login.
Splunk username: siv_admin
Password:
Deployment Server URI is set to "XX.XX.XX.10:8089".
$
-------------
-------[DS(XX.XX.XX.10)]-------
$ sudo cat /opt/splunk/etc/system/local/serverclass.conf
[serverClass:HFs]
whitelist.0 = host=heavy*
whitelist.1 = XX.XX.*.*
whitelist.2 = host=HF*
whitelist.3 = clientName:HF*
[serverClass:HFs:app:app_test]
stateOnClient = enabled
-------------

- What I have checked:
-------[Client(XX.XX.XX.8)]-------
# Client => DS: connectivity check gets a response
$ curl -k https://XX.XX.XX.10:8089/services/server/info -u admin:XXXXX
<?xml version="1.0" encoding="UTF-8"?>
・・・・・
<s:list>
<s:item>indexer</s:item>
<s:item>license_master</s:item>
<s:item>license_manager</s:item>
<s:item>deployment_server</s:item>
<s:item>search_head</s:item>
<s:item>kv_store</s:item>
</s:list>
・・・・・
--------------
-------[DS(XX.XX.XX.10)]-------
# No client settings are configured on the DS
$ sudo /opt/splunk/bin/splunk btool deploymentclient list --debug
$
# DS => Client: connectivity check gets a response
$ curl -k https://XX.XX.XX.8:8089/services/server/info -u admin:XXXXX
<?xml version="1.0" encoding="UTF-8"?>
・・・・・
<s:key name="server_roles">
<s:list>
<s:item>deployment_client</s:item>
<s:item>search_peer</s:item>
<s:item>kv_store</s:item>
</s:list>
</s:key>
・・・・・
--------------

There are some other things I am wondering about.
- The cache (serverclass_cache.json) is not generated.
- The app created for testing appears to be distributed successfully.
-------[Client(XX.XX.XX.8)]-------
$ sudo ls -l /opt/splunk/var/run/HFs
total 12
-rw-------. 1 splunk splunk 10240 Jul 22 23:23 app_test-1753173390.bundle
--------------
- splunkd.log also shows entries suggesting connectivity is working:
-------[DS(XX.XX.XX.10)]-------
07-23-2025 23:30:28.287 +0000 INFO PubSubSvr [2431 TcpChannelThread] - Subscribed: channel=deploymentServer/phoneHome/default/reply/heavy-1/HF1 connectionId=connection_XX.XX.XX.8_8089_heavy-1.internal.cloudapp.net_heavy-1_HF1 listener=0x7fbd68ccc400
--------------

Thank you in advance.
Please can you provide some sample events to demonstrate your issue?
Check out this API reference for using SignalFlow through the API: https://dev.splunk.com/observability/reference/api/signalflow/latest#endpoint-create-websocket-connection

Also, if you want to work within Splunk Cloud/Enterprise, you can use the Observability Cloud Infrastructure Monitoring TA, which gives you the sim command in your SPL; you can use SignalFlow there to get that metric. https://splunkbase.splunk.com/app/5247
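As a rough sketch of the second option; treat the exact sim invocation as an assumption to verify against the add-on's docs for your version, and the metric name is just an example:

| sim flow query="data('cpu.utilization').mean().publish()"

The SignalFlow program inside query is the same program text you would send over the WebSocket API.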
Interestingly, when using tostring() before the sort, it is still identified as a number?! The only way I can make it sort as a string is using | sort str(fruit), as @richgalloway mentioned.

Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
Hi @pedropiin,

Is this the sort of thing you're looking for? It's a little complicated to set up because of the way the expression works in <format> (it can't reference other fields), so you need to create an mv field containing the two values you want to compare, then use CSS to hide the other! Anyway, let me know what you think!

Full example code:

<dashboard version="1.1">
  <label>Colour smaller number cells</label>
  <description>App1 should be green if App1&lt;App2, App2 should be green if App2&lt;App1</description>
  <row>
    <panel>
      <html>
        <style>
          #tableCellColour table tbody td div.multivalue-subcell[data-mv-index="1"]{
            display: none;
          }
        </style>
      </html>
      <table id="tableCellColour">
        <search>
          <query>| makeresults count=10
| streamstats count AS row
| eval "Hidden Avg Time App1" = (random() % 100) + 1
| eval "Hidden Avg Time App2" = (random() % 100) + 1
| eval "Avg Time App1" = mvappend('Hidden Avg Time App1', 'Hidden Avg Time App2')
| eval "Avg Time App2" = mvappend('Hidden Avg Time App2', 'Hidden Avg Time App1')
| fields - Hidden*
| table _time *</query>
          <earliest>@d</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">50</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
        <format type="color" field="Avg Time App1">
          <colorPalette type="expression">case(mvindex(value,1) &gt; mvindex(value,0), "#1ce354", 1=1, "#de2121")</colorPalette>
        </format>
        <format type="color" field="Avg Time App2">
          <colorPalette type="expression">case(mvindex(value,1) &gt; mvindex(value,0), "#1ce354", 1=1, "#de2121")</colorPalette>
        </format>
      </table>
    </panel>
  </row>
</dashboard>

Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing
When I add | eval type=typeof(fruit) to the query, the results say the numbers are indeed numbers rather than strings. That would explain the sort. When I use | sort str(fruit), the results are in the expected lexicographical order. FWIW, the docs do say "Numeric data is sorted as you would expect for numbers and the sort order is specified as ascending or descending."
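For anyone landing here later, a minimal repro of both observations; the mixed-type fruit field is fabricated with makeresults:

| makeresults count=4
| streamstats count
``` fabricate a field holding three numbers and one string ```
| eval fruit=case(count=1, 10, count=2, 2, count=3, 100, count=4, "apple")
| eval type=typeof(fruit) ``` shows Number for 10, 2, 100 ```
| sort str(fruit) ``` lexicographic: 10, 100, 2, apple ```

Swap the last line for | sort fruit to see the numeric ordering come back.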
Further to my last reply, I've tested with the following which I think does what you need <form version="1.1"> <label>Testing</label> <row> <panel> <title>Support cases</title> <table id="myTable"> <search> <query>index=_internal | head 3 | eval "Time (UTC)"=_time | eval "Support ID"="Testing" |eval _epoch_time=_time, _epoch_plus60=_time+60 | table "Support ID","Time (UTC)", _time, _epoch_time, _epoch_plus60</query> <earliest>-15m</earliest> <latest>now</latest> </search> <!-- Drilldown Configuration --> <!-- Enable row-level drilldown --> <option name="drilldown">row</option> <option name="refresh.display">progressbar</option> <drilldown> <link target="_blank">/app/search/search?q=search index=sony* sourcetype=sony_logs support_id="$click.value$"&amp;earliest=$row._epoch_time$&amp;latest=$row._epoch_plus60$</link> </drilldown> </table> </panel> </row> </form>  Did this answer help you? If so, please consider: Adding karma to show it was useful Marking it as the solution if it resolved your issue Commenting if you need any clarification Your feedback encourages the volunteers in this community to continue contributing