All Posts


Hi @livehybrid , Yes, you've hit on my exact point. I'm trying to determine the best way to contact support – specifically, if their assistance is limited to paying customers or if there's an avenue for the general public to inquire. This is precisely why I brought my question to the Splunk forum. If you have any information on how to reach the Splunk or TruSTAR technical teams, I would greatly appreciate your guidance.
Hi @TestUser, Could you please frame your question with more detail and clarity so that other Splunkers are able to help answer it?
Hi @LoMueller Based on the existing code for this TA (https://github.com/thatfrankwayne/TA_oui-lookup), it uses the Python requests library, so it should be possible to implement proxy support with some code changes (a rough sketch of what that could look like is below). There are no contact details for the author (@frankwayne) in the app, so I would recommend raising an issue on GitHub with the feature request; hopefully it can then be worked into a future version.
Did this answer help you? If so, please consider:
- Adding karma to show it was useful
- Marking it as the solution if it resolved your issue
- Commenting if you need any clarification
Your feedback encourages the volunteers in this community to continue contributing
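For illustration only, here is a minimal sketch of how the requests library can route that download through a proxy; the URL, proxy address, and function name are assumptions, not the TA's actual code:

    import requests

    def download_oui_file(url, proxy_url=None):
        # Download the vendor OUI file, optionally through an HTTP(S) proxy.
        # url       -- placeholder for the OUI download URL the TA uses
        # proxy_url -- hypothetical setting, e.g. "http://proxy.example.com:8080"
        proxies = {"http": proxy_url, "https": proxy_url} if proxy_url else None
        response = requests.get(url, proxies=proxies, timeout=60)
        response.raise_for_status()
        return response.text

In the TA itself this would presumably be exposed as a configurable proxy setting rather than hard-coded.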
Is it possible to add the option to use a proxy URL to download the vendor OUI file? Thanks
Assuming that this is an accurate representation of your data, you could try something like this
| eval array=json_array_to_mv(json_keys(_raw),false())
| foreach array mode=multivalue [| eval set=mvappend(set,json_extract(_raw,<<ITEM>>))]
| eval row=mvrange(0,mvcount(array))
| mvexpand row
| eval key=mvindex(array,row)
| eval fields=mvindex(set,row)
| table key fields
| fromjson fields
| fields - fields
| transpose 0 header_field=key column_name=samples
There is no such functionality in SimpleXML. And it's understandable to some extent - dashboards are meant for interactive work. You could probably create something like that with custom JS, but it won't be easy.
3, 4 and partially 7 - not really.
3. Indexed fields - unless they contain additional metadata not present in the original events - are usually best avoided entirely. There are other ways of achieving the same result.
4. You can't use tstats instead of a stats-based search just because the field is a number. It requires specific types of data. It is true, though, that if you can use tstats instead of normal stats, it's way faster.
7. Wildcards at the beginning of a search term should not just be "avoided", they should not be used at all unless you have a very, very good reason for using them, know and understand the performance impact, and can significantly limit the set of events to be searched by other means. The remark about regexes is generally valid, but this is most often not the main reason for performance problems.
Hi @nagendra1111 , Splunk does not currently support sending dashboard panel searches directly to background jobs from the UI.
Hi thahir, the btool output doesn't find any lang setting. I think Splunk tries to honor the language preference of the browser. When we change to English in the browser, en-GB is added correctly to the link and the link works just fine. In our browser (e.g. MS Edge) we have multiple language packs installed. There's one named "German (Germany)" and another one called "German". I tested different languages. Our test environment can handle all of them. Our production environment messes up when we use the language pack "German", without the country in brackets.
{   "abcdxyz" : {     "transaction" : "abcdxyz",     "sampleCount" : 60,     "errorCount" : 13,     "errorPct" : 21.666666,     "meanResTime" : 418.71666666666664,     "medianResTime" : 264.5... See more...
{   "abcdxyz" : {     "transaction" : "abcdxyz",     "sampleCount" : 60,     "errorCount" : 13,     "errorPct" : 21.666666,     "meanResTime" : 418.71666666666664,     "medianResTime" : 264.5,     "minResTime" : 0.0,     "maxResTime" : 4418.0,     "pct1ResTime" : 368.4,     "pct2ResTime" : 3728.049999999985,     "pct3ResTime" : 4418.0,     "throughput" : 0.25086548592644625,     "receivedKBytesPerSec" : 0.16945669591340123,     "sentKBytesPerSec" : 0.3197146692547623   },   "efghxyz" : {     "transaction" : "efghxyz",     "sampleCount" : 60,     "errorCount" : 13,     "errorPct" : 21.666666,     "meanResTime" : 421.8,     "medianResTime" : 32.0,     "minResTime" : 0.0,     "maxResTime" : 3566.0,     "pct1ResTime" : 3258.5,     "pct2ResTime" : 3497.6,     "pct3ResTime" : 3566.0,     "throughput" : 0.24752066797577596,     "receivedKBytesPerSec" : 0.34477244084256037,     "sentKBytesPerSec" : 0.08463804872238082   },   "ijklxyz" : {     "transaction" : "ijklxyz",     "sampleCount" : 60,     "errorCount" : 13,     "errorPct" : 21.666666,     "meanResTime" : 27.733333333333338,     "medianResTime" : 27.5,     "minResTime" : 0.0,     "maxResTime" : 241.0,     "pct1ResTime" : 41.599999999999994,     "pct2ResTime" : 52.699999999999974,     "pct3ResTime" : 241.0,     "throughput" : 0.25115636576738737,     "receivedKBytesPerSec" : 0.3331214746541367,     "sentKBytesPerSec" : 0.08588125143891667   },   "mnopxyz" : {     "transaction" : "mnopxyz",     "sampleCount" : 60,     "errorCount" : 13,     "errorPct" : 21.666666,     "meanResTime" : 491.74999999999994,     "medianResTime" : 279.5,     "minResTime" : 0.0,     "maxResTime" : 4270.0,     "pct1ResTime" : 381.29999999999995,     "pct2ResTime" : 4076.55,     "pct3ResTime" : 4270.0,     "throughput" : 0.2440254437195985,     "receivedKBytesPerSec" : 0.16483632755942018,     "sentKBytesPerSec" : 0.2839297997262848   } } I need to create a table view from the above log event which was captured as a single event, like the below table format: samples abcdxyz efghxyz ijklxyz mnopxyz     "transaction" :             "sampleCount"                                                     "errorCount"              "errorPct"                                                "meanResTime"                                    "medianResTime"                                          "minResTime"                                               "maxResTime"                                               "pct1ResTime"                                          "pct2ResTime"                                        "pct3ResTime"                                           "throughput"                                        "receivedKBytesPerSec"                                      "sentKBytesPerSec"                                
Hi @zaks191, Please consider the points below for better performance in your environment.
1. Be Specific in Searches: Always use index= and sourcetype= and add unique terms early in your search string to narrow down data quickly.
2. Filter Early, Transform Late: Place filtering commands (like where, search) at the beginning and transforming commands (stats, chart) at the end of your SPL.
3. Leverage Index-Time Extractions: Ensure critical fields are extracted at index time for faster searching, especially with JSON data.
4. Utilize tstats: For numeric or indexed data, tstats is highly efficient as it operates directly on pre-indexed data (.tsidx files), making it much faster than search | stats (see the sketch after this list).
5. Accelerate Data Models: Define and accelerate data models for frequently accessed structured data. This pre-computes summaries, allowing tstats searches to run extremely fast.
6. Accelerate Reports: For specific, repetitive transforming reports, enable report acceleration to store pre-computed results.
7. Minimize Wildcards and Regex: Avoid leading wildcards (*term) and complex, unanchored regular expressions, as they are resource-intensive.
8. Optimize Lookups: For large lookups, consider KV Store lookups or pre-generate summaries via scheduled searches.
9. Use the Job Inspector: Regularly analyze slow searches with the Job Inspector to pinpoint bottlenecks (e.g., search head vs. indexer processing).
10. Review limits.conf (Carefully): While not a primary fix, review settings like max_mem_usage_mb or max_keymap_rows in limits.conf after monitoring resource usage, but proceed with caution and thorough testing.
11. Set Up Alerts for Expensive Searches: Use internal metrics to detect problematic searches.
12. Monitor and Limit User Search Concurrency: Users running unbounded or wide time-range ad hoc searches can harm performance.
Happy Splunking
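As a rough illustration of points 2 and 4, here is a sketch comparing a plain stats search with an equivalent tstats search; the index, sourcetype and field names are made-up examples, and tstats only works like this against indexed fields or an accelerated data model:

Event search with stats:
    index=web sourcetype=access_combined | stats count by host

Roughly equivalent tstats search over the indexed fields:
    | tstats count where index=web sourcetype=access_combined by host

The tstats version reads counts straight from the .tsidx files instead of retrieving and parsing every raw event, which is where the speedup comes from.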
Hi @Sahansral, Please try to execute the below btool command on the SH:
$SPLUNK_HOME/bin/splunk cmd btool user-prefs list --debug
The output should look like the below:
/opt/splunk/etc/users/someuser/user-prefs/local/user-prefs.conf
[general]
lang = de
If you find lang = de, that's the actual problem - Splunk will try to redirect to /de/ even though it should be /de-DE/, and /de/ doesn't exist. So, if the issue persists for all users, try to update the below in the local directory:
$SPLUNK_HOME/etc/system/local/user-prefs.conf
[general]
lang = de-DE
and restart Splunk once.
Note: if the issue is only for a specific user, then you need to make the change at the user level. Please let me know if it worked. Happy Splunking!!
Hi @verbal_666, if the indexer resource usage is stable and this happens periodically, it points to a network issue. Try to capture a pcap during the delay window and check for dropped ACKs, then engage the network or firewall team to analyze the traffic and session timeouts - they could be affecting Splunk traffic.
I agree with you on that, if your CPU, IOPS, and searches all seem steady. Some network appliances have default TCP session timeouts; if forwarder/indexer sessions sit idle or ACKs are delayed just long enough, the connection may be dropped, forcing re-establishment and buffering. Network switches/routers might also prune idle TCP flows, which affects forwarders that don't constantly send data.
Regards, Prewin
Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
The strange thing is that resource usage is pretty much constant the whole time from 09:00 to 17:00, with some "normal" CPU peaks (I have to add some Indexers asap), and the same number and quality of searches (none of them seems to create a loop or resource bottleneck!!!). I was also wondering if some network device does a "refresh" (every hour), maybe breaking the Indexers' ACK responses 🤷 Quite strange...
@verbal_666  Splunk doesn’t offer a built-in scheduler for bucket management tasks like rolling or retention. I would say focus on resource monitoring, and possibly scaling your indexer infrastruct... See more...
@verbal_666  Splunk doesn’t offer a built-in scheduler for bucket management tasks like rolling or retention. I would say focus on resource monitoring, and possibly scaling your indexer infrastructure, not on manipulating Splunk's internal maintenance timing. But you can consider below possible tuning, but not a recommended approach. -Tune max_peer_rep_load and max_peer_build_load in server.conf reduce these values to throttle replication -Adjust forwarder behavior by editing autoLBFrequency - reduces how often forwarders switch indexers, lowering channel creation rate #https://community.splunk.com/t5/Getting-Data-In/Why-did-ingestion-slow-way-down-after-I-added-thousands-of-new/m-p/465796 Regards, Prewin Splunk Enthusiast | Always happy to help! If this answer helped you, please consider marking it as the solution or giving a Karma. Thanks!
And what if your stats wants a _time group?
index=* source=/my.log | bin span=5m _time | stats count by _time[,source]
Suppose the data per 5-minute bucket is:
00:00 5
00:05 (no records, skipped)
00:10 10
00:15 20
The result will be:
00:00 5
00:10 10
00:15 20
The only way in this case is to use a timechart:
index=* source=/my.log | timechart span=5m count by source
00:00 5
00:05 0
00:10 10
00:15 20
Hi. During the day, some of my Indexers completely stop sending back the ACK, so many agents keep data queued until the ACK arrives and the flow restarts (in some cases 15-20 minutes pass!!!). Meanwhile, obviously, I have many data delays and ACK errors. This happens at certain hours, from 09:00 to 17:00; during very high data ingestion the issue is clearly visible, while during the other hours it is transparent and there is no issue (little data flowing and little user interaction). I'm wondering, maybe it's an Indexer internal task that manages indexes/buckets, to optimize the system and handle retention? If so, is this task "editable" so it runs "once per day only" (in night hours)? Thanks.
Hi livehybrid, I checked our test and production environments. Neither of them has a lang setting.
This should be accepted as the solution. This workaround works; I'm not sure why Splunk hasn't put it in the known issues section of the docs.