All Topics

Starting with the 9.2.0 release, internal metrics log event generation can be controlled by group or subgroup. If there are thousands of forwarders, the _internal index becomes the most active index and generates a lot of hot buckets. 9.2.0 provides the ability to control each metrics group/subgroup; see `interval` in https://docs.splunk.com/Documentation/Splunk/latest/Admin/limitsconf for more details. On Splunk startup, metrics.log will log all the controllable metrics groups/subgroups. Example metrics.log entries:

06-08-2024 03:14:49.659 +0000 INFO Metrics - Will log metrics_module=dutycycle:ingest at metrics_interval=30.000.

metrics_module names the controllable module logged in metrics.log. In dutycycle:ingest, dutycycle is the metrics group name and ingest is the subgroup name. Its default logging interval is 30 seconds.

06-08-2024 03:14:49.703 +0000 INFO Metrics - Will log metrics_module=tailingprocessor:tailreader0 at metrics_interval=60.000.

tailingprocessor is the group name and tailreader0 is the subgroup name (the trailing `0` is the first pipeline number). Its default logging interval is 60 seconds. The new metrics logging framework has a global default metrics logging interval of 60 seconds in limits.conf, with exceptions for some modules (30 seconds) that you will find in metrics.log:

[metrics]
interval = <integer>
* Number of seconds between logging splunkd metrics to metrics.log.
* Minimum of 10.
* Default (Splunk Enterprise): 60
* Default (Splunk Universal Forwarder): 60

There are many modules in metrics.log that are never queried. For example, queue and thruput are probably the most queried metrics, but not necessarily the others. You can increase the global default to 120 seconds:

[metrics]
interval = 120

Then customize the logging interval for the most critical metrics groups. For example, there are various `queue` metrics being logged; some are always checked, some rarely:
06-08-2024 03:14:49.603 +0000 INFO Metrics - Will log metrics_module=queue:parsingqueue at metrics_interval=30.000.
06-08-2024 03:14:49.663 +0000 INFO Metrics - Will log metrics_module=queue:httpinputq at metrics_interval=30.000.
06-08-2024 03:14:49.651 +0000 INFO Metrics - Will log metrics_module=queue:stashparsing at metrics_interval=30.000.
06-08-2024 03:14:49.603 +0000 INFO Metrics - Will log metrics_module=queue:teequeue at metrics_interval=30.000.

You can set a global default for the queue group:

[queue]
interval = 60

parsingqueue at 30 seconds:

[queue:parsingqueue]
interval = 30

stashparsing at 150 seconds:

[queue:stashparsing]
interval = 150

The interval can be set for [<group>] or [<group>:<subgroup>].
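Putting the stanzas above together, a tuned limits.conf might look like the sketch below. The interval values here are illustrative choices, not recommendations; pick them based on which metrics your searches actually query:

```ini
[metrics]
# Raise the global default from 60s to cut _internal volume
interval = 120

[queue]
# Default for all queue:* subgroups
interval = 60

[queue:parsingqueue]
# Frequently queried; keep the original 30s
interval = 30

[queue:stashparsing]
# Rarely queried; log even less often
interval = 150
```

As described above, the more specific [<group>:<subgroup>] stanza applies to that subgroup, while [<group>] covers the rest of the group.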
Hello everyone, I'm new to Splunk. Can anyone help me enable "Using visualizations to determine TTP coverage" from https://lantern.splunk.com/?title=Security%2FUCE%2FGuided_Insights%2FCyber_frameworks%2FAssessing_and_expanding_MITRE_ATT%26CK_coverage_in_Splunk_Enterprise_Security# ? See also https://docs.splunk.com/Documentation/ES/7.1.0/RBA/ViewMitreMatrixforRiskNotable#View_the_MITRE_ATT.... (Splunk Enterprise Security, Splunk Security Essentials)
Hello, I am using Splunk Cloud, and for some of our sourcetypes we have defined specific TRUNCATE values. I have a couple of questions. If a `TRUNCATE` value is not defined for a sourcetype, what is the default character limit? Is there any guideline document or rule of thumb for how to set TRUNCATE, especially on whether it is recommended to set a limit higher than 50k or 80k characters?
Can I find all the saved searches that use index=* rather than a specific index name? And all the saved searches that do not specify an index in their search at all?
I want to monitor Splunk Enterprise in a cluster environment. I monitor the Splunk infrastructure with New Relic, and I also want to use the DMC health check items. Where can I get the health check items other than by updating them? Also, please let me know if there are any other ways to monitor Splunk.
I have a search that outputs the host list by test:

index=abc
| stats count by host test
| stats count as total_count values(host) as host_list by test

which gives me a list of hosts by test like below:

test: new
host_list: abc0002 abc0003 abc0004 abc0005 abc0006 abc0007 abc0008 abc0009 abc0010 abc0011 abc0012 abc0013 abc0014 abc0015 abc0016 abc0017 abc0018 abc0019 abc0020 abc0022 abc0024 abc0025 abc0026 abc0027 abc0028 abc0029 abc0031

I would like to group the hosts into ranges, like [abc0002-abc0020] [abc0022] [abc0024-abc0029] [abc0031], instead of the whole list, by test, like below:

test: new
host_list: (same list as above)
host_array: [abc0002-abc0020] [abc0022] [abc0024-abc0029] [abc0031]

Thank you in advance, Splunkers
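SPL has no built-in way to collapse values into numeric ranges, so this usually ends up in an external lookup script or a custom search command. As an illustration of the grouping logic only (plain Python, not SPL; the function name `collapse_hosts` and the sample hostnames are mine), a minimal sketch:

```python
import re
from itertools import groupby

def collapse_hosts(hosts):
    """Collapse hostnames that share a prefix and end in digits
    (e.g. abc0002) into range strings like [abc0002-abc0020].
    Assumes every hostname ends in a zero-padded number."""
    parsed = []
    for h in sorted(hosts):
        m = re.match(r"^(.*?)(\d+)$", h)
        parsed.append((m.group(1), int(m.group(2)), len(m.group(2))))
    out = []
    # Within a run of consecutive numbers, (number - list index) is
    # constant, so it works as a groupby key alongside the prefix.
    for _, run in groupby(enumerate(parsed), key=lambda t: (t[1][0], t[1][1] - t[0])):
        items = [p for _, p in run]
        (pre, lo, w), (_, hi, _) = items[0], items[-1]
        out.append(f"[{pre}{lo:0{w}d}]" if lo == hi
                   else f"[{pre}{lo:0{w}d}-{pre}{hi:0{w}d}]")
    return " ".join(out)

print(collapse_hosts(["abc0002", "abc0003", "abc0004", "abc0006"]))
# [abc0002-abc0004] [abc0006]
```

The neighbor-aware grouping is the crux: `mvmap` in SPL maps each multivalue entry independently, so it cannot see adjacent hosts, which is why pure SPL struggles here.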
Hi AppD SMEs, when we enable RUM for application X, what options are available to pull user count details? For example: 1. Can we get the total number of users in the application for a given time window, rather than broken down by time intervals? 2. Is it possible to fetch concurrent user details from RUM? I would really appreciate your assistance and insights. Thanks, MSK
Hi, I have the following JSON object that is indexed via the default JSON extraction (INDEXED_EXTRACTIONS):

{
  "assetId": 123456,
  "cloudProvider": {
    "aws": {
      "ec2": { ... },
      "tags": [
        { "key": "AAA", "value": "aaa" },
        { "key": "BBB", "value": "bbb" },
        { "key": "CCC", "value": "ccc" }
      ]
    }
  }
}

I'm attempting to rewrite the following original search into tstats:

... | spath output=AWS_TAGS path="cloudProvider.aws"
| stats latest(AWS_TAGS) AS AWS_TAGS by assetId
| spath input=AWS_TAGS output=AWS_TAGS path="tags{}"
| eval AWS_TAGS=mvmap(AWS_TAGS, spath(AWS_TAGS, "key")."::".spath(AWS_TAGS, "value"))

This creates the AWS_TAGS multivalue list, with a result like this for each assetId:

AAA::aaa
BBB::bbb
CCC::ccc

The issue is that the JSON object found at the path 'cloudProvider.aws' does not exist for tstats, i.e. there is no JSON object value for TERM(cloudprovider.aws). That's why my original search used spath: to explicitly grab the JSON object at 'cloudProvider.aws', which let me take the latest tags for each assetId and preserve the key-value pairs with mvmap. tstats only sees the terms cloudprovider.aws.tags{}.key and cloudprovider.aws.tags{}.value. I could fetch those with tstats values(), but they may or may NOT be the latest, and it would be tricky to line them up as key-value pairs. I definitely get the fact that tstats looks for terms in tsidx files, so _raw is not searched. The ask here is: any idea how to get the cloudProvider.aws JSON object extracted for tstats at search time?
Is there a way to remove the header that comes with non-syslog sourcetypes, which includes the hostname and a timestamp with priority at the beginning of each event sent? I have configured outputs.conf, props.conf, and transforms.conf. Is there a way to remove the priority and hostname associated with the timestamp on the third-party system? Thanks
I use Splunk to ingest events from the Windows Security, Application, and System event logs. We have a scanner that is very noisy, and I would like Splunk not to ingest the events that the scanner creates. I have tried without success to use SEDCMD in my indexer's props.conf:

SEDCMD-Remove_Scanner_IP_Address = s/\b12\.34\.567\.89\b//g
SEDCMD-Remove_Scanner_Host_Name = s/Workstation_Name\s*=\s*scanner-name01\s*//g

I have also tried to blacklist the IP in each host's Splunk UF inputs.conf file:

blacklist = 12\.34\.567\.89

Would appreciate any assistance/suggestions.
How do I trace whether a server in a network path is behind a firewall? The data is presented in the table below. For example: IP 192.168.1.7 of server-A is connected to the "LoadBalancer-to-Server" network, and LoadBalancer-A is connected to both the "LoadBalancer-to-Server" network and the "Firewall-to-Loadbalancer" network. So server-A is behind a firewall. Please suggest. Thanks

ip           name            network                   behindfirewall
192.168.1.1  LoadBalancer-A  Loadbalancer-to-Server    yes
172.168.1.1  LoadBalancer-A  Firewall-to-Loadbalancer  yes
192.168.1.7  server-A        Loadbalancer-to-Server    yes
192.168.1.8  server-B        Loadbalancer-to-Server    yes
192.168.1.9  server-C        network-1                 no
192.168.1.9  server-D        network-2                 no
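The rule implied by the example — a device is behind a firewall if it shares a network with a device that also touches a Firewall-* network — can be sketched outside SPL to make the logic concrete. This rule and all names below are inferred from the sample table, not from any Splunk feature:

```python
from collections import defaultdict

def behind_firewall(rows):
    """rows: (ip, name, network) tuples.
    A device counts as behind a firewall if it connects to a
    Firewall-* network itself, or shares a network with a device
    that does (inferred rule, not an official definition)."""
    nets_by_name = defaultdict(set)   # device -> networks it touches
    names_by_net = defaultdict(set)   # network -> devices on it
    for _ip, name, net in rows:
        nets_by_name[name].add(net)
        names_by_net[net].add(name)
    # Devices directly touching a firewall-facing network
    gateways = {n for n, nets in nets_by_name.items()
                if any(net.lower().startswith("firewall") for net in nets)}
    result = {}
    for name, nets in nets_by_name.items():
        peers = set().union(*(names_by_net[net] for net in nets))
        result[name] = name in gateways or bool(peers & gateways)
    return result
```

In SPL, an analogous approach would be a self-join on the network field (for example, `stats values(name) by network`, then flag names co-located with a device that also appears on a Firewall-* network).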
Hi Team, I need to extract the string between two different special characters using regex. Could you please assist? Thank you. Here is the string; I need to extract provisionById, which sits between the final period and the semicolon:

Method End: com.bi.gb.rest.endpoint.PolicyAdminEndPoint.provisionById;  Execution Time: 7
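Assuming the wanted token always sits between the last period and the semicolon, a pattern like `\.([^.;]+);` does the job. Shown here in Python just to demonstrate the regex; the sample string is taken from the question:

```python
import re

line = ("Method End: com.bi.gb.rest.endpoint.PolicyAdminEndPoint.provisionById;"
        "  Execution Time: 7")

# [^.;]+ refuses to cross another '.' or ';', so the match anchors
# on the final dot before the semicolon.
m = re.search(r"\.([^.;]+);", line)
print(m.group(1))  # provisionById
```

In SPL, the same pattern should work as a field extraction, e.g. `| rex "\.(?<method>[^.;]+);"` (the field name `method` is arbitrary).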
I've made a dashboard to show some statistics. The information that appears on my dashboard differs from what my users see. The dashboard was designed in Dashboard Studio. I checked the time zone, and the other user and I are both in the same time zone. What could be the issue? Please guide me.
What are the environment requirements, according to Splunk best practices for large companies, for installing Splunk ES with more than 10,000 roles activated and 4,000 devices connected? What are the recommended RAM, CPU, and storage requirements?
Hi community, I need to write a query that can adjust its search string based on event time. For example, if the event time is before 2024/01/01, events should include string "A" OR "B":

index="aws" sourcetype="dev" ("A" OR "B")

Otherwise, events should include string "C" OR "D":

index="aws" sourcetype="dev" ("C" OR "D")

I have written this to build the search string, but have no idea how to make use of it:

index="aws" sourcetype="dev"
| eval search_string=if(_time < strptime("2024-01-01", "%Y-%m-%d"), "(\"A\" OR \"B\")", "(\"C\" OR \"D\")")
| search search_string

I've gotten a lot of help here, and really appreciate it!
Hi All, has anyone used the TA https://github.com/SplunkBAUG/CCA/blob/main/TA_genesys_cloud-1.0.14.spl and the Splunk Genesys app https://splunkbase.splunk.com/app/6552? I am having an issue with the TA where data stops coming in after 24 hours. Has anyone faced a similar issue with the Genesys Cloud TA?
How can I use this app? I have it installed, but I do not understand how to configure it. Thanks, ewholz
Hi All, I have created a React component which contains 3 things: school name, description, and a link to the school website. These details are stored in a lookup file called School-details.csv. How do I run a Splunk search query (for example, an inputlookup) to fetch the results and map them onto the React component built using the Splunk UI Toolkit?
Hey everyone! Would anyone have any resources on how this works? We have working scripts, mostly external API calls, that work in the testing environment; however, when attempting to configure the app to work as an adaptive response action, we are running into some problems. Are there any resources out there on this at all? We have been unable to find anything really helpful on this, or anyone to offer insight or guidance into how it is supposed to work. Any insight or suggestions would be greatly appreciated. Thank you in advance!!
Hi everyone, I'm trying to extract fields from Salesforce in a complex architecture. I created a dedicated index for extracting a log that contains the summary of the order with its various items. The structure of the objects is not editable, and the query I would like to execute is this:

SELECT Id,
  (SELECT Id,
     (SELECT Description FROM FulfillmentOrderLineItems)
   FROM FulfillmentOrders)
FROM OrderSummary

Is there a way to extract this log?