All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hello Splunkers, on many of our sites we are seeing this buckets error. Does anyone have the same issue, and how can we solve it? Any help would be appreciated.

Buckets Root Cause(s): The percentage of small buckets (100%) created over the last hour is high and exceeded the red thresholds (50%) for index=_internal, and possibly more indexes, on this indexer. At the time this alert fired, total buckets created=4, small buckets=4

Unhealthy instances: idx3, idx4
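A common cause of this health alert is events with bad or widely scattered timestamps, which force Splunk to roll hot buckets early. As a starting point for investigation, a sketch like the following uses the standard dbinspect command to list small buckets and their time ranges (the 10 MB threshold is an arbitrary assumption):

```
| dbinspect index=_internal
| eval sizeMB=round(sizeOnDiskMB, 2)
| where sizeMB < 10
| convert ctime(startEpoch) ctime(endEpoch)
| table bucketId, state, startEpoch, endEpoch, sizeMB, eventCount
```

Buckets spanning implausible time ranges usually point at timestamp extraction problems in props.conf, frequent indexer restarts, or forced bucket rolls.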
Background story: some of our customers use a site-to-site VPN to reach our corporate networks. Each customer has 3-4 network prefixes in their environment. I want to check network traffic counters to see whether the customer networks are sending or receiving any traffic to/from my corporate network. Please share some suggested searches; I'm looking for ANY type of network traffic. For example:

customer network A 192.168.1.0/24
customer network B 192.168.2.0/24
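Since the customer prefixes are CIDR blocks, the cidrmatch() eval function can tag traffic in either direction. This is only a sketch: the index, sourcetype, and field names (src_ip, dest_ip, bytes) are assumptions to be replaced with whatever your firewall or network data actually uses:

```
index=network sourcetype=firewall_traffic
| eval customer=case(
    cidrmatch("192.168.1.0/24", src_ip) OR cidrmatch("192.168.1.0/24", dest_ip), "customer A",
    cidrmatch("192.168.2.0/24", src_ip) OR cidrmatch("192.168.2.0/24", dest_ip), "customer B")
| where isnotnull(customer)
| timechart span=1h sum(bytes) AS total_bytes BY customer
```

Any non-zero total_bytes per customer over the chosen span indicates traffic crossing the VPN.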
Our Splunk data retention period is 7 days, but I can still see data from 2 years back. I am not sure why. Can anyone help with this?
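The usual explanation is that retention (frozenTimePeriodInSecs in indexes.conf) applies per bucket, not per event: a bucket is frozen only when its newest event exceeds the retention period, so old events sharing a bucket with recent ones remain searchable. A sketch of the relevant setting (the index name is hypothetical):

```
# indexes.conf -- hypothetical index
[my_index]
# 7 days in seconds; the whole bucket is frozen only once
# its NEWEST event is older than this
frozenTimePeriodInSecs = 604800
```

Buckets with a huge time span (often caused by mis-parsed timestamps) can therefore keep very old data searchable far past the nominal retention.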
I have two queries against the same index and app name, using different search terms, from which I am extracting a set of fields as below:

Query 1:
index=A cf_app_name=B "search string 1" | rex field=_raw "(?ms)Id: (?P<Id>[^,]+), service: (?P<service>[^,]+), serial: (?P<serial>[^,]+), Type: (?P<Type>[a-zA-Z-]+)" | table serial Id Type service _time

Query 2:
index=A cf_app_name=B "search string 2" | rex field=_raw "(?ms)serial\\W+(?P<serial>[^\\\\]+)\\W+\\w+\\W+(?P<Type>[^\\\\]+)\\W+\\w+\\W+\\w+\\W+(?P<Id>[a-zA-Z]+-\\d+-\\d+)\\W+\\w+\\W+(?P<gtw>[^\\\\]+)\\W+\\w+\\W+(?P<service>[^\\\\]+)" | table serial Type Id service _time

My requirement is to list all the values from Query 1 and then show a Y/N flag indicating whether there is a match in Query 2 based on the field 'Id'. I tried join and append but do not seem to be getting the right results; any suggestions will be appreciated.
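One hedged way to get the Y/N flag without join is to append the second search and aggregate by Id. This sketch reuses the Query 1 rex verbatim but simplifies the Query 2 side to extract only Id (an assumption for brevity; substitute the full Query 2 rex if the simplified pattern over-matches):

```
index=A cf_app_name=B "search string 1"
| rex field=_raw "(?ms)Id: (?P<Id>[^,]+), service: (?P<service>[^,]+), serial: (?P<serial>[^,]+), Type: (?P<Type>[a-zA-Z-]+)"
| eval in_q2="N"
| append
    [ search index=A cf_app_name=B "search string 2"
      | rex field=_raw "(?ms)(?P<Id>[a-zA-Z]+-\d+-\d+)"
      | eval in_q2="Y"
      | fields Id in_q2 ]
| stats values(serial) AS serial, values(Type) AS Type, values(service) AS service, max(in_q2) AS match_in_q2 BY Id
| where isnotnull(serial)
```

max(in_q2) picks "Y" over "N" when an Id appears in both searches, and the final where keeps only Ids that actually occurred in Query 1.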
Hello, I'm new to Splunk and I want to create reports and email notifications for when any of our systems go down. Can anyone help me with a search string for that? Thank you! Thelma
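One common starting point is the metadata command, which reports the last time each host sent data; hosts that have been silent for longer than some threshold are candidates for a "system down" alert. A sketch (the 15-minute threshold and the index wildcard are assumptions to tune):

```
| metadata type=hosts index=*
| eval minutes_silent=round((now() - recentTime) / 60, 1)
| where minutes_silent > 15
| convert ctime(recentTime) AS last_seen
| table host, last_seen, minutes_silent
```

Saved as an alert with the "Send email" action, this would notify you whenever a host stops reporting data.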
Hello, I have the following log (month, date, time, IP address, host, [system]):

2022 194 16:15:14 X01: Freq error: phase start: -13.5 ns, phase end: +4.7 ns

I'm trying to create custom fields named "Start" and "End" that hold only the positive and negative numerical values, but I am fairly new to field extraction and can't seem to find a way to tie the values to "phase start" and "phase end" without having those labels included in the field.
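A rex with named capture groups can anchor on the literal labels while capturing only the numbers. A sketch (the base search is a placeholder, and the regex assumes the values optionally carry a sign and are followed by "ns"):

```
index=my_index "Freq error"
| rex field=_raw "phase start:\s+(?<Start>[+-]?\d+(?:\.\d+)?)\s+ns.*phase end:\s+(?<End>[+-]?\d+(?:\.\d+)?)\s+ns"
| table _time, Start, End
```

For the sample event this would yield Start=-13.5 and End=+4.7, without the "phase start"/"phase end" labels.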
Can we sample events on the forwarder, before they are indexed on the indexer, to reduce event volume?
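A universal forwarder has no built-in random event sampling, but a heavy forwarder (or the indexer itself) can drop a subset of events with a props/transforms route to nullQueue. A sketch, assuming a hypothetical sourcetype and that dropping DEBUG-level events is acceptable:

```
# props.conf (heavy forwarder)
[my_sourcetype]
TRANSFORMS-drop_noise = drop_debug_events

# transforms.conf
[drop_debug_events]
REGEX = \sDEBUG\s
DEST_KEY = queue
FORMAT = nullQueue
```

Note this filters deterministically by pattern rather than sampling randomly; true random sampling before indexing is not natively supported.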
Hello, I have a calculated field called "rule" in "Network_Traffic.All_Traffic". The data model is accelerated, so the eval expression is not editable from the Web UI and I cannot see the expression used to extract/calculate the field. I searched all the *.conf files but could not find it; I was expecting it in a props.conf. I know the workaround is to temporarily disable the acceleration so that the calculated field becomes editable and I can see how it is calculated, but I would like to avoid doing that. Is there any other way, or do you know where data model calculated fields are saved? Thanks a lot, Edoardo
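One non-disruptive option is the REST API: a data model definition, including the eval expressions of its calculated fields, is stored as JSON rather than in props.conf, and can be viewed without touching acceleration. A sketch:

```
| rest /servicesNS/-/-/datamodel/model splunk_server=local
| search title="Network_Traffic"
| table title, eai:data
```

The eai:data column holds the data model JSON; on disk the same JSON typically lives under $SPLUNK_HOME/etc/apps/<app>/default/data/models/ (or local/), which is why searching props.conf finds nothing.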
Hi Team, I have a field like below:

Cost:
0.4565534553453
0.0000435463466
0.0021345667788
0.0000000005657

I want to get only the values from this Cost field that have a non-zero digit within the first 4 decimal places, i.e. only 0.4565534553453 and 0.0021345667788. How can I achieve this in my Splunk query? Please can anyone help me. Regards, NVP
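If "till 4 decimals" means a non-zero digit within the first four decimal places, a simple numeric threshold works, since any such value is at least 0.0001. A sketch (index/sourcetype are placeholders, and it assumes Cost holds one value per event; for a multivalue field, mvfilter() would be the analogue):

```
index=my_index sourcetype=my_data
| eval Cost_num=tonumber(Cost)
| where Cost_num >= 0.0001
| table Cost
```

With the sample values, this keeps 0.4565534553453 and 0.0021345667788 and drops the other two.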
Hi Splunkers, I have struggled badly trying to get this solved, with no luck. I need to join to a different search using the IP address to get the host name. Base search for the join: index=X sourcetype=server dv_ir=4311.00. In the other search, dv_name is the host name and dv_ip_address is the IP address. Any help will be appreciated. Thank you all!
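A hedged sketch using a subsearch join on the IP field; the second sourcetype (inventory) and the name of the IP field in the base search (ip_address) are assumptions to adjust:

```
index=X sourcetype=server dv_ir=4311.00
| join type=left ip_address
    [ search index=X sourcetype=inventory
      | rename dv_ip_address AS ip_address, dv_name AS host_name
      | fields ip_address, host_name ]
| table _time, ip_address, host_name
```

For large result sets, a lookup table or a combined search with `stats values(...) BY ip_address` usually scales better than join, which is subject to subsearch limits.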
I have the following sample data in a CSV file. I am trying to import it, but Splunk is unable to break the lines and detect the timestamp. Sample events:

"Jun30.22.21.55, LVVL@abc.LOCAL, InOctets, 557766140, OutOctets, 3462815293, Total MB used, 502.572679125"
"Jun30.22.21.55, ALU@abc.LOCAL, InOctets, 4238119433, OutOctets, 3683403330, Total MB used, 990.190345375"
"Jun30.22.21.55, RXGH@abc.LOCAL, InOctets, 233853544, OutOctets, 485536206, Total MB used, 89.92371875"
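These events should line-break cleanly once a sourcetype tells Splunk the timestamp layout. A sketch of props.conf, assuming (this is an interpretation) that "Jun30.22.21.55" is month, day, two-digit year, hour, minute:

```
# props.conf -- hypothetical sourcetype name
[my_csv_metrics]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^"
TIME_FORMAT = %b%d.%y.%H.%M
MAX_TIMESTAMP_LOOKAHEAD = 14
```

MAX_TIMESTAMP_LOOKAHEAD = 14 matches the length of the timestamp string so Splunk does not read past it into the next field.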
Is anyone else having issues with returning all the Business Transactions from an application using the Splunk add-on for AppDynamics? We are experiencing this across all our apps in AppD. For instance, we have one app with a couple hundred BTs, and the add-on will only capture 19 of them. When I run the same command the add-on uses as a cURL request, it returns all the BTs. We are running v1.9.0 of the add-on on Splunk 8.2.5 on Linux.
I've had quite a good look around the internet and have been unable to find an answer to this question. This question in particular touches on it, but the performance comparison is left unanswered. We are thinking about moving away from the Splunk UF to an open-source solution, which will likely only support HEC. Before making this change, I'd like to know the consequences for performance and resource usage on the indexers. What are the impacts on resource usage and index/search performance between UF and HEC?
I am not able to find the host field information for events coming from a particular machine. This is related to one particular sourcetype; logs from a different sourcetype on the same machine do have host field information. Events are reaching Splunk, but they are missing the host field. Can someone help?
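Worth checking is whether the input stanza for that sourcetype sets host explicitly, and whether a props/transforms rule for the sourcetype overrides or strips it. A sketch of an explicit host setting on the forwarder (path and names are hypothetical):

```
# inputs.conf on the forwarder
[monitor:///var/log/myapp/app.log]
sourcetype = my_sourcetype
host = my-machine-name
```

Comparing `splunk btool inputs list --debug` and `splunk btool props list my_sourcetype --debug` output between the working and broken sourcetypes is a reasonable next diagnostic step.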
Hi, we have JMS unresolved destinations showing as remote services, and this is producing broken transactions. We would like the Java agent to support the pooled-jms package (https://github.com/messaginghub/pooled-jms), as it has been the Spring recommendation for JMS 2.0 since version 2.1 (8th Jun 2020): https://github.com/spring-projects/spring-boot/wiki/Spring-Boot-2.1-Release-Notes#activemq-pooling ("If you were using activemq-pool, support has been removed in this release in favor of pooled-jms that offers the same features while being JMS 2.0 compliant.")

We are using org.messaginghub.pooled.jms.JmsPoolMessageConsumer, and the agent error reported is:

Caused by: java.lang.NoSuchMethodException: org.messaginghub.pooled.jms.JmsPoolMessageConsumer.getDestination()

Thank you, David
When I send Splunk search result data via webhook, I am only getting the first row. Is there any alternative to this?
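The webhook alert action posts a JSON payload that describes only the first result row, so a common workaround is to collapse all results into a single row before the alert fires. A sketch with hypothetical index and field names:

```
index=my_index sourcetype=my_data error_code=*
| stats values(error_code) AS error_codes, count AS total_events
```

Alternatively, a custom alert action or an external script calling the Splunk REST API can retrieve and forward the full result set.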
I have used the "Prometheus Metrics for Splunk" app from Splunkbase to ingest data from Prometheus remote write. Both Prometheus and Splunk are installed on a local Windows machine (for testing), and Prometheus remote write is used to send data to Splunk.

Splunk configuration:

```
[prometheusrw]
port = 8098
maxClients = 10

[prometheusrw://856412]
bearerToken = ABC123
index = prometheus
whitelist = *
sourcetype = prometheus:metric
disabled = 0
```

Prometheus configuration:

```
- url: "http://localhost:8098"
  authorization:
    credentials: "ABC123"
  tls_config:
    insecure_skip_verify: true
  write_relabel_configs:
    - source_labels: [__name__]
      regex: expensive.*
      action: drop
```

Prometheus error log:

```
ts=2022-07-12T11:40:22.139Z caller=dedupe.go:112 component=remote level=info remote_name=856412 url=http://localhost:8098 msg="Done replaying WAL" duration=10.5184238s
ts=2022-07-12T11:40:22.438Z caller=dedupe.go:112 component=remote level=warn remote_name=856412 url=http://localhost:8098 msg="Failed to send batch, retrying" err="Post \"http://localhost:8098\": EOF"
```

Please suggest corrections or other ways to get Prometheus data into Splunk.
Hi, I'm trying to configure a drill-down earliest offset in my notable from an Adaptive Response action. I'd like the drill-down search to use an earliest time of 2 minutes before the earliest time of the search, i.e. $info_min_time$ minus 2 minutes. I tried this configuration, but it does not seem to work properly. Is there a way to do this, i.e. to set earliest in the drill-down search? Thanks a lot, Marta
Hi Splunkers, can anyone share the link for the Splunk Demo Portal? The old link no longer works: https://o2.splunkit.io/oxygen
I am looking for a drilldown option in a boxplot visualization. I tried editing the dashboard source to add a drilldown, but it is not working. Is there any workaround for this?