All Topics


Hi, I am facing an executable-permission issue with a few scripts in a Splunk app and seeing these errors on various search heads. What is the best way to fix it? Can someone share a script or a fix if you have come across this before? Thanks in advance.
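A minimal shell sketch for finding and then fixing scripts that are missing the execute bit. The app path is an example (an assumption, not taken from the post); adjust $SPLUNK_HOME and the app name to match your deployment:

# List .sh scripts under the app's bin directory that lack the owner
# execute bit (path is an example; adjust to your app)
find /opt/splunk/etc/apps/your_app/bin -type f -name "*.sh" ! -perm -u+x -print

# Once you have reviewed the list, add read/execute permissions
find /opt/splunk/etc/apps/your_app/bin -type f -name "*.sh" ! -perm -u+x -exec chmod 755 {} \;

On a search head cluster, note that a manual chmod will be reverted by the next bundle push unless the permissions are fixed at the source (the deployer or the app package itself).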
Is there a good step-by-step, practical, hands-on, how-to guide, starting at the first step and ending at successful completion, for this: ingest AWS CloudWatch logs into Splunk Enterprise running on an EC2 instance in the same AWS environment. I've read a lot of documents, tried different things, and followed a couple of videos, and I'm able to see CloudWatch configuration entries in my main index, but so far I have not gotten any CloudWatch logs. I am not interested in deep architectural understanding. I just want to start from the very beginning at the true step one, and end at the last step with logs showing up in my main index. Also, the community "ask a question" page requires an "associated App", and I picked one from the available list, but I don't care which app it is, I just want to use the one that works. Thank you very much in advance.
We have a table where I see no data for a few columns. I tried fillnull value=0 but it's not working. This only happens when there is no count for the entire column. For example, for invalidcount we have data for Login but no data for the other applications, so zero values were filled in automatically; but for rejectedcount, trmpcount, and topiccount there is no data for any application, and the 0 value is not getting filled in.

Application    incomingcount   rejectedcount   invalidcount   topcount   trmpcount   topiccount
Login          1                               2              5
Success        8                               0              2
Error          0                               0              10
logout         2                               0              4
Debug          0                               0              22
error-state    0                               0              45
normal-state   0                               0              24
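For what it's worth, fillnull with no field list only fills fields that already exist somewhere in the result set, which matches the symptom above: a column that never appears (rejectedcount, trmpcount, topiccount) is never created, so there is nothing to fill. Naming the fields explicitly makes fillnull create them. A minimal sketch, with the field names taken from the table above:

... your existing search ...
| fillnull value=0 incomingcount rejectedcount invalidcount topcount trmpcount topiccount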
Hello, I have the following data. I want to return tabled data if the events happened within 100 ms and they match by the same hashCode and the same thirdPartyId. So essentially the search has to be sorted by each combination of thirdPartyId and hashCode, and then compare events line by line to see whether the previous line and the current one happened within 100 ms. What should the query look like?

| makeresults format=csv data="startTS,thirdPartyId,hashCode,accountNumber
2024-04-16 21:53:02.455-04:00,AAAAAAAA,00000001,11111111
2024-04-16 21:53:02.550-04:00,AAAAAAAA,00000001,11112222
2024-04-16 21:53:02.650-04:00,BBBBBBBB,00001230,22222222
2024-04-16 21:53:02.650-04:00,CCCCCCCC,00000002,12121212
2024-04-16 21:53:02.730-04:00,DDDDDDDD,00000005,33333333
2024-04-16 21:53:02.830-04:00,DDDDDDDD,00000005,33334444
2024-04-16 21:53:02.670-04:00,BBBBBBBB,00000002,12121212
2024-04-16 21:53:02.700-04:00,CCCCCCCC,00000002,21212121"
| sort by startTS, thirdPartyId
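One possible shape for the answer, offered as a hedged sketch: parse startTS into _time, sort by the group keys, and use streamstats to pull the previous event's time within each thirdPartyId/hashCode combination. The 0.1 threshold is 100 ms expressed in seconds; adjust < vs <= to taste:

| makeresults format=csv data="startTS,thirdPartyId,hashCode,accountNumber
2024-04-16 21:53:02.455-04:00,AAAAAAAA,00000001,11111111
2024-04-16 21:53:02.550-04:00,AAAAAAAA,00000001,11112222
2024-04-16 21:53:02.650-04:00,BBBBBBBB,00001230,22222222
2024-04-16 21:53:02.650-04:00,CCCCCCCC,00000002,12121212
2024-04-16 21:53:02.730-04:00,DDDDDDDD,00000005,33333333
2024-04-16 21:53:02.830-04:00,DDDDDDDD,00000005,33334444
2024-04-16 21:53:02.670-04:00,BBBBBBBB,00000002,12121212
2024-04-16 21:53:02.700-04:00,CCCCCCCC,00000002,21212121"
| eval _time = strptime(startTS, "%Y-%m-%d %H:%M:%S.%3N%:z")
| sort 0 thirdPartyId hashCode _time
| streamstats current=f last(_time) as prev_time by thirdPartyId hashCode
| eval gap_sec = _time - prev_time
| where gap_sec <= 0.1
| table startTS thirdPartyId hashCode accountNumber gap_sec

This keeps the later event of each qualifying pair; if you need both rows of a pair in the output, carry the previous row's values along with additional streamstats last(...) clauses.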
This is an odd one happening on each of our indexers. The same behavior happens quite frequently: we will get exactly 11 of these Remote token requests from splunk-system-user, and exactly 1 of them will fail. Here is how it looks in the audit logs.

04-22-2024 21:30:31.964 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:31.964, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:31.986 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:31.986, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:32.384 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:32.384, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:32.395 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:32.395, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:40.687 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:40.687, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:40.694 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:40.694, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:46.803 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:46.803, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:46.815 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:46.815, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:47.526 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:47.526, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:47.542 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:47.542, user=splunk-system-user, action=Remote token requested, info=success]
04-22-2024 21:30:55.317 -0700 INFO  AuditLogger - Audit:[timestamp=04-22-2024 21:30:55.317, user=splunk-system-user, action=Remote token requested, info=failed]

My problem is that I can't do much more with this information. I have no notion of where these requests are coming from, since no other information is included here. Is there anything else I can investigate? The number 11 doesn't seem to line up with anything I can think of either: there are 3 search heads, 3 indexers, and 1 cluster manager in this particular deployment. Not sure where the 11 requests come from.
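A couple of hedged starting points in SPL. The first confirms the cadence of the pattern across hosts; the second looks at what else splunkd logged in the seconds around a known failure (the earliest/latest values are taken from the example above and will need adjusting for each occurrence):

index=_audit action="Remote token requested"
| timechart span=1m count by info

index=_internal sourcetype=splunkd (log_level=WARN OR log_level=ERROR)
    earliest="04/22/2024:21:30:50" latest="04/22/2024:21:31:05"
| stats count by host, component

If a component such as one related to distributed search or bundle replication consistently logs around the failure, that would narrow down which internal activity is requesting the tokens.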
We set up two cluster managers behind a load balancer, according to this document. According to the document, the active manager should respond with 200 to the health-check probe, while the standby manager responds with 503. However, in our case, both respond with 200. In addition, when running the following command on each cluster manager, instead of listing all cluster managers, only the current node is output. What could be the problem in the setup?

splunk cluster-manager-redundancy -show-status
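For comparison, a sketch of the server.conf stanzas that cluster manager redundancy expects on each manager, written from memory of the Splunk 9.x docs; the hostnames are placeholders, and every setting name should be verified against the server.conf spec for your version. If the [clustermanager:...] stanzas are missing or inconsistent between the two nodes, each manager considers itself standalone and active, which would explain both returning 200 and -show-status listing only itself:

[clustering]
mode = manager
manager_switchover_mode = auto

[clustermanager:cm1]
manager_uri = https://cm1.example.com:8089

[clustermanager:cm2]
manager_uri = https://cm2.example.com:8089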
What if there was a way you could keep all the metrics data you need while saving on storage costs? This is now possible with the launch of Archived Metrics in Splunk Infrastructure Monitoring for US-AWS customers! Archived Metrics is a new data platform capability within Metrics Pipeline Management that allows you to dynamically sort and store less-critical metrics data in a low-cost data tier that is cheaper than real-time storage. Need to use those stored metrics for an ad-hoc analysis? Use Route Exceptions to create rules and redirect the subset of metrics you need into the real-time tier! The new Archived Metrics tier extends the powerful metrics aggregation and filtering capabilities of Metrics Pipeline Management, giving you multiple benefits, including:

1. Better cost control: save significantly on your monitoring bills, up to 85%, by using Archived Metrics for high-cardinality data.
2. Better service reliability: improve query performance by having your charts and detectors operate on aggregated metrics.
3. Better data management: retain access to granular, customer-level data for pinpoint incident investigation and customer issue resolution.

Check out this video to learn more! Got more questions? Our Product Documentation can help you out. Archived Metrics is available starting April 23rd in the US0 and US1 AWS realms; other regions will follow. Happy Splunking!
Hi, the size of my Splunk database is around 1 TB. I would like to know about all available indexes and especially all of the associated sourcetypes and their event counts. The search in the web UI works with no problem for the last 24 hours, but searching over all of the data takes forever and times out. I'm aware that saved searches would be an option, but I'm curious to know whether a script would work that recursively scans the database and processes every SourceTypes.data file, like:

/opt/splunk/var/lib/splunk/sampledb/db/db_1680195600_1672423200_0/SourceTypes.data
/opt/splunk/var/lib/splunk/sampledb/db/db_1698782400_1680199200_1/SourceTypes.data
...

Would this be a feasible option? Many thanks
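Before scripting against the on-disk .data files, it may be worth trying tstats, which answers this kind of question from index-time metadata without scanning raw events and is usually fast even over all time. A minimal sketch:

| tstats count where index=* by index, sourcetype
| sort 0 index, -count

The metadata command (e.g. | metadata type=sourcetypes index=sampledb) is another lightweight option, though, as I recall, its counts can be approximate over very long time ranges.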
Splunk introduces native support for histograms as a metric data type within Observability Cloud with Explicit Bucket Histograms, allowing users to efficiently capture and transmit distributions of measurements and compute statistical calculations like percentiles without increasing costs. Explicit Bucket Histograms give users powerful analytical capabilities, as they can now seamlessly ingest, store, and query histograms. Native histogram support in Observability Cloud means users no longer have to run special infrastructure to pre-aggregate their percentile data, which could incur additional costs and obscure source data. Natively ingesting histograms also eliminates the need to send and store each unique observation, which could prove costly, especially for cloud-forward enterprises.

OpenTelemetry-Native Flexibility And Accuracy For Your Performance Data

Histograms, as defined by OpenTelemetry, give users a cost-effective way to send data to Splunk Observability while maintaining the flexibility to analyze performance data in real time. Histograms combine data for the min, max, sum, and count of a population, along with a set of buckets that allow end users to compute percentiles. Because of the increase in data represented by a histogram, a histogram MTS is equivalent to 8 standard MTS. Billing reports will track customers' total usage, and new metrics will track histogram-specific usage.

Histograms For Your Charts And Detectors

Like gauge and counter metrics, histogram metrics can be used in charts and detectors. Explicit Bucket Histograms are useful for performance data, such as request latency or response time. The most common way to use histogram data is to calculate percentiles for your charts and detectors. When creating a chart or detector with histogram data, users can:

- Compute any percentile
- Compute the percentile across multiple different services
- Compute a percentile over a period of time

Getting Started With Histograms in Observability Cloud

Histograms are defined in OpenTelemetry. They can be sent into Observability Cloud:

- Via the Prometheus receiver in the OpenTelemetry Collector: the Prometheus receiver will scrape Prometheus histograms to be sent into Splunk Observability Cloud. Many existing infrastructure components, like Kubernetes and Istio, make histograms available for scraping. (A minimal collector sketch follows below.)
- Using OpenTelemetry libraries: users can instrument their code to send in histograms using OpenTelemetry libraries for all major programming languages.

Learn more and start using histograms today!
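For the Prometheus-receiver path, here is a minimal OpenTelemetry Collector configuration sketch. The scrape target (an Istio control-plane metrics endpoint), the realm, and the token variable are placeholders, and the exporter settings should be checked against your collector's documentation:

receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: istio
          scrape_interval: 30s
          static_configs:
            - targets: ["istiod.istio-system:15014"]

exporters:
  signalfx:
    access_token: "${SFX_ACCESS_TOKEN}"
    realm: us0

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [signalfx]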
Can you dynamically change the chart type (i.e. from bar to line) using a dropdown menu? At the moment, I've created multiple charts and am using show and hide (depending on the option selected) to serve this purpose. I was wondering if there's an easier/cleaner/simpler way of achieving this.
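In Simple XML, one commonly used approach is to feed a dropdown token straight into the charting.chart option, so a single chart re-renders with the selected type. A hedged sketch (the search query is a placeholder):

<form>
  <fieldset>
    <input type="dropdown" token="chart_type">
      <label>Chart type</label>
      <choice value="line">Line</choice>
      <choice value="column">Column</choice>
      <choice value="area">Area</choice>
      <default>line</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <chart>
        <search>
          <query>index=_internal | timechart count by sourcetype</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
        </search>
        <option name="charting.chart">$chart_type$</option>
      </chart>
    </panel>
  </row>
</form>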
Hi All, I have a field called content.payload whose value looks like the sample below. How can I extract these values?

{fileName=ExchangeRates.csv, periodName=202403, status=SUCCESS, subject=, businessEventMessage=RequestID: 101524, GL Monthly Rates - Validate and upload program}
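A hedged rex sketch against that sample; it assumes the keys always appear in this order, and that businessEventMessage (which itself contains a comma) runs to the closing brace:

| rex field=content.payload "fileName=(?<fileName>[^,]+),\s*periodName=(?<periodName>[^,]+),\s*status=(?<status>[^,]+),\s*subject=(?<subject>[^,]*),\s*businessEventMessage=(?<businessEventMessage>[^}]+)"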
Hi, I am trying to do a chart overlay using a normal-distribution graphic based upon the mean and standard deviation acquired from the fieldsummary command. I can generate the values in Perl (below) for a bell curve. Can you tell me how to do this in the Splunk dashboard XML? Thanks.

#!/usr/bin/perl
# min, max, count, mean, stdev all come from the fieldsummary command.
$min = 0.442;
$max = 0.507;
$mean = 0.4835625;
$stdev = 0.014440074377630105;
$count = 128;
$pi = 3.141592653589793238462;

# The numbers above do not indicate a Gaussian distribution.
# Create an artificial normal distribution (for the plot overlay)
# based on 6-sigma.
$min = sprintf("%.3f", $mean - 3.0*$stdev);   # use sprintf as a rounding function
$max = sprintf("%.3f", $mean + 3.0*$stdev);
$interval = ($max - $min)/($count - 1);
$x = $min;
for ($i=0; $i<$count; $i++) {
    $y = (1.0/($stdev*sqrt(2.0*$pi))) * exp(-0.5*((($x-$mean)/$stdev)**2));
    $myFIELD[$i] = sprintf("%.3f", $y);
    printf("%s\n", $myFIELD[$i]);
    $x = $x + $interval;
}
exit;
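For the overlay itself, the same curve can be generated natively in SPL, so the dashboard needs no external script. A sketch using the constants from the Perl above, which you would replace with values computed from fieldsummary:

| makeresults count=128
| streamstats count as i
| eval mean=0.4835625, stdev=0.014440074377630105, n=128
| eval min=round(mean - 3*stdev, 3), max=round(mean + 3*stdev, 3)
| eval interval=(max - min) / (n - 1)
| eval x=min + (i - 1) * interval
| eval y=round((1 / (stdev * sqrt(2 * pi()))) * exp(-0.5 * pow((x - mean) / stdev, 2)), 3)
| table x, y

In the dashboard XML, the overlay is then a matter of charting options (e.g. charting.chart.overlayFields) on a chart that combines this series with your measured data.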
Hello, can anyone help me with a query that lists all the saved searches in my Splunk system, along with the time they take to run to completion?
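A hedged sketch using the audit index, which records completed search runs with their runtime in seconds. This covers scheduled runs; saved searches that have never run will not appear here, though a | rest /servicesNS/-/-/saved/searches call would list those as well:

index=_audit action=search info=completed savedsearch_name=*
| stats count as runs, avg(total_run_time) as avg_runtime_sec, max(total_run_time) as max_runtime_sec by savedsearch_name
| sort 0 - avg_runtime_sec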
Just moved to AlmaLinux 9.3 (from RHEL 7, yikes!). Systemd-managed boot-start works fine. My problem is that when I tried to deploy an app with a restart, Splunk was not able to start back up, complaining it was managed by systemd. Has anyone else come across this? Splunk 9.0.5
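One thing worth checking, offered as a sketch rather than a confirmed fix: when Splunk runs under systemd as a non-root user, internally triggered restarts (such as a deployment-client app restart) need permission to manage the unit, which is typically granted with a polkit rule. The file path, unit name, and user below are assumptions; match them to your system:

// /etc/polkit-1/rules.d/10-Splunkd.rules
// Allow the splunk user to manage the Splunkd systemd unit
polkit.addRule(function(action, subject) {
    if (action.id == "org.freedesktop.systemd1.manage-units" &&
        action.lookup("unit") == "Splunkd.service" &&
        subject.user == "splunk") {
        return polkit.Result.YES;
    }
});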
I'm investigating why Splunk is keeping data beyond the retention period set in frozenTimePeriodInSecs. How can I fix this?
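One detail that often explains this: buckets roll to frozen as a whole, and only once the newest event in the bucket is older than frozenTimePeriodInSecs, so a bucket spanning a wide time range can legitimately still hold events past the retention period. A hedged dbinspect sketch for checking bucket ages (the index name and the 90-day threshold are placeholders):

| dbinspect index=your_index
| eval newest_age_days = round((now() - endEpoch) / 86400, 1)
| where newest_age_days > 90
| table bucketId, state, startEpoch, endEpoch, newest_age_days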
I have logs being monitored from Windows as below:

[monitor://D:\Logs\*]
sourcetype = abc
index = def

I also currently have INFO logs being null-routed, which applies to all of D:\Logs\jkl.txt, and therefore we don't see any logs from D:\Logs\jkl.txt in Splunk. Now, without modifying the null-route in props and transforms, I want to ingest logs from D:\Logs\jkl.txt. How can I keep the null route from applying to these specific logs?
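A possible approach, sketched under the assumption that the last transform to set the queue key wins: add new stanzas (leaving the existing null-route stanza untouched) that route this one source back to the index queue. The ordering of transforms between sourcetype-scoped and source-scoped stanzas is worth verifying against the props.conf documentation for your version:

# props.conf
[source::D:\\Logs\\jkl.txt]
TRANSFORMS-keepjkl = keep_jkl

# transforms.conf
[keep_jkl]
REGEX = .
DEST_KEY = queue
FORMAT = indexQueue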
Hello Team, deployment with:
- HF with ACK when sending to the indexer
- HEC on the HF with ACK
- application sending events via HEC on the HF with ACK

Even in this model there is a chance that some events will be missed. The application might get an ACK from HEC, but if the event is still in the HF's output queue (not yet sent to the indexer) and we have a non-graceful reboot of the HF (so that it could not flush its output queue), the event is lost. Can you confirm? What would be the best way to address this, so that once the application receives an ACK we have an end-to-end guarantee that the event is indexed? Thanks, Michal
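For reference, a sketch of the settings involved, with the caveat that the exact semantics should be verified against the docs for your version; in particular, whether HEC ack status is chained through to indexer acknowledgment, and whether persistent queues are supported on HEC inputs, are assumptions here. Stanza names, hosts, the token value, and the queue size are examples:

# outputs.conf on the HF -- indexer acknowledgment for the forwarding hop
[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
useACK = true

# inputs.conf on the HF -- HEC token with ACK, plus a persistent queue
# intended to survive restarts (verify HEC persistent-queue support)
[http://app_token]
token = 00000000-0000-0000-0000-000000000000
useACK = 1
persistentQueueSize = 10GB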
We’re making some changes to the team landing page in Splunk Observability, based on your feedback. The landing page for team members now shows content more efficiently, and we’ve also made it easier to find the alerts and dashboards you’re interested in.

Alerts

The Alerts section is where you can find active alerts from detectors linked to each team. This section now defaults to a compact summary, which you can click to expand for full details.

Dashboards

The Dashboards section shows dashboards from groups that have been linked to the team. Dashboards are now listed within their respective dashboard groups, in a stable order. You can search for dashboards or groups by name just as before, and we’ve added a new control that makes it easy to find just your favorite dashboards, or just the dashboards you created.

The Teams feature in Observability is a great way to help newer users get started, or to collect content that’s relevant to important topics for your organization. Click here to learn more about using Teams in Observability.
Regardless of where you are in Splunk Observability, you can search for relevant APM targets, including service maps, service views, trace IDs, workflows, and Splunk Infrastructure navigator names. Navigate to the upper right-hand corner and find the magnifying glass. There, you can open the search feature and type in your navigation target. Simply type in your APM service name and results will populate. To narrow down your results, type the prefix "service:" or "workflows:" to filter your results directly in the search feature. Be on the lookout as we add more search targets across the entire observability portfolio.
Question in the title. Thanks in advance!