In my environment I have 4 indexers and the daily indexing volume is 50 GB/day. The retention period is 30 days: hot/warm buckets are retained for 10 days and cold buckets for the remaining 20 days. How can we calculate the indexer storage required?
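A rough sizing sketch, under the commonly cited rule of thumb that indexed data occupies roughly 50% of the raw volume on disk (about 15% for compressed rawdata plus about 35% for index files); actual ratios vary by data type, and replication in an indexer cluster would multiply the result. The numbers below come straight from the question:

| makeresults
| eval daily_gb=50, retention_days=30, disk_factor=0.5, indexers=4
| eval est_total_gb=daily_gb*retention_days*disk_factor
| eval est_gb_per_indexer=round(est_total_gb/indexers, 1)

With these values the estimate works out to about 750 GB across the cluster, or roughly 188 GB per indexer, split between hot/warm storage (10 of the 30 days) and cold storage (the remaining 20 days) in proportion to retention.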
Hello everyone, I am new to Splunk. I am trying to get the queue or event counts with status="spooling" that happened after the very first error (status="printing,error") occurred. How could I do this? I have events matching: sourcetype=winprintmon host=bartender2020 type=PrintJob printer="*" (which gets all printers, e.g. zebra1065), and each event can have a status of "printing", "printing,error", or "spooling". What I want to do is: if a printer has an error (status="printing,error") at, say, 6am, count the events for that printer with status="spooling" (which is the queue) that occurred after 6am. Desired result format: printer name | count of spooling (queue) events. Hope this explains it better; I have been dealing with this for days. Thank you so much in advance!
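A possible approach, sketched on the assumption that the per-printer "first error" time can be computed with eventstats and then used as a cut-off:

sourcetype=winprintmon host=bartender2020 type=PrintJob printer="*"
| eventstats min(eval(if(status="printing,error", _time, null()))) as first_error_time by printer
| where status="spooling" AND _time > first_error_time
| stats count as spooling_count by printer

eventstats attaches the earliest "printing,error" timestamp for each printer to every event of that printer, so the where clause keeps only the spooling events that occurred after it.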
This is my Splunk query:

index="webmethods_prd" source="/apps/WebMethods/IntegrationServer/instances/default/logs/ExternalPACA.log"
| eval timestamp=strftime(_time, "%F")
| chart limit=30 count as count over source by timestamp

It shows the result, but I want to add a custom name to it. How should I do that?
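One possible sketch, assuming the goal is to replace the long source path with a friendlier label before charting (the label text ExternalPACA is illustrative):

index="webmethods_prd" source="/apps/WebMethods/IntegrationServer/instances/default/logs/ExternalPACA.log"
| eval source_name="ExternalPACA"
| eval timestamp=strftime(_time, "%F")
| chart limit=30 count as count over source_name by timestamp

If the aim is instead to rename the aggregated column, appending | rename count as "My Custom Name" to the original search would do that.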
Introduction

This blog post is part of an ongoing series on SOCK enablement. In this post, I will explain the behavior of SOCK (Splunk OTel Collector for Kubernetes) with the default configuration of the values.yaml file. This file contains the configuration values and variables that get passed to the chart templates and dictate how the collector works. A basic way of working with SOCK is to create a new my_values.yaml file and override some configuration values in it before passing it to the chart installer. Today we will discuss the default behavior with a minimal configuration and how to install it.

Structure of the values.yaml file

The values.yaml file consists of nested variables configuring various parts of the system. For example, this is part of the default configuration for the Splunk platform, the core setting responsible for your connection to Splunk:

splunkPlatform:
  # Required for Splunk Enterprise/Cloud. URL to a Splunk instance to send data
  # to, e.g. "http://X.X.X.X:8088/services/collector/event". Setting this parameter
  # enables Splunk Platform as a destination. Use the /services/collector/event
  # endpoint for proper extraction of fields.
  endpoint: ""
  # Required for Splunk Enterprise/Cloud (if `endpoint` is specified). Splunk
  # HTTP Event Collector token.
  # Alternatively the token can be provided as a secret.
  # Refer to https://github.com/signalfx/splunk-otel-collector-chart/blob/main/docs/advanced-configuration.md#provide-tokens-as-a-secret
  token: ""
  # Name of the Splunk event type index targeted. Required when ingesting logs to Splunk Platform.
  index: "main"
  # Name of the Splunk metric type index targeted. Required when ingesting metrics to Splunk Platform.
  metricsIndex: ""
  # Name of the Splunk event type index targeted. Required when ingesting traces to Splunk Platform.
  tracesIndex: ""
  (...)

As you can see, various values are used to configure Splunk. These defaults set up an application without many extra features, just Kubernetes log collection; other features have to be turned on and configured manually. The comments above each entry describe what it does. As an example, here is the piece of configuration responsible for the request timeout:

  # HTTP timeout when sending data. Defaults to 10s.
  timeout: 10s

As we can see, the timeout for sending an event to Splunk is set to 10 seconds. You can customize this value for your own system if your configuration needs a longer timeout. You can take a look at the documentation to get a better idea of the advanced configuration details, but in this post we will explore a basic workable configuration and what it does.

What kind of minimal configuration do you need to make it work?

First, you need to create a my_values.yaml file to override some values in the default configuration. The official documentation explains how to do it: Splunk OpenTelemetry Collector docs. You don't need much to run it, and a basic configuration file will look something like this:

clusterName: "test_cluster"
splunkPlatform:
  endpoint: "https://X.X.X.X:8088/services/collector/event"
  token: "00000000-0000-0000-0000-000000000000"
  index: "my_index"
  insecureSkipVerify: true

You have to set the clusterName value, as it is required to run the application. It acts as an identifier of your k8s cluster and is attached to every log, metric, and trace sent, as a k8s.cluster.name attribute.
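Once data is flowing (after the installation steps described below), that k8s.cluster.name attribute gives you a convenient handle for isolating this cluster's events. A minimal sketch, assuming the my_index event index from the example above:

index="my_index" k8s.cluster.name="test_cluster"
| stats count by sourcetype

If nothing comes back after installation, double-check the endpoint, token, and index values in my_values.yaml.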
We also have to set the Splunk platform endpoint that will ingest our data and a HEC token that will be used to access it. Refer to this doc on how to set up your HTTP Event Collector in Splunk. You don't have to override the index value (by default logs are sent to the main index), but it is considered good practice to do so. In this example I have changed the value to my_index, an index I created in my instance of Splunk; the logs gathered by SOCK will go to this index. We set the insecureSkipVerify flag to true because we want to skip checking the certificate of our HEC endpoint when sending data over HTTPS in our test environment. If you have a certificate configured, you should leave this flag out.

A short installation guide

Great, now we have a working configuration file and can test our application! To install it, you first need to add the Splunk OTel Collector chart repository with this command:

helm repo add splunk-otel-collector-chart https://signalfx.github.io/splunk-otel-collector-chart

You should get the response "splunk-otel-collector-chart has been added to your repositories" if everything went correctly. Now that the repo has been added, we can install the chart:

helm install my-splunk-otel-collector --values my_values.yaml splunk-otel-collector-chart/splunk-otel-collector

Here we install the chart from the repo we just added, under the release name my-splunk-otel-collector, with the configuration --values taken from the my_values.yaml file we created. After running this command we should see a response:

Splunk OpenTelemetry Collector is installed and configured to send data to the Splunk Platform endpoint (...)

That means our chart was installed correctly! Now that it is running, it should be sending data to our Splunk instance using our custom configuration. Another useful command is:

helm upgrade --install my-splunk-otel-collector --values my_values.yaml splunk-otel-collector-chart/splunk-otel-collector

It is very similar to the previous command, but it also works when the release is already installed, in which case it upgrades it with any changes made in the my_values.yaml file.

What features will be enabled by default?

By default only logs are sent to Splunk; metrics and traces have to be turned on manually:

logsEnabled: true
metricsEnabled: false
tracesEnabled: false

So if you want to collect metrics or traces, the metricsEnabled and tracesEnabled values need to be changed. If you enable metrics, you also have to specify metricsIndex, or the application won't run. If you configured your values correctly and installed the chart, you can now run the kubectl get pods command in a console to check that it works. You should see one pod running, something like this:

splunker@test:~/splunk-otel-collector-chart$ kubectl get pods
NAME                                   READY   STATUS    RESTARTS   AGE
my-splunk-otel-collector-agent-lxns8   1/1     Running   0          63m

As we can see, there is one agent pod running because there is only one node in our cluster. In a real-world scenario you would see more agents, as there would be more nodes: one agent runs per node.
And in case you enabled metrics, you should also see a cluster receiver pod, like this one:

splunker@test:~/splunk-otel-collector-chart$ kubectl get pods
NAME                                                             READY   STATUS    RESTARTS   AGE
my-splunk-otel-collector-k8s-cluster-receiver-5d754c9fff-4wzrf   1/1     Running   0          104s
my-splunk-otel-collector-agent-fmc9p                             1/1     Running   0          104s

If everything is working fine, the STATUS field should read "Running". So by default our application will send logs (and metrics and traces, if enabled) to Splunk. It will also try to resend dropped events in case of failure and use batches to optimize the sending of data. There are some other commonly used settings, such as:

- cloudProvider (aws, azure...) and distribution (aks, eks, openshift...) - these can be used to scrape additional data
- sendingQueue.persistentQueue - by default, without any configuration, data is queued in memory only; you can enable a persistent queue so queued data is saved on disk and sent even after collector restarts
- autodetect - can enable autodetection of Prometheus and Istio metrics
- extraAttributes - configuration for additional metadata that can be collected from labels and attributes

So where will my data go?

By default, logs go to the "main" index, but as mentioned before you can change the index value to send them to a different index. In this example we changed the value to my_index, so if you have this index configured in Splunk, that is where the data will end up. The simplest way to check that the data is being processed correctly is to filter by index inside Splunk (a sample search is sketched after this post); there you should see events being sent to Splunk in real time. If you want to check your metrics, you can use the mpreview command (also sketched after this post) to observe that the system metrics are now being stored in Splunk.

I want to learn more - what other features are there?

Many other powerful features of SOCK are not covered in this article, but there are resources you can use to learn more about them. You can browse the chart repository and its examples directory for ideas about what you can use it for. Reading through values.yaml will also give you a good idea of what can be done with it: all of the settings are described there. Lastly, this series of blog posts is designed to help you learn about SOCK, so I recommend taking a look at our other articles on routing and multiline logs!
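As referenced in the post above, here is a minimal sketch of the two verification searches. The event search assumes the my_index index from the example configuration; the metrics search assumes metrics were enabled and metricsIndex was set to a hypothetical my_metrics_index.

To confirm logs are arriving:

index="my_index"

To preview the raw metric data points:

| mpreview index=my_metrics_index

mpreview reads individual metric data points from a metrics index, which makes it a handy sanity check before building charts with mstats.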
The Splunk support portal doesn't let me file a case: it expects an input for "Splunk Support access to your company data", but no option is available to select.
Hello Splunkers!! I want to achieve the results below in Splunk. Please help me understand how to do this in SPL. Whenever the field carries a dotted number string, I want the expected results below.

Current results    Expected values
1102.1.1           1102.01.01
1102.1.2           1102.01.02

Thanks in advance!!
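A possible sketch, assuming the field is named version (adjust to your data): split the value on dots, zero-pad each numeric segment to two digits, and join it back together. printf("%02d", ...) leaves segments that already have two or more digits, such as 1102, unchanged.

| eval parts=split(version, ".")
| eval padded=mvmap(parts, printf("%02d", tonumber(parts)))
| eval version_padded=mvjoin(padded, ".")
| fields - parts padded

An alternative is a single sed-style replacement, e.g. | rex mode=sed field=version "s/\.(\d)(?=\.|$)/.0\1/g", which zero-pads any single-digit segment in place.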
Hello all, I installed on-premises AppDynamics 24.7 on a Rocky Linux 9.4 host. After completing the Enterprise Console installation (through the installation script "platform-setup-x64-linux-24.7.0.10038.sh"), I continued to set up the Controller (demo profile) and the Events Service. The three jobs completed successfully. The Controller starts OK, but the Events Service cannot start up: a red Critical health status is highlighted, with the error message "Task failed: Starting the Events Service api store node ...". How can I get the Events Service to start? Thanks.
What Is the Splunk Cloud Platform Value Calculator?

The Splunk Cloud Value Calculator is a comprehensive tool that provides, in minutes, an analysis of the total value and benefits associated with migrating self-managed deployments to Splunk Cloud Platform. It evaluates multiple factors including hardware, labor, licensing, training, implementation costs and more, so you can discover how much admin effort you can save by moving to Splunk Cloud, and how you can turn the saved admin time into deploying new use cases.

How Does It Work?

Using the Cloud Value Calculator is simple and intuitive. Here is a step-by-step guide:

1. Input License Size: provide information about your total license size (GB/day).
2. Choose Scenarios: compare Cloud vs. self-managed on-premises and/or self-managed in-cloud deployment.
3. Choose the premium apps and/or deployment options that apply.
4. Review Results: instantly receive a comparative assessment highlighting the additional value and lower cost of migrating to Splunk Cloud Platform.
5. Make Informed Decisions: engage our cloud experts to dive deeper into the analysis and help you make well-informed decisions about your cloud migration strategy.

Get Started Today

The Splunk Cloud Platform Value Calculator is now available for all customers. Start your journey to the cloud by accessing the new calculator and experience firsthand the transformative value of migrating to Splunk Cloud. Embrace the future of data security and insights with a cloud platform delivered as a service, where innovation meets efficiency. Try the cloud value calculator today and take the first step towards building a smarter, more resilient, and cost-effective cloud environment.
Hello everyone, I am trying to get the queue or event counts with status="spooling" that happened after the very first error (status="*error*") occurred. How could I do this? Thank you in advance. This is for our company's printer server.
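One possible sketch, borrowing the winprintmon sourcetype from the poster's related question (the field names are otherwise illustrative): the subsearch finds the timestamp of the very first error and returns it as an earliest= time bound for the outer search, which then counts the spooling events.

sourcetype=winprintmon type=PrintJob status="spooling"
    [ search sourcetype=winprintmon type=PrintJob status="*error*"
      | stats min(_time) as earliest
      | return earliest ]
| stats count as spooling_count

The return command emits earliest=<epoch> into the outer search, so only events after the first error are counted.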
Good morning, I have been looking for a solution to this problem for a while. What I am trying to accomplish is re-ingesting .evtx files back into the system (or another system) so that I can use a universal forwarder to re-ingest old logs that have been exported and archived. I hope I am clear, as it is hard for me to articulate the ask. In short: old .evtx files -> Windows machine (put the logs back into the Windows machine), which will then allow me to use a UF to send the re-ingested logs to Splunk. I have tried converting the .evtx files to text with a PowerShell script, but this would take a significant amount of time due to the size of my current .evtx files; on average it was taking about 30 minutes per log file, and I have too many to count.
We were running Splunk Enterprise v9.2 on our deployment server and everything worked fine. After upgrading to v9.3.0, the path "https://<fqhn>/en-US/manager/system/deploymentserver" no longer renders. I tried on 3 computers using several different browsers; all return a blank white screen on this URL only. All other dashboards on this host work fine; it is only the "Forwarder Management" link. There is nothing in the logs other than INFO events, and nothing to indicate a problem. Any ideas what is going on?
I use a stats command in a dashboard search which returns about 600 rows. Splunk places a "Next" button in the dashboard panel for every 100 rows (the option name="count" is 100). We deliver the result of this dashboard as a PDF, so much of the result gets lost. I can work around this by using streamstats to show the result in parts, but I wonder why the limit is 100 and whether it is possible to display more than 100 rows at once (without tricks like streamstats).
Hi, OK, so I updated AME to version 3.0.8. Now I can't access anything, even though I am sc_admin. I can't see the start page and can't configure it, because it says I must be sc_admin. I checked users and roles and they are fine. Any thoughts?
The classic dashboard format was XML; the new Dashboard Studio format is JSON. Our app/launcher/home page is failing to load JSON dashboards with a 400 Bad Request, displaying the "horse" error page and complaining that the first line must be XML. How do we remove this restriction? Thank you.
Hi Splunk community, I'm facing an issue with my Splunk deployment server, running on version 9.2.1 (splunk-9.2.1-78803f08aabb-linux-2.6-x86_64-manifest). I added new configurations to the inputs.conf file for a WebLogic server within a specific deployment class. After making these changes, I pushed the configurations to the target WebLogic server and triggered a restart. Unfortunately, the new settings in the inputs.conf file are not being applied on the WebLogic server, even though the deployment server logs indicate that the service was successfully restarted. Has anyone experienced this issue, or can anyone offer advice on what might be causing the problem and how to resolve it? Thanks in advance!
Hi all, I need help with the timechart and trendline commands for the query below; both commands are not working:

index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| eval total_cpu_usage=('data.cpu_system_pct' + 'data.cpu_user_pct')
| stats Perc90(total_cpu_usage) AS cpu_usage latest(_time) as _time by Env Tenant
| timechart span=12h values(cpu_usage) as CPU
| trendline sma2(CPU) AS trend
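A possible restructuring, sketched under the assumption that the goal is a 12-hour 90th-percentile CPU trend: the stats command in the original collapses the data to one row per Env/Tenant (with a single latest _time), which leaves timechart nothing to bucket. Letting timechart aggregate directly keeps the time axis intact:

index=_introspection sourcetype=splunk_resource_usage component=Hostwide
| eval total_cpu_usage='data.cpu_system_pct' + 'data.cpu_user_pct'
| timechart span=12h perc90(total_cpu_usage) as CPU
| trendline sma2(CPU) as trend

If the Env/Tenant split is still needed, one option is to bin _time span=12h before stats ... by _time Env Tenant and pivot with xyseries; as far as I know, trendline only accepts concrete field names, so each series would need its own trendline clause.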
There is a request to provide the list of P1C alerts for the JMET cluster from Splunk. We have provided the following query, but the user wants only the alerts whose priority is P1C:

| rest /servicesNS/-/-/saved/searches
| table title, eai:acl.owner, search, actions, action.apple_alertaction*

This query returns all of the configured alerts, but we want only the P1C ones. It's urgent.
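A possible sketch, assuming the P1C priority appears in the alert title; if it is instead stored in a parameter of the custom alert action, filter on that field (the action.apple_alertaction.param.priority name mentioned below is purely hypothetical):

| rest /servicesNS/-/-/saved/searches
| search title="*P1C*"
| table title, eai:acl.owner, search, actions, action.apple_alertaction*

If the priority lives in an action parameter rather than the title, replace the middle line with something like | search action.apple_alertaction.param.priority="P1C", after checking the actual field names returned by the rest call.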
Hi all, I installed Splunk Enterprise 9.2.1 on my machine recently. There are no other external apps or components installed, but the UI is very slow: the loading time for each web page, including the login page, is around a minute. Could anyone provide some suggestions as to why this is happening and how to fix it?
Hi Splunk experts, I want to compare the response codes of our API for the last 4 hours with the same time window on each of the last 2 days. If possible, I would like the results in a chart/table format that shows the data like this:

Response Codes | Last 4 Hours | Yesterday | Day before Yesterday

As of now I am getting results hour by hour. Can we achieve this in Splunk? Can you please guide me in the right direction?
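One possible sketch, with an illustrative index and field name (my_api and response_code): it runs the same 4-hour window three times, 24 and 48 hours apart, and merges the counts into a single table.

index="my_api" earliest=-4h latest=now
| stats count as "Last 4 Hours" by response_code
| append
    [ search index="my_api" earliest=-28h latest=-24h
      | stats count as "Yesterday" by response_code ]
| append
    [ search index="my_api" earliest=-52h latest=-48h
      | stats count as "Day before Yesterday" by response_code ]
| stats values(*) as * by response_code
| rename response_code as "Response Codes"

The timewrap command is another option if a time-series comparison chart is preferred over a summary table.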
Hi, I requested a dev license a while ago, but I haven't heard anything from Splunk since. I have re-requested it a couple of times, but still no answer. I even emailed Splunk, yet even that email is being ignored. I am new to Splunk and I just want to get started with the developer license. How do I get my request approved? For real this time, as I have already attempted every standard solution. I just want somebody to approve my request, that's all.