
Downloaded the SA-cim_Vladiator app on my Splunk Enterprise; however, it's stuck on search type = _raw and Target Datamodel = "Network_Traffic", which is grayed out, and I can't select any different datamodels. Could anyone provide any info to help me get this working correctly? Thanks all
Where do I learn strategies for full-stack observability across my tech stack? To deliver the experience that today's users demand, full-stack observability is a must. Join our webinar to discover strategies for achieving full-stack observability across containerized, virtual, hybrid cloud-native, and enterprise applications with an OpenTelemetry-based solution.

Webinar | Embark on your OpenTelemetry-based full-stack observability journey

You'll learn:
- Why OpenTelemetry is the gold standard for observability, and how to simplify its adoption.
- How observability increases your influence over decision-making in IT and lines of business.
- Ways to use the CloudFabrix observability data modernization service with the Cisco FSO Platform.

Secure your spot to gain expert insights that will transform your approach to observability. Register for a session now!
AMER: September 6, 11am PST | 2pm EST
EMEA: September 7, 10am BST | 11am CEST
APAC: September 7, 8:30am IST | 11am SGT | 2pm AEST

Presenters

Gregg Ostrowski, Regional CTO, Cisco AppDynamics, is a senior executive and thought leader with over 25 years' experience. In leadership positions for technology companies including Research in Motion and Samsung, Gregg was responsible for Enterprise Services, Enterprise Developer Relations, Sales Engineering, and ecosystem development. He has worked with many F1000 customers, government agencies, and partners on digital transformation, mobility application deployments, DevOps strategies, analytics, and high-ROI business solutions.

Shailesh Manjrekar, Chief Marketing Officer, CloudFabrix, is a seasoned IT professional with over two decades of experience building and managing emerging global businesses. He brings an established background in product and solutions marketing, product management, and strategic alliances spanning AI and deep learning, FinTech, and life sciences SaaS solutions. Manjrekar is an avid speaker at AI conferences such as NVIDIA GTC and the Storage Developer Conference, and has been a Forbes Technology Council contributor since 2020, an invitation-only organization of leading CxOs and technology executives.

Register for "Embark on your OpenTelemetry-based full-stack observability journey" here.
Introduction

Spinnaker is an open-source, multi-cloud continuous delivery platform composed of a series of microservices, each performing a specific function. Understanding the performance and health of these individual components is critical for maintaining a robust Spinnaker environment. Here are the primary services you might want to monitor:

- Deck: The browser-based UI. Key monitoring aspects include load times, error rates, and user activity.
- Gate: The API gateway and the main entry point for all programmatic access to internal Spinnaker services. Monitor error rates, request/response times, and traffic volume.
- Orca: The orchestration engine. It handles all ad-hoc operations and pipelines. Monitor task execution times, task failure rates, and queue length.
- Clouddriver: Responsible for all mutating operations and caching infrastructure. Monitor cache times, error rates, and operation completion times.
- Front50: The metadata store, persisting application, project, and pipeline definitions as well as pipeline execution history. Monitor read/write times, error rates, and data volumes.
- Rosco: Responsible for producing machine images. Monitor image baking times, error rates, and queue lengths.
- Echo: The eventing service. Monitor event delivery times, error rates, and queue lengths.
- Fiat: The authorization service. Monitor authorization times, error rates, and volume of authorization checks.
- Igor: Integrates with CI systems and other tools. Monitor job completion times, error rates, and queue lengths.

Each of these services exposes metrics that can be scraped by the Splunk Distribution of the OpenTelemetry Collector and analyzed for performance and health insights.

This document serves as a comprehensive guide for monitoring Spinnaker infrastructure and services running on Kubernetes (K8s) via Splunk Observability Cloud and Splunk Platform. Given that there is no direct integration available currently, we will need several steps to enable end-to-end monitoring of Spinnaker.

Enabling Prometheus Endpoints for Spinnaker Monitoring

Armory, the provider of hosted Spinnaker, recommends using their Observability plugin to expose metrics via Prometheus endpoints. These endpoints can then be scraped by the Splunk OpenTelemetry Collector. Follow the link to install the Observability plugin in your Spinnaker cluster: Armory Observability Plugin Installation. The plugin offers direct integration with other observability products, but connecting it to Splunk Observability is straightforward once the Splunk Distribution of the OpenTelemetry Collector is set to scrape the Prometheus endpoints.

Installation of the Splunk Distribution of the OpenTelemetry Collector

Install the collector using the Helm chart on the Spinnaker Kubernetes cluster. For detailed instructions, refer to Collector Installation. Given the need for extensive custom configuration to enable sending logs to Splunk Cloud/Enterprise and metrics to Splunk Observability Cloud, we recommend using a custom values.yaml file. Below is a sample configuration snippet for Splunk Observability Cloud and Splunk Platform:

splunkPlatform:
  # Required for Splunk Enterprise/Cloud. URL to a Splunk instance to send data
  # to. e.g. "http://10.202.11.190:8088/services/collector/event". Setting this parameter
  # enables Splunk Platform as a destination. Use the /services/collector/event
  # endpoint for proper extraction of fields.
  endpoint: https://10.202.7.134:8088/services/collector
  # Required for Splunk Enterprise/Cloud (if `endpoint` is specified). Splunk
  # HTTP Event Collector token.
  # Alternatively the token can be provided as a secret.
  # Refer to https://github.com/signalfx/splunk-otel-collector-chart/blob/main/docs/advanced-configuration.md#provide-tokens-as-a-secret
  token: xxx-xxx-xxx-xxx-xxx
  # Name of the Splunk event type index targeted. Required when ingesting logs to Splunk Platform.
  index: "pure"
  logsEnabled: true
  metricsEnabled: false
  tracesEnabled: false
splunkObservability:
  realm: us1
  accessToken: XXXXXXXX
  ingestUrl: https://ingest.us1.signalfx.com
  apiUrl: ""
  metricsEnabled: true
  tracesEnabled: true
  logsEnabled: false
logsEngine: otel

Ensure that metricsEnabled is set to true under splunkObservability to send Prometheus metrics. If logsEnabled is set to true under splunkPlatform, it can be false under splunkObservability.

The following configuration in the values.yaml file will enable the collector pods to scrape the Spinnaker pods for Prometheus metrics:

config:
  receivers:
    prometheus/spinnaker:
      config:
        scrape_configs:
          - job_name: 'spinnaker'
            kubernetes_sd_configs:
              - role: pod
            metrics_path: /aop-prometheus
            scheme: https
            relabel_configs:
              - source_labels: [__meta_kubernetes_pod_ip, __meta_kubernetes_pod_container_port_number]
                action: replace
                target_label: __address__
                separator: ":"
            tls_config:
              insecure_skip_verify: true

This configuration sets up a job to scrape metrics from all pods (role: pod) in the Spinnaker Kubernetes cluster, using the /aop-prometheus metrics path. The insecure_skip_verify: true setting bypasses TLS verification; be aware that this can be a security risk and should only be used for testing purposes or if you understand the implications.

Sample Helm command to install the OpenTelemetry Collector:

helm install splunk-otel-collector splunk-otel-collector-chart/splunk-otel-collector -f condensed_values.yaml

Verification of Metrics Ingestion in Splunk Observability Cloud

Confirming the correct ingestion of metrics in Splunk Observability Cloud may initially pose a challenge, particularly if you don't immediately know the names of the Prometheus metrics. However, you can work around this by performing the following steps:

Curl the endpoint: Curl the /aop-prometheus endpoint to retrieve the names of the metrics, e.g. https://localhost:7002/aop-prometheus

Enable Debug Logging on the OpenTelemetry Collector: Adjust the collector's configuration to enable debug logging. This setting will let you view more detailed information about the collector's operations, including metric names. Here is a sample configuration to enable debug logging:

config:
  service:
    telemetry:
      logs:
        level: "debug"

Use Splunk SignalFlow to Identify Metrics: Splunk SignalFlow allows you to write data computations for your metrics. Using SignalFlow, you can isolate and display the metrics collected from the Prometheus endpoints. Here's an example of a SignalFlow query that lists all the metrics exposed by the Prometheus endpoints:

A = data('*', filter=filter('sf_metric', '*') and filter('k8s.pod.name', 'spin-orca-*')).count(by=['sf_metric']).publish(label='A')

By following these steps, you should be able to verify the ingestion of metrics from your Spinnaker services into Splunk Observability Cloud.

Creation of Spinnaker Metrics Dashboard

Presently, Splunk Observability Cloud does not come with out-of-the-box (OOTB) dashboards for Spinnaker. However, this does not preclude you from creating insightful, customized visualizations of your Spinnaker performance metrics.
One approach is to leverage the plethora of open-source Grafana dashboards available for each Spinnaker service. A repository containing these dashboards can be found at uneeq-oss/spinnaker-mixin. To create your own Splunk dashboards, examine the code of these Grafana dashboards and construct corresponding SignalFlow expressions in Splunk. Let's consider the following Grafana query as an example:

sum by (controller, status) (
  rate(controller_invocations_seconds_sum{container="orca"}[$__rate_interval])
)
/
sum by (controller, status) (
  rate(controller_invocations_seconds_count{container="orca"}[$__rate_interval])
)

You can translate this Grafana query into the following SignalFlow expressions:

A = data('controller_invocations_seconds_sum', filter=filter('k8s.container.name', 'orca')).sum(by=['controller', 'status']).publish(label='A', enable=False)
B = data('controller_invocations_seconds_count', filter=filter('k8s.container.name', 'orca')).sum(by=['controller', 'status']).publish(label='B', enable=False)
C = (A/B).publish(label='C')

Using this strategy, you can create a Splunk Observability Cloud dashboard that suits your specific monitoring needs for Spinnaker. A sample dashboard would look like this.

Enabling Armory Continuous Deployment Logging Data

To log data about individual accounts and functions within Armory Continuous Deployment, you can push this data directly to Splunk HEC endpoints without going through the OpenTelemetry Collector. For more details, follow this link: Developer Insights. You can import the JSON code for the available dashboards into Splunk Cloud/Enterprise. Read more. Modify the index accordingly, as the default index in these dashboards is spinnaker.

Troubleshooting Tips

If you encounter issues, such as not seeing any metrics being ingested from the Prometheus endpoints, consider the following tips to identify and resolve the problem:

Inspect the Logs of the OpenTelemetry Collector Pods: The logs of the collector pods can provide valuable insights when you don't see any metrics coming in from the Prometheus endpoints. These logs may contain debugging information that can help pinpoint any issues with the scraping of the endpoints. You can check the logs of the collector pods by using the kubectl logs command. Suppose your OpenTelemetry Collector pod is named otel-collector-abcde; you can view its logs with the following command:

kubectl logs otel-collector-abcde

To continuously stream the logs, add the -f flag as shown below:

kubectl logs -f otel-collector-abcde

If you select pods by label instead (for example, kubectl logs -l app.kubernetes.io/name=otel-collector), replace app.kubernetes.io/name=otel-collector with the appropriate label selector for your OpenTelemetry Collector pods.

Understand the Role of the Service Discovery (SD) Config: To view the SD config, you need to inspect the configuration of your OpenTelemetry Collector. Depending on how you have deployed the collector, this configuration might be located in a ConfigMap, a command-line argument, or a file in the pod. If it's in a ConfigMap, you can view it with:

kubectl get configmap otel-collector-config -o yaml

If the configuration is passed as a command-line argument or a file, you might need to examine the pod specification or access the pod's filesystem to find it.
To inspect the pod specification, use:

kubectl get pod otel-collector-abcde -o yaml

Datapoints being dropped: There are certain organization-level dashboards which can help you find whether there are throttling issues at the collector/token level that might stop data points from being ingested into the platform. Start by looking at the dashboards here: Dashboards -> Built-in Dashboard Groups -> Organization metrics -> IMM Throttling. Look at the token throttling and data points dropped dashboards.

Then look at the logs of the collector pods, which will show you which metrics are getting dropped. The logs may look like this:

2023-07-20T22:45:51.977Z debug translation/converter.go:240 dropping datapoint {"kind": "exporter", "data_type": "metrics", "name": "signalfx", "reason": "number of dimensions is larger than 36", "datapoint": "source:\"\" metric:\"controller_invocations_contentLength_total

Splunk will drop the data points/MTS if they don't follow certain standards; in the above case, the number of dimensions was larger than 36. Read more. The best way to eliminate throttling and get rid of these errors is to drop these metrics at the Prometheus receiver level if they are not required. metric_relabel_configs is the important key here. Read more.

receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: 'otel-collector'
          scrape_interval: 5s
          static_configs:
            - targets: ['0.0.0.0:8888']
        - job_name: k8s
          kubernetes_sd_configs:
            - role: pod
          relabel_configs:
            - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
              regex: "true"
              action: keep
          metric_relabel_configs:
            - source_labels: [__name__]
              regex: "(request_duration_seconds.*|response_duration_seconds.*)"
              action: keep

Another error that can cause data points to be dropped and token throttling is this:

2023-07-21T17:28:49.553Z debug translation/converter.go:105 Datapoint does not match the filter, skipping {"kind": "exporter", "data_type": "metrics", "name": "signalfx", "dp": "source:\"\" metric:\"k8s.pod.memory.working_set\"

The above error shows that the SignalFx exporter is skipping this metric because it is excluded by default from the exporter. These errors can be ignored, as they are expected for k8s clusters. Read more.

-----------

We hope you found this informative and helpful. Want to dive in even further? Experience the difference for yourself and start your free trial of our observability platform now!
Hi all, I have a table with the start time and stop time for each case, as below.

ID      Case Name  Start Time                  Stop Time
user_1  Case_A     2023.08.10 13:26:37.867787  2023.08.10 13:29:42.159543
user_2  Case_B     2023.08.10 13:29:42.159545  2023.08.10 13:29:48.202143

Because I want to merge the duration of case execution with another event, I hope to transform the above table into this kind of table.

_time                       ID      Case Name  case_action
2023.08.10 13:26:37.867787  user_1  Case_A     start
2023.08.10 13:29:42.159543  user_1  Case_A     stop
2023.08.10 13:29:42.159545  user_2  Case_B     start
2023.08.10 13:29:48.202143  user_2  Case_B     stop

I can transfer the start time into _time with

| eval _time='Start Time'

However, I can't think of a solution to record "Stop Time" into _time as well. Does anyone have an idea about how to accomplish this? Thanks a lot.
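A minimal sketch of one way to approach this, assuming the timestamp format shown above (the "|" delimiter and the strptime format string are illustrative choices, not part of the original search): pair each action with its timestamp in a multivalue field, expand into separate rows, then parse the timestamp into _time.

| eval times=mvappend("start|".'Start Time', "stop|".'Stop Time')
| mvexpand times
| eval case_action=mvindex(split(times, "|"), 0)
| eval _time=strptime(mvindex(split(times, "|"), 1), "%Y.%m.%d %H:%M:%S.%6N")
| table _time ID "Case Name" case_action
| sort 0 _time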
I downloaded splunkforwarder-9.1.0.1-77f73c9edb85-Linux-x86_64.tgz, untarred it on a fresh Ubuntu 22.04, ran ./splunk start (also tried ./splunk start --debug), then accepted the license and set a password.

lsof -i -P -n | grep 8089 doesn't show anything, but ps shows the UF is running:

user    8267  0.2  0.1 403216 98920 ?        Sl   14:00   0:10 splunkd -p 8089 restart --debug
user    8268  0.0  0.0 120792 15068 ?        Ss   14:00   0:00 [splunkd pid=8267] splunkd -p 8089 restart --debug [process-runner]

I tried the same steps on 9.0.5. It worked:

splunkd    462339        user    4u  IPv4 54062151      0t0  TCP 127.0.0.1:8089 (LISTEN)

How do I get UF 9.1 to listen on port 8089? Or does UF 9.1 not work the same way? Thanks
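If the cause is the 9.1 behavior change in which new universal forwarder installations no longer open the management port by default, a hedged sketch of the server.conf override that might re-enable it is below. Treat both the diagnosis and the stanza as assumptions to verify against the 9.1 release notes and the server.conf spec for your version.

# $SPLUNK_HOME/etc/system/local/server.conf
# Assumption: the 9.1 UF ships with the management port disabled by default.
[httpServer]
disableDefaultPort = false

Restart the forwarder afterwards (./splunk restart) and re-check with lsof.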
Hi, I have an issue where my UF installed on a Linux server is not uploading data to Splunk from a specific folder. My inputs.conf file contains multiple similar setups for several folders to be uploaded. Everything is working perfectly except for one folder. In the inputs.conf file, I have the following setup for this folder:

[monitor:///DATA/remotelogs-ORACLE-MESS/test/*]
index=test_bd_oracle
sourcetype=test_oracle:audit:xml
host_segment = 3

This setup is to upload all files under the path /DATA/remotelogs-ORACLE-MESS/test/. However, no files are being uploaded. What is also weird is that when I open one of those files using the Linux Vim command, a temporary copy of that file is auto-created with the extension .swp, and the UF UPLOADS the .swp file. Any help is appreciated. Thank you.
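This doesn't explain why the original files are skipped (splunkd.log on the forwarder is worth checking for messages about that path), but a small sketch of the same stanza with the Vim swap files excluded, assuming those .swp copies are unwanted noise:

[monitor:///DATA/remotelogs-ORACLE-MESS/test/*]
index=test_bd_oracle
sourcetype=test_oracle:audit:xml
host_segment = 3
# Assumption: editor temp files should never be indexed; blacklist is a regex
# matched against the full path of each file under the monitor stanza.
blacklist = \.swp$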
Hi All, I am trying to create a single value panel like below and add a tooltip which displays values dynamically. But when I implemented this, the single value panel disappeared, and the output of the mouseover tooltip query is displayed as a panel in Splunk as shown below. Below is the source code.

<label>Finance Job Status Clone</label>
<row>
  <panel id="panel1">
    <title>Functional Area</title>
    <search>
      <query>| makeresults | eval FA="HR and Finance ACN"</query>
      <earliest>-15m</earliest>
      <latest>now</latest>
    </search>
    <html id="htmlToolTip1" depends="$tokToolTipShow1$">
      <!-- Style for Tooltip Text for center alignment with panel -->
      <style>
        #htmlToolTip1{
          margin:auto !important;
          width: 20% !important;
        }
      </style>
      <div class="tooltip fade top in">
        <div class="tooltip-arrow"/>
        <div class="tooltip-inner">$tokToolTipText1$</div>
      </div>
    </html>
    <table>
      <search>
        <query>*Search query*</query>
        <earliest>1691485200</earliest>
        <latest>1691658000</latest>
        <done>
          <set token="tokToolTipText1">$result.Job$</set>
        </done>
      </search>
      <option name="refresh.display">progressbar</option>
    </table>
  </panel>
</row>
</dashboard>

In the above source code, the output of the struck-through query should appear as a dashboard panel (first screenshot), and the output of the second search query should appear when we hover over that panel. Could someone please suggest what's wrong in the above code?
We would very much like to restrict certain users in our Splunk environment to the apps that have been provided to them and prevent them from reaching the Search interface. We have established separate roles for each app and assigned the users to those roles, but we are having some difficulty determining exactly which set of capabilities the roles require for the apps to function while making sure the users can't reach the search bar. We removed the "Open in Search" option from the bottom of the dashboard panels, and we would like to remove access to the Search & Reporting app for all but the necessary roles. We just want to be sure everything still functions for the users in their various apps. Any guidance would be helpful. Thanks!
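One common approach is to restrict read access on the Search & Reporting app itself, either through Apps > Manage Apps > Permissions in Splunk Web or by overriding the app's metadata; roles without read access no longer see the app or its search view. A minimal sketch, assuming a default install path and hypothetical role names (your app-specific roles would keep read on their own apps only):

# $SPLUNK_HOME/etc/apps/search/metadata/local.meta
# Assumption: only admin-type roles should be able to open Search & Reporting.
[]
access = read : [ admin ], write : [ admin ]

Test with one of the restricted accounts afterwards, since dashboards that rely on objects shared from the search app may also need those objects moved or re-shared.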
Hi, I created a job runner. It's not fetching any results, but when run separately in search it gives me data. Screenshot for reference. Any help here, please?
Hello. During issue creation with Jira Service Desk, custom fields get ignored, i.e. an issue gets created with the mandatory fields like description, summary, etc.; however, custom fields are not populated. The syntax of the custom field is like "customfield_21264": "Hello".

I can successfully set this custom field if I create the issue via HTTP with curl:

$ curl -k -u 'account:password' --request POST --url 'https://jira-host/rest/api/2/issue' --header 'Accept: application/json' --header 'Content-Type: application/json' --data '{"fields": { "project": { "key": "PROJ" }, "summary": "Pictures", "description": "SIEM-Eng", "issuetype": {"name":"Event"}, "customfield_21264": "Hello"}}'
Hello, when I get results from a search query, not all pages of results are displayed. For example, I searched logs from 9 AM to 7 PM (10 hours), but the results I got only go up to 1 PM; after that they are missing. Please refer to the screen capture for a better understanding. Thanks, Ragav
Hi all! I have a field called "correlation id" in my search output, from which I am trying to extract another field called "key".

e.g. Correlation id field value: Stores_XstorePOSError_tjm1554_2023320
Its corresponding key value: Stores_XstorePOSError_tjm1554, which I am able to achieve using this regex:

| rex field=correlation_id "^(?P<key>(?P<geo>(\w+[\._])?Stores)[\._](?P<incident_group>[^\._]+)([\._][^\._]+)?[\._](?P<device>[a-zA-Z]{3,4}[a-zA-Z\d]*))([\._])?"

Unfortunately, this is not working for some correlation ids. e.g.:
Correlation id field value: STP_Stores_DiskSpace_stp-44slcapp9_20230809
Key value coming out: STP_Stores_DiskSpace_stp

I assume it is because the regex only allows "_" and not "-". How do I fix it?
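If that diagnosis is right, one sketch of a fix is to allow hyphens in the device character class; the only change below is the "-" added to the final class, and it is worth verifying against your full set of correlation ids:

| rex field=correlation_id "^(?P<key>(?P<geo>(\w+[\._])?Stores)[\._](?P<incident_group>[^\._]+)([\._][^\._]+)?[\._](?P<device>[a-zA-Z]{3,4}[a-zA-Z\d-]*))([\._])?"

With this, STP_Stores_DiskSpace_stp-44slcapp9_20230809 should yield key=STP_Stores_DiskSpace_stp-44slcapp9, while the original examples still match as before.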
Hello, I have a dashboard with a graph that shows values per weekday. The goal is, for the vertical line circled in green in the photo, to change the value assigned to it dynamically when selecting the filter. Is it possible to do that? If so, how?
I am unable to find proper documentation on the REST API endpoints for accessing data models, datasets, etc. of my Splunk Enterprise instance, and I have not been able to access that data through the REST API via a Postman configuration. Any help would be appreciated.
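A hedged sketch of one call worth trying (host, port, and credentials below are placeholders, and the endpoint should be confirmed against the REST API Reference for your Splunk version): the datamodel/model endpoint returns the data model definitions known to the search head.

# List data models as JSON (placeholder credentials; -k skips TLS verification for a self-signed cert)
curl -k -u admin:changeme "https://localhost:8089/services/datamodel/model?output_mode=json&count=0"

The same request can be reproduced in Postman with Basic Auth and the output_mode=json query parameter.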
Hi Team, is it possible to recover my previous cloud instance URL (expired due to the 15-day limitation)? Thanks.
Hi, I would like to create an environment to practice Splunk Enterprise as a standalone deployment on Windows, and I would also like to know where to run the commands, as we do for Linux.
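On Windows the CLI lives in the bin folder of the install directory and is run from Command Prompt or PowerShell; a short sketch assuming the default install path:

cd "C:\Program Files\Splunk\bin"
.\splunk.exe start
.\splunk.exe status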
An interesting field shows a value count, but when I click it, it gets automatically added to the search and shows 0 events. If I use * it works, but if I search for a particular string it shows 0 events.

index=abc
Index=abdc cluster_name="abc"   (not working)
Index=abdc cluster_name="*"     Showing Result
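One quick diagnostic sketch, using the index name from the post, to see exactly what values are stored in the field (case differences, padding, or slightly different strings are common reasons an exact match returns 0 events):

index=abdc
| stats count by cluster_name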
Hi all, I am having trouble extracting values from a structure. Here is the structure of an event:

Event{
  ID: user_1
  data: {
    c:[
      {
        Case Name: case_A
        Start Time: 2023.08.10 13:26:37.867787
        Stop Time: 2023.08.10 13:29:42.159543
      }
      {
        Case Name: case_B
        Start Time: 2023.08.10 13:29:42.159543
        Stop Time: 2023.08.10 13:29:48.202143
      }
      {
        Case Name: case_C
        Start Time: 2023.08.10 13:29:48.202143
        Stop Time: 2023.08.10 13:29:51.193276
      }
    ]
  }
}

I tried to compose a table for a lookup as below:

ID      case_name  case_start_time             case_stop_time
user_1  case_A     2023.08.10 13:26:37.867787  2023.08.10 13:29:42.159543
user_1  case_B     2023.08.10 13:29:42.159543  2023.08.10 13:29:48.202143
user_1  case_C     2023.08.10 13:29:48.202143  2023.08.10 13:29:51.193276

But I fail to compose it as expected; I can only get a table with one row per event, where case_name, case_start_time, and case_stop_time are multivalue cells containing all three cases:

ID      case_name  case_start_time             case_stop_time
user_1  case_A     2023.08.10 13:26:37.867787  2023.08.10 13:29:42.159543
        case_B     2023.08.10 13:29:42.159543  2023.08.10 13:29:48.202143
        case_C     2023.08.10 13:29:48.202143  2023.08.10 13:29:51.193276

Here is my code:

index="my_index"
| rename "data.c{}.Case Name" as case_name, "data.c{}.Start Time" as case_start_time, "data.c{}.Stop Time" as case_stop_time
| table ID case_name case_start_time case_stop_time

Can anyone help me compose the output table I need? I hope to pair each case_name with its own case_start_time and case_stop_time. Thank you so much.
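A minimal sketch of one way to pair the multivalue fields row by row, using mvzip to keep each case name aligned with its own start and stop times and mvexpand to split them into separate rows (the "|" delimiter is an arbitrary choice that just must not occur in the values):

index="my_index"
| rename "data.c{}.Case Name" as case_name, "data.c{}.Start Time" as case_start_time, "data.c{}.Stop Time" as case_stop_time
| eval zipped=mvzip(mvzip(case_name, case_start_time, "|"), case_stop_time, "|")
| mvexpand zipped
| eval case_name=mvindex(split(zipped, "|"), 0), case_start_time=mvindex(split(zipped, "|"), 1), case_stop_time=mvindex(split(zipped, "|"), 2)
| table ID case_name case_start_time case_stop_time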
Hi, I want to run the command "splunk reload deploy-server" on my deployment server, but it fails with the following error:

[root@server etc]# su splunk
[splunk@server etc]$ splunk reload deploy-server
Your session is invalid. Please login.
ERROR: IP address 127.0.0.1 not in server certificate. Please see server.conf/[sslConfig]/cliVerifyServerName for details.
Couldn't request server info: Couldn't complete HTTP request: error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed

I'm running Splunk Enterprise 9.0.4. The deployment server also acts as a license server and monitoring console. Of course, my certificate does not have the localhost IP in it.

My Splunk has a systemd unit file:

#This unit file replaces the traditional start-up script for systemd
#configurations, and is used when enabling boot-start for Splunk on
#systemd-based Linux distributions.
[Unit]
Description=Systemd service file for Splunk, generated by 'splunk enable boot-start'
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
Restart=always
ExecStart=/data/splunk/bin/splunk _internal_launch_under_systemd
KillMode=mixed
KillSignal=SIGINT
TimeoutStopSec=360
LimitNOFILE=65536
LimitRTPRIO=99
SuccessExitStatus=51 52
RestartPreventExitStatus=51
RestartForceExitStatus=52
User=splunk
Group=splunk
Delegate=true
CPUShares=1024
MemoryLimit=24949776384
PermissionsStartOnly=true
ExecStartPost=-/bin/bash -c "chown -R splunk:splunk /sys/fs/cgroup/cpu/system.slice/%n"
ExecStartPost=-/bin/bash -c "chown -R splunk:splunk /sys/fs/cgroup/memory/system.slice/%n"

[Install]
WantedBy=multi-user.target

The sslConfig part of my server.conf:

[sslConfig]
useClientSSLCompression = true
sslVersions = tls1.2
sslVerifyServerCert = true
sslVerifyServerName = true
requireClientCert = false
serverCert = <Combined PEM Cert>
sslRootCAPath = <Root CA PEM Cert>
sslPassword = <Password>
cliVerifyServerName = true

If you need any more info, let me know.
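Since cliVerifyServerName is enabled and the CLI targets 127.0.0.1 by default, that address will never match the certificate. One workaround sketch (the FQDN below is a placeholder; use a name that is actually present in the certificate's CN/SAN) is to point the CLI at that name with the global -uri parameter:

splunk reload deploy-server -uri https://deploy-server.example.com:8089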
Hi everyone, I have a question about the frozenTimePeriodInSecs parameter. Here are my settings inside the indexes.conf file (/opt/splunk/etc/system/local/indexes.conf):

[_internal]
frozenTimePeriodInSecs = 864000 # Data retention set to 10 days.
maxTotalDataSizeMB = 750

[_audit]
frozenTimePeriodInSecs = 864000 # Data retention set to 10 days.
maxTotalDataSizeMB = 750

What I would expect is that buckets in _internal and _audit where all events are older than 10 days get deleted. However, this is not the case. Does anyone know why? On the other hand, maxTotalDataSizeMB does work as expected.

I have checked a couple of places for hints on why frozenTimePeriodInSecs does not work. The results of those checks are further down below as screenshots.
- buckets: Whether there are buckets that contain only events older than 10 days.
- btool: Whether the settings are actually taken into account.
- monitoring console: Whether the settings are actually taken into account.
- _internal logs: Whether there are freeze events occurring. They only appear for maxTotalDataSizeMB.

[Screenshots: _audit buckets, _audit btool output, monitoring console 1, monitoring console 2, freeze events]
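For what it's worth, a bucket is only frozen by frozenTimePeriodInSecs once its newest event is older than the retention period, and hot buckets must roll to warm first, so a single long-lived bucket can keep data around well past 10 days. A quick sketch to check the age of the newest event in each bucket (using dbinspect's endEpoch field):

| dbinspect index=_audit
| eval age_days=round((now() - endEpoch) / 86400, 1)
| table bucketId state startEpoch endEpoch age_days
| sort - age_days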