All Topics
I'm currently working on optimizing our Splunk deployment and would like to gather some insights on the performance metrics of Splunk forwarders.

Transfer Time for Data Transmission: I'm interested in understanding the typical time it takes for a Splunk forwarder to send a significant volume of data, say 10 GB, to the indexer. Are there any benchmarks or best practices for estimating this transfer time? Are there any factors or configurations that can significantly affect it?

Expected EPS (Events Per Second): Additionally, I'm curious about the achievable Events Per Second (EPS) rates with Splunk forwarders. What are the typical EPS rates that organizations achieve in real-world scenarios? Are there any strategies or optimizations that can help improve EPS rates while maintaining stability and reliability?

Any insights, experiences, or recommendations regarding these performance metrics would be greatly appreciated. Thank you!
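If you want to measure what a forwarder is actually achieving rather than estimate it, the forwarder's own metrics.log (indexed in _internal) is a good starting point. A minimal sketch, assuming the forwarders send their internal logs and that the per_host_thruput metric events carry the usual kbps field:

index=_internal source=*metrics.log* group=per_host_thruput
| timechart span=5m avg(kbps) by series

Swap avg(kbps) for avg(eps) to look at event rates instead. Keep in mind the universal forwarder throttles its output to maxKBps = 256 in limits.conf by default, so measured throughput often reflects that cap rather than network or indexer capacity.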
Hi All, how can I create a PowerPoint presentation of my Splunk dashboard to present to the client? Are there any samples or blogs? And how do I create a requirements document for a Splunk dashboard? Thanks, Karthigeyan
Hi Dear Malaysian Splunkers, as part of the SplunkTrust tasks, I have created a Splunk User Group for Kuala Lumpur, Malaysia: https://usergroups.splunk.com/kuala-lumpur-splunk-user-group/ Please join, and let's discuss monthly about Splunk and getting more value from the data. See you there. Thanks. Best Regards, Sekar
Hello, I have this search for tabular format:

index="webbff" "SUCCESS: REQUEST" | table _time verificationId code BROWSER BROWSER_VERSION OS OS_VERSION USER_AGENT status | rename verificationId as "Verification ID", code as "HRC" | sort -_time

The issue is with the BROWSER column: even when a user accesses our app via Edge, it still shows as Chrome. I found a difference between the two logs; the one from access via Edge contains "Edg" in the USER_AGENT.

Edge logs:

metadata={BROWSER=Chrome, LOCALE=, OS=Windows, USER_AGENT=Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/xxx.xx (KHTML, like Gecko) Chrome/124.0.0.0 Safari/xxx.xx Edg/124.0.0.0, BROWSER_VERSION=124, LONGITUDE=, OS_VERSION=10, IP_ADDRESS=, APP_VERSION=, LATITUDE=})

Chrome logs:

metadata={BROWSER=Chrome, LOCALE=, OS=Mac OS X, USER_AGENT=Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/xxx.xx (KHTML, like Gecko) Chrome/124.0.0.0 Safari/xxx.xx, BROWSER_VERSION=124, LONGITUDE=, OS_VERSION=10, IP_ADDRESS=, APP_VERSION=, LATITUDE=})

My question is: how do I create a conditional search for BROWSER, i.e. if the USER_AGENT contains Edg then Edge, else BROWSER?
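This is typical of Chromium-based Edge, which sends a Chrome-style user agent plus an extra Edg/ token, so the upstream BROWSER field gets derived as Chrome. A minimal sketch of one way to override it at search time, assuming USER_AGENT is already an extracted field:

index="webbff" "SUCCESS: REQUEST"
| eval BROWSER=if(match(USER_AGENT, "Edg/"), "Edge", BROWSER)
| table _time verificationId code BROWSER BROWSER_VERSION OS OS_VERSION USER_AGENT status
| rename verificationId as "Verification ID", code as "HRC"
| sort -_time

The eval runs before the table command, so the corrected value is what gets displayed.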
Hey guys, with data retention being set, is there a way to whitelist a specific container to prevent it from being deleted?
I apologize if the following question is a bit basic, but I'm confused by the results. When I append the following clause to the "search" line, it returns a shortened list of results (from 47 to 3): AND ("a" in ("a"))

Original code:

index=main_service ABC_DATASET Arguments.email="my_email@company_X.com"
| rename device_model as hardware, device_build as builds, device_train as trains, ABC_DATASET.Check_For_Feature_Availability as Check_Feature_Availability
| search (Check_Feature_Availability=false) AND ("a" in ("a"))
| table builds, trains, Check_Feature_Availability

I was expecting to see the same number of results. Am I wrong about my expectations, or am I missing something here? TIA
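For reference, the IN operator documented for the search command applies a field to a list of values, not one literal to another; a minimal sketch of the documented form, with hypothetical values:

index=main_service ABC_DATASET Arguments.email="my_email@company_X.com"
| search Check_Feature_Availability=false AND builds IN ("build_1", "build_2")

An expression like ("a" in ("a")) is not evaluated as a boolean by the search command, so its tokens can end up changing what the command matches; if you need an evaluated condition, the where command is the one designed for expressions.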
Could someone help me in deriving a solution for the case below?

Background: We have an app in which we set all our saved searches as durable ones, as we don't want to miss any runs. So if a scheduled search fails at its scheduled time due to any issue (infra-related or resource-related), it will be covered in the next run. I am trying to capture the last status even after the durable logic is applied.

Let's say I have 4 events. The first two runs (scheduled_time=12345 AND scheduled_time=12346) of ALERT ABC failed. In the third schedule, at 12347, those two are covered, 12347 itself is also covered, and all are success. So if I start with a query like this:

.. | stats last(status) by savedsearch_name scheduled_time

I get output like this:

savedsearch_name  last(status)  scheduled_time
ABC               skipped       12345
ABC               skipped       12346
ABC               success       12347

I need to write logic that takes:

A. Jobs whose last status is not success - here ABC 12345 and ABC 12346
B. Events where durable_cursor != scheduled_time, so it picks events for that job where one run covered multiple missed schedules. In this case it picks my EVENT 3.
C. Then I have to derive it like this: take the failed saved search name with the scheduled time at which it failed, and check whether that scheduled_time falls between the durable_cursor and scheduled_time of a later run of the same job with status=success.

.. TAKE FAILED SAVEDSEARCH NAME TIME as FAILEDTIME
| where durable_cursor!=scheduled_time
| eval Flag=if(FAILEDTIME>=durable_cursor OR FAILEDTIME<=scheduled_time, "COVERED", "NOT COVERED")

EVENT 4 : savedsearch_name = ABC ; status = success ; scheduled_time =12347
EVENT 3 : savedsearch_name = ABC ; status = success ; durable_cursor=12345 scheduled_time =12347
EVENT 2 : savedsearch_name = ABC ; status = skipped ; scheduled_time =12346
EVENT 1 : savedsearch_name = ABC ; status = skipped ; scheduled_time =12345

How I derived it so far, and where I am stuck: I split this into two reports.

First report: takes all the jobs whose last status is not success, and tables the output with fields SAVEDSEARCH NAME, SCHEDULEDTIME AS FAILEDTIME, LAST(STATUS) as FAILEDSTATUS. Then I saved this result in a lookup. This has to run over the last one-hour window.

Second report: it refers to the lookup, takes the failed savedsearch names from the lookup, searches only those events in the Splunk internal indexes where durable_cursor!=scheduled_time, and then checks whether the failed savedsearch time falls between durable_cursor and the next scheduled_time with status=success.

This works fine if I have one savedsearch job for one time, but not for multiple values. Let's say Job A itself has four runs in an hour and all except the first are failures. In this case I could not cover it, because looking up values for a multivalue field does not match exactly. Here is the question I posted about that: https://community.splunk.com/t5/Splunk-Search/How-to-retrieve-value-from-lookup-for-multivalue-field/m-p/684637#M233699

If somebody has any alternate or better thoughts on this, can you please throw some light on it?
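One way to avoid the lookup round-trip entirely is to keep everything in a single search: collect each job's success windows (durable_cursor to scheduled_time) into a multivalue field with eventstats, then test every failed run against every window with mvmap. A rough sketch, assuming the scheduler events live in _internal with sourcetype=scheduler and carry savedsearch_name, status, scheduled_time, and durable_cursor:

index=_internal sourcetype=scheduler savedsearch_name=*
| stats last(status) as last_status last(durable_cursor) as durable_cursor by savedsearch_name scheduled_time
| eventstats list(eval(if(last_status=="success" AND durable_cursor!=scheduled_time, durable_cursor . ":" . scheduled_time, null()))) as success_windows by savedsearch_name
| where last_status!="success"
| eval covered_hits=mvmap(success_windows, if(tonumber(scheduled_time) >= tonumber(mvindex(split(success_windows, ":"), 0)) AND tonumber(scheduled_time) <= tonumber(mvindex(split(success_windows, ":"), 1)), "hit", null()))
| eval Flag=if(mvcount(covered_hits) > 0, "COVERED", "NOT COVERED")
| table savedsearch_name scheduled_time last_status Flag

Inside mvmap, success_windows refers to the current element of the multivalue field, so each "cursor:time" window is split apart and compared against the failed run's scheduled_time; because every window for the job is checked, multiple failed runs per job are handled naturally.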
In this Knowledge Base Article, we'll walk you through the process of collecting Prometheus metrics from a Python application, forwarding them to the Cisco Cloud Observability platform using OpenTelemetry, and visualizing them for effective monitoring.

Note: Cisco Cloud Observability has been deprecated in favor of Splunk, so if you are not an existing customer already onboarded to Cisco Cloud Observability, this article is not for you.

Setting up the Python Application

Let's start with creating a Python application that generates Prometheus metrics. We'll use the prometheus_client library to create and expose these metrics. If you haven't installed the library, you can do so with:

pip3 install prometheus_client

Now, let's dive into the Python script:

import random
import time

from prometheus_client import start_http_server, Counter, Summary

# Define Prometheus metrics
some_counter = Counter(name="myapp_some_counter_total", documentation="Sample counter")
request_latency = Summary(name="myapp_request_latency_seconds", documentation="Request latency in seconds")

def main() -> None:
    start_http_server(port=9090)
    while True:
        try:
            # Simulate application logic here
            process_request()
            time.sleep(5)  # Sleep for a few seconds between metric updates
        except KeyboardInterrupt:
            break

def process_request():
    # Simulate processing a request and record metrics
    with request_latency.time():
        random_sleep_time = random.uniform(0.1, 0.5)
        time.sleep(random_sleep_time)
        some_counter.inc()

if __name__ == "__main__":
    main()

This Python script sets up a simple HTTP server on port 9090 and generates two Prometheus metrics: myapp_some_counter_total and myapp_request_latency_seconds.

To produce the load:

curl -v http://localhost:9090

The logs will look like:

* Trying 127.0.0.1:8081...
* Connected to localhost (127.0.0.1) port 8081 (#0)
> GET / HTTP/1.1
> Host: localhost:8081
> User-Agent: curl/7.81.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
* HTTP 1.0, assume close after body
< HTTP/1.0 200 OK
< Date: Thu, 07 Dec 2023 15:58:00 GMT
< Server: WSGIServer/0.2 CPython/3.10.12
< Content-Type: text/plain; version=0.0.4; charset=utf-8
< Content-Length: 2527
<
# HELP python_gc_objects_collected_total Objects collected during gc
# TYPE python_gc_objects_collected_total counter
python_gc_objects_collected_total{generation="0"} 371.0
python_gc_objects_collected_total{generation="1"} 33.0
python_gc_objects_collected_total{generation="2"} 0.0
# HELP python_gc_objects_uncollectable_total Uncollectable objects found during GC
# TYPE python_gc_objects_uncollectable_total counter
python_gc_objects_uncollectable_total{generation="0"} 0.0
python_gc_objects_uncollectable_total{generation="1"} 0.0
python_gc_objects_uncollectable_total{generation="2"} 0.0
# HELP python_gc_collections_total Number of times this generation was collected
# TYPE python_gc_collections_total counter
python_gc_collections_total{generation="0"} 40.0
python_gc_collections_total{generation="1"} 3.0
python_gc_collections_total{generation="2"} 0.0

Deploying OpenTelemetry Collector

To collect and forward metrics to Cisco Cloud Observability, we'll use the OpenTelemetry Collector. This component plays a vital role in gathering metrics from various sources and exporting them to different backends. In this case, we'll configure it to forward metrics to AppDynamics.

Installing OpenTelemetry Collector on Ubuntu

Make sure you're on an Ubuntu machine. If not, adjust the installation instructions accordingly.
Install OpenTelemetry Collector: https://medium.com/@abhimanyubajaj98/linux-host-monitoring-with-appdynamics-deploying-opentelemetry-collector-via-terraform-a6971f02c0b2

For this tutorial, we will edit /opt/appdynamics/appdynamics.conf and add another variable:

APPD_OTELCOL_EXTRA_CONFIG=--config=file:/opt/appdynamics/config.yaml

Our appdynamics.conf file will look like:

APPD_OTELCOL_CLIENT_ID=<client-id>
APPD_OTELCOL_CLIENT_SECRET=<client-secret>
APPD_OTELCOL_TOKEN_URL=<tenant-url>
APPD_OTELCOL_ENDPOINT_URL=<tenant_endpoint>
APPD_LOGCOL_COLLECTORS_LOGGING_ENABLED=true
APPD_OTELCOL_EXTRA_CONFIG=--config=file:/opt/appdynamics/config.yaml

We will create the config.yaml as shown below.

Configuring OpenTelemetry Collector

Create a config.yaml configuration file with the following content:

extensions:
  oauth2client:
    client_id: "${env:APPD_OTELCOL_CLIENT_ID}"
    client_secret: "${env:APPD_OTELCOL_CLIENT_SECRET}"
    token_url: "${env:APPD_OTELCOL_TOKEN_URL}"
  health_check:
    endpoint: 0.0.0.0:13133
  zpages:
    endpoint: 0.0.0.0:55679

receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318
  prometheus:
    config:
      scrape_configs:
        - job_name: 'prometheus'
          static_configs:
            - targets: ['localhost:9090']

processors:
  # defaults based on perf testing for k8s nodes
  batch:
    send_batch_size: 1000
    timeout: 10s
    send_batch_max_size: 1000
  memory_limiter:
    check_interval: 5s
    limit_mib: 1536

exporters:
  otlphttp:
    retry_on_failure:
      max_elapsed_time: 180
    metrics_endpoint: "${env:APPD_OTELCOL_ENDPOINT_URL}/v1/metrics"
    traces_endpoint: "${env:APPD_OTELCOL_ENDPOINT_URL}/v1/trace"
    logs_endpoint: "${env:APPD_OTELCOL_ENDPOINT_URL}/v1/logs"
    auth:
      authenticator: oauth2client

service:
  telemetry:
    logs:
      level: debug
  extensions: [zpages, health_check, oauth2client]
  pipelines:
    metrics:
      receivers: [prometheus, otlp]
      processors: [memory_limiter, batch]
      exporters: [otlphttp]
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [otlphttp]

In this configuration file, we have set up the OpenTelemetry Collector to receive metrics from the Prometheus receiver and export them to Cisco Cloud Observability using the OTLP exporter.

Forwarding Metrics to Cisco Cloud Observability

With the OpenTelemetry Collector configured, it will now collect metrics from your Python application and forward them to Cisco Cloud Observability. This seamless integration enables you to monitor your application's performance in real time.

Monitoring Metrics in AppDynamics

You can use UQL to query the metrics. This is a very basic example; you can also create attributes. To learn more about the AppDynamics metric model, check out this AppDynamics Docs page
Hello, I have static data of about 200,000 rows (which will potentially grow) that needs to be moved to a summary index daily.

1) Is it possible to move the data from dbxquery to a summary index and re-write the data daily, so that there will not be old data with an earlier _time after the re-write?
2) Is it possible to use a summary index without _time and make it behave like dbxquery?

The reason I do this is that I want to do data manipulation (split, etc.) and move the result to another "placeholder" other than a CSV or dbxquery, so I can perform correlation with another index. For example:

| dbxquery query="SELECT * from Table_Test"

The scheduled report for the summary index will add something like this:

summaryindex spool=t uselb=t addtime=t index="summary" file="test_file" name="test" marker="hostname=\"https://test.com/\",report=\"test\""

Please suggest. Thank you for your help.
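For the scheduled-search side, one common pattern is to write the rows yourself with the collect command instead of relying on the report's summary-indexing action. A minimal sketch, assuming a summary index named summary already exists:

| dbxquery query="SELECT * from Table_Test"
| eval _time=now()
| collect index=summary source="test_file" marker="report=\"test\""

Stamping _time=now() groups each day's snapshot under the load time, so a search over the summary index scoped to the latest day effectively sees only the current copy of the table, without needing to delete the older snapshots.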
Hey, I installed the Splunk Enterprise free trial on an Ubuntu server, and this is the first time I am using Splunk, so I am following a video. I am having trouble locating the "local event logs" option while adding data to Splunk from a universal forwarder on a Windows server. I want to capture event logs from the Windows server and see them in Splunk. Please help me out as soon as possible. Thank you.
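For context: the "Local event logs" option in Add Data only appears when the Splunk instance itself runs on Windows, so it won't show up on an Ubuntu server. When the data comes from a universal forwarder on Windows, the usual route is to configure the Windows event log input in inputs.conf on the forwarder. A minimal sketch, assuming the default main index (index name and channels are placeholders):

[WinEventLog://Security]
disabled = 0
index = main

[WinEventLog://Application]
disabled = 0
index = main

After editing inputs.conf, restart the forwarder so the new inputs take effect.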
Hello! I have been trying to get some logs into a metric index and I'm wondering if they can be improved with better field extraction. This is what the logs look like:

t=1713291900 path="/data/p1/p2" stat=s1:s2:s3:s4 type=COUNTER value=12
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s6 type=COUNTER value=18
t=1713291900 path="/data/p1/p2" stat=s1:s2:s3:s7 type=COUNTER value=2
t=1713291900 path="/data/p1/p2" stat=s1:s2:s3 type=COUNTER value=104
t=1713291900 path="/data/p1/p2" stat=s1:s2:s3 type=COUNTER value=18
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s8 type=COUNTER value=18
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s8:s9:10 type=COUNTER value=8
t=1713291900 path="/data/p1/p2" stat=s1:s2:s3:s4 type=COUNTER value=104
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s8:s9 type=COUNTER value=140
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s8:s9 type=COUNTER value=3
t=1713291900 path="/data/p1/p2" stat=s1:s2:s5:s8:s9 type=COUNTER value=1
t=1713291900 path="/data/p3/p4" stat=s20 type=COUNTER value=585
t=1713291900 path="/data/p3/p4" stat=s21 type=COUNTER value=585
t=1713291900 path="/data/p3/p4" stat=s22 type=TIMEELAPSED value=5497.12
t=1713291900 path="/data/p3/p5" stat=s23 type=COUNTER value=585
t=1713291900 path="/data/p1/p5" stat=s24 type=COUNTER value=585
t=1713291900 path="/data/p1/p5" stat=s25 type=TIMEELAPSED value=5497.12
t=1713291900 path="/data/p1/p5/p6" stat=s26 type=COUNTER value=253
t=1713291900 path="/data/p1/p5/p6" stat=s27 type=GAUGE value=1

t is the epoch time. path is the path of a URL, which is in double quotes, always starts with /data/, and can have anywhere between 2 and 7 (maybe more) subpaths. stat is either a single stat (like s20) OR a colon-delimited string of between 3 and 6 stat names. type is either COUNTER, TIMEELAPSED, or GAUGE. value is the metric.

Right now I've been able to get a metric index set up that:

Assigns t as the timestamp and ignores t as a dimension or metric
Makes value the metric
Makes path, stat, and type dimensions

This is my transforms.conf:

[metrics_field_extraction]
REGEX = ([a-zA-Z0-9_\.]+)=\"?([a-zA-Z0-9_\.\/:-]+)

[metric-schema:cm_log2metrics_keyvalue]
METRIC-SCHEMA-MEASURES = value
METRIC-SCHEMA-WHITELIST-DIMS = stat,path,type
METRIC-SCHEMA-BLACKLIST-DIMS = t

And props.conf (it's basically log2metrics_keyvalue; we need cm_ to match our license):

[cm_log2metrics_keyvalue]
DATETIME_CONFIG =
LINE_BREAKER = ([\r\n]+)
METRIC-SCHEMA-TRANSFORMS = metric-schema:cm_log2metrics_keyvalue
TRANSFORMS-EXTRACT = metrics_field_extraction
NO_BINARY_CHECK = true
category = Log to Metrics
description = '<key>=<value>' formatted data. Log-to-metrics processing converts the keys with numeric values into metric data points.
disabled = false
pulldown_type = 1

path and stat are extracted exactly as they appear in the logs. However, I'm wondering if it's possible to get each part of the path & stat fields into its own dimension, so instead of:

_time                    path       stat      value  type
4/22/24 2:20:00.000 PM   /p1/p2/p3  s1:s2:s3  500    COUNTER

It would be:

_time                    path1  path2  path3  stat1  stat2  stat3  value  type
4/22/24 2:20:00.000 PM   p1     p2     p3     s1     s2     s3     500    COUNTER

My thinking was that we'd be able to get really granular stats and interesting graphs. Thanks in advance!
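The splitting itself is easy to express; here is a search-time sketch of the logic using split and mvindex, just to illustrate (getting these as true index-time dimensions would need the equivalent done in transforms, and the segment counts here are illustrative):

| eval path_parts=split(replace(path, "^/data/", ""), "/")
| eval path1=mvindex(path_parts, 0), path2=mvindex(path_parts, 1), path3=mvindex(path_parts, 2)
| eval stat_parts=split(stat, ":")
| eval stat1=mvindex(stat_parts, 0), stat2=mvindex(stat_parts, 1), stat3=mvindex(stat_parts, 2)
| fields - path_parts stat_parts

Since the number of segments varies, positions past the end simply come back null, so the sparse dimensions fall out naturally.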
I'm having issues getting parsing working using a custom config otel specification. The `log.file.path` should be in one of these two formats:

1. /splunk-otel/app-api-starter-project-template/app-api-starter-project-template-96bfdf8866-9jz7m/app-api-starter-project-template.log
2. /splunk-otel/app-api-starter-project-template/app-api-starter-project-template.log

One with and one without the pod name. We are doing it this way so that we only index one application log file in a set of directories rather than picking up a ton of Kubernetes logs that we will never review, but yet have to store. At the bottom is the full otel config.

We are noticing that regardless of the file path (1 or 2 above), it keeps going to the default option, and in the `catchall` attribute in Splunk, it has the value of log.file.path, which is always in the 1st format above (e.g. /splunk-otel/app-api-starter-project-template/app-api-starter-project-template-96bfdf8866-9jz7m/app-api-starter-project-template.log).

- id: catchall
  type: move
  from: attributes["log.file.path"]
  to: attributes["catchall"]

Why is it not going to the route `parse-deep-filepath`, considering the regex should match? We want to be able to pull out the `application name`, the `pod name`, and the `namespace`, which are all reflected in the full `log.file.path`.

receivers:
  filelog/mule-logs-volume:
    include:
      - /splunk-otel/*/app*.log
      - /splunk-otel/*/*/app*.log
    start_at: beginning
    include_file_path: true
    include_file_name: true
    resource:
      com.splunk.sourcetype: mule-logs
      k8s.cluster.name: {{ k8s_cluster_instance_name }}
      deployment.environment: {{ aws_environment_name }}
      splunk_server: {{ splunk_host }}
    operators:
      - type: router
        id: get-format
        routes:
          - output: parse-deep-filepath
            expr: 'log.file.path matches "^/splunk-otel/[^/]+/[^/]+/app-[^/]+[.]log$"'
          - output: parse-shallow-filepath
            expr: 'log.file.path matches "^/splunk-otel/[^/]+/app-[^/]+[.]log$"'
          - output: nil-filepath
            expr: 'log.file.path matches "^<nil>$"'
        default: catchall
      # Extract metadata from file path
      - id: parse-deep-filepath
        type: regex_parser
        regex: '^/splunk-otel/(?P<namespace>[^/]+)/(?P<pod_name>[^/]+)/(?P<application>[^/]+)[.]log$'
        parse_from: attributes["log.file.path"]
      - id: parse-shallow-filepath
        type: regex_parser
        regex: '^/splunk-otel/(?P<namespace>[^/]+)/(?P<application>[^/]+)[.]log$'
        parse_from: attributes["log.file.path"]
      - id: nil-filepath
        type: move
        from: attributes["log.file.path"]
        to: attributes["nil_filepath"]
      - id: catchall
        type: move
        from: attributes["log.file.path"]
        to: attributes["catchall"]

exporters:
  splunk_hec/logs:
    # Splunk HTTP Event Collector token.
    token: "{{ splunk_token }}"
    # URL to a Splunk instance to send data to.
    endpoint: "{{ splunk_full_endpoint }}"
    # Optional Splunk source: https://docs.splunk.com/Splexicon:Source
    source: "output"
    # Splunk index, optional name of the Splunk index targeted.
    index: "{{ splunk_index_name }}"
    # Maximum HTTP connections to use simultaneously when sending data. Defaults to 100.
    #max_connections: 20
    # Whether to disable gzip compression over HTTP. Defaults to false.
    disable_compression: false
    # HTTP timeout when sending data. Defaults to 10s.
    timeout: 900s
    tls:
      # Whether to skip checking the certificate of the HEC endpoint when sending data over HTTPS. Defaults to false.
      # For this demo, we use a self-signed certificate on the Splunk docker instance, so this flag is set to true.
      insecure_skip_verify: true

processors:
  batch:

extensions:
  health_check:
    endpoint: 0.0.0.0:8080
  pprof:
    endpoint: :1888
  zpages:
    endpoint: :55679
  file_storage/checkpoint:
    directory: /output/
    timeout: 10s
    compaction:
      on_start: true
      directory: /output/
      max_transaction_size: 65_536

service:
  extensions: [pprof, zpages, health_check, file_storage/checkpoint]
  pipelines:
    logs:
      receivers: [filelog/mule-logs-volume]
      processors: [batch]
      exporters: [splunk_hec/logs]
I have a current Splunk install in my production environment, all running RedHat Linux.  I have a single server w/ Splunk Enterprise installed on it, as well as SplunkForwarder.  I have 100+ other servers w/ SplunkForwarder installed on them all pushing logs to the Splunk Enterprise server.  All servers had v9.1.2 of the forwarder installed, and the Enterprise server was also this version. I recently updated the Splunk Enterprise server, as well as the Splunk Forwarders on all servers, to version 9.2.0.1 successfully.  With one exception.  The forwarder installed on my Splunk Enterprise server (named "splunkenter1") fails.  It displays the error listed below where it says that the splunkforwarder package is conflicting with the splunk install. I have another Splunk Enterprise install (using the same set-up) in another environment, and I did not run into this issue.  That upgrade worked without issue. I've tried Google'ing the issue, but haven't found much.  Anyone have any ideas on what could be causing this, or has anyone seen this before?   [root@splunkenter1 ~]# dnf update splunkforwarder Last metadata expiration check: 0:01:36 ago on Mon 22 Apr 2024 04:47:07 PM UTC. Dependencies resolved. ======================================================================================================== Package Architecture Version Repository Size ======================================================================================================== Upgrading: splunkforwarder x86_64 9.2.0.1-d8ae995bf219 splunk-repo 44 M Transaction Summary ======================================================================================================== Upgrade 1 Package Total download size: 44 M Is this ok [y/N]: y Downloading Packages: splunkforwarder-9.2.0.1-d8ae995bf219.x86_64.rpm 41 MB/s | 44 MB 00:01 -------------------------------------------------------------------------------------------------------- Total 41 MB/s | 44 MB 00:01 Running transaction check Transaction check succeeded. Running transaction test The downloaded packages were saved in cache until the next successful transaction. You can remove cached packages by executing 'dnf clean packages'. 
Error: Transaction test error: file /usr/lib/.build-id/03/f57acc2883000e6b54bf75c7e67d1a07446919 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/06/a82be30cc16ea5bea39f78f8056447e18beb15 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/1a/b0b8e873c6d668dcd3361470954d12004926cd from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/1e/8edb02a946c645cd20558aa8a6b420792f5541 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/35/e87a7fb154de7d5226e5a0a28c80ffd0c1be48 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/3a/3aac493bff5bb22e02b8726142dd67443dd03c from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/42/abc0f2a26bfb13b563104e87287312420c707e from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/44/6a270f1de8d26f47bf9ff9ae778e1fd3332403 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/64/b2324ff715d30c8a91dee6a980d63c291648d8 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/65/274a42201dd21f83996ba7c8bd0ba0dc3894c8 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/6d/dd008477651e7c8febce4699a739aaf188b0ae from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/88/cbe6deabd44a4766207eebf7c5e74f7ed53120 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/8a/6ee8699fb74fb883874a1123d91acf0b0d98a6 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/94/ea2865a21761f062a2db312845c535d5429bfc from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/95/d5fe61c313d8a5616f8a45f6c7d05151283ab6 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/96/b9463c40fc6541345a4b87634e8517281f8d4d from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/99/93008fdae763af21c831956de21501bb09e197 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/9b/2a882e45910da32603baf28a13b1630987184e from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file 
/usr/lib/.build-id/9f/b5fd366b32867d537caa84d4b2b521f5c21083 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/a0/1ae9032915dce67a58e8696c3c9fe195193d77 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/a1/616e140409dc54f0db2bf02ed7e114f07490af from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/b6/6dd3d33542916fff507849621dac5f763a98a2 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/b6/fd3c259a4c6e552d9b067f39e66c03cc134895 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/b7/e3d0b70694caa826df19d93b7341de0decdad3 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/bc/f1c9c6878bb887ef6869012b79c97546983b83 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/c8/d218675e02086588c28882c28b3533069d505c from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/d0/be01f291a5b978e02dcdd0069b82ce8a764dbf from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/d3/7dcf7bcf859ed048625d20139782517947e6e0 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/d7/30a0409850e89f806f3798ca99b378c335b7a5 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/dc/259ac038741ecbd76f6052a9fa403bc5ab5ab3 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/de/294f4dd1fa80d590074161566f06b39b9230fb from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/e0/0ee3712cdbd590286c2b8da49724fdaf6dee15 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/e6/7f07efdda1fcfe82b6ceb170412f22e03d2ab5 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/ec/dc3eeaba4750e657f5910fa2adb21365533f27 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/ee/6addfc324fb4bf57058df3adf7ea55dff4953f from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/f1/0b5a5bc3bcb996183924bd6029efba8290c71a from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/f2/c0dd88030fc9e343f6d9104a5015938cfe3503 
from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/f3/61ef732e036606eef3d78bb13f6d6165bcd927 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/f4/c1fc01304f2796efaabefd2a6350ba67cc9edc from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/f9/3cf5828d46fbdd6e82b2d18a4a5c650b84c185 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/fa/a370a95319b4a8ce1bd239652457843a09c15e from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64 file /usr/lib/.build-id/fd/201b0799acb29720c90a6129be08800ce4b7e5 from install of splunkforwarder-9.2.0.1-d8ae995bf219.x86_64 conflicts with file from package splunk-9.2.0.1-d8ae995bf219.x86_64  
Hello, if we have an "app/local" with conf files on the DS, is it possible that restarting it pushes the DS "app/local" to the HF "app/local" and deletes the custom local conf files on the HF (created from the HF GUI)? Thanks.
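For context: by default the deployment server replaces the whole app directory on the client when it pushes an update, which is why locally created files inside a deployed app can disappear. If the intent is the opposite, i.e. to protect the client's local files, serverclass.conf has an excludeFromUpdate setting; a minimal sketch with a hypothetical server class and app name:

[serverClass:my_hf_class:app:my_app]
excludeFromUpdate = $app_root$/local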
Hello, how can I solve "Events might not be returned in sub-second order due to search memory limits" without increasing the value of the following limits.conf setting: [search] max_rawsize_perchunk? I got the message after I scheduled a query to move more than 150k rows into a summary index. I appreciate your help. Thank you
The Transformative Power of AI and ML in Enhancing Observability

In the realm of IT operations, the integration of Artificial Intelligence (AI) and Machine Learning (ML) is revolutionizing the way organizations maintain observability over their systems. Observability practices are enhanced by AI and ML to predict and prevent incidents before they occur, leading to significant improvements in reliability, availability, and customer experience. These technologies are applied across varying maturity stages, from gaining basic visibility to employing advanced predictive models that preempt potential issues.[1] The financial implications of downtime are profound, with each hour potentially costing organizations an average of $365,000, thus underscoring the critical need for efficient operational practices. This blog explores the transformative impact of AI and ML on observability within IT environments, demonstrating the benefits through various use cases and highlighting the substantial return on investment for organizations that adopt these practices.

Harnessing AI and ML for Enhanced Observability

Organizations increasingly recognize the benefits of observability in enhancing reliability, availability, and customer experience. The integration of Artificial Intelligence (AI) and Machine Learning (ML) into observability practices offers predictive capabilities, allowing for the anticipation and prevention of incidents before they occur. AI and ML are applied in three maturity stages for observability: from basic visibility without AI, to proactive AI that establishes baselines for normal operations, and finally to advanced predictive models that can preempt potential issues based on historical data. One of the roles of AI in observability is to provide a deeper understanding of 'normal' operational patterns and to direct attention to anomalies that require immediate action. This approach is instrumental in resolving issues faster, reducing downtime impact, and instilling confidence in maintaining customer-facing and internal services.

The complexity of modern technology stacks makes it challenging to discern operational health, a dilemma that AI and ML help to address by centralizing visibility and offering actionable insights. Features that support this include dynamic baselining, predictive modeling, and improved detection accuracy. These capabilities are encapsulated in a variety of products designed to simplify the adoption of AI and ML for organizations, regardless of their current maturity stage in observability.

Understanding AIOps and Its Practical Applications

AIOps is the application of AI and machine learning to operations. This includes anomaly detection, alert noise reduction, probable root cause, automation and remediation, and proactive prevention. The importance of AIOps stems from its ability to predict and prevent potential IT issues before they impact customers, which is crucial for maintaining high availability and reliability of IT services. AIOps has a range of use cases, demonstrating its benefits across various scenarios. For instance, it aids in swiftly resolving downtime, enhancing detection efficiency, and reducing manual processing efforts. A practical example includes an organization that shifted from static to dynamic thresholding to improve alert accuracy, as static thresholds were either overwhelming during peak hours or missing anomalies during off hours. This shift led to massive improvements in detection efficiency.
The monetary benefits of AIOps adoption are significant. Moreover, downtime not only incurs financial losses but also damages reputation, sometimes causing customers to switch to competitors. AIOps steps in as a vital tool to navigate the complexities of modern IT environments, providing a predictive approach to maintaining service performance and preventing issues. It achieves this by collecting data, defining Key Performance Indicators (KPIs), establishing baselines for typical performance, and building predictive models to anticipate and mitigate potential issues.

While the implementation of AIOps may seem challenging, tools and products are available to facilitate the transition, allowing organizations to start with simple use cases and gradually progress to more complex applications. Overall, AIOps offers a path to proactive IT operations management, enabling organizations to stay ahead of potential service performance issues and drive better business outcomes.

The Challenge of Observability in the Age of Big Data

In today's digital landscape, organizations are grappling with the immense volumes of data generated by their IT environments. This surge in data presents a significant challenge in maintaining observability over their systems. As data volumes expand, the task of monitoring and managing the performance of IT services becomes increasingly complex. One of the critical issues faced in this scenario is the phenomenon of 'alert storms,' where the sheer quantity of alerts overwhelms the IT teams, making it difficult to pinpoint and troubleshoot performance issues effectively.

The recent Splunk AI for Observability webinar revealed that organizations are indeed in the midst of what could be described as the 'perfect storm' of challenges, with many participants acknowledging struggles with growing data volumes, alert storms, and troubleshooting. Among these, too many alerts stood out as a particularly prominent issue, as it can obscure the root causes of performance degradation, leading to extended downtime and a scramble to restore services. The economic impact of such downtime is staggering. A survey published in the 'Digital Resilience Pays Off' report highlighted that, on average, each hour of downtime can cost organizations up to $365,000, not to mention the potential reputational damage that can arise from poor customer experiences.

To combat these issues, organizations are turning to artificial intelligence (AI) to enhance their observability practices. AI is leveraged to predict or prevent incidents before they occur, and customers are segmented into three maturity stages based on their use of AI. The most advanced customers employ predictive models that can foresee and mitigate negative outcomes, whereas the less mature ones are yet to harness AI's full potential. In conclusion, the path to overcoming the challenges posed by growing data volumes lies in the strategic application of AI.

The Evolution and Impact of AI and ML

Artificial Intelligence (AI) and Machine Learning (ML) have become integral components of Splunk products, offering significant advantages in detecting service performance issues. The integration of these technologies within Splunk's suite has a rich history, with nearly a decade of implementation. The profound impact of AI and ML is evident in the ability to predict or prevent incidents before they happen, enhancing the observability of systems.
Organizations leveraging AI and ML in Splunk's offerings report substantial benefits, such as accelerated problem resolution and enhanced reliability of customer-facing and internal services. The evidence supporting the return on investment for those adopting observability practices is compelling, with statistics showing that each hour of downtime can cost an average of $365,000, highlighting the critical nature of maintaining operational efficiency.

Successful AI and ML implementations have led to increased detection efficiency, reduced manual processing, and the identification of previously unknown scenarios within service performance. These advancements have been showcased through customer stories, such as the IG Group's transition from static thresholds to dynamic baselining, which has drastically improved detection efficiency. Another example is AIB's use of Splunk products to facilitate triage processes, leading to the discovery of an issue caused by unusual snowfall in Ireland. Lastly, StubHub's use of baseline models helped control application errors and uncover hidden issues.

To streamline the adoption of AI and ML, Splunk ensures its products are accessible and supportive of users at any stage of their AI and ML journey, from simple use cases to complex predictive model deployments. The Splunk App for Anomaly Detection exemplifies this commitment, automating the detection of anomalies in key metrics and KPIs, thereby demonstrating Splunk's dedication to enhancing service performance through advanced technology.

Enhancing Operational Efficiency with Anomaly Detection

The introduction of the Splunk App for Anomaly Detection marks a significant advancement in operational efficiency for organizations leveraging Splunk's AI capabilities. A demonstration of this app in action reveals its capacity to effortlessly operationalize anomaly detectors for actionable alerting, thus simplifying the traditionally complex and technical process of anomaly detection. By automating the configuration for specific metrics or KPIs, the app streamlines the machine learning process to detect anomalies at ingest time, effectively removing barriers such as the need for complex SPL, statistical knowledge, and parameter tuning.

The benefits of implementing anomaly detection are substantial and multifaceted. The demonstration highlights how this technology enables organizations to detect both isolated point anomalies and sustained anomaly intervals, accompanied by confidence scores to gauge the significance of the detected anomalies. Moreover, the app facilitates efficient detection by alerting users when anomalies with high confidence scores are found, and provides the SPL query generated by the app for further use within Splunk.

Operational efficiency is further enhanced when anomaly detection is paired with predictive AI capabilities. Organizations can predict potential incidents before they occur, mitigating downtime and improving service reliability. This predictive approach is demonstrated by the Anomaly app's ability to identify anomalous behavior and to recommend appropriate actions in real time, thus preventing potential service disruptions.

In conclusion, the Splunk App for Anomaly Detection exemplifies the intersection of AI and operational efficiency, providing a powerful tool for organizations to proactively manage and maintain the performance of their systems and services.
Understanding Splunk's AI Principles and the AI Assistant

Splunk integrates AI into its observability suite, offering domain-specific AI capabilities that enhance the efficiency of AI-driven processes. The involvement of humans remains crucial, ensuring that AI assists rather than replaces human decision-making. A notable innovation is the introduction of the Splunk AI Assistant for SPL, which simplifies the creation of SPL queries and their understanding. This assistant, currently in public preview, promises to unlock the full potential of SPL-powered Splunk products.

Statistics underline the importance of AI in observability, demonstrating how AI applications in Splunk significantly reduce downtime costs, which can average $365,000 per hour. With complex technology stacks, AI helps organizations predict and prevent incidents, using models to forewarn about potential issues and enabling quick resolution. The AI-driven capabilities are designed to be open and extensible, allowing users to customize models or employ their own, maintaining Splunk's versatile problem-solving essence. The Splunk AI Assistant aims to be a scalable aid across the ecosystem, with future enhancements like an AI assistant for observability cloud and an improved adaptive thresholding experience. In essence, Splunk's AI principles and the AI Assistant offer a comprehensive, user-friendly AI integration that fosters proactive, informed, and efficient observability practices.

Enhancing Security and Observability with AI and Unified Platforms

The integration of Artificial Intelligence (AI) into observability practices significantly accelerates the ability to detect, investigate, and respond to incidents. AI's role in enhancing security and observability is pivotal, particularly in predicting or preventing incidents before they occur. Organizations are adopting AI to gain insights on what normal performance looks like in their environments and to detect deviations that may indicate emerging issues. With the growth of data volumes and the complexity of technology stacks, AI becomes an indispensable tool in managing alert storms and troubleshooting performance issues.

A survey by Splunk has found that each hour of downtime can cost organizations an average of $365,000, underscoring the financial impact of operational disruptions beyond the reputational damage it may cause to customer-facing services. The comprehensive nature of the Splunk platform caters to this need, with its decade-long incorporation of AI and machine learning to support a wide range of use cases.

Furthermore, the importance of a unified platform cannot be overstated. Splunk's unified platform serves as a cohesive foundation for SecOps, ITOps, and DevOps teams, providing solutions that are tailored to their specific requirements while reaping the benefits of a holistic approach. This unified approach ensures that these teams can operate more efficiently, with better visibility and tools for rapid response, thereby reducing mean time to resolution (MTTR) and mitigating the risk of service performance issues.

AIOps: A Mature Approach to Observability

In the increasingly complex IT landscape, AIOps emerges as a mature approach to observability, playing a pivotal role in enhancing incident response and operational efficiency. This approach is fostered by the need to predict or prevent incidents before they occur, leading to continuous improvement and predictive analytics capabilities within IT operations.
The integration of AIOps has demonstrated a significant impact, particularly in the realm of incident management. Organizations that embrace AIOps have observed faster resolution of problems, resulting in improved availability and reliability of services. Notably, customers leveraging AIOps have reported the ability to fix issues more swiftly and confidently maintain customer-facing and internal services.

A mature AIOps strategy encompasses various stages of implementation, ranging from basic visibility into IT environments to advanced predictive models. These models can anticipate potential negative outcomes based on historical data, enabling preemptive remediation efforts. The statistical evidence supporting AIOps is compelling, with reports indicating that each hour of downtime can cost an average of $365,000, highlighting the financial incentives to adopt such strategies. Moreover, AIOps facilitates the identification of previously unknown scenarios, thereby driving innovation and informed decision-making. The journey toward AIOps maturity not only increases detection efficiency but also reduces manual processing and uncovers new insights, all of which contribute to the overarching goal of achieving digital resilience and operational excellence.

In conclusion, the integration of AI and ML into observability practices is recognized as a transformative force in IT operations. It is demonstrated that these technologies significantly enhance the ability to predict and prevent incidents, leading to improved service reliability and customer experience. The financial benefits are underscored by the high cost associated with downtime, encouraging organizations to adopt AI and ML to maintain operational efficiency. Through various use cases, the effectiveness of AI-driven solutions in addressing complex technology stack challenges is illustrated, showing a substantial return on investment. A mature approach to observability, encompassing AIOps, is presented as essential for ongoing improvement and innovation within organizations. It is concluded that AI and ML are indispensable in the contemporary digital landscape for driving business resilience and efficiency.

[1] From Digital Resilience Pays Off (p. 7) by Splunk, © 2023 Splunk Inc.
Hello, I have a standalone Splunk Enterprise 9.1.3 instance with some DCs and servers connected to it using the Forwarder Management console. At the moment I have 2 server classes configured, one for the DCs and the other for the servers. The server class for the DCs includes only the inputs.conf file for Windows logs:

[WinEventLog://Security]
disabled = 0
index = myindex
followTail=true
start_from = oldest
current_only = 0
evt_resolve_ad_obj = 1
checkpointInterval = 5
whitelist = 4624,4634,4625,4728,4729
renderXml=false

Moreover, on the Splunk Enterprise instance I configured 2 transforms for splitting the logs into two separate indexes, like this:

props.conf:

[WinEventLog:Security]
TRANSFORMS-security = rewrite_ad_group_management, rewrite_index_adm

transforms.conf:

[rewrite_ad_group_management]
REGEX = EventCode=(4728|4729)
DEST_KEY = _MetaData:Index
FORMAT = index1

[rewrite_index_adm]
REGEX = Account Name:\s+.*\.adm
DEST_KEY = _MetaData:Index
FORMAT = index2

In particular, the goal is to forward the authentication events (4624, 4634, 4625) for admin users only (Account Name:\s+.*\.adm) to index2, and only EventCode 4728 and 4729 to index1; events that match neither transform should remain in myindex. At the moment the first transform is not working, so I'm receiving events 4728 and 4729 in index2. Am I missing something, or is there better logic to do this? I also tried to combine 4624, 4634, 4625 and Account Name:\s+.*\.adm with

(?ms)EventCode=(4624|4634|4625)\X*Account Name:\s+.*\.adm

Thanks in advance
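Two things may help narrow this down. First, transforms listed in one TRANSFORMS- setting run in order, so when both regexes match the same event (a 4728/4729 event whose Account Name ends in .adm), the later transform's DEST_KEY write wins, which would land it in index2. Second, since index-time transforms test the REGEX against the raw event text by default, it can help to confirm at search time which events each regex actually matches; a minimal sketch using the regex command:

index=myindex sourcetype=WinEventLog:Security
| regex _raw="EventCode=(4728|4729)"
| stats count by EventCode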
I have some dashboards created with Splunk Dashboard Studio. Does anyone know where I can set static colors based on values in the dashboard? Thanks much!
Hi Team, how can I convert a milliseconds value to seconds?

index=testing | timechart max("event.Properties.duration")

Can anyone help with an SPL query that converts the milliseconds value to seconds?
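A minimal sketch, assuming event.Properties.duration is in milliseconds: divide by 1000 with eval before charting (single quotes are needed to reference a field name containing dots in eval):

index=testing
| eval duration_sec = 'event.Properties.duration' / 1000
| timechart max(duration_sec) AS max_duration_sec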
Hello, it seems that in Dashboard Studio the static choropleth map has no legend. Here is the SPL:

index=xxxxxxxx sourcetype=yyyyyy mailgate* src=* | iplocation src | stats count by Country | geom geo_countries allFeatures=True featureIdField=Country

If I put this map in a classic dashboard I get the map with the legend, but in Dashboard Studio no legend is shown. Is there a way to show this legend in Dashboard Studio? Regards, Emile