I made a terrible mistake and tried to use Splunk as a non-admin for the first time in a year or so.  With that mistake I experienced normal user woes of job queuing.  In reaction to queuing I went to the job manager to delete all of my own jobs except the latest queued job I cared about.  Upon deletion of older jobs my queued search did not resume within a reasonable period of time (within 5 seconds).  I then went back to view the job activity monitor and saw that jobs I deleted seconds before were still present.   How long is someone expected to wait until queued jobs resume after deletion of older jobs?  Seems like the desired effect only comes after a matter of minutes, not seconds.  Is this configurable?
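For anyone trying to reproduce this, the Job Manager view can lag behind what the dispatcher actually sees, and one way to watch the state Splunk itself reports is the search jobs endpoint. This is only a sketch, and the field names (author, dispatchState, isDone) should be checked against the /services/search/jobs output on your version:

| rest /services/search/jobs splunk_server=local
| search author="your_username"
| table sid author dispatchState isDone runDuration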
Hi all, I am trying to join two queries but am unable to get the expected result. I am using the join command to extract the username from the base query and then look up the details of that username in the main query. I am also trying to accommodate a time constraint, e.g. only match a user in the main query if the time difference between when it was captured in the subsearch and in the main query is within 120 seconds. I am also using multiple eval commands and have tried appendcols as well.
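In case a different shape helps, join can often be replaced with a single search over both sourcetypes plus stats, with the 120-second constraint applied afterwards. This is only a sketch with placeholder names (sourcetype_a, sourcetype_b, and detail are assumptions for your actual sourcetypes and fields):

index=main (sourcetype=sourcetype_a OR sourcetype=sourcetype_b)
| eval auth_time=if(sourcetype=="sourcetype_a", _time, null())
| eval detail_time=if(sourcetype=="sourcetype_b", _time, null())
| stats min(auth_time) as auth_time min(detail_time) as detail_time values(detail) as detail by username
| where isnotnull(auth_time) AND isnotnull(detail_time) AND abs(detail_time - auth_time) <= 120

This avoids the subsearch row and time limits that often make join results look incomplete.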
Whether you're a seasoned pro or just dipping your toes into the data-driven universe, there's something for everyone with Splunk Education.   If you’re looking for “just the facts,” we got you. Skim through our newest infographic to discover how Splunk Education can strengthen your proficiency, build foundational skills at your own pace, deepen core knowledge with hands-on learning, demonstrate mastery with certifications, and share knowledge with industry peers. Here's a sneak peek at just some of what you'll discover: Click here to view/download the full infographic   So, are you ready to unleash your full potential with Splunk Education? Go to Splunk Education to enroll in your first or next course and take your learning adventure to new places today.    – Callie Skokos on behalf of the Splunk Education Crew
In the ever-evolving landscape of cloud-native applications, maintaining visibility and monitoring performance are paramount. This article aims to guide you through the process of setting up an observability framework using OpenTelemetry (Otel) in conjunction with AppDynamics. By leveraging the power of OtelCollectors and the AppDynamics backend, we can gather, process, and analyze telemetry data (traces, logs, and metrics) to ensure our applications are performing optimally and to quickly troubleshoot any issues that may arise.

Pre-requisite: Access to the AppDynamics Cisco Cloud Observability endpoint URL. You will need to generate credentials from step 3 as defined in https://docs.appdynamics.com/observability/cisco-cloud-observability/en/kubernetes-and-app-service-monitoring/install-kubernetes-and-app-service-monitoring#InstallKubernetesandAppServiceMonitoring-helm-charts

Step 1: Configuring the OtelCollector

Creating the Configuration

The first step in our journey involves creating a configuration for our OtelCollector. This is accomplished by defining a ConfigMap in Kubernetes, which outlines how our collector will operate. This configuration specifies the protocols for receiving telemetry data, the processing of this data, and how it will be exported to our observability backend, such as AppDynamics. Here's an example of what this ConfigMap might look like:

apiVersion: v1
kind: ConfigMap
metadata:
  name: collector-config
  namespace: appd-cloud-apps
data:
  collector.yaml: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint:
          http:
            endpoint:
    processors:
      batch:
        send_batch_size: 1000
        timeout: 10s
        send_batch_max_size: 1000
    exporters:
      logging:
        verbosity: detailed
      otlphttp/cnao:
        auth:
          authenticator: oauth2client
        traces_endpoint: https://xxxx-xx-xx-xx.xxx.appdynamics.com/data/v1/trace
        logs_endpoint: https://xx-pdx-xxx-xx.xxx.appdynamics.com/data/v1/logs
    extensions:
      health_check:
        endpoint: 0.0.0.0:13133
      pprof:
        endpoint: 0.0.0.0:17777
      oauth2client:
        client_id: xxxx
        client_secret: xxxx
        token_url: xxx
    service:
      extensions: [health_check, pprof, oauth2client]
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [logging, otlphttp/cnao]
        logs:
          receivers: [otlp]
          processors: [batch]
          exporters: [logging, otlphttp/cnao]
      telemetry:
        logs:
          level: "debug"

This configuration is the heart of our observability setup, integrating seamlessly with AppDynamics to provide a detailed view of our application's performance.

Step 2: Deploying the OtelCollector

Launching the Collector

With our configuration in place, the next step is to deploy the OtelCollector within our Kubernetes cluster. This deployment ensures that the collector is operational and can begin processing telemetry data as defined. The deployment configuration ties our ConfigMap to the OtelCollector, enabling it to start receiving, processing, and exporting telemetry data.
Here's a basic example of what the deployment configuration might include:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: opentelemetrycollector
  namespace: appd-cloud-apps
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: opentelemetrycollector
  template:
    metadata:
      labels:
        app.kubernetes.io/name: opentelemetrycollector
    spec:
      containers:
        - name: otelcol
          args:
            - --config=/conf/collector.yaml
          image: docker.io/otel/opentelemetry-collector-contrib:latest
          volumeMounts:
            - mountPath: /conf
              name: collector-config
      volumes:
        - configMap:
            name: collector-config
            items:
              - key: collector.yaml
                path: collector.yaml
          name: collector-config

This ensures our OtelCollector is primed to handle telemetry data, marking a crucial step towards full observability.

Step 3: Instrumenting an Application with the OpenTelemetry Java Agent

Setting Up the Application for Telemetry

The final step involves instrumenting our application with the OpenTelemetry Java Agent. This is crucial for collecting telemetry data from the application itself. By deploying a Kubernetes application with the Java Agent attached, we enable our application to send telemetry data directly to the OtelCollector. This setup includes an init container to prepare the Java Agent and a sidecar container to collect and forward this telemetry data to our central OtelCollector and AppDynamics for analysis.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-otel-personal
  namespace: appd-cloud-apps
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tomcat-otel-personal
  template:
    metadata:
      labels:
        app: tomcat-otel-personal
    spec:
      initContainers:
        - name: otel-agent-attach-java
          command:
            - cp
            - -r
            - /javaagent.jar
            - /otel-auto-instrumentation-java/javaagent.jar
          image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java:latest
          volumeMounts:
            - mountPath: /otel-auto-instrumentation-java
              name: otel-agent-repo
      containers:
        - name: sidecar-otel-collector
          image: otel/opentelemetry-collector
          args:
            - --config=/conf/agent.yaml
          volumeMounts:
            - name: sidecar-otel-collector-config
              mountPath: /conf
        - name: tomcat-app
          image: docker.io/abhimanyubajaj98/tomcat-app-buildx
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          volumeMounts:
            - mountPath: /otel-auto-instrumentation-java
              name: otel-agent-repo
          env:
            - name: JAVA_TOOL_OPTIONS
              value: "-javaagent:/otel-auto-instrumentation-java/javaagent.jar -Dotel.resource.attributes=service.name=open-otel-abhi,service.namespace=open-otel-abhi -Dotel.traces.exporter=otlp,logging"
            - name: OTEL_EXPORTER_OTLP_PROTOCOL
              value: grpc

By following these steps, we've successfully configured and deployed an observability framework using OpenTelemetry and integrated it with AppDynamics. This setup not only enhances the visibility into our application's performance but also empowers us to proactively manage and troubleshoot any issues that may arise, ensuring optimal performance and reliability.

Once you are done, head over to the Cisco Cloud Observability UI -> Services. Filter based on service.name. In our case the service.name is open-otel-abhi.
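As a complement to the UI check, a few kubectl commands can confirm the collector itself came up cleanly before data is expected in AppDynamics. This is only a sketch, and the manifest file names are placeholders for wherever you saved the ConfigMap and Deployment above:

# Apply the manifests (file names are placeholders)
kubectl apply -f collector-config.yaml
kubectl apply -f otel-collector-deployment.yaml

# Confirm the collector pod is running and look for exporter or oauth2client errors in its logs
kubectl -n appd-cloud-apps get pods -l app.kubernetes.io/name=opentelemetrycollector
kubectl -n appd-cloud-apps logs deploy/opentelemetrycollector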
Introduction

In the fast-evolving Kubernetes ecosystem, ensuring optimal performance and reliability of applications is crucial. AppDynamics offers a seamless way to monitor your Java applications deployed in Kubernetes clusters. This Medium article guides you through integrating your application with AppDynamics for comprehensive monitoring and observability, leveraging OpenTelemetry Collector for enhanced telemetry data collection.

Prerequisites

Before diving into deployment, ensure the following:
- AppDynamics Controller Onboarding: Your AppDynamics Controller should be onboarded. Follow the integration guide here.
- Java Application Configuration: Your Java application must be capable of communicating with the OpenTelemetry Collector.

Deployment Steps

1. Prepare Deployment YAML

To begin, prepare your deployment YAML to deploy your application alongside an OpenTelemetry Collector sidecar. This configuration ensures the collection and forwarding of telemetry data to AppDynamics, enriching your monitoring insights. The deployment configuration involves:
- An application named unified-monitoring-java-app within the appd-cloud-apps namespace.
- Two containers: your application (abhimanyubajaj98/unified-monitoring-java-app) and the OpenTelemetry Collector sidecar (otel/opentelemetry-collector-contrib:latest).
- A ConfigMap named agent-config supplying the OpenTelemetry Collector configuration (agent.yaml).

2. YAML Configuration

Here’s a snapshot of the YAML configuration needed:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: unified-monitoring-java-app
  labels:
    app: unified-monitoring-java-app
  namespace: appd-cloud-apps
spec:
  replicas: 1
  selector:
    matchLabels:
      app: unified-monitoring-java-app
  template:
    metadata:
      labels:
        app: unified-monitoring-java-app
    spec:
      volumes:
        - name: sidecar-otel-collector-config
          configMap:
            name: agent-config
            items:
              - key: agent.yaml
                path: agent.yaml
      containers:
        - name: sidecar-otel-collector
          image: docker.io/otel/opentelemetry-collector-contrib:latest
          args:
            - --config=/conf/agent.yaml
          volumeMounts:
            - name: sidecar-otel-collector-config
              mountPath: /conf
        - name: unified-monitoring-java-app
          image: abhimanyubajaj98/unified-monitoring-java-app
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          env:
            - name: JAVA_TOOL_OPTIONS
              value: "-Xmx512m"

And the corresponding service:

apiVersion: v1
kind: Service
metadata:
  name: unified-monitoring-java-app
  labels:
    app: unified-monitoring-java-app
  namespace: appd-cloud-apps
spec:
  ports:
    - port: 8080
      targetPort: 8080
  selector:
    app: unified-monitoring-java-app

For detailed YAML configurations and the ConfigMap (agent.yaml), please refer to the provided snippets above.
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: agent-config
  namespace: appd-cloud-apps
data:
  agent.yaml: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint:
          http:
            endpoint:
    processors:
      batch:
        send_batch_size: 1000
        timeout: 10s
        send_batch_max_size: 1000
    exporters:
      logging:
        verbosity: detailed
      otlphttp/cnao:
        auth:
          authenticator: oauth2client
        #endpoint: https://xxxx-pdx-p01-c4.observe.appdynamics.com/data
        traces_endpoint: https://xxxx-pdx-p01-c4.observe.appdynamics.com/data/v1/trace
        logs_endpoint: https://xxx-pdx-p01-c4.observe.appdynamics.com/data/v1/logs
    extensions:
      health_check:
        endpoint: 0.0.0.0:13133
      pprof:
        endpoint: 0.0.0.0:17777
      oauth2client:
        client_id: xxxx
        client_secret: xxxx
        token_url: https://xxxx-pdx-p01-c4.observe.appdynamics.com/auth/xxx-xxx-xx-xx-xxxx/default/oauth2/token
        #tenantId: xx-xxx-xx-xx-xxx
    service:
      extensions: [health_check, pprof, oauth2client]
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [logging, otlphttp/cnao]
        logs:
          receivers: [otlp]
          processors: [batch]
          exporters: [logging, otlphttp/cnao]
      telemetry:
        logs:
          level: "debug"

Be sure to replace client_id, client_secret, token_url, and controller-key with your actual credentials.

3. Deployment

To deploy, execute the following commands:

1. Create the namespace (if not already present):

kubectl create ns appd-cloud-apps

2. Apply the ConfigMap and deployment files:

kubectl create -f agent.yaml
kubectl create -f unified-monitoring-java-app.yaml

4. Integrate with AppDynamics Cluster Agent

Deploy the AppDynamics Cluster Agent using Helm charts with specific parameters to enable OpenTelemetry. Our values.yaml looks like:

installClusterAgent: true
installInfraViz: true

# Docker images
imageInfo:
  agentImage: docker.io/appdynamics/cluster-agent
  agentTag: 24.1.0-253
  #agentTag: 22.1.0
  operatorImage: docker.io/appdynamics/cluster-agent-operator
  #operatorTag: 22.1.0
  operatorTag: 24.1.0-694
  imagePullPolicy: Always   # Will be used for operator pod
  machineAgentImage: docker.io/appdynamics/machine-agent-analytics
  machineAgentTag: latest
  machineAgentWinImage: docker.io/appdynamics/machine-agent-analytics
  machineAgentWinTag: win-latest
  netVizImage: docker.io/appdynamics/machine-agent-netviz
  netvizTag: latest

# AppDynamics controller info (VALUES TO BE PROVIDED BY THE USER)
controllerInfo:
  url: https://xxxx.saas.appdynamics.com:443
  account: xxxx
  username:
  password:
  accessKey:
  globalAccount: cxxxxx   # To be provided when using machineAgent Window Image
  # SSL properties
  customSSLCert: null
  # Proxy config
  authenticateProxy: false
  proxyUrl: null
  proxyUser: null
  proxyPassword: null

# RBAC config
createServiceAccount: true
agentServiceAccount: appdynamics-cluster-agent
operatorServiceAccount: appdynamics-operator
infravizServiceAccount: appdynamics-infraviz

# Cluster agent config
clusterAgent:
  nsToMonitorRegex: abhi-java-apps
  appName: k8s-aks-abhi
  logProperties:
    logFileSizeMb: 5
    logFileBackups: 3
    logLevel: TRACE
  # Profiling specific config - set pprofEnabled true if profiling needs to be enabled,
  # provide pprofPort if you need a different port, else default port 9991 will be assigned
  agentProfiler:
    pprofEnabled: false
    pprofPort: 9991

instrumentationConfig:
  enabled: true
  instrumentationMethod: Env #Env
  enableForceReInstrumentation: true
  nsToInstrumentRegex: appd-cloud-apps   # any namespace you want to instrument
  defaultAppName: Abhi-personal-tomcat
  appNameStrategy: label
  numberOfTaskWorkers: 4
  resourcesToInstrument:
    - Deployment
    - StatefulSet
  instrumentationRules:
    - namespaceRegex: appd-cloud-apps
      matchString: unified-monitoring-java-app
      language: java
      appNameLabel: app
      instrumentContainer: select
      containerMatchString: unified-monitoring-java-app
      customAgentConfig: "-Dotel.traces.exporter=none -Dotel.metrics.exporter=none -Dotel.logs.exporter=otlp -Dappdynamics.opentelemetry.enabled=true -Dotel.resource.attributes=service.name=unified-monitoring-java-app,service.namespace=unified-monitoring-java-app"
      imageInfo:
        image: "docker.io/appdynamics/java-agent:latest"
        agentMountPath: /opt/appdynamics
        imagePullPolicy: Always

# Netviz config

Important arguments we need to remember are:

customAgentConfig: "-Dotel.traces.exporter=none -Dotel.metrics.exporter=none -Dotel.logs.exporter=otlp -Dappdynamics.opentelemetry.enabled=true -Dotel.resource.attributes=service.name=unified-monitoring-java-app,service.namespace=unified-monitoring-java-app"

To deploy:

kubectl -n appdynamics create secret generic cluster-agent-secret --from-literal=controller-key=xxxxx
helm install -f values.yaml abhi-cluster-agent appdynamics-cloud-helmcharts/cluster-agent -n appdynamics

Once done, your application pods will restart and report to the controller with the unified-monitoring-java-app application name.

Conclusion

Upon completion, your application pods will restart and begin reporting to the AppDynamics Controller with the specified application name. This setup not only simplifies monitoring across Kubernetes clusters but also ensures that your applications are performing optimally, with detailed insights readily available in your AppDynamics dashboard. For further customization and advanced configuration, visit the official AppDynamics documentation.
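As a final sanity check after the Helm install, a few kubectl commands (namespaces and labels follow the example values above) can confirm the cluster agent is running and that the application pods restarted with the injected agent settings:

# Cluster agent and operator pods
kubectl -n appdynamics get pods

# Application pods should have restarted; the instrumented container picks up JAVA_TOOL_OPTIONS
kubectl -n appd-cloud-apps get pods
kubectl -n appd-cloud-apps describe pod -l app=unified-monitoring-java-app | grep -i JAVA_TOOL_OPTIONS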
Hi All, I need help troubleshooting metrics coming into sim_metrics, i.e. the Splunk Infrastructure Monitoring (SIM) add-on. Splunk Observability is configured with a service "test". When I run the SIM command on the Splunk search head, I see there are metrics. But if I run mstats against the same metrics, it does not return any results. It was pulling data a week back but not now. What could be the troubleshooting steps when there is an issue like this? What are the points I have to check? Summary: data is being pulled by the SIM add-on, so I am seeing metrics when using the SIM command, but when I try mstats on the same metrics it does not return any results. Can anyone help me with what the issue could be and where I should start troubleshooting? Regards, PNV
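Two searches that often help narrow this kind of gap down, assuming the data lands in a metrics index named sim_metrics (adjust the index and time range; mstats syntax varies slightly across versions):

| mcatalog values(metric_name) WHERE index=sim_metrics

| mstats count(_value) as datapoints WHERE index=sim_metrics AND metric_name=* BY metric_name span=1h

If mcatalog shows the metric names but mstats returns nothing, the usual suspects are the time range, index permissions on the role, or the metric_name filter; if mcatalog is empty too, the data is not reaching the metrics index at all.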
I have a question. I have a table that contains groups of people with their email addresses. I want to use this table in the recipients field when creating an alert to notify users via email. For this, I want to know whether I can use $result.fieldname$ in the 'To' field when configuring the recipients.
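For reference, the email alert action fills $result.fieldname$ tokens from the first row of the search results, so the usual approach is to have the search itself produce a single recipients field containing a comma-separated list and put $result.recipients$ in the To field. A sketch, where team_emails.csv with fields team and email is a hypothetical lookup:

index=app_logs error
| stats count by team
| lookup team_emails.csv team OUTPUT email
| stats values(email) as recipients
| eval recipients=mvjoin(recipients, ",")

With a search shaped like this, the alert's To field can simply contain $result.recipients$.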
Hi - Recently we upgraded Splunk to version 9.1.3. I noticed that I can no longer start Splunk using "./splunk start --accept-license=yes", which forces me to use "systemctl start Splunkd" to start Splunk. Could you please let me know how to pass --accept-license=yes when starting with "systemctl start Splunkd"?
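One pattern that generally works when splunkd is systemd-managed is to accept the license once from the CLI, as the user that owns the installation, and then keep using systemctl afterwards. A sketch, assuming a default /opt/splunk install owned by the splunk user:

# Accept the license once; these flags suppress the interactive prompts
sudo -u splunk /opt/splunk/bin/splunk start --accept-license --answer-yes --no-prompt
sudo -u splunk /opt/splunk/bin/splunk stop

# From then on, start it through the unit as before
sudo systemctl start Splunkd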
I'm trying to create a workload management rule to prevent users from searching with "All Time". After researching, it seems that best practice is not to run "All Time" searches, as they produce long run times and use more memory/CPU. Are there any types of searches, users, or other exceptions that should be allowed to use "All Time"?
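For context, workload management admission rules can prefilter searches before they run, and the documented predicate vocabulary includes an all-time time-range condition. The predicate below is a sketch from memory and should be verified against the workload management documentation for your version (the role name is a placeholder):

# Admission rule predicate (Settings > Workload management), with action set to filter
search_time_range=alltime AND search_type=adhoc AND NOT role=admin

A rule shaped like this would reject ad hoc All Time searches while exempting whichever role you name, which also suggests an answer to the exceptions question: scheduled or system searches and a small admin/power-user role are the usual carve-outs.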
Q: Given a "timechart span=1m sep='-" last(foo) as foo last( bar) as bar by  hostname", how would I get a unique value of the bar-* fields? This has to be a standard problem, but I cannot find any... See more...
Q: Given a "timechart span=1m sep='-" last(foo) as foo last( bar) as bar by  hostname", how would I get a unique value of the bar-* fields? This has to be a standard problem, but I cannot find any writeup of solving it... Background: I'm processing Apache Impala logs for data specific to a query, server, and pool (i.e., cluster). The data arrives on multiple lines that are easily combined with a transaction and rex-ed out to get the values. Ignoring the per-query values, I end up with: | fields _time hostname reserved max_mem The next step is to summarize the reserved and max_mem by minute, taking the last value by hostname and summing the reserved values, extracting a single max_mem value. I can get the data by host using: | timechart span=1m sep="-" last( reserved ) as reserved last( max_mem ) as max_mem by hostname which gives me a set of reserved-* and max_mem-* fields. The reserved values can be summed with: | addtotals fieldname=reserved reserved-* Issue: The problem I'm having is getting the single unique value of max_mem back out of it. The syntax "| stats values( max_mem-* ) as max_mem" does not work, but gives the idea of what I'm trying to accomplish. I've tried variations on bin to group the values with stats to post-process them, but gotten nowhere. I get the funny feeling that there may be a way to "| addcols [ values( max_mem-* ) as max_mem " but that doesn't get me anywhere either. A slightly different approach would be leaving the individual reserved values as-is, finding some way to get the single max_mem value out of the timechart, and plotting it as an area chart using max_mem as a layover  (i.e., the addtotals can be skipped). In either case, I'm still stuck getting the unique value from max_mem-* as a single field for propagation with the reserved values. Aside: The input to this report is taken from the transaction list which includes memory estimates and SQL statements per query. I need that much for other purposes. The summary here of last reserved & max_mem per time unit is taken from the per-query events because the are the one place that the numbers are available.
Hi All,

How can I optimize the below query? Can we convert it to tstats?

index=abc host=def* stalled
| rex field=_raw "symbol (?<symbol>.*) /"
| eval hourofday = strftime(_time, "%H")
| where NOT (hourofday>2 AND hourofday <= 4)
| timechart dc(symbol) span=15m
| eventstats avg("count") as avg stdev("count") as stdev
| eval lowerBound=-1, upperBound=(avg+stdev*exact(4))
| eval isOutlier=if('count' < lowerBound OR 'count' > upperBound, 1, 0)
| fields _time, "count", lowerBound, upperBound, isOutlier, *
| sort -_time
| head 1
| where isOutlier=1
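For context on the tstats question: tstats can only aggregate indexed fields, and symbol here comes from a search-time rex against _raw, so the query cannot be converted as-is; the raw keyword filter "stalled" also has no tstats equivalent unless it is captured in an indexed field. If symbol were made an index-time field (or the results were summary-indexed), the first half could look roughly like this sketch:

| tstats dc(symbol) as count WHERE index=abc AND host=def* BY _time span=15m
| eval hourofday = strftime(_time, "%H")
| where NOT (hourofday>2 AND hourofday<=4)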
Hello everyone, I am trying to follow this guide https://research.splunk.com/endpoint/ceaed840-56b3-4a70-b8e1-d762b1c5c08c/ and I created the macros that this guide is referencing, but I am unable to create the macro for windows_rdp_connection_successful_filter, because I am unsure how to create an empty macro in Splunk web. The guide says "windows_rdp_connection_successful_filter is a empty macro by default. It allows the user to filter out any results (false positives) without editing the SPL." What does this even mean? We are currently using Splunk Enterprise 9.0.5
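For what it's worth, in the ES Content Update conventions a *_filter macro is just a placeholder appended to the end of the detection search so you can exclude known false positives later without editing the SPL itself. An "empty" macro is simply one whose definition does nothing; since Splunk Web (Settings > Advanced search > Search macros) does not accept a blank definition, the common no-op definition is "search *", for example in macros.conf:

[windows_rdp_connection_successful_filter]
definition = search *
iseval = 0

Later you can change the definition to something like "search NOT src="10.0.0.5"" (a hypothetical host) to drop a known-good source from the results.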
I'm using script interface for custom REST endpoints, and it uses: from splunk.persistconn.application import PersistentServerConnectionApplication I understand it's a package inside splunk enterprise, but is there a chance it is uploaded to PyPI?
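Not an answer to the PyPI question, but for context, a persistent REST handler built on that class usually looks something like this minimal sketch (the payload/status structure follows the persistconn examples shipped with Splunk; verify the request fields against your version's docs):

import json

# This module ships inside Splunk Enterprise ($SPLUNK_HOME/lib/python*/site-packages),
# so scripts that import it are normally run by splunkd itself rather than an external Python.
from splunk.persistconn.application import PersistentServerConnectionApplication


class HelloHandler(PersistentServerConnectionApplication):
    def __init__(self, command_line, command_arg):
        # command_line / command_arg come from the restmap.conf stanza for this endpoint
        PersistentServerConnectionApplication.__init__(self)

    def handle(self, in_string):
        # in_string is a JSON document describing the request (method, query, payload, ...)
        request = json.loads(in_string)
        return {
            "payload": json.dumps({"message": "hello", "method": request.get("method")}),
            "status": 200,
        }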
The Cisco Networks Add-on for Splunk Enterprise is licensed under Creative Commons. This license does not allow for commercial use...I have been unable to track down a way to "purchase" a license that would allow me to utilize this Add-on legally. Is there any chance someone can point me in the right direction?
I have an issue with adding indexed fields to each of the new (split) sourcetypes. With the configuration below, the indexed fields are "duplicated" for each sourcetype: I now see the fields indexedfield1, indexedfield2, and indexedfield3 reported at 200%. For example, indexedfield1 values: value1 150%, value2 50%.

props.conf

[MAIN SOURCE]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = {\"time\":\"
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
TRUNCATE = 999999
TRANSFORMS-changesourcetype = sourcetype1, sourcetype2
TRANSFORMS-indexedfields = indexedfield1, indexedfield2, indexedfield3

[sourcetype1]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = {\"time\":\"
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
TRUNCATE = 999999

[sourcetype2]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = {\"time\":\"
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
TRUNCATE = 999999

transforms.conf

[indexedfield1]
REGEX =
FORMAT =
WRITE_META =

[indexedfield2]
REGEX =
FORMAT =
WRITE_META =

[indexedfield3]
REGEX =
FORMAT =
WRITE_META =

[sourcetype1]
DEST_KEY = MetaData:Sourcetype
REGEX = some regex
FORMAT = sourcetype::sourcetype1

[sourcetype2]
DEST_KEY = MetaData:Sourcetype
REGEX = some regex
FORMAT = sourcetype::sourcetype2

I then thought to move the indexed-field transforms to each of the new sourcetypes, but with that I see no indexed fields at all (checked with | tstats count):

props.conf

[MAIN SOURCE]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = {\"time\":\"
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
TRUNCATE = 999999
TRANSFORMS-changesourcetype = sourcetype1, sourcetype2

[sourcetype1]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = {\"time\":\"
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
TRUNCATE = 999999
TRANSFORMS-indexedfields = indexedfield1, indexedfield2, indexedfield3

[sourcetype2]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = {\"time\":\"
MAX_TIMESTAMP_LOOKAHEAD = 25
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6N%:z
TRUNCATE = 999999
TRANSFORMS-indexedfields = indexedfield1, indexedfield2, indexedfield3

What is the needed configuration to get indexed fields per sourcetype without them showing at 200%?

Thanks
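For context on why the second variant shows nothing: index-time TRANSFORMS are selected from the props stanza matching the sourcetype the event arrived with, so once the rewrite to sourcetype1/sourcetype2 happens, TRANSFORMS listed under those new names are never applied. The indexed-field transforms therefore have to stay under [MAIN SOURCE] and be scoped to the right events by their own REGEX. The 200% in the fields sidebar usually means each field is produced twice per event, once at index time and once again by search-time extraction of the same keys, so giving the indexed copies distinct names (or removing the search-time duplicates) is the usual fix. The snippet below is only a sketch with placeholder regexes and field names:

# props.conf -- keep both transform lists on the original sourcetype
[MAIN SOURCE]
TRANSFORMS-changesourcetype = sourcetype1, sourcetype2
TRANSFORMS-indexedfields = indexedfield1, indexedfield2, indexedfield3

# transforms.conf -- scope each indexed field to its events via the REGEX itself
[indexedfield1]
REGEX = \"type\":\"alpha\".*?\"fieldA\":\"([^\"]+)\"
FORMAT = indexedfield1::$1
WRITE_META = true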
Hi, We have 3 indexers and 1 search head (replication factor = 3). I need to permanently remove one indexer. What is the correct procedure:

Option 1: Change the replication factor to 2 and then remove the indexer, OR
Option 2: Remove the indexer and after that change the replication factor to 2?

Thanks
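In case a concrete shape helps, the usual order with an indexer cluster is to lower the replication and search factors on the cluster manager first, then decommission the peer so it can hand off its buckets before shutting down. A sketch (credentials are placeholders, and the commands assume a clustered setup managed by a cluster manager):

# On the cluster manager: lower the factors (search factor must stay <= replication factor)
splunk edit cluster-config -replication_factor 2 -search_factor 2 -auth admin:changeme
splunk restart

# On the indexer being removed: take it offline and let it enforce bucket counts first
splunk offline --enforce-counts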
Splunk Lantern is Splunk’s customer success center that provides advice from Splunk experts on valuable data insights, key use cases, and tips on managing Splunk more efficiently. We also host Getting Started Guides for a range of Splunk products, a library of Product Tips, and Data Descriptor articles that help you see everything that’s possible with data sources and data types in Splunk. This month we’re sharing all the details of a brand new Splunkbase app which helps you discover use cases in Lantern’s Use Case Explorer for the Splunk Platform. We’re also highlighting a batch of new Splunk Edge Processor articles that help new users learn how it works, and help more experienced users get even more value from it. As usual, we’ve also got links to every new article that we published over the month of February.  Use Case Explorer App We’re excited to announce the launch of a brand new app that makes it easier than ever for you to work with the Use Case Explorer for the Splunk Platform - the Use Case Explorer App for Splunk. This app searches your Splunk data sources and recommends use cases you can use right away, using the 350 different procedures you can find within the Use Case Explorer for the Splunk Platform. It’s a great tool for identifying new ways you can get more value out of your Splunk implementation, and it links you to the relevant articles in Lantern so you can get started easily. The Use Case Explorer content is designed to help you achieve your Security and IT Modernization goals - even if you're not using Splunk's premium security and observability products. (If you are using these products, you can check out the guidance for them within the Use Case Explorer for Security and Use Case Explorer for Observability.) The Use Case Explorer also contains a wide range of industry-specific use cases. Check out the app today, and don’t hesitate to let us know how it’s helped you by dropping a comment below! Doing More with Splunk Edge Processor This month the Lantern team has been working with experts from all across Splunk to publish new articles that highlight some of the key capabilities in Splunk Edge Processor. Here’s more info on three that we’ve published this month: Reducing Windows security event log volume with Splunk Edge Processor features a great video from the experts at Splunk Edu that shows you how Splunk Edge Processor can be used to help you better manage security event log volume. Converting logs into metrics with Edge Processor for beginners is a great place to get started if you’re new to how Edge Processor works. It shows you how to build metrics with dimensions so you can remove complexity from data, reduce resource consumption, improve data quality, and ultimately reduce mean time to identify problems. Finally, Enriching data via real-time threat detection with KV Store lookups in Edge Processor shows you how to utilize lookups to cross-reference threat intelligence data, which enhances your ability to detect and respond to cybersecurity threats in a timely and efficient manner. We’re continuing to plan even more Edge Processor articles in the future, so drop a comment below if there are any tips you’d like to see, or use cases you’d like us to cover! 
This Month’s New Articles

Here’s the rest of everything that’s new on Lantern, published over the month of February:

- Customizing JMX metric collection with OpenTelemetry
- Enriching data via real-time threat detection with KV Store lookups in Edge Processor
- Rigor to Synthetics Migration
- Migrating from Tenable LCE to Splunk Enterprise Security
- Splunk IT Service Intelligence Owner's Manual
  - Checking for event time indexing
  - Checking for KPI search success
  - Maintaining service entities
  - Maintaining adaptive thresholds
  - Monitoring for KPI search lag
- Splunk User Behavior Analytics (UBA) Owner's Manual
  - Tuning anomaly rules
  - Checking for sizing adherence
  - Patching for operating system security
  - Cleaning up backup file directories
  - Validating data source integrity
- Implementing a reingestion pipeline for AWS logs using Kinesis Data Firehose
- Using Amazon SageMaker to predict risk scores

We hope you’ve found this update helpful. Thanks for reading!

Kaye Chapman, Senior Lantern Content Specialist for Splunk Lantern
I'm using Splunk Enterprise 9 on Windows Server 2019 and monitoring a simple log file that has CRLF line endings and is encoded as UTF-8. My inputs stanza is as follows:

[monitor://c:\windows\debug\test.log]
disabled = 0
sourcetype = my_sourcetype
index = test

Consider two consecutive lines in the log file:

Some data 1
Some data 2

When indexed, this creates a single event rather than my expectation of 2 events. Where am I going wrong?
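For context, event breaking for a custom sourcetype is controlled by props.conf on whichever instance parses the data (the indexer or a heavy forwarder, not a universal forwarder). A minimal sketch for a simple line-per-event file like this one, assuming the sourcetype name above:

# props.conf (on the parsing tier)
[my_sourcetype]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)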
I need help understanding which sourcetype would be ideal for parsing logs of this file type.
Hello All,

I am currently testing an upgrade from Splunk Enterprise version 9.0.4 to 9.2.0.1 but get the below error:

Traceback (most recent call last):
  File "/opt/splunk/lib/python3.7/site-packages/splunk/clilib/cli.py", line 39, in <module>
    from splunk.rcUtils import makeRestCall, CliArgError, NoEndPointError, InvalidStatusCodeError
MemoryError
Error running pre-start tasks.

I will add that there are a few more lines to the error, but this is an air-gapped environment and I am hoping there is no need to manually type it all out.

TIA
Leon
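Since a Python MemoryError during pre-start usually points at the host running out of available (or allowed) memory rather than at Splunk itself, a few quick checks on the box are a reasonable first step; this is just a sketch using generic Linux commands, with /opt/splunk assumed as the install path:

# Available memory and swap at the time of the failed start
free -m

# Per-process limits for the user that runs splunkd (look at virtual memory / address space)
ulimit -a

# Splunk's own record of ulimits and THP from the last startup attempt, if splunkd.log was written
grep -i "ulimit\|transparent" /opt/splunk/var/log/splunk/splunkd.log | tail -20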