All Topics


Using Splunk 8.0.2.1, I have a container (Spring Boot, which uses Tomcat underneath) whose log output I'm attempting to push to the HEC. I'm starting the container like this:

    docker run --name test-spring-boot-app --publish 8080:8080 \
        --log-driver=splunk \
        --log-opt splunk-token=SOME-TOKEN \
        --log-opt splunk-url=http://ec2-someip.compute-1.amazonaws.com:8088 \
        --log-opt splunk-format=inline \
        --log-opt splunk-sourcetype=log4j-test \
        test-spring-boot-app

I can't for the life of me get ingested logs to merge multi-line events. The exception in the log below shows up as a separate event for every line, even though I've tried every combination I can think of to get it to merge. It almost appears that Splunk is ignoring my source type. The HEC token is also configured with the log4j-test source type. My log output looks like this:

    [Spring Boot ASCII art banner]  :: Spring Boot ::  (v2.3.1.RELEASE)
    2020-06-29 19:57:52,828 [main] INFO com.sss.app.ws.TestSpringBootAppApplication - Starting TestSpringBootAppApplication v0.0.1-SNAPSHOT on 84837ec423e5 with PID 1 (/spring-boot-test.jar started by root in /)
    2020-06-29 19:57:52,843 [main] INFO com.sss.app.ws.TestSpringBootAppApplication - No active profile set, falling back to default profiles: default
    2020-06-29 19:57:54,370 [main] INFO org.springframework.boot.web.embedded.tomcat.TomcatWebServer - Tomcat initialized with port(s): 8080 (http)
    2020-06-29 19:57:54,406 [main] INFO org.apache.coyote.http11.Http11NioProtocol - Initializing ProtocolHandler ["http-nio-8080"]
    2020-06-29 19:57:54,407 [main] INFO org.apache.catalina.core.StandardService - Starting service [Tomcat]
    2020-06-29 19:57:54,408 [main] INFO org.apache.catalina.core.StandardEngine - Starting Servlet engine: [Apache Tomcat/9.0.36]
    2020-06-29 19:57:54,520 [main] INFO org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/] - Initializing Spring embedded WebApplicationContext
    2020-06-29 19:57:54,520 [main] INFO org.springframework.boot.web.servlet.context.ServletWebServerApplicationContext - Root WebApplicationContext: initialization completed in 1597 ms
    2020-06-29 19:57:54,856 [main] INFO org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor - Initializing ExecutorService 'applicationTaskExecutor'
    2020-06-29 19:57:55,080 [main] INFO org.apache.coyote.http11.Http11NioProtocol - Starting ProtocolHandler ["http-nio-8080"]
    2020-06-29 19:57:55,128 [main] INFO org.springframework.boot.web.embedded.tomcat.TomcatWebServer - Tomcat started on port(s): 8080 (http) with context path ''
    2020-06-29 19:57:55,143 [main] INFO com.sss.app.ws.TestSpringBootAppApplication - Started TestSpringBootAppApplication in 2.877 seconds (JVM running for 4.391)
    2020-06-29 19:58:01,670 [http-nio-8080-exec-1] INFO org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/] - Initializing Spring DispatcherServlet 'dispatcherServlet'
    2020-06-29 19:58:01,670 [http-nio-8080-exec-1] INFO org.springframework.web.servlet.DispatcherServlet - Initializing Servlet 'dispatcherServlet'
    2020-06-29 19:58:01,680 [http-nio-8080-exec-1] INFO org.springframework.web.servlet.DispatcherServlet - Completed initialization in 10 ms
    2020-06-29 19:58:01,807 [http-nio-8080-exec-1] INFO com.sss.app.ws.controller.TestController - foo bar log: true
    2020-06-29 19:58:01,807 [http-nio-8080-exec-1] INFO com.sss.app.ws.controller.TestController - The querystring parameter name was supplied as: mark
    2020-06-29 19:58:01,807 [http-nio-8080-exec-1] INFO com.sss.app.ws.controller.TestController - The querystring parameter exc was supplied as: true
    2020-06-29 19:58:01,813 [http-nio-8080-exec-1] ERROR org.apache.catalina.core.ContainerBase.[Tomcat].[localhost].[/].[dispatcherServlet] - Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is java.lang.Exception: Give me an exception please] with root cause
    java.lang.Exception: Give me an exception please
        at com.sss.app.ws.controller.TestController.getTest(TestController.java:47) ~[classes!/:0.0.1-SNAPSHOT]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_111-internal]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_111-internal]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_111-internal]
        at java.lang.reflect.Method.invoke(Method.java:498) ~[?:1.8.0_111-internal]
        at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:190) ~[spring-web-5.2.7.RELEASE.jar!/:5.2.7.RELEASE]
        at org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:138) ~[spring-web-5.2.7.RELEASE.jar!/:5.2.7.RELEASE]

In my props.conf I have log4j-test, which looks like:

    ./splunk btool --debug props list log4j-test | more
    /home/ubuntu/apps/splunk/etc/system/default/props.conf [log4j-test]
    /home/ubuntu/apps/splunk/etc/system/default/props.conf ADD_EXTRA_TIME_FIELDS = True
    /home/ubuntu/apps/splunk/etc/system/default/props.conf ANNOTATE_PUNCT = True
    /home/ubuntu/apps/splunk/etc/system/default/props.conf AUTO_KV_JSON = true
    /home/ubuntu/apps/splunk/etc/system/default/props.conf BREAK_ONLY_BEFORE = \d\d?:\d\d:\d\d
    /home/ubuntu/apps/splunk/etc/system/default/props.conf BREAK_ONLY_BEFORE_DATE = True
    /home/ubuntu/apps/splunk/etc/system/default/props.conf CHARSET = UTF-8
    /home/ubuntu/apps/splunk/etc/system/default/props.conf DATETIME_CONFIG = /etc/datetime.xml
    /home/ubuntu/apps/splunk/etc/system/default/props.conf DEPTH_LIMIT = 1000
    /home/ubuntu/apps/splunk/etc/system/default/props.conf HEADER_MODE =
    /home/ubuntu/apps/splunk/etc/system/default/props.conf LEARN_MODEL = true
    /home/ubuntu/apps/splunk/etc/system/default/props.conf LEARN_SOURCETYPE = true
    /home/ubuntu/apps/splunk/etc/system/default/props.conf LINE_BREAKER_LOOKBEHIND = 100
    /home/ubuntu/apps/splunk/etc/system/default/props.conf MATCH_LIMIT = 100000
    /home/ubuntu/apps/splunk/etc/system/default/props.conf MAX_DAYS_AGO = 2000
    /home/ubuntu/apps/splunk/etc/system/default/props.conf MAX_DAYS_HENCE = 2
    /home/ubuntu/apps/splunk/etc/system/default/props.conf MAX_DIFF_SECS_AGO = 3600
    /home/ubuntu/apps/splunk/etc/system/default/props.conf MAX_DIFF_SECS_HENCE = 604800
    /home/ubuntu/apps/splunk/etc/system/default/props.conf MAX_EVENTS = 256
    /home/ubuntu/apps/splunk/etc/system/default/props.conf MAX_TIMESTAMP_LOOKAHEAD = 128
    /home/ubuntu/apps/splunk/etc/system/default/props.conf MUST_BREAK_AFTER =
    /home/ubuntu/apps/splunk/etc/system/default/props.conf MUST_NOT_BREAK_AFTER =
    /home/ubuntu/apps/splunk/etc/system/default/props.conf MUST_NOT_BREAK_BEFORE =
    /home/ubuntu/apps/splunk/etc/system/default/props.conf SEGMENTATION = indexing
    /home/ubuntu/apps/splunk/etc/system/default/props.conf SEGMENTATION-all = full
    /home/ubuntu/apps/splunk/etc/system/default/props.conf SEGMENTATION-inner = inner
    /home/ubuntu/apps/splunk/etc/system/default/props.conf SEGMENTATION-outer = outer
    /home/ubuntu/apps/splunk/etc/system/default/props.conf SEGMENTATION-raw = none
    /home/ubuntu/apps/splunk/etc/system/default/props.conf SEGMENTATION-standard = standard
    /home/ubuntu/apps/splunk/etc/system/default/props.conf SHOULD_LINEMERGE = true
    /home/ubuntu/apps/splunk/etc/system/default/props.conf TRANSFORMS =
    /home/ubuntu/apps/splunk/etc/system/default/props.conf TRUNCATE = 10000
    /home/ubuntu/apps/splunk/etc/system/default/props.conf category = Application
    /home/ubuntu/apps/splunk/etc/system/default/props.conf description = Test Output produced by any Java 2 Enterprise Edition (J2EE) application server using log4j
    /home/ubuntu/apps/splunk/etc/system/default/props.conf detect_trailing_nulls = false
    /home/ubuntu/apps/splunk/etc/system/default/props.conf maxDist = 75
    /home/ubuntu/apps/splunk/etc/system/default/props.conf priority =
    /home/ubuntu/apps/splunk/etc/system/default/props.conf pulldown_type = true
    /home/ubuntu/apps/splunk/etc/system/default/props.conf sourcetype =

Any thoughts would be greatly appreciated.
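For reference: the Docker splunk log driver submits each container log line as its own HEC event, and events sent to the HEC event endpoint are not re-merged at index time, which would explain the one-event-per-line behavior regardless of props. When the data does arrive as a raw stream (for example via the HEC raw endpoint, /services/collector/raw), a props.conf stanza along these lines is the usual way to keep a stack trace attached to its log line — a sketch, where the timestamp regex is an assumption based on the log4j pattern shown above:

```ini
[log4j-test]
SHOULD_LINEMERGE = false
# break only where a new "2020-06-29 19:57:52,828"-style timestamp starts
LINE_BREAKER = ([\r\n]+)\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%3N
MAX_TIMESTAMP_LOOKAHEAD = 25
```

Disabling SHOULD_LINEMERGE and relying on LINE_BREAKER alone is generally preferred over BREAK_ONLY_BEFORE for performance, but again, index-time breaking only helps if the events reach Splunk as a stream rather than as pre-split per-line events.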
Due to migration from a stand-alone indexer (on Windows) to a three-node indexer cluster (CentOS), we are doing a test migration of an index. We managed to copy all buckets to one of the new nodes (indexer01), appended the instance GUID to the end of the bucket folders, started the Splunk instances, exited maintenance mode, and waited for replication to happen. Everything worked like a charm until we tried to search this newly migrated index for the period "All time" on the cluster. The other two nodes returned warnings such as:

    [indexer02] Failed to read size=257 event(s) from rawdata in bucket='qualys~18~C221DE6A-20A8-41A8-A8D2-27E1F7A4B043' path='/opt/splunkcold/qualys/colddb/rb_1526515073_1526386468_18_A221DE6B-20A00-41A8-A8D2-27E1F7A4B043'. Rawdata may be corrupt, see search.log. Results may be incomplete!

We tried repairing the specific bucket with the splunk rebuild command and reinitiated the replication, but the result was the same. So we dug a little deeper and figured out that the rawdata folder, which should contain journal.gz, slicemin.dat, and slicesv2.dat, also contains a weird plain-text, non-compressed file with raw events. It is named with a number, which doesn't tell much.

The question is the following: what is this file? It is located only on indexer01 and is not being replicated to the other nodes. Is there any way to append this file to journal.gz, or to force the replication of this file as well?
I'm trying to set up splunk-connect for kubernetes. I'm currently testing with Splunk Cloud and a k8s cluster running on Docker Desktop. I did set up an HEC on my Splunk Cloud environment and confirmed it can receive events with the generated token, using a curl like this:

    curl -k "https://mysplunk.splunkcloud.com:8088/services/collector" \
        -H "Authorization: Splunk MY_HEC_TOKEN" \
        -d '{"event": "Hello, world!", "sourcetype": "manual"}'

Here is my current values file that I'm using to set up Splunk Connect:

    global:
      logLevel: info
      # If local splunk configurations are not present, the global ones will be used (if available)
      splunk:
        # It has exactly the same configs as splunk.hec does
        hec:
          host: mysplunk.splunkcloud.com
          port: 8088
          token: MY_HEC_TOKEN
          protocol: https
          indexName: default
          insecureSSL: false

    # local config for logging chart
    splunk-kubernetes-logging:
      journalLogPath: /run/log/journal
      splunk:
        hec:
          indexName: k8s-logs

    # local config for objects chart
    splunk-kubernetes-objects:
      rbac:
        create: true
      serviceAccount:
        create: true
        name: splunk-kubernetes-objects
      kubernetes:
        insecureSSL: true
      objects:
        core:
          v1:
            - name: pods
            - name: namespaces
            - name: nodes
            - name: services
            - name: config_maps
            - name: secrets
            - name: persistent_volumes
            - name: service_accounts
            - name: persistent_volume_claims
            - name: resource_quotes
            - name: component_statuses
            - name: events
            - name: watch
        apps:
          v1:
            - name: deployments
            - name: daemon_sets
            - name: replica_sets
            - name: stateful_sets
      splunk:
        hec:
          indexName: k8s-meta

    # local config for metrics chart
    splunk-kubernetes-metrics:
      rbac:
        create: true
      serviceAccount:
        create: true
        name: splunk-kubernetes-metrics
      splunk:
        hec:
          indexName: k8s-metrics
      kubernetes:
        clusterName: "docker-desktop"
      prometheus_enabled: true

Then I install Splunk Connect like this:

    $ helm upgrade splunk-connect-daemonset \
        --set splunk-kubernetes-metrics.splunk.hec.indexName=default \
        https://github.com/splunk/splunk-connect-for-kubernetes/releases/download/1.2.0/splunk-connect-for-kubernetes-1.2.0.tgz

The installation seems to go smoothly, and I can see the pods created:

    $ kubectl get pods
    NAME                                                              READY   STATUS    RESTARTS   AGE
    splunk-connect-daemonset-splunk-kubernetes-logging-zdz5q          1/1     Running   0          23m
    splunk-connect-daemonset-splunk-kubernetes-metrics-agg-77cmfx75   1/1     Running   0          23m
    splunk-connect-daemonset-splunk-kubernetes-metrics-w2rg6          1/1     Running   0          23m
    splunk-connect-daemonset-splunk-kubernetes-objects-5748df8nbl7r   1/1     Running   0          23m

Now the problem is that no events are being sent to the Splunk Cloud account. Looking at the logs, I can see problems, but I'm not sure how to proceed:

    $ k logs splunk-connect-daemonset-splunk-kubernetes-objects-5748df8nbl7r
    2020-06-29 20:08:17 +0000 [info]: Worker 0 finished unexpectedly with status 1
    2020-06-29 20:08:17 +0000 [info]: adding filter pattern="kube.**" type="jq_transformer"
    2020-06-29 20:08:17 +0000 [info]: adding filter pattern="kube.**" type="jq_transformer"
    2020-06-29 20:08:18 +0000 [info]: adding match pattern="kube.**" type="splunk_hec"
    2020-06-29 20:08:18 +0000 [info]: adding source type="kubernetes_objects"
    2020-06-29 20:08:18 +0000 [warn]: #0 both of Plugin @id and path for <storage> are not specified. Using on-memory store.
    2020-06-29 20:08:18 +0000 [info]: adding source type="kubernetes_objects"
    2020-06-29 20:08:18 +0000 [warn]: #0 both of Plugin @id and path for <storage> are not specified. Using on-memory store.
    2020-06-29 20:08:18 +0000 [warn]: parameter 'cluster_name' in <fields> cluster_name </fields> is not used.
    2020-06-29 20:08:18 +0000 [info]: #0 starting fluentd worker pid=58296 ppid=1 worker=0
    2020-06-29 20:08:18 +0000 [info]: #0 fluentd worker is now running worker=0
    2020-06-29 20:08:18 +0000 [warn]: #0 thread exited by unexpected error plugin=Fluent::Plugin::KubernetesObjectsInput title=:pull_resource_quotes error_class=NoMethodError error="undefined method `get_resource_quotes' for #<Kubeclient::Client:0x000055585882a130>\nDid you mean? get_resource_quotas\n get_resource_quota\n watch_resource_quotas"
    #<Thread:0x0000555858d851e8@pull_resource_quotes@/usr/share/gems/gems/fluentd-1.9.1/lib/fluent/plugin_helper/thread.rb:70 run> terminated with exception (report_on_exception is true):
    /usr/share/gems/gems/kubeclient-4.6.0/lib/kubeclient/common.rb:103:in `method_missing': undefined method `get_resource_quotes' for #<Kubeclient::Client:0x000055585882a130> (NoMethodError)
    Did you mean?  get_resource_quotas
                   get_resource_quota
                   watch_resource_quotas
        from /usr/share/gems/gems/kubeclient-4.6.0/lib/kubeclient/common.rb:101:in `method_missing'
        from /opt/app-root/src/gem/fluent-plugin-kubernetes-objects-1.1.3/lib/fluent/plugin/in_kubernetes_objects.rb:203:in `public_send'
        from /opt/app-root/src/gem/fluent-plugin-kubernetes-objects-1.1.3/lib/fluent/plugin/in_kubernetes_objects.rb:203:in `block in create_pull_thread'
        from /usr/share/gems/gems/fluentd-1.9.1/lib/fluent/plugin_helper/thread.rb:78:in `block in thread_create'
    2020-06-29 20:08:18 +0000 [error]: #0 unexpected error error_class=NoMethodError error="undefined method `get_resource_quotes' for #<Kubeclient::Client:0x000055585882a130>\nDid you mean? get_resource_quotas\n get_resource_quota\n watch_resource_quotas"
    2020-06-29 20:08:18 +0000 [error]: #0 /usr/share/gems/gems/kubeclient-4.6.0/lib/kubeclient/common.rb:103:in `method_missing'
    2020-06-29 20:08:18 +0000 [error]: #0 /usr/share/gems/gems/kubeclient-4.6.0/lib/kubeclient/common.rb:101:in `method_missing'
    2020-06-29 20:08:18 +0000 [error]: #0 /opt/app-root/src/gem/fluent-plugin-kubernetes-objects-1.1.3/lib/fluent/plugin/in_kubernetes_objects.rb:203:in `public_send'
    2020-06-29 20:08:18 +0000 [error]: #0 /opt/app-root/src/gem/fluent-plugin-kubernetes-objects-1.1.3/lib/fluent/plugin/in_kubernetes_objects.rb:203:in `block in create_pull_thread'
    2020-06-29 20:08:18 +0000 [error]: #0 /usr/share/gems/gems/fluentd-1.9.1/lib/fluent/plugin_helper/thread.rb:78:in `block in thread_create'
    2020-06-29 20:08:18 +0000 [error]: #0 unexpected error error_class=NoMethodError error="undefined method `get_resource_quotes' for #<Kubeclient::Client:0x000055585882a130>\nDid you mean? get_resource_quotas\n get_resource_quota\n watch_resource_quotas"
    2020-06-29 20:08:18 +0000 [error]: #0 suppressed same stacktrace
    2020-06-29 20:08:19 +0000 [info]: Worker 0 finished unexpectedly with status 1

And the logs from the logging daemon set:

    $ k logs splunk-connect-daemonset-splunk-kubernetes-logging-zdz5q
    2020-06-29 20:09:45 +0000 [info]: #0 Timeout flush: tail.containers.var.log.containers.kube-controller-manager-docker-desktop_kube-system_kube-controller-manager-bcf1f05eb5c2c0ede7bcebe0934cbe3ba246937f7b623871627520c76f287498.log:stderr
    2020-06-29 20:09:48 +0000 [error]: #0 failed to flush the buffer, and hit limit for retries. dropping all chunks in the buffer queue. retry_times=3 records=130227 error_class=OpenSSL::SSL::SSLError error="SSL_connect returned=1 errno=0 state=error: certificate verify failed (self signed certificate in certificate chain)"
    2020-06-29 20:09:48 +0000 [error]: #0 suppressed same stacktrace
    2020-06-29 20:09:49 +0000 [warn]: #0 failed to flush the buffer. retry_time=0 next_retry_seconds=2020-06-29 20:09:50 +0000 chunk="5a93ea3871f0b809090979783479a275" error_class=OpenSSL::SSL::SSLError error="SSL_connect returned=1 errno=0 state=error: certificate verify failed (self signed certificate in certificate chain)"
    2020-06-29 20:09:49 +0000 [warn]: #0 suppressed same stacktrace
    2020-06-29 20:09:50 +0000 [warn]: #0 failed to flush the buffer. retry_time=1 next_retry_seconds=2020-06-29 20:09:51 +0000 chunk="5a93ea3871f0b809090979783479a275" error_class=OpenSSL::SSL::SSLError error="SSL_connect returned=1 errno=0 state=error: certificate verify failed (self signed certificate in certificate chain)"
    2020-06-29 20:09:50 +0000 [warn]: #0 suppressed same stacktrace
    2020-06-29 20:09:51 +0000 [warn]: #0 failed to flush the buffer. retry_time=2 next_retry_seconds=2020-06-29 20:09:53 +0000 chunk="5a93ea3871f0b809090979783479a275" error_class=OpenSSL::SSL::SSLError error="SSL_connect returned=1 errno=0 state=error: certificate verify failed (self signed certificate in certificate chain)"
    2020-06-29 20:09:51 +0000 [warn]: #0 suppressed same stacktrace
    2020-06-29 20:09:53 +0000 [warn]: #0 failed to flush the buffer. retry_time=3 next_retry_seconds=2020-06-29 20:09:57 +0000 chunk="5a93ea3871f0b809090979783479a275" error_class=OpenSSL::SSL::SSLError error="SSL_connect returned=1 errno=0 state=error: certificate verify failed (self signed certificate in certificate chain)"
    2020-06-29 20:09:53 +0000 [warn]: #0 suppressed same stacktrace
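For what it's worth, the SSL error from the logging pod indicates that fluentd does not trust the certificate chain presented by the HEC endpoint. A commonly used adjustment in the values file is to relax verification for the HEC client, or to hand fluentd the CA chain instead — a sketch; whether your endpoint really presents a self-signed chain, and the caFile path, are assumptions:

```yaml
global:
  splunk:
    hec:
      # matches the "certificate verify failed (self signed certificate
      # in certificate chain)" failure seen when this is false and the
      # server's chain is untrusted
      insecureSSL: true
      # or keep insecureSSL: false and supply the CA certificate instead:
      # caFile: /certs/splunk-ca.pem
```

Relaxing verification is a quick way to confirm the diagnosis; supplying the CA is the better long-term fix.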
Our cluster indexers across the site are configured with SmartStore. Each indexer has a 6TB partition used by $SPLUNK_HOME and $SPLUNK_DB. The cache manager is configured as below:

    $SPLUNK_HOME/etc/system/default/server.conf            [diskUsage]
    $SPLUNK_HOME/etc/system/default/server.conf            minFreeSpace = 5000
    $SPLUNK_HOME/etc/slave-apps/_cluster/local/server.conf [cachemanager]
    $SPLUNK_HOME/etc/system/default/server.conf            evict_on_stable = false
    $SPLUNK_HOME/etc/slave-apps/_cluster/local/server.conf eviction_padding = 5120
    $SPLUNK_HOME/etc/slave-apps/_cluster/local/server.conf eviction_policy = lru
    $SPLUNK_HOME/etc/slave-apps/_cluster/local/server.conf hotlist_bloom_filter_recency_hours = 720
    $SPLUNK_HOME/etc/slave-apps/_cluster/local/server.conf hotlist_recency_secs = 604800
    $SPLUNK_HOME/etc/slave-apps/_cluster/local/server.conf max_cache_size = 4096000
    $SPLUNK_HOME/etc/slave-apps/_cluster/local/server.conf max_concurrent_downloads = 8
    $SPLUNK_HOME/etc/slave-apps/_cluster/local/server.conf max_concurrent_uploads = 8
    $SPLUNK_HOME/etc/slave-apps/_cluster/local/server.conf remote.s3.multipart_max_connections = 4
    $SPLUNK_HOME/etc/slave-apps/_cluster/local/server.conf remote.s3.multipart_upload.part_size = 536870912

The indexer is showing the 6TB partition 97% utilized, although it should not have crossed 4TB based on max_cache_size = 4096000:

    Filesystem       1K-blocks       Used  Available Use% Mounted on
    devtmpfs          71967028          0   71967028   0% /dev
    tmpfs             71990600          0   71990600   0% /dev/shm
    tmpfs             71990600    4219944   67770656   6% /run
    tmpfs             71990600          0   71990600   0% /sys/fs/cgroup
    /dev/nvme0n1p2    20959212    6812056   14147156  33% /
    none              71990600          0   71990600   0% /run/shm
    /dev/nvme1n1    6391527336 5864488560  204899848  97% /opt/splunk
    tmpfs             14398120          0   14398120   0% /run/user/1003

Here are DEBUG entries for the CacheManager:

    06-10-2020 19:32:42.604 +0000 DEBUG CacheManager - The system has freebytes=210838605824 with minfreebytes=5242880000 cachereserve=5368709120 totalpadding=10611589120 buckets_size=3069799919616 maxSize=4294967296000
    06-10-2020 19:32:42.607 +0000 DEBUG CacheManager - The system has freebytes=210838536192 with minfreebytes=5242880000 cachereserve=5368709120 totalpadding=10611589120 buckets_size=3069799919616 maxSize=4294967296000
    06-10-2020 19:32:46.502 +0000 DEBUG CacheManager - The system has freebytes=210850021376 with minfreebytes=5242880000 cachereserve=5368709120 totalpadding=10611589120 buckets_size=3069799919616 maxSize=4294967296000
    06-10-2020 19:32:46.505 +0000 DEBUG CacheManager - The system has freebytes=210850172928 with minfreebytes=5242880000 cachereserve=5368709120 totalpadding=10611589120 buckets_size=3069799919616 maxSize=4294967296000
    06-10-2020 19:33:06.727 +0000 DEBUG CacheManager - The system has freebytes=210255511552 with minfreebytes=5242880000 cachereserve=5368709120 totalpadding=10611589120 buckets_size=3069799919616 maxSize=4294967296000

Note, from the DEBUG observations:

    freebytes    = 210072649728
    minfreebytes = 5242880000
    cachereserve = 5368709120
    totalpadding = 10611589120
    buckets_size = 3069785296896    <<<<<< ~3TB, as calculated by the cache manager
    maxSize      = 4294967296000    <<<<<< configured 4TB limit

The issue is that the cache has consumed almost 6TB of disk space, but per the cache manager's calculation it shows usage of only 3TB. Due to this miscalculation, Splunk is not evicting buckets.
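To make the discrepancy concrete, the numbers from the DEBUG lines can be checked directly. The sketch below is a simplified model of the eviction condition those lines imply (tracked bucket size over the cap, or free space under the padding); it is not the cache manager's actual code:

```python
# Values copied from the CacheManager DEBUG line and df output above (bytes)
freebytes     = 210_838_605_824     # ~196 GiB actually free on the partition
buckets_size  = 3_069_799_919_616   # ~2.79 TiB of buckets the cache manager tracks
max_size      = 4_294_967_296_000   # max_cache_size = 4096000 (MB), ~3.9 TiB
total_padding = 10_611_589_120      # minFreeSpace + eviction_padding

# Simplified: eviction triggers when tracked bucket size exceeds the cap,
# or free space drops below the padding -- neither holds here.
needs_eviction = buckets_size > max_size or freebytes < total_padding
print(needs_eviction)  # False: no eviction, even though df shows 97% used

# The gap between df usage and the tracked bucket total is the real problem:
disk_used = 5_864_488_560 * 1024    # "Used" column from df is in 1K blocks
untracked = disk_used - buckets_size
print(round(untracked / 2**40, 2))  # ~2.67 TiB not counted by the cache manager
```

In other words, by the cache manager's own bookkeeping there is no reason to evict; roughly 2.67 TiB of the partition's usage is something it does not attribute to cached buckets, which is consistent with the behavior described.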
Hi! I used the "Cluster Behavior by App Usage" example in the Clustering Numeric Fields workflow within the Splunk MLTK Showcase. It produces the cluster visualization shown below. Can you help me understand the meaning of this visualization, or recommend resources for interpreting it? How do I know which fields are clustered by looking at it? I understand the coloring has something to do with it, but there are multiple plots, and I would love some help understanding what this means. Thank you so much!
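As general background (not specific to the Showcase's rendering): each point in that kind of scatter-matrix visualization is one row of the input data, each panel plots one pair of the numeric fields you clustered on, and the color is the cluster label the algorithm assigned to that row. A tiny self-contained sketch of the assignment step, using made-up 2-D data and plain Lloyd's k-means, shows what a "label" is:

```python
# Minimal k-means (Lloyd's algorithm) on made-up 2-D "usage" rows.
# Each row gets a cluster label; a scatter-matrix visualization colors
# points by that label, one panel per pair of input fields.
def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=10):
    centers = points[:k]                          # deterministic init: first k rows
    labels = [0] * len(points)
    for _ in range(iters):
        # assign each point to its nearest center
        labels = [min(range(k), key=lambda c: dist2(p, centers[c]))
                  for p in points]
        # recompute each center as the mean of its members
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = tuple(sum(x) / len(members)
                                   for x in zip(*members))
    return labels

rows = [(0.1, 0.2), (0.3, 0.1), (9.8, 10.1), (10.2, 9.9)]
labels = kmeans(rows, k=2)
print(labels)  # the two low-usage rows share one label, the high-usage rows the other
```

So the colors don't say *which fields* are clustered — every panel uses all the assigned labels; the panels differ only in which two fields they use as axes, letting you see which field pairs separate the clusters visibly.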
I receive a new CSV file every day in the following format:

    color    1/22/20   1/23/20   1/24/20   1/25/20
    yellow       1         2         0         5
    green        0         2         4         3
    purple       7       200         5         3

Column 1 is the color; each column to the right is a date column. Every day there will be a new column dated from the previous day. Today is Jun 29, so the CSV I received this morning has columns from 1/1/2020 through 6/28/2020, each holding a number indicating how many code yellows, greens, purples (and several other colors) occurred on that day.

My search so far is:

    index="coded-colors" sourcetype="csv"
    | stats sum(*) by color

Unfortunately, this chart isn't very good. I get all my colors on the (line) chart, but instead of dates moving from left to right, they are stacked on top of each other: all the dates for yellow are stacked on top of each other, all the dates for red are stacked on top of each other, and the colors move from left to right. I'd like a nice line chart with 1/1/2020 on the far left and the current date on the far right, where the elevation of the chart is determined by the number of events for that date and color. What has me stumped is how to use the column names, as they continually change and are the actual dates. I have no control over the CSV file I receive. Arg. Thank you in advance. R
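One commonly suggested approach for date-named columns is to unpivot them with untable, so the column name becomes a field value you can parse into _time — a sketch, with field names taken from the sample above and the date format %m/%d/%y assumed from "1/22/20":

```spl
index="coded-colors" sourcetype="csv"
| stats sum(*) AS * by color
| untable color date count
| eval _time = strptime(date, "%m/%d/%y")
| timechart span=1d sum(count) by color
```

After untable, each row is one (color, date, count) triple, which timechart can then plot left to right with time on the x-axis.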
I'm trying to figure out what has changed in the "Splunk Add-on for Java Management Extensions" add-on between version 3.3.0 and 4.0.0. Previously I was able to configure the add-on to use a different trust store and add the required jar files to allow connecting via a REST connector to a server. This allowed us to remotely monitor IBM WebSphere Liberty application servers; Liberty currently only supports remote monitoring over a REST connection. The changes made in version 3.3.0 were to add the restConnector.jar file to the bin/lib directory under the add-on install directory and to update the java_const.py file to add the

    javax.net.ssl.trustStore
    javax.net.ssl.trustStorePassword
    javax.net.ssl.trustStoreType

Java parameters to JAVA_COMMON_ARGS.

Now, with the same configuration changes in 4.0.0, I am receiving authentication errors connecting to the REST server with the exact same credentials as before. I'm primarily interested in whether something has changed in jmxmodinput.jar.

    2020-06-29 14:53:55,172 - com.splunk.jmx.ServerConfigValidator -1219 [main] ERROR [] - Failed to make connection with JMX server. Reason- CWWKX0229E: There was a problem with the user credentials provided. The server responded with code 401 and message 'Unauthorized'
    java.io.IOException: CWWKX0229E: There was a problem with the user credentials provided. The server responded with code 401 and message 'Unauthorized'
        at com.ibm.ws.jmx.connector.client.rest.internal.RESTMBeanServerConnection.getBadCredentialsException(RESTMBeanServerConnection.java:1771) ~[restConnector.jar:?]
        at com.ibm.ws.jmx.connector.client.rest.internal.RESTMBeanServerConnection.loadJMXServerInfo(RESTMBeanServerConnection.java:270) ~[restConnector.jar:?]
        at com.ibm.ws.jmx.connector.client.rest.internal.RESTMBeanServerConnection.<init>(RESTMBeanServerConnection.java:160) ~[restConnector.jar:?]
        at com.ibm.ws.jmx.connector.client.rest.internal.Connector.connect(Connector.java:373) ~[restConnector.jar:?]
        at javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.java:270) ~[?:1.8.0_232]
        at com.splunk.jmx.ServerConfigValidator.connect(Unknown Source) ~[jmxmodinput.jar:?]
        at com.splunk.jmx.ServerConfigValidator.main(Unknown Source) [jmxmodinput.jar:?]
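For anyone comparing versions, the 3.3.0-era workaround described above amounts to appending the trust-store system properties to JAVA_COMMON_ARGS in bin/java_const.py — a sketch, where the paths, password, and the exact list syntax used in 4.0.0 are all assumptions:

```python
# appended to JAVA_COMMON_ARGS in bin/java_const.py (hypothetical values)
"-Djavax.net.ssl.trustStore=/path/to/liberty-trust.jks",
"-Djavax.net.ssl.trustStorePassword=changeit",
"-Djavax.net.ssl.trustStoreType=JKS",
```

If 4.0.0 launches the JMX input's JVM differently, these arguments may no longer reach the process that opens the REST connection, which would be worth verifying before suspecting jmxmodinput.jar itself.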
Hi, I'm trying to configure the Teams Add-on for Splunk (https://splunkbase.splunk.com/app/4994) on Splunk Cloud. I have gotten the UserReport working just fine, but none of the other data works. I created a WebHook pointing to idm-xx.splunkcloud.com to ingest the CDR data, but no traffic has arrived on it. I've granted all the privileges to the Azure App that are called for in the detailed directions. In fact, nothing but the User Reports is working. Is there a step I'm missing?
    Organization   System   Scan Due Date   email of SA
    ABC            Jack     7-Feb-21        jack@email.com
    ABC            Jill     9-May-20        jill@email.com
    123            Bob      Unspecified     bob@email.com
    123            Alice    Unspecified     alice@email.com
    456            James    10-Jan-21       james@email.com

    | inputlookup scan_due_date.csv
    | eval date = strptime('Scan Due Date', "%d-%b-%y")
    | eval duedate = if(isnull(date) OR date="", "Unspecified", date)
    | eval status = case(duedate="Unspecified", "Unspecified", duedate >= now(), "Not Expired", true(), "Overdue")
    | fields date duedate status
    | stats count by status

I am trying to set up email alerting for systems. The table and search above work to show which systems are past or current on their scan date. I want to make an alert for each system that is overdue for its scan date (or Unspecified), without having to write an alert for each individual system. Each system has its own unique email address. Is there a way to write an eval statement that looks at the status field and emails the system owner that the system is Overdue or Unspecified, without having to write out 50 different eval statements, one per system email, to generate an alert? Thanks!
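One pattern that avoids per-system alerts entirely: a single scheduled search that returns only the problem rows, combined with the alert's "Trigger: For each result" option and a $result.…$ token in the email action's "To" field. A sketch (renaming the email column first keeps the token simple):

```spl
| inputlookup scan_due_date.csv
| rename "email of SA" as email
| eval date = strptime('Scan Due Date', "%d-%b-%y")
| eval duedate = if(isnull(date), "Unspecified", date)
| eval status = case(duedate="Unspecified", "Unspecified", duedate >= now(), "Not Expired", true(), "Overdue")
| where status != "Not Expired"
| table Organization System status email
```

With "For each result" triggering, putting $result.email$ in the To field sends one email per returned row, addressed to that system's owner.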
I need to integrate Splunk with a Spring Boot application. The idea is to have a form allowing the user to enter keywords, a date range, and other details used as part of the search query when looking for logs, and then present the results in a structured format to the end user. Please suggest some paths to achieve this. In case I need to ask this question in some other category, please let me know.
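One common path is to call Splunk's REST search API from the backend: the form fields map onto the search string and the earliest/latest time parameters, and the JSON results are reshaped for the UI. A sketch in Python of the request construction (the host, credentials, and index name are placeholders; the same call translates directly to Spring's RestTemplate or WebClient):

```python
# Sketch: querying Splunk's REST search API from a backend service.
# splunk.example.com, index=main, and the credentials below are placeholders.
import urllib.parse

def build_export_request(base_url, keywords, earliest, latest):
    """Build the URL and form body for /services/search/jobs/export,
    which runs a one-shot search and streams results back."""
    params = {
        "search": f"search index=main {keywords}",
        "earliest_time": earliest,
        "latest_time": latest,
        "output_mode": "json",
    }
    return (f"{base_url}/services/search/jobs/export",
            urllib.parse.urlencode(params))

url, body = build_export_request(
    "https://splunk.example.com:8089", "error user=alice", "-24h@h", "now")
print(url)

# To actually run the search, POST `body` with basic auth or a session token:
#   import requests
#   resp = requests.post(url, data=body, auth=("admin", "changeme"))
```

The official Splunk SDK for Java wraps this same REST API and may be the more natural fit inside a Spring Boot service; the export endpoint shown here is the simplest entry point either way.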
If, in the source data log, I get an event like

    18 May 2020 17:46:51,623 [13] INFO  BWT - BWT - Mura Map - Accepted

then the main query should not execute.

    source="c:\\program files (x86)\\prysm\\servo\\logs\\vegaservo.log" "PcalLogger - LaserNits" earliest=-7d@d latest=now Tile PA = Low
    | stats max(VAL) AS max, min(VAL) AS min by Laser, TILE, host
    | eval delta_diff = max - min
    | fields host, Laser, TILE, max, min, delta_diff
    | where delta_diff > 6
    | eval LE_Laser_Decay=TILE.":".Laser.":".delta_diff
    | stats values(LE_Laser_Decay) as LE_Laser_Decay by host
    | eval LE_Laser_Decay=mvjoin(LE_Laser_Decay, ", ")
    | lookup Walls_Reference Host as host OUTPUTNEW Wall as wall Active as active
    | where active == 1
    | table wall LE_Laser_Decay

@to4kawa How do I add a condition so that the main query does not execute if that log event has occurred?
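One way to gate the whole result set on the absence of that event is to count it in a subsearch and filter every row when the count is nonzero — a sketch appended to the end of the existing search above (the "Mura Map - Accepted" search terms are taken from the sample event; subsearch expansion via the return command is the assumed mechanism here):

```spl
| eval blocker = [ search source="c:\\program files (x86)\\prysm\\servo\\logs\\vegaservo.log" "Mura Map - Accepted" earliest=-7d@d latest=now | stats count | return $count ]
| where blocker = 0
```

The subsearch runs first and its count is substituted into the eval; if the Accepted event occurred in the window, blocker is nonzero and every result row is filtered out. The main search still runs, but it returns nothing, which is usually enough to suppress a downstream alert.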
Hi, I have installed Splunk version 7.2.6 on several servers, and on one of them I don't see the Type dropdown on the Field Transformations page in the UI. Can you please let me know how to enable this dropdown? Thanks
Hi everyone, why does this search return nothing:   | stats count(status=200) AS Success   while this search returns what I expect?   | stats count(eval(status=200)) AS success
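For context, `count(X)` counts events where a field literally named `X` is non-null, so `count(status=200)` looks for a field called `status=200`, which doesn't exist. Wrapping the comparison in `eval(...)` makes stats count the events where the expression is true (the `AS` rename is required for the eval form). A minimal sketch, with placeholder index and sourcetype:

```spl
index=web sourcetype=access_combined
| stats count(eval(status=200)) AS success, count AS total
| eval success_rate = round(success / total * 100, 2)
```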
I have installed the Mint SDK for Android, following the instructions 1:1 from this video: https://www.youtube.com/watch?v=vecdk2HUASw However, I got the following errors: ERROR: Unable to find method 'com.android.build.gradle.BasePlugin.getVariantManager()Lcom/android/build/gradle/internal/VariantManager;'. Failed to notify project evaluation listener. > com.android.build.gradle.BasePlugin.getVariantManager()Lcom/android/build/gradle/internal/VariantManager; By removing apply plugin: 'com.splunk.mint.gradle.android.plugin' from build.gradle(:app), I then got my code to work. I let my app crash as recommended and received an error email from the Mint Management Console. In the Console, I can also see the mobile app crashes that I created. HOWEVER: other than the error report, I can't see any data in the Mint Management Console. All reports just spin and load forever. Even if I add Mint.logEvent("We are here"); or  String txID = Mint.transactionStart("Test1"); Mint.transactionStop(txID); to my app, I still don't receive any data in the Mint Management Console.   I then tried to send data via  Mint.initAndStartSessionHEC(this.getApplication(), "http://URL/:8088/services/collector/mint", "myTokenAsObtainedFromSplunkEnterprise"); directly to Splunk Enterprise (hosted on AWS with an open listening port on 8088), but no data arrived at Splunk.   Does anybody know what could be going wrong here? Why do I have to remove com.splunk.mint.gradle.android.plugin from my Gradle file to make my code work? Why does no data get through from my mobile app to the Mint Management Console or to Splunk, even though the Mint Management Console is able to detect intentional crashes of my mobile app?
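Before debugging the SDK, it may help to confirm the HEC endpoint itself accepts events from outside. Note that the URL in the initAndStartSessionHEC call above has an extra slash before the port ("http://URL/:8088/..."), which on its own would break the request. A hedged connectivity check (host and token are placeholders):

```shell
# Placeholder host and token; a working HEC endpoint answers with
# {"text":"Success","code":0} when the token is valid and enabled.
curl -k http://splunk-host:8088/services/collector/event \
  -H "Authorization: Splunk YOUR-HEC-TOKEN" \
  -d '{"event": "connectivity test"}'
```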
The TA-nmon is not sending data to the 'nmon' index on my Splunk instance. I was looking through the troubleshooting guide for TA-nmon and noticed that the forwarder was not running any of the three expected processes (fifo_reader.py, fifo_reader.sh, nmon.fifo) when I ran 'ps -ef | egrep nmon'. I tried starting these processes manually, but it returned the message:     couldn't run "/opt/osi/splunkforwarder/etc/apps/TA-nmon/bin/nmon_helper.sh": Permission denied     I saw another solution to this issue where the user changed the permissions for these executables; how would I do that? 
Newbie question - as a consumer of Splunk Cloud, do I control what version of the Splunk Cloud platform I'm on, or is that all controlled by Splunk on the backend?
We ran the kvstore clean command without a backup and are looking for a way to restore it. We particularly want to recreate one lookup (minemeldfeeds_lookup) that comes with the Palo Alto Networks app, version 6.2.0. Please let us know if there is a way to do it. Below is the lookup error: Error in 'lookup' command: Could not construct lookup 'minemeldfeeds_lookup, indicator, AS, client_ip, OUTPUT, value.autofocus_tags, AS, client_autofocus_tags'. 
If universal forwarders installed on internal servers forward to a heavy forwarder, and the heavy forwarder forwards to Splunk Cloud, is the heavy forwarder indexing the data before sending it to Splunk Cloud?
Hello Splunkers: I have a problem with the Splunk export button; when I export my table and open it in Excel, special characters come out garbled, for example: SÃ@©curité, 321801. I searched and found JS code that allows changing the encoding: https://community.splunk.com/t5/Getting-Data-In/Export-to-csv-on-click-of-button/td-p/444724 My question is how I can link this code to Splunk's export button? Thank you for your help.
Hello everyone, does anyone know if there is any method in Splunk to index encrypted input files, such as PGP-encrypted files? @ashish9433
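Splunk has no native PGP decryption; a common workaround is a scripted input that decrypts the file to stdout so only plaintext reaches the indexer. A hedged sketch (the file path is a placeholder, and it assumes the private key is already imported into the Splunk user's GPG keyring):

```shell
#!/bin/sh
# Scripted-input sketch: decrypt a PGP file and print the plaintext to
# stdout, which Splunk indexes as the script's output.
gpg --batch --quiet --decrypt /path/to/incoming/data.log.pgp
```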