All Posts

I'm afraid to say I removed the inputs last week, but I can still see errors in the last 15 minutes.
Ahhhh... one more thing. I think the error can persist from before you disabled/deleted the inputs. AFAIR I had similar issues with VMware vCenter inputs. Until the events rolled off the _internal index, the error persisted within the WebUI.
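If you want to double-check that, a rough sketch of a search against _internal (the component and field filters here are assumptions based on the ModularInputs error quoted later in this thread - adjust them to match your actual messages):

index=_internal sourcetype=splunkd log_level=ERROR component=ModularInputs
| stats min(_time) as first_seen max(_time) as last_seen count by host
| convert ctime(first_seen) ctime(last_seen)

If last_seen is older than the moment you removed the inputs, you're just looking at old events that haven't rolled off yet.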
I have found this info so far: https://splunk.my.site.com/customer/s/article/The-code-execution-cannot-proceed-because-LIBEAY32-dll-was-not-found-Reinstalling-the-program-may-fix-this-problem
There is no sign of the MSCS inputs when I output the content from "splunk show config inputs", yet the error is still present. This is very strange.
Hello, My index configuration is provided below, but I have a question regarding frozenTimePeriodInSecs = 7776000. I have configured Splunk to move data to frozen storage after 7,776,000 seconds (3 months). Once data reaches the frozen state, how can I control the frozen storage if the frozen disk becomes full? How does Splunk handle the frozen storage in such scenarios?

[custom_index]
repFactor = auto
homePath = volume:hot/$_index_name/db
coldPath = volume:cold/$_index_name/colddb
thawedPath = /opt/thawed/$_index_name/thaweddb
homePath.maxDataSizeMB = 1664000
coldPath.maxDataSizeMB = 1664000
maxWarmDBCount = 200
frozenTimePeriodInSecs = 7776000
maxDataSize = auto_high_volume
coldToFrozenDir = /opt/frozen/custom_index/frozendb
That's even more interesting because if there is no input defined (not even disabled ones), nothing should be started. Maybe your settings were not applied. Check the output of "splunk show config inputs" to see what the contents of Splunk's in-memory "running config" are.
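If CLI access to every member is awkward, a rough equivalent from the search bar is a sketch like this (the title filter is an assumption - replace it with whatever your input stanzas were actually called):

| rest splunk_server=local /services/configs/conf-inputs
| search title="mscs*"
| table title disabled

If nothing comes back, splunkd genuinely has no such stanzas loaded.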
Have you installed the AWS TA and the Splunk Add-on for Amazon Kinesis Firehose for parsing? Document
The MSCS TA uses a service principal for authentication. Please review the document below to configure the TA and connect with it - https://splunk.github.io/splunk-add-on-for-microsoft-cloud-services/ConfigureappinAzureAD/
Yes, I have already checked the btool output. Nothing shows up when I run the command, since the inputs.conf files have been removed.
Let me ask you first: why would you want to map your 8089 splunkd port to 443? 443 is for the web UI (if enabled and redirected from the default 8000); 8089 is the port your API is expected to be at.
@bowesmana Thank you soooo much, it worked like a charm. I will surely try it out (the linked list option).
Did you check the btool output? Inputs shouldn't normally be run when disabled. That's the whole point of defining disabled inputs - define them in a "ready to run" state by default but let them be enabled or disabled selectively.
You can't replace docs and management with tools.

[ | makeresults annotate=f
  | eval t1="ind", t2="ex", t3=t1.t2
  | eval {t3}="_internal"
  | table *
  | fields - t1 t2 t3 _time ]
| stats count by index
OK. Let's back up a little.

You have a record with TASKID=1 UPDATED=1 VALUE="A" TASKIDUPDATED="1-1". You update the VALUE and the UPDATED field, and the TASKIDUPDATED field is updated as well, so you have TASKID=1 UPDATED=2 VALUE="B" TASKIDUPDATED="1-2".

From Splunk's point of view it's a completely different entity since your TASKIDUPDATED changed (even though from your database's point of view it can still be the same record). Splunk doesn't care about the state of your database. It just fetches some results from a database query.

You can - to some extent - compare it to the file monitor input. If you have a log file which Splunk is monitoring and you change some sequence of bytes in the middle of that file to a different sequence, Splunk has no way of knowing that something changed - the event which had been read from that position and ingested into Splunk stays the same. (Of course there can be issues when Splunk notices that the file has been truncated and decides to reread the whole file, or just stops reading from the file because it decides it has reached the end of the file, but these are beside the main point.)

BTW, remember that setting a non-numeric column to be your rising column may yield unpredictable results due to the quirks of sorting - see the sketch below.

EDIT warning - previous version of this reply mistakenly used the same field name twice.
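A minimal sketch of that sorting quirk (just an illustration of lexicographic string comparison, not of DB Connect itself - the actual ordering is done by your database, but the same pitfall applies):

| makeresults
| eval a="1-10", b="1-2"
| eval string_order=if(a < b, "1-10 sorts first", "1-2 sorts first")
| table a b string_order

As strings, "1-10" sorts before "1-2", so a rising column built by concatenating numbers can appear to move backwards from the checkpoint's point of view.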
I have experience working with Ingest Actions. However, I was reading about Ingest Processor, which I guess is not that different from Ingest Actions in terms of functionality. Both configure data flows, control data format, apply transformation rules prior to indexing, and route to destinations. The major difference I see is that Ingest Actions configs are present in props and transforms at the HF/indexer level, while Ingest Processor is a complete cloud solution that comes into action between the HF layer and the Splunk Cloud indexer layer and has a separate UI for configuration.
OK. I added an idea https://ideas.splunk.com/ideas/EID-I-2471 Feel free to upvote and/or comment
It's the MSCS and Google TAs. On the SHC, inputs.conf has been removed from both default and local, yet the error below still appears on all the members:

ERROR ModularInputs [1990877 ConfReplicationThread] - Unable to initialize modular input "mscs_storage_table" defined in the app "splunk_ta_microsoft-cloudservices": Introspecting scheme=mscs_storage_table: script running failed
You have to configure the webhook input as described in the shared docs.

Launch the Microsoft Teams Add-on for Splunk. Select Inputs > Create New Input > Teams Webhook.

Have you done it? If not, create the input first. The webhook address will then be available via the internal IP on the instance where you've configured the webhook, and you have to use the port that you've configured during the webhook setup.

curl <internal_ip_of_your_splunk_instance>:<the_configured_port> -d '{"value": "test"}'

For an initial test you could execute the curl on the same instance where you've configured the webhook.

curl 127.0.0.1:<the_configured_port> -d '{"value": "test"}'

To make the webhook address publicly accessible there are different ways, of course, as mentioned in the documentation:

The webhook must be a publicly accessible, HTTPS-secured endpoint that is addressable via a URL. You have two options to set up the Splunk instance running the Teams add-on. You can make it publicly accessible via HTTPS. Or you can use a load balancer, reverse proxy, tunnel, etc. in front of your Splunk instance running the add-on. The second option here can be preferable if you don't want to expose the Splunk heavy forwarder to the internet, as the public traffic terminates at that demarcation and then continues on internally to the Splunk heavy forwarder.
I have been trying to set up Splunk on my Kubernetes cluster so I can use it with a Python script to access the REST API. I have a Splunk Enterprise standalone instance running. I used a Traefik ingress to expose port 8089:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: splunk-ingress
  namespace: splunk
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-issuer
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
spec:
  ingressClassName: common-traefik
  tls:
    - hosts:
        - splunk.example.com
      secretName: app-certificate
  rules:
    - host: splunk.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: splunk-stdln-standalone-service
                port:
                  number: 8089

When I try to curl to the client it returns an internal server error:

curl -X POST https://splunk.example.com/services/auth/login --data-urlencode username=admin --data-urlencode password=<mysplunkpassword> -k -v

Output:

* Host splunk.example.com:443 was resolved.
* IPv6: (none)
* IPv4: xx.xx.xxx.xxx
* Trying xx.xx.xxx.xxx:443...
* Connected to splunk.example.com (xx.xx.xxx.xxx) port 443
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
* (304) (IN), TLS handshake, Server hello (2):
* (304) (IN), TLS handshake, Unknown (8):
* (304) (IN), TLS handshake, Certificate (11):
* (304) (IN), TLS handshake, CERT verify (15):
* (304) (IN), TLS handshake, Finished (20):
* (304) (OUT), TLS handshake, Finished (20):
* SSL connection using TLSv1.3 / AEAD-CHACHA20-POLY1305-SHA256 / [blank] / UNDEF
* ALPN: server accepted h2
* Server certificate:
*  subject: CN=splunk.example.com
*  start date: Dec 6 23:53:06 2024 GMT
*  expire date: Mar 6 23:53:05 2025 GMT
*  issuer: C=US; O=Let's Encrypt; CN=R10
*  SSL certificate verify ok.
* using HTTP/2
* [HTTP/2] [1] OPENED stream for https://splunk.example.com/services/auth/login
* [HTTP/2] [1] [:method: POST]
* [HTTP/2] [1] [:scheme: https]
* [HTTP/2] [1] [:authority: splunk.example.com]
* [HTTP/2] [1] [:path: /services/auth/login]
* [HTTP/2] [1] [user-agent: curl/8.7.1]
* [HTTP/2] [1] [accept: */*]
* [HTTP/2] [1] [content-length: 34]
* [HTTP/2] [1] [content-type: application/x-www-form-urlencoded]
> POST /services/auth/login HTTP/2
> Host: splunk.example.com
> User-Agent: curl/8.7.1
> Accept: */*
> Content-Length: 34
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 34 bytes
< HTTP/2 500
< content-length: 21
< date: Mon, 09 Dec 2024 06:54:50 GMT
<
* Connection #0 to host splunk.example.com left intact
Internal Server Error%

When I port forward to localhost, the curl works:

curl -X POST https://localhost:8089/services/auth/login --data-urlencode username=admin --data-urlencode password=<mysplunkpassword> -k -v

Output:

Note: Unnecessary use of -X or --request, POST is already inferred.
* Host localhost:8089 was resolved.
* IPv6: ::1
* IPv4: 127.0.0.1
* Trying [::1]:8089...
* Connected to localhost (::1) port 8089
* ALPN: curl offers h2,http/1.1
* (304) (OUT), TLS handshake, Client hello (1):
* (304) (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Change cipher spec (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-RSA-AES256-GCM-SHA384 / [blank] / UNDEF
* ALPN: server did not agree on a protocol. Uses default.
* Server certificate:
*  subject: CN=SplunkServerDefaultCert; O=SplunkUser
*  start date: Dec 9 02:21:04 2024 GMT
*  expire date: Dec 9 02:21:04 2027 GMT
*  issuer: C=US; ST=CA; L=San Francisco; O=Splunk; CN=SplunkCommonCA; emailAddress=support@splunk.com
*  SSL certificate verify result: self signed certificate in certificate chain (19), continuing anyway.
* using HTTP/1.x
> POST /services/auth/login HTTP/1.1
> Host: localhost:8089
> User-Agent: curl/8.7.1
> Accept: */*
> Content-Length: 34
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 34 bytes
< HTTP/1.1 200 OK
< Date: Mon, 09 Dec 2024 06:59:54 GMT
< Expires: Thu, 26 Oct 1978 00:00:00 GMT
< Cache-Control: no-store, no-cache, must-revalidate, max-age=0
< Content-Type: text/xml; charset=UTF-8
< X-Content-Type-Options: nosniff
< Content-Length: 204
< Connection: Keep-Alive
< X-Frame-Options: SAMEORIGIN
< Server: Splunkd
<
<response>
  <sessionKey>{some sessionKey...}</sessionKey>
  <messages>
    <msg code=""></msg>
  </messages>
</response>
* Connection #0 to host localhost left intact

I am using default confs; I'm not sure if I need to update my server.conf for this.

More context: I checked splunkd.log from when I made the request and I get these logs:

12-09-2024 17:19:36.904 +0000 WARN SSLCommon [951 HTTPDispatch] - Received fatal SSL3 alert. ssl_state='SSLv3 read client key exchange A', alert_description='bad certificate'.
12-09-2024 17:19:36.904 +0000 WARN HttpListener [951 HTTPDispatch] - Socket error from 192.168.xx.xx:52528 while idling: error:14094412:SSL routines:ssl3_read_bytes:sslv3 alert bad certificate
Hi, What do you mean by small (&)... by lowering the fonts?