All Posts


Hi @yuanliu, I am working on a dashboard in Splunk and need help implementing specific filtering requirements. I have a table with the following fields:

message (contains the log details)
component (indicates the source component)

My requirements are:
1. Add a multiselect dropdown to filter the component field.
2. Add a textbox input to filter the message field using comma-separated keywords. For example, if the textbox contains "error, timeout", it should return rows where the message field contains "error" or "timeout"; if both are present, both rows should be shown.

Any suggestions or examples are greatly appreciated. Thank you.
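A rough Simple XML sketch of one way this could work (the token names, index name, and base search are assumptions; the textbox keywords are turned into an alternation regex, so a default of "." matches everything):

```xml
<form>
  <fieldset>
    <!-- multiselect over component; selected values are joined with OR -->
    <input type="multiselect" token="component_tok">
      <label>Component</label>
      <fieldForLabel>component</fieldForLabel>
      <fieldForValue>component</fieldForValue>
      <search>
        <query>index=my_index | stats count by component</query>
      </search>
      <valuePrefix>component="</valuePrefix>
      <valueSuffix>"</valueSuffix>
      <delimiter> OR </delimiter>
      <default>*</default>
    </input>
    <!-- comma-separated keywords, e.g. "error, timeout" -->
    <input type="text" token="keywords_tok">
      <label>Message keywords (comma-separated)</label>
      <default>.</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>
            index=my_index ($component_tok$)
            | eval keyword_regex=replace("$keywords_tok$", "\s*,\s*", "|")
            | where match(message, keyword_regex)
            | table message component
          </query>
        </search>
      </table>
    </panel>
  </row>
</form>
```

The replace() turns "error, timeout" into "error|timeout", so match() keeps any row whose message contains at least one of the keywords, which also covers the "both present" case.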
Wait, you're seeing those errors in events in _internal? Not just in the web UI? That's unexpected.
I'm afraid to say I removed the inputs last week, but I can still see the errors in the last 15 minutes.
Ahhhh... one more thing. I think the error can persist from before you disabled/deleted the inputs. AFAIR I had similar issues with VMware vCenter inputs. Until the events rolled off the _internal index, the error persisted within the WebUI.
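To check whether the errors are old events still sitting in _internal or are being generated right now, a search along these lines (a sketch; the component filter assumes the errors come from the ModularInputs component as in the log snippet earlier) shows the timestamp of the most recent occurrence per host:

```spl
index=_internal sourcetype=splunkd log_level=ERROR component=ModularInputs
| stats count latest(_time) AS last_seen BY host
| convert ctime(last_seen)
```

If last_seen stops advancing after the inputs were removed, the errors are just retained events waiting to age out of _internal.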
I have found this info so far: https://splunk.my.site.com/customer/s/article/The-code-execution-cannot-proceed-because-LIBEAY32-dll-was-not-found-Reinstalling-the-program-may-fix-this-problem
There is no sign of the MSCS inputs when I output the content from "splunk show config inputs", yet the error is still present. This is very strange.
Hello, my index configuration is provided below, but I have a question regarding frozenTimePeriodInSecs = 7776000. I have configured Splunk to move data to frozen storage after 7,776,000 seconds (90 days, roughly 3 months). Once data reaches the frozen state, how can I control the frozen storage if the frozen disk becomes full? How does Splunk handle frozen storage in such scenarios?

[custom_index]
repFactor = auto
homePath = volume:hot/$_index_name/db
coldPath = volume:cold/$_index_name/colddb
thawedPath = /opt/thawed/$_index_name/thaweddb
homePath.maxDataSizeMB = 1664000
coldPath.maxDataSizeMB = 1664000
maxWarmDBCount = 200
frozenTimePeriodInSecs = 7776000
maxDataSize = auto_high_volume
coldToFrozenDir = /opt/frozen/custom_index/frozendb
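For context on the coldToFrozenDir setting above: Splunk only copies frozen buckets into that directory and never prunes it, so retention on the frozen disk is up to the administrator. One hedged alternative from indexes.conf is coldToFrozenScript, which hands each bucket to a script of your own that can archive it and enforce its own retention (the script path here is hypothetical):

```ini
[custom_index]
# hypothetical alternative to coldToFrozenDir: Splunk invokes this script
# with the bucket path as its argument; the script copies the bucket to
# the archive and can delete the oldest archived buckets when space runs low
coldToFrozenScript = "/opt/splunk/bin/python3" "/opt/splunk/bin/archive_bucket.py"
```

With plain coldToFrozenDir, if the frozen disk fills up the freeze step fails and buckets back up in cold storage, so some external cleanup (cron job, storage policy, etc.) is needed either way.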
That's even more interesting, because if there is no input defined (not even a disabled one), nothing should be started. Maybe your settings were not applied. Check the output of "splunk show config inputs" to see the contents of Splunk's in-memory "running config".
Have you installed the AWS TA and the Splunk Add-on for Amazon Kinesis Firehose for parsing? Document
The MSCS TA uses a service principal for authentication. Please review the document below to configure one and connect the TA with it: https://splunk.github.io/splunk-add-on-for-microsoft-cloud-services/ConfigureappinAzureAD/
Yes, I have already checked the btool output. Nothing shows up when I run the command, since the inputs.conf files have been removed.
Let me ask you first: why would you want to map your 8089 splunkd port to 443? 443 is for the web UI (if enabled and redirected from the default 8000). 8089 is the port your API is expected to be on.
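For reference, if the goal is simply to serve Splunk Web itself on 443 instead of the default 8000, that is a web.conf change rather than a splunkd port mapping (a sketch; it assumes valid certificates are configured, and on Linux the splunkd process needs the privilege to bind a port below 1024):

```ini
# web.conf
[settings]
enableSplunkWebSSL = true
httpport = 443
```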
@bowesmana Thank you soooo much, it worked like a charm I will sure try it out. (linked list option)
Did you check the btool output? Inputs shouldn't normally be run when disabled. That's the whole point of defining disabled inputs - define them in a "ready to run" state by default but let them be enabled or disabled selectively.
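For example, something along these lines shows exactly which inputs.conf stanzas btool resolves and which file each setting comes from (the grep pattern is an assumption based on the MSCS add-on mentioned earlier; adjust it to your input names):

```
$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i mscs
```

If nothing comes back, the on-disk configuration really is gone and the errors are more likely stale state, as discussed above.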
You can't replace docs and management with tools.

[ | makeresults annotate=f
  | eval t1="ind", t2="ex", t3=t1.t2
  | eval {t3}="_internal"
  | table *
  | fields - t1 t2 t3 _time ]
| stats count by index
OK. Let's back up a little. You have a record with

TASKID=1 UPDATED=1 VALUE="A" TASKIDUPDATED="1-1"

You update the VALUE and the UPDATED field, and the TASKIDUPDATED field is updated as well, so you have

TASKID=1 UPDATED=2 VALUE="B" TASKIDUPDATED="1-2"

From Splunk's point of view it's a completely different entity, since your TASKIDUPDATED changed (even though from your database's point of view it can still be the same record). Splunk doesn't care about the state of your database. It just fetches some results from a database query.

You can - to some extent - compare it to the file monitor input. If you have a log file which Splunk is monitoring and you change some sequence of bytes in the middle of that file to a different sequence, Splunk has no way of knowing that something changed - the event which had been read from that position and ingested into Splunk stays the same. (Of course there can be issues when Splunk notices that a file has been truncated and decides to reread the whole file, or just stops reading because it decides it has reached the end of the file, but these are beside the main point.)

BTW, remember that setting a non-numeric column as your rising column may yield unpredictable results due to quirks of sorting.

EDIT warning - the previous version of this reply mistakenly used the same field name twice.
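One way to sidestep both problems at once - the duplicate-entity issue and the non-numeric rising column - is to derive a numeric rising column in the input's SQL instead of using the string TASKIDUPDATED. A hedged sketch (the TASKS table name is hypothetical, the multiplier is an assumption about how large UPDATED can grow, and "?" is the rising-column checkpoint placeholder the input substitutes on each run):

```sql
-- numeric rising key: strictly increases whenever TASKID or UPDATED grows
SELECT TASKID, UPDATED, VALUE,
       TASKID * 1000000 + UPDATED AS RISING_KEY
FROM TASKS
WHERE TASKID * 1000000 + UPDATED > ?
ORDER BY RISING_KEY ASC
```

Because RISING_KEY is numeric, the checkpoint comparison sorts predictably, unlike a string key where "1-10" sorts before "1-2".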
I have experience working with Ingest Actions. However, I was reading about Ingest Processor, which I guess is not that different from Ingest Actions in terms of functionality. Both configure data flows, control data format, apply transformation rules prior to indexing, and route data to destinations. The major difference I see is that Ingest Actions configs are present in props and transforms at the HF/indexer level, while Ingest Processor is a complete cloud solution that comes into action between the HF layer and the Splunk Cloud indexer layer and has a separate UI for configuration.
OK. I added an idea: https://ideas.splunk.com/ideas/EID-I-2471 - feel free to upvote and/or comment.
It's the MSCS and Google TAs. On the SHC, inputs.conf has been removed from both default and local, yet the error below still appears on all the members:

ERROR ModularInputs [1990877 ConfReplicationThread] - Unable to initialize modular input "mscs_storage_table" defined in the app "splunk_ta_microsoft-cloudservices": Introspecting scheme=mscs_storage_table: script running failed
You have to configure the webhook input as described in the shared docs:

Launch the Microsoft Teams Add-on for Splunk. Select Inputs > Create New Input > Teams Webhook.

Have you done it? If not, create the input first, and then: the webhook address will be available via the internal IP of the instance where you've configured the webhook, and you have to use the port that you configured during the webhook setup.

curl <internal_ip_of_your_splunk_instance>:<the_configured_port> -d '{"value": "test"}'

For an initial test you could execute the curl on the same instance where you've configured the webhook:

curl 127.0.0.1:<the_configured_port> -d '{"value": "test"}'

To make the webhook address publicly accessible there are different ways, of course, as mentioned in the documentation:

The webhook must be a publicly accessible, HTTPS-secured endpoint that is addressable via a URL. You have two options to set up the Splunk instance running the Teams add-on. You can make it publicly accessible via HTTPS. Or you can use a load balancer, reverse proxy, tunnel, etc. in front of your Splunk instance running the add-on.

The second option can be preferable if you don't want to expose the Splunk heavy forwarder to the internet, as the public traffic terminates at that demarcation and then continues on internally to the Splunk heavy forwarder.
You have to configure the webhook input as described in the shared dcos. Launch the Microsoft Teams Add-on for Splunk. Select Inputs > Create New Input > Teams Webhook. Have you done it? If not create the input first and then: The webhook address will be available via the internal ip on the instance where you've configured the webhook and you have to use the port that you've configured during the webhook setup.  curl <internal_ip_of_your_splunk_instance>:<the_configured_port> -d '{"value": "test"}' For an initial test you could execute the curl on the same instance where you've configured the webhook.  curl 127.0.0.1:<the_configured_port> -d '{"value": "test"}' To make the webhook address publicly accessible there are different ways of course as mentioned in the documentation The webhook must be a publicly accessible, HTTPS-secured endpoint that is addressable via a URL. You have two options to set up the Splunk instance running the Teams add-on. You can make it publicly accessible via HTTPS. Or you can use a load balancer, reverse proxy, tunnel, etc. in front of your Splunk instance running the add-on. The second option here can be preferable if you don't want to expose the Splunk heavy forwarder to the internet, as the public traffic terminates at that demarcation and then continues on internally to the Splunk heavy forwarder.