All Posts

Hi Splunk Experts, I've been trying to apply three conditions (plus a default case), but I'm overcomplicating it, so I'd like some input. I have a runtime search that produces three fields (Category, Data, Percent), and I join/append some data from a lookup using User. The lookup fields are multi-value and prefixed with "Lookup":

User    | Category | Data | Percent | LookupCategory | LookupData       | LookupPercent    | LookupND1        | LookupND2
User094 | 103      | 2064 | 3.44    | 101, 102, 104  | 7865, 4268, 1976 | 7.10, 3.21, 3.56 | 4.90, 2.11, 3.10 | 2.20, 1.10, 0.46
User871 | 102      | 5108 | 5.58    | 103            | 3897             | 7.31             | 5.23             | 2.08
User131 | 104      | 664  | 0.71    | 103, 104, 105  | 2287, 1576, 438  | 0.22, 0.30, 0.82 | 0.11, 0.08, 0.50 | 0.11, 0.02, 0.32
User755 | 104      | 1241 | 1.23    | 102, 104       | 4493, 975        | 0.97, 1.12       | 0.42, 1.01       | 0.55, 0.11

My conditions are as follows:

1. Use a preceding category if its Percent is greater than the current Category's. For example, in the User094 row the Category is 103, so I check which is the max(LookupPercent) between 101 and 103, and use that row if the value for 101 or 102 is greater than 103's.

2. Ignore the row if the LookupCategory has no value equal to or below the current Category. In the User871 row the Category is 102, but the lookup has only 103 and no data between 101 and 102, so ignore it.

3. If the current category's LookupPercent is less than the immediately following category's, find the absolute difference of the current Data against the LookupData of the current category and of the immediately following category, and if the immediately following one is nearer, use it. In the User131 row, LookupCategory 104's Percent (0.30) is less than 105's (0.82), so as a further step compare abs(664 - 1576) and abs(664 - 438); since abs(664 - 438) is smaller, the 105 row's data should be filtered/used.

4. Straightforward: if none of the above conditions matches, the matching LookupCategory row is used. In the User755 row, LookupCategory 104's row is used for Category 104.
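Before translating the four rules into SPL (mvzip/mvexpand plus eval logic), it can help to pin the selection logic down in ordinary code. Below is a minimal Python sketch of the rules as I read them; the function and key names (`pick_lookup_row`, `cat`, `data`, `percent`) are hypothetical stand-ins for LookupCategory, LookupData, and LookupPercent, not anything from Splunk:

```python
def pick_lookup_row(category, data, lookup_rows):
    """Select one lookup row per the four conditions above.

    lookup_rows: list of dicts with keys cat, data, percent
    (hypothetical names for LookupCategory/LookupData/LookupPercent).
    Returns the chosen row, or None when the row should be ignored.
    """
    by_cat = {r["cat"]: r for r in lookup_rows}
    current = by_cat.get(category)

    # Condition 2: no lookup category at or below the current one -> ignore.
    at_or_below = [r for r in lookup_rows if r["cat"] <= category]
    if not at_or_below:
        return None

    # Condition 1: a preceding category whose Percent is the maximum of the
    # range and beats the current category's Percent wins.
    best = max(at_or_below, key=lambda r: r["percent"])
    if best["cat"] < category and (current is None or best["percent"] > current["percent"]):
        return best

    # Condition 3: if the current category's Percent is lower than the
    # immediately following category's, keep whichever of the two has
    # LookupData nearer to the event's Data.
    after = [r for r in sorted(lookup_rows, key=lambda r: r["cat"]) if r["cat"] > category]
    if current is not None and after and current["percent"] < after[0]["percent"]:
        nxt = after[0]
        if abs(data - nxt["data"]) < abs(data - current["data"]):
            return nxt
        return current

    # Condition 4: default to the row matching the current category.
    return current
```

Run against the four sample users, this picks 101 for User094, ignores User871, picks 105 for User131, and picks 104 for User755, which matches the expected outcomes described above.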
Hi @yuanliu, I am working on a dashboard in Splunk and need help implementing specific filtering requirements. I have a table with the following fields: message (contains log details) and component (indicates the source component). My requirements are: 1. Add a multiselect dropdown to filter the component field. 2. Add a textbox input to filter the message field using comma-separated keywords. For example, if the textbox contains "error, timeout", it should filter to rows where the message field contains "error" or "timeout"; if both are present, show both rows. Any suggestions or examples are greatly appreciated. Thank you.
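In SPL this kind of textbox filter usually ends up as a regex alternation built from the token (e.g. replacing commas with "|" and feeding the result to match()). As a language-neutral sketch of the matching behavior the post asks for, with a hypothetical function name, the comma-separated keyword OR-filter looks like:

```python
import re

def message_matches(message, keywords_csv):
    """True if the message contains any of the comma-separated keywords
    (case-insensitive). This mirrors building "error|timeout" from a
    textbox token and applying it as an OR filter on the message field."""
    keywords = [k.strip() for k in keywords_csv.split(",") if k.strip()]
    if not keywords:
        return True  # empty textbox -> show everything
    pattern = "|".join(re.escape(k) for k in keywords)
    return re.search(pattern, message, re.IGNORECASE) is not None
```

With "error, timeout" in the textbox, a row whose message contains either word is kept, and rows containing both are naturally kept as well, since the alternation only needs one hit.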
Wait, you're seeing those errors in events in _internal? Not just in the web UI? That's unexpected.
I'm afraid to say I removed the inputs last week, but I can still see errors in the last 15 minutes.
Ahhhh... one more thing. I think the error can persist from before you disabled/deleted the inputs. AFAIR I had similar issues with VMware vCenter inputs. Until the events rolled off the _internal index, the error persisted within the WebUI.
I have found this info so far: https://splunk.my.site.com/customer/s/article/The-code-execution-cannot-proceed-because-LIBEAY32-dll-was-not-found-Reinstalling-the-program-may-fix-this-problem
There is no sign of the MSCS inputs when I output the content from "splunk show config inputs", yet the error is still present. This is very strange.
Hello, my index configuration is provided below, but I have a question regarding frozenTimePeriodInSecs = 7776000. I have configured Splunk to move data to frozen storage after 7,776,000 seconds (3 months). Once data reaches the frozen state, how can I control the frozen storage if the frozen disk becomes full? How does Splunk handle frozen storage in such scenarios?

[custom_index]
repFactor = auto
homePath = volume:hot/$_index_name/db
coldPath = volume:cold/$_index_name/colddb
thawedPath = /opt/thawed/$_index_name/thaweddb
homePath.maxDataSizeMB = 1664000
coldPath.maxDataSizeMB = 1664000
maxWarmDBCount = 200
frozenTimePeriodInSecs = 7776000
maxDataSize = auto_high_volume
coldToFrozenDir = /opt/frozen/custom_index/frozendb
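For context: Splunk itself does not manage or expire anything under coldToFrozenDir; once a bucket has been frozen, disk space there is the administrator's responsibility (typically a cron job or similar). A minimal sketch of such an external pruning job, with hypothetical paths and size budget, that deletes the oldest frozen bucket directories when the directory exceeds a byte budget:

```python
import os
import shutil

def prune_frozen(frozen_dir, max_bytes):
    """Delete oldest frozen bucket directories (by mtime) until the
    total size is at or under max_bytes. Splunk does not do this on
    its own; coldToFrozenDir contents must be managed externally.
    Returns the list of deleted bucket paths."""
    buckets = []
    for name in os.listdir(frozen_dir):
        path = os.path.join(frozen_dir, name)
        if not os.path.isdir(path):
            continue
        size = sum(
            os.path.getsize(os.path.join(root, f))
            for root, _, files in os.walk(path) for f in files
        )
        buckets.append((os.path.getmtime(path), size, path))

    total = sum(size for _, size, _ in buckets)
    removed = []
    for _, size, path in sorted(buckets):  # oldest mtime first
        if total <= max_bytes:
            break
        shutil.rmtree(path)
        total -= size
        removed.append(path)
    return removed
```

A safer variant would archive the oldest buckets to cheaper storage instead of deleting them; the key point is simply that something outside Splunk has to do it.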
That's even more interesting, because if there is no input defined (not even a disabled one), nothing should be started. Maybe your settings were not applied. Check the output of "splunk show config inputs" to see the contents of Splunk's in-memory "running config".
Have you installed AWS TA and Splunk Add-on for Amazon Kinesis Firehose for parsing? Document
The MSCS TA uses a service principal for authentication. Please review the document below to configure the TA and connect with the same - https://splunk.github.io/splunk-add-on-for-microsoft-cloud-services/ConfigureappinAzureAD/
Yes, I have already checked the btool output. Nothing shows up when I run the command as the inputs.conf are removed.
Let me ask you first, why would you want to map your 8089 splunkd port to 443? 443 is for webUI (if enabled and redirected from the default 8000). 8089 is the port your API is expected to be at.
@bowesmana Thank you soooo much, it worked like a charm. I will surely try out the linked list option.
Did you check the btool output? Inputs shouldn't normally be run when disabled. That's the whole point of defining disabled inputs - ship them in a "ready to run" state by default, but let them be enabled or disabled selectively.
You can't replace docs and management with tools.

[ | makeresults annotate=f
  | eval t1="ind", t2="ex", t3=t1.t2
  | eval {t3}="_internal"
  | table *
  | fields - t1 t2 t3 _time ]
| stats count by index
OK. Let's back up a little. You have a record with

TASKID=1 UPDATED=1 VALUE="A" TASKIDUPDATED="1-1"

You update the VALUE and the UPDATED field, and the TASKIDUPDATED field is updated as well, so you have

TASKID=1 UPDATED=2 VALUE="B" TASKIDUPDATED="1-2"

From Splunk's point of view it's a completely different entity, since your TASKIDUPDATED changed (even though from your database's point of view it can still be the same record). Splunk doesn't care about the state of your database. It just fetches some results from a database query.

You can - to some extent - compare it to the file monitor input. If you have a log file which Splunk is monitoring and you change some sequence of bytes in the middle of that file to a different sequence, Splunk has no way of knowing that something changed - the event which had been read from that position and ingested into Splunk stays the same. (Of course there can be issues when Splunk notices that the file has been truncated and decides to reread the whole file, or just stops reading from the file because it decides it reached the end of the file, but these are beside the main point.)

BTW, remember that setting a non-numeric column to be your rising column may yield unpredictable results due to quirks of sorting.

EDIT warning - the previous version of this reply mistakenly used the same field name twice.
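As a quick illustration of the sorting quirk mentioned above with a non-numeric rising column: text values sort lexicographically, not numerically, so a text checkpoint value can make later rows look "older" than the checkpoint.

```python
# A text rising column sorts lexicographically, not numerically,
# so with a checkpoint of "9" a new row with id "10" would appear
# to be below the checkpoint and get skipped.
ids = ["8", "9", "10", "11"]
assert sorted(ids) == ["10", "11", "8", "9"]            # lexicographic order
assert sorted(ids, key=int) == ["8", "9", "10", "11"]   # the numeric intent
```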
I have experience working with Ingest Actions. However, I was reading about Ingest Processor, which I guess is not that different from Ingest Actions in terms of functionality. Both configure data flows, control data format, apply transformation rules prior to indexing, and route to destinations. The major difference I see is that Ingest Actions configs live in props and transforms at the HF/indexer level, while Ingest Processor is a complete cloud solution that sits between the HF layer and the Splunk Cloud indexer layer and has a separate UI for configuration.
OK. I added an idea https://ideas.splunk.com/ideas/EID-I-2471 Feel free to upvote and/or comment
It's the MSCS and Google TA. On the SHC, the inputs.conf files are removed from both default and local, yet the error below still appears on all the members:

ERROR ModularInputs [1990877 ConfReplicationThread] - Unable to initialize modular input "mscs_storage_table" defined in the app "splunk_ta_microsoft-cloudservices": Introspecting scheme=mscs_storage_table: script running failed