All Posts

Hello, I have a Heavy Forwarder that is configured only to forward, not to index:

[indexAndForward]
index = false

I tried to install the DB Connect app on that HF, but we got the error below. Any ideas?
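For context, the forward-only setup described above normally lives in outputs.conf on the HF, roughly like this sketch (the indexer names and group name are placeholders, not from the post):

[indexAndForward]
index = false

[tcpout]
defaultGroup = primary_indexers

[tcpout:primary_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
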
The most important thing about writing an external lookup is here: https://dev.splunk.com/enterprise/docs/devtools/externallookups/createexternallookup:

"For each row in the input CSV table, populate the missing values. Then, to return this data to your search results, write each row in the output CSV table to the STDOUT output stream."

In other words, the external lookup script gets CSV-formatted data on input, fills the gaps by whatever means necessary, and returns CSV on output, from which splunkd performs the normal lookup process. So:

1. Just like with any lookup, the fields you specify in fields_list in transforms.conf must match the fields you use in the lookup command in SPL. If they don't, you have to use the AS clause.

2. The fields in fields_list must be properly processed and returned by the lookup script.

The explicit field names in the example external lookup are in fact not strictly necessary:

external_cmd = external_lookup.py clienthost clientip

In this case the "clienthost clientip" part is just a list of parameters accepted by the external_lookup.py script, because someone wrote the script itself to accept dynamically specified column names. If those were hardcoded at the script level (always processing the "clienthost" and "clientip" columns from the input CSV stream), you could define it simply as

external_cmd = external_lookup.py

So the minimal version of a working external lookup that returns the length of a field called "data" should look (with one caveat explained later) like this:

transforms.conf:

[test_lenlookup]
external_cmd = lenlookup.py
fields_list = data, length
python.version = python3

And the lenlookup.py file itself:

#!/usr/bin/env python3
import csv
import sys

def main():
    infile = sys.stdin
    outfile = sys.stdout
    r = csv.DictReader(infile)
    w = csv.DictWriter(outfile, fieldnames=["data", "length"])
    w.writeheader()
    for result in r:
        if result["data"]:
            result["length"] = len(result["data"])
        w.writerow(result)

main()

Yes, it doesn't do any sanity checking or error handling, but it does work for something like

| makeresults
| eval data="whatever"
| lookup test_lenlookup data

Of course this just uses the simple Python len() function, which in your case might or might not be what you need, so you may have to rewrite the core functionality on your own.

One important caveat. Even though the spec file for transforms.conf says

external_cmd = <string>
* Provides the command and arguments to invoke to perform a lookup. Use this
  for external (or "scripted") lookups, where you interface with an external
  script rather than a lookup table.
* This string is parsed like a shell command.
* The first argument is expected to be a python script (or executable file)
  located in $SPLUNK_HOME/etc/apps/<app_name>/bin.
* Presence of this field indicates that the lookup is external and command based.
* Default: empty string

I was unable to run my external lookup when the script was placed anywhere other than $SPLUNK_HOME/etc/system/bin. Judging from the Answers history, it seems to be some kind of a bug.

EDIT: OK, I found it. It seems that for an external lookup to work you must give permissions to both the lookup definition (which you may as well do in the WebUI) and the script file itself (which you must do using the .meta file). So in this case you need something like this:

[bin/lenlookup.py]
access = read : [ * ]
export = system

[transforms/test_lenlookup]
access = read : [ * ]
export = system
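One more usage note on the AS clause from point 1 above (the "payload" field name is just an example, not from the original): if your events carry the text in a field called payload instead of data, you can still call the same lookup like this:

| makeresults
| eval payload="whatever"
| lookup test_lenlookup data AS payload OUTPUT length
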
Hi Splunk Experts,

I've been trying to apply three conditions, but I'm overcomplicating it a bit, so I would like some input. I have a runtime search which produces three fields (Category, Data, Percent), and I join/append some data from a lookup using User. The lookup has multivalue fields which are prefixed with "Lookup".

User     Category  Data  Percent  LookupCategory  LookupData      LookupPercent   LookupND1       LookupND2
User094  103       2064  3.44     101,102,104     7865,4268,1976  7.10,3.21,3.56  4.90,2.11,3.10  2.20,1.10,0.46
User871  102       5108  5.58     103             3897            7.31            5.23            2.08
User131  104       664   0.71     103,104,105     2287,1576,438   0.22,0.30,0.82  0.11,0.08,0.50  0.11,0.02,0.32
User755  104       1241  1.23     102,104         4493,975        0.97,1.12       0.42,1.01       0.55,0.11

My conditions are as follows:

1. Use the precedence category if it is greater than the current Category. For example, in the dataset below the Category is 103, so I have to check which is the max(LookupPercent) among LookupCategory 101 to 103 and use it if the value for 101 or 102 is greater than the one for 103.

User094  103  2064  3.44  101,102,104  7865,4268,1976  7.10,3.21,3.56  4.90,2.11,3.10  2.20,1.10,0.46

2. Ignore the user if the lookup has no LookupCategory at or below the current Category. In the case below the Category is 102, but the lookup has only 103 and no data between 101 and 102, so ignore it.

User871  102  5108  5.58  103  3897  7.31  5.23  2.08

3. If the LookupPercent for the lookup's current Category is less than that of the immediately following category, then find the absolute difference between the current Data and the LookupData of both the current category and the immediately following category, and if the immediately following one is nearer, use the immediately following category. In the row below, LookupCategory 104's Percent 0.30 is less than 105's 0.82. So as a further step compare abs(664 - 1576) and abs(664 - 438); since abs(664 - 438) is less than abs(664 - 1576), the row data for 105 should be used.

User131  104  664  0.71  103,104,105  2287,1576,438  0.22,0.30,0.82  0.11,0.08,0.50  0.11,0.02,0.32

4. Straightforward: if none of the above conditions matches, the row for the same LookupCategory 104 should be used for Category 104.

User755  104  1241  1.23  102,104  4493,975  0.97,1.12  0.42,1.01  0.55,0.11
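Not a full solution, but a minimal sketch of the usual first step for this kind of multivalue comparison, assuming the field names from the table above: zip and expand the Lookup* fields into one row per lookup entry, then apply the conditions per row. The last two lines illustrate condition 2 (dropping users whose lookup has no category at or below the current one):

| eval zipped=mvzip(mvzip(LookupCategory, LookupData, "|"), LookupPercent, "|")
| mvexpand zipped
| eval parts=split(zipped, "|")
| eval LookupCategory=tonumber(mvindex(parts, 0)), LookupData=tonumber(mvindex(parts, 1)), LookupPercent=tonumber(mvindex(parts, 2))
| eventstats max(eval(if(LookupCategory <= tonumber(Category), 1, 0))) AS has_at_or_below BY User
| where has_at_or_below=1

LookupND1 and LookupND2 can be folded into the same mvzip chain if they are needed for conditions 3 and 4.
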
Hi @yuanliu,

I am working on a dashboard in Splunk and need help implementing specific filtering requirements. I have a table with the following fields:

message (contains log details)
component (indicates the source component)

My requirements are:

1. Add a multiselect dropdown to filter the component field.
2. Add a textbox input to filter the message field using comma-separated keywords. For example, if the textbox contains "error, timeout", it should filter rows where the message field contains "error" or "timeout"; if both are present, we need to show both.

Any suggestions or examples are greatly appreciated. Thank you.
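One possible approach, as a sketch only - the token names ($component_tok$, $keyword_tok$) and the multiselect settings are assumptions, not from the post. Configure the multiselect so its token expands to a quoted, comma-separated list (valuePrefix and valueSuffix of a double quote, delimiter of ", "), then the panel search can do:

<your base search> component IN ($component_tok$)
| where match(message, replace("$keyword_tok$", "\s*,\s*", "|"))

The replace() turns a textbox value like "error, timeout" into the regex error|timeout, so match() keeps rows containing either keyword (and rows containing both). Keyword text containing regex special characters would need extra escaping.
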
Wait, you're seeing those errors in events in _internal? Not just in the WebUI? That's unexpected.
I'm afraid to say I removed the inputs last week, but I can still see errors in the last 15 minutes.
Ahhhh... one more thing. I think the error can persist from before you disabled/deleted the inputs. AFAIR I had similar issues with VMware vCenter inputs. Until the events rolled off the _internal index, the error persisted within the WebUI.
I have found this info so far: https://splunk.my.site.com/customer/s/article/The-code-execution-cannot-proceed-because-LIBEAY32-dll-was-not-found-Reinstalling-the-program-may-fix-this-problem
There is no sign of the MSCS inputs when I output the content from "splunk show config inputs", yet the error is still present. This is very strange.
Hello,

My index configuration is provided below, but I have a question regarding frozenTimePeriodInSecs = 7776000. I have configured Splunk to move data to frozen storage after 7,776,000 seconds (3 months). Once data reaches the frozen state, how can I control the frozen storage if the frozen disk becomes full? How does Splunk handle frozen storage in such scenarios?

[custom_index]
repFactor = auto
homePath = volume:hot/$_index_name/db
coldPath = volume:cold/$_index_name/colddb
thawedPath = /opt/thawed/$_index_name/thaweddb
homePath.maxDataSizeMB = 1664000
coldPath.maxDataSizeMB = 1664000
maxWarmDBCount = 200
frozenTimePeriodInSecs = 7776000
maxDataSize = auto_high_volume
coldToFrozenDir = /opt/frozen/custom_index/frozendb
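A side note, not from the original post: once buckets are copied to coldToFrozenDir, Splunk no longer manages them, so keeping the frozen disk from filling up (deleting or archiving old frozen buckets) is up to you or an external job. One way to take control at freeze time is to use a coldToFrozenScript instead of coldToFrozenDir, so your own script can archive or prune as needed. A sketch, with a hypothetical app and script path:

[custom_index]
# use either coldToFrozenDir or coldToFrozenScript, not both;
# Splunk passes the bucket path to the script, which decides what to do with it
coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/etc/apps/my_frozen_app/bin/archive_bucket.py"
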
That's even more interesting, because if there is no input defined (not even a disabled one), nothing should be started. Maybe your settings were not applied. Check the output of "splunk show config inputs" to see the contents of Splunk's in-memory running config.
Have you installed the AWS TA and the Splunk Add-on for Amazon Kinesis Firehose for parsing? Document
The MSCS TA uses a service principal for authentication. Please review the document below to configure one and connect the TA with it: https://splunk.github.io/splunk-add-on-for-microsoft-cloud-services/ConfigureappinAzureAD/
Yes, I have already checked the btool output. Nothing shows up when I run the command, since the inputs.conf entries have been removed.
Let me ask you first: why would you want to map your 8089 splunkd port to 443? 443 is for the WebUI (if enabled and redirected from the default 8000). 8089 is the port your API is expected to be on.
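If the underlying goal is simply to serve the WebUI over HTTPS on 443 (an assumption about intent, not something stated above), that is a web.conf change rather than a remapping of splunkd's 8089. A sketch:

[settings]
enableSplunkWebSSL = true
httpport = 443

Note that binding to a port below 1024 typically requires elevated privileges on Linux.
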
@bowesmana Thank you soooo much, it worked like a charm. I will surely try it out (the linked list option).
Did you check the btool output? Inputs shouldn't normally be run when disabled. That's the whole point of defining disabled inputs - define them in a "ready to run" state by default, but let them be enabled or disabled selectively.
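For reference, the btool check mentioned above can be as simple as this (the grep filter is just an example):

$SPLUNK_HOME/bin/splunk btool inputs list --debug | grep -i mscs
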
You can't replace docs and management with tools.

[ | makeresults annotate=f
  | eval t1="ind", t2="ex", t3=t1.t2
  | eval {t3}="_internal"
  | table *
  | fields - t1 t2 t3 _time ]
| stats count by index
OK. Let's back up a little. You have a record with

TASKID=1 UPDATED=1 VALUE="A" TASKIDUPDATED="1-1"

You update the VALUE and the UPDATED field, and the TASKIDUPDATED field is updated as well, so you have

TASKID=1 UPDATED=2 VALUE="B" TASKIDUPDATED="1-2"

From Splunk's point of view it's a completely different entity, since your TASKIDUPDATED changed (even though from your database's point of view it can still be the same record). Splunk doesn't care about the state of your database. It just fetches some results from a database query.

You can - to some extent - compare it to the file monitor input. If you have a log file which Splunk is monitoring and you change some sequence of bytes in the middle of that file to a different sequence, Splunk has no way of knowing that something changed - the event which had been read from that position and ingested into Splunk stays the same. (Of course there can be issues when Splunk notices that the file has been truncated and decides to reread the whole file, or just stops reading from it because it decides it has reached the end of the file, but these are beside the main point.)

BTW, remember that setting a non-numeric column as your rising column may yield unpredictable results due to quirks of sorting.

EDIT warning - the previous version of this reply mistakenly used the same field name twice.
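A quick illustration of that last point about sort quirks (made-up values, runs anywhere): lexicographically "1-10" sorts between "1-1" and "1-2", which is exactly the kind of ordering a string rising column can trip over:

| makeresults
| eval v=split("1-1,1-2,1-10", ",")
| mvexpand v
| sort v
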
I have experience working with Ingest Actions. However, I was reading about Ingest Processor, which I gather is not that different from Ingest Actions in terms of functionality. Both configure data flows, control data format, apply transformation rules prior to indexing, and route data to destinations. The major difference I see is that Ingest Actions configurations live in props and transforms at the HF/indexer level, while Ingest Processor is a fully cloud-based solution that sits between the HF layer and the Splunk Cloud indexer layer and has its own UI for configuration.
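To make the "props & transforms" part concrete, an index-time filtering rule of the kind Ingest Actions manages on the HF/indexer looks roughly like the hand-written sketch below (the stanza names and the DEBUG condition are invented for illustration; INGEST_EVAL is the index-time eval mechanism such rules rely on):

props.conf:

[my_sourcetype]
TRANSFORMS-filter_debug = drop_debug_events

transforms.conf:

[drop_debug_events]
INGEST_EVAL = queue=if(match(_raw, "DEBUG"), "nullQueue", queue)
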