All Posts
The universal forwarder 9.0.9 currently included in SOAR 6.2.2 is being flagged for an OpenSSL vulnerability. Does anyone know which version of the UF is packaged in the SOAR 6.3.1 release?
Why is that? Serious question. I've never tried to do so but it shouldn't need to index anything locally.
OK, that's way too much logic for me to follow on a Monday morning before I've even had coffee. I would split the fields into unique multivalue options, then start evaluating a new field based on your logic flow. Anything with a TRUE outcome can be your final result. Something along the lines of the sketch below.
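A minimal sketch of that idea, assuming a hypothetical single delimited field called options and a made-up logic flow (none of these names come from the thread):

| makeresults
| eval options="red,blue,blue,green"
| eval options=mvdedup(split(options, ","))
| eval outcome=if(isnotnull(mvfind(options, "^red$")) AND mvcount(options) > 2, "TRUE", "FALSE")
| where outcome="TRUE"

mvdedup(split(...)) gives you the unique multivalue options; the eval/where pair keeps only the rows where the logic flow ends in TRUE.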
Can you provide an anonymized sample of what this search displays and an example record of what you want the final output to be?
Contact methods for US/CAN: https://www.splunk.com/en_us/about-splunk/contact-us.html?locale=en_us
Non-US/CAN locations: https://www.splunk.com/en_us/about-splunk/contact-us.html?locale=en_us#customer-support
Hi @Sabahat - apologies for the very late response. I hope this has already been resolved, but if not: this is visible for the sc_admin role; I am not sure about Power User. Thank you.
That was just a friendly reminder that while "tools" like yours can find some typical cases, there may be many cases they miss. As long as you are aware of that and use the tool only as a quick aid, that's fine and dandy. But there are often questions around here along the lines of "how to find all XXX defined/used by ...", for which the usual answer is: there is no 100% reliable way to do so.
What do you mean by that? I didn't mean to ask my question in a way that implied I want to replace docs and management with tools.
I believe so, but I've never tested it and I don't have a dev environment to verify. You can try creating an unnamed capture group inside your regex; then, in the FORMAT setting, replace <new-value> with "$1". See the sketch below.
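An untested sketch of what that could look like; the stanza, regex, and field names below are made up for illustration:

# transforms.conf (hypothetical stanza - adjust REGEX to your data)
[extract_status_code]
REGEX = status=(\d+)
FORMAT = status_code::$1

# props.conf (hypothetical sourcetype)
[my_sourcetype]
REPORT-status = extract_status_code

The unnamed capture group (\d+) is what $1 refers to in FORMAT.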
Same here: I'm using my business email and no activation email arrives. I also can't create support tickets; that's really not a great way to welcome new customers.
I'm not even able to open a support ticket, as there's a required field I can't fill in. I tried with both Gmail and my company account; there's no email/domain filtering on our domain.
Dear Splunkers, I'm running version 9.3.1 and would like to perform a search that identifies the most common hours at which trucks have been visiting my site location. My search query is the following:

| addinfo
| eval _time = strptime(Start_time,"%m/%d/%Y %H:%M")
| addinfo
| where _time>=info_min_time AND (_time<=info_max_time OR info_max_time="+Infinity")
| search Plate!=0
| search Location="*"
| timechart span=1h count by Plate limit=50

Like this I'm able to see trucks visiting the location over time in one-hour spans. How do I continue from here to display the most common hours during which my trucks visit locations? Thank you
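Not from the thread, but one possible sketch: instead of timechart, bucket by hour of day with strftime and count, reusing the same Start_time parsing as above:

| eval _time = strptime(Start_time, "%m/%d/%Y %H:%M")
| search Plate!=0 Location="*"
| eval hour_of_day = strftime(_time, "%H")
| stats count AS visits BY hour_of_day
| sort - visits

This collapses all days together, so the top rows are the hours of the day with the most truck visits.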
Splunk does _not_ handle frozen storage. It's up to you. As soon as splunkd pushes a bucket out to frozen, it loses all interest in the further well-being of that bucket and/or the storage it's on.
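For illustration, a sketch of the two indexes.conf settings that hand buckets off to frozen (the index name and paths are placeholders):

# indexes.conf (hypothetical index)
[my_index]
# copy frozen buckets to this path; managing the space there is your job
coldToFrozenDir = /mnt/frozen/my_index
# ...or run your own archiving script instead of coldToFrozenDir:
# coldToFrozenScript = "$SPLUNK_HOME/bin/python" "$SPLUNK_HOME/etc/apps/my_app/bin/archive.py"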
That means exactly what it says: you have some searches defined (most probably because you distribute the same apps to several different kinds of Splunk components) which normally should run as scheduled searches, but will not, because you're using a forwarder license.
That is very strange. I'd try restarting splunkd, and if the problem persists I'd raise a support case, because a non-existent input should definitely _not_ run.
Hi @AliMaher, which kind of license do you have on your HF? To use DB Connect, even without local indexing, you cannot use the Forwarder License; you must configure the HF as a license client, connecting it to the License Master. Ciao. Giuseppe
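For reference, a minimal sketch of the license-client setup in server.conf (the hostname is a placeholder; on older versions the setting is master_uri rather than manager_uri):

# server.conf on the heavy forwarder
[license]
manager_uri = https://license-manager.example.com:8089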
I see those errors in both the web UI and _internal.
Hello, I have a Heavy Forwarder, and it was configured just to forward, not index:

[indexAndForward]
index = false

I tried to install the DB Connect app on that HF, but we faced the below ERROR:

Any ideas?
The most important thing about writing an external lookup is here https://dev.splunk.com/enterprise/docs/devtools/externallookups/createexternallookup:

For each row in the input CSV table, populate the missing values. Then, to return this data to your search results, write each row in the output CSV table to the STDOUT output stream

In other words, the external lookup script gets CSV-formatted data on input, fills the gaps by whatever means necessary, and returns CSV on output, from which splunkd performs the "normal" lookup process. So:

1. Just like with any lookup, the fields you specify in fields_list in transforms.conf must match the fields you use in the lookup command in SPL. If they don't, you have to use the AS clause.

2. The fields in fields_list must be properly processed and returned by the lookup script.

The explicit field names in the documented example external lookup are in fact not strictly necessary:

external_cmd = external_lookup.py clienthost clientip

In this case the "clienthost clientip" part is just a list of parameters accepted by the external_lookup.py script, because someone wrote the script itself to accept dynamically specified column names. If those were hardcoded at the script level (always processing the "clienthost" and "clientip" columns from the input CSV stream), you could define it simply as

external_cmd = external_lookup.py

So the minimal version of a working external lookup returning the length of a field called "data" should look (with one caveat explained later) like this:

transforms.conf:

[test_lenlookup]
external_cmd = lenlookup.py
fields_list = data, length
python.version = python3

And the lenlookup.py file itself:

#!/usr/bin/env python3
import csv
import sys

def main():
    infile = sys.stdin
    outfile = sys.stdout
    # read the CSV splunkd passes in, fill the "length" column, write every row back out
    r = csv.DictReader(infile)
    w = csv.DictWriter(outfile, fieldnames=["data", "length"])
    w.writeheader()
    for result in r:
        if result["data"]:
            result["length"] = len(result["data"])
        w.writerow(result)

main()

Yes, it doesn't do any sanity checking or error handling, but it does work for something like

| makeresults
| eval data="whatever"
| lookup test_lenlookup data

Of course this is the simple Python len() function, which in your case might or might not be what you need, so you may have to rewrite the core functionality on your own.

One important caveat. Even though the spec file for transforms.conf says

external_cmd = <string>
* Provides the command and arguments to invoke to perform a lookup. Use this for external (or "scripted") lookups, where you interface with an external script rather than a lookup table.
* This string is parsed like a shell command.
* The first argument is expected to be a python script (or executable file) located in $SPLUNK_HOME/etc/apps/<app_name>/bin.
* Presence of this field indicates that the lookup is external and command based.
* Default: empty string

I was unable to run my external lookup when the script was placed anywhere other than $SPLUNK_HOME/etc/system/bin. Judging from Answers history, it seems to be some kind of a bug.

EDIT: OK, I found it. It seems that for an external lookup to work you must give permissions both to the lookup definition (which you may as well do in the web UI) and to the script file itself (which you must do using the .meta file). So in this case you need something like this:

[bin/lenlookup.py]
access = read : [ * ]
export = system

[transforms/test_lenlookup]
access = read : [ * ]
export = system
Hi Splunk Experts, I've been trying to apply a set of conditions, but I'm making it a bit complicated, so I would like some input. I have a runtime search which produces three fields (Category, Data, Percent), and I join/append some data from a lookup using User. The lookup has multivalue fields, which are prefixed with "Lookup":

User     Category  Data  Percent  LookupCategory  LookupData  LookupPercent  LookupND1  LookupND2
User094  103       2064  3.44     101             7865        7.10           4.90       2.20
                                  102             4268        3.21           2.11       1.10
                                  104             1976        3.56           3.10       0.46
User871  102       5108  5.58     103             3897        7.31           5.23       2.08
User131  104       664   0.71     103             2287        0.22           0.11       0.11
                                  104             1576        0.30           0.08       0.02
                                  105             438         0.82           0.50       0.32
User755  104       1241  1.23     102             4493        0.97           0.42       0.55
                                  104             975         1.12           1.01       0.11

My conditions are as follows:

1. Use the precedence Category if it's greater than the current Category. For example, in the User094 row above, the Category is 103; I have to check which is the max(LookupPercent) among 101 to 103, and use it if the value for 101 or 102 is greater than that for 103.

2. Ignore the row if the LookupCategory has no category value equal to or preceding the current Category. In the User871 row, the Category is 102, but the lookup has only 103 and no data between 101 and 102, so ignore it.

3. If the lookup's current Category Percent is less than that of the immediately following category, then find the absolute difference of the current Category's Data against both the matching lookup Category and the immediately following one, and if the immediately following one is nearer, use it. In the User131 row, LookupCategory 104's Percent (0.30) is less than 105's (0.82). So, as a further step, compare abs(664 - 1576) and abs(664 - 438); since abs(664 - 438) is smaller, the 105 row's data should be filtered/used.

4. Straightforward: if none of the above conditions match, the row with the same LookupCategory should be used. In the User755 row, LookupCategory 104's row should be used for Category 104.