All Posts

Are you using selfjoin or join? Either way, selfjoin is not the right command, and join is also not the way to do things in Splunk, as it has limitations. However, your SPL indicates your two data sets have:

index=acn_ac_snow_ticket_idx - INC (Ticket_Number), uid, log_description, source_host
index=acn_lendlease_certificate_tier3_idx - tower, metric_value, alert_value, add_info, instance, source_host

and you are trying to join these two on source_host.
index=acn_ac_snow_ticket_idx code_message=create uid="*Saml : Days to expire*" OR uid="*Self_Signed : Days to expire*" OR uid="*CA : Days to expire*" OR uid="*Entrust : Days to expire*"
| rex field=_raw "\"(?<INC>INC\d+),"
| rex field=uid "(?i)^(?P<source_host>.+?)__"
| table INC uid log_description source_host
| dedup INC uid log_description source_host
| rename INC as "Ticket_Number"
| selfjoin source_host
    [ search index=acn_lendlease_certificate_tier3_idx tower=* | table * ]
| stats latest(tower) as Tower, latest(source_host) as source_host, latest(metric_value) as "Days To Expire", latest(alert_value) as alert_value, latest(add_info) as "Additional Info" by instance, Ticket_Number
| eval alert_value=case(alert_value==100,"Active", alert_value==300,"About to Expire", alert_value==500,"Expired")
| where alert_value="Active"
| search Tower="*" AND alert_value="*"
| sort "Days To Expire"
| rename instance as "Serial Number / Server ID", Tower as "Certificate Type", source_host as Certificate, alert_value as "Certificate Status"

I am trying to map the incident number to source_host using the join command, but it's not working as expected.
What do you have in your real search before the eventstats? eventstats will push all the data to the search head, including _raw, so unless you use a fields statement you will be sending all the event data to the SH. You are also doing lots of multivalue splits, which is going to be pretty memory hungry on the SH. Building a tree is a tricky thing in Splunk, but if your network paths do not change often, it may be possible to create a lookup so that for 'Server-A' you can look up its network and discover the behind-firewall state. What is the depth of the tree in your case? Your example is 3-tier, going from server via the LB. If it's only 3-tier, then you could perhaps build your pathways just by fetching the name="LoadBalancer" objects and using stats values() rather than eventstats to create the lookup, as at that point you don't care about the IPs.
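To illustrate that lookup-building idea, here is a rough Python sketch (not SPL). The sample rows and the "shares a network with a device on a Firewall-to-Loadbalancer network" rule are invented for illustration:

```python
from collections import defaultdict

# Invented sample rows: (name, network)
rows = [
    ("LoadBalancer-A", "Loadbalancer-to-Server"),
    ("LoadBalancer-A", "Firewall-to-Loadbalancer"),
    ("server-A", "Loadbalancer-to-Server"),
    ("server-C", "network-1"),
]

# Networks seen per device, like stats values(network) by name
networks_by_name = defaultdict(set)
for name, network in rows:
    networks_by_name[name].add(network)

# Any network attached to a device that also sits on a
# Firewall-to-Loadbalancer network is "behind the firewall".
firewalled_networks = set()
for name, nets in networks_by_name.items():
    if "Firewall-to-Loadbalancer" in nets:
        firewalled_networks |= nets

# The resulting lookup: device name -> behindfirewall flag
lookup = {
    name: ("yes" if nets & firewalled_networks else "no")
    for name, nets in networks_by_name.items()
}
print(lookup)
```

Because the lookup is built once from the load balancer objects, the per-event work at search time becomes a simple key lookup instead of repeated eventstats passes.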
The values() statement requires an eval, i.e.
| eventstats values(eval(if(match(name,"student-1"), name, null()))) as student by grade
mvfilter only takes a single field: https://docs.splunk.com/Documentation/Splunk/9.2.0/SearchReference/MultivalueEvalFunctions#mvfilter.28.26lt.3Bpredicate.26gt.3B.29

Use mvmap instead:

| makeresults
| eval fullcode="code-abc-1111,code-abc-2222,code-xyz-1111,code-xyz-222"
| eval partialcode="code-abc"
| makemv delim="," fullcode
| eval fullcode2=mvmap(fullcode, if(match(fullcode,partialcode), fullcode, null()))
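As a cross-check of the logic outside Splunk, here is a minimal Python analogue of that mvmap filter (a sketch, not SPL; the variable names mirror the example above):

```python
import re

# Mirrors makemv delim="," fullcode
fullcode = "code-abc-1111,code-abc-2222,code-xyz-1111,code-xyz-222".split(",")
partialcode = "code-abc"

# Mirrors mvmap(fullcode, if(match(fullcode, partialcode), fullcode, null())):
# keep each multivalue entry only when the pattern held in partialcode matches it.
fullcode2 = [v for v in fullcode if re.search(partialcode, v)]
print(fullcode2)  # ['code-abc-1111', 'code-abc-2222']
```

The key point is the same in both languages: the predicate references two values per entry (the entry itself and the pattern field), which is why mvfilter, whose predicate may reference only one field, rejects it.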
That's not a valid rex sed statement; use this example:

| makeresults
| eval ip=split("010.1.2.3,10.013.2.3",",")
| mvexpand ip
| rex field=ip mode=sed "s/\b0+\B//"
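If it helps to verify the regex itself outside Splunk, here is a minimal Python sketch of the same \b0+\B substitution (the first two sample addresses come from the example above; the lone-zero case is added for illustration):

```python
import re

def strip_leading_zeros(ip):
    # \b0+\B removes a run of zeros that starts an octet and is followed
    # by another digit; a lone "0" octet survives because \B fails there.
    return re.sub(r"\b0+\B", "", ip)

print(strip_leading_zeros("010.1.2.3"))   # 10.1.2.3
print(strip_leading_zeros("10.013.2.3"))  # 10.13.2.3
print(strip_leading_zeros("0.0.10.0"))    # 0.0.10.0
```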
Hello All, I'm trying to remove leading zeros in IP addresses using rex with mode=sed. The regular expression I'm trying to use for the substitution is "\b0+\B". However, it's not returning the required output. Example:

| rex field=<IP address field> mode=sed "\b0+\B"

I even tried with double backslashes, but no luck. Kindly assist to resolve this issue. Regards, Sid
Several problems with this search.

Success is 0. You can verify this by removing everything after the first stats. (I will analyze this later.) You should not do stats by Percent_Failure, because you will always get 1.

Why is the Success count 0? You need to explain what the output in index=john-doe-index looks like. By explain, I mean illustrate some real events (anonymize as needed). Short of that, you need to explain whether you have a field named statusCode and, if yes, how this code is extracted. Second, you need to explain what "2**" signifies. Is this a wildcard expression, or does your event record success events as a literal 2**? These details are necessary so volunteers do not have to read your mind. Here, I will speculate that your codes are like 200, 203, and so on. (Not "2**".)

I am not sure when count(eval(statusCode="2**")) can give a non-zero count, but I know in most cases it will end up with zero. If you really, really, really want to use eval, you can do something like

| stats count as Total count(eval(if(searchmatch("statusCode=2*"), "success", null()))) as Success
| eval Failure = Total - Success
| eval Percent_Failure = round((Failure/Total)*100)."%"
| table Percent_Failure Total

(If you are using a wildcard in search, there is no need for "2**".) This is a lot messier than simply

| eval success = if(searchmatch("statusCode=2*"), "success", null())
| stats count as Total count(success) as Success
| eval Failure = Total - Success
| eval Percent_Failure = round((Failure/Total)*100)."%"
| table Percent_Failure Total

Here is a full emulation using _internal:

index="_internal"
| extract access-extractions
| rename status as statusCode
``` data emulation above ```

My result is

Percent_Failure  Total
98%              257002

(Of course, in my emulation, most events do not have the field status, so they count as "failure" in this formula.) If you use | stats count by Percent_Failure instead of table, all you get is Total 1.
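The arithmetic itself is easy to sanity-check outside Splunk. A small Python sketch with made-up status codes, where startswith("2") stands in for searchmatch("statusCode=2*"):

```python
# Invented sample codes: three 2xx successes, two failures
status_codes = ["200", "203", "404", "500", "201"]

success = sum(1 for c in status_codes if c.startswith("2"))
total = len(status_codes)
failure = total - success
percent_failure = f"{round(failure / total * 100)}%"
print(percent_failure)  # 40%
```

Note that counting happens over individual events before any aggregation; grouping by Percent_Failure afterwards can only ever yield a single row, which matches the "Total 1" symptom.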
If I use a variable in the mvfilter match, I get the following error:

Error in 'EvalCommand': The arguments to the 'mvfilter' function are invalid.

If I replace the partialcode with a string, it works fine. Please help. Thank you so much.

| makeresults
| eval fullcode="code-abc-1111,code-abc-2222,code-xyz-1111,code-xyz-222"
| eval partialcode="code-abc"
| makemv delim="," fullcode
| eval fullcode2=mvfilter(match(fullcode,partialcode))

This one worked fine:

| makeresults
| eval fullcode="code-abc-1111,code-abc-2222,code-xyz-1111,code-xyz-222"
| eval partialcode="code-abc"
| makemv delim="," fullcode
| eval fullcode2=mvfilter(match(fullcode,"code-abc"))
You are using a Windows OS for your DS and deploying to non-Windows servers. This cannot be made to work. Start over and use a Linux OS for your DS.
Hello, Is it possible to use eventstats with conditions? For example, I only want to apply eventstats if the field name contains "student-1":

| eventstats values(if(match(name,"student-1"), name, null())) as student by grade

Please suggest. Thanks
I am trying to ingest a csv file and have indicated it in the UF inputs.conf file as shown below:

[monitor://C:\<directory>\file.csv]
index = csv_data
sourcetype = csv
crcSalt = <SOURCE>

I created a new index on the Splunk GUI page and even added the new index in indexes.conf on the Splunk machine. However, it seems like the data is not ingesting to the index 'csv_data' which I indicated. When I change the index in the UF inputs.conf to my lastChanceIndex, for some reason, it starts to ingest the csv data. How do I make the data ingest to the csv_data index instead of the lastChanceIndex? Am I missing a step?
Hello, I need help improving the efficiency of my search using eventstats. The search worked just fine, but when I applied it to a large set of data, it took too long. Please suggest. Thank you.

IP 192.168.1.7 of server-A is connected to the "LoadBalancer-to-Server" network; LoadBalancer-A is connected to the "LoadBalancer-to-Server" network and the "Firewall-to-Loadbalancer" network. So, server-A is behind a firewall (behindfirewall = "yes").

ip           name            network                   behindfirewall
192.168.1.1  LoadBalancer-A  Loadbalancer-to-Server    yes
172.168.1.1  LoadBalancer-A  Firewall-to-Loadbalancer  yes
192.168.1.7  server-A        Loadbalancer-to-Server    yes
192.168.1.8  server-B        Loadbalancer-to-Server    yes
192.168.1.9  server-C        network-1                 no
192.168.1.9  server-D        network-2                 no

| makeresults format=csv data="ip,name,network,
192.168.1.1,LoadBalancer-A,Loadbalancer-to-Server
172.168.1.1,LoadBalancer-A,Firewall-to-Loadbalancer
192.168.1.7,server-A,Loadbalancer-to-Server
192.168.1.8,server-B,Loadbalancer-to-Server
192.168.1.9,server-C,network-1
192.168.1.9,server-D,network-2"
| eventstats values(name) as servergroup by network
| eventstats values(network) as networkgroup by name
| eventstats values(networkgroup) as networkpath by servergroup
| eval behindfirewall = if(match(networkpath,"Firewall-to-Loadbalancer"),"yes","no")
| table ip, name, network, servergroup, networkgroup *
Hi, I am trying to get the error percentage of the https response requests, but it's not working as expected.

index="john-doe-index"
| stats count AS Total count(eval(statusCode="2**")) as Success
| eval Failure = Total - Success
| eval Percent_Failure = round((Failure/Total)*100)."%"
| stats count by Percent_Failure

It is showing the following result. I took the above query from previous answers; not sure why it's not working on my end, because the ratio of 4xx and 2xx is high and the result shows 100% and count 1 all the time. Thanks!
Hi, I see there is an option in the PSA deployment script to configure more than one Heimdall replica. What are the advantages and disadvantages of using more than one Heimdall replica? We have my scripts running on our PSAs, and I am trying to figure out if this would help with performance. Thanks, Roberto
@RahulMisra1 It looks like you have double counting - your first search is -30 days to now, which includes yesterday, and then you append yesterday's data, so you will effectively count yesterday twice. You don't actually need the append, as you already have the data from the first search to calculate yesterday. One question though - is your last 30 days supposed to include yesterday? Anyway, this search should work:

index=<> source="/****" IP!="10.*" earliest=-30d@d latest=now
    [ | inputlookup ip_tracking.csv
      | rename MIDS AS MID
      | format ]
``` If the data time is -1d to end of day, then the data is yesterday ```
| eval Day=if(_time>=relative_time(now(),"-1d@d") AND _time<=relative_time(now(),"@d"),"yesterday","Last30Day")
``` Now count yesterday, but then count last 30 days as ALL items, which includes yesterday ```
| stats count(eval(Day="yesterday")) AS yesterday count AS Last30Day BY IP

The final count as Last30Day will just count all items in the search, including yesterday - if you want that to exclude yesterday but also include today up to now, then use your original eval statement in the count.
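A rough Python sketch of that "label in one pass, count both windows from the same data" idea (the timestamps and events are invented, purely to illustrate why the separate append is unnecessary):

```python
from datetime import datetime, timedelta

now = datetime(2024, 5, 15, 12, 0)
midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
yesterday_start = midnight - timedelta(days=1)

# Invented event timestamps, all within the last 30 days
events = [
    yesterday_start + timedelta(hours=3),  # yesterday
    midnight + timedelta(hours=2),         # today
    midnight - timedelta(days=5),          # earlier in the window
]

# One pass over the data: the 30-day total counts everything,
# and "yesterday" is just a labelled subset of that same pass.
yesterday = sum(1 for t in events if yesterday_start <= t < midnight)
last_30_days = len(events)
print(yesterday, last_30_days)  # 1 3
```

Appending a second search for yesterday would add that first event again, inflating the 30-day total to 4.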
Hi Team, I have a field extraction and a calculated field which are not working. Please let me know whether there is any other way to extract it.

EXTRACT-User = \"path\"\:\"auth\/(abc|xyz)\/login\/(?<User>[\w\_]+)
EVAL-user = if(error="invalid credentials",User,'auth.display_name')

"auth.display_name" is an existing field.
I feel your pain - it's tough getting started with all the possibilities. Anyway, the options you are trying are unlikely to be the right solution. appendcols is rarely a good solution, as it requires the two data sets to line up exactly, and join is never a good place to go - it's just not the Splunk way to do things.

If I understand your use case, you are trying to find data from your index 'win' which relates only to the nt_host devices in your lookup file. It's not clear what the matching element in both index and lookup is - you are setting user to nt_host, but on the previous line you replace user in some way. Anyway, there are generally two ways to achieve this, depending on your data volume.

1. Using a subsearch

search index=win EventCode=4725 src_user="*"
    [ | inputlookup Assets
      | rename nt_host as user
      | fields user ]
``` The subsearch above will restrict the search to user=nt_host ```
| stats count by src_user, EventCode, signature, user
``` And this lookup will then fetch the DN - it can be done after the stats as the data does not change for the group by user ```
| lookup Assets nt_host as user OUTPUT nt_host distinguishedName

2. Using a lookup

search index=win EventCode=4725 src_user="*"
| stats count by src_user, EventCode, signature, user
``` And this lookup will then fetch the DN - it can be done after the stats as the data does not change for the group by user ```
| lookup Assets nt_host as user OUTPUT nt_host distinguishedName
``` Now remove all the ones that were not in the Assets lookup ```
| where isnotnull(nt_host)

If the data volume in the lookup is large, then a subsearch can be inefficient, and if the lookup represents most of the data set in the index, then it's probably more efficient to use the second approach. However, I am not sure about your 'user' field and whether that is intended to be nt_host.
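The two approaches can be sketched outside SPL. A minimal Python analogue with an invented Assets lookup and invented events, illustrative only:

```python
# Invented lookup rows: nt_host -> distinguishedName
assets = {
    "host-a": "CN=host-a,OU=Servers",
    "host-b": "CN=host-b,OU=Servers",
}

events = [
    {"user": "host-a", "EventCode": 4725},
    {"user": "host-z", "EventCode": 4725},  # not in the lookup
]

# Approach 1: filter first (like the subsearch), then enrich the survivors
filtered = [dict(e) for e in events if e["user"] in assets]
for e in filtered:
    e["distinguishedName"] = assets[e["user"]]

# Approach 2: enrich everything (like the lookup), then drop non-matches
enriched = [dict(e, distinguishedName=assets.get(e["user"])) for e in events]
kept = [e for e in enriched if e["distinguishedName"] is not None]

print(len(filtered), len(kept))  # 1 1
```

Both yield the same rows; the difference is where the work happens - approach 1 shrinks the data set before processing, approach 2 processes everything and discards afterwards, which mirrors the data-volume trade-off described above.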
Based on your Slack response, I think this is what you will want:

...your search ...
| rex field=host_list max_match=0 "(?<prefix>[^0-9]*)(?<id>\d+)"
| eval prefix=mvdedup(prefix)
| foreach id mode=multivalue
    [ eval n=<<ITEM>>,
      diff=n-prev,
      ss=case(isnull(ss), n, diff>1, mvappend(ss, n), true(), ss),
      ee=case(isnull(ss), null(), diff>1, if(r=mvindex(ss,-2), mvappend(ee, " "), mvappend(ee, r)), true(), ee),
      r=n,
      prev=n ]
| eval ee=if(r=mvindex(ss,-1), mvappend(ee, " "), mvappend(ee, r))
| eval ranges=prefix."[".mvjoin(rtrim(mvzip(ss, ee, "-"), "- "), ",")."]"
| fields - diff id n prev r ss ee
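The range-compression logic inside that foreach is easier to follow as a plain loop. A Python sketch with invented host names (illustrative, not the SPL itself):

```python
import re

# Invented input, standing in for host_list
hosts = ["web1", "web2", "web3", "web5", "web7", "web8"]

# Split off the common prefix and the numeric ids, like the rex above
prefix = re.match(r"[^0-9]*", hosts[0]).group(0)
ids = sorted(int(re.search(r"\d+", h).group(0)) for h in hosts)

# Collapse consecutive ids into (start, end) pairs, like the foreach loop
ranges, start, prev = [], ids[0], ids[0]
for n in ids[1:]:
    if n - prev > 1:          # gap found: close the current range
        ranges.append((start, prev))
        start = n
    prev = n
ranges.append((start, prev))  # close the final range

# Render, like the mvjoin/rtrim step: single ids stay bare, runs become a-b
out = prefix + "[" + ",".join(
    str(a) if a == b else f"{a}-{b}" for a, b in ranges
) + "]"
print(out)  # web[1-3,5,7-8]
```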
Hello All, Perhaps I have the $64K question. I am trying to understand (better) the IOWAIT warnings and errors - the yellow and red icons, etc. I know that IOWAIT can be an issue, and only on Linux-based servers. I will guess that running Splunk Enterprise on a virtual Linux machine makes things harder. I have revised the Health Report Manager settings per a Splunk forum posting, and the issue is resolved for the most part. I can run an "unreasonable" search and get the warning icon, and then, as the search progresses, the red error icon. I have run some Linux commands like iostat and iotop while the search is running but do not see any useful data. I am just curious how Splunk determines the IOWAIT values as part of the health monitoring. I was also wondering, if I reset the health reporting values back to the default, how I might go about reducing the "IOWAIT" characteristic on the Splunk server. Thanks for any hints or tips. ewholz