Awesome, glad I could be of some help!
Something you may want to consider with your index/source specifications - the wildcard (*) can be fairly expensive depending on how many events you're looking at, so it might be worth investing some time to figure out if you're always going to be checking for events in a specific set of indexes with certain sources. Cheers!
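As a quick sketch (the index and source names here are just placeholders - swap in whatever your environment actually uses), instead of something like:

index=* source=* status=500

scoping the search down can save Splunk a lot of work:

index=web source="/var/log/nginx/access.log" status=500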
Have you checked out this link yet?
The lookup command does (I believe) what you're trying to do with appendcols! They've got some info in there on using the "OUTPUTNEW" argument, where you can essentially tell Splunk "okay, the field in the lookup file is going to be named X, but the field in my search calls it Y."
Something you may want to consider is whether your "eval AppID=BOA_AIT" pipe is necessary. Being able to tell Splunk that a field in your search and a field in the lookup table have different names but hold the same values might actually eliminate the need for that command. Does that answer your question, or did that just bring up more questions? If you need more help with the lookup command syntax, there's a pretty cool post here:
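As a rough sketch (the lookup file name app_info.csv and its columns are made up - substitute your own): if the lookup's AppID column matches your event field BOA_AIT, you can write:

| lookup app_info.csv AppID AS BOA_AIT OUTPUTNEW AppName

The "AppID AS BOA_AIT" part is what lets the two differently-named fields match up, which is also what makes the eval rename unnecessary.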
I've recently upgraded a test server of mine from 6.x.x to 7.2.x and found a weird bug, and I'm wondering if anyone else is having a similar issue. The following scenario works just fine in 6 but doesn't work in 7. I have a tstats command with inline earliest/latest parameters that pipes to an addinfo command, and the two seem to disagree about the time range: the tstats results respect the inline earliest/latest limits, but addinfo returns the time picker's earliest/latest values regardless of the inline parameters.
Another part I'm finding peculiar is if I don't use tstats and I just do a normal index="my_index" search, everything seems to work as I intended. To put it in a pseudo-code context, I have two searches with the time picker set to last 30 days:
A: | tstats sum(base.purchase) from datamodel=MyDataModel.base where earliest=-7d latest=@d | addinfo
B: index=my_index earliest=-7d latest=@d | stats sum(purchase) | addinfo
Searches A and B both give me a sum of all purchases within the last week, but search A sets the info_min_time value to the epoch time of 30 days ago (the time picker value), while search B sets info_min_time to the epoch time of 7 days ago (the search's earliest parameter).
Has anyone else run into this problem or been able to replicate similar results? Some of the searches I'm running are using a combination of the tstats/earliest/latest/addinfo commands and I'd like to avoid switching from tstats for as long as possible.
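In case it's useful context, the only workaround I've come up with so far - just a sketch, I haven't verified it holds up - is to overwrite addinfo's fields with the inline window using relative_time():

| tstats sum(base.purchase) from datamodel=MyDataModel.base where earliest=-7d latest=@d
| addinfo
| eval info_min_time=relative_time(now(), "-7d"), info_max_time=relative_time(now(), "@d")

That obviously duplicates the time bounds in two places, which is exactly what I'd like to avoid.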
I'm having a frustrating time attempting to set up a test environment with Index Clustering and I've reached a tipping point! I've searched online for answers but I'm not finding anything substantial that fixes my problem. The VM network I set up has one Deployment Server (DS), a Master Node (MN), a Search Head (SH), 3 Indexers, and 2 Forwarders. I set the Replication Factor to 3 and the Search Factor to 2, then took the following steps to set up the network and create the index cluster:
Created VMs, installed Splunk on each box, pinged entire network to ensure connectivity between every VM.
On the DS I configured some Apps, created some server classes, and organized the forwarders all nice and neat-like.
On the MN I enabled indexer clustering via UI and set everything to default values and created a simple password for the cluster.
I enabled each indexer as a peer node and connected them to the MN via UI - I received an error saying they couldn't communicate with the MN or the Replication Factor hadn't been met yet.
Finally, I enabled the SH via UI.
This is where I'm running into problems. I haven't begun sending data from my forwarders yet, but the _audit and _internal indexes aren't being replicated fully - there's only one replicated and searchable copy across all three indexers. I've waited for over an hour while I worked on other projects, but the replication has stayed the same. There are a few buckets that were replicated to other indexers, but after a brief period of time they stopped, so 4/10 buckets would become 5/11, then 6/12, etc...
So far I have tried:
Checking that all relevant ports are open and being used by Splunk.
Navigating to the "Bucket Status" page to try to find a manual solution.
Uninstalling and reinstalling Splunk entirely. (Yes, really.)
These are some of the error messages I've received on the MN:
**Search peer 'indexer1_name' has the following message: Indexer Clustering: Too many bucket replication errors to target peer='indexer2_ip_address:8080'. Will stop streaming data from hot buckets to this target while errors persist. Check for network connectivity from the cluster peer reporting this issue to the replication port of target peer. If this condition persists, you can temporarily put that peer in manual detention.**
**06-28-2018 14:27:08.061 -0400 INFO CMMaster - event=handleReplicationError bid=_internal~7~9EB230C3-F26E-4110-A543-1C5DBB249AAC tgt=E106836F-8C34-4AAF-8922-8E859E898E62 peer_name='indexer2_name' msg='target doesn't have bucket now. ignoring'**
**06-28-2018 14:27:08.061 -0400 INFO CMMaster - replication error src=A6FBB117-781D-4AD8-B620-8981371DE05F tgt=E106836F-8C34-4AAF-8922-8E859E898E62 failing=tgt bid=_internal~7~9EB230C3-F26E-4110-A543-1C5DBB249AAC**
**06-28-2018 14:27:08.056 -0400 INFO CMMaster - postpone_service for bid=_internal~8~E106836F-8C34-4AAF-8922-8E859E898E62 time=150.000**
I'm wondering if anyone has a hunch about what the happy heck could be going on that I'm overlooking. I've set up a cluster before in a separate Splunk Lab so this is extra weird to me - I thought I had most of the basics down, but apparently not! Any thoughts or advice would be greatly appreciated. Thanks,
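For completeness, here's roughly how the replication port is configured on each of my indexers in $SPLUNK_HOME/etc/system/local/server.conf (9887 is just the port I happened to pick - yours may differ), in case anyone spots something off:

[replication_port://9887]
disabled = false

And on the MN I've been checking things with:

splunk show cluster-status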
I've been searching for a few days about this particular subject and I'm coming up short. I have a .css file working with my dashboard, and it's fine for the most part, but it doesn't resize to a user's browser window. For example - the background image I'm using is 1600x1200, but anything larger or smaller than those dimensions doesn't adjust, so it either gets cut off or there's empty space visible.
Does anyone else have experience with this situation, or am I trying to twist Splunk's arm too hard on this one?
Everything works statically, nothing is broken, I'd just like for the result to be a little more visually friendly and adjust to a user's window. Thanks in advance!
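For reference, this is roughly the direction I've been experimenting with in the .css file (the selector and image path are from my setup, so treat them as placeholders):

.dashboard-body {
    background-image: url("background.png");
    background-size: cover;        /* scale to fill the window */
    background-position: center;
    background-repeat: no-repeat;
}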
I've been working with Splunk's DB Connect App for a few weeks and I have a question regarding the rising column. I'm attempting to only pull records that have been approved (weekly) so I have my SQL's WHERE clause structured similar to this:
WHERE approval_flag=1 AND rising_column > ?
ORDER BY rising_column
My question is this - will the rising column be updated to the most recent event regardless of an approval flag (or other condition)?
The reason I ask this question is because I noticed certain information missing during the weeks I tested a rising column solution. The information pertains to employee hours, and hours are approved by supervisors. Since there are multiple supervisors, hours are approved at different times. I'm wondering if the rising column will update to the most recent event, even if it isn't approved, perhaps leaving out events that were once unapproved but later approved.
This is all based on a hunch, because I haven't found much documentation on this particular aspect of rising columns. Thanks for any information/tips that may come!
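One idea I've been toying with, in case it's relevant (the column names here are just illustrative): rise on an approval timestamp instead of a record ID, so rows that are approved late still fall above the checkpoint on a later run:

WHERE approval_flag = 1 AND approval_time > ?
ORDER BY approval_time ASC

I'd love to know whether that's the intended way to handle late-arriving approvals or whether I'm misunderstanding how the checkpoint is updated.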
For sure! The overall problem is finding employee utilization. This essentially boils down to finding the number of days it was possible to work in a given time period and, out of all those days, how many hours people were working. Holidays come into play because we have employees who were hired at different times, which is why I need to find the number of holidays by employee.
I could also be approaching all of this way wrong, that's just how my brain broke down the problem: find the time range by employee, count the number of holidays and business days in that time range, and subtract holidays from business days to get workable days.
To do all of that, however, I convert dates (in different formats - yay, inconsistencies) from human-readable to epoch and go from there. Again, I could be way off, but since things are broken up by employee, I update info_min_time with the hire_epoch value if hire_epoch is larger (i.e., the employee was hired after the earliest time in the range).
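Roughly, that conversion step looks like this in my search (the field name and format string are just examples - my real data has a few different formats):

| eval hire_epoch=strptime(hire_date, "%m/%d/%Y")
| addinfo
| eval info_min_time=max(info_min_time, hire_epoch)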
I appreciate your help!
Hello Splunk Community,
I've tried to do my homework on the subject and I'm coming up short, so here I am. I'm a few months new to Splunk and I have a question regarding multivalue fields. The problem I'm working on is calculating the number of federal holidays between two dates, by employee, while accounting for hire date. For example, take two employees, one starting in January and the other in February: if I count holidays between January and February by employee, the individual hired in February shouldn't have New Year's Day or MLK Day counted against them.
My current strategy is to reference a lookup table containing several years' worth of federal holidays. It's a bit hack-y, as it adds two multivalue fields to each event - the holiday name and date. I've used the 'addinfo' command to get a min/max time from the time selector, and a strptime() function to evaluate the epoch time of each holiday's date, but when I use the mvfilter command to compare the epoch holiday time against info_min_time/info_max_time, I get an error saying the arguments to mvfilter are invalid. I did some digging and found that mvfilter(X) only works when X is an expression referencing a single field, not more than one.
So for instance, this line gives me an error:
| eval in_range=mvfilter(epoch_holiday>=info_min_time AND epoch_holiday <= info_max_time)
While this line does not:
| eval keep=mvfilter(epoch_holiday>=1483228800 AND epoch_holiday <= 1488326400)
So my big question - is there a way to compare a multivalue field to one or more single value field(s)? I've tried using mvexpand/mvcombine but it messes with the events in a weird way. I'm wondering if I'm asking Splunk to do something it's not quite designed to do, but any help would be greatly appreciated. Thanks!
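For reference, my mvexpand attempt looked roughly like this (the field names and date format are from my setup, so adjust accordingly) - this is the version that mangles the events when I try to recombine them afterwards:

| addinfo
| eval epoch_holiday=strptime(holiday_date, "%Y-%m-%d")
| mvexpand epoch_holiday
| where epoch_holiday>=info_min_time AND epoch_holiday<=info_max_time
| stats count AS holidays_in_range BY employee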