sideview's Topics

I have a situation where I have two multivalued fields in my data, and I want to call mvexpand on ONE of the fields while leaving the second field multivalued. Unfortunately mvexpand seems to fall down here. It correctly expands out my first field, but at the same time it flattens my other multivalued field. (For the record, mvcombine has the same problem.)

Here's a simple but completely artificial scenario to reproduce it:

| stats count | eval field1="foo-bar-baz" | eval field2="fred-mildred" | makemv field1 delim="-" | makemv field2 delim="-"

That gives me one row, where 'field1' has 3 values and 'field2' has 2 values. Now tack on an mvexpand:

| stats count | eval field1="foo-bar-baz" | eval field2="fred-mildred" | makemv field1 delim="-" | makemv field2 delim="-" | mvexpand field1

I should have 3 rows now, and each of the rows should still have the multivalued value for field2. However it throws away the multivalued values and mysteriously falls back to the original string value. Is there any way around this problem?

Ultimately what this is all part of is that one of my multivalued fields represents all 'previous' values of a certain field, and my second multivalued field is all the 'current' values of that same field. I want to mvexpand the current values, then filter the set down to only the rows where the current (single-valued) value is NOT contained in the previous set (multivalued), and then I get a nice table of notable additions basically. Open to other suggestions here too.

NOTE: it's ugly, but I found a hack using eval to forcibly join and re-split the strings on either side. So instead of | mvexpand field1, I do this:

| eval field2=mvjoin(field2, "#_$_%") | mvexpand field1 | eval field2=split(field2, "#_$_%")
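
For the second half of the question (keeping only rows whose current value is not in the previous set), here is a minimal sketch that builds on the join/re-split hack above. It assumes the two multivalued fields are called previous and current (hypothetical names) and that mvappend, mvdedup and mvcount are available in your Splunk version. The idea is that appending the current value to the previous set only increases the deduplicated count when the value is genuinely new, which avoids needing a dynamically built regex for the membership test:

<my search>
| eval previous_joined=mvjoin(previous, "#_$_%")
| mvexpand current
| eval previous=split(previous_joined, "#_$_%")
| where mvcount(mvdedup(mvappend(previous, current))) > mvcount(mvdedup(previous))
| fields - previous_joined

The mvjoin/split step is the same workaround described in the NOTE; the rest is just the "notable additions" filter.
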
We're setting up a custom data input, and I'm wondering whether it's a bad idea to just write everything to the Windows Event Log and then have Splunk index it from there. From the .NET side this seems like a very cheap and simple way to go, whereas setting it up as a scripted input in this case will actually require a bit more work. But we're concerned that this will be an awful performance bottleneck, or worse, that it'll look great for a while and then fail catastrophically under load someday. There will be quite a lot of data coming through this path, and maybe the Windows Event Log is only a good solution if you're dealing with small volumes. Thanks in advance.
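
If you do go the event log route, the Splunk side is just a Windows event log input stanza on the indexer or forwarder. A minimal sketch, assuming your .NET code writes to a custom channel called MyAppLog (a hypothetical name, substitute your real channel):

# inputs.conf -- minimal sketch; "MyAppLog" is a hypothetical custom channel name
[WinEventLog://MyAppLog]
disabled = 0
# route the data wherever you prefer
index = main

Whether the event log itself becomes the bottleneck at high volume is the part worth load-testing before committing to this design.
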
In a funny way I'm looking for the opposite of fillnull. I have some fields that sometimes come through with empty-string values, which creates empty columns in my table, and I don't see a way to filter them out.

Reading the docs for eval and testing this out, I can verify that the conditional logic is working, but I still get the empty columns showing up in tables:

<my search> | eval foo=if(len(foo)=0, null(), foo)

For that matter, I would expect even this to effectively null out the field, but it doesn't either:

<my search> | eval foo=if(len(foo)=0, some_nonexistent_field_name, foo)

Maybe there is no way to filter them out, and the best practice is to ensure that they never get created in the first place?
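
One hedged workaround, assuming a Splunk version that has the foreach command: apply the same empty-string-to-null eval across every field, so that a field whose values are all empty ends up with no values at all and drops out of a wildcarded table automatically. This is a sketch rather than a guaranteed fix; if you list field names explicitly in the table command, the column will still be drawn.

<my search>
| foreach * [ eval <<FIELD>> = if(len('<<FIELD>>') == 0, null(), '<<FIELD>>') ]
| table *
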
When I'm sending in data over TCP, once in a blue moon Splunk will split one of the events into two parts, so I get the first portion of the text in one event and the second in another. Obviously this causes a lot of problems.

As a guess, I thought maybe Splunk might be able to tell the difference between EOF and mere linebreaks or something, so I tried setting various explicit LINE_BREAKER keys in the props stanza. I tried the following three values (separately, of course) but none worked:

LINE_BREAKER=[\n]+
LINE_BREAKER=[\r\n]+
LINE_BREAKER=(\x00)<\d+>

In fact they all make matters worse, in that they cause my events to get indexed multiline, with 57 lines per event, even though I have SHOULD_LINEMERGE=False in the stanza. The third LINE_BREAKER value, by the way, I got from http://answers.splunk.com/questions/603/juniper-netscreen-tcp-syslog-messages-not-breaking-properly which seemed to have solved the problem over there. So I'm definitely doing at least one thing wrong.

Is it just that the sending process is responsible for aligning its TCP packets with linebreaks? Is my network just way flakier than a normal network should be? Is it possible to make this problem go away with some key that tells the tcp input to be a little patient and wait a few seconds somehow?

(By the way, this is a 64-bit Splunk running on Windows 7.)
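
One detail worth checking: LINE_BREAKER is expected to contain at least one capturing group, and whatever the first group matches is discarded between events. The first two values above have no group at all, which may be why they behave so strangely. A minimal sketch of a props.conf stanza along those lines, assuming the data comes in on a sourcetype called my_tcp_feed (a hypothetical name):

# props.conf -- minimal sketch; "my_tcp_feed" is a hypothetical sourcetype name
[my_tcp_feed]
SHOULD_LINEMERGE = false
# the capture group is required; the text it matches is discarded between events
LINE_BREAKER = ([\r\n]+)

That said, event breaking can only operate on the data that has actually arrived, so if the sender flushes mid-line and then pauses, a correct LINE_BREAKER on its own may not be enough to prevent the occasional split.
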
I have a dataset where the rows in my search results all have a 'value' field, and there's another field that specifies what exactly this is the value of. So picture having

name="color" value="red"

How could I get these rows to have

color="red"

And of course this is given that I have no idea what any of the names are going to be up front, so I have to set it dynamically. Since there's a lot going on in the regex already, I am a bit reluctant to try to do it in the transforms.conf stanza itself. However, I'm at a loss for what such a regex would look like, so any help there is appreciated too. I'm sure someone has run into this before, and rather than hack my way through it I thought I'd ask what the best practice is.
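
A sketch of the search-time approach: eval can create a field whose name is taken from the value of another field by wrapping that field in curly braces. Assuming the two fields really are called name and value as above:

<my search>
| eval {name} = value
| fields - name value

With name="color" value="red", this produces color="red" on that row, without needing to know the possible names up front.
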
It seems like if I have a numeric multivalued field, I should be able to use eval to take the max and min of the values per row. For example, I have a 'bytes' field on my events. I form those events into transactions, and now I have a nice multivalued 'bytes' field on my transaction rows. From here I'd like to get the max and min values of bytes per row, i.e. so I end up with a single 'maxBytes' number per transaction row.

<my search> | transaction user | eval maxBytes=max(bytes)

However when I do this I end up with a multivalued maxBytes, and it's exactly as though I had just done:

<my search> | transaction user | eval maxBytes=bytes

Is there a reason why it works this way, or is there a workaround for it?
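
eval's max() takes a list of separate arguments rather than a single multivalued field, which is why it effectively passes the field straight through here. If the per-user max and min are really all you need, a sketch that sidesteps the multivalue problem by letting stats do the aggregation instead of eval:

<my search>
| stats max(bytes) as maxBytes min(bytes) as minBytes by user

This loses the other things transaction gives you (durations, event grouping and so on), so it only fits when aggregates like max and min are all you need from the grouping.
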
I actually need a right join in some cases. I know I'm not supposed to use joins at all, and wherever possible should use a disjunction plus stats, or use a lookup, because these are faster, better, cheaper, awesomer etc. (http://answers.splunk.com/questions/822/simulating-a-sql-join-in-splunk/1717#1717) However, sometimes there's just no other way. One side of my data comes from a search and the other side comes from inputlookup, so I can't just glue together two sets of 'events' in a single search with some ORs and stitch them back together with stats count by foo later.

Anyway, proceeding with join, I have 2 searches that return events with a field called sourceHost:

search <somewhat expensive search>
inputlookup foo.csv

Following best practices with join, the cheaper, smaller search goes inside the brackets:

search <somewhat expensive search> | join common_field [inputlookup foo.csv]

"inner" is the default type, so rows that are on the left side and not the right, or on the right side and not the left, will be dropped. The docs say that type="outer" and type="left" are synonymous (http://www.splunk.com/base/Documentation/latest/SearchReference/Join), and as far as I can tell there's no type="right". Is there another way?
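
One sketch of a way to get right-join semantics without a type="right": flip the two sides, so the lookup becomes the outer search and the expensive search moves into the subsearch with a left join. The caveat is that the bracketed search is then subject to the usual subsearch time and row limits, so this only holds up if the expensive search returns a bounded result set.

| inputlookup foo.csv
| join type=left sourceHost [ search <somewhat expensive search> ]

Stitching with append plus something like stats values(*) as * by sourceHost is another way to avoid join entirely, though that gives you a full outer combination that you would then have to filter back down to the lookup's hosts.
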
I'm trying to use timechart to pass along the values of a particular field for each time bucket. I know that the fields are there, and that the fields exist in 100% of the events. However values() and first() are not finding them. For example:

index="_internal" source="*metrics.log" group=tcpin_connections sourceHost=* | timechart values(sourceHost) dc(sourceHost)

happily tells me that there are 8 distinct sourceHost values per time bucket, but the values(sourceHost) column says there are no values, and the UI gives me the error:

Specified field(s) missing from results: 'sourceHost'

I've also noticed that some fields, like 'destPort', are passed along by values() and first() just fine.
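
As a way to narrow down whether this is timechart itself or something about how sourceHost reaches the search pipeline, a sketch of the same report done with bin and stats instead (the 30-minute span is an arbitrary choice):

index="_internal" source="*metrics.log" group=tcpin_connections sourceHost=*
| bin _time span=30m
| stats values(sourceHost) as sourceHosts dc(sourceHost) as distinctSourceHosts by _time

If the values come through this way but not via timechart, that points at timechart's handling of the field rather than the field extraction.
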
When a saved search name has more than about 50 characters, the names get truncated and you get a "..." injected in the middle. Sometimes I can rewrite the name, or find a way to put it inside a <collection> tag so as to be less verbose. However, sometimes I just want to raise that truncation limit from 50 to, say, 70. In many cases I actually need the long name, for instance to display in the UI when the saved search is actually run. So I often end up just accepting the truncation of the search name in the menu, even though it looks bad and affects usability there.

Is there any way to do this? To raise the truncation limit?

As a side note, when you're only, say, one character over the limit, Splunk will actually replace that one character with the three-character sequence "...", which is a bit silly.
In a dashboard we're working with, we are displaying a table of events, and the times always have 000 as the millisecond values, so we don't want to waste space showing those. We really don't need to show the year either, and probably not even the month or the day, because the header says that these are all from the last few hours. Basically, any space we can save on this dashboard would be great. So is there a way that I can reformat the big times like "12/31/00 4:00:00.000 PM"?
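
A sketch of one way to do this in the search itself, assuming the column being displayed is _time: fieldformat changes only how the value is rendered, so _time stays a real timestamp underneath, and the strftime format string decides exactly which pieces to show.

<my search>
| fieldformat _time = strftime(_time, "%I:%M:%S %p")

Swap in %m/%d if you decide you want the month and day after all; an eval instead of fieldformat also works, at the cost of turning _time into a plain string.
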
We want to end up with this kind of table on a dashboard:

Average GB By Host and Time

host    last 24 hours    last week    last month
web1    31.42            14.2         18.66
web2    33.59            32.4         32.14
web1    43.5             35.3         34.91

However, we can't think of a way to do this without running subsearches and using the join command, which seems very nasty. In this example we'd run the search over the last month to get the big stat that way, and then run 2 other searches in join commands to get the other ones. Again, very very nasty. And although it might work, we're worried about the scaling limits when using join, and we're pretty sure the number of events in this case will hit the limits, which rules this out anyway.

But is there any other way to actually get these into a table or chart? If it's not in a table, we can think of a limited alternative using NxM SingleValue modules, but that would be a little lame, and we're hoping to really get this data into single tables and graphable in single charts.
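
A sketch of a join-free approach: run one search over the longest window and compute the shorter windows with eval conditions inside the stats aggregations. The GB calculation and the bytes field here are assumptions about the underlying data; substitute whatever your real search and volume field look like.

<my search over the last month>
| eval GB = bytes / 1024 / 1024 / 1024
| stats avg(eval(if(_time >= relative_time(now(), "-24h"), GB, null()))) as "last 24 hours"
        avg(eval(if(_time >= relative_time(now(), "-7d"), GB, null()))) as "last week"
        avg(GB) as "last month"
        by host

Everything comes back in one table keyed by host, which also makes it graphable in a single chart.
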
This has come up about one and a half times today. Basically we want to run a search over, say, the past hour, and for each of the 15-minute periods in the past hour, compute a statistic like count or sum(bytes). Super trivial in timechart, except that we want the time intervals to be the columns of the table and the statistic to be the rows. In other words, the reflection of timechart's normal output.

To phrase this in terms of the internal data from the metrics log, the timechart search would run over the last hour and look like:

index=_internal source=*metrics.log group=per_sourcetype_thruput | timechart span=15m count by series

I'm simplifying a bit, but say the table ends up looking like this:

_time                     scheduler    audittrail    splunkd
4/13/10 6:45:00.000 AM    30           27            30
4/13/10 7:00:00.000 AM    29           29            29
4/13/10 7:15:00.000 AM    30           26            30
4/13/10 7:30:00.000 AM    3            3             3

It's close, but again in this case we actually need these columns to be the rows and these rows to be the columns. We need this partly because we want to show a column graph where the X-axis is sourcetype, and for each sourcetype there are 4 (non-stacked) columns, one for each of the consecutive days. Also we want this because this is the tabular format that is more familiar to the customer.

I can think of 2 solutions in the abstract, but I've had no success with either:

1) Find some kind of a "transpose" command that can just reflect the data. I can't find one.

2) Use chart and bin myself, and literally do what timechart does but in the opposite orientation. This actually works fine, except that the time values in the column headers are epoch-time integers rather than human-readable string times. Is there a way to do something like the convert command but on the actual field names instead of field values?

index=_internal source=*metrics.log group=per_sourcetype_thruput | chart count over series by _time

Is there a way to get the desired end result, using either of these methods or some other way?
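
A sketch along the lines of option 2: instead of trying to convert the column names after the fact, format the time into a readable string before charting, so the string becomes the column header directly. The %m/%d %H:%M format here is just an example.

index=_internal source=*metrics.log group=per_sourcetype_thruput
| bin _time span=15m
| eval period = strftime(_time, "%m/%d %H:%M")
| chart count over series by period

One caveat: the columns then sort as strings, so pick a format that still sorts chronologically. Later Splunk releases also ship a transpose command, which is option 1 directly, if you are on a version that has it.
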
On March 13th, -1mon maps to February 13th at whatever the current time of day is, and -1mon@d maps to February 13th 12AM. In the dashboard we're dealing with, this is basically what we want. However, on March 29th, 30th and 31st, i.e. in cases where that date didn't exist in the previous month, it seems splunkd must do something a little arbitrary. So just to confirm: on all 3 of these days (3/29, 3/30, 3/31), does -1mon just map each to March 1st 12AM?
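
Rather than guessing, the behaviour can be checked directly with the relative_time eval function, which applies the same time-modifier parsing. A sketch, using the same | stats count row-generator trick as the earlier question (the specific date is just an example):

| stats count
| eval base = strptime("2010-03-31 09:30:00", "%Y-%m-%d %H:%M:%S")
| eval back_1mon = strftime(relative_time(base, "-1mon"), "%Y-%m-%d %H:%M:%S")
| eval back_1mon_at_d = strftime(relative_time(base, "-1mon@d"), "%Y-%m-%d %H:%M:%S")
| table back_1mon back_1mon_at_d

Running this for the 29th, 30th and 31st shows exactly where splunkd lands -1mon and -1mon@d on the days that have no counterpart in the previous month.
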