sideview's Topics


Posting this in case other folks run into it. It's possible for an app to ship an alert disabled, in such a way that when any user tries to enable it by going to Manager and selecting "Edit > Enable", it doesn't work. Instead of enabling the alert, nothing happens at all. You click the green button and nothing happens. Looking at the browser console, there are no errors when this happens and the javascript makes no attempt to post anything at all to Splunk.

The question has two parts:

-- what is the root cause of this, and how can folks avoid accidentally shipping app content like this?
-- what workaround might exist for the end users who need to enable the disabled alert?

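In case it helps anyone else reproduce this, a search along these lines should show where the disabled setting is actually coming from for the alert in question. This is just a sketch - "My Alert Name" is a placeholder for the real alert, and it assumes your role is allowed to run the rest command:

    | rest /servicesNS/-/-/saved/searches splunk_server=local
    | search title="My Alert Name"
    | table title disabled eai:acl.app eai:acl.owner eai:acl.sharing
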
I updated to 8.2.2.1 and suddenly all of our unit test output is polluted with hundreds of Authorization Failed messages, each coming from various calls to splunk.rest.simpleRequest. The Authorization failures themselves are perfectly normal - many of our tests actually assert that ownership and permissions are set the right way, and testing that involves trying to do things with the wrong user and asserting that the thing fails. What's problematic is how formerly nice clean unit test output to the console or to stdout is now polluted with all this stuff about these normal failures.

For example, picture dozens or hundreds of these:

    Authorization Failed: b'{"success":false,"messages":[{"text":"It looks like your Splunk Enterprise\\nuser account does not have the correct capabilities to be able to post licenses.\\nReach out to your local Splunk admin(s) for help, and/or contact Sideview support\\nfor more detail."}]}

Curious if anyone has run into this or knows where the messages might be coming from.

(I am reposting this question from email, with permission from the person who emailed)

I need to basically join 3 indexes where the 'join' info is in a 4th index. The first 3 indexes have around 50,000 entries each while the 4th index has around 500,000.

The fields in the indexes are:

Indexes containing the data - Index A, B, C, ...:
    name
    status
    id

Index containing relationships - Index REL:
    id
    parent
    child

The parent and child values in REL match the id value in A, B and C. Also note that the id values in A, B and C never collide; in other words the "single id space" of the REL events has no overlaps across the entities in A, B, C. From A to B is normally a one-to-many relation, and from B to C is a many-to-many relation.

The specific use case here is that A represents Applications, B represents Deployments of those applications, and C represents Servers implementing those deployments. A server can be used for several deployments, hence the many-to-many relation here.

The obvious way would be to do something like:

    index=A
    | rename id as A_id, name as Aname, status as Astatus
    | join max=0 type=left A_id [ | search index=REL | rename parent as A_id, child as B_id ]
    | join max=0 type=left B_id [ | search index=B | rename id as B_id, name as Bname, status as Bstatus ]
    | join max=0 type=left B_id [ | search index=REL | rename parent as B_id, child as C_id ]
    | join max=0 type=left C_id [ | search index=C | rename id as C_id, name as Cname, status as Cstatus ]
    | table Aname Astatus Bname Bstatus Cname Cstatus

This, of course, fails miserably because the join only returns 10,000 results while the REL index has 400,000 events...

I can rewrite the first join as:

    index IN(A REL)
    | eval parent=if(index="REL", parent, id), child=if(index="REL", child, id)
    | stats values(name) as Aname values(status) as Astatus values(child) as childs by parent
    | table Aname Astatus childs

But I'm at a loss how to squeeze in the other indexes and relation... And I also have some hope that there's a way to avoid join entirely.

UPDATE: Here is a run-anywhere search to fabricate some sample input rows:

    | makeresults
    | fields - _time
    | eval data="1,A,appl1,,;2,A,appl2,,;3,D,depl1,,;4,D,depl2,,;5,D,depl3,,;6,S,serv1,,;7,S,serv2,,;8,S,serv3,,;9,S,serv4,,;10,R,,1,3;11,R,,2,4;12,R,,2,5;13,R,,3,6;14,R,,4,7;15,R,,5,8;16,R,,5,9", data=split(data, ";")
    | mvexpand data
    | rex field=data "(?<sys_id>[^,]*),(?<type>[^,]*),(?<name>[^,]*),(?<parent>[^,]*),(?<child>[^,]*)"
    | fields sys_id type name parent child
    | eval parent=if(parent=="",null(),parent), child=if(child=="",null(),child)

And the desired output is rows that map appl1 to depl1 and serv1, and that map appl2 to depl2,depl3 and serv2,serv3,serv4.

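To make the desired output concrete, here is the general join-free direction I'm hoping exists, written as a rough, untested sketch against the fabricated sample rows above. It leans on the fact that the ids never collide, and the field names (sys_id, type, parent, child) match the sample generator rather than the real indexes:

    <the sample rows above, or the real A/B/C/REL events reshaped to match>
    | eval key=coalesce(child, sys_id)
    | eval type=if(type="R", null(), type), name=if(name="", null(), name)
    | stats values(type) as type values(name) as name values(parent) as parent_id by key
    | eval key2=if(type="S", parent_id, key)
    | stats values(eval(if(type="A",name,null()))) as Aname
            values(eval(if(type="D",name,null()))) as Dname
            values(eval(if(type="S",name,null()))) as Sname
            values(eval(if(type="D",parent_id,null()))) as app_id
            by key2
    | eval key3=coalesce(app_id, key2)
    | stats values(Aname) as Aname values(Dname) as Dname values(Sname) as Sname by key3
    | table Aname Dname Sname

The first stats attaches each entity's parent id (REL rows are keyed by their child, entities by their own id), the second groups servers under their deployment, and the third rolls deployments and servers up under their application. I haven't convinced myself it scales to the real 500,000-row REL index, which is partly why I'm asking.
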
Example: Say I have two lookups A and B. Let's say they're both file-based lookups (even though I don't think it actually matters).

A takes an input field called "first" and outputs an output field called "second".
B takes an input field called "second" and outputs an output field called "third".

Let's say they are both "automatic" courtesy of this in props.conf:

    LOOKUP-aaa = A first OUTPUT second
    LOOKUP-bbb = B second OUTPUT third

We've also made an example with no inline renaming or anything - the external field names happen to be identical to the column names in the lookup. It's a little artificial but hey, it's an example.

More background:

A) The Splunk docs claim that this isn't supported. See the statement "Splunk software does not support nested automatic lookups" at https://docs.splunk.com/Documentation/Splunk/8.0.2/Knowledge/Makeyourlookupautomatic

B) However they clearly do work at least a little, because apps use them and there are answers posts telling you how to do it:

https://answers.splunk.com/answers/94609/automatic-lookup-on-a-field-that-is-automatically-looked-up.html
https://answers.splunk.com/answers/209148/can-you-perform-an-automatic-lookup-based-on-the-o.html

Question 1 -- how much of this actually works?
Question 2 -- what changes have there been to that answer over time, if any?

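For clarity, the behavior I'm hoping the automatic version reproduces is exactly what you get when you chain the lookups explicitly in the search language - something like this, using the same example lookup and field names as above:

    <your search>
    | lookup A first OUTPUT second
    | lookup B second OUTPUT third
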
When using the REST API, the app I'm working on uses REST API URLs like

    /splunkd/__raw/servicesNS/USERNAME/APPNAME/saved/searches

but whenever I paste these URLs into my browser to sanity check something, it doesn't work. Specifically I get an error in Firefox saying "Error loading stylesheet: Parsing an XSLT stylesheet failed.", and in Chrome the page loads blank. At least with Chrome I can view source and see the XML. Does anyone know what causes this and how I can fix it?

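In the meantime, a way to sanity-check the same endpoint without fighting the browser is the rest search command - a sketch only, with USERNAME and APPNAME as placeholders, and assuming your role can run rest. (Note the /splunkd/__raw prefix gets dropped; that part is just splunkweb's proxy path in front of splunkd.)

    | rest /servicesNS/USERNAME/APPNAME/saved/searches splunk_server=local
    | table title eai:acl.app eai:acl.owner
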
[this is posted and paraphrased from a customer who submitted this as a question to sideview support]

We use Cisco Contact Center and of course UCM, but we need to do reporting on some complex call flow that is technically all happening outside of UCCX. What is going on is that a lot of customers don't actually call the main numbers they're supposed to - instead they call one of about a dozen other DID numbers, which are different desks and stations in different buildings, and those people then forward the call on to someone who can help. The problem is a) this is all happening outside of UCCX so management can't see it, and b) the calls often really don't end well. People get transferred around a lot and then end up transferred to the general voicemail where they get angry and hang up without leaving a message.

What we'd like to do is find all the calls where a customer a) calls into one of these 12 numbers, b) gets transferred more than say 3 times, and c) ends up on the general voicemail box and goes on-hook to terminate the call. Ideally we'd also like to chart and graph things about those calls.

[this question was posted on behalf of a customer of ours, from a support request]

Here's what we are looking to do. A customer calls a doctor's office main line and hits an auto attendant (so the call gets answered on leg 1). The caller has several options they can route to, and selects our front desk. I am looking for zero-duration calls on that second call leg, to see how many calls go unanswered by staff. (But then of course I need the app to show us the whole call flow for the given call.)

Is it possible to filter the CDR for this specific data?

[Note - this is posted on behalf of a prospect of ours, from a pre-sales support thread]

I am evaluating the Cisco CDR Reporting and Analytics app and I am wondering if we can somehow run a report that monitors for a spam call and alerts us. Maybe checking for an extreme number of calls to a site within a limited time period. Is there something like this that has been done before?

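Just to make the idea concrete, the kind of scheduled search I'm imagining is something along these lines - purely a sketch, since I don't yet know the app's real index, sourcetype or field names (cisco_cdr and callingPartyNumber below are guesses, and the time window and threshold are arbitrary):

    index=cisco_cdr earliest=-15m
    | stats count by callingPartyNumber
    | where count > 50
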
The fieldformat command ( http://docs.splunk.com/Documentation/Splunk/latest/SearchReference/Fieldformat ) offers a way to affect how a certain field in your search results renders, without actually modifying the field value when it's exported or piped to outputcsv etc. However it does some weird things with postprocess, and I'm curious if anyone has run into this or figured out the root cause.

Here's an example:

    index=_internal group=per_sourcetype_thruput series=splunkd*
    | stats count by series
    | fieldformat count=if(series="splunkd_access","OH HAI",count)

This produces two fields, series and count. The fieldformat command inspects the series field and if the series is "splunkd_access", it rewrites the count field to "OH HAI".

If you send a postprocess request of "fields series count" to this job, you'd expect to get back the same results, and indeed you do. But if you send a postprocess request of just "fields count", you'd still expect the "OH HAI" logic to run, ie you'd still expect the count corresponding to splunkd_access to have been rewritten to "OH HAI", but it isn't. The "OH HAI" logic actually fails completely, as though the series field wasn't defined in the underlying job, and splunkd_access's count goes back to being the plain count number.

One explanation is if the fieldformat command acts as a flag for splunk to "sneak in" some postprocess syntax later on certain kinds of requests, and that it does so by tacking its secret "fieldformat postprocess" string onto the end of any user-supplied postprocess string. This would mean our postprocess had already removed the "series" field, and thus the supposed "fieldformat postprocess" couldn't see it.

Has anyone seen this? I ran into this in a more complicated scenario in one of our apps and it was baffling. I'd love to hear if anyone has found the root cause, found any workarounds, or knows how fieldformat is implemented in Splunk.

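To spell out that theory a bit more: if Splunk really does tack the fieldformat clause onto the end of whatever postprocess you supply, then a postprocess of "fields count" would effectively be evaluated as the following, which would explain why the series reference silently comes up empty. This is purely my guess at the mechanism, not anything I've confirmed in the code:

    fields count | fieldformat count=if(series="splunkd_access","OH HAI",count)
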
(NOTE: I am posting this on behalf of a customer of ours who asked this of us a while back)

This is in CallManager CDR. I need to calculate a kind of "non-overlapping" duration for a set of VTC calls. I don't want just the "total duration" of calls through the extension, but rather the total clock time that the number has been on 1 or more VTC calls, vs the total clock time that it has sat idle. I can't figure it out. For a while we thought the answer might be to use the 'duration_elapsed' field instead of 'duration' and 'duration_total', but those didn't work out.

By the way, we are using the "Cisco CDR Reporting and Analytics" app for Cisco Unified Communications Manager.

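In case it helps frame what I'm after, here is the rough merge-the-overlapping-intervals shape I assume the answer takes, written as an untested sketch. The <calls for the extension> part is a placeholder, and I'm guessing at which start/duration fields are the right ones to use:

    <calls for the extension>
    | eval call_start=_time, call_end=_time+duration
    | sort 0 call_start
    | streamstats current=f max(call_end) as prev_end
    | eval new_block=if(isnull(prev_end) OR call_start>prev_end, 1, 0)
    | streamstats sum(new_block) as block_id
    | stats min(call_start) as block_start max(call_end) as block_end by block_id
    | eval block_busy=block_end-block_start
    | stats sum(block_busy) as seconds_on_calls

The idea is to glue overlapping calls into contiguous busy blocks and sum the block lengths, which should give clock time on 1 or more calls rather than summed call durations.
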
We frequently have search results where, for one or more numeric fields, each row might have only one value for the numeric field or the row might have a multivalued value for that numeric field.

Here is a made-up search duplicating the basic situation. Paste this into your search bar:

    | stats count as numeric
    | eval numeric=mvrange(3,8)
    | mvexpand numeric
    | eval categorical=case(numeric%3=0,"A",numeric%3=1,"B",numeric%3=2,"C")
    | stats values(numeric) as mvnumeric by categorical

This dummy search will give you results of:

    | categorical | mvnumeric |
    | A           | 3         |
    |             | 6         |
    | B           | 4         |
    |             | 7         |
    | C           | 5         |

Note in particular that the first two rows have multivalued values and the last row has only a regular single value.

Now say we add a search of

    | search mvnumeric>0

to the end to filter this set. Since all the rows have a numeric value greater than zero - in fact, since all values are greater than zero - I'd expect all rows to get returned. However only the row with the single value gets returned. Likewise with other terms: you can see that the greater-than and less-than operators just don't work with any multivalue rows - those rows always fail to match.

Is this a known bug? Is there any magic to be worked in the search language? mvexpand offers a sort of a workaround, but | mvexpand | search mvnumeric>4 doesn't work because that'll throw away the other values and we need the whole picture on the final rows. Which leaves this sort of thing, and it's way too clunky to be of use here:

    | mvexpand mvnumeric
    | eval matchesOurExpression=if(mvnumeric>N,1,0)
    | stats values(*) as * by <the id fields by which we were grouping before we mvexpanded>
    | search matchesOurExpression=1

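One direction I've wondered about but haven't vetted: using mvfilter inside where, so the comparison is made per value without throwing any values away. Against the dummy search above that would look something like this (the tonumber is there because I'm not sure whether the values survive stats as strings):

    | where mvcount(mvfilter(tonumber(mvnumeric) > 4)) > 0

If that's legitimate, it keeps any row where at least one of the values exceeds the threshold, with the full multivalue field intact.
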
I maintain an app with a data input wizard, under the hood of which is a custom controller that can list and create normal "monitor" data inputs very reliably. I now need to expand this to listing and creating "batch" data inputs aka "sinkhole" inputs, and from what I'm seeing maybe this isn't even possible.

The REST API path for normal monitor inputs is /data/inputs/monitor and in inputs.conf these look like:

    [monitor://D:\some\path\*]
    index = foo
    sourcetype = bar

I now need to be able to also list and create "batch" data inputs. In inputs.conf these end up in the form:

    [batch://D:\some\path\*]
    move_policy = sinkhole
    index = foo
    sourcetype = bar

I cannot use a oneshot input. In this particular use case, files are ftp'ed to the given directory every minute or so in real time and I want to delete them as they are indexed. Hence batch is perfect.

1) Listing batch inputs: There seems to be no separate endpoint for these, but they do get listed (oddly) in the output results for /data/inputs/monitor. So the only way I've found to list them is to request /data/inputs/monitor and then look for the "move_policy" key being set to sinkhole.

Q1: Is there a better way?

2) Creating batch inputs: I have not found any way to do this. There is no /data/inputs/batch or /data/inputs/sinkhole endpoint, and thus no /data/inputs/batch/_new to POST to. Following the weirdness of #1 above, if I use the entity code to create a "_new" monitor input and I sneak in a "move_policy" key set to "sinkhole", that doesn't work either - it just complains: Argument "move_policy" is not supported by this handler.

Q2: Am I missing something? Is there a way to actually do what I need?

Eager for any help or any advice. If I have to conclude that such a simple data input administration task is impossible, I'll be sad and I'll have to just resort to doing filesystem operations to write the conf files directly through popen, and then prompt the user to restart the server afterwards.

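For the record, the listing hack from #1 is easy enough to demonstrate from the search bar too - a rough sketch, assuming your role can run the rest command:

    | rest /services/data/inputs/monitor splunk_server=local
    | search move_policy=sinkhole
    | table title index sourcetype move_policy
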
I'm not sure if this is a common thing, or if it reflects something that I've somehow set up weirdly, but if pretty much anything is out of whack in my SplunkJS views, I get an uncaught javascript exception "e is undefined", typically in headerview.js line 1 character 152,155. Sometimes an "e is undefined" exception will show up elsewhere though, like in mvc.js. Within headerview.js, the expression that throws it is almost always "e.fn.carousel", if that helps, and peeking inside the code, it's part of Bootstrap Carousel.

When this occurs, which is extremely often, it really hampers development. You're making progress and then suddenly you hit a brick wall, and unless there's an obvious culprit you have to just slowly back out changes until it goes away; then by looking more closely at the most recently backed-out change you can figure out what it didn't like.

Is there some workaround I can do to make the splunkjs framework handle its exceptions reliably? Does anyone have any ideas to investigate or troubleshoot? I'm running Splunk 6.2.2 on Windows. Thanks in advance.

I have several fields that are ID fields, for which both the names and the values are quite long. I need these ID fields to all be present because they are crucial for table drilldown interaction, but I want to keep them from rendering in the table, taking up space and distracting the user. Looking in the docs, examples, tutorials, example apps etc., I haven't been able to find anything that talks about this.

OK. A bit of a journey here. I am searching for a good reliable method of bucketing numeric field values into categorical ranges, with no a-priori knowledge of what the range actually is, and preferably where I can specify the number of bins rather than an explicit span.

First stop: the bin command (aka bucket). The problem here is that bin doesn't represent the intervening buckets for which no data exists. Seen from the perspective of the individual search commands, the behavior makes sense. But it's unacceptable for use in charts.

    <search terms> | bin foo bins=10 | stats count by foo

Next stop: the chart command. It can actually do this... sometimes. timechart can take a "bins" or "span" argument not just at the beginning to control _time, but also after the split-by field, to control on-the-fly bucketing of a numeric split-by. The chart command is even nuttier, allowing on-the-fly numeric-to-categorical bucketing for both the group-by and the split-by field. Here it is in action:

    <search terms> | chart bins=10 count over foo

And it works! Yay. The intervening buckets of 1-2, 2-3, 3-4 all show up in this particular case. Except... in most cases this feature completely loses its mind. Here is a good illustration. It's quite interesting behavior, but unfortunately answers will only let me post two images so I have tried to find a representatively nutty one for you:

    <search terms> | chart bins=50 count over foo

So... Is there any explanation for what makes the chart command unreliable here, and is there a way to beat it with a stick to make it reliably bucket things? Or has anyone found a good flexible way to add in the empty buckets that bin/bucket omits?

Bad solutions:

- In other questions I've seen makecontinuous put forward as a solution, but much effort seems to suggest that makecontinuous does not do anything useful, at least not in 6.2. =/ If you can make makecontinuous solve this though, please tell me how.

- Various tricks that allow for | append [| stats count | eval foo=mvrange(0,50) ] | stats sum(count) as count by foo to glue on a dummy row for each value being bucketed. While these work for particular cases, it's super clunky and I believe all such tricks require a-priori knowledge that I don't have.

The Problem: Unlike with conf, I don't think lookups have any default/local layering. So if your app has one or more lookups that are designed to be customized by users, you have a paradox. On the one hand you need to ship the lookup file with a placeholder row in it, or else the Splunk UI will display a lot of errors when the lookup(s) fail. On the other hand you can't actually ship the lookup file in the app, because then when the customer upgrades to a maintenance release of the app, the shipped lookup file will unexpectedly overwrite all of their local edits.

I'd love to hear what ideas people have had in the field to work around this problem. Plus maybe all this time there is some secret way to ship a default lookup or an ftr-generated lookup or something.

What I have done in this situation is clunky and subject to some problems. The app ships with the lookup missing, and then secretly, whenever the homepage is hit, the app runs a search like this:

    | inputlookup groups
    | append [|stats count | eval number="aaaaa" | eval group="PLACEHOLDER_GROUP" | eval name="PLACEHOLDER_NAME"]
    | dedup number group name
    | sort 0 number
    | streamstats count
    | search (name="PLACEHOLDER_NAME" AND count="1") OR NOT name="PLACEHOLDER_NAME"
    | fields number group name
    | outputlookup groups

If there is no lookup there, this will create one with a single row "aaaaa,PLACEHOLDER_NAME,PLACEHOLDER_GROUP". And if there is a lookup there, it will read the whole thing in, sort it by number and then write it out again - so it does nothing, except use a bunch of system resources for no reason. Bizarre, clunky, subject to problems, but it generally solves the problem.

Problems with that solution:

- it doesn't run at all if users use the 6.X navigation to skip over the homepage pageview entirely.
- it doesn't help you for any apps/TAs that are designed to be pushed out to indexers.
- when the lookup is very large, the resource hit can get pretty intense from hauling the whole lookup off disk and then back again.
- it's clearly evil.

And if you have a use case where you want to have some number of "default" rows, but not clobber them on subsequent upgrades, you could ship a second lookup suffixed _default, and then in your semi-evil ftr inputlookup+outputlookup search, manually merge the _default lookup into the main one.

Again - curious if anyone has found a better way to deal with the situation of shipping customer-editable lookups.

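For that last _default variant, the merge search I have in mind would look roughly like this - just a sketch, using the same hypothetical groups/groups_default names and number/group/name columns as above, and assuming the local groups lookup already exists (the placeholder search above would have created it). It leans on dedup keeping the first occurrence it sees, so the user's local rows win over the shipped defaults:

    | inputlookup groups
    | inputlookup append=t groups_default
    | dedup number
    | sort 0 number
    | outputlookup groups
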
<posted on behalf of a customer>

I see there are app-packaging instructions at http://docs.splunk.com/Documentation/Splunk/6.1/AdvancedDev/PackageApp . We already have everything in its own folder in etc/apps, everything that's a core part of the app is in default, and we have an app.conf and all that stuff. We're not moving it to Splunkbase or anything, so we don't need the screenshot and all that. But what are the best practices and strategies (or worst practices and pitfalls)?

I have a customer who has a table with some categorical fields and then three numeric fields. They want certain columns to be certain colors always, and then they need custom heatmap logic for the numeric fields. For every row, if the second number is larger than the first, they want the second cell to have a red background. Likewise if the third number is larger than the second, they want the third cell to have a red background.

The customer has tried attacking the problem by writing some custom javascript to customize the module behavior, but they're a little lost and not achieving any success there.

It seems that if you have a lot of fields being extracted automatically, like via INDEXED_EXTRACTIONS=csv or via automatic kv extraction, then beyond any fields that are explicitly mentioned in your search, Splunk 6.0 will only allow itself to automatically extract about 100 more fields.

This really prevents certain commands like fieldsummary or transpose from working properly. Basically if you have more than 100 fields, you don't know what their names are, and you need to get that list of field names or values in Splunk, you won't be able to using the Splunk search language. Whatever search language you use - fieldsummary, or stats first(*) as * | transpose - Splunk will ignore some of your extraction rules each time and you'll always end up with an incomplete list of fields. Which ones it ignores seems somewhat random, so certain fields will be appearing and disappearing from your results over time.

Does anyone know if anything can be set in limits.conf or in the search language to override this behavior? I believe it is related to these other posts:

http://answers.splunk.com/answers/82252/how-many-field-extract-in-splunk
http://answers.splunk.com/answers/117884/fields-not-automatically-extracting

Also, I've already tried setting maxcols in the [kv] stanza in limits.conf and it has no effect here - probably because that key only affects keys being generated by kv and autokv configurations, and this is generated via INDEXED_EXTRACTIONS=csv.

There is one very limited workaround that I have found - if you mention all the fields in your search explicitly somehow, like with an enormous fields clause, this forces Splunk to extract them, no matter how many of them there are. However that doesn't help situations where you really don't know in advance or with certainty what they will be.

In my particular case there is a csv sourcetype where we don't know in advance what the fields are going to be. But we need the field list, and using the search language itself to get it is the only way we have. Depending on various factors there are hundreds of fields that might be present in the sourcetype, and on any one customer environment there will be about 100-140 fields present.

This is in regards to using the streamstats command with a "by" clause, and at the same time specifying window=N to tell it to only compute the statistics using the N most recent rows.

The Splunk docs for streamstats say that the window will take into account the "by" field. See here under "More examples": http://docs.splunk.com/Documentation/Splunk/6.0.1/SearchReference/Streamstats

Specifically it says:

    Example 1: Compute the average value of foo for each value of bar including only the last 5 events with that value of bar.
    ... | streamstats avg(foo) by bar window=5 global=f

However this does not seem to be the case. When I use window=N with a by clause, the logic around window=N seems to ignore the by clause and it only looks at the 5 previous rows regardless of what value they had for the by field. Of course, depending on your sort order, those rows may or may not have the same value for the "by" field as the current row, and when streamstats calculated the statistics for those 5 rows, it does correctly discard rows whose by fields don't match. The end result is confusion!

Does anyone know whether the docs are wrong or whether this is a bug in streamstats? And can anyone think of a workaround? I basically need this to process rows that have _time, deviceName, and a field called isBlank that is either 1 or zero:

    | streamstats current=f window=24 sum(isBlank) as rollingBlankHourCount by deviceName

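One thing I notice while writing this up is that the docs example includes global=f and my real search doesn't. I haven't verified whether that's actually the difference, but for completeness, here is what that variant would look like applied to my search:

    | streamstats current=f window=24 global=f sum(isBlank) as rollingBlankHourCount by deviceName
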