sideview's Topics


App developers can use restmap.conf to define custom REST endpoints on splunkd's port, aka the management port (e.g. https://localhost:8089). However, there doesn't appear to be any mechanism to use restmap.conf to do the same on SplunkWeb's port (e.g. http://localhost:8000). I know that I can get what I need by creating a custom UI module: I can package a custom UI module in my app, custom modules can have Python handlers, and that Python will respond to requests at http://localhost:8000/en-US/module/system/Splunk.Module.MyCustomModule/render. But I'm reluctant to create a custom UI module that is designed never to be used from the UI. Plus this would leave me no way to associate relevant capabilities with the endpoint, a security feature which restmap.conf does offer. Is there a third way that I'm missing, i.e. a way to hit a restmap.conf endpoint from some proxied URL on SplunkWeb? For instance, splunkd's search API is all accessible from SplunkWeb via a little proxy that it has under /api/search (e.g. http://localhost:8000/en-US/api/search/jobs/<sid>/results), so maybe some similar mechanism exists for endpoints created by restmap.conf?
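For comparison, a restmap.conf endpoint is normally reached directly on the management port; a minimal sketch of the kind of request I mean, with a placeholder path and credentials, assuming basic auth against splunkd:

    # hits the custom splunkd endpoint directly on port 8089; what I'm looking for
    # is an equivalent URL that goes through SplunkWeb on port 8000 instead
    curl -k -u admin:changeme https://localhost:8089/services/my_path
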
I'm trying to piece things together from the restmap.conf docs to get a working custom endpoint that I can use. Note that I do not want to use this with setup.xml; I just want a plain old endpoint that extends splunk.rest.BaseRestHandler, not an EAI endpoint. http://www.splunk.com/base/Documentation/4.2.1/admin/restmapconf Following what's written in restmap.conf I think I've done everything right, and I've read through the doc a few times, but no luck. I get a 500 response when I go to https://localhost:8089/services/my_path, saying "ImportError: No module named MyFileName", and I don't see how to debug or troubleshoot anything. Here's my stanza in restmap.conf:

[script:random_unique_name_like_say_fred]
match = /my_path
handler = MyFileName.MyClassName
requireAuthentication = true

and in $SPLUNK_HOME/etc/apps/<my_app>/default/rest/MyFileName.py there is a class defined called MyClassName that extends splunk.rest.BaseRestHandler. The Python is pretty simple, and running it manually it seems free of syntax errors. Ideally, can someone point me to an app on Splunkbase that has successfully set up a non-EAI custom REST endpoint (i.e. one that is NOT used from guided setup, aka setup.xml)? Failing that, can anyone see what I'm doing wrong, or tell me if there are any tricks to get some kind of debugging or troubleshooting going?
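For what it's worth, MyFileName.py doesn't contain much; a minimal sketch of its shape, assuming the handle_GET / self.response interface of splunk.rest.BaseRestHandler works the way the 4.x examples suggest (class and path names are the same placeholders as above):

    # $SPLUNK_HOME/etc/apps/<my_app>/default/rest/MyFileName.py
    import splunk.rest

    class MyClassName(splunk.rest.BaseRestHandler):
        # invoked for GET requests to /services/my_path
        def handle_GET(self):
            self.response.setHeader('content-type', 'text/plain')
            self.response.write('hello from my_path')
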
If I go to my Answers profile page, then 'edit user settings' and then 'email notification settings', there is a section called "Notify me when:" with a bunch of conditions. One of these conditions is "A new question matching my interesting tags is posted". This would be great, because I could then add various tags and get notified about them instantly; most notably I could add my app's tags and thus answer my users' questions much more rapidly. However, in the new Answers site there doesn't actually seem to be any way, on any screen I can see, to add "interesting tags" anymore. Am I just missing it?
I'm interested in setting up a Splunk server where each customer would have their own index and would only be able to search that one index. However, we'd definitely need to build the overall system such that the indexes, users and roles could be created on the fly, without restarting the server. I know you cannot create indexes on the fly in 4.1 (i.e. without restarting), and although the docs don't say that you can do this in 4.2, I thought I'd ask: can this be done in 4.2, and if so, how would you go about it?
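If 4.2 does support this, I assume it would be driven through the indexes endpoint of the REST API; a sketch of the call I have in mind (the index name and credentials are placeholders, and whether a restart is still needed afterwards is exactly the open question):

    # create an index for a new customer via the management port
    curl -k -u admin:changeme https://localhost:8089/services/data/indexes -d name=customer_acme
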
I'm encountering some difficulties trying to get setup.xml working reliably across multiple apps. When I take a step back and follow the simpler instructions that involve no custom REST endpoint (http://www.splunk.com/base/Documentation/4.2.1/Developer/SetupExample1), I hit a weird message when the form is submitted. Here's my endpoint and my entity; I'm just writing a single macro:

<block title="My Block Title" endpoint="admin/macros" entity="my_macro_name">
  <input field="definition">
    <label>Foo</label>
    <type>text</type>
  </input>
</block>

The setup.xml does load correctly, but submitting the form gives:

Encountered the following error while trying to update: In handler 'localapps': Argument &quot;disabled&quot; is not supported by this handler.

(The quote characters do indeed appear escaped like that, in case anyone's wondering.) Can anyone shed any light on what the error message is talking about?
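For context, all the setup form is meant to do is end up writing a stanza along these lines into the app's local macros.conf (a sketch of the intended result; the definition value is just a placeholder):

    [my_macro_name]
    definition = index=main sourcetype=foo
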
I'm not sure if this is something that I did on this system, or maybe something the Windows app did, but why is all my local WinEventLog data getting indexed twice? Everything comes in as both sourcetype="WinEventLog:Security" and sourcetype="WMI:WinEventLog:Security". Can someone tell me which one I should turn off? At a higher level, it's quite silly that all of the source keys for the Windows inputs are set to the exact same value as the sourcetype. I would expect something more along the lines of:

sourcetype="WinEventLog:Security" source="WMI"
sourcetype="WinEventLog:Security" source="localEventLog"
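If it does turn out that both a native event log input and a WMI-based input are enabled, I assume the fix is to disable one of them in local config; a sketch, where the wmi.conf stanza name in particular is a guess and should be checked against the actual wmi.conf on the system:

    # inputs.conf -- keep the native local event log input
    [WinEventLog:Security]
    disabled = 0

    # wmi.conf -- disable the WMI-based collection of the same log
    [WMI:LocalSecurity]
    disabled = 1
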
I'm writing an app that's based on a scripted input, and I'm trying to just dump out my key=value pairs so the field extraction will be handled by autoKV:

someField=bar someOtherField=12.4

Some of my field values can have space characters in them, but that's OK; if you dig around in the docs, the answer for that is to wrap the values in quotes, and then autoKV will be tolerant of space characters:

someField="foo bar" someOtherField="12.4"

However, if the field values also contain quote characters, I don't think there's any way to get autoKV to index the value correctly. E.g. someField="foo \" bar" results in a field value of "foo \" . I had hoped that autoKV would be smart enough to extract it as foo " bar or, failing that, as foo \" bar. Any ideas, or do I have to switch to a CSV approach?
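To make this concrete, the scripted input writes its output roughly like this; a sketch in Python (the real script may differ, and the field names and values are made up) showing the naive backslash-escaping that autoKV then mis-parses:

    import sys

    def emit(pairs):
        # wrap every value in quotes so autoKV tolerates embedded spaces;
        # backslash-escaping embedded quotes is the part autoKV does not honor
        parts = ['%s="%s"' % (k, v.replace('"', '\\"')) for k, v in pairs]
        sys.stdout.write(' '.join(parts) + '\n')
        sys.stdout.flush()

    emit([('someField', 'foo " bar'), ('someOtherField', '12.4')])
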
So I have a dashboard and I want to display the most recent value of fieldA for each value of fieldB and fieldC, shown as a table where values of fieldB are down the left and values of fieldC are across the top. This is simple enough; you just use the chart command:

* | chart last(fieldA) over fieldB by fieldC

and we should end up with a table like:

fieldB        | fieldC_value1 | fieldC_value2 | fieldC_value3
fieldB_value1 | 128.3         | active        | 0.412
fieldB_value2 | 99.3          | active        | 0.31

Except my data has some non-numeric values of fieldA (note the 'active' values above), and the chart command doesn't like non-numeric values; it throws them away. So what happens is, since there is no numeric value anywhere in that column, my 'active' column disappears entirely. Likewise chart last() of a non-numeric field is always null, whereas stats last() of the same field is always correct. Once you know to look for this spooky behavior it's pretty easy to reproduce. Here's an example:

1) * | head 10000 | chart last(date_hour) over date_second by date_minute

(You may have to adjust the 10000, but this should show a result with 60 rows, one for each second.)

2) Sneak in the following eval clause, which sets the date_hour field to "mayhem" whenever the date_second field is equal to 0:

* | head 10000 | eval date_hour=if(searchmatch("date_second=0"),"mayhem",date_hour) | chart last(date_hour) over date_second by date_minute

Now the entire 0th row disappears and you only have 59 results. Question: is there some search language trick that can get me the end result I want? I don't actually know any of the values ahead of time. Is there some incantation I can use to simply turn off this behavior in chart? NOTE: this is essentially the same issue that I brought up in http://answers.splunk.com/questions/2295/how-come-some-fields-disappear-when-they-go-into-timechart-chart, except in that case I could work around the problem by using stats by _time, and here I don't see any workaround.
If I go to Launcher in my 4.1.6 instance and go to "Browse more apps", it lists a ton of apps (125), but not one of them has been updated in the last 7 months. I know there are a ton of apps on Splunkbase that have been updated very recently, but none of these are appearing in Launcher. The 3 most recent are the only ones that were last updated during 2010: "Netcat Shell" (last updated 07/30/2010), "TCP or UDP Sending" (02/25/2010) and "JMS Receiver for indexing" (02/03/2010). Then there are 41 apps which were last updated in 2009, 45 last updated in 2008 and 36 last updated in 2007. So this list really is just the old and dead apps; any app that hasn't been touched in that long is for all intents and purposes dead. Is this just a bug I'm experiencing, or is it a known problem?
Given a set of clientip values from internal IPs, external IPs, as well as different classes of internal networks on different interfaces: a) what's the cleanest and most efficient way to classify each clientip as internal/external? And b) what's the best way to put an actual class=A, class=B, class=C field in there?
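One direction I can see is eval with cidrmatch; a sketch, where the subnets and class labels are placeholders for the real network layout (part of what I'm asking is whether a lookup or some other approach is cleaner than this):

    * | eval ip_scope=if(cidrmatch("10.0.0.0/8", clientip) OR cidrmatch("172.16.0.0/12", clientip) OR cidrmatch("192.168.0.0/16", clientip), "internal", "external")
      | eval class=case(cidrmatch("10.1.0.0/16", clientip), "A", cidrmatch("10.2.0.0/16", clientip), "B", cidrmatch("10.3.0.0/16", clientip), "C")
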
So I've been using CHECK_FOR_HEADER=true for various csv data in some apps I'm building. I've learned a great deal about it recently, but I still have a lot to learn, and I wonder if anyone can help me with advice about the following problem. I'm using guided setup so that the user setting up the app can tell Splunk up front which column to use as the timestamp. Specifically, the guided setup writes a value to TIME_PREFIX, and all is well. (I can't really let Splunk figure it out because there are a couple of other epoch-time values in there and I can't allow the ambiguity.) Now the data comes in, and CHECK_FOR_HEADER does its really weird thing where it looks at the props stanza [foo], looks at the data, writes another stanza to etc/apps/learned, and calls the sourcetype [foo-2]. (http://www.splunk.com/base/Documentation/4.1.7/Admin/Extractfieldsfromfileheadersatindextime#props.conf) Another key ingredient is that I leave links back to the setup page -- the user can always run the app setup again later. The problem is that the CHECK_FOR_HEADER magic means that the real config is now hidden in etc/apps/learned. My guided setup's custom handler can write to the main props stanza to its heart's content, but it'll never affect the behavior of this 'learned' sourcetype. This would maybe be OK if there were any way for the user to go edit the etc/apps/learned/props.conf stanzas in Manager, but it looks like there isn't (that is question #1). So I'm facing a choice of various evils, and I don't know much about any of them:

1) Try to make a custom manager page that can actually dredge up the learned stanzas. The custom manager XML side of this is fine, but the fact that etc/apps/learned is totally invisible in the normal manager pages makes me think that EAI won't even give the stanzas back to me, or that it might not let me edit them, or that there might be evil consequences thereof (that is question #2).

2) In my custom REST endpoint, pull out any and all 'learned' stanzas and push config changes to them too as necessary (possibly the same problem as above).

3) Tell the user that they have to go dig around in etc/apps/learned and hand-edit props.conf. Sadness.

4) Abandon CHECK_FOR_HEADER, switch to setting up the app after the data is indexed, and have some crazy system on setup where I retrieve the first events and turn that text into an extraction. (Doable, but nasty.)

Any paths less taken out there? Advice, EAI lore, and/or cautionary tales? Thanks in advance.
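To illustrate the split, this is roughly what ends up on disk (a sketch; the stanza names and the TIME_PREFIX value are placeholders):

    # etc/apps/<my_app>/local/props.conf -- written by my guided setup handler
    [foo]
    CHECK_FOR_HEADER = true
    TIME_PREFIX = ^(?:[^,]*,){3}

    # etc/apps/learned/local/props.conf -- written by the CHECK_FOR_HEADER magic,
    # invisible in Manager; my setup handler never touches this one
    [foo-2]
    ...
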
I have lots of little searches and postProcess searches all over the place where the request only needs a single sorted field out of a larger datacube set (i.e. using one result set to populate a series of pulldowns or a little clickable table). I used to do

| stats count by fieldname | fields - count

but since the whole thing is in a macro anyway (with one argument), I switched a while ago to doing

| dedup fieldname | fields fieldname | sort fieldname

my reasoning, I think, being that the counting was unnecessary work. Anyway, my questions are:

- Is there more compact and/or better performing search language that can do the same thing as this trio?
- If dedup, fields and sort is sensible, is there an optimal order in which these three operations should always be done?
- If there is a best practice with respect to ordering those three commands, where/when is the difference significant, and why?
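In case the macro wrapper matters, it's nothing special; a sketch of the shape it has in macros.conf (the macro name is a placeholder):

    [distinct_sorted_values(1)]
    args = fieldname
    definition = dedup $fieldname$ | fields $fieldname$ | sort $fieldname$
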
In some conditions the head command knows that the search has already produced all the information that the user asked for, and it reaches back into the search pipeline and shuts down the search. E.g. if you run

index=_internal

over all time, it'll take a really long time. But if you run

index=_internal | dedup group | head 5

it'll complete in a few seconds. To take another example,

index=_internal | stats count by group | head 5

is pretty similar, but the system knows that the counts are still going to increase, so it lets this search run to completion. Is there a good summary in the docs or in search.bnf that explains under what circumstances we can rely on this behavior?
I have multiline events where there's a fair bit of auto-KV extraction that is good, but there's a lot of noise as well. I can create regexes to match the really noisy bits, and this works well; I nearly get perfect coverage on the high-value fields that I actually need. The problem is that even when I have a regex matching, sometimes the same field appears in a foo=bar pair further down in the event, and the autoKV match is clobbering my more explicit regex match. Can someone point me in the right direction? (Obviously the real answer is to make the logging less deranged, but that's not an option at the moment, unfortunately.)

-------------------------------------
Fields: Field=GoodValue;foo=bar;jackiechan=theman
AnotherGoodField = AnotherGoodValue
User = bob
.....
Field : BadNoisyValueThatClobbersMyGoodValue
-------------------------------------

One idea: can I tell the autoKV stuff not to pay attention to colons? All the colon stuff is hideously noisy in this sourcetype.
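For reference, the explicit extraction is along these lines; a sketch in props.conf with the sourcetype name and regex simplified (the question is why the later autoKV match on the 'Field : BadNoisyValue...' line ends up clobbering it):

    [my_sourcetype]
    EXTRACT-goodfield = Fields: Field=(?<Field>[^;]+)
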
So Splunk of course has an important but subtle distinction between 1) rows that are straight out of the index (these rows are events) and 2) rows that have been transformed or otherwise altered by the so-called 'transforming' search commands, e.g. stats (these are results). In most cases this just causes small things like the automatic switching into the table view in the search UI, or the 'build report' link changing to 'show report', etc. However, it causes some problems when you're developing views, most notably with event renderers. This is because event renderers will only run against event rows, but they are extremely useful (possibly more so) when used against result rows. This is true even though (as of 4.1.something) you can give EventsViewer a param <param name="entityName">results</param> and it will render the results as though they were events; even when you do that, it still won't run the event renderers against the result rows like one might expect. So what I always resort to in apps is this ugly little thing (my apologies for any aneurysms this causes):

foo NOT foo | append [<search> | stats something something] | eval eventtype="my_custom_event_renderer"

and then I let EventsViewer render the events. The append command has a neat side effect of bleaching off the events-vs-results nature of the appended rows, so they get appended into an empty 'events' set and thus get laundered into events. But that's a little nutty, and I've always wondered if there was some search command that could do this without all the evil bits, i.e. where I could take

<search> | stats count by user

and tack on

<search> | stats count by user | makeEvents

and the stats rows would then really become events. This would have obvious and strange implications for the MultiFieldViewer interactions in the sidebar, as well as for the generic drilldown intention logic and even for the addterm intentions that are used when you click on 'event fields'. (You can see the crazy bugs this would introduce just by trying the "foo NOT foo | append" trick yourself in the search UI, then clicking on values in the sidebar or in the events and watching what it does.) However, arguably the inconsistency has been in the product ever since EventsViewer was changed to allow <param name="entityName">results</param>. Probably the better way to fix this overall would be to follow through and improve EventsViewer so that a) it processed event_renderer stanzas even when the rows are results, and b) it did the right things in 'results' mode with respect to not sending intentions around as though things were 'events'. That would really be best, as I'd have my cake, and the system would become more consistent and complete, all without nasty "foo NOT foo" tricks and without breaking the core assumptions of the sidebar and the intentions... hope this makes sense. But until the day that all happens, is there some less hideous search language trick I could use?
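For anyone who hasn't run into it, this is the EventsViewer setting I'm referring to; a minimal sketch of the relevant bit of view XML (the surrounding module nesting and other params are omitted):

    <module name="EventsViewer">
      <param name="entityName">results</param>
    </module>
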
There's a scripted input that I wanted to create a while ago, but it had to do some 'setup' stuff at the beginning, and this setup stuff took longer than the schedule I needed to run the script on. Naturally this was problematic. We tried briefly having the script just sleep periodically and then go back to returning data. However, ExecProcessor didn't seem to like this arrangement; specifically, none of the data we were returning on stdout would get indexed into Splunk until the script was actually killed. Is that the way it's supposed to work? This was a Windows .bat file as the scripted input, and it was on Splunk 4.1.5. So assuming I'm not crazy and by default the data doesn't get indexed until the script terminates, is there then any way in 4.1.5, or perhaps the upcoming 4.2, to have a scripted input that is constantly running and returning data rather than running on a schedule? Ideally such a script would be somewhat managed by ExecProcessor, i.e. restarted if it ever did terminate or get killed.
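If a constantly running script is viable at all, I imagine it having roughly this shape; a sketch in Python rather than a .bat file, where I'm assuming (and this is really the question) that ExecProcessor will index the flushed lines before the script exits:

    import sys
    import time

    # the expensive one-time 'setup' work would happen here, once

    while True:
        # emit one event per line and flush, so nothing sits in the stdout buffer
        sys.stdout.write("ts=%d someField=bar someOtherField=12.4\n" % int(time.time()))
        sys.stdout.flush()
        time.sleep(30)
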
I'm curious if anyone has any advice, cautionary tales, or good examples about how to go about indexing data from a database, particularly an Access database. Is it better to write it as a scripted input doing ODBC? This seems perfectly straightforward, but I know Splunk's ExecProcessor gets a little unhappy and even ornery when the script doesn't want to exit, and I wonder if anyone's run into trouble here. In my case I'd need to pull in new rows from the DB at least every minute, if not every 30 seconds, and this seems more aggressive than most scripted inputs I've seen. The other way that springs to mind is to write a little Windows service that runs constantly, polls the DB every 30 seconds and sends the data over TCP to Splunk, which doesn't seem that hard either. So anyway, I'm looking for any recommendations, examples or stories that you have. The documentation talks about this a bit (http://www.splunk.com/base/Documentation/4.1/AppManagement/DataSources#Example_of_tailing_database_inputs), it's been mentioned on Answers (http://answers.splunk.com/questions/2448/can-splunk-monitor-mssql-database-content), and there is an app on Splunkbase (http://splunkbase.splunk.com/apps/All/3.x/app:Script+for+database+inputs), but that app dates back to the 3.x days, which scares me a bit because MAN, that was a long time ago. Thanks in advance for any thoughts, recommendations, examples.
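For the scripted-input-over-ODBC option, I'm picturing something like the following; a sketch that assumes the pyodbc module and the Access ODBC driver are available on the box, where the file path, table and column names are all placeholders (it also assumes new rows can be identified by an increasing id column, with the last seen id persisted somewhere between runs):

    import pyodbc

    # connect to the Access file via the ODBC driver
    conn = pyodbc.connect(r"DRIVER={Microsoft Access Driver (*.mdb)};DBQ=C:\data\mydb.mdb")
    cursor = conn.cursor()

    # only pull rows we have not emitted before; last_seen_id would be read from
    # and written back to a small state file between runs
    last_seen_id = 0
    cursor.execute("SELECT id, ts, message FROM events WHERE id > ?", last_seen_id)
    for row in cursor.fetchall():
        print('id=%s ts="%s" message="%s"' % (row.id, row.ts, row.message))
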
Is there a charting configuration that will decrease the size of the big square markers that line charts get when you have charting.chart.showMarkers set to true? I'm developing a chart and there will often be a lot of null values from point to point. I have to leave charting.chart.nullValueMode set to 'gaps', because in my data it would be very misleading to set it to either 'zero' or 'connect'. Also, there will sometimes be a point that has null values on either side; due to how Splunk's charting works, unless I set charting.chart.showMarkers to true, such points end up invisible. So I have to have showMarkers on. Unfortunately the markers are quite big and chunky, and I want to make them a lot smaller. Based on the charting reference pages, i.e. http://www.splunk.com/base/Documentation/latest/Developer/CustomChartingConfig-chartlegend#valuemarker and, for brush lore, http://www.splunk.com/base/Documentation/latest/Developer/CustomChartingConfig-FontColorBrushPalette#Brush_palette, it seems like there must be a way to do simple things like this, but the docs are a little short on examples here.
My problem seems very similar to http://answers.splunk.com/questions/4175/redirects-before-and-after-our-apps-setup-xml-with-a-custom-eai-endpoint-is-sh but either it's not the same thing or that thing wasn't fixed in 4.1.5 (because I'm using 4.1.5). I have my setup.xml posting to my custom endpoint, and all is well. But from that endpoint I don't have any user context. Specifically, when I call splunk.auth.getCurrentUser(), I get back a placeholder user whose name is 'UNDEFINED_USER', whereas I'd expect to get back the user currently logged into SplunkWeb. And obviously if I make namespaced EAI calls with that null user, I can't do much with them because he doesn't exist. For troubleshooting purposes, if I throw the following line in there:

hackSessionKey = splunk.auth.getSessionKey("admin", "changeme")

then I do indeed get admin's user context and everything works fine, but of course that is not a solution. So can anyone tell me how to get the proper user context from here? Thanks in advance.
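What I'm hoping for is something along these lines; a sketch that assumes the endpoint is a splunk.admin.MConfigHandler subclass and that getSessionKey() and self.appName behave the way I think they do in this code path (which may be exactly where my understanding is wrong):

    import splunk.admin
    import splunk.entity

    class MySetupHandler(splunk.admin.MConfigHandler):
        def handleEdit(self, confInfo):
            # hoping this yields the SplunkWeb user's session, rather than the
            # UNDEFINED_USER placeholder that getCurrentUser() gives me
            sessionKey = self.getSessionKey()
            macros = splunk.entity.getEntities('admin/macros', namespace=self.appName,
                                               owner='nobody', sessionKey=sessionKey)
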
Reverse-engineering this stuff from the logs and the existing usage in SplunkWeb's Python code, I see a lot of things use the big flat 'admin/foo' paths to get/set data in EAI. However, I also know vaguely from overhearing conversations at Splunk that this big flat list of 'admin/foo' endpoints is considered less than ideal, and I thought I overheard that for each of them there is a more fundamental endpoint that we're all supposed to use. Another data point: I know that I can usually go to https://localhost:8089/servicesNS/admin/<app_name>/data, click past the stern security warnings from Firefox, and there I should be able to drive to the stuff I want. Once I've found it, it's easy to determine the proper EAI path by just looking at the browser URL. The problem is that I can't find the 'proper' path for macros, and I can't find any path at all for extractions that are defined in props.conf. E.g.:

1) If I want to get a macro using the splunk.entity class in Python, the only path I know is 'admin/macros', as in

splunk.entity.getEntity("admin/macros", "my_macro_name", namespace="my_app_name", owner=splunk.auth.getCurrentUser()['name'])

2) I have an extracted field that is defined in my app, and I cannot find a way to get at it at all from EAI. (Maybe it would be there if I had defined it over in transforms.conf and merely referred to it from props.conf?)

Thanks in advance for any and all help.
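The closest thing I've found for the props.conf case is the raw properties endpoint, but I haven't confirmed whether that counts as the 'proper' path or whether it exposes extractions in a usable way; a sketch of what I mean (the app, owner and stanza names are placeholders):

    curl -k -u admin:changeme https://localhost:8089/servicesNS/nobody/my_app_name/properties/props/my_sourcetype
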