All Posts

Without knowing more about your JavaScript, is something happening where you are doing a require of a module that should be included with an import? Or you might need to load abcxyz.js in a different way because of its contents. This answer over on StackOverflow addresses the more generic JavaScript quirkiness you could be running into.
You can use the gauge command to set your limits. Here is a dummy search where I make up some decibel data:

index=_internal
| eval decibels=(-1 * date_minute)
| stats avg(decibels) as avg_decibels
| eval avg_decibels = round(avg_decibels,2)
| gauge avg_decibels -100 -75 -50 0

I can then use that for the radial chart:
Unfortunately it does not work. Using a subsearch changes the source value in the query, but not the one used by the collect command.
See this document: https://docs.splunk.com/Documentation/Splunk/latest/admin/inputsconf#Event_Log_filtering Just be aware that there are two different formats; which one you use depends on whether you ingest your events in "old-style" plain text format or as XML.
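For example, a minimal inputs.conf sketch on the forwarder - the stanza, the EventCode 1234, and the message pattern are all hypothetical placeholders, so adjust them to your channel and noisy event:

```ini
# inputs.conf on the Universal Forwarder (values are examples only)
[WinEventLog://Application]
disabled = 0

# Simple format ("old style" plain text ingestion): drop events by code
blacklist = 1234

# Advanced key/regex format (also applies when ingesting as XML):
# blacklist1 = EventCode="1234" Message="some noisy pattern"
```

Deploy it to the forwarders, restart them, and confirm the filtered events stop arriving.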
Thank you @bowesmana for your comprehensive reply and example! It works fine, but unfortunately it still doesn't get the logarithmic scale on the overlay right. While setting <option name="charting.axisY2.scale">log</option> does not yield any validation error, it simply doesn't work as expected. Your example image also shows a linear secondary Y axis. When editing this dashboard in the graphical editor, I get an error when I try to change the Y axis to logarithmic. Maybe there is just no way in Splunk to do what I want to do?
Have you tried forcing a page reload so your browser fetches resources again, clearing history, etc.? That sort of "failed to load source" error for visualizations has happened before: Solved: Calendar Heat Map - Custom Visualization: How do I... - Splunk Community
We are using the Splunk Universal Forwarder on Windows servers to capture Event Viewer logs into Splunk. We have a known issue with a product causing a large number of events to be recorded in the Event Viewer, which are then sent into Splunk. How can we filter out a specific event on the Universal Forwarder so that it is not sent into Splunk?
In a modified search_mrsparkle/templates/pages/base.html, we have a <script> tag inserted just before the </body> tag, as follows:

<script src="${make_url('/static/js/abcxyz.js')}"></script></body>

with abcxyz.js placed in the search_mrsparkle/exposed/js directory. The abcxyz.js file has the following code:

require(['splunkjs/mvc'], function(mvc) { ... }

which performs some magical stuff on the web page. But when the page loads, the debugging console reports "require is not defined". This used to work under SE 9.0.0.1 (and earlier) but now fails under SE 9.1.1. Yes, we realize we are modifying Splunk-delivered code, but we have requirements that forced us to take these drastic actions. Does anyone have any ideas on how to remedy this issue? @mhoustonludlam_ @C_Mooney
No. You can't do that. You need a constant parameter for the collect command. If you want to generate it dynamically, you need to do a subsearch from which you return the value of the parameter (the subsearch is executed before the main search). Another option is to use the collect command with output_format=hec - then you can specify your metadata fields on a per-event basis, but that's more complicated. See https://docs.splunk.com/Documentation/Splunk/9.1.1/SearchReference/Collect Collect is generally a relatively tricky command with some non-obvious restrictions (and it consumes license if you use a sourcetype other than the default stash one), so it's worth reading the docs thoroughly and testing it in a dev environment before trying to run it in prod.
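A rough sketch of the subsearch approach (the index names and the value "abcd" are made up, and I have not tested this exact incantation, so verify it in a dev environment first). The subsearch runs before the main search and its output is substituted as literal text, so | return source=original expands to source="abcd" as an argument to collect:

```
index=123
| collect index=qaz [ | makeresults | eval original="abcd" | return source=original ]
```

Note the subsearch has to compute the value independently of the main search results, since it is resolved before the main search runs.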
1. There are no samples of either the original data or the search results, so we can't know what you mean.
2. Splunk does not manipulate data on its own unless it's configured to do so. We don't know your configuration, so we can't tell you what's going on during the onboarding process. Did you check the configuration for the sourcetype, source and host in question? Are you referring to raw data, search-time extracted fields or indexed fields? We have no idea what's going on because you haven't shown anything apart from a simple search (which we can't evaluate without knowing the events) and some random timestamps.
I edited my question. That works in the two eval parameters but not on the source parameter in the | collect command.
As with a programming language (writing searches in SPL is a form of programming, after all), the order of operations does matter. So | eval a=b, c=a will yield different results than | eval c=a, a=b
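A quick analogy in Python (the variable names mirror the SPL; this just illustrates the sequencing, not Splunk itself):

```python
b = 1

# | eval a=b, c=a  -- assignments happen left to right
a = b   # a becomes 1
c = a   # c copies the NEW value of a, so c == 1

# | eval c=a, a=b  -- here c is set before a is reassigned
a = None  # suppose a starts out unset
c2 = a    # c2 copies the OLD value of a (None)
a = b     # only now does a become 1

print(c, c2)  # 1 None
```

The same left-to-right rule applies within a single eval with multiple comma-separated assignments.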
How do I assign the value of the field original to the source in the | collect statement?

index=123
| eval original=abcd,
| collect index=qaz source=original
I don't have any 500s in my _internal index (this is not a flex... just a fresh install before I have had a chance to break anything). So this is what my results look like: Maybe for the time range you don't have any 5xx errors? If I flub the query a little more in my environment and change the boolean criteria a bit in the SPL to be >=300 AND <400 (see highlighted section), then it works correctly for me:
Can you provide a screenshot of the event data within Splunk, and what it looks like within the file? If necessary, redact anything private. It would also help if you could have the Splunk default fields selected so they appear in-line with your event data (host, index, linecount, punct, source, sourcetype, splunk_server, timestamp). I'm having a difficult time visualizing only the timestamp portion being different between two events and one log file.
Hi Forum, I have written a script that pulls the receive power off optical transceivers every hour. All is well with this except, as the values are a measurement of loss, they are negative values in decibels. I would really like to represent this with the single value radial - I can get it to work perfectly with a marker gauge, but having that "rev counter" type representation would be not only so cool but so useful to get power readings at a glance on our long-range transmission kit. It's such a perfect representation, I think, for this kind of measurement, and would really appeal to that more "scientific" engineering type of audience. When I use the single value radial I cannot for the life of me work out where I can adjust the scale (ideally -40dBm to 0dBm). I just expected this to be like managing any other sort of float (I am working with a decimal number, not a string or anything); it just happens to be a negative value. Am I just missing something really silly? Any help would be gratefully received - I'm using Dashboard Studio if that makes a difference. Thank you
@_JP Hi, the query you shared works the same as the one I shared. The percentage values are showing 0. Is it because we need to add decimal places after the 0? I tried using (fourxxErrors / TotalRequests) * 100, 2) instead of (fourxxErrors / TotalRequests) * 100, 0) but no use. Do you have any other ideas?
I think the issue is more complicated than that. I understand not to look for internal - that is not the issue. The issue is that Splunk generates different data from the original source, with a different test date which is NOT in the file. It has to do with the cluster environment. Is anyone a super expert in such setups?
Remove all the ticks/quotes from the field names in your SPL - for what you are doing they aren't necessary. I only quote my field names if they contain whitespace or other special characters. Once I did that I was able to "fake" your search in my environment and get results:

index=_internal status=*
| rename status AS HTTPStatus
| stats count(eval(HTTPStatus >= 400 AND HTTPStatus < 500)) AS fourxxErrors,
        count(eval(HTTPStatus >= 500 AND HTTPStatus < 600)) AS fivexxErrors,
        count AS TotalRequests
| eval fourxxPercentage = if(TotalRequests > 0, (fourxxErrors / TotalRequests) * 100, 0),
       fivexxPercentage = if(TotalRequests > 0, (fivexxErrors / TotalRequests) * 100, 0)
| table fourxxPercentage, fivexxPercentage

Also, I just added this at the beginning to turn miscellaneous data in _internal into events that "look" like yours for this example's purpose:

index=_internal status=* | rename status AS HTTPStatus
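If it helps to sanity-check the percentage math outside of Splunk, here is a rough Python equivalent of the stats/eval logic above (the status codes are made-up sample data):

```python
# Made-up sample of HTTP status codes standing in for the HTTPStatus field
statuses = [200, 404, 500, 503, 301, 404, 200, 200]

# Equivalents of the stats count(eval(...)) clauses
fourxx_errors = sum(1 for s in statuses if 400 <= s < 500)
fivexx_errors = sum(1 for s in statuses if 500 <= s < 600)
total_requests = len(statuses)

# Equivalents of the eval ... if(TotalRequests > 0, ..., 0) clauses
fourxx_percentage = (fourxx_errors / total_requests) * 100 if total_requests > 0 else 0
fivexx_percentage = (fivexx_errors / total_requests) * 100 if total_requests > 0 else 0

print(fourxx_percentage, fivexx_percentage)  # 25.0 25.0
```

If both counts really are non-zero and the search still shows 0, it usually means the count(eval(...)) clauses matched nothing - which is why unquoting the field names fixed it for me.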
I am not seeing attachments or other screenshots. Usually when I see duplicate events like this, it has been because a file was replicated somehow "underneath" Splunk within a directory, where Splunk thinks it is a new file and starts indexing it again. Or, I've seen this happen if you have the log files going to a shared mount point and two different Forwarders are pointing at the same files. A few questions to help you troubleshoot:
- You mention splunk-server. What do the splunk_server field, along with the values of things like host, sourcetype, and source, look like for these events?
- Your timestamps are wildly off, and not necessarily in a predictable way (e.g. just by 1 hour). Does your log data have timestamps within it, or are you relying on the timestamp being derived from when Splunk "sees" your log?
- Have you poked around in the _internal index to see where Splunk "saw" any files matching the following: /var/log/acobjson/*100223*rtm*
NOTE: Don't look for source=/var/log/acobjson/*100223*rtm* in index=_internal, because source= in this context refers to the Splunk log files that were indexed. You can start without specifying a field, but you can also try something like index=_internal series=/var/log/acobjson/*100223*rtm* since that is one field Splunk will log this info in as it monitors files.