I've just upgraded to Splunk 6.5.2 from 6.3.1 and the data event of the SearchManager seems to be firing twice under certain circumstances. This behaviour was not present in 6.3.1. Is this a bug? I'm aware that some of the search events have changed in 6.5 (e.g. finalized has been removed), but I can't find anything referring to this behaviour in the documentation. Some very simple code below demonstrates the problem:
JavaScript tester.js
require([
    'underscore',
    'jquery',
    'splunkjs/mvc',
    'splunkjs/mvc/searchmanager',
    'splunkjs/mvc/simplexml/ready!'
], function(_, $, mvc, SearchManager) {
    var searchManager = new SearchManager({
        autostart: false,
        search: "| makeresults"
    });
    searchManager.data("results").on('data', function() {
        console.log("data event fired");
    });
    searchManager.startSearch();
});
SimpleXML tester.xml
<form script="tester.js">
    <label>Tester</label>
    <row>
        <panel>
            <html>Test of data event</html>
        </panel>
    </row>
</form>
Console Output
data event fired
data event fired
If I use the _bump command, and then reload the tester.xml dashboard page, the data event only fires once. If I reload the page again, it fires twice, and then continues to fire twice until I call _bump again. Has anyone experienced this issue?
I raised this with Engineering after Redman11 contacted support, and it was explained as expected behaviour: the way results are requested changed in 6.4+.
As per the given JS file, the "data event fired" message shows up in the console whenever we get data back from a search/jobs/<sid>/results request.
The UI keeps polling with search/jobs?id=<sid> requests and, based on each response, decides whether or not to make a search/jobs/<sid>/results request. Only if the resultPreviewCount property in the search/jobs?id=<sid> response is greater than 0 does the UI request the results.
In 6.3.x, resultPreviewCount is 0 in every search/jobs?id=<sid> response except the last one, after which the UI makes a single search/jobs/<sid>/results request:
Second-last search/jobs?id=<sid> responds with resultPreviewCount: 0.
Last search/jobs?id=<sid> responds with resultPreviewCount: 9.
In 6.4.x, we get resultPreviewCount: 10 in every search/jobs?id=<sid> response, so the UI makes a search/jobs/<sid>/results request after each of them:
Second-last search/jobs?id=<sid> responds with resultPreviewCount: 10.
Last search/jobs?id=<sid> responds with resultPreviewCount: 10.
So if you're relying on the data event, you'll need to take into account that it may fire multiple times on newer Splunk versions.
(This was a while ago, so I'm just copy-pasting this from the case.)
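Given that explanation, the data event has to be treated as something that can legitimately fire more than once. One defensive option (a sketch, not an official fix) is to make the handler idempotent. Underscore, which the dashboard code above already requires, ships _.once for exactly this; a minimal stand-alone equivalent looks like:

```javascript
// Sketch: make a handler run at most once, like Underscore's _.once.
// Later calls are ignored and return the result of the first call.
function once(fn) {
    var called = false;
    var result;
    return function () {
        if (!called) {
            called = true;
            result = fn.apply(this, arguments);
        }
        return result;
    };
}

// Usage with the handler from the question (sketch):
//   searchManager.data("results").on('data', once(function () {
//       console.log("data event fired");   // runs once even if the event fires twice
//   }));
```

Note that this latches permanently; if the dashboard re-runs the search and you want a fresh render each time, you would need to reset the flag on each dispatch instead.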
Is this still expected behavior? I notice this issue quite frequently when dealing with a saved search and a post-process search, as opposed to a plain search.
If this is still expected behavior, are there any workarounds that guarantee only one set of data? I am creating divs based on the data returned, and my workaround is to run a function that deletes all the divs when the handler starts; otherwise I end up with 2-4 times as many sets of data.
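For what it's worth, the delete-the-divs workaround described above can be made less fragile by rebuilding the container's contents in one shot on every data event, so a duplicate event overwrites the previous render instead of appending to it. A rough sketch (the element id and row formatting are made up for illustration; real code should also escape HTML):

```javascript
// Sketch: render results idempotently. Building the full HTML string and
// assigning it once means a second 'data' event re-renders the same rows
// rather than duplicating them.
function resultsHtml(rows) {
    return rows.map(function (row) {
        return "<div>" + String(row) + "</div>";
    }).join("");
}

// In the data handler (sketch; "my-results" is an assumed element id):
//   document.getElementById("my-results").innerHTML = resultsHtml(rows);
```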
I still have the same issue on 7.2.6.
It doesn't occur with every search, though; just now and then, two events are stored after calling SearchManager.startSearch().
Any solution?
I have accepted that there is always a risk of this occurring and that there is no fix.
Depending on how you're using the data this might not be effective, but this is roughly how we handled it for now. The flag has to live outside the handler, otherwise it resets on every call:

var handled = false;
searchManager.data("results").on('data', function() {
    if (!handled) {
        handled = true;
        // use the data however you want
    }
});

Now at least it won't loop.
lol, I only just realised that your problem is not my problem. I should head into the weekend 😄
My problem is that I use the SearchManager to execute a search which writes data from the dashboard into a summary index (basically, the dashboard's sole reason for existing is collecting user input which can't be collected automatically).
It sometimes happens that a single search results in two events being added to the index. In other words, the search is executed twice, but not the JS code.
Any ideas anyway?
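One thing worth ruling out for the double-write case above: if the duplicate index events come from the dashboard dispatching the job twice (for example, a double-fired click handler), a re-entrancy guard around startSearch() can help; it won't help if splunkd itself runs the job twice. A sketch, assuming the manager behaves like a SplunkJS SearchManager (a startSearch() method and a 'search:done' event):

```javascript
// Sketch: allow only one dispatch of the collect search at a time.
// Assumes 'manager' exposes startSearch() and emits 'search:done',
// as a SplunkJS SearchManager does.
function makeGuardedStart(manager) {
    var inFlight = false;
    manager.on('search:done', function () {
        inFlight = false;               // job finished, allow the next dispatch
    });
    return function () {
        if (inFlight) {
            return false;               // drop the duplicate dispatch
        }
        inFlight = true;
        manager.startSearch();
        return true;
    };
}
```

You would wire this up once (e.g. var start = makeGuardedStart(searchManager);) and call start() from the submit handler instead of calling startSearch() directly.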
Hi,
Have you found any resolution to this issue?
Hi Redman11,
did you find a solution for this problem? I have exactly the same problem.
Thank you in advance!
For info - this behaviour is still present in Splunk 6.6.1
Hi, no I don't yet have a solution that isn't a hack. I was going to upgrade to 6.6 and see if it is resolved then. If not, I'll raise a support case with Splunk.